Accuracy Assessment of Coastal Topography Derived from Uav Images
NASA Astrophysics Data System (ADS)
Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.
2016-06-01
To monitor coastal environments, Unmanned Aerial Vehicles (UAVs) are a low-cost, easy-to-use solution enabling data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces Digital Surface Models (DSMs) of similar accuracy. To evaluate DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and a photogrammetric workflow (Structure From Motion algorithm), a DSM and an orthomosaic were produced. DSM accuracy was estimated by comparison with GNSS surveys. Two parameters were tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution can reproduce the topography of a coastal area with high vertical accuracy (< 10 cm). Georeferencing the DSM requires a homogeneous distribution and a large number of GCPs. Accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm); the required accuracy should depend on the research question. Lastly, in this particular environment, the presence of very small water surfaces on the sand bank prevents the accuracy from improving when the image pixel size is decreased.
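As a rough sketch of how such an accuracy assessment can be computed, the vertical bias and RMSE of a DSM against GNSS checkpoints might look like this (the checkpoint elevations below are hypothetical, not values from the campaign):

```python
import numpy as np

def vertical_accuracy(dsm_heights, gnss_heights):
    """Vertical error statistics between DSM samples and GNSS checkpoints.

    Both arrays hold elevations (m) at the same checkpoint locations.
    """
    diff = np.asarray(dsm_heights) - np.asarray(gnss_heights)
    return {
        "mean_error": float(diff.mean()),           # systematic bias
        "rmse": float(np.sqrt((diff ** 2).mean())),  # overall vertical accuracy
    }

# Hypothetical checkpoint elevations (m): DSM vs GNSS ground truth
stats = vertical_accuracy([2.10, 2.35, 1.98, 2.50], [2.05, 2.30, 2.02, 2.46])
```

In a real campaign the GNSS checkpoints would be independent of the GCPs used for georeferencing, so the RMSE reflects map accuracy rather than fitting error.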
Cross-coherent vector sensor processing for spatially distributed glider networks.
Nichols, Brendan; Sabra, Karim G
2015-09-01
Autonomous underwater gliders fitted with vector sensors can be used as a spatially distributed sensor array to passively locate underwater sources. However, to date, the positional accuracy required for robust array processing (especially coherent processing) is not achievable using dead reckoning while the gliders remain submerged. To obtain such accuracy, the gliders can be temporarily surfaced to allow for global positioning system contact, but the acoustically active sea surface locally introduces additional sensor noise. This letter demonstrates that cross-coherent array processing, which inherently mitigates the effects of local noise, outperforms traditional incoherent processing source localization methods for this spatially distributed vector sensor network.
Guitet, Stéphane; Hérault, Bruno; Molto, Quentin; Brunaux, Olivier; Couteron, Pierre
2015-01-01
Precise mapping of above-ground biomass (AGB) is a major challenge for the success of REDD+ processes in tropical rainforest. The usual mapping methods are based on two hypotheses: a large and long-ranged spatial autocorrelation and a strong environmental influence at the regional scale. However, there are no studies of the spatial structure of AGB at the landscape scale to support these assumptions. We studied spatial variation in AGB at various scales using two large forest inventories conducted in French Guiana. The dataset comprised 2507 plots (0.4 to 0.5 ha) of undisturbed rainforest distributed over the whole region. After checking the uncertainties of estimates obtained from these data, we used half of the dataset to develop explicit predictive models including spatial and environmental effects, and tested the accuracy of the resulting maps according to their resolution using the rest of the data. Forest inventories provided accurate AGB estimates at the plot scale, with a mean of 325 Mg.ha-1. They revealed high local variability combined with a weak autocorrelation up to distances of no more than 10 km. Environmental variables accounted for a minor part of spatial variation. The accuracy of the best model including spatial effects was 90 Mg.ha-1 at plot scale, but coarse graining up to 2-km resolution allowed mapping AGB with errors below 50 Mg.ha-1. No agreement was found with available pan-tropical reference maps at any resolution. We concluded that the combination of weak autocorrelation and weak environmental effects limits AGB map accuracy in rainforest, and that a trade-off has to be found between spatial resolution and effective accuracy until adequate "wall-to-wall" remote sensing signals provide reliable AGB predictions. In the meantime, using large forest inventories with a low sampling rate (<0.5%) may be an efficient way to increase the global coverage of AGB maps with acceptable accuracy at kilometric resolution.
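The error reduction from coarse graining can be illustrated with a toy simulation. The plot-scale error of about 90 Mg.ha-1 and the mean of 325 Mg.ha-1 come from the abstract; the cell layout is invented, and the plot errors are assumed independent, which is optimistic given the reported residual autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical landscape: true AGB ~ 325 Mg/ha, plot-scale estimates
# carrying ~90 Mg/ha error (per the inventory analysis).
true_agb = 325.0
n_cells, plots_per_cell = 100, 16   # e.g. coarse cells each holding 16 plots
plot_err = 90.0

plots = true_agb + plot_err * rng.standard_normal((n_cells, plots_per_cell))

plot_rmse = np.sqrt(((plots - true_agb) ** 2).mean())
cell_means = plots.mean(axis=1)     # coarse graining: average within each cell
cell_rmse = np.sqrt(((cell_means - true_agb) ** 2).mean())

# Averaging n independent plots shrinks the error roughly by sqrt(n):
# with 16 plots per cell, ~90 Mg/ha drops toward ~22 Mg/ha.
```

With spatially correlated errors the gain would be smaller, which is consistent with the paper reporting errors below 50 Mg.ha-1 rather than the independent-noise ideal.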
Calvo-Ortega, Juan-Francisco; Hermida-López, Marcelino; Moragues-Femenía, Sandra; Pozo-Massó, Miquel; Casals-Farran, Joan
2017-03-01
To evaluate the spatial accuracy of frameless cone-beam computed tomography (CBCT)-guided cranial stereotactic radiosurgery (SRS) using an end-to-end (E2E) phantom test methodology. Five clinical SRS plans were mapped to an acrylic phantom containing a radiochromic film. The resulting phantom-based plans (E2E plans) were delivered four times. The phantom was set up on the treatment table with intentional misalignments, and CBCT imaging was used to align it prior to E2E plan delivery. Comparisons (global gamma analysis) of the planned and delivered dose to the film were performed using a commercial triple-channel film dosimetry software. The distance-to-agreement necessary to achieve a 95% gamma passing rate (DTA95) for a fixed 3% dose difference provided an estimate of the spatial accuracy of CBCT-guided SRS. Systematic (Σ) and random (σ) error components, as well as 95% confidence levels, were derived for the DTA95 metric. The overall systematic spatial accuracy averaged over all tests was 1.4 mm (SD: 0.2 mm), with a corresponding 95% confidence level of 1.8 mm. The systematic (Σ) and random (σ) spatial components of the accuracy derived from the E2E tests were 0.2 mm and 0.8 mm, respectively. The E2E methodology used in this study allowed an estimation of the spatial accuracy of our CBCT-guided SRS procedure. Subsequently, a PTV margin of 2.0 mm is currently used in our department. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
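The reported numbers are mutually consistent if the 95% confidence level is taken as mean + 1.96 SD of the per-test DTA95 values; this convention is an assumption, since the abstract does not state the exact formula used:

```python
# Reproducing the reported spatial-accuracy summary under the assumption
# that 95% CL = mean + 1.96 * SD (a guess at the authors' convention).
mean_dta, sd_dta = 1.4, 0.2   # mm, from the E2E tests
cl95 = mean_dta + 1.96 * sd_dta
# 1.4 + 1.96 * 0.2 = 1.792 mm, consistent with the reported 1.8 mm
```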
Image Stability Requirements For a Geostationary Imaging Fourier Transform Spectrometer (GIFTS)
NASA Technical Reports Server (NTRS)
Bingham, G. E.; Cantwell, G.; Robinson, R. C.; Revercomb, H. E.; Smith, W. L.
2001-01-01
A Geostationary Imaging Fourier Transform Spectrometer (GIFTS) has been selected for the NASA New Millennium Program (NMP) Earth Observing-3 (EO-3) mission. Our paper will discuss one of the key GIFTS measurement requirements, Field of View (FOV) stability, and its impact on required system performance. The GIFTS NMP mission is designed to demonstrate new and emerging sensor and data processing technologies with the goal of making revolutionary improvements in meteorological observational capability and forecasting accuracy. The GIFTS payload is a versatile imaging FTS with programmable spectral resolution and spatial scene selection that allows radiometric accuracy and atmospheric sounding precision to be traded in near real time for area coverage. The GIFTS sensor combines high sensitivity with a massively parallel spatial data collection scheme to allow high spatial resolution measurement of the Earth's atmosphere and rapid broad area coverage. An objective of the GIFTS mission is to demonstrate the advantages of high spatial resolution (4 km ground sample distance - gsd) on temperature and water vapor retrieval by allowing sampling in broken cloud regions. This small gsd, combined with the relatively long scan time required (approximately 10 s) to collect high resolution spectra from geostationary (GEO) orbit, may require extremely good pointing control. This paper discusses the analysis of this requirement.
Classification with spatio-temporal interpixel class dependency contexts
NASA Technical Reports Server (NTRS)
Jeon, Byeungwoo; Landgrebe, David A.
1992-01-01
A contextual classifier which can utilize both spatial and temporal interpixel dependency contexts is investigated. After spatial and temporal neighbors are defined, a general form of maximum a posteriori spatiotemporal contextual classifier is derived. This contextual classifier is simplified under several assumptions. Joint prior probabilities of the classes of each pixel and its spatial neighbors are modeled by a Gibbs random field. The classification is performed in a recursive manner to allow a computationally efficient contextual classification. Experimental results with bitemporal TM data show significant improvement of classification accuracy over noncontextual pixelwise classifiers. This spatiotemporal contextual classifier should find use in many applications of remote sensing, especially when classification accuracy is important.
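A minimal sketch of the underlying idea, assuming a Potts-style neighbor prior and a single synchronous ICM-like relabeling pass; the paper's recursive Gibbs-field formulation over spatial and temporal neighbors is more elaborate than this toy:

```python
import numpy as np

def contextual_relabel(loglik, labels, beta=1.0):
    """One ICM-style pass: re-label each pixel by combining its per-class
    log-likelihoods with a Potts prior over the 4-connected spatial neighbors.

    loglik: (H, W, K) per-pixel class log-likelihoods
    labels: (H, W) initial pixelwise classification
    """
    H, W, K = loglik.shape
    out = labels.copy()
    for i in range(H):
        for j in range(W):
            neigh = [labels[x, y]
                     for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                     if 0 <= x < H and 0 <= y < W]
            # prior term: reward agreement with neighboring labels
            prior = np.array([beta * sum(n == k for n in neigh)
                              for k in range(K)])
            out[i, j] = int(np.argmax(loglik[i, j] + prior))
    return out

# Toy scene: spectral evidence favors class 0 everywhere except the center,
# which weakly favors class 1 (noise).
loglik = np.zeros((3, 3, 2))
loglik[..., 0] = 2.0
loglik[1, 1] = [0.0, 0.5]
noisy = loglik.argmax(axis=-1)          # noncontextual pixelwise labels
smoothed = contextual_relabel(loglik, noisy)
# The spatial context outvotes the weak evidence at the center pixel.
```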
Alcohol-related hot-spot analysis and prediction : final report.
DOT National Transportation Integrated Search
2017-05-01
This project developed methods to more accurately identify alcohol-related crash hot spots, ultimately allowing for more effective and efficient enforcement and safety campaigns. Advancements in accuracy came from improving the calculation of spatial...
Local indicators of geocoding accuracy (LIGA): theory and application
Jacquez, Geoffrey M; Rommel, Robert
2009-01-01
Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot. 
Herein lies a paradox for spatial analysis: for a given level of positional error, increasing sample density to more accurately follow the underlying population distribution increases perturbability and introduces error into the spatial weights matrix. In some studies positional error may not impact the statistical results; in others it might invalidate the results. We therefore must understand the relationships between positional accuracy and the perturbability of the spatial weights in order to have confidence in a study's results. PMID:19863795
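A toy version of such a perturbability analysis can be sketched as follows, using k-nearest-neighbor spatial weights and circular Gaussian positional error; the weight specification and error magnitudes are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def knn_neighbors(coords, k=4):
    """Index sets of the k nearest neighbors (a binary spatial weight)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def perturbability(coords, pos_error, k=4, trials=50):
    """Mean fraction of k-NN links that change when locations are jittered
    by circular Gaussian positional error of the given magnitude."""
    base = knn_neighbors(coords, k)
    changed = []
    for _ in range(trials):
        jitter = coords + pos_error * rng.standard_normal(coords.shape)
        pert = knn_neighbors(jitter, k)
        changed.append(np.mean([len(set(a) ^ set(b)) / (2 * k)
                                for a, b in zip(base, pert)]))
    return float(np.mean(changed))

pts = rng.uniform(0, 100, size=(40, 2))
low = perturbability(pts, pos_error=0.5)
high = perturbability(pts, pos_error=10.0)
# Larger positional error perturbs the spatial weights more.
```

Repeating the per-location change fraction instead of the global mean yields the local indicators (LIGA) that identify which locations would benefit most from better geocoding.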
Modeling spatial competition for light in plant populations with the porous medium equation.
Beyer, Robert; Etard, Octave; Cournède, Paul-Henry; Laurent-Gengoux, Pascal
2015-02-01
We consider a plant's local leaf area index as a spatially continuous variable, subject to particular reaction-diffusion dynamics of allocation, senescence and spatial propagation. The latter notably incorporates the plant's tendency to form new leaves in bright rather than shaded locations. Applying a generalized Beer-Lambert law links existing foliage to production dynamics. The approach allows for inter-individual variability and competition for light while maintaining robustness, a key weakness of comparable existing models. The analysis of the single-plant case leads to a significant simplification of the system's key equation when transforming it into the well-studied porous medium equation. Confronting the theoretical model with experimental data from sugar beet populations differing in configuration density demonstrates its accuracy.
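For reference, the porous medium equation is the standard degenerate diffusion equation, here with $u$ the transformed leaf-area variable; the specific exponent $m$ produced by the paper's transformation is not given in the abstract:

```latex
\frac{\partial u}{\partial t} = \Delta\!\left(u^{m}\right), \qquad m > 1
```

Unlike the linear heat equation ($m = 1$), its solutions have compact support that spreads with finite speed, which is what makes it a natural model for a canopy expanding outward from the plant.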
NASA Astrophysics Data System (ADS)
Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun
2018-06-01
Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features: EEG has high temporal resolution and low spatial resolution, while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but remain subject to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI-constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatially and temporally variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations, designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamical activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performance of the two inverse methods was evaluated in terms of both spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, results showed that the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity.
This improvement can be extended to any subsequent brain connectivity analyses used to construct the associated dynamic brain networks.
Scheerlinck, Thierry; Polfliet, Mathias; Deklerck, Rudi; Van Gompel, Gert; Buls, Nico; Vandemeulebroucke, Jef
2016-01-01
We developed a marker-free automated CT-based spatial analysis (CTSA) method to detect stem-bone migration in consecutive CT datasets and assessed the accuracy and precision in vitro. Our aim was to demonstrate that in vitro accuracy and precision of CTSA is comparable to that of radiostereometric analysis (RSA). Stem and bone were segmented in 2 CT datasets and both were registered pairwise. The resulting rigid transformations were compared and transferred to an anatomically sound coordinate system, taking the stem as reference. This resulted in 3 translation parameters and 3 rotation parameters describing the relative amount of stem-bone displacement, and it allowed calculation of the point of maximal stem migration. Accuracy was evaluated in 39 comparisons by imposing known stem migration on a stem-bone model. Precision was estimated in 20 comparisons based on a zero-migration model, and in 5 patients without stem loosening. Limits of the 95% tolerance intervals (TIs) for accuracy did not exceed 0.28 mm for translations and 0.20° for rotations (largest standard deviation of the signed error (SD(SE)): 0.081 mm and 0.057°). In vitro, limits of the 95% TI for precision in a clinically relevant setting (8 comparisons) were below 0.09 mm and 0.14° (largest SD(SE): 0.012 mm and 0.020°). In patients, the precision was lower, but acceptable, and dependent on CT scan resolution. CTSA allows detection of stem-bone migration with an accuracy and precision comparable to that of RSA. It could be valuable for evaluation of subtle stem loosening in clinical practice.
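The transform comparison at the core of the method can be sketched as follows, assuming 4x4 homogeneous transforms and a rotation angle extracted from the trace of the rotation block; the coordinate conventions and the example numbers are illustrative, not the paper's:

```python
import numpy as np

def relative_migration(T_stem, T_bone):
    """Relative stem-bone displacement from two rigid registrations.

    T_stem, T_bone: 4x4 homogeneous transforms mapping the baseline CT to
    the follow-up CT for the stem and bone segmentations, respectively.
    Returns the translation vector (mm) and total rotation angle (deg) of
    the bone's motion expressed relative to the stem.
    """
    T_rel = np.linalg.inv(T_stem) @ T_bone
    t = T_rel[:3, 3]
    # total rotation angle from the trace of the 3x3 rotation block
    cos_theta = np.clip((np.trace(T_rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    return t, np.degrees(np.arccos(cos_theta))

# Hypothetical example: the stem moved 0.2 mm along x, the bone stayed put,
# so relative to the stem the bone appears shifted by -0.2 mm.
T_stem = np.eye(4); T_stem[0, 3] = 0.2
T_bone = np.eye(4)
t, angle = relative_migration(T_stem, T_bone)
```

A full implementation would additionally express the result in the anatomical coordinate system and track the point of maximal migration, as described in the abstract.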
NASA Technical Reports Server (NTRS)
Fatemi, Emad; Jerome, Joseph; Osher, Stanley
1989-01-01
A micron n+ - n - n+ silicon diode is simulated via the hydrodynamic model for carrier transport. The numerical algorithms employed are for the non-steady case, and a limiting process is used to reach steady state. The simulation employs shock capturing algorithms, and indeed shocks, or very rapid transition regimes, are observed in the transient case for the coupled system, consisting of the potential equation and the conservation equations describing charge, momentum, and energy transfer for the electron carriers. These algorithms, termed essentially non-oscillatory, were successfully applied in other contexts to model the flow in gas dynamics, magnetohydrodynamics, and other physical situations involving the conservation laws in fluid mechanics. The method here is first order in time, but the use of small time steps allows for good accuracy. Runge-Kutta methods allow one to achieve higher accuracy in time if desired. The spatial accuracy is of high order in regions of smoothness.
Endogenous spatial attention: evidence for intact functioning in adults with autism
Grubb, Michael A.; Behrmann, Marlene; Egan, Ryan; Minshew, Nancy J.; Carrasco, Marisa; Heeger, David J.
2012-01-01
Lay Abstract Attention allows us to selectively process the vast amount of information with which we are confronted. Focusing on a certain location of the visual scene (visual spatial attention) enables the prioritization of some aspects of information while ignoring others. Rapid manipulation of the attention field (i.e., the location and spread of visual spatial attention) is a critical aspect of human cognition, and previous research on spatial attention in individuals with autism spectrum disorders (ASD) has produced inconsistent results. In a series of three experiments, we evaluated claims in the literature that individuals with ASD exhibit a deficit in voluntarily controlling the deployment and size of the spatial attention field. We measured how well participants perform a visual discrimination task (accuracy) and how quickly they do so (reaction time), with and without spatial uncertainty (i.e., the lack of predictability concerning the spatial position of the upcoming stimulus). We found that high-functioning adults with autism exhibited slower reaction times overall with spatial uncertainty, but the effects of attention on performance accuracies and reaction times were indistinguishable between individuals with autism and typically developing individuals, in all three experiments. These results provide evidence of intact endogenous spatial attention function in high-functioning adults with ASD, suggesting that atypical endogenous spatial attention cannot be a latent characteristic of autism in general. Scientific Abstract Rapid manipulation of the attention field (i.e., the location and spread of visual spatial attention) is a critical aspect of human cognition, and previous research on spatial attention in individuals with autism spectrum disorders (ASD) has produced inconsistent results.
In a series of three psychophysical experiments, we evaluated claims in the literature that individuals with ASD exhibit a deficit in voluntarily controlling the deployment and size of the spatial attention field. We measured the spatial distribution of performance accuracies and reaction times to quantify the sizes and locations of the attention field, with and without spatial uncertainty (i.e., the lack of predictability concerning the spatial position of the upcoming stimulus). We found that high-functioning adults with autism exhibited slower reaction times overall with spatial uncertainty, but the effects of attention on performance accuracies and reaction times were indistinguishable between individuals with autism and typically developing individuals, in all three experiments. These results provide evidence of intact endogenous spatial attention function in high-functioning adults with ASD, suggesting that atypical endogenous attention cannot be a latent characteristic of autism in general. PMID:23427075
A k-space method for acoustic propagation using coupled first-order equations in three dimensions.
Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C
2009-09-01
A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
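The spectral-derivative ingredient of the k-space method can be sketched in one dimension (the method itself operates on three-dimensional staggered grids with temporal correction and a perfectly matched layer):

```python
import numpy as np

def spectral_derivative(f, dx):
    """Spatial derivative via FFT: accurate to near machine precision for
    band-limited periodic fields, as exploited by k-space propagation methods."""
    n = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)   # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

# Check against the analytic derivative of sin(x) on a periodic grid
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
df = spectral_derivative(np.sin(x), x[1] - x[0])
# df matches cos(x) to near machine precision
```

In three dimensions the same idea applies axis by axis with a 3D FFT, which is why the all-to-all communication of the distributed FFT dominates the parallel execution time.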
Spatial reconstruction of single-cell gene expression
Satija, Rahul; Farrell, Jeffrey A.; Gennert, David; Schier, Alexander F.; Regev, Aviv
2015-01-01
Spatial localization is a key determinant of cellular fate and behavior, but spatial RNA assays traditionally rely on staining for a limited number of RNA species. In contrast, single-cell RNA-seq allows for deep profiling of cellular gene expression, but established methods separate cells from their native spatial context. Here we present Seurat, a computational strategy to infer cellular localization by integrating single-cell RNA-seq data with in situ RNA patterns. We applied Seurat to spatially map 851 single cells from dissociated zebrafish (Danio rerio) embryos, inferring a transcriptome-wide map of spatial patterning. We confirmed Seurat’s accuracy using several experimental approaches, and used it to identify a set of archetypal expression patterns and spatial markers. Additionally, Seurat correctly localizes rare subpopulations, accurately mapping both spatially restricted and scattered groups. Seurat will be applicable to mapping cellular localization within complex patterned tissues in diverse systems. PMID:25867923
Murphy, Matthew C; Poplawsky, Alexander J; Vazquez, Alberto L; Chan, Kevin C; Kim, Seong-Gi; Fukuda, Mitsuhiro
2016-08-15
Functional MRI (fMRI) is a popular and important tool for noninvasive mapping of neural activity. As fMRI measures the hemodynamic response, the resulting activation maps do not perfectly reflect the underlying neural activity. The purpose of this work was to design a data-driven model to improve the spatial accuracy of fMRI maps in the rat olfactory bulb. This system is an ideal choice for this investigation since the bulb circuit is well characterized, allowing for an accurate definition of activity patterns in order to train the model. We generated models for both cerebral blood volume weighted (CBVw) and blood oxygen level dependent (BOLD) fMRI data. The results indicate that the spatial accuracy of the activation maps is either significantly improved or at worst not significantly different when using the learned models compared to a conventional general linear model approach, particularly for BOLD images and activity patterns involving deep layers of the bulb. Furthermore, the activation maps computed by CBVw and BOLD data show increased agreement when using the learned models, lending more confidence to their accuracy. The models presented here could have an immediate impact on studies of the olfactory bulb, but perhaps more importantly, demonstrate the potential for similar flexible, data-driven models to improve the quality of activation maps calculated using fMRI data. Copyright © 2016 Elsevier Inc. All rights reserved.
Gaze-independent brain-computer interfaces based on covert attention and feature attention
NASA Astrophysics Data System (ADS)
Treder, M. S.; Schmidt, N. M.; Blankertz, B.
2011-10-01
There is evidence that conventional visual brain-computer interfaces (BCIs) based on event-related potentials cannot be operated efficiently when eye movements are not allowed. To overcome this limitation, the aim of this study was to develop a visual speller that does not require eye movements. Three different variants of a two-stage visual speller based on covert spatial attention and non-spatial feature attention (i.e. attention to colour and form) were tested in an online experiment with 13 healthy participants. All participants achieved highly accurate BCI control. They could select one out of thirty symbols (chance level 3.3%) with mean accuracies of 88%-97% for the different spellers. The best results were obtained for a speller that was operated using non-spatial feature attention only. These results show that, using feature attention, it is possible to realize high-accuracy, fast-paced visual spellers that have a large vocabulary and are independent of eye gaze.
Fresnel coefficients and Fabry-Perot formula for spatially dispersive metallic layers
NASA Astrophysics Data System (ADS)
Pitelet, Armel; Mallet, Émilien; Centeno, Emmanuel; Moreau, Antoine
2017-07-01
The repulsion between free electrons inside a metal makes its optical response spatially dispersive, so that it is not described by Drude's model but by a hydrodynamic model. We give here fully analytic results for a metallic slab in this framework, thanks to a two-mode cavity formalism leading to a Fabry-Perot formula, and show that a simplification can be made that preserves the accuracy of the results while allowing much simpler analytic expressions. For metallic layers thicker than 2.7 nm modified Fresnel coefficients can actually be used to accurately predict the response of any multilayer with spatially dispersive metals (for reflection, transmission, or the guided modes). Finally, this explains why adding a small dielectric layer [Y. Luo et al., Phys. Rev. Lett. 111, 093901 (2013), 10.1103/PhysRevLett.111.093901] allows one to reproduce the effects of nonlocality in many cases, and especially for multilayers.
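For reference, the standard local-response (Drude-model) Fabry-Perot reflection coefficient for a slab, which the two-mode cavity formalism generalizes, is

```latex
r \;=\; \frac{r_{12} + r_{23}\, e^{2 i k_{z} d}}{1 + r_{12}\, r_{23}\, e^{2 i k_{z} d}}
```

where $r_{12}$ and $r_{23}$ are the Fresnel reflection coefficients of the two interfaces, $k_z$ is the normal component of the wavevector inside the slab, and $d$ is its thickness. The hydrodynamic treatment adds a second (longitudinal) mode inside the metal, and the paper's result is that for layers thicker than 2.7 nm its effect can be folded into modified Fresnel coefficients of the same form.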
MAPPING SPATIAL THEMATIC ACCURACY WITH FUZZY SETS
Thematic map accuracy is not spatially homogenous but variable across a landscape. Properly analyzing and representing spatial pattern and degree of thematic map accuracy would provide valuable information for using thematic maps. However, current thematic map accuracy measures (...
Dynamic MTF, an innovative test bench for detector characterization
NASA Astrophysics Data System (ADS)
Emmanuel, Rossi; Raphaël, Lardière; Delmonte, Stephane
2017-11-01
PLEIADES HR satellites are high-resolution Earth observation satellites. Placed at 695 km, they reach a 0.7 m spatial resolution. To allow such performance, the detectors work in a TDI mode (Time and Delay Integration), which consists in a continuous charge transfer from one line to the next while the image passes over the detector. The spatial resolution, one of the most important parameters to test, is characterized by the MTF (Modulation Transfer Function). Usually, detectors are tested in a staring mode. For a higher level of performance assessment, a dedicated bench has been set up, allowing characterization of the detectors' MTF in the TDI mode. Accuracy and reproducibility are impressive, opening the door to new perspectives in terms of HR imaging system testing.
NASA Astrophysics Data System (ADS)
Sycheva, Elena A.; Vasilev, Aleksandr S.; Lashmanov, Oleg U.; Korotaev, Valery V.
2017-06-01
The article is devoted to the optimization of optoelectronic systems for monitoring the spatial position of objects. Probabilistic characteristics of the detection of an active structured mark on a random noisy background are investigated. The developed computer model and the results of the study allow us to estimate the probabilistic characteristics of detection of a complex structured mark on a random gradient background, and to estimate the error of the spatial coordinates. The results of the study make it possible to improve the accuracy of measuring the coordinates of the object. Based on the research, recommendations are given on the choice of parameters of the optimal mark structure for use in optoelectronic systems for monitoring the spatial position of large-sized structures.
Kozunov, Vladimir V.; Ossadtchi, Alexei
2015-01-01
Although MEG/EEG signals are highly variable between subjects, they allow characterizing systematic changes of cortical activity in both space and time. Traditionally a two-step procedure is used. The first step is a transition from sensor to source space by means of solving an ill-posed inverse problem for each subject individually. The second is mapping of cortical regions consistently active across subjects. In practice the first step often leads to a set of active cortical regions whose location and timecourses display a great amount of interindividual variability, hindering the subsequent group analysis. We propose Group Analysis Leads to Accuracy (GALA)—a solution that combines the two steps into one. GALA takes advantage of individual variations of cortical geometry and sensor locations. It exploits the ensuing variability in the electromagnetic forward model as a source of additional information. We assume that for different subjects functionally identical cortical regions are located in close proximity and partially overlap, and their timecourses are correlated. This relaxed similarity constraint on the inverse solution can be expressed within a probabilistic framework, allowing for an iterative algorithm solving the inverse problem jointly for all subjects. A systematic simulation study showed that GALA, as compared with the standard min-norm approach, improves the accuracy of true activity recovery, when accuracy is assessed both in terms of spatial proximity of the estimated and true activations and correct specification of the spatial extent of the activated regions. This improvement, obtained without using any noise normalization techniques for either solution, was preserved for a wide range of between-subject variations in both spatial and temporal features of regional activation. The corresponding activation timecourses exhibit significantly higher similarity across subjects. Similar results were obtained for a real MEG dataset of face-specific evoked responses.
PMID:25954141
Spatial Patterns of NLCD Land Cover Change Thematic Accuracy (2001 - 2011)
Research on the spatial non-stationarity of land cover classification accuracy has been ongoing for over two decades. We extend the understanding of spatial patterns of thematic map accuracy by: 1) quantifying spatial patterns of map-reference agreement for class-specific land cover c...
NASA Technical Reports Server (NTRS)
Fatemi, Emad; Osher, Stanley; Jerome, Joseph
1991-01-01
A micron n+-n-n+ silicon diode is simulated via the hydrodynamic model for carrier transport. The numerical algorithms employed are for the non-steady case, and a limiting process is used to reach steady state. The simulation employs shock-capturing algorithms, and indeed shocks, or very rapid transition regimes, are observed in the transient case for the coupled system, consisting of the potential equation and the conservation equations describing charge, momentum, and energy transfer for the electron carriers. These algorithms, termed essentially nonoscillatory, were successfully applied in other contexts to model flow in gas dynamics, magnetohydrodynamics, and other physical situations involving the conservation laws of fluid mechanics. The method here is first order in time, but the use of small time steps allows for good accuracy. Runge-Kutta methods allow one to achieve higher accuracy in time if desired. The spatial accuracy is of high order in regions of smoothness.
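To make the idea of a conservative, non-oscillatory update concrete, here is a minimal sketch of a first-order upwind step for the linear advection equation u_t + a u_x = 0 with periodic boundaries. This is an assumption-laden toy, not the hydrodynamic model of the abstract, which couples potential, charge, momentum, and energy equations; it only illustrates how a monotone first-order scheme propagates a sharp front without creating new extrema.

```python
# First-order upwind scheme for u_t + a * u_x = 0 with a > 0.
# Periodic boundaries arise from Python's u[i - 1] wrapping at i = 0.
# Illustrative only; the paper's ENO schemes are more sophisticated.

def upwind_step(u, a, dx, dt):
    """Advance the solution one time step with upwind differencing."""
    return [u[i] - a * dt / dx * (u[i] - u[i - 1]) for i in range(len(u))]

dx, a = 1.0, 1.0
dt = 0.5 * dx / a            # CFL number 0.5, within the stability limit
u = [1.0] * 5 + [0.0] * 5    # a sharp front, loosely analogous to a shock
for _ in range(10):
    u = upwind_step(u, a, dx, dt)
```

Because the scheme is monotone for CFL numbers at or below one, the evolved solution stays within the initial bounds [0, 1]; no spurious oscillations appear at the front.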
NASA Technical Reports Server (NTRS)
Korb, C. L.; Gentry, Bruce M.
1995-01-01
The goal of the Army Research Office (ARO) Geosciences Program is to measure the three dimensional wind field in the planetary boundary layer (PBL) over a measurement volume with a 50 meter spatial resolution and with measurement accuracies of the order of 20 cm/sec. The objective of this work is to develop and evaluate a high vertical resolution lidar experiment using the edge technique for high accuracy measurement of the atmospheric wind field to meet the ARO requirements. This experiment allows the powerful capabilities of the edge technique to be quantitatively evaluated. In the edge technique, a laser is located on the steep slope of a high resolution spectral filter. This produces large changes in measured signal for small Doppler shifts. A differential frequency technique renders the Doppler shift measurement insensitive to both laser and filter frequency jitter and drift. The measurement is also relatively insensitive to the laser spectral width for widths less than the width of the edge filter. Thus, the goal is to develop a system which will yield a substantial improvement in the state of the art of wind profile measurement in terms of both vertical resolution and accuracy and which will provide a unique capability for atmospheric wind studies.
A Comparative Analysis of Five Cropland Datasets in Africa
NASA Astrophysics Data System (ADS)
Wei, Y.; Lu, M.; Wu, W.
2018-04-01
Food security, particularly in Africa, is a challenge yet to be resolved. Cropland area and spatial distribution obtained from remote sensing imagery are vital information. In this paper, we compare five global cropland datasets, CCI Land Cover, GlobCover, MODIS Collection 5, GlobeLand30 and Unified Cropland, for Africa circa 2010 in terms of cropland area and spatial location. The accuracy of the cropland area calculated from the five datasets was analyzed against statistical data. Based on validation samples, the accuracy of spatial location for the five cropland products was assessed with an error matrix. The results show that GlobeLand30 fits the statistics best, followed by MODIS Collection 5 and Unified Cropland, while GlobCover and CCI Land Cover have lower accuracies. For the spatial location of cropland, GlobeLand30 reaches the highest accuracy, followed by Unified Cropland, MODIS Collection 5 and GlobCover, with CCI Land Cover the lowest. The spatial location accuracy of the five datasets is generally higher in the Csa climate zone, with its suitable farming conditions, than in the Bsk zone.
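The error-matrix assessment mentioned above can be sketched in a few lines. The class names and counts below are invented for illustration, not taken from the study; the code only shows how overall, user's, and producer's accuracies fall out of a confusion matrix of mapped-versus-reference samples.

```python
# Accuracy metrics from an error (confusion) matrix.
# matrix[i][j] = number of samples mapped as class i, referenced as class j.
# The 2x2 example (cropland vs non-cropland) uses made-up counts.

def accuracy_metrics(matrix):
    n = len(matrix)
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(n))
    overall = correct / total
    users = [matrix[i][i] / sum(matrix[i]) for i in range(n)]                 # commission error view
    producers = [matrix[i][i] / sum(r[i] for r in matrix) for i in range(n)]  # omission error view
    return overall, users, producers

m = [[80, 20],   # mapped cropland: 80 agree with reference, 20 do not
     [10, 90]]   # mapped non-cropland
overall, users, producers = accuracy_metrics(m)
```

Here the overall accuracy is 170/200 = 0.85, with a user's accuracy of 0.80 for the cropland class.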
NASA Astrophysics Data System (ADS)
Grefenstette, Brian W.; Bhalerao, Varun; Cook, W. Rick; Harrison, Fiona A.; Kitaguchi, Takao; Madsen, Kristin K.; Mao, Peter H.; Miyasaka, Hiromasa; Rana, Vikram
2017-08-01
Pixelated Cadmium Zinc Telluride (CdZnTe) detectors are currently flying on the Nuclear Spectroscopic Telescope ARray (NuSTAR) NASA Astrophysics Small Explorer. While the pixel pitch of the detectors is ≈ 605 μm, we can leverage the detector readout architecture to determine the interaction location of an individual photon to much higher spatial accuracy. The sub-pixel spatial location allows us to finely oversample the point spread function of the optics and reduces imaging artifacts due to pixelation. In this paper we demonstrate how the sub-pixel information is obtained, how the detectors were calibrated, and provide ground verification of the quantum efficiency of our Monte Carlo model of the detector response.
The accuracy of thematic map products is not spatially homogeneous, but instead varies across most landscapes. Properly analyzing and representing the spatial distribution (pattern) of thematic map accuracy would provide valuable user information for assessing appropriate applic...
NASA Astrophysics Data System (ADS)
Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko
2018-04-01
A Brain Computer Interface (BCI) can be challenging to develop for robotic, prosthetic and human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP) based algorithm to detect event-related desynchronization patterns. Building on well-known previous work in this area, features are extracted by the filter bank common spatial pattern (FBCSP) method and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, application of the radial basis function (RBF) as a mapping kernel of linear discriminant analysis (KLDA) to the weighted features allows the transfer of data into a higher dimension for more discriminated data scattering. Afterwards, a support vector machine (SVM) with a generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. On average, 89.60% accuracy and 74.19% robustness are achieved. The BCI Competition III dataset IVa is used to evaluate the algorithm for detecting right-hand and foot imagery movement patterns. Results show that the combination of KLDA with the SVM-GRBF classifier yields 8.9% and 14.19% improvements in accuracy and robustness, respectively. For all subjects, it is concluded that mapping the CSP features into a higher dimension by RBF and utilizing GRBF as the kernel of the SVM improve the accuracy and reliability of the proposed method.
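The kernel that does the "mapping into a higher dimension" can be illustrated in isolation. The sketch below computes a plain Gaussian RBF kernel between two feature vectors; it is a toy stand-in, not the study's KLDA or generalized-RBF SVM pipeline, and the feature values are arbitrary.

```python
import math

# Gaussian RBF kernel k(x, y) = exp(-gamma * ||x - y||^2).
# Points close in feature space map to kernel values near 1;
# distant points map to values near 0.

def rbf_kernel(x, y, gamma=1.0):
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

k_same = rbf_kernel([1.0, 2.0], [1.0, 2.0])   # identical vectors
k_far = rbf_kernel([1.0, 2.0], [4.0, 6.0])    # distant vectors
```

This decay with distance is what lets kernel methods separate classes that are not linearly separable in the original feature space.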
NASA Astrophysics Data System (ADS)
Wright, L.; Coddington, O.; Pilewskie, P.
2015-12-01
Current challenges in Earth remote sensing require improved instrument spectral resolution, spectral coverage, and radiometric accuracy. Hyperspectral instruments, deployed on both aircraft and spacecraft, are a growing class of Earth observing sensors designed to meet these challenges. They collect large amounts of spectral data, allowing thorough characterization of both atmospheric and surface properties. The higher accuracy and increased spectral and spatial resolutions of new imagers require new numerical approaches for processing imagery and separating surface and atmospheric signals. One potential approach is source separation, which allows us to determine the underlying physical causes of observed changes. Improved signal separation will allow hyperspectral instruments to better address key science questions relevant to climate change, including land-use changes, trends in clouds and atmospheric water vapor, and aerosol characteristics. In this work, we investigate a Non-negative Matrix Factorization (NMF) method for the separation of atmospheric and land surface signal sources. NMF offers marked benefits over other commonly employed techniques, including non-negativity, which avoids physically impossible results, and adaptability, which allows the method to be tailored to hyperspectral source separation. We adapt our NMF algorithm to distinguish between contributions from different physically distinct sources by introducing constraints on spectral and spatial variability and by using library spectra to inform separation. We evaluate our NMF algorithm with simulated hyperspectral images as well as hyperspectral imagery from several instruments including the NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), NASA Hyperspectral Imager for the Coastal Ocean (HICO) and National Ecological Observatory Network (NEON) Imaging Spectrometer.
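The core factorization V ≈ W H behind NMF can be sketched with the classic Lee-Seung multiplicative updates, which preserve non-negativity by construction. This is a bare-bones illustration with a tiny made-up matrix; the study's algorithm adds spatial/spectral constraints and library spectra that are omitted here.

```python
# Minimal pure-Python NMF via multiplicative updates.
# V (m x n) is approximated by W (m x r) times H (r x n), all non-negative.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def nmf(V, W, H, iters=100, eps=1e-9):
    for _ in range(iters):
        WT = list(map(list, zip(*W)))
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(len(H[0]))]
             for i in range(len(H))]
        HT = list(map(list, zip(*H)))
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(len(W[0]))]
             for i in range(len(W))]
    return W, H

def frob_err(V, W, H):
    R = matmul(W, H)
    return sum((V[i][j] - R[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0])))

V = [[1.0, 2.0], [2.0, 4.0]]            # rank-1, non-negative toy "image"
W0, H0 = [[0.5], [0.5]], [[0.5, 0.5]]   # non-negative initial factors
err0 = frob_err(V, W0, H0)
W, H = nmf(V, W0, H0)
```

Since updates only multiply by non-negative ratios, W and H stay non-negative throughout, which is the property the abstract highlights as avoiding physically impossible results.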
Pixels, Blocks of Pixels, and Polygons: Choosing a Spatial Unit for Thematic Accuracy Assessment
Pixels, polygons, and blocks of pixels are all potentially viable spatial assessment units for conducting an accuracy assessment. We develop a statistical population-based framework to examine how the spatial unit chosen affects the outcome of an accuracy assessment. The populati...
This paper presents a fuzzy set-based method of mapping the spatial accuracy of a thematic map and computing several ecological indicators while taking into account spatial variation of accuracy associated with different land cover types and other factors (e.g., slope, soil type, etc.)...
Analysis of the spatial distribution of land cover map accuracy
NASA Astrophysics Data System (ADS)
Khatami, R.; Mountrakis, G.; Stehman, S. V.
2017-12-01
Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. The spectral domain was incorporated as an explanatory feature space for interpolating classification accuracy for the first time in this research. Performance of the prediction methods was evaluated using 26 test blocks, each 10 km × 10 km, dispersed throughout the United States. The performance of the predictions was evaluated using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements in AUC of 0.15 or greater.
Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domain yielded similar AUC; iv) for the larger sample size (i.e., very dense spatial sample) and per-class predictions, the spatial domain yielded larger AUC; v) increasing the sample size improved accuracy predictions with a greater benefit accruing to the spatial domain; and vi) the function used for interpolation had the smallest effect on AUC.
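The idea of interpolating per-pixel accuracy from sample locations can be sketched with simple inverse-distance weighting. Note this is a stand-in: the paper's interpolation functions are constant, linear, Gaussian, and logistic, not IDW, and the sample coordinates and agreement values below are invented.

```python
# Predict accuracy at (x, y) by inverse-distance weighting of agreement
# values (1 = map and reference agree, 0 = disagree) at sample locations.

def idw_accuracy(samples, x, y, power=2.0):
    """samples: list of (x, y, agreement) tuples from the test sample."""
    num = den = 0.0
    for sx, sy, agree in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return float(agree)            # query sits exactly on a sample
        w = 1.0 / d2 ** (power / 2.0)
        num += w * agree
        den += w
    return num / den

# Two correct samples in the south, two incorrect in the north.
samples = [(0, 0, 1), (10, 0, 1), (0, 10, 0), (10, 10, 0)]
p = idw_accuracy(samples, 5, 1)            # query point near the correct pair
```

The predicted accuracy at a point near the two agreeing samples lands well above 0.5, which is the wall-to-wall-map behavior the interpolation approach is after.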
Sampling procedures for inventory of commercial volume tree species in Amazon Forest.
Netto, Sylvio P; Pelissari, Allan L; Cysneiros, Vinicius C; Bonazza, Marcelo; Sanquetta, Carlos R
2017-01-01
The spatial distribution of tropical tree species can affect the consistency of the estimators in commercial forest inventories; therefore, appropriate sampling procedures are required to survey species with different spatial patterns in the Amazon Forest. The present study aims to evaluate conventional sampling procedures and introduce adaptive cluster sampling for volumetric inventories of Amazonian tree species, considering the hypotheses that density, spatial distribution and zero-plots affect the consistency of the estimators, and that adaptive cluster sampling allows more accurate volumetric estimation. We use data from a census carried out in the Jamari National Forest, Brazil, where trees with diameters equal to or greater than 40 cm were measured in 1,355 plots. Species with different spatial patterns were selected and sampled with simple random sampling, systematic sampling, linear cluster sampling and adaptive cluster sampling, whereby the accuracy of the volumetric estimation and the presence of zero-plots were evaluated. The sampling procedures applied to the species were affected by the low density of trees and the large number of zero-plots, whereas adaptive clusters allowed the sampling effort to be concentrated in plots with trees, thus aggregating more representative samples for estimating the commercial volume.
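The expansion rule at the heart of adaptive cluster sampling can be sketched on a toy plot grid: whenever a selected plot satisfies the condition (here, at least one commercial-volume tree), its neighbours are added, and the cluster grows until it is ringed by empty edge plots. The grid values and the 4-neighbour rule are assumptions for illustration, not the study's design.

```python
# Adaptive cluster sampling sketch: expand around plots meeting the
# condition until only zero-plots (edge units) remain on the boundary.

def adaptive_cluster(grid, initial, condition=lambda v: v > 0):
    rows, cols = len(grid), len(grid[0])
    surveyed, stack = set(), list(initial)
    while stack:
        r, c = stack.pop()
        if (r, c) in surveyed or not (0 <= r < rows and 0 <= c < cols):
            continue
        surveyed.add((r, c))
        if condition(grid[r][c]):   # expand only around plots with trees
            stack += [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return surveyed

grid = [[0, 0, 0],
        [0, 3, 2],    # tree counts per plot (hypothetical)
        [0, 0, 0]]
plots = adaptive_cluster(grid, [(1, 1)])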
Prediction of Ba, Mn and Zn for tropical soils using iron oxides and magnetic susceptibility
NASA Astrophysics Data System (ADS)
Marques Júnior, José; Arantes Camargo, Livia; Reynaldo Ferracciú Alleoni, Luís; Tadeu Pereira, Gener; De Bortoli Teixeira, Daniel; Santos Rabelo de Souza Bahia, Angelica
2017-04-01
Agricultural activity is an important source of potentially toxic elements (PTEs) in soil worldwide, particularly in heavily farmed areas. Characterizing the spatial distribution of PTE contents in farming areas is crucial to assess further environmental impacts caused by soil contamination. Designing prediction models becomes quite useful for characterizing the spatial variability of continuous variables, as it allows prediction of soil attributes that might be difficult to attain for a large number of samples through conventional methods. This study aimed to evaluate, on three geomorphic surfaces of Oxisols, the capacity for predicting PTEs (Ba, Mn, Zn) and their spatial variability using iron oxides and magnetic susceptibility (MS). Soil samples were collected from three geomorphic surfaces and analyzed for chemical, physical and mineralogical properties, as well as MS. PTE prediction models were calibrated by multiple linear regression (MLR). MLR calibration accuracy was evaluated using the coefficient of determination (R2). PTE spatial distribution maps were built by geostatistics using the values calculated by the best-performing calibrated models. The high correlations of clay, MS, hematite (Hm), iron oxides extracted by sodium dithionite-citrate-bicarbonate (Fed), and iron oxides extracted using acid ammonium oxalate (Feo) with the elements Ba, Mn, and Zn enabled them to be selected as predictors for the PTEs. Stepwise multiple linear regression showed that MS and Fed were individually the best PTE predictors, as considering two or more attributes together produced no significant increase in R2. The MS-calibrated models for Ba, Mn, and Zn prediction exhibited R2 values of 0.88, 0.66, and 0.55, respectively.
These are promising results since MS is a fast, cheap, and non-destructive tool, allowing the prediction of a large number of samples, which in turn enables detailed mapping of large areas. MS predicted values enabled the characterization and the understanding of spatial variability of the studied PTEs.
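The calibration-and-R² workflow can be sketched for the single-predictor case (PTE content from an MS reading). The data points below are invented for illustration, and the study's models were multiple linear regressions; this only shows how a least-squares fit is scored with the coefficient of determination.

```python
# Ordinary least-squares fit of y = a + b * x and its R² score.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b              # intercept a, slope b

def r_squared(xs, ys, a, b):
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

ms = [1.0, 2.0, 3.0, 4.0]              # hypothetical MS readings
ba = [10.0, 19.5, 30.5, 40.0]          # hypothetical Ba contents
a, b = fit_line(ms, ba)
score = r_squared(ms, ba, a, b)
```

An R² near 1 (as here, where the toy data are nearly collinear) indicates that most of the PTE variance is explained by the single predictor.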
Acoustical stability of a sonoluminescing bubble
NASA Astrophysics Data System (ADS)
Holzfuss, Joachim; Rüggeberg, Matthias; Holt, R. Glynn
2002-10-01
In the parameter region for sonoluminescence of a single levitated bubble in a water-filled resonator it is observed that the bubble may have an enormous spatial stability leaving it ``pinned'' in the fluid and allowing it to emit light pulses of picosecond accuracy. We report here observations of a complex harmonic structure in the acoustic field surrounding a sonoluminescing bubble. We show that this complex sound field determines the position of the bubble and may either increase or decrease its spatial stability. The acoustic environment of the bubble is the result of the excitation of high-order normal modes of the resonator by the outgoing shock wave generated by the bubble collapse.
Increased dimensionality of cell-cell communication can decrease the precision of gradient sensing
NASA Astrophysics Data System (ADS)
Smith, Tyler; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew
Gradient sensing is a biological computation that involves comparison of concentrations measured in at least two different locations. As such, the precision of gradient sensing is limited by the intrinsic stochasticity in the communication that brings such distributed information to the same location. We have recently analyzed such limitations experimentally and theoretically in multicellular gradient sensing in mammary epithelial cell organoids. For 1d chains of collectively sensing cells, the communication noise puts a severe constraint on how the accuracy of gradient sensing increases with the number of cells in the sensor. A question remains as to whether the effect of the noise can be mitigated by the extra spatial averaging allowed in sensing by 2d and 3d cellular organoids. Here we show using computer simulations that, counterintuitively, such spatial averaging decreases gradient sensitivity (while it increases concentration sensitivity). We explain the findings analytically and propose that a recently introduced Regional Excitation - Global Inhibition model of gradient sensing can overcome this limitation and use 2d or 3d spatial averaging to improve the sensing accuracy. Supported by NSF Grant PHY/1410978 and James S. McDonnell Foundation Grant # 220020321.
Bednarkiewicz, Artur; Whelan, Maurice P
2008-01-01
Fluorescence lifetime imaging (FLIM) is very demanding from a technical and computational perspective, and the output is usually a compromise between acquisition/processing time and data accuracy and precision. We present a new approach to acquisition, analysis, and reconstruction of microscopic FLIM images by employing a digital micromirror device (DMD) as a spatial illuminator. In the first step, the whole field fluorescence image is collected by a color charge-coupled device (CCD) camera. Further qualitative spectral analysis and sample segmentation are performed to spatially distinguish between spectrally different regions on the sample. Next, the fluorescence of the sample is excited segment by segment, and fluorescence lifetimes are acquired with a photon counting technique. FLIM image reconstruction is performed by either raster scanning the sample or by directly accessing specific regions of interest. The unique features of the DMD illuminator allow the rapid on-line measurement of global good initial parameters (GIP), which are supplied to the first iteration of the fitting algorithm. As a consequence, a decrease of the computation time required to obtain a satisfactory quality-of-fit is achieved without compromising the accuracy and precision of the lifetime measurements.
Assessing the Accuracy of National Land Cover Dataset Area Estimates at Multiple Spatial Extents
Site-specific accuracy assessments provide fine-scale evaluation of the thematic accuracy of land use/land cover (LULC) datasets; however, they provide little insight into LULC accuracy across varying spatial extents. Additionally, LULC data are typically used to describe lands...
Panico, Francesco; Sagliano, Laura; Grossi, Dario; Trojano, Luigi
2016-06-01
The aim of this study is to clarify the specific role of the cerebellum during the prism adaptation procedure (PAP), considering its involvement in early prism exposure (i.e., in the recalibration process) and in the post-exposure phase (i.e., in the after-effect, related to spatial realignment). For this purpose we interfered with cerebellar activity by means of cathodal transcranial direct current stimulation (tDCS), while young healthy individuals were asked to perform a pointing task on a touch screen before, during and after wearing base-left prism glasses. The distance from the target dot in each trial (in pixels) on the horizontal and vertical axes was recorded and served as an index of accuracy. Results on the horizontal axis, which was shifted by the prism glasses, revealed that participants who received cathodal stimulation showed increased rightward deviation from the actual position of the target while wearing prisms and a larger leftward deviation from the target after prism removal. Results on the vertical axis, in which no shift was induced, revealed a general trend in the two groups to improve accuracy through the different phases of the task, and a trend, more visible in cathodal-stimulated participants, to worsen accuracy from the first to the last movements in each phase. The data on the horizontal axis allow us to confirm that the cerebellum is involved in all stages of PAP, contributing to the early strategic recalibration process as well as to spatial realignment. On the vertical axis, the improving performance across the different stages of the task and the worsening accuracy within each task phase can be ascribed, respectively, to a learning process and to task-related fatigue. Copyright © 2016 Elsevier Inc. All rights reserved.
High Resolution Insights into Snow Distribution Provided by Drone Photogrammetry
NASA Astrophysics Data System (ADS)
Redpath, T.; Sirguey, P. J.; Cullen, N. J.; Fitzsimons, S.
2017-12-01
Dynamic in time and space, New Zealand's seasonal snow is largely confined to remote alpine areas, complicating ongoing in situ measurement and characterisation. Improved understanding and modeling of the seasonal snowpack requires fine-scale resolution of snow distribution and spatial variability. The potential of remotely piloted aircraft system (RPAS) photogrammetry to resolve spatial and temporal variability of snow depth and water equivalent in a New Zealand alpine catchment is assessed in the Pisa Range, Central Otago. This approach yielded orthophotomosaics and digital surface models (DSM) at 0.05 and 0.15 m spatial resolution, respectively. An autumn reference DSM allowed mapping of winter (02/08/2016) and spring (10/09/2016) snow depth at 0.15 m spatial resolution via DSM differencing. The consistency and accuracy of the RPAS-derived surface was assessed by comparison of snow-free regions of the spring and autumn DSMs, while the accuracy of RPAS-retrieved snow depth was assessed with 86 in situ snow probe measurements. Results show a mean vertical residual of 0.024 m between DSMs acquired in autumn and spring. This residual approximated a Laplace distribution, reflecting the influence of large outliers on the small overall bias. Propagation of errors associated with successive DSMs saw snow depth mapped with an accuracy of ± 0.09 m (95% c.l.). Comparing RPAS and in situ snow depth measurements revealed the influence of geo-location uncertainty and of interactions between vegetation and the snowpack on snow depth uncertainty and bias. Semi-variogram analysis revealed that the RPAS outperformed systematic in situ measurements in resolving fine-scale spatial variability. Despite limitations accompanying RPAS photogrammetry, this study demonstrates a repeatable means of accurately mapping snow depth for an entire, yet relatively small, hydrological basin (≈ 0.5 km2), at high resolution.
Resolving snowpack features associated with re-distribution and preferential accumulation and ablation, snow depth maps provide geostatistically robust insights into seasonal snow processes, with unprecedented detail. Such data may enhance understanding of physical processes controlling spatial and temporal distribution of seasonal snow, and their relative importance at varying spatial and temporal scales.
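The DSM-differencing step itself is a cell-by-cell subtraction of the snow-free reference surface from the snow-covered surface. The elevations below are made-up numbers; a real workflow also carries geo-referencing, co-registration, and the error propagation the abstract quantifies, all of which this sketch ignores.

```python
# Snow depth = winter DSM minus snow-free (autumn) reference DSM,
# computed per grid cell. Values are illustrative elevations in metres.

def snow_depth(winter_dsm, reference_dsm):
    return [[w - r for w, r in zip(wrow, rrow)]
            for wrow, rrow in zip(winter_dsm, reference_dsm)]

autumn = [[100.0, 101.0], [102.0, 103.0]]
winter = [[100.9, 101.5], [102.0, 104.2]]
depth = snow_depth(winter, autumn)   # 0.9 m of snow in the first cell, none in the third
```

A cell where the two surfaces coincide (here, the snow-free third cell) correctly yields zero depth, which is also how snow-free regions serve as a consistency check between acquisitions.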
Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.
Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong
2018-03-01
Traffic safety research has developed spatiotemporal models to explore variations in the spatial pattern of crash risk over time. Many studies have observed notable benefits associated with the inclusion of spatial and temporal correlation and their interactions. However, the safety literature lacks sufficient research comparing different temporal treatments and their interaction with the spatial component. This study developed four spatiotemporal models of varying complexity with different temporal treatments: (I) linear time trend; (II) quadratic time trend; (III) autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction which allows greater flexibility than the traditional linear space-time interaction. The mixture component accommodates the global space-time interaction as well as departures from the overall spatial and temporal risk patterns. This study performed a comprehensive assessment of the mixture models based on diverse criteria pertaining to goodness-of-fit, cross-validation and evaluation based on in-sample data for predictive accuracy of crash estimates. The assessment of model performance in terms of goodness-of-fit clearly established the superiority of the time-adjacency specification, which was evidently more complex due to the addition of information borrowed from neighboring years, but this addition of parameters allowed a significant advantage in posterior deviance which subsequently benefited overall fit to the crash data. Base models were also developed to compare the proposed mixture and traditional space-time components for each temporal model. The mixture models consistently outperformed the corresponding Base models owing to much lower deviance.
For the cross-validation comparison of predictive accuracy, the linear time trend model was adjudged the best, as it recorded the highest log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data as for model development. Under each criterion, observed crash counts were compared with three types of data: Bayesian estimated, normal predicted, and model replicated. The linear model again performed best in most scenarios, except one case using model-replicated data and two cases involving prediction without including random effects. These phenomena indicated the mediocre performance of the linear trend when random effects were excluded from evaluation. This might be due to the flexible mixture space-time interaction, which can efficiently absorb the residual variability escaping from the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models, as the mixture ones generated more precise estimated crash counts across all four models, suggesting that the advantages of the mixture component at model fit were transferable to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of the random effect models, which validates the importance of incorporating correlation structures to account for unobserved heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sun, D.; Zheng, J. H.; Ma, T.; Chen, J. J.; Li, X.
2018-04-01
Rodent infestation is one of the main biological disasters affecting grassland in northern Xinjiang. Rodents' eating and digging behaviors destroy ground vegetation, which seriously affects the development of animal husbandry and grassland ecological security. UAV low-altitude remote sensing, an emerging technique with high spatial resolution, can effectively recognize the burrows. However, how to select the appropriate spatial resolution to monitor the rodent disaster is the first problem to address. The purpose of this study is to explore the optimal spatial scale for identification of the burrows by evaluating the impact of different spatial resolutions on burrow identification accuracy. In this study, we photographed burrows from different flight heights to obtain visible images of different spatial resolutions. An object-oriented method was then used to identify the burrows, and we evaluated the accuracy of the classification. We found that the classification accuracy of burrows averaged more than 80 %. At an altitude of 24 m and a spatial resolution of 1 cm, the classification accuracy was highest. We have created a unique and effective way to identify burrows using UAV visible images. We draw the following conclusions: the best spatial resolution for burrow recognition is 1 cm using the DJI PHANTOM-3 UAV, and improving spatial resolution does not necessarily improve classification accuracy. This study lays the foundation for future research and can be extended to similar studies elsewhere.
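The link between flight height and image resolution is the standard ground sample distance (GSD) relation, GSD = pixel pitch × height / focal length. The sensor parameters below are assumed round numbers chosen to give a 1 cm GSD, not the PHANTOM-3 camera specification.

```python
# Ground sample distance for a nadir-pointing camera:
# GSD [m/pixel] = pixel_pitch [m] * flight_height [m] / focal_length [m].

def ground_sample_distance(pixel_pitch_m, focal_length_m, height_m):
    return pixel_pitch_m * height_m / focal_length_m

# With an assumed 1.6 micron pixel pitch and 4 mm focal length,
# 25 m of altitude yields a 1 cm GSD.
gsd = ground_sample_distance(1.6e-6, 4e-3, 25.0)
```

Because GSD scales linearly with height, halving the flight altitude halves the GSD, which is how the study traded altitude against burrow-identification accuracy.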
Brain-Computer Interface Based on Generation of Visual Images
Bobrov, Pavel; Frolov, Alexander; Cantor, Charles; Fedulova, Irina; Bakhnyan, Mikhail; Zhavoronkov, Alexander
2011-01-01
This paper examines the task of recognizing EEG patterns that correspond to performing three mental tasks: relaxation and imagining two types of pictures, faces and houses. The experiments were performed using two EEG headsets: BrainProducts ActiCap and Emotiv EPOC. The Emotiv headset is becoming widely used in consumer BCI applications and would allow large-scale EEG experiments to be conducted in the future. Since classification accuracy significantly exceeded the level of random classification during the first three days of the experiment with the EPOC headset, a control experiment was performed on the fourth day using the ActiCap. The control experiment showed that the use of high-quality research equipment can enhance classification accuracy (up to 68% in some subjects) and that the accuracy is independent of the presence of EEG artifacts related to blinking and eye movement. This study also shows that a computationally inexpensive Bayesian classifier based on covariance matrix analysis yields classification accuracy similar to that of the more sophisticated multi-class Common Spatial Patterns (MCSP) classifier for this problem. PMID:21695206
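A covariance-based Bayesian classifier of the kind mentioned can be sketched as follows; the zero-mean Gaussian model, channel count and class covariances are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def fit_class_cov(trials):
    """Average spatial covariance (channels x channels) over trials."""
    covs = [x @ x.T / x.shape[1] for x in trials]
    return np.mean(covs, axis=0)

def classify(x, class_covs):
    """Pick the class maximising the zero-mean Gaussian log-likelihood
    of segment x (channels x samples) under each class covariance."""
    c = x @ x.T / x.shape[1]
    scores = []
    for s in class_covs:
        sign, logdet = np.linalg.slogdet(s)
        scores.append(-logdet - np.trace(np.linalg.solve(s, c)))
    return int(np.argmax(scores))

# two synthetic "mental states" with different 2-channel covariances
rng = np.random.default_rng(1)
cov0 = np.diag([1.0, 1.0])
cov1 = np.diag([4.0, 0.25])
make = lambda cov, n: [np.linalg.cholesky(cov) @ rng.normal(size=(2, 200)) for _ in range(n)]
covs = [fit_class_cov(make(cov0, 20)), fit_class_cov(make(cov1, 20))]
test = np.linalg.cholesky(cov1) @ rng.normal(size=(2, 200))
print(classify(test, covs))
```

The appeal of this family of classifiers, as the abstract notes, is that training reduces to estimating one covariance matrix per mental state.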
Distributed wavefront reconstruction with SABRE for real-time large scale adaptive optics control
NASA Astrophysics Data System (ADS)
Brunner, Elisabeth; de Visser, Cornelis C.; Verhaegen, Michel
2014-08-01
We present advances on Spline-based ABerration REconstruction (SABRE) from (Shack-)Hartmann (SH) wavefront measurements for large-scale adaptive optics systems. SABRE locally models the wavefront with simplex B-spline basis functions on triangular partitions defined on the SH subaperture array. This approach allows high accuracy, through the possible use of nonlinear basis functions, and great adaptability to any wavefront sensor and pupil geometry. The main contribution of this paper is a distributed wavefront reconstruction method, D-SABRE, a two-stage procedure based on decomposing the sensor domain into sub-domains, each supporting a local SABRE model. D-SABRE greatly decreases the computational complexity of the method and removes the need for centralized reconstruction, while obtaining a reconstruction accuracy for simulated E-ELT turbulence within 1% of the global method's accuracy. Further, a generalization of the methodology is proposed that makes direct use of SH intensity measurements, leading to improved reconstruction accuracy compared with centroid algorithms using spatial gradients.
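A minimal zonal illustration of least-squares wavefront reconstruction from slope measurements (a 1-D finite-difference sketch under assumed sensor geometry, not the SABRE B-spline model):

```python
import numpy as np

# Recover a 1-D wavefront from Shack-Hartmann-like slope measurements
# by least squares; the defocus-like test aberration is illustrative.
n = 32
x = np.linspace(-1.0, 1.0, n)
phi_true = x**2 - 0.5                        # defocus-like aberration
h = x[1] - x[0]
slopes = (phi_true[1:] - phi_true[:-1]) / h  # one slope per subaperture gap

D = (np.eye(n, k=1) - np.eye(n))[:-1] / h    # first-difference operator
phi_hat, *_ = np.linalg.lstsq(D, slopes, rcond=None)
phi_hat -= phi_hat.mean() - phi_true.mean()  # piston mode is unobservable
print(float(np.abs(phi_hat - phi_true).max()) < 1e-9)
```

The piston correction in the last step reflects a general property of slope-based sensing: a constant offset of the wavefront produces no measurable gradient, so it must be fixed by convention.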
NASA Astrophysics Data System (ADS)
Diesing, Markus; Green, Sophie L.; Stephens, David; Lark, R. Murray; Stewart, Heather A.; Dove, Dayton
2014-08-01
Marine spatial planning and conservation need underpinning with sufficiently detailed and accurate seabed substrate and habitat maps. Although multibeam echosounders enable us to map the seabed with high resolution and spatial accuracy, there is still a lack of fit-for-purpose seabed maps. This is due to the high costs involved in carrying out systematic seabed mapping programmes and the fact that the development of validated, repeatable, quantitative and objective methods of swath acoustic data interpretation is still in its infancy. We compared a wide spectrum of approaches, including manual interpretation, geostatistics, object-based image analysis and machine learning, to gain further insights into the accuracy and comparability of acoustic data interpretation approaches based on multibeam echosounder data (bathymetry, backscatter and derivatives) and seabed samples, with the aim of deriving seabed substrate maps. Sample data were split into training and validation data sets to allow an accuracy assessment. Overall thematic classification accuracy ranged from 67% to 76%, and Cohen's kappa varied between 0.34 and 0.52; however, these differences were not statistically significant at the 5% level. Misclassifications were mainly associated with uncommon classes, which were rarely sampled. Map outputs were between 68% and 87% identical. To improve classification accuracy in seabed mapping, we suggest that more studies on the factors affecting classification performance, as well as comparative studies testing the performance of different approaches, be carried out with a view to developing guidelines for selecting an appropriate method for a given dataset. In the meantime, classification accuracy might be improved by combining different techniques into hybrid approaches and multi-method ensembles.
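The two accuracy figures reported above (overall thematic accuracy and Cohen's kappa) can be computed from a confusion matrix as follows; the matrix below is a toy example, not the study's data:

```python
import numpy as np

def accuracy_and_kappa(conf):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    po = np.trace(conf) / n                        # observed agreement
    pe = (conf.sum(0) * conf.sum(1)).sum() / n**2  # chance agreement
    return po, (po - pe) / (1 - pe)

oa, kappa = accuracy_and_kappa([[30, 5], [10, 55]])
print(round(oa, 3), round(kappa, 3))
```

Kappa discounts the agreement expected by chance from the class marginals, which is why it can stay modest (0.34 to 0.52 above) even when overall accuracy looks respectable.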
Wang, Junqiang; Wang, Yu; Zhu, Gang; Chen, Xiangqian; Zhao, Xiangrui; Qiao, Huiting; Fan, Yubo
2018-06-01
Spatial positioning accuracy is a key issue in a computer-assisted orthopaedic surgery (CAOS) system. Since intraoperative fluoroscopic images are one of the most important inputs to a CAOS system, their quality should have a significant influence on the system's accuracy; however, the regularities and mechanisms by which intraoperative image quality influences CAOS accuracy have yet to be studied. Two typical spatial positioning methods, a C-arm calibration-based method and a bi-planar positioning method, are used to study the influence of different image quality parameters, such as resolution, distortion, contrast and signal-to-noise ratio, on positioning accuracy. The propagation of image error through the different spatial positioning methods is analyzed by the Monte Carlo method. Correlation analysis showed that resolution and distortion had a significant influence on spatial positioning accuracy. In addition, the C-arm calibration-based method was more sensitive to image distortion, while the bi-planar positioning method was more susceptible to image resolution. Image contrast and signal-to-noise ratio had no significant influence on spatial positioning accuracy. The Monte Carlo analysis showed that, in general, the bi-planar positioning method was more sensitive to image quality than the C-arm calibration-based method. The quality of intraoperative fluoroscopic images is therefore a key issue in the spatial positioning accuracy of a CAOS system. Although the two typical positioning methods have very similar mathematical principles, they showed different sensitivities to different image quality parameters. The results of this research may help to create a realistic standard for intraoperative fluoroscopic images in CAOS systems. Copyright © 2018 John Wiley & Sons, Ltd.
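Monte Carlo propagation of image error into positioning error can be sketched on a simplified two-view depth model; the geometry, focal length, baseline and noise levels below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Monte Carlo propagation of image measurement noise into 3-D depth
# for a simplified two-view geometry: z = f * b / d (d = disparity).
rng = np.random.default_rng(0)
f_px, baseline_mm, true_z_mm = 1200.0, 150.0, 600.0
d_true = f_px * baseline_mm / true_z_mm          # true disparity in pixels

for sigma_px in (0.25, 0.5, 1.0):
    d = d_true + rng.normal(0.0, sigma_px, 100_000)
    z = f_px * baseline_mm / d
    print(sigma_px, round(float(z.std()), 2))
```

The sampled spread of z directly measures how a given pixel-level error budget translates into spatial positioning error, which is the essence of the paper's Monte Carlo analysis.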
Water quality modeling in the dead end sections of drinking water distribution networks.
Abokifa, Ahmed A; Yang, Y Jeffrey; Lo, Cynthia S; Biswas, Pratim
2016-02-01
Dead-end sections of drinking water distribution networks are known to be problematic zones in terms of water quality degradation. Extended residence time due to water stagnation leads to rapid reduction of disinfectant residuals, allowing the regrowth of microbial pathogens. Water quality models developed so far apply spatial aggregation and temporal averaging techniques for hydraulic parameters by assigning hourly averaged water demands to the main nodes of the network. Although this practice has generally resulted in minimal loss of accuracy for the predicted disinfectant concentrations in main water transmission lines, this is not the case for the peripheries of the distribution network. This study proposes a new approach for simulating disinfectant residuals in dead-end pipes while accounting for both spatial and temporal variability in hydraulic and transport parameters. A stochastic demand generator was developed to represent residential water pulses based on a non-homogeneous Poisson process. Dispersive solute transport was considered using highly dynamic dispersion rates. A genetic algorithm was used to calibrate the axial hydraulic profile of the dead-end pipe based on the different demand shares of the withdrawal nodes. A parametric sensitivity analysis was performed to assess the model performance under variation of different simulation parameters. A group of Monte Carlo ensembles was carried out to investigate the influence of spatial and temporal variations in flow demands on simulation accuracy. A set of three correction factors was analytically derived to adjust residence time, dispersion rate and wall demand, to overcome the simulation error caused by the spatial aggregation approximation. The current model results show better agreement with field-measured concentrations of a conservative fluoride tracer and free chlorine disinfectant than the simulations of recent advection-dispersion-reaction models published in the literature.
The accuracy of the simulated concentration profiles showed a significantly stronger dependence on the spatial distribution of the flow demands than on their temporal variation. Copyright © 2015 Elsevier Ltd. All rights reserved.
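The stochastic demand generator described above (residential pulses from a non-homogeneous Poisson process) can be sketched with Lewis-Shedler thinning; the diurnal intensity function below is an assumed illustration, not the paper's calibrated demand model:

```python
import random

def nhpp_times(rate_fn, rate_max, horizon_h, seed=0):
    """Arrival times of a non-homogeneous Poisson process on
    [0, horizon_h] via Lewis-Shedler thinning."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_max)      # candidate from rate_max HPP
        if t > horizon_h:
            return times
        if rng.random() < rate_fn(t) / rate_max:
            times.append(t)                 # accept with prob rate/rate_max

# illustrative diurnal demand intensity (pulses/hour), peaking at 7 am
rate = lambda t: 2.0 + 8.0 * max(0.0, 1.0 - abs(t - 7.0) / 3.0)
pulses = nhpp_times(rate, rate_max=10.0, horizon_h=24.0)
print(len(pulses))
```

Thinning only requires an upper bound on the intensity, which makes it convenient for arbitrary diurnal demand curves.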
NASA Astrophysics Data System (ADS)
Benaud, P.; Anderson, K.; Quine, T. A.; James, M. R.; Quinton, J.; Brazier, R. E.
2016-12-01
While total sediment capture can accurately quantify soil loss via water erosion, it is not practical at the field scale and provides little information on the spatial nature of soil erosion processes. Consequently, high-resolution remote-sensing point cloud data provide an alternative method for quantifying soil loss. The accessibility of Structure-from-Motion with Multi-View Stereo (SfM) and the potential for multi-temporal applications offer an exciting opportunity to quantify soil erosion spatially. Accordingly, published research provides examples of the successful quantification of large erosion features and events to centimetre accuracy. Through rigorous control of the camera and image network geometry, the centimetre accuracy achievable at the field scale can translate to sub-millimetre accuracy within a laboratory environment. Accordingly, this study looks to understand how the ultra-high-resolution spatial information on soil surface topography derived from SfM can be integrated with a multi-element sediment tracer to develop a mechanistic understanding of rill and inter-rill erosion under experimental conditions. A rainfall simulator was used to create three soil surface conditions (compaction and rainsplash, inter-rill erosion, and rill erosion) at two experimental scales (0.15 m2 and 3 m2). Total sediment capture was the primary validation for the experiments, allowing comparison between structurally and volumetrically derived change and true soil loss. A terrestrial laser scanner (resolution of ca. 0.8 mm) was employed to assess spatial discrepancies within the SfM data sets and to provide an alternative measure of volumetric change. Preliminary results show the SfM approach used can achieve a ground resolution of less than 0.2 mm per pixel and an RMSE of less than 0.3 mm. Consequently, it is expected that the ultra-high-resolution SfM point clouds can be utilised to provide a detailed assessment of soil loss via water erosion processes.
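Volumetric change from repeat topography (the DEM-of-difference step implicit in comparing structural change with captured sediment) can be sketched as follows; the grid size, cell size and level of detection are assumed values chosen to echo the sub-millimetre scales above:

```python
import numpy as np

def dod_volume(dem_before, dem_after, cell_m, lod=0.0):
    """Erosion and deposition volumes (m^3) from two co-registered DEMs.
    Elevation changes below the level of detection `lod` are ignored."""
    dz = np.asarray(dem_after, float) - np.asarray(dem_before, float)
    dz[np.abs(dz) < lod] = 0.0
    cell_area = cell_m ** 2
    erosion = -dz[dz < 0].sum() * cell_area
    deposition = dz[dz > 0].sum() * cell_area
    return erosion, deposition

before = np.zeros((100, 100))
after = np.zeros((100, 100))
after[40:60, 40:60] -= 0.005                  # a 5 mm deep eroded patch
ero, dep = dod_volume(before, after, cell_m=0.0002, lod=0.0003)  # 0.2 mm cells
print(ero, dep)
```

The level-of-detection threshold plays the same role as the RMSE figures quoted above: change smaller than the survey uncertainty should not be counted as real soil loss.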
Comparison of 2c- and 3cLIF droplet temperature imaging
NASA Astrophysics Data System (ADS)
Palmer, Johannes; Reddemann, Manuel A.; Kirsch, Valeri; Kneer, Reinhold
2018-06-01
This work presents "pulsed 2D-3cLIF-EET" as a measurement setup for micro-droplet internal temperature imaging. The setup relies on a third color channel that allows correction of spatially varying energy transfer rates between the two applied fluorescent dyes. First measurement results are compared with results from two slightly different versions of the recent "pulsed 2D-2cLIF-EET" method. The results reveal a higher temperature measurement accuracy for the recent 2cLIF setup, which determines average droplet temperature with an uncertainty of less than 1 K and a spatial deviation of about 3.7 K. The new 3cLIF approach would become competitive if the existing droplet-size dependency were compensated by an additional calibration and if the processing algorithm accounted for spatial measurement errors more appropriately.
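Two-colour LIF ratiometry of the kind compared here rests on a calibration between the band intensity ratio and temperature; the log-linear calibration form and the constants below are hypothetical, for illustration only:

```python
import math

# Hypothetical two-colour LIF calibration: assume the band ratio follows
# ln R = a + b / T, so temperature is recovered as T = b / (ln R - a).
a, b = -2.0, 700.0            # assumed calibration constants

def ratio(T):
    """Forward model: band intensity ratio at temperature T (K)."""
    return math.exp(a + b / T)

def temperature(R):
    """Inversion applied to a measured band ratio."""
    return b / (math.log(R) - a)

T_true = 300.0
print(round(temperature(ratio(T_true)), 6))
```

In practice the calibration curve is measured, not assumed, and (as the abstract notes for 3cLIF) may need additional terms, e.g. for droplet size.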
Indirect monitoring shot-to-shot shock waves strength reproducibility during pump-probe experiments
NASA Astrophysics Data System (ADS)
Pikuz, T. A.; Faenov, A. Ya.; Ozaki, N.; Hartley, N. J.; Albertazzi, B.; Matsuoka, T.; Takahashi, K.; Habara, H.; Tange, Y.; Matsuyama, S.; Yamauchi, K.; Ochante, R.; Sueda, K.; Sakata, O.; Sekine, T.; Sato, T.; Umeda, Y.; Inubushi, Y.; Yabuuchi, T.; Togashi, T.; Katayama, T.; Yabashi, M.; Harmand, M.; Morard, G.; Koenig, M.; Zhakhovsky, V.; Inogamov, N.; Safronova, A. S.; Stafford, A.; Skobelev, I. Yu.; Pikuz, S. A.; Okuchi, T.; Seto, Y.; Tanaka, K. A.; Ishikawa, T.; Kodama, R.
2016-07-01
We present an indirect method of estimating the strength of a shock wave, allowing on-line monitoring of its reproducibility in each laser shot. The method is based on shot-to-shot measurement of the X-ray emission from the ablated plasma with a high-resolution, spatially resolved focusing spectrometer. An optical pump laser with an energy of 1.0 J and a pulse duration of ˜660 ps was used to irradiate solid targets or foils of various thicknesses containing oxygen, aluminum, iron, and tantalum. The high sensitivity and resolving power of the X-ray spectrometer allowed spectra to be obtained on each laser shot and fluctuations of the spectral intensity emitted by the different plasmas to be controlled with an accuracy of ˜2%, implying an accuracy in the derived electron plasma temperature of 5%-10% in pump-probe high energy density science experiments. At nano- and sub-nanosecond laser pulse durations with relatively low laser intensities and a ratio Z/A ˜ 0.5, the electron temperature follows Te ∝ I_las^(2/3). Thus, measurements of the electron plasma temperature allow indirect estimation of the laser flux on the target and control of its shot-to-shot fluctuation. Knowing the laser flux intensity and its fluctuation gives us the possibility of monitoring the shot-to-shot reproducibility of the generated shock wave strength with high accuracy.
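The quoted Te ∝ I_las^(2/3) scaling implies a simple error-propagation rule for the inferred flux, dI/I = (3/2) dTe/Te:

```python
# Under the scaling Te ∝ I_las^(2/3), relative fluctuations propagate as
# dI/I = (3/2) * dTe/Te, so the measured 5-10% temperature uncertainty
# bounds the inferred relative laser-flux uncertainty.
def flux_fluctuation(te_fluctuation):
    """Relative laser-flux fluctuation implied by a relative
    electron-temperature fluctuation."""
    return 1.5 * te_fluctuation

print(flux_fluctuation(0.05), flux_fluctuation(0.10))
```

That is, the 5%-10% temperature accuracy quoted above corresponds to roughly a 7.5%-15% constraint on the shot-to-shot laser flux.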
NASA Astrophysics Data System (ADS)
Dronova, I.; Gong, P.; Wang, L.; Clinton, N.; Fu, W.; Qi, S.
2011-12-01
Remote sensing-based vegetation classifications representing plant function, such as photosynthesis and productivity, are challenging in wetlands with complex cover and difficult field access. Recent advances in object-based image analysis (OBIA) and machine-learning algorithms offer new classification tools; however, few comparisons of different algorithms and spatial scales have been discussed to date. We applied OBIA to delineate wetland plant functional types (PFTs) for Poyang Lake, the largest freshwater lake in China and a Ramsar wetland conservation site, from a 30-m Landsat TM scene at the peak of the spring growing season. We targeted major PFTs (C3 grasses, C3 forbs and different types of C4 grasses and aquatic vegetation) that are both key players in the system's biogeochemical cycles and critical providers of waterbird habitat. Classification results were compared among: a) several object segmentation scales (with average object sizes 900-9000 m2); b) several families of statistical classifiers (including Bayesian, Logistic, Neural Network, Decision Trees and Support Vector Machines); and c) two hierarchical levels of vegetation classification, a generalized 3-class set and a more detailed 6-class set. We found that classification benefited from the object-based approach, which allowed the inclusion of object shape, texture and context descriptors in classification. While a number of classifiers achieved high accuracy at the finest, pixel-equivalent segmentation scale, the highest accuracies and best agreement among algorithms occurred at coarser object scales. No single classifier was consistently superior across all scales, although selected algorithms of the Neural Network, Logistic and K-Nearest Neighbors families frequently provided the best discrimination of classes at different scales. The choice of vegetation categories also affected classification accuracy. The 6-class set allowed higher individual class accuracies but lower overall accuracy than the 3-class set, because individual classes differed in the scales at which they were best discriminated from others. The main classification challenges included a) the presence of C3 grasses in C4-grass areas, particularly following harvesting of C4 reeds, and b) mixtures of emergent, floating and submerged aquatic plants at sub-object and sub-pixel scales. We conclude that OBIA with advanced statistical classifiers offers useful instruments for landscape vegetation analyses, and that spatial scale considerations are critical in mapping PFTs, while multi-scale comparisons can be used to guide class selection. Future work will further apply fuzzy classification and field-collected spectral data for PFT analysis and compare results with MODIS PFT products.
Niechwiej-Szwedo, Ewa; Gonzalez, David; Nouredanesh, Mina; Tung, James
2018-01-01
Kinematic analysis of upper limb reaching provides insight into central nervous system control of movements. Until recently, kinematic examination of motor control has been limited to studies conducted in traditional research laboratories, because the motion capture equipment used for data collection is expensive and not easily portable. A recently developed markerless system, the Leap Motion Controller (LMC), is a portable and inexpensive tracking device that allows recording of 3D hand and finger position. The main goal of this study was to assess the concurrent reliability and validity of the LMC, as compared to the Optotrak, a criterion-standard motion capture system, for measures of temporal accuracy and peak velocity during the performance of upper limb, visually-guided movements. In experiment 1, 14 participants executed aiming movements to visual targets presented on a computer monitor. Bland-Altman analysis was conducted to assess the validity and limits of agreement for measures of temporal accuracy (movement time, duration of the deceleration interval), peak velocity, and spatial accuracy (endpoint accuracy). In addition, a one-sample t-test was used to test the hypothesis that the error difference between measures obtained from the Optotrak and the LMC is zero. In experiment 2, 15 participants performed a Fitts' type aiming task in order to assess whether the LMC is capable of capturing the well-known speed-accuracy trade-off relationship. Experiment 3 assessed the temporal coordination pattern during the performance of a sequence consisting of reaching, grasping, and placement tasks in 15 participants. Results from the t-test showed that the error difference in temporal measures was significantly different from zero. Based on the results from the three experiments, the average temporal error in movement time was 40±44 ms, and the error in peak velocity was 0.024±0.103 m/s. The limits of agreement between the LMC and the Optotrak for spatial accuracy measures ranged from 2 to 5 cm. Although the LMC is a low-cost, highly portable system that could facilitate collection of kinematic data outside of traditional laboratory settings, the temporal and spatial errors may limit the use of the device in some settings.
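The Bland-Altman analysis used in experiment 1 can be sketched as follows; the simulated Optotrak/LMC numbers below are illustrative, loosely inspired by the reported error magnitudes, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measures."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    half = 1.96 * d.std(ddof=1)
    return bias, (bias - half, bias + half)

rng = np.random.default_rng(0)
optotrak = rng.normal(500.0, 50.0, 14)           # e.g. movement time, ms
lmc = optotrak + rng.normal(40.0, 44.0, 14)      # device with bias + noise
bias, (lo, hi) = bland_altman(lmc, optotrak)
print(round(bias, 1), round(lo, 1), round(hi, 1))
```

The limits of agreement, rather than a correlation coefficient, are what tell a user whether the cheaper device is interchangeable with the criterion standard for a given application.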
Endogenous synchronous fluorescence spectroscopy (SFS) of basal cell carcinoma-initial study
NASA Astrophysics Data System (ADS)
Borisova, E.; Zhelyazkova, Al.; Keremedchiev, M.; Penkov, N.; Semyachkina-Glushkovskaya, O.; Avramov, L.
2016-01-01
The human skin is a complex, multilayered and inhomogeneous organ with spatially varying optical properties. Analysis of cutaneous fluorescence spectra can be a very complicated task; therefore, researchers apply complex mathematical tools for data evaluation, or try to find specific approaches that simplify the spectral analysis. Synchronous fluorescence spectroscopy (SFS) improves spectral resolution, which could be useful for characterizing biological tissue fluorescence and could increase the diagnostic accuracy of tumour detection.
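The constant-offset synchronous scan at the heart of SFS can be illustrated on a toy excitation-emission matrix; the single-fluorophore EEM and the 70 nm offset below are assumed for illustration:

```python
import numpy as np

def synchronous_spectrum(eem, ex_nm, em_nm, delta_nm):
    """Constant-offset synchronous spectrum from an excitation-emission
    matrix: S(lambda_ex) = EEM(lambda_ex, lambda_ex + delta)."""
    out = []
    for i, ex in enumerate(ex_nm):
        j = int(np.argmin(np.abs(em_nm - (ex + delta_nm))))
        out.append(eem[i, j])
    return np.array(out)

ex = np.arange(300, 401, 5.0)
em = np.arange(320, 501, 5.0)
# toy EEM: a single fluorophore peaked at ex 350 nm / em 420 nm
E, M = np.meshgrid(ex, em, indexing="ij")
eem = np.exp(-((E - 350) / 20) ** 2 - ((M - 420) / 30) ** 2)
s = synchronous_spectrum(eem, ex, em, delta_nm=70.0)
print(ex[int(np.argmax(s))])
```

Because both monochromators move together, each fluorophore contributes a narrower band to the synchronous spectrum than to a conventional emission scan, which is the resolution gain the abstract refers to.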
Spatial/Spectral Identification of Endmembers from AVIRIS Data using Mathematical Morphology
NASA Technical Reports Server (NTRS)
Plaza, Antonio; Martinez, Pablo; Gualtieri, J. Anthony; Perez, Rosa M.
2001-01-01
During the last several years, a number of airborne and satellite hyperspectral sensors have been developed or improved for remote sensing applications. Imaging spectrometry allows the detection of materials, objects and regions in a particular scene with a high degree of accuracy. Hyperspectral data typically consist of hundreds of thousands of spectra, so the analysis of this information is a key issue. Mathematical morphology is a widely used nonlinear technique for image analysis and pattern recognition. Although it is especially well suited to segmenting binary or grayscale images with irregular and complex shapes, its application to the classification/segmentation of multispectral or hyperspectral images has been quite rare. In this paper, we discuss a new, completely automated methodology for finding endmembers in the hyperspectral data cube using mathematical morphology. The extension of classic morphology to the hyperspectral domain allows us to integrate spectral and spatial information in the analysis process. In Section 3, some basic concepts of mathematical morphology and the technical details of our algorithm are provided. In Section 4, the accuracy of the proposed method is tested through its application to real hyperspectral data obtained from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). Some details about these data, and reference results obtained with well-known endmember extraction techniques, are provided in Section 2. Finally, in Section 5 we present the main conclusions.
On the Accuracy Potential in Underwater/Multimedia Photogrammetry.
Maas, Hans-Gerd
2015-07-24
Underwater applications of photogrammetric measurement techniques usually need to deal with multimedia photogrammetry aspects, which are characterized by the necessity of handling optical rays that are refracted at interfaces between optical media with different refractive indices according to Snell's law. This so-called multimedia geometry has to be incorporated into geometric models in order to achieve correct measurement results. The paper shows a flexible yet strict geometric model for the handling of refraction effects on the optical path, which can be implemented as a module in photogrammetric standard tools such as spatial resection, spatial intersection, bundle adjustment or epipolar line computation. The module is especially well suited for applications where an object in water is observed by cameras in air through one or more planar glass interfaces, as it allows for some simplifications here. In the second part of the paper, several aspects relevant to an assessment of the accuracy potential in underwater/multimedia photogrammetry are discussed. These aspects include network geometry and interface planarity issues, as well as effects caused by refractive index variations, dispersion and diffusion under water. All these factors contribute to a rather significant degradation of the geometric accuracy potential in underwater/multimedia photogrammetry. In practical experiments, a degradation of the quality of results by a factor of two could be determined under relatively favorable conditions.
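The refraction handling that multimedia photogrammetry must model at each planar interface follows the vector form of Snell's law; a minimal sketch (the incidence angle and refractive indices are illustrative):

```python
import math

def refract(d, n, n1, n2):
    """Refract unit direction d at an interface with unit normal n
    (pointing toward the incident medium): vector form of Snell's law.
    Returns None on total internal reflection."""
    mu = n1 / n2
    cos_i = -(d[0]*n[0] + d[1]*n[1] + d[2]*n[2])
    sin2_t = mu * mu * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                      # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    k = mu * cos_i - cos_t
    return tuple(mu * di + k * ni for di, ni in zip(d, n))

# air-to-water ray at 30 degrees incidence onto a horizontal interface
d_in = (math.sin(math.radians(30)), 0.0, -math.cos(math.radians(30)))
t = refract(d_in, (0.0, 0.0, 1.0), 1.0, 1.333)
print(round(math.degrees(math.asin(t[0])), 2))
```

In a multi-interface camera-glass-water setup, this refraction step is applied once per interface along the ray, which is exactly what makes the collinearity model non-trivial under water.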
Mapping water table depth using geophysical and environmental variables.
Buchanan, S; Triantafilis, J
2009-01-01
Despite its importance, accurate representation of the spatial distribution of water table depth remains one of the greatest deficiencies in many hydrological investigations. Historically, both inverse distance weighting (IDW) and ordinary kriging (OK) have been used to interpolate depths. These methods, however, have major limitations: they require large numbers of measurements to represent the spatial variability of water table depth, and they do not represent the variation between measurement points. We address this issue by assessing the benefits of using stepwise multiple linear regression (MLR) with three different ancillary data sets to predict the water table depth at 100-m intervals. The ancillary data sets used are electromagnetic (EM34 and EM38); gamma radiometric, comprising potassium (K), uranium (eU), thorium (eTh) and total count (TC); and morphometric data. Results show that MLR offers significant precision and accuracy benefits over OK and IDW. Inclusion of the morphometric data set yielded the greatest improvement (16%) in prediction accuracy compared with IDW, followed by the electromagnetic data set (5%). Use of the gamma radiometric data set showed no improvement. The greatest improvement, however, resulted when all data sets were combined (a 37% increase in prediction accuracy over IDW). Significantly, the use of MLR also allows for the prediction of variations in water table depth between measurement points, which is crucial for land management.
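The IDW baseline against which MLR is compared can be sketched as follows; the well locations and depths are toy values:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse distance weighted interpolation of water table depth."""
    xy_known = np.asarray(xy_known, float)
    z_known = np.asarray(z_known, float)
    out = []
    for q in np.asarray(xy_query, float):
        d = np.hypot(*(xy_known - q).T)
        if np.any(d == 0):                 # query coincides with a well
            out.append(z_known[d == 0][0])
            continue
        w = 1.0 / d ** power
        out.append((w * z_known).sum() / w.sum())
    return np.array(out)

wells = [(0, 0), (100, 0), (0, 100)]
depths = [2.0, 6.0, 4.0]
z = idw(wells, depths, [(0, 0), (50, 50)])
print(z.round(2))
```

As the abstract notes, such a distance-weighted average can only smooth between wells; it cannot exploit ancillary covariates the way the regression approach does.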
Next-generation pushbroom filter radiometers for remote sensing
NASA Astrophysics Data System (ADS)
Tarde, Richard W.; Dittman, Michael G.; Kvaran, Geir E.
2012-09-01
Individual focal plane size, yield, and quality continue to improve, as does the technology required to combine these into large tiled formats. As a result, next-generation pushbroom imagers are replacing traditional scanning technologies in remote sensing applications. The pushbroom architecture has inherently better radiometric sensitivity and significantly reduced payload mass, power, and volume compared with previous-generation scanning technologies. However, the architecture creates challenges in achieving the required radiometric accuracy. Achieving good radiometric accuracy, including image spectral and spatial uniformity, requires creative optical design, high-quality focal planes and filters, careful consideration of on-board calibration sources, and state-of-the-art ground test facilities. Ball Aerospace built the Landsat Data Continuity Mission (LDCM) next-generation Operational Land Imager (OLI) payload. Scheduled to launch in 2013, OLI provides imagery consistent with the historical Landsat spectral, spatial, radiometric, and geometric data record and completes the generational technology upgrade from the Enhanced Thematic Mapper Plus (ETM+) whiskbroom technology to modern pushbroom technology afforded by advanced focal planes. We explain how Ball's capabilities allowed the production of the innovative next-generation OLI pushbroom filter radiometer, which meets challenging radiometric accuracy and calibration requirements. OLI will extend the multi-decadal land surface observation dataset dating back to the 1972 launch of ERTS-1 (Landsat 1).
Contemporary NMR Studies of Protein Electrostatics.
Hass, Mathias A S; Mulder, Frans A A
2015-01-01
Electrostatics play an important role in many aspects of protein chemistry. However, the accurate determination of side chain proton affinity in proteins by experiment and theory remains challenging. In recent years the field of nuclear magnetic resonance spectroscopy has advanced the way that protonation states are measured, allowing researchers to examine electrostatic interactions at an unprecedented level of detail and accuracy. Experiments are now in place that follow pH-dependent (13)C and (15)N chemical shifts as spatially close as possible to the sites of protonation, allowing all titratable amino acid side chains to be probed sequence specifically. The strong and telling response of carefully selected reporter nuclei allows individual titration events to be monitored. At the same time, improved frameworks allow researchers to model multiple coupled protonation equilibria and to identify the underlying pH-dependent contributions to the chemical shifts.
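The pH-dependent chemical shift of a single titrating site in fast exchange follows a Henderson-Hasselbalch form; a sketch of recovering a pKa from synthetic shift data (the limit shifts, noise level and search grid are assumed illustrative values):

```python
import numpy as np

def shift(pH, pKa, d_prot, d_deprot):
    """Fast-exchange chemical shift of a single titrating site:
    population-weighted average of the protonated and deprotonated
    limit shifts (Henderson-Hasselbalch)."""
    f_prot = 1.0 / (1.0 + 10.0 ** (pH - pKa))
    return f_prot * d_prot + (1 - f_prot) * d_deprot

# recover an assumed pKa of 4.0 from noisy synthetic 13C shifts
rng = np.random.default_rng(0)
pH = np.linspace(2.0, 7.0, 21)
obs = shift(pH, 4.0, 179.0, 175.5) + rng.normal(0, 0.02, pH.size)
grid = np.arange(3.0, 5.0, 0.01)
sse = [((obs - shift(pH, k, 179.0, 175.5)) ** 2).sum() for k in grid]
print(round(grid[int(np.argmin(sse))], 2))
```

Real systems with multiple coupled protonation equilibria need the multi-site models the abstract mentions, for which this single-site curve is only the building block.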
Phase unwrapping with a virtual Hartmann-Shack wavefront sensor.
Akondi, Vyas; Falldorf, Claas; Marcos, Susana; Vohnsen, Brian
2015-10-05
The use of a spatial light modulator for implementing a digital phase-shifting (PS) point diffraction interferometer (PDI) allows tunability of fringe spacing and achieves PS without the need for mechanically moving parts. However, a small amount of detector or scatter noise can affect the accuracy of wavefront sensing. Here, a novel method of wavefront reconstruction incorporating a virtual Hartmann-Shack (HS) wavefront sensor is proposed that allows easy tuning of several wavefront sensor parameters. The proposed method was tested and compared with a Fourier unwrapping method implemented on a digital PS PDI. Rewrapping the Fourier-reconstructed wavefronts resulted in phase maps that matched the original wrapped phase well, and the performance was found to be more stable and accurate than that of conventional methods. Through simulation studies, the superiority of the proposed virtual HS phase unwrapping method over the Fourier unwrapping method in the presence of noise is shown. Further, combining the two methods could improve accuracy when the signal-to-noise ratio is sufficiently high.
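The unwrapping problem both methods address can be illustrated in 1-D with the standard itemwise unwrapper; the smooth phase ramp below is a toy case (the paper's methods target noisy 2-D maps, where unwrapping is far harder):

```python
import numpy as np

# A smooth phase ramp is wrapped into (-pi, pi] by the interferometric
# measurement and then recovered by 1-D unwrapping.
true_phase = np.linspace(0.0, 6.0 * np.pi, 200)
wrapped = np.angle(np.exp(1j * true_phase))
recovered = np.unwrap(wrapped)
print(np.allclose(recovered, true_phase))
```

Simple unwrapping like this fails once noise introduces spurious 2π jumps, which is precisely the regime where the virtual HS approach is reported to help.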
Processing the image gradient field using a topographic primal sketch approach.
Gambaruto, A M
2015-03-01
The spatial derivatives of the image intensity provide topographic information that may be used to identify and segment objects. The accurate computation of these derivatives is often hampered in medical images by the presence of noise and limited resolution. This paper focuses on the accurate computation of spatial derivatives and their subsequent use to process an image gradient field directly, from which an image with improved characteristics can be reconstructed. The improvements include noise reduction, contrast enhancement, thinning of object contours and the preservation of edges. Processing the gradient field directly, instead of the image, is shown to have numerous benefits. The approach is developed such that the steps are modular, allowing the overall method to be improved and possibly tailored to different applications. As presented, the approach relies on a topographic representation and primal sketch of an image. Comparisons with existing image processing methods on a synthetic image and different medical images show improved results and accuracy in segmentation. Here, the focus is on objects with low spatial resolution, which is often the case in medical images. The methods developed show the importance of improved accuracy in derivative calculation and the potential in processing the image gradient field directly. Copyright © 2015 John Wiley & Sons, Ltd.
Testing the accuracy of clustering redshifts with simulations
NASA Astrophysics Data System (ADS)
Scottez, V.; Benoit-Lévy, A.; Coupon, J.; Ilbert, O.; Mellier, Y.
2018-03-01
We explore the accuracy of clustering-based redshift inference within the MICE2 simulation. This method uses the spatial clustering of galaxies between a spectroscopic reference sample and an unknown sample. This study gives an estimate of the reachable accuracy of this method. First, we discuss the requirements for the number of objects in the two samples, confirming that this method does not require a representative spectroscopic sample for calibration. In the context of the next generation of cosmological surveys, we estimated that the density of the Quasi Stellar Objects in BOSS allows us to reach 0.2 per cent accuracy in the mean redshift. Second, we estimate individual redshifts for galaxies in the densest regions of colour space ( ˜ 30 per cent of the galaxies) without using the photometric redshift procedure. The advantage of this procedure is threefold. It allows: (i) the use of cluster-zs for any field in astronomy, (ii) the possibility to combine photo-zs and cluster-zs to obtain an improved redshift estimate, (iii) the use of cluster-zs to define tomographic bins for weak lensing. Finally, we explore this last option and build five cluster-z selected tomographic bins from redshift 0.2 to 1. We found a bias on the mean redshift estimate of 0.002 per bin. We conclude that cluster-zs could be used as a primary redshift estimator by the next generation of cosmological surveys.
Performance of the Multi-Radar Multi-Sensor System over the Lower Colorado River, Texas
NASA Astrophysics Data System (ADS)
Bayabil, H. K.; Sharif, H. O.; Fares, A.; Awal, R.; Risch, E.
2017-12-01
Recently observed increases in the intensity and frequency of climate extremes (e.g., floods, dam failures, and overtopping of river banks) necessitate the development of effective disaster prevention and mitigation strategies. Hydrologic models can be useful tools in predicting such events at different spatial and temporal scales. However, the accuracy and prediction capability of such models are often constrained by the availability of high-quality, representative hydro-meteorological data (e.g., precipitation) required to calibrate and validate them. Improved technologies and products, such as the Multi-Radar Multi-Sensor (MRMS) system, which allows the gathering and transmission of vast amounts of meteorological data, have been developed to meet these data needs. While MRMS data are available at high spatial and temporal resolutions (1 km and 15 min, respectively), their accuracy in estimating precipitation has yet to be fully investigated. Therefore, the main objective of this study is to evaluate the performance of the MRMS system in capturing precipitation over the Lower Colorado River, Texas, using observations from a dense rain gauge network. In addition, the effects of spatial and temporal aggregation scales on the performance of the MRMS system were evaluated. Point-scale comparisons were made at 215 gauging locations using rain gauge and MRMS data from May 2015. Moreover, the effects of temporal (30, 45, 60, 75, 90, 105, and 120 min) and spatial (4 to 50 km) aggregation scales on the performance of the MRMS system were tested. Overall, the MRMS system (at 15 min temporal resolution) captured precipitation reasonably well, with an average R2 value of 0.65 and RMSE of 0.5 mm. In addition, spatial and temporal aggregation resulted in increased R2 values. However, a reduction in RMSE was achieved only with an increase in spatial aggregation.
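The two point-scale skill metrics reported above, R2 and RMSE, can be reproduced in a few lines. The sketch below is a minimal illustration, not the study's code; the gauge and MRMS accumulation values are hypothetical.

```python
import math

def rmse(obs, est):
    """Root-mean-square error between gauge observations and radar estimates."""
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)) / len(obs))

def r_squared(obs, est):
    """Coefficient of determination of the estimates against the observations."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - e) ** 2 for o, e in zip(obs, est))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Hypothetical 15-min gauge vs MRMS accumulations (mm) at one location
gauge = [0.0, 1.2, 3.5, 0.8, 2.0]
mrms  = [0.1, 1.0, 3.0, 1.0, 2.2]
print(round(rmse(gauge, mrms), 3))       # mm, same units as the inputs
print(round(r_squared(gauge, mrms), 3))
```

In a real comparison these functions would be applied per gauge location and then averaged, as in the 215-station analysis described above.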
NASA Technical Reports Server (NTRS)
Nalepka, R. F. (Principal Investigator); Sadowski, F. E.; Sarno, J. E.
1976-01-01
The author has identified the following significant results. A supervised classification within two separate ground areas of the Sam Houston National Forest was carried out for MSS data with a spatial resolution of two square meters. The data were progressively coarsened to simulate five additional spatial resolutions ranging up to 64 square meters. Similar processing and analysis of all spatial resolutions enabled evaluation of the effect of spatial resolution on classification accuracy at various levels of detail, and of the effects on area proportion estimation for very general forest features. For the very coarse resolutions, a subset of spectral channels simulating the proposed Thematic Mapper channels was used to study classification accuracy.
Cui, Jiwen; Zhao, Shiyuan; Yang, Di; Ding, Zhenyang
2018-02-20
We use a spectrum interpolation technique to improve the distributed strain measurement accuracy in a Rayleigh-scatter-based optical frequency domain reflectometry (OFDR) sensing system. We demonstrate that strain accuracy is not limited by the "uncertainty principle" of time-frequency analysis. Different interpolation methods are investigated and used to improve the accuracy of the peak position of the cross-correlation and, therefore, the accuracy of the strain. Interpolation implemented by padding zeros on one side of the windowed data in the spatial domain, before the inverse fast Fourier transform, is found to have the best accuracy. Using this method, the strain accuracy and resolution are both improved without decreasing the spatial resolution. A strain of 3 μϵ within the spatial resolution of 1 cm at a position of 21.4 m is distinguished, and the measurement uncertainty is 3.3 μϵ.
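The underlying principle, zero-padding in one domain to sample the transform domain more finely, can be illustrated with a toy sinusoid whose frequency falls between DFT bins. This is a minimal sketch of the interpolation idea only, not the OFDR processing chain; the signal parameters and the naive O(N^2) DFT are illustrative assumptions.

```python
import cmath
import math

def dft_mag(x):
    """Naive DFT magnitude spectrum (O(N^2)); fine for a small demo."""
    n = len(x)
    return [abs(sum(x[k] * cmath.exp(-2j * math.pi * j * k / n)
                    for k in range(n)))
            for j in range(n)]

# A sinusoid whose true frequency (5.3 cycles per record) lies between bins.
n, f_true = 32, 5.3
x = [math.cos(2 * math.pi * f_true * k / n) for k in range(n)]

# Coarse spectrum: the peak snaps to the nearest integer bin.
coarse = dft_mag(x)[: n // 2]
peak_coarse = coarse.index(max(coarse))        # bin index = frequency estimate

# Zero-pad 8x on one side before the transform: the same spectrum is sampled
# 8x more finely, so the located peak lands much closer to 5.3.
pad = 8
padded = x + [0.0] * (n * (pad - 1))
fine = dft_mag(padded)[: n * pad // 2]
peak_fine = fine.index(max(fine)) / pad        # rescale to original bin units
print(peak_coarse, peak_fine)
```

The same mechanism sharpens the cross-correlation peak location in the paper's strain retrieval: padding adds no new information, but it removes the quantization of the peak position to the coarse grid.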
Wong, Wang I
2017-06-01
Spatial abilities are pertinent to mathematical competence, but evidence of the space-math link has largely been confined to older samples and intrinsic spatial abilities (e.g., mental transformation). The roles of gender and affective factors are also unclear. This study examined the correlations between counting ability, mental transformation, and targeting accuracy in 182 Hong Kong preschoolers, and whether these relationships were weaker at higher spatial anxiety levels. Both spatial abilities correlated with counting similarly for boys and girls. Targeting accuracy also mediated the male advantage in counting. Interestingly, spatial anxiety moderated the space-math links, but differently for boys and girls. For boys, spatial abilities were irrelevant to counting at high anxiety levels; for girls, the effect of anxiety on the space-math link is less clear. Results extend the evidence base of the space-math link to include an extrinsic spatial ability (targeting accuracy) and have implications for intervention programmes. Statement of contribution What is already known on this subject? Much evidence of a space-math link in adolescent and adult samples and for intrinsic spatial abilities. What does this study add? Extended the space-math link to include both intrinsic and extrinsic spatial abilities in a preschool sample. Showed how spatial anxiety moderated the space-math link differently for boys and girls. © 2016 The British Psychological Society.
Magpies can use local cues to retrieve their food caches.
Feenders, Gesa; Smulders, Tom V
2011-03-01
Much importance has been placed on the use of spatial cues by food-hoarding birds in the retrieval of their caches. In this study, we investigate whether food-hoarding birds can be trained to use local cues ("beacons") in their cache retrieval. We test magpies (Pica pica) in an active hoarding-retrieval paradigm, where local cues are always reliable, while spatial cues are not. Our results show that the birds use the local cues to retrieve their caches, even when occasionally contradicting spatial information is available. The design of our study does not allow us to test rigorously whether the birds prefer using local over spatial cues, nor to investigate the process through which they learn to use local cues. We furthermore provide evidence that magpies develop landmark preferences, which improve their retrieval accuracy. Our findings support the hypothesis that birds are flexible in their use of memory information, using a combination of the most reliable or salient information to retrieve their caches. © Springer-Verlag 2010
Mesoscopic-microscopic spatial stochastic simulation with automatic system partitioning.
Hellander, Stefan; Hellander, Andreas; Petzold, Linda
2017-12-21
The reaction-diffusion master equation (RDME) is a model that allows for efficient on-lattice simulation of spatially resolved stochastic chemical kinetics. Compared to off-lattice hard-sphere simulations with Brownian dynamics or Green's function reaction dynamics, the RDME can be orders of magnitude faster if the lattice spacing can be chosen coarse enough. However, strongly diffusion-controlled reactions mandate a very fine mesh resolution for acceptable accuracy. It is common that reactions in the same model differ in their degree of diffusion control and therefore require different degrees of mesh resolution. This renders mesoscopic simulation inefficient for systems with multiscale properties. Mesoscopic-microscopic hybrid methods address this problem by resolving the most challenging reactions with a microscale, off-lattice simulation. However, all methods to date require manual partitioning of a system, effectively limiting their usefulness as "black-box" simulation codes. In this paper, we propose a hybrid simulation algorithm with automatic system partitioning based on indirect a priori error estimates. We demonstrate the accuracy and efficiency of the method on models of diffusion-controlled networks in 3D.
Multiscale modeling of metabolism, flows, and exchanges in heterogeneous organs
Bassingthwaighte, James B.; Raymond, Gary M.; Butterworth, Erik; Alessio, Adam; Caldwell, James H.
2010-01-01
Large-scale models accounting for the processes supporting metabolism and function in an organ or tissue with a marked heterogeneity of flows and metabolic rates are computationally complex and tedious to compute. Their use in the analysis of data from positron emission tomography (PET) and magnetic resonance imaging (MRI) requires model reduction, since the data are composed of concentration–time curves from hundreds of regions of interest (ROI) within the organ. Within each ROI, one must account for blood flow, intracapillary gradients in concentrations, transmembrane transport, and intracellular reactions. Using modular design, we configured a whole-organ model, GENTEX, to allow adaptive usage for multiple reacting molecular species while omitting computation of unused components. The temporal and spatial resolution and the number of species are adaptable, and the numerical accuracy and computational speed are adjustable during optimization runs, which increases accuracy and spatial resolution as convergence is approached. An application to the interpretation of PET image sequences after intravenous injection of 13NH3 provides functional image maps of regional myocardial blood flows. PMID:20201893
Utilizing Weather RADAR for Rapid Location of Meteorite Falls and Space Debris Re-Entry
NASA Technical Reports Server (NTRS)
Fries, Marc D.
2016-01-01
This activity utilizes existing NOAA weather RADAR imagery to locate meteorite falls and space debris falls. The near-real-time availability and spatial accuracy of these data allow rapid recovery of material from both meteorite falls and space debris re-entry events. To date, at least 22 meteorite fall recoveries have benefitted from RADAR detection and fall modeling, and multiple debris re-entry events over the United States have been observed in unprecedented detail.
Electroinduction disk sensor of electric field strength
NASA Astrophysics Data System (ADS)
Biryukov, S. V.; Korolyova, M. A.
2018-01-01
Measuring the exposure of technical and biological objects to electric fields has long been an urgent task. Solving it requires electric field sensors with specified metrological characteristics. The aim of the study is to establish the theoretical basis for the calculation of flat electric field sensors. It is proved that the error of the sensor does not exceed 3% in the spatial range 0
Jeff Jenness; J. Judson Wynne
2005-01-01
In the field of spatially explicit modeling, well-developed accuracy assessment methodologies are often poorly applied. Deriving model accuracy metrics has been possible for decades, but these calculations were made by hand or with the use of a spreadsheet application. Accuracy assessments may be useful for: (1) ascertaining the quality of a model; (2) improving model...
H. Todd Mowrer; Raymond L. Czaplewski; R. H. Hamre
1996-01-01
This international symposium on theory and techniques for assessing the accuracy of spatial data and spatial analyses included more than ninety presentations by representatives from government, academic, and private institutions in over twenty countries throughout the world. To encourage interactions across disciplines, presentations in the general subject areas of...
Indirect monitoring shot-to-shot shock waves strength reproducibility during pump–probe experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pikuz, T. A., E-mail: tatiana.pikuz@eie.eng.osaka-u.ac.jp; Photon Pioneers Center, Osaka University, Suita, Osaka 565-0871 Japan; Joint Institute for High Temperatures, Russian Academy of Sciences, Moscow 125412
We present an indirect method of estimating the strength of a shock wave, allowing online monitoring of its reproducibility in each laser shot. This method is based on a shot-to-shot measurement of the X-ray emission from the ablated plasma by a high-resolution, spatially resolved focusing spectrometer. An optical pump laser with an energy of 1.0 J and a pulse duration of ∼660 ps was used to irradiate solid targets or foils of various thicknesses containing Oxygen, Aluminum, Iron, and Tantalum. The high sensitivity and resolving power of the X-ray spectrometer allowed spectra to be obtained on each laser shot and the fluctuations of the spectral intensity emitted by different plasmas to be controlled with an accuracy of ∼2%, implying an accuracy in the derived electron plasma temperature of 5%–10% in pump–probe high energy density science experiments. At nano- and sub-nanosecond laser pulse durations with relatively low laser intensities and a ratio Z/A ∼ 0.5, the electron temperature follows T_e ∼ I_las^(2/3). Thus, measurements of the electron plasma temperature allow indirect estimation of the laser flux on the target and control of its shot-to-shot fluctuation. Knowing the laser flux intensity and its fluctuation gives us the possibility of monitoring the shot-to-shot reproducibility of shock wave strength generation with high accuracy.
NASA Astrophysics Data System (ADS)
Bratic, G.; Brovelli, M. A.; Molinari, M. E.
2018-04-01
The availability of thematic maps has significantly increased over the last few years. Validation of these maps is a key factor in assessing their suitability for different applications. The evaluation of the accuracy of classified data is carried out through a comparison with a reference dataset and the generation of a confusion matrix, from which many quality indexes can be derived. In this work, an ad hoc free and open source Python tool was implemented to automatically compute all the confusion matrix-derived accuracy indexes proposed in the literature. The tool was integrated into the GRASS GIS environment and successfully applied to evaluate the quality of three high-resolution global datasets (GlobeLand30, Global Urban Footprint, Global Human Settlement Layer Built-Up Grid) in the Lombardy Region area (Italy). In addition to the most commonly used accuracy measures, e.g. overall accuracy and Kappa, the tool allowed the computation and investigation of less known indexes such as the Ground Truth and the Classification Success Index. The promising tool will be further extended with spatial autocorrelation analysis functions and made available to the researcher and user community.
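The two most common indexes mentioned, overall accuracy and Cohen's Kappa, follow directly from the confusion matrix. The sketch below is a minimal illustration and not the tool described above; the 3-class matrix values are hypothetical (rows = classified, columns = reference).

```python
def overall_accuracy(cm):
    """Overall accuracy: correctly classified / total, from a square confusion matrix."""
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

def cohen_kappa(cm):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    total = sum(sum(row) for row in cm)
    p_obs = sum(cm[i][i] for i in range(len(cm))) / total
    p_exp = sum((sum(cm[i]) / total) * (sum(row[i] for row in cm) / total)
                for i in range(len(cm)))
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical confusion matrix: rows = classified, columns = reference
cm = [[50,  3,  2],
      [ 4, 40,  6],
      [ 1,  2, 42]]
print(round(overall_accuracy(cm), 3), round(cohen_kappa(cm), 3))
```

Kappa is always at or below overall accuracy, since it discounts the agreement expected by chance from the row and column marginals.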
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Gottlieb, David; Abarbanel, Saul; Don, Wai-Sun
1993-01-01
The conventional method of imposing time-dependent boundary conditions for Runge-Kutta (RK) time advancement reduces the formal accuracy of the space-time method to first order locally, and second order globally, independently of the spatial operator. This counterintuitive result is analyzed in this paper. Two methods of eliminating this problem are proposed for the linear constant-coefficient case: (1) impose the exact boundary condition only at the end of the complete RK cycle; (2) impose consistent intermediate boundary conditions derived from the physical boundary condition and its derivatives. The first method, while retaining the RK accuracy in all cases, results in a scheme with a much reduced CFL condition, rendering the RK scheme less attractive. The second method retains the same allowable time step as the periodic problem. However, it is a general remedy only for the linear case. For non-linear hyperbolic equations the second method is effective only for RK schemes of third order accuracy or less. Numerical studies are presented to verify the efficacy of each approach.
Stehman, S.V.; Wickham, J.D.; Wade, T.G.; Smith, J.H.
2008-01-01
The database design and diverse application of NLCD 2001 pose significant challenges for accuracy assessment because numerous objectives are of interest, including accuracy of land-cover, percent urban imperviousness, percent tree canopy, land-cover composition, and net change. A multi-support approach is needed because these objectives require spatial units of different sizes for reference data collection and analysis. Determining a sampling design that meets the full suite of desirable objectives for the NLCD 2001 accuracy assessment requires reconciling potentially conflicting design features that arise from targeting the different objectives. Multi-stage cluster sampling provides the general structure to achieve a multi-support assessment, and the flexibility to target different objectives at different stages of the design. We describe the implementation of two-stage cluster sampling for the initial phase of the NLCD 2001 assessment, and identify gaps in existing knowledge where research is needed to allow full implementation of a multi-objective, multi-support assessment. © 2008 American Society for Photogrammetry and Remote Sensing.
RapidEye constellation relative radiometric accuracy measurement using lunar images
NASA Astrophysics Data System (ADS)
Steyn, Joe; Tyc, George; Beckett, Keith; Hashida, Yoshi
2009-09-01
The RapidEye constellation includes five identical satellites in Low Earth Orbit (LEO). Each satellite has a 5-band (blue, green, red, red-edge and near infrared (NIR)) multispectral imager at 6.5 m GSD. A three-axis attitude control system allows pointing the imager of each satellite at the Moon during lunations. It is therefore possible to image the Moon from near-identical viewing geometry within a span of 80 minutes with each one of the imagers. Comparing the radiometrically corrected images obtained from each band and each satellite allows a near-instantaneous relative radiometric accuracy measurement and determination of relative gain changes between the five imagers. A more traditional terrestrial vicarious radiometric calibration program has also been completed by MDA on RapidEye. The two components of this program provide for spatial radiometric calibration, ensuring that the detector-to-detector response remains flat, while a temporal radiometric calibration approach has accumulated images of specific dry desert calibration sites. These images are used to measure the constellation's relative radiometric response and make on-ground gain and offset adjustments in order to maintain the relative accuracy of the constellation within +/-2.5%. A quantitative comparison between the gain changes measured by the lunar method and the terrestrial temporal radiometric calibration method is performed and will be presented.
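In its simplest form, the lunar cross-comparison reduces to a ratio of band-averaged signals from two imagers viewing the Moon under near-identical geometry. The sketch below illustrates that idea only; the `relative_gain` helper and the radiance values are hypothetical, and the real method must additionally correct for residual differences in viewing geometry and libration.

```python
def relative_gain(band_a, band_b):
    """Ratio of mean radiometrically corrected signals of the same spectral band
    from two imagers that viewed the Moon with near-identical geometry."""
    mean = lambda values: sum(values) / len(values)
    return mean(band_a) / mean(band_b)

# Hypothetical mean lunar-disk signals (arbitrary units) for one band,
# three lunar acquisitions per satellite
sat1 = [102.0, 99.5, 101.0]
sat2 = [100.0, 98.0, 100.5]

g = relative_gain(sat1, sat2)
within_spec = abs(g - 1.0) <= 0.025   # constellation target: +/-2.5%
print(round(g, 4), within_spec)
```

A ratio drifting away from 1.0 over successive lunations would indicate a relative gain change between the two imagers, which the on-ground gain adjustments described above are meant to remove.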
Regulations in the field of Geo-Information
NASA Astrophysics Data System (ADS)
Felus, Y.; Keinan, E.; Regev, R.
2013-10-01
The geomatics profession has gone through a major revolution during the last two decades with the emergence of advanced GNSS, GIS and Remote Sensing technologies. These technologies have changed the core principles and working procedures of geomatics professionals. For this reason, surveying and mapping regulations, standards and specifications should be updated to reflect these changes. In Israel, the "Survey Regulations" is the principal document that regulates professional activities in four key areas: geodetic control, mapping, cadastre and Geographic Information Systems. Licensed surveyors and mapping professionals in Israel are required to work according to those regulations. This year a new set of regulations has been published and includes a few major amendments, as follows: In the Geodesy chapter, horizontal control is officially based on the Israeli network of Continuously Operating GNSS Reference Stations (CORS). The regulations were phrased in a manner that will allow minor datum changes to the CORS stations due to Earth crustal movements. Moreover, the regulations permit the use of GNSS for low-accuracy height measurements. In the Cadastre chapter, the most critical change is the move to Coordinate Based Cadastre (CBC). Each parcel corner point is ranked according to its quality (accuracy and clarity of definition). The highest ranking for a parcel corner is 1. A point with a rank of 1 is defined by its coordinates alone. Any other contradicting evidence is inferior to the coordinate values. Cadastral information is stored and managed via the National Cadastral Databases. In the Mapping and GIS chapter, the traditional paper maps (ranked by scale) are replaced by digital maps or spatial databases. These spatial databases are ranked by their quality level. Quality level is determined (similar to the ISO 19157 Standard) by logical consistency, completeness, positional accuracy, attribute accuracy, temporal accuracy and usability.
Metadata is another critical component of any spatial database. Every component in a map should have a metadata identification, even if the map was compiled from multiple resources. The regulations permit the use of advanced sensors and mapping techniques, including LIDAR and digital cameras, that have been certified and meet the defined criteria. The article reviews these new regulations and the decisions that led to them.
NASA Astrophysics Data System (ADS)
Rupasinghe, P. A.; Markle, C. E.; Marcaccio, J. V.; Chow-Fraser, P.
2017-12-01
Phragmites australis (European common reed) is a relatively recent invader of wetlands and beaches in Ontario. It can establish large homogeneous stands within wetlands and disperse widely throughout the landscape by wind and vehicular traffic. A first step in managing this invasive species is accurate mapping and quantification of its distribution. This is challenging because Phragmites is distributed over a large spatial extent, which makes mapping more costly and time consuming. Here, we used freely available multispectral satellite images taken monthly (cloud-free images as available) over the calendar year to determine the optimum phenological state of Phragmites that would allow it to be accurately identified using remote sensing data. We analyzed time-series Landsat-8 OLI and Sentinel-2 images for Big Creek Wildlife Area, ON using image classification (Support Vector Machines), the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Water Index (NDWI). We used field sampling data and high-resolution imagery collected using an Unmanned Aerial Vehicle (UAV; 8 cm spatial resolution) as training data and for the validation of the classified images. The accuracy for all land cover classes and for Phragmites alone was low at both the start and end of the calendar year, but reached overall accuracy >85% by mid to late summer. The highest classification accuracies for Landsat-8 OLI were associated with late July and early August imagery. We observed similar trends using the Sentinel-2 images, with higher overall accuracy for all land cover classes and for Phragmites alone from late July to late September. During this period, we found the greatest difference between Phragmites and Typha, two commonly confused classes, with respect to near-infrared and shortwave infrared reflectance. 
Therefore, the unique spectral signature of Phragmites can be attributed to both the level of greenness and factors related to water content in the leaves during late summer. Landsat-8 OLI or Sentinel-2 images acquired in late summer can be used as a cost effective approach to mapping Phragmites at a large spatial scale without sacrificing accuracy.
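The two indices used here, NDVI and NDWI, are simple normalized band differences. A minimal sketch with hypothetical reflectance values follows; the NDWI shown is the McFeeters (green/NIR) formulation, which the abstract does not specify.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: high for dense green vegetation."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters): positive over open water."""
    return (green - nir) / (green + nir)

# Hypothetical late-summer surface reflectances (fractions)
phragmites = {"green": 0.08, "red": 0.05, "nir": 0.45}
water      = {"green": 0.06, "red": 0.04, "nir": 0.02}

print(round(ndvi(phragmites["nir"], phragmites["red"]), 3))  # dense canopy -> high NDVI
print(round(ndwi(water["green"], water["nir"]), 3))          # open water   -> positive NDWI
```

The late-summer separability described above corresponds to Phragmites sitting high on NDVI while differing from Typha and water in the NIR/SWIR-driven water-content signal.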
Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E
2017-04-15
Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting within-scan brain networks for different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed 4 variations of ICA (p<0.001) for predicting the task being performed within each scan using artifact-cleaned components. The NMF algorithms, which suppressed negative BOLD signal, had poorer accuracy than the ICA and sparse coding algorithms. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy (p<0.001). Lower classification accuracy occurred when the extracted spatial maps contained more CSF regions (p<0.001). The success of sparse coding algorithms suggests that algorithms which enforce sparsity, discourage multitasking, and promote local specialization may better capture the underlying source processes than those which allow inexhaustible local processes, such as ICA. Negative BOLD signal may capture task-related activations. 
Copyright © 2017 Elsevier B.V. All rights reserved.
On the Accuracy Potential in Underwater/Multimedia Photogrammetry
Maas, Hans-Gerd
2015-01-01
Underwater applications of photogrammetric measurement techniques usually need to deal with multimedia photogrammetry aspects, which are characterized by the necessity of handling optical rays that are refracted at interfaces between optical media with different refractive indices according to Snell’s Law. This so-called multimedia geometry has to be incorporated into geometric models in order to achieve correct measurement results. The paper shows a flexible yet strict geometric model for the handling of refraction effects on the optical path, which can be implemented as a module into photogrammetric standard tools such as spatial resection, spatial intersection, bundle adjustment or epipolar line computation. The module is especially well suited for applications where an object in water is observed by cameras in air through one or more planar glass interfaces, as it allows for some simplifications here. In the second part of the paper, several aspects relevant to an assessment of the accuracy potential in underwater/multimedia photogrammetry are discussed. These aspects include network geometry and interface planarity issues as well as effects caused by refractive index variations and dispersion and diffusion under water. All these factors contribute to a rather significant degradation of the geometric accuracy potential in underwater/multimedia photogrammetry. In practical experiments, a degradation of the quality of results by a factor of two could be determined under relatively favorable conditions. PMID:26213942
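The refraction handling such a model builds on is Snell's law applied at each interface. The sketch below is a minimal geometric illustration, not the authors' module; the refractive indices and the incidence angle are hypothetical. For parallel planar interfaces, the glass layer's index drops out of the final ray direction (although it still offsets the ray laterally).

```python
import math

def refract_angle(theta_inc_deg, n1, n2):
    """Refraction angle (degrees) from Snell's law: n1*sin(t1) = n2*sin(t2)."""
    s = n1 * math.sin(math.radians(theta_inc_deg)) / n2
    if abs(s) > 1.0:
        raise ValueError("total internal reflection")
    return math.degrees(math.asin(s))

# Camera in air viewing through a planar glass port into water
n_air, n_glass, n_water = 1.000, 1.500, 1.333

t_glass = refract_angle(30.0, n_air, n_glass)     # air -> glass
t_water = refract_angle(t_glass, n_glass, n_water)  # glass -> water

# Planar-parallel property: the direction in water is the same as if the ray
# had passed straight from air into water.
direct = refract_angle(30.0, n_air, n_water)
print(round(t_glass, 2), round(t_water, 2), round(direct, 2))
```

This direction-only equivalence is one of the simplifications the planar-interface case allows; a full model still has to trace the lateral ray offset through the glass.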
Combination probes for stagnation pressure and temperature measurements in gas turbine engines
NASA Astrophysics Data System (ADS)
Bonham, C.; Thorpe, S. J.; Erlund, M. N.; Stevenson, R. J.
2018-01-01
During gas turbine engine testing, steady-state gas-path stagnation pressures and temperatures are measured in order to calculate the efficiencies of the main components of turbomachinery. These measurements are acquired using fixed intrusive probes, which are installed at the inlet and outlet of each component at discrete point locations across the gas-path. The overall uncertainty in calculated component efficiency is sensitive to the accuracy of discrete point pressures and temperatures, as well as the spatial sampling across the gas-path. Both of these aspects of the measurement system must be considered if more accurate component efficiencies are to be determined. High accuracy has become increasingly important as engine manufacturers have begun to pursue small gains in component performance, which require efficiencies to be resolved to within less than ±1%. This article reports on three new probe designs that have been developed in response to this demand. The probes adopt a compact combination arrangement that facilitates up to twice the spatial coverage compared to individual stagnation pressure and temperature probes. The probes also utilise novel temperature sensors and high recovery factor shield designs that facilitate improvements in point measurement accuracy compared to standard Kiel probes used in engine testing. These changes allow efficiencies to be resolved within ±1% over a wider range of conditions than is currently achievable with Kiel probes.
ERIC Educational Resources Information Center
Tretter, Thomas R.; Jones, M. Gail; Minogue, James
2006-01-01
The use of unifying themes that span the various branches of science is recommended to enhance curricular coherence in science instruction. Conceptions of spatial scale are one such unifying theme. This research explored the accuracy of spatial scale conceptions of science phenomena across a spectrum of 215 participants: fifth grade, seventh…
Are You Sure the Library Is That Way? Metacognitive Monitoring of Spatial Judgments
ERIC Educational Resources Information Center
Stevens, Christopher A.; Carlson, Richard A.
2016-01-01
Many studies have examined how people recall the locations of objects in spatial layouts. However, little is known about how people monitor the accuracy of judgments based on those memories. The goal of the present experiments was to examine the effect of reference frame characteristics on metacognitive accuracy for spatial judgments. Reference…
Spatial and thematic assessment of object-based forest stand delineation using an OFA-matrix
NASA Astrophysics Data System (ADS)
Hernando, A.; Tiede, D.; Albrecht, F.; Lang, S.
2012-10-01
The delineation and classification of forest stands is a crucial aspect of forest management. Object-based image analysis (OBIA) can be used to produce detailed maps of forest stands from either orthophotos or very high resolution satellite imagery. However, measures are then required for evaluating and quantifying both the spatial and thematic accuracy of the OBIA output. In this paper we present an approach for delineating forest stands and a new Object Fate Analysis (OFA) matrix for accuracy assessment. A two-level object-based orthophoto analysis was first carried out to delineate stands on the Dehesa Boyal public land in central Spain (Avila Province). Two structural features were created for use in class modelling, enabling good differentiation between stands: a relational tree cover cluster feature, and an arithmetic ratio shadow/tree feature. We then extended the OFA comparison approach with an OFA-matrix to enable concurrent validation of thematic and spatial accuracies. Its diagonal shows the proportion of spatial and thematic coincidence between reference data and the corresponding classification. New parameters for Spatial Thematic Loyalty (STL), Spatial Thematic Loyalty Overall (STLOVERALL) and Maximal Interfering Object (MIO) are introduced to summarise the OFA-matrix accuracy assessment. A stands map generated by OBIA (classification data) was compared with a map of the same area produced from photo interpretation and field data (reference data). In our example the OFA-matrix results indicate good spatial and thematic accuracies (>65%) for all stand classes except the shrub stands (31.8%), and a good STLOVERALL (69.8%). The OFA-matrix has therefore been shown to be a valid tool for OBIA accuracy assessment.
Adjusting for sampling variability in sparse data: geostatistical approaches to disease mapping
Hampton, Kristen H; Serre, Marc L; Gesink, Dionne C; Pilcher, Christopher D; Miller, William C
2011-10-06
Background Disease maps of crude rates from routinely collected health data indexed at a small geographical resolution pose specific statistical problems due to the sparse nature of the data. Spatial smoothers allow areas to borrow strength from neighboring regions to produce a more stable estimate of the areal value. Geostatistical smoothers are able to quantify the uncertainty in smoothed rate estimates without a high computational burden. In this paper, we introduce a uniform model extension of Bayesian Maximum Entropy (UMBME) and compare its performance to that of Poisson kriging in measures of smoothing strength and estimation accuracy as applied to simulated data and the real data example of HIV infection in North Carolina. The aim is to produce more reliable maps of disease rates in small areas to improve identification of spatial trends at the local level. Results In all data environments, Poisson kriging exhibited greater smoothing strength than UMBME. With the simulated data where the true latent rate of infection was known, Poisson kriging resulted in greater estimation accuracy with data that displayed low spatial autocorrelation, while UMBME provided more accurate estimators with data that displayed higher spatial autocorrelation. With the HIV data, UMBME performed slightly better than Poisson kriging in cross-validatory predictive checks, with both models performing better than the observed data model with no smoothing. Conclusions Smoothing methods have different advantages depending upon both internal model assumptions that affect smoothing strength and external data environments, such as spatial correlation of the observed data. Further model comparisons in different data environments are required to provide public health practitioners with guidelines needed in choosing the most appropriate smoothing method for their particular health dataset. PMID:21978359
NASA Astrophysics Data System (ADS)
Ossés de Eicker, Margarita; Zah, Rainer; Triviño, Rubén; Hurni, Hans
The spatial accuracy of top-down traffic emission inventory maps obtained with a simplified disaggregation method based on street density was assessed in seven mid-sized Chilean cities. Each top-down emission inventory map was compared against a reference, namely a more accurate bottom-up emission inventory map from the same study area. The comparison was carried out using a combination of numerical indicators and visual interpretation. Statistically significant differences were found between the seven cities with regard to the spatial accuracy of their top-down emission inventory maps. In compact cities with a simple street network and a single center, good accuracy of the spatial distribution of emissions was achieved, with correlation values > 0.8 with respect to the bottom-up emission inventory of reference. In contrast, the simplified disaggregation method is not suitable for complex cities consisting of interconnected nuclei, resulting in correlation values < 0.5. Although top-down disaggregation of traffic emissions generally exhibits low accuracy, the accuracy is significantly higher in compact cities and might be further improved by applying a correction factor for the city center. Therefore, the method can be used by local environmental authorities in cities with limited resources and little knowledge of the pollution situation to obtain an overview of the spatial distribution of the emissions generated by traffic activities.
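The disaggregation step described in this abstract is simple enough to sketch. The following toy example (all numbers synthetic, not the Chilean data) spreads a city-wide emission total over grid cells in proportion to street density and then correlates the result against a stand-in bottom-up inventory:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells = 100
street_density = rng.gamma(2.0, 1.0, n_cells)        # km of street per grid cell

# stand-in bottom-up inventory: per-cell emissions that roughly track street density
bottom_up = street_density * (1.0 + 0.2 * rng.standard_normal(n_cells))
bottom_up = np.clip(bottom_up, 0, None)
city_total = bottom_up.sum()

# top-down disaggregation: spread the city total proportionally to street density
top_down = city_total * street_density / street_density.sum()

# spatial accuracy indicator: correlation against the bottom-up reference
r = np.corrcoef(top_down, bottom_up)[0, 1]
```

In this synthetic setup the correlation is high because the "bottom-up" field was generated from street density; the abstract's point is that this proportionality assumption breaks down in complex, poly-nuclear cities.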
NASA Astrophysics Data System (ADS)
Benaud, Pia; Anderson, Karen; Quine, Timothy; James, Mike; Quinton, John; Brazier, Richard E.
2017-04-01
The accessibility of Structure-from-Motion Multi-View Stereo (SfM) and its potential for multi-temporal applications offer an exciting opportunity to quantify soil erosion spatially. Accordingly, published research provides examples of the successful quantification of large erosion features and events to centimetre accuracy. Through rigorous control of the camera and image network geometry, the centimetre accuracy achievable at the field scale can translate to sub-millimetre accuracies within a laboratory environment. The broad aim of this study, therefore, was to understand how ultra-high-resolution spatial information on soil surface topography, derived from SfM, can be utilised to develop a spatially explicit, mechanistic understanding of rill and inter-rill erosion under experimental conditions. A rainfall simulator was used to create three soil surface conditions: compaction and rainsplash erosion, inter-rill erosion, and rill erosion. Total sediment capture was the primary validation for the experiments, allowing comparison between structurally and volumetrically derived change and true soil loss. A Terrestrial Laser Scanner (resolution of ca. 0.8 mm) was employed to assess spatial discrepancies within the SfM datasets and to provide an alternative measure of volumetric change. This body of work presents the workflow developed for the laboratory-scale studies and provides information on the importance of DTM resolution for volumetric calculations of soil loss under different soil surface conditions. To date, using the methodology presented, point clouds with ca. 3.38 × 10^7 points per m2 and RMSE values of 0.17 to 0.43 mm (relative precision 1:2023-5117) were constructed.
Preliminary results suggest a decrease in DTM resolution from 0.5 to 10 mm does not result in a significant change in volumetric calculations (p = 0.088), while affording a 24-fold decrease in processing times, but may impact negatively on mechanistic understanding of patterns of erosion. It is argued that the approach can be an invaluable tool for the spatially-explicit evaluation of soil erosion models.
An implicit spatial and high-order temporal finite difference scheme for 2D acoustic modelling
NASA Astrophysics Data System (ADS)
Wang, Enjiang; Liu, Yang
2018-01-01
The finite difference (FD) method exhibits great superiority over other numerical methods due to its easy implementation and small computational requirements. We propose an effective FD method, characterised by implicit spatial and high-order temporal schemes, to reduce both the temporal and spatial dispersions simultaneously. For the temporal derivative, apart from the conventional second-order FD approximation, a special rhombus FD scheme is included to reach high-order accuracy in time. Compared with the Lax-Wendroff FD scheme, this scheme can achieve nearly the same temporal accuracy but requires fewer floating-point operations and thus less computational cost when the same operator length is adopted. For the spatial derivatives, we adopt the implicit FD scheme to improve the spatial accuracy. Apart from the existing Taylor series expansion-based FD coefficients, we derive least-squares optimisation-based implicit spatial FD coefficients. Dispersion analysis and modelling examples demonstrate that our proposed method can effectively decrease both the temporal and spatial dispersions and thus provide more accurate wavefields.
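As background for the "Taylor series expansion-based FD coefficients" mentioned here, a minimal sketch (an explicit central stencil, not the authors' implicit scheme) solves the Taylor/Vandermonde system and applies the resulting coefficients to a second derivative:

```python
import math
import numpy as np

def fd_coeffs(deriv, offsets):
    """Taylor-series (Vandermonde) system: row k enforces
    sum_j c_j * s_j**k == k! * [k == deriv]."""
    offsets = np.asarray(offsets, dtype=float)
    n = len(offsets)
    A = np.vstack([offsets**k for k in range(n)])
    b = np.zeros(n)
    b[deriv] = math.factorial(deriv)
    return np.linalg.solve(A, b)

# classic 5-point stencil for d2/dx2: [-1/12, 4/3, -5/2, 4/3, -1/12]
stencil = [-2, -1, 0, 1, 2]
c = fd_coeffs(2, stencil)

# apply it to f = sin at x = 0.7 (true second derivative: -sin(0.7))
h = 0.01
x = 0.7
approx = sum(ci * math.sin(x + s * h) for ci, s in zip(c, stencil)) / h**2
```

The same linear-system idea extends to the implicit (rational) coefficients of the abstract, where stencil weights also appear on the derivative side of the equation.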
NASA Astrophysics Data System (ADS)
Arantes Camargo, Livia; Marques, José, Jr.
2015-04-01
The prediction of erodibility using indirect methods such as diffuse reflectance spectroscopy could facilitate the characterization of spatial variability over large areas and optimize the implementation of conservation practices. The aim of this study was to evaluate the prediction of interrill erodibility (Ki) and rill erodibility (Kr) from iron oxide content and soil color using multiple linear regression, and by diffuse reflectance spectroscopy (DRS) using partial least squares regression (PLSR). The soils were collected from three geomorphic surfaces, analyzed for chemical, physical and mineralogical properties, and scanned across the visible and infrared spectral range. Maps of the spatial distribution of Ki and Kr were built using geostatistics from values calculated with the most accurate calibrated models. Interrill and rill erodibility showed negative correlations with iron extracted by dithionite-citrate-bicarbonate, hematite, and chroma, confirming the influence of iron oxides on soil structural stability. Hematite and hue were the attributes that contributed most to the multiple linear regression calibration models for the prediction of Ki (R2 = 0.55) and Kr (R2 = 0.53). Diffuse reflectance spectroscopy via PLSR allowed prediction of interrill and rill erodibility with high accuracy (R2adj = 0.76 and 0.81, respectively, and RPD > 2.0) in the visible spectral range (380-800 nm), as well as characterization of the spatial variability of these attributes by geostatistics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammond, Glenn Edward; Song, Xuehang; Ye, Ming
A new approach is developed to delineate the spatial distribution of discrete facies (geological units that have unique distributions of hydraulic, physical, and/or chemical properties) conditioned not only on direct data (measurements directly related to facies properties, e.g., grain size distribution obtained from borehole samples) but also on indirect data (observations indirectly related to facies distribution, e.g., hydraulic head and tracer concentration). Our method integrates for the first time ensemble data assimilation with traditional transition probability-based geostatistics. The concept of level set is introduced to build shape parameterization that allows transformation between discrete facies indicators and continuous random variables. The spatial structure of different facies is simulated by indicator models using conditioning points selected adaptively during the iterative process of data assimilation. To evaluate the new method, a two-dimensional semi-synthetic example is designed to estimate the spatial distribution and permeability of two distinct facies from transient head data induced by pumping tests. The example demonstrates that our new method adequately captures the spatial pattern of facies distribution by imposing spatial continuity through conditioning points. The new method also reproduces the overall response in hydraulic head field with better accuracy compared to data assimilation with no constraints on spatial continuity on facies.
Daly, Keith R; Tracy, Saoirse R; Crout, Neil M J; Mairhofer, Stefan; Pridmore, Tony P; Mooney, Sacha J; Roose, Tiina
2018-01-01
Spatially averaged models of root-soil interactions are often used to calculate plant water uptake. Using a combination of X-ray computed tomography (CT) and image-based modelling, we tested the accuracy of this spatial averaging by directly calculating plant water uptake for young wheat plants in two soil types. The root system was imaged using X-ray CT at 2, 4, 6, 8 and 12 d after transplanting. The roots were segmented using semi-automated root tracking for speed and reproducibility. The segmented geometries were converted to a mesh suitable for the numerical solution of Richards' equation. Richards' equation was parameterized using existing pore scale studies of soil hydraulic properties in the rhizosphere of wheat plants. Image-based modelling allows the spatial distribution of water around the root to be visualized and the fluxes into the root to be calculated. By comparing the results obtained through image-based modelling to spatially averaged models, the impact of root architecture and geometry in water uptake was quantified. We observed that the spatially averaged models performed well in comparison to the image-based models with <2% difference in uptake. However, the spatial averaging loses important information regarding the spatial distribution of water near the root system. © 2017 John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Sadowski, F. E.; Sarno, J. E.
1976-01-01
First, an analysis of forest feature signatures was used to help explain the large variation in classification accuracy that can occur among individual forest features for any one case of spatial resolution and the inconsistent changes in classification accuracy that were demonstrated among features as spatial resolution was degraded. Second, the classification rejection threshold was varied in an effort to reduce the large proportion of unclassified resolution elements that previously appeared in the processing of coarse resolution data when a constant rejection threshold was used for all cases of spatial resolution. For the signature analysis, two-channel ellipse plots showing the feature signature distributions for several cases of spatial resolution indicated that the capability of signatures to correctly identify their respective features is dependent on the amount of statistical overlap among signatures. Reductions in signature variance that occur in data of degraded spatial resolution may not necessarily decrease the amount of statistical overlap among signatures having large variance and small mean separations. Features classified by such signatures may therefore continue to have similar amounts of misclassified elements in coarser resolution data, and thus not necessarily improve in classification accuracy.
NASA Astrophysics Data System (ADS)
Kostencka, Julianna; Kozacki, Tomasz; Hennelly, Bryan; Sheridan, John T.
2017-06-01
Holographic tomography (HT) allows noninvasive, quantitative, 3D imaging of transparent microobjects, such as living biological cells and fiber optics elements. The technique is based on acquisition of multiple scattered fields for various sample perspectives using digital holographic microscopy. Then, the captured data is processed with one of the tomographic reconstruction algorithms, which enables 3D reconstruction of refractive index distribution. In our recent works we addressed the issue of spatially variant accuracy of the HT reconstructions, which results from the insufficient model of diffraction that is applied in the widely-used tomographic reconstruction algorithms based on the Rytov approximation. In the present study, we continue investigating the spatially variant properties of HT imaging; however, we are now focusing on the limited spatial size of holograms as a source of this problem. Using the Wigner distribution representation and the Ewald sphere approach, we show that the limited size of the holograms results in a decreased quality of tomographic imaging in off-center regions of the HT reconstructions. This is because the finite detector extent becomes a limiting aperture that prohibits acquisition of full information about diffracted fields coming from the out-of-focus structures of a sample. The incompleteness of the data results in an effective truncation of the tomographic transfer function for the out-of-center regions of the tomographic image. In this paper, the described effect is quantitatively characterized for three types of tomographic systems: the configuration with 1) object rotation, 2) scanning of the illumination direction, and 3) the hybrid HT solution combining both previous approaches.
NASA Astrophysics Data System (ADS)
El Alem, A.
2016-12-01
Harmful algal blooms (HABs) cause negative impacts to other organisms by producing natural toxins, inflicting mechanical damage on other micro-organisms, or simply by degrading water quality. Contaminated waters could expose billions of people to serious intoxication problems. Traditionally, HAB monitoring is carried out with standard methods limited to a restricted network of sampling points. However, the rapid evolution of HABs makes it difficult to monitor their variation in time and space, threatening public safety. Daily monitoring is therefore the best way to control and mitigate their harmful effects on the population, particularly for sources feeding cities. Recently, an approach was developed for estimating chlorophyll-a (Chl-a) concentration, as a proxy of HAB presence, in inland waters based on MODIS imagery downscaled to 250 m spatial resolution. Statistical evaluation of the developed approach highlighted the accuracy of the Chl-a estimates, with R2 = 0.98, a relative RMSE of 15%, a relative BIAS of -2%, and a relative NASH of 0.95. The temporal resolution of the MODIS sensor then allows daily monitoring of HAB spatial distribution for inland waters of more than 2.25 km2 in surface area. Groupe-Hemisphere, a company specialized in environmental and sustainable planning in Quebec, has shown great interest in the developed approach. Given the complexity of the preprocessing (geometric and atmospheric corrections as well as downscaling of the spatial resolution) and processing (Chl-a estimation) of the images, a standalone application was developed under the MATLAB GUI environment. The application automates all preprocessing and processing steps. The outputs produced by the application for end users, many of whom may be decision makers or policy makers in the public and private sectors, allow near-real-time monitoring of water quality for more efficient management.
Chandra Observations of Neutron Stars: An Overview
NASA Technical Reports Server (NTRS)
Weisskopf, Martin C.; Karovska, M.; Pavlov, G. G.; Zavlin, V. E.; Clarke, Tracy
2006-01-01
We present a brief review of Chandra X-ray Observatory observations of neutron stars. The outstanding spatial and spectral resolution of this great observatory have allowed for observations of unprecedented clarity and accuracy. Many of these observations have provided new insights into neutron star physics. We present an admittedly biased and overly brief overview of these observations, highlighting some new discoveries made possible by the Observatory's unique capabilities. We also include our analysis of recent multiwavelength observations of the putative pulsar and its pulsar-wind nebula in the IC 443 SNR.
Simulation of separated flow past a bluff body using Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Ghia, K. N.; Ghia, U.; Osswald, G. A.; Liu, C. A.
1987-01-01
Two-dimensional flow past a bluff body is presently simulated on the basis of an analysis that employs the incompressible, unsteady Navier-Stokes equations in terms of vorticity and stream function. The fully implicit, time-marching, alternating-direction, implicit-block Gaussian elimination used is a direct method with second-order spatial accuracy; this allows it to avoid the introduction of any artificial viscosity. Attention is given to the simulation of flow past a circular cylinder with and without symmetry, requiring the use of either the half or the full cylinder, respectively.
Value-at-Risk forecasts by a spatiotemporal model in Chinese stock market
NASA Astrophysics Data System (ADS)
Gong, Pu; Weng, Yingliang
2016-01-01
This paper generalizes a recently proposed spatial autoregressive model and introduces a spatiotemporal model for forecasting stock returns. We support the view that stock returns are affected not only by the absolute values of factors such as firm size, book-to-market ratio and momentum but also by the relative values of factors like trading volume ranking and market capitalization ranking in each period. This article studies a new method, called the quartile method, for constructing stocks' reference groups. Applying the method empirically to the Shanghai Stock Exchange 50 Index, we compare the daily volatility forecasting performance and the out-of-sample forecasting performance of Value-at-Risk (VaR) estimated by different models. The empirical results show that the spatiotemporal model performs surprisingly well in terms of capturing spatial dependences among individual stocks, and it produces more accurate VaR forecasts than the other three models introduced in the previous literature. Moreover, the findings indicate that both allowing for serial correlation in the disturbances and using time-varying spatial weight matrices can greatly improve the predictive accuracy of a spatial autoregressive model.
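For readers unfamiliar with VaR itself, a minimal historical-simulation estimate (unrelated to the paper's spatiotemporal model; the heavy-tailed returns are synthetic) looks like:

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic daily returns with fat tails (Student's t, df=5)
returns = rng.standard_t(df=5, size=1000) * 0.01

# one-day 95% Value-at-Risk by historical simulation:
# the loss level exceeded on roughly 5% of past days
var_95 = -np.quantile(returns, 0.05)
breaches = float(np.mean(returns < -var_95))
```

Model-based VaR forecasts like those compared in the paper replace the empirical quantile with a quantile of a fitted conditional return distribution, which is then backtested against realized breach rates.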
Accuracy of stream habitat interpolations across spatial scales
Sheehan, Kenneth R.; Welsh, Stuart A.
2013-01-01
Stream habitat data are often collected across spatial scales because relationships among habitat, species occurrence, and management plans are linked at multiple spatial scales. Unfortunately, scale is often a factor limiting insight gained from spatial analysis of stream habitat data. Considerable cost is often expended to collect data at several spatial scales to provide accurate evaluation of spatial relationships in streams. To address the utility of a single-scale set of stream habitat data used at varying scales, we examined the influence that data scaling had on the accuracy of natural neighbor predictions of depth, flow, and benthic substrate. To achieve this goal, we measured two streams at a gridded resolution of 0.33 × 0.33 meter cell size over a combined area of 934 m2 to create a baseline for natural neighbor interpolated maps at 12 incremental scales ranging from a raster cell size of 0.11 m2 to 16 m2. Analysis of predictive maps showed a logarithmic linear decay pattern in RMSE values of interpolation accuracy for these variables as the resolution of the data used to interpolate the study areas became coarser. Proportional accuracy of interpolated models (r2) decreased but was maintained up to 78% as interpolation scale moved from 0.11 m2 to 16 m2. Results indicated that accuracy retention was suitable for assessment and management purposes at various scales different from the data collection scale. Our study is relevant to spatial modeling, fish habitat assessment, and stream habitat management because it highlights the potential of using a single dataset to fulfill analysis needs rather than investing considerable cost to develop several scaled datasets.
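The scale-decay experiment can be mimicked on a toy grid. In this sketch, block-mean coarsening plus nearest-neighbour upsampling stands in for natural neighbor interpolation (which NumPy does not provide), and the "depth" field is synthetic rather than surveyed bathymetry:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic "depth" field on a fine grid (stand-in for the 0.33 m survey data)
x, y = np.meshgrid(np.linspace(0, 10, 120), np.linspace(0, 10, 120))
depth = np.sin(x) * np.cos(y) + 0.05 * rng.standard_normal(x.shape)

def rmse_at_scale(field, block):
    """Coarsen by block-averaging, upsample back by repetition, and
    score the reconstruction against the fine-grid baseline."""
    n = (field.shape[0] // block) * block
    f = field[:n, :n]
    coarse = f.reshape(n // block, block, n // block, block).mean(axis=(1, 3))
    pred = np.repeat(np.repeat(coarse, block, axis=0), block, axis=1)
    return float(np.sqrt(np.mean((pred - f) ** 2)))

# RMSE grows as the interpolation scale coarsens, echoing the decay pattern
errors = [rmse_at_scale(depth, b) for b in (2, 4, 8, 12)]
```

For a smooth field the per-cell error of a piecewise-constant reconstruction scales with the block size times the local gradient, which is why the error curve rises monotonically here.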
Selective 4D modelling framework for spatial-temporal land information management system
NASA Astrophysics Data System (ADS)
Doulamis, Anastasios; Soile, Sofia; Doulamis, Nikolaos; Chrisouli, Christina; Grammalidis, Nikos; Dimitropoulos, Kosmas; Manesis, Charalambos; Potsiou, Chryssy; Ioannidis, Charalabos
2015-06-01
This paper introduces a predictive (selective) 4D modelling framework in which only the spatial 3D differences are modelled at forthcoming time instances, while regions with no significant spatial-temporal alterations remain intact. To accomplish this, spatial-temporal analysis is first applied between 3D digital models captured at different time instances, enabling the creation of dynamic change history maps. Change history maps indicate the spatial probability that regions will need further 3D modelling at forthcoming instances. Change history maps thus support predictive assessment, that is, localizing surfaces within the objects where a high-accuracy reconstruction process needs to be activated at forthcoming time instances. The proposed 4D Land Information Management System (LIMS) is implemented using open interoperable standards based on the CityGML framework. CityGML allows the description of semantic metadata information and the rights of the land resources. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 4D LIMS digital parcels and the respective semantic information. The open source 3DCityDB, incorporating a PostgreSQL geo-database, is used to manage and manipulate 3D data and their semantics. An application is made to detect change through time of a 3D block of plots in an urban area of Athens, Greece. Starting with an accurate 3D model of the buildings in 1983, a change history map is created using automated dense image matching on aerial photos from 2010. For both time instances meshes are created, and through their comparison the changes are detected.
Baker, Jannah; White, Nicole; Mengersen, Kerrie
2014-11-20
Spatial analysis is increasingly important for identifying modifiable geographic risk factors for disease. However, spatial health data from surveys are often incomplete, ranging from missing data for only a few variables to missing data for many variables. For spatial analyses of health outcomes, selection of an appropriate imputation method is critical in order to produce the most accurate inferences. We present a cross-validation approach to select between three imputation methods for health survey data with correlated lifestyle covariates, using type II diabetes mellitus (DM II) risk across 71 Queensland Local Government Areas (LGAs) as a case study. We compare the accuracy of mean imputation to imputation using multivariate normal and conditional autoregressive prior distributions. The most appropriate imputation method depends upon the application and is not necessarily the most complex one; mean imputation was selected as the most accurate method in this application. Selecting an appropriate imputation method for health survey data, after accounting for spatial correlation and correlation between covariates, allows more complete analysis of geographic risk factors for disease with more confidence in the results to inform public policy decision-making.
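The cross-validation idea — hold out values whose truth is known, re-impute them, and score each method — can be sketched with synthetic data. In this toy setup regression imputation wins because the covariates are strongly correlated by construction; as the abstract stresses, the winner depends on the data (mean imputation won in their application), and the variable names here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
smoking = rng.normal(25, 5, n)                  # hypothetical covariate (%)
obesity = 0.6 * smoking + rng.normal(0, 2, n)   # correlated covariate (%)

# hold out 20% of obesity values whose truth we know, then re-impute them
mask = rng.random(n) < 0.2
truth = obesity[mask]

# method 1: mean imputation from the observed values
mean_imp = np.full(mask.sum(), obesity[~mask].mean())

# method 2: regression imputation using the correlated covariate
slope, intercept = np.polyfit(smoking[~mask], obesity[~mask], 1)
reg_imp = intercept + slope * smoking[mask]

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
err_mean, err_reg = rmse(mean_imp, truth), rmse(reg_imp, truth)
```

The paper's multivariate normal and conditional autoregressive imputations fit this same template: each candidate method is scored on held-out values, and the lowest cross-validated error wins.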
Comparison of bipolar vs. tripolar concentric ring electrode Laplacian estimates.
Besio, W; Aakula, R; Dai, W
2004-01-01
Potentials on the body surface arising from the heart are a function of both space and time. The 12-lead electrocardiogram (ECG) provides useful global temporal assessment, but it yields limited spatial information due to the smoothing effect caused by the volume conductor. The smoothing complicates identification of multiple simultaneous bioelectrical events. In an attempt to circumvent the smoothing problem, some researchers used a five-point method (FPM) to numerically estimate the analytical solution of the Laplacian with an array of monopolar electrodes. The FPM was generalized to develop a bi-polar concentric ring electrode system. We have developed a new Laplacian ECG sensor, a tri-electrode sensor, based on a nine-point method (NPM) numerical approximation of the analytical Laplacian. For comparison, the NPM, FPM and compact NPM were calculated over a 400 x 400 mesh with 1/400 spacing. Tri- and bi-electrode sensors were also simulated, and their Laplacian estimates were compared against the analytical Laplacian. We found that tri-electrode sensors have much improved accuracy, with significantly smaller relative and maximum errors in estimating the Laplacian operator. Apart from the higher accuracy, our new electrode configuration should allow better localization of the electrical activity of the heart than bi-electrode configurations.
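The five-point and nine-point Laplacian estimates compared in this abstract are standard stencils and easy to reproduce. This sketch (generic stencils on an analytic harmonic potential, not the authors' electrode simulation) illustrates the nine-point method's accuracy advantage:

```python
import math

def fpm_laplacian(f, x, y, h):
    """Five-point method: classic second-order Laplacian estimate."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4 * f(x, y)) / h**2

def npm_laplacian(f, x, y, h):
    """Nine-point method: adds the diagonal neighbours (weights 1 and 4)."""
    diag = (f(x + h, y + h) + f(x + h, y - h)
            + f(x - h, y + h) + f(x - h, y - h))
    axis = f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
    return (diag + 4 * axis - 20 * f(x, y)) / (6 * h**2)

# harmonic test potential: its true Laplacian is exactly 0 everywhere
f = lambda x, y: math.sin(x) * math.sinh(y)
e_fpm = abs(fpm_laplacian(f, 0.5, 0.5, 0.1))
e_npm = abs(npm_laplacian(f, 0.5, 0.5, 0.1))
```

For this harmonic potential the nine-point stencil's leading error term vanishes, so its estimate is markedly closer to the true value of zero than the five-point estimate, mirroring the tri- vs bi-electrode comparison.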
Age-related similarities and differences in monitoring spatial cognition.
Ariel, Robert; Moffat, Scott D
2018-05-01
Spatial cognitive performance is impaired in later adulthood but it is unclear whether the metacognitive processes involved in monitoring spatial cognitive performance are also compromised. Inaccurate monitoring could affect whether people choose to engage in tasks that require spatial thinking and also the strategies they use in spatial domains such as navigation. The current experiment examined potential age differences in monitoring spatial cognitive performance in a variety of spatial domains including visual-spatial working memory, spatial orientation, spatial visualization, navigation, and place learning. Younger and older adults completed a 2D mental rotation test, 3D mental rotation test, paper folding test, spatial memory span test, two virtual navigation tasks, and a cognitive mapping test. Participants also made metacognitive judgments of performance (confidence judgments, judgments of learning, or navigation time estimates) on each trial for all spatial tasks. Preference for allocentric or egocentric navigation strategies was also measured. Overall, performance was poorer and confidence in performance was lower for older adults than younger adults. In most spatial domains, the absolute and relative accuracy of metacognitive judgments was equivalent for both age groups. However, age differences in monitoring accuracy (specifically relative accuracy) emerged in spatial tasks involving navigation. Confidence in navigating for a target location also mediated age differences in allocentric navigation strategy use. These findings suggest that with the possible exception of navigation monitoring, spatial cognition may be spared from age-related decline even though spatial cognition itself is impaired in older age.
A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology
Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi
2015-01-01
Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is also essential for the TSC of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results of the flight experiments illustrate that the geo-position accuracy of the prototype system achieves 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, results of the comparison between the traditional (MADC II) and proposed MDCS demonstrate that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS would further improve the accuracy of downstream photogrammetric products. PMID:25835187
[Real time 3D echocardiography]
NASA Technical Reports Server (NTRS)
Bauer, F.; Shiota, T.; Thomas, J. D.
2001-01-01
Three-dimensional representation of the heart is a long-standing concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, whose spatial localisation and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matrix emitter-receiver probe with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is displayed in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.
Crespo-García, Maité; Zeiller, Monika; Leupold, Claudia; Kreiselmeyer, Gernot; Rampp, Stefan; Hamer, Hajo M; Dalal, Sarang S
2016-11-15
Human hippocampal theta oscillations play a key role in accurate spatial coding. Associative encoding involves similar hippocampal networks but, paradoxically, is also characterized by theta power decreases. Here, we investigated how theta activity relates to associative encoding of place contexts resulting in accurate navigation. Using MEG, we found that slow-theta (2-5Hz) power negatively correlated with subsequent spatial accuracy for virtual contextual locations in posterior hippocampus and other cortical structures involved in spatial cognition. A rare opportunity to simultaneously record MEG and intracranial EEG in an epilepsy patient provided crucial insights: during power decreases, slow-theta in right anterior hippocampus and left inferior frontal gyrus phase-led the left temporal cortex and predicted spatial accuracy. Our findings indicate that decreased slow-theta activity reflects local and long-range neural mechanisms that encode accurate spatial contexts, and strengthens the view that local suppression of low-frequency activity is essential for more efficient processing of detailed information. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kruglyakov, Mikhail; Kuvshinov, Alexey
2018-05-01
3-D interpretation of electromagnetic (EM) data of different origin and scale is becoming common practice worldwide. However, 3-D EM numerical simulation (modeling)—a key part of any 3-D EM data analysis—with realistic levels of complexity, accuracy and spatial detail remains computationally challenging. We present a novel, efficient 3-D numerical solver based on a volume integral equation (IE) method. The efficiency is achieved by using a high-order polynomial (HOP) basis instead of the zero-order (piecewise constant) basis that is invoked in all routinely used IE-based solvers. We demonstrate that use of the HOP basis allows us to substantially decrease the number of unknowns (preserving the same accuracy), with a corresponding speed-up and memory savings.
Spatial adaptation procedures on tetrahedral meshes for unsteady aerodynamic flow calculations
NASA Technical Reports Server (NTRS)
Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.
1993-01-01
Spatial adaptation procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaptation procedures were developed and implemented within a three-dimensional, unstructured-grid, upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. A detailed description of the enrichment and coarsening procedures are presented and comparisons with experimental data for an ONERA M6 wing and an exact solution for a shock-tube problem are presented to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady results, obtained using spatial adaptation procedures, are shown to be of high spatial accuracy, primarily in that discontinuities such as shock waves are captured very sharply.
Zhang, Shengwei; Arfanakis, Konstantinos
2012-01-01
Purpose: To investigate the effect of standardized and study-specific human brain diffusion tensor templates on the accuracy of spatial normalization, without ignoring the important roles of data quality and registration algorithm effectiveness. Materials and Methods: Two groups of diffusion tensor imaging (DTI) datasets, with and without visible artifacts, were normalized to two standardized diffusion tensor templates (IIT2, ICBM81) as well as study-specific templates, using three registration approaches. The accuracy of inter-subject spatial normalization was compared across templates, using the most effective registration technique for each template and group of data. Results: It was demonstrated that, for DTI data with visible artifacts, the study-specific template resulted in significantly higher spatial normalization accuracy than standardized templates. However, for data without visible artifacts, the study-specific template and the standardized template of higher quality (IIT2) resulted in similar normalization accuracy. Conclusion: For DTI data with visible artifacts, a carefully constructed study-specific template may achieve higher normalization accuracy than that of standardized templates. However, as DTI data quality improves, a high-quality standardized template may be more advantageous than a study-specific template, since in addition to high normalization accuracy, it provides a standard reference across studies, as well as automated localization/segmentation when accompanied by anatomical labels. PMID:23034880
Martinuzzi, Sebastián; Ramos-González, Olga M; Muñoz-Erickson, Tischa A; Locke, Dexter H; Lugo, Ariel E; Radeloff, Volker C
2018-04-01
Fine-scale information about urban vegetation and social-ecological relationships is crucial to inform both urban planning and ecological research, and high spatial resolution imagery is a valuable tool for assessing urban areas. However, urban ecology and remote sensing have largely focused on cities in temperate zones. Our goal was to characterize urban vegetation cover with sub-meter (<1 m) resolution aerial imagery, and identify social-ecological relationships of urban vegetation patterns in a tropical city, the San Juan Metropolitan Area, Puerto Rico. Our specific objectives were to (1) map vegetation cover using sub-meter spatial resolution (0.3-m) imagery, (2) quantify the amount of residential and non-residential vegetation, and (3) investigate the relationship between patterns of urban vegetation vs. socioeconomic and environmental factors. We found that 61% of the San Juan Metropolitan Area was green and that our combination of high spatial resolution imagery and object-based classification was highly successful for extracting vegetation cover in a moist tropical city (97% accuracy). In addition, simple spatial pattern analysis allowed us to separate residential from non-residential vegetation with 76% accuracy, and patterns of residential and non-residential vegetation varied greatly across the city. Both socioeconomic (e.g., population density, building age, detached homes) and environmental variables (e.g., topography) were important in explaining variations in vegetation cover in our spatial regression models. However, important socioeconomic drivers found in cities in temperate zones, such as income and home value, were not important in San Juan. Climatic and cultural differences between tropical and temperate cities may result in different social-ecological relationships. 
Our study provides novel information for local land use planners, highlights the value of high spatial resolution remote sensing data to advance ecological research and urban planning in tropical cities, and emphasizes the need for more studies in tropical cities. © 2017 by the Ecological Society of America.
RNA Imaging with Multiplexed Error Robust Fluorescence in situ Hybridization
Moffitt, Jeffrey R.; Zhuang, Xiaowei
2016-01-01
Quantitative measurements of both the copy number and spatial distribution of large fractions of the transcriptome in single-cells could revolutionize our understanding of a variety of cellular and tissue behaviors in both healthy and diseased states. Single-molecule Fluorescence In Situ Hybridization (smFISH)—an approach where individual RNAs are labeled with fluorescent probes and imaged in their native cellular and tissue context—provides both the copy number and spatial context of RNAs but has been limited in the number of RNA species that can be measured simultaneously. Here we describe Multiplexed Error Robust Fluorescence In Situ Hybridization (MERFISH), a massively parallelized form of smFISH that can image and identify hundreds to thousands of different RNA species simultaneously with high accuracy in individual cells in their native spatial context. We provide detailed protocols on all aspects of MERFISH, including probe design, data collection, and data analysis to allow interested laboratories to perform MERFISH measurements themselves. PMID:27241748
NASA Astrophysics Data System (ADS)
Burkholder, E. F.
2016-12-01
One way to address challenges of replacing NAD 83, NGVD 88 and IGLD 85 is to exploit the characteristics of 3-D digital spatial data. This presentation describes the 3-D global spatial data model (GSDM) which accommodates rigorous scientific endeavors while simultaneously supporting a local flat-earth view of the world. The GSDM is based upon the assumption of a single origin for 3-D spatial data and uses rules of solid geometry for manipulating spatial data components. This approach exploits the characteristics of 3-D digital spatial data and preserves the quality of geodetic measurements while providing spatial data users the option of working with rectangular flat-earth components and computational procedures for local applications. This flexibility is provided by using a bidirectional rotation matrix that allows any 3-D vector to be used in a geodetic reference frame for high-end applications and/or the local frame for flat-earth users. The GSDM is viewed as compatible with the datum products being developed by NGS and provides for unambiguous exchange of 3-D spatial data between disciplines and users worldwide. Three geometrical models will be summarized - geodetic, map projection, and 3-D. Geodetic computations are performed on an ellipsoid and are without equal in providing rigorous coordinate values for latitude, longitude, and ellipsoid height. Members of the user community have, for generations, sought ways to "flatten the world" to accommodate a flat-earth view and to avoid the complexity of working on an ellipsoid. Map projections have been defined for a wide variety of applications and remain very useful for visualizing spatial data. But, the GSDM supports computations based on 3-D components that have not been distorted in a 2-D map projection. The GSDM does not invalidate either geodesy or cartographic computational processes but provides a geometrically correct view of any point cloud from any point selected by the user. 
As a bonus, the GSDM also defines spatial data accuracy and includes procedures for establishing, tracking and using spatial data accuracy - increasingly important in many applications but especially relevant given development of procedures for tracking drones (primarily absolute) and intelligent vehicles (primarily relative).
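The bidirectional rotation between the geocentric frame and a local flat-earth frame that the GSDM relies on can be sketched with the standard ECEF-to-ENU rotation matrix. This is an illustrative assumption, not the GSDM's exact formulation, and the function names are hypothetical:

```python
import math

def enu_rotation(lat_deg, lon_deg):
    """Rotation matrix taking an Earth-centered (ECEF) delta vector to
    local east-north-up (ENU) components at the given geodetic location."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    sl, cl = math.sin(lat), math.cos(lat)
    so, co = math.sin(lon), math.cos(lon)
    return [[-so,       co,      0.0],   # east
            [-sl * co, -sl * so, cl],    # north
            [ cl * co,  cl * so, sl]]    # up

def rotate(m, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]
```

Because the matrix is orthogonal, its transpose performs the inverse (local-to-geocentric) rotation, so the same 3-D vector can be carried in either direction between frames, which is the "bidirectional" property the abstract emphasizes.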
NASA Astrophysics Data System (ADS)
Laura, J. R.; Miller, D.; Paul, M. V.
2012-03-01
An accuracy assessment of Ames Stereo Pipeline-derived DEMs for lunar site selection using weighted spatial dependence simulation, and a call for outside Ames-derived DEMs to facilitate a statistical precision analysis.
Fukuyama, Atsushi; Isoda, Haruo; Morita, Kento; Mori, Marika; Watanabe, Tomoya; Ishiguro, Kenta; Komori, Yoshiaki; Kosugi, Takafumi
2017-01-01
Introduction: We aim to elucidate the effect of spatial resolution of three-dimensional cine phase contrast magnetic resonance (3D cine PC MR) imaging on the accuracy of the blood flow analysis, and examine the optimal setting for spatial resolution using flow phantoms. Materials and Methods: The flow phantom has five types of acrylic pipes that represent human blood vessels (inner diameters: 15, 12, 9, 6, and 3 mm). The pipes were fixed with 1% agarose containing 0.025 mol/L gadolinium contrast agent. A blood-mimicking fluid with human blood property values was circulated through the pipes at a steady flow. Magnetic resonance (MR) images (three-directional phase images with speed information and magnitude images for information of shape) were acquired using the 3-Tesla MR system and receiving coil. Temporal changes in spatially-averaged velocity and maximum velocity were calculated using hemodynamic analysis software. We calculated the error rates of the flow velocities based on the volume flow rates measured with a flowmeter and examined measurement accuracy. Results: When the acrylic pipe was the size of the thoracicoabdominal or cervical artery and the ratio of pixel size for the pipe was set at 30% or lower, spatially-averaged velocity measurements were highly accurate. When the pixel size ratio was set at 10% or lower, maximum velocity could be measured with high accuracy. It was difficult to accurately measure maximum velocity of the 3-mm pipe, which was the size of an intracranial major artery, but the error for spatially-averaged velocity was 20% or less. Conclusions: Flow velocity measurement accuracy of 3D cine PC MR imaging for pipes with inner sizes equivalent to vessels in the cervical and thoracicoabdominal arteries is good. The flow velocity accuracy for the pipe with a 3-mm-diameter that is equivalent to major intracranial arteries is poor for maximum velocity, but it is relatively good for spatially-averaged velocity. PMID:28132996
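The error-rate check described in this study can be sketched as follows; the unit conventions and function names are assumptions for illustration, with the reference velocity taken as the flowmeter volume flow divided by the pipe cross-section:

```python
import math

def reference_velocity(volume_flow_ml_s, diameter_mm):
    """Spatially-averaged reference velocity (cm/s) from a flowmeter
    volume flow rate (mL/s) and the pipe's inner diameter (mm)."""
    area_cm2 = math.pi * (diameter_mm / 10.0 / 2.0) ** 2
    return volume_flow_ml_s / area_cm2          # 1 mL = 1 cm^3

def error_rate(measured_cm_s, reference_cm_s):
    """Relative error (%) of an MR-derived velocity against the flowmeter."""
    return (measured_cm_s - reference_cm_s) / reference_cm_s * 100.0
```

For the 3-mm pipe, for example, a measured spatially-averaged velocity within 20% of this reference would satisfy the error bound the authors report.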
Accuracy assessment of seven global land cover datasets over China
NASA Astrophysics Data System (ADS)
Yang, Yongke; Xiao, Pengfeng; Feng, Xuezhi; Li, Haixing
2017-03-01
Land cover (LC) is a vital foundation of Earth science. To date, several global LC datasets have arisen through the efforts of many scientific communities. To provide guidelines for data usage over China, nine LC maps from seven global LC datasets (IGBP DISCover, UMD, GLC, MCD12Q1, GLCNMO, CCI-LC, and GlobeLand30) were evaluated in this study. First, we compared their similarities and discrepancies in both area and spatial patterns, and analysed their inherent relations to data sources and to classification schemes and methods. Next, five sets of validation sample units (VSUs) were collected to quantify their accuracy. Further, we built a spatial analysis model and mapped their spatial variation in accuracy based on the five sets of VSUs. The results show evident discrepancies among these LC maps in both area and spatial patterns. For LC maps produced by different institutes, GLC 2000 and CCI-LC 2000 have the highest overall spatial agreement (53.8%). For LC maps produced by the same institute, the overall spatial agreement of CCI-LC 2000 and 2010, and of MCD12Q1 2001 and 2010, reaches 99.8% and 73.2%, respectively; however, more effort is still needed before these LC maps can be used as time-series inputs to models, since both CCI-LC and MCD12Q1 fail to represent the rapidly changing trends of several key LC classes in the early 21st century, in particular urban and built-up, snow and ice, water bodies, and permanent wetlands. With the highest spatial resolution, the overall accuracy of GlobeLand30 2010 is 82.39%. Among the other six LC datasets with coarse resolution, CCI-LC 2010/2000 has the highest overall accuracy, followed in turn by MCD12Q1 2010/2001, GLC 2000, GLCNMO 2008, IGBP DISCover, and UMD. Although all maps exhibit high accuracy in homogeneous regions, local accuracies elsewhere differ considerably, particularly in the Farming-Pastoral Zone of North China, the mountains of Northeast China, and the Southeast Hills. 
Special attention should be paid for data users who are interested in these regions.
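The two headline metrics in this evaluation, overall accuracy against validation sample units and pairwise spatial agreement between maps, can be sketched as below. This is a minimal illustration with hypothetical function names; the study's actual VSU sampling and weighting are not reproduced:

```python
def overall_accuracy(confusion):
    """Overall accuracy from a square confusion matrix
    (rows: mapped class, columns: reference class)."""
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    return correct / total

def spatial_agreement(map_a, map_b):
    """Fraction of co-located pixels on which two land-cover maps
    assign the same class label."""
    same = sum(1 for a, b in zip(map_a, map_b) if a == b)
    return same / len(map_a)
```

Applied per region rather than globally, the same overall-accuracy computation yields the local accuracy surfaces the authors map from the five VSU sets.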
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cates, J; Drzymala, R
2014-06-01
Purpose: The purpose of the study was to implement a method for accurate rat brain irradiation using the Gamma Knife Perfexion unit. The system needed to be repeatable, efficient, and dosimetrically and spatially accurate. Methods: A platform (“rat holder”) was made such that it is attachable to the Leksell Gamma Knife G Frame. The rat holder utilizes two ear bars contacting bony anatomy and a front tooth bar to secure the rat. The rat holder fits inside the Leksell localizer box, which utilizes fiducial markers to register with the GammaPlan planning system. This method allows for an accurate, repeatable setup. A cylindrical phantom was made so that film can be placed axially in the phantom. We then acquired CT image sets of the rat holder and localizer box with both a rat and the phantom. Three treatment plans were created: a plan on the rat CT dataset, a phantom plan with the same prescription dose as the rat plan, and a phantom plan with the same delivery time as the rat plan. Results: Film analysis from the phantom showed that our setup is spatially accurate and repeatable. It is also dosimetrically accurate, with a difference between predicted and measured dose of 2.9%. Film analysis with prescription dose equal between rat and phantom plans showed a difference of 3.8%, showing that our phantom is a good representation of the rat for dosimetry purposes, allowing for +/- 3 mm diameter variation. Film analysis with treatment time equal showed an error of 2.6%, which means we can deliver a prescription dose with accuracy within 3%. Conclusion: Our method for irradiation of the rat brain has been shown to be repeatable, efficient, and accurate, both dosimetrically and spatially. We can treat a large number of rats efficiently while delivering prescription doses within 3% at millimeter-level accuracy.
Estimating carbon and showing impacts of drought using satellite data in regression-tree models
Boyte, Stephen; Wylie, Bruce K.; Howard, Danny; Dahal, Devendra; Gilmanov, Tagir G.
2018-01-01
Integrating spatially explicit biogeophysical and remotely sensed data into regression-tree models enables the spatial extrapolation of training data over large geographic spaces, allowing a better understanding of broad-scale ecosystem processes. The current study presents annual gross primary production (GPP) and annual ecosystem respiration (RE) for 2000–2013 in several short-statured vegetation types using carbon flux data from towers that are located strategically across the conterminous United States (CONUS). We calculate carbon fluxes (annual net ecosystem production [NEP]) for each year in our study period, which includes 2012 when drought and higher-than-normal temperatures influence vegetation productivity in large parts of the study area. We present and analyse carbon flux dynamics in the CONUS to better understand how drought affects GPP, RE, and NEP. Model accuracy metrics show strong correlation coefficients (r) (r ≥ 94%) between training and estimated data for both GPP and RE. Overall, average annual GPP, RE, and NEP are relatively constant throughout the study period except during 2012 when almost 60% less carbon is sequestered than normal. These results allow us to conclude that this modelling method effectively estimates carbon dynamics through time and allows the exploration of impacts of meteorological anomalies and vegetation types on carbon dynamics.
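The carbon bookkeeping behind these results is the simple residual NEP = GPP − RE, with 2012 standing out as an anomaly. A minimal sketch with toy numbers and hypothetical function names (the study's regression-tree extrapolation itself is not reproduced):

```python
def annual_nep(gpp, re):
    """Annual net ecosystem production (NEP) as the GPP minus RE residual,
    keyed by year."""
    return {yr: gpp[yr] - re[yr] for yr in gpp}

def anomaly_pct(nep, year):
    """Percent departure of one year's NEP from the mean of the other years."""
    baseline = [v for yr, v in nep.items() if yr != year]
    mean = sum(baseline) / len(baseline)
    return (nep[year] - mean) / mean * 100.0
```

With toy fluxes in which the drought year's NEP drops from 100 to 40 (arbitrary units), the anomaly works out to -60%, the same form of comparison as the abstract's "almost 60% less carbon sequestered than normal."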
Differentiation of plant age in grasses using remote sensing
NASA Astrophysics Data System (ADS)
Knox, Nichola M.; Skidmore, Andrew K.; van der Werff, Harald M. A.; Groen, Thomas A.; de Boer, Willem F.; Prins, Herbert H. T.; Kohi, Edward; Peel, Mike
2013-10-01
Phenological or plant age classification across a landscape allows for examination of micro-topographical effects on plant growth, improvement in the accuracy of species discrimination, and will improve our understanding of the spatial variation in plant growth. In this paper six vegetation indices used in phenological studies (including the newly proposed PhIX index) were analysed for their ability to statistically differentiate grasses of different ages in the sequence of their development. Spectra of grasses of different ages were collected from a greenhouse study. These were used to determine if NDVI, NDWI, CAI, EVI, EVI2 and the newly proposed PhIX index could sequentially discriminate grasses of different ages, and subsequently classify grasses into their respective age category. The PhIX index was defined as: (AVNIRn+log(ASWIR2n))/(AVNIRn-log(ASWIR2n)), where AVNIRn and ASWIR2n are the respective normalised areas under the continuum removed reflectance curve within the VNIR (500-800 nm) and SWIR2 (2000-2210 nm) regions. The PhIX index was found to produce the highest phenological classification accuracy (Overall Accuracy: 79%, and Kappa Accuracy: 75%) and similar to the NDVI, EVI and EVI2 indices it statistically sequentially separates out the developmental age classes. Discrimination between seedling and dormant age classes and the adult and flowering classes was problematic for most of the tested indices. Combining information from the visible near infrared (VNIR) and shortwave infrared region (SWIR) region into a single phenological index captures the phenological changes associated with plant pigments and the ligno-cellulose absorption feature, providing a robust method to discriminate the age classes of grasses. This work provides a valuable contribution into mapping spatial variation and monitoring plant growth across savanna and grassland ecosystems.
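The PhIX index defined above can be computed directly from the two normalised areas. A minimal sketch, assuming a base-10 logarithm (the base is not restated in this abstract) and hypothetical argument names:

```python
import math

def phix(a_vnir_n, a_swir2_n):
    """PhIX = (A_VNIRn + log(A_SWIR2n)) / (A_VNIRn - log(A_SWIR2n)),
    where A_VNIRn and A_SWIR2n are the normalised areas under the
    continuum-removed reflectance curve in 500-800 nm and 2000-2210 nm.
    The base-10 log is an assumption here."""
    log_swir = math.log10(a_swir2_n)
    return (a_vnir_n + log_swir) / (a_vnir_n - log_swir)
```

Combining a VNIR area (tracking pigments) with a SWIR2 area (tracking the ligno-cellulose feature) is what lets a single scalar separate the developmental age classes.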
AIRS Science Accomplishments Version 4.0/Plans for Version 5
NASA Technical Reports Server (NTRS)
Pagano, Thomas S.; Aumann, Hartmut; Elliott, Denis; Granger, Stephanie; Kahn, Brian; Eldering, Annmarie; Irion, Bill; Fetzer, Eric; Olsen, Ed; Lee, Sung-Yung;
2006-01-01
This talk is about accomplishments with AIRS data: what we have learned from almost three years of data, what part of this is emerging in Version 4.0, what part we would like to see filter into Version 5.0, and what part constitutes limitations of the AIRS requirements, such as spectral and spatial resolution, which have to be deferred to the wish list for the next-generation hyperspectral sounder. The AIRS calibration accuracy at the 100 mK level and stability at the 6 mK/year level are remarkable, and establish the unique capability of a cooled grating array spectrometer in Earth orbit for climate research. Data which are sufficiently clear to match the radiometric accuracy of the instrument have a yield of less than 1%; this is adequate for calibration. The 2616/cm window channel combined with the RTG.SST for the tropical ocean allows excellent assessment of radiometric calibration accuracy and stability. For absolute calibration verification, 100 mK is the limit due to cloud contamination. The 10 micron window channels can be used for stability assessment, but accuracy is limited to 300 mK due to water continuum absorption uncertainties.
Cognitive ability is heritable and predicts the success of an alternative mating tactic
Smith, Carl; Philips, André; Reichard, Martin
2015-01-01
The ability to attract mates, acquire resources for reproduction, and successfully outcompete rivals for fertilizations may make demands on cognitive traits—the mechanisms by which an animal acquires, processes, stores and acts upon information from its environment. Consequently, cognitive traits potentially undergo sexual selection in some mating systems. We investigated the role of cognitive traits on the reproductive performance of male rose bitterling (Rhodeus ocellatus), a freshwater fish with a complex mating system and alternative mating tactics. We quantified the learning accuracy of males and females in a spatial learning task and scored them for learning accuracy. Males were subsequently allowed to play the roles of a guarder and a sneaker in competitive mating trials, with reproductive success measured using paternity analysis. We detected a significant interaction between male mating role and learning accuracy on reproductive success, with the best-performing males in maze trials showing greater reproductive success in a sneaker role than as a guarder. Using a cross-classified breeding design, learning accuracy was demonstrated to be heritable, with significant additive maternal and paternal effects. Our results imply that male cognitive traits may undergo intra-sexual selection. PMID:26041347
Pressure-Sensitive Paint Measurements on Surfaces with Non-Uniform Temperature
NASA Technical Reports Server (NTRS)
Bencic, Timothy J.
1999-01-01
Pressure-sensitive paint (PSP) has become a useful tool to augment conventional pressure taps in measuring the surface pressure distribution of aerodynamic components in wind tunnel testing. While the PSP offers the advantage of a non-intrusive global mapping of the surface pressure, one prominent drawback to the accuracy of this technique is the inherent temperature sensitivity of the coating's luminescent intensity. A typical aerodynamic surface PSP test has relied on the coated surface to be both spatially and temporally isothermal, along with conventional instrumentation for an in situ calibration to generate the highest accuracy pressure mappings. In some tests however, spatial and temporal thermal gradients are generated by the nature of the test as in a blowing jet impinging on a surface. In these cases, the temperature variations on the painted surface must be accounted for in order to yield high accuracy and reliable data. A new temperature correction technique was developed at NASA Lewis to collapse a "family" of PSP calibration curves to a single intensity ratio versus pressure curve. This correction allows a streamlined procedure to be followed whether or not temperature information is used in the data reduction of the PSP. This paper explores the use of conventional instrumentation such as thermocouples and pressure taps along with temperature-sensitive paint (TSP) to correct for the thermal gradients that exist in aeropropulsion PSP tests. Temperature corrected PSP measurements for both a supersonic mixer ejector and jet cavity interaction tests are presented.
Validation of the MODIS Collection 6 MCD64 Global Burned Area Product
NASA Astrophysics Data System (ADS)
Boschetti, L.; Roy, D. P.; Giglio, L.; Stehman, S. V.; Humber, M. L.; Sathyachandran, S. K.; Zubkova, M.; Melchiorre, A.; Huang, H.; Huo, L. Z.
2017-12-01
The research, policy and management applications of satellite products place a high priority on rigorously assessing their accuracy. A number of NASA, ESA and EU funded global and continental burned area products have been developed using coarse spatial resolution satellite data, and have the potential to become part of a long-term fire Essential Climate Variable. These products have usually been validated by comparison with reference burned area maps derived by visual interpretation of Landsat or similar spatial resolution data selected on an ad hoc basis. More optimally, a design-based validation method should be adopted, characterized by the selection of reference data via probability sampling. Design based techniques have been used for annual land cover and land cover change product validation, but have not been widely used for burned area products, or for other products that are highly variable in time and space (e.g. snow, floods, other non-permanent phenomena). This has been due to the challenge of designing an appropriate sampling strategy, and to the cost and limited availability of independent reference data. This paper describes the validation procedure adopted for the latest Collection 6 version of the MODIS Global Burned Area product (MCD64, Giglio et al, 2009). We used a tri-dimensional sampling grid that allows for probability sampling of Landsat data in time and in space (Boschetti et al, 2016). To sample the globe in the spatial domain with non-overlapping sampling units, the Thiessen Scene Area (TSA) tessellation of the Landsat WRS path/rows is used. The TSA grid is then combined with the 16-day Landsat acquisition calendar to provide tri-dimensional elements (voxels). This allows the implementation of a sampling design where not only the location but also the time interval of the reference data is explicitly drawn through stratified random sampling. 
The novel sampling approach was used for the selection of a reference dataset consisting of 700 Landsat 8 image pairs, interpreted according to the CEOS Burned Area Validation Protocol (Boschetti et al., 2009). Standard quantitative burned area product accuracy measures that are important for different types of fire users (Boschetti et al., 2016; Roy and Boschetti, 2009; Boschetti et al., 2004) are computed to characterize the accuracy of the MCD64 product.
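The tri-dimensional stratified sampling described above can be sketched in outline. The voxel construction, stratum labels and per-stratum sample sizes below are illustrative stand-ins, not the exact MCD64 design:

```python
import random

def stratified_voxel_sample(voxels, strata, n_per_stratum, seed=0):
    """Draw a stratified random sample of space-time voxels.

    voxels : list of (tsa_id, period) tuples (TSA cell x 16-day window)
    strata : dict mapping each voxel to its stratum label
    """
    rng = random.Random(seed)
    by_stratum = {}
    for v in voxels:
        by_stratum.setdefault(strata[v], []).append(v)
    sample = []
    for label, members in sorted(by_stratum.items()):
        k = min(n_per_stratum, len(members))
        sample.extend(rng.sample(members, k))
    return sample

# Toy universe: 4 TSA cells x 3 periods, split into 2 hypothetical strata
voxels = [(tsa, t) for tsa in range(4) for t in range(3)]
strata = {v: ("high_fire" if v[0] < 2 else "low_fire") for v in voxels}
picked = stratified_voxel_sample(voxels, strata, n_per_stratum=2)
print(picked)  # 2 voxels drawn from each stratum
```

The stratum labels would, in the actual design, come from the two-level biome and fire-activity stratification.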
Validating long-term satellite-derived disturbance products: the case of burned areas
NASA Astrophysics Data System (ADS)
Boschetti, L.; Roy, D. P.
2015-12-01
The potential research, policy and management applications of satellite products place a high priority on providing statements about their accuracy. A number of NASA, ESA and EU funded global and continental burned area products have been developed using coarse spatial resolution satellite data, and have the potential to become part of a long-term fire Climate Data Record. These products have usually been validated by comparison with reference burned area maps derived by visual interpretation of Landsat or similar spatial resolution data selected on an ad hoc basis. More optimally, a design-based validation method should be adopted that is characterized by the selection of reference data via a probability sampling that can subsequently be used to compute accuracy metrics, taking into account the sampling probability. Design-based techniques have been used for annual land cover and land cover change product validation, but have not been widely used for burned area products, or for the validation of global products that are highly variable in time and space (e.g. snow, floods or other non-permanent phenomena). This has been due to the challenge of designing an appropriate sampling strategy, and to the cost of collecting independent reference data. We propose a tri-dimensional sampling grid that allows for probability sampling of Landsat data in time and in space. To sample the globe in the spatial domain with non-overlapping sampling units, the Thiessen Scene Area (TSA) tessellation of the Landsat WRS path/rows is used. The TSA grid is then combined with the 16-day Landsat acquisition calendar to provide tri-dimensional elements (voxels). This allows the implementation of a sampling design where not only the location but also the time interval of the reference data is explicitly drawn by probability sampling. The proposed sampling design is a stratified random sampling, with two-level stratification of the voxels based on biomes and fire activity (Figure 1).
The novel validation approach, used for the MODIS and forthcoming VIIRS global burned area products, is a general one: it could be applied to other global products that are highly variable in space and time, and it is required for assessing the accuracy of climate records. The approach is demonstrated using a 1-year dataset of MODIS fire products.
Validation of spatially resolved all sky imager derived DNI nowcasts
NASA Astrophysics Data System (ADS)
Kuhn, Pascal; Wilbert, Stefan; Schüler, David; Prahl, Christoph; Haase, Thomas; Ramirez, Lourdes; Zarzalejo, Luis; Meyer, Angela; Vuilleumier, Laurent; Blanc, Philippe; Dubrana, Jean; Kazantzidis, Andreas; Schroedter-Homscheidt, Marion; Hirsch, Tobias; Pitz-Paal, Robert
2017-06-01
Mainly due to clouds, Direct Normal Irradiance (DNI) displays short-term local variabilities affecting the efficiency of concentrating solar power (CSP) plants. To enable efficient plant operation, DNI nowcasts at high spatial and temporal resolution for 15 to 30 minutes ahead are required. Ground-based All Sky Imagers (ASI) can be used to detect, track and predict 3D positions of clouds possibly shading the plant. The accuracy and reliability of these ASI-derived DNI nowcasts must be known to allow their application in solar power plants. Within the framework of the European project DNICast, an ASI-based nowcasting system was developed and implemented at the Plataforma Solar de Almería (PSA). Its validation methodology and validation results are presented in this work. The nowcasting system outperforms persistence forecasts for volatile irradiance situations.
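The abstract does not give the exact validation metric, but a common way to report nowcast performance against persistence is a skill score s = 1 - RMSE_nowcast / RMSE_persistence. The function name and toy DNI values below are illustrative assumptions:

```python
import math

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def forecast_skill(nowcast, observed, lead):
    """Skill of a DNI nowcast relative to persistence at a given lead time.

    Persistence predicts DNI(t + lead) = DNI(t). Skill > 0 means the
    nowcast beats persistence; skill = 1 is a perfect forecast.
    """
    persistence = observed[:-lead]  # DNI now, used as the naive forecast
    target = observed[lead:]        # DNI actually observed `lead` steps later
    fc = nowcast[lead:]
    return 1.0 - rmse(fc, target) / rmse(persistence, target)

# Toy series (W/m^2): a cloud passage that the nowcast anticipates
obs = [800, 790, 400, 380, 760, 800]
now = [800, 785, 410, 390, 755, 795]
print(round(forecast_skill(now, obs, lead=1), 3))
```

For a volatile situation like this toy cloud passage, persistence misses the ramps and the skill is strongly positive.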
Ouweltjes, W; Gussekloo, S W S; Spoor, C W; van Leeuwen, J L
2016-02-01
Claw and locomotion problems are widespread in ungulates. Although it is presumed that mechanical overload is an important contributor to claw tissue damage and impaired locomotion, deformation and claw injury as a result of mechanical loading has been poorly quantified and, as a result, practical solutions to reduce such lesions have been established mostly through trial and error. In this study, an experimental technique was developed that allowed the measurement under controlled loading regimes of minute deformations in the lower limbs of dissected specimens from large ungulates. Roentgen stereophotogrammetric analysis (RSA) was applied to obtain 3D marker coordinates with an accuracy of up to 0.1 mm with optimal contrast and to determine changes in the spatial conformation. A force plate was used to record the applied forces in three dimensions. The results obtained for a test sample (cattle hind leg) under three loading conditions showed that small load-induced deformations and translations as well as small changes in centres of force application could be measured. Accuracy of the order of 0.2-0.3 mm was feasible under practical circumstances with suboptimal contrast. These quantifications of claw deformation during loading improve understanding of the spatial strain distribution as a result of external loading and the risks of tissue overload. The method promises to be useful in determining load-deformation relationships for a wide variety of specimens and circumstances. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Marshall, Hans-Peter
The distribution of water in the snow-covered areas of the world is an important climate change indicator, and it is a vital component of the water cycle. At local and regional scales, the snow water equivalent (SWE), the amount of liquid water a given area of the snowpack represents, is very important for water resource management, flood forecasting, and prediction of available hydropower energy. Measurements from only a few automatic weather stations, such as the SNOTEL network, or sparse manual snowpack measurements are typically extrapolated for estimating SWE over an entire basin. Widespread spatial variability in the distribution of SWE and snowpack stratigraphy at local scales causes large errors in these basin estimates. Remote sensing measurements offer a promising alternative, due to their large spatial coverage and high temporal resolution. Although snow cover extent can currently be estimated from remote sensing data, accurately quantifying SWE from remote sensing measurements has remained difficult, due to a high sensitivity to variations in grain size and stratigraphy. In alpine snowpacks, the large degree of spatial variability of snowpack properties and geometry, caused by topographic, vegetative, and microclimatic effects, also makes prediction of snow avalanches very difficult. Ground-based radar and penetrometer measurements can quickly and accurately characterize snowpack properties and SWE in the field. A portable lightweight radar was developed, and allows a real-time estimate of SWE to within 10%, as well as measurements of depths of all major density transitions within the snowpack. New analysis techniques developed in this thesis allow accurate estimates of mechanical properties and an index of grain size to be retrieved from the SnowMicroPenetrometer. 
These two tools together allow rapid characterization of the snowpack's geometry, mechanical properties, and SWE, and are used to guide a finite element model to study the stress distribution on a slope. The ability to accurately characterize snowpack properties at much higher resolutions and spatial extent than previously possible will hopefully help lead to a more complete understanding of spatial variability, its effect on remote sensing measurements and snow slope stability, and result in improvements in avalanche prediction and accuracy of SWE estimates from space.
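The radar SWE retrieval described above depends on a density-dependent wave speed in snow. A minimal sketch of the travel-time-to-depth-to-SWE chain follows; the permittivity-density relation used here is a commonly assumed empirical form, not the thesis' calibrated model:

```python
C = 299_792_458.0  # speed of light in vacuum (m/s)

def permittivity_dry_snow(density_kg_m3):
    """Real relative permittivity of dry snow.

    Assumed empirical relation: eps = (1 + 0.845 * rho)^2, rho in g/cm^3.
    """
    rho = density_kg_m3 / 1000.0
    return (1.0 + 0.845 * rho) ** 2

def depth_from_travel_time(t_two_way_s, density_kg_m3):
    """Snow depth from a radar two-way travel time."""
    v = C / permittivity_dry_snow(density_kg_m3) ** 0.5
    return v * t_two_way_s / 2.0

def swe_mm(depth_m, density_kg_m3, rho_water=1000.0):
    """Snow water equivalent in mm: depth x (rho_snow / rho_water)."""
    return 1000.0 * depth_m * density_kg_m3 / rho_water

# 2 m of snow at 300 kg/m^3 holds 600 mm of water equivalent
print(swe_mm(2.0, 300.0))  # → 600.0
```

In practice the bulk density is itself uncertain, which is one reason the combined radar-penetrometer characterization matters.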
Two-stream Convolutional Neural Network for Methane Emissions Quantification
NASA Astrophysics Data System (ADS)
Wang, J.; Ravikumar, A. P.; McGuire, M.; Bell, C.; Tchapmi, L. P.; Brandt, A. R.
2017-12-01
Methane, a key component of natural gas, has a 25x higher global warming potential than carbon dioxide on a 100-year basis. Accurately monitoring and mitigating methane emissions require cost-effective detection and quantification technologies. Optical gas imaging, one of the most commonly used leak detection technologies, adopted by the Environmental Protection Agency, cannot estimate leak sizes. In this work, we harness advances in computer science to allow for rapid and automatic leak quantification. In particular, we utilize two-stream deep Convolutional Networks (ConvNets) to estimate leak size by capturing complementary spatial information from still plume frames and temporal information from plume motion between frames. We built large leak datasets for training and evaluation purposes by collecting about 20 videos (i.e., 397,400 frames) of leaks. The videos were recorded at six distances from the source, covering 10-60 ft. Leak sources included natural gas well-heads, separators, and tanks. All frames were labeled with a true leak size, binned into eight levels ranging from 0 to 140 MCFH. Preliminary analysis shows that two-stream ConvNets provide a significant accuracy advantage over single-stream ConvNets. The spatial-stream ConvNet achieves an accuracy of 65.2% by extracting important features, including texture, plume area, and pattern. The temporal stream, fed by the results of optical flow analysis, results in an accuracy of 58.3%. The integration of the two streams gives a combined accuracy of 77.6%. For future work, we will split the training and testing datasets in distinct ways in order to test the generalization of the algorithm to different leak sources. Several analytic metrics, including the confusion matrix and visualization of key features, will be used to understand accuracy rates and occurrences of false positives.
The quantification algorithm can help to find and fix super-emitters, and improve the cost-effectiveness of leak detection and repair programs.
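The fusion of the two streams can be illustrated with a minimal late-fusion sketch. The toy logits, class count and averaging weight are illustrative assumptions; the abstract does not specify the actual fusion scheme or network architecture:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse_two_streams(spatial_logits, temporal_logits, w_spatial=0.5):
    """Late fusion: average the class probabilities of the two streams,
    then take the most likely leak-size class."""
    p_s = softmax(spatial_logits)
    p_t = softmax(temporal_logits)
    p = w_spatial * p_s + (1.0 - w_spatial) * p_t
    return p.argmax(axis=-1)

# Toy example: 3 frames, 8 leak-size classes (the 0-140 MCFH bins)
rng = np.random.default_rng(0)
spatial = rng.normal(size=(3, 8))   # stand-in for spatial-stream logits
temporal = rng.normal(size=(3, 8))  # stand-in for temporal-stream logits
pred = fuse_two_streams(spatial, temporal)
print(pred.shape)  # one predicted class per frame
```

Weighting the streams unevenly lets the better-performing spatial stream dominate when the two disagree.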
NASA Astrophysics Data System (ADS)
Underwood, Emma C.; Ustin, Susan L.; Ramirez, Carlos M.
2007-01-01
We explored the potential of detecting three target invasive species: iceplant (Carpobrotus edulis), jubata grass (Cortaderia jubata), and blue gum (Eucalyptus globulus) at Vandenberg Air Force Base, California. We compared the accuracy of mapping six communities (intact coastal scrub, iceplant invaded coastal scrub, iceplant invaded chaparral, jubata grass invaded chaparral, blue gum invaded chaparral, and intact chaparral) using four images with different combinations of spatial and spectral resolution: hyperspectral AVIRIS imagery (174 wavebands, 4 m spatial resolution), spatially degraded AVIRIS (174 bands, 30 m), spectrally degraded AVIRIS (6 bands, 4 m), and both spatially and spectrally degraded AVIRIS (6 bands, 30 m, i.e., simulated Landsat ETM data). The overall success rate for classifying the six classes was 75% (kappa 0.7) using full resolution AVIRIS, 58% (kappa 0.5) for the spatially degraded AVIRIS, 42% (kappa 0.3) for the spectrally degraded AVIRIS, and 37% (kappa 0.3) for the spatially and spectrally degraded AVIRIS. A true Landsat ETM image was also classified to illustrate that the results from the simulated ETM data were representative, which provided an accuracy of 50% (kappa 0.4). Mapping accuracies using different resolution images are evaluated in the context of community heterogeneity (species richness, diversity, and percent species cover). Findings illustrate that higher mapping accuracies are achieved with images possessing high spectral resolution, thus capturing information across the visible and reflected infrared solar spectrum. Understanding the tradeoffs in spectral and spatial resolution can assist land managers in deciding the most appropriate imagery with respect to target invasives and community characteristics.
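The kappa values quoted alongside the overall accuracies are standard confusion-matrix arithmetic. A minimal sketch with a toy two-class matrix (not the study's six-class results):

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows = reference class, columns = mapped class)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                       # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2   # chance agreement
    return (po - pe) / (1.0 - pe)

# Toy 2-class confusion matrix (100 validation pixels)
cm = [[40, 10],
      [ 5, 45]]
print(round(cohens_kappa(cm), 2))  # → 0.7
```

Kappa discounts the agreement expected by chance, which is why it runs below the raw percent accuracy throughout the abstract.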
Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image
NASA Astrophysics Data System (ADS)
Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.
2018-04-01
At present, the horizontal accuracy of high spatial resolution remotely sensed orthophoto images is usually assessed, during inspection and acceptance, with a set of check points that share the same accuracy and reliability. However, in areas where field measurement is difficult and high-accuracy reference data are scarce, such a uniform set of check points is hard to obtain, making it difficult to test and evaluate the horizontal accuracy of the orthophoto image. This uncertainty has become a bottleneck for the application of spaceborne high-resolution remote sensing imagery and for expanding its scope of service. This paper therefore proposes a new method for testing the horizontal accuracy of orthophoto images that uses check points of differing accuracy and reliability, sourced from both high-accuracy reference data and field measurement. The new method solves the horizontal accuracy assessment problem in these difficult areas and provides a basis for delivering reliable orthophoto images to users.
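One simple way to combine check points of unequal reliability, as the proposed method requires, is a reliability-weighted RMSE. This sketch illustrates the idea only; the weights and values are hypothetical and this is not the paper's exact estimator:

```python
import math

def weighted_horizontal_rmse(errors_m, weights):
    """Weighted horizontal RMSE over check points of unequal reliability.

    errors_m: horizontal offsets (m) between orthophoto and check points
    weights : relative reliability of each point (e.g. higher for field
              GNSS measurements than for reference-map points)
    """
    num = sum(w * e * e for w, e in zip(weights, errors_m))
    return math.sqrt(num / sum(weights))

# Mixed check set: 3 GNSS points (weight 1.0), 3 map-derived (weight 0.4)
errors = [0.8, 1.1, 0.9, 2.0, 1.7, 2.3]
weights = [1.0, 1.0, 1.0, 0.4, 0.4, 0.4]
print(round(weighted_horizontal_rmse(errors, weights), 2))  # → 1.34
```

Down-weighting the noisier map-derived points keeps them from dominating the accuracy figure while still using their information.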
Entropy of space-time outcome in a movement speed-accuracy task.
Hsieh, Tsung-Yu; Pacheco, Matheus Maia; Newell, Karl M
2015-12-01
The experiment reported was set up to investigate the space-time entropy of movement outcome as a function of a range of spatial (10, 20 and 30 cm) and temporal (250-2500 ms) criteria in a discrete aiming task. The variability and information entropy of the movement spatial and temporal errors considered separately increased and decreased on the respective dimension as a function of an increment of movement velocity. However, the joint space-time entropy was lowest when the relative contribution of spatial and temporal task criteria was comparable (i.e., mid-range of space-time constraints), and it increased with a greater trade-off between spatial or temporal task demands, revealing a U-shaped function across space-time task criteria. The traditional speed-accuracy functions of spatial error and temporal error considered independently mapped to this joint space-time U-shaped entropy function. The trade-off in movement tasks with joint space-time criteria is between spatial error and timing error, rather than movement speed and accuracy. Copyright © 2015 Elsevier B.V. All rights reserved.
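The joint space-time entropy of movement outcome can be estimated from a 2-D histogram of the spatial and temporal errors. A minimal sketch with simulated errors (the bin count and error distributions are illustrative, not the study's analysis):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (bits) of a discrete probability vector."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def space_time_entropy(spatial_err, temporal_err, bins=8):
    """Joint entropy of the 2-D distribution of spatial and temporal
    movement errors, from a joint histogram of trial outcomes."""
    h, _, _ = np.histogram2d(spatial_err, temporal_err, bins=bins)
    return shannon_entropy(h.ravel() / h.sum())

# Simulated outcomes of 1000 aiming trials
rng = np.random.default_rng(1)
sp = rng.normal(0.0, 1.0, 1000)   # spatial error (cm)
tm = rng.normal(0.0, 50.0, 1000)  # timing error (ms)
H = space_time_entropy(sp, tm)
print(round(H, 2))
```

Computing H across conditions with different spatial and temporal criteria would trace out the U-shaped function the abstract describes.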
NASA Technical Reports Server (NTRS)
Wrigley, R. C.; Acevedo, W.; Alexander, D.; Buis, J.; Card, D.
1984-01-01
A factorial-design experiment was conducted to test the effects of the improved spatial, spectral and radiometric characteristics of the Thematic Mapper (TM), in comparison to the Multispectral Scanner (MSS), on the classification accuracy of land cover types. High-altitude aircraft scanner data from the Airborne Thematic Mapper instrument were acquired over central California in August 1983 and used to simulate Thematic Mapper data, as well as all combinations of the three characteristics, for eight data sets in all. Results for the training sites (field-center pixels) showed better classification accuracies for MSS spatial resolution, TM spectral bands and TM radiometry, in order of importance.
NASA Technical Reports Server (NTRS)
Lester, D. F.; Harvey, P. M.; Joy, M.; Ellis, H. B., Jr.
1986-01-01
Far-infrared continuum studies from the Kuiper Airborne Observatory are described that are designed to fully exploit the small-scale spatial information that this facility can provide. This work gives the clearest picture to date of the structure of galactic and extragalactic star forming regions in the far infrared. Work is presently being done with slit scans taken simultaneously at 50 and 100 microns, yielding one-dimensional data. Scans of sources in different directions have been used to get certain information on two-dimensional structure. Planned work with linear arrays will allow us to generalize our techniques to two-dimensional image restoration. For faint sources, spatial information at the diffraction limit of the telescope is obtained, while for brighter sources, nonlinear deconvolution techniques have allowed us to improve over the diffraction limit by as much as a factor of four. Information on the details of the color temperature distribution is derived as well. This is made possible by the accuracy with which the instrumental point-source profile (PSP) is determined at both wavelengths. While these two PSPs are different, data at different wavelengths can be compared by proper spatial filtering. Considerable effort has been devoted to implementing deconvolution algorithms. Nonlinear deconvolution methods offer the potential of superresolution -- that is, inference of power at spatial frequencies that exceed D/λ. This potential is made possible by the implicit assumption by the algorithm of positivity of the deconvolved data, a universally justifiable constraint for photon processes. We have tested two nonlinear deconvolution algorithms on our data: the Richardson-Lucy (R-L) method and the Maximum Entropy Method (MEM). The limits of image deconvolution techniques for achieving spatial resolution are addressed.
NASA Technical Reports Server (NTRS)
Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.
1991-01-01
Spatial adaption procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaption procedures were developed and implemented within a two-dimensional unstructured-grid upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. A detailed description is given of the enrichment and coarsening procedures and comparisons with alternative results and experimental data are presented to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady transonic results, obtained using spatial adaption for the NACA 0012 airfoil, are shown to be of high spatial accuracy, primarily in that the shock waves are very sharply captured. The results were obtained with a computational savings of a factor of approximately fifty-three for a steady case and as much as twenty-five for the unsteady cases.
NASA Technical Reports Server (NTRS)
Rausch, Russ D.; Yang, Henry T. Y.; Batina, John T.
1991-01-01
Spatial adaption procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaption procedures were developed and implemented within a two-dimensional unstructured-grid upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. The paper gives a detailed description of the enrichment and coarsening procedures and presents comparisons with alternative results and experimental data to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady transonic results, obtained using spatial adaption for the NACA 0012 airfoil, are shown to be of high spatial accuracy, primarily in that the shock waves are very sharply captured. The results were obtained with a computational savings of a factor of approximately fifty-three for a steady case and as much as twenty-five for the unsteady cases.
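The enrichment/coarsening decision can be illustrated with a minimal 1-D stand-in: flag cells by the size of the local solution jump, enriching where gradients are steep (e.g. near a captured shock) and coarsening where the solution is smooth. The quantile thresholds are illustrative assumptions, not the Euler code's actual criterion:

```python
import numpy as np

def flag_cells_for_adaption(values, enrich_frac=0.1, coarsen_frac=0.1):
    """Flag 1-D cells for mesh enrichment or coarsening.

    Cells whose absolute solution jump is in the top `enrich_frac`
    are flagged for enrichment; those in the bottom `coarsen_frac`
    are flagged for coarsening.
    """
    jumps = np.abs(np.diff(values))
    hi = np.quantile(jumps, 1.0 - enrich_frac)
    lo = np.quantile(jumps, coarsen_frac)
    return jumps >= hi, jumps <= lo

# Toy field with a sharp "shock" near x = 0.5
x = np.linspace(0.0, 1.0, 101)
u = np.tanh((x - 0.5) / 0.01)
enrich, coarsen = flag_cells_for_adaption(u)
print(int(enrich.sum()), int(coarsen.sum()))
```

Concentrating points this way is what lets the adapted grid capture shocks sharply at a fraction of the cost of uniform refinement.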
Pham, Quang Duc; Kusumi, Yuichi; Hasegawa, Satoshi; Hayasaki, Yoshio
2012-10-01
We propose a new method for three-dimensional (3D) position measurement of nanoparticles using an in-line digital holographic microscope. The method improves the signal-to-noise ratio of the amplitude of the interference fringes to achieve higher accuracy in the position measurement by increasing weak scattered light from a nanoparticle relative to the reference light by using a low spatial frequency attenuation filter. We demonstrated improvements in the signal-to-noise ratio of the optical system and in the contrast of the interference fringes, allowing the 3D positions of nanoparticles to be determined more precisely.
Image quality assessment by preprocessing and full reference model combination
NASA Astrophysics Data System (ADS)
Bianco, S.; Ciocca, G.; Marini, F.; Schettini, R.
2009-01-01
This paper focuses on full-reference image quality assessment and presents different computational strategies aimed at improving the robustness and accuracy of some well-known and widely used state-of-the-art models, namely the Structural Similarity approach (SSIM) by Wang and Bovik and the S-CIELAB spatial-color model by Zhang and Wandell. We investigate the hypothesis that combining error images with a visual attention model could allow a better fit of the psycho-visual data of the LIVE Image Quality Assessment Database Release 2. We show that the proposed quality assessment metric better correlates with the experimental data.
Earth Rotation Dynamics: Review and Prospects
NASA Technical Reports Server (NTRS)
Chao, Benjamin F.
2004-01-01
Modern space geodetic measurement of Earth rotation variations, particularly by means of the VLBI technique, has over the years allowed studies of Earth rotation dynamics to advance in ever-increasing precision, accuracy, and temporal resolution. A review will be presented on our understanding of the geophysical and climatic causes, or "excitations", for length-of-day change, polar motion, and nutations. These excitation sources come from mass transports that constantly take place in the Earth system comprised of the atmosphere, hydrosphere, cryosphere, lithosphere, mantle, and the cores. In this sense, together with other space geodetic measurements of time-variable gravity and geocenter motion, Earth rotation variations become a remote-sensing tool for the integral of all mass transports, providing valuable information about the latter on a wide range of spatial and temporal scales. Future prospects with respect to geophysical studies with even higher accuracy and resolution will be discussed.
Earth Rotational Variations Excited by Geophysical Fluids
NASA Technical Reports Server (NTRS)
Chao, Benjamin F.
2004-01-01
Modern space geodetic measurement of Earth rotation variations, particularly by means of the VLBI technique, has over the years allowed studies of Earth rotation dynamics to advance in ever-increasing precision, accuracy, and temporal resolution. A review will be presented on our understanding of the geophysical and climatic causes, or "excitations", for length-of-day change, polar motion, and nutations. These excitation sources come from mass transports that constantly take place in the Earth system comprised of the atmosphere, hydrosphere, cryosphere, lithosphere, mantle, and the cores. In this sense, together with other space geodetic measurements of time-variable gravity and geocenter motion, Earth rotation variations become a remote-sensing tool for the integral of all mass transports, providing valuable information about the latter on a wide range of spatial and temporal scales. Future prospects with respect to geophysical studies with even higher accuracy and resolution will be discussed.
Hanks, E.M.; Hooten, M.B.; Baker, F.A.
2011-01-01
Ecological spatial data often come from multiple sources, varying in extent and accuracy. We describe a general approach to reconciling such data sets through the use of the Bayesian hierarchical framework. This approach provides a way for the data sets to borrow strength from one another while allowing for inference on the underlying ecological process. We apply this approach to study the incidence of eastern spruce dwarf mistletoe (Arceuthobium pusillum) in Minnesota black spruce (Picea mariana). A Minnesota Department of Natural Resources operational inventory of black spruce stands in northern Minnesota found mistletoe in 11% of surveyed stands, while a small, specific-pest survey found mistletoe in 56% of the surveyed stands. We reconcile these two surveys within a Bayesian hierarchical framework and predict that 35-59% of black spruce stands in northern Minnesota are infested with dwarf mistletoe. © 2011 by the Ecological Society of America.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayashida, Misa; Malac, Marek; Egerton, Ray F.
Electron tomography is a method whereby a three-dimensional reconstruction of a nanoscale object is obtained from a series of projected images measured in a transmission electron microscope. We developed an electron-diffraction method to measure the tilt and azimuth angles, with Kikuchi lines used to align a series of diffraction patterns obtained with each image of the tilt series. Since it is based on electron diffraction, the method is not affected by sample drift and is not sensitive to sample thickness, whereas tilt angle measurement and alignment using fiducial-marker methods are affected by both sample drift and thickness. The accuracy of the diffraction method benefits reconstructions with a large number of voxels, where both high spatial resolution and a large field of view are desired. The diffraction method allows both the tilt and azimuth angle to be measured, while fiducial marker methods typically treat the tilt and azimuth angle as an unknown parameter. The diffraction method can be also used to estimate the accuracy of the fiducial marker method, and the sample-stage accuracy. A nano-dot fiducial marker measurement differs from a diffraction measurement by no more than ±1°.
Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas
2014-01-01
Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problem associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. 
Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match generated estimates. Thus, the framework allows for a wide range of image similarity block match metric and physical modeling combinations. PMID:24694135
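The block matching step underlying the method can be sketched in its simplest form: exhaustively search a window for the displacement minimising sum-of-squared differences. The block and search sizes are illustrative; the paper's actual similarity metric and B-spline fitting are not reproduced here:

```python
import numpy as np

def block_match(fixed, moving, center, block=5, search=7):
    """Find the displacement (dy, dx) of the block around `center` in
    `fixed` that best matches `moving`, by minimising the sum of
    squared differences (SSD) over an exhaustive search window."""
    r = block // 2
    cy, cx = center
    ref = fixed[cy - r:cy + r + 1, cx - r:cx + r + 1]
    best, best_d = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            cand = moving[y - r:y + r + 1, x - r:x + r + 1]
            if cand.shape != ref.shape:
                continue  # candidate block falls off the image
            ssd = float(((cand - ref) ** 2).sum())
            if best is None or ssd < best:
                best, best_d = ssd, (dy, dx)
    return best_d

# Synthetic test: `moving` is `fixed` shifted by (2, -3)
rng = np.random.default_rng(2)
fixed = rng.normal(size=(40, 40))
moving = np.roll(np.roll(fixed, 2, axis=0), -3, axis=1)
print(block_match(fixed, moving, center=(20, 20)))  # → (2, -3)
```

In the paper's framework, many such matches form the point cloud to which the B-spline displacement field is fit.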
Spatial decoupling of targets and flashing stimuli for visual brain-computer interfaces
NASA Astrophysics Data System (ADS)
Waytowich, Nicholas R.; Krusienski, Dean J.
2015-06-01
Objective. Recently, paradigms using code-modulated visual evoked potentials (c-VEPs) have proven to achieve among the highest information transfer rates for noninvasive brain-computer interfaces (BCIs). One issue with current c-VEP paradigms, and visual-evoked paradigms in general, is that they require direct foveal fixation of the flashing stimuli. These interfaces are often visually unpleasant and can be irritating and fatiguing to the user, thus adversely impacting practical performance. In this study, a novel c-VEP BCI paradigm is presented that attempts to perform spatial decoupling of the targets and flashing stimuli using two distinct concepts: spatial separation and boundary positioning. Approach. For the paradigm, the flashing stimuli form a ring that encompasses the intended non-flashing targets, which are spatially separated from the stimuli. The user fixates on the desired target, which is classified using the changes to the EEG induced by the flashing stimuli located in the non-foveal visual field. Additionally, a subset of targets is also positioned at or near the stimulus boundaries, which decouples targets from direct association with a single stimulus. This allows a greater number of target locations for a fixed number of flashing stimuli. Main results. Results from 11 subjects showed practical classification accuracies for the non-foveal condition, with comparable performance to the direct-foveal condition for longer observation lengths. Online results from 5 subjects confirmed the offline results with an average accuracy across subjects of 95.6% for a 4-target condition. The offline analysis also indicated that targets positioned at or near the boundaries of two stimuli could be classified with the same accuracy as traditional superimposed (non-boundary) targets. Significance. 
The implications of this research are that c-VEPs can be detected and accurately classified to achieve comparable BCI performance without requiring potentially irritating direct foveation of flashing stimuli. Furthermore, this study shows that it is possible to increase the number of targets beyond the number of stimuli without degrading performance. Given the superior information transfer rate of c-VEP paradigms, these results can lead to the development of more practical and ergonomic BCIs.
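c-VEP classification typically works by correlating the recorded EEG epoch against circularly shifted copies of a base code template, one shift per target. The simulation below is a hedged toy illustration (random code, additive noise), not the study's actual stimulation codes or classifier:

```python
import numpy as np

def classify_cvep(eeg, template, n_targets, shift):
    """Classify the fixated target from a code-modulated VEP epoch.

    Each target's code is the base sequence circularly shifted by
    `shift * target_index` samples; the fixated target is the shift
    whose template correlates best with the recorded epoch.
    """
    scores = []
    for k in range(n_targets):
        t = np.roll(template, k * shift)
        scores.append(float(np.corrcoef(eeg, t)[0, 1]))
    return int(np.argmax(scores))

# Toy simulation: a 63-sample binary code, 4 targets, target 2 fixated
rng = np.random.default_rng(3)
code = rng.choice([-1.0, 1.0], size=63)
true_target = 2
eeg = np.roll(code, true_target * 15) + 0.3 * rng.normal(size=63)
print(classify_cvep(eeg, code, n_targets=4, shift=15))
```

Real paradigms use m-sequences, whose near-zero circular autocorrelation makes the shifted templates nearly orthogonal and the correlation scores well separated.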
Improving Genomic Prediction in Cassava Field Experiments Using Spatial Analysis.
Elias, Ani A; Rabbi, Ismail; Kulakow, Peter; Jannink, Jean-Luc
2018-01-04
Cassava (Manihot esculenta Crantz) is an important staple food in sub-Saharan Africa. Breeding experiments were conducted in cassava at the International Institute of Tropical Agriculture to select elite parents. Taking into account the heterogeneity in the field while evaluating these trials can increase the accuracy in estimation of breeding values. We used an exploratory approach using the parametric spatial kernels Power, Spherical, and Gaussian to determine the best kernel for a given scenario. The spatial kernel was fit simultaneously with a genomic kernel in a genomic selection model. Predictability of these models was tested through a 10-fold cross-validation method repeated five times. The best model was chosen as the one with the lowest prediction root mean squared error compared to that of the base model having no spatial kernel. Results from our real and simulated data studies indicated that predictability can be increased by accounting for spatial variation irrespective of the heritability of the trait. In real data scenarios we observed that the accuracy can be increased by a median value of 3.4%. Through simulations, we showed that a 21% increase in accuracy can be achieved. We also found that Range (row) directional spatial kernels, mostly Gaussian, explained the spatial variance in 71% of the scenarios when spatial correlation was significant. Copyright © 2018 Elias et al.
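The Gaussian spatial kernel that dominated the significant scenarios can be sketched from plot coordinates. The field layout and range parameter below are illustrative assumptions, not the study's fitted values:

```python
import numpy as np

def gaussian_spatial_kernel(coords, theta):
    """Gaussian spatial covariance kernel over field plot positions.

    K[i, j] = exp(-(d_ij / theta)^2), with d_ij the Euclidean distance
    between plots i and j (coordinates given as (range, column)).
    """
    c = np.asarray(coords, dtype=float)
    d2 = ((c[:, None, :] - c[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / theta**2)

# 3 x 2 field layout: (range, column) positions of 6 plots
coords = [(r, c) for r in range(3) for c in range(2)]
K = gaussian_spatial_kernel(coords, theta=2.0)
print(K.shape)  # (6, 6)
```

In the genomic selection model, a kernel like K would enter as the covariance of a spatial random effect, fitted jointly with the genomic kernel.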
NASA Astrophysics Data System (ADS)
Poyatos, Rafael; Sus, Oliver; Badiella, Llorenç; Mencuccini, Maurizio; Martínez-Vilalta, Jordi
2018-05-01
The ubiquity of missing data in plant trait databases may hinder trait-based analyses of ecological patterns and processes. Spatially explicit datasets with information on intraspecific trait variability are rare but offer great promise in improving our understanding of functional biogeography. At the same time, they offer specific challenges in terms of data imputation. Here we compare statistical imputation approaches, using varying levels of environmental information, for five plant traits (leaf biomass to sapwood area ratio, leaf nitrogen content, maximum tree height, leaf mass per area and wood density) in a spatially explicit plant trait dataset of temperate and Mediterranean tree species (Ecological and Forest Inventory of Catalonia, IEFC, dataset for Catalonia, north-east Iberian Peninsula, 31 900 km2). We simulated gaps at different missingness levels (10-80 %) in a complete trait matrix, and we used overall trait means, species means, k nearest neighbours (kNN), ordinary and regression kriging, and multivariate imputation using chained equations (MICE) to impute missing trait values. We assessed these methods in terms of their accuracy and of their ability to preserve trait distributions, multi-trait correlation structure and bivariate trait relationships. The relatively good performance of mean and species mean imputations in terms of accuracy masked a poor representation of trait distributions and multivariate trait structure. Species identity improved MICE imputations for all traits, whereas forest structure and topography improved imputations for some traits. No method performed best consistently for the five studied traits, but, considering all traits and performance metrics, MICE informed by relevant ecological variables gave the best results. However, at higher missingness (> 30 %), species mean imputations and regression kriging tended to outperform MICE for some traits. 
MICE informed by relevant ecological variables allowed us to fill the gaps in the IEFC incomplete dataset (5495 plots) and quantify imputation uncertainty. Resulting spatial patterns of the studied traits in Catalan forests were broadly similar when using species means, regression kriging or the best-performing MICE application, but some important discrepancies were observed at the local level. Our results highlight the need to assess imputation quality beyond just imputation accuracy and show that including environmental information in statistical imputation approaches yields more plausible imputations in spatially explicit plant trait datasets.
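Of the imputation methods compared above, k nearest neighbours is the simplest to illustrate. The sketch below fills a missing trait value with the mean of that trait over the k plots closest in predictor space; the single-gap layout and all variable names are hypothetical simplifications, not the IEFC pipeline.

```python
import numpy as np

def knn_impute(X, k=3):
    """Impute NaNs in the single gappy column of X with the mean of that
    column over the k rows nearest in the remaining (complete) columns."""
    X = X.copy()
    j = int(np.where(np.isnan(X).any(axis=0))[0][0])   # the gappy column
    obs = ~np.isnan(X[:, j])
    feats = np.delete(X, j, axis=1)                    # predictor columns
    for i in np.where(~obs)[0]:
        d = np.linalg.norm(feats[obs] - feats[i], axis=1)
        nn = np.argsort(d)[:k]                         # k nearest complete rows
        X[i, j] = X[obs, j][nn].mean()
    return X

# One trait (column 1) with a gap; column 0 is a hypothetical predictor.
traits = np.array([[0.0, 10.0],
                   [1.0, 11.0],
                   [2.0, 12.0],
                   [3.0, 13.0],
                   [1.5, np.nan]])
filled = knn_impute(traits, k=2)
```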
NASA Astrophysics Data System (ADS)
Al-Doasari, Ahmad E.
The 1991 Gulf War caused massive environmental damage in Kuwait. Deposition of oil and soot droplets from hundreds of burning oil-wells created a layer of tarcrete on the desert surface covering over 900 km2. This research investigates the spatial change in the tarcrete extent from 1991 to 1998 using Landsat Thematic Mapper (TM) imagery and statistical modeling techniques. The pixel structure of TM data allows the spatial analysis of the change in tarcrete extent to be conducted at the pixel (cell) level within a geographical information system (GIS). There are two components to this research. The first is a comparison of three remote sensing classification techniques used to map the tarcrete layer. The second is a spatial-temporal analysis and simulation of tarcrete changes through time. The analysis focuses on an area of 389 km2 located south of the Al-Burgan oil field. Five TM images acquired in 1991, 1993, 1994, 1995, and 1998 were geometrically and atmospherically corrected. These images were classified into six classes: oil lakes; heavy, intermediate, light, and traces of tarcrete; and sand. The classification methods tested were unsupervised, supervised, and neural network supervised (fuzzy ARTMAP). Field data of tarcrete characteristics were collected to support the classification process and to evaluate the classification accuracies. Overall, the neural network method is more accurate (60 percent) than the other two methods; both the unsupervised and the supervised classification accuracy assessments resulted in 46 percent accuracy. The five classifications were used in a lagged autologistic model to analyze the spatial changes of the tarcrete through time. The autologistic model correctly identified overall tarcrete contraction between 1991-1993 and 1995-1998. However, tarcrete contraction between 1993-1994 and 1994-1995 was less well marked, in part because of classification errors in the maps from these time periods.
Initial simulations of tarcrete contraction with a cellular automaton model were not very successful. However, more accurate classifications could improve the simulations. This study illustrates how an empirical investigation using satellite images, field data, GIS, and spatial statistics can simulate dynamic land-cover change through the use of a discrete statistical and cellular automaton model.
2016-01-01
Moderate Resolution Imaging Spectroradiometer (MODIS) data form the basis for numerous land use and land cover (LULC) mapping and analysis frameworks at regional scale. Compared to other satellite sensors, the spatial, temporal and spectral specifications of MODIS are considered highly suitable for LULC classifications that support many different aspects of social, environmental and developmental research. The LULC mapping of this study was carried out in the context of the development of an evaluation approach for Zimbabwe's land reform program. Within the discourse about the success of this program, a lack of spatially explicit methods to produce objective data, such as on the extent of agricultural area, is apparent. We therefore assessed the suitability of moderate spatial and high temporal resolution imagery and phenological parameters to retrieve regional figures on the extent of cropland area in former freehold tenure in a series of 13 years from 2001-2013. Time-series data were processed with TIMESAT and stratified according to the agro-ecological potential zoning of Zimbabwe. Random Forest (RF) classifications were used to produce annual binary crop/non-crop maps which were evaluated with high spatial resolution data from other satellite sensors. We assessed the cropland products in former freehold tenure in terms of classification accuracy, inter-annual comparability and heterogeneity. Although general LULC patterns were depicted in the classification results and an overall accuracy of over 80% was achieved, user accuracies for rainfed agriculture remained below 65%. We conclude that phenological analysis has to be treated with caution when rainfed agriculture and grassland in semi-humid tropical regions have to be separated based on MODIS spectral data and phenological parameters.
Because classification results significantly underestimate redistributed commercial farmland in Zimbabwe, we argue that the method cannot be used to produce spatial information on land use that could be linked to tenure change. Hence the capabilities of moderate-resolution data for assessing Zimbabwe's land reform are limited. To make use of the unquestionable potential of MODIS time-series analysis, we propose an analysis of plant productivity, which allows annual growth and production of vegetation to be linked to ownership after Zimbabwe's land reform. PMID:27253327
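The gap reported above between overall accuracy (>80%) and user's accuracy for rainfed agriculture (<65%) is easy to reproduce from a confusion matrix. The sketch below uses hypothetical counts, not the study's actual error matrix:

```python
import numpy as np

# Hypothetical error matrix; rows = mapped class, columns = reference class.
# Classes: 0 = cropland, 1 = non-crop.
cm = np.array([[60,  40],    # mapped cropland: 60 correct, 40 false alarms
               [10, 390]])   # mapped non-crop

overall   = np.trace(cm) / cm.sum()        # overall accuracy
users     = np.diag(cm) / cm.sum(axis=1)   # user's accuracy (commission)
producers = np.diag(cm) / cm.sum(axis=0)   # producer's accuracy (omission)
```

Here an overall accuracy of 90% coexists with a 60% user's accuracy for cropland, mirroring the pattern reported above: the dominant non-crop class inflates the overall figure.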
Examining Impulse-Variability in Kicking.
Chappell, Andrew; Molina, Sergio L; McKibben, Jonathon; Stodden, David F
2016-07-01
This study examined variability in kicking speed and spatial accuracy to test the impulse-variability theory prediction of an inverted-U function and the speed-accuracy trade-off. Twenty-eight 18- to 25-year-old adults kicked a playground ball at various percentages (50-100%) of their maximum speed at a wall target. Speed variability and spatial error were analyzed using repeated-measures ANOVA with built-in polynomial contrasts. Results indicated a significant inverse linear trajectory for speed variability (p < .001, η2 = .345) where 50% and 60% maximum speed had significantly higher variability than the 100% condition. A significant quadratic fit was found for spatial error scores of mean radial error (p < .0001, η2 = .474) and subject-centroid radial error (p < .0001, η2 = .453). Findings suggest variability and accuracy of multijoint, ballistic skill performance may not follow the general principles of impulse-variability theory or the speed-accuracy trade-off.
NASA Astrophysics Data System (ADS)
Cardille, J. A.; Lee, J.
2017-12-01
With the opening of the Landsat archive, there is a dramatically increased potential for creating high-quality time series of land use/land-cover (LULC) classifications derived from remote sensing. Although LULC time series are appealing, their creation is typically challenging in two fundamental ways. First, there is a need to create maximally correct LULC maps for consideration at each time step; and second, there is a need to have the elements of the time series be consistent with each other, without pixels that flip improbably between covers due only to unavoidable, stray classification errors. We have developed the Bayesian Updating of Land Cover - Unsupervised (BULC-U) algorithm to address these challenges simultaneously, and introduce and apply it here for two related but distinct purposes. First, with minimal human intervention, we produced an internally consistent, high-accuracy LULC time series in rapidly changing Mato Grosso, Brazil for a time interval (1986-2000) in which cropland area more than doubled. The spatial and temporal resolution of the 59 LULC snapshots allows users to witness the establishment of towns and farms at the expense of forest. The new time series could be used by policy-makers and analysts to unravel important considerations for conservation and management, including the timing and location of past development, the rate and nature of changes in forest connectivity, the connection with road infrastructure, and more. The second application of BULC-U is to sharpen the well-known GlobCover 2009 classification from 300m to 30m, while improving accuracy measures for every class. The greatly improved resolution and accuracy permits a better representation of the true LULC proportions, the use of this map in models, and quantification of the potential impacts of changes. 
Given that there may easily be thousands and potentially millions of images available to harvest for an LULC time series, it is imperative to build useful algorithms requiring minimal human intervention. Through image segmentation and classification, BULC-U allows us to use both the spectral and spatial characteristics of imagery to sharpen classifications and create time series. It is hoped that this study may allow us and other users of this new method to consider time series across ever larger areas.
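The Bayesian updating idea behind BULC-U can be sketched per pixel: each new (possibly noisy) classification updates the class-probability vector through an assumed confusion-matrix likelihood. The two-class setup and the 80% accuracy figure below are illustrative assumptions, not the algorithm's published internals.

```python
import numpy as np

def bayes_update(prior, likelihood, observed):
    """One Bayesian update of a pixel's class probabilities given a new
    classification result `observed` (an integer class label)."""
    post = prior * likelihood[:, observed]   # P(obs | class) * P(class)
    return post / post.sum()

# Two classes (0 = forest, 1 = crop) and an assumed 80%-accurate classifier:
# likelihood[i, j] = P(classifier says j | true class is i).
like = np.array([[0.8, 0.2],
                 [0.2, 0.8]])

p = np.array([0.5, 0.5])          # uninformative prior
for obs in (1, 1, 1):             # three consecutive 'crop' labels
    p = bayes_update(p, like, obs)
```

Repeated agreeing observations drive the probability toward one class, which is how stray single-date classification errors get suppressed in the time series.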
Correction of Spatial Bias in Oligonucleotide Array Data
Lemieux, Sébastien
2013-01-01
Background. Oligonucleotide microarrays allow for high-throughput gene expression profiling assays. The technology relies on the fundamental assumption that observed hybridization signal intensities (HSIs) for each intended target, on average, correlate with their target's true concentration in the sample. However, systematic, nonbiological variation from several sources undermines this hypothesis. Background hybridization signal has been previously identified as one such important source, one manifestation of which appears in the form of spatial autocorrelation. Results. We propose an algorithm, pyn, for the elimination of spatial autocorrelation in HSIs, exploiting the duality of desirable mutual information shared by probes in a common probe set and undesirable mutual information shared by spatially proximate probes. We show that this correction procedure reduces spatial autocorrelation in HSIs; increases HSI reproducibility across replicate arrays; increases differentially expressed gene detection power; and performs better than previously published methods. Conclusions. The proposed algorithm increases both precision and accuracy, while requiring virtually no changes to users' current analysis pipelines: the correction consists merely of a transformation of raw HSIs (e.g., CEL files for Affymetrix arrays). A free, open-source implementation is provided as an R package, compatible with standard Bioconductor tools. The approach may also be tailored to other platform types and other sources of bias. PMID:23573083
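Spatial autocorrelation of the kind the abstract describes is commonly quantified with Moran's I; a minimal grid version with rook (4-neighbour) weights is sketched below. This is a generic diagnostic, not the pyn algorithm itself.

```python
import numpy as np

def morans_i(grid):
    """Moran's I on a 2D grid with rook (4-neighbour) weights."""
    z = grid - grid.mean()
    num, w = 0.0, 0
    n_rows, n_cols = grid.shape
    for dr, dc in ((0, 1), (1, 0)):            # each neighbour pair once...
        a = z[:n_rows - dr, :n_cols - dc]
        b = z[dr:, dc:]
        num += 2.0 * (a * b).sum()             # ...weighted symmetrically
        w += 2 * a.size
    return (grid.size / w) * num / (z ** 2).sum()

gradient = np.add.outer(np.arange(8.0), np.arange(8.0))         # smooth surface
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)    # alternating
i_smooth, i_checker = morans_i(gradient), morans_i(checker)
```

Values near +1 indicate smooth spatial structure (the situation the correction targets); values near -1 indicate alternation.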
Christe, Blaise; Burkhard, Pierre R; Pegna, Alan J; Mayer, Eugene; Hauert, Claude-Alain
2007-01-01
In this study, we developed a digitizing tablet-based instrument for the clinical assessment of human voluntary movements targeting the motor processes of planning, programming and execution. The tool was used to investigate an adaptation of Fitts' reciprocal tapping task [10], comprising four conditions, each of them modulated by three indices of difficulty related to the amplitude of movement required. Temporal, spatial and sequential constraints underlying the various conditions allowed the intricate motor processes to be dissociated. Data obtained from a group of elderly healthy subjects (N=50) were in agreement with the literature on motor control in the temporal and spatial domains. Speed constraints generated gains in the temporal domain and costs in the spatial one, while spatial constraints generated gains in the spatial domain and costs in the temporal one; finally, sequential constraints revealed the integrative nature of the cognitive operations involved in motor production. This versatile instrument proved capable of providing quantitative, accurate and sensitive measures of the various processes sustaining voluntary movement in healthy subjects. Altogether, the analyses performed in this study generated a theoretical framework and reference data which could be used in the future for the clinical assessment of patients with various movement disorders, in particular Parkinson's disease.
NASA Astrophysics Data System (ADS)
Yankiv-Vitkovska, Liubov; Dzhuman, Bogdan
2017-04-01
Due to the wide application of global navigation satellite systems (GNSS), the development of the modern GNSS infrastructure has moved the monitoring of the Earth's ionosphere to a new methodological and technological level. The peculiarity of such monitoring is that it allows different experimental studies, including direct study of the ionosphere, to be conducted using existing networks of reference GNSS stations intended for solving other problems. The application of the modern GNSS infrastructure is another innovative step in ionospheric studies, as such networks allow measurements to be conducted continuously over time in any place. This is used during the monitoring of the ionosphere and allows global and regional ionospheric phenomena to be studied in real time. Applying a network of continuously operating reference stations to determine numerical characteristics of the Earth's ionosphere makes it possible to create an effective technology for monitoring the ionosphere regionally. This technology is intended to solve both scientific problems concerning space weather and practical tasks such as providing coordinates of geodetic-level accuracy. For continuously operating reference GNSS stations, the ionization identifier TEC (Total Electron Content) is determined routinely. On the one hand, this data reflects the state of the ionosphere during the observation; on the other hand, it is a substantial tool for improving the accuracy and reliability of coordinate determination at the observation site. Thus, it was decided to solve the problem of restoring the spatial distribution of the ionospheric state, or its ionization field, from regular determinations of the TEC identifier, i.e. VTEC (Vertical TEC). The description below shows one possible solution, based on the spherical cap harmonic analysis method for modeling the VTEC parameter.
This method involves transforming the initial data to a spherical cap and constructing a model using associated Legendre functions of integer order but not necessarily integer degree. Such functions form two orthogonal systems of functions on the spherical cap. The method was tested on the ZAKPOS network of permanent stations.
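In the standard spherical cap harmonic formulation (the notation below is the conventional one, assumed here since the abstract does not give the explicit series), VTEC over the cap is expanded as

```latex
\mathrm{VTEC}(\theta,\lambda)=\sum_{k=0}^{K}\sum_{m=0}^{k}
P_{n_k(m)}^{m}(\cos\theta)\,
\bigl[a_k^m\cos(m\lambda)+b_k^m\sin(m\lambda)\bigr],
```

where the P_{n_k(m)}^{m} are associated Legendre functions of integer order m and real (generally non-integer) degree n_k(m), chosen so that boundary conditions on the cap edge θ0 are satisfied, and the coefficients a_k^m, b_k^m are fit to the TEC observations.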
The Influence of Endogenous and Exogenous Spatial Attention on Decision Confidence.
Kurtz, Phillipp; Shapcott, Katharine A; Kaiser, Jochen; Schmiedt, Joscha T; Schmid, Michael C
2017-07-25
Spatial attention allows us to make more accurate decisions about events in our environment. Decision confidence is thought to be intimately linked to the decision making process as confidence ratings are tightly coupled to decision accuracy. While both spatial attention and decision confidence have been subjected to extensive research, surprisingly little is known about the interaction between these two processes. Since attention increases performance it might be expected that confidence would also increase. However, two studies investigating the effects of endogenous attention on decision confidence found contradictory results. Here we investigated the effects of two distinct forms of spatial attention on decision confidence; endogenous attention and exogenous attention. We used an orientation-matching task, comparing the two attention conditions (endogenous and exogenous) to a control condition without directed attention. Participants performed better under both attention conditions than in the control condition. Higher confidence ratings than the control condition were found under endogenous attention but not under exogenous attention. This finding suggests that while attention can increase confidence ratings, it must be voluntarily deployed for this increase to take place. We discuss possible implications of this relative overconfidence found only during endogenous attention with respect to the theoretical background of decision confidence.
A test of the reward-value hypothesis.
Smith, Alexandra E; Dalecki, Stefan J; Crystal, Jonathon D
2017-03-01
Rats retain source memory (memory for the origin of information) over a retention interval of at least 1 week, whereas their spatial working memory (radial maze locations) decays within approximately 1 day. We have argued that different forgetting functions dissociate memory systems. However, the two tasks, in our previous work, used different reward values. The source memory task used multiple pellets of a preferred food flavor (chocolate), whereas the spatial working memory task provided access to a single pellet of standard chow-flavored food at each location. Thus, according to the reward-value hypothesis, enhanced performance in the source memory task stems from enhanced encoding/memory of a preferred reward. We tested the reward-value hypothesis by using a standard 8-arm radial maze task to compare spatial working memory accuracy of rats rewarded with either multiple chocolate or chow pellets at each location using a between-subjects design. The reward-value hypothesis predicts superior accuracy for high-valued rewards. We documented equivalent spatial memory accuracy for high- and low-value rewards. Importantly, a 24-h retention interval produced equivalent spatial working memory accuracy for both flavors. These data are inconsistent with the reward-value hypothesis and suggest that reward value does not explain our earlier findings that source memory survives unusually long retention intervals.
Coarse climate change projections for species living in a fine-scaled world.
Nadeau, Christopher P; Urban, Mark C; Bridle, Jon R
2017-01-01
Accurately predicting biological impacts of climate change is necessary to guide policy. However, the resolution of climate data could be affecting the accuracy of climate change impact assessments. Here, we review the spatial and temporal resolution of climate data used in impact assessments and demonstrate that these resolutions are often too coarse relative to biologically relevant scales. We then develop a framework that partitions climate into three important components: trend, variance, and autocorrelation. We apply this framework to map different global climate regimes and identify where coarse climate data is most and least likely to reduce the accuracy of impact assessments. We show that impact assessments for many large mammals and birds use climate data with a spatial resolution similar to the biologically relevant area encompassing population dynamics. Conversely, impact assessments for many small mammals, herpetofauna, and plants use climate data with a spatial resolution that is orders of magnitude larger than the area encompassing population dynamics. Most impact assessments also use climate data with a coarse temporal resolution. We suggest that climate data with a coarse spatial resolution is likely to reduce the accuracy of impact assessments the most in climates with high spatial trend and variance (e.g., much of western North and South America) and the least in climates with low spatial trend and variance (e.g., the Great Plains of the USA). Climate data with a coarse temporal resolution is likely to reduce the accuracy of impact assessments the most in the northern half of the northern hemisphere where temporal climatic variance is high. Our framework provides one way to identify where improving the resolution of climate data will have the largest impact on the accuracy of biological predictions under climate change. © 2016 John Wiley & Sons Ltd.
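The trend / variance / autocorrelation partition described above can be sketched for a single time series; the synthetic warming series and parameter values below are illustrative assumptions.

```python
import numpy as np

def climate_components(series):
    """Partition a climate series into linear trend (slope), variance of
    the detrended residuals, and their lag-1 autocorrelation."""
    t = np.arange(series.size, dtype=float)
    slope, intercept = np.polyfit(t, series, 1)
    resid = series - (slope * t + intercept)
    return slope, resid.var(), np.corrcoef(resid[:-1], resid[1:])[0, 1]

# Synthetic annual mean temperatures: 0.02 degC/yr warming plus noise.
rng = np.random.default_rng(0)
temps = 15.0 + 0.02 * np.arange(100) + rng.normal(0.0, 0.3, 100)
slope, var, r1 = climate_components(temps)
```

Mapping these three components over space is one way to identify regimes where coarse data will hide biologically relevant variation.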
Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images
NASA Astrophysics Data System (ADS)
Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.
2017-10-01
Supervised classification allows handling a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels over the image has proven to be beneficial for the interpretation of the image content, thus increasing the classification accuracy. Denoising in the spatial domain of the image has been shown to be a technique that enhances the structures in the image. This paper proposes a multicomponent denoising approach in order to increase the classification accuracy when a classification method is applied. It is computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension followed by an Extended Morphological Profile (EMP) and a classifier (SVM or ELM). The multicomponent noise reduction is applied to the EMP just before the classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced by using a threshold. Finally, inverse 2D-DWT filters are applied to reconstruct the noise-free original component. The computational cost of the classifiers, as well as the cost of the whole classification chain, is high, but it is reduced, achieving real-time behavior for some applications, through computation on NVIDIA multi-GPU platforms.
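As a much-simplified stand-in for the recursive separable 2D DWT thresholding described above, the sketch below performs one level of a 2D Haar transform, hard-thresholds the three detail bands, and inverts. It is a generic illustration, not the paper's GPU implementation.

```python
import numpy as np

def haar2_denoise(img, thresh):
    """One-level 2D Haar transform, hard-threshold the three detail
    bands, then invert. img must have even height and width."""
    a = (img[0::2] + img[1::2]) / 2.0                  # row averages
    d = (img[0::2] - img[1::2]) / 2.0                  # row details
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    for band in (lh, hl, hh):                          # keep ll untouched
        band[np.abs(band) < thresh] = 0.0
    a_rec = np.empty_like(a)
    d_rec = np.empty_like(d)
    a_rec[:, 0::2], a_rec[:, 1::2] = ll + lh, ll - lh  # invert column step
    d_rec[:, 0::2], d_rec[:, 1::2] = hl + hh, hl - hh
    out = np.empty_like(img)
    out[0::2], out[1::2] = a_rec + d_rec, a_rec - d_rec  # invert row step
    return out

noisy = np.ones((8, 8))
noisy[3, 3] += 0.1                      # a small noise spike
smoothed = haar2_denoise(noisy, 0.2)
```

With a zero threshold the transform inverts exactly; a nonzero threshold removes small-amplitude detail such as the spike above while preserving the smooth background.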
The Use of 3D Printing Technology in the Ilizarov Method Treatment: Pilot Study.
Burzyńska, Karolina; Morasiewicz, Piotr; Filipiak, Jarosław
2016-01-01
Significant developments in additive manufacturing technology have occurred in recent years. 3D printing techniques can also be helpful in the Ilizarov method treatment. The aim of this study was to evaluate the usefulness of 3D printing technology in the Ilizarov method treatment. Physical models of bones used to plan the spatial design of Ilizarov external fixator were manufactured by FDM (Fused Deposition Modeling) spatial printing technology. Bone models were made of poly(L-lactide) (PLA). Printed 3D models of both lower leg bones allow doctors to prepare in advance for the Ilizarov method treatment: detailed consideration of the spatial configuration of the external fixation, experimental assembly of the Ilizarov external fixator onto the physical models of bones prior to surgery, planning individual osteotomy level and Kirschner wires introduction sites. Printed 3D bone models allow for accurate preparation of the Ilizarov apparatus spatially matched to the size of the bones and prospective bone distortion. Employment of the printed 3D models of bone will enable a more precise design of the apparatus, which is especially useful in multiplanar distortion and in the treatment of axis distortion and limb length discrepancy in young children. In the course of planning the use of physical models manufactured with additive technology, attention should be paid to certain technical aspects of model printing that have an impact on the accuracy of mapping of the geometry and physical properties of the model. 3D printing technique is very useful in 3D planning of the Ilizarov method treatment.
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Yokum, Jeffrey S.; Pryputniewicz, Ryszard J.
2002-06-01
Sensitivity, accuracy, and precision characteristics in quantitative optical metrology techniques, and specifically in optoelectronic holography based on fiber optics and high-spatial and high-digital resolution cameras, are discussed in this paper. It is shown that sensitivity, accuracy, and precision depend on both the effective determination of optical phase and the effective characterization of the illumination-observation conditions. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gages, demonstrating the applicability of quantitative optical metrology techniques to satisfy constantly increasing needs for the study and development of emerging technologies.
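One common way the optical phase is determined in optoelectronic holography is N-step phase shifting; the four-step variant is sketched below. The abstract does not specify the scheme used, so this is an illustrative assumption.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four interferograms with pi/2 phase shifts:
    I_k = A + B*cos(phi + (k - 1)*pi/2), k = 1..4."""
    return np.arctan2(i4 - i2, i1 - i3)

# Simulate fringes for a known phase ramp and recover it.
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 50)
A, B = 2.0, 1.0
frames = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)
```

Because the arctangent divides out A and B, the recovered phase is insensitive to background intensity and fringe contrast, which is part of why phase determination dominates the accuracy budget.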
Analysing magnetism using scanning SQUID microscopy.
Reith, P; Renshaw Wang, X; Hilgenkamp, H
2017-12-01
Scanning superconducting quantum interference device microscopy (SSM) is a scanning probe technique that images local magnetic flux, which allows for mapping of magnetic fields with high field and spatial accuracy. Many studies involving SSM have been published in the last few decades, using SSM to make qualitative statements about magnetism. However, quantitative analysis using SSM has received less attention. In this work, we discuss several aspects of interpreting SSM images and methods to improve quantitative analysis. First, we analyse the spatial resolution and how it depends on several factors. Second, we discuss the analysis of SSM scans and the information obtained from the SSM data. Using simulations, we show how signals evolve as a function of changing scan height, SQUID loop size, magnetization strength, and orientation. We also investigated 2-dimensional autocorrelation analysis to extract information about the size, shape, and symmetry of magnetic features. Finally, we provide an outlook on possible future applications and improvements.
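The 2D autocorrelation analysis mentioned above can be computed efficiently via the Wiener-Khinchin theorem; the stripe-domain test pattern below is a hypothetical example, not SSM data.

```python
import numpy as np

def autocorr2d(img):
    """Normalised 2D autocorrelation via the Wiener-Khinchin theorem."""
    z = img - img.mean()
    f = np.fft.fft2(z)
    ac = np.fft.fftshift(np.fft.ifft2(f * np.conj(f)).real)
    return ac / ac.max()

# Stripe 'domains' with an 8-pixel period along one axis.
y = np.arange(32)
stripes = np.cos(2 * np.pi * y / 8.0)[:, None] * np.ones((1, 32))
ac = autocorr2d(stripes)   # zero lag sits at index (16, 16)
```

The period, orientation, and symmetry of peaks in such a map are exactly the size/shape/symmetry information the abstract refers to.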
Relationships Between Long-Range Lightning Networks and TRMM/LIS Observations
NASA Technical Reports Server (NTRS)
Rudlosky, Scott D.; Holzworth, Robert H.; Carey, Lawrence D.; Schultz, Chris J.; Bateman, Monte; Cummins, Kenneth L.; Blakeslee, Richard J.; Goodman, Steven J.
2012-01-01
Recent advances in long-range lightning detection technologies have improved our understanding of thunderstorm evolution in the data sparse oceanic regions. Although the expansion and improvement of long-range lightning datasets have increased their applicability, these applications (e.g., data assimilation, atmospheric chemistry, and aviation weather hazards) require knowledge of the network detection capabilities. The present study intercompares long-range lightning data with observations from the Lightning Imaging Sensor (LIS) aboard the Tropical Rainfall Measurement Mission (TRMM) satellite. The study examines network detection efficiency and location accuracy relative to LIS observations, describes spatial variability in these performance metrics, and documents the characteristics of LIS flashes that are detected by the long-range networks. Improved knowledge of relationships between these datasets will allow researchers, algorithm developers, and operational users to better prepare for the spatial and temporal coverage of the upcoming GOES-R Geostationary Lightning Mapper (GLM).
Evaluation of Long-Range Lightning Detection Networks Using TRMM/LIS Observations
NASA Technical Reports Server (NTRS)
Rudlosky, Scott D.; Holzworth, Robert H.; Carey, Lawrence D.; Schultz, Chris J.; Bateman, Monte; Cecil, Daniel J.; Cummins, Kenneth L.; Petersen, Walter A.; Blakeslee, Richard J.; Goodman, Steven J.
2011-01-01
Recent advances in long-range lightning detection technologies have improved our understanding of thunderstorm evolution in the data sparse oceanic regions. Although the expansion and improvement of long-range lightning datasets have increased their applicability, these applications (e.g., data assimilation, atmospheric chemistry, and aviation weather hazards) require knowledge of the network detection capabilities. Toward this end, the present study evaluates data from the World Wide Lightning Location Network (WWLLN) using observations from the Lightning Imaging Sensor (LIS) aboard the Tropical Rainfall Measurement Mission (TRMM) satellite. The study documents the WWLLN detection efficiency and location accuracy relative to LIS observations, describes the spatial variability in these performance metrics, and documents the characteristics of LIS flashes that are detected by WWLLN. Improved knowledge of the WWLLN detection capabilities will allow researchers, algorithm developers, and operational users to better prepare for the spatial and temporal coverage of the upcoming GOES-R Geostationary Lightning Mapper (GLM).
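A crude version of the detection-efficiency comparison described above: match each LIS flash to the nearest network event in time and count matches within a coincidence window. The window length and event times are hypothetical, and the real evaluation also matches in space.

```python
import numpy as np

def detection_efficiency(lis_times, net_times, window=0.5):
    """Fraction of LIS flash times with at least one network event
    within `window` seconds."""
    net = np.sort(np.asarray(net_times, dtype=float))
    hits = 0
    for t in lis_times:
        i = np.searchsorted(net, t)
        cands = net[max(i - 1, 0):i + 1]       # nearest neighbours in time
        if cands.size and np.abs(cands - t).min() <= window:
            hits += 1
    return hits / len(lis_times)

lis = [10.0, 20.0, 30.0, 40.0]         # hypothetical LIS flash times (s)
net = [10.2, 29.9, 55.0]               # hypothetical network events (s)
de = detection_efficiency(lis, net)    # 2 of 4 flashes matched
```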
Double-Referential Holography and Spatial Quadrature Amplitude Modulation
NASA Astrophysics Data System (ADS)
Zukeran, Keisuke; Okamoto, Atsushi; Takabayashi, Masanori; Shibukawa, Atsushi; Sato, Kunihiro; Tomita, Akihisa
2013-09-01
We propose a double-referential holography (DRH) that allows phase detection without additional external beams. In the DRH, phantom beams, prepared in the same optical path as the signal beams and multiplexed in advance in a recording medium along with the signal, are used to produce interference fringes on an imager, converting phase into an intensity distribution. The DRH enables stable, high-accuracy phase detection that is independent of fluctuations and vibrations of the optical system caused by medium shift and temperature variation. Moreover, the collinear arrangement of the signal and phantom beams makes the optical data storage system compact. We conducted an experiment using binary phase modulation signals to verify the DRH operation. In addition, 38-level spatial quadrature amplitude modulation signals were successfully reproduced with the DRH in numerical simulation. Furthermore, we verified that the distributed phase-shifting method moderates the dynamic-range consumption for the exposure of the phantom beams.
NASA Astrophysics Data System (ADS)
Bhrawy, A. H.; Zaky, M. A.
2015-01-01
In this paper, we propose and analyze an efficient operational formulation of the spectral tau method for multi-term time-space fractional differential equations with Dirichlet boundary conditions. The shifted Jacobi operational matrices of the Riemann-Liouville fractional integral and of the left-sided and right-sided Caputo fractional derivatives are presented. Using these operational matrices, we propose a shifted Jacobi tau method for both temporal and spatial discretizations, which allows us to present an efficient spectral method for solving such problems. Furthermore, the error is estimated, and the proposed method achieves reasonable convergence rates in the spatial and temporal discretizations. In addition, some known spectral tau approximations can be derived as special cases from our algorithm by suitably choosing the corresponding special cases of the Jacobi parameters θ and ϑ. Finally, in order to demonstrate its accuracy, we compare our method with those reported in the literature.
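For reference, the operators whose shifted Jacobi operational matrices are constructed here have the standard textbook definitions (this is a sketch of the usual notation, not necessarily the paper's):

```latex
J^{\nu} f(x) = \frac{1}{\Gamma(\nu)} \int_0^x (x - t)^{\nu - 1} f(t)\, dt, \qquad \nu > 0,
```

```latex
{}^{C}D^{\nu} f(x) = \frac{1}{\Gamma(n - \nu)} \int_0^x (x - t)^{n - \nu - 1} f^{(n)}(t)\, dt, \qquad n - 1 < \nu \le n,
```

with the right-sided Caputo derivative defined analogously by integrating from x to the right endpoint with the kernel (t − x)^{n − ν − 1} and a factor (−1)^n.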
A microprocessor-based one dimensional optical data processor for spatial frequency analysis
NASA Technical Reports Server (NTRS)
Collier, R. L.; Ballard, G. S.
1982-01-01
A high degree of accuracy was obtained in measuring the spatial frequency spectrum of known samples using an optical data processor based on a microprocessor, which reliably collected intensity versus angle data. Stray light control, system alignment, and angle measurement problems were addressed and solved. The capabilities of the instrument were extended by adding appropriate optics to allow the use of different wavelengths of laser radiation and by increasing the travel limits of the rotating arm to ±160°. The acquisition, storage, and plotting of data by the computer give the researcher a free hand in data manipulation, such as subtracting background scattering from a diffraction pattern. Tests conducted to verify the operation of the processor using a 25 mm diameter pinhole, a 39.37 line pairs per mm series of multiple slits, and a microscope slide coated with 1.091 mm diameter polystyrene latex spheres are described.
Edge profile measurements using Thomson scattering on the KSTAR tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, J. H., E-mail: jhleel@nfri.re.kr; Ko, W. H.; Department of Nuclear Fusion and Plasma Science, University of Science and Technology
2014-11-15
In the KSTAR tokamak, a “Tangential Thomson Scattering” (TTS) diagnostic system has been designed and installed to measure electron density and temperature profiles. In the edge system, TTS has 12 optical fiber bundles to measure the edge profiles with 10–15 mm spatial resolution. These 12 optical fibers and their spatial resolution are not enough to measure the pedestal width with high accuracy, but they allow observations of L-H or H-L transitions at the edge. For these measurements, the prototype ITER edge Thomson Nd:YAG laser system manufactured by JAEA in Japan is installed. In this paper, the KSTAR TTS system is briefly described, and some TTS edge profiles are presented and compared against the KSTAR Charge Exchange Spectroscopy and other diagnostics. The future upgrade plan of the system is also discussed.
NASA Astrophysics Data System (ADS)
Zhang, K.; Han, B.; Mansaray, L. R.; Xu, X.; Guo, Q.; Jingfeng, H.
2017-12-01
Synthetic aperture radar (SAR) instruments on board satellites are valuable for high-resolution wind field mapping, especially for coastal studies. Since the launch of Sentinel-1A on April 3, 2014, followed by Sentinel-1B on April 25, 2016, large amounts of C-band SAR data have been added to a growing accumulation of SAR datasets (ERS-1/2, RADARSAT-1/2, ENVISAT). These new developments are of great significance for a wide range of applications in coastal sea areas, especially for high spatial resolution wind resource assessment, in which the accuracy of retrieved wind fields is crucial. Recently, it has been reported that wind speeds can also be retrieved from C-band cross-polarized SAR images, an important complement to wind speed retrieval from co-polarization. However, there is no consensus on the optimal resolution for wind speed retrieval from cross-polarized SAR images. This paper presents a comparison strategy for investigating the influence of spatial resolution on sea surface wind speed retrieval accuracy with cross-polarized SAR images. First, for wind speeds retrieved from VV-polarized images, the optimal geophysical C-band model (CMOD) function was selected among four CMOD functions. Second, the most suitable C-band cross-polarized ocean (C-2PO) model was selected between two C-2POs for the VH-polarized image dataset. Then, the VH-wind speeds retrieved by the selected C-2PO were compared, at different spatial resolutions, with the VV-polarized sea surface wind speeds retrieved using the optimal CMOD, which served as reference. Results show that the VH-polarized wind speed retrieval accuracy increases rapidly as the spatial resolution coarsens from 100 m to 1000 m, with a drop in RMSE of 42%. However, the improvement in wind speed retrieval accuracy levels off as the resolution coarsens further from 1000 m to 5000 m. This demonstrates that a pixel spacing of 1 km may be a good compromise between spatial resolution and wind speed retrieval accuracy for cross-polarized images obtained from the RADARSAT-2 fine quad polarization mode. Fig. 1 illustrates the variation of the following statistical parameters as a function of spatial resolution: Bias, Corr, R2, RMSE, and STD.
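The resolution comparison can be mimicked in miniature: block-average a noisy retrieval to coarser pixels and score it against a reference field. A toy sketch with synthetic numbers, not the paper's data:

```python
def block_average(grid, k):
    """Average a 2-D wind-speed grid over k x k blocks, mimicking a coarser
    retrieval resolution (e.g. 100 m pixels averaged to 1000 m with k=10)."""
    n = len(grid) // k
    return [[sum(grid[i * k + a][j * k + b] for a in range(k) for b in range(k)) / k ** 2
             for j in range(n)] for i in range(n)]

def rmse(a, b):
    """Root-mean-square error between two equally shaped 2-D grids."""
    sq = [(x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb)]
    return (sum(sq) / len(sq)) ** 0.5

# Noisy "VH" retrieval vs. a smooth "VV" reference: averaging suppresses noise.
vv = [[8.0] * 4 for _ in range(4)]
vh = [[8.0 + (1.0 if (i + j) % 2 == 0 else -1.0) for j in range(4)] for i in range(4)]
print(rmse(vh, vv))                                      # 1.0 at full resolution
print(rmse(block_average(vh, 2), block_average(vv, 2)))  # 0.0 after 2x2 averaging
```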
Spatial and temporal modulation of joint stiffness during multijoint movement.
Mah, C D
2001-02-01
Joint stiffness measurements during small transient perturbations have suggested that stiffness during movement is different from that observed during posture. These observations are problematic for theories like the classical equilibrium point hypothesis, which suggest that desired trajectories during movement are enforced by joint stiffness. We measured arm impedances during large, slow perturbations to obtain detailed information about the spatial and temporal modulation of stiffness and viscosity during movement. While our measurements of stiffness magnitudes during movement generally agreed with the results of measurements using fast perturbations, they revealed that joint stiffness undergoes stereotyped changes in magnitude and aspect ratio which depend on the direction of movement and show a strong dependence on joint angles. Movement simulations using measured parameters show that the measured modulation of impedance acts as an energy conserving force field to constrain movement. This mechanism allows for a computationally simplified account of the execution of multijoint movement. While our measurements do not rule out a role for afferent feedback in force generation, the observed stereotyped restoring forces can allow a dramatic relaxation of the accuracy requirements for forces generated by other control mechanisms, such as inverse dynamical models.
High throughput integrated thermal characterization with non-contact optical calorimetry
NASA Astrophysics Data System (ADS)
Hou, Sichao; Huo, Ruiqing; Su, Ming
2017-10-01
Commonly used thermal analysis tools such as calorimeters and thermal conductivity meters are separate instruments limited by low throughput, since only one sample is examined at a time. This work reports an infrared-based optical calorimetry, together with its theoretical foundation, that provides an integrated solution for characterizing the thermal properties of materials with high throughput. By taking time-domain temperature information of spatially distributed samples, this method allows a single device (an infrared camera) to determine the thermal properties of both phase change systems (melting temperature and latent heat of fusion) and non-phase-change systems (thermal conductivity and heat capacity). It further allows these thermal properties of multiple samples to be determined rapidly, remotely, and simultaneously. In a proof-of-concept experiment, the thermal properties of a panel of 16 samples, including melting temperatures, latent heats of fusion, heat capacities, and thermal conductivities, were determined in 2 min with high accuracy. Given the high thermal, spatial, and temporal resolutions of advanced infrared cameras, this method has the potential to revolutionize the thermal characterization of materials by providing an integrated solution with high throughput, high sensitivity, and short analysis time.
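One way such time-domain traces yield thermal properties for the non-phase-change case is a lumped-capacitance fit of the cooling time constant, from which heat capacity follows via τ = mc/(hA). A hedged sketch with synthetic data; the paper's actual reduction procedure is not given in this abstract:

```python
import math

def time_constant(times, temps, t_env):
    """Least-squares fit of ln(T - T_env) = ln(dT0) - t/tau on a cooling
    trace, assuming the lumped-capacitance model T(t) = T_env + dT0*exp(-t/tau).
    Returns tau."""
    ys = [math.log(T - t_env) for T in temps]
    n = len(times)
    mx, my = sum(times) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -1.0 / slope

# Synthetic trace: tau = 30 s, ambient 25 C, initial excess 50 C
ts = [i * 5.0 for i in range(10)]
trace = [25.0 + 50.0 * math.exp(-t / 30.0) for t in ts]
print(round(time_constant(ts, trace, 25.0), 3))  # 30.0
```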
Assessing the consistency of UAV-derived point clouds and images acquired at different altitudes
NASA Astrophysics Data System (ADS)
Ozcan, O.
2016-12-01
Unmanned Aerial Vehicles (UAVs) offer several advantages in terms of cost and image resolution compared to terrestrial photogrammetry and satellite remote sensing systems. UAVs, which bridge the gap between satellite-scale and field-scale applications, are now used in various application areas to acquire hyperspatial, high temporal resolution imagery, owing to their operational flexibility and short acquisition times compared to conventional photogrammetry methods. They have been used for the creation of 3-D earth models, production of high-resolution orthophotos, network planning, and monitoring of fields and agricultural lands. Thus, the geometric accuracy of orthophotos and the volumetric accuracy of point clouds are of capital importance for land surveying applications. Correspondingly, Structure from Motion (SfM) photogrammetry, frequently used in conjunction with UAVs, has recently appeared in the environmental sciences as an impressive tool allowing the creation of 3-D models from unstructured imagery. This study aimed to reveal the spatial accuracy of images acquired from an integrated digital camera and the volumetric accuracy of Digital Surface Models (DSMs) derived from UAV flight plans at different altitudes using the SfM methodology. Low-altitude multispectral overlapping aerial photography was collected at altitudes of 30 to 100 meters and georeferenced with RTK-GPS ground control points. These altitudes yield hyperspatial imagery with resolutions of 1-5 cm, depending upon the sensor being used. Preliminary results revealed that vertical comparison of the UAV-derived point clouds with the GPS measurements gave average distances at the cm level. Larger values are found in areas where instantaneous changes in the surface are present.
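The vertical comparison of a point cloud against RTK-GPS checkpoints can be sketched as nearest-neighbor residuals; the numbers below are illustrative:

```python
def vertical_residuals(cloud, checkpoints):
    """For each RTK-GPS checkpoint (x, y, z), find the horizontally nearest
    point-cloud point and return the vertical difference dz = z_cloud - z_gps."""
    out = []
    for cx, cy, cz in checkpoints:
        nearest = min(cloud, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
        out.append(nearest[2] - cz)
    return out

def rms(vals):
    """Root-mean-square of a list of residuals."""
    return (sum(v * v for v in vals) / len(vals)) ** 0.5

cloud = [(0.0, 0.0, 10.02), (1.0, 0.0, 10.05), (0.0, 1.0, 9.98)]
gps = [(0.0, 0.0, 10.00), (1.0, 0.0, 10.00)]
res = vertical_residuals(cloud, gps)
print([round(r, 3) for r in res])   # [0.02, 0.05]
print(round(rms(res), 4))
```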
The Laguerre finite difference one-way equation solver
NASA Astrophysics Data System (ADS)
Terekhov, Andrew V.
2017-05-01
This paper presents a new finite difference algorithm for solving the 2D one-way wave equation with a preliminary approximation of a pseudo-differential operator by a system of partial differential equations. As opposed to existing approaches, the integral Laguerre transform is used instead of the Fourier transform. After the approximation of the spatial variables, it is possible to obtain systems of linear algebraic equations with better computational properties and to reduce the computer costs of their solution. High accuracy is attained by employing finite difference approximations of higher accuracy order based on the dispersion-relationship-preserving method and on Richardson extrapolation in the downward continuation direction. Numerical experiments have verified that, compared to the spectral difference method based on the Fourier transform, the new algorithm calculates wave fields with a higher degree of accuracy and a lower level of numerical noise and artifacts, including for non-smooth velocity models. In the context of the geophysical problem, post-stack migration for velocity models of the Syncline and Sigsbee2A types has been carried out. The images obtained contain less noise and are considerably better focused than those obtained by the known Fourier Finite Difference and Phase-Shift Plus Interpolation methods. There is an opinion that purely finite difference approaches cannot carry out the seismic migration procedure with sufficient accuracy; the results obtained here disprove this statement. For the supercomputer implementation, it is proposed to use the parallel dichotomy algorithm when solving systems of linear algebraic equations with block-tridiagonal matrices.
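Richardson extrapolation, one of the accuracy-raising ingredients mentioned here, combines approximations at two step sizes to cancel the leading error term; a generic sketch, not the paper's 2D implementation:

```python
import math

def richardson(f, h, p=2):
    """One Richardson-extrapolation step: combine approximations f(h) and
    f(h/2) of a method whose leading error is O(h^p) to cancel that term."""
    a1, a2 = f(h), f(h / 2)
    return (2 ** p * a2 - a1) / (2 ** p - 1)

# Central-difference derivative of sin at x = 0.5 (exact value cos(0.5)):
df = lambda h: (math.sin(0.5 + h) - math.sin(0.5 - h)) / (2 * h)
plain = df(0.1)
extra = richardson(df, 0.1, p=2)
print(abs(plain - math.cos(0.5)) > abs(extra - math.cos(0.5)))  # True
```

The central difference is second-order accurate, so one extrapolation step raises it to fourth order.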
Weather models as virtual sensors for data-driven rainfall predictions in urban watersheds
NASA Astrophysics Data System (ADS)
Cozzi, Lorenzo; Galelli, Stefano; Pascal, Samuel Jolivet De Marc; Castelletti, Andrea
2013-04-01
Weather and climate predictions are a key element of urban hydrology, where they are used to inform water management and assist in delivering flood warnings. Indeed, the modelling of the very fast dynamics of urbanized catchments can be substantially improved by the use of weather/rainfall predictions. For example, in the Singapore Marina Reservoir catchment, runoff processes have a very short time of concentration (roughly one hour), so observational data alone are nearly useless for runoff predictions and weather predictions are required. Unfortunately, radar nowcasting methods do not allow long-term weather predictions, whereas numerical models are limited by their coarse spatial scale. Moreover, numerical models are often poorly reliable because of the fast motion and limited spatial extension of rainfall events. In this study we investigate the combined use of data-driven modelling techniques and weather variables observed/simulated with a numerical model as a way to improve rainfall prediction accuracy and lead time in the Singapore metropolitan area. To explore the feasibility of the approach, we use a Weather Research and Forecasting (WRF) model as a virtual sensor network for the input variables (the states of the WRF model) to a machine learning rainfall prediction model. More precisely, we combine an input variable selection method and a non-parametric tree-based model to characterize the empirical relation between the rainfall measured at the catchment level and all possible weather input variables provided by the WRF model. We explore different lead times to evaluate the model's reliability for longer-term predictions, as well as different time lags to see how past information can improve results. Results show that the proposed approach allows a significant improvement in the prediction accuracy of the WRF model over the Singapore urban area.
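A filter-style input variable selection step of the kind described, ranking candidate WRF states by correlation with observed rainfall, might look like the sketch below; the variable names and values are hypothetical:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def rank_inputs(candidates, rainfall):
    """Rank candidate input variables by |correlation| with observed rainfall,
    a simple filter-style input variable selection step."""
    scores = {name: abs(pearson(vals, rainfall)) for name, vals in candidates.items()}
    return sorted(scores, key=scores.get, reverse=True)

rain = [0.0, 2.0, 5.0, 1.0, 8.0]
wrf_states = {
    "humidity": [10, 30, 55, 18, 80],   # tracks rainfall closely
    "pressure": [5, 4, 6, 5, 4],        # weakly related
}
print(rank_inputs(wrf_states, rain))  # ['humidity', 'pressure']
```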
Seeland, Marco; Rzanny, Michael; Alaqraa, Nedal; Wäldchen, Jana; Mäder, Patrick
2017-01-01
Steady improvements of image description methods induced a growing interest in image-based plant species classification, a task vital to the study of biodiversity and ecological sensitivity. Various techniques have been proposed for general object classification over the past years and several of them have already been studied for plant species classification. However, the results of these studies are selective in the evaluated steps of a classification pipeline, in the datasets used for evaluation, and in the compared baseline methods. No study is available that evaluates the main competing methods for building an image representation on the same datasets, allowing for generalized findings regarding flower-based plant species classification. The aim of this paper is to comparatively evaluate methods, method combinations, and their parameters with respect to classification accuracy. The investigated methods span detection, extraction, fusion, pooling, and encoding of local features for quantifying the shape and color information of flower images. We selected the flower image datasets Oxford Flower 17 and Oxford Flower 102 as well as our own Jena Flower 30 dataset for our experiments. Findings show large differences among the various studied techniques and that their wisely chosen orchestration allows for high accuracies in species classification. We further found that true local feature detectors in combination with advanced encoding methods yield higher classification results at lower computational costs compared to commonly used dense sampling and spatial pooling methods. Color was found to be an indispensable feature for high classification results, especially while preserving spatial correspondence to gray-level features. In sum, our study provides a comprehensive overview of competing techniques and the implications of their main parameters for flower-based plant species classification. PMID:28234999
Yao, Rongjiang; Yang, Jingsong; Wu, Danhua; Xie, Wenping; Gao, Peng; Jin, Wenhui
2016-01-01
Reliable and real-time information on soil and crop properties is important for the development of management practices in accordance with the requirements of a specific soil and crop within individual field units. This is particularly the case in salt-affected agricultural landscape where managing the spatial variability of soil salinity is essential to minimize salinization and maximize crop output. The primary objectives were to use linear mixed-effects model for soil salinity and crop yield calibration with horizontal and vertical electromagnetic induction (EMI) measurements as ancillary data, to characterize the spatial distribution of soil salinity and crop yield and to verify the accuracy of spatial estimation. Horizontal and vertical EMI (type EM38) measurements at 252 locations were made during each survey, and root zone soil samples and crop samples at 64 sampling sites were collected. This work was periodically conducted on eight dates from June 2012 to May 2013 in a coastal salt-affected mud farmland. Multiple linear regression (MLR) and restricted maximum likelihood (REML) were applied to calibrate root zone soil salinity (ECe) and crop annual output (CAO) using ancillary data, and spatial distribution of soil ECe and CAO was generated using digital soil mapping (DSM) and the precision of spatial estimation was examined using the collected meteorological and groundwater data. Results indicated that a reduced model with EMh as a predictor was satisfactory for root zone ECe calibration, whereas a full model with both EMh and EMv as predictors met the requirement of CAO calibration. The obtained distribution maps of ECe showed consistency with those of EMI measurements at the corresponding time, and the spatial distribution of CAO generated from ancillary data showed agreement with that derived from raw crop data. Statistics of jackknifing procedure confirmed that the spatial estimation of ECe and CAO exhibited reliability and high accuracy. 
A general increasing trend of ECe was observed and moderately saline and very saline soils were predominant during the survey period. The temporal dynamics of root zone ECe coincided with those of daily rainfall, water table and groundwater data. Long-range EMI surveys and data collection are needed to capture the spatial and temporal variability of soil and crop parameters. Such results allowed us to conclude that cost-effective and efficient EMI surveys, as one part of multi-source data for DSM, could be successfully used to characterize the spatial variability of soil salinity, to monitor the spatial and temporal dynamics of soil salinity, and to spatially estimate potential crop yield. PMID:27203697
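The reduced calibration model with EMh as the single predictor amounts to an ordinary least-squares line; a sketch with hypothetical readings (units indicative only):

```python
def fit_linear(emh, ece):
    """Ordinary least-squares fit ECe = a + b * EMh: the reduced calibration
    model with the horizontal EMI reading as the single predictor."""
    n = len(emh)
    mx, my = sum(emh) / n, sum(ece) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(emh, ece))
         / sum((x - mx) ** 2 for x in emh))
    return my - b * mx, b

emh = [50.0, 80.0, 120.0, 200.0]   # hypothetical EM38 readings, mS/m
ece = [2.0, 3.5, 5.5, 9.5]         # hypothetical root zone ECe, dS/m
a, b = fit_linear(emh, ece)
print(round(a, 3), round(b, 4))    # -0.5 0.05
print(round(a + b * 100.0, 2))     # predicted ECe at EMh = 100 mS/m: 4.5
```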
Comparing ordinary kriging and inverse distance weighting for soil As pollution in Beijing.
Qiao, Pengwei; Lei, Mei; Yang, Sucai; Yang, Jun; Guo, Guanghui; Zhou, Xiaoyong
2018-06-01
Spatial interpolation methods are the basis of soil heavy metal pollution assessment and remediation. Existing evaluation indices for interpolation accuracy are not tied to the actual situation; the selection of an interpolation method needs to be based on the specific research purpose and the characteristics of the research object. In this paper, As pollution in soils of Beijing was taken as an example. The prediction accuracies of ordinary kriging (OK) and inverse distance weighting (IDW) were evaluated based on cross-validation results and the spatial distribution characteristics of influencing factors. The results showed that, under the condition of specific spatial correlation, the cross-validation results of OK and IDW for every soil point and the prediction accuracy of the spatial distribution trend are similar. However, the prediction accuracy of OK for the maximum and minimum values is lower than that of IDW, and OK identifies fewer high-pollution areas than IDW. It is difficult for OK to identify the high-pollution areas fully, which shows that its smoothing effect is pronounced. In addition, as the spatial correlation of the As concentration increases, the cross-validation errors of OK and IDW decrease, and the high-pollution areas identified by OK approach the IDW result, identifying the high-pollution areas more comprehensively. However, because semivariogram construction in OK is more subjective and requires a larger number of soil samples, IDW is more suitable for spatial prediction of heavy metal pollution in these soils.
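For reference, the IDW estimator discussed here in its basic form, with the power parameter assumed to be 2:

```python
def idw(known, x, y, power=2.0):
    """Inverse distance weighted estimate at (x, y) from known samples
    (xi, yi, value). Returns the sample value exactly at a sample point."""
    num = den = 0.0
    for xi, yi, v in known:
        d2 = (xi - x) ** 2 + (yi - y) ** 2
        if d2 == 0.0:
            return v                      # exact hit: no interpolation needed
        w = 1.0 / d2 ** (power / 2.0)     # weight = 1 / distance^power
        num += w * v
        den += w
    return num / den

samples = [(0.0, 0.0, 10.0), (2.0, 0.0, 30.0)]
print(idw(samples, 1.0, 0.0))   # midpoint of two equidistant samples: 20.0
print(idw(samples, 0.0, 0.0))   # exact hit: 10.0
```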
Jia, Zhenyi; Zhou, Shenglu; Su, Quanlong; Yi, Haomin; Wang, Junxiao
2017-12-26
Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance for preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City and the geo-accumulation index was selected as the pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results of the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows: the data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from the BP neural network models have higher accuracy; the MSE of As and Cd are 0.0661 and 0.1743, respectively. However, the interpolation results show a significantly skewed distribution, and spatial autocorrelation is strong. Using Kriging interpolation, the MSE of As and Cd are 0.0804 and 0.2983, respectively, and the estimation results have poorer accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution.
Object detection in natural scenes: Independent effects of spatial and category-based attention.
Stein, Timo; Peelen, Marius V
2017-04-01
Humans are remarkably efficient in detecting highly familiar object categories in natural scenes, with evidence suggesting that such object detection can be performed in the (near) absence of attention. Here we systematically explored the influences of both spatial attention and category-based attention on the accuracy of object detection in natural scenes. Manipulating both types of attention additionally allowed for addressing how these factors interact: whether the requirement for spatial attention depends on the extent to which observers are prepared to detect a specific object category-that is, on category-based attention. The results showed that the detection of targets from one category (animals or vehicles) was better than the detection of targets from two categories (animals and vehicles), demonstrating the beneficial effect of category-based attention. This effect did not depend on the semantic congruency of the target object and the background scene, indicating that observers attended to visual features diagnostic of the foreground target objects from the cued category. Importantly, in three experiments the detection of objects in scenes presented in the periphery was significantly impaired when observers simultaneously performed an attentionally demanding task at fixation, showing that spatial attention affects natural scene perception. In all experiments, the effects of category-based attention and spatial attention on object detection performance were additive rather than interactive. Finally, neither spatial nor category-based attention influenced metacognitive ability for object detection performance. These findings demonstrate that efficient object detection in natural scenes is independently facilitated by spatial and category-based attention.
Leduc, Nicolas; Atallah, Vincent; Escarmant, Patrick; Vinh‐Hung, Vincent
2016-01-01
Monitoring and controlling respiratory motion is a challenge for the accuracy and safety of therapeutic irradiation of thoracic tumors. Various commercial systems based on the monitoring of internal or external surrogates have been developed but remain costly. In this article we describe and validate Madibreast, an in‐house‐made respiratory monitoring and processing device based on optical tracking of external markers. We designed an optical apparatus to ensure real‐time submillimetric image resolution at 4 m. Using OpenCv libraries, we optically tracked high‐contrast markers set on patients' breasts. Validation of spatial and time accuracy was performed on a mechanical phantom and on human breast. Madibreast was able to track motion of markers up to a 5 cm/s speed, at a frame rate of 30 fps, with submillimetric accuracy on mechanical phantom and human breasts. Latency was below 100 ms. Concomitant monitoring of three different locations on the breast showed discrepancies in axial motion up to 4 mm for deep‐breathing patterns. This low‐cost, computer‐vision system for real‐time motion monitoring of the irradiation of breast cancer patients showed submillimetric accuracy and acceptable latency. It allowed the authors to highlight differences in surface motion that may be correlated to tumor motion. PACS number(s): 87.55.km PMID:27685116
Will it Blend? Visualization and Accuracy Evaluation of High-Resolution Fuzzy Vegetation Maps
NASA Astrophysics Data System (ADS)
Zlinszky, A.; Kania, A.
2016-06-01
Instead of assigning every map pixel to a single class, fuzzy classification includes information on the class assigned to each pixel but also the certainty of this class and the alternative possible classes based on fuzzy set theory. The advantages of fuzzy classification for vegetation mapping are well recognized, but the accuracy and uncertainty of fuzzy maps cannot be directly quantified with indices developed for hard-boundary categorizations. The rich information in such a map is impossible to convey with a single map product or accuracy figure. Here we introduce a suite of evaluation indices and visualization products for fuzzy maps generated with ensemble classifiers. We also propose a way of evaluating classwise prediction certainty with "dominance profiles" visualizing the number of pixels in bins according to the probability of the dominant class, also showing the probability of all the other classes. Together, these data products allow a quantitative understanding of the rich information in a fuzzy raster map both for individual classes and in terms of variability in space, and also establish the connection between spatially explicit class certainty and traditional accuracy metrics. These map products are directly comparable to widely used hard boundary evaluation procedures, support active learning-based iterative classification and can be applied for operational use.
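The proposed "dominance profile" amounts to histogramming pixels by the probability of their dominant class; a minimal sketch:

```python
def dominance_profile(prob_map, n_bins=5):
    """Bin pixels of a fuzzy map by the probability of their dominant class.
    prob_map: list of per-pixel class-probability lists (each sums to 1).
    Returns pixel counts per probability bin of width 1/n_bins."""
    bins = [0] * n_bins
    for probs in prob_map:
        p_dom = max(probs)
        idx = min(int(p_dom * n_bins), n_bins - 1)  # clamp p_dom == 1.0
        bins[idx] += 1
    return bins

pixels = [
    [0.9, 0.05, 0.05],   # confidently classified
    [0.5, 0.3, 0.2],     # uncertain
    [0.34, 0.33, 0.33],  # nearly undecided
]
print(dominance_profile(pixels))  # [0, 1, 1, 0, 1]
```

A full implementation would additionally keep, per bin, the mean probabilities of all non-dominant classes, as the abstract describes.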
Feature Selection Methods for Zero-Shot Learning of Neural Activity.
Caceres, Carlos A; Roos, Matthew J; Rupp, Kyle M; Milsap, Griffin; Crone, Nathan E; Wolmetz, Michael E; Ratto, Christopher R
2017-01-01
Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy.
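Correlation-based feature stability, the baseline the abstract mentions, can be sketched by ranking features by the correlation of their responses across two independent runs of the same stimuli. The synthetic data, function name, and top-k selection rule below are illustrative assumptions.

```python
import numpy as np

def stable_features(run1, run2, k):
    """Rank features by the correlation of their responses across two
    independent runs of the same stimuli; return the indices of the top k.
    run1, run2: (n_stimuli, n_features) response matrices."""
    r1 = run1 - run1.mean(axis=0)
    r2 = run2 - run2.mean(axis=0)
    denom = np.sqrt((r1 ** 2).sum(axis=0) * (r2 ** 2).sum(axis=0))
    stability = (r1 * r2).sum(axis=0) / np.where(denom == 0, 1, denom)
    return np.argsort(stability)[::-1][:k]

rng = np.random.default_rng(1)
signal = 2.0 * rng.normal(size=(50, 1))        # shared stimulus-driven signal
run1 = rng.normal(size=(50, 100))
run2 = rng.normal(size=(50, 100))
run1[:, :5] += signal                          # only features 0-4 carry a
run2[:, :5] += signal                          # reliable stimulus response
top = stable_features(run1, run2, k=5)
```

The selected indices would then feed a zero-shot attribute-decoding model in place of the full feature pool.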
Application of Geodetic Techniques for Antenna Positioning in a Ground Penetrating Radar Method
NASA Astrophysics Data System (ADS)
Mazurkiewicz, Ewelina; Ortyl, Łukasz; Karczewski, Jerzy
2018-03-01
The accuracy of determining the location of detectable subsurface objects is related to the accuracy of the position of georadar traces in a given profile, which in turn depends on the precise assessment of the distance covered by the antenna. During georadar measurements this distance can be determined with a variety of methods. Recording traces at fixed time intervals is the simplest of them. A method which allows for more precise location of georadar traces is recording them at fixed distance intervals, which can be performed with the use of distance triggers (such as a measuring wheel or a hip chain). The search for methods eliminating these discrepancies can be based on measuring the spatial coordinates of georadar traces with modern geodetic techniques for 3-D location, above all GNSS satellite systems and electronic tachymeters. Application of these methods increases the accuracy of the spatial location of georadar traces. The article presents the results of georadar measurements performed with the use of geodetic techniques in the Mydlniki test area in Krakow. A Leica System 1200 satellite receiver and a Leica TCRA 1102 electronic tachymeter were integrated with the georadar equipment. The accuracy of locating chosen subsurface structures was compared.
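One common way to fuse GNSS fixes with trace recording, sketched below under stated assumptions (sparse timestamped fixes, linear interpolation to each trace time; the function name and the straight-line test profile are hypothetical, not the article's processing chain):

```python
import numpy as np

def trace_positions(trace_times, fix_times, fix_xy):
    """Interpolate planar GNSS fixes to the recording time of every georadar
    trace; return per-trace coordinates and cumulative profile distance."""
    x = np.interp(trace_times, fix_times, fix_xy[:, 0])
    y = np.interp(trace_times, fix_times, fix_xy[:, 1])
    step = np.hypot(np.diff(x), np.diff(y))
    dist = np.concatenate(([0.0], np.cumsum(step)))
    return np.column_stack([x, y]), dist

# 1 Hz GNSS fixes along a straight 10 m line walked at 1 m/s,
# with traces recorded at 10 Hz (fixed time intervals)
fix_times = np.arange(0.0, 11.0)
fix_xy = np.column_stack([fix_times, np.zeros_like(fix_times)])
traces, dist = trace_positions(np.arange(0.0, 10.0, 0.1), fix_times, fix_xy)
```

A tachymeter track could be substituted for `fix_xy` without changing the interpolation step.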
NASA Astrophysics Data System (ADS)
Mazurova, Elena; Mikhaylov, Aleksandr
2013-04-01
The selenocentric network of objects setting the coordinate system on the Moon, with its origin coinciding with the centre of mass and axes directed along the inertia axes, can become one of the basic elements of coordinate-time support for lunar navigation using cartographic materials and control objects. The powerful array of highly precise, multiparameter information obtained by modern space vehicles makes it possible to establish Lunar Reference Frames (LRF) of an essentially different accuracy. A special role here is played by the results of scanning the lunar surface by the American Lunar Reconnaissance Orbiter (LRO) mission. Point coordinates calculated only from laser scanning have sufficiently high accuracy of position relative to each other, but the real accuracy of the spatial tie can be checked, and the coordinates improved, only with a network of points whose coordinates are computed both from laser scanning and from other methods, for example terrestrial laser location or space photogrammetry. The paper presents an algorithm for transforming selenocentric coordinate systems and the accuracy estimation for changing from one lunar coordinate system to another. Keywords: selenocentric coordinate system, coordinate-time support.
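The paper's specific transformation algorithm is not given in the abstract; as a generic sketch, changing between two nearby Cartesian reference frames is often done with a seven-parameter (Helmert) similarity transformation. The small-angle form below and all parameter values are standard-geodesy assumptions, not the authors' results.

```python
import numpy as np

def helmert(xyz, tx, ty, tz, rx, ry, rz, scale_ppm):
    """Seven-parameter (Helmert) similarity transformation between two
    Cartesian frames: translation (m), small rotations (rad), scale (ppm)."""
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])          # small-angle rotation matrix
    s = 1.0 + scale_ppm * 1e-6
    return np.array([tx, ty, tz]) + s * (R @ np.asarray(xyz, float))

# Illustrative point at roughly one lunar radius on the x-axis,
# shifted 10 m and scaled by 1 ppm (hypothetical parameters)
p = helmert([1737400.0, 0.0, 0.0], tx=10.0, ty=0.0, tz=0.0,
            rx=0.0, ry=0.0, rz=0.0, scale_ppm=1.0)
```

In practice the seven parameters would be estimated by least squares from points whose coordinates are known in both frames, which is exactly the role of the control network described above.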
Representation control increases task efficiency in complex graphical representations.
Moritz, Julia; Meyerhoff, Hauke S; Meyer-Dernbecher, Claudia; Schwan, Stephan
2018-01-01
In complex graphical representations, the relevant information for a specific task is often distributed across multiple spatial locations. In such situations, understanding the representation requires internal transformation processes in order to extract the relevant information. However, digital technology enables observers to alter the spatial arrangement of depicted information and therefore to offload the transformation processes. The objective of this study was to investigate the use of such a representation control (i.e. the users' option to decide how information should be displayed) in order to accomplish an information extraction task in terms of solution time and accuracy. In the representation control condition, the participants were allowed to reorganize the graphical representation and reduce information density. In the control condition, no interactive features were offered. We observed that participants in the representation control condition solved tasks that required reorganization of the maps faster and more accurate than participants without representation control. The present findings demonstrate how processes of cognitive offloading, spatial contiguity, and information coherence interact in knowledge media intended for broad and diverse groups of recipients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karagiannis, Georgios, E-mail: georgios.karagiannis@pnnl.gov; Lin, Guang, E-mail: guang.lin@pnnl.gov
2014-02-15
Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1, 14, and 40 random dimensions.
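The full Bayesian BMA/MCMC machinery is beyond a short example, but the over-fitting problem it addresses can be illustrated with the simplest regularized stand-in: a ridge-penalized least-squares fit of 1-D gPC coefficients in probabilists' Hermite bases. Everything below (function name, degree, penalty, test function) is an illustrative assumption, not the paper's method.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def fit_gpc_ridge(xi, u, degree, lam):
    """Fit 1-D gPC coefficients for samples u(xi) in probabilists' Hermite
    bases He_0..He_degree; the ridge penalty lam guards against over-fitting
    when the number of bases approaches the number of samples."""
    Phi = hermevander(xi, degree)             # (n_samples, degree+1) design
    A = Phi.T @ Phi + lam * np.eye(degree + 1)
    return np.linalg.solve(A, Phi.T @ u)

rng = np.random.default_rng(2)
xi = rng.normal(size=30)                      # standard-normal random inputs
u = 1.0 + 2.0 * xi + 0.5 * (xi ** 2 - 1.0)    # exact in He_0, He_1, He_2
c = fit_gpc_ridge(xi, u, degree=8, lam=1e-8)
```

The Bayesian approach in the abstract goes further: instead of one penalized fit it averages over subsets of bases and reports inclusion probabilities for each basis.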
An Approach to Evaluate the Spatial Fidelity of Satellite-Derived Sea Surface Temperature Fields
NASA Astrophysics Data System (ADS)
Cornillon, P. C.; Wu, F.; Guan, L.; Boussidi, B.
2016-12-01
An approach to evaluate the spatial fidelity of satellite-derived SST fields for spatial scales in the range of one to a few tens of pixels is presented. The approach is based on spatial spectra of the SST fields in an oceanographically `quiet' region, the Sargasso Sea between the southern edge of the Gulf Stream and Bermuda. Spectra are relatively isotropic in this region, allowing for analysis of the spectra in along-scan and cross-scan directions for level 2 fields and in coordinate directions for level 3 and level 4 fields, and spectral energy levels tend to be low for the ocean, allowing for a diagnosis of the pixel-to-pixel noise levels in the associated spectra. The focus on the spatial fidelity of the derived fields is intended to fill a gap in the measure of the overall quality of satellite-derived SST fields. To date the primary measure of these data has been via the comparison of in situ buoy measurements with `match-ups' from the satellite-derived fields. Such measures provide for the accuracy of the retrievals but not of their spatial precision. The approach presented here addresses the latter. Spectra obtained in this region from the satellite-borne sensors are compared with those obtained from a thermal recorder on the container ship Oleander making weekly roundtrips between Port Elizabeth, NJ and Bermuda. To demonstrate the approach, it is applied to Level 2 VIIRS and AVHRR SST fields. The most accurate spectra for VIIRS fields are obtained for nighttime sections in the along-scan direction within 500 km of nadir. Along-track sections show signs of banding from the multiple detectors of the VIIRS instrument. By contrast AVHRR spectra show elevated energy at the submesoscale (<25 km), likely due to instrument noise, but poor cloud-screening may also contribute to the spectral energy at these scales.
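The noise diagnosis rests on the fact that uncorrelated pixel-to-pixel noise produces a flat spectrum at high wavenumber. A minimal sketch, assuming synthetic white-noise scan lines rather than VIIRS, AVHRR, or Oleander data (function name and normalization are assumptions):

```python
import numpy as np

def scan_spectrum(sst_lines, dx_km):
    """Mean 1-D power spectral density of mean-removed SST scan lines;
    a flat tail at high wavenumber diagnoses pixel-to-pixel noise."""
    n = sst_lines.shape[1]
    detrended = sst_lines - sst_lines.mean(axis=1, keepdims=True)
    spec = np.abs(np.fft.rfft(detrended, axis=1)) ** 2
    k = np.fft.rfftfreq(n, d=dx_km)           # wavenumber, cycles per km
    return k, spec.mean(axis=0) * dx_km / n   # PSD, units^2 km

rng = np.random.default_rng(3)
noise_sigma = 0.1                             # 0.1 K pixel-to-pixel noise
lines = rng.normal(scale=noise_sigma, size=(200, 512))  # pure sensor noise
k, psd = scan_spectrum(lines, dx_km=1.0)      # flat PSD ~ sigma^2 * dx
```

On real SST sections the geophysical signal dominates at low wavenumber and this flat floor emerges only at the smallest scales, which is where the VIIRS/AVHRR comparison above is made.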
Jabar, Syaheed B; Filipowicz, Alex; Anderson, Britt
2017-11-01
When a location is cued, targets appearing at that location are detected more quickly. When a target feature is cued, targets bearing that feature are detected more quickly. These attentional cueing effects are only superficially similar. More detailed analyses find distinct temporal and accuracy profiles for the two different types of cues. This pattern parallels work with probability manipulations, where both feature and spatial probability are known to affect detection accuracy and reaction times. However, little has been done by way of comparing these effects. Are probability manipulations on space and features distinct? In a series of five experiments, we systematically varied spatial probability and feature probability along two dimensions (orientation or color). In addition, we decomposed response times into initiation and movement components. Targets appearing at the probable location were reported more quickly and more accurately regardless of whether the report was based on orientation or color. On the other hand, when either color probability or orientation probability was manipulated, response time and accuracy improvements were specific for that probable feature dimension. Decomposition of the response time benefits demonstrated that spatial probability only affected initiation times, whereas manipulations of feature probability affected both initiation and movement times. As detection was made more difficult, the two effects further diverged, with spatial probability disproportionately affecting initiation times and feature probability disproportionately affecting accuracy. In conclusion, all manipulations of probability, whether spatial or featural, affect detection. However, only feature probability affects perceptual precision, and precision effects are specific to the probable attribute.
Examining the utility of satellite-based wind sheltering estimates for lake hydrodynamic modeling
Van Den Hoek, Jamon; Read, Jordan S.; Winslow, Luke A.; Montesano, Paul; Markfort, Corey D.
2015-01-01
Satellite-based measurements of vegetation canopy structure have been in common use for the last decade but have never been used to estimate the canopy's impact on wind sheltering of individual lakes. Wind sheltering is caused by slower winds in the wake of topography and shoreline obstacles (e.g. forest canopy) and influences heat loss and the flux of wind-driven mixing energy into lakes, which control lake temperatures and indirectly structure lake ecosystem processes, including carbon cycling and thermal habitat partitioning. Lakeshore wind sheltering has often been parameterized by lake surface area, but such empirical relationships are only based on forested lakeshores and overlook the contributions of local land cover and terrain to wind sheltering. This study is the first to examine the utility of satellite imagery-derived broad-scale estimates of wind sheltering across a diversity of land covers. Using 30 m spatial resolution ASTER GDEM2 elevation data, the mean sheltering height, hs, being the combination of local topographic rise and canopy height above the lake surface, was calculated within 100 m-wide buffers surrounding 76,000 lakes in the U.S. state of Wisconsin. Uncertainty of GDEM2-derived hs was compared to SRTM-, high-resolution G-LiHT lidar-, and ICESat-derived estimates of hs; the respective influences of land cover type and buffer width on hs were examined; and the effect of including satellite-based hs on the accuracy of a statewide lake hydrodynamic model was discussed. Though GDEM2 hs uncertainty was comparable to or better than other satellite-based measures of hs, its higher spatial resolution and broader spatial coverage allowed more lakes to be included in modeling efforts. GDEM2 was shown to offer superior utility for estimating hs compared to other satellite-derived data, but was limited by its consistent underestimation of hs, inability to detect within-buffer hs variability, and differing accuracy across land cover types.
Nonetheless, considering a GDEM2 hs-derived wind sheltering potential improved the modeled lake temperature root mean square error for non-forested lakes by 0.72 °C compared to a commonly used wind sheltering model based on lake area alone. While results from this study show promise, the limitations of near-global GDEM2 data in timeliness, temporal and spatial resolution, and vertical accuracy were apparent. As hydrodynamic modeling and high-resolution topographic mapping efforts both expand, future remote sensing-derived vegetation structure data must be improved to meet wind sheltering accuracy requirements to expand our understanding of lake processes.
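The hs definition above (local topographic rise plus canopy height above the lake surface, averaged over a shoreline buffer) reduces to a simple grid operation. The sketch below uses a tiny hand-made grid; the function name, clipping of below-lake terrain, and test values are assumptions for illustration.

```python
import numpy as np

def sheltering_height(dem, canopy, lake_level, buffer_mask):
    """Mean sheltering height hs within a shoreline buffer: topographic
    rise above the lake surface plus canopy height, per grid cell."""
    rise = np.clip(dem - lake_level, 0.0, None)   # ignore cells below lake level
    return float((rise + canopy)[buffer_mask].mean())

# 2x2 toy grid: elevations (m), canopy heights (m), all cells in the buffer
dem = np.array([[301.0, 302.0], [300.0, 305.0]])
canopy = np.array([[20.0, 0.0], [15.0, 0.0]])
mask = np.array([[True, True], [True, True]])
hs = sheltering_height(dem, canopy, lake_level=300.0, buffer_mask=mask)
```

For GDEM2, where terrain and canopy are conflated in one surface, `dem` would already contain the canopy and the separate `canopy` grid would be zero.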
Hernández-Ceballos, M A; Skjøth, C A; García-Mozo, H; Bolívar, J P; Galán, C
2014-12-01
Airborne pollen transport at micro-, meso-gamma and meso-beta scales must be studied with atmospheric models, which is especially relevant in complex terrain. In these cases, the accuracy of these models is mainly determined by the spatial resolution of the underlying meteorological dataset. This work examines how meteorological datasets determine the results obtained from atmospheric transport models used to describe pollen transport in the atmosphere. We investigate the effect of the spatial resolution when computing backward trajectories with the HYSPLIT model. We have used meteorological datasets from the WRF model with 27, 9 and 3 km resolutions and from the GDAS files with 1° resolution. This work allows us to characterize the atmospheric transport of Olea pollen in a region with complex flows. The results show that the complex terrain affects the trajectories and that this effect varies with the different meteorological datasets. Overall, the change from GDAS to WRF-ARW inputs improves the analyses with the HYSPLIT model, thereby increasing the understanding of the pollen episode. The results indicate that a spatial resolution of at least 9 km is needed to simulate atmospheric flows that are considerably affected by the relief of the landscape. The results suggest that appropriate meteorological files should be considered when atmospheric models are used to characterize the atmospheric transport of pollen at micro-, meso-gamma and meso-beta scales. Furthermore, at these scales, the results are believed to be generally applicable to related areas such as the description of atmospheric transport of radionuclides or the definition of nuclear-radioactivity emergency preparedness.
Carolyn B. Meyer; Sherri L. Miller; C. John Ralph
2004-01-01
The scale at which habitat variables are measured affects the accuracy of resource selection functions in predicting animal use of sites. We used logistic regression models for a wide-ranging species, the marbled murrelet (Brachyramphus marmoratus), in a large region in California to address how much changing the spatial or temporal scale of...
Ensemble coding remains accurate under object and spatial visual working memory load.
Epstein, Michael L; Emmanouil, Tatiana A
2017-10-01
A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However, the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual-task design to test the effect of object and spatial visual working memory load on size-averaging accuracy. In Experiment 1, we tested participants' accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy. Overall, our results reveal that ensemble coding can proceed unimpeded and highly accurately under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from that of individual object processing.
Resolution limits of ultrafast ultrasound localization microscopy
NASA Astrophysics Data System (ADS)
Desailly, Yann; Pierre, Juliette; Couture, Olivier; Tanter, Mickael
2015-11-01
As in other imaging methods based on waves, the resolution of ultrasound imaging is limited by the wavelength. However, the diffraction limit can be overcome by super-localizing single events from isolated sources. In recent years, we developed plane-wave ultrasound allowing frame rates up to 20 000 fps. Ultrafast processes such as rapid movement or disruption of ultrasound contrast agents (UCA) can thus be monitored, providing us with distinct point sources that can be localized beyond the diffraction limit. We previously showed experimentally that resolutions beyond λ/10 can be reached in ultrafast ultrasound localization microscopy (uULM) using a 128-transducer matrix in reception. Higher resolutions are theoretically achievable, and the aim of this study is to predict the maximum resolution in uULM with respect to acquisition parameters (frequency, transducer geometry, sampling electronics). The accuracy of uULM is the error on the localization of a bubble, considered a point source in a homogeneous medium. The proposed model consists of two steps: determining the timing accuracy of the microbubble echo in radiofrequency data, then converting this timing accuracy into spatial accuracy. The simplified model predicts a maximum resolution of 40 μm for a 1.75 MHz transducer matrix composed of two rows of 64 elements. Experimental confirmation of the model was performed by flowing microbubbles within a 60 μm microfluidic channel and localizing their blinking under ultrafast imaging (500 Hz frame rate). The experimental resolution, determined as the standard deviation in the positioning of the microbubbles, was predicted within 6 μm (13%) of the theoretical values and followed the analytical relationship with respect to the number of elements and depth. Understanding the underlying physical principles determining the resolution of superlocalization will allow the optimization of the imaging setup for each organ.
Ultimately, accuracies better than the size of capillaries are achievable at several centimeter depths.
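The first step of the two-step model, extracting a sub-sample echo arrival time and converting it to a range accuracy, can be illustrated with parabolic peak interpolation on a synthetic echo envelope. The envelope shape, sampling rate, and sound speed below are assumptions for illustration, not the paper's model.

```python
import numpy as np

def echo_time(signal, fs):
    """Sub-sample arrival time of an echo envelope by fitting a parabola
    through the peak sample and its two neighbours."""
    i = int(np.argmax(signal))
    a, b, c = signal[i - 1], signal[i], signal[i + 1]
    delta = 0.5 * (a - c) / (a - 2 * b + c)   # peak offset, in samples
    return (i + delta) / fs

fs = 50e6                                      # 50 MHz sampling (assumed)
t = np.arange(0, 4e-6, 1 / fs)
t0 = 2.013e-6                                  # true echo arrival time
env = np.exp(-((t - t0) ** 2) / (2 * (0.1e-6) ** 2))  # Gaussian envelope
t_hat = echo_time(env, fs)
range_error_um = abs(t_hat - t0) * 1480.0 * 1e6  # at c = 1480 m/s, in microns
```

In the noiseless case the timing error is far below one sample; with noise, the timing variance grows and maps directly into the localization variance the paper models.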
Abstract for poster presentation:
Site-specific accuracy assessments evaluate fine-scale accuracy of land-use/land-cover (LULC) datasets but provide little insight into the accuracy of area estimates of LULC classes derived from sampling units of varying size. Additiona...
NASA Technical Reports Server (NTRS)
Kloog, Itai; Chudnovsky, Alexandra A.; Just, Allan C.; Nordio, Francesco; Koutrakis, Petros; Coull, Brent A.; Lyapustin, Alexei; Wang, Yujie; Schwartz, Joel
2014-01-01
The use of satellite-based aerosol optical depth (AOD) to estimate fine particulate matter (PM2.5) for epidemiology studies has increased substantially over the past few years. These recent studies often report moderate predictive power, which can generate downward bias in effect estimates. In addition, AOD measurements have only moderate spatial resolution, and have substantial missing data. We make use of recent advances in MODIS satellite data processing algorithms (Multi-Angle Implementation of Atmospheric Correction, MAIAC), which allow us to use 1 km (versus currently available 10 km) resolution AOD data. We developed and cross-validated models to predict daily PM2.5 at a 1 × 1 km resolution across the northeastern USA (New England, New York and New Jersey) for the years 2003-2011, allowing us to better differentiate daily and long-term exposure between urban, suburban, and rural areas. Additionally, we developed an approach that allows us to generate daily high-resolution 200 m localized predictions representing deviations from the 1 × 1 km grid predictions. We used mixed models regressing PM2.5 measurements against day-specific random intercepts, and fixed and random AOD and temperature slopes. We then used generalized additive mixed models with spatial smoothing to generate grid cell predictions when AOD was missing. Finally, to get 200 m localized predictions, we regressed the residuals from the final model for each monitor against the local spatial and temporal variables at each monitoring site. Our model performance was excellent (mean out-of-sample R² = 0.88). The spatial and temporal components of the out-of-sample results also presented very good fits to the withheld data (R² = 0.87, R² = 0.87). In addition, our results revealed very little bias in the predicted concentrations (slope of predictions versus withheld observations = 0.99).
Our daily model results show high predictive accuracy at high spatial resolutions and will be useful in reconstructing exposure histories for epidemiological studies across this region.
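The heart of the calibration stage, day-specific intercepts and AOD slopes, can be caricatured as one ordinary-least-squares fit per day. This stand-in ignores the random-effects structure, temperature terms, and spatial smoothing of the actual mixed model; the function name and synthetic data are assumptions.

```python
import numpy as np

def daily_calibration(day, aod, pm25):
    """Per-day linear calibration of PM2.5 against AOD: a simplified
    stand-in for day-specific random intercepts and slopes."""
    coefs = {}
    for d in np.unique(day):
        m = day == d
        A = np.column_stack([np.ones(m.sum()), aod[m]])   # intercept + slope
        coefs[d], *_ = np.linalg.lstsq(A, pm25[m], rcond=None)
    return coefs

rng = np.random.default_rng(4)
day = np.repeat([0, 1], 50)                    # two days, 50 monitors each
aod = rng.uniform(0.1, 0.8, size=100)
pm25 = np.where(day == 0, 5.0 + 20.0 * aod, 8.0 + 15.0 * aod)  # noiseless toy
coefs = daily_calibration(day, aod, pm25)
```

The mixed-model formulation additionally shrinks each day's coefficients toward the population mean, which stabilizes days with few valid AOD retrievals.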
Rapid simulation of spatial epidemics: a spectral method.
Brand, Samuel P C; Tildesley, Michael J; Keeling, Matthew J
2015-04-07
Spatial structure and hence the spatial position of host populations plays a vital role in the spread of infection. In the majority of situations, it is only possible to predict the spatial spread of infection using simulation models, which can be computationally demanding especially for large population sizes. Here we develop an approximation method that vastly reduces this computational burden. We assume that the transmission rates between individuals or sub-populations are determined by a spatial transmission kernel. This kernel is assumed to be isotropic, such that the transmission rate is simply a function of the distance between susceptible and infectious individuals; as such this provides the ideal mechanism for modelling localised transmission in a spatial environment. We show that the spatial force of infection acting on all susceptibles can be represented as a spatial convolution between the transmission kernel and a spatially extended 'image' of the infection state. This representation allows the rapid calculation of stochastic rates of infection using fast Fourier transform (FFT) routines, which greatly improves the computational efficiency of spatial simulations. We demonstrate the efficiency and accuracy of this fast spectral rate recalculation (FSR) method with two examples: an idealised scenario simulating an SIR-type epidemic outbreak amongst N habitats distributed across a two-dimensional plane; and the spread of infection between US cattle farms, illustrating that the FSR method makes continental-scale outbreak forecasting feasible with desktop processing power. The latter model demonstrates which areas of the US are at consistently high risk for cattle infections, although predictions of epidemic size are highly dependent on assumptions about the tail of the transmission kernel. Copyright © 2015 Elsevier Ltd. All rights reserved.
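The convolution representation described above can be sketched on a grid: the force of infection at every cell is the FFT-based circular convolution of the infection "image" with the isotropic kernel, at O(N log N) cost instead of the O(N²) of pairwise sums. The grid size, Gaussian kernel, and single-source setup are illustrative assumptions, not the paper's cattle-farm model.

```python
import numpy as np

def force_of_infection_fft(infectious, kernel):
    """Spatial force of infection on every grid cell as the circular
    convolution of the infection 'image' with the transmission kernel,
    computed with FFTs."""
    return np.real(np.fft.ifft2(np.fft.fft2(infectious) * np.fft.fft2(kernel)))

n = 64
y, x = np.mgrid[0:n, 0:n]
d = np.minimum(x, n - x) ** 2 + np.minimum(y, n - y) ** 2  # wrapped distances
kernel = np.exp(-d / (2 * 3.0 ** 2))          # isotropic Gaussian kernel
infectious = np.zeros((n, n))
infectious[10, 20] = 1.0                      # one infectious unit
foi = force_of_infection_fft(infectious, kernel)
```

In a stochastic simulation these per-cell rates would drive the next infection events, with the FFT redone (or incrementally updated) whenever the infection image changes.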
Two-dimensional photoacoustic imaging of femtosecond filament in water
NASA Astrophysics Data System (ADS)
Potemkin, F. V.; Mareev, E. I.; Rumiantsev, B. V.; Bychkov, A. S.; Karabutov, A. A.; Cherepetskaya, E. B.; Makarov, V. A.
2018-07-01
We report first-of-its-kind optoacoustic tomography of a femtosecond filament in water. Using a broadband (~100 MHz) piezoelectric transducer and a back-projection reconstruction technique, a single filament profile was retrieved. The obtained pressure distribution induced by the femtosecond filament allowed us to identify the size of the core and the energy reservoir with a spatial resolution better than 10 µm. Photoacoustic imaging provides direct measurements of the energy deposition into the medium under filamentation of ultrashort laser pulses that cannot be obtained by existing techniques. Combined with its relative simplicity and high accuracy, photoacoustic imaging can be considered a breakthrough instrument for the investigation of filamentation.
Real-Time Laser Ultrasound Tomography for Profilometry of Solids
NASA Astrophysics Data System (ADS)
Zarubin, V. P.; Bychkov, A. S.; Karabutov, A. A.; Simonova, V. A.; Kudinov, I. A.; Cherepetskaya, E. B.
2018-01-01
We studied the possibility of applying laser ultrasound tomography to the profilometry of solids. The proposed approach provides high spatial resolution and efficiency, as well as profilometry of contaminated objects or objects submerged in liquids. Algorithms for the construction of tomograms and the recognition of the profiles of the studied objects using the NVIDIA CUDA parallel programming technology are proposed. A prototype of the real-time laser ultrasound profilometer was used to obtain the profiles of solid surfaces of revolution. The proposed method allows the real-time determination of the surface position for cylindrical objects with an approximation accuracy of up to 16 μm.
The acoustic environment of a sonoluminescing bubble
NASA Astrophysics Data System (ADS)
Holzfuss, Joachim; Rüggeberg, Matthias; Holt, R. Glynn
2000-07-01
A bubble is levitated in water in a cylindrical resonator which is driven by ultrasound. It has been shown that in a certain region of parameter space the bubble emits light pulses (sonoluminescence). One of the properties observed is the enormous spatial stability, leaving the bubble "pinned" in space and allowing it to emit light with picosecond timing accuracy. We argue that the observed stability is due to interactions of the bubble with the resonator. A shock wave emitted at collapse time, together with a self-generated complex sound field, which is experimentally mapped with high resolution, is responsible for the observed effects.
Web Service for Positional Quality Assessment: the Wps Tier
NASA Astrophysics Data System (ADS)
Xavier, E. M. A.; Ariza-López, F. J.; Ureña-Cámara, M. A.
2015-08-01
In the field of spatial data, more and more information becomes available every day, yet we still have little or very little information about the quality of that data. We consider the automation of spatial data quality assessment a true need for the geomatics sector, and that automation is possible by means of web processing services (WPS) and the application of specific assessment procedures. In this paper we propose and develop a WPS tier centered on the automation of positional quality assessment. An experiment using the NSSDA positional accuracy method is presented. The experiment involves the uploading by the client of two datasets (reference and evaluation data). The processing determines homologous pairs of points (by distance) and calculates the value of positional accuracy under the NSSDA standard. The process generates a small report that is sent to the client. From our experiment, we reached some conclusions on the advantages and disadvantages of WPSs when applied to the automation of spatial data accuracy assessments.
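The core of the processing step described above (distance-based pairing of homologous points, then the NSSDA statistic) can be sketched as follows. This is not the paper's WPS code; the matching tolerance and point sets are illustrative, and the 1.7308 factor is the published NSSDA constant for horizontal accuracy at the 95% confidence level when RMSE_x and RMSE_y are approximately equal:

```python
import numpy as np

def match_by_distance(ref, ev, tol=5.0):
    """Pair each evaluation point with its nearest reference point,
    keeping only pairs closer than `tol` (tolerance is illustrative)."""
    r, e = [], []
    for p in ev:
        d2 = np.sum((ref - p) ** 2, axis=1)
        j = int(np.argmin(d2))
        if d2[j] <= tol ** 2:
            r.append(ref[j])
            e.append(p)
    return np.array(r), np.array(e)

def nssda_horizontal(ref_pts, ev_pts):
    """Horizontal RMSE and NSSDA accuracy at the 95% confidence level."""
    d = ev_pts - ref_pts
    rmse_r = float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))
    return rmse_r, 1.7308 * rmse_r

# A constant offset of (0.3, -0.4) m gives RMSE_r = 0.5 m exactly.
ref = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
ev = ref + np.array([0.3, -0.4])
r, e = match_by_distance(ref, ev)
rmse, acc95 = nssda_horizontal(r, e)
```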
Going the distance: spatial scale of athletic experience affects the accuracy of path integration.
Smith, Alastair D; Howard, Christina J; Alcock, Niall; Cater, Kirsten
2010-09-01
Evidence suggests that athletically trained individuals are more accurate than untrained individuals in updating their spatial position through idiothetic cues. We assessed whether training at different spatial scales affects the accuracy of path integration. Groups of rugby players (large-scale training) and martial artists (small-scale training) participated in a triangle-completion task: they were led (blindfolded) along two sides of a right-angled triangle and were required to complete the hypotenuse by returning to the origin. The groups did not differ in their assessment of the distance to the origin, but rugby players were more accurate than martial artists in assessing the correct angle to turn (heading), and landed significantly closer to the origin. These data support evidence that distance and heading components can be dissociated. Furthermore, they suggest that the spatial scale at which an individual is trained may affect the accuracy of one component of path integration but not the other.
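The geometry of the triangle-completion task fixes a single correct response, against which participants' headings and distances are scored. A minimal sketch, assuming a layout in which the participant walks the first leg along +x and the second along +y after a 90-degree left turn (the paper does not specify coordinates; these are for illustration):

```python
import math

def homing_response(leg1, leg2):
    """Correct homing response for a right-angled triangle-completion
    trial: walk `leg1`, turn left 90 degrees, walk `leg2`, then home
    along the hypotenuse.
    Returns (distance to origin, signed left turn in degrees measured
    from the current heading along +y)."""
    distance = math.hypot(leg1, leg2)
    # The vector back to the origin is (-leg1, -leg2); the signed angle
    # from the facing direction (0, 1) to it is atan2(cross, dot).
    turn = math.degrees(math.atan2(leg1, -leg2))
    return distance, turn

d, turn = homing_response(3.0, 4.0)   # a 3-4-5 triangle
```

Heading error and distance error are then simply the differences between a participant's produced turn/walk and these two values, which is what allows the two components of path integration to be dissociated.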
Evaluation of spatial filtering on the accuracy of wheat area estimate
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Moreira, M. A.; Chen, S. C.; Delima, A. M.
1982-01-01
A 3 x 3 pixel spatial filter for postclassification was used for wheat classification to evaluate the effects of this procedure on the accuracy of area estimation using LANDSAT digital data obtained from a single pass. Quantitative analyses were carried out in five test sites (approx. 40 sq km each), and t tests showed that filtering with threshold values significantly decreased errors of commission and omission. In area estimation, filtering reduced the overestimate from 4.5% to 2.7%, and the root-mean-square error decreased from 126.18 ha to 107.02 ha. Extrapolating the same procedure of automatic classification with spatial filtering for postclassification to the whole study area, the overestimate in area was reduced from 10.9% to 9.7%. It is concluded that when single-pass LANDSAT data are used for crop identification and area estimation, the postclassification procedure using a spatial filter provides a more accurate area estimate by reducing classification errors.
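A 3 x 3 postclassification filter of this kind is usually a modal (majority) filter with a vote threshold. The sketch below is a generic version, not the original NTRS implementation; the threshold semantics (relabel only when the modal class reaches a minimum count) are an assumption consistent with the abstract's "filtering with threshold values":

```python
import numpy as np
from collections import Counter

def modal_filter_3x3(classes, threshold=5):
    """Post-classification 3x3 modal (majority) filter.

    The centre pixel is relabelled to the modal class of its 3x3
    neighbourhood when that class occurs at least `threshold` times out
    of 9; otherwise it is left unchanged. Border pixels are unchanged.
    """
    out = classes.copy()
    ny, nx = classes.shape
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            window = classes[i - 1:i + 2, j - 1:j + 2].ravel().tolist()
            label, count = Counter(window).most_common(1)[0]
            if count >= threshold:
                out[i, j] = label
    return out

# Isolated misclassified pixels ("salt" errors) are absorbed by the
# surrounding majority class, which is how commission/omission errors drop.
field = np.ones((5, 5), dtype=int)
field[2, 2] = 2
out = modal_filter_3x3(field, threshold=5)
```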
Zhou, Shenglu; Su, Quanlong; Yi, Haomin
2017-01-01
Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance for preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City and the geo-accumulation index was selected as a pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results of the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows: the data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from the BP neural network models have higher accuracy; the MSEs for As and Cd are 0.0661 and 0.1743, respectively. However, the interpolation results show a significantly skewed distribution, and spatial autocorrelation is strong. Using Kriging interpolation, the MSEs for As and Cd are 0.0804 and 0.2983, respectively, and the estimation results have lower accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution. PMID:29278363
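The cross-validation machinery used to produce those MSE figures is method-agnostic and easy to sketch. Below, a deliberately simple inverse-distance-weighted interpolator stands in for the paper's Kriging and BP neural-network models (which need dedicated libraries); the leave-one-out loop is the part being illustrated:

```python
import numpy as np

def idw_predict(xy_train, z_train, xy_query, power=2.0):
    """Inverse-distance-weighted prediction -- a simple stand-in for a
    full Kriging or neural-network interpolator."""
    d = np.linalg.norm(xy_train - xy_query, axis=1)
    if np.any(d == 0.0):
        return float(z_train[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * z_train) / np.sum(w))

def loo_cv_mse(xy, z, predict=idw_predict):
    """Leave-one-out cross-validation: predict each sample from all the
    others; the mean squared error of those predictions is the accuracy
    measure used to compare interpolation methods."""
    n = len(z)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        errs.append(predict(xy[mask], z[mask], xy[i]) - z[i])
    return float(np.mean(np.square(errs)))
```

Swapping `idw_predict` for a Kriging or BP-network predictor reproduces the comparison design of the study: the method with the lower cross-validated MSE is judged the more accurate.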
Biomass Burning Aerosol Absorption Measurements with MODIS Using the Critical Reflectance Method
NASA Technical Reports Server (NTRS)
Zhu, Li; Martins, Vanderlei J.; Remer, Lorraine A.
2010-01-01
This research uses the critical reflectance technique, a space-based remote sensing method, to measure the spatial distribution of aerosol absorption properties over land. Choosing two regions dominated by biomass burning aerosols, a series of sensitivity studies were undertaken to analyze the potential limitations of this method for the type of aerosol to be encountered in the selected study areas, and to show that the retrieved results are relatively insensitive to uncertainties in the assumptions used in the retrieval of smoke aerosol. The critical reflectance technique is then applied to Moderate Resolution Imaging Spectroradiometer (MODIS) data to retrieve the spectral aerosol single scattering albedo (SSA) in South African and South American biomass burning events. The retrieved results were validated with collocated Aerosol Robotic Network (AERONET) retrievals. One standard deviation of mean MODIS retrievals match AERONET products to within 0.03, the magnitude of the AERONET uncertainty. The overlap of the two retrievals increases to 88% when allowing for measurement variance in the MODIS retrievals as well. The ensemble average of MODIS-derived SSA for the Amazon forest station is 0.92 at 670 nm, and 0.84-0.89 for the southern African savanna stations. The critical reflectance technique allows evaluation of the spatial variability of SSA, and shows that SSA in South America exhibits higher spatial variation than in South Africa. The accuracy of the retrieved aerosol SSA from MODIS data indicates that this product can help to better understand how aerosols affect the regional and global climate.
Age-Related Differences in Multiple Task Monitoring
Todorov, Ivo; Del Missier, Fabio; Mäntylä, Timo
2014-01-01
Coordinating multiple tasks with narrow deadlines is particularly challenging for older adults because of age related decline in cognitive control functions. We tested the hypothesis that multiple task performance reflects age- and gender-related differences in executive functioning and spatial ability. Young and older adults completed a multitasking session with four monitoring tasks as well as separate tasks measuring executive functioning and spatial ability. For both age groups, men exceeded women in multitasking, measured as monitoring accuracy. Individual differences in executive functioning and spatial ability were independent predictors of young adults' monitoring accuracy, but only spatial ability was related to sex differences. For older adults, age and executive functioning, but not spatial ability, predicted multitasking performance. These results suggest that executive functions contribute to multiple task performance across the adult life span and that reliance on spatial skills for coordinating deadlines is modulated by age. PMID:25215609
Binomial tau-leap spatial stochastic simulation algorithm for applications in chemical kinetics.
Marquez-Lago, Tatiana T; Burrage, Kevin
2007-09-14
In cell biology, cell signaling pathway problems are often tackled with deterministic temporal models, well-mixed stochastic simulators, and/or hybrid methods. But, in fact, three-dimensional stochastic spatial modeling of reactions happening inside the cell is needed in order to fully understand these cell signaling pathways. This is because noise effects, low molecular concentrations, and spatial heterogeneity can all affect the cellular dynamics. However, there are ways in which important effects can be accounted for without going to the extent of using highly resolved spatial simulators (such as single-particle software), hence reducing the overall computation time significantly. We present a new coarse-grained modified version of the next subvolume method that allows the user to consider both diffusion and reaction events over relatively long simulation time spans as compared with the original method and other commonly used fully stochastic computational methods. Benchmarking of the simulation algorithm was performed through comparison with the next subvolume method and well-mixed models (MATLAB), as well as stochastic particle reaction and transport simulations (CHEMCELL, Sandia National Laboratories). Additionally, we construct a model based on a set of chemical reactions in the epidermal growth factor receptor pathway. For this particular application and a bistable chemical system example, we analyze and outline the advantages of our binomial tau-leap spatial stochastic simulation algorithm, in terms of efficiency and accuracy, in scenarios of both molecular homogeneity and heterogeneity.
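The defining feature of a binomial tau-leap is that the number of reaction firings per leap is drawn from a binomial rather than a Poisson distribution, which caps events at the available population. The single-reaction, non-spatial toy below (a decay A -> B; reaction, rate, and parameters are illustrative, and the paper's method additionally couples such leaps with next-subvolume diffusion events) shows just that sampling step:

```python
import numpy as np

rng = np.random.default_rng(42)

def binomial_tau_leap_decay(n0, k, tau, t_end):
    """Binomial tau-leap for the decay reaction A -> B (propensity k*A).

    Drawing firings from Binomial(A, min(1, k*tau)) bounds the number of
    events by the available population, so copy numbers can never go
    negative -- the key advantage over the plain Poisson tau-leap.
    """
    a, t = n0, 0.0
    traj = [(0.0, n0)]
    p = min(1.0, k * tau)
    while t < t_end and a > 0:
        a -= rng.binomial(a, p)     # firings this leap, capped at a
        t += tau
        traj.append((t, a))
    return traj

traj = binomial_tau_leap_decay(n0=1000, k=0.1, tau=0.5, t_end=20.0)
counts = [c for _, c in traj]
```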
Spatial-Temporal Data Collection with Compressive Sensing in Mobile Sensor Networks
Zheng, Haifeng; Li, Jiayin; Feng, Xinxin; Guo, Wenzhong; Chen, Zhonghui; Xiong, Neal
2017-01-01
Compressive sensing (CS) provides an energy-efficient paradigm for data gathering in wireless sensor networks (WSNs). However, the existing work on spatial-temporal data gathering using compressive sensing only considers either multi-hop relaying based or multiple random walks based approaches. In this paper, we exploit the mobility pattern for spatial-temporal data collection and propose a novel mobile data gathering scheme by employing the Metropolis-Hastings algorithm with delayed acceptance, an improved random walk algorithm for a mobile collector to collect data from a sensing field. The proposed scheme exploits Kronecker compressive sensing (KCS) for spatial-temporal correlation of sensory data by allowing the mobile collector to gather temporal compressive measurements from a small subset of randomly selected nodes along a random routing path. More importantly, from the theoretical perspective we prove that the equivalent sensing matrix constructed from the proposed scheme for spatial-temporal compressible signal can satisfy the property of KCS models. The simulation results demonstrate that the proposed scheme can not only significantly reduce communication cost but also improve recovery accuracy for mobile data gathering compared to the other existing schemes. In particular, we also show that the proposed scheme is robust in unreliable wireless environment under various packet losses. All this indicates that the proposed scheme can be an efficient alternative for data gathering application in WSNs. PMID:29117152
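The measure-then-reconstruct principle underlying such CS data gathering can be sketched with a generic sparse-recovery routine. The paper's scheme builds a Kronecker (spatial-temporal) sensing matrix from random-walk measurements; the sketch below is only a stand-in for the recovery stage, using orthogonal matching pursuit because it is short and dependency-free (the dimensions, seed, and sparsity level are illustrative):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse
    signal x from compressive measurements y = Phi @ x."""
    residual, support = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with the current residual...
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # ...then re-fit all selected columns by least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, m, k = 64, 24, 3                       # signal length, measurements, sparsity
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
x_hat = omp(Phi, Phi @ x_true, k)
```

The energy saving in the WSN setting comes from the left-hand side: only m = 24 measurements are communicated, yet an (approximately) n = 64-sample sparse signal can be reconstructed at the sink.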
Gordon, Jeremy W.; Niles, David J.; Fain, Sean B.; Johnson, Kevin M.
2014-01-01
Purpose To develop a novel imaging technique to reduce the number of excitations and required scan time for hyperpolarized 13C imaging. Methods A least-squares based optimization and reconstruction is developed to simultaneously solve for both spatial and spectral encoding. By jointly solving both domains, spectral imaging can potentially be performed with a spatially oversampled single echo spiral acquisition. Digital simulations, phantom experiments, and initial in vivo hyperpolarized [1-13C]pyruvate experiments were performed to assess the performance of the algorithm as compared to a multi-echo approach. Results Simulations and phantom data indicate that accurate single echo imaging is possible when coupled with oversampling factors greater than six (corresponding to a worst case of pyruvate to metabolite ratio < 9%), even in situations of substantial T2* decay and B0 heterogeneity. With lower oversampling rates, two echoes are required for similar accuracy. These results were confirmed with in vivo data experiments, showing accurate single echo spectral imaging with an oversampling factor of 7 and two echo imaging with an oversampling factor of 4. Conclusion The proposed k-t approach increases data acquisition efficiency by reducing the number of echoes required to generate spectroscopic images, thereby allowing accelerated acquisition speed, preserved polarization, and/or improved temporal or spatial resolution. Magn Reson Med PMID:23716402
Development of a Brillouin scattering based distributed fibre optic strain sensor
NASA Astrophysics Data System (ADS)
Brown, Anthony Wayne
2001-07-01
The parameters of the Brillouin spectrum of an optical fibre depend upon the strain and temperature conditions of the fibre. As a result, fibre optic distributed sensors based on Brillouin scattering can measure strain and temperature in arbitrary regions of a sensing fibre. In the past, such sensors have often been demonstrated under laboratory conditions, demonstrating the principle of operation. Although some field tests of temperature sensing have been reported, the actual deployment of such sensors in the field for strain measurements has been limited by poor spatial resolution (typically 1 m or more) and poor strain accuracy (±100 με). Also, cross-sensitivity of the Brillouin spectrum to temperature further reduces the accuracy of strain measurement, while long acquisition times hinder field use. The high level of user knowledge required to operate the equipment, and its lack of automation, are further limiting factors of the only commercially available unit. The potential benefits of distributed measurements are great for instrumentation of civil structures, provided that the above limitations are overcome. However, before this system is used with confidence by practitioners, it is essential that it can be effectively operated in field conditions. In light of this, the fibre optics group at the University of New Brunswick has been developing an automated system for field measurement of strain in civil structures, particularly in reinforced concrete. The development of the sensing system hardware and software was the main focus of this thesis. This has been made possible, in part, by observation of the Brillouin spectrum for the case of using very short light pulses (<10 ns). The end product of the development is a sensor with a spatial resolution that has been improved to 100 mm. Measurement techniques that improve system performance to measure strain to an accuracy of 10 με, and allow the simultaneous measurement of strain and temperature to accuracies of 204 με and 3°C, are presented. Finally, the results of field measurement of strain on a concrete structure are presented.
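Simultaneous strain-temperature measurement of this kind rests on having two Brillouin observables with different sensitivities, so that a 2x2 linear system can be inverted. The sketch below uses the frequency shift and the relative Brillouin power as the two observables; the sensitivity coefficients are assumed, representative values for standard single-mode fibre (not numbers from this thesis) and a real sensor must be calibrated:

```python
import numpy as np

# Assumed, representative sensitivities (NOT calibrated values):
C_EPS_NU = 0.05    # Brillouin frequency shift per microstrain, MHz/ue
C_T_NU = 1.1       # Brillouin frequency shift per deg C, MHz/C
C_EPS_P = -9e-4    # relative Brillouin power per microstrain, %/ue
C_T_P = 0.36       # relative Brillouin power per deg C, %/C

def strain_and_temperature(d_nu, d_p):
    """Recover strain (ue) and temperature change (C) simultaneously by
    solving the 2x2 linear system formed by the measured change in
    Brillouin frequency shift (d_nu, MHz) and relative power (d_p, %)."""
    A = np.array([[C_EPS_NU, C_T_NU],
                  [C_EPS_P, C_T_P]])
    eps, d_t = np.linalg.solve(A, np.array([d_nu, d_p]))
    return float(eps), float(d_t)
```

The achievable accuracy is governed by how well-conditioned this system is: the closer the two sensitivity ratios are, the more measurement noise is amplified, which is one reason the simultaneous strain-temperature accuracy quoted above is coarser than the strain-only figure.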
Absolute vs. relative error characterization of electromagnetic tracking accuracy
NASA Astrophysics Data System (ADS)
Matinfar, Mohammad; Narayanasamy, Ganesh; Gutierrez, Luis; Chan, Raymond; Jain, Ameet
2010-02-01
Electromagnetic (EM) tracking systems are often used for real time navigation of medical tools in an Image Guided Therapy (IGT) system. They are specifically advantageous when the medical device requires tracking within the body of a patient where line of sight constraints prevent the use of conventional optical tracking. EM tracking systems are however very sensitive to electromagnetic field distortions. These distortions, arising from changes in the electromagnetic environment due to the presence of conductive ferromagnetic surgical tools or other medical equipment, limit the accuracy of EM tracking, in some cases potentially rendering tracking data unusable. We present a mapping method for the operating region over which EM tracking sensors are used, allowing for characterization of measurement errors, in turn providing physicians with visual feedback about measurement confidence or reliability of localization estimates. In this instance, we employ a calibration phantom to assess distortion within the operating field of the EM tracker and to display in real time the distribution of measurement errors, as well as the location and extent of the field associated with minimal spatial distortion. The accuracy is assessed relative to successive measurements. Error is computed for a reference point and consecutive measurement errors are displayed relative to the reference in order to characterize the accuracy in near-real-time. In an initial set-up phase, the phantom geometry is calibrated by registering the data from a multitude of EM sensors in a non-ferromagnetic ("clean") EM environment. The registration results in the locations of sensors with respect to each other and defines the geometry of the sensors in the phantom. In a measurement phase, the position and orientation data from all sensors are compared with the known geometry of the sensor spacing, and localization errors (displacement and orientation) are computed. 
Based on error thresholds provided by the operator, the spatial distribution of localization errors are clustered and dynamically displayed as separate confidence zones within the operating region of the EM tracker space.
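The relative-error idea described above exploits the fact that inter-sensor distances are invariant to rigid motion of the phantom, so discrepancies against the calibrated geometry isolate field distortion without ground-truth absolute poses. A minimal sketch (array shapes and the toy geometry are illustrative, not the authors' phantom):

```python
import numpy as np

def pairwise_distances(pts):
    """All inter-sensor Euclidean distances for an (N, 3) position array."""
    diff = pts[:, None, :] - pts[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def relative_distance_errors(measured, phantom_geometry):
    """Discrepancies between measured inter-sensor distances and the
    known (calibrated) phantom geometry. Thresholding this matrix is
    what clusters the workspace into confidence zones."""
    return np.abs(pairwise_distances(measured) - pairwise_distances(phantom_geometry))

geometry = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0],
                     [0.0, 5.0, 0.0], [0.0, 0.0, 5.0]])
# A rigidly translated measurement: distances match, errors ~ 0.
errs_clean = relative_distance_errors(geometry + np.array([10.0, -2.0, 3.0]), geometry)
# Distort one sensor along x: every pairing involving it shows error.
distorted = geometry + np.array([10.0, -2.0, 3.0])
distorted[1, 0] += 1.0
errs_distorted = relative_distance_errors(distorted, geometry)
```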
NASA Astrophysics Data System (ADS)
Castillo, Richard; Castillo, Edward; Fuentes, David; Ahmad, Moiz; Wood, Abbie M.; Ludwig, Michelle S.; Guerrero, Thomas
2013-05-01
Landmark point-pairs provide a strategy to assess deformable image registration (DIR) accuracy in terms of the spatial registration of the underlying anatomy depicted in medical images. In this study, we propose to augment a publicly available database (www.dir-lab.com) of medical images with large sets of manually identified anatomic feature pairs between breath-hold computed tomography (BH-CT) images for DIR spatial accuracy evaluation. Ten BH-CT image pairs were randomly selected from the COPDgene study cases. Each patient had received CT imaging of the entire thorax in the supine position at one-fourth dose normal expiration and maximum effort full dose inspiration. Using dedicated in-house software, an imaging expert manually identified large sets of anatomic feature pairs between images. Estimates of inter- and intra-observer spatial variation in feature localization were determined by repeat measurements of multiple observers over subsets of randomly selected features. 7298 anatomic landmark features were manually paired between the 10 sets of images. The number of feature pairs per case ranged from 447 to 1172. Average 3D Euclidean landmark displacements varied substantially among cases, ranging from 12.29 (SD: 6.39) to 30.90 (SD: 14.05) mm. Repeat registration of uniformly sampled subsets of 150 landmarks for each case yielded estimates of observer localization error, which ranged on average from 0.58 (SD: 0.87) to 1.06 (SD: 2.38) mm for each case. The additions to the online web database (www.dir-lab.com) described in this work will broaden the applicability of the reference data, providing a freely available common dataset for targeted critical evaluation of DIR spatial accuracy performance in multiple clinical settings. Estimates of observer variance in feature localization suggest consistent spatial accuracy for all observers across both four-dimensional CT and COPDgene patient cohorts.
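The displacement statistics quoted above reduce to the Euclidean norms of landmark offsets, scaled from voxel indices to millimetres. A minimal sketch (voxel spacing and point values are illustrative, not taken from the database):

```python
import numpy as np

def displacement_stats(pts_a, pts_b, spacing=(1.0, 1.0, 1.0)):
    """Mean and SD of 3-D Euclidean displacements between homologous
    landmark pairs. `pts_a`/`pts_b` are (N, 3) voxel indices in the two
    breath-hold images; `spacing` converts voxel offsets to mm."""
    d = (np.asarray(pts_b) - np.asarray(pts_a)) * np.asarray(spacing)
    mags = np.linalg.norm(d, axis=1)
    return float(mags.mean()), float(mags.std())

# One landmark displaced by (3, 4, 0) voxels on a unit in-plane grid,
# one unmoved: displacements of 5 mm and 0 mm.
a = np.array([[10.0, 10.0, 5.0], [40.0, 20.0, 8.0]])
b = np.array([[13.0, 14.0, 5.0], [40.0, 20.0, 8.0]])
mean_mm, sd_mm = displacement_stats(a, b, spacing=(1.0, 1.0, 2.5))
```

Applied to a DIR result, the same statistic computed between warped and reference landmark sets is the spatial accuracy measure the database is designed to support.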
Data Processing and Quality Evaluation of a Boat-Based Mobile Laser Scanning System
Vaaja, Matti; Kukko, Antero; Kaartinen, Harri; Kurkela, Matti; Kasvi, Elina; Flener, Claude; Hyyppä, Hannu; Hyyppä, Juha; Järvelä, Juha; Alho, Petteri
2013-01-01
Mobile mapping systems (MMSs) are used for mapping topographic and urban features which are difficult and time consuming to measure with other instruments. The benefits of MMSs include efficient data collection and versatile usability. This paper investigates the data processing steps and quality of a boat-based mobile mapping system (BoMMS) data for generating terrain and vegetation points in a river environment. Our aim in data processing was to filter noise points, detect shorelines as well as points below water surface and conduct ground point classification. Previous studies of BoMMS have investigated elevation accuracies and usability in detection of fluvial erosion and deposition areas. The new findings concerning BoMMS data are that the improved data processing approach allows for identification of multipath reflections and shoreline delineation. We demonstrate the possibility to measure bathymetry data in shallow (0–1 m) and clear water. Furthermore, we evaluate for the first time the accuracy of the BoMMS ground points classification compared to manually classified data. We also demonstrate the spatial variations of the ground point density and assess elevation and vertical accuracies of the BoMMS data. PMID:24048340
Lightdrum—Portable Light Stage for Accurate BTF Measurement on Site
Havran, Vlastimil; Hošek, Jan; Němcová, Šárka; Čáp, Jiří; Bittner, Jiří
2017-01-01
We propose a miniaturised light stage for measuring the bidirectional reflectance distribution function (BRDF) and the bidirectional texture function (BTF) of surfaces on site in real world application scenarios. The main principle of our lightweight BTF acquisition gantry is a compact hemispherical skeleton with cameras along the meridian and with light emitting diode (LED) modules shining light onto a sample surface. The proposed device is portable and achieves a high speed of measurement while maintaining high degree of accuracy. While the positions of the LEDs are fixed on the hemisphere, the cameras allow us to cover the range of the zenith angle from 0° to 75° and by rotating the cameras along the axis of the hemisphere we can cover all possible camera directions. This allows us to take measurements with almost the same quality as existing stationary BTF gantries. Two degrees of freedom can be set arbitrarily for measurements and the other two degrees of freedom are fixed, which provides a tradeoff between accuracy of measurements and practical applicability. Assuming that a measured sample is locally flat and spatially accessible, we can set the correct perpendicular direction against the measured sample by means of an auto-collimator prior to measuring. Further, we have designed and used a marker sticker method to allow for the easy rectification and alignment of acquired images during data processing. We show the results of our approach by images rendered for 36 measured material samples. PMID:28241466
NASA Astrophysics Data System (ADS)
McMullen, Kyla A.
Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Furthermore, the concept of virtually walking through an auditory environment did not previously exist. Such an interface has numerous potential uses, ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, various concerns should be addressed. First, to widely incorporate spatial audio into real-world systems, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group. Users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impacts of practical scenarios, the present work assessed the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources. 
The present study also found that the presence of visual reference frames significantly increased recall accuracy. Additionally, the incorporation of drastic attenuation significantly improved environment recall accuracy. By investigating the aforementioned concerns, the present study took initial steps toward guiding the design of virtual auditory environments that support spatial configuration recall.
Increasing Accuracy in Computed Inviscid Boundary Conditions
NASA Technical Reports Server (NTRS)
Dyson, Roger
2004-01-01
A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting not only for acoustic waves but also for vorticity and entropy waves at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified expansion solution approximation (MESA) scheme. This is appropriate because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: it involves correction of surface-normal spatial pressure derivatives at a boundary surface so as to satisfy the governing equations and the boundary conditions, thereby achieving arbitrarily high orders of time accuracy in special cases. 
The boundary conditions can now include time derivatives of the surface-normal velocity (consistent with no flow through the boundary) up to arbitrarily high order. The corrections for the first-order spatial derivatives of pressure are calculated by use of the first-order time derivative of velocity. The corrected first-order spatial derivatives are used to calculate the second-order time derivatives of velocity, which, in turn, are used to calculate the corrections for the second-order pressure derivatives. The process is repeated, progressing through increasing orders of derivatives, until the desired accuracy is attained.
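The core idea of trading time derivatives for spatial derivatives at a boundary can be illustrated on a model problem far simpler than the inviscid flow equations. The sketch below is a hypothetical illustration, not the MESA scheme itself: for the 1D advection equation u_t + c u_x = 0, every time derivative at a boundary point equals the matching spatial derivative scaled by (-c)^n, so time accuracy can be raised by correcting spatial derivatives.

```python
import numpy as np

# For the 1D advection equation u_t + c*u_x = 0, time derivatives at a
# boundary can be exchanged for spatial ones: d^n u/dt^n = (-c)^n d^n u/dx^n.
# We verify the second-order case numerically for the traveling-wave
# solution u(x, t) = sin(x - c*t).

c = 2.0
x0, t0 = 0.3, 0.0   # boundary location and evaluation time
h = 1e-4            # finite-difference step

def u(x, t):
    return np.sin(x - c * t)

# second time derivative via central differences
u_tt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h**2
# second spatial derivative via central differences
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2

# the exchange rule predicts u_tt = c^2 * u_xx
assert abs(u_tt - c**2 * u_xx) < 1e-4
```

The same exchange applies recursively at higher orders, which is the pattern the abstract describes: each corrected spatial derivative feeds the next time derivative.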
Jia, Chao; Luo, Bowen; Wang, Haoyu; Bian, Yongqian; Li, Xueyong; Li, Shaohua; Wang, Hongjun
2017-09-01
Advances in nano-/microfabrication allow the fabrication of biomimetic substrates for various biomedical applications. In particular, it would be beneficial to control the distribution of cells and relevant biomolecules on an extracellular matrix (ECM)-like substrate with arbitrary micropatterns. In this regard, the possibilities of patterning biomolecules and cells on nanofibrous matrices are explored here by combining inkjet printing and electrospinning. Upon investigation of key parameters for patterning accuracy and reproducibility, three independent studies are performed to demonstrate the potential of this platform for: i) transforming growth factor (TGF)-β1-induced spatial differentiation of fibroblasts, ii) spatiotemporal interactions between breast cancer cells and stromal cells, and iii) cancer-regulated angiogenesis. The results show that TGF-β1 induces local fibroblast-to-myofibroblast differentiation in a dose-dependent fashion, and breast cancer clusters recruit activated stromal cells and guide the sprouting of endothelial cells in a spatially resolved manner. The established platform not only provides strategies to fabricate ECM-like interfaces for medical devices, but also offers the capability of spatially controlling cell organization for fundamental studies, and for high-throughput screening of various biomolecules for stem cell differentiation and cancer therapeutics. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Othman, Arsalan A.; Gloaguen, Richard
2017-09-01
Lithological mapping in mountainous regions is often impeded by limited accessibility due to relief. This study aims to evaluate (1) the performance of different supervised classification approaches using remote sensing data and (2) the use of additional information such as geomorphology. We exemplify the methodology in the Bardi-Zard area in NE Iraq, a part of the Zagros Fold-Thrust Belt known for its chromite deposits. We highlight the improvement of remote sensing geological classification achieved by integrating geomorphic features and spatial information into the classification scheme. We applied a Maximum Likelihood (ML) classification method alongside two Machine Learning Algorithms (MLA), Support Vector Machine (SVM) and Random Forest (RF), to allow the joint use of geomorphic features, Band Ratios (BR), Principal Component Analysis (PCA), spatial information (spatial coordinates) and multispectral data of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) satellite. The RF algorithm showed reliable results and discriminated serpentinite; talus and terrace deposits; red argillites with conglomerates and limestone; limy conglomerates and limestone conglomerates; tuffites interbedded with basic lavas; limestone and metamorphosed limestone; and reddish-green shales. The best overall accuracy (∼80%) was achieved by the Random Forest (RF) algorithm in the majority of the sixteen tested dataset combinations.
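The joint-feature classification workflow described above can be sketched as follows. This is an illustrative stand-in, not the paper's pipeline: synthetic features substitute for the ASTER bands, geomorphic derivatives and spatial coordinates, and all class counts and parameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the inputs: 6 spectral bands, 2 geomorphic
# features, and 2 spatial coordinates per pixel, with 4 lithological classes.
n = 2000
labels = rng.integers(0, 4, n)
spectral = rng.normal(labels[:, None], 1.0, (n, 6))       # class-dependent bands
geomorph = rng.normal(0.5 * labels[:, None], 1.0, (n, 2)) # weaker geomorphic signal
coords = rng.normal(labels[:, None], 2.0, (n, 2))         # classes cluster in space
X = np.hstack([spectral, geomorph, coords])               # joint feature stack

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(accuracy_score(y_te, pred), cohen_kappa_score(y_te, pred))
```

Stacking heterogeneous features column-wise, as here, is what lets a single RF exploit spectral, geomorphic and positional information jointly.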
Plis, Sergey M; George, J S; Jun, S C; Paré-Blagoev, J; Ranken, D M; Wood, C C; Schmidt, D M
2007-01-01
We propose a new model to approximate spatiotemporal noise covariance for use in neural electromagnetic source analysis, one that better captures temporal variability in background activity. As with other existing formalisms, our model employs a Kronecker product of matrices representing temporal and spatial covariance. In our model, spatial components are allowed to have differing temporal covariances: variability is represented as a series of Kronecker products of spatial component covariances and corresponding temporal covariances. Unlike previous attempts to model covariance through a sum of Kronecker products, our model is designed to have a computationally manageable inverse. Despite its increased descriptive power, inversion of the model is fast, making it useful in source analysis. We have explored two versions of the model. One is estimated under the assumption that the spatial components of background noise have uncorrelated time courses. Another version, which gives a closer approximation, is based on the assumption that the time courses are statistically independent. The accuracy of the structural approximation is compared to that of an existing model based on a single Kronecker product, using both the Frobenius norm of the difference between the spatiotemporal sample covariance and each model, and scatter plots. The performance of our model and previous models is compared in source analysis of a large number of single-dipole problems with simulated time courses and with background from authentic magnetoencephalography data.
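The computational appeal of the Kronecker structure rests on the identity (A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹, which the single-product baseline exploits directly; the paper's contribution is a sum-of-products model engineered to retain a manageable inverse. The sketch below demonstrates only the single-product identity, with small illustrative matrix sizes.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_spd(n):
    # random symmetric positive-definite matrix
    a = rng.normal(size=(n, n))
    return a @ a.T + n * np.eye(n)

S = random_spd(4)   # spatial covariance (e.g., 4 sensors)
T = random_spd(5)   # temporal covariance (e.g., 5 time samples)

C = np.kron(S, T)   # spatiotemporal covariance as a Kronecker product

# Key computational property: the inverse factors the same way, so a 20x20
# inverse reduces to one 4x4 and one 5x5 inverse.
C_inv = np.kron(np.linalg.inv(S), np.linalg.inv(T))
assert np.allclose(C_inv, np.linalg.inv(C))
```

For realistic sensor and sample counts (hundreds by hundreds), this factorization is what makes whitening with such a covariance model tractable at all.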
Processing of airborne laser scanning data to generate accurate DTM for floodplain wetland
NASA Astrophysics Data System (ADS)
Szporak-Wasilewska, Sylwia; Mirosław-Świątek, Dorota; Grygoruk, Mateusz; Michałowski, Robert; Kardel, Ignacy
2015-10-01
The structure of a floodplain, especially its topography and vegetation, influences the overland flow and dynamics of floods, which are key factors shaping ecosystems in surface water-fed wetlands. Therefore, elaboration of a digital terrain model (DTM) of high spatial accuracy is crucial in hydrodynamic flow modelling in river valleys. In this study the research was conducted in a unique Central European complex of fens and marshes, the Lower Biebrza river valley. The area is represented mainly by peat ecosystems, which according to the EU Water Framework Directive (WFD) are called "water-dependent ecosystems". Development of an accurate DTM in these areas, which are overgrown by dense wetland vegetation consisting of alder forest, willow shrubs, reed, sedges and grass, is very difficult. Therefore, to represent the terrain with high accuracy, airborne laser scanning (ALS) data with a scanning density of 4 points/m² were used, and a correction of the "vegetation effect" on the DTM was performed. This correction utilized remotely sensed images, a topographical survey using Real Time Kinematic positioning, and vegetation height measurements. In order to classify the different types of vegetation within the research area, object-based image analysis (OBIA) was used. OBIA allowed partitioning the remotely sensed imagery into meaningful image-objects and assessing their characteristics through spatial and spectral scale. The final maps of vegetation patches, which include attributes of vegetation height and vegetation spectral properties, utilized both the laser scanning data and vegetation indices developed on the basis of airborne and satellite imagery. These data were used in the process of segmentation, attribution and classification. Several different vegetation indices were tested to distinguish the different types of vegetation in the wetland area. The OBIA classification allowed correction of the "vegetation effect" on the DTM. 
The final digital terrain model was compared and examined, within the distinguished land cover classes (formed mainly by the natural vegetation of the river valley), against archival height models developed through interpolation of ground points measured with GPS RTK, and also against elevation models from the ASTER GDEM and SRTM programs. The research presented in this paper allowed us to improve the quality of hydrodynamic modelling in the surface water-fed wetlands protected within Biebrza National Park. Additionally, the comparison with other digital terrain models demonstrated the importance of accurate topography products in such modelling. The ALS data also significantly improved the accuracy and currency of the mapped course of the Biebrza river, its tributaries and the locations of the numerous oxbows typical of this part of the river valley, in comparison to previously available data. This type of data also helped to refine the river valley cross-sections, delineate river banks and develop a slope map of the research area.
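The essence of a per-class "vegetation effect" correction can be sketched very simply: subtract a class-specific mean vegetation height from the laser-derived surface elevations. The class codes and offsets below are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

# "Vegetation effect" correction sketch: subtract a per-class height offset
# (in practice estimated from RTK ground truth and vegetation height
# measurements) from the ALS-derived surface model.
dsm = np.array([[102.1, 102.3], [101.8, 102.0]])   # surface elevations (m)
veg_class = np.array([[1, 2], [0, 1]])             # 0=grass, 1=sedge, 2=reed (hypothetical)
offsets = np.array([0.05, 0.30, 0.80])             # assumed mean vegetation height per class (m)

dtm = dsm - offsets[veg_class]                     # corrected terrain model
print(dtm)
```

Indexing the offset table with the class raster, as in `offsets[veg_class]`, applies the right correction to every cell in one vectorized step.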
Detection of the spatial accuracy of an O-arm in the region of surgical interest
NASA Astrophysics Data System (ADS)
Koivukangas, Tapani; Katisko, Jani P. A.; Koivukangas, John P.
2013-03-01
Medical imaging is an essential component of a wide range of surgical procedures. For image guided surgical (IGS) procedures, medical images are the main source of information. IGS procedures rely largely on the obtained image data, so the data need to provide differentiation between normal and abnormal tissues, especially when other surgical guidance devices are used in the procedures. The image data also need to provide an accurate spatial representation of the patient. This research has concentrated on the concept of accuracy assessment of IGS devices to meet the needs of quality assurance in the hospital environment. For this purpose, two precision engineered accuracy assessment phantoms have been developed as advanced materials and methods for the community. The phantoms were designed to mimic the volume of a human head as the common region of surgical interest (ROSI). This paper introduces the utilization of the phantoms in the spatial accuracy assessment of a commercial surgical 3D CT scanner, the O-arm. The study presents methods and results of image quality detection of possible geometrical distortions in the region of surgical interest. The results show that in the pre-determined ROSI there are clear image distortions and artefacts when objects are scanned with excessively high imaging parameters. On the other hand, when optimal parameters are used, the O-arm causes minimal error in IGS accuracy. The detected spatial inaccuracy of the O-arm with the parameters used was less than 1.00 mm.
Comparative analysis of Worldview-2 and Landsat 8 for coastal saltmarsh mapping accuracy assessment
NASA Astrophysics Data System (ADS)
Rasel, Sikdar M. M.; Chang, Hsing-Chung; Diti, Israt Jahan; Ralph, Tim; Saintilan, Neil
2016-05-01
Coastal saltmarshes and their constituent components and processes are of scientific interest due to their ecological functions and services. However, the heterogeneity and seasonal dynamics of the coastal wetland system make it challenging to map saltmarshes with remotely sensed data. This study selected four important saltmarsh species, Phragmites australis, Sporobolus virginicus, Ficinia nodosa and Schoenoplectus sp., as well as a mangrove and a pine tree species, Avicennia and Casuarina sp. respectively. High spatial resolution Worldview-2 data and coarser spatial resolution Landsat 8 imagery were selected for this study. Among the selected vegetation types, some patches were fragmented and close to the spatial resolution of the Worldview-2 data, while some patches were larger than the 30 m resolution of the Landsat 8 data. This study aims to test the effectiveness of different classifiers on imagery with various spatial and spectral resolutions. Three classification algorithms, Maximum Likelihood Classifier (MLC), Support Vector Machine (SVM) and Artificial Neural Network (ANN), were tested, and the mapping accuracies of the results derived from both satellite images were compared. For the Worldview-2 data, SVM gave the highest overall accuracy (92.12%, kappa = 0.90), followed by ANN (90.82%, kappa = 0.89) and MLC (90.55%, kappa = 0.88). For the Landsat 8 data, MLC (82.04%) showed the highest classification accuracy compared to SVM (77.31%) and ANN (75.23%). The producer's accuracies of the classification results are also presented in the paper.
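The accuracy figures quoted above (overall accuracy, kappa, producer's accuracy) all derive from a confusion matrix. A minimal sketch of those computations, using an invented 3×3 matrix rather than the study's data:

```python
import numpy as np

# Overall accuracy, producer's accuracy, and Cohen's kappa from a confusion
# matrix (rows = reference classes, columns = mapped classes). The 3x3 matrix
# below is illustrative only.
cm = np.array([[50,  3,  2],
               [ 4, 45,  6],
               [ 1,  5, 44]], dtype=float)

total = cm.sum()
overall = np.trace(cm) / total                   # fraction on the diagonal
producers = np.diag(cm) / cm.sum(axis=1)         # per-class recall ("producer's")
expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
kappa = (overall - expected) / (1 - expected)    # chance-corrected agreement

print(round(overall, 4), np.round(producers, 3), round(kappa, 4))
```

Kappa discounts the agreement expected by chance from the marginal class frequencies, which is why it is reported alongside overall accuracy in the abstract.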
Estimation and Validation of Oceanic Mass Circulation from the GRACE Mission
NASA Technical Reports Server (NTRS)
Boy, J.-P.; Rowlands, D. D.; Sabaka, T. J.; Luthcke, S. B.; Lemoine, F. G.
2011-01-01
Since the launch of the Gravity Recovery And Climate Experiment (GRACE) in March 2002, the Earth's surface mass variations have been monitored with unprecedented accuracy and resolution. Compared to the classical spherical harmonic solutions, global high-resolution mascon solutions allow the retrieval of mass variations with higher spatial and temporal sampling (2 degrees and 10 days). We present here the validation of the GRACE global mascon solutions by comparing mass estimates to a set of about 100 ocean bottom pressure (OBP) records, and show that forward modelling of continental hydrology prior to the inversion of the K-band range-rate data allows better estimates of ocean mass variations. We also validate our GRACE results against OBP variations modelled by different state-of-the-art ocean general circulation models, including ECCO (Estimating the Circulation and Climate of the Ocean) and operational and reanalysis products from the MERCATOR project.
An information-theoretic approach to designing the plane spacing for multifocal plane microscopy
Tahmasbi, Amir; Ram, Sripad; Chao, Jerry; Abraham, Anish V.; Ward, E. Sally; Ober, Raimund J.
2015-01-01
Multifocal plane microscopy (MUM) is a 3D imaging modality which enables the localization and tracking of single molecules at high spatial and temporal resolution by simultaneously imaging distinct focal planes within the sample. MUM overcomes the depth discrimination problem of conventional microscopy and allows high accuracy localization of a single molecule in 3D along the z-axis. An important question in the design of MUM experiments concerns the appropriate number of focal planes and their spacings to achieve the best possible 3D localization accuracy along the z-axis. Ideally, it is desired to obtain a 3D localization accuracy that is uniform over a large depth and has small numerical values, which guarantee that the single molecule is continuously detectable. Here, we address this concern by developing a plane spacing design strategy based on the Fisher information. In particular, we analyze the Fisher information matrix for the 3D localization problem along the z-axis and propose spacing scenarios termed the strong coupling and the weak coupling spacings, which provide appropriate 3D localization accuracies. Using these spacing scenarios, we investigate the detectability of the single molecule along the z-axis and study the effect of changing the number of focal planes on the 3D localization accuracy. We further review a software module we recently introduced, the MUMDesignTool, that helps to design the plane spacings for a MUM setup. PMID:26113764
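The design question above can be caricatured numerically: Fisher information from independent focal planes adds, and the achievable localization accuracy is bounded below by the square root of the Cramér-Rao lower bound, 1/√I(z). The Gaussian-shaped per-plane information profile below is an invented stand-in for the paper's PSF-based calculation; only the additivity and the 1/√I bound are general.

```python
import numpy as np

# Hypothetical model: each focal plane contributes Fisher information about
# the axial position z, peaking at its focal depth. Total information is the
# sum over planes; the accuracy bound is 1/sqrt(I_total).
def plane_info(z, z_focus, peak=100.0, width=0.4):
    return peak * np.exp(-((z - z_focus) / width) ** 2)

z = np.linspace(-1.0, 1.0, 201)       # axial positions over the depth range

def accuracy_bound(plane_positions):
    total = sum(plane_info(z, zf) for zf in plane_positions)
    return 1.0 / np.sqrt(total)

one_plane = accuracy_bound([0.0])
two_planes = accuracy_bound([-0.5, 0.5])

# A second, well-spaced plane both lowers the worst-case bound and flattens
# it over the depth range (smaller max/min ratio = more uniform accuracy).
print(one_plane.max(), two_planes.max())
```

This is exactly the trade-off the strong/weak coupling spacings address: spacing planes so the summed information, and hence the bound, stays uniformly small over the depth of interest.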
Zarco-Perello, Salvador; Simões, Nuno
2017-01-01
Information about the distribution and abundance of the habitat-forming sessile organisms in marine ecosystems is of great importance for conservation and natural resource managers. Spatial interpolation methodologies can be useful to generate this information from in situ sampling points, especially in circumstances where remote sensing methodologies cannot be applied due to small-scale spatial variability of the natural communities and low light penetration in the water column. Interpolation methods are widely used in environmental sciences; however, published studies using these methodologies in coral reef science are scarce. We compared the accuracy of the two most commonly used interpolation methods in all disciplines, inverse distance weighting (IDW) and ordinary kriging (OK), to predict the distribution and abundance of hard corals, octocorals, macroalgae, sponges and zoantharians and to identify hotspots of these habitat-forming organisms, using data sampled at three different spatial scales (5, 10 and 20 m) in Madagascar reef, Gulf of Mexico. The deeper sandy environments of the leeward and windward regions of Madagascar reef were dominated by macroalgae, followed by octocorals. However, the shallow rocky environments of the reef crest had the highest richness of habitat-forming groups of organisms; here, we registered high abundances of octocorals and macroalgae, with sponges, Millepora alcicornis and zoantharians dominating in some patches, creating high levels of habitat heterogeneity. IDW and OK generated similar maps of distribution for all the taxa; however, cross-validation tests showed that IDW outperformed OK in the prediction of their abundances. When the sampling distance was 20 m, both interpolation techniques performed poorly, but as the sampling was done at shorter distances prediction accuracies increased, especially for IDW. 
OK had higher mean prediction errors and failed to correctly interpolate the highest abundance values measured in situ, except for macroalgae, whereas IDW had lower mean prediction errors and high correlations between predicted and measured values in all cases when sampling was every 5 m. The accurate spatial interpolations created using IDW allowed us to see the spatial variability of each taxon at a biological and spatial resolution that remote sensing would not have been able to produce. Our study sets the basis for further research projects and conservation management in Madagascar reef and encourages similar studies in the region and in other parts of the world where remote sensing technologies are not suitable for use. PMID:29204321
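IDW and its leave-one-out cross-validation, the evaluation strategy used above, fit in a few lines. The sketch below uses a synthetic smooth abundance surface as a stand-in for the field transect data; the power parameter and point counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def idw(xy_known, v_known, xy_query, power=2.0):
    # Inverse distance weighting: each prediction is a distance-weighted
    # average of the known values.
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
    d = np.maximum(d, 1e-12)            # guard against division by zero
    w = 1.0 / d ** power
    return (w * v_known).sum(axis=1) / w.sum(axis=1)

# Synthetic abundance surface sampled at random points (stand-in for transects).
xy = rng.uniform(0, 100, (150, 2))
values = np.sin(xy[:, 0] / 15) + np.cos(xy[:, 1] / 15)

# Leave-one-out cross-validation, as used to compare IDW against kriging:
# predict each sample from all the others and record the error.
errors = []
for i in range(len(xy)):
    mask = np.arange(len(xy)) != i
    pred = idw(xy[mask], values[mask], xy[i:i + 1])[0]
    errors.append(abs(pred - values[i]))
print(np.mean(errors))
```

Densifying the sampling (the 20 m vs 5 m comparison in the study) shrinks the nearest-neighbour distances and, for a smooth field, the cross-validation error with it.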
Huperzine A: Behavioral and Pharmacological Evaluation in Rhesus Monkeys
2008-06-01
challenged with 30 µg/kg scopolamine. Doses of 1 and 10 µg/kg HUP improved choice accuracy on a previously learned delayed spatial memory task in the ... elderly subjects, and doses of 10 and 100 µg/kg reversed the scopolamine-induced deficits in the younger monkeys. Unfortunately, no data regarding ... interval) in the spatial memory task differentially modulated the drug effects on performance. Specifically, scopolamine impaired accuracy
Evaluating RGB photogrammetry and multi-temporal digital surface models for detecting soil erosion
NASA Astrophysics Data System (ADS)
Anders, Niels; Keesstra, Saskia; Seeger, Manuel
2013-04-01
Photogrammetry is a widely used tool for generating high-resolution digital surface models. Unmanned Aerial Vehicles (UAVs) equipped with a Red Green Blue (RGB) camera have great potential for quickly acquiring multi-temporal high-resolution orthophotos and surface models. Such datasets would ease the monitoring of geomorphological processes, such as local soil erosion and rill formation after heavy rainfall events. In this study we test a photogrammetric setup to determine data requirements for soil erosion studies with UAVs. We used a rainfall simulator (5 m²) and, above it, a rig with an attached Panasonic GX1 16-megapixel digital camera and a 20 mm lens. The soil material in the simulator consisted of loamy sand at a slope of 5 degrees. Stereo-pair images were taken before and after rainfall simulation with 75-85% overlap. The acquired images were automatically mosaicked to create high-resolution orthorectified images and digital surface models (DSMs). We resampled the DSM to different spatial resolutions to analyze the effect of cell size on the accuracy of measured rill depths and soil loss estimations, and determined an optimal cell size (and thus flight altitude). Furthermore, the high spatial accuracy of the acquired surface models allows further analysis of rill formation and channel initiation related to, e.g., surface roughness. We suggest implementing near-infrared and temperature sensors to combine soil moisture and soil physical properties with surface morphology in future investigations.
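The core analysis, differencing two surface models and checking what coarser cell sizes do to the result, can be sketched as below. The synthetic "rill" and all dimensions are invented for illustration.

```python
import numpy as np

# Soil-loss estimation by differencing before/after surface models, plus the
# effect of resampling to a coarser cell size via block averaging. The
# synthetic rill (a narrow trench in a flat plot) is purely illustrative.
cell = 0.01                                  # 1 cm cells
before = np.zeros((200, 200))                # flat 2 m x 2 m plot
after = before.copy()
after[:, 100:103] -= 0.05                    # 3 cm wide rill, 5 cm deep

def soil_loss(dsm_before, dsm_after, cell_size):
    # eroded volume = summed elevation loss times cell area
    return (dsm_before - dsm_after).sum() * cell_size ** 2

def block_mean(dsm, k):
    h, w = dsm.shape
    return dsm.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

fine_loss = soil_loss(before, after, cell)
coarse_loss = soil_loss(block_mean(before, 10), block_mean(after, 10), cell * 10)

# Total eroded volume survives averaging, but the rill depth is smeared out,
# so rill geometry (depth, width) degrades even when volume does not.
print(fine_loss, coarse_loss,
      (before - after).max(),
      (block_mean(before, 10) - block_mean(after, 10)).max())
```

This separation, volume estimates being robust to cell size while rill morphology is not, is exactly why an optimal cell size (and flight altitude) matters for rill-scale analysis.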
Breast density characterization using texton distributions.
Petroudi, Styliani; Brady, Michael
2011-01-01
Breast density has been shown to be one of the most significant risk factors for developing breast cancer, with women with dense breasts at four to six times higher risk. The Breast Imaging Reporting and Data System (BI-RADS) has a four-class classification scheme that describes the different breast densities. However, there is great inter- and intra-observer variability among clinicians in reporting a mammogram's density class. This work presents a novel texture classification method and its application to the development of a completely automated breast density classification system. The new method represents the mammogram using textons, which can be thought of as the building blocks of texture under the operational definition of Leung and Malik as clustered filter responses. The proposed method characterizes the mammographic appearance of the different density patterns by evaluating the texton spatial dependence matrix (TSDM) in the breast region's corresponding texton map. The TSDM is a texture model that captures both statistical and structural texture characteristics. The normalized TSDM matrices are evaluated for mammograms from the different density classes and corresponding texture models are established. Classification is achieved using a chi-square distance measure. The fully automated TSDM breast density classification method is quantitatively evaluated on mammograms from all density classes from the Oxford Mammogram Database. The incorporation of texton spatial dependencies allows for classification accuracy reaching over 82%. The breast density classification accuracy is better using the texton TSDM than simple texton histograms.
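The simpler texton-histogram baseline that the TSDM is compared against, together with the chi-square distance used for classification, can be sketched as follows. The texton vocabularies and class distributions here are invented; the paper's TSDM additionally records spatial co-occurrence of texton labels, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(3)

def chi_square(h1, h2, eps=1e-12):
    # Chi-square distance between two normalized histograms.
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# Stand-in for texton maps: each "region" is a vector of texton labels (0..7);
# hypothetical density classes differ in their label distributions.
def sample_region(probs, n=5000):
    return rng.choice(len(probs), size=n, p=probs)

def histogram(labels, k=8):
    h = np.bincount(labels, minlength=k).astype(float)
    return h / h.sum()

class_probs = {
    "fatty": np.array([.30, .25, .15, .10, .08, .06, .04, .02]),
    "dense": np.array([.02, .04, .06, .08, .10, .15, .25, .30]),
}
# One model histogram per density class, built from "training" regions.
models = {c: histogram(sample_region(p)) for c, p in class_probs.items()}

# Classify a new region by the nearest model under the chi-square distance.
query = histogram(sample_region(class_probs["dense"]))
label = min(models, key=lambda c: chi_square(models[c], query))
print(label)
```

Replacing the histogram with a normalized co-occurrence matrix of texton pairs (the TSDM) keeps this same nearest-model, chi-square pipeline while adding the spatial structure that lifts accuracy.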
Spatial localization deficits and auditory cortical dysfunction in schizophrenia
Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.
2014-01-01
Background Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on the ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low-frequency tones generated from seven speakers concavely arranged with 30 degrees of separation. Results For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl's gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing the laterality of functioning. PMID:20619608
NASA Astrophysics Data System (ADS)
Othman, Arsalan; Gloaguen, Richard
2015-04-01
Topographic effects and complex vegetation cover hinder lithology classification in mountainous regions based not only on field data but also on reflectance remote sensing data. The area of interest, Bardi-Zard, is located in NE Iraq. It is part of the Zagros orogenic belt, where seven lithological units crop out, and is known for its chromite deposits. The aim of this study is to compare three machine learning algorithms (MLAs), Maximum Likelihood (ML), Support Vector Machines (SVM), and Random Forest (RF), in the context of a supervised lithology classification task using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) satellite data, its derivatives, spatial information (spatial coordinates) and geomorphic data. We emphasize the enhancement in remote sensing lithological mapping accuracy that arises from the integration of geomorphic features and spatial information (spatial coordinates) in the classifications. This study finds that RF outperforms the ML and SVM algorithms in almost all of the sixteen dataset combinations that were tested. The overall accuracy of the best dataset combination with the RF map for all seven classes reaches ~80%, the producer's and user's accuracies are ~73.91% and ~76.09% respectively, and the kappa coefficient is ~0.76. The Topographic Position Index (TPI) is more effective with the SVM algorithm than with the RF algorithm. This paper demonstrates that adding geomorphic indices such as TPI and spatial information to the dataset increases the lithological classification accuracy.
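The Topographic Position Index mentioned above is simply each cell's elevation minus the mean elevation of its neighborhood: positive on ridges and peaks, negative in valleys, near zero on flats. A minimal sketch (the window size is an assumption; published TPI variants use annular or circular neighborhoods):

```python
import numpy as np

def tpi(dem, size=5):
    # Topographic Position Index: cell elevation minus the mean of a
    # size x size neighborhood (edge cells use replicated padding).
    r = size // 2
    pad = np.pad(dem, r, mode="edge")
    acc = np.zeros_like(dem)
    for dy in range(size):
        for dx in range(size):
            acc += pad[dy:dy + dem.shape[0], dx:dx + dem.shape[1]]
    return dem - acc / size ** 2

y, x = np.mgrid[0:64, 0:64]
dem = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 50.0)  # one synthetic peak

t = tpi(dem)
print(t[32, 32], t[0, 0])  # positive at the summit, ~0 on the distant flat
```

A TPI raster computed this way is stacked alongside the spectral bands as one more feature column for the classifiers.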
A Unified Cropland Layer at 250-m for global agriculture monitoring
Waldner, François; Fritz, Steffen; Di Gregorio, Antonio; Plotnikov, Dmitry; Bartalev, Sergey; Kussul, Nataliia; Gong, Peng; Thenkabail, Prasad S.; Hazeu, Gerard; Klein, Igor; Löw, Fabian; Miettinen, Jukka; Dadhwal, Vinay Kumar; Lamarche, Céline; Bontemps, Sophie; Defourny, Pierre
2016-01-01
Accurate and timely information on the global cropland extent is critical for food security monitoring, water management and earth system modeling. Principally, it allows analysts to assess crop conditions from satellite image time-series and permits isolation of the agricultural component to focus on food security and the impacts of various climatic scenarios. However, despite its critical importance, accurate information on the spatial extent of cropland remains lacking, and cropland mapping with remote sensing imagery remains a major challenge. Following an exhaustive identification and collection of existing land cover maps, a multi-criteria analysis was designed at the country level to evaluate the fitness of each cropland map with regard to four dimensions: its timeliness, its legend, its resolution adequacy and its confidence level. As a result, a Unified Cropland Layer that combines the fittest products into a 250 m global cropland map was assembled. With an evaluated accuracy ranging from 82% to 95%, the Unified Cropland Layer successfully improved the accuracy compared to single global products.
Quantification of Reflection Patterns in Ground-Penetrating Radar Data
NASA Astrophysics Data System (ADS)
Moysey, S.; Knight, R. J.; Jol, H. M.; Allen-King, R. M.; Gaylord, D. R.
2005-12-01
Radar facies analysis provides a way of interpreting the large-scale structure of the subsurface from ground-penetrating radar (GPR) data. Radar facies are often distinguished from each other by the presence of patterns, such as flat-lying, dipping, or chaotic reflections, in different regions of a radar image. When these patterns can be associated with radar facies in a repeated and predictable manner we refer to them as "radar textures". While it is often possible to qualitatively differentiate between radar textures visually, pattern recognition tools, like neural networks, require a quantitative measure to discriminate between them. We investigate whether currently available tools, such as instantaneous attributes or metrics adapted from standard texture analysis techniques, can be used to improve the classification of radar facies. To this end, we use a neural network to perform cross-validation tests that assess the efficacy of different textural measures for classifying radar facies in GPR data collected from the William River delta, Saskatchewan, Canada. We found that the highest classification accuracies (>93%) were obtained for measures of texture that preserve information about the spatial arrangement of reflections in the radar image, e.g., spatial covariance. Lower accuracy (87%) was obtained for classifications based directly on windows of amplitude data extracted from the radar image. Measures that did not account for the spatial arrangement of reflections in the image, e.g., instantaneous attributes and amplitude variance, yielded classification accuracies of less than 65%. Optimal classifications were obtained for textural measures that extracted sufficient information from the radar data to discriminate between radar facies but were insensitive to other facies-specific characteristics. 
For example, the rotationally invariant Fourier-Mellin transform delivered better classification results than the spatial covariance because the dip angle of the reflections, but not the dip direction, was an important discriminator between radar facies at the William River delta. To extend the use of radar texture beyond the identification of radar facies to sedimentary facies, we are investigating how sedimentary features are encoded in GPR data at Borden, Ontario, Canada. At this site, we have collected extensive sedimentary and hydrologic data over the area imaged by GPR. Analysis of these data, coupled with synthetic modeling of the radar signal, has allowed us to develop insight into the generation of radar texture in complex geologic environments.
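The gap between arrangement-preserving measures (spatial covariance) and arrangement-blind ones (amplitude variance) can be illustrated with a minimal NumPy sketch on synthetic 32 × 32 "flat-lying" and "dipping" textures (hypothetical data, not the authors' code):

```python
import numpy as np

def spatial_covariance(window):
    """2-D autocovariance of an image window via FFT (Wiener-Khinchin)."""
    w = window - window.mean()
    f = np.fft.fft2(w)
    return np.fft.fftshift(np.fft.ifft2(f * np.conj(f)).real / w.size)

# Two synthetic 32 x 32 "radar textures": flat-lying vs. dipping reflections
y, x = np.mgrid[0:32, 0:32]
flat = np.sin(2 * np.pi * y / 8.0)           # horizontal bands
dipping = np.sin(2 * np.pi * (y + x) / 8.0)  # bands dipping at 45 degrees

# Covariance surfaces separate the two textures; plain amplitude variance,
# which ignores spatial arrangement, does not -- mirroring the abstract.
feature_distance = np.linalg.norm(spatial_covariance(flat) - spatial_covariance(dipping))
variance_gap = abs(flat.var() - dipping.var())
print(feature_distance > 1.0, variance_gap < 0.1)
```

A classifier fed the covariance surface can therefore separate these facies, while one fed only the variance cannot.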
On the effects of scale for ecosystem services mapping
Grêt-Regamey, Adrienne; Weibel, Bettina; Bagstad, Kenneth J.; Ferrari, Marika; Geneletti, Davide; Klug, Hermann; Schirpke, Uta; Tappeiner, Ulrike
2014-01-01
Ecosystems provide life-sustaining services upon which human civilization depends, but their degradation largely continues unabated. Spatially explicit information on ecosystem services (ES) provision is required to better guide decision making, particularly for mountain systems, which are characterized by vertical gradients and isolation with high topographic complexity, making them particularly sensitive to global change. But while spatially explicit ES quantification and valuation allows the identification of areas of abundant or limited supply of and demand for ES, the accuracy and usefulness of the information varies considerably depending on the scale and methods used. Using four case studies from mountainous regions in Europe and the U.S., we quantify information gains and losses when mapping five ES - carbon sequestration, flood regulation, agricultural production, timber harvest, and scenic beauty - at coarse and fine resolution (250 m vs. 25 m in Europe and 300 m vs. 30 m in the U.S.). We analyze the effects of scale on ES estimates and their spatial pattern and show how these effects are related to different ES, terrain structure and model properties. ES estimates differ substantially between the fine and coarse resolution analyses in all case studies and across all services. This scale effect is not equally strong for all ES. We show that spatially explicit information about non-clustered, isolated ES tends to be lost at coarse resolution and, against expectation, mainly in less rugged terrain, which calls for finer resolution assessments in such contexts. The effect of terrain ruggedness is also related to model properties such as dependency on land use-land cover data. We close with recommendations for mapping ES to make the resulting maps more comparable, and suggest a four-step approach to address the issue of scale when mapping ES that can deliver information to support ES-based decision making with greater accuracy and reliability.
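The core scale effect, an isolated fine-scale ES hotspot being diluted away by coarse-resolution aggregation, can be sketched with a toy raster (hypothetical values; block-averaging stands in for the case studies' resampling):

```python
import numpy as np

def coarsen(raster, factor):
    """Aggregate a fine raster to coarse resolution by block-averaging,
    e.g. 25 m cells -> 250 m cells with factor=10."""
    h, w = raster.shape
    return raster.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Hypothetical fine-scale ES map (30 x 30 cells) with one isolated hotspot
fine = np.zeros((30, 30))
fine[4, 7] = 100.0          # e.g. a small patch of high carbon sequestration

coarse = coarsen(fine, 10)  # 3 x 3 coarse map

# The hotspot is diluted from 100 to 1 -- the isolated-ES information loss
# the study quantifies when moving from fine to coarse resolution.
print(fine.max(), coarse.max())  # 100.0 1.0
```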
Measuring atmospheric aerosols of organic origin on multirotor Unmanned Aerial Vehicles (UAVs).
NASA Astrophysics Data System (ADS)
Crazzolara, Claudio; Platis, Andreas; Bange, Jens
2017-04-01
In-situ measurements of the spatial distribution and transportation of atmospheric organic particles such as pollen and spores are of great interdisciplinary interest:
- in agriculture, to investigate the spread of transgenic material;
- in paleoclimatology, to improve the accuracy of paleoclimate models derived from pollen grains retrieved from sediments; and
- in meteorology/climate research, to determine the role of spores and pollen acting as nuclei in cloud formation processes.
The few known state-of-the-art in-situ measurement systems use passive sampling units carried by fixed-wing UAVs, thus providing only limited spatial resolution of aerosol concentration. The passively sampled air volume is also determined with low accuracy, as it is calculated only from the length of the flight path. We will present a new approach based on a multirotor UAV, which provides a versatile platform. On this UAV, an optical particle counter and a particle-collecting unit, e.g. a conventional filter element and/or an inertial mass separator, were installed. Both sampling units were driven by a mass-flow-controlled blower. This allows not only an accurate determination of the number and size concentration, but also an exact classification of the type of collected aerosol particles, as well as an accurate determination of the sampled air volume. In addition, due to the multirotor UAV's automated position stabilisation system, the aerosol concentration can be measured with a very high spatial resolution of less than 1 m in all three dimensions.
The comprehensive determination of the number, type and classification of aerosol particles, combined with the very high spatial resolution, not only provides valuable progress in agriculture, paleoclimatology and meteorology, but also opens up the application of multirotor UAVs in new fields, for example the precise determination of the mechanisms of generation and distribution of fine particulate matter resulting from road traffic.
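The accuracy gain from a mass-flow-controlled blower comes down to knowing the sampled volume exactly, so that counts convert directly to concentrations. A minimal sketch with hypothetical numbers:

```python
def number_concentration(counts, flow_lpm, minutes):
    """Particle number concentration (per litre) from an optical particle
    counter reading and a mass-flow-controlled sample volume."""
    sampled_volume_l = flow_lpm * minutes   # known exactly thanks to the MFC
    return counts / sampled_volume_l

# Hypothetical: 12000 pollen-sized particles counted at 2 L/min over 5 min
print(number_concentration(12000, 2.0, 5.0))  # 1200.0 particles per litre
```

With passive sampling, the volume term would have to be estimated from the flight path, propagating its error directly into the concentration.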
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruby, J. J.; Pak, A., E-mail: pak5@llnl.gov; Field, J. E.
2016-07-15
A technique for measuring residual motion during the stagnation phase of an indirectly driven inertial confinement experiment has been implemented. This method infers a velocity from spatially and temporally resolved images of the X-ray emission from two orthogonal lines of sight. This work investigates the accuracy of recovering spatially resolved velocities from the X-ray emission data. A detailed analytical and numerical modeling of the X-ray emission measurement shows that the accuracy of this method increases as the displacement that results from a residual velocity increases. For the typical experimental configuration, signal-to-noise ratios, and duration of X-ray emission, it is estimated that the fractional error in the inferred velocity rises above 50% as the velocity of emission falls below 24 μm/ns. By inputting measured parameters into this model, error estimates of the residual velocity as inferred from the X-ray emission measurements can now be generated for experimental data. Details of this analysis are presented for an implosion experiment conducted with an unintentional radiation flux asymmetry. The analysis shows a bright localized region of emission that moves through the larger emitting volume at a relatively higher velocity towards the location of the imposed flux deficit. This technique allows for the possibility of spatially resolving velocity flows within the so-called central hot spot of an implosion. This information would help to refine our interpretation of the thermal temperature inferred from the neutron time-of-flight detectors and the effect of localized hydrodynamic instabilities during the stagnation phase. Across several experiments, along a single line of sight, the average difference in magnitude and direction of the measured residual velocity as inferred from the X-ray and neutron time-of-flight detectors was found to be ∼13 μm/ns and ∼14°, respectively.
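The centroid-displacement idea behind the velocity inference can be sketched as follows (hypothetical Gaussian "hot spot" frames and pixel scale; the actual analysis rests on detailed forward modeling of the emission):

```python
import numpy as np

def centroid(frame):
    """Intensity-weighted centroid of a 2-D emission image (pixel units)."""
    yy, xx = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    t = frame.sum()
    return np.array([(yy * frame).sum() / t, (xx * frame).sum() / t])

def residual_velocity(frame0, frame1, dt_ns, um_per_px):
    """Velocity (um/ns) from the centroid shift between two time-resolved frames."""
    shift_px = centroid(frame1) - centroid(frame0)
    return np.linalg.norm(shift_px) * um_per_px / dt_ns

# Hypothetical hot spot: a Gaussian blob that moves 2 pixels in 0.1 ns
y, x = np.mgrid[0:64, 0:64]
blob = lambda cy, cx: np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / 18.0)
v = residual_velocity(blob(32, 32), blob(32, 34), dt_ns=0.1, um_per_px=1.0)
print(abs(v - 20.0) < 0.1)
```

As the abstract notes, smaller displacements (lower velocities) are harder to resolve against noise, which is why the fractional error grows below ∼24 μm/ns.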
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kronfeld, Andrea; Müller-Forell, Wibke; Buchholz, Hans-Georg
Purpose: Image registration is one prerequisite for the analysis of brain regions in magnetic-resonance-imaging (MRI) or positron-emission-tomography (PET) studies. Diffeomorphic anatomical registration through exponentiated Lie algebra (DARTEL) is a nonlinear, diffeomorphic algorithm for image registration and construction of image templates. The goal of this small animal study was (1) the evaluation of an MRI template and of several cannabinoid type 1 (CB1) receptor PET templates constructed using DARTEL and (2) the analysis of the image registration accuracy of MR and PET images to their DARTEL templates with reference to analytical and iterative PET reconstruction algorithms. Methods: Five male Sprague Dawley rats were investigated for template construction using MRI and [18F]MK-9470 PET for CB1 receptor representation. PET images were reconstructed using the algorithms filtered back-projection, ordered subset expectation maximization in 2D, and maximum a posteriori in 3D. Landmarks were defined on each MR image, and templates were constructed under different settings, i.e., based on different tissue class images [gray matter (GM), white matter (WM), and GM + WM] and regularization forms (“linear elastic energy,” “membrane energy,” and “bending energy”). Registration accuracy for MRI and PET templates was evaluated by means of the distance between landmark coordinates. Results: The best MRI template was constructed based on gray and white matter images and the regularization form linear elastic energy. In this case, most distances between landmark coordinates were <1 mm. Accordingly, MRI-based spatial normalization was most accurate, but results of the PET-based spatial normalization were quite comparable. Conclusions: Image registration using DARTEL provides a standardized and automatic framework for small animal brain data analysis. The authors were able to show that this method works with high reliability and validity.
Using DARTEL templates together with nonlinear registration algorithms allows for accurate spatial normalization of combined MRI/PET or PET-only studies.
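The landmark-distance evaluation reduces to Euclidean distances between corresponding coordinates; a minimal sketch with hypothetical landmarks:

```python
import numpy as np

def landmark_error_mm(template_pts, registered_pts):
    """Registration accuracy as Euclidean distances (mm) between
    corresponding landmarks on the template and the registered image."""
    d = np.linalg.norm(np.asarray(template_pts) - np.asarray(registered_pts), axis=1)
    return d.mean(), d.max()

# Hypothetical landmark coordinates (mm), not the study's data
template = [(0.0, 0.0, 0.0), (10.0, 5.0, 2.0), (3.0, 8.0, 1.0)]
registered = [(0.3, 0.1, 0.0), (10.2, 5.1, 2.2), (3.1, 7.8, 1.1)]

mean_d, max_d = landmark_error_mm(template, registered)
print(max_d < 1.0)  # all distances sub-millimetre, as for the best template
```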
Feature Selection Methods for Zero-Shot Learning of Neural Activity
Caceres, Carlos A.; Roos, Matthew J.; Rupp, Kyle M.; Milsap, Griffin; Crone, Nathan E.; Wolmetz, Michael E.; Ratto, Christopher R.
2017-01-01
Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy. PMID:28690513
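Correlation-based stability selection, the baseline the study compares against, can be sketched as follows (hypothetical responses; stable features are those that reproduce across repeated presentations of the same stimuli):

```python
import numpy as np

rng = np.random.default_rng(0)

def stability_scores(rep1, rep2):
    """Correlation-based stability: per-feature Pearson correlation between
    responses to two repeated presentations of the same stimulus set."""
    a = rep1 - rep1.mean(axis=0)
    b = rep2 - rep2.mean(axis=0)
    return (a * b).sum(axis=0) / np.sqrt((a ** 2).sum(axis=0) * (b ** 2).sum(axis=0))

# Hypothetical data: 50 stimuli x 20 features; features 0-4 carry signal
signal = rng.normal(size=(50, 5))
rep1 = np.hstack([signal + 0.1 * rng.normal(size=(50, 5)), rng.normal(size=(50, 15))])
rep2 = np.hstack([signal + 0.1 * rng.normal(size=(50, 5)), rng.normal(size=(50, 15))])

selected = set(np.argsort(stability_scores(rep1, rep2))[-5:])
print(selected == {0, 1, 2, 3, 4})  # the stable, signal-driven features win
```

The feature/attribute correlation approach the paper highlights instead scores features by their correlation with the semantic attributes themselves, which is why it can reach the same accuracy with fewer features.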
[Simulation of lung motions using an artificial neural network].
Laurent, R; Henriet, J; Salomon, M; Sauget, M; Nguyen, F; Gschwind, R; Makovicka, L
2011-04-01
A way to improve the accuracy of lung radiotherapy for a patient is to gain a better understanding of the patient's lung motion. Indeed, this knowledge makes it possible to follow the displacements of the clinical target volume (CTV) induced by breathing. This paper presents a feasibility study of an original method to simulate the positions of points in a patient's lung at all breathing phases. This method, based on an artificial neural network, allowed the lung motion to be learned from real cases and then simulated for new patients for whom only the beginning and end breathing data are known. The neural network learning set is made up of more than 600 points. These points, distributed across three patients and gathered in a specific lung area, were plotted by an MD. The first results are promising: an average accuracy of 1 mm is obtained for a spatial resolution of 1 × 1 × 2.5 mm³. We have demonstrated that it is possible to simulate lung motion accurately using an artificial neural network. As future work we plan to improve the accuracy of our method with the addition of new patient data and coverage of the whole lungs. Copyright © 2010 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
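A minimal sketch of a network learning a point's displacement as a function of breathing phase (hypothetical cosine trajectory; for a short deterministic example the hidden layer is random and only the output weights are fit by least squares, standing in for the full training used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical breathing trajectory: cranio-caudal displacement (mm) of one
# lung point as a function of breathing phase in [0, 1] (exhale -> inhale).
phase = np.linspace(0.0, 1.0, 40)[:, None]
displacement = 12.0 * (1 - np.cos(np.pi * phase.ravel())) / 2  # 0 -> 12 mm

# One-hidden-layer network: random tanh hidden layer, output weights fit by
# least squares (a deterministic stand-in for backpropagation training).
W = rng.normal(size=(1, 16))
b = rng.normal(size=16)
H = np.tanh(phase @ W + b)                   # hidden activations (40 x 16)
w_out, *_ = np.linalg.lstsq(H, displacement, rcond=None)

pred = H @ w_out
mean_err = np.abs(pred - displacement).mean()
print(mean_err < 1.0)  # sub-millimetre on average for this smooth trajectory
```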
Localization of synchronous cortical neural sources.
Zerouali, Younes; Herry, Christophe L; Jemel, Boutheina; Lina, Jean-Marc
2013-03-01
Neural synchronization is a key mechanism in a wide variety of brain functions, such as cognition, perception, or memory. The high temporal resolution of EEG recordings allows the study of the dynamical properties of synchronous patterns of activity at a very fine temporal scale, but with very low spatial resolution. Spatial resolution can be improved by retrieving the neural sources of the EEG signal, thus solving the so-called inverse problem. Although many methods have been proposed to solve the inverse problem and localize brain activity, few of them target synchronous brain regions. In this paper, we propose a novel algorithm aimed specifically at localizing synchronous brain regions and reconstructing the time course of their activity. Using multivariate wavelet ridge analysis, we extract signals capturing the synchronous events buried in the EEG and then solve the inverse problem on these signals. Using simulated data, we compare the source reconstruction accuracy achieved by our method to a standard source reconstruction approach. We show that the proposed method performs better across a wide range of noise levels and source configurations. In addition, we applied our method to a real dataset and successfully identified cortical areas involved in the functional network underlying visual face perception. We conclude that the proposed approach allows an accurate localization of synchronous brain regions and a robust estimation of their activity.
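Synchrony between two signals is commonly quantified by a phase-locking value; a minimal sketch (a generic illustration only, since the paper itself extracts synchronous components via multivariate wavelet ridges):

```python
import numpy as np

def plv(phase_a, phase_b):
    """Phase-locking value: magnitude of the mean phase-difference phasor
    (1 = perfectly synchronous, near 0 = unrelated phases)."""
    return np.abs(np.exp(1j * (phase_a - phase_b)).mean())

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 500)
phase_ref = 2 * np.pi * 10 * t                     # a 10 Hz oscillation
phase_sync = phase_ref + 0.3                       # constant lag: synchronous
phase_rand = phase_ref + rng.uniform(0, 2 * np.pi, t.size)

print(plv(phase_ref, phase_sync) > 0.99, plv(phase_ref, phase_rand) < 0.2)
```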
openPSTD: The open source pseudospectral time-domain method for acoustic propagation
NASA Astrophysics Data System (ADS)
Hornikx, Maarten; Krijnen, Thomas; van Harten, Louis
2016-06-01
An open source implementation of the Fourier pseudospectral time-domain (PSTD) method for computing the propagation of sound is presented, geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, but maintains efficiency in processing time and memory usage, as it allows spatial sampling close to the Nyquist criterion, thus keeping both the required spatial and temporal resolution coarse. In the implementation, the physical geometry is modelled as a composition of rectangular two-dimensional subdomains, hence initially restricting the implementation to orthogonal and two-dimensional situations. The strategy of using subdomains divides the problem domain into local subsets, which enables the simulation software to be built according to Object-Oriented Programming best practices and allows room for further computational parallelization. The software is built using the open source components Blender, Numpy and Python, and has itself been published under an open source license as well. An option has been included to accelerate the calculations by a partial implementation of the code on the Graphics Processing Unit (GPU), which increases the throughput by up to fifteen times. The details of the implementation are reported, as well as the accuracy of the code.
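The efficiency PSTD exploits, spectral derivatives that stay exact down to a few points per wavelength, can be shown in one dimension with NumPy's FFT (a generic illustration, not openPSTD code):

```python
import numpy as np

# Fourier pseudospectral derivative: spectrally accurate even when the grid
# samples close to the Nyquist limit -- the efficiency PSTD exploits.
N = 16                                   # deliberately coarse periodic grid
x = np.arange(N) * (2 * np.pi / N)
ik = 1j * np.fft.fftfreq(N, d=1.0 / N)   # i * wavenumber, FFT ordering

u = np.sin(5 * x)                        # 5 cycles; the Nyquist limit is 8
du = np.fft.ifft(ik * np.fft.fft(u)).real

err = np.abs(du - 5 * np.cos(5 * x)).max()
print(err < 1e-10)  # machine precision at only ~3 points per wavelength
```

A finite-difference stencil on the same 16-point grid would be off by tens of percent; this is what lets PSTD keep both spatial and temporal resolution coarse.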
Sampling design for spatially distributed hydrogeologic and environmental processes
Christakos, G.; Olea, R.A.
1992-01-01
A methodology for the design of sampling networks over space is proposed. The methodology is based on spatial random field representations of nonhomogeneous natural processes, and on optimal spatial estimation techniques. One of the most important results of random field theory for the physical sciences is its rationalization of correlations in the spatial variability of natural processes. This correlation is extremely important both for interpreting spatially distributed observations and for predictive performance. The extent of site sampling and the types of data to be collected will depend on the relationship of subsurface variability to predictive uncertainty. While hypothesis formulation and initial identification of spatial variability characteristics are based on scientific understanding (such as knowledge of the physics of the underlying phenomena, geological interpretations, intuition and experience), the support offered by field data is statistically modelled. This model is not limited by the geometric nature of sampling and covers a wide range of subsurface uncertainties. A factorization scheme of the sampling error variance is derived, which possesses certain attractive properties allowing significant savings in computations. By means of this scheme, a practical sampling design procedure providing suitable indices of the sampling error variance is established. These indices can be used by way of multiobjective decision criteria to obtain the best sampling strategy. Neither the actual implementation of the in-situ sampling nor the solution of the large spatial estimation systems of equations is necessary. The required values of the accuracy parameters involved in the network design are derived using reference charts (readily available for various combinations of data configurations and spatial variability parameters) and certain simple yet accurate analytical formulas.
Insight is gained by applying the proposed sampling procedure to realistic examples related to sampling problems in two dimensions. © 1992.
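The key property enabling design before sampling, that the estimation error variance depends only on the sampling geometry and the covariance model, not on measured values, can be sketched with simple kriging (hypothetical exponential covariance and sample configurations):

```python
import numpy as np

def sk_variance(sample_pts, target, sill=1.0, corr_range=10.0):
    """Simple-kriging error variance at a target location for a sampling
    configuration, under an exponential covariance model. It depends only
    on geometry, not on measured values -- which is what lets a network
    be designed before any field sampling takes place."""
    cov = lambda h: sill * np.exp(-h / corr_range)
    pts = np.asarray(sample_pts, dtype=float)
    C = cov(np.linalg.norm(pts[:, None] - pts[None, :], axis=2))
    c = cov(np.linalg.norm(pts - np.asarray(target, dtype=float), axis=1))
    return sill - c @ np.linalg.solve(C, c)

dense = [(0, 0), (5, 0), (0, 5), (5, 5)]      # samples close to the target
sparse = [(0, 0), (40, 0), (0, 40), (40, 40)]
target = (2.5, 2.5)

print(sk_variance(dense, target) < sk_variance(sparse, target))
```

Comparing such variance indices across candidate configurations is the essence of the multiobjective design step, with no in-situ sampling required.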
Spatially distributed modeling of soil organic carbon across China with improved accuracy
NASA Astrophysics Data System (ADS)
Li, Qi-quan; Zhang, Hao; Jiang, Xin-ye; Luo, Youlin; Wang, Chang-quan; Yue, Tian-xiang; Li, Bing; Gao, Xue-song
2017-06-01
There is a need for more detailed spatial information on soil organic carbon (SOC) for the accurate estimation of SOC stocks and for earth system models. As it is effective to use environmental factors as auxiliary variables to improve the prediction accuracy of spatially distributed modeling, a combined method (HASM_EF) was developed to predict the spatial pattern of SOC across China using high accuracy surface modeling (HASM), an artificial neural network (ANN), and principal component analysis (PCA) to introduce land uses, soil types, climatic factors, topographic attributes, and vegetation cover as predictors. The performance of HASM_EF was compared with ordinary kriging (OK), OK and HASM each combined with land uses and soil types (OK_LS and HASM_LS), and regression kriging combined with land uses and soil types (RK_LS). Results showed that HASM_EF obtained the lowest prediction errors, and the ratio of performance to deviation (RPD) showed relative improvements of 89.91%, 63.77%, 55.86%, and 42.14%, respectively, over the other four methods. Furthermore, HASM_EF generated more detailed and more realistic spatial information on SOC. The improved performance of HASM_EF can be attributed to the introduction of more environmental factors, to explicit consideration of the multicollinearity of the selected factors and of the spatial nonstationarity and nonlinearity of the relationships between SOC and the selected factors, and to the performance of HASM and ANN. This method may serve as a useful tool for providing more precise spatial information on soil parameters for global modeling across large areas.
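The RPD metric used for the comparison is simply the standard deviation of the observations over the RMSE of the predictions; a sketch with hypothetical validation values:

```python
import numpy as np

def rpd(observed, predicted):
    """Ratio of performance to deviation: SD of the observations divided
    by the RMSE of the predictions (higher = better spatial prediction)."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return observed.std() / rmse

# Hypothetical SOC values (g/kg) at validation sites for two methods
obs = np.array([10.0, 14.0, 9.0, 18.0, 12.0])
good = np.array([10.5, 13.5, 9.5, 17.0, 12.5])   # small errors -> high RPD
poor = np.array([13.0, 11.0, 12.0, 14.0, 13.0])  # large errors -> low RPD

print(rpd(obs, good) > rpd(obs, poor))
```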
NASA Technical Reports Server (NTRS)
Gramenopoulos, N. (Principal Investigator)
1973-01-01
The author has identified the following significant results. For the recognition of terrain types, spatial signatures are developed from the diffraction patterns of small areas of ERTS-1 images. This knowledge is exploited for the measurement of a small number of meaningful spatial features from the digital Fourier transforms of ERTS-1 image cells containing 32 x 32 picture elements. Using these spatial features and a heuristic algorithm, the terrain types in the vicinity of Phoenix, Arizona were recognized by the computer with high accuracy. When the spatial features were combined with spectral features and the maximum likelihood criterion was used, the recognition accuracy of terrain types increased substantially. It was determined that the recognition accuracy with the maximum likelihood criterion depends on the statistics of the feature vectors. Nonlinear transformations of the feature vectors are required so that the terrain class statistics become approximately Gaussian. It was also determined that for a given geographic area the statistics of the classes remain invariant for a period of a month but vary substantially between seasons.
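A minimal sketch of deriving spatial features from the 2-D Fourier transform of a 32 x 32 image cell (synthetic "terrain" patterns; the ring-energy feature is an illustrative stand-in for the paper's diffraction-pattern signatures):

```python
import numpy as np

def low_freq_fraction(cell):
    """Fraction of spectral energy in a low-frequency ring of the 2-D FFT
    of a 32 x 32 image cell -- one simple 'diffraction pattern' feature."""
    F = np.fft.fftshift(np.abs(np.fft.fft2(cell - cell.mean())) ** 2)
    yy, xx = np.mgrid[0:32, 0:32]
    r = np.hypot(yy - 16, xx - 16)        # radial spatial frequency
    low = F[r < 5].sum()
    high = F[(r >= 5) & (r < 16)].sum()
    return low / (low + high + 1e-12)

y, x = np.mgrid[0:32, 0:32]
smooth = np.sin(2 * np.pi * y / 32.0)                 # one slow undulation
busy = np.sin(2 * np.pi * y / 4.0) * np.sin(2 * np.pi * x / 4.0)

print(low_freq_fraction(smooth) > 0.9, low_freq_fraction(busy) < 0.1)
```

A small set of such spectral features per cell is what a maximum-likelihood or heuristic classifier can then operate on.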
Barbee, David L; Flynn, Ryan T; Holden, James E; Nickles, Robert J; Jeraj, Robert
2010-01-01
Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects, which may affect treatment prognosis, assessment, or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner’s center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method’s correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom, and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC.
Comparing SV-PVC to SINV-PVC demonstrated that similar results could be reached using both methods, but large differences can result from the arbitrary selection of SINV-PVC parameters. The presented SV-PVC method was performed without user intervention, requiring only a tumor mask as input. Research involving PET-imaged tumor heterogeneity should include correcting for partial volume effects to improve the quantitative accuracy of results. PMID:20009194
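A 1-D sketch of EM-based partial volume correction with a known Gaussian PSF (a Richardson-Lucy-style iteration on hypothetical data; the paper's method is 3-D with spatially varying kernel widths and a correction-matrix stopping rule):

```python
import numpy as np

def gaussian_blur(v, sigma):
    """Blur a 1-D activity profile with a Gaussian PSF."""
    xs = np.arange(-12, 13)
    psf = np.exp(-xs ** 2 / (2 * sigma ** 2))
    psf /= psf.sum()
    return np.convolve(v, psf, mode="same")

def em_pvc(observed, sigma, n_iter=200):
    """EM (Richardson-Lucy style) correction with a known Gaussian PSF."""
    est = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        ratio = observed / np.maximum(gaussian_blur(est, sigma), 1e-9)
        est = est * gaussian_blur(ratio, sigma)  # multiplicative EM update
    return est

# A 'hot' tumor segment whose peak activity is diluted by the PSF
truth = np.zeros(64)
truth[30:34] = 10.0
blurred = gaussian_blur(truth, sigma=2.0)
recovered = em_pvc(blurred, sigma=2.0)

print(blurred.max() < 7.0, recovered.max() > blurred.max())
```

The blurred peak underestimates the true activity, and the EM iteration restores it, the same effect the phantom study quantifies for 10 and 13 mm spheres.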
NASA Astrophysics Data System (ADS)
Li, Bai; Tanaka, Kisei R.; Chen, Yong; Brady, Damian C.; Thomas, Andrew C.
2017-09-01
The Finite-Volume Community Ocean Model (FVCOM) is an advanced coastal circulation model widely utilized for its ability to simulate spatially and temporally evolving three-dimensional geophysical conditions of complex and dynamic coastal regions. While a body of literature evaluates model skill in surface fields, independent studies validating model skill in bottom fields over large spatial and temporal scales are scarce because these fields cannot be remotely sensed. In this study, an evaluation of FVCOM skill in modeling bottom water temperature was conducted by comparison to hourly in situ observed bottom temperatures recorded by the Environmental Monitors on Lobster Traps (eMOLT), a program that attached thermistors to commercial lobster traps from 2001 to 2013. Over 2 × 10⁶ pairs of FVCOM-eMOLT records were evaluated by a series of statistical measures to quantify accuracy and precision of the modeled data across the Northwest Atlantic Shelf region. The overall comparison between modeled and observed data indicates reliable skill of FVCOM (r² = 0.72; root mean squared error = 2.28 °C). Seasonally, the average absolute errors show higher model skill in spring, fall and winter than summer. We speculate that this is due to the increased difficulty of modeling high frequency variability in the exact position of the thermocline and frontal zones. The spatial patterns of the residuals suggest that there is improved similarity between modeled and observed data at higher latitudes. We speculate that this is due to increased tidal mixing at higher latitudes in our study area that reduces stratification in winter, allowing improved model accuracy. Modeled bottom water temperatures around Cape Cod, the continental shelf edges, and at one location at the entrance to Penobscot Bay were characterized by relatively high errors. Constraints for future uses of FVCOM bottom water temperature are provided based on the uncertainties in temporal-spatial patterns.
This study is novel as it is the first skill assessment of a regional ocean circulation model in bottom fields at high spatial and temporal scales in the Northwest Atlantic Shelf region.
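The skill scores reported above (r^2 and RMSE) can be reproduced for any set of paired model-observation records. A minimal sketch in Python, assuming r^2 is the squared Pearson correlation between modeled and observed series (the abstract does not state which definition of r^2 was used):

```python
import math

def rmse(model, obs):
    """Root mean squared error between paired modeled and observed values."""
    n = len(model)
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)

def r_squared(model, obs):
    """Squared Pearson correlation between the two series (one common
    definition of r^2 in model-skill assessments; an assumption here)."""
    n = len(model)
    mm, mo = sum(model) / n, sum(obs) / n
    cov = sum((m - mm) * (o - mo) for m, o in zip(model, obs))
    vm = sum((m - mm) ** 2 for m in model)
    vo = sum((o - mo) ** 2 for o in obs)
    return cov * cov / (vm * vo)
```

Note that a constant bias leaves r^2 at 1 while inflating RMSE, which is why skill assessments such as this one report both measures together.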
NASA Astrophysics Data System (ADS)
Plattner, A.; Maurer, H. R.; Vorloeper, J.; Dahmen, W.
2010-08-01
Despite the ever-increasing power of modern computers, realistic modelling of complex 3-D earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modelling approaches includes either finite difference or non-adaptive finite element algorithms and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behaviour of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modelled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet-based approach that is applicable to a large range of problems, also including nonlinear problems. In comparison with earlier applications of adaptive solvers to geophysical problems we employ here a new adaptive scheme whose core ingredients arose from a rigorous analysis of the overall asymptotically optimal computational complexity, including in particular, an optimal work/accuracy rate. Our adaptive wavelet algorithm offers several attractive features: (i) for a given subsurface model, it allows the forward modelling domain to be discretized with a quasi minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient and (iii) the modelling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving 3-D geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. 
Results were compared with a non-adaptive finite element algorithm, which incorporates an unstructured mesh fitted to subsurface boundaries. Such algorithms represent the current state-of-the-art in geoelectric modelling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite element solver for simple and moderately complex models, whereas the results become comparable for models with high spatial variability of electrical conductivities. The linear dependence between the modelling error and the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.
Indoor Spatial Updating With Impaired Vision
Legge, Gordon E.; Granquist, Christina; Baek, Yihwa; Gage, Rachel
2016-01-01
Purpose: Spatial updating is the ability to keep track of position and orientation while moving through an environment. We asked how normally sighted and visually impaired subjects compare in spatial updating and in estimating room dimensions. Methods: Groups of 32 normally sighted, 16 low-vision, and 16 blind subjects estimated the dimensions of six rectangular rooms. Updating was assessed by guiding the subjects along three-segment paths in the rooms. At the end of each path, they estimated the distance and direction to the starting location, and to a designated target. Spatial updating was tested in five conditions ranging from free viewing to full auditory and visual deprivation. Results: The normally sighted and low-vision groups did not differ in their accuracy for judging room dimensions. Correlations between estimated size and physical size were high. Accuracy of low-vision performance was not correlated with acuity, contrast sensitivity, or field status. Accuracy was lower for the blind subjects. The three groups were very similar in spatial-updating performance, and exhibited only weak dependence on the nature of the viewing conditions. Conclusions: People with a wide range of low-vision conditions are able to judge room dimensions as accurately as people with normal vision. Blind subjects have difficulty in judging the dimensions of quiet rooms, but some information is available from echolocation. Vision status has little impact on performance in simple spatial updating; proprioceptive and vestibular cues are sufficient. PMID:27978556
Laboratory demonstration of a Brillouin lidar to remotely measure temperature profiles of the ocean
NASA Astrophysics Data System (ADS)
Rudolf, Andreas; Walther, Thomas
2014-05-01
We report on the successful laboratory demonstration of a real-time lidar system to remotely measure temperature profiles in water. In the near future, it is intended to be operated from a mobile platform, e.g., a helicopter or vessel, in order to precisely determine the temperature of the surface mixed layer of the ocean with high spatial resolution. The working principle relies on the active generation and detection of spontaneous Brillouin scattering. The light source consists of a frequency-doubled fiber-amplified external cavity diode laser and provides high-energy, Fourier transform-limited laser pulses in the green spectral range. The detector is based on an atomic edge filter and allows the challenging extraction of the temperature information from the Brillouin scattered light. In the lab environment, depending on the amount of averaging, water temperatures were resolved with a mean accuracy of up to 0.07°C and a spatial resolution of 1 m, proving the feasibility and the large potential of the overall system.
Microfluidic PDMS on paper (POP) devices.
Shangguan, Jin-Wen; Liu, Yu; Pan, Jian-Bin; Xu, Bi-Yi; Xu, Jing-Juan; Chen, Hong-Yuan
2016-12-20
In this paper, we propose a generalized concept of microfluidic polydimethylsiloxane (PDMS) on paper (POP) devices, which combines the merits of paper chips and PDMS chips. First, we optimized the conditions for accurate spatial patterning of PDMS on paper, based on screen printing and a high-temperature-enabled superfast curing technique, which enables PDMS patterning to an accuracy of tens of microns in less than ten seconds. This, in turn, allows seamless, reversible and reliable integration of the resulting paper layer with other PDMS channel structures. The integrated POP devices allow both porous paper and smooth channels to be spatially defined on the devices, greatly extending the flexibility for designers to construct powerful functional structures. To demonstrate the versatility of this design, a prototype POP device for the colorimetric analysis of liver function markers, serum protein, alkaline phosphatase (ALP) and aspartate aminotransferase (AST), was constructed. On this POP device, quantitative sample loading, mixing and multiplex analysis have all been realized.
NASA Astrophysics Data System (ADS)
Wei, Dong; Weinstein, Susan; Hsieh, Meng-Kang; Pantalone, Lauren; Kontos, Despina
2018-03-01
The relative amount of fibroglandular tissue (FGT) in the breast has been shown to be a risk factor for breast cancer. However, automatic segmentation of FGT in breast MRI is challenging, due mainly to its wide variation in anatomy (e.g., amount, location and pattern) and to various imaging artifacts, especially the prevalent bias-field artifact. Motivated by a previous work demonstrating improved FGT segmentation with a 2-D a priori likelihood atlas, we propose a machine learning-based framework using 3-D FGT context. The framework uses features specifically defined with respect to the breast anatomy to capture the spatially varying likelihood of FGT, and allows (a) intuitive standardization across breasts of different sizes and shapes, and (b) easy incorporation of additional information helpful to the segmentation (e.g., texture). Extended from the concept of the 2-D atlas, our framework not only captures the spatial likelihood of FGT in a 3-D context, but also broadens its applicability to both sagittal and axial breast MRI rather than being limited to the plane in which the 2-D atlas is constructed. Experimental results showed improved segmentation accuracy over the 2-D atlas method, and demonstrated further improvement by incorporating well-established texture descriptors.
Padilla-Buritica, Jorge I.; Martinez-Vargas, Juan D.; Castellanos-Dominguez, German
2016-01-01
Lately, research on computational models of emotion has been receiving much attention due to their potential for understanding the mechanisms of emotions and their promising broad range of applications that may bridge the gap between human and machine interactions. We propose a new method for emotion classification that relies on features extracted from those active brain areas that are most likely related to emotions. To this end, we carry out the selection of spatially compact regions of interest that are computed using the brain neural activity reconstructed from electroencephalography data. Throughout this study, we consider three representative feature extraction methods widely applied to emotion detection tasks: power spectral density, wavelets, and Hjorth parameters. Further feature selection is carried out using principal component analysis. For validation purposes, these features are used to feed a support vector machine classifier that is trained under the leave-one-out cross-validation strategy. Obtained results on real affective data show that incorporating the proposed training method, in combination with the enhanced spatial resolution provided by the source estimation, improves the discrimination accuracy for most of the considered emotions, namely dominance, valence, and liking. PMID:27489541
An optimization framework for measuring spatial access over healthcare networks.
Li, Zihao; Serban, Nicoleta; Swann, Julie L
2015-07-17
Measurement of healthcare spatial access over a network involves accounting for demand, supply, and network structure. Popular approaches are based on floating catchment areas; however, these methods can overestimate demand over the network and fail to capture cascading effects across the system. Optimization is presented as a framework to measure spatial access. Questions related to when and why optimization should be used are addressed. The accuracy of the optimization models compared to the two-step floating catchment area method and its variations is analytically demonstrated, and a case study of specialty care for cystic fibrosis over the continental United States is used to compare these approaches. The optimization models capture a patient's experience rather than their opportunities and avoid overestimating patient demand. They can also capture system-wide effects of changes caused by congestion. Furthermore, the optimization models provide more elements of access than traditional catchment methods. Optimization models can incorporate user choice and other variations, and they can be useful for targeting interventions to improve access. They can be easily adapted to measure access for different types of patients, over different provider types, or with capacity constraints in the network. Moreover, optimization models can capture differences in access between rural and urban areas.
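For context, the baseline against which the optimization models are compared, the two-step floating catchment area (2SFCA) method, can be sketched as follows. The inputs `supply`, `demand`, and `dist` are hypothetical, and the catchment is modeled as a simple travel-distance cutoff `d0`:

```python
def two_step_fca(supply, demand, dist, d0):
    """Two-step floating catchment area accessibility scores.
    supply: {provider: capacity}; demand: {location: population};
    dist: {(location, provider): travel distance}; d0: catchment radius."""
    # Step 1: each provider's supply-to-population ratio, counting all
    # population within its catchment as potential demand.
    ratio = {}
    for j, s in supply.items():
        pop = sum(p for i, p in demand.items() if dist[(i, j)] <= d0)
        ratio[j] = s / pop if pop else 0.0
    # Step 2: each location's access is the sum of reachable ratios.
    return {i: sum(r for j, r in ratio.items() if dist[(i, j)] <= d0)
            for i in demand}
```

Because step 1 counts the full population of every location within reach of each provider, the same people are attributed to several catchments at once; this style of accounting is the demand overestimation the abstract argues optimization avoids.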
NASA Astrophysics Data System (ADS)
Mengaldo, Gianmarco; De Grazia, Daniele; Moura, Rodrigo C.; Sherwin, Spencer J.
2018-04-01
This study focuses on the dispersion and diffusion characteristics of high-order energy-stable flux reconstruction (ESFR) schemes via the spatial eigensolution analysis framework proposed in [1]. The analysis is performed for five ESFR schemes, where the parameter 'c' dictating the properties of the specific scheme recovered is chosen such that it spans the entire class of ESFR methods, also referred to as VCJH schemes, proposed in [2]. In particular, we used five values of 'c', two that correspond to its lower and upper bounds and the others that identify three schemes that are linked to common high-order methods, namely the ESFR recovering two versions of discontinuous Galerkin methods and one recovering the spectral difference scheme. The performance of each scheme is assessed when using different numerical intercell fluxes (e.g. different levels of upwinding), ranging from "under-" to "over-upwinding". In contrast to the more common temporal analysis, the spatial eigensolution analysis framework adopted here allows one to grasp crucial insights into the diffusion and dispersion properties of FR schemes for problems involving non-periodic boundary conditions, typically found in open-flow problems, including turbulence, unsteady aerodynamics and aeroacoustics.
NASA Astrophysics Data System (ADS)
Podhorský, Dušan; Fabo, Peter
2016-12-01
The article deals with a method of acquiring the temporal and spatial distribution of local precipitation from measurement of the performance characteristics of local sources of high-frequency electromagnetic radiation in the 1-3 GHz frequency range in the lower layers of the troposphere, up to 100 m. The method was experimentally proven by monitoring the GSM G2 base stations of cell phone providers in the frequency range of 920-960 MHz using frequency and spatial diversity reception. A modification of the SART method for localization of precipitation was also proposed. The achieved results allow us to obtain the time evolution of the intensity of local precipitation in the observed area with a temporal resolution of 10 s. A spatial accuracy of 100 m in localization of precipitation is expected once a network of receivers is built. The acquired data can be used as one of the inputs for meteorological forecasting models; in agriculture and hydrology as a supplementary method to ombrograph stations and weather radar network measurements; in transportation as part of a warning system; and in many other areas.
A number of articles have investigated the impact of sampling design on remotely sensed landcover accuracy estimates. Gong and Howarth (1990) found significant differences for Kappa accuracy values when comparing purepixel sampling, stratified random sampling, and stratified sys...
Entropy of Movement Outcome in Space-Time.
Lai, Shih-Chiung; Hsieh, Tsung-Yu; Newell, Karl M
2015-07-01
Information entropy of the joint spatial and temporal (space-time) probability of discrete movement outcome was investigated in two experiments as a function of different movement strategies (space-time, space, and time instructional emphases), task goals (point-aiming and target-aiming) and movement speed-accuracy constraints. The variance of the movement spatial and temporal errors was reduced by instructional emphasis on the respective spatial or temporal dimension, but increased on the other dimension. The space-time entropy was lower in the target-aiming task than the point-aiming task but did not differ between instructional emphases. However, the joint probabilistic measure of spatial and temporal entropy showed that spatial error is traded for timing error in tasks with space-time criteria and that the pattern of movement error depends on the dimension of the measurement process. The unified entropy measure of movement outcome in space-time reveals a new speed-accuracy relation.
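A minimal sketch of such a joint space-time outcome entropy, assuming movement outcomes are binned on a uniform spatial-temporal grid (the bin width is a free parameter of this illustration, not a value from the study):

```python
import math
from collections import Counter

def joint_entropy(spatial_err, temporal_err, bin_width=1.0):
    """Shannon entropy (bits) of the joint space-time outcome
    distribution, estimated from binned spatial and temporal errors."""
    cells = Counter((int(s // bin_width), int(t // bin_width))
                    for s, t in zip(spatial_err, temporal_err))
    n = len(spatial_err)
    return -sum((c / n) * math.log2(c / n) for c in cells.values())
```

Identical outcomes give zero entropy; outcomes spread evenly over k occupied cells give log2(k) bits, so the measure grows with joint space-time dispersion rather than with either error dimension alone.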
Uav-Based Crops Classification with Joint Features from Orthoimage and Dsm Data
NASA Astrophysics Data System (ADS)
Liu, B.; Shi, Y.; Duan, Y.; Wu, W.
2018-04-01
Accurate crop classification remains a challenging task because the same crop can exhibit different spectra and different crops can share the same spectrum. Recently, the UAV-based remote sensing approach has gained popularity not only for its high spatial and temporal resolution, but also for its ability to obtain spectral and spatial data at the same time. This paper focuses on how to take full advantage of spatial and spectral features to improve crop classification accuracy, based on a UAV platform equipped with a general digital camera. Texture and spatial features extracted from the RGB orthoimage and the digital surface model of the monitoring area are analysed and integrated within an SVM classification framework. Extensive experimental results indicate that the overall classification accuracy improves drastically, from 72.9 % to 94.5 %, when the spatial features are included, which verifies the feasibility and effectiveness of the proposed method.
A millimeter-wave reflectometer for whole-body hydration sensing
NASA Astrophysics Data System (ADS)
Zhang, W.-D.; Brown, E. R.
2016-05-01
This paper demonstrates a non-invasive method to determine the hydration level of human skin by measuring the reflectance of W-band (75-110 GHz) and Ka-band (26-40 GHz) radiation. Ka-band provides higher hydration accuracy (<1%) and greater depth of penetration (> 1 mm), thereby allowing access to the important dermis layer of skin. W-band provides less depth of penetration but finer spatial resolution (~2 mm). Both the hydration sensing concept and experimental results are presented here. The goal is to make a human hydration sensor that is 1% accurate or better, operable by mechanically scanning, and fast enough to measure large areas of the human body in seconds.
NASA Astrophysics Data System (ADS)
Stindt, A.; Andrade, M. A. B.; Albrecht, M.; Adamowski, J. C.; Panne, U.; Riedel, J.
2014-01-01
A novel method for predicting the sound pressure distribution in acoustic levitators is based on a matrix representation of the Rayleigh integral. This method allows for a fast calculation of the acoustic field within the resonator. To make sure that the underlying assumptions and simplifications are justified, this approach was tested by a direct comparison to experimental data. The experimental sound pressure distributions were recorded by spatially resolved, frequency-selective microphone scanning. To emphasize the general applicability of the two approaches, the comparative studies were conducted for four different resonator geometries. In all cases, the results show an excellent agreement, demonstrating the accuracy of the matrix method.
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.
2017-12-01
Complexity of hydrogeological systems arises from multi-scale heterogeneity and insufficient measurements of their underlying parameters, such as hydraulic conductivity and porosity. An inadequate characterization of hydrogeological properties can significantly decrease the trustworthiness of numerical models that predict groundwater flow and solute transport. Therefore, a variety of data assimilation methods have been proposed in order to estimate hydrogeological parameters from spatially scarce data by incorporating the governing physical models. In this work, we propose a novel framework for evaluating the performance of these estimation methods. We focus on the Ensemble Kalman Filter (EnKF) approach, a widely used data assimilation technique. It reconciles multiple sources of measurements to sequentially estimate model parameters such as the hydraulic conductivity. Several methods have been used in the literature to quantify the accuracy of the estimations obtained by EnKF, including rank histograms, RMSE and ensemble spread. However, these commonly used methods do not regard the spatial information and variability of geological formations. This can cause hydraulic conductivity fields with very different spatial structures to have similar histograms or RMSE. We propose a vision-based approach that quantifies the accuracy of estimations by considering the spatial structure embedded in the estimated fields. Our new approach consists of adapting a new metric, Color Coherence Vectors (CCV), to evaluate the accuracy of fields estimated by EnKF. CCV is a histogram-based technique for comparing images that incorporates spatial information. We represent estimated fields as digital three-channel images and use CCV to compare and quantify the accuracy of estimations. The sensitivity of CCV to spatial information makes it a suitable metric for assessing the performance of spatial data assimilation techniques.
Under various factors of data assimilation methods such as number, layout, and type of measurements, we compare the performance of CCV with other metrics such as RMSE. By simulating hydrogeological processes using estimated and true fields, we observe that CCV outperforms other existing evaluation metrics.
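The core of the CCV idea can be sketched for a discretized 2-D field, following the image-retrieval definition (a pixel is "coherent" if it belongs to a connected component of at least `tau` cells of the same value). The discretization of the field and the L1 comparison below are illustrative assumptions, not the authors' exact procedure:

```python
from collections import deque

def ccv(field, tau):
    """Color coherence vector of a 2-D grid of discretized values.
    Returns {value: (coherent, incoherent)}: pixel counts inside
    4-connected components of size >= tau vs. smaller components."""
    rows, cols = len(field), len(field[0])
    seen = [[False] * cols for _ in range(rows)]
    vec = {}
    for r in range(rows):
        for c in range(cols):
            if seen[r][c]:
                continue
            v, comp, q = field[r][c], [], deque([(r, c)])
            seen[r][c] = True
            while q:  # flood fill the component of equal-valued cells
                i, j = q.popleft()
                comp.append((i, j))
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols \
                            and not seen[ni][nj] and field[ni][nj] == v:
                        seen[ni][nj] = True
                        q.append((ni, nj))
            coh, inc = vec.get(v, (0, 0))
            if len(comp) >= tau:
                coh += len(comp)
            else:
                inc += len(comp)
            vec[v] = (coh, inc)
    return vec

def ccv_distance(a, b):
    """L1 distance between two coherence vectors."""
    keys = set(a) | set(b)
    return sum(abs(a.get(k, (0, 0))[0] - b.get(k, (0, 0))[0])
               + abs(a.get(k, (0, 0))[1] - b.get(k, (0, 0))[1]) for k in keys)
```

Two fields with identical value histograms but different spatial structure split their counts differently between the coherent and incoherent bins, which is exactly the sensitivity an RMSE or plain histogram lacks.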
Mennis, Jeremy; Mason, Michael; Ambrus, Andreea; Way, Thomas; Henry, Kevin
2017-09-01
Geographic ecological momentary assessment (GEMA) combines ecological momentary assessment (EMA) with global positioning systems (GPS) and geographic information systems (GIS). This study evaluates the spatial accuracy of GEMA location data and bias due to subject and environmental data characteristics. Using data for 72 subjects enrolled in a study of urban adolescent substance use, we compared the GPS-based location of EMA responses in which the subject indicated they were at home to the geocoded home address. We calculated the percentage of EMA locations within a sixteenth, eighth, quarter, and half miles from the home, and the percentage within the same tract and block group as the home. We investigated if the accuracy measures were associated with subject demographics, substance use, and emotional dysregulation, as well as environmental characteristics of the home neighborhood. Half of all subjects had more than 88% of their EMA locations within a half mile, 72% within a quarter mile, 55% within an eighth mile, 50% within a sixteenth of a mile, 83% in the correct tract, and 71% in the correct block group. There were no significant associations with subject or environmental characteristics. Results support the use of GEMA for analyzing subjects' exposures to urban environments. Researchers should be aware of the issue of spatial accuracy inherent in GEMA, and interpret results accordingly. Understanding spatial accuracy is particularly relevant for the development of 'ecological momentary interventions' (EMI), which may depend on accurate location information, though issues of privacy protection remain a concern. Copyright © 2017 Elsevier B.V. All rights reserved.
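The percentage-within-distance measures used above can be computed from raw GPS fixes with a great-circle distance. A sketch, where the cutoff radius (e.g., a half mile in meters) is passed in explicitly; the inputs are hypothetical (latitude, longitude) pairs:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters (spherical Earth, R = 6371 km)."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def share_within(points, home, radius_m):
    """Fraction of GPS fixes within radius_m of the geocoded home."""
    hits = sum(1 for lat, lon in points
               if haversine_m(lat, lon, *home) <= radius_m)
    return hits / len(points)
```

Evaluating `share_within` at a ladder of radii (sixteenth, eighth, quarter, half mile) reproduces the kind of accuracy profile reported per subject in the study.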
NASA Astrophysics Data System (ADS)
Vincent, Sébastien; Lemercier, Blandine; Berthier, Lionel; Walter, Christian
2015-04-01
Accurate soil information over large extents is essential to manage agronomical and environmental issues. Where it exists, information on soil is often sparse or available at a coarser resolution than required. Typically, the spatial distribution of soil at regional scale is represented as a set of polygons defining soil map units (SMU), each one describing several soil types that are not spatially delineated, and a semantic database describing these objects. Delineation of soil types within SMU, i.e., spatial disaggregation of SMU, improves the accuracy of soil information using legacy data. The aim of this study was to predict soil types by spatial disaggregation of SMU through a decision tree approach, considering expert knowledge on soil-landscape relationships embedded in soil databases. The DSMART (Disaggregation and Harmonization of Soil Map Units Through resampled Classification Trees) algorithm developed by Odgers et al. (2014) was used. It requires soil information, environmental covariates, and calibration samples to build and then extrapolate decision trees. To assign a soil type to a particular spatial position, a weighted random allocation approach is applied: each soil type in the SMU is weighted according to its assumed proportion of occurrence in the SMU. Thus soil-landscape relationships are not considered in the current version of DSMART. Expert rules on soil distribution considering the relief, parent material and wetland locations were proposed to drive the procedure of allocating soil types to sampled positions, in order to integrate the soil-landscape relationships. Semantic information about the spatial organization of soil types within SMU and exhaustive landscape descriptors were used. In the eastern part of Brittany (NW France), 171 soil types were described; their relative areas in the SMU were estimated, and geomorphological and geological contexts were recorded. The model predicted 144 soil types.
An external validation was performed by comparing predicted soil types with those observed on available soil maps at scales of 1:25,000 or 1:50,000. Overall accuracies were 63.1% and 36.2%, with and without considering adjacent pixels, respectively. The introduction of expert rules based on soil-landscape relationships to allocate soil types to calibration samples dramatically improved the results in comparison with a simple weighted random allocation procedure. It also enabled the production of a comprehensive soil map, retrieving the expected spatial organization of soils. Estimation of soil properties at various depths is planned using the disaggregated soil types, according to the GlobalSoilmap.net specifications. Odgers, N.P., Sun, W., McBratney, A.B., Minasny, B., Clifford, D., 2014. Disaggregating and harmonising soil map units through resampled classification trees. Geoderma 214, 91-100.
Unlocking the spatial inversion of large scanning magnetic microscopy datasets
NASA Astrophysics Data System (ADS)
Myre, J. M.; Lascu, I.; Andrade Lima, E.; Feinberg, J. M.; Saar, M. O.; Weiss, B. P.
2013-12-01
Modern scanning magnetic microscopy provides the ability to perform high-resolution, ultra-high sensitivity moment magnetometry, with spatial resolutions better than 10^-4 m and magnetic moments as weak as 10^-16 Am^2. These microscopy capabilities have enhanced numerous magnetic studies, including investigations of the paleointensity of the Earth's magnetic field, shock magnetization and demagnetization of impacts, magnetostratigraphy, the magnetic record in speleothems, and the records of ancient core dynamos of planetary bodies. A common component among many studies utilizing scanning magnetic microscopy is solving an inverse problem to determine the non-negative magnitude of the magnetic moments that produce the measured component of the magnetic field. The two most frequently used methods to solve this inverse problem are classic fast Fourier techniques in the frequency domain and non-negative least squares (NNLS) methods in the spatial domain. Although Fourier techniques are extremely fast, they typically violate non-negativity and it is difficult to implement constraints associated with the space domain. NNLS methods do not violate non-negativity, but have typically been computation time prohibitive for samples of practical size or resolution. Existing NNLS methods use multiple techniques to attain tractable computation. To reduce computation time in the past, typically sample size or scan resolution would have to be reduced. Similarly, multiple inversions of smaller sample subdivisions can be performed, although this frequently results in undesirable artifacts at subdivision boundaries. Dipole interactions can also be filtered to only compute interactions above a threshold which enables the use of sparse methods through artificial sparsity. 
To improve upon existing spatial domain techniques, we present the application of the TNT algorithm, named TNT as it is a "dynamite" non-negative least squares algorithm which enhances the performance and accuracy of spatial domain inversions. We show that the TNT algorithm reduces the execution time of spatial domain inversions from months to hours and that inverse solution accuracy is improved as the TNT algorithm naturally produces solutions with small norms. Using sIRM and NRM measures of multiple synthetic and natural samples we show that the capabilities of the TNT algorithm allow very large samples to be inverted without the need for alternative techniques to make the problems tractable. Ultimately, the TNT algorithm enables accurate spatial domain analysis of scanning magnetic microscopy data on an accelerated time scale that renders spatial domain analyses tractable for numerous studies, including searches for the best fit of unidirectional magnetization direction and high-resolution step-wise magnetization and demagnetization.
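The TNT solver itself is not reproduced here; as a generic stand-in, the non-negative least squares problem at the core of these inversions (minimize ||Ax - b||^2 subject to x >= 0, with A a hypothetical dipole-to-field forward matrix) can be sketched with a simple projected-gradient iteration:

```python
def nnls_pg(A, b, iters=5000, lr=None):
    """Non-negative least squares via projected gradient descent:
    minimize ||A x - b||^2 subject to x >= 0. A minimal, slow stand-in
    for Lawson-Hanson-style solvers such as TNT; A, b are plain lists."""
    m, n = len(A), len(A[0])
    if lr is None:
        # crude fixed step size from the squared Frobenius norm of A
        lr = 1.0 / sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - b, gradient g = 2 A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [2 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step, then project onto the non-negative orthant
        x = [max(0.0, x[j] - lr * g[j]) for j in range(n)]
    return x
```

The projection step is what enforces the non-negativity that Fourier-domain inversions violate; the practical contribution of TNT, per the abstract, is doing this at a speed and problem size this naive iteration cannot reach.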
Gender differences in multitasking reflect spatial ability.
Mäntylä, Timo
2013-04-01
Demands involving the scheduling and interleaving of multiple activities have become increasingly prevalent, especially for women in both their paid and unpaid work hours. Despite the ubiquity of everyday requirements to multitask, individual and gender-related differences in multitasking have gained minimal attention in past research. In two experiments, participants completed a multitasking session with four gender-fair monitoring tasks and separate tasks measuring executive functioning (working memory updating) and spatial ability (mental rotation). In both experiments, males outperformed females in monitoring accuracy. Individual differences in executive functioning and spatial ability were independent predictors of monitoring accuracy, but only spatial ability mediated gender differences in multitasking. Menstrual changes accentuated these effects, such that gender differences in multitasking (and spatial ability) were eliminated between males and females who were in the menstrual phase of the menstrual cycle but not between males and females who were in the luteal phase. These findings suggest that multitasking involves spatiotemporal task coordination and that gender differences in multiple-task performance reflect differences in spatial ability.
WATSFAR: numerical simulation of soil WATer and Solute fluxes using a FAst and Robust method
NASA Astrophysics Data System (ADS)
Crevoisier, David; Voltz, Marc
2013-04-01
To simulate the evolution of hydro- and agro-systems, numerous spatialised models are based on a multi-local approach, and data-assimilation techniques are now used in many application fields to improve simulation accuracy. The latest acquisition techniques provide a large amount of experimental data, which increases the efficiency of parameter estimation and inverse modelling approaches. In turn, simulations are often run on large temporal and spatial domains, which requires a large number of model runs. Eventually, despite the regular increase in computing capacities, the development of fast and robust methods describing the evolution of saturated-unsaturated soil water and solute fluxes is still a challenge. Ross (2003, Agron J; 95:1352-1361) proposed a method, solving the 1D Richards' and convection-diffusion equations, that fulfils these requirements. The method is based on a non-iterative approach which reduces the risk of numerical divergence and allows the use of coarser spatial and temporal discretisations, while assuring a satisfying accuracy of the results. Crevoisier et al. (2009, Adv Wat Res; 32:936-947) proposed some technical improvements and validated this method on a wider range of agro-pedo-climatic situations. In this poster, we present the simulation code WATSFAR, which generalises the Ross method to other mathematical representations of the soil water retention curve (i.e. standard and modified van Genuchten models) and includes a dual-permeability context (preferential fluxes) for both water and solute transfers. The situations tested are those known to be the least favourable when using standard numerical methods: fine-textured and extremely dry soils, intense rainfall and solute fluxes, soils near saturation, ... The results of WATSFAR have been compared with the standard finite element model Hydrus.
The analysis of these comparisons highlights two main advantages for WATSFAR, i) robustness: even on fine textured soil or high water and solute fluxes - where Hydrus simulations may fail to converge - no numerical problem appears, and ii) accuracy of simulations even for loose spatial domain discretisations, which can only be obtained by Hydrus with fine discretisations.
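The standard van Genuchten retention curve that WATSFAR generalises can be sketched as follows; this is a minimal illustration of the closed-form model only (the parameter values used below are typical textbook values, not taken from the paper):

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Standard van Genuchten retention curve: volumetric water content as a
    function of pressure head h (h < 0 in the unsaturated zone)."""
    if h >= 0.0:
        return theta_s                              # saturated
    m = 1.0 - 1.0 / n                               # Mualem constraint
    se = (1.0 + (alpha * abs(h)) ** n) ** (-m)      # effective saturation in (0, 1]
    return theta_r + (theta_s - theta_r) * se
```

The curve interpolates monotonically between the residual content theta_r (very dry soil) and the saturated content theta_s, which is precisely the regime-spanning behaviour the poster stresses as numerically demanding.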
ERIC Educational Resources Information Center
Verdine, Brian N.; Golinkoff, Roberta M.; Hirsh-Pasek, Kathryn; Newcombe, Nora S.; Filipowicz, Andrew T.; Chang, Alicia
2014-01-01
This study focuses on three main goals: First, 3-year-olds' spatial assembly skills are probed using interlocking block constructions (N = 102). A detailed scoring scheme provides insight into early spatial processing and offers information beyond a basic accuracy score. Second, the relation of spatial assembly to early mathematical skills…
NASA Astrophysics Data System (ADS)
Liu, Wanjun; Liang, Xuejian; Qu, Haicheng
2017-11-01
Hyperspectral image (HSI) classification is one of the most popular topics in the remote sensing community, and both traditional and deep learning-based classification methods have been proposed in recent years. In order to improve classification accuracy and robustness, a dimensionality-varied convolutional neural network (DVCNN) was proposed in this paper. DVCNN was a novel deep architecture based on the convolutional neural network (CNN). The input of DVCNN was a set of 3D patches selected from the HSI which contained joint spectral-spatial information. In the following feature extraction process, each patch was transformed into several different 1D vectors by 3D convolution kernels, which were able to extract features from spectral-spatial data. The rest of DVCNN was much the same as a general CNN and processed the 2D matrix constituted by all the 1D vectors. DVCNN could therefore not only extract more accurate and richer features than CNN, but also fuse spectral-spatial information to improve classification accuracy. Moreover, the robustness of the network to water-absorption bands was enhanced in the process of spectral-spatial fusion by 3D convolution, and the calculation was simplified by the dimensionality-varied convolution. Experiments were performed on both the Indian Pines and Pavia University scene datasets, and the results showed that the classification accuracy of DVCNN improved by 32.87% on Indian Pines and by 19.63% on Pavia University scene compared with a spectral-only CNN. The maximum accuracy improvement achieved by DVCNN over other state-of-the-art HSI classification methods was 13.72%, and the robustness of DVCNN to noise in water-absorption bands was demonstrated.
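The dimensionality-varied step can be illustrated with plain numpy: 3D kernels that span the full spatial window slide along the spectral axis only, so each spectral-spatial patch becomes a set of 1D vectors that stack into the 2D matrix the remaining layers consume. This is a shape-level sketch of the idea, not the authors' trained network:

```python
import numpy as np

def spectral_3d_conv(patch, kernels):
    """Convolve a spectral-spatial patch of shape (bands, h, w) with 3D kernels
    of shape (kb, h, w) that slide along the spectral axis only.
    Returns a 2D matrix with one 1D spectral feature vector per kernel."""
    bands, h, w = patch.shape
    out = []
    for k in kernels:
        kb = k.shape[0]
        vec = np.array([np.sum(patch[i:i + kb] * k)
                        for i in range(bands - kb + 1)])
        out.append(vec)
    return np.stack(out)  # shape: (n_kernels, bands - kb + 1)
```

A patch with 5 bands and a kernel depth of 2 yields 1D vectors of length 4, so dimensionality is reduced along the spectral axis while spatial context is absorbed into each output value.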
Harold S.J. Zald; Janet L. Ohmann; Heather M. Roberts; Matthew J. Gregory; Emilie B. Henderson; Robert J. McGaughey; Justin Braaten
2014-01-01
This study investigated how lidar-derived vegetation indices, disturbance history from Landsat time series (LTS) imagery, plot location accuracy, and plot size influenced accuracy of statistical spatial models (nearest-neighbor imputation maps) of forest vegetation composition and structure. Nearest-neighbor (NN) imputation maps were developed for 539,000 ha in the...
NASA Astrophysics Data System (ADS)
Deo, R. K.; Domke, G. M.; Russell, M.; Woodall, C. W.
2017-12-01
Landsat data have been widely used to support strategic forest inventory and management decisions despite the limited success of passive optical remote sensing for accurate estimation of aboveground biomass (AGB). The archive of publicly available Landsat data, available at 30-m spatial resolution since 1984, has been a valuable resource for cost-effective large-area estimation of AGB to inform national requirements such as the US national greenhouse gas inventory (NGHGI). In addition, other optical satellite data such as MODIS imagery of wider spatial coverage and higher temporal resolution are enriching the domain of spatial predictors for regional-scale mapping of AGB. Because NGHGIs require national-scale AGB information and there are tradeoffs in the prediction accuracy versus operational efficiency of Landsat, this study evaluated the impact of various resolutions of Landsat predictors on the accuracy of regional AGB models across three different sites in the eastern USA: Maine, Pennsylvania-New Jersey, and South Carolina. We used recent national forest inventory (NFI) data with numerous Landsat-derived predictors at ten different spatial resolutions ranging from 30 to 1000 m to understand the optimal spatial resolution of the optical data for enhanced spatial inventory of AGB for NGHGI reporting. Ten generic spatial models at different spatial resolutions were developed for all sites and large-area estimates were evaluated (i) at the county level against independent design-based estimates via the US NFI Evalidator tool and (ii) within a large number of strips (about 1 km wide) predicted via LiDAR metrics at a high spatial resolution. The county-level estimates by the Evalidator and Landsat models were statistically equivalent and produced coefficients of determination (R2) above 0.85 that varied with site and resolution of predictors. 
The mean and standard deviation of the county-level estimates followed increasing and decreasing trends, respectively, as the resolution of the models coarsened. The Landsat-based total AGB estimates within the strips did not differ significantly from the total AGB obtained using LiDAR metrics and were within ±15 Mg/ha for each of the sites. We conclude that optical satellite data at resolutions up to 1000 m provide acceptable accuracy for the US NGHGI.
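Deriving coarser-resolution predictor rasters from the 30-m Landsat grid can be sketched with simple block averaging; this is one plausible aggregation scheme for illustration, not necessarily the exact resampling the authors used:

```python
import numpy as np

def block_mean(grid, factor):
    """Aggregate a fine-resolution raster to a coarser one by block averaging,
    e.g. factor=2 turns 30 m cells into 60 m cells. Edge rows/columns that do
    not fill a complete block are trimmed."""
    r, c = grid.shape
    trimmed = grid[:r - r % factor, :c - c % factor]
    return trimmed.reshape(r // factor, factor,
                           c // factor, factor).mean(axis=(1, 3))
```

Applying this repeatedly (or with larger factors) yields the ladder of predictor resolutions, from 30 m up to 1000 m, over which the models can be compared.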
NASA Astrophysics Data System (ADS)
Croghan, Danny; Van Loon, Anne; Bradley, Chris; Sadler, Jon; Hannah, David
2017-04-01
Studies relating rainfall events to river water quality are frequently hindered by the lack of high resolution rainfall data. Local studies are particularly vulnerable due to the spatial variability of precipitation, whilst studies in urban environments require precipitation data at high spatial and temporal resolutions. The use of point-source data makes identifying causal effects of storms on water quality problematic and can lead to erroneous interpretations. High spatial and temporal resolution rainfall radar data offers great potential to address these issues. Here we use rainfall radar data with a 1km spatial resolution and 5 minute temporal resolution sourced from the UK Met Office Nimrod system to study the effects of storm events on water temperature (WTemp) in Birmingham, UK. 28 WTemp loggers were placed over 3 catchments on a rural-urban land use gradient to identify trends in WTemp during extreme events within urban environments. Using GIS, the catchment associated with each logger was estimated, and 5 min. rainfall totals and intensities were produced for each sub-catchment. Comparisons of rainfall radar data to meteorological stations in the same grid cell revealed the high accuracy of rainfall radar data in our catchments (<5% difference for studied months). The rainfall radar data revealed substantial differences in rainfall quantity between the three adjacent catchments. The most urban catchment generally received more rainfall, with this effect greatest in the highest intensity storms, suggesting the possibility of urban heat island effects on precipitation dynamics within the catchment. Rainfall radar data provided more accurate sub-catchment rainfall totals allowing better modelled estimates of storm flow, whilst spatial fluctuations in both discharge and WTemp can be simply related to precipitation intensity. Storm flow inputs for each sub-catchment were estimated and linked to changes in WTemp. 
WTemp showed substantial fluctuations (>1 °C) over short durations (<30 minutes) during storm events in urbanised sub-catchments; however, WTemp recovery times were more prolonged. Use of the rainfall radar data allowed increased accuracy in estimates of storm flow timings and rainfall quantities at each sub-catchment, from which the impact of storm flow on WTemp could be quantified. We are currently using the radar data to derive thresholds for rainfall amount and intensity at which these storm deviations occur for each logger, from which the relative effects of land use and other catchment characteristics in each sub-catchment can be assessed. Our use of the rainfall radar data calls into question the validity of using station-based data for small-scale studies, particularly in urban areas, with high variation apparent in rainfall intensity both spatially and temporally. Variation was particularly high within the heavily urbanised catchment. For water quality studies, high-resolution rainfall radar can be implemented to increase the reliability of interpretations of the response of water quality variables to storm water inputs in urban catchments.
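The sub-catchment rainfall totals described above amount to masking the 1 km radar grid with each logger's estimated catchment; a minimal sketch (array names and the boolean-mask representation are assumptions for illustration):

```python
import numpy as np

def subcatchment_rainfall(radar_mm, mask):
    """Total and mean 5-minute rainfall over one sub-catchment.
    radar_mm: (rows, cols) grid of rainfall depths from the radar product;
    mask: boolean grid marking the 1 km cells inside the sub-catchment."""
    cells = radar_mm[mask]
    return float(cells.sum()), float(cells.mean())
```

Repeating this per 5-minute time step and per logger yields the sub-catchment rainfall series that storm-flow estimates can be built from.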
Mapping spatial patterns with morphological image processing
Peter Vogt; Kurt H. Riitters; Christine Estreguil; Jacek Kozak; Timothy G. Wade; James D. Wickham
2006-01-01
We use morphological image processing for classifying spatial patterns at the pixel level on binary land-cover maps. Land-cover pattern is classified as 'perforated,' 'edge,' 'patch,' and 'core' with higher spatial precision and thematic accuracy compared to a previous approach based on image convolution, while retaining the...
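The morphological idea can be illustrated with a numpy-only erosion: 'core' forest pixels are those whose full 3×3 neighbourhood is forested, and remaining forest pixels are boundary ('edge') pixels. This toy sketch covers only two of the four classes; the published method also separates 'patch' and 'perforated':

```python
import numpy as np

def core_edge_labels(forest):
    """Label pixels of a binary (0/1 integer) land-cover map. A 3x3 erosion
    keeps 'core' pixels (fully forested neighbourhood); remaining forest
    pixels become 'edge'; zeros stay 'background'."""
    h, w = forest.shape
    padded = np.pad(forest, 1, constant_values=0)
    core = np.ones_like(forest)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            core &= padded[dy:dy + h, dx:dx + w]
    return np.where(core == 1, "core",
                    np.where(forest == 1, "edge", "background"))
```

Because the classification is purely local and set-theoretic, it assigns every pixel a pattern class at the pixel level, which is the property the abstract contrasts with convolution-based scoring.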
Improving the accuracy of Laplacian estimation with novel multipolar concentric ring electrodes
Ding, Quan; Besio, Walter G.
2015-01-01
Conventional electroencephalography with disc electrodes has major drawbacks including poor spatial resolution, poor selectivity and low signal-to-noise ratio that critically limit its use. Concentric ring electrodes, consisting of several elements including the central disc and a number of concentric rings, are a promising alternative with the potential to improve all of the aforementioned aspects significantly. In our previous work, the tripolar concentric ring electrode was successfully used in a wide range of applications, demonstrating its superiority to the conventional disc electrode, in particular in the accuracy of Laplacian estimation. This paper takes the next step toward further improving the Laplacian estimation with novel multipolar concentric ring electrodes by completing and validating a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2, which allows cancellation of all the truncation terms up to the order of 2n. An explicit formula based on the inversion of a square Vandermonde matrix is derived to make the computation of the multipolar Laplacian more efficient. To confirm the analytic result that the accuracy of the Laplacian estimate increases with n, and to assess the significance of this gain in accuracy for practical applications, finite element method model analysis has been performed. Multipolar concentric ring electrode configurations with n ranging from 1 ring (bipolar electrode configuration) to 6 rings (septapolar electrode configuration) were directly compared, and the results suggest a significant increase in Laplacian accuracy with increasing n. PMID:26693200
Characterization and delineation of caribou habitat on Unimak Island using remote sensing techniques
NASA Astrophysics Data System (ADS)
Atkinson, Brian M.
The assessment of herbivore habitat quality is traditionally based on quantifying the forages available to the animal across their home range through ground-based techniques. While these methods are highly accurate, they can be time-consuming and expensive, especially for herbivores that occupy vast spatial landscapes. The Unimak Island caribou herd has been decreasing in the last decade at rates that have prompted discussion of management intervention. Frequent inclement weather in this region of Alaska has provided little opportunity to study the caribou forage habitat on Unimak Island. The overall objectives of this study were two-fold: 1) to assess the feasibility of using high-resolution color and near-infrared aerial imagery to map the forage distribution of caribou habitat on Unimak Island, and 2) to assess the use of a new high-resolution multispectral satellite imagery platform, RapidEye, and the effect of its "red-edge" spectral band on vegetation classification accuracy. Maximum likelihood classification algorithms were used to create land cover maps from aerial and satellite imagery. Accuracy assessments and transformed divergence values were produced to assess vegetative spectral information and classification accuracy. By using RapidEye and aerial digital imagery in a hierarchical supervised classification technique, we were able to produce a high-resolution land cover map of Unimak Island. We obtained an overall accuracy rate of 71.4 percent, which is comparable to other land cover maps using RapidEye imagery. The "red-edge" spectral band included in the RapidEye imagery provides additional spectral information that allows for a more accurate overall classification, raising overall accuracy by 5.2 percent.
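The maximum likelihood classifier mentioned above assigns each multispectral pixel to the class whose Gaussian discriminant (fit from training areas) scores highest; a minimal sketch with hypothetical class statistics:

```python
import numpy as np

def mlc_classify(pixels, class_stats):
    """Gaussian maximum likelihood classification.
    pixels: (N, bands) array; class_stats: {name: (mean_vector, covariance)}
    estimated from training polygons. Returns one class name per pixel."""
    names = list(class_stats)
    scores = []
    for name in names:
        mu, cov = class_stats[name]
        inv = np.linalg.inv(cov)
        logdet = np.linalg.slogdet(cov)[1]
        d = pixels - mu
        # log-likelihood up to an additive constant (discriminant function)
        scores.append(-0.5 * logdet
                      - 0.5 * np.einsum('ij,jk,ik->i', d, inv, d))
    return [names[i] for i in np.argmax(np.stack(scores), axis=0)]
```

Equal priors are assumed here; with class priors, a log-prior term would simply be added to each discriminant.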
NASA Astrophysics Data System (ADS)
Peterson, James Preston, II
Unmanned Aerial Systems (UAS) are rapidly blurring the lines between traditional and close range photogrammetry, and between surveying and photogrammetry. UAS are providing an economic platform for performing aerial surveying on small projects. The focus of this research was to describe traditional photogrammetric imagery and Light Detection and Ranging (LiDAR) geospatial products, describe close range photogrammetry (CRP), introduce UAS and computer vision (CV), and investigate whether industry mapping standards for accuracy can be met using UAS collection and CV processing. A 120-acre site was selected and 97 aerial targets were surveyed for evaluation purposes. Four UAS flights of varying heights above ground level (AGL) were executed, and three different target patterns of varying distances between targets were analyzed for compliance with American Society for Photogrammetry and Remote Sensing (ASPRS) and National Standard for Spatial Data Accuracy (NSSDA) mapping standards. This analysis resulted in twelve datasets. Error patterns were evaluated and reasons for these errors were determined. The relationship between the AGL, ground sample distance, target spacing and the root mean square error of the targets is exploited by this research to develop guidelines that use the ASPRS and NSSDA map standard as the template. These guidelines allow the user to select the desired mapping accuracy and determine what target spacing and AGL is required to produce the desired accuracy. These guidelines also address how UAS/CV phenomena affect map accuracy. General guidelines and recommendations are presented that give the user helpful information for planning a UAS flight using CV technology.
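The NSSDA statistic referenced above reports tested accuracy at the 95% confidence level from surveyed checkpoint residuals, using the standard factors 1.7308 (horizontal, valid when RMSE_x ≈ RMSE_y) and 1.9600 (vertical). A minimal sketch of that computation:

```python
import math

def nssda_accuracy(dx, dy, dz):
    """NSSDA 95% accuracy statistics from checkpoint residuals.
    dx, dy, dz: lists of (map minus survey) differences at the checkpoints.
    Returns (horizontal_accuracy, vertical_accuracy)."""
    n = len(dx)
    # Radial horizontal RMSE combines x and y residuals
    rmse_r = math.sqrt(sum(x * x + y * y for x, y in zip(dx, dy)) / n)
    rmse_z = math.sqrt(sum(z * z for z in dz) / n)
    return 1.7308 * rmse_r, 1.9600 * rmse_z
```

Computing this per flight height and target-spacing configuration is how the twelve datasets in the study could be checked against the mapping standard.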
Improving the accuracy of Laplacian estimation with novel multipolar concentric ring electrodes.
Makeyev, Oleksandr; Ding, Quan; Besio, Walter G
2016-02-01
Conventional electroencephalography with disc electrodes has major drawbacks including poor spatial resolution, poor selectivity and low signal-to-noise ratio that critically limit its use. Concentric ring electrodes, consisting of several elements including the central disc and a number of concentric rings, are a promising alternative with the potential to improve all of the aforementioned aspects significantly. In our previous work, the tripolar concentric ring electrode was successfully used in a wide range of applications, demonstrating its superiority to the conventional disc electrode, in particular in the accuracy of Laplacian estimation. This paper takes the next step toward further improving the Laplacian estimation with novel multipolar concentric ring electrodes by completing and validating a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2, which allows cancellation of all the truncation terms up to the order of 2n. An explicit formula based on the inversion of a square Vandermonde matrix is derived to make the computation of the multipolar Laplacian more efficient. To confirm the analytic result that the accuracy of the Laplacian estimate increases with n, and to assess the significance of this gain in accuracy for practical applications, finite element method model analysis has been performed. Multipolar concentric ring electrode configurations with n ranging from 1 ring (bipolar electrode configuration) to 6 rings (septapolar electrode configuration) were directly compared, and the results suggest a significant increase in Laplacian accuracy with increasing n.
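The Vandermonde system behind the multipolar weighting can be sketched as follows: weights applied to the ring-minus-disc potential differences are chosen so that all Taylor truncation terms up to order 2n cancel and the leading Laplacian term is normalized. The radius-dependent scaling constant is omitted here, and equally spaced rings at radii r, 2r, ..., nr are assumed:

```python
import numpy as np

def laplacian_weights(n):
    """Weights w_k for the differences (potential at ring k*r minus potential
    at the central disc), k = 1..n. Row j of the Vandermonde system enforces
    sum_k w_k * (k^2)^j = 1 for j = 1 and 0 for j = 2..n, cancelling
    truncation terms up to order 2n."""
    A = np.array([[(k * k) ** j for k in range(1, n + 1)]
                  for j in range(1, n + 1)], dtype=float)
    b = np.zeros(n)
    b[0] = 1.0
    return np.linalg.solve(A, b)
```

For n = 2 this recovers the familiar tripolar 16:−1 weighting of the middle- and outer-ring differences.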
Molina, Sergio L; Stodden, David F
2018-04-01
This study examined variability in throwing speed and spatial error to test the prediction of an inverted-U function (i.e., impulse-variability [IV] theory) and the speed-accuracy trade-off. Forty-five 9- to 11-year-old children were instructed to throw at a specified percentage of maximum speed (45%, 65%, 85%, and 100%) and hit the wall target. Results indicated no statistically significant differences in variable error across the target conditions (p = .72), failing to support the inverted-U hypothesis. Spatial accuracy results indicated no statistically significant differences with mean radial error (p = .18), centroid radial error (p = .13), and bivariate variable error (p = .08) also failing to support the speed-accuracy trade-off in overarm throwing. As neither throwing performance variability nor accuracy changed across percentages of maximum speed in this sample of children as well as in a previous adult sample, current policy and practices of practitioners may need to be reevaluated.
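The three spatial accuracy measures reported above can be computed from throw landing coordinates as follows; the coordinate convention (target at the origin) is an assumption for illustration:

```python
import math

def throw_error_metrics(points, target=(0.0, 0.0)):
    """Mean radial error (MRE: average distance from target), centroid radial
    error (CRE: distance of the throw centroid from target), and bivariate
    variable error (BVE: dispersion of throws about their own centroid)."""
    tx, ty = target
    n = len(points)
    mre = sum(math.hypot(x - tx, y - ty) for x, y in points) / n
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    cre = math.hypot(cx - tx, cy - ty)
    bve = math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points) / n)
    return mre, cre, bve
```

Separating CRE (bias) from BVE (consistency) is what lets the speed-accuracy trade-off be tested independently of any systematic aiming offset.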
A Computational Model of Spatial Visualization Capacity
ERIC Educational Resources Information Center
Lyon, Don R.; Gunzelmann, Glenn; Gluck, Kevin A.
2008-01-01
Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to…
Spatial image modulation to improve performance of computed tomography imaging spectrometer
NASA Technical Reports Server (NTRS)
Bearman, Gregory H. (Inventor); Wilson, Daniel W. (Inventor); Johnson, William R. (Inventor)
2010-01-01
Computed tomography imaging spectrometers ("CTIS"s) having patterns for imposing spatial structure are provided. The pattern may be imposed either directly on the object scene being imaged or at the field stop aperture. The use of the pattern improves the accuracy of the captured spatial and spectral information.
NASA Astrophysics Data System (ADS)
West, J. B.; Ehleringer, J. R.; Cerling, T.
2006-12-01
Understanding how the biosphere responds to change is at the heart of biogeochemistry, ecology, and other Earth sciences. The dramatic increase in human population and technological capacity over the past 200 years or so has resulted in numerous, simultaneous changes to biosphere structure and function. This has led to increased urgency in the scientific community to understand how systems have already responded to these changes, and how they might do so in the future. Since all biospheric processes exhibit some patchiness or patterning over space as well as time, we believe that understanding the dynamic interactions between natural systems and human technological manipulations can be improved if these systems are studied in an explicitly spatial context. We present here results of some of our efforts to model the spatial variation in the stable isotope ratios (δ2H and δ18O) of plants over large spatial extents, and how these spatial model predictions compare to spatially explicit data. Stable isotopes trace and record ecological processes and, as such, if modeled correctly over Earth's surface, allow insights into changes in biosphere states and processes across spatial scales. The data-model comparisons show good agreement, in spite of the remaining uncertainties (e.g., plant source water isotopic composition). For example, inter-annual changes in climate are recorded in wine stable isotope ratios. Also, a much simpler model of leaf water enrichment, driven with spatially continuous global rasters of precipitation and climate normals, largely agrees with complex GCM modeling that includes leaf water δ18O. Our results suggest that modeling plant stable isotope ratios across large spatial extents may be done with reasonable accuracy, including over time. These spatial maps, or isoscapes, can now be utilized to help understand spatially distributed data, as well as to help guide future studies designed to understand ecological change across landscapes.
NASA Astrophysics Data System (ADS)
Maljaars, Jakob M.; Labeur, Robert Jan; Möller, Matthias
2018-04-01
A generic particle-mesh method using a hybridized discontinuous Galerkin (HDG) framework is presented and validated for the solution of the incompressible Navier-Stokes equations. Building upon particle-in-cell concepts, the method is formulated in terms of an operator splitting technique in which Lagrangian particles are used to discretize an advection operator, and an Eulerian mesh-based HDG method is employed for the constitutive modeling to account for the inter-particle interactions. Key to the method is the variational framework provided by the HDG method. This makes it possible to formulate the projections between the Lagrangian particle space and the Eulerian finite element space as local (i.e. cellwise) ℓ2-projections, which can be computed efficiently. Furthermore, exploiting the HDG framework for solving the constitutive equations results in velocity fields which satisfy the incompressibility constraint very accurately in a local sense. By advecting the particles through these velocity fields, the particle distribution remains uniform over time, obviating the need for additional quality control. The presented methodology allows for a straightforward extension to arbitrary-order spatial accuracy on general meshes. A range of numerical examples shows that optimal convergence rates are obtained in space and, given the particular time stepping strategy, second-order accuracy is obtained in time. The model capabilities are further demonstrated by presenting results for the flow over a backward-facing step and for the flow around a cylinder.
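The cellwise ℓ2-projection at the heart of the particle-mesh transfer can be sketched in 1D as a least-squares fit of the particle values within one cell onto a local polynomial basis; the monomial basis and the cell-local setting here are simplifications of the paper's HDG spaces:

```python
import numpy as np

def project_particles_to_cell(x_p, u_p, degree=2):
    """Local (cellwise) l2-projection: fit a polynomial of the given degree to
    particle values u_p at positions x_p within one cell, by least squares.
    Returns the polynomial coefficients (constant term first)."""
    # Vandermonde matrix of the local monomial basis [1, x, x^2, ...]
    V = np.vander(x_p, degree + 1, increasing=True)
    coeffs, *_ = np.linalg.lstsq(V, u_p, rcond=None)
    return coeffs
```

Because each cell's projection involves only its own particles, the operation is embarrassingly local, which is what makes the particle-mesh transfer efficient in the HDG setting.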
NASA Astrophysics Data System (ADS)
Navratil, Peter; Wilps, Hans
2013-01-01
Three different object-based image classification techniques are applied to high-resolution satellite data for the mapping of the habitats of Asian migratory locust (Locusta migratoria migratoria) in the southern Aral Sea basin, Uzbekistan. A set of panchromatic and multispectral Système Pour l'Observation de la Terre-5 satellite images was spectrally enhanced by normalized difference vegetation index and tasseled cap transformation and segmented into image objects, which were then classified by three different classification approaches: a rule-based hierarchical fuzzy threshold (HFT) classification method was compared to a supervised nearest neighbor classifier and classification tree analysis by the quick, unbiased, efficient statistical trees algorithm. Special emphasis was laid on the discrimination of locust feeding and breeding habitats due to the significance of this discrimination for practical locust control. Field data on vegetation and land cover, collected at the time of satellite image acquisition, was used to evaluate classification accuracy. The results show that a robust HFT classifier outperformed the two automated procedures by 13% overall accuracy. The classification method allowed a reliable discrimination of locust feeding and breeding habitats, which is of significant importance for the application of the resulting data for an economically and environmentally sound control of locust pests because exact spatial knowledge on the habitat types allows a more effective surveying and use of pesticides.
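The hierarchical threshold idea can be sketched as expert rules applied to per-object spectral features such as NDVI; the feature names and threshold values below are invented for illustration and are not the paper's calibrated rule set:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index of an image object."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def classify_habitat(obj):
    """Toy hierarchical threshold classifier for one image object.
    obj: dict with mean 'nir' and 'red' reflectance and a 'wetness'
    (tasseled cap) feature -- all hypothetical names."""
    v = ndvi(obj["nir"], obj["red"])
    if v < 0.1:
        return "water/bare soil"
    if obj["wetness"] > 0.3:
        return "breeding habitat"   # e.g. dense reed stands near water
    return "feeding habitat" if v > 0.4 else "sparse vegetation"
```

The appeal of such rule sets, as the abstract notes, is that the decision logic is transparent and can encode expert knowledge about the feeding/breeding distinction directly.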
Kedia, Prashant; Gaidhane, Monica
2013-01-01
Endoscopic ultrasound-guided fine needle aspiration (EUS-FNA) is one of the least invasive and most effective modality in diagnosing pancreatic adenocarcinoma in solid pancreatic lesions, with a higher diagnostic accuracy than cystic tumors. EUS-FNA has been shown to detect tumors less than 3 mm, due to high spatial resolution allowing the detection of very small lesions and vascular invasion, particularly in the pancreatic head and neck, which may not be detected on transverse computed tomography. Furthermore, this minimally invasive procedure is often ideal in the endoscopic procurement of tissue in patients with unresectable tumors. While EUS-FNA has been increasingly used as a diagnostic tool, most studies have collectively looked at all primary pancreatic solid lesions, including lymphomas and pancreatic neuroendocrine neoplasms, whereas very few studies have examined the diagnostic utility of EUS-FNA of pancreatic ductal carcinoma only. As with any novel and advanced endoscopic procedure that may incorporate several practices and approaches, endoscopists have adopted diverse techniques to improve the tissue procurement practice and increase diagnostic accuracy. In this article, we present a review of literature to date and discuss currently practiced EUS-FNA technique, including indications, technical details, equipment, patient selection, and diagnostic accuracy. PMID:24143320
Zhang, Jie; Hodge, Bri -Mathias; Lu, Siyuan; ...
2015-11-10
Accurate solar photovoltaic (PV) power forecasting allows utilities to reliably utilize solar resources on their systems. However, to truly measure the improvements that any new solar forecasting methods provide, it is important to develop a methodology for determining baseline and target values for the accuracy of solar forecasting at different spatial and temporal scales. This paper aims at developing a framework to derive baseline and target values for a suite of generally applicable, value-based, and custom-designed solar forecasting metrics. The work was informed by close collaboration with utility and independent system operator partners. The baseline values are established based on state-of-the-art numerical weather prediction models and persistence models in combination with a radiative transfer model. The target values are determined based on the reduction in the amount of reserves that must be held to accommodate the uncertainty of PV power output. The proposed reserve-based methodology is a reasonable and practical approach that can be used to assess the economic benefits gained from improvements in accuracy of solar forecasting. Lastly, the financial baseline and targets can be translated back to forecasting accuracy metrics and requirements, which will guide research on solar forecasting improvements toward the areas that are most beneficial to power systems operations.
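One of the baseline models above is simple persistence; a minimal sketch of that baseline together with an RMSE accuracy metric (illustrative only, not the paper's full metric suite):

```python
def persistence_forecast(observed, horizon=1):
    """Persistence baseline: the forecast for time t is the observation made
    'horizon' steps earlier, so the series is just shifted."""
    return observed[:-horizon]

def rmse(forecast, actual):
    """Root-mean-square error between paired forecast and actual values."""
    n = len(forecast)
    return (sum((f - a) ** 2 for f, a in zip(forecast, actual)) / n) ** 0.5
```

Any candidate forecasting method has to beat the persistence RMSE at the same horizon before its improvement can be valued against the reserve-based targets.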
Multi-stage robust scheme for citrus identification from high resolution airborne images
NASA Astrophysics Data System (ADS)
Amorós-López, Julia; Izquierdo Verdiguier, Emma; Gómez-Chova, Luis; Muñoz-Marí, Jordi; Zoilo Rodríguez-Barreiro, Jorge; Camps-Valls, Gustavo; Calpe-Maravilla, Javier
2008-10-01
Identification of land cover types is one of the most critical activities in remote sensing. Nowadays, managing land resources by using remote sensing techniques is becoming a common procedure to speed up the process while reducing costs. However, data analysis procedures should satisfy the accuracy figures demanded by institutions and governments for further administrative actions. This paper presents a methodological scheme to update the citrus Geographical Information System (GIS) of the Comunidad Valenciana autonomous region (Spain). The proposed approach introduces a multi-stage automatic scheme to reduce visual photointerpretation and ground validation tasks. First, an object-oriented feature extraction process is carried out for each cadastral parcel from very high spatial resolution (VHR) images (0.5 m) acquired in the visible and near-infrared. Next, several automatic classifiers (decision trees, multilayer perceptron, and support vector machines) are trained and combined to improve the final accuracy of the results. The proposed strategy fulfils the high accuracy demanded by policy makers by combining automatic classification methods with available visual photointerpretation resources. A level of confidence based on the agreement between classifiers enables effective management by fixing the number of parcels to be reviewed. The proposed methodology can be applied to similar problems and applications.
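The agreement-based confidence can be sketched as a majority vote whose confidence is the fraction of classifiers that agree; the review-flagging policy below (flag any disagreement) is a hypothetical simplification of the paper's thresholding:

```python
from collections import Counter

def combine_classifiers(votes):
    """Combine per-parcel outputs of several classifiers by majority vote.
    Returns (label, confidence, needs_review): confidence is the fraction of
    agreeing classifiers, and parcels with any disagreement are flagged for
    visual photointerpretation."""
    label, count = Counter(votes).most_common(1)[0]
    confidence = count / len(votes)
    return label, confidence, confidence < 1.0
```

Tuning the review threshold (here fixed at full agreement) is what lets managers trade photointerpretation workload against the accuracy figure required by the administration.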
Filters for Improvement of Multiscale Data from Atomistic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, David J.; Reynolds, Daniel R.
2017-01-05
Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. By improving the accuracy of this filtered simulation data, it leads to a dramatic computational savings by allowing for shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.
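The spectral filtering described above can be sketched as a Fourier low-pass filter on a noisy 1D continuum field; the fixed mode cutoff here is an assumption for illustration, whereas the paper selects the filtering level automatically for additive white noise:

```python
import numpy as np

def spectral_lowpass(u, keep):
    """Zero all Fourier modes above index 'keep' in a periodic 1D signal,
    suppressing high-frequency sampling noise while retaining the smooth
    continuum-scale content."""
    U = np.fft.rfft(u)
    U[keep + 1:] = 0.0
    return np.fft.irfft(U, n=len(u))
```

When the underlying field is smooth (low-mode) and the contamination lies in high modes, the filter recovers the clean signal exactly in this idealized setting.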
NASA Astrophysics Data System (ADS)
Gómez Giménez, M.; Della Peruta, R.; de Jong, R.; Keller, A.; Schaepman, M. E.
2015-12-01
Agroecosystems play an important role in providing economic and ecosystem services, which directly impact society. Inappropriate land use and unsustainable agricultural management, with the associated nutrient cycles, can jeopardize important soil functions such as food production, livestock feeding, and conservation of biodiversity. The objective of this study was to integrate remotely sensed land cover information into a regional Land Management Model (LMM) to improve the assessment of spatially explicit nutrient balances for agroecosystems. Remotely sensed data, together with an optimized parameter set, fed the LMM and provided a better spatial allocation of agricultural data aggregated at farm level. The integration of land use information in the land allocation process relied predominantly on three factors: i) spatial resolution, ii) classification accuracy, and iii) parcel definition. The best input-parameter combination resulted in two different land cover classifications with overall accuracies of 98%, improving the LMM performance by 16% as compared to using non-spatially explicit input. First, the use of spatially explicit information improved the spatial allocation output, resulting in a pattern that better followed parcel boundaries (Figure 1). Second, the high classification accuracies ensured consistency between the datasets used. Third, the use of a suitable spatial unit to define the parcel boundaries influenced the model in terms of computational time and the amount of farmland allocated. We conclude that the combined use of remote sensing (RS) data with the LMM has the potential to provide highly accurate information on spatially explicit nutrient balances, which is crucial for policy options concerning sustainable management of agricultural soils. Figure 1. Details of the spatial pattern obtained: a) using only the farm census data, b) also using land use information.
Framed in black in the left image (a) are examples of artifacts that disappeared when using land use information (right image, b). Colors represent different ownership.
NASA Astrophysics Data System (ADS)
Lin, S.; Li, J.; Liu, Q.
2018-04-01
Satellite remote sensing data provide spatially continuous and temporally repetitive observations of land surfaces, and they have become increasingly important for monitoring vegetation photosynthetic dynamics over large regions. However, remote sensing data have limitations in spatial and temporal scale: for example, higher-spatial-resolution data such as Landsat have 30-m spatial resolution but a 16-day revisit period, while high-temporal-resolution data such as geostationary imagery have a 30-minute imaging interval but lower spatial resolution (> 1 km). The objective of this study is to investigate whether combining high-spatial- and high-temporal-resolution remote sensing data can improve the accuracy of gross primary production (GPP) estimation in cropland. For this analysis we used three years (2010 to 2012) of Landsat-based NDVI data, the MOD13 vegetation index product, and Geostationary Operational Environmental Satellite (GOES) data as input parameters to estimate GPP over a small cropland region in Nebraska, US. We then validated the remote-sensing-based GPP against in-situ carbon flux measurements. Results showed that: 1) the GOES visible band explains about half of the variance in the in-situ measured photosynthetically active radiation (PAR) (R2 = 0.52), while the European Centre for Medium-Range Weather Forecasts ERA-Interim reanalysis data explain 64% of the PAR variance (R2 = 0.64); 2) estimating GPP with Landsat 30-m spatial resolution data and ERA daily meteorology data has the highest accuracy (R2 = 0.85, RMSE < 3 gC/m2/day), performing better than using the MODIS 1-km NDVI/EVI product as input; 3) using daily meteorology data as input for GPP estimation with high-spatial-resolution data yields higher agreement than 8-day and 16-day inputs. In general, using high-spatial-resolution and high-frequency satellite remote sensing data can improve GPP estimation accuracy in cropland.
NASA Astrophysics Data System (ADS)
Gill, G.; Sakrani, T.; Cheng, W.; Zhou, J.
2017-09-01
Many studies have utilized the spatial correlations among traffic crash data to develop crash prediction models with the aim of investigating influential factors or predicting crash counts at different sites. The spatial correlations have been modeled through different forms of weight matrices to account for heterogeneity, which improves the estimation performance of the models. But weight matrices have rarely been compared for their accuracy in predicting crash counts. This study compared two different approaches for modelling the spatial correlations among crash data at the macro level (county). Multivariate full Bayesian crash prediction models were developed using Decay-50 (distance-based) and Queen-1 (adjacency-based) weight matrices for simultaneous estimation of crash counts for four different modes: vehicle, motorcycle, bike, and pedestrian. The goodness-of-fit and different criteria for crash-count prediction accuracy revealed the superiority of Decay-50 over Queen-1. Decay-50 differed from Queen-1 essentially in its selection of neighbors and its more robust spatial weight structure, which gave it the flexibility to accommodate the spatially correlated crash data. The consistently better prediction accuracy of Decay-50 further bolstered its superiority. Although the data collection effort to gather centroid distances among counties for Decay-50 may appear to be a downside, the model has a significant edge in fitting the crash data without losing the simplicity of computing the estimated crash counts.
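The two weighting schemes differ mainly in how neighbors are chosen: adjacency (shared borders) versus distance with a cutoff. A minimal sketch of a distance-decay weight matrix with a 50-mile cutoff; the centroids, the inverse-distance weighting, and the row standardization below are illustrative assumptions, not details taken from the paper:

```python
import math

# Hypothetical county centroids (x, y) in miles
centroids = {"A": (0, 0), "B": (30, 0), "C": (70, 10), "D": (120, 40)}

def decay_weights(centroids, cutoff=50.0):
    """Distance-decay spatial weights: counties within `cutoff` miles are
    neighbors, weighted ~ 1/distance and row-standardized to sum to 1."""
    W = {}
    for i, (xi, yi) in centroids.items():
        row = {}
        for j, (xj, yj) in centroids.items():
            if i == j:
                continue
            d = math.hypot(xi - xj, yi - yj)
            if d <= cutoff:
                row[j] = 1.0 / d
        s = sum(row.values())
        W[i] = {j: w / s for j, w in row.items()} if s else {}
    return W

W = decay_weights(centroids)
print(W["A"])  # B is A's only neighbor within 50 miles
```

A Queen-style adjacency matrix would instead assign equal weight to every county sharing a border, regardless of centroid distance.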
Ma, Heng; Yang, Jun; Liu, Jing; Ge, Lan; An, Jing; Tang, Qing; Li, Han; Zhang, Yu; Chen, David; Wang, Yong; Liu, Jiabin; Liang, Zhigang; Lin, Kai; Jin, Lixin; Bi, Xiaoming; Li, Kuncheng; Li, Debiao
2012-04-15
Myocardial perfusion magnetic resonance imaging (MRI) with sliding-window conjugate-gradient highly constrained back-projection reconstruction (SW-CG-HYPR) allows whole left ventricular coverage, improved temporal and spatial resolution and signal/noise ratio, and reduced cardiac motion-related image artifacts. The accuracy of this technique for detecting coronary artery disease (CAD) has not been determined in a large number of patients. We prospectively evaluated the diagnostic performance of myocardial perfusion MRI with SW-CG-HYPR in patients with suspected CAD. A total of 50 consecutive patients who were scheduled for coronary angiography with suspected CAD underwent myocardial perfusion MRI with SW-CG-HYPR at 3.0 T. The perfusion defects were interpreted qualitatively by 2 blinded observers and were correlated with x-ray angiographic stenoses ≥50%. The prevalence of CAD was 56%. In the per-patient analysis, the sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of SW-CG-HYPR were 96% (95% confidence interval 82% to 100%), 82% (95% confidence interval 60% to 95%), 87% (95% confidence interval 70% to 96%), 95% (95% confidence interval 74% to 100%), and 90% (95% confidence interval 82% to 98%), respectively. In the per-vessel analysis, the corresponding values were 98% (95% confidence interval 91% to 100%), 89% (95% confidence interval 80% to 94%), 86% (95% confidence interval 76% to 93%), 99% (95% confidence interval 93% to 100%), and 93% (95% confidence interval 89% to 97%), respectively. In conclusion, myocardial perfusion MRI using SW-CG-HYPR allows whole left ventricular coverage and high resolution and has high diagnostic accuracy in patients with suspected CAD. Copyright © 2012 Elsevier Inc. All rights reserved.
Extended Kalman filtering for continuous volumetric MR-temperature imaging.
Denis de Senneville, Baudouin; Roujol, Sébastien; Hey, Silke; Moonen, Chrit; Ries, Mario
2013-04-01
Real time magnetic resonance (MR) thermometry has evolved into the method of choice for the guidance of high-intensity focused ultrasound (HIFU) interventions. For this role, MR-thermometry should preferably have a high temporal and spatial resolution and allow observing the temperature over the entire targeted area and its vicinity with a high accuracy. In addition, the precision of real time MR-thermometry for therapy guidance is generally limited by the available signal-to-noise ratio (SNR) and the influence of physiological noise. MR-guided HIFU would benefit from large-coverage volumetric temperature maps, including characterization of volumetric heating trajectories as well as near- and far-field heating. In this paper, continuous volumetric MR-temperature monitoring was obtained as follows. The targeted area was continuously scanned during the heating process by a multi-slice sequence. Measured data and a priori knowledge of 3-D data derived from a forecast based on a physical model were combined using an extended Kalman filter (EKF). The proposed reconstruction improved the temperature measurement resolution and precision while maintaining guaranteed output accuracy. The method was evaluated experimentally ex vivo on a phantom, and in vivo on a porcine kidney, using HIFU heating. In the in vivo experiment, it allowed the reconstruction from a spatio-temporally under-sampled data set (with an update rate for each voxel of 1.143 s) to a 3-D dataset covering a field of view of 142.5×285×54 mm(3) with a voxel size of 3×3×6 mm(3) and a temporal resolution of 0.127 s. The method also provided noise reduction, while having a minimal impact on accuracy and latency.
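The heart of this kind of Kalman fusion is a variance-weighted blend of a model forecast with a noisy measurement. A scalar sketch under toy assumptions (a perfectly linear heating model and invented noise variances, so a plain Kalman update stands in for the paper's extended, volumetric version):

```python
import random

def kalman_update(x_pred, P_pred, z, R):
    """Fuse model forecast (x_pred, variance P_pred) with a measurement z
    (variance R); returns the updated estimate and its variance."""
    K = P_pred / (P_pred + R)          # Kalman gain: trust split
    x = x_pred + K * (z - x_pred)
    P = (1 - K) * P_pred
    return x, P

# Toy heating curve: the model predicts +1.0 C per step; MR measurements
# are corrupted by noise with a standard deviation of 2.0 C
random.seed(1)
truth, x, P = 20.0, 20.0, 1.0
Q, R = 0.01, 4.0                       # process / measurement noise variances
errs_meas, errs_kf = [], []
for _ in range(200):
    truth += 1.0
    x_pred, P_pred = x + 1.0, P + Q    # forecast from the physical model
    z = truth + random.gauss(0, 2.0)   # noisy temperature measurement
    x, P = kalman_update(x_pred, P_pred, z, R)
    errs_meas.append(abs(z - truth))
    errs_kf.append(abs(x - truth))
print(sum(errs_kf) < sum(errs_meas))   # filtered estimate beats raw data
```

The same trade-off drives the paper's result: the physical model carries information between sparse measurements, so precision improves without extra scan time.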
Automatic Extraction of Small Spatial Plots from Geo-Registered UAS Imagery
NASA Astrophysics Data System (ADS)
Cherkauer, Keith; Hearst, Anthony
2015-04-01
Accurate extraction of spatial plots from high-resolution imagery acquired by Unmanned Aircraft Systems (UAS) is a prerequisite for accurate assessment of experimental plots in many geoscience fields. If the imagery is correctly geo-registered, then it may be possible to accurately extract plots from the imagery based on their map coordinates. To test this approach, a UAS was used to acquire visual imagery of 5 ha of soybean fields containing 6.0 m2 plots in a complex planting scheme. Sixteen artificial targets were set up in the fields before flights, and different spatial configurations of 0 to 6 targets were used as Ground Control Points (GCPs) for geo-registration, resulting in a total of 175 geo-registered image mosaics with a broad range of geo-registration accuracies. Geo-registration accuracy was quantified based on the horizontal Root Mean Squared Error (RMSE) of targets used as checkpoints. Twenty test plots were extracted from the geo-registered imagery. Plot extraction accuracy was quantified based on the percentage of the desired plot area that was extracted. It was found that using 4 GCPs along the perimeter of the field minimized the horizontal RMSE and enabled a plot extraction accuracy of at least 70%, with a mean plot extraction accuracy of 92%. The methods developed are suitable for work in many fields where replicates across time and space are necessary to quantify variability.
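Geo-registration accuracy here is the horizontal RMSE over checkpoint targets: the root of the mean squared planimetric distance between each checkpoint's mosaic position and its surveyed position. A minimal sketch (the coordinates below are invented for illustration):

```python
import math

def horizontal_rmse(predicted, surveyed):
    """Horizontal RMSE between geo-registered checkpoint positions (x, y)
    and surveyed ground-truth coordinates, in the same units."""
    sq = [(px - sx) ** 2 + (py - sy) ** 2
          for (px, py), (sx, sy) in zip(predicted, surveyed)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical checkpoint coordinates in metres
pred = [(100.02, 200.01), (150.00, 250.03), (199.98, 299.99)]
true = [(100.00, 200.00), (150.01, 250.00), (200.00, 300.00)]
print(round(horizontal_rmse(pred, true), 4))
```

Plot extraction accuracy is then a separate area-overlap percentage computed per plot, with RMSE serving as the predictor of how well map-coordinate extraction will work.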
Mori, Shinichiro; Inaniwa, Taku; Kumagai, Motoki; Kuwae, Tsunekazu; Matsuzaki, Yuka; Furukawa, Takuji; Shirai, Toshiyuki; Noda, Koji
2012-06-01
To increase the accuracy of carbon-ion beam scanning therapy, we have developed a graphical user interface-based digitally-reconstructed radiograph (DRR) software system for use in routine clinical practice at our center. The DRR software is used in particular scenarios in the new treatment facility to achieve the same level of geometrical accuracy at treatment as at the imaging session. DRR calculation is implemented simply as the summation of CT image voxel values along the X-ray projection ray. Since we implemented graphics processing unit-based computation, the DRR images are calculated with a speed sufficient for the clinical practice requirements. Since high-spatial-resolution flat panel detector (FPD) images must be registered to the reference DRR images during patient setup in all scenarios, the DRR images also need a spatial resolution close to that of the FPD images. To overcome the limitation imposed by the CT voxel size, we applied image processing to improve the spatial resolution of the calculated DRRs. The DRR software introduced here enabled patient positioning with sufficient accuracy for the implementation of carbon-ion beam scanning therapy at our center.
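The DRR model stated above, summing CT voxel values along each projection ray, reduces for parallel rays aligned with a volume axis to a per-pixel sum. A toy sketch with an invented 2x2x3 volume (a clinical implementation would trace divergent rays from the X-ray source and interpolate between voxels):

```python
# Toy CT volume indexed ct[z][y][x]; with parallel rays along x, the DRR
# pixel at (z, y) is simply the sum of the voxel values along that ray.
ct = [[[1, 2, 3],
       [4, 5, 6]],
      [[7, 8, 9],
       [0, 1, 2]]]

def drr_along_x(ct):
    return [[sum(ray) for ray in plane] for plane in ct]

print(drr_along_x(ct))  # → [[6, 15], [24, 3]]
```

Because every output pixel is an independent sum, the computation parallelizes trivially, which is why a GPU implementation reaches clinically useful speeds.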
NASA Astrophysics Data System (ADS)
Fergason, R. L.; Laura, J.; Hare, T. M.; Otero, R.; Edgar, L. A.
2017-12-01
A Spatial Data Infrastructure (SDI) is a robust framework for data and data products, metadata, data access mechanisms, standards, policy, and a user community that helps to define and standardize the data necessary to meet some specified goal. The primary objective of an SDI is to improve communication, to enhance data access, and to aid in identifying gaps in knowledge. We are developing an SDI that describes the foundational data sets and accuracy requirements to evaluate landing site safety, facilitate the successful operation of Terrain Relative Navigation (TRN), and assist in the operation of the rover once it has successfully landed on Mars. Through current development efforts, an implicit SDI exists for the Mars 2020 mission. An explicit SDI will allow us to identify any potential gaps in knowledge, facilitate communication between the different institutions involved in landing site evaluation and TRN development, and help ensure a smooth transition from landing to surface operations. This SDI is currently relevant to the Mars 2020 rover mission, but can also serve as a means to document current requirements for foundational data products and standards for future landed missions to Mars and other planetary bodies. To generate a Mars 2020-specific SDI, we must first document and rationalize data set and accuracy requirements for evaluating landing sites, performing surface operations, and inventorying Mars 2020 mission needs in terms of an SDI framework. This step will allow us to 1) evaluate and define what is needed for the acquisition of data and the generation and validation of data products, 2) articulate the accuracy and co-registration requirements, and 3) identify needs for data access (and eventual archiving). This SDI document will serve as a means to communicate the existing foundational products, standards that were followed in producing these products, and where and how these products can be accessed by the planetary community.
This SDI will also facilitate discussions between the landing and surface operations groups to communicate the available data and identify unique needs to surface operations. Our goal is to continually review and update this SDI throughout the Mars 2020 landing site evaluation and operations, so that it remains relevant and effective as data availability and needs evolve.
Application of a single-flicker online SSVEP BCI for spatial navigation.
Chen, Jingjing; Zhang, Dan; Engel, Andreas K; Gong, Qin; Maye, Alexander
2017-01-01
A promising approach for brain-computer interfaces (BCIs) employs the steady-state visual evoked potential (SSVEP) for extracting control information. Main advantages of these SSVEP BCIs are a simple and low-cost setup, little effort to adjust the system parameters to the user and comparatively high information transfer rates (ITR). However, traditional frequency-coded SSVEP BCIs require the user to gaze directly at the selected flicker stimulus, which is liable to cause fatigue or even photic epileptic seizures. The spatially coded SSVEP BCI we present in this article addresses this issue. It uses a single flicker stimulus that appears always in the extrafoveal field of view, yet it allows the user to control four control channels. We demonstrate the embedding of this novel SSVEP stimulation paradigm in the user interface of an online BCI for navigating a 2-dimensional computer game. Offline analysis of the training data reveals an average classification accuracy of 96.9±1.64%, corresponding to an information transfer rate of 30.1±1.8 bits/min. In online mode, the average classification accuracy reached 87.9±11.4%, which resulted in an ITR of 23.8±6.75 bits/min. We did not observe a strong relation between a subject's offline and online performance. Analysis of the online performance over time shows that users can reliably control the new BCI paradigm with stable performance over at least 30 minutes of continuous operation.
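The reported ITRs follow from the standard Wolpaw information-transfer formula for an N-target BCI, which combines the number of targets, the classification accuracy, and the trial length. A sketch for the four-channel paradigm; the ~3.5 s trial length used below is our assumption for illustration, not a figure from the paper:

```python
import math

def bits_per_trial(n_targets, accuracy):
    """Wolpaw information-transfer formula for an N-target BCI."""
    N, P = n_targets, accuracy
    bits = math.log2(N)
    if 0 < P < 1:
        bits += P * math.log2(P) + (1 - P) * math.log2((1 - P) / (N - 1))
    return bits

def itr_bits_per_min(n_targets, accuracy, trial_seconds):
    return bits_per_trial(n_targets, accuracy) * 60.0 / trial_seconds

# 4 control channels at the reported offline accuracy; trial length assumed
print(round(itr_bits_per_min(4, 0.969, 3.5), 1))
```

With a perfect classifier the formula reduces to log2(N) bits per selection; misclassifications subtract the binary entropy of the error process, which is why online ITR drops along with accuracy.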
MR-based source localization for MR-guided HDR brachytherapy
NASA Astrophysics Data System (ADS)
Beld, E.; Moerland, M. A.; Zijlstra, F.; Viergever, M. A.; Lagendijk, J. J. W.; Seevinck, P. R.
2018-04-01
For the purpose of MR-guided high-dose-rate (HDR) brachytherapy, a method for real-time localization of an HDR brachytherapy source was developed, which requires high spatial and temporal resolutions. MR-based localization of an HDR source serves two main aims. First, it enables real-time treatment verification by determination of the HDR source positions during treatment. Second, when using a dummy source, MR-based source localization provides an automatic detection of the source dwell positions after catheter insertion, allowing elimination of the catheter reconstruction procedure. Localization of the HDR source was conducted by simulation of the MR artifacts, followed by a phase correlation localization algorithm applied to the MR images and the simulated images, to determine the position of the HDR source in the MR images. To increase the temporal resolution of the MR acquisition, the spatial resolution was decreased, and a subpixel localization operation was introduced. Furthermore, parallel imaging (sensitivity encoding) was applied to further decrease the MR scan time. The localization method was validated by a comparison with CT, and the accuracy and precision were investigated. The results demonstrated that the described method could be used to determine the HDR source position with a high accuracy (0.4–0.6 mm) and a high precision (⩽0.1 mm), at high temporal resolutions (0.15–1.2 s per slice). This would enable real-time treatment verification as well as an automatic detection of the source dwell positions.
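Phase correlation, as used above, estimates a displacement from the normalized cross-power spectrum of two images: the inverse transform of that spectrum peaks at the shift. A 1-D pure-Python sketch of the idea (the actual method operates on 2-D/3-D MR images and simulated artifact templates, with subpixel refinement):

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def phase_correlate(a, b):
    """Estimate the circular shift s with b[n] ≈ a[n - s], via the
    normalized cross-power spectrum (1-D sketch of the 2-D/3-D method)."""
    N = len(a)
    A, B = dft(a), dft(b)
    R = [(x.conjugate() * y) / abs(x.conjugate() * y)
         if abs(x) * abs(y) > 1e-12 else 0j
         for x, y in zip(A, B)]
    # Inverse transform of the phase-only spectrum peaks at the shift
    corr = [sum(R[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real
            for n in range(N)]
    return max(range(N), key=corr.__getitem__)

signal = [0, 0, 1, 3, 1, 0, 0, 0]
shifted = signal[-2:] + signal[:-2]      # circularly shifted by 2 samples
print(phase_correlate(signal, shifted))  # → 2
```

Discarding the magnitudes and keeping only the phase makes the peak sharp and robust to intensity differences, which is what enables localization well below the voxel size once the peak is interpolated.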
Combining geostatistics with Moran's I analysis for mapping soil heavy metals in Beijing, China.
Huo, Xiao-Ni; Li, Hong; Sun, Dan-Feng; Zhou, Lian-Di; Li, Bao-Guo
2012-03-01
Production of high quality interpolation maps of heavy metals is important for risk assessment of environmental pollution. In this paper, the spatial correlation information obtained from Moran's I analysis was used to supplement traditional geostatistics. From the Moran's I analysis, four characteristic distances were obtained and used as the active lag distance to calculate the semivariance. Validation of the optimality of the semivariance demonstrated that using as the active lag distance the two distances at which Moran's I and the standardized Moran's I, Z(I), reached a maximum can improve the fitting accuracy of the semivariance. Spatial interpolation was then produced based on the two distances and their nested model. The comparative analysis of estimation accuracy and of the measured and predicted pollution status showed that the method combining geostatistics with Moran's I analysis was better than traditional geostatistics. Thus, Moran's I analysis is a useful complement to geostatistics for improving the spatial interpolation accuracy of heavy metals.
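Global Moran's I, the statistic underlying this analysis, measures spatial autocorrelation as a spatially weighted cross-product of deviations from the mean. A minimal sketch on four invented sites with rook adjacency (real analyses use distance-band weights evaluated over many lags):

```python
def morans_i(values, weights):
    """Global Moran's I; weights[i][j] is the spatial weight between
    sites i and j (zero diagonal)."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    w_sum = sum(sum(row) for row in weights)
    return (n / w_sum) * (num / den)

# Four sites on a line with adjacency weights; a smooth gradient is
# positively autocorrelated, so I exceeds its expectation of -1/(n-1)
vals = [1.0, 2.0, 3.0, 4.0]
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
print(round(morans_i(vals, W), 3))  # → 0.333
```

Evaluating I (or its standardized form Z(I)) across increasing lag distances and locating the maxima is what yields the characteristic distances used as active lag distances in the paper.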
GEOSPATIAL DATA ACCURACY ASSESSMENT
The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...
Meneghetti, Chiara; Labate, Enia; Pazzaglia, Francesca; Hamilton, Colin; Gyselinck, Valérie
2017-05-01
This study examines the involvement of spatial and visual working memory (WM) in the construction of flexible spatial models derived from survey and route descriptions. Sixty young adults listened to environment descriptions, 30 from a survey perspective and the other 30 from a route perspective, while they performed spatial (spatial tapping [ST]) and visual (dynamic visual noise [DVN]) secondary tasks - believed to overload the spatial and visual WM components, respectively - or no secondary task (control, C). Their mental representations of the environment were tested by free recall and by a verification test with both route and survey statements. Results showed that, for both recall tasks, accuracy was worse in the ST condition than in the C or DVN conditions. In the verification test, both ST and DVN decreased accuracy for sentences testing spatial relations from the perspective opposite to the one learnt, relative to sentences from the same perspective; only ST interfered more strongly than the C condition for sentences from the opposite perspective. Overall, these findings indicate that both visual and spatial WM, and especially the latter, are involved in the construction of perspective-flexible spatial models. © 2016 The British Psychological Society.
Active machine learning for rapid landslide inventory mapping with VHR satellite images (Invited)
NASA Astrophysics Data System (ADS)
Stumpf, A.; Lachiche, N.; Malet, J.; Kerle, N.; Puissant, A.
2013-12-01
VHR satellite images have become a primary source for landslide inventory mapping after major triggering events such as earthquakes and heavy rainfalls. Visual image interpretation is still the prevailing standard method for operational purposes, but it is time-consuming and not well suited to fully exploit the increasingly better supply of remote sensing data. Recent studies have addressed the development of more automated image analysis workflows for landslide inventory mapping. In particular, object-oriented approaches that account for spatial and textural image information have been demonstrated to be more adequate than pixel-based classification, but manually elaborated rule-based classifiers are difficult to adapt to changing scene characteristics. Machine learning algorithms can learn classification rules for complex image patterns from labelled examples and can be adapted straightforwardly with available training data. In order to reduce the amount of costly training data, active learning (AL) has evolved as a key concept to guide the sampling for many applications. The underlying idea of AL is to initialize a machine learning model with a small training set, and to subsequently exploit the model state and data structure to iteratively select the most valuable samples that should be labelled by the user. With relatively few queries and labelled samples, an AL strategy yields higher accuracies than an equivalent classifier trained with many randomly selected samples. This study addressed the development of an AL method for landslide mapping from VHR remote sensing images with special consideration of the spatial distribution of the samples. Our approach [1] is based on the Random Forest algorithm and considers the classifier uncertainty as well as the variance of potential sampling regions to guide the user towards the most valuable sampling areas.
The algorithm explicitly searches for compact regions and thereby avoids the spatially disperse sampling pattern inherent to most other AL methods. The accuracy, the sampling time, and the computational runtime of the algorithm were evaluated on multiple satellite images capturing recent large-scale landslide events. Sampling 1-4% of the study areas achieved accuracies between 74% and 80%, whereas standard sampling schemes yielded accuracies of only 28% to 50% at equal sampling cost. Compared to commonly used point-wise AL algorithms, the proposed approach significantly reduces the number of iterations and hence the computational runtime. Since the user can focus on relatively few compact areas (rather than on hundreds of distributed points), the overall labeling time is reduced by more than 50% compared to point-wise queries. An experimental evaluation of multiple expert mappings demonstrated strong relationships between the uncertainties of the experts and of the machine learning model. It revealed that the achieved accuracies are within the range of the inter-expert disagreement and that it will be indispensable to consider ground truth uncertainties to truly achieve further enhancements in the future. The proposed method is generally applicable to a wide range of optical satellite images and landslide types. [1] A. Stumpf, N. Lachiche, J.-P. Malet, N. Kerle, and A. Puissant, Active learning in the spatial domain for remote sensing image classification, IEEE Transactions on Geoscience and Remote Sensing, 2013, DOI 10.1109/TGRS.2013.2262052.
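The core AL loop, train, query the most uncertain sample, label, retrain, can be sketched in a few lines. The sketch below uses a deliberately trivial 1-D threshold classifier in place of the paper's Random Forest, and an invented decision boundary, purely to show how uncertainty sampling concentrates queries where the model is least certain:

```python
def train_threshold(labeled):
    """Fit a trivial 1-D classifier: threshold halfway between class means."""
    m0 = [x for x, y in labeled if y == 0]
    m1 = [x for x, y in labeled if y == 1]
    return (sum(m0) / len(m0) + sum(m1) / len(m1)) / 2

def uncertainty_sampling(pool, oracle, n_queries):
    """Pool-based active learning: repeatedly query the sample the current
    model is least certain about (closest to the decision boundary)."""
    labeled = [(0.0, 0), (1.0, 1)]          # tiny labelled seed set
    for _ in range(n_queries):
        t = train_threshold(labeled)
        x = min(pool, key=lambda p: abs(p - t))
        pool.remove(x)
        labeled.append((x, oracle(x)))      # the "user" labels the query
    return train_threshold(labeled)

# Ground truth: class 1 iff x >= 0.6; queries cluster near the boundary
oracle = lambda x: int(x >= 0.6)
pool = [i / 100 for i in range(100)]
t = uncertainty_sampling(pool, oracle, n_queries=20)
print(abs(t - 0.6) < 0.1)  # boundary recovered from few labels
```

The paper's contribution replaces the point-wise `min` query with a search for compact, high-uncertainty regions, so the user labels a few contiguous areas instead of scattered pixels.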
NASA Technical Reports Server (NTRS)
Hanold, Gregg T.; Hanold, David T.
2010-01-01
This paper presents a new Route Generation Algorithm that accurately and realistically represents human route planning and navigation for Military Operations in Urban Terrain (MOUT). The accuracy of this algorithm in representing human behavior is measured using the Unreal Tournament™ 2004 (UT2004) game engine to provide the simulation environment in which the routes taken by the human player can be compared with those of a Synthetic Agent (BOT) executing the A-star algorithm and the new Route Generation Algorithm. The new Route Generation Algorithm computes the BOT route based on partial or incomplete knowledge received from the UT2004 game engine during game play. To allow BOT navigation to occur continuously throughout game play with incomplete knowledge of the terrain, a spatial network model of the UT2004 MOUT terrain is captured and stored in an Oracle 11g Spatial Data Object (SDO). The SDO allows a partial data query to be executed to generate continuous route updates based on the terrain knowledge and the stored dynamic BOT, player, and environmental parameters returned by the query. The partial data query permits the dynamic adjustment of the planned routes by the Route Generation Algorithm based on the current state of the environment during a simulation. The dynamic nature of this algorithm allows the BOT to more accurately mimic the routes taken by a human executing under the same conditions, thereby improving the realism of the BOT in a MOUT simulation environment.
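The A-star baseline against which the new algorithm is compared can be sketched on a toy grid (the grid, obstacle layout, and unit step costs below are invented; the paper's version runs on the UT2004 terrain network):

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid (1 = blocked), Manhattan heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, node, path)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None

# Toy "urban terrain": 0 = walkable, 1 = building
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(len(path) - 1)  # number of moves around the obstacle
```

A-star assumes complete, static knowledge of the terrain; the paper's Route Generation Algorithm differs precisely in replanning from partial knowledge returned incrementally by queries during play.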
Direct Detection Doppler Lidar for Spaceborne Wind Measurement
NASA Technical Reports Server (NTRS)
Korb, C. Laurence; Flesia, Cristina
1999-01-01
The theory of double-edge lidar techniques for measuring the atmospheric wind using aerosol and molecular backscatter is described. Two high-spectral-resolution filters with opposite slopes are located about the laser frequency for the aerosol-based measurement, or in the wings of the Rayleigh-Brillouin profile for the molecular measurement. This doubles the signal change per unit Doppler shift and improves the measurement accuracy by nearly a factor of 2 relative to the single-edge technique. For the aerosol-based measurement, the use of two high-resolution edge filters reduces the effects of the Rayleigh-scattered background by as much as an order of magnitude and substantially improves the measurement accuracy. Also, we describe a method that allows the Rayleigh and aerosol components of the signal to be independently determined. A measurement accuracy of 1.2 m/s can be obtained for a signal level of 1000 detected photons, which corresponds to signal levels in the boundary layer. For the molecular-based measurement, we describe the use of a crossover region where the sensitivities of a molecular- and an aerosol-based measurement are equal. This desensitizes the molecular measurement to the effects of aerosol scattering and greatly simplifies the measurement. Simulations using a conical scanning spaceborne lidar at 355 nm give an accuracy of 2-3 m/s for altitudes of 2-15 km for a 1 km vertical resolution, a satellite altitude of 400 km, and a 200 km x 200 km spatial resolution.
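The factor-of-2 sensitivity gain can be seen in a toy model: with two filters of opposite slope, a Doppler shift raises the signal on one edge while lowering it on the other, so the ratio of the two channels changes twice as fast as a single channel. The Gaussian transmission profile and filter spacing below are illustrative assumptions, not the filters used in the paper:

```python
import math

def edge_transmission(nu, center, width=1.0):
    """Gaussian stand-in for a high-spectral-resolution edge filter."""
    return math.exp(-((nu - center) / width) ** 2)

# Single edge: one filter referenced to zero Doppler shift.
# Double edge: ratio of two filters with opposite slopes about the laser line.
single = lambda nu: edge_transmission(nu, +0.5)
double = lambda nu: edge_transmission(nu, +0.5) / edge_transmission(nu, -0.5)

def rel_sensitivity(f, d=1e-6):
    """Fractional signal change per unit Doppler shift at line center."""
    return (math.log(f(d)) - math.log(f(-d))) / (2 * d)

gain = rel_sensitivity(double) / rel_sensitivity(single)
print(round(gain, 3))  # → 2.0
```

For filters placed symmetrically about the laser frequency, the anti-symmetric slopes add, which is the geometric content of the "doubles the signal change per unit Doppler shift" claim.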
EnviroAtlas -- Fresno, California -- One Meter Resolution Urban Land Cover Data (2010)
The Fresno, CA EnviroAtlas One-Meter-scale Urban Land Cover Data were generated via supervised classification of combined aerial photography and LiDAR data. The air photos were United States Department of Agriculture (USDA) National Agricultural Imagery Program (NAIP) four band (red, green, blue, and near infrared) aerial photography at 1-m spatial resolution. Aerial photography ('imagery') was collected on multiple dates in summer 2010. Seven land cover classes were mapped: Water, impervious surfaces (Impervious), soil and barren (Soil), trees and forest (Tree), grass and herbaceous non-woody vegetation (Grass), agriculture (Ag), and Orchards. An accuracy assessment of 500 completely random and 103 stratified random points yielded an overall User's fuzzy accuracy of 81.1 percent (see below). The area mapped is defined by the US Census Bureau's 2010 Urban Statistical Area for Fresno, CA plus a 1-km buffer. Where imagery was available, additional areas outside the 1-km boundary were also mapped but not included in the accuracy assessment. We expect the accuracy of the areas outside of the 1-km boundary to be consistent with those within. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The da
EnviroAtlas -- Fresno, California -- One Meter Resolution Urban Land Cover Data (2010) Web Service
This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The Fresno, CA EnviroAtlas One-Meter-scale Urban Land Cover Data were generated via supervised classification of combined aerial photography and LiDAR data. The air photos were United States Department of Agriculture (USDA) National Agricultural Imagery Program (NAIP) four band (red, green, blue, and near infrared) aerial photography at 1-m spatial resolution. Aerial photography ('imagery') was collected on multiple dates in summer 2010. Seven land cover classes were mapped: Water, impervious surfaces (Impervious), soil and barren (Soil), trees and forest (Tree), grass and herbaceous non-woody vegetation (Grass), agriculture (Ag), and Orchards. An accuracy assessment of 500 completely random and 103 stratified random points yielded an overall User's fuzzy accuracy of 81.1 percent (see below). The area mapped is defined by the US Census Bureau's 2010 Urban Statistical Area for Fresno, CA plus a 1-km buffer. Where imagery was available, additional areas outside the 1-km boundary were also mapped but not included in the accuracy assessment. We expect the accuracy of the areas outside of the 1-km boundary to be consistent with those within. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with
Land cover classification of VHR airborne images for citrus grove identification
NASA Astrophysics Data System (ADS)
Amorós López, J.; Izquierdo Verdiguier, E.; Gómez Chova, L.; Muñoz Marí, J.; Rodríguez Barreiro, J. Z.; Camps Valls, G.; Calpe Maravilla, J.
Managing land resources using remote sensing techniques is becoming a common practice. However, data analysis procedures should satisfy the high accuracy levels demanded by users (public or private companies and governments) in order to be extensively used. This paper presents a multi-stage classification scheme to update the citrus Geographical Information System (GIS) of the Comunidad Valenciana region (Spain). Spain is the largest citrus producer in Europe and the fourth largest in the world. In particular, citrus fruits represent 67% of the agricultural production in this region, with a total production of 4.24 million tons (2006-2007 campaign). The citrus GIS inventory, created in 2001, needs to be regularly updated in order to monitor changes quickly enough, and allow appropriate policy making and citrus production forecasting. Automatic methods are proposed in this work to facilitate this update, whose processing scheme is summarized as follows. First, an object-oriented feature extraction process is carried out for each cadastral parcel from very high spatial resolution aerial images (0.5 m). Next, several automatic classifiers (decision trees, artificial neural networks, and support vector machines) are trained and combined to improve the final classification accuracy. Finally, the citrus GIS is automatically updated if a high enough level of confidence, based on the agreement between classifiers, is achieved. This is the case for 85% of the parcels and accuracy results exceed 94%. The remaining parcels are classified by expert photo-interpreters in order to guarantee the high accuracy demanded by policy makers.
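The agreement-based update rule described above can be sketched as follows; the unanimity criterion and labels here are an illustrative simplification of the paper's confidence measure, not its exact rule:

```python
# Hedged sketch of the agreement rule: a parcel's label is auto-accepted only when
# the independent classifiers (e.g. decision tree, neural network, SVM) agree;
# otherwise the parcel is flagged for expert photo-interpretation.
def combine(labels):
    """Return (label, auto_accept) - unanimous agreement auto-updates the GIS."""
    if len(set(labels)) == 1:
        return labels[0], True
    return None, False

print(combine(["citrus", "citrus", "citrus"]))      # ('citrus', True)  -> update GIS
print(combine(["citrus", "non-citrus", "citrus"]))  # (None, False)     -> expert review
```

A real system would typically relax unanimity to a confidence threshold over classifier posteriors, which is how the 85%/15% automatic-versus-manual split arises.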
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roring, J; Saenz, D; Cruz, W
2015-06-15
Purpose: The commissioning criteria of water tank phantoms are essential for proper accuracy and reproducibility in a clinical setting. This study outlines the results of mechanical and dosimetric testing between the PTW MP3-M and the Standard Imaging Doseview 3D water tank systems. Methods: Measurements were taken of each axis of movement on the tank using 30 cm calipers at 1, 5, 10, 50, 100, and 200 mm for accuracy and reproducibility of tank movement. Dosimetric quantities such as percent depth dose and dose profiles were compared between tanks using a 6 MV beam from a Varian 23EX LINAC. Properties such as scanning speed effects, central axis depth dose agreement with static measurements, reproducibility of measurements, symmetry and flatness, and scan time between tanks were also investigated. Results: Results showed high geometric accuracy within 0.2 mm. Central axis PDD and in-field profiles agreed within 0.75% between the tanks. These outcomes test many possible discrepancies in dose measurements across the two tanks and form a basis for comparison on a broader range of tanks in the future. Conclusion: Both 3D water scanning phantoms possess a high degree of spatial accuracy, allowing for equivalence in measurements regardless of the phantom used. A commissioning procedure when changing water tanks or upon receipt of a new tank is nevertheless critical to ensure consistent operation before and after the arrival of new hardware.
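A tolerance check of the kind used in the tank-to-tank PDD comparison can be sketched as below; the depth-dose values are synthetic, not the study's measured data:

```python
# Hedged sketch: verify that two percent-depth-dose (PDD) curves sampled at matched
# depths agree within a stated tolerance, as in the 0.75% agreement reported.
def max_pdd_difference(pdd_a, pdd_b):
    """Return the maximum absolute point-by-point difference (in % dose)."""
    return max(abs(a - b) for a, b in zip(pdd_a, pdd_b))

tank_a = [100.0, 98.2, 86.5, 66.8, 50.1]   # % dose at matched depths (synthetic)
tank_b = [100.0, 98.0, 86.9, 66.5, 50.4]

diff = max_pdd_difference(tank_a, tank_b)
print(diff <= 0.75)  # True: curves agree within the 0.75% criterion
```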
Lammert-Siepmann, Nils; Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank
2017-01-01
Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory.
Comparable Rest-related Promotion of Spatial Memory Consolidation in Younger and Older Adults
Craig, Michael; Wolbers, Thomas; Harris, Mathew A.; Hauff, Patrick; Della Sala, Sergio; Dewar, Michaela
2017-01-01
Flexible spatial navigation depends on cognitive mapping, a function that declines with increasing age. In young adults, a brief period of post-navigation rest promotes the consolidation/integration of spatial memories into accurate cognitive maps. We examined (1) whether rest promotes spatial memory consolidation/integration in older adults and (2) whether the magnitude of the rest benefit changes with increasing age. Young and older adults learned a route through a virtual environment, followed by a 10 min delay comprising either wakeful rest or a perceptual task, and a subsequent cognitive mapping task requiring pointing to landmarks from different locations. Pointing accuracy was lower in the older than in the younger adults. However, there was a comparable rest-related enhancement in pointing accuracy in the two age groups. Together our findings suggest that (i) the age-related decline in cognitive mapping cannot be explained by increased consolidation interference in older adults, and (ii) as we grow older rest continues to support the consolidation/integration of spatial memories. PMID:27689512
A task-irrelevant stimulus attribute affects perception and short-term memory
Huang, Jie; Kahana, Michael J.; Sekuler, Robert
2010-01-01
Selective attention protects cognition against intrusions of task-irrelevant stimulus attributes. This protective function was tested in coordinated psychophysical and memory experiments. Stimuli were superimposed, horizontally and vertically oriented gratings of varying spatial frequency; only one orientation was task relevant. Experiment 1 demonstrated that a task-irrelevant spatial frequency interfered with visual discrimination of the task-relevant spatial frequency. Experiment 2 adopted a two-item Sternberg task, using stimuli that had been scaled to neutralize interference at the level of vision. Despite being visually neutralized, the task-irrelevant attribute strongly influenced recognition accuracy and associated reaction times (RTs). This effect was sharply tuned, with the task-irrelevant spatial frequency having an impact only when the task-relevant spatial frequencies of the probe and study items were highly similar to one another. Model-based analyses of judgment accuracy and RT distributional properties converged on the point that the irrelevant orientation operates at an early stage in memory processing, not at a later one that supports decision making. PMID:19933454
Morey, Candice C; Miron, Monica D
2016-12-01
Among models of working memory, there is not yet a consensus about how to describe functions specific to storing verbal or visual-spatial memories. We presented aural-verbal and visual-spatial lists simultaneously and sometimes cued one type of information after presentation, comparing accuracy in conditions with and without informative retro-cues. This design isolates interference due specifically to maintenance, which appears most clearly in the uncued trials, from interference due to encoding, which occurs in all dual-task trials. When recall accuracy was comparable between tasks, we found that spatial memory was worse in uncued than in retro-cued trials, whereas verbal memory was not. Our findings bolster proposals that maintenance of spatial serial order, like maintenance of visual materials more broadly, relies on general rather than specialized resources, while maintenance of verbal sequences may rely on domain-specific resources. We argue that this asymmetry should be explicitly incorporated into models of working memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Accessibility versus Accuracy in Retrieving Spatial Memory: Evidence for Suboptimal Assumed Headings
ERIC Educational Resources Information Center
Yerramsetti, Ashok; Marchette, Steven A.; Shelton, Amy L.
2013-01-01
Orientation dependence in spatial memory has often been interpreted in terms of accessibility: Object locations are encoded relative to a reference orientation that affords the most accurate access to spatial memory. An open question, however, is whether people naturally use this "preferred" orientation whenever recalling the space. We…
NASA Astrophysics Data System (ADS)
Langhammer, Jakub; Lendzioch, Theodora; Mirijovsky, Jakub
2016-04-01
Granulometric analysis is a traditional and important method for describing sedimentary material, with various applications in sedimentology, hydrology and geomorphology. However, conventional granulometric field survey methods are time consuming, laborious, costly and invasive to the surface being sampled, which can limit their applicability in protected areas. Optical granulometry has recently emerged as an image analysis technique enabling non-invasive survey; it employs semi-automated identification of clasts from calibrated digital imagery taken on site with a conventional high-resolution digital camera and a calibrated frame. The image processing allows detection and measurement of mixed-size natural grains, their sorting and quantitative analysis using standard granulometric approaches. Despite known limitations, the technique today presents a reliable tool, significantly easing and speeding field survey in fluvial geomorphology. However, such surveys remain limited in spatial coverage of the sites and in applicability to multitemporal research. In our study, we present a novel approach based on the fusion of two image analysis techniques - optical granulometry and UAV-based photogrammetry - bridging the gap between the need for high-resolution structural information for granulometric analysis and the need for spatially accurate, seamless data coverage. We have developed and tested a workflow that uses a UAV imaging platform to deliver seamless, high-resolution and spatially accurate imagery of the study site, from which the granulometric properties of the sedimentary material can be derived.
We have set up a workflow modeling chain providing (i) the optimum flight parameters for UAV imagery, balancing the two key divergent requirements of imagery resolution and seamless spatial coverage, (ii) the workflow for processing the UAV-acquired imagery by means of optical granulometry and (iii) the workflow for analyzing the spatial distribution and temporal changes of granulometric properties across the point bar. The proposed technique was tested on a case study of an active point bar of a mid-latitude mountain stream in the Sumava mountains, Czech Republic, exposed to repeated flooding. UAV photogrammetry was used to acquire very high resolution imagery to build high-precision digital terrain models and an orthoimage. The orthoimage was then analyzed using the digital optical granulometric tool BaseGrain. This approach allowed us (i) to analyze the spatial distribution of grain size in seamless transects over an active point bar and (ii) to assess the multitemporal changes in granulometric properties of the point bar material resulting from flooding. The tested framework proves the applicability of the proposed method for granulometric analysis with accuracy comparable to field optical granulometry. The seamless nature of the data enables the study of the spatial distribution of granulometric properties across the study sites as well as the analysis of multitemporal changes resulting from repeated imaging.
Knauer, Uwe; Matros, Andrea; Petrovic, Tijana; Zanker, Timothy; Scott, Eileen S; Seiffert, Udo
2017-01-01
Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets either relies on pixel-wise processing of the full spectral information, appropriate selection of individual bands, or calculation of spectral indices. Limitations of such approaches are reduced classification accuracy, reduced robustness due to spatial variation of the spectral information across the surface of the objects measured as well as a loss of information intrinsic to band selection and use of spectral indices. In this paper we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application for the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison. Instead of calculating texture features (spatial features) for the huge number of spectral bands independently, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to [Formula: see text] for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in classification of healthy, infected, and severely diseased bunches. 
However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples, perhaps due to colonized berries or sparse mycelia hidden within the bunch or airborne conidia on the berries that were detected by qPCR. An advanced approach to hyperspectral image classification based on combined spatial and spectral image features, potentially applicable to many available hyperspectral sensor technologies, has been developed and validated to improve the detection of powdery mildew infection levels of Chardonnay grape bunches. The spatial-spectral approach especially improved the detection of light infection levels compared with pixel-wise spectral data analysis. This approach is expected to improve the speed and accuracy of disease detection once the thresholds for fungal biomass detected by hyperspectral imaging are established; it can also facilitate monitoring in plant phenotyping of grapevine and additional crops.
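The integral-image representation mentioned above is what makes selective texture-feature extraction cheap: the mean or sum of any rectangular window can be read with four lookups, independent of window size. A minimal sketch with a synthetic image band:

```python
import numpy as np

# Hedged sketch of the integral-image trick; the band values are synthetic,
# standing in for one LDA-derived descriptive image band.
def integral_image(band):
    """Cumulative 2D sum with a zero border row/column prepended."""
    ii = np.zeros((band.shape[0] + 1, band.shape[1] + 1))
    ii[1:, 1:] = band.cumsum(axis=0).cumsum(axis=1)
    return ii

def window_sum(ii, r0, c0, r1, c1):
    """Sum of band[r0:r1, c0:c1], recovered with four lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

band = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(band)
print(window_sum(ii, 1, 1, 3, 3))  # equals band[1:3, 1:3].sum() = 30.0
```

Texture statistics over many candidate windows can then be fed to the Random Forest classifier without rescanning the pixels of each window.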
Remote sensing of atmospheric aerosols with the SPEX spectropolarimeter
NASA Astrophysics Data System (ADS)
van Harten, G.; Rietjens, J.; Smit, M.; Snik, F.; Keller, C. U.; di Noia, A.; Hasekamp, O.; Vonk, J.; Volten, H.
2013-12-01
Characterizing atmospheric aerosols is key to understanding their influence on climate through their direct and indirect radiative forcing. This requires long-term global coverage, at high spatial (~km) and temporal (~days) resolution, which can only be provided by satellite remote sensing. Aerosol load and properties such as particle size, shape and chemical composition can be derived from multi-wavelength radiance and polarization measurements of sunlight that is scattered by the Earth's atmosphere at different angles. The required polarimetric accuracy of ~10^(-3) is very challenging, particularly since the instrument is located on a rapidly moving platform. Our Spectropolarimeter for Planetary EXploration (SPEX) is based on a novel, snapshot spectral modulator, with the intrinsic ability to measure polarization at high accuracy. It exhibits minimal instrumental polarization and is completely solid-state and passive. An athermal set of birefringent crystals in front of an analyzer encodes the incoming linear polarization into a sinusoidal modulation in the intensity spectrum. Moreover, a dual beam implementation yields redundancy that allows for a mutual correction in both the spectrally and spatially modulated data to increase the measurement accuracy. A partially polarized calibration stimulus has been developed, consisting of a carefully depolarized source followed by tilted glass plates to induce polarization in a controlled way. Preliminary calibration measurements show an accuracy of SPEX of well below 10^(-3), with a sensitivity limit of 2*10^(-4). We demonstrate the potential of the SPEX concept by presenting retrievals of aerosol properties based on clear sky measurements using a prototype satellite instrument and a dedicated ground-based SPEX. The retrieval algorithm, originally designed for POLDER data, performs iterative fitting of aerosol properties and surface albedo, where the initial guess is provided by a look-up table. 
The retrieved aerosol properties, including aerosol optical thickness, single scattering albedo, size distribution and complex refractive index, will be compared with the on-site AERONET sun-photometer, lidar, particle counter and sizer, and PM10 and PM2.5 monitoring instruments. Retrievals of the aerosol layer height based on polarization measurements in the O2A absorption band will be compared with lidar profiles. Furthermore, the possibility of enhancing the retrieval accuracy by replacing the look-up table with a neural network based initial guess will be discussed, using retrievals from simulated ground-based data.
Imaging Performance of Quantitative Transmission Ultrasound
Lenox, Mark W.; Wiskin, James; Lewis, Matthew A.; Darrouzet, Stephen; Borup, David; Hsieh, Scott
2015-01-01
Quantitative Transmission Ultrasound (QTUS) is a tomographic transmission ultrasound modality that is capable of generating 3D speed-of-sound maps of objects in the field of view. It performs this measurement by propagating a plane wave through the medium from a transmitter on one side of a water tank to a high resolution receiver on the opposite side. This information is then used via inverse scattering to compute a speed map. In addition, the presence of reflection transducers allows the creation of a high resolution, spatially compounded reflection map that is natively coregistered to the speed map. A prototype QTUS system was evaluated for measurement and geometric accuracy as well as for the ability to correctly determine speed of sound. PMID:26604918
Swadling, G F; Lebedev, S V; Hall, G N; Patankar, S; Stewart, N H; Smith, R A; Harvey-Thompson, A J; Burdiak, G C; de Grouchy, P; Skidmore, J; Suttle, L; Suzuki-Vidal, F; Bland, S N; Kwek, K H; Pickworth, L; Bennett, M; Hare, J D; Rozmus, W; Yuan, J
2014-11-01
A suite of laser based diagnostics is used to study interactions of magnetised, supersonic, radiatively cooled plasma flows produced using the Magpie pulse power generator (1.4 MA, 240 ns rise time). Collective optical Thomson scattering measures the time-resolved local flow velocity and temperature across 7-14 spatial positions. The scattering spectrum is recorded from multiple directions, allowing more accurate reconstruction of the flow velocity vectors. The areal electron density is measured using 2D interferometry; optimisation and analysis are discussed. The Faraday rotation diagnostic, operating at 1053 nm, measures the magnetic field distribution in the plasma. Measurements obtained simultaneously by these diagnostics are used to constrain analysis, increasing the accuracy of interpretation.
Yeo, Boon Y.; McLaughlin, Robert A.; Kirk, Rodney W.; Sampson, David D.
2012-01-01
We present a high-resolution three-dimensional position tracking method that allows an optical coherence tomography (OCT) needle probe to be scanned laterally by hand, providing the high degree of flexibility and freedom required in clinical usage. The method is based on a magnetic tracking system, which is augmented by cross-correlation-based resampling and a two-stage moving window average algorithm to improve upon the tracker's limited intrinsic spatial resolution, achieving 18 µm RMS position accuracy. A proof-of-principle system was developed, with successful image reconstruction demonstrated on phantoms and on ex vivo human breast tissue validated against histology. This freehand scanning method could contribute toward clinical implementation of OCT needle imaging. PMID:22808429
Mode-based microparticle conveyor belt in air-filled hollow-core photonic crystal fiber.
Schmidt, Oliver A; Euser, Tijmen G; Russell, Philip St J
2013-12-02
We show how microparticles can be moved over long distances and precisely positioned in a low-loss air-filled hollow-core photonic crystal fiber using a coherent superposition of two co-propagating spatial modes, balanced by a backward-propagating fundamental mode. This creates a series of trapping positions spaced by half the beat-length between the forward-propagating modes (typically a fraction of a millimeter). The system allows a trapped microparticle to be moved along the fiber by continuously tuning the relative phase between the two forward-propagating modes. This mode-based optical conveyor belt combines long-range transport of microparticles with a positional accuracy of 1 µm. The technique also has potential uses in waveguide-based optofluidic systems.
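The half-beat-length trap spacing follows directly from the intermodal propagation-constant difference; the 0.4 mm beat length below is an illustrative value, not a measured fiber parameter:

```python
import math

# Hedged sketch: trapping positions in the two-mode interference pattern are spaced
# by half the intermodal beat length L_B = 2*pi / (beta1 - beta2).
delta_beta = 2 * math.pi / 0.4e-3          # rad/m, chosen to give a 0.4 mm beat length
beat_length = 2 * math.pi / delta_beta      # m
trap_spacing = beat_length / 2              # m, i.e. 0.2 mm between traps

# First few trap positions along the fiber; tuning the relative phase between the
# two forward-propagating modes translates this whole comb of traps.
positions_mm = [n * trap_spacing * 1e3 for n in range(4)]
print(positions_mm)  # multiples of the 0.2 mm trap spacing
```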
Development of a Liquefied Noble Gas Time Projection Chamber
NASA Astrophysics Data System (ADS)
Lesser, Ezra; White, Aaron; Aidala, Christine
2015-10-01
Liquefied noble gas detectors have been used for various applications in recent years for detecting neutrinos, neutrons, photons, and potentially dark matter. The University of Michigan is developing a detector with liquid argon to produce scintillation light and ionization electrons. Our data collection method will allow high-resolution energy measurement and spatial reconstruction of detected particles by using multi-pixel silicon photomultipliers (SiPM) and a cylindrical time projection chamber (TPC) with a multi-wire endplate. We have already designed a liquid argon condenser and purification unit surrounded by an insulating vacuum, constructed circuitry for temperature and pressure sensors, and created software to obtain high-accuracy sensor readouts. The status of detector development will be presented. Funded through the Michigan Memorial Phoenix Project.
Smooth 2D manifold extraction from 3D image stack
Shihavuddin, Asm; Basu, Sreetama; Rexhepaj, Elton; Delestro, Felipe; Menezes, Nikita; Sigoillot, Séverine M; Del Nery, Elaine; Selimi, Fekrije; Spassky, Nathalie; Genovesio, Auguste
2017-01-01
Three-dimensional fluorescence microscopy followed by image processing is routinely used to study biological objects at various scales such as cells and tissue. However, maximum intensity projection, the most broadly used rendering tool, extracts a discontinuous layer of voxels, obliviously creating important artifacts and possibly misleading interpretation. Here we propose smooth manifold extraction, an algorithm that produces a continuous focused 2D extraction from a 3D volume, hence preserving local spatial relationships. We demonstrate the usefulness of our approach by applying it to various biological applications using confocal and wide-field microscopy 3D image stacks. We provide a parameter-free ImageJ/Fiji plugin that allows 2D visualization and interpretation of 3D image stacks with maximum accuracy. PMID:28561033
Pixel-based absolute surface metrology by three flat test with shifted and rotated maps
NASA Astrophysics Data System (ADS)
Zhai, Dede; Chen, Shanyong; Xue, Shuai; Yin, Ziqiang
2018-03-01
The traditional three-flat test provides the absolute profile along only a single surface diameter. In this paper, an absolute testing algorithm based on shift-rotation within the three-flat test is proposed to reconstruct the two-dimensional surface exactly. Pitch and yaw errors during the shift procedure are analyzed and compensated in our method. Compared with previously proposed multi-rotation methods, it needs only a 90° rotation and a shift, which is easy to carry out, especially for large surfaces. It achieves pixel-level spatial resolution without interpolation or assumptions about the test surface. In addition, numerical simulations and optical tests are implemented and demonstrate the high-accuracy recovery capability of the proposed method.
Registration of 3D and Multispectral Data for the Study of Cultural Heritage Surfaces
Chane, Camille Simon; Schütze, Rainer; Boochs, Frank; Marzani, Franck S.
2013-01-01
We present a technique for the multi-sensor registration of featureless datasets based on the photogrammetric tracking of the acquisition systems in use. This method is developed for the in situ study of cultural heritage objects and is tested by digitizing a small canvas successively with a 3D digitization system and a multispectral camera while simultaneously tracking the acquisition systems with four cameras and using a cubic target frame with a side length of 500 mm. The achieved tracking accuracy is better than 0.03 mm spatially and 0.150 mrad angularly. This allows us to seamlessly register the 3D acquisitions and to project the multispectral acquisitions on the 3D model. PMID:23322103
Brain MR image segmentation based on an improved active contour model
Meng, Xiangrui; Gu, Wenya; Zhang, Jianwei
2017-01-01
It is often a difficult task to accurately segment brain magnetic resonance (MR) images with intensity inhomogeneity and noise. This paper introduces a novel level set method for simultaneous brain MR image segmentation and intensity inhomogeneity correction. To reduce the effect of noise, novel anisotropic spatial information, which can preserve more details of edges and corners, is proposed by incorporating the inner relationships among neighboring pixels. The proposed energy function then uses the multivariate Student's t-distribution to fit the intensity distribution of each tissue. Furthermore, the proposed model utilizes Hidden Markov random fields to model the spatial correlation between neighboring pixels/voxels. The means of the multivariate Student's t-distribution can be adaptively estimated by multiplying a bias field to reduce the effect of intensity inhomogeneity. Finally, we reconstructed the energy function to be convex and solved it using the Split Bregman method, which allows our framework to use random initialization, thereby enabling fully automated application. Our method obtains the final result in less than 1 second for a 2D image of size 256 × 256 and less than 300 seconds for a 3D image of size 256 × 256 × 171. The proposed method was compared to other state-of-the-art segmentation methods using both synthetic and clinical brain MR images and increased the accuracy of the results by more than 3%. PMID:28854235
High-performance holographic technologies for fluid-dynamics experiments
Orlov, Sergei S.; Abarzhi, Snezhana I.; Oh, Se Baek; Barbastathis, George; Sreenivasan, Katepalli R.
2010-01-01
Modern technologies offer new opportunities for experimentalists in a variety of research areas of fluid dynamics. Improvements are now possible in the state-of-the-art in precision, dynamic range, reproducibility, motion-control accuracy, data-acquisition rate and information capacity. These improvements are required for understanding complex turbulent flows under realistic conditions, and for allowing unambiguous comparisons to be made with new theoretical approaches and large-scale numerical simulations. One of the new technologies is high-performance digital holography. State-of-the-art motion control, electronics and optical imaging allow for the realization of turbulent flows with very high Reynolds number (more than 10^7) on a relatively small laboratory scale, and quantification of their properties with high space–time resolutions and bandwidth. In-line digital holographic technology can provide complete three-dimensional mapping of the flow velocity and density fields at high data rates (over 1000 frames per second) over a relatively large spatial area with high spatial (1–10 μm) and temporal (better than a few nanoseconds) resolution, and can give accurate quantitative description of the fluid flows, including those of multi-phase and unsteady conditions. This technology can be applied in a variety of problems to study fundamental properties of flow–particle interactions, rotating flows, non-canonical boundary layers and Rayleigh–Taylor mixing. Some of these examples are discussed briefly. PMID:20211881
Repurposing the Microsoft Kinect for Windows v2 for external head motion tracking for brain PET.
Noonan, P J; Howard, J; Hallett, W A; Gunn, R N
2015-11-21
Medical imaging systems such as those used in positron emission tomography (PET) are capable of spatial resolutions that enable the imaging of small, functionally important brain structures. However, the quality of data from PET brain studies is often limited by subject motion during acquisition. This is particularly challenging for patients with neurological disorders or with dynamic research studies that can last 90 min or more. Restraining head movement during the scan does not eliminate motion entirely and can be unpleasant for the subject. Head motion can be detected and measured using a variety of techniques that either use the PET data itself or an external tracking system. Advances in computer vision arising from the video gaming industry could offer significant benefits when re-purposed for medical applications. A method for measuring rigid body type head motion using the Microsoft Kinect v2 is described with results presenting ⩽0.5 mm spatial accuracy. Motion data is measured in real-time at 30 Hz using the KinectFusion algorithm. Non-rigid motion is detected using the residual alignment energy data of the KinectFusion algorithm allowing for unreliable motion to be discarded. Motion data is aligned to PET listmode data using injected pulse sequences into the PET/CT gantry allowing for correction of rigid body motion. Pilot data from a clinical dynamic PET/CT examination is shown.
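Correcting rigid-body head motion amounts to applying the inverse of the measured pose to the event coordinates. A minimal sketch with an illustrative rotation and translation (not actual tracker output):

```python
import numpy as np

# Hedged sketch: a measured rigid-body head pose (rotation R, translation t) is
# inverted to map coordinates from the moved head frame back to the reference frame.
# The 5-degree rotation about z and the 2 mm translation are illustrative values.
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([2.0, 0.0, 0.0])   # mm

def correct(points):
    """Undo the pose: p = R^T (p' - t), written for row-vector points."""
    return (points - t) @ R

moved = (np.array([[10.0, 0.0, 0.0]]) @ R.T) + t   # simulate the forward motion
print(np.allclose(correct(moved), [[10.0, 0.0, 0.0]]))  # True: motion undone
```

In the actual system the pose stream is timestamped against PET listmode data (via the injected pulse sequences) so each event gets the pose valid at its detection time.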
The Neural Representation of Prospective Choice during Spatial Planning and Decisions
Kaplan, Raphael; Koster, Raphael; Penny, William D.; Burgess, Neil; Friston, Karl J.
2017-01-01
We are remarkably adept at inferring the consequences of our actions, yet the neuronal mechanisms that allow us to plan a sequence of novel choices remain unclear. We used functional magnetic resonance imaging (fMRI) to investigate how the human brain plans the shortest path to a goal in novel mazes with one (shallow maze) or two (deep maze) choice points. We observed two distinct anterior prefrontal responses to demanding choices at the second choice point: one in rostrodorsal medial prefrontal cortex (rd-mPFC)/superior frontal gyrus (SFG) that was also sensitive to (deactivated by) demanding initial choices and another in lateral frontopolar cortex (lFPC), which was only engaged by demanding choices at the second choice point. Furthermore, we identified hippocampal responses during planning that correlated with subsequent choice accuracy and response time, particularly in mazes affording sequential choices. Psychophysiological interaction (PPI) analyses showed that coupling between the hippocampus and rd-mPFC increases during sequential (deep versus shallow) planning and is higher before correct versus incorrect choices. In short, using a naturalistic spatial planning paradigm, we reveal how the human brain represents sequential choices during planning without extensive training. Our data highlight a network centred on the cortical midline and hippocampus that allows us to make prospective choices while maintaining initial choices during planning in novel environments. PMID:28081125
NASA Technical Reports Server (NTRS)
Bates, J. R.; Semazzi, F. H. M.; Higgins, R. W.; Barros, Saulo R. M.
1990-01-01
A vector semi-Lagrangian semi-implicit two-time-level finite-difference integration scheme for the shallow water equations on the sphere is presented. A C-grid is used for the spatial differencing. The trajectory-centered discretization of the momentum equation in vector form eliminates pole problems and, at comparable cost, gives greater accuracy than a previous semi-Lagrangian finite-difference scheme which used a rotated spherical coordinate system. In terms of the insensitivity of the results to increasing timestep, the new scheme is as successful as recent spectral semi-Lagrangian schemes. In addition, the use of a multigrid method for solving the elliptic equation for the geopotential allows efficient integration with an operation count which, at high resolution, is of lower order than in the case of the spectral models. The properties of the new scheme should allow finite-difference models to compete with spectral models more effectively than has previously been possible.
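The paper's scheme is a full vector shallow-water solver on the sphere; as a much-reduced illustration of the semi-Lagrangian idea it builds on (trace each grid point's trajectory back, interpolate the field at the departure point), here is a 1D constant-wind advection step. This is a generic sketch, not the authors' discretization:

```python
import numpy as np

def semi_lagrangian_step(q, u, dt, dx):
    """Advect field q one step with constant wind u: follow each grid point's
    trajectory back a distance u*dt and linearly interpolate q there.
    Periodic domain; remains stable even when u*dt/dx > 1 (no CFL limit)."""
    n = q.size
    x_dep = (np.arange(n) - u * dt / dx) % n      # departure points, grid units
    i0 = np.floor(x_dep).astype(int)
    frac = x_dep - i0
    return (1 - frac) * q[i0 % n] + frac * q[(i0 + 1) % n]

# Advect a unit spike; with u*dt/dx an integer the result is an exact shift.
q = np.zeros(16)
q[4] = 1.0
q_new = semi_lagrangian_step(q, u=3.0, dt=1.0, dx=1.0)  # shift by 3 cells
```

The insensitivity to timestep noted in the abstract comes from exactly this structure: accuracy depends on the interpolation, not on a CFL-limited step size.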
Physiology driven adaptivity for the numerical solution of the bidomain equations.
Whiteley, Jonathan P
2007-09-01
Previous work [Whiteley, J. P. IEEE Trans. Biomed. Eng. 53:2139-2147, 2006] derived a stable, semi-implicit numerical scheme for solving the bidomain equations. This scheme allows the timestep used when solving the bidomain equations numerically to be chosen by accuracy considerations rather than stability considerations. In this study we modify this scheme to allow an adaptive numerical solution in both time and space. The spatial mesh size is determined by the gradient of the transmembrane and extracellular potentials while the timestep is determined by the values of: (i) the fast sodium current; and (ii) the calcium release from junctional sarcoplasmic reticulum to myoplasm current. For two-dimensional simulations presented here, combining the numerical algorithm in the paper cited above with the adaptive algorithm presented here leads to an increase in computational efficiency by a factor of around 250 over previous work, together with significantly less computational memory being required. The speedup for three-dimensional simulations is likely to be more impressive.
NASA Astrophysics Data System (ADS)
Robinson, T. P.; Wardell-Johnson, G. W.; Pracilio, G.; Brown, C.; Corner, R.; van Klinken, R. D.
2016-02-01
Invasive plants pose significant threats to biodiversity and ecosystem function globally, leading to costly monitoring and management effort. While remote sensing promises cost-effective, robust and repeatable monitoring tools to support intervention, it has been largely restricted to airborne platforms that have higher spatial and spectral resolutions, but which lack the coverage and versatility of satellite-based platforms. This study tests the ability of the WorldView-2 (WV2) eight-band satellite sensor for detecting the invasive shrub mesquite (Prosopis spp.) in the north-west Pilbara region of Australia. Detectability was challenged by the target taxa being largely defoliated by a leaf-tying biological control agent (Gelechiidae: Evippe sp. #1) and the presence of other shrubs and trees. Variable importance in the projection (VIP) scores identified that the bands offering the greatest capacity for discrimination were those covering the near-infrared, red, and red-edge wavelengths. Wavelengths between 400 nm and 630 nm (coastal blue, blue, green, yellow) were not useful for species-level discrimination in this case. Classification accuracy was tested on three band sets (simulated standard multispectral, all bands, and bands with VIP scores ≥1). Overall accuracies were comparable amongst all band sets (Kappa = 0.71-0.77). However, mesquite omission rates were unacceptably high (21.3%) when using all eight bands relative to the simulated standard multispectral band set (9.5%) and the band set informed by VIP scores (11.9%). An incremental cover evaluation on the latter identified most omissions to be for objects <16 m². Mesquite omissions reduced to 2.6% and overall accuracy significantly improved (Kappa = 0.88) when these objects were left out of the confusion matrix calculations. Very high mapping accuracy for objects >16 m² allows application for mapping mesquite shrubs and coalesced stands, the former not previously possible, even with 3 m resolution hyperspectral imagery.
WV2 imagery offers excellent portability potential for detecting other species where spectral/spatial resolution or coverage has been an impediment. New generation satellite sensors are removing barriers previously preventing widespread adoption of remote sensing technologies in natural resource management.
An Active Learning Framework for Hyperspectral Image Classification Using Hierarchical Segmentation
NASA Technical Reports Server (NTRS)
Zhang, Zhou; Pasolli, Edoardo; Crawford, Melba M.; Tilton, James C.
2015-01-01
Augmenting spectral data with spatial information for image classification has recently gained significant attention, as classification accuracy can often be improved by extracting spatial information from neighboring pixels. In this paper, we propose a new framework in which active learning (AL) and hierarchical segmentation (HSeg) are combined for spectral-spatial classification of hyperspectral images. The spatial information is extracted from a best segmentation obtained by pruning the HSeg tree using a new supervised strategy. The best segmentation is updated at each iteration of the AL process, thus taking advantage of informative labeled samples provided by the user. The proposed strategy incorporates spatial information in two ways: 1) concatenating the extracted spatial features and the original spectral features into a stacked vector and 2) extending the training set using a self-learning-based semi-supervised learning (SSL) approach. Finally, the two strategies are combined within an AL framework. The proposed framework is validated with two benchmark hyperspectral datasets. Higher classification accuracies are obtained by the proposed framework with respect to five other state-of-the-art spectral-spatial classification approaches. Moreover, the effectiveness of the proposed pruning strategy is also demonstrated relative to the approaches based on a fixed segmentation.
Uncertainty characterization of particle location from refocused plenoptic images.
Hall, Elise M; Guildenbecher, Daniel R; Thurow, Brian S
2017-09-04
Plenoptic imaging is a 3D imaging technique that has been applied for quantification of 3D particle locations and sizes. This work experimentally evaluates the accuracy and precision of such measurements by investigating a static particle field translated to known displacements. Measured 3D displacement values are determined from sharpness metrics applied to volumetric representations of the particle field created using refocused plenoptic images, corrected using a recently developed calibration technique. Comparison of measured and known displacements for many thousands of particles allows for evaluation of measurement uncertainty. Mean displacement error, as a measure of accuracy, is shown to agree with predicted spatial resolution over the entire measurement domain, indicating robustness of the calibration methods. On the other hand, variation in the error, as a measure of precision, fluctuates as a function of particle depth in the optical direction. Error shows the smallest variation within the predicted depth of field of the plenoptic camera, with a gradual increase outside this range. The quantitative uncertainty values provided here can guide future measurement optimization and will serve as useful metrics for design of improved processing algorithms.
Fajnerová, Iveta; Rodriguez, Mabel; Levčík, David; Konrádová, Lucie; Mikoláš, Pavol; Brom, Cyril; Stuchlík, Aleš; Vlček, Kamil; Horáček, Jiří
2014-01-01
Objectives: Cognitive deficit is considered to be a characteristic feature of schizophrenia disorder. A similar cognitive dysfunction was demonstrated in animal models of schizophrenia. However, the poor comparability of methods used to assess cognition in animals and humans could be responsible for low predictive validity of current animal models. In order to assess spatial abilities in schizophrenia and compare our results with the data obtained in animal models, we designed a virtual analog of the Morris water maze (MWM), the virtual Four Goals Navigation (vFGN) task. Methods: Twenty-nine patients after the first psychotic episode with schizophrenia symptoms and a matched group of healthy volunteers performed the vFGN task. They were required to find and remember four hidden goal positions in an enclosed virtual arena. The task consisted of two parts. The Reference memory (RM) session with a stable goal position was designed to test spatial learning. The Delayed-matching-to-place (DMP) session presented a modified working memory protocol designed to test the ability to remember a sequence of three hidden goal positions. Results: Data obtained in the RM session show impaired spatial learning in schizophrenia patients compared to the healthy controls in pointing and navigation accuracy. The DMP session showed impaired spatial memory in schizophrenia during the recall of spatial sequence and a similar deficit in spatial bias in the probe trials. The pointing accuracy and the quadrant preference showed higher sensitivity toward the cognitive deficit than the navigation accuracy. Direct navigation to the goal was affected by sex and age of the tested subjects. The age affected spatial performance only in healthy controls. 
Conclusions: Despite some limitations of the study, our results correspond well with the previous studies in animal models of schizophrenia and support the decline of spatial cognition in schizophrenia, indicating the usefulness of the vFGN task in comparative research. PMID:24904329
NASA Astrophysics Data System (ADS)
Deo, Ram K.; Domke, Grant M.; Russell, Matthew B.; Woodall, Christopher W.; Andersen, Hans-Erik
2018-05-01
Aboveground biomass (AGB) estimates for regional-scale forest planning have become cost-effective with the free access to satellite data from sensors such as Landsat and MODIS. However, the accuracy of AGB predictions based on passive optical data depends on the spatial resolution and spatial extent of the target area, as fine-resolution (small-pixel) data are associated with smaller coverage and longer repeat cycles compared to coarse-resolution data. This study evaluated various spatial resolutions of Landsat-derived predictors on the accuracy of regional AGB models at three different sites in the eastern USA: Maine, Pennsylvania-New Jersey, and South Carolina. We combined national forest inventory data with Landsat-derived predictors at spatial resolutions ranging from 30–1000 m to understand the optimal spatial resolution of optical data for large-area (regional) AGB estimation. Ten generic models were developed using the data collected in 2014, 2015 and 2016, and the predictions were evaluated (i) at the county level against the estimates of the USFS Forest Inventory and Analysis Program, which relied on the EVALIDator tool and national forest inventory data from the 2009–2013 cycle, and (ii) within a large number of strips (~1 km wide) predicted via LiDAR metrics at 30 m spatial resolution. The county-level estimates by the EVALIDator and Landsat models were highly related (R² > 0.66), although the R² varied significantly across sites and resolutions of predictors. The mean and standard deviation of county-level estimates followed increasing and decreasing trends, respectively, with models of coarser resolution. The Landsat-based total AGB estimates were larger than the LiDAR-based total estimates within the strips; however, the mean AGB predictions by LiDAR were mostly within one standard deviation of the mean predictions obtained from the Landsat-based model at any of the resolutions. 
We conclude that satellite data at resolutions up to 1000 m provide acceptable accuracy for continental scale analysis of AGB.
Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy
NASA Technical Reports Server (NTRS)
Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)
2011-01-01
Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.
Application of geo-spatial technology in schistosomiasis modelling in Africa: a review.
Manyangadze, Tawanda; Chimbari, Moses John; Gebreslasie, Michael; Mukaratirwa, Samson
2015-11-04
Schistosomiasis continues to impact socio-economic development negatively in sub-Saharan Africa. The advent of spatial technologies, including geographic information systems (GIS), Earth observation (EO) and global positioning systems (GPS), assists modelling efforts. However, there is increasing concern regarding the accuracy and precision of the current spatial models. This paper reviews the literature regarding the progress and challenges in the development and utilization of spatial technology, with special reference to predictive models for schistosomiasis in Africa. Peer-reviewed papers were identified through a PubMed search using the following keywords: geo-spatial analysis OR remote sensing OR modelling OR earth observation OR geographic information systems OR prediction OR mapping AND schistosomiasis AND Africa. Statistical uncertainty, low spatial and temporal resolution satellite data and poor validation were identified as some of the factors that compromise the precision and accuracy of the existing predictive models. The need for high spatial resolution of remote sensing data in conjunction with ancillary data, viz. ground-measured climatic and environmental information, local presence/absence intermediate host snail surveys as well as prevalence and intensity of human infection for model calibration and validation, is discussed. The importance of a multidisciplinary approach in developing robust, spatial data capturing, modelling techniques and products applicable in epidemiology is highlighted.
TTLEM: Open access tool for building numerically accurate landscape evolution models in MATLAB
NASA Astrophysics Data System (ADS)
Campforts, Benjamin; Schwanghart, Wolfgang; Govers, Gerard
2017-04-01
Despite a growing interest in LEMs, accuracy assessment of the numerical methods they are based on has received little attention. Here, we present TTLEM, an open-access landscape evolution package designed to develop and test your own scenarios and hypotheses. TTLEM uses a higher-order flux-limiting finite-volume method to simulate river incision and tectonic displacement. We show that this scheme significantly influences the evolution of simulated landscapes and the spatial and temporal variability of erosion rates. Moreover, it allows the simulation of lateral tectonic displacement on a fixed grid. Through the use of a simple GUI the software produces visible output of evolving landscapes through model run time. In this contribution, we illustrate numerical landscape evolution through a set of movies spanning different spatial and temporal scales. We focus on the erosional domain and use both spatially constant and variable input values for uplift, lateral tectonic shortening, erodibility and precipitation. Moreover, we illustrate the relevance of a stochastic approach for realistic hillslope response modelling. TTLEM is a fully open-source software package, written in MATLAB and based on the TopoToolbox platform (topotoolbox.wordpress.com). Installation instructions can be found on this website and in the dedicated GitHub repository.
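River incision in LEMs such as TTLEM is commonly modelled with the stream power law, dz/dt = U − K·Aᵐ·Sⁿ. As a hedged, heavily simplified sketch (a plain explicit upwind step on a 1D profile, not TTLEM's flux-limiting finite-volume scheme; all parameter values are invented):

```python
import numpy as np

def stream_power_step(z, area, dx, dt, K=1e-5, m=0.5, n=1.0, uplift=1e-3):
    """One explicit step of dz/dt = U - K * A^m * S^n on a 1D profile
    draining to the left; node 0 is a fixed base level."""
    z_new = z.copy()
    slope = np.maximum((z[1:] - z[:-1]) / dx, 0.0)   # downstream slope per node
    erosion = K * area[1:] ** m * slope ** n
    z_new[1:] += dt * (uplift - erosion)
    return z_new

# Linear ramp profile with drainage area decreasing upstream (illustrative).
x = np.arange(100) * 100.0                  # 10 km profile, dx = 100 m
z = 0.01 * x                                # initial 1% gradient
area = 1e8 * (1.0 - x / x.max() + 1e-3)     # m^2, largest at the outlet
z1 = stream_power_step(z, area, dx=100.0, dt=100.0)
```

With these numbers erosion roughly balances uplift near the outlet and falls below it upstream, so the upper profile slowly rises toward a new steady state.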
Computed tomography imaging and angiography - principles.
Kamalian, Shervin; Lev, Michael H; Gupta, Rajiv
2016-01-01
The evaluation of patients with diverse neurologic disorders was forever changed in the summer of 1973, when the first commercial computed tomography (CT) scanners were introduced. Until then, the detection and characterization of intracranial or spinal lesions could only be inferred by limited spatial resolution radioisotope scans, or by the patterns of tissue and vascular displacement on invasive pneumoencephalography and direct carotid puncture catheter arteriography. Even the earliest-generation CT scanners - which required tens of minutes for the acquisition and reconstruction of low-resolution images (128×128 matrix) - could, based on density, noninvasively distinguish infarct, hemorrhage, and other mass lesions with unprecedented accuracy. Iodinated, intravenous contrast added further sensitivity and specificity in regions of blood-brain barrier breakdown. The advent of rapid multidetector row CT scanning in the early 1990s created renewed enthusiasm for CT, with CT angiography largely replacing direct catheter angiography. More recently, iterative reconstruction postprocessing techniques have made possible high spatial resolution, reduced noise, very low radiation dose CT scanning. The speed, spatial resolution, contrast resolution, and low radiation dose capability of present-day scanners have also facilitated dual-energy imaging which, like magnetic resonance imaging, for the first time, has allowed tissue-specific CT imaging characterization of intracranial pathology. © 2016 Elsevier B.V. All rights reserved.
Higher resolution satellite remote sensing and the impact on image mapping
Watkins, Allen H.; Thormodsgard, June M.
1987-01-01
Recent advances in spatial, spectral, and temporal resolution of civil land remote sensing satellite data are presenting new opportunities for image mapping applications. The U.S. Geological Survey's experimental satellite image mapping program is evolving toward larger scale image map products with increased information content as a result of improved image processing techniques and increased resolution. Thematic mapper data are being used to produce experimental image maps at 1:100,000 scale that meet established U.S. and European map accuracy standards. Availability of high quality, cloud-free, 30-meter ground resolution multispectral data from the Landsat thematic mapper sensor, along with 10-meter ground resolution panchromatic and 20-meter ground resolution multispectral data from the recently launched French SPOT satellite, presents new cartographic and image processing challenges. The need to fully exploit these higher resolution data increases the complexity of processing the images into large-scale image maps. The removal of radiometric artifacts and noise prior to geometric correction can be accomplished by using a variety of image processing filters and transforms. Sensor modeling and image restoration techniques allow maximum retention of spatial and radiometric information. An optimum combination of spectral information and spatial resolution can be obtained by merging different sensor types. These processing techniques are discussed and examples are presented.
Water quality modeling in the dead end sections of drinking water (Supplement)
Dead-end sections of drinking water distribution networks are known to be problematic zones in terms of water quality degradation. Extended residence time due to water stagnation leads to rapid reduction of disinfectant residuals, allowing the regrowth of microbial pathogens. Water quality models developed so far apply spatial aggregation and temporal averaging techniques for hydraulic parameters by assigning hourly averaged water demands to the main nodes of the network. Although this practice has generally resulted in minimal loss of accuracy for the predicted disinfectant concentrations in main water transmission lines, this is not the case for the peripheries of the distribution network. This study proposes a new approach for simulating disinfectant residuals in dead-end pipes while accounting for both spatial and temporal variability in hydraulic and transport parameters. A stochastic demand generator was developed to represent residential water pulses based on a non-homogeneous Poisson process. Dispersive solute transport was considered using highly dynamic dispersion rates. A genetic algorithm was used to calibrate the axial hydraulic profile of the dead-end pipe based on the different demand shares of the withdrawal nodes. A parametric sensitivity analysis was done to assess the model performance under variation of different simulation parameters. A group of Monte-Carlo ensembles was carried out to investigate the influence of spatial and temporal variations.
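The stochastic demand generator described above draws residential water pulses from a non-homogeneous Poisson process. A minimal sketch of that idea using the standard thinning (acceptance-rejection) method; the rate function, its values, and all names are illustrative assumptions, not the study's calibrated model:

```python
import random

def nhpp_pulse_times(rate_fn, rate_max, t_end, seed=42):
    """Sample pulse start times from a non-homogeneous Poisson process by
    thinning: draw candidates at the bounding rate, accept with probability
    rate(t) / rate_max."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)   # next candidate arrival
        if t >= t_end:
            return times
        if rng.random() < rate_fn(t) / rate_max:
            times.append(t)

# Illustrative diurnal demand rate (pulses per hour) with a morning peak.
def diurnal_rate(t_hours):
    return 2.0 + 4.0 * max(0.0, 1.0 - abs(t_hours - 7.0) / 3.0)

pulses = nhpp_pulse_times(diurnal_rate, rate_max=6.0, t_end=24.0)
```

Each accepted time would then be paired with a sampled pulse duration and intensity to build the demand series for a withdrawal node.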
Holographically generated structured illumination for cell stimulation in optogenetics
NASA Astrophysics Data System (ADS)
Schmieder, Felix; Büttner, Lars; Czarske, Jürgen; Torres, Maria Leilani; Heisterkamp, Alexander; Klapper, Simon; Busskamp, Volker
2017-06-01
In optogenetics, cells, e.g. neurons or cardiac cells, are genetically altered to produce, for example, the light-sensitive protein Channelrhodopsin-2. Illuminating these cells induces action potentials or contractions and therefore allows electrical activity to be controlled. Thus, light-induced cell stimulation can be used to gain insight into various biological processes. Many optogenetics studies, however, use only full-field illumination and thus gain no local information about their specimen. But using modern spatial light modulators (SLMs) in conjunction with computer-generated holograms (CGHs), cells may be stimulated locally, enabling research into the foundations of cell networks and cell communication. In our contribution, we present a digital holographic system for the patterned, spatially resolved stimulation of cell networks. We employ a fast ferroelectric liquid-crystal-on-silicon SLM to display CGHs at up to 1.7 kHz. With an effective working distance of 33 mm, we achieve a focus of 10 μm at a positioning accuracy of the individual foci of about 8 μm. We utilized our setup for the optogenetic stimulation of clusters of cardiac cells derived from induced pluripotent stem cells and were able to observe contractions correlated to both the temporal frequency and the spatial power distribution of the light incident on the cell clusters.
Basal melt rates of Filchner Ice Shelf, Antarctica
NASA Astrophysics Data System (ADS)
Humbert, A.; Nicholls, K. W.; Corr, H. F. J.; Steinhage, D.; Stewart, C.; Zeising, O.
2017-12-01
Thinning of ice shelves around Antarctica has been found to be partly driven by an increase in basal melt as a result of warmer waters entering the sub-ice-shelf cavity. In-situ observations of basal melt rate are, however, sparse. A new robust and efficient phase-sensitive radio echo sounder (pRES) allows changes in ice thickness and vertical strain to be measured at high accuracy, so that the contribution of basal melt to the change in thickness can be estimated. As modeling studies suggest that the cavity beneath Filchner Ice Shelf, Antarctica, might be prone to intrusion of warm water pulses within this century, we wished to derive a baseline dataset and an understanding of its present-day spatial variability. Here we present results from pRES measurements over two field seasons, 2015/16-16/17, comprising 86 datasets over the southern Filchner Ice Shelf, covering an area of about 6500 km². The maximum melt rate is only slightly more than 1 m/a, but the spatial distribution exhibits a complex pattern. For the purpose of testing variability of basal melt rates on small spatial scales, we performed 26 measurements over distances of about 1 km, and show that the melt rates do not vary by more than 0.25 m/a.
Malyarenko, Dariya I; Pang, Yuxi; Senegas, Julien; Ivancevic, Marko K; Ross, Brian D; Chenevert, Thomas L
2015-12-01
Spatially non-uniform diffusion weighting bias due to gradient nonlinearity (GNL) causes substantial errors in apparent diffusion coefficient (ADC) maps for anatomical regions imaged distant from magnet isocenter. Our previously-described approach allowed effective removal of spatial ADC bias from three orthogonal DWI measurements for mono-exponential media of arbitrary anisotropy. The present work evaluates correction feasibility and performance for quantitative diffusion parameters of the two-component IVIM model for well-perfused and nearly isotropic renal tissue. Sagittal kidney DWI scans of a volunteer were performed on a clinical 3T MRI scanner near isocenter and offset superiorly. Spatially non-uniform diffusion weighting due to GNL resulted both in shift and broadening of perfusion-suppressed ADC histograms for off-center DWI relative to unbiased measurements close to isocenter. Direction-average DW-bias correctors were computed based on the known gradient design provided by the vendor. The computed bias maps were empirically confirmed by coronal DWI measurements for an isotropic gel-flood phantom. Both phantom and renal tissue ADC bias for off-center measurements was effectively removed by applying pre-computed 3D correction maps. Comparable ADC accuracy was achieved for corrections of both b-maps and DWI intensities in the presence of IVIM perfusion. No significant bias impact was observed for IVIM perfusion fraction.
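The correction step the abstract describes amounts to dividing the measured map by a precomputed spatial bias map. A hedged numpy sketch of that operation on a synthetic phantom (array shapes, the linear bias model, and all names are illustrative assumptions, not the authors' vendor-specific correctors):

```python
import numpy as np

def correct_adc(adc_map, bias_map, eps=1e-6):
    """Remove spatially varying gradient-nonlinearity bias from an ADC map.

    bias_map holds the multiplicative diffusion-weighting bias predicted
    from the gradient design; dividing it out restores the unbiased ADC."""
    return adc_map / np.maximum(bias_map, eps)

# Synthetic check: a uniform phantom (ADC = 2.0e-3 mm^2/s) biased off-center.
true_adc = np.full((64, 64), 2.0e-3)
bias = 1.0 + 0.1 * np.linspace(-1, 1, 64)[None, :]   # up to +-10% bias
measured = true_adc * bias
recovered = correct_adc(measured, bias)
```

In the uniform-phantom case the division recovers the true ADC exactly; with real data the residual error reflects how well the bias map matches the actual gradient field.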
Genetic risk prediction using a spatial autoregressive model with adaptive lasso.
Wen, Yalu; Shen, Xiaoxi; Lu, Qing
2018-05-31
With rapidly evolving high-throughput technologies, studies are being initiated to accelerate the process toward precision medicine. The collection of vast amounts of sequencing data provides great opportunities to systematically study the role of a deep catalog of sequencing variants in risk prediction. Nevertheless, the massive amount of noise signals and the low frequencies of rare variants in sequencing data pose great analytical challenges for risk prediction modeling. Motivated by developments in spatial statistics, we propose a spatial autoregressive model with adaptive lasso (SARAL) for risk prediction modeling using high-dimensional sequencing data. SARAL is a set-based approach and thus reduces the data dimension and accumulates genetic effects within a single-nucleotide variant (SNV) set. Moreover, it allows different SNV sets to have various magnitudes and directions of effect sizes, which reflects the nature of complex diseases. With the adaptive lasso implemented, SARAL can shrink the effects of noise SNV sets to zero and thus further improve prediction accuracy. Through simulation studies, we demonstrate that, overall, SARAL is comparable to, if not better than, the genomic best linear unbiased prediction method. The method is further illustrated by an application to the sequencing data from the Alzheimer's Disease Neuroimaging Initiative. Copyright © 2018 John Wiley & Sons, Ltd.
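The adaptive-lasso idea behind SARAL (penalty weights inversely proportional to an initial coefficient estimate, so that noise sets are shrunk to exactly zero) can be sketched with a simple proximal-gradient solver. This is a generic illustration on synthetic set-level scores, not the SARAL software or its spatial autoregressive component; all dimensions and values are hypothetical.

```python
import numpy as np

def adaptive_lasso(X, y, alpha=0.05, n_iter=500):
    """Adaptive lasso via ISTA: weights w_j = 1/|b_ols_j| penalize
    weak (likely noise) predictors more heavily."""
    n, p = X.shape
    b0 = np.linalg.lstsq(X, y, rcond=None)[0]   # initial OLS estimate
    w = 1.0 / (np.abs(b0) + 1e-6)               # adaptive penalty weights
    step = n / np.linalg.norm(X, 2) ** 2        # 1/L for the smooth part
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        z = beta - step * grad
        beta = np.sign(z) * np.maximum(np.abs(z) - step * alpha * w, 0.0)
    return beta

rng = np.random.default_rng(0)
n, p = 200, 20                        # samples, SNV-set scores (hypothetical)
X = rng.standard_normal((n, p))
true_beta = np.zeros(p)
true_beta[:3] = [1.5, -1.0, 0.8]      # only three informative sets
y = X @ true_beta + 0.5 * rng.standard_normal(n)
beta = adaptive_lasso(X, y)
print(np.count_nonzero(beta))          # a sparse fit: noise sets zeroed out
```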
Taffe, Michael A.; Taffe, William J.
2011-01-01
Several nonhuman primate species have been reported to employ a distance-minimizing, traveling salesman-like, strategy during foraging as well as in experimental spatial search tasks involving lesser amounts of locomotion. Spatial sequencing may optimize performance by reducing reference or episodic memory loads, locomotor costs, competition or other demands. A computerized self-ordered spatial search (SOSS) memory task has been adapted from a human neuropsychological testing battery (CANTAB, Cambridge Cognition, Ltd) for use in monkeys. Accurate completion of a trial requires sequential responses to colored boxes in two or more spatial locations without repetition of a previous location. Marmosets have been reported to employ a circling pattern of search, suggesting spontaneous adoption of a strategy to reduce working memory load. In this study the SOSS performance of rhesus monkeys was assessed to determine if the use of a distance-minimizing search path enhances accuracy. A novel strategy score, independent of the trial difficulty and arrangement of boxes, has been devised. Analysis of the performance of 21 monkeys trained on SOSS over two years shows that a distance-minimizing search strategy is associated with improved accuracy. This effect is observed within individuals as they improve over many cumulative sessions of training on the task and across individuals at any given level of training. Erroneous trials were associated with a failure to deploy the strategy. It is concluded that the effect of utilizing the strategy on this locomotion-free, laboratory task is to enhance accuracy by reducing demands on spatial working memory resources. PMID:21840507
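One way to quantify a distance-minimizing search strategy is the ratio of the shortest possible path through the box locations to the path actually taken, which is 1.0 for a perfectly efficient search. This is an illustrative formulation, not necessarily the exact strategy score devised in the study; box coordinates are hypothetical, and brute force is adequate for the few locations per trial.

```python
import itertools
import math

def path_length(points, order):
    """Total Euclidean length of the path visiting points in the given order."""
    return sum(math.dist(points[a], points[b]) for a, b in zip(order, order[1:]))

def strategy_score(points, visited_order):
    """Shortest possible search path divided by the path actually taken.
    1.0 = perfectly distance-minimizing; lower values = less efficient."""
    n = len(points)
    best = min(path_length(points, p) for p in itertools.permutations(range(n)))
    return best / path_length(points, visited_order)

boxes = [(0, 0), (1, 0), (2, 0), (2, 1)]       # hypothetical box locations
print(strategy_score(boxes, (0, 1, 2, 3)))     # 1.0, a minimal path
print(round(strategy_score(boxes, (0, 2, 1, 3)), 3))  # < 1.0, inefficient order
```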
NASA Astrophysics Data System (ADS)
Cruden, A. R.; Vollgger, S.
2016-12-01
The emerging capability of UAV photogrammetry combines a simple and cost-effective method to acquire digital aerial images with advanced computer vision algorithms that compute spatial datasets from a sequence of overlapping digital photographs taken from various viewpoints. Depending on flight altitude and camera setup, sub-centimeter spatial resolution orthophotographs and textured dense point clouds can be achieved. Orientation data for detailed structural analysis can be collected by digitally mapping such high-resolution spatial datasets in a fraction of the time and with higher fidelity compared to traditional mapping techniques. Here we describe a photogrammetric workflow applied to a structural study of folds and fractures within alternating layers of sandstone and mudstone at a coastal outcrop in SE Australia. We surveyed this location using a downward-looking digital camera mounted on a commercially available multi-rotor UAV that autonomously followed waypoints at a set altitude and speed to ensure sufficient image overlap, minimal motion blur and an appropriate resolution. The use of surveyed ground control points allowed us to produce a geo-referenced 3D point cloud and an orthophotograph from hundreds of digital images at a spatial resolution < 10 mm per pixel, and cm-scale location accuracy. Orientation data of brittle and ductile structures were semi-automatically extracted from these high-resolution datasets using open-source software. This resulted in an extensive and statistically relevant orientation dataset that was used to 1) interpret the progressive development of folds and faults in the region, and 2) generate a 3D structural model that underlines the complex internal structure of the outcrop and quantifies spatial variations in fold geometries. Overall, our work highlights how UAV photogrammetry can contribute new insights to structural analysis.
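Extracting orientation data from such a point cloud typically reduces to fitting a plane to points sampled on a structural surface and converting the plane normal to dip and dip direction. A minimal sketch (total least squares via SVD, East-North-Up coordinates assumed; the synthetic bedding plane is hypothetical):

```python
import math
import numpy as np

def plane_orientation(points):
    """Fit a plane to 3-D outcrop points (total least squares via SVD)
    and return (dip, dip_direction) in degrees."""
    P = np.asarray(points, dtype=float)
    normal = np.linalg.svd(P - P.mean(axis=0))[2][-1]  # least-variance axis
    if normal[2] < 0:
        normal = -normal                               # point the normal up
    dip = math.degrees(math.acos(min(normal[2], 1.0)))
    dip_dir = math.degrees(math.atan2(normal[0], normal[1])) % 360.0
    return dip, dip_dir

# Synthetic bedding surface dipping 30 degrees due east (090)
slope = math.tan(math.radians(30))
pts = [(0, 0, 0.0), (1, 0, -slope), (0, 1, 0.0), (1, 1, -slope)]
dip, dip_dir = plane_orientation(pts)
print(round(dip, 1), round(dip_dir, 1))   # 30.0 90.0
```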
Towards real-time thermometry using simultaneous multislice MRI
NASA Astrophysics Data System (ADS)
Borman, P. T. S.; Bos, C.; de Boorder, T.; Raaymakers, B. W.; Moonen, C. T. W.; Crijns, S. P. M.
2016-09-01
MR-guided thermal therapies, such as high-intensity focused ultrasound (MRgHIFU) and laser-induced thermal therapy (MRgLITT) are increasingly being applied in oncology and neurology. MRI is used for guidance since it can measure temperature noninvasively based on the proton resonance frequency shift (PRFS). For therapy guidance using PRFS thermometry, high temporal resolution and large spatial coverage are desirable. We propose to use the parallel imaging technique simultaneous multislice (SMS) in combination with controlled aliasing (CAIPIRINHA) to accelerate the acquisition. We compare this with the sensitivity encoding (SENSE) acceleration technique. Two experiments were performed to validate that SMS can be used to increase the spatial coverage or the temporal resolution. The first was performed in agar gel using LITT heating and a gradient-echo sequence with echo-planar imaging (EPI), and the second was performed in bovine muscle using HIFU heating and a gradient-echo sequence without EPI. In both experiments temperature curves from an unaccelerated scan and from SMS, SENSE, and SENSE/SMS accelerated scans were compared. The precision was quantified by a standard deviation analysis of scans without heating. Both experiments showed a good agreement between the temperature curves obtained from the unaccelerated, and SMS accelerated scans, confirming that accuracy was maintained during SMS acceleration. The standard deviations of the temperature measurements obtained with SMS were significantly smaller than when SENSE was used, implying that SMS allows for higher acceleration. In the LITT and HIFU experiments SMS factors up to 4 and 3 were reached, respectively, with a loss of precision of less than a factor of 3. Based on these results we conclude that SMS acceleration of PRFS thermometry is a valuable addition to SENSE, because it allows for a higher temporal resolution or bigger spatial coverage, with a higher precision.
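The PRFS temperature calculation underlying the method converts a phase difference between dynamics into a temperature change via ΔT = Δφ / (α·γ·B0·TE), with α ≈ -0.01 ppm/°C. A minimal sketch with a round-trip check (the scan parameters are hypothetical):

```python
import math

GAMMA = 2 * math.pi * 42.576e6      # 1H gyromagnetic ratio, rad/s per tesla

def prfs_delta_temp(delta_phi, te, b0, alpha=-0.01e-6):
    """Temperature change (deg C) from a PRFS phase difference (rad).
    alpha is the PRF thermal coefficient, about -0.01 ppm per deg C."""
    return delta_phi / (alpha * GAMMA * b0 * te)

# Round trip: the phase shift a 10 deg C rise produces at 3 T, TE = 20 ms
dphi = -0.01e-6 * GAMMA * 3.0 * 0.020 * 10.0
print(round(prfs_delta_temp(dphi, te=0.020, b0=3.0), 6))   # 10.0
```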
NASA Astrophysics Data System (ADS)
Etchanchu, J.; Delogu, E.; Saadi, S.; Chebbi, W.; Trapon, D.; Rivalland, V.; Boulet, G.; Boone, A. A.; Fanise, P.; Mougenot, B.; LE Dantec, V.
2017-12-01
Evapotranspiration and the sensible/latent heat flux partition are important decision criteria to manage crops, detect water stress and plan irrigation, particularly in a semi-arid context. Nowadays, remote sensing information (at medium -MODIS- and high resolution -LANDSAT, SPOT-) allows us to spatially estimate the different terms of the energy balance at daily and infra-daily time steps through various approaches, either by forcing data into an energy balance model (EVASPA, Gallego-Elvira et al., 2013, and SPARSE, Boulet et al., 2015) or by data assimilation in coupled water/energy balance models (SURFEX-ISBA, Noilhan et Planton, 1989). However, these different methods of flux estimation still require evaluation through comparison to in-situ measurements and inter-comparison. The area selected for this study is the Kairouan agricultural plain, a semi-arid region in central Tunisia. Different flux datasets were acquired over two years, on an extensive rainfed olive orchard with very low vegetation cover, and on irrigated and rainfed wheat plots. At the same time, a third dataset was acquired over a complex agricultural landscape with an eXtra-Large Aperture Scintillometer (XLAS) set up on a 4 km transect. First, EC fluxes from towers are compared to the different model simulations at plot scale. Then a spatial comparison with retrievals of sensible and latent heat fluxes from the XLAS is performed, which takes into account the heterogeneity of the landscape (a mix of wheat, irrigated olive orchards and bare soil). Effects of irrigation scenarios, through an automatic irrigation-triggering method, are tested and discussed. Finally, we cross-compare the different modeling approaches. We address several issues: the accuracy of the measurements, the temporal frequency of remote sensing data, and the difficulty of calibrating the models.
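The flux partition at the heart of these comparisons follows from surface energy balance closure: the latent heat flux is the residual LE = Rn - G - H, and the evaporative fraction LE / (Rn - G) summarizes the partition. A minimal sketch with hypothetical midday flux values:

```python
def latent_heat_flux(rn, g, h):
    """Latent heat flux (W/m2) as the energy-balance residual LE = Rn - G - H."""
    return rn - g - h

def evaporative_fraction(rn, g, h):
    """EF = LE / (Rn - G): fraction of available energy used for evaporation."""
    return latent_heat_flux(rn, g, h) / (rn - g)

# Hypothetical midday values: net radiation, soil heat flux, sensible heat flux
print(latent_heat_flux(500.0, 50.0, 150.0))                # 300.0 W/m2
print(round(evaporative_fraction(500.0, 50.0, 150.0), 3))  # 0.667
```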
ERIC Educational Resources Information Center
Morey, Candice C.; Miron, Monica D.
2016-01-01
Among models of working memory, there is not yet a consensus about how to describe functions specific to storing verbal or visual-spatial memories. We presented aural-verbal and visual-spatial lists simultaneously and sometimes cued one type of information after presentation, comparing accuracy in conditions with and without informative…
ERIC Educational Resources Information Center
Rakitin, Brian C.
2005-01-01
Five experiments examined the relations between timing and attention using a choice time production task in which the latency of a spatial choice response is matched to a target interval (3 or 5 s). Experiments 1 and 2 indicated that spatial stimulus-response incompatibility increased nonscalar timing variability without affecting timing accuracy…
Low elementary movement speed is associated with poor motor skill in Turner's syndrome.
Nijhuis-van der Sanden, Maria W G; Smits-Engelsman, Bouwien C M; Eling, Paul A T M; Nijhuis, Bianca J G; Van Galen, Gerard P
2002-01-01
The article aims to discriminate between 2 features that in principle may both be characteristic of the frequently observed poor motor performance in girls with Turner's syndrome (TS): on the one hand, a reduced movement speed that is independent of variations in spatial accuracy demands and therefore suggests a problem in motor execution; on the other hand, a disproportional slowing of movement speed under spatial-accuracy demands, indicating a more central problem in motor programming. To assess their motor performance problems, 15 girls with TS (age 9.6-13.0 years) and 14 female controls (age 9.1-13.0 years) were tested using the Movement Assessment Battery for Children (MABC). Additionally, an experimental procedure using a variant of Fitts' graphic aiming task was used to try to disentangle the role of spatial-accuracy demands in different motor task conditions. The results of the MABC reestablish that overall motor performance in girls with TS is poor. The data from the Fitts' task reveal that TS girls move with the same accuracy as their normal peers but show a significantly lower speed independent of task difficulty. We conclude that a problem in motor execution is the main factor determining performance differences between girls with TS and controls.
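The Fitts' paradigm referenced here manipulates spatial-accuracy demands through the index of difficulty ID = log2(2D/W), with movement time modeled as MT = a + b·ID. A sketch of the classic formulation (the study used a graphic aiming variant; the intercept and slope values below are purely illustrative):

```python
import math

def fitts_id(distance, width):
    """Fitts' index of difficulty (bits): ID = log2(2D / W)."""
    return math.log2(2 * distance / width)

def movement_time(a, b, distance, width):
    """Fitts' law MT = a + b * ID; a and b are fitted per participant."""
    return a + b * fitts_id(distance, width)

print(fitts_id(160, 10))                             # 5.0 bits
print(round(movement_time(0.2, 0.1, 160, 10), 2))    # 0.7 s with example a, b
```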
Cluster Detection Tests in Spatial Epidemiology: A Global Indicator for Performance Assessment
Guttmann, Aline; Li, Xinran; Feschet, Fabien; Gaudart, Jean; Demongeot, Jacques; Boire, Jean-Yves; Ouchchane, Lemlih
2015-01-01
In cluster detection of disease, the use of local cluster detection tests (CDTs) is common. These methods aim both at locating likely clusters and at testing for their statistical significance. New or improved CDTs are regularly proposed to epidemiologists and must be subjected to performance assessment. Because location accuracy has to be considered, performance assessment goes beyond the raw estimation of type I or II errors. As no consensus exists for performance evaluations, heterogeneous methods are used, and therefore studies are rarely comparable. A global indicator of performance, which assesses both spatial accuracy and usual power, would facilitate the exploration of CDTs' behaviour and help between-study comparisons. The Tanimoto coefficient (TC) is a well-known measure of similarity that can assess location accuracy, but only for one detected cluster. In a simulation study, performance is measured over many tests. From the TC, we here propose two statistics, the averaged TC and the cumulated TC, as indicators able to provide a global overview of CDT performance for both usual power and location accuracy. We evidence the properties of these two indicators and the superiority of the cumulated TC for assessing performance. We applied these indicators to conduct a systematic spatial assessment displayed through performance maps. PMID:26086911
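The building block of both proposed statistics is the Tanimoto coefficient between the detected cluster and the true cluster, |A ∩ B| / |A ∪ B|. The sketch below computes it over a few hypothetical simulation runs and averages; the exact definitions of the averaged and cumulated TC are given in the paper.

```python
def tanimoto(detected, true_cluster):
    """Tanimoto coefficient |A & B| / |A | B| between two sets of spatial units."""
    a, b = set(detected), set(true_cluster)
    return len(a & b) / len(a | b) if (a or b) else 1.0

# Hypothetical runs: (detected spatial units, true cluster units)
runs = [({1, 2, 3}, {2, 3, 4}), ({5}, {5}), (set(), {7, 8})]
tcs = [tanimoto(d, t) for d, t in runs]
avg_tc = sum(tcs) / len(tcs)          # average over the simulation runs
print(round(avg_tc, 3))               # 0.5
```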
An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor
NASA Astrophysics Data System (ADS)
Liscombe, Michael
3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still imposes a fundamental limitation on range accuracy. This work proposes a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (e.g., image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of √N. This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor, and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests were performed on the image sensor's innovative high-dynamic-range technology to determine its effects on range accuracy. As expected, experimental results show that the sensor provides a trade-off between dynamic range and range accuracy.
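Averaging N statistically uncorrelated measurements reduces random error by √N, which is the mechanism behind capturing multiple uncorrelated spot profiles. A quick Monte Carlo sketch of that scaling (the error magnitude and N are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma1 = 0.2                 # range std (mm) from a single speckled profile
trials = 20000
single = rng.normal(0.0, sigma1, size=trials)
# Each trial averages N = 9 uncorrelated spot-profile range estimates
averaged = rng.normal(0.0, sigma1, size=(trials, 9)).mean(axis=1)
ratio = single.std() / averaged.std()
print(round(ratio, 1))       # close to 3.0 = sqrt(9)
```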
Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.
2004-01-01
Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.
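The precision penalty from within-cluster correlation is often summarized by the design effect deff = 1 + (m - 1)ρ, where m is the cluster size and ρ the intracluster correlation; dividing the nominal sample size by deff gives the effective sample size. A sketch with hypothetical design values (this standard formula is an illustration, not the paper's full evaluation protocol):

```python
def design_effect(m, rho):
    """Variance inflation from cluster sampling: deff = 1 + (m - 1) * rho."""
    return 1.0 + (m - 1) * rho

def effective_sample_size(n, m, rho):
    """Equivalent simple-random-sample size after clustering."""
    return n / design_effect(m, rho)

# Hypothetical design: 25 reference pixels per cluster, rho = 0.1
print(round(design_effect(25, 0.1), 2))                 # 3.4
print(round(effective_sample_size(1000, 25, 0.1), 1))   # 294.1
```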
Cadastral Database Positional Accuracy Improvement
NASA Astrophysics Data System (ADS)
Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.
2017-10-01
Positional Accuracy Improvement (PAI) is the process of refining the geometry of features in a geospatial dataset to improve their actual position. This actual position relates to the absolute position in a specific coordinate system and to the relation with neighborhood features. With the growth of spatially based technology, especially Geographical Information Systems (GIS) and Global Navigation Satellite Systems (GNSS), PAI campaigns are inevitable, especially for legacy cadastral databases. Integration of a legacy dataset with a higher-accuracy dataset such as GNSS observations is a potential solution for improving the legacy dataset. However, merely integrating both datasets will distort the relative geometry. The improved dataset should be further treated to minimize inherent errors and to fit it to the new, more accurate dataset. The main focus of this study is to describe a method of angular-based Least Squares Adjustment (LSA) for the PAI process of a legacy dataset. The existing high-accuracy dataset, known as the National Digital Cadastral Database (NDCDB), is then used as a benchmark to validate the results. It was found that the proposed technique is a viable approach for positional accuracy improvement of legacy spatial datasets.
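A much-simplified stand-in for such an adjustment is a least-squares 2-D similarity (Helmert) transform fitting legacy coordinates to benchmark coordinates; the paper's angular-based LSA additionally constrains observed angles, which this sketch does not attempt. All coordinates below are hypothetical.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2-D similarity transform dst ~ s*R*src + t, solved as
    x' = a*x - b*y + tx, y' = b*x + a*y + ty for parameters (a, b, tx, ty)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = np.zeros((2 * len(src), 4))
    A[0::2, 0], A[0::2, 1], A[0::2, 2] = src[:, 0], -src[:, 1], 1.0
    A[1::2, 0], A[1::2, 1], A[1::2, 3] = src[:, 1], src[:, 0], 1.0
    params, *_ = np.linalg.lstsq(A, dst.ravel(), rcond=None)
    return params

legacy = [(0, 0), (10, 0), (10, 10), (0, 10)]          # legacy parcel corners
# Benchmark positions: shifted by (5, 3) and scaled by 1.001 (hypothetical)
bench = [(5 + 1.001 * x, 3 + 1.001 * y) for x, y in legacy]
a, b, tx, ty = fit_similarity(legacy, bench)
print(round(a, 3), round(tx, 1), round(ty, 1))   # 1.001 5.0 3.0
```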
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calamida, A.; Saha, A.; Strampelli, G.
2017-04-01
We present a multi-band photometric catalog of ≈1.7 million cluster members for a field of view of ≈2° × 2° across ω Cen. Photometry is based on images collected with the Dark Energy Camera on the 4 m Blanco telescope and the Advanced Camera for Surveys on the Hubble Space Telescope. The unprecedented photometric accuracy and field coverage allowed us, for the first time, to investigate the spatial distribution of ω Cen multiple populations from the core to the tidal radius, confirming its very complex structure. We found that the frequency of blue main-sequence stars is increasing compared to red main-sequence stars starting from a distance of ≈25′ from the cluster center. Blue main-sequence stars also show a clumpy spatial distribution, with an excess in the northeast quadrant of the cluster pointing toward the direction of the Galactic center. Stars belonging to the reddest and faintest red-giant branch also show a more extended spatial distribution in the outskirts of ω Cen, a region never explored before. Both these stellar sub-populations, according to spectroscopic measurements, are more metal-rich compared to the cluster main stellar population. These findings, once confirmed, make ω Cen the only stellar system currently known where metal-rich stars have a more extended spatial distribution compared to metal-poor stars. Kinematic and chemical abundance measurements are now needed for stars in the external regions of ω Cen to better characterize the properties of these sub-populations.
Stimulus specificity of a steady-state visual-evoked potential-based brain-computer interface.
Ng, Kian B; Bradley, Andrew P; Cunnington, Ross
2012-06-01
The mechanisms of neural excitation and inhibition when given a visual stimulus are well studied. It has been established that changing stimulus specificity such as luminance contrast or spatial frequency can alter the neuronal activity and thus modulate the visual-evoked response. In this paper, we study the effect that stimulus specificity has on the classification performance of a steady-state visual-evoked potential-based brain-computer interface (SSVEP-BCI). For example, we investigate how closely two visual stimuli can be placed before they compete for neural representation in the cortex and thus influence BCI classification accuracy. We characterize stimulus specificity using the four stimulus parameters commonly encountered in SSVEP-BCI design: temporal frequency, spatial size, number of simultaneously displayed stimuli and their spatial proximity. By varying these quantities and measuring the SSVEP-BCI classification accuracy, we are able to determine the parameters that provide optimal performance. Our results show that superior SSVEP-BCI accuracy is attained when stimuli are placed spatially more than 5° apart, with size that subtends at least 2° of visual angle, when using a tagging frequency of between high alpha and beta band. These findings may assist in deciding the stimulus parameters for optimal SSVEP-BCI design.
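A minimal SSVEP decoder picks the tagging frequency with the largest spectral power in the recorded signal. The sketch below is a generic frequency-domain illustration with synthetic data (real SSVEP-BCIs typically also exploit harmonics or canonical correlation analysis); sampling rate, frequencies, and noise level are hypothetical.

```python
import numpy as np

def ssvep_classify(eeg, fs, freqs):
    """Return the candidate tagging frequency with the largest FFT power."""
    spec = np.abs(np.fft.rfft(eeg)) ** 2
    f_axis = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    powers = [spec[np.argmin(np.abs(f_axis - f))] for f in freqs]
    return freqs[int(np.argmax(powers))]

fs, dur = 250, 4.0                       # Hz, seconds (hypothetical)
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(2)
# Synthetic response to a 15 Hz stimulus, buried in noise
eeg = np.sin(2 * np.pi * 15.0 * t) + 0.5 * rng.standard_normal(t.size)
print(ssvep_classify(eeg, fs, [12.0, 15.0, 17.0, 20.0]))   # 15.0
```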
Compressed Sensing for Resolution Enhancement of Hyperpolarized 13C Flyback 3D-MRSI
Hu, Simon; Lustig, Michael; Chen, Albert P.; Crane, Jason; Kerr, Adam; Kelley, Douglas A.C.; Hurd, Ralph; Kurhanewicz, John; Nelson, Sarah J.; Pauly, John M.; Vigneron, Daniel B.
2008-01-01
High polarization of nuclear spins in liquid state through dynamic nuclear polarization has enabled the direct monitoring of 13C metabolites in vivo at very high signal to noise, allowing for rapid assessment of tissue metabolism. The abundant SNR afforded by this hyperpolarization technique makes high resolution 13C 3D-MRSI feasible. However, the number of phase encodes that can be fit into the short acquisition time for hyperpolarized imaging limits spatial coverage and resolution. To take advantage of the high SNR available from hyperpolarization, we have applied compressed sensing to achieve a factor of 2 enhancement in spatial resolution without increasing acquisition time or decreasing coverage. In this paper, the design and testing of compressed sensing suited for a flyback 13C 3D-MRSI sequence are presented. The key to this design was the undersampling of spectral k-space using a novel blipped scheme, thus taking advantage of the considerable sparsity in typical hyperpolarized 13C spectra. Phantom tests validated the accuracy of the compressed sensing approach and initial mouse experiments demonstrated in vivo feasibility. PMID:18367420
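The compressed-sensing principle used here (recovering a sparse spectrum from fewer samples than its nominal dimension via l1-regularized reconstruction) can be sketched with iterative soft thresholding. This generic demonstration uses a random sensing matrix rather than the paper's blipped spectral k-space undersampling; all dimensions and peak values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 128, 64                      # spectrum length, number of measurements
x = np.zeros(n)
x[[10, 40, 90]] = [3.0, 2.0, 1.5]   # sparse "spectrum" with three peaks
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                       # undersampled data (m < n)

# ISTA: gradient step on ||A z - y||^2 followed by soft thresholding,
# solving the l1-regularized (lasso) reconstruction problem
L = np.linalg.norm(A, 2) ** 2
z = np.zeros(n)
for _ in range(3000):
    z = z - A.T @ (A @ z - y) / L
    z = np.sign(z) * np.maximum(np.abs(z) - 0.01 / L, 0.0)
print(np.round(z[[10, 40, 90]], 1))   # peaks recovered near [3.0, 2.0, 1.5]
```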
Building Change Detection in Very High Resolution Satellite Stereo Image Time Series
NASA Astrophysics Data System (ADS)
Tian, J.; Qin, R.; Cerra, D.; Reinartz, P.
2016-06-01
There is an increasing demand for robust methods on urban sprawl monitoring. The steadily increasing number of high resolution and multi-view sensors allows producing datasets with high temporal and spatial resolution; however, less effort has been dedicated to employ very high resolution (VHR) satellite image time series (SITS) to monitor the changes in buildings with higher accuracy. In addition, these VHR data are often acquired from different sensors. The objective of this research is to propose a robust time-series data analysis method for VHR stereo imagery. Firstly, the spatial-temporal information of the stereo imagery and the Digital Surface Models (DSMs) generated from them are combined, and building probability maps (BPM) are calculated for all acquisition dates. In the second step, an object-based change analysis is performed based on the derivative features of the BPM sets. The change consistence between object-level and pixel-level are checked to remove any outlier pixels. Results are assessed on six pairs of VHR satellite images acquired within a time span of 7 years. The evaluation results have proved the efficiency of the proposed method.
The Sensitivity of Coded Mask Telescopes
NASA Technical Reports Server (NTRS)
Skinner, Gerald K.
2008-01-01
Simple formulae are often used to estimate the sensitivity of coded mask X-ray or gamma-ray telescopes, but these are strictly only applicable if a number of basic assumptions are met. Complications arise, for example, if a grid structure is used to support the mask elements, if the detector spatial resolution is not good enough to completely resolve all the detail in the shadow of the mask, or if any of a number of other simplifying conditions are not fulfilled. We derive more general expressions for the Poisson-noise-limited sensitivity of astronomical telescopes using the coded mask technique, noting explicitly in what circumstances they are applicable. The emphasis is on using nomenclature and techniques that result in simple and revealing results. Where no convenient expression is available, a procedure is given which allows the calculation of the sensitivity. We consider certain aspects of the optimisation of the design of a coded mask telescope and show that when the detector spatial resolution and the mask-to-detector separation are fixed, the best source location accuracy is obtained when the mask elements are equal in size to the detector pixels.
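One of the simple formulae alluded to treats an ideal mask with open fraction f: the detected source counts are f·s·A·t and the Poisson variance of the total counts is (b + f·s)·A·t. The sketch below encodes that textbook approximation only; the paper's point is precisely that more general expressions are needed when its assumptions fail. All rates are hypothetical.

```python
import math

def coded_mask_snr(s, b, area, t, f=0.5):
    """Poisson-limited point-source S/N for an idealized coded mask with
    open fraction f (textbook approximation; see text for the caveats)."""
    source_counts = f * s * area * t           # counts attributed to the source
    variance = (b + f * s) * area * t          # Poisson variance of the total
    return source_counts / math.sqrt(variance)

# Hypothetical rates: s, b in counts/s/cm^2, area in cm^2, exposure t in s
print(round(coded_mask_snr(0.1, 100.0, 1000.0, 1e4), 2))   # about 15.81
```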
Luegmair, Georg; Mehta, Daryush D.; Kobler, James B.; Döllinger, Michael
2015-01-01
Vocal fold kinematics and its interaction with aerodynamic characteristics play a primary role in acoustic sound production of the human voice. Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate the three-dimensional vocal fold surfaces during phonation captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds at a high accuracy to allow for the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry. PMID:26087485
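Stereo triangulation of each laser point amounts to intersecting the camera ray and the projector ray; with noise the rays are skew, and the midpoint of their common perpendicular is the standard estimate. A minimal sketch (camera, projector, and surface-point coordinates are hypothetical):

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the common perpendicular between rays p1 + t*d1 and p2 + t*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    n = np.cross(d1, d2)                       # direction of the perpendicular
    # Solve t1*d1 - t2*d2 + k*n = p2 - p1 for the closest points on each ray
    t1, t2, _ = np.linalg.solve(np.column_stack([d1, -d2, n]), p2 - p1)
    return (p1 + t1 * d1 + p2 + t2 * d2) / 2.0

cam = np.array([0.0, 0.0, 0.0])        # camera center (hypothetical)
proj = np.array([1.0, 0.0, 0.0])       # laser projector origin (hypothetical)
target = np.array([0.3, 0.2, 1.0])     # true point on the vocal fold surface
p = triangulate(cam, target - cam, proj, target - proj)
print(np.round(p, 6))                  # recovers [0.3, 0.2, 1.0]
```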
Kapeller, Christoph; Kamada, Kyousuke; Ogawa, Hiroshi; Prueckl, Robert; Scharinger, Josef; Guger, Christoph
2014-01-01
A brain-computer interface (BCI) allows the user to control a device or software with brain activity. Many BCIs rely on visual stimuli with constant stimulation cycles that elicit steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG). This EEG response can be generated with an LED or a computer screen flashing at a constant frequency, and similar EEG activity can be elicited with pseudo-random stimulation sequences on a screen (code-based BCI). Using electrocorticography (ECoG) instead of EEG promises higher spatial and temporal resolution and leads to more dominant evoked potentials due to visual stimulation. This work is focused on BCIs based on visual evoked potentials (VEP) and their capability as a continuous control interface for augmentation of video applications. One 35-year-old female subject with implanted subdural grids participated in the study. The task was to select one out of four visual targets, while each was flickering with a code sequence. After a calibration run including 200 code sequences, a linear classifier was used during an evaluation run to identify the selected visual target based on the generated code-based VEPs over 20 trials. Multiple ECoG buffer lengths were tested, and the subject reached a mean online classification accuracy of 99.21% for a window length of 3.15 s. Finally, the subject performed an unsupervised free run in combination with visual feedback of the current selection. Additionally, an algorithm was implemented to suppress false positive selections, which allowed the subject to start and stop the BCI at any time. The code-based BCI system attained very high online accuracy, which makes this approach very promising for control applications where a continuous control signal is needed. PMID:25147509
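A simple code-based VEP decoder compares the measured response against per-target calibration templates and selects the best match. The sketch below uses correlation as the matching score on synthetic data; it is an illustration of the general idea, not the study's linear classifier, and all dimensions are hypothetical.

```python
import numpy as np

def classify_cvep(response, templates):
    """Return the index of the calibration template that correlates best
    with the measured code-based VEP response."""
    scores = [np.corrcoef(response, t)[0, 1] for t in templates]
    return int(np.argmax(scores))

rng = np.random.default_rng(4)
templates = rng.standard_normal((4, 200))      # per-target mean VEPs (synthetic)
# Simulated response: target 2's template plus measurement noise
response = templates[2] + 0.8 * rng.standard_normal(200)
print(classify_cvep(response, templates))      # 2
```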
Optical diffraction tomography: accuracy of an off-axis reconstruction
NASA Astrophysics Data System (ADS)
Kostencka, Julianna; Kozacki, Tomasz
2014-05-01
Optical diffraction tomography is an increasingly popular method that allows reconstruction of the three-dimensional refractive index distribution of semi-transparent samples from multiple measurements of an optical field transmitted through the sample for various illumination directions. The assembly of the angular measurements is usually performed with one of two methods: the filtered backprojection (FBPJ) or the filtered backpropagation (FBPP) tomographic reconstruction algorithm. The former approach, although conceptually very simple, provides an accurate reconstruction for object regions located close to the plane of focus. However, since FBPJ ignores diffraction, its use for spatially extended structures is arguable. According to the theory of scattering, more precise restoration of a 3D structure shall be achieved with the FBPP algorithm, which, unlike the former approach, incorporates diffraction. It is believed that with this method one can obtain a high-accuracy reconstruction in a large measurement volume exceeding the depth of focus of an imaging system. However, some studies have suggested that a considerable improvement of the FBPP results can be achieved by first propagating the transmitted fields back to the centre of the object. This, supposedly, enables reduction of errors due to the approximated diffraction formulas used in FBPP. In our view this finding casts doubt on the quality of the FBPP reconstruction in regions far from the rotation axis. The objective of this paper is to investigate the limitations of the FBPP algorithm for off-axis reconstruction and compare its performance with the FBPJ approach. Moreover, in this work we propose some modifications to the FBPP algorithm that allow more precise restoration of a sample structure at off-axis locations. The research is based on extensive numerical simulations supported with the wave-propagation method.
Spatial tools for managing hemlock woolly adelgid in the southern Appalachians
NASA Astrophysics Data System (ADS)
Koch, Frank Henry, Jr.
The hemlock woolly adelgid (Adelges tsugae) has recently spread into the southern Appalachians. This insect attacks both native hemlock species (Tsuga canadensis and T. caroliniana ), has no natural enemies, and can kill hemlocks within four years. Biological control displays promise for combating the pest, but counter-measures are impeded because adelgid and hemlock distribution patterns have been detailed poorly. We developed a spatial management system to better target control efforts, with two components: (1) a protocol for mapping hemlock stands, and (2) a technique to map areas at risk of imminent infestation. To construct a hemlock classifier, we used topographically normalized satellite images from Great Smoky Mountains National Park. Employing a decision tree approach that supplemented image spectral data with several environmental variables, we generated rules distinguishing hemlock areas from other forest types. We then implemented these rules in a geographic information system and generated hemlock distribution maps. Assessment yielded an overall thematic accuracy of 90% for one study area, and 75% accuracy in capturing hemlocks in a second study area. To map areas at risk, we combined first-year infestation locations from Great Smoky Mountains National Park and the Blue Ridge Parkway with points from uninfested hemlock stands, recording a suite of environmental variables for each point. We applied four different multivariate classification techniques to generate models from this sample predicting locations with high infestation risk, and used the resulting models to generate risk maps for the study region. All techniques performed well, accurately capturing 70--90% of training and validation samples, with the logistic regression model best balancing accuracy and regional applicability. Areas close to trails, roads, and streams appear to have the highest initial risk, perhaps due to bird- or human-mediated dispersal. 
Both components of our management system are general enough for use throughout the southern Appalachians. Overlay of derived maps will allow forest managers to reduce the area where they must focus their control efforts and thus allocate resources more efficiently.
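The logistic regression risk model singled out in this abstract can be sketched in a few lines; the distance features, coefficients, and sample below are invented stand-ins for the park survey data, not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training sample (hypothetical stand-in for the survey data):
# predictors are distances (km) to the nearest trail, road, and stream.
n = 400
X = rng.uniform(0.0, 5.0, size=(n, 3))
# Assumed rule for illustration: infestation risk rises near trails/roads/streams.
logit_true = 2.0 - X.sum(axis=1) / 3.0
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit_true))).astype(float)

# Fit logistic regression by plain gradient descent (no external ML library).
Xb = np.column_stack([np.ones(n), X])        # add intercept column
w = np.zeros(Xb.shape[1])
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))        # predicted risk in (0, 1)
    w -= 0.1 * Xb.T @ (p - y) / n            # gradient step on the log-loss

risk = 1.0 / (1.0 + np.exp(-Xb @ w))
# Classify "high risk" at the 0.5 threshold, as in a simple risk map.
train_acc = np.mean((risk > 0.5) == (y > 0.5))
```

Thresholding the fitted probabilities cell by cell is what turns such a model into a regional risk map.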
Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.
2006-01-01
We tested the precision and accuracy of the Trimble GeoXT™ global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement.
© 2006 Springer Science+Business Media, Inc.
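The two quantities at the heart of this study, averaging position fixes into a DGPS point and measuring its horizontal error against a surveyed benchmark, can be sketched as follows; the coordinates and 1 m noise level are made up (the real error ranges come from the abstract, not this code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical GPS position fixes (easting/northing, metres) scattered with
# ~1 m noise around a surveyed benchmark.
truth = np.array([500000.0, 3900000.0])
fixes = truth + rng.normal(0.0, 1.0, size=(50, 2))

def average_point(fixes):
    """Average several position fixes into one point, as done per DGPS point."""
    return fixes.mean(axis=0)

def horizontal_error(point, truth):
    """Planar (2-D) distance between an estimated point and the benchmark."""
    return float(np.hypot(*(point - truth)))

pt = average_point(fixes)
err_avg = horizontal_error(pt, truth)            # error of the averaged point
err_single = horizontal_error(fixes[0], truth)   # error of one raw fix
```

The study's finding was that, in practice, increasing the number of fixes averaged per point did not measurably change this horizontal error.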
Lado, Bettina; Matus, Ivan; Rodríguez, Alejandra; Inostroza, Luis; Poland, Jesse; Belzile, François; del Pozo, Alejandro; Quincke, Martín; Castro, Marina; von Zitzewitz, Jarislav
2013-12-09
In crop breeding, interest in predicting the performance of candidate cultivars in the field has increased due to recent advances in molecular breeding technologies. However, the complexity of the wheat genome presents some challenges for applying new technologies in molecular marker identification with next-generation sequencing. We applied genotyping-by-sequencing, a recently developed method to identify single-nucleotide polymorphisms, in the genomes of 384 wheat (Triticum aestivum) genotypes that were field tested under three different water regimes in Mediterranean climatic conditions: rain-fed only, mild water stress, and fully irrigated. We identified 102,324 single-nucleotide polymorphisms in these genotypes, and the phenotypic data were used to train and test genomic selection models intended to predict yield, thousand-kernel weight, number of kernels per spike, and heading date. Phenotypic data showed marked spatial variation. Therefore, different models were tested to correct for the trends observed in the field. A mixed model using moving means as a covariate was found to best fit the data. When we applied the genomic selection models, the accuracy of predicted traits increased with spatial adjustment. Multiple genomic selection models were tested, and a Gaussian kernel model was determined to give the highest accuracy. The best predictions between environments were obtained when data from different years were used to train the model. Our results confirm that genotyping-by-sequencing is an effective tool to obtain genome-wide information for crops with complex genomes, that these data are efficient for predicting traits, and that correction of spatial variation is a crucial ingredient to increase prediction accuracy in genomic selection models.
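The Gaussian kernel model reported as most accurate can be illustrated with kernel ridge regression on synthetic marker data; the marker matrix, median-distance bandwidth, and regularization below are assumptions for illustration, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in data: 0/1/2 marker dosages for 100 lines x 500 SNPs
# and a trait driven by a handful of markers plus noise.
n, p = 100, 500
M = rng.integers(0, 3, size=(n, p)).astype(float)
beta = np.zeros(p)
beta[:20] = rng.normal(0.0, 1.0, 20)
y = M @ beta + rng.normal(0.0, 1.0, n)

def gaussian_kernel(A, B, h):
    """K(a, b) = exp(-||a - b||^2 / h), the kernel family named in the abstract."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / h)

# Kernel ridge regression as a minimal genomic-prediction model.
h = np.median(((M[:, None, :] - M[None, :, :]) ** 2).sum(axis=2))  # bandwidth
K = gaussian_kernel(M, M, h)
alpha = np.linalg.solve(K + 1.0 * np.eye(n), y)   # ridge fit, lambda = 1
y_hat = K @ alpha                                 # in-sample predictions

r = np.corrcoef(y, y_hat)[0, 1]   # prediction "accuracy" as used in GS work
```

In a real genomic-selection workflow the correlation would of course be computed on held-out lines (or, as in the paper, across years), not in-sample.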
NASA Astrophysics Data System (ADS)
Murillo Feo, C. A.; Martínez Martínez, L. J.; Correa Muñoz, N. A.
2016-06-01
The accuracy of locating attributes on topographic surfaces when using GPS in mountainous areas is affected by obstacles to wave propagation. As part of this research on the semi-automatic detection of landslides, we evaluated the accuracy and spatial distribution of the horizontal error in GPS positioning in the tertiary road network of six municipalities located in mountainous areas in the department of Cauca, Colombia, using geo-referencing with GPS mapping equipment and static-fast and pseudo-kinematic methods. We obtained quality parameters for the GPS surveys with differential correction, using a post-processing method. The consolidated database underwent exploratory analyses to determine the statistical distribution, a multivariate analysis to establish relationships and associations between the variables, and an analysis of the spatial variability and calculation of accuracy, considering the effect of non-Gaussian distribution errors. The evaluation of the internal validity of the data provides metrics with a confidence level of 95% between 1.24 and 2.45 m in the static-fast mode and between 0.86 and 4.2 m in the pseudo-kinematic mode. The external validity had an absolute error of 4.69 m, indicating that this descriptor is more critical than precision. Based on the ASPRS standard, the scale obtained with the evaluated equipment was in the order of 1:20,000, a level of detail expected in the landslide-mapping project. Modelling the spatial variability of the horizontal errors from the empirical semi-variogram analysis showed prediction errors close to the external validity of the devices.
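The empirical semi-variogram used to model the spatial variability of horizontal error can be sketched with the classical Matheron estimator; the coordinates, trend, and lag bins below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

def empirical_semivariogram(coords, values, bins):
    """Matheron estimator: gamma(h) = half the mean squared difference over
    point pairs whose separation distance falls in each lag bin."""
    n = len(values)
    i, j = np.triu_indices(n, k=1)                 # all unordered pairs
    d = np.hypot(*(coords[i] - coords[j]).T)       # pair separations
    sq = 0.5 * (values[i] - values[j]) ** 2
    gamma = np.full(len(bins) - 1, np.nan)
    for b in range(len(bins) - 1):
        m = (d >= bins[b]) & (d < bins[b + 1])
        if m.any():
            gamma[b] = sq[m].mean()
    return gamma

# Hypothetical horizontal-error samples along a road network: a smooth spatial
# trend plus noise, so semivariance should grow with lag distance.
coords = rng.uniform(0.0, 1000.0, size=(200, 2))          # metres
values = 0.002 * coords[:, 0] + rng.normal(0.0, 0.2, 200) # error, metres
gamma = empirical_semivariogram(coords, values, np.linspace(0.0, 500.0, 6))
```

Fitting a model (spherical, exponential, etc.) to these binned values is what then supports kriging-style prediction of the error field.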
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Derek; Mutanga, Theodore
Purpose: An end-to-end testing methodology was designed to evaluate the overall SRS treatment fidelity, incorporating all steps in the linac-based frameless radiosurgery treatment delivery process. The study details our commissioning experience with the Steev (CIRS, Norfolk, VA) stereotactic anthropomorphic head phantom, including modification, test design, and baseline measurements. Methods: Repeated MR and CT scans were performed with interchanging inserts. MR-CT fusion accuracy was evaluated and the insert spatial coincidence was verified on CT. Five non-coplanar arcs delivered a prescription dose to a 15 mm spherical CTV with a 2 mm PTV margin. Following setup, CBCT-based shifts were applied as per protocol. Sequential measurements were performed by interchanging inserts without disturbing the setup. Spatial and dosimetric accuracy was assessed by a combination of CBCT hidden-target, radiochromic film, and ion chamber measurements. To facilitate film registration, the film insert was modified in-house by etching marks. Results: MR fusion error and insert spatial coincidences were within 0.3 mm. Both CBCT and film measurements showed spatial displacements of 1.0 mm in similar directions. Both coronal and sagittal films reported a 2.3% higher target dose relative to the treatment plan. The corrected ion chamber measurement was similarly greater, by 1.0%. The 3%/2 mm gamma pass rate was 99% for both films. Conclusions: A comprehensive end-to-end testing methodology was implemented for our SRS QA program. The Steev phantom enabled realistic evaluation of the entire treatment process. Overall spatial and dosimetric accuracy of the delivery were 1 mm and 3%, respectively.
NASA Astrophysics Data System (ADS)
Arantes Camargo, Livia; Marques Júnior, José; Reynaldo Ferracciú Alleoni, Luís; Tadeu Pereira, Gener; De Bortoli Teixeira, Daniel; Santos Rabelo de Souza Bahia, Angélica
2017-04-01
Environmental impact assessments may be assisted by spatial characterization of potentially toxic elements (PTEs). Diffuse reflectance spectroscopy (DRS) and X-ray fluorescence spectroscopy (XRF) are rapid, non-destructive, low-cost prediction tools for the simultaneous characterization of different soil attributes. Although low concentrations of PTEs might preclude the observation of spectral features, their contents can be predicted using spectroscopy by exploring the existing relationship between the PTEs and soil attributes with spectral features. This study aimed to evaluate, in three geomorphic surfaces of Oxisols, the capacity for predicting PTEs (Ba, Co, and Ni) and their spatial variability by means of DRS and XRF. Soil samples were collected from three geomorphic surfaces and analyzed for chemical, physical, and mineralogical properties, and then analyzed with DRS (visible + near infrared, VIS+NIR, and mid-infrared, MIR) and XRF equipment. PTE prediction models were calibrated using partial least squares regression (PLSR). PTE spatial distribution maps were built, using geostatistics, from the values calculated by the best-performing calibrated models. PTE prediction models were satisfactorily calibrated using MIR DRS for Ba and Co (residual prediction deviation, RPD > 3.0), VIS DRS for Ni (RPD > 2.0), and XRF for all the studied PTEs (RPD > 1.8). DRS- and XRF-predicted values allowed the characterization and understanding of the spatial variability of the studied PTEs.
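The residual prediction deviation (RPD) thresholds quoted above follow from a simple definition, sketched here with made-up Ba contents:

```python
import numpy as np

def rpd(observed, predicted):
    """Residual prediction deviation: standard deviation of the observed
    values divided by the root-mean-square error of prediction (RMSEP)."""
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    rmsep = np.sqrt(np.mean((observed - predicted) ** 2))
    return observed.std(ddof=1) / rmsep

# Tiny worked example with invented Ba contents (mg/kg):
obs = np.array([120.0, 95.0, 140.0, 110.0, 130.0, 100.0])
good = obs + np.array([2.0, -1.0, 1.5, -2.0, 1.0, -1.5])  # small residuals
poor = np.full_like(obs, obs.mean())                      # predicts the mean
```

A model that merely predicts the mean scores an RPD near 1, while the RPD > 3.0 reported for MIR DRS implies residuals far smaller than the natural spread of the data.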
Adding spatial flexibility to source-receptor relationships for air quality modeling.
Pisoni, E; Clappier, A; Degraeuwe, B; Thunis, P
2017-04-01
To cope with computing power limitations, air quality models that are used in integrated assessment applications are generally approximated by simpler expressions referred to as "source-receptor relationships" (SRR). In addition to speed, it is desirable for the SRR also to be spatially flexible (applicable over a wide range of situations) and to require a "light setup" (based on a limited number of full Air Quality Model (AQM) simulations). But "speed", "flexibility" and "light setup" do not naturally come together, and a good compromise must be found that preserves "accuracy", i.e. good comparability between SRR results and the AQM. In this work we further develop an SRR methodology to better capture spatial flexibility. The updated methodology is based on a cell-to-cell relationship, in which a bell-shaped function links emissions to concentrations. Maintaining a cell-to-cell relationship is shown to be the key element needed to ensure spatial flexibility, while at the same time the proposed approach to link emissions and concentrations guarantees a "light setup" phase. Validation has been repeated on different areas and domain sizes (countries, regions, and provinces throughout Europe) for precursors reduced independently or simultaneously. All runs showed a bias of around 10% between the full AQM and the SRR. This methodology allows assessing the impact on air quality of emission scenarios applied over any given area in Europe (regions, sets of regions, countries), provided that a limited number of AQM simulations are performed for training.
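A minimal sketch of such a cell-to-cell SRR with a bell-shaped weight; the Gaussian form, grid size, and parameters below are assumptions for illustration, not the calibrated functions of the paper:

```python
import numpy as np

# 2-D grid of cells; the weight linking each source cell to a receptor cell
# decays with distance following a bell (Gaussian) shape.
ny, nx = 20, 20
yy, xx = np.mgrid[0:ny, 0:nx]

def srr_delta_conc(delta_emis, rx, ry, a=1.0, sigma=3.0):
    """Concentration change at receptor (rx, ry) = sum over all cells of a
    bell-shaped, distance-decaying weight times that cell's emission change."""
    d2 = (xx - rx) ** 2 + (yy - ry) ** 2
    w = a * np.exp(-d2 / (2.0 * sigma ** 2))
    return float((w * delta_emis).sum())

# A 30% emission cut confined to a 5x5 block of cells:
delta = np.zeros((ny, nx))
delta[8:13, 8:13] = -0.3

near = srr_delta_conc(delta, 10, 10)   # receptor inside the cut area
far = srr_delta_conc(delta, 1, 1)      # receptor far from the cut
```

Because the relationship is kept cell-to-cell, the same trained weights can be applied to an emission scenario over any sub-area of the domain, which is exactly the spatial flexibility the abstract argues for.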
Assessment of Required Accuracy of Digital Elevation Data for Hydrologic Modeling
NASA Technical Reports Server (NTRS)
Kenward, T.; Lettenmaier, D. P.
1997-01-01
The effect of the vertical accuracy of Digital Elevation Models (DEMs) on hydrologic models is evaluated by comparing three DEMs and the resulting hydrologic model predictions applied to a 7.2 sq km USDA-ARS watershed at Mahantango Creek, PA. The high resolution (5 m) DEM was resampled to a 30 m resolution using a method that constrained the spatial structure of the elevations to be comparable with the USGS and SIR-C DEMs. The resulting 30 m DEM was used as the reference product for subsequent comparisons. Spatial fields of directly derived quantities, such as elevation differences, slope, and contributing area, were compared to the reference product, as were hydrologic model output fields derived using each of the three DEMs at the common 30 m spatial resolution.
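Slope, one of the directly derived quantities compared above, can be computed from a gridded DEM with central differences; the tilted-plane DEM below is synthetic:

```python
import numpy as np

def slope_degrees(dem, cell_size):
    """Slope from a gridded DEM via central differences on interior cells
    (one-sided differences at the edges, as numpy.gradient does)."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Synthetic tilted plane at 30 m resolution: elevation rises 1 m per 30 m
# eastward, so the true slope is arctan(1/30) everywhere.
cell = 30.0
x = np.arange(50) * cell
dem = np.tile(x / 30.0, (40, 1))        # elevation in metres
s = slope_degrees(dem, cell)
```

Comparing such derived fields (and contributing area, which additionally needs a flow-routing algorithm) between DEMs is how the vertical-accuracy effect propagates into the hydrologic comparison.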
Integration of Heterogenous Digital Surface Models
NASA Astrophysics Data System (ADS)
Boesch, R.; Ginzler, C.
2011-08-01
The application of extended digital surface models often reveals that, despite an acceptable global accuracy for a given dataset, the local accuracy of the model can vary over a wide range. For high resolution applications which cover the spatial extent of a whole country, this can be a major drawback. Within the Swiss National Forest Inventory (NFI), two digital surface models are available, one derived from LiDAR point data and the other from aerial images. Automatic photogrammetric image matching with ADS80 aerial infrared images with 25 cm and 50 cm resolution is used to generate a surface model (ADS-DSM) with 1 m resolution covering the whole of Switzerland (approx. 41000 km2). The spatially corresponding LiDAR dataset has a global point density of 0.5 points per m2 and is mainly used in applications as an interpolated grid with 2 m resolution (LiDAR-DSM). Although both surface models seem to offer a comparable accuracy from a global view, local analysis shows significant differences. Both datasets have been acquired over several years. Concerning the LiDAR-DSM, different flight patterns and inconsistent quality control result in a significantly varying point density. The image acquisition of the ADS-DSM is also stretched over several years and the model generation is hampered by clouds, varying illumination and shadow effects. Nevertheless, many classification and feature extraction applications requiring high resolution data depend on the local accuracy of the used surface model; therefore precise knowledge of the local data quality is essential. The commercial photogrammetric software NGATE (part of SOCET SET) generates the image-based surface model (ADS-DSM) and also delivers a map with figures of merit (FOM) of the matching process for each calculated height pixel. The FOM-map contains matching codes like high slope, excessive shift or low correlation. For the generation of the LiDAR-DSM only first- and last-pulse data was available.
Therefore only the point distribution can be used to derive a local accuracy measure. For the calculation of a robust point distribution measure, a constrained triangulation of local points (within an area of 100m2) has been implemented using the Open Source project CGAL. The area of each triangle is a measure for the spatial distribution of raw points in this local area. Combining the FOM-map with the local evaluation of LiDAR points allows an appropriate local accuracy evaluation of both surface models. The currently implemented strategy ("partial replacement") uses the hypothesis, that the ADS-DSM is superior due to its better global accuracy of 1m. If the local analysis of the FOM-map within the 100m2 area shows significant matching errors, the corresponding area of the triangulated LiDAR points is analyzed. If the point density and distribution is sufficient, the LiDAR-DSM will be used in favor of the ADS-DSM at this location. If the local triangulation reflects low point density or the variance of triangle areas exceeds a threshold, the investigated location will be marked as NODATA area. In a future implementation ("anisotropic fusion") an anisotropic inverse distance weighting (IDW) will be used, which merges both surface models in the point data space by using FOM-map and local triangulation to derive a quality weight for each of the interpolation points. The "partial replacement" implementation and the "fusion" prototype for the anisotropic IDW make use of the Open Source projects CGAL (Computational Geometry Algorithms Library), GDAL (Geospatial Data Abstraction Library) and OpenCV (Open Source Computer Vision).
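The "partial replacement" decision logic can be sketched for a single 100 m2 cell; the point-count threshold below is an invented stand-in for the CGAL triangulation-based density measure described above:

```python
import numpy as np

NODATA = -9999.0

def fuse_cell(ads_height, fom_ok, lidar_points_in_cell, min_points=5):
    """'Partial replacement' for one 10 m x 10 m (100 m2) cell: keep the
    image-based ADS-DSM height where matching succeeded, fall back to a
    LiDAR-derived height where it failed and enough points exist, and mark
    the cell NODATA otherwise. A simple point count replaces the CGAL
    triangle-area criterion of the real implementation."""
    if fom_ok:
        return ads_height
    if len(lidar_points_in_cell) >= min_points:
        return float(np.median(lidar_points_in_cell))  # robust local height
    return NODATA

good = fuse_cell(412.3, True, [])
fallback = fuse_cell(412.3, False, [409.8, 410.1, 410.0, 409.9, 410.2])
empty = fuse_cell(412.3, False, [409.9, 410.0])
```

The planned "anisotropic fusion" would replace this hard switch with an inverse-distance weighting that blends both models according to their local quality weights.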
A PIXEL COMPOSITION-BASED REFERENCE DATA SET FOR THEMATIC ACCURACY ASSESSMENT
Developing reference data sets for accuracy assessment of land-cover classifications derived from coarse spatial resolution sensors such as MODIS can be difficult due to the large resolution differences between the image data and available reference data sources. Ideally, the spa...
Spatial Classification of Orchards and Vineyards with High Spatial Resolution Panchromatic Imagery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warner, Timothy; Steinmaus, Karen L.
2005-02-01
New high resolution single spectral band imagery offers the capability to conduct image classifications based on spatial patterns in imagery. A classification algorithm based on autocorrelation patterns was developed to automatically extract orchards and vineyards from satellite imagery. The algorithm was tested on IKONOS imagery over Granger, WA, which resulted in a classification accuracy of 95%.
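The idea behind classifying by autocorrelation pattern: regularly spaced orchard or vineyard rows produce a periodic brightness signal whose autocorrelation peaks at the row spacing. A 1-D sketch on a synthetic transect (the actual algorithm operates on 2-D IKONOS imagery):

```python
import numpy as np

def dominant_spacing(profile):
    """Estimate the dominant row spacing (pixels) of a 1-D brightness profile
    from the first non-zero-lag peak of its autocorrelation."""
    x = profile - profile.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    lag = 1
    while lag + 1 < len(ac) and ac[lag + 1] <= ac[lag]:
        lag += 1                                        # descend off lag 0
    while lag + 1 < len(ac) and ac[lag + 1] > ac[lag]:
        lag += 1                                        # climb to first peak
    return lag

# Synthetic orchard transect: bright tree rows every 8 pixels on bare soil.
profile = np.zeros(160)
profile[::8] = 1.0
spacing = dominant_spacing(profile)
```

Natural vegetation lacks such a periodic autocorrelation signature, which is what lets a rule of this kind separate orchards and vineyards from other cover in single-band imagery.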
ERIC Educational Resources Information Center
Collin, Charles A.; Liu, Chang Hong; Troje, Nikolaus F.; McMullen, Patricia A.; Chaudhuri, Avi
2004-01-01
Previous studies have suggested that face identification is more sensitive to variations in spatial frequency content than object recognition, but none have compared how sensitive the 2 processes are to variations in spatial frequency overlap (SFO). The authors tested face and object matching accuracy under varying SFO conditions. Their results…
NASA Astrophysics Data System (ADS)
Maksimova, L. A.; Ryabukho, P. V.; Mysina, N. Yu.; Lyakin, D. V.; Ryabukho, V. P.
2018-04-01
We have investigated the capabilities of the method of digital speckle interferometry for determining subpixel displacements of a speckle structure formed by a displaceable or deformable object with a scattering surface. An analysis of spatial spectra of speckle structures makes it possible to perform measurements with subpixel accuracy and to extend the lower boundary of the range of measurements of displacements of speckle structures to the range of subpixel values. The method is realized on the basis of digital recording of the images of undisplaced and displaced speckle structures, their spatial frequency analysis using numerically specified constant phase shifts, and correlation analysis of spatial spectra of speckle structures. Transformation into the frequency range makes it possible to obtain the quantities to be measured with subpixel accuracy from the shift of the interference-pattern minimum in the diffraction halo by introducing an additional phase shift into the complex spatial spectrum of the speckle structure, or from the slope of the linear plot of the function of accumulated phase difference in the field of the complex spatial spectrum of the displaced speckle structure. The capabilities of the method have been investigated in a full-scale experiment.
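The phase-slope idea can be reproduced in one dimension: the phase of the cross-power spectrum of the undisplaced and displaced records grows linearly with spatial frequency, and its slope yields the displacement with subpixel accuracy. The signal and shift below are synthetic, not experimental data:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 256
true_shift = 0.3  # pixels, well below one pixel

# A speckle-like 1-D intensity record and a copy displaced via the Fourier
# shift theorem.
base = np.convolve(rng.normal(size=N), np.ones(8) / 8.0, mode="same")
F = np.fft.fft(base)
f = np.fft.fftfreq(N)                      # cycles per pixel
shifted = np.fft.ifft(F * np.exp(-2j * np.pi * f * true_shift)).real

# The cross-power spectrum has phase 2*pi*f*shift; fit its slope over low,
# unwrapped frequencies to recover the displacement.
C = np.fft.fft(base) * np.conj(np.fft.fft(shifted))
k = np.arange(1, 20)
slope = np.polyfit(f[k], np.angle(C[k]), 1)[0]
estimated_shift = slope / (2.0 * np.pi)
```

Restricting the fit to low frequencies keeps the phase below pi for subpixel shifts, so no unwrapping is needed.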
NASA Astrophysics Data System (ADS)
Philip, Sajeev; Martin, Randall V.; Keller, Christoph A.
2016-05-01
Chemistry-transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemistry-transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to operator duration. Subsequently, we compare the species simulated with operator durations from 10 to 60 min as typically used by global chemistry-transport models, and identify the operator durations that optimize both computational expense and simulation accuracy. We find that longer continuous transport operator duration increases concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production with longer transport operator duration. Longer chemical operator duration decreases sulfate and ammonium but increases nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by up to a factor of 5 from fine (5 min) to coarse (60 min) operator duration. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, secondary inorganic aerosols, ozone and carbon monoxide with a finer temporal or spatial resolution taken as "truth". Relative simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) operator duration. Chemical operator duration twice that of the transport operator duration offers more simulation accuracy per unit computation. 
However, the relative simulation error from coarser spatial resolution generally exceeds that from longer operator duration; e.g., degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different operator durations in offline chemistry-transport models. We encourage chemistry-transport model users to specify in publications the durations of operators due to their effects on simulation accuracy.
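The operator-duration effect can be illustrated with a toy split scheme: 1-D transport alternating with spatially varying first-order loss. Because the two operators do not commute, longer split steps drift further from a fine-step reference; all numbers are illustrative, not GEOS-Chem settings:

```python
import numpy as np

n = 120
x = np.arange(n)
k = 0.02 * (1.0 + np.sin(2 * np.pi * x / n))   # loss rate varies in space
c0 = np.zeros(n)
c0[10:20] = 1.0                                # initial tracer pulse

def run(dt, total=60):
    """Sequential operator splitting: advect by dt cells (speed 1 cell per
    time unit), then apply first-order loss for dt time units."""
    c = c0.copy()
    for _ in range(total // dt):
        c = np.roll(c, dt)                     # transport operator
        c = c * np.exp(-k * dt)                # chemistry operator
    return c

ref = run(1)                                   # fine operator duration
err_short = np.sqrt(np.mean((run(2) - ref) ** 2))
err_long = np.sqrt(np.mean((run(6) - ref) ** 2))
```

The coarse scheme samples the spatially varying loss rate less often along each parcel's path, which is the same mechanism by which long transport steps bias emitted-species concentrations in the full model.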
NASA Astrophysics Data System (ADS)
Philip, S.; Martin, R. V.; Keller, C. A.
2015-11-01
Chemical transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemical transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to temporal resolution. Subsequently, we compare the tracers simulated with operator durations from 10 to 60 min as typically used by global chemical transport models, and identify the timesteps that optimize both computational expense and simulation accuracy. We found that longer transport timesteps increase concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production at longer transport timesteps. Longer chemical timesteps decrease sulfate and ammonium but increase nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by an order of magnitude from fine (5 min) to coarse (60 min) temporal resolution. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, ozone, carbon monoxide and secondary inorganic aerosols with a finer temporal or spatial resolution taken as truth. Simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) temporal resolution. Chemical timesteps twice that of the transport timestep offer more simulation accuracy per unit computation. However, simulation error from coarser spatial resolution generally exceeds that from longer timesteps; e.g. 
degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different temporal resolutions in offline chemical transport models. We encourage the chemical transport model users to specify in publications the durations of operators due to their effects on simulation accuracy.
NASA Astrophysics Data System (ADS)
Dedic, Chloe Elizabeth
Hybrid femtosecond/picosecond coherent anti-Stokes Raman scattering (fs/ps CARS) is developed for measuring internal energy distributions, species concentration, and pressure for highly dynamic gas-phase environments. Systems of interest include next-generation combustors, plasma-based manufacturing and plasma-assisted combustion, and high-speed aerodynamic flow. These challenging environments include spatial variations and fast dynamics that require the spatial and temporal resolution offered by hybrid fs/ps CARS. A novel dual-pump fs/ps CARS approach is developed to simultaneously excite pure-rotational and rovibrational Raman coherences for dynamic thermometry (300-2400 K) and detection of major combustion species. This approach was also used to measure single-shot vibrational and rotational energy distributions of the nonequilibrium environment of a dielectric barrier discharge plasma. Detailed spatial distributions and shot-to-shot fluctuations of rotational and vibrational temperatures spanning 325-450 K and 1200-5000 K were recorded across the plasma and surrounding flow, and are compared to plasma emission spectroscopy measurements. Dual-pump hybrid fs/ps CARS allows for concise, kHz-rate measurements of vibrational and rotational energy distributions or temperatures at equilibrium and nonequilibrium without nonresonant wave-mixing or molecular collisional interference. Additionally, a highly transient ns laser spark is explored using CARS to measure temperature and pressure behind the shock wave and temperature of the expanding plasma kernel. Vibrational energy distributions at the exit of a microscale gaseous detonation tube are presented. Theory required to model fs/ps CARS response, including nonthermal energy distributions, is presented. The impact of nonequilibrium on measurement accuracy is explored, and a coherent line-mixing model is validated with high-pressure measurements. 
Temperature and pressure sensitivity are investigated for multiple measurement configurations, and accuracy and precision is quantified as a function of signal-to-noise for the fs/ps CARS system.
The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling
NASA Astrophysics Data System (ADS)
Thornes, Tobias; Duben, Peter; Palmer, Tim
2016-04-01
At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. 
If adopted, this new paradigm would represent a revolution in numerical modelling that could be of great benefit to the world.
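The precision levels involved can be made concrete by casting a model state to the different IEEE formats; half precision (16 bits) rounds with relative error below about 5e-4, consistent with the argument that small-scale variables are known far less precisely than 64 bits can express:

```python
import numpy as np

# Arbitrary O(1) model state standing in for an atmospheric variable field.
values = np.linspace(0.5, 8.0, 1000)

def max_rel_error(values, dtype):
    """Largest relative representation error after casting to `dtype`."""
    rounded = values.astype(dtype).astype(np.float64)
    return float(np.max(np.abs(rounded - values) / np.abs(values)))

err16 = max_rel_error(values, np.float16)   # half precision, 16 bits
err32 = max_rel_error(values, np.float32)   # single precision, 32 bits
```

In a scale-selective model the large-scale tier would keep the tighter format while small-scale tiers accept the half-precision rounding shown here.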
NASA Astrophysics Data System (ADS)
Dobriyal, Pariva; Qureshi, Ashi; Badola, Ruchi; Hussain, Syed Ainul
2012-08-01
The maintenance of elevated soil moisture is an important ecosystem service of natural ecosystems. Understanding the patterns of soil moisture distribution is useful to a wide range of agencies concerned with weather and climate, soil conservation, agricultural production and landscape management. However, the great heterogeneity in the spatial and temporal distribution of soil moisture and the lack of standard methods to estimate this property limit its quantification and use in research. This literature-based review aims to (i) compile the available knowledge on the methods used to estimate soil moisture at the landscape level, (ii) compare and evaluate the available methods on the basis of common parameters such as resource efficiency, accuracy of results and spatial coverage and (iii) identify the method that will be most useful for forested landscapes in developing countries. On the basis of the strengths and weaknesses of each of the methods reviewed, we conclude that the direct (gravimetric) method is accurate and inexpensive but is destructive, slow and time consuming and does not allow replications, thereby having limited spatial coverage. The suitability of indirect methods depends on the cost, accuracy, response time, effort involved in installation, management and durability of the equipment. Our review concludes that measurements of soil moisture using the Time Domain Reflectometry (TDR) and Ground Penetrating Radar (GPR) methods are instantaneously obtained and accurate. GPR may be used over larger areas (up to 500 × 500 m a day) but is not cost-effective and is more difficult to use in forested landscapes than TDR.
This review will be helpful to researchers, foresters, natural resource managers and agricultural scientists in selecting the appropriate method for estimation of soil moisture keeping in view the time and resources available to them and to generate information for efficient allocation of water resources and maintenance of soil moisture regime.
Systems and methods for knowledge discovery in spatial data
Obradovic, Zoran; Fiez, Timothy E.; Vucetic, Slobodan; Lazarevic, Aleksandar; Pokrajac, Dragoljub; Hoskinson, Reed L.
2005-03-08
Systems and methods are provided for knowledge discovery in spatial data, as well as for optimizing recipes used in spatial environments such as may be found in precision agriculture. A spatial data analysis and modeling module is provided which allows users to interactively and flexibly analyze and mine spatial data. The spatial data analysis and modeling module applies spatial data mining algorithms through a number of steps. The data loading and generation module obtains or generates spatial data and allows for basic partitioning. The inspection module provides basic statistical analysis. The preprocessing module smoothes and cleans the data and allows for basic manipulation of the data. The partitioning module provides for more advanced data partitioning. The prediction module applies regression and classification algorithms on the spatial data. The integration module enhances prediction methods by combining and integrating models. The recommendation module provides the user with site-specific recommendations as to how to optimize a recipe for a spatial environment, such as a fertilizer recipe for an agricultural field.
Breast cancer mitosis detection in histopathological images with spatial feature extraction
NASA Astrophysics Data System (ADS)
Albayrak, Abdülkadir; Bilgin, Gökhan
2013-12-01
In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is a very expensive and time-consuming process. The development of digital imaging in pathology has enabled reasonable and effective solutions to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell has more distinctive textural dissimilarities than other normal cells. Hence, it is important to incorporate spatial information in the feature extraction or post-processing steps. As a main part of this study, the Haralick texture descriptor has been proposed with different spatial window sizes in RGB and La*b* color spaces, so that spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. Extracted features are compared with various sample sizes by Support Vector Machines using the k-fold cross-validation method. The results show that separation accuracy for mitotic and non-mitotic cellular pixels improves with increasing spatial window size.
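The second-order statistics behind Haralick descriptors can be sketched with a grey-level co-occurrence matrix and its contrast feature; the two 8 × 8 patches below are synthetic stand-ins for low-variation and high-variation texture:

```python
import numpy as np

def glcm(patch, levels=4):
    """Grey-level co-occurrence matrix for the horizontal (dx=1) neighbour,
    the kind of second-order statistic Haralick features are built from."""
    P = np.zeros((levels, levels))
    for a, b in zip(patch[:, :-1].ravel(), patch[:, 1:].ravel()):
        P[a, b] += 1
    return P / P.sum()

def contrast(P):
    """Haralick contrast: sum over (i, j) of P(i, j) * (i - j)^2."""
    i, j = np.indices(P.shape)
    return float((P * (i - j) ** 2).sum())

# A flat patch vs. a high-variation striped patch (grey levels 0..3):
flat = np.ones((8, 8), dtype=int)
busy = np.indices((8, 8)).sum(axis=0) % 4
c_flat = contrast(glcm(flat))
c_busy = contrast(glcm(busy))
```

Computing such features over sliding windows of different sizes, as the study does, captures how far the textural dissimilarity of mitotic pixels extends into their neighbourhood.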
Common mechanisms of spatial attention in memory and perception: a tactile dual-task study.
Katus, Tobias; Andersen, Søren K; Müller, Matthias M
2014-03-01
Orienting attention to locations in mnemonic representations engages processes that functionally and anatomically overlap the neural circuitry guiding prospective shifts of spatial attention. The attention-based rehearsal account predicts that the requirement to withdraw attention from a memorized location impairs memory accuracy. In a dual-task study, we simultaneously presented retro-cues and pre-cues to guide spatial attention in short-term memory (STM) and perception, respectively. The spatial direction of each cue was independent of the other. The locations indicated by the combined cues could be compatible (same hand) or incompatible (opposite hands). Incompatible directional cues decreased lateralized activity in brain potentials evoked by visual cues, indicating interference in the generation of prospective attention shifts. The detection of external stimuli at the prospectively cued location was impaired when the memorized location was part of the perceptually ignored hand. The disruption of attention-based rehearsal by means of incompatible pre-cues reduced memory accuracy and affected encoding of tactile test stimuli at the retrospectively cued hand. These findings highlight the functional significance of spatial attention for spatial STM. The bidirectional interactions between both tasks demonstrate that spatial attention is a shared neural resource of a capacity-limited system that regulates information processing in internal and external stimulus representations.
A correction function method for the wave equation with interface jump conditions
NASA Astrophysics Data System (ADS)
Abraham, David S.; Marques, Alexandre Noll; Nave, Jean-Christophe
2018-01-01
In this paper a novel method to solve the constant coefficient wave equation, subject to interface jump conditions, is presented. In general, such problems pose issues for standard finite difference solvers, as the inherent discontinuity in the solution results in erroneous derivative information wherever the stencils straddle the given interface. Here, however, the recently proposed Correction Function Method (CFM) is used, in which correction terms are computed from the interface conditions, and added to affected nodes to compensate for the discontinuity. In contrast to existing methods, these corrections are not simply defined at affected nodes, but rather generalized to a continuous function within a small region surrounding the interface. As a result, the correction function may be defined in terms of its own governing partial differential equation (PDE) which may be solved, in principle, to arbitrary order of accuracy. The resulting scheme is not only arbitrarily high order, but also robust, having already seen application to Poisson problems and the heat equation. By extending the CFM to this new class of PDEs, the treatment of wave interface discontinuities in homogeneous media becomes possible. This allows, for example, for the straightforward treatment of infinitesimal source terms and sharp boundaries, free of staircasing errors. Additionally, new modifications to the CFM are derived, allowing compatibility with explicit multi-step methods, such as Runge-Kutta (RK4), without a reduction in accuracy. These results are then verified through numerous numerical experiments in one and two spatial dimensions.
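For context, a minimal second-order leapfrog solver for the smooth 1D constant-coefficient wave equation (no interface, hence no correction terms) shows the baseline finite-difference scheme that the CFM augments at nodes whose stencils straddle an interface. This is an illustrative sketch, not the authors' code:

```python
import numpy as np

c, L, nx = 1.0, 1.0, 101
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.5 * dx / c          # CFL number 0.5, stable for leapfrog
nt = 200

u_prev = np.sin(np.pi * x)                 # u(x, 0)
# First step via a Taylor expansion, using u_t(x, 0) = 0
lap = np.zeros(nx)
lap[1:-1] = (u_prev[2:] - 2 * u_prev[1:-1] + u_prev[:-2]) / dx**2
u = u_prev + 0.5 * (c * dt) ** 2 * lap
u[0] = u[-1] = 0.0

for n in range(1, nt):
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u_next = 2 * u - u_prev + (c * dt) ** 2 * lap
    u_next[0] = u_next[-1] = 0.0           # Dirichlet boundaries
    u_prev, u = u, u_next

# Compare against the analytic standing wave sin(pi x) cos(pi c t)
t_final = nt * dt
exact = np.sin(np.pi * x) * np.cos(np.pi * c * t_final)
err = np.max(np.abs(u - exact))
```

With a discontinuous interface inside the domain, the Laplacian stencils crossing it would be wrong; the CFM adds correction terms (obtained from a correction-function PDE) to exactly those nodes.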
A review of supervised object-based land-cover image classification
NASA Astrophysics Data System (ADS)
Ma, Lei; Li, Manchun; Ma, Xiaoxue; Cheng, Liang; Du, Peijun; Liu, Yongxue
2017-08-01
Object-based image classification for land-cover mapping purposes using remote-sensing imagery has attracted significant attention in recent years. Numerous studies conducted over the past decade have investigated a broad array of sensors, feature selection, classifiers, and other factors of interest. However, these research results have not yet been synthesized to provide coherent guidance on the effect of different supervised object-based land-cover classification processes. In this study, we first construct a database with 28 fields using qualitative and quantitative information extracted from 254 experimental cases described in 173 scientific papers. Second, the results of the meta-analysis are reported, including general characteristics of the studies (e.g., the geographic range of relevant institutes, preferred journals) and the relationships between factors of interest (e.g., spatial resolution and study area or optimal segmentation scale, accuracy and number of targeted classes), especially with respect to the classification accuracy of different sensors, segmentation scale, training set size, supervised classifiers, and land-cover types. Third, useful data on supervised object-based image classification are determined from the meta-analysis. For example, we find that supervised object-based classification is currently experiencing rapid advances, while development of the fuzzy technique is limited in the object-based framework. Furthermore, spatial resolution correlates with the optimal segmentation scale and study area, and Random Forest (RF) shows the best performance in object-based classification. The area-based accuracy assessment method can obtain stable classification performance, and indicates a strong correlation between accuracy and training set size, while the accuracy of the point-based method is likely to be unstable due to mixed objects. 
In addition, the overall accuracy benefits from higher-spatial-resolution images (e.g., unmanned aerial vehicle imagery) and from agricultural sites, where it also correlates with the number of targeted classes. More than 95.6% of studies involve an area of less than 300 ha, and the spatial resolution of images is predominantly between 0 and 2 m. Furthermore, we identify some methods that may advance supervised object-based image classification. For example, deep learning and type-2 fuzzy techniques may further improve classification accuracy. Lastly, scientists are strongly encouraged to report the results of uncertainty studies to further explore the effects of varied factors on supervised object-based image classification.
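Both the area-based and point-based accuracy assessments discussed above reduce to confusion-matrix summaries. A small sketch with made-up reference and predicted labels (overall accuracy plus per-class producer's accuracy):

```python
import numpy as np

def confusion_matrix(truth, pred, n_classes):
    """Rows = reference class, columns = mapped class."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(truth, pred):
        m[t, p] += 1
    return m

def overall_accuracy(m):
    return m.trace() / m.sum()

def producers_accuracy(m):
    # fraction of reference samples of each class correctly mapped
    return m.diagonal() / m.sum(axis=1)

# Hypothetical validation samples for a 3-class land-cover map
truth = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2, 2])
pred = np.array([0, 0, 1, 1, 1, 2, 2, 2, 0, 2])
cm = confusion_matrix(truth, pred, 3)
```

In an area-based assessment each sample would additionally be weighted by object or stratum area rather than counted once.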
TOPEX/Poseidon precision orbit determination production and expert system
NASA Technical Reports Server (NTRS)
Putney, Barbara; Zelensky, Nikita; Klosko, Steven
1993-01-01
TOPEX/Poseidon (T/P) is a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French Space Agency. The TOPEX/Poseidon Precision Orbit Determination Production System (PODPS) was developed at Goddard Space Flight Center (NASA/GSFC) to produce the absolute orbital reference required to support the fundamental ocean science goals of this satellite altimeter mission within NASA. The orbital trajectory for T/P is required to have an RMS accuracy of 13 centimeters in its radial component. This requirement is based on the effective use of the satellite altimetry for the isolation of absolute long-wavelength ocean topography, which is important for monitoring global changes in the ocean circulation system. This orbit modeling requirement is at an unprecedented accuracy level for this type of satellite. In order to routinely produce and evaluate these orbits, GSFC has developed a production and supporting expert system. The PODPS is a menu-driven system allowing routine importation and processing of tracking data for orbit determination, and an evaluation of the quality of the orbit so produced through a progressive series of tests. Phase 1 of the expert system grades the orbit and displays test results. Later phases, now undergoing implementation, will prescribe corrective actions when unsatisfactory results are seen. This paper describes the design and implementation of this orbit determination production system and the basis for its orbit accuracy assessment within the expert system.
Estimating crustal heterogeneity from double-difference tomography
Got, J.-L.; Monteiller, V.; Virieux, J.; Okubo, P.
2006-01-01
Seismic velocity parameters in limited, but heterogeneous volumes can be inferred using a double-difference tomographic algorithm, but to obtain meaningful results accuracy must be maintained at every step of the computation. MONTEILLER et al. (2005) have devised a double-difference tomographic algorithm that takes full advantage of the accuracy of cross-spectral time-delays of large correlated event sets. This algorithm performs an accurate computation of theoretical travel-time delays in heterogeneous media and applies a suitable inversion scheme based on optimization theory. When applied to Kilauea Volcano, in Hawaii, the double-difference tomography approach shows significant and coherent changes to the velocity model in the well-resolved volumes beneath the Kilauea caldera and the upper east rift. In this paper, we first compare the results obtained using MONTEILLER et al.'s algorithm with those obtained using the classic travel-time tomographic approach. Then, we evaluate the effect of using data series of different accuracies, such as handpicked arrival-time differences ("picking differences"), on the results produced by double-difference tomographic algorithms. We show that picking differences have a non-Gaussian probability density function (pdf). Using a hyperbolic secant pdf instead of a Gaussian pdf allows improvement of the double-difference tomographic result when using picking difference data. We completed our study by investigating the use of spatially discontinuous time-delay data. © Birkhäuser Verlag, Basel, 2006.
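The appeal of a hyperbolic secant pdf for outlier-prone picking differences can be seen directly from the densities: it integrates to one like a Gaussian but keeps far more mass in the tails, so large residuals are penalized less severely in the inversion. A small numerical sketch, assuming the standard parameterizations of both densities:

```python
import numpy as np

def sech_pdf(x):
    # Standard hyperbolic secant density: f(x) = (1/2) sech(pi x / 2)
    return 0.5 / np.cosh(0.5 * np.pi * x)

def gauss_pdf(x):
    # Standard normal density
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
# Numerical check that the sech density integrates to ~1
mass_sech = sech_pdf(x).sum() * dx
# Tail comparison: at 5 standard residual units the sech density is
# orders of magnitude larger than the Gaussian, i.e. heavier-tailed.
tail_ratio = sech_pdf(5.0) / gauss_pdf(5.0)
```

Maximizing a sech likelihood corresponds to a robust (soft-L1-like) misfit, which is why it behaves better than least squares on non-Gaussian picking differences.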
A Method for the Positioning and Orientation of Rail-Bound Vehicles in GNSS-Free Environments
NASA Astrophysics Data System (ADS)
Hung, R.; King, B. A.; Chen, W.
2016-06-01
Mobile Mapping Systems (MMS) are increasingly applied for spatial data collection in different fields because of their efficiency and the level of detail they can provide. The Position and Orientation System (POS), which is conventionally employed for locating and orienting an MMS, allows direct georeferencing of spatial data in real time. Since the performance of a POS depends on both the Inertial Navigation System (INS) and the Global Navigation Satellite System (GNSS), poor GNSS conditions, such as in long tunnels and underground, introduce the necessity for post-processing. In above-ground railways, mobile mapping technology is employed with high-performance sensors for finite usage, which has considerable potential for enhancing railway safety and management in real time. In contrast, underground railways present a challenge for a conventional POS, so alternative configurations are necessary to maintain data accuracy and alleviate the need for post-processing. This paper introduces a method of rail-bound navigation to replace the role of GNSS for railway applications. The proposed method integrates INS and track alignment data for environment-independent navigation and reduces the demand for post-processing. The principle of rail-bound navigation is presented and its performance is verified by an experiment using a consumer-grade Inertial Measurement Unit (IMU) and a small-scale railway model. The method produced a substantial improvement in position and orientation for a poorly initialised system, achieving centimetre-level positional accuracy. The potential improvements indicated by, and the limitations of, rail-bound navigation are also considered for further development in existing railway systems.
NASA Astrophysics Data System (ADS)
Wulder, M. A.
1998-03-01
Forest stand data are normally stored in a geographic information system (GIS) on the basis of areas of similar species combinations. Polygons are created based upon species assemblages and given labels relating the percentage of areal coverage by each significant species type within the specified area. As a result, estimation of leaf area index (LAI) from the digital numbers found within GIS-stored polygons lacks accuracy, as the predictive equations for LAI are normally developed for individual species, not species assemblages. A Landsat TM image was acquired to enable a classification which allows for the decomposition of forest-stand polygons into greater species detail. Knowledge of the actual internal composition of the stand polygons provides for computation of LAI values based upon the appropriate predictive equation, resulting in higher accuracy of these estimates. To accomplish this goal it was necessary to extract, for each cover type in each polygon, descriptive values to represent the digital numbers located in that portion of the polygon. The classified image dictates the species composition of the various portions of the polygon, and within these areas the raster pixel values are tabulated and averaged. Due to a lack of existing software tools to assess the raster values occurring within GIS polygons, a combination of remote sensing, GIS, UNIX, and specifically coded C programs was necessary. Such tools are frequently used by the spatial analyst and indicate the complexity of what may appear to be a straightforward spatial analysis problem.
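The per-polygon, per-cover-type averaging of digital numbers described above is essentially a zonal-statistics operation. A hypothetical sketch, with toy arrays standing in for one TM band clipped to a polygon and the corresponding classified image:

```python
import numpy as np

# Hypothetical digital numbers for one Landsat TM band within a polygon
dn = np.array([[10, 12, 30, 32],
               [11, 13, 31, 33],
               [10, 14, 30, 34],
               [12, 12, 32, 30]], dtype=float)
# Per-pixel species class from the classified image (1 and 2 here)
classes = np.array([[1, 1, 2, 2],
                    [1, 1, 2, 2],
                    [1, 1, 2, 2],
                    [1, 1, 2, 2]])

def zonal_means(values, labels):
    """Average the raster values falling in each class label."""
    return {int(c): float(values[labels == c].mean())
            for c in np.unique(labels)}

means = zonal_means(dn, classes)
```

Each per-class mean would then be fed into the species-specific LAI predictive equation, rather than applying one equation to the whole mixed polygon.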
Task-induced frequency modulation features for brain-computer interfacing.
Jayaram, Vinay; Hohmann, Matthias; Just, Jennifer; Schölkopf, Bernhard; Grosse-Wentrup, Moritz
2017-10-01
Task-induced amplitude modulation of neural oscillations is routinely used in brain-computer interfaces (BCIs) for decoding subjects' intents, and underlies some of the most robust and common methods in the field, such as common spatial patterns and Riemannian geometry. While there has been some interest in phase-related features for classification, both techniques usually presuppose that the frequencies of neural oscillations remain stable across various tasks. We investigate here whether features based on task-induced modulation of the frequency of neural oscillations enable decoding of subjects' intents with an accuracy comparable to task-induced amplitude modulation. We compare cross-validated classification accuracies using the amplitude- and frequency-modulated features, as well as a joint feature space, across subjects in various paradigms and pre-processing conditions. We show results with a motor imagery task, a cognitive task, and also preliminary results in patients with amyotrophic lateral sclerosis (ALS), as well as using common spatial patterns and Laplacian filtering. The frequency features alone do not significantly outperform traditional amplitude modulation features, and in some cases perform significantly worse. However, across both tasks and pre-processing conditions in healthy subjects, the joint space significantly outperforms either the frequency or amplitude features alone. The only exception is the ALS patients, for whom the dataset is of insufficient size to draw any statistically significant conclusions. Task-induced frequency modulation is robust and straightforward to compute, and increases performance when added to standard amplitude modulation features across paradigms. This allows more information to be extracted from the EEG signal cheaply and can be used throughout the field of BCIs.
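Common spatial patterns, one of the amplitude-based methods named above, can be computed by whitening the composite covariance and then diagonalizing one class covariance. A self-contained numpy sketch on synthetic two-channel data (real pipelines estimate covariances from band-filtered EEG trials):

```python
import numpy as np

def csp_filters(c1, c2):
    """CSP via whitening + eigendecomposition.
    Returns spatial filters (rows), sorted by class-1 variance ratio."""
    # Whiten the composite covariance: p @ (c1 + c2) @ p.T = I
    d, u = np.linalg.eigh(c1 + c2)
    p = (u / np.sqrt(d)).T
    # Diagonalize the whitened class-1 covariance
    lam, b = np.linalg.eigh(p @ c1 @ p.T)
    order = np.argsort(lam)[::-1]      # largest class-1 variance first
    return b[:, order].T @ p

rng = np.random.default_rng(0)
# Class 1 is strong on channel 0, class 2 on channel 1
x1 = rng.normal(size=(2, 1000)) * np.array([[3.0], [1.0]])
x2 = rng.normal(size=(2, 1000)) * np.array([[1.0], [3.0]])
c1 = x1 @ x1.T / x1.shape[1]
c2 = x2 @ x2.T / x2.shape[1]
w = csp_filters(c1, c2)
v1 = np.diag(w @ c1 @ w.T)   # class-1 variance per filter
v2 = np.diag(w @ c2 @ w.T)   # v1 + v2 = 1 by construction
```

Log-variances of the filtered signals are the classic amplitude features; the frequency-modulation features studied in the paper would be added alongside them.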
Lightning Mapping and Leader Propagation Reconstruction using LOFAR-LIM
NASA Astrophysics Data System (ADS)
Hare, B.; Ebert, U.; Rutjes, C.; Scholten, O.; Trinh, G. T. N.
2017-12-01
LOFAR (LOw Frequency ARray) is a radio telescope that consists of a large number of dual-polarized antennas spread over the northern Netherlands and beyond. The LOFAR for Lightning Imaging project (LOFAR-LIM) has successfully used LOFAR to map out lightning in the Netherlands. Since LOFAR covers a large frequency range (10-90 MHz), has antennas spread over a large area, and saves the raw trace data from the antennas, LOFAR-LIM can combine all the strongest aspects of both lightning mapping arrays and lightning interferometers. These aspects include nanosecond resolution between pulses, nanosecond timing accuracy, and an ability to map lightning in all 3 spatial dimensions and time. LOFAR should be able to map out overhead lightning with a spatial accuracy on the order of meters. The large amount of complex data provided by LOFAR has presented new data processing challenges, such as handling the time offsets between stations with large baselines and locating as many sources as possible. New algorithms to handle these challenges have been developed and will be discussed. Since the antennas are dual-polarized, all three components of the electric field can be extracted and the structure of the R.F. pulses can be investigated at a large number of distances and angles relative to the lightning source, potentially allowing for modeling of lightning current distributions relevant to the 10 to 90 MHz frequency range. R.F. pulses due to leader propagation will be presented, which show a complex sub-structure, indicating intricate physics that could potentially be reconstructed.
Tsuchida, Satoshi; Thome, Kurtis
2017-01-01
Radiometric cross-calibration between the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Terra-Moderate Resolution Imaging Spectroradiometer (MODIS) has been partially used to derive the ASTER radiometric calibration coefficient (RCC) curve as a function of date on visible to near-infrared bands. However, cross-calibration is not sufficiently accurate, since the effects of the differences in the sensors' spectral and spatial responses are not fully mitigated. The present study attempts to evaluate radiometric consistency across the two sensors using an improved cross-calibration algorithm to address the spectral and spatial effects and derive cross-calibration-based RCCs, which increases the ASTER calibration accuracy. Overall, radiances measured with ASTER bands 1 and 2 are on average 3.9% and 3.6% greater than those measured on the same scene with their MODIS counterparts, and ASTER band 3N (nadir) is 0.6% smaller than its MODIS counterpart in current radiance/reflectance products. The percentage root mean squared errors (%RMSEs) between the radiances of the two sensors are 3.7, 4.2, and 2.3 for ASTER bands 1, 2, and 3N, respectively, which lie slightly above or below the required ASTER radiometric calibration accuracy (4%). The uncertainty of the cross-calibration is analyzed by elaborating the error budget table to evaluate the International System of Units (SI)-traceability of the results. The use of the derived RCCs will allow further reduction of errors in ASTER radiometric calibration and subsequently improve interoperability across sensors for synergistic applications. PMID:28777329
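A %RMSE between matched radiances, as reported above, can be computed as follows. This assumes one common definition (RMS of the relative differences with respect to the reference sensor); the abstract does not specify the exact formula, and the sample values are made up:

```python
import numpy as np

def percent_rmse(reference, measured):
    """Root mean squared relative difference, in percent."""
    rel = (measured - reference) / reference
    return 100.0 * np.sqrt(np.mean(rel ** 2))

# Hypothetical matched scene radiances: MODIS as reference, ASTER measured
modis = np.array([100.0, 120.0, 80.0, 95.0])
aster = modis * np.array([1.04, 1.03, 1.05, 1.02])   # ~3-4% high
prmse = percent_rmse(modis, aster)
```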
MODSNOW-Tool: an operational tool for daily snow cover monitoring using MODIS data
NASA Astrophysics Data System (ADS)
Gafurov, Abror; Lüdtke, Stefan; Unger-Shayesteh, Katy; Vorogushyn, Sergiy; Schöne, Tilo; Schmidt, Sebastian; Kalashnikova, Olga; Merz, Bruno
2017-04-01
Spatially distributed snow cover information in mountain areas is extremely important for water storage estimations, seasonal water availability forecasting, or the assessment of snow-related hazards (e.g. enhanced snow-melt following intensive rains, or avalanche events). Moreover, spatially distributed snow cover information can be used to calibrate and/or validate hydrological models. We present the MODSNOW-Tool, an operational, user-friendly application for catchment-based snow cover monitoring. The application automatically downloads and processes freely available daily Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover data. The MODSNOW-Tool uses a step-wise approach for cloud removal and delivers cloud-free snow cover maps for the selected river basins, including basin-specific snow cover extent statistics. The accuracy of cloud-eliminated MODSNOW snow cover maps was validated for 84 almost cloud-free days in the Karadarya river basin in Central Asia, and an average accuracy of 94 % was achieved. The MODSNOW-Tool can be used in operational and non-operational mode. In the operational mode, the tool is set up as a scheduled task on a local computer, allowing automatic execution without user interaction, and delivers snow cover maps on a daily basis. In the non-operational mode, the tool can be used to process historical time series of snow cover maps. The MODSNOW-Tool is currently implemented and in use at the national hydrometeorological services of four Central Asian states (Kazakhstan, Kyrgyzstan, Uzbekistan and Turkmenistan), where it supports seasonal water availability forecasts.
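One typical ingredient of a step-wise cloud-removal chain is a temporal filter in which a cloudy pixel inherits the most recent cloud-free observation. The sketch below illustrates only that idea on a toy stack; it is an assumption for illustration, since the actual MODSNOW chain combines several spatial and temporal steps:

```python
import numpy as np

CLOUD = -1  # cloud flag; 0 = land, 1 = snow

def temporal_fill(stack):
    """stack: (days, rows, cols) snow maps; fills clouds forward in time."""
    filled = stack.copy()
    for d in range(1, filled.shape[0]):
        mask = filled[d] == CLOUD
        filled[d][mask] = filled[d - 1][mask]
    return filled

# Three days of 2x2 snow maps with increasing cloudiness
days = np.array([
    [[1, 0], [1, 1]],
    [[CLOUD, 0], [CLOUD, 1]],
    [[CLOUD, CLOUD], [0, 1]],
])
result = temporal_fill(days)
```

After this step every pixel carries its last observed state; remaining gaps (e.g. clouds on the first day) would be handled by the other steps of the chain.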
Li, Jin; Tran, Maggie; Siwabessy, Justy
2016-01-01
Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge informed AVI (KIAVI), Boruta and regularized RF (RRF), were tested based on predictive accuracy. The effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. 
Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and caution should be taken when applying filter FS methods in selecting predictive models. PMID:26890307
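The conventional pre-selection step that finding 3) calls into question, i.e. dropping one of each pair of highly correlated predictors, can be sketched as follows. The threshold and the synthetic predictors are illustrative, not taken from the study:

```python
import numpy as np

def drop_correlated(x, threshold=0.9):
    """x: (samples, predictors). Returns indices of retained columns,
    keeping the first column of each highly correlated pair."""
    r = np.corrcoef(x, rowvar=False)
    keep = []
    for j in range(x.shape[1]):
        if all(abs(r[j, k]) <= threshold for k in keep):
            keep.append(j)
    return keep

rng = np.random.default_rng(1)
a = rng.normal(size=200)
b = a + 0.01 * rng.normal(size=200)      # nearly duplicates a
c = rng.normal(size=200)                 # independent predictor
x = np.column_stack([a, b, c])
kept = drop_correlated(x)
```

The study's point is that such blanket exclusion can discard predictors that a random forest would still exploit, which is why the authors recommend importance-based FS methods such as AVI and Boruta instead.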
Comparing Features for Classification of MEG Responses to Motor Imagery.
Halme, Hanna-Leena; Parkkonen, Lauri
2016-01-01
Motor imagery (MI) with real-time neurofeedback could be a viable approach, e.g., in rehabilitation of cerebral stroke. Magnetoencephalography (MEG) noninvasively measures electric brain activity at high temporal resolution and is well-suited for recording oscillatory brain signals. MI is known to modulate 10- and 20-Hz oscillations in the somatomotor system. In order to provide accurate feedback to the subject, the most relevant MI-related features should be extracted from MEG data. In this study, we evaluated several MEG signal features for discriminating between left- and right-hand MI and between MI and rest. MEG was measured from nine healthy participants imagining either left- or right-hand finger tapping according to visual cues. Data preprocessing, feature extraction and classification were performed offline. The evaluated MI-related features were power spectral density (PSD), Morlet wavelets, short-time Fourier transform (STFT), common spatial patterns (CSP), filter-bank common spatial patterns (FBCSP), spatio-spectral decomposition (SSD), and combined SSD+CSP, CSP+PSD, CSP+Morlet, and CSP+STFT. We also compared four classifiers applied to single trials using 5-fold cross-validation for evaluating the classification accuracy and its possible dependence on the classification algorithm. In addition, we estimated the inter-session left-vs-right accuracy for each subject. The SSD+CSP combination yielded the best accuracy in both left-vs-right (mean 73.7%) and MI-vs-rest (mean 81.3%) classification. CSP+Morlet yielded the best mean accuracy in inter-session left-vs-right classification (mean 69.1%). There were large inter-subject differences in classification accuracy, and the level of the 20-Hz suppression correlated significantly with the subjective MI-vs-rest accuracy. Selection of the classification algorithm had only a minor effect on the results. We obtained good accuracy in sensor-level decoding of MI from single-trial MEG data. 
Feature extraction methods utilizing both the spatial and spectral profile of MI-related signals provided the best classification results, suggesting good performance of these methods in an online MEG neurofeedback system.
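The k-fold cross-validation protocol used above can be sketched with a nearest-class-mean classifier standing in for the MEG feature/classifier combinations. The data are synthetic two-dimensional features for two classes; the fold count matches the paper's 5-fold setup:

```python
import numpy as np

def cross_val_accuracy(x, y, k=5, seed=0):
    """Mean accuracy of a nearest-class-mean classifier over k folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        means = {c: x[train][y[train] == c].mean(axis=0)
                 for c in np.unique(y)}
        pred = [min(means, key=lambda c: np.linalg.norm(xi - means[c]))
                for xi in x[f]]
        accs.append(np.mean(pred == y[f]))
    return float(np.mean(accs))

rng = np.random.default_rng(42)
left = rng.normal(loc=[-1.0, 0.0], scale=0.5, size=(50, 2))   # class 0
right = rng.normal(loc=[1.0, 0.0], scale=0.5, size=(50, 2))   # class 1
x = np.vstack([left, right])
y = np.array([0] * 50 + [1] * 50)
acc = cross_val_accuracy(x, y)
```

In the study, x would be trial-wise feature vectors (e.g. CSP or SSD+CSP outputs) and the classifier one of the four compared algorithms.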
Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi
2016-01-01
Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into the spatial estimation. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method, using auxiliary information, could reduce the required number of sampling points for studying the spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points. PMID:26807579
Plasticity of spatial hearing: behavioural effects of cortical inactivation
Nodal, Fernando R; Bajo, Victoria M; King, Andrew J
2012-01-01
The contribution of auditory cortex to spatial information processing was explored behaviourally in adult ferrets by reversibly deactivating different cortical areas by subdural placement of a polymer that released the GABAA agonist muscimol over a period of weeks. The spatial extent and time course of cortical inactivation were determined electrophysiologically. Muscimol-Elvax was placed bilaterally over the anterior (AEG), middle (MEG) or posterior ectosylvian gyrus (PEG), so that different regions of the auditory cortex could be deactivated in different cases. Sound localization accuracy in the horizontal plane was assessed by measuring both the initial head orienting and approach-to-target responses made by the animals. Head orienting behaviour was unaffected by silencing any region of the auditory cortex, whereas the accuracy of approach-to-target responses to brief sounds (40 ms noise bursts) was reduced by muscimol-Elvax but not by drug-free implants. Modest but significant localization impairments were observed after deactivating the MEG, AEG or PEG, although the largest deficits were produced in animals in which the MEG, where the primary auditory fields are located, was silenced. We also examined experience-induced spatial plasticity by reversibly plugging one ear. In control animals, localization accuracy for both approach-to-target and head orienting responses was initially impaired by monaural occlusion, but recovered with training over the next few days. Deactivating any part of the auditory cortex resulted in less complete recovery than in controls, with the largest deficits observed after silencing the higher-level cortical areas in the AEG and PEG. Although suggesting that each region of auditory cortex contributes to spatial learning, differences in the localization deficits and degree of adaptation between groups imply a regional specialization in the processing of spatial information across the auditory cortex. PMID:22547635
Fine Particulate Matter Predictions Using High Resolution Aerosol Optical Depth (AOD) Retrievals
NASA Technical Reports Server (NTRS)
Chudnovsky, Alexandra A.; Koutrakis, Petros; Kloog, Itai; Melly, Steven; Nordio, Francesco; Lyapustin, Alexei; Wang, Jujie; Schwartz, Joel
2014-01-01
To date, spatial-temporal patterns of particulate matter (PM) within urban areas have primarily been examined using models. Satellites, on the other hand, extend spatial coverage, but their spatial resolution is too coarse. To address this issue, here we report on spatial variability in PM levels derived from the high-resolution (1 km) AOD product of the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm developed for the MODIS satellite. We apply day-specific calibrations of AOD data to predict PM(sub 2.5) concentrations within the New England area of the United States. To improve the accuracy of our model, land use and meteorological variables were incorporated. We used inverse probability weighting (IPW) to account for nonrandom missingness of AOD and nested regions within days to capture spatial variation. With this approach we can control for the inherent day-to-day variability in the AOD-PM(sub 2.5) relationship, which depends on time-varying parameters such as particle optical properties, vertical and diurnal concentration profiles, and ground surface reflectance, among others. Out-of-sample ten-fold cross-validation was used to quantify the accuracy of model predictions. Our results show that the model-predicted PM(sub 2.5) mass concentrations are highly correlated with the actual observations, with an out-of-sample R(sub 2) of 0.89. Furthermore, our study shows that the model captures the pollution levels along highways and many urban locations, thereby extending our ability to investigate the spatial patterns of urban air quality, such as examining exposures in areas with high traffic. Our results also show high accuracy within the cities of Boston and New Haven, thereby indicating that MAIAC data can be used to examine intra-urban exposure contrasts in PM(sub 2.5) levels.
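The day-specific calibration at the core of this approach can be sketched as a per-day linear fit of PM2.5 on AOD (a simplified illustration only; the published model also incorporates land-use and meteorological covariates and IPW, and the function and variable names here are assumptions):

```python
import numpy as np

def day_specific_calibration(days, aod, pm25):
    """Fit a separate linear calibration PM2.5 = a_d + b_d * AOD for each day,
    absorbing the day-to-day variation in the AOD-PM2.5 relationship."""
    coeffs = {}
    for d in np.unique(days):
        m = days == d
        slope, intercept = np.polyfit(aod[m], pm25[m], 1)
        coeffs[d] = (intercept, slope)
    return coeffs

def predict_pm25(coeffs, days, aod):
    """Apply each observation's own day-specific calibration."""
    return np.array([coeffs[d][0] + coeffs[d][1] * x for d, x in zip(days, aod)])
```

Out-of-sample validation of such a model would hold out entire days, so that each day's calibration is tested on monitors not used to fit it.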
Land cover mapping at sub-pixel scales
NASA Astrophysics Data System (ADS)
Makido, Yasuyo Kato
One of the biggest drawbacks of land cover mapping from remotely sensed images relates to spatial resolution, which determines the level of spatial detail depicted in an image. Fine spatial resolution images from satellite sensors such as IKONOS and QuickBird are now available. However, these images are not well suited for large-area studies, since a single image covers only a small area, making large-area coverage costly. Much research has focused on attempting to extract land cover types at sub-pixel scale, but little research has been conducted concerning the spatial allocation of land cover types within a pixel. This study is devoted to the development of new algorithms for predicting land cover distribution from remote sensing imagery at the sub-pixel level. The "pixel-swapping" optimization algorithm, proposed by Atkinson for predicting sub-pixel land cover distribution, is investigated in this study. Two limitations of this method, the arbitrary spatial range value and the arbitrary exponential model of spatial autocorrelation, are assessed. Various weighting functions, as alternatives to the exponential model, are evaluated in order to derive the optimum weighting function. Two different simulation models were employed to develop spatially autocorrelated binary class maps. In all tested models (Gaussian, Exponential, and IDW), the pixel-swapping method improved classification accuracy compared with the initial random allocation of sub-pixels. However, the results suggested that equal weights could be used to increase accuracy and sub-pixel spatial autocorrelation instead of these more complex models of spatial structure. New algorithms for modeling the spatial distribution of multiple land cover classes at sub-pixel scales are developed and evaluated. Three methods are examined: sequential categorical swapping, simultaneous categorical swapping, and simulated annealing. 
These three methods are applied to classified Landsat ETM+ data that has been resampled to 210 meters. The results suggested that the simultaneous method can be considered the optimum method in terms of accuracy and computation time. The case study employs remote sensing imagery at the following sites: tropical forests in Brazil and a temperate mixed land mosaic in East China. Sub-areas of both sites are used to examine how the characteristics of the landscape affect the performance of the optimum technique. Three measurements, Moran's I, mean patch size (MPS), and patch size standard deviation (STDEV), are used to characterize the landscape. All results suggested that this technique can increase classification accuracy more than traditional hard classification. The methods developed in this study can benefit researchers who employ coarse remote sensing imagery but are interested in detailed landscape information. In many cases, a satellite sensor that provides large spatial coverage has insufficient spatial detail to identify landscape patterns. Application of the super-resolution technique described in this dissertation could potentially solve this problem by providing detailed land cover predictions from coarse-resolution satellite sensor imagery.
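Atkinson's single-class pixel-swapping step that this dissertation builds on can be sketched as follows (a simplified whole-map version using the equal-weight neighbourhood the study found sufficient; the original algorithm operates per coarse pixel under class-proportion constraints, and the function names are assumptions):

```python
import numpy as np
from scipy.ndimage import convolve

def attractiveness(z, weights):
    """Attractiveness of each sub-pixel: weighted sum of neighbouring values."""
    return convolve(z.astype(float), weights, mode="constant")

def pixel_swap(z, weights, n_iter=100):
    """Swap the least-attractive 1 with the most-attractive 0 each iteration.

    This preserves class proportions while increasing spatial clustering of
    the target class on the binary sub-pixel map z.
    """
    z = z.copy()
    for _ in range(n_iter):
        a = attractiveness(z, weights)
        ones = np.argwhere(z == 1)
        zeros = np.argwhere(z == 0)
        if len(ones) == 0 or len(zeros) == 0:
            break
        worst1 = ones[np.argmin(a[tuple(ones.T)])]
        best0 = zeros[np.argmax(a[tuple(zeros.T)])]
        if a[tuple(best0)] <= a[tuple(worst1)]:
            break  # no swap would increase clustering: converged
        z[tuple(worst1)], z[tuple(best0)] = 0, 1
    return z
```

With an equal-weight kernel (ones everywhere except the centre), this corresponds to the finding that simple equal weights can replace the exponential distance-decay model.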
Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, withi...
Decision Accuracy and the Role of Spatial Interaction in Opinion Dynamics
NASA Astrophysics Data System (ADS)
Torney, Colin J.; Levin, Simon A.; Couzin, Iain D.
2013-04-01
The opinions and actions of individuals within interacting groups are frequently determined by both social and personal information. When sociality (or the pressure to conform) is strong and individual preferences are weak, groups will remain cohesive until a consensus decision is reached. When group decisions are subject to a bias, representing for example private information known by some members of the population or imperfect information known by all, then the accuracy achieved for a fixed level of bias will increase with population size. In this work we determine how the scaling between accuracy and group size can be related to the microscopic properties of the decision-making process. By simulating a spatial model of opinion dynamics we show that the relationship between the instantaneous fraction of leaders in the population (L), system size (N), and accuracy depends on the frequency of individual opinion switches and the level of population viscosity. When social mixing is slow, and individual opinion changes are frequent, accuracy is determined by the absolute number of informed individuals. As mixing rates increase, or the rate of opinion updates decreases, a transition occurs to a regime where accuracy is determined by the value of L√N. We investigate the transition between different scaling regimes analytically by examining a well-mixed limit.
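The qualitative effect of informed individuals on group accuracy can be illustrated with a minimal well-mixed voter-model simulation (an assumption-laden sketch, not the paper's spatial model: informed agents are treated as fixed "zealots" holding the correct option, and all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def consensus_accuracy(n, n_informed, trials=100, steps=2000):
    """Well-mixed voter-model sketch: informed agents hold opinion 1 forever;
    each update, a random uninformed agent copies a randomly chosen individual.
    Returns the fraction of trials ending with a majority for option 1."""
    correct = 0
    for _ in range(trials):
        op = rng.integers(0, 2, size=n)
        op[:n_informed] = 1          # informed minority, never switches
        for _ in range(steps):
            i = rng.integers(n_informed, n)  # an uninformed agent updates
            j = rng.integers(0, n)           # by copying a random individual
            op[i] = op[j]
        correct += op.mean() > 0.5
    return correct / trials
```

Even a small informed fraction pulls the consensus strongly toward the correct option in this well-mixed limit, consistent with accuracy depending on the informed contingent rather than on spatial structure.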
Ding, Qian; Wang, Yong; Zhuang, Dafang
2018-04-15
Appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis functions: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram models: spherical, exponential, Gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, then several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. 
The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for unevenly distributed soil PTE data in mining areas. Copyright © 2018 Elsevier Ltd. All rights reserved.
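Of the compared families, IDW is the simplest to state: the prediction is a distance-weighted mean of sample values, exact at sample locations. A minimal sketch (a generic illustration, not the study's GIS implementation; the power parameter matches the tested exponents 1, 2, 3):

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2):
    """Inverse distance weighting on 2D point data.

    xy_known: (k, 2) sample coordinates; values: (k,) sample values;
    xy_query: (q, 2) prediction locations. Returns (q,) predictions."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    out = np.empty(len(xy_query))
    for i, di in enumerate(d):
        hit = di < 1e-12
        if hit.any():
            out[i] = values[hit][0]  # query coincides with a sample point
        else:
            w = 1.0 / di ** power
            out[i] = (w * values).sum() / w.sum()
    return out
```

A higher power localises the prediction more strongly around nearby samples, which is why the optimal exponent depends on the spatial variability of the element being mapped.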
NASA Astrophysics Data System (ADS)
Stumpf, A.; Lachiche, N.; Malet, J.; Kerle, N.; Puissant, A.
2011-12-01
VHR satellite images have become a primary source for landslide inventory mapping after major triggering events such as earthquakes and heavy rainfall. Visual image interpretation is still the prevailing standard method for operational purposes but is time-consuming and not well suited to fully exploit the increasingly better supply of remote sensing data. Recent studies have addressed the development of more automated image analysis workflows for landslide inventory mapping. In particular, object-oriented approaches that account for spatial and textural image information have been demonstrated to be more adequate than pixel-based classification, but manually elaborated rule-based classifiers are difficult to adapt under changing scene characteristics. Machine learning algorithms can learn classification rules for complex image patterns from labelled examples and can be adapted straightforwardly with available training data. In order to reduce the amount of costly training data, active learning (AL) has evolved as a key concept to guide the sampling for many applications. The underlying idea of AL is to initialize a machine learning model with a small training set, and to subsequently exploit the model state and data structure to iteratively select the most valuable samples that should be labelled by the user. With relatively few queries and labelled samples, an AL strategy yields higher accuracies than an equivalent classifier trained with many randomly selected samples. This study addressed the development of an AL method for landslide mapping from VHR remote sensing images with special consideration of the spatial distribution of the samples. Our approach [1] is based on the Random Forest algorithm and considers the classifier uncertainty as well as the variance of potential sampling regions to guide the user towards the most valuable sampling areas. 
The algorithm explicitly searches for compact regions and thereby avoids the spatially disperse sampling pattern inherent to most other AL methods. The accuracy, the sampling time, and the computational runtime of the algorithm were evaluated on multiple satellite images capturing recent large-scale landslide events. Sampling between 1% and 4% of the study areas, accuracies between 74% and 80% were achieved, whereas standard sampling schemes yielded only accuracies between 28% and 50% with equal sampling costs. Compared to commonly used point-wise AL algorithms, the proposed approach significantly reduces the number of iterations and hence the computational runtime. Since the user can focus on relatively few compact areas (rather than on hundreds of distributed points), the overall labeling time is reduced by more than 50% compared to point-wise queries. An experimental evaluation of multiple expert mappings demonstrated strong relationships between the uncertainties of the experts and the machine learning model. It revealed that the achieved accuracies are within the range of the inter-expert disagreement and that it will be indispensable to consider ground truth uncertainties to truly achieve further enhancements in the future. The proposed method is generally applicable to a wide range of optical satellite images and landslide types. [1] A. Stumpf, N. Lachiche, J.-P. Malet, N. Kerle, and A. Puissant, Active learning in the spatial domain for remote sensing image classification, IEEE Transactions on Geoscience and Remote Sensing, 2013, DOI 10.1109/TGRS.2013.2262052.
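The point-wise uncertainty criterion that region-based AL builds on can be sketched with a margin-based query rule (an illustrative simplification; the paper's method additionally enforces spatially compact sampling regions and exploits Random Forest variance, and the function name is an assumption):

```python
import numpy as np

def query_most_uncertain(proba, pool_idx, batch=1):
    """Uncertainty-sampling step of active learning.

    proba: (n, n_classes) predicted class probabilities for the unlabelled
    pool (e.g. Random Forest vote fractions); pool_idx: identifiers of the
    pool samples. Ranks samples by the margin between the two highest class
    probabilities and returns the `batch` most ambiguous identifiers."""
    part = np.sort(proba, axis=1)
    margin = part[:, -1] - part[:, -2]  # small margin = high uncertainty
    order = np.argsort(margin)
    return [pool_idx[i] for i in order[:batch]]
```

A region-based variant would aggregate these margins over candidate windows and query the window with the highest mean uncertainty, so the user labels one compact area instead of scattered points.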
4D measurements of biological and synthetic structures using a dynamic interferometer
NASA Astrophysics Data System (ADS)
Toto-Arellano, Noel-Ivan
2017-12-01
Given the time required by conventional phase-stepping interferometric techniques and the need for non-contact, on-line measurement with high accuracy, a single-shot phase-shifting triple-interferometer (PSTI) is developed for the analysis of transparent structures and for optical path difference (OPD) measurements. The proposed PSTI couples three interferometers to generate four interference patterns; a polarizer array is used as a phase shifter to produce four spatially separated interferograms with π/2 phase shifts, which are recorded in a single capture by a camera. The configuration of the PSTI allows dynamic (4D) measurements and does not require vibration isolation. We have applied the developed system to examine the size and OPD of cells, and the slope of thin films.
NASA Astrophysics Data System (ADS)
Guerra, J. E.; Ullrich, P. A.
2015-12-01
Tempest is a next-generation global climate and weather simulation platform designed to allow experimentation with numerical methods at very high spatial resolutions. The atmospheric fluid equations are discretized by continuous / discontinuous finite elements in the horizontal and by a staggered nodal finite element method (SNFEM) in the vertical, coupled with implicit/explicit time integration. At global horizontal resolutions below 10km, many important questions remain on optimal techniques for solving the fluid equations. We present results from a suite of meso-scale test cases to validate the performance of the SNFEM applied in the vertical. Internal gravity wave, mountain wave, convective, and Cartesian baroclinic instability tests will be shown at various vertical orders of accuracy and compared with known results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swadling, G. F., E-mail: swadling@imperial.ac.uk; Lebedev, S. V.; Hall, G. N.
2014-11-15
A suite of laser-based diagnostics is used to study interactions of magnetised, supersonic, radiatively cooled plasma flows produced using the Magpie pulse power generator (1.4 MA, 240 ns rise time). Collective optical Thomson scattering measures the time-resolved local flow velocity and temperature across 7–14 spatial positions. The scattering spectrum is recorded from multiple directions, allowing more accurate reconstruction of the flow velocity vectors. The areal electron density is measured using 2D interferometry; optimisation and analysis are discussed. The Faraday rotation diagnostic, operating at 1053 nm, measures the magnetic field distribution in the plasma. Measurements obtained simultaneously by these diagnostics are used to constrain analysis, increasing the accuracy of interpretation.
Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias
2018-01-01
Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention, with facilitation observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., a unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants’ accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries, with search times increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants’ performance in the congruent condition was modulated by their tone localisation accuracy. 
The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637
Wognum, S; Heethuis, S E; Rosario, T; Hoogeman, M S; Bel, A
2014-07-01
The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Five excised porcine bladders with a grid of 30-40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100-400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. 
The authors found good structure accuracy without dependency on bladder volume difference for all but one algorithm, and with the best result for the structure-based algorithm. Spatial accuracy as assessed from marker errors was disappointing for all algorithms, especially for large volume differences, implying that the deformations described by the registration did not represent anatomically correct deformations. The structure-based algorithm performed the best in terms of marker error for the large volume difference (100-400 ml). In general, for the small volume difference (100-150 ml) the algorithms performed relatively similarly. The structure-based algorithm exhibited the best balance in performance between small and large volume differences, and among the intensity-based algorithms, the algorithm implemented in VelocityAI exhibited the best balance. Validation of multiple DIR algorithms on a novel physiological bladder phantom revealed that the structure accuracy was good for most algorithms, but that the spatial accuracy as assessed from markers was low for all algorithms, especially for large deformations. Hence, many of the available algorithms exhibit sufficient accuracy for contour propagation purposes, but possibly not for accurate dose accumulation.
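The marker-based error metric used here amounts to a target-registration-error computation: warp each reference marker with the deformation vector field and measure the residual to its true position. A minimal sketch (assuming markers are already matched between scans and the DVF has been sampled at the marker locations; names are illustrative):

```python
import numpy as np

def marker_registration_error(markers_ref, markers_moved, dvf_at_markers):
    """Per-marker spatial error of a deformable registration.

    markers_ref: (n, 3) marker positions in the reference image;
    markers_moved: (n, 3) true positions in the target image;
    dvf_at_markers: (n, 3) deformation vectors sampled at markers_ref.
    Returns the (n,) Euclidean residuals; zero means the DVF maps each
    marker exactly onto its true displaced position."""
    predicted = markers_ref + dvf_at_markers
    return np.linalg.norm(predicted - markers_moved, axis=1)
```

Contour-based metrics can look good while this residual stays large, which is exactly the discrepancy the study reports: surface overlap does not guarantee anatomically correct point correspondences.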
A Comparative Study of Precise Point Positioning (PPP) Accuracy Using Online Services
NASA Astrophysics Data System (ADS)
Malinowski, Marcin; Kwiecień, Janusz
2016-12-01
Precise Point Positioning (PPP) is a technique used to determine the position of a receiver antenna without communication with a reference station. It may be an alternative to differential measurements, where maintaining a connection with a single RTK station or a regional network of reference stations (RTN) is necessary. This situation is especially common in areas with poorly developed ground-station infrastructure. Most research conducted so far on the PPP technique has dealt with the processing of entire-day observation sessions. This paper, however, presents the results of a comparative analysis of the accuracy of absolute position determination from observations lasting between 1 and 7 hours, using four permanent online services that perform PPP calculations: the Automatic Precise Positioning Service (APPS), the Canadian Spatial Reference System Precise Point Positioning service (CSRS-PPP), the GNSS Analysis and Positioning Software (GAPS) and magicPPP - Precise Point Positioning Solution (magicGNSS). On the basis of the acquired measurement results, it can be concluded that measurements of at least two hours allow an absolute position to be obtained with an accuracy of 2-4 cm. The impact of simultaneously positioning the three points of a test network on the horizontal distances and relative height differences between the measured triangle vertices was also evaluated. Distances and relative height differences between the points of the triangular test network measured with a Leica TDRA6000 laser station were adopted as references. The analyses show that measurement sessions of at least two hours can be used to determine horizontal distances or height differences with an accuracy of 1-2 cm. Rapid products employed in PPP calculations reached coordinate accuracies close to those of elaborations employing Final products.
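The distance and height-difference checks against the laser-station reference can be sketched as follows (a minimal illustration assuming local ENU coordinates in metres; the function and argument names are hypothetical, not from the paper):

```python
import math

def baseline_check(p1, p2, ref_dist, ref_dh):
    """Compare a PPP-derived baseline against reference values.

    p1, p2: (e, n, u) coordinates of two network points in a local ENU
    frame; ref_dist / ref_dh: horizontal distance and height difference
    from the reference survey (e.g. a laser station). Returns the
    (distance error, height-difference error) pair in metres."""
    d = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    dh = p2[2] - p1[2]
    return d - ref_dist, dh - ref_dh
```

Comparing relative quantities this way cancels any common offset of the absolute PPP solutions, which is why the 1-2 cm relative accuracy can be tighter than the 2-4 cm absolute accuracy.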
Carrier-phase multipath corrections for GPS-based satellite attitude determination
NASA Technical Reports Server (NTRS)
Axelrad, A.; Reichert, P.
2001-01-01
This paper demonstrates the high degree of spatial repeatability of carrier-phase multipath errors in a spacecraft environment and describes a correction technique, termed the sky map method, which exploits this spatial correlation to correct measurements and improve the accuracy of GPS-based attitude solutions.
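A sky-map correction of this kind can be sketched as a lookup table of mean residuals binned by line-of-sight direction in the body frame (the bin sizes and class interface are assumptions for illustration, not the paper's implementation): because multipath repeats with line-of-sight geometry, residuals collected in a calibration pass can be subtracted from later measurements arriving from the same direction.

```python
class SkyMap:
    """Azimuth/elevation-binned mean-residual map for multipath correction."""

    def __init__(self, d_az=5.0, d_el=5.0):
        self.d_az, self.d_el = d_az, d_el  # bin widths in degrees
        self.sum, self.count = {}, {}

    def _key(self, az, el):
        return (int(az // self.d_az), int(el // self.d_el))

    def train(self, az, el, residual):
        """Accumulate a calibration residual observed at (az, el)."""
        k = self._key(az, el)
        self.sum[k] = self.sum.get(k, 0.0) + residual
        self.count[k] = self.count.get(k, 0) + 1

    def correct(self, az, el, measurement):
        """Subtract the bin-mean residual; pass through if the bin is empty."""
        k = self._key(az, el)
        if k not in self.count:
            return measurement  # no calibration data for this direction
        return measurement - self.sum[k] / self.count[k]
```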
Robotic guidance benefits the learning of dynamic, but not of spatial movement characteristics.
Lüttgen, Jenna; Heuer, Herbert
2012-10-01
Robotic guidance is an engineered form of haptic-guidance training and intended to enhance motor learning in rehabilitation, surgery, and sports. However, its benefits (and pitfalls) are still debated. Here, we investigate the effects of different presentation modes on the reproduction of a spatiotemporal movement pattern. In three different groups of participants, the movement was demonstrated in three different modalities, namely visual, haptic, and visuo-haptic. After demonstration, participants had to reproduce the movement in two alternating recall conditions: haptic and visuo-haptic. Performance of the three groups during recall was compared with regard to spatial and dynamic movement characteristics. After haptic presentation, participants showed superior dynamic accuracy, whereas after visual presentation, participants performed better with regard to spatial accuracy. Added visual feedback during recall always led to enhanced performance, independent of the movement characteristic and the presentation modality. These findings substantiate the different benefits of different presentation modes for different movement characteristics. In particular, robotic guidance is beneficial for the learning of dynamic, but not of spatial movement characteristics.
Randall A., Jr. Schultz; Thomas C., Jr. Edwards; Gretchen G. Moisen; Tracey S. Frescino
2005-01-01
The ability of USDA Forest Service Forest Inventory and Analysis (FIA) generated spatial products to increase the predictive accuracy of spatially explicit, macroscale habitat models was examined for nest-site selection by cavity-nesting birds in Fishlake National Forest, Utah. One FIA-derived variable (percent basal area of aspen trees) was significant in the habitat...
NASA Astrophysics Data System (ADS)
Jin, Y.; Lee, D.
2017-12-01
North Korea (the Democratic People's Republic of Korea, DPRK) is known to have some of the most degraded forest in the world. The forest landscape in North Korea is complex and heterogeneous; the major vegetation cover types are hillside farms, unstocked forest, natural forest, and plateau vegetation. Better classification of cover types in deforested areas at high spatial resolution could provide essential information for decisions about forest management priorities and the restoration of deforested areas. For mapping heterogeneous vegetation covers, phenology-based indices help to overcome the reflectance confusion that occurs when using single-season images. Coarse spatial resolution images can be acquired with a high repetition rate, which is useful for analyzing phenology, but they may not capture the spatial detail of the land cover mosaic of the region of interest. Previous spatial-temporal fusion methods either captured only temporal change, or addressed both temporal and spatial change but with low accuracy in heterogeneous landscapes and small patches. In this study, a new spatial-temporal image fusion method focused on heterogeneous landscapes is proposed to produce images at both fine spatial and fine temporal resolution. We classified pixels into three types according to the change between the base image and the target image: the first type, with only reflectance changes caused by phenology, supplies reflectance, shape, and texture information; the second type, with both reflectance and spectrum changes in some bands caused by phenology (such as rice paddies or farmland), supplies only shape and texture information; the third type, with reflectance and spectrum changes caused by land cover change, provides no information, because we cannot know how the land cover changed in the target image. A different prediction method was applied to each type of pixel. 
Results show that both STARFM and FSDAF gave low accuracy for second-type pixels and small patches. Classification using the spatial-temporal image fusion method proposed in this study achieved an overall accuracy of 89.38%, with a corresponding kappa coefficient of 0.87.
Anisotropic mesh adaptation for marine ice-sheet modelling
NASA Astrophysics Data System (ADS)
Gillet-Chaulet, Fabien; Tavard, Laure; Merino, Nacho; Peyaud, Vincent; Brondex, Julien; Durand, Gael; Gagliardini, Olivier
2017-04-01
Improving forecasts of the ice-sheet contribution to sea-level rise requires, amongst others, correctly modelling the dynamics of the grounding line (GL), i.e. the line where the ice detaches from its underlying bed and goes afloat on the ocean. Many numerical studies, including the intercomparison exercises MISMIP and MISMIP3D, have shown that grid refinement in the GL vicinity is a key component to obtain reliable results. Improving model accuracy while keeping the computational cost affordable has therefore been an important target for the development of marine ice-sheet models. Adaptive mesh refinement (AMR) is a method where the accuracy of the solution is controlled by spatially adapting the mesh size. It has become popular in models using the finite element method, as they naturally deal with unstructured meshes, but block-structured AMR has also been successfully applied to model GL dynamics. The main difficulty with AMR is to find efficient and reliable estimators of the numerical error to control the mesh size. Here, we use the estimator proposed by Frey and Alauzet (2015). Based on the interpolation error, it has been found effective in practice at controlling the numerical error, and it offers some flexibility, such as the ability to combine metrics for different variables, that makes it attractive. Routines to compute the anisotropic metric defining the mesh size have been implemented in the finite element ice flow model Elmer/Ice (Gagliardini et al., 2013). The mesh adaptation is performed using the freely available library MMG (Dapogny et al., 2014) called from Elmer/Ice. Using a setup based on the inter-comparison exercise MISMIP+ (Asay-Davis et al., 2016), we study the accuracy of the solution when the mesh is adapted using various variables (ice thickness, velocity, basal drag, …). 
We show that combining these variables makes it possible to reduce the number of mesh nodes by more than one order of magnitude, for the same numerical accuracy, compared to uniform mesh refinement. For transient solutions where the GL is moving, we have implemented an algorithm in which the computation is reiterated, allowing the GL displacement to be anticipated and the mesh to be adapted to the transient solution. We discuss the performance and robustness of this algorithm.
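The interpolation-error idea behind such metrics can be illustrated in one dimension: for linear elements the interpolation error scales as h²|u''|, so equidistributing a target error ε gives a size map h = √(ε/|u''|). The sketch below is a 1D scalar simplification (the actual Frey/Alauzet-type metric is tensor-valued and anisotropic; function names and clamping bounds are assumptions):

```python
import numpy as np

def target_mesh_size(x, u, eps, h_min=1e-3, h_max=1.0):
    """1D interpolation-error-based mesh size map.

    x: node coordinates; u: solution samples; eps: target interpolation
    error. Regions of high curvature |u''| get small elements, flat
    regions get large ones, clamped to [h_min, h_max]."""
    d2u = np.gradient(np.gradient(u, x), x)  # finite-difference curvature
    h = np.sqrt(eps / np.maximum(np.abs(d2u), 1e-12))
    return np.clip(h, h_min, h_max)
```

Combining several solution variables then amounts to taking, at each point, the smallest size (or metric intersection) demanded by any of them, which is what concentrates nodes near the grounding line.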
Comparison of Several Numerical Methods for Simulation of Compressible Shear Layers
NASA Technical Reports Server (NTRS)
Kennedy, Christopher A.; Carpenter, Mark H.
1997-01-01
An investigation is conducted on several numerical schemes for use in the computation of two-dimensional, spatially evolving, laminar variable-density compressible shear layers. Schemes with various temporal accuracies and arbitrary spatial accuracy for both inviscid and viscous terms are presented and analyzed. All integration schemes use explicit or compact finite-difference derivative operators. Three classes of schemes are considered: an extension of MacCormack's original second-order temporally accurate method, a new third-order variant of the schemes proposed by Rusanov and by Kutler, Lomax, and Warming (RKLW), and third- and fourth-order Runge-Kutta schemes. In each scheme, stability and formal accuracy are considered for the interior operators on the convection-diffusion equation U(sub t) + aU(sub x) = alpha U(sub xx). Accuracy is also verified on the nonlinear problem, U(sub t) + F(sub x) = 0. Numerical treatments of various orders of accuracy are chosen and evaluated for asymptotic stability. Formally accurate boundary conditions are derived for several sixth- and eighth-order central-difference schemes. Damping of high wave-number data is accomplished with explicit filters of arbitrary order. Several schemes are used to compute variable-density compressible shear layers, where regions of large gradients exist.
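A minimal instance of the model problem, classical fourth-order Runge-Kutta in time with second-order central differences in space on U_t + aU_x = αU_xx, can be sketched as follows (periodic boundaries for simplicity; the paper's schemes use higher-order explicit and compact operators with boundary closures, so this is only an illustration of the framework):

```python
import numpy as np

def rhs(u, a, alpha, dx):
    """Semi-discrete convection-diffusion operator: -a u_x + alpha u_xx,
    with 2nd-order central differences and periodic boundaries."""
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return -a * ux + alpha * uxx

def rk4_step(u, dt, a, alpha, dx):
    """One classical fourth-order Runge-Kutta time step."""
    k1 = rhs(u, a, alpha, dx)
    k2 = rhs(u + 0.5 * dt * k1, a, alpha, dx)
    k3 = rhs(u + 0.5 * dt * k2, a, alpha, dx)
    k4 = rhs(u + dt * k3, a, alpha, dx)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

On a sine initial condition, the central advection term is non-dissipative while the diffusion term damps the amplitude, and the periodic differences conserve the mean exactly, which makes this configuration a convenient check of both stability and formal accuracy.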
NASA Astrophysics Data System (ADS)
Zhang, Yang; Liu, Wei; Li, Xiaodong; Yang, Fan; Gao, Peng; Jia, Zhenyuan
2015-10-01
Large-scale triangulation scanning measurement systems are widely used to measure the three-dimensional profile of large-scale components and parts. The accuracy and speed of the laser stripe center extraction are essential for guaranteeing the accuracy and efficiency of the measuring system. However, in the process of large-scale measurement, multiple factors can cause deviation of the laser stripe center, including the spatial light intensity distribution, material reflectivity characteristics, and spatial transmission characteristics. A center extraction method is proposed for improving the accuracy of the laser stripe center extraction based on image evaluation of Gaussian fitting structural similarity and analysis of the multiple source factors. First, according to the features of the gray distribution of the laser stripe, evaluation of the Gaussian fitting structural similarity is estimated to provide a threshold value for center compensation. Then using the relationships between the gray distribution of the laser stripe and the multiple source factors, a compensation method of center extraction is presented. Finally, measurement experiments for a large-scale aviation composite component are carried out. The experimental results for this specific implementation verify the feasibility of the proposed center extraction method and the improved accuracy for large-scale triangulation scanning measurements.
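A common sub-pixel estimator consistent with the Gaussian gray-distribution model of the stripe is three-point Gaussian interpolation; this is a simplified stand-in for the paper's compensated extraction method, shown on a synthetic profile:

```python
import numpy as np

def stripe_center(profile):
    """Sub-pixel stripe center via three-point Gaussian interpolation:
    a parabola fitted to ln(I) at the peak pixel and its two neighbours
    is exact when the stripe cross-section is Gaussian."""
    i = int(np.argmax(profile))
    lnm, ln0, lnp = np.log(profile[i - 1 : i + 2])
    return i + 0.5 * (lnm - lnp) / (lnm - 2.0 * ln0 + lnp)

# Synthetic stripe cross-section: Gaussian centred at 20.3 px on a low floor
rows = np.arange(50, dtype=float)
true_c = 20.3
profile = 200.0 * np.exp(-((rows - true_c) ** 2) / (2.0 * 2.5**2)) + 1.0
c = stripe_center(profile)
```

Deviations caused by non-Gaussian distortions (light intensity distribution, reflectivity, transmission effects) shift this estimate, which is what the proposed structural-similarity threshold and compensation step are designed to detect and correct.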
We developed a technique for assessing the accuracy of sub-pixel derived estimates of impervious surface extracted from LANDSAT TM imagery. We utilized spatially coincident sub-pixel derived impervious surface estimates, high-resolution planimetric GIS data, vector-to-r...
Working Memory Components and Problem-Solving Accuracy: Are There Multiple Pathways?
ERIC Educational Resources Information Center
Swanson, H. Lee; Fung, Wenson
2016-01-01
This study determined the working memory (WM) components (executive, phonological short-term memory [STM], and visual-spatial sketchpad) that best predicted mathematical word problem-solving accuracy in elementary schoolchildren (N = 392). The battery of tests administered to assess mediators between WM and problem-solving included measures of…
Update and review of accuracy assessment techniques for remotely sensed data
NASA Technical Reports Server (NTRS)
Congalton, R. G.; Heinen, J. T.; Oderwald, R. G.
1983-01-01
Research performed in the accuracy assessment of remotely sensed data is updated and reviewed. The use of discrete multivariate analysis techniques for the assessment of error matrices, the use of computer simulation for assessing various sampling strategies, and an investigation of spatial autocorrelation techniques are examined.
Lado, Bettina; Matus, Ivan; Rodríguez, Alejandra; Inostroza, Luis; Poland, Jesse; Belzile, François; del Pozo, Alejandro; Quincke, Martín; Castro, Marina; von Zitzewitz, Jarislav
2013-01-01
In crop breeding, interest in predicting the performance of candidate cultivars in the field has increased owing to recent advances in molecular breeding technologies. However, the complexity of the wheat genome presents some challenges for applying new technologies to molecular marker identification with next-generation sequencing. We applied genotyping-by-sequencing, a recently developed method to identify single-nucleotide polymorphisms, to the genomes of 384 wheat (Triticum aestivum) genotypes that were field tested under three different water regimes in Mediterranean climatic conditions: rain-fed only, mild water stress, and fully irrigated. We identified 102,324 single-nucleotide polymorphisms in these genotypes, and the phenotypic data were used to train and test genomic selection models intended to predict yield, thousand-kernel weight, number of kernels per spike, and heading date. Phenotypic data showed marked spatial variation. Therefore, different models were tested to correct for the trends observed in the field. A mixed model using moving means as a covariate was found to best fit the data. When we applied the genomic selection models, the accuracy of predicted traits increased with spatial adjustment. Multiple genomic selection models were tested, and a Gaussian kernel model was determined to give the highest accuracy. The best predictions between environments were obtained when data from different years were used to train the model. Our results confirm that genotyping-by-sequencing is an effective tool to obtain genome-wide information for crops with complex genomes, that these data are effective for predicting traits, and that correction of spatial variation is a crucial ingredient for increasing prediction accuracy in genomic selection models. PMID:24082033
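A Gaussian kernel genomic selection model of the kind found most accurate here can be sketched as kernel ridge regression on simulated genotypes (all data, the median-distance bandwidth heuristic, and the ridge parameter below are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 500
X = rng.choice([0.0, 1.0, 2.0], size=(n, p))   # SNP genotypes coded 0/1/2
beta = np.zeros(p)
beta[:20] = rng.normal(0.0, 1.0, 20)           # 20 causal loci
y = X @ beta + rng.normal(0.0, 1.0, n)         # phenotype = genetic signal + noise

def sq_dists(A, B):
    """Pairwise squared Euclidean distances between genotype rows."""
    return np.maximum((A**2).sum(1)[:, None] + (B**2).sum(1)[None, :]
                      - 2.0 * A @ B.T, 0.0)

train, test = np.arange(150), np.arange(150, 200)
theta = np.median(sq_dists(X[train], X[train]))     # bandwidth heuristic
K = np.exp(-sq_dists(X[train], X[train]) / theta)   # Gaussian kernel matrix
Ks = np.exp(-sq_dists(X[test], X[train]) / theta)
coef = np.linalg.solve(K + 1.0 * np.eye(len(train)), y[train])
pred = Ks @ coef
r = np.corrcoef(pred, y[test])[0, 1]   # predictive accuracy on held-out lines
```

In practice the phenotypes would first be spatially adjusted (e.g. with a moving-means covariate) before training, which is the step the study found crucial for accuracy.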
Influence of Gridded Standoff Measurement Resolution on Numerical Bathymetric Inversion
NASA Astrophysics Data System (ADS)
Hesser, T.; Farthing, M. W.; Brodie, K.
2016-02-01
The bathymetry from the surf zone to the shoreline changes frequently and actively as wave energy interacts with the seafloor. Methodologies to measure bathymetry range from point-source in-situ instruments, vessel-mounted single-beam or multi-beam sonar surveys, and airborne bathymetric lidar to inversion techniques based on standoff measurements of wave processes from video or radar imagery. Each type of measurement has unique sources of error, spatial and temporal resolution, and availability. Numerical bathymetry estimation frameworks can use these disparate data types in combination with model-based inversion techniques to produce a "best estimate of bathymetry" at a given time. Understanding how the sources of error and the varying spatial or temporal resolution of each data type affect the end result is critical for determining best practices and, in turn, increasing the accuracy of bathymetry estimation techniques. In this work, we take an initial step in the development of a complete framework for estimating nearshore bathymetry by focusing on gridded standoff measurements and in-situ point observations in model-based inversion at the U.S. Army Corps of Engineers Field Research Facility in Duck, NC. The standoff measurement methods return wave parameters computed from the direct measurements using linear wave theory. These gridded datasets can have temporal and spatial resolutions that do not match the desired model parameters and therefore could reduce the accuracy of these methods. Specifically, we investigate the effect of numerical resolution on the accuracy of an Ensemble Kalman Filter bathymetric inversion technique in relation to the spatial and temporal resolution of the gridded standoff measurements. The accuracies of the bathymetric estimates are compared with both high-resolution Real Time Kinematic (RTK) single-beam surveys and alternative direct in-situ measurements from sonic altimeters.
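A single stochastic Ensemble Kalman Filter analysis step, in the spirit of the inversion described, can be sketched with a toy depth state and a linear-wave-theory observation operator c = sqrt(g h) (all sizes, priors, and noise levels below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
g = 9.81

def obs_op(h):
    """Observation operator: shallow-water phase speed c = sqrt(g h)."""
    return np.sqrt(g * h)

true_depth = np.array([2.0, 4.0, 6.0])              # metres, three grid points
obs = obs_op(true_depth) + rng.normal(0.0, 0.05, 3)  # noisy celerity "data"
R = 0.05**2 * np.eye(3)                              # observation error covariance

Ne = 200                                             # ensemble size
ens = np.clip(rng.normal(4.0, 1.5, (Ne, 3)), 0.5, None)  # prior depth ensemble

Hx = obs_op(ens)                                 # predicted observations (Ne, 3)
A = ens - ens.mean(0)
HA = Hx - Hx.mean(0)
P_xy = A.T @ HA / (Ne - 1)                       # state-obs cross-covariance
P_yy = HA.T @ HA / (Ne - 1) + R                  # innovation covariance
K = P_xy @ np.linalg.inv(P_yy)                   # Kalman gain
perturbed_obs = obs + rng.normal(0.0, 0.05, (Ne, 3))
analysis = ens + (perturbed_obs - Hx) @ K.T      # updated (analysis) ensemble
```

The resolution question studied in the abstract enters through how the gridded wave parameters (here the celerity observations) are interpolated onto the inversion grid before this update is applied.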
A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem
Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.
2013-01-01
Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. 
These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
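A bare-bones Subspace Pursuit for the generic sparse recovery problem y = Phi @ x (Dai and Milenkovic's algorithm) can be sketched as follows; note this omits the hierarchical, subspace-based machinery that SPIGH adds to cope with the MEG source space and correlated measurements:

```python
import numpy as np

def subspace_pursuit(Phi, y, K, iters=20):
    """Basic Subspace Pursuit for y = Phi @ x with x K-sparse."""
    n = Phi.shape[1]
    S = np.argsort(np.abs(Phi.T @ y))[-K:]                 # initial support
    for _ in range(iters):
        # Residual of the best least-squares fit on the current support
        r = y - Phi[:, S] @ np.linalg.lstsq(Phi[:, S], y, rcond=None)[0]
        # Expand with the K columns most correlated with the residual
        cand = np.union1d(S, np.argsort(np.abs(Phi.T @ r))[-K:])
        coef = np.linalg.lstsq(Phi[:, cand], y, rcond=None)[0]
        S = cand[np.argsort(np.abs(coef))[-K:]]            # prune back to K
        x = np.zeros(n)
        x[S] = np.linalg.lstsq(Phi[:, S], y, rcond=None)[0]
        if np.linalg.norm(y - Phi @ x) < 1e-10:
            break
    return x

rng = np.random.default_rng(2)
m, n, K = 60, 200, 5
Phi = rng.normal(size=(m, n)) / np.sqrt(m)     # random sensing matrix
x_true = np.zeros(n)
supp = rng.choice(n, K, replace=False)
x_true[supp] = rng.uniform(1.0, 2.0, K) * rng.choice([-1.0, 1.0], K)
y = Phi @ x_true
x_hat = subspace_pursuit(Phi, y, K)
```

Each iteration costs a handful of least-squares solves on K to 2K columns, which is the source of the low computational complexity that greedy pursuit methods offer over convex relaxations.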
NASA Astrophysics Data System (ADS)
Monroe, TalaWanda R.; Aloisi, Alessandra; Debes, John H.; Jedrzejewski, Robert I.; Lockwood, Sean A.; Peeples, Molly S.; Proffitt, Charles R.; Riley, Allyssa; Walborn, Nolan R.
2016-06-01
The variety of operating modes of the Space Telescope Imaging Spectrograph (STIS) on the Hubble Space Telescope (HST) continues to allow STIS users to obtain unique, high quality observations and cutting-edge results 19 years after its installation on HST. STIS is currently the only instrument available to the astronomy community that allows high spectral and spatial resolution spectroscopy in the FUV and NUV, including echelle modes. STIS also supports solar-blind imaging in the FUV. In the optical, STIS provides long-slit, first-order spectra that take advantage of HST's superb spatial resolution, as well as several unique unfiltered coronagraphic modes, which continue to benefit the exoplanet and debris-disk communities. The STIS instrument team monitors the instrument’s health and performance over time to characterize the effects of radiation damage and continued use of the detectors and optical elements. Additionally, the STIS team continues to improve the quality of data products for the user community. We present updates on efforts to improve the echelle flux calibration of overlapping spectral orders due to changes in the grating blaze function since HST Servicing Mission 4, and efforts to push the contrast limit and smallest inner working angle attainable with the coronagraphic BAR5 occulter. We also provide updates on the performance of the STIS calibration lamps, including work to maintain the accuracy of the wavelength calibration for all modes.
Design and performance tests of the calorimetric tract of a Compton Camera for small-animals imaging
NASA Astrophysics Data System (ADS)
Rossi, P.; Baldazzi, G.; Battistella, A.; Bello, M.; Bollini, D.; Bonvicini, V.; Fontana, C. L.; Gennaro, G.; Moschini, G.; Navarria, F.; Rashevsky, A.; Uzunov, N.; Zampa, G.; Zampa, N.; Vacchi, A.
2011-02-01
The bio-distribution and targeting capability of pharmaceuticals may be assessed in small animals by imaging gamma-rays emitted from radio-isotope markers. Detectors that exploit the Compton concept allow higher gamma-ray efficiency compared to conventional Anger cameras employing collimators, and feature sub-millimeter spatial resolution and compact geometry. We are developing a Compton Camera that has to address several requirements: the high rates typical of the Compton concept; detection of gamma-rays of different energies that may range from 140 keV (99mTc) to 511 keV (β+ emitters); presence of gamma and beta radiation with energies up to 2 MeV in the case of 188Re. The camera consists of a thin position-sensitive Tracker that scatters the gamma ray, and a second position-sensitive detection system to totally absorb the energy of the scattered photons (Calorimeter). In this paper we present the design and discuss the realization of the calorimetric tract, including the choice of scintillator crystal, pixel size, and detector geometry. Simulations of the gamma-ray trajectories from source to detectors have helped to assess the accuracy of the system and decide on camera design. Crystals of different materials, such as LaBr3, GSO, and YAP, and of different sizes, in continuous or segmented geometry, have been optically coupled to a multi-anode Hamamatsu H8500 detector, allowing measurements of spatial resolution and efficiency.
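The Compton kinematics underlying such a camera relate the two deposited energies to the scattering angle via cos(theta) = 1 - me*c^2 (1/E' - 1/E0), where E' is the scattered-photon energy and E0 the total; a small sketch with illustrative energies:

```python
import numpy as np

ME_C2 = 511.0  # electron rest energy in keV

def compton_angle(e_tracker, e_cal):
    """Scattering angle (degrees) reconstructed from the energy deposited in
    the tracker (Compton electron) and in the calorimeter (scattered photon):
    cos(theta) = 1 - me*c^2 * (1/E_scattered - 1/E_total)."""
    e_total = e_tracker + e_cal
    cos_t = 1.0 - ME_C2 * (1.0 / e_cal - 1.0 / e_total)
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# A 140 keV (99mTc) photon depositing 30 keV in the tracker
angle = compton_angle(30.0, 110.0)
```

Because the angle depends on both energy deposits, the calorimeter's energy resolution directly limits the angular (and hence spatial) resolution of the reconstructed source position.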
Forecasting Influenza Outbreaks in Boroughs and Neighborhoods of New York City.
Yang, Wan; Olson, Donald R; Shaman, Jeffrey
2016-11-01
The ideal spatial scale, or granularity, at which infectious disease incidence should be monitored and forecast has been little explored. By identifying the optimal granularity for a given disease and host population, and matching surveillance and prediction efforts to this scale, response to emergent and recurrent outbreaks can be improved. Here we explore how granularity and representation of spatial structure affect influenza forecast accuracy within New York City. We develop network models at the borough and neighborhood levels, and use them in conjunction with surveillance data and a data assimilation method to forecast influenza activity. These forecasts are compared to an alternate system that predicts influenza for each borough or neighborhood in isolation. At the borough scale, influenza epidemics are highly synchronous despite substantial differences in intensity, and inclusion of network connectivity among boroughs generally improves forecast accuracy. At the neighborhood scale, we observe much greater spatial heterogeneity among influenza outbreaks including substantial differences in local outbreak timing and structure; however, inclusion of the network model structure generally degrades forecast accuracy. One notable exception is that local outbreak onset, particularly when signal is modest, is better predicted with the network model. These findings suggest that observation and forecast at sub-municipal scales within New York City provides richer, more discriminant information on influenza incidence, particularly at the neighborhood scale where greater heterogeneity exists, and that the spatial spread of influenza among localities can be forecast.
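The borough-level network idea can be sketched with a toy metapopulation SIR model in which a connectivity matrix couples outbreaks across patches (a generic illustration with made-up parameters, not the paper's forecast model or its data assimilation system):

```python
import numpy as np

def network_sir(S, I, R, C, beta, gamma, days):
    """Daily-step metapopulation SIR: C[i, j] weights how strongly prevalence
    in patch j drives new infections in patch i (rows sum to 1)."""
    S, I, R = S.astype(float), I.astype(float), R.astype(float)
    N = S + I + R
    for _ in range(days):
        lam = beta * (C @ (I / N))        # coupled force of infection
        new_inf = np.minimum(lam * S, S)  # cap so S stays non-negative
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    return S, I, R

# Two coupled "boroughs", outbreak seeded in only the first
C = np.array([[0.9, 0.1],
              [0.1, 0.9]])
S0, I0, R0 = np.array([1e6, 1e6]), np.array([100.0, 0.0]), np.zeros(2)
S, I, R = network_sir(S0, I0, R0, C, beta=0.5, gamma=0.25, days=120)
attack = R / (S + I + R)   # coupling carries the outbreak to the second borough
```

The off-diagonal coupling is what lets an epidemic seeded in one patch ignite the others, and it is this cross-patch information that improved borough-scale forecasts while degrading most neighborhood-scale ones in the study.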
The use of Sentinel-2 imagery for seagrass mapping: Kalloni Gulf (Lesvos Island, Greece) case study
NASA Astrophysics Data System (ADS)
Topouzelis, Konstantinos; Charalampis Spondylidis, Spyridon; Papakonstantinou, Apostolos; Soulakellis, Nikolaos
2016-08-01
Seagrass meadows play a significant role in ecosystems by stabilizing sediment and improving water clarity, which enhances seagrass growing conditions. Mapping and protecting them is a high priority of EU legislation. The traditional use of medium-spatial-resolution satellite imagery, e.g. Landsat-8 (30 m), is very useful for mapping seagrass meadows on a regional scale. However, the availability of Sentinel-2 data, from ESA's recent satellite carrying the Multi-Spectral Instrument (MSI), is expected to improve mapping accuracy. The MSI was designed to improve coastline studies through its enhanced spatial and spectral capabilities, e.g. optical bands with 10 m spatial resolution. The present work examines the quality of Sentinel-2 images for seagrass mapping, the ability of each band to detect and discriminate different habitats, and the accuracy of seagrass mapping. After pre-processing steps, e.g. radiometric calibration and atmospheric correction, the image was classified into four classes. Classification classes described sub-bottom composition, e.g. seagrass, soft bottom, and hard bottom. Vectors delineating the areas covered by seagrass were extracted from a high-resolution satellite image and used as in situ measurements. The methodology was applied in the Gulf of Kalloni (Lesvos Island, Greece). Results showed that Sentinel-2 images can be robustly used for seagrass mapping due to their spatial resolution, band availability, and radiometric accuracy.
Multispectral scanner system parameter study and analysis software system description, volume 2
NASA Technical Reports Server (NTRS)
Landgrebe, D. A. (Principal Investigator); Mobasseri, B. G.; Wiersma, D. J.; Wiswell, E. R.; Mcgillem, C. D.; Anuta, P. E.
1978-01-01
The author has identified the following significant results. The integration of the available methods provided the analyst with the unified scanner analysis package (USAP), the flexibility and versatility of which were superior to many previous integrated techniques. The USAP consisted of three main subsystems: (1) a spatial path, (2) a spectral path, and (3) a set of analytic classification accuracy estimators which evaluated the system performance. The spatial path consisted of satellite and/or aircraft data, a data correlation analyzer, the scanner IFOV, and a random noise model. The output of the spatial path was fed into the analytic classification and accuracy predictor. The spectral path consisted of laboratory and/or field spectral data, EXOSYS data retrieval, optimum spectral function calculation, data transformation, and statistics calculation. The output of the spectral path was fed into the stratified posterior performance estimator.
Ivanoff, Jason; Blagdon, Ryan; Feener, Stefanie; McNeil, Melanie; Muir, Paul H.
2014-01-01
The Simon effect refers to the performance (response time and accuracy) advantage for responses that spatially correspond to the task-irrelevant location of a stimulus. It has been attributed to a natural tendency to respond toward the source of stimulation. When location is task-relevant, however, and responses are intentionally directed away (incompatible) or toward (compatible) the source of the stimulation, there is also an advantage for spatially compatible responses over spatially incompatible responses. Interestingly, a number of studies have demonstrated a reversed, or reduced, Simon effect following practice with a spatial incompatibility task. One interpretation of this finding is that practicing a spatial incompatibility task disables the natural tendency to respond toward stimuli. Here, the temporal dynamics of this stimulus-response (S-R) transfer were explored with speed-accuracy trade-offs (SATs). All experiments used the mixed-task paradigm in which Simon and spatial compatibility/incompatibility tasks were interleaved across blocks of trials. In general, bidirectional S-R transfer was observed: while the spatial incompatibility task had an influence on the Simon effect, the task-relevant S-R mapping of the Simon task also had a small impact on congruency effects within the spatial compatibility and incompatibility tasks. These effects were generally greater when the task contexts were similar. Moreover, the SAT analysis of performance in the Simon task demonstrated that the tendency to respond to the location of the stimulus was not eliminated because of the spatial incompatibility task. Rather, S-R transfer from the spatial incompatibility task appeared to partially mask the natural tendency to respond to the source of stimulation with a conflicting inclination to respond away from it. These findings support the use of SAT methodology to quantitatively describe rapid response tendencies. PMID:25191217
Multi-Scale Approach for Predicting Fish Species Distributions across Coral Reef Seascapes
Pittman, Simon J.; Brown, Kerry A.
2011-01-01
Two of the major limitations to effective management of coral reef ecosystems are a lack of information on the spatial distribution of marine species and a paucity of data on the interacting environmental variables that drive distributional patterns. Advances in marine remote sensing, together with the novel integration of landscape ecology and advanced niche modelling techniques provide an unprecedented opportunity to reliably model and map marine species distributions across many kilometres of coral reef ecosystems. We developed a multi-scale approach using three-dimensional seafloor morphology and across-shelf location to predict spatial distributions for five common Caribbean fish species. Seascape topography was quantified from high resolution bathymetry at five spatial scales (5–300 m radii) surrounding fish survey sites. Model performance and map accuracy were assessed for two high-performing machine-learning algorithms: Boosted Regression Trees (BRT) and Maximum Entropy Species Distribution Modelling (MaxEnt). The three most important predictors were geographical location across the shelf, followed by a measure of topographic complexity. Predictor contribution differed among species, yet rarely changed across spatial scales. BRT provided ‘outstanding’ model predictions (AUC > 0.9) for three of five fish species. MaxEnt provided ‘outstanding’ model predictions for two of five species, with the remaining three models considered ‘excellent’ (AUC = 0.8–0.9). In contrast, MaxEnt spatial predictions were markedly more accurate (92% map accuracy) than BRT (68% map accuracy). We demonstrate that reliable spatial predictions for a range of key fish species can be achieved by modelling the interaction between the geographical location across the shelf and the topographic heterogeneity of seafloor structure.
This multi-scale, analytic approach is an important new cost-effective tool to accurately delineate essential fish habitat and support conservation prioritization in marine protected area design, zoning in marine spatial planning, and ecosystem-based fisheries management. PMID:21637787
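The AUC scores reported above can be reproduced in spirit with a rank-based (Mann–Whitney) AUC on synthetic presence/absence data driven by the two key predictors; the data and coefficients below are illustrative, not the study's BRT or MaxEnt models:

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC: probability that a random presence site
    outranks a random absence site (Mann-Whitney statistic)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

rng = np.random.default_rng(3)
n = 2000
shelf_pos = rng.uniform(0.0, 1.0, n)   # cross-shelf location (0 = inshore)
complexity = rng.uniform(0.0, 1.0, n)  # topographic complexity of the seafloor
logit = 4.0 * complexity - 3.0 * shelf_pos - 0.5
presence = (rng.uniform(0.0, 1.0, n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

score = 4.0 * complexity - 3.0 * shelf_pos   # model using the two key predictors
a = auc(score, presence)
```

The gap the study observed between AUC and map accuracy is worth noting: AUC measures ranking quality, while map accuracy additionally depends on where the presence/absence threshold is set.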
NASA Technical Reports Server (NTRS)
Myint, Soe W.; Mesev, Victor; Quattrochi, Dale; Wentz, Elizabeth A.
2013-01-01
Remote sensing methods used to generate base maps to analyze the urban environment rely predominantly on digital sensor data from space-borne platforms. This is due in part to new sources of high spatial resolution data covering the globe, a variety of multispectral and multitemporal sources, sophisticated statistical and geospatial methods, and compatibility with GIS data sources and methods. The goal of this chapter is to review the four groups of classification methods for digital sensor data from space-borne platforms: per-pixel, sub-pixel, object-based (spatial-based), and geospatial methods. Per-pixel methods are widely used methods that classify pixels into distinct categories based solely on the spectral and ancillary information within that pixel. They are used for everything from simple calculations of environmental indices (e.g., NDVI) to sophisticated expert systems that assign urban land covers. Researchers recognize, however, that even with the smallest pixel size the spectral information within a pixel is really a combination of multiple urban surfaces. Sub-pixel classification methods therefore aim to statistically quantify the mixture of surfaces to improve overall classification accuracy. While within-pixel variations exist, there is also significant evidence that groups of nearby pixels have similar spectral information and therefore belong to the same classification category. Object-oriented methods have emerged that group pixels prior to classification based on spectral similarity and spatial proximity. Classification accuracy using object-based methods shows significant success and promise for numerous urban applications. Like the object-oriented methods that recognize the importance of spatial proximity, geospatial methods for urban mapping also utilize neighboring pixels in the classification process.
The primary difference, though, is that geostatistical methods (e.g., spatial autocorrelation methods) are utilized during both the pre- and post-classification steps. Within this chapter, each of the four approaches is described in terms of scale and accuracy in classifying urban land use and land cover, and in terms of its range of urban applications. We demonstrate the overview of the four main classification groups in Figure 1, while Table 1 details the approaches with respect to classification requirements and procedures (e.g., reflectance conversion, steps before training sample selection, training samples, spatial approaches commonly used, classifiers, primary inputs for classification, output structures, number of output layers, and accuracy assessment). The chapter concludes with a brief summary of the methods reviewed and the challenges that remain in developing new classification methods for improving the efficiency and accuracy of mapping urban areas.
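A per-pixel index calculation of the kind mentioned (NDVI) is straightforward to sketch; the reflectance values and the vegetation threshold below are synthetic and purely illustrative:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, computed per pixel."""
    return (nir - red) / np.maximum(nir + red, 1e-9)

# Tiny synthetic scene: red and near-infrared reflectance for a 2x2 window
red = np.array([[0.05, 0.30],
                [0.06, 0.28]])
nir = np.array([[0.50, 0.32],
                [0.45, 0.30]])
v = ndvi(nir, red)
veg_mask = v > 0.3   # simple per-pixel rule: vegetated vs non-vegetated
```

This is the simplest per-pixel case; sub-pixel, object-based, and geospatial methods refine exactly this kind of decision by modelling mixtures within pixels and context among neighbouring pixels.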
COMPARISON OF DIRECT AND INDIRECT IMPACTS OF FECAL CONTAMINATION IN TWO DIFFERENT WATERSHEDS
There are many environmental parameters that could affect the accuracy of microbial source tracking (MST) methods. Spatial and temporal determinants are among the most common factors missing in MST studies. To understand how spatial and temporal variability affect the level of fe...
Distinct regions of the hippocampus are associated with memory for different spatial locations.
Jeye, Brittany M; MacEvoy, Sean P; Karanian, Jessica M; Slotnick, Scott D
2018-05-15
In the present functional magnetic resonance imaging (fMRI) study, we aimed to evaluate whether distinct regions of the hippocampus were associated with spatial memory for items presented in different locations of the visual field. In Experiment 1, during the study phase, participants viewed abstract shapes in the left or right visual field while maintaining central fixation. At test, old shapes were presented at fixation and participants classified each shape as previously in the "left" or "right" visual field followed by an "unsure"-"sure"-"very sure" confidence rating. Accurate spatial memory for shapes in the left visual field was isolated by contrasting accurate versus inaccurate spatial location responses. This contrast produced one hippocampal activation in which the interaction between item type and accuracy was significant. The analogous contrast for right visual field shapes did not produce activity in the hippocampus; however, the contrast of high confidence versus low confidence right-hits produced one hippocampal activation in which the interaction between item type and confidence was significant. In Experiment 2, the same paradigm was used but shapes were presented in each quadrant of the visual field during the study phase. Accurate memory for shapes in each quadrant, exclusively masked by accurate memory for shapes in the other quadrants, produced a distinct activation in the hippocampus. A multi-voxel pattern analysis (MVPA) of hippocampal activity revealed a significant correlation between behavioral spatial location accuracy and hippocampal MVPA accuracy across participants. The findings of both experiments indicate that distinct hippocampal regions are associated with memory for different visual field locations.
Evaluating an image-fusion algorithm with synthetic-image-generation tools
NASA Astrophysics Data System (ADS)
Gross, Harry N.; Schott, John R.
1996-06-01
An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high spatial resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low resolution pixel. Three spectral mixing models are compared: unconstrained, partially constrained, and fully constrained. In the partially constrained application, the endmember fractions are required to sum to one. In the fully constrained application, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high resolution pixels. The constraints mirror the low resolution constraints and maintain consistency with the low resolution fraction results. The algorithm can use one or more higher resolution sharpening images to locate the endmembers to high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG-developed image can be used to control the various error sources that are likely to impair the algorithm performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved upon. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors.
Although the hyperspectral images will be of modest to low resolution, fusing them with high resolution sharpening images will produce a higher spatial resolution land cover or material map.
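The unconstrained and partially constrained (sum-to-one) mixing models both have closed-form least-squares solutions; here is a sketch with synthetic endmembers (the fully constrained case, with fractions also bounded in [0, 1], additionally requires a quadratic program and is omitted):

```python
import numpy as np

def unmix(E, y):
    """Linear unmixing of pixel spectrum y against endmember matrix E
    (bands x endmembers): unconstrained LS fractions, then the
    closed-form sum-to-one (partially constrained) solution."""
    G = E.T @ E
    f_u = np.linalg.solve(G, E.T @ y)                 # unconstrained fractions
    ones = np.ones(E.shape[1])
    w = np.linalg.solve(G, ones)
    f_c = f_u + w * (1.0 - ones @ f_u) / (ones @ w)   # enforce sum(f) = 1
    return f_u, f_c

# Three fixed synthetic endmember spectra over six bands
E = np.array([[0.1, 0.8, 0.3],
              [0.2, 0.7, 0.4],
              [0.4, 0.5, 0.5],
              [0.6, 0.3, 0.5],
              [0.8, 0.2, 0.4],
              [0.9, 0.1, 0.2]])
f_true = np.array([0.6, 0.3, 0.1])                    # a 60/30/10 mixed pixel
rng = np.random.default_rng(4)
y = E @ f_true + rng.normal(0.0, 0.005, 6)            # spectrum with sensor noise
f_u, f_c = unmix(E, y)
```

As the abstract notes, noise can push unconstrained fractions slightly negative even for a physically sensible pixel, which is the motivation for the constrained variants.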
Interactive Mapping on Virtual Terrain Models Using RIMS (Real-time, Interactive Mapping System)
NASA Astrophysics Data System (ADS)
Bernardin, T.; Cowgill, E.; Gold, R. D.; Hamann, B.; Kreylos, O.; Schmitt, A.
2006-12-01
Recent and ongoing space missions are yielding new multispectral data for the surfaces of Earth and other planets at unprecedented rates and spatial resolution. With their high spatial resolution and widespread coverage, these data have opened new frontiers in observational Earth and planetary science. But they have also precipitated an acute need for new analytical techniques. To address this problem, we have developed RIMS, a Real-time, Interactive Mapping System that allows scientists to visualize, interact with, and map directly on, three-dimensional (3D) displays of georeferenced texture data, such as multispectral satellite imagery, that is draped over a surface representation derived from digital elevation data. The system uses a quadtree-based multiresolution method to render in real time high-resolution (3 to 10 m/pixel) data over large (800 km by 800 km) spatial areas. It allows users to map inside this interactive environment by generating georeferenced and attributed vector-based elements that are draped over the topography. We explain the technique using 15 m ASTER stereo-data from Iraq, P.R. China, and other remote locations because our particular motivation is to develop a technique that permits the detailed (10 m to 1000 m) neotectonic mapping over large (100 km to 1000 km long) active fault systems that is needed to better understand active continental deformation on Earth. RIMS also includes a virtual geologic compass that allows users to fit a plane to geologic surfaces and thereby measure their orientations. It also includes tools that allow 3D surface reconstruction of deformed and partially eroded surfaces such as folded bedding planes. These georeferenced map and measurement data can be exported to, or imported from, a standard GIS (geographic information systems) file format. 
Our interactive, 3D visualization and analysis system is designed for those who study planetary surfaces, including neotectonic geologists, geomorphologists, marine geophysicists, and planetary scientists. The strength of our system is that it combines interactive rendering with interactive mapping and measurement of features observed in topographic and texture data. Comparison with commercially available software indicates that our system improves mapping accuracy and efficiency. More importantly, it enables Earth scientists to rapidly achieve a deeper level of understanding of remotely sensed data, as observations can be made that are not possible with existing systems.
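The quadtree-based multiresolution rendering described above can be sketched as a view-dependent refinement loop; the tile dictionaries, the screen-space-error proxy, and the thresholds below are illustrative assumptions, not RIMS's actual data structures:

```python
# Hedged sketch of quadtree level-of-detail selection for terrain tiles.
# A tile is refined into four children while its projected size (tile
# extent divided by viewer distance) exceeds a threshold, so terrain near
# the viewer is rendered at high resolution and distant terrain coarsely.

def visible_tiles(node, viewer, threshold, depth=0, max_depth=8):
    """Collect the tiles to render for the current viewpoint."""
    cx = (node["xmin"] + node["xmax"]) / 2.0
    cy = (node["ymin"] + node["ymax"]) / 2.0
    size = node["xmax"] - node["xmin"]
    dist = max(((cx - viewer[0]) ** 2 + (cy - viewer[1]) ** 2) ** 0.5, 1e-9)
    # Screen-space error proxy: tile extent over viewer distance.
    if size / dist < threshold or depth == max_depth:
        return [node]
    tiles = []
    for xmin, xmax in ((node["xmin"], cx), (cx, node["xmax"])):
        for ymin, ymax in ((node["ymin"], cy), (cy, node["ymax"])):
            child = {"xmin": xmin, "xmax": xmax, "ymin": ymin, "ymax": ymax}
            tiles.extend(visible_tiles(child, viewer, threshold,
                                       depth + 1, max_depth))
    return tiles
```

For an 800 km by 800 km root tile, as in the abstract, a viewer near one corner receives deeply subdivided tiles there while the far corner stays a single coarse tile.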
NASA Astrophysics Data System (ADS)
Stampoulis, D.; Reager, J. T., II; David, C. H.; Famiglietti, J. S.; Andreadis, K.
2017-12-01
Despite the numerous advances in hydrologic modeling and improvements in Land Surface Models, an accurate representation of the water table depth (WTD) still does not exist. Data assimilation of observations from the joint NASA and DLR mission, the Gravity Recovery and Climate Experiment (GRACE), leads to statistically significant improvements in the accuracy of hydrologic models, ultimately resulting in more reliable estimates of water storage. However, the usually shallow groundwater compartment of the models presents a problem for GRACE assimilation techniques, as these satellite observations account for much deeper aquifers. To improve the accuracy of groundwater estimates and allow the representation of the WTD at fine spatial scales we implemented a novel approach that enables a large-scale data integration system to assimilate GRACE data. This was achieved by augmenting the Variable Infiltration Capacity (VIC) hydrologic model, which is the core component of the Regional Hydrologic Extremes Assessment System (RHEAS), a high-resolution modeling framework developed at the Jet Propulsion Laboratory (JPL) for hydrologic modeling and data assimilation. The model has insufficient subsurface characterization and therefore, to reproduce groundwater variability not only at shallow depths but also in deep aquifers, as well as to allow GRACE assimilation, a fourth soil layer of varying depth (~1000 meters) was added in VIC as the bottom layer. To initialize a water table in the model we used gridded global WTD data at 1 km resolution, which were spatially aggregated to match the model's resolution. Simulations were then performed to test the augmented model's ability to capture seasonal and inter-annual trends of groundwater. The 4-layer version of VIC was run with and without assimilating GRACE Total Water Storage anomalies (TWSA) over the Central Valley in California.
This is the first-ever assimilation of GRACE TWSA for the determination of realistic water table depths at the fine scales that are required for local water management. In addition, Open Loop and GRACE-assimilation simulations of water table depth were compared to in-situ data over the state of California, derived from observation wells operated and maintained by the U.S. Geological Survey.
Sub-micron resolution selected area electron channeling patterns.
Guyon, J; Mansour, H; Gey, N; Crimp, M A; Chalal, S; Maloufi, N
2015-02-01
Collection of selected area channeling patterns (SACPs) on a high resolution FEG-SEM is essential for carrying out quantitative electron channeling contrast imaging (ECCI) studies, as it facilitates accurate determination of the crystal plane normal with respect to the incident beam direction and thus allows control of the electron channeling conditions. Unfortunately, commercial SACP modes developed in the past were limited in spatial resolution and are often no longer offered. In this contribution we present a novel approach for collecting high resolution SACPs (HR-SACPs) developed on a Gemini column. This HR-SACP technique combines the first demonstrated sub-micron spatial resolution with a high angular accuracy of about 0.1°, at a convenient working distance of 10 mm. This innovative approach integrates the use of aperture alignment coils to rock the beam with a digitally calibrated beam shift procedure to ensure the rocking beam is maintained on a point of interest. Moreover, a new methodology to accurately measure SACP spatial resolution is proposed. While column considerations limit the rocking angle to 4°, this range is adequate to index the HR-SACP in conjunction with the pattern simulated from the approximate orientation deduced by EBSD. This new technique facilitates Accurate ECCI (A-ECCI) studies of very fine grained and/or highly strained materials. It also offers new insights for developing HR-SACP modes on new generation high-resolution electron columns. Copyright © 2014 Elsevier B.V. All rights reserved.
Single Photon Counting Large Format Imaging Sensors with High Spatial and Temporal Resolution
NASA Astrophysics Data System (ADS)
Siegmund, O. H. W.; Ertley, C.; Vallerga, J. V.; Cremer, T.; Craven, C. A.; Lyashenko, A.; Minot, M. J.
High time resolution astronomical and remote sensing applications have been addressed with sealed-tube schemes of microchannel plate based imaging, photon time tagging detectors. These are being realized with the advent of cross strip readout techniques with high performance encoding electronics and atomic layer deposited (ALD) microchannel plate technologies. Sealed tube devices up to 20 cm square have now been successfully implemented with sub-nanosecond timing and imaging. The objective is to provide sensors with large areas (25 cm2 to 400 cm2), spatial resolutions of <20 μm FWHM, and timing resolutions of <100 ps for dynamic imaging. New high efficiency photocathodes for the visible regime are discussed, which also allow response down below 150 nm for UV sensing. Borosilicate MCPs are providing high performance, and when processed with ALD techniques are providing order of magnitude lifetime improvements and enhanced photocathode stability. New developments include UV/visible photocathodes, ALD MCPs, and high resolution cross strip anodes for 100 mm detectors. Tests with 50 mm format cross strip readouts suitable for Planacon devices show spatial resolutions better than 20 μm FWHM, with good image linearity, while using low gain (~10^6). Current cross strip encoding electronics can accommodate event rates of >5 MHz with an event timing accuracy of 100 ps. High-performance ASIC versions of these electronics are in development, with event rate, power, and mass better suited to spaceflight instruments.
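Position readout on a cross strip anode reduces, at its core, to a charge-centroid calculation over the strips that sample the event's charge cloud. A minimal sketch, assuming a hypothetical strip pitch (real encoding electronics perform far more elaborate, noise-aware centroiding):

```python
def centroid_position(charges, pitch_mm=0.64):
    """Estimate the event position (mm) along one anode axis as the
    charge-weighted centroid of the per-strip signals.
    `pitch_mm` is an illustrative strip pitch, not a real device value."""
    total = sum(charges)
    if total <= 0:
        raise ValueError("no charge collected")
    return pitch_mm * sum(i * q for i, q in enumerate(charges)) / total
```

Because the centroid interpolates between strips, the achievable resolution can be much finer than the strip pitch, which is how sub-20 μm FWHM figures arise from sub-millimetre strips.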
Estimating and mapping ecological processes influencing microbial community assembly
Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; Konopka, Allan E.
2015-01-01
Ecological community assembly is governed by a combination of (i) selection resulting from among-taxa differences in performance; (ii) dispersal resulting from organismal movement; and (iii) ecological drift resulting from stochastic changes in population sizes. The relative importance and nature of these processes can vary across environments. Selection can be homogeneous or variable, and while dispersal is a rate, we conceptualize extreme dispersal rates as two categories: dispersal limitation results from limited exchange of organisms among communities, and homogenizing dispersal results from high levels of organism exchange. To estimate the influence and spatial variation of each process we extend a recently developed statistical framework, use a simulation model to evaluate the accuracy of the extended framework, and use the framework to examine subsurface microbial communities across two geologic formations. For each subsurface community we estimate the degree to which it is influenced by homogeneous selection, variable selection, dispersal limitation, and homogenizing dispersal. Our analyses revealed that the relative influences of these ecological processes vary substantially across communities even within a geologic formation. We further identify environmental and spatial features associated with each ecological process, which allowed mapping of spatial variation in ecological process influences. The resulting maps provide a new lens through which ecological systems can be understood; in the subsurface system investigated here they revealed that the influence of variable selection was associated with the rate at which redox conditions change with subsurface depth. PMID:25983725
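In the published framework this estimation typically proceeds pairwise from two null-model statistics, βNTI and RCbray. A sketch of the per-pair process assignment, assuming the commonly cited thresholds (|βNTI| > 2, |RCbray| > 0.95); the extended framework used here may differ in detail:

```python
def assign_process(bnti, rc_bray):
    """Classify the dominant assembly process for one pair of communities
    from its beta-nearest-taxon index (bNTI) and Raup-Crick (RC_bray)
    null-model deviations. Thresholds follow the commonly cited values."""
    if bnti < -2:
        return "homogeneous selection"   # less turnover than expected
    if bnti > 2:
        return "variable selection"      # more turnover than expected
    # |bNTI| <= 2: selection not dominant; consult taxonomic null model.
    if rc_bray > 0.95:
        return "dispersal limitation"
    if rc_bray < -0.95:
        return "homogenizing dispersal"
    return "undominated"                 # drift / weak processes
```

The relative influence of each process over a set of communities is then simply the fraction of pairs assigned to it, which is what gets mapped spatially.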
Advancements in MR Imaging of the Prostate: From Diagnosis to Interventions
Bonekamp, David; Jacobs, Michael A.; El-Khouli, Riham; Stoianovici, Dan
2011-01-01
Prostate cancer is the most frequently diagnosed cancer in males and the second leading cause of cancer-related death in men. Assessment of prostate cancer can be divided into detection, localization, and staging; accurate assessment is a prerequisite for optimal clinical management and therapy selection. Magnetic resonance (MR) imaging has been shown to be of particular help in localization and staging of prostate cancer. Traditional prostate MR imaging has been based on morphologic imaging with standard T1-weighted and T2-weighted sequences, which has limited accuracy. Recent advances include additional functional and physiologic MR imaging techniques (diffusion-weighted imaging, MR spectroscopy, and perfusion imaging), which allow extension of the obtainable information beyond anatomic assessment. Multiparametric MR imaging provides the highest accuracy in diagnosis and staging of prostate cancer. In addition, improvements in MR imaging hardware and software (3-T vs 1.5-T imaging) continue to improve spatial and temporal resolution and the signal-to-noise ratio of MR imaging examinations. Another recent advancement in the field is MR imaging guidance for targeted prostate biopsy, which is an alternative to the current standard of transrectal ultrasonography–guided systematic biopsy. © RSNA, 2011 PMID:21571651
Development of a method for personal, spatiotemporal exposure assessment.
Adams, Colby; Riggs, Philip; Volckens, John
2009-07-01
This work describes the development and evaluation of a high resolution, space- and time-referenced sampling method for personal exposure assessment to airborne particulate matter (PM). This method integrates continuous measures of personal PM levels with the corresponding location-activity (i.e. work/school, home, transit) of the subject. Monitoring equipment includes a small, portable global positioning system (GPS) receiver, a miniature aerosol nephelometer, and an ambient temperature monitor to estimate the location, time, and magnitude of personal exposure to particulate matter air pollution. Precision and accuracy of each component, as well as the integrated method performance, were tested in a combination of laboratory and field tests. Spatial data were apportioned into pre-determined location-activity categories (i.e. work/school, home, transit) with a simple, temporospatially based algorithm. The apportioning algorithm was extremely effective, with an overall accuracy of 99.6%. This method allows examination of an individual's estimated exposure through space and time, which may provide new insights into exposure-activity relationships not possible with traditional exposure assessment techniques (i.e., time-integrated, filter-based measurements). Furthermore, the method is applicable to any contaminant or stressor that can be measured on an individual with a direct-reading sensor.
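A temporospatially based apportioning rule of the kind described can be as simple as proximity tests against known anchor locations; the 100 m buffer, coordinate conventions, and transit fallback below are illustrative assumptions, not the paper's algorithm:

```python
from math import hypot

def apportion(x, y, home, work, radius_m=100.0):
    """Assign one GPS fix (x, y, metres in a local projected coordinate
    system) to a location-activity category by proximity to anchor points.
    Fixes near neither anchor default to 'transit'."""
    if hypot(x - home[0], y - home[1]) < radius_m:
        return "home"
    if hypot(x - work[0], y - work[1]) < radius_m:
        return "work/school"
    return "transit"
```

Pairing each categorized fix with the concurrent nephelometer reading then yields per-category exposure summaries (e.g., mean PM at home vs in transit).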
Detection of bio-signature by microscopy and mass spectrometry
NASA Astrophysics Data System (ADS)
Tulej, M.; Wiesendanger, R.; Neuland, M. B.; Meyer, S.; Wurz, P.; Neubeck, A.; Ivarsson, M.; Riedo, V.; Moreno-Garcia, P.; Riedo, A.; Knopp, G.
2017-09-01
We demonstrate the detection of micron-sized fossilized bacteria by means of microscopy and mass spectrometry. The characteristic structures of lifelike forms are visualized with micrometre spatial resolution, and mass spectrometric analyses deliver the elemental and isotope composition of the host and fossilized materials. Our studies show that high selectivity in isolating fossilized material from the host phase can be achieved by combining microscope visualization (for location), a laser ablation ion source with a sufficiently small laser spot size, and a depth profiling method. The mass spectrometric measurements can be conducted with sufficiently high accuracy and precision to yield the quantitative elemental and isotope composition of micron-sized objects. The current performance of the instrument allows measurement of isotope fractionation at the per mil level and supports identification of the origin of the investigated species by combining optical visualization of the samples (morphology and texture) with chemical characterization of the host and of the micron-sized structures embedded in it. Our isotope analyses involved the bio-relevant isotopes of B, C, S, and Ni, which could be measured with sufficient accuracy to draw conclusions about the nature of the micron-sized objects.
Comments on the Diffusive Behavior of Two Upwind Schemes
NASA Technical Reports Server (NTRS)
Wood, William A.; Kleb, William L.
1998-01-01
The diffusive characteristics of two upwind schemes, multi-dimensional fluctuation splitting and locally one-dimensional finite volume, are compared for scalar advection-diffusion problems. Algorithms for the two schemes are developed for node-based data representation on median-dual meshes associated with unstructured triangulations in two spatial dimensions. Four model equations are considered: linear advection, non-linear advection, diffusion, and advection-diffusion. Modular coding is employed to isolate the effects of the two approaches for upwind flux evaluation, allowing for head-to-head accuracy and efficiency comparisons. Both the stability of compressive limiters and the amount of artificial diffusion generated by the schemes are found to be grid-orientation dependent, with the fluctuation splitting scheme producing less artificial diffusion than the finite volume scheme. Convergence rates are compared for the combined advection-diffusion problem, with a speedup of 2.5 seen for fluctuation splitting versus finite volume when solved on the same mesh. However, accurate solutions to problems with small diffusion coefficients can be achieved on coarser meshes using fluctuation splitting rather than finite volume, so that when comparing convergence rates to reach a given accuracy, fluctuation splitting shows a speedup of 29 over finite volume.
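As a minimal illustration of the upwind idea shared by both schemes (not the two-dimensional fluctuation splitting or finite volume algorithms themselves), consider a first-order upwind step for scalar linear advection on a periodic 1D grid:

```python
def upwind_step(u, c, dx, dt):
    """One explicit first-order upwind step for u_t + c u_x = 0 with c > 0
    on a periodic grid: each node takes its flux from the upwind (left)
    neighbour. Stable for CFL number nu = c*dt/dx in [0, 1]; the scheme's
    truncation error acts as artificial diffusion, the quantity the paper
    compares between its two 2D schemes."""
    nu = c * dt / dx
    return [u[i] - nu * (u[i] - u[i - 1]) for i in range(len(u))]
```

At nu = 1 the update reduces to an exact shift (zero artificial diffusion); for nu < 1 the profile smears, mimicking physical diffusion.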
Diffusion Characteristics of Upwind Schemes on Unstructured Triangulations
NASA Technical Reports Server (NTRS)
Wood, William A.; Kleb, William L.
1998-01-01
The diffusive characteristics of two upwind schemes, multi-dimensional fluctuation splitting and dimensionally-split finite volume, are compared for scalar advection-diffusion problems. Algorithms for the two schemes are developed for node-based data representation on median-dual meshes associated with unstructured triangulations in two spatial dimensions. Four model equations are considered: linear advection, non-linear advection, diffusion, and advection-diffusion. Modular coding is employed to isolate the effects of the two approaches for upwind flux evaluation, allowing for head-to-head accuracy and efficiency comparisons. Both the stability of compressive limiters and the amount of artificial diffusion generated by the schemes are found to be grid-orientation dependent, with the fluctuation splitting scheme producing less artificial diffusion than the dimensionally-split finite volume scheme. Convergence rates are compared for the combined advection-diffusion problem, with a speedup of 2-3 seen for fluctuation splitting versus finite volume when solved on the same mesh. However, accurate solutions to problems with small diffusion coefficients can be achieved on coarser meshes using fluctuation splitting rather than finite volume, so that when comparing convergence rates to reach a given accuracy, fluctuation splitting shows a speedup of 20-25 over finite volume.
A hierarchical spatial model for well yield in complex aquifers
NASA Astrophysics Data System (ADS)
Montgomery, J.; O'Sullivan, F.
2017-12-01
Efficiently siting and managing groundwater wells requires reliable estimates of the amount of water that can be produced, or the well yield. This can be challenging to predict in highly complex, heterogeneous fractured aquifers due to the uncertainty around local hydraulic properties. Promising statistical approaches have been advanced in recent years. For instance, kriging and multivariate regression analysis have been applied to well test data with limited but encouraging levels of prediction accuracy. Additionally, some analytical solutions to diffusion in homogeneous porous media have been used to infer "effective" properties consistent with observed flow rates or drawdown. However, this is an under-specified inverse problem with substantial and irreducible uncertainty. We describe a flexible machine learning approach capable of combining diverse datasets with constraining physical and geostatistical models for improved well yield prediction accuracy and uncertainty quantification. Our approach can be implemented within a hierarchical Bayesian framework using Markov Chain Monte Carlo, which allows for additional sources of information to be incorporated in priors to further constrain and improve predictions and reduce the model order. We demonstrate the usefulness of this approach using data from over 7,000 wells in a fractured bedrock aquifer.
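The hierarchical Bayesian fitting via Markov Chain Monte Carlo can be illustrated, at far lower model order than the paper's, by a random-walk Metropolis chain for the posterior mean of log well yield under a normal prior; every prior, scale, and step size here is invented for illustration:

```python
import math
import random
import statistics

def posterior_mean_samples(log_yields, prior_mu=0.0, prior_sd=2.0,
                           noise_sd=1.0, n_iter=20000, step=0.3, seed=1):
    """Random-walk Metropolis samples of the posterior mean of log yield
    under an illustrative normal prior and normal observation noise."""
    rng = random.Random(seed)

    def log_post(mu):
        lp = -0.5 * ((mu - prior_mu) / prior_sd) ** 2          # prior
        lp += sum(-0.5 * ((x - mu) / noise_sd) ** 2 for x in log_yields)
        return lp

    mu = statistics.mean(log_yields)  # start at the sample mean
    samples = []
    for _ in range(n_iter):
        prop = mu + rng.gauss(0.0, step)
        # Metropolis acceptance; min(0, .) keeps exp() from overflowing.
        if rng.random() < math.exp(min(0.0, log_post(prop) - log_post(mu))):
            mu = prop
        samples.append(mu)
    return samples[n_iter // 2:]  # discard burn-in
```

In a genuinely hierarchical version, the prior parameters would themselves be sampled (e.g., per-region means drawn from an aquifer-level distribution), which is where additional geologic information can enter as constraints.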
Direct measurement of bull's-eye nanoantenna metal loss
NASA Astrophysics Data System (ADS)
Hassani Nia, Iman; Jang, Sung J.; Memis, Omer G.; Gelfand, Ryan; Mohseni, Hooman
2013-09-01
The loss in optical antennas can affect their performance for practical use in many branches of science, such as biological and solar cell applications. A key question, however, is how much of the loss is due to Joule heating in the metals. This heating affects the efficiency of solar cells and is very important for single photon detection, as well as for applications where high heat generation in nanoantennas is desirable, for example payload release for cancer treatment. A few groups have performed temperature measurements by methods such as Raman spectroscopy or fluorescence polarization anisotropy. The latter method, which is more reliable than Raman spectroscopy, requires the deposition of fluorescent molecules on the antenna surface; the rotation of the molecules, and hence the polarization of their radiation, depends on the surface temperature. The reported temperature measurement accuracy of this method is about 0.1 °C. Here we present a method based on thermo-reflectance that allows better temperature accuracy as well as a spatial resolution of 500 nm. Moreover, this method does not require the addition of new materials to the nanoantenna. We present the measured heat dissipation from bull's-eye nanoantennas and compare it with 3D simulation results.
Brezovich, Ivan A; Popple, Richard A; Duan, Jun; Shen, Sui; Wu, Xingen; Benhabib, Sidi; Huang, Mi; Cardan, Rex A
2016-07-08
Stereotactic radiosurgery (SRS) places great demands on spatial accuracy. Steel BBs used as markers in quality assurance (QA) phantoms are clearly visible in MV and planar kV images, but artifacts compromise cone-beam CT (CBCT) isocenter localization. The purpose of this work was to develop a QA phantom for measuring isocenter congruence of planar kV, MV, and CBCT imaging systems with sub-mm accuracy, and to design a practical QA procedure that includes daily Winston-Lutz (WL) tests and does not require computer aid. The salient feature of the phantom (Universal Alignment Ball (UAB)) is a novel marker for precisely localizing isocenters of CBCT, planar kV, and MV beams. It consists of a 25.4 mm diameter sphere of polymethylmethacrylate (PMMA) containing a concentric 6.35 mm diameter tungsten carbide ball. The large density difference between PMMA and the polystyrene foam in which the PMMA sphere is embedded yields a sharp image of the sphere for accurate CBCT registration. The tungsten carbide ball serves in finding isocenter in planar kV and MV images and in doing WL tests. With the aid of the UAB, CBCT isocenter was located within 0.10 ± 0.05 mm of its true position, and MV isocenter was pinpointed in planar images to within 0.06 ± 0.04 mm. In clinical morning QA tests extending over an 18-month period the UAB consistently yielded measurements with sub-mm accuracy. The average distance between isocenter defined by orthogonal kV images and CBCT measured 0.16 ± 0.12 mm. In WL tests the central ray of anterior beams defined by a 1.5 × 1.5 cm2 MLC field agreed with CBCT isocenter within 0.03 ± 0.14 mm in the lateral direction and within 0.10 ± 0.19 mm in the longitudinal direction. Lateral MV beams approached CBCT isocenter within 0.00 ± 0.11 mm in the vertical direction and within -0.14 ± 0.15 mm longitudinally. It took therapists about 10 min to do the tests.
The novel QA phantom allows pinpointing CBCT and MV isocenter positions to better than 0.2 mm, using visual image registration. Under CBCT guidance, MLC-defined beams are deliverable with sub-mm spatial accuracy. The QA procedure is practical for daily tests by therapists. © 2016 The Authors
NASA Technical Reports Server (NTRS)
Pagnutti, Mary
2006-01-01
This viewgraph presentation reviews the creation of a prototype algorithm for atmospheric correction using high spatial resolution earth observing imaging systems. The objective of the work was to evaluate the accuracy of a prototype algorithm that uses satellite-derived atmospheric products to generate scene reflectance maps for high spatial resolution (HSR) systems. The presentation focuses on preliminary results of the satellite-based atmospheric correction algorithm only.
NASA Astrophysics Data System (ADS)
Kamal, Muhammad; Johansen, Kasper
2017-10-01
Effective mangrove management requires spatially explicit information on mangrove tree crowns as a basis for ecosystem diversity studies and health assessment. Accuracy assessment is an integral part of any mapping activity to measure the effectiveness of the classification approach. In geographic object-based image analysis (GEOBIA), assessment of the geometric accuracy (shape, symmetry and location) of the image objects created by image segmentation is required. In this study we used an explicit area-based accuracy assessment to measure the degree of similarity between the results of the classification and reference data from different aspects, including overall quality (OQ), user's accuracy (UA), producer's accuracy (PA) and overall accuracy (OA). We developed a rule set to delineate the mangrove tree crowns using a WorldView-2 pan-sharpened image. The reference map was obtained by visual delineation of the mangrove tree crown boundaries from a very high spatial resolution aerial photograph (7.5 cm pixel size). Ten random points with a 10 m radius circular buffer were created to calculate the area-based accuracy assessment. The resulting circular polygons were used to clip both the classified image objects and the reference map for area comparisons. In this case, the area-based accuracy assessment resulted in 64% and 68% for the OQ and OA, respectively. The overall quality shows the class-related area accuracy: the area correctly classified as tree crowns was 64% of the total tree crown area. On the other hand, the overall accuracy of 68% was calculated as the percentage of all correctly classified classes (tree crowns and canopy gaps) relative to the total class area (the entire image). Overall, the area-based accuracy assessment was simple to implement and easy to interpret. It also shows explicitly the omission and commission error variations of object boundary delineation with colour-coded polygons.
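In terms of agreement areas between the classification and the reference within the assessment buffers, the four measures reduce to simple area ratios; a sketch with hypothetical areas in m² (the inputs are invented, not the study's values):

```python
def area_based_accuracy(tp, fp, fn, tn):
    """Area-based accuracy metrics for a two-class object-based map.
    tp: area classified AND referenced as tree crown; fp: classified
    crown only (commission); fn: referenced crown only (omission);
    tn: agreed canopy gap. All areas in the same units (e.g. m^2)."""
    oq = tp / (tp + fp + fn)              # overall quality of crown class
    ua = tp / (tp + fp)                   # user's accuracy
    pa = tp / (tp + fn)                   # producer's accuracy
    oa = (tp + tn) / (tp + fp + fn + tn)  # overall accuracy, both classes
    return oq, ua, pa, oa
```

Note that OQ penalizes both commission and omission of the crown class, while OA credits agreement on canopy gaps too, which is why OA can exceed OQ as in the study's 68% vs 64%.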
Ma, Zhenling; Wu, Xiaoliang; Yan, Li; Xu, Zhenliang
2017-01-26
With the development of space technology and the performance of remote sensors, high-resolution satellites are continuously launched by countries around the world. Due to its high efficiency, large coverage, and freedom from regional access restrictions, satellite imagery has become one of the important means of acquiring geospatial information. This paper explores geometric processing using satellite imagery without ground control points (GCPs). The outcome of spatial triangulation is introduced for geo-positioning as repeated observation. Results from combining block adjustment with non-oriented new images indicate the feasibility of geometric positioning with the repeated observation. GCPs are a must when high accuracy is demanded in conventional block adjustment; the accuracy of direct georeferencing with repeated observation without GCPs is superior to conventional forward intersection and even approaches that of conventional block adjustment with GCPs. The conclusion is drawn that taking existing oriented imagery as repeated observation enhances the effective utilization of previous spatial triangulation achievements, enabling repeated observation to improve accuracy by increasing the base-height ratio and the number of redundant observations. Georeferencing tests using data from multiple sensors and platforms with the repeated observation will be carried out in follow-up research.
Moerel, Michelle; De Martino, Federico; Kemper, Valentin G; Schmitter, Sebastian; Vu, An T; Uğurbil, Kâmil; Formisano, Elia; Yacoub, Essa
2018-01-01
Following rapid technological advances, ultra-high field functional MRI (fMRI) enables exploring correlates of neuronal population activity at an increasing spatial resolution. However, as the fMRI blood-oxygenation-level-dependent (BOLD) contrast is a vascular signal, the spatial specificity of fMRI data is ultimately determined by the characteristics of the underlying vasculature. At 7T, fMRI measurement parameters determine the relative contribution of the macro- and microvasculature to the acquired signal. Here we investigate how these parameters affect relevant high-end fMRI analyses such as encoding, decoding, and submillimeter mapping of voxel preferences in the human auditory cortex. Specifically, we compare a T2*-weighted fMRI dataset, obtained with 2D gradient echo (GE) EPI, to a predominantly T2-weighted dataset obtained with 3D GRASE. We first investigated the decoding accuracy based on two encoding models that represented different hypotheses about auditory cortical processing. This encoding/decoding analysis profited from the large spatial coverage and sensitivity of the T2*-weighted acquisitions, as evidenced by a significantly higher prediction accuracy in the GE-EPI dataset compared to the 3D GRASE dataset for both encoding models. The main disadvantage of the T2*-weighted GE-EPI dataset for encoding/decoding analyses was that the prediction accuracy exhibited cortical depth dependent vascular biases. However, we propose that the comparison of prediction accuracy across the different encoding models may be used as a post processing technique to salvage the spatial interpretability of the GE-EPI cortical depth-dependent prediction accuracy. Second, we explored the mapping of voxel preferences. Large-scale maps of frequency preference (i.e., tonotopy) were similar across datasets, yet the GE-EPI dataset was preferable due to its larger spatial coverage and sensitivity.
However, submillimeter tonotopy maps revealed biases in assigned frequency preference and selectivity for the GE-EPI dataset, but not for the 3D GRASE dataset. Thus, a T2-weighted acquisition is recommended if high specificity in tonotopic maps is required. In conclusion, different fMRI acquisitions were better suited for different analyses. It is therefore critical that any sequence parameter optimization considers the eventual intended fMRI analyses and the nature of the neuroscience questions being asked. Copyright © 2017 Elsevier Inc. All rights reserved.
A simple enrichment correction factor for improving erosion estimation by rare earth oxide tracers
USDA-ARS?s Scientific Manuscript database
Spatially distributed soil erosion data are needed to better understand soil erosion processes and validate distributed erosion models. Rare earth element (REE) oxides were used to generate spatial erosion data. However, a general concern about the accuracy of the technique arose due to selective ...
A new head phantom with realistic shape and spatially varying skull resistivity distribution.
Li, Jian-Bo; Tang, Chi; Dai, Meng; Liu, Geng; Shi, Xue-Tao; Yang, Bin; Xu, Can-Hua; Fu, Feng; You, Fu-Sheng; Tang, Meng-Xing; Dong, Xiu-Zhen
2014-02-01
Brain electrical impedance tomography (EIT) is an emerging method for monitoring brain injuries. To effectively evaluate brain EIT systems and reconstruction algorithms, we have developed a novel head phantom that features realistic anatomy and spatially varying skull resistivity. The head phantom was created with three layers, representing scalp, skull, and brain tissues. The fabrication process entailed 3-D printing of the anatomical geometry for mold creation followed by casting to ensure high geometrical precision and accuracy of the resistivity distribution. We evaluated the accuracy and stability of the phantom. Results showed that the head phantom achieved high geometric accuracy, accurate skull resistivity values, and good stability over time and in the frequency domain. Experimental impedance reconstructions performed using the head phantom and computer simulations were found to be consistent for the same perturbation object. In conclusion, this new phantom could provide a more accurate test platform for brain EIT research.
Identifying musical pieces from fMRI data using encoding and decoding models.
Hoefle, Sebastian; Engel, Annerose; Basilio, Rodrigo; Alluri, Vinoo; Toiviainen, Petri; Cagy, Maurício; Moll, Jorge
2018-02-02
Encoding models can reveal and decode neural representations in the visual and semantic domains. However, a thorough understanding of how distributed information in auditory cortices and the temporal evolution of music contribute to model performance is still lacking in the musical domain. We measured fMRI responses during naturalistic music listening and constructed a two-stage approach that first mapped musical features in auditory cortices and then decoded novel musical pieces. We then probed the influence of stimulus duration (number of time points) and spatial extent (number of voxels) on decoding accuracy. Our approach revealed a linear increase in accuracy with duration and a point of optimal model performance for the spatial extent. We further showed that Shannon entropy is a driving factor, boosting accuracy up to 95% for music with the highest information content. These findings provide key insights for future decoding and reconstruction algorithms and open new avenues for possible clinical applications.
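A two-stage approach of this kind (fit an encoding model from stimulus features to voxel responses, then identify novel pieces by matching predicted to observed response patterns) can be sketched with ridge regression and correlation-based identification; the dimensions, regularization, and synthetic setup are placeholders, not the paper's settings:

```python
import numpy as np

def fit_encoding(features, responses, alpha=1.0):
    """Stage 1: ridge-regression map from stimulus features (n_samples x
    n_features) to voxel responses (n_samples x n_voxels)."""
    X, Y = features, responses
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

def identify(candidate_features, observed, weights):
    """Stage 2: return the index of the candidate piece whose predicted
    voxel pattern correlates best with the observed fMRI response."""
    predictions = candidate_features @ weights
    correlations = [np.corrcoef(p, observed)[0, 1] for p in predictions]
    return int(np.argmax(correlations))
```

Sweeping the number of time points or voxels fed into `identify` is then a direct way to probe the duration and spatial-extent effects the abstract reports.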
A simplified analytical random walk model for proton dose calculation
NASA Astrophysics Data System (ADS)
Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.
2016-10-01
We propose an analytical random walk model for proton dose calculation in a laterally homogeneous medium. A formula for the spatial fluence distribution of primary protons is derived. The variance of the spatial distribution is in the form of a distance-squared law of the angular distribution. To improve the accuracy of dose calculation in the Bragg peak region, the energy spectrum of the protons is used. The accuracy is validated against Monte Carlo simulation in water phantoms with either air gaps or a slab of bone inserted. The algorithm accurately reflects the dose dependence on the depth of the bone and can deal with small-field dosimetry. We further applied the algorithm to patients’ cases in the highly heterogeneous head and pelvis sites and used a gamma test to show the reasonable accuracy of the algorithm in these sites. Our algorithm is fast for clinical use.
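The "distance-squared law" relating the lateral variance to the angular distribution can be written in standard Fermi-Eyges form (an assumption about the paper's notation, which may differ):

```latex
\sigma_x^2(z) \;=\; \int_0^z (z - z')^2 \, T(z') \, \mathrm{d}z',
\qquad
T(z') \equiv \frac{\mathrm{d}\sigma_\theta^2}{\mathrm{d}z'} ,
```

where \(\sigma_\theta^2(z)\) is the variance of the angular distribution and \(T\) the scattering power: each depth element contributes to the lateral spread in proportion to the square of the remaining distance \((z - z')\) over which its angular deflection is projected.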
Results and Error Estimates from GRACE Forward Modeling over Greenland, Canada, and Alaska
NASA Astrophysics Data System (ADS)
Bonin, J. A.; Chambers, D. P.
2012-12-01
Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Greenland and Antarctica. However, the accuracy of the forward model technique has not been determined, nor is it known how the distribution of the local basins affects the results. We use a "truth" model composed of hydrology and ice-melt slopes as an example case, to estimate the uncertainties of this forward modeling method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We then apply these optimal parameters in a forward model estimate created from RL05 GRACE data. We compare the resulting mass slopes with the expected systematic errors from the simulation, as well as GIA and basic trend-fitting uncertainties. We also consider whether specific regions (such as Ellesmere Island and Baffin Island) can be estimated reliably using our optimal basin layout.
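A minimal sketch of the weighted least-squares projection onto predefined basins, assuming non-overlapping basins and an invented four-cell grid (none of these values come from the study):

```python
# Toy weighted least-squares "forward model": project gridded
# mass-change observations onto two predefined basin patterns,
# m = (G^T W G)^{-1} G^T W d. Layout, weights, and masses invented.
G = [[1.0, 0.0],   # grid cell 1 belongs to basin 1
     [1.0, 0.0],   # grid cell 2 belongs to basin 1
     [0.0, 1.0],   # grid cell 3 belongs to basin 2
     [0.0, 1.0]]   # grid cell 4 belongs to basin 2
w = [1.0, 1.0, 2.0, 2.0]        # per-observation weights (diagonal W)
true_mass = [3.0, -1.0]         # per-basin mass change
d = [sum(gij * mj for gij, mj in zip(row, true_mass)) for row in G]

# Because the basins do not overlap, G^T W G is diagonal and the
# weighted normal equations decouple into one ratio per basin.
m_hat = []
for j in range(2):
    num = sum(w[i] * G[i][j] * d[i] for i in range(4))
    den = sum(w[i] * G[i][j] * G[i][j] for i in range(4))
    m_hat.append(num / den)
```

With noise-free observations the per-basin estimates recover the true values exactly; spatial leakage in real GRACE data is what makes the basin layout a design parameter.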
[Proton imaging applications for proton therapy: state of the art].
Amblard, R; Floquet, V; Angellier, G; Hannoun-Lévi, J M; Hérault, J
2015-04-01
Proton therapy allows highly precise irradiation of the tumour volume with a low dose delivered to the healthy tissues. The steep dose gradients involved and the high treatment conformity require precise knowledge of the proton range in matter and of the target volume position relative to the beam. Proton imaging thus allows an improvement in treatment accuracy and, thereby, in treatment quality. Initially suggested in 1963, radiographic imaging with protons is still not used in clinical routine. The principal difficulty is the lack of spatial resolution induced by the multiple Coulomb scattering of protons with nuclei. Moreover, its realization for all clinical locations requires relatively high energies that were previously not considered for clinical routine. Abandoned for some time in favor of X-ray technologies, research into new imaging methods using protons is back in the news because of the increase in proton radiation therapy centers in the world. This article presents a non-exhaustive state of the art in proton imaging. Copyright © 2015 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
Non-visual spatial tasks reveal increased interactions with stance postural control.
Woollacott, Marjorie; Vander Velde, Timothy
2008-05-07
The current investigation aimed to contrast the level and quality of dual-task interactions resulting from the combined performance of a challenging primary postural task and three specific, yet categorically dissociated, secondary central executive tasks. Experiments determined the extent to which modality- (visual vs. auditory) and code- (non-spatial vs. spatial) specific cognitive resources contributed to postural interference in young adults (n=9) in a dual-task setting. We hypothesized that the different forms of executive n-back task processing employed (visual-object, auditory-object and auditory-spatial) would display contrasting levels of interaction with tandem Romberg stance postural control, and that processing within the spatial domain would prove most vulnerable to dual-task interference. Across all cognitive tasks employed, including auditory-object (aOBJ), auditory-spatial (aSPA), and visual-object (vOBJ) tasks, increasing n-back task complexity produced correlated increases in verbal reaction time measures. Increasing cognitive task complexity also resulted in consistent decreases in judgment accuracy. Postural performance was significantly influenced by the type of cognitive loading delivered. At comparable levels of cognitive task difficulty (n-back demands and accuracy judgments), the performance of challenging auditory-spatial tasks produced significantly greater levels of postural sway than either the auditory-object or visual-object based tasks. These results suggest that it is the employment of limited non-visual spatially based coding resources that may underlie previously observed visual dual-task interference effects with stance postural control in healthy young adults.
Fast range measurement of spot scanning proton beams using a volumetric liquid scintillator detector
Hui, CheukKai; Robertson, Daniel; Alsanea, Fahed; Beddar, Sam
2016-01-01
Accurate confirmation and verification of the range of spot scanning proton beams is crucial for correct dose delivery. Current methods to measure proton beam range using ionization chambers are either time-consuming or result in measurements with poor spatial resolution. The large-volume liquid scintillator detector allows real-time measurements of the entire dose profile of a spot scanning proton beam. Thus, liquid scintillator detectors are an ideal tool for measuring the proton beam range for commissioning and quality assurance. However, optical artefacts may decrease the accuracy of measuring the proton beam range within the scintillator tank. The purpose of the current study was to 1) develop a geometric calibration system to accurately calculate physical distances within the liquid scintillator detector, taking into account optical artefacts; and 2) assess the accuracy, consistency, and robustness of proton beam range measurement using the liquid scintillator detector with our geometric calibration system. The range of the proton beam was measured with the calibrated liquid scintillator system and was compared to the nominal range. Measurements were made on three different days to evaluate the setup robustness from day to day, and three sets of measurements were made for each day to evaluate the consistency from delivery to delivery. All proton beam ranges measured using the liquid scintillator system were within half a millimeter of the nominal range. The delivery-to-delivery standard deviation of the range measurement was 0.04 mm, and the day-to-day standard deviation was 0.10 mm. In addition to the accuracy and robustness demonstrated by these results when our geometric calibration system was used, the liquid scintillator system allowed the range of all 94 proton beams to be measured in just two deliveries, making the liquid scintillator detector a perfect tool for range measurement of spot scanning proton beams. PMID:27274863
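The repeatability figures above can be illustrated by the two spread statistics computed on a toy set of range measurements (three deliveries on each of three days; all values below are invented, not the study's data):

```python
from statistics import mean, pstdev

# Toy repeatability analysis: range measurements in mm for three
# deliveries on each of three days (values are illustrative).
ranges_mm = {
    "day1": [120.02, 120.06, 120.10],
    "day2": [119.95, 119.99, 120.03],
    "day3": [120.11, 120.15, 120.19],
}

# Delivery-to-delivery spread: std within each day, averaged over days.
within_day = mean(pstdev(v) for v in ranges_mm.values())
# Day-to-day spread: std of the daily mean ranges.
between_day = pstdev(mean(v) for v in ranges_mm.values())
```

In the study, the delivery-to-delivery spread (0.04 mm) was smaller than the day-to-day spread (0.10 mm), consistent with setup, rather than delivery, dominating the variability.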
Classification Accuracy Increase Using Multisensor Data Fusion
NASA Astrophysics Data System (ADS)
Makarau, A.; Palubinskas, G.; Reinartz, P.
2011-09-01
The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to confusion of materials such as different roofs, pavements, roads, etc., and therefore to wrong interpretation and use of classification products. Employing hyperspectral data is another solution, but their low spatial resolution (compared to multispectral data) restricts their use for many applications. A further improvement can be achieved by fusing multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for very high resolution SAR and multispectral data fusion for automatic classification in urban areas. Single-polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows a relevant combination of multisource data following consensus theory. The classification is not influenced by the limitations of dimensionality, and the calculation complexity depends primarily on the dimensionality-reduction step. Fusion of single-polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy.
A comparison to classification results for WorldView-2 multispectral data (8 spectral bands) is provided, and a numerical evaluation of the method against other established methods illustrates the advantage in classification accuracy for many classes such as buildings, low vegetation, sport objects, forest, roads, railroads, etc.
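A minimal sketch of consensus-style multisensor aggregation, loosely in the spirit of the framework described above; the sensor names, weights, and class probabilities are invented, and a simple weighted log-linear pool stands in for the Bayesian or neural-network aggregation step:

```python
import math

# Combine per-sensor class probabilities for one pixel with a weighted
# log-linear pool and pick the best class. All values are illustrative.
classes = ["building", "road", "vegetation"]
sensor_probs = {
    "sar":           [0.5, 0.4, 0.1],
    "multispectral": [0.6, 0.1, 0.3],
    "dsm":           [0.7, 0.2, 0.1],
}
weights = {"sar": 1.0, "multispectral": 1.0, "dsm": 0.5}

scores = [
    sum(weights[name] * math.log(p[i]) for name, p in sensor_probs.items())
    for i in range(len(classes))
]
fused_class = classes[scores.index(max(scores))]
```

A class that no single sensor strongly favors can still win the pool when the sensors agree, which is the intuition behind consensus-based fusion improving per-class accuracy.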
Working Memory Components as Predictors of Children's Mathematical Word Problem Solving
ERIC Educational Resources Information Center
Zheng, Xinhua; Swanson, H. Lee; Marcoulides, George A.
2011-01-01
This study determined the working memory (WM) components (executive, phonological loop, and visual-spatial sketchpad) that best predicted mathematical word problem-solving accuracy of elementary school children in Grades 2, 3, and 4 (N = 310). A battery of tests was administered to assess problem-solving accuracy, problem-solving processes, WM,…
USDA-ARS?s Scientific Manuscript database
A detailed sensitivity analysis was conducted to determine the relative effects of measurement errors in climate data input parameters on the accuracy of calculated reference crop evapotranspiration (ET) using the ASCE-EWRI Standardized Reference ET Equation. Data for the period of 1995 to 2008, fro...
Dai, Shengfa; Wei, Qingguo
2017-01-01
The common spatial pattern algorithm is widely used to estimate spatial filters in motor-imagery-based brain-computer interfaces. However, using a large number of channels makes common spatial pattern prone to over-fitting and makes the classification of electroencephalographic signals time-consuming. To overcome these problems, it is necessary to choose an optimal subset of all channels to save computational time and improve classification accuracy. In this paper, a novel method named backtracking search optimization algorithm is proposed to automatically select the optimal channel set for common spatial pattern. Each individual in the population is an N-dimensional vector, with each component representing one channel. A population of binary codes is generated randomly at the beginning, and channels are then selected according to the evolution of these codes. The number and positions of 1's in a code denote the number and positions of chosen channels. The objective function of the backtracking search optimization algorithm is defined as a combination of the classification error rate and the relative number of channels. Experimental results suggest that higher classification accuracy can be achieved with far fewer channels than standard common spatial pattern using all channels.
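The objective described above (error rate combined with the relative number of selected channels) can be sketched on one binary-coded individual; the weighting factor `lam`, the channel count, and the error rate are illustrative assumptions, not the paper's settings:

```python
import random

random.seed(1)
N_CHANNELS = 16  # illustrative; EEG caps often have 32-128 channels

def fitness(code, error_rate, lam=0.2):
    """Objective as described: classification error rate plus a
    penalty proportional to the relative number of selected channels.
    Lower is better; lam is an assumed trade-off weight."""
    return error_rate + lam * sum(code) / len(code)

# One individual: a binary code whose 1s mark selected channels.
code = [random.randint(0, 1) for _ in range(N_CHANNELS)]
selected = [i for i, bit in enumerate(code) if bit == 1]
score = fitness(code, error_rate=0.15)
```

In the full algorithm, a population of such codes evolves under this fitness, trading a few points of error against large reductions in channel count.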
Mashburn, Shana L.; Winton, Kimberly T.
2010-01-01
This CD-ROM contains spatial datasets that describe natural and anthropogenic features and county-level estimates of agricultural pesticide use and pesticide data for surface-water, groundwater, and biological specimens in the state of Oklahoma. County-level estimates of pesticide use were compiled from the Pesticide National Synthesis Project of the U.S. Geological Survey, National Water-Quality Assessment Program. Pesticide data for surface water, groundwater, and biological specimens were compiled from the U.S. Geological Survey National Water Information System database. These spatial datasets that describe natural and manmade features were compiled from several agencies and contain information collected by the U.S. Geological Survey. The U.S. Geological Survey datasets were not collected specifically for this compilation, but were previously collected for projects with various objectives. The spatial datasets were created by different agencies from sources with varied quality. As a result, features common to multiple layers may not overlay exactly. Users should check the metadata to determine proper use of these spatial datasets. These data were not checked for accuracy or completeness. If a question of accuracy or completeness arises, the user should contact the originator cited in the metadata.
A spatio-temporal landslide inventory for the NW of Spain: BAPA database
NASA Astrophysics Data System (ADS)
Valenzuela, Pablo; Domínguez-Cuesta, María José; Mora García, Manuel Antonio; Jiménez-Sánchez, Montserrat
2017-09-01
A landslide database has been created for the Principality of Asturias, NW Spain: the BAPA (Base de datos de Argayos del Principado de Asturias - Principality of Asturias Landslide Database). Data collection is mainly performed through searching local newspaper archives. Moreover, a BAPA App and a BAPA website (http://geol.uniovi.es/BAPA) have been developed to obtain additional information from citizens and institutions. Presently, the dataset covers the period 1980-2015, recording 2063 individual landslides. The use of free cartographic servers, such as Google Maps, Google Street View and Iberpix (Government of Spain), combined with the spatial descriptions and pictures contained in the press news, makes it possible to assess different levels of spatial accuracy. In the database, 59% of the records show an exact spatial location, and 51% provide accurate dates, showing the usefulness of press archives as temporal records. Thus, 32% of the landslides show the highest spatial and temporal accuracy levels. The database also gathers information about the type and characteristics of the landslides, the triggering factors and the damage and costs caused. Field work was conducted to validate the methodology used in assessing the spatial location, temporal occurrence and characteristics of the landslides.
Accurate Reading with Sequential Presentation of Single Letters
Price, Nicholas S. C.; Edwards, Gemma L.
2012-01-01
Rapid, accurate reading is possible when isolated, single words from a sentence are sequentially presented at a fixed spatial location. We investigated if reading of words and sentences is possible when single letters are rapidly presented at the fovea under user-controlled or automatically controlled rates. When tested with complete sentences, trained participants achieved reading rates of over 60 wpm and accuracies of over 90% with the single letter reading (SLR) method and naive participants achieved average reading rates over 30 wpm with greater than 90% accuracy. Accuracy declined as individual letters were presented for shorter periods of time, even when the overall reading rate was maintained by increasing the duration of spaces between words. Words in the lexicon that occur more frequently were identified with higher accuracy and more quickly, demonstrating that trained participants have lexical access. In combination, our data strongly suggest that comprehension is possible and that SLR is a practicable form of reading under conditions in which normal scanning of text is not possible, or for scenarios with limited spatial and temporal resolution such as patients with low vision or prostheses. PMID:23115548
Optical biopsy fiber-based fluorescence spectroscopy instrumentation
NASA Astrophysics Data System (ADS)
Katz, Alvin; Ganesan, Singaravelu; Yang, Yuanlong; Tang, Gui C.; Budansky, Yury; Celmer, Edward J.; Savage, Howard E.; Schantz, Stimson P.; Alfano, Robert R.
1996-04-01
Native fluorescence spectroscopy of biomolecules has emerged as a new modality for the medical community in characterizing the various physiological conditions of tissues. In the past several years, many groups have been working to introduce spectroscopic methods to diagnose cancer. Researchers have successfully used native fluorescence to distinguish cancerous from normal tissue samples in rat and human tissue. We have developed three generations of instruments, called the CD-scan, CD-ratiometer and CD-map, to allow the medical community to use optics for diagnosing tissue. Using ultraviolet excitation, emission spectral measurements can separate normal from cancerous tissue of the breast, gynecological tract, colon, and aerodigestive tract. For example, from emission intensities at 340 nm to 440 nm (300 nm excitation), a statistically consistent difference between malignant tissue and normal or benign tissue is observed. In order to utilize optical biopsy techniques in a clinical setting, the CD-scan instrument was developed, which allows for rapid and reliable in-vitro and in-vivo fluorescence measurements of the aerodigestive tract with high accuracy. The instrumentation employs high-sensitivity detection techniques, which allow for lamp excitation and small-diameter optical fiber probes; the higher spatial resolution afforded by the small-diameter probes can increase the ability to detect smaller tumors. The fiber optic probes allow for usage in the aerodigestive tract, cervix and colon. Needle-based fiber probes have been developed for in-vivo detection of breast cancer.
Verdine, Brian N.; Golinkoff, Roberta Michnick; Hirsh-Pasek, Kathryn; Newcombe, Nora S.; Filipowicz, Andrew T.; Chang, Alicia
2013-01-01
This study focuses on three main goals: First, 3-year-olds' spatial assembly skills are probed using interlocking block constructions (N = 102). A detailed scoring scheme provides insight into early spatial processing and offers information beyond a basic accuracy score. Second, the relation of spatial assembly to early mathematics skills was evaluated. Spatial skill independently predicted a significant amount of the variability in concurrent mathematics performance. Finally, the relationship between spatial assembly skill and socioeconomic status, gender, and parent-reported spatial language was examined. While children's performance did not differ by gender, lower-SES children were already lagging behind higher-SES children in block assembly. Furthermore, lower-SES parents reported using significantly fewer spatial words with their children. PMID:24112041
Latent spatial models and sampling design for landscape genetics
Hanks, Ephraim M.; Hooten, Mevin B.; Knick, Steven T.; Oyler-McCance, Sara J.; Fike, Jennifer A.; Cross, Todd B.; Schwartz, Michael K.
2016-01-01
We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial random effect to allow for spatial correlation between genetic observations. We illustrate how modern dimension reduction approaches to spatial statistics can allow for efficient computation in landscape genetic statistical models covering large spatial domains. We apply our approach to propose a retrospective spatial sampling design for greater sage-grouse (Centrocercus urophasianus) population genetics in the western United States.
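A minimal sketch of the multinomial allele model with a latent spatial effect, assuming a softmax link and invented effect values (the study's actual parameterization may differ):

```python
import math

# Per-location allele probabilities from a softmax of a baseline
# allele effect plus a latent spatial random effect. Values invented.
def allele_probs(baseline, spatial_effect):
    logits = [b + s for b, s in zip(baseline, spatial_effect)]
    z = [math.exp(l) for l in logits]
    total = sum(z)
    return [v / total for v in z]

# Three alleles at one microsatellite locus, one spatial location.
probs = allele_probs(baseline=[0.2, 0.0, -0.1],
                     spatial_effect=[0.5, -0.3, 0.0])
```

Spatial correlation enters by letting the `spatial_effect` vector vary smoothly across locations, so nearby samples share similar allele frequencies.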
High-Order Moving Overlapping Grid Methodology in a Spectral Element Method
NASA Astrophysics Data System (ADS)
Merrill, Brandon E.
A moving overlapping mesh methodology that achieves spectral accuracy in space and up to second-order accuracy in time is developed for solution of unsteady incompressible flow equations in three-dimensional domains. The targeted applications are in aerospace and mechanical engineering domains and involve problems in turbomachinery, rotary aircrafts, wind turbines and others. The methodology is built within the dual-session communication framework initially developed for stationary overlapping meshes. The methodology employs semi-implicit spectral element discretization of equations in each subdomain and explicit treatment of subdomain interfaces with spectrally-accurate spatial interpolation and high-order accurate temporal extrapolation, and requires few, if any, iterations, yet maintains the global accuracy and stability of the underlying flow solver. Mesh movement is enabled through the Arbitrary Lagrangian-Eulerian formulation of the governing equations, which allows for prescription of arbitrary velocity values at discrete mesh points. The stationary and moving overlapping mesh methodologies are thoroughly validated using two- and three-dimensional benchmark problems in laminar and turbulent flows. The spatial and temporal global convergence, for both methods, is documented and is in agreement with the nominal order of accuracy of the underlying solver. Stationary overlapping mesh methodology was validated to assess the influence of long integration times and inflow-outflow global boundary conditions on the performance. In a turbulent benchmark of fully-developed turbulent pipe flow, the turbulent statistics are validated against the available data. Moving overlapping mesh simulations are validated on the problems of two-dimensional oscillating cylinder and a three-dimensional rotating sphere. The aerodynamic forces acting on these moving rigid bodies are determined, and all results are compared with published data. 
Scaling tests, with both methodologies, show near linear strong scaling, even for moderately large processor counts. The moving overlapping mesh methodology is utilized to investigate the effect of an upstream turbulent wake on a three-dimensional oscillating NACA0012 extruded airfoil. A direct numerical simulation (DNS) at Reynolds Number 44,000 is performed for steady inflow incident upon the airfoil oscillating between angle of attack 5.6° and 25° with reduced frequency k=0.16. Results are contrasted with subsequent DNS of the same oscillating airfoil in a turbulent wake generated by a stationary upstream cylinder.
Phase noise in pulsed Doppler lidar and limitations on achievable single-shot velocity accuracy
NASA Technical Reports Server (NTRS)
Mcnicholl, P.; Alejandro, S.
1992-01-01
The smaller sampling volumes afforded by Doppler lidars compared to radars allow for spatial resolutions at and below some shear and turbulence wind structure scale sizes. This has brought new emphasis on achieving the optimum product of wind velocity and range resolutions. Several recent studies have considered the effects of amplitude noise, reduction algorithms, and possible hardware-related signal artifacts on obtainable velocity accuracy. We discuss here the limitation on this accuracy resulting from the incoherent nature and finite temporal extent of backscatter from aerosols. For a lidar return from a hard (or slab) target, the phase of the intermediate frequency (IF) signal is random and the total return energy fluctuates from shot to shot due to speckle; however, the offset from the transmitted frequency is determinable with an accuracy subject only to instrumental effects and the signal-to-noise ratio (SNR), the noise being determined by the LO power in the shot-noise-limited regime. This is not the case for a return from a medium extending over a range on the order of or greater than the spatial extent of the transmitted pulse, such as from atmospheric aerosols. In this case, the phase of the IF signal exhibits a temporal random-walk-like behavior. It will be uncorrelated over times greater than the pulse duration, as the transmitted pulse samples non-overlapping volumes of scattering centers. Frequency analysis of the IF signal in a window similar to the transmitted pulse envelope will therefore show shot-to-shot frequency deviations on the order of the inverse pulse duration, reflecting the random phase-rate variations. Like speckle, these deviations arise from the incoherent nature of the scattering process and diminish if the IF signal is averaged over times greater than a single range resolution cell (here the pulse duration).
Apart from limiting the high SNR performance of a Doppler lidar, this shot-to-shot variance in velocity estimates has a practical impact on lidar design parameters. In high SNR operation, for example, a lidar's efficiency in obtaining mean wind measurements is determined by its repetition rate and not pulse energy or average power. In addition, this variance puts a practical limit on the shot-to-shot hard target performance required of a lidar.
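The single-shot limit discussed above can be put in round numbers: the shot-to-shot frequency deviation is on the order of the inverse pulse duration, and the standard Doppler relation v = (λ/2)·f maps that to a velocity spread. The wavelength and pulse duration below are assumed for illustration, not taken from the study.

```python
# Order-of-magnitude sketch of the single-shot velocity spread for a
# pulsed coherent Doppler lidar. Both parameters are assumed values.
wavelength_m = 2.0e-6   # 2-um coherent lidar (assumption)
pulse_s = 0.5e-6        # 0.5-us transmitted pulse (assumption)

delta_f_hz = 1.0 / pulse_s                 # ~ inverse pulse duration
delta_v_ms = 0.5 * wavelength_m * delta_f_hz
```

For these assumed numbers the spread is about 2 m/s per shot, illustrating why shot averaging (repetition rate), not pulse energy, governs mean-wind accuracy at high SNR.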
Bridges, Daniel J; Pollard, Derek; Winters, Anna M; Winters, Benjamin; Sikaala, Chadwick; Renn, Silvia; Larsen, David A
2018-02-23
Indoor residual spraying (IRS) is a key tool in the fight to control, eliminate and ultimately eradicate malaria. IRS protection is based on a communal effect, such that an individual's protection relies primarily on the community-level coverage of IRS, with limited protection being provided by household-level coverage. To ensure a communal effect is achieved through IRS, achieving high and uniform community-level coverage should be the ultimate priority of an IRS campaign. Ensuring high community-level coverage of IRS in malaria-endemic areas is challenging given the lack of information available about both the location and number of households needing IRS in any given area. A process termed 'mSpray' has been developed and implemented; it involves the use of satellite imagery for enumeration to plan IRS and a mobile application to guide IRS implementation. This study assessed (1) the accuracy of the satellite enumeration and (2) how various degrees of spatial aid provided through the mSpray process affected community-level IRS coverage during the 2015 spray campaign in Zambia. A 2-stage sampling process was applied to assess the accuracy of satellite enumeration in determining the number and location of sprayable structures. Results indicated an overall sensitivity of 94% for satellite enumeration compared to finding structures on the ground. After adjusting for structure size, roof, and wall type, households in Nchelenge District, where all types of satellite-based spatial aids (paper-based maps plus the mobile mSpray application) were used, were more likely to have received IRS than in Kasama District, where the maps used were not based on satellite enumeration. The probability of a household being sprayed in Nchelenge District, where tablet-based maps were used, did not differ statistically from that of a household in Samfya District, where detailed paper-based spatial aids based on satellite enumeration were provided.
IRS coverage in the 2015 spray season benefited from the use of spatial aids based upon satellite enumeration. These spatial aids can guide costly IRS planning and implementation, leading to the attainment of higher spatial coverage and likely improving disease impact.
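The sensitivity figure reported above is simply the fraction of ground-verified structures that the satellite enumeration also contained; the counts below are invented to reproduce the 94% headline number.

```python
# Sensitivity of satellite enumeration: true positives over all
# structures found on the ground. Counts are hypothetical.
found_on_ground = 500      # structures located by ground teams
also_in_satellite = 470    # of those, also present in the enumeration

sensitivity = also_in_satellite / found_on_ground
```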
NASA Astrophysics Data System (ADS)
Snavely, Rachel A.
Focusing on the semi-arid and highly disturbed landscape of San Clemente Island, California, this research tests the effectiveness of combining a hierarchical object-based image analysis (OBIA) approach with high-spatial-resolution imagery and light detection and ranging (LiDAR)-derived canopy height surfaces for mapping vegetation communities. The study is part of a large-scale research effort by researchers at San Diego State University's (SDSU) Center for Earth Systems Analysis Research (CESAR) and Soil Ecology and Restoration Group (SERG) to develop an updated vegetation community map that will support both conservation and management decisions on Naval Auxiliary Landing Field (NALF) San Clemente Island. Trimble's eCognition Developer software was used to develop and generate vegetation community maps for two study sites, with and without vegetation height data as input. Overall and class-specific accuracies were calculated and compared across the two classifications. The highest overall accuracy (approximately 80%) was observed for the classification integrating airborne visible and near-infrared imagery of very high spatial resolution with a LiDAR-derived canopy height model. Accuracies for individual vegetation classes differed between the two classification methods, but were highest when incorporating the LiDAR digital surface data. The addition of a canopy height model, however, yielded little difference in classification accuracy for areas of very dense shrub cover. Overall, the results show the utility of the OBIA approach for mapping vegetation with high-spatial-resolution imagery, and emphasize the advantage of both multi-scale analysis and digital surface data for accurately characterizing highly disturbed landscapes. The integrated imagery and digital canopy height model approach presented both advantages and limitations, which have to be considered prior to its operational use in mapping vegetation communities.
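Overall accuracy of the kind reported above is the trace of the confusion matrix divided by its total; the counts below are invented, chosen only to land near the approximately 80% figure.

```python
# Overall accuracy from a confusion matrix (rows = reference class,
# columns = mapped class). Class names and counts are hypothetical.
confusion = [
    [40,  5,  5],   # shrub
    [ 4, 30,  6],   # grass
    [ 2,  3, 25],   # bare ground
]
correct = sum(confusion[i][i] for i in range(len(confusion)))
total = sum(sum(row) for row in confusion)
overall_accuracy = correct / total
```

Per-class (producer's) accuracy divides each diagonal entry by its row total instead, which is how the class-specific comparisons in the study are made.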