Sample records for spatially correlated errors

  1. Analysis of spatial correlation in predictive models of forest variables that use LiDAR auxiliary information

    Treesearch

    F. Mauro; Vicente J. Monleon; H. Temesgen; L.A. Ruiz

    2017-01-01

    Accounting for spatial correlation of LiDAR model errors can improve the precision of model-based estimators. To estimate spatial correlation, sample designs that provide close observations are needed, but their implementation might be prohibitively expensive. To quantify the gains obtained by accounting for the spatial correlation of model errors, we examined (

  2. A method to estimate the effect of deformable image registration uncertainties on daily dose mapping

    PubMed Central

    Murphy, Martin J.; Salguero, Francisco J.; Siebers, Jeffrey V.; Staub, David; Vaman, Constantin

    2012-01-01

    Purpose: To develop a statistical sampling procedure for spatially-correlated uncertainties in deformable image registration and then use it to demonstrate their effect on daily dose mapping. Methods: Sequential daily CT studies are acquired to map anatomical variations prior to fractionated external beam radiotherapy. The CTs are deformably registered to the planning CT to obtain displacement vector fields (DVFs). The DVFs are used to accumulate the dose delivered each day onto the planning CT. Each DVF has spatially-correlated uncertainties associated with it. Principal components analysis (PCA) is applied to measured DVF error maps to produce decorrelated principal component modes of the errors. The modes are sampled independently and reconstructed to produce synthetic registration error maps. The synthetic error maps are convolved with dose mapped via deformable registration to model the resulting uncertainty in the dose mapping. The results are compared to the dose mapping uncertainty that would result from uncorrelated DVF errors that vary randomly from voxel to voxel. Results: The error sampling method is shown to produce synthetic DVF error maps that are statistically indistinguishable from the observed error maps. Spatially-correlated DVF uncertainties modeled by our procedure produce patterns of dose mapping error that are different from that due to randomly distributed uncertainties. Conclusions: Deformable image registration uncertainties have complex spatial distributions. The authors have developed and tested a method to decorrelate the spatial uncertainties and make statistical samples of highly correlated error maps. The sample error maps can be used to investigate the effect of DVF uncertainties on daily dose mapping via deformable image registration. An initial demonstration of this methodology shows that dose mapping uncertainties can be sensitive to spatial patterns in the DVF uncertainties. PMID:22320766
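    A minimal NumPy sketch of the general idea described above -- decompose observed error maps into principal component modes, sample the mode scores independently, and reconstruct synthetic spatially correlated error maps -- is given below. It is not the authors' implementation; the array shapes, the Gaussian sampling of mode scores, and the toy covariance used to fabricate "observed" maps are assumptions.

    ```python
    import numpy as np

    def synthetic_error_maps(error_maps, n_samples, n_modes=None, rng=None):
        """Sample synthetic, spatially correlated error maps via PCA.

        error_maps : (n_obs, n_voxels) array, each row one observed DVF error
                     map flattened to a vector.
        """
        rng = np.random.default_rng(rng)
        mean = error_maps.mean(axis=0)
        centered = error_maps - mean

        # Principal component modes of the observed errors (SVD of data matrix).
        U, s, Vt = np.linalg.svd(centered, full_matrices=False)
        if n_modes is None:
            n_modes = len(s)
        score_std = s[:n_modes] / np.sqrt(error_maps.shape[0] - 1)

        # Modes are decorrelated by construction, so their scores can be
        # sampled independently and recombined into synthetic maps that share
        # the spatial covariance of the observed maps.
        scores = rng.normal(size=(n_samples, n_modes)) * score_std
        return mean + scores @ Vt[:n_modes]

    # Toy usage: 50 "observed" error maps on a 400-voxel grid with an assumed
    # exponential spatial covariance.
    idx = np.arange(400)
    cov = np.exp(-np.abs(np.subtract.outer(idx, idx)) / 30.0)
    rng = np.random.default_rng(0)
    observed = rng.multivariate_normal(np.zeros(400), cov, size=50)
    synthetic = synthetic_error_maps(observed, n_samples=1000, n_modes=10, rng=1)
    print(synthetic.shape)  # (1000, 400)
    ```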

  3. Detecting Spatial Patterns in Biological Array Experiments

    PubMed Central

    ROOT, DAVID E.; KELLEY, BRIAN P.; STOCKWELL, BRENT R.

    2005-01-01

    Chemical genetic screening and DNA and protein microarrays are among a number of increasingly important and widely used biological research tools that involve large numbers of parallel experiments arranged in a spatial array. It is often difficult to ensure that uniform experimental conditions are present throughout the entire array, and as a result, one often observes systematic spatially correlated errors, especially when array experiments are performed using robots. Here, the authors apply techniques based on the discrete Fourier transform to identify and quantify spatially correlated errors superimposed on a spatially random background. They demonstrate that these techniques are effective in identifying common spatially systematic errors in high-throughput 384-well microplate assay data. In addition, the authors employ a statistical test to allow for automatic detection of such errors. Software tools for using this approach are provided. PMID:14567791
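    As a rough illustration of the Fourier-based screening described above (not the authors' software or their statistical test), the sketch below computes the 2-D discrete Fourier power spectrum of a 384-well plate and flags plates in which a single non-zero spatial frequency carries an outsized share of the power; the 16x24 layout, the striping artifact, and the 20% threshold are arbitrary assumptions.

    ```python
    import numpy as np

    def spatial_power_spectrum(plate):
        """2-D DFT power spectrum of a plate of measurements, DC term removed."""
        z = plate - plate.mean()
        power = np.abs(np.fft.fft2(z)) ** 2
        power[0, 0] = 0.0
        return power

    def has_systematic_pattern(plate, frac_threshold=0.2):
        """Flag a plate when one non-zero spatial frequency carries more than
        `frac_threshold` of the total non-DC power (a crude stand-in for the
        formal test described in the abstract)."""
        p = spatial_power_spectrum(plate)
        return p.max() / p.sum() > frac_threshold

    # Toy 384-well plate (16 rows x 24 columns): random noise plus a
    # column-striping artifact such as an alternating-tip dispensing error.
    rng = np.random.default_rng(0)
    plate = rng.normal(size=(16, 24)) + 1.5 * (np.arange(24) % 2)
    print(has_systematic_pattern(plate))                       # True
    print(has_systematic_pattern(rng.normal(size=(16, 24))))   # False
    ```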

  4. Accounting for spatial correlation errors in the assimilation of GRACE into hydrological models through localization

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Schumacher, M.; Forootan, E.; Kuhn, M.; Awange, J. L.; van Dijk, A. I. J. M.

    2017-10-01

    Assimilation of terrestrial water storage (TWS) information from the Gravity Recovery And Climate Experiment (GRACE) satellite mission can provide significant improvements in hydrological modelling. However, the rather coarse spatial resolution of GRACE TWS and its spatially correlated errors pose considerable challenges for achieving realistic assimilation results. Consequently, successful data assimilation depends on rigorous modelling of the full error covariance matrix of the GRACE TWS estimates, as well as realistic error behavior for hydrological model simulations. In this study, we assess the application of local analysis (LA) to maximize the contribution of GRACE TWS in hydrological data assimilation. For this, we assimilate GRACE TWS into the World-Wide Water Resources Assessment system (W3RA) over the Australian continent while applying LA and accounting for existing spatial correlations using the full error covariance matrix. GRACE TWS data is applied with different spatial resolutions including 1° to 5° grids, as well as basin averages. The ensemble-based sequential filtering technique of the Square Root Analysis (SQRA) is applied to assimilate TWS data into W3RA. For each spatial scale, the performance of the data assimilation is assessed through comparison with independent in-situ ground water and soil moisture observations. Overall, the results demonstrate that LA is able to stabilize the inversion process (within the implementation of the SQRA filter) leading to less errors for all spatial scales considered with an average RMSE improvement of 54% (e.g., 52.23 mm down to 26.80 mm) for all the cases with respect to groundwater in-situ measurements. Validating the assimilated results with groundwater observations indicates that LA leads to 13% better (in terms of RMSE) assimilation results compared to the cases with Gaussian errors assumptions. This highlights the great potential of LA and the use of the full error covariance matrix of GRACE TWS estimates for improved data assimilation results.
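    Local analysis and the treatment of correlated GRACE errors are specific to the paper's filter; the fragment below only sketches the generic ingredient of covariance localization, i.e. tapering an ensemble-estimated error covariance with a distance-dependent function via a Schur (element-wise) product. The Gaussian taper with a hard cutoff and all numbers are assumptions, not the W3RA/SQRA configuration.

    ```python
    import numpy as np

    def localize_covariance(P, coords, loc_radius):
        """Element-wise (Schur) product of a sample covariance with a
        distance-dependent taper that damps spurious long-range correlations.

        P          : (n, n) ensemble-estimated error covariance
        coords     : (n, 2) grid-cell coordinates
        loc_radius : localization length scale, same units as coords
        """
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        taper = np.exp(-0.5 * (d / loc_radius) ** 2)   # simple Gaussian taper
        taper[d > 2.0 * loc_radius] = 0.0              # compact-support cutoff
        return P * taper

    # Toy example: covariance estimated from a 20-member ensemble on a transect.
    rng = np.random.default_rng(0)
    n = 50
    coords = np.column_stack([np.linspace(0.0, 49.0, n), np.zeros(n)])
    true_cov = np.exp(-np.abs(np.subtract.outer(coords[:, 0], coords[:, 0])) / 5.0)
    ensemble = rng.multivariate_normal(np.zeros(n), true_cov, size=20)
    P_sample = np.cov(ensemble, rowvar=False)   # noisy with only 20 members
    P_local = localize_covariance(P_sample, coords, loc_radius=5.0)
    ```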

  5. Spatial autocorrelation among automated geocoding errors and its effects on testing for disease clustering

    PubMed Central

    Li, Jie; Fang, Xiangming

    2010-01-01

    Automated geocoding of patient addresses is an important data assimilation component of many spatial epidemiologic studies. Inevitably, the geocoding process results in positional errors. Positional errors incurred by automated geocoding tend to reduce the power of tests for disease clustering and otherwise affect spatial analytic methods. However, there are reasons to believe that the errors may often be positively spatially correlated and that this may mitigate their deleterious effects on spatial analyses. In this article, we demonstrate explicitly that the positional errors associated with automated geocoding of a dataset of more than 6000 addresses in Carroll County, Iowa are spatially autocorrelated. Furthermore, through two simulation studies of disease processes, including one in which the disease process is overlain upon the Carroll County addresses, we show that spatial autocorrelation among geocoding errors maintains the power of two tests for disease clustering at a level higher than that which would occur if the errors were independent. Implications of these results for cluster detection, privacy protection, and measurement-error modeling of geographic health data are discussed. PMID:20087879
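    A common way to quantify spatial autocorrelation of positional errors like those described above is Moran's I. The sketch below is a generic inverse-distance-weighted Moran's I, not the statistic or data used in the paper; the simulated error field and all parameters are assumptions.

    ```python
    import numpy as np

    def morans_i(values, coords, max_dist=None):
        """Moran's I spatial autocorrelation with inverse-distance weights.

        values : (n,) attribute per point, e.g. the easting component of the
                 geocoding positional error (geocoded minus true location)
        coords : (n, 2) point coordinates
        """
        x = values - values.mean()
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        with np.errstate(divide="ignore"):
            w = 1.0 / d
        np.fill_diagonal(w, 0.0)
        if max_dist is not None:
            w[d > max_dist] = 0.0
        n = len(x)
        return (n / w.sum()) * (x @ w @ x) / (x @ x)

    # Toy example: positional errors that are positively correlated in space.
    rng = np.random.default_rng(0)
    coords = rng.uniform(0.0, 10.0, size=(300, 2))
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    errors = rng.multivariate_normal(np.zeros(300), np.exp(-d / 2.0))
    print(morans_i(errors, coords))                  # clearly positive
    print(morans_i(rng.normal(size=300), coords))    # near zero for iid errors
    ```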

  6. Comparison of different spatial transformations applied to EEG data: A case study of error processing.

    PubMed

    Cohen, Michael X

    2015-09-01

    The purpose of this paper is to compare the effects of different spatial transformations applied to the same scalp-recorded EEG data. The spatial transformations applied are two referencing schemes (average and linked earlobes), the surface Laplacian, and beamforming (a distributed source localization procedure). EEG data were collected during a speeded reaction time task that provided a comparison of activity between error vs. correct responses. Analyses focused on time-frequency power, frequency band-specific inter-electrode connectivity, and within-subject cross-trial correlations between EEG activity and reaction time. Time-frequency power analyses showed similar patterns of midfrontal delta-theta power for errors compared to correct responses across all spatial transformations. Beamforming additionally revealed error-related anterior and lateral prefrontal beta-band activity. Within-subject brain-behavior correlations showed similar patterns of results across the spatial transformations, with the correlations being the weakest after beamforming. The most striking difference among the spatial transformations was seen in connectivity analyses: linked earlobe reference produced weak inter-site connectivity that was attributable to volume conduction (zero phase lag), while the average reference and Laplacian produced more interpretable connectivity results. Beamforming did not reveal any significant condition modulations of connectivity. Overall, these analyses show that some findings are robust to spatial transformations, while other findings, particularly those involving cross-trial analyses or connectivity, are more sensitive and may depend on the use of appropriate spatial transformations. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Digital halftoning methods for selectively partitioning error into achromatic and chromatic channels

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.

    1990-01-01

    A method is described for reducing the visibility of artifacts arising in the display of quantized color images on CRT displays. The method is based on the differential spatial sensitivity of the human visual system to chromatic and achromatic modulations. Because the visual system has the highest spatial and temporal acuity for the luminance component of an image, a technique which will reduce luminance artifacts at the expense of introducing high-frequency chromatic errors is sought. A method based on controlling the correlations between the quantization errors in the individual phosphor images is explored. The luminance component is greatest when the phosphor errors are positively correlated, and is minimized when the phosphor errors are negatively correlated. The greatest effect of the correlation is obtained when the intensity quantization step sizes of the individual phosphors have equal luminances. For the ordered dither algorithm, a version of the method can be implemented by simply inverting the matrix of thresholds for one of the color components.

  8. Triple collocation-based estimation of spatially correlated observation error covariance in remote sensing soil moisture data assimilation

    NASA Astrophysics Data System (ADS)

    Wu, Kai; Shu, Hong; Nie, Lei; Jiao, Zhenhang

    2018-01-01

    Spatially correlated errors are typically ignored in data assimilation, thus degenerating the observation error covariance R to a diagonal matrix. We argue that a nondiagonal R carries more observation information making assimilation results more accurate. A method, denoted TC_Cov, was proposed for soil moisture data assimilation to estimate spatially correlated observation error covariance based on triple collocation (TC). Assimilation experiments were carried out to test the performance of TC_Cov. AMSR-E soil moisture was assimilated with a diagonal R matrix computed using the TC and assimilated using a nondiagonal R matrix, as estimated by proposed TC_Cov. The ensemble Kalman filter was considered as the assimilation method. Our assimilation results were validated against climate change initiative data and ground-based soil moisture measurements using the Pearson correlation coefficient and unbiased root mean square difference metrics. These experiments confirmed that deterioration of diagonal R assimilation results occurred when model simulation is more accurate than observation data. Furthermore, nondiagonal R achieved higher correlation coefficient and lower ubRMSD values over diagonal R in experiments and demonstrated the effectiveness of TC_Cov to estimate richly structuralized R in data assimilation. In sum, compared with diagonal R, nondiagonal R may relieve the detrimental effects of assimilation when simulated model results outperform observation data.
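    The covariance-notation form of classical triple collocation, which underlies the TC step of the method above, can be sketched as follows. This recovers only the per-location (diagonal) error variances; the paper's TC_Cov extension to spatially correlated, off-diagonal observation error covariances is not reproduced, and the toy products and noise levels are assumptions.

    ```python
    import numpy as np

    def tc_error_variances(x, y, z):
        """Classical triple collocation for three collocated time series of the
        same geophysical variable, assuming mutually independent errors and
        products linearly related to the truth.  Returns the error variance of
        each product in its own data space."""
        c = np.cov(np.vstack([x, y, z]))
        var_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
        var_y = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
        var_z = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
        return var_x, var_y, var_z

    # Toy example: one true signal seen by three products with independent noise.
    rng = np.random.default_rng(0)
    truth = rng.normal(size=5000)
    x = truth + rng.normal(scale=0.10, size=5000)          # satellite product A
    y = 0.8 * truth + rng.normal(scale=0.20, size=5000)    # model, rescaled
    z = 1.1 * truth + rng.normal(scale=0.15, size=5000)    # satellite product B
    print(tc_error_variances(x, y, z))   # roughly (0.010, 0.040, 0.0225)
    ```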

  9. Spatiotemporal Filtering Using Principal Component Analysis and Karhunen-Loeve Expansion Approaches for Regional GPS Network Analysis

    NASA Technical Reports Server (NTRS)

    Dong, D.; Fang, P.; Bock, F.; Webb, F.; Prawirondirdjo, L.; Kedar, S.; Jamason, P.

    2006-01-01

    Spatial filtering is an effective way to improve the precision of coordinate time series for regional GPS networks by reducing so-called common mode errors, thereby providing better resolution for detecting weak or transient deformation signals. The commonly used approach to regional filtering assumes that the common mode error is spatially uniform, which is a good approximation for networks of hundreds of kilometers extent, but breaks down as the spatial extent increases. A more rigorous approach should remove the assumption of spatially uniform distribution and let the data themselves reveal the spatial distribution of the common mode error. The principal component analysis (PCA) and the Karhunen-Loeve expansion (KLE) both decompose network time series into a set of temporally varying modes and their spatial responses. Therefore they provide a mathematical framework to perform spatiotemporal filtering.We apply the combination of PCA and KLE to daily station coordinate time series of the Southern California Integrated GPS Network (SCIGN) for the period 2000 to 2004. We demonstrate that spatially and temporally correlated common mode errors are the dominant error source in daily GPS solutions. The spatial characteristics of the common mode errors are close to uniform for all east, north, and vertical components, which implies a very long wavelength source for the common mode errors, compared to the spatial extent of the GPS network in southern California. Furthermore, the common mode errors exhibit temporally nonrandom patterns.
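    A stripped-down version of the PCA part of such spatiotemporal filtering is sketched below: the leading principal component of the detrended daily residuals, which for a regional network typically captures the common mode error, is estimated and removed. The KLE step, data gaps, and weighting used in the paper are omitted, and the toy network is an assumption.

    ```python
    import numpy as np

    def pca_filter(residuals, n_modes=1):
        """Remove the leading principal components (interpreted as the common
        mode error) from daily coordinate residuals.

        residuals : (n_days, n_stations) detrended residuals of one component
        Returns (filtered residuals, removed common-mode part).
        """
        X = residuals - residuals.mean(axis=0)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        common_mode = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        return residuals - common_mode, common_mode

    # Toy network: 8 stations sharing one network-wide daily error plus local noise.
    rng = np.random.default_rng(0)
    n_days, n_sta = 1000, 8
    cme = rng.normal(scale=3.0, size=(n_days, 1))        # mm, spatially uniform
    resid = cme + rng.normal(scale=1.0, size=(n_days, n_sta))
    filtered, cme_hat = pca_filter(resid, n_modes=1)
    print(resid.std(), filtered.std())   # scatter drops once the CME is removed
    ```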

  10. Spatio-temporal representativeness of ground-based downward solar radiation measurements

    NASA Astrophysics Data System (ADS)

    Schwarz, Matthias; Wild, Martin; Folini, Doris

    2017-04-01

    Surface solar radiation (SSR) is most directly observed with ground-based pyranometer measurements. Besides measurement uncertainties arising from the pyranometer instrument itself, errors attributed to the limited spatial representativeness of single-site observations for their large-scale surroundings also have to be taken into account when using such measurements for energy balance studies. In this study the spatial representativeness of 157 homogeneous European downward surface solar radiation time series from the Global Energy Balance Archive (GEBA) and the Baseline Surface Radiation Network (BSRN) was examined for the period 1983-2015, using the high resolution (0.05°) surface solar radiation data set from the Satellite Application Facility on Climate Monitoring (CM-SAF SARAH) as a proxy for the spatiotemporal variability of SSR. By correlating deseasonalized monthly SSR time series from surface observations against single collocated satellite-derived SSR time series, a mean spatial correlation pattern was calculated and validated against purely observation-based patterns. Correlations were found to generally decrease with increasing distance from the station, with high correlations (R2 = 0.7) in proximity to the observational sites (±0.5°). When correlating surface observations against time series from spatially averaged satellite-derived SSR data (thereby simulating coarser and coarser grids), very high correspondence between sites and the collocated pixels was found for pixel sizes up to several degrees. Moreover, special focus was put on the quantification of errors which arise in conjunction with spatial sampling when estimating the temporal variability and trends for a larger region from a single surface observation site. For 15-year trends on a 1° grid, errors due to spatial sampling on the order of half of the measurement uncertainty for monthly mean values were found.

  11. A comparison of correlation-length estimation methods for the objective analysis of surface pollutants at Environment and Climate Change Canada.

    PubMed

    Ménard, Richard; Deshaies-Jacques, Martin; Gasset, Nicolas

    2016-09-01

    An objective analysis is one of the main components of data assimilation. By combining observations with the output of a predictive model we combine the best features of each source of information: the complete spatial and temporal coverage provided by models, with a close representation of the truth provided by observations. The process of combining observations with a model output is called an analysis. To produce an analysis requires the knowledge of observation and model errors, as well as its spatial correlation. This paper is devoted to the development of methods of estimation of these error variances and the characteristic length-scale of the model error correlation for its operational use in the Canadian objective analysis system. We first argue in favor of using compact support correlation functions, and then introduce three estimation methods: the Hollingsworth-Lönnberg (HL) method in local and global form, the maximum likelihood method (ML), and the [Formula: see text] diagnostic method. We perform one-dimensional (1D) simulation studies where the error variance and true correlation length are known, and perform an estimation of both error variances and correlation length where both are non-uniform. We show that a local version of the HL method can capture accurately the error variances and correlation length at each observation site, provided that spatial variability is not too strong. However, the operational objective analysis requires only a single and globally valid correlation length. We examine whether any statistics of the local HL correlation lengths could be a useful estimate, or whether other global estimation methods such as by the global HL, ML, or [Formula: see text] should be used. We found in both 1D simulation and using real data that the ML method is able to capture physically significant aspects of the correlation length, while most other estimates give unphysical and larger length-scale values. This paper describes a proposed improvement of the objective analysis of surface pollutants at Environment and Climate Change Canada (formerly known as Environment Canada). Objective analyses are essentially surface maps of air pollutants that are obtained by combining observations with an air quality model output, and are thought to provide a complete and more accurate representation of the air quality. The highlight of this study is an analysis of methods to estimate the model (or background) error correlation length-scale. The error statistics are an important and critical component to the analysis scheme.
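    For the Hollingsworth-Lönnberg idea in its simplest local form, the sketch below bins innovation (observation-minus-model) covariances by station separation and fits an isotropic exponential correlation model; the intercept at zero distance is read as background error variance and the remainder of the innovation variance as observation error. This is only a schematic of the HL approach -- the paper's compact-support functions, the ML and chi-square diagnostics, and all numbers here are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hl_fit(innov, coords, bin_width=50.0, n_bins=20):
        """Fit c(d) = sig2_b * exp(-d / L) to distance-binned innovation
        covariances; attribute the variance excess at d = 0 to observation error.

        innov  : (n_times, n_stations) innovation time series
        coords : (n_stations, 2) station coordinates (same units as bin_width)
        """
        C = np.cov(innov, rowvar=False)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        iu = np.triu_indices_from(d, k=1)
        dist, cov = d[iu], C[iu]

        idx = np.floor(dist / bin_width).astype(int)
        d_mid = np.array([dist[idx == b].mean() if np.any(idx == b) else np.nan
                          for b in range(n_bins)])
        c_bin = np.array([cov[idx == b].mean() if np.any(idx == b) else np.nan
                          for b in range(n_bins)])
        good = ~np.isnan(c_bin)

        def model(x, sig2_b, L):
            return sig2_b * np.exp(-x / L)

        (sig2_b, L), _ = curve_fit(model, d_mid[good], c_bin[good],
                                   p0=[c_bin[good][0], 3.0 * bin_width])
        sig2_o = np.mean(np.diag(C)) - sig2_b   # observation error variance
        return sig2_b, sig2_o, L

    # Toy usage: 40 stations, background errors with a 300 km correlation length
    # plus white observation error of variance 0.49.
    rng = np.random.default_rng(0)
    xy = rng.uniform(0.0, 1000.0, size=(40, 2))                  # km
    dd = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    bg_err = rng.multivariate_normal(np.zeros(40), np.exp(-dd / 300.0), size=2000)
    obs_err = rng.normal(scale=0.7, size=(2000, 40))
    print(hl_fit(bg_err + obs_err, xy))   # roughly (1.0, 0.49, 300.0)
    ```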

  12. CORRELATED AND ZONAL ERRORS OF GLOBAL ASTROMETRIC MISSIONS: A SPHERICAL HARMONIC SOLUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarov, V. V.; Dorland, B. N.; Gaume, R. A.

    We propose a computer-efficient and accurate method of estimating spatially correlated errors in astrometric positions, parallaxes, and proper motions obtained by space- and ground-based astrometry missions. In our method, the simulated observational equations are set up and solved for the coefficients of scalar and vector spherical harmonics representing the output errors rather than for individual objects in the output catalog. Both accidental and systematic correlated errors of astrometric parameters can be accurately estimated. The method is demonstrated on the example of the JMAPS mission, but can be used for other projects in space astrometry, such as SIM or JASMINE.

  13. Correlated and Zonal Errors of Global Astrometric Missions: A Spherical Harmonic Solution

    NASA Astrophysics Data System (ADS)

    Makarov, V. V.; Dorland, B. N.; Gaume, R. A.; Hennessy, G. S.; Berghea, C. T.; Dudik, R. P.; Schmitt, H. R.

    2012-07-01

    We propose a computer-efficient and accurate method of estimating spatially correlated errors in astrometric positions, parallaxes, and proper motions obtained by space- and ground-based astrometry missions. In our method, the simulated observational equations are set up and solved for the coefficients of scalar and vector spherical harmonics representing the output errors rather than for individual objects in the output catalog. Both accidental and systematic correlated errors of astrometric parameters can be accurately estimated. The method is demonstrated on the example of the JMAPS mission, but can be used for other projects in space astrometry, such as SIM or JASMINE.
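    The essence of representing correlated output errors by spherical harmonic coefficients rather than per-object values can be illustrated with a scalar least-squares fit, as below. This uses scipy.special.sph_harm (note SciPy's argument order and azimuth/colatitude convention) and plain real and imaginary parts as a real basis; the vector harmonics needed for proper motions, the mission simulation, and all numbers are assumptions, not the authors' code.

    ```python
    import numpy as np
    from scipy.special import sph_harm

    def sh_design_matrix(lon, colat, lmax):
        """Real-valued spherical-harmonic design matrix for degrees 1..lmax.

        lon   : azimuthal angle in radians, [0, 2*pi)
        colat : polar angle in radians, [0, pi]
        """
        cols = []
        for n in range(1, lmax + 1):
            for m in range(0, n + 1):
                y = sph_harm(m, n, lon, colat)   # scipy order: (m, n, theta, phi)
                cols.append(y.real)
                if m > 0:
                    cols.append(y.imag)
        return np.column_stack(cols)

    # Simulate position errors with a low-degree systematic (zonal) pattern plus
    # accidental noise, then recover the harmonic coefficients by least squares.
    rng = np.random.default_rng(0)
    n_obj = 20000
    lon = rng.uniform(0.0, 2.0 * np.pi, n_obj)
    colat = np.arccos(rng.uniform(-1.0, 1.0, n_obj))   # uniform on the sphere
    A = sh_design_matrix(lon, colat, lmax=4)
    true_coeffs = np.zeros(A.shape[1])
    true_coeffs[:3] = [2.0, -1.0, 0.5]                 # mas, correlated error part
    errors = A @ true_coeffs + rng.normal(scale=5.0, size=n_obj)   # mas
    fit, *_ = np.linalg.lstsq(A, errors, rcond=None)
    print(np.round(fit[:3], 1))   # close to [2.0, -1.0, 0.5]
    ```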

  14. Modal energy analysis for mechanical systems excited by spatially correlated loads

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Fei, Qingguo; Li, Yanbin; Wu, Shaoqing; Chen, Qiang

    2018-10-01

    MODal ENergy Analysis (MODENA) is an energy-based method proposed to deal with vibroacoustic problems. The performance of MODENA in the energy analysis of a mechanical system under spatially correlated excitation is investigated. A plate/cavity coupling system excited by a pressure field is studied in a numerical example involving four kinds of pressure fields: the purely random pressure field, the perfectly correlated pressure field, the incident diffuse field, and the turbulent boundary layer pressure fluctuation. The total energies of the subsystems differ from the reference solution only in the case of the purely random pressure field, and only for the non-excited subsystem (the cavity). A deeper analysis at the level of individual modal energies is further conducted via another numerical example, in which two structural modes excited by correlated forces are coupled with one acoustic mode. A dimensionless correlation strength factor is proposed to quantify the correlation strength between modal forces. Results show that the error on modal energy increases as the correlation strength factor increases. A criterion is proposed to establish a link between the error and the correlation strength factor: the error is negligible when the correlation strength is weak, that is, when the correlation strength factor is less than a critical value.

  15. An evaluation of potential sampling locations in a reservoir with emphasis on conserved spatial correlation structure.

    PubMed

    Yenilmez, Firdes; Düzgün, Sebnem; Aksoy, Aysegül

    2015-01-01

    In this study, kernel density estimation (KDE) was coupled with ordinary two-dimensional kriging (OK) to reduce the number of sampling locations in measurement and kriging of dissolved oxygen (DO) concentrations in Porsuk Dam Reservoir (PDR). Conservation of the spatial correlation structure in the DO distribution was a target. KDE was used as a tool to aid in identification of the sampling locations that would be removed from the sampling network in order to decrease the total number of samples. Accordingly, several networks were generated in which sampling locations were reduced from 65 to 10 in increments of 4 or 5 points at a time based on kernel density maps. DO variograms were constructed, and DO values in PDR were kriged. Performance of the networks in DO estimations were evaluated through various error metrics, standard error maps (SEM), and whether the spatial correlation structure was conserved or not. Results indicated that smaller number of sampling points resulted in loss of information in regard to spatial correlation structure in DO. The minimum representative sampling points for PDR was 35. Efficacy of the sampling location selection method was tested against the networks generated by experts. It was shown that the evaluation approach proposed in this study provided a better sampling network design in which the spatial correlation structure of DO was sustained for kriging.

  16. Hybrid optical CDMA-FSO communications network under spatially correlated gamma-gamma scintillation.

    PubMed

    Jurado-Navas, Antonio; Raddo, Thiago R; Garrido-Balsells, José María; Borges, Ben-Hur V; Olmos, Juan José Vegas; Monroy, Idelfonso Tafur

    2016-07-25

    In this paper, we propose a new hybrid network solution based on asynchronous optical code-division multiple-access (OCDMA) and free-space optical (FSO) technologies for last-mile access networks, where fiber deployment is impractical. The architecture of the proposed hybrid OCDMA-FSO network is thoroughly described. The users access the network in a fully asynchronous manner by means of assigned fast frequency hopping (FFH)-based codes. In the FSO receiver, an equal gain-combining technique is employed along with intensity modulation and direct detection. New analytical formalisms for evaluating the average bit error rate (ABER) performance are also proposed. These formalisms, based on the spatially correlated gamma-gamma statistical model, are derived considering three distinct scenarios, namely, uncorrelated, totally correlated, and partially correlated channels. Numerical results show that users can successfully achieve error-free ABER levels for the three scenarios considered as long as forward error correction (FEC) algorithms are employed. Therefore, OCDMA-FSO networks can be a prospective alternative to deliver high-speed communication services to access networks with deficient fiber infrastructure.

  17. The role of visual spatial attention in adult developmental dyslexia.

    PubMed

    Collis, Nathan L; Kohnen, Saskia; Kinoshita, Sachiko

    2013-01-01

    The present study investigated the nature of visual spatial attention deficits in adults with developmental dyslexia, using a partial report task with five-letter, digit, and symbol strings. Participants responded by a manual key press to one of nine alternatives, which included other characters in the string, allowing an assessment of position errors as well as intrusion errors. The results showed that the dyslexic adults performed significantly worse than age-matched controls with letter and digit strings but not with symbol strings. Both groups produced W-shaped serial position functions with letter and digit strings. The dyslexics' deficits with letter string stimuli were limited to position errors, specifically at the string-interior positions 2 and 4. These errors correlated with letter transposition reading errors (e.g., reading slat as "salt"), but not with the Rapid Automatized Naming (RAN) task. Overall, these results suggest that the dyslexic adults have a visual spatial attention deficit; however, the deficit does not reflect a reduced span in visual-spatial attention, but a deficit in processing a string of letters in parallel, probably due to difficulty in the coding of letter position.

  18. A parametric multiclass Bayes error estimator for the multispectral scanner spatial model performance evaluation

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.; Anuta, P. E. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. The probability of correct classification of the various populations in the data was defined as the primary performance index. Because the multispectral data are also multiclass in nature, a Bayes error estimation procedure that depends on a set of class statistics alone was required. The classification error was expressed in terms of an N-dimensional integral, where N was the dimensionality of the feature space. The multispectral scanner spatial model was represented by a linear, shift-invariant, multiple-port system in which the N spectral bands comprised the input processes. The scanner characteristic function, the relationship governing the transformation of the input spatial (and hence spectral) correlation matrices through the system, was developed.

  19. Theoretical analysis on the measurement errors of local 2D DIC: Part I temporal and spatial uncertainty quantification of displacement measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yueqi; Lava, Pascal; Reu, Phillip

    This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.

  20. Theoretical analysis on the measurement errors of local 2D DIC: Part I temporal and spatial uncertainty quantification of displacement measurements

    DOE PAGES

    Wang, Yueqi; Lava, Pascal; Reu, Phillip; ...

    2015-12-23

    This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.
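    A well-known closed-form special case of this kind of result is the leading-order random-error estimate std(u) ≈ sqrt(2) * sigma_noise / sqrt(sum of squared intensity gradients in the subset). The sketch below evaluates that expression for a synthetic speckle subset; it is only in the spirit of the generalized solution summarized above (the dependence on the subpixel displacement and the interpolation scheme is not modeled), and the speckle texture and noise level are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dic_random_error_std(subset, noise_std):
        """Leading-order estimate of the random error (standard deviation) of
        the u (x) and v (y) displacements from subset-based DIC:

            std(u) ~ sqrt(2) * sigma_noise / sqrt(sum of squared x-gradients)

        `subset` is a 2-D array of grey levels; gradients are central differences.
        """
        grad_row, grad_col = np.gradient(subset.astype(float))
        std_u = np.sqrt(2.0) * noise_std / np.sqrt(np.sum(grad_col ** 2))
        std_v = np.sqrt(2.0) * noise_std / np.sqrt(np.sum(grad_row ** 2))
        return std_u, std_v

    # Toy speckle-like subset: smoothed random texture, 31x31 pixels.
    rng = np.random.default_rng(0)
    texture = gaussian_filter(rng.normal(size=(31, 31)), sigma=1.5)
    subset = 128.0 + 60.0 * texture / texture.std()
    print(dic_random_error_std(subset, noise_std=2.0))   # sub-pixel noise floors
    ```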

  21. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models

    EPA Pesticide Factsheets

    Background: Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. Methods: ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Results: Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3–85% for population error, and 31–85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants

  22. Selecting a Separable Parametric Spatiotemporal Covariance Structure for Longitudinal Imaging Data

    PubMed Central

    George, Brandon; Aban, Inmaculada

    2014-01-01

    Longitudinal imaging studies allow great insight into how the structure and function of a subject’s internal anatomy changes over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures, and the spatial from the outcomes of interest being observed at multiple points in a patients body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on Type I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the Type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be done in practice, as well as how covariance structure choice can change inferences about fixed effects. PMID:25293361
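    The separable structure discussed above can be written compactly as a Kronecker product of a temporal and a spatial correlation matrix. The sketch below builds one such covariance (exponential in space, AR(1) in time) and draws one subject's data from it; the information-criterion comparison and the other correlation families from the paper are not reproduced, and all parameter values are assumptions.

    ```python
    import numpy as np

    def exponential_corr(dists, range_):
        """Spatial exponential correlation, exp(-d / range)."""
        return np.exp(-dists / range_)

    def ar1_corr(n_times, rho):
        """Temporal AR(1) correlation matrix, rho**|t - t'|."""
        lags = np.abs(np.subtract.outer(np.arange(n_times), np.arange(n_times)))
        return rho ** lags

    # Separable model: corr((s, t), (s', t')) = corr_t(t, t') * corr_s(s, s'),
    # i.e. the full covariance is sigma^2 * kron(R_t, R_s).
    rng = np.random.default_rng(0)
    coords = rng.uniform(0.0, 10.0, size=(6, 2))        # 6 spatial locations
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    R_s = exponential_corr(d, range_=4.0)
    R_t = ar1_corr(5, rho=0.7)                          # 5 repeated scans
    Sigma = 2.0 ** 2 * np.kron(R_t, R_s)                # (30, 30), time-major order

    # One subject's outcomes at all scan times and locations under this model.
    y = rng.multivariate_normal(np.zeros(Sigma.shape[0]), Sigma)
    print(Sigma.shape, y.shape)
    ```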

  23. Errors on interrupter tasks presented during spatial and verbal working memory performance are linearly linked to large-scale functional network connectivity in high temporal resolution resting state fMRI.

    PubMed

    Magnuson, Matthew Evan; Thompson, Garth John; Schwarb, Hillary; Pan, Wen-Ju; McKinley, Andy; Schumacher, Eric H; Keilholz, Shella Dawn

    2015-12-01

    The brain is organized into networks composed of spatially separated anatomical regions exhibiting coherent functional activity over time. Two of these networks (the default mode network, DMN, and the task positive network, TPN) have been implicated in the performance of a number of cognitive tasks. To directly examine the stable relationship between network connectivity and behavioral performance, high temporal resolution functional magnetic resonance imaging (fMRI) data were collected during the resting state, and behavioral data were collected from 15 subjects on different days, exploring verbal working memory, spatial working memory, and fluid intelligence. Sustained attention performance was also evaluated in a task interleaved between resting state scans. Functional connectivity within and between the DMN and TPN was related to performance on these tasks. Decreased TPN resting state connectivity was found to significantly correlate with fewer errors on an interrupter task presented during a spatial working memory paradigm and decreased DMN/TPN anti-correlation was significantly correlated with fewer errors on an interrupter task presented during a verbal working memory paradigm. A trend for increased DMN resting state connectivity to correlate to measures of fluid intelligence was also observed. These results provide additional evidence of the relationship between resting state networks and behavioral performance, and show that such results can be observed with high temporal resolution fMRI. Because cognitive scores and functional connectivity were collected on nonconsecutive days, these results highlight the stability of functional connectivity/cognitive performance coupling.

  24. A spatially adaptive total variation regularization method for electrical resistance tomography

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2015-12-01

    The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in the flat region, is often degraded by noise. To optimize the regularization term and the regularization factor according to the spatial feature and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. A kind of effective spatial feature indicator named difference curvature is used to identify which region is a flat or edge region. According to different spatial features, the SATV regularization method can automatically adjust both the regularization term and regularization factor. At edge regions, the regularization term is approximate to the TV functional to preserve the edges; in flat regions, it is approximate to the first-order Tikhonov (FOT) functional to make the solution stable. Meanwhile, the adaptive regularization factor determined by the spatial feature is used to constrain the regularization strength of the SATV regularization method for different regions. Besides, a numerical scheme is adopted for the implementation of the second derivatives of difference curvature to improve the numerical stability. Several reconstruction image metrics are used to quantitatively evaluate the performance of the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV (mean relative error 0.259, mean correlation coefficient 0.738) regularization method can endure a relatively high level of noise and improve the resolution of reconstructed images.

  25. Fast adaptive diamond search algorithm for block-matching motion estimation using spatial correlation

    NASA Astrophysics Data System (ADS)

    Park, Sang-Gon; Jeong, Dong-Seok

    2000-12-01

    In this paper, we propose a fast adaptive diamond search algorithm (FADS) for block matching motion estimation. Many fast motion estimation algorithms reduce the computational complexity by the UESA (Unimodal Error Surface Assumption) where the matching error monotonically increases as the search moves away from the global minimum point. Recently, many fast BMAs (Block Matching Algorithms) make use of the fact that global minimum points in real world video sequences are centered at the position of zero motion. But these BMAs, especially in large motion, are easily trapped into the local minima and result in poor matching accuracy. So, we propose a new motion estimation algorithm using the spatial correlation among the neighboring blocks. We move the search origin according to the motion vectors of the spatially neighboring blocks and their MAEs (Mean Absolute Errors). The computer simulation shows that the proposed algorithm has almost the same computational complexity with DS (Diamond Search), but enhances PSNR. Moreover, the proposed algorithm gives almost the same PSNR as that of FS (Full Search), even for the large motion with half the computational load.

  26. Carrier-phase multipath corrections for GPS-based satellite attitude determination

    NASA Technical Reports Server (NTRS)

    Axelrad, A.; Reichert, P.

    2001-01-01

    This paper demonstrates the high degree of spatial repeatability of carrier-phase multipath errors in a spacecraft environment and describes a correction technique, termed the sky map method, which exploits this spatial correlation to correct measurements and improve the accuracy of GPS-based attitude solutions.

  27. What Do They Have in Common? Drivers of Streamflow Spatial Correlation and Prediction of Flow Regimes in Ungauged Locations

    NASA Astrophysics Data System (ADS)

    Betterle, A.; Radny, D.; Schirmer, M.; Botter, G.

    2017-12-01

    The spatial correlation of daily streamflows represents a statistical index encapsulating the similarity between hydrographs at two arbitrary catchment outlets. In this work, a process-based analytical framework is utilized to investigate the hydrological drivers of streamflow spatial correlation through an extensive application to 78 pairs of stream gauges belonging to 13 unregulated catchments in the eastern United States. The analysis provides insight on how the observed heterogeneity of the physical processes that control flow dynamics ultimately affect streamflow correlation and spatial patterns of flow regimes. Despite the variability of recession properties across the study catchments, the impact of heterogeneous drainage rates on the streamflow spatial correlation is overwhelmed by the spatial variability of frequency and intensity of effective rainfall events. Overall, model performances are satisfactory, with root mean square errors between modeled and observed streamflow spatial correlation below 10% in most cases. We also propose a method for estimating streamflow correlation in the absence of discharge data, which proves useful to predict streamflow regimes in ungauged areas. The method consists in setting a minimum threshold on the modeled flow correlation to individuate hydrologically similar sites. Catchment outlets that are most correlated (ρ>0.9) are found to be characterized by analogous streamflow distributions across a broad range of flow regimes.

  28. [Spatial interpolation of soil organic matter using regression Kriging and geographically weighted regression Kriging].

    PubMed

    Yang, Shun-hua; Zhang, Hai-tao; Guo, Long; Ren, Yan

    2015-06-01

    Relative elevation and stream power index were selected as auxiliary variables based on correlation analysis for mapping soil organic matter. Geographically weighted regression Kriging (GWRK) and regression Kriging (RK) were used for spatial interpolation of soil organic matter and compared with ordinary Kriging (OK), which acts as a control. The results indicated that soil organic matter was significantly positively correlated with relative elevation whilst it had a significantly negative correlation with stream power index. Semivariance analysis showed that both soil organic matter content and its residuals (including ordinary least square regression residual and GWR residual) had strong spatial autocorrelation. Interpolation accuracies by different methods were estimated based on a data set of 98 validation samples. Results showed that the mean error (ME), mean absolute error (MAE) and root mean square error (RMSE) of RK were respectively 39.2%, 17.7% and 20.6% lower than the corresponding values of OK, with a relative-improvement (RI) of 20.63. GWRK showed a similar tendency, having its ME, MAE and RMSE to be respectively 60.6%, 23.7% and 27.6% lower than those of OK, with a RI of 59.79. Therefore, both RK and GWRK significantly improved the accuracy of OK interpolation of soil organic matter due to their incorporation of auxiliary variables. In addition, GWRK performed obviously better than RK did in this study, and its improved performance should be attributed to the consideration of sample spatial locations.

  29. Spatio-temporal filtering for determination of common mode error in regional GNSS networks

    NASA Astrophysics Data System (ADS)

    Bogusz, Janusz; Gruszczynski, Maciej; Figurski, Mariusz; Klos, Anna

    2015-04-01

    The spatial correlation between different stations for individual components in regional GNSS networks seems to be significant. Mismodelling in satellite orbits, the Earth orientation parameters (EOP), large-scale atmospheric effects or satellite antenna phase centre corrections can all cause regionally correlated errors. These GPS time series errors are referred to as common mode errors (CMEs). They are usually estimated with regional spatial filtering, such as "stacking". In this paper, we show the stacking approach for the set of ASG-EUPOS permanent stations, assuming that the spatial distribution of the CME is uniform over the whole region of Poland (more than 600 km extent). ASG-EUPOS is a multifunctional precise positioning system based on the reference network designed for Poland. We used a 5-year span of time series (2008-2012) of daily solutions in the ITRF2008 from Bernese 5.0 processed by the Military University of Technology EPN Local Analysis Centre (MUT LAC). At the beginning of our analyses concerning spatial dependencies, the correlation coefficients between each pair of stations in the GNSS network were calculated. This analysis shows that the spatio-temporal behaviour of the GPS-derived time series is not purely random, but exhibits an evident uniform spatial response. In order to quantify the influence of the CME filtering, the L1 and L2 norms were determined. The values of these norms were calculated for the North, East and Up components twice: before the filtration and after stacking. The observed reduction of the L1 and L2 norms was up to 30%, depending on the dimension of the network. However, the question of how to define an optimal size for the CME-analysed subnetwork remains unanswered in this research, because our network is not spatially extensive enough.
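    The classical stacking filter referred to above reduces, under the uniform-CME assumption, to subtracting the network-mean residual at each epoch. The sketch below is that bare-bones version (no weighting, no careful detrending), with a toy network whose numbers are assumptions; it also reports the kind of relative norm reduction quoted in the abstract.

    ```python
    import numpy as np

    def stack_filter(residuals):
        """Remove a spatially uniform common mode error by stacking: the CME at
        each epoch is the mean residual over all stations at that epoch.

        residuals : (n_epochs, n_stations) detrended daily residuals of one
                    topocentric component (North, East or Up); NaN = missing.
        """
        cme = np.nanmean(residuals, axis=1, keepdims=True)
        return residuals - cme, cme

    # Toy example for one component of a small regional network.
    rng = np.random.default_rng(0)
    n_epochs, n_sta = 1826, 12                               # 5 years of dailies
    cme_true = rng.normal(scale=2.5, size=(n_epochs, 1))     # mm, uniform CME
    resid = cme_true + rng.normal(scale=1.2, size=(n_epochs, n_sta))
    filtered, cme_est = stack_filter(resid)
    l2_before = np.sqrt(np.nansum(resid ** 2))
    l2_after = np.sqrt(np.nansum(filtered ** 2))
    print(1.0 - l2_after / l2_before)    # relative reduction of the L2 norm
    ```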

  30. Selecting a separable parametric spatiotemporal covariance structure for longitudinal imaging data.

    PubMed

    George, Brandon; Aban, Inmaculada

    2015-01-15

    Longitudinal imaging studies allow great insight into how the structure and function of a subject's internal anatomy changes over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures and the spatial from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on types I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be performed in practice, as well as how covariance structure choice can change inferences about fixed effects. Copyright © 2014 John Wiley & Sons, Ltd.

  31. Performance analysis of MIMO wireless optical communication system with Q-ary PPM over correlated log-normal fading channel

    NASA Astrophysics Data System (ADS)

    Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua

    2018-06-01

    The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of the uncoded bit error rate and ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by a single log-normal random variable. The analytical and simulation results corroborate that increasing the correlation coefficients among sub-channels degrades system performance. Moreover, receiver diversity performs better at resisting the channel fading caused by spatial correlation.
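    Wilkinson's method mentioned above amounts to matching the first two moments of the sum of correlated log-normals with a single log-normal. A sketch of that moment matching, checked against Monte Carlo, is given below; the BER and capacity expressions of the paper are not included, and the channel parameters are assumptions.

    ```python
    import numpy as np

    def wilkinson_lognormal_sum(mu, cov):
        """Approximate S = sum_i exp(X_i), X ~ N(mu, cov), by exp(Z) with
        Z ~ N(mu_z, sigma_z^2), matching E[S] and E[S^2] (Wilkinson's method)."""
        var = np.diag(cov)
        m1 = np.sum(np.exp(mu + 0.5 * var))                               # E[S]
        m2 = np.sum(np.exp(mu[:, None] + mu[None, :]
                           + 0.5 * (var[:, None] + var[None, :]) + cov))  # E[S^2]
        sigma_z2 = np.log(m2) - 2.0 * np.log(m1)
        mu_z = 2.0 * np.log(m1) - 0.5 * np.log(m2)
        return mu_z, sigma_z2

    # Check against Monte Carlo for 4 spatially correlated sub-channels.
    rng = np.random.default_rng(0)
    mu = np.full(4, -0.5)
    cov = 0.3 ** 2 * (0.6 * np.ones((4, 4)) + 0.4 * np.eye(4))   # rho = 0.6
    mu_z, s2 = wilkinson_lognormal_sum(mu, cov)
    samples = np.exp(rng.multivariate_normal(mu, cov, size=200_000)).sum(axis=1)
    print(np.exp(mu_z + 0.5 * s2), samples.mean())                     # 1st moment
    print((np.exp(s2) - 1.0) * np.exp(2.0 * mu_z + s2), samples.var()) # 2nd moment
    ```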

  32. Restoring method for missing data of spatial structural stress monitoring based on correlation

    NASA Astrophysics Data System (ADS)

    Zhang, Zeyu; Luo, Yaozhi

    2017-07-01

    Long-term monitoring of spatial structures is of great importance for a full understanding of their performance and safety. Missing segments in the monitoring data record will affect the data analysis and safety assessment of the structure. Based on the long-term monitoring data of the steel structure of the Hangzhou Olympic Center Stadium, the correlation between the stress changes at different measuring points is studied, and an interpolation method for the missing stress data is proposed. To fit the correlation, stress data from correlated measuring points are selected over the 3 months of the season in which the data are missing. Daytime and nighttime data are fitted separately for interpolation. For simple linear regression, when a single point's correlation coefficient is 0.9 or more, the average interpolation error is about 5%. For multiple linear regression, the interpolation accuracy does not increase significantly once the number of correlated points exceeds 6. In the construction stage, the stress baseline value of each construction step should be calculated before interpolating missing data, and the average error is then within 10%. The interpolation error for continuous missing data is slightly larger than that for discrete missing data. The missing-data rate handled by this method should not exceed 30%. Finally, a measuring point's missing monitoring data are restored to verify the validity of the method.

  33. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    PubMed Central

    Kassabian, Nazelie; Presti, Letizia Lo; Rispoli, Francesco

    2014-01-01

    Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort is being devoted in this field, towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems in view of lower railway track equipments and maintenance costs, that is a priority to sustain the investments for modernizing the local and regional lines most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements are simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied on this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough correlation distance to Reference Stations (RSs) distance separation ratio values, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold. PMID:24922454
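    The LMMSE estimator with a Gauss-Markov (exponential) spatial correlation model can be sketched generically as below: the true DC field is estimated from noisy measurements as C_x (C_x + sigma_n^2 I)^{-1} y. This is not the paper's simulation setup; the station layout, correlation distance and noise level are assumptions, but the toy example reflects the reported behaviour that the benefit grows with the ratio of correlation distance to station separation.

    ```python
    import numpy as np

    def lmmse_dc_estimate(y, coords, sigma_dc, corr_dist, sigma_noise):
        """LMMSE estimate of true differential corrections from noisy ones,
        assuming a zero-mean Gauss-Markov spatial model
            C_x[i, j] = sigma_dc**2 * exp(-d_ij / corr_dist)
        and white measurement noise of variance sigma_noise**2."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        Cx = sigma_dc ** 2 * np.exp(-d / corr_dist)
        W = Cx @ np.linalg.inv(Cx + sigma_noise ** 2 * np.eye(len(y)))
        return W @ y

    # Toy network of 25 reference stations on a 400 km x 400 km area.
    rng = np.random.default_rng(0)
    coords = rng.uniform(0.0, 400.0, size=(25, 2))                 # km
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    true_dc = rng.multivariate_normal(np.zeros(25), np.exp(-d / 150.0))   # metres
    noisy_dc = true_dc + rng.normal(scale=0.5, size=25)
    est_dc = lmmse_dc_estimate(noisy_dc, coords, sigma_dc=1.0,
                               corr_dist=150.0, sigma_noise=0.5)
    print(np.std(noisy_dc - true_dc), np.std(est_dc - true_dc))    # error shrinks
    ```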

  34. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    PubMed

    Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K

    2016-11-25

    Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NO x , EC, PM 2.5 , SO 4 , O 3 ) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NO x or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
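    One Monte Carlo experiment of this type can be sketched as follows: Poisson counts are generated from an error-free main-pollutant exposure, the copollutant model is then fitted with error-contaminated exposures whose errors are correlated, and the bias is read off the average coefficients. The empirical covariances of the paper are replaced by arbitrary assumed values, and statsmodels' Poisson GLM stands in for the full time-series model (no seasonality or other confounders).

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n_days, n_sims = 1460, 200
    beta_main = np.log(1.05)        # true log-RR per unit of true main exposure

    cov_true = np.array([[1.0, 0.4], [0.4, 1.0]])   # assumed exposure covariance
    cov_err = np.array([[0.5, 0.3], [0.3, 0.5]])    # assumed correlated errors

    betas = []
    for _ in range(n_sims):
        x_true = rng.multivariate_normal(np.zeros(2), cov_true, size=n_days)
        x_obs = x_true + rng.multivariate_normal(np.zeros(2), cov_err, size=n_days)

        # Counts depend only on the true main-pollutant exposure (copollutant null).
        lam = np.exp(np.log(20.0) + beta_main * x_true[:, 0])
        y = rng.poisson(lam)

        # Copollutant model fitted with the mismeasured exposures.
        fit = sm.GLM(y, sm.add_constant(x_obs),
                     family=sm.families.Poisson()).fit()
        betas.append(fit.params[1:])

    mean_rr = np.exp(np.mean(betas, axis=0))
    print(mean_rr)   # main RR attenuated below 1.05; copollutant no longer exactly 1
    ```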

  35. Medium-range Performance of the Global NWP Model

    NASA Astrophysics Data System (ADS)

    Kim, J.; Jang, T.; Kim, J.; Kim, Y.

    2017-12-01

    The medium-range performance of the global numerical weather prediction (NWP) model of the Korea Meteorological Administration (KMA) is investigated, based on the prediction of the extratropical circulation. The mean square error is expressed as the sum of the spatial variance of the discrepancy between forecasts and observations and the square of the mean error (ME). Thus, it is important to investigate the ME effect in order to understand the model performance. The ME can be expressed as the difference of the forecast from the real climatology minus the observed anomaly. It is found that the global model suffers from a severe systematic ME in medium-range forecasts. The systematic ME is dominant throughout the troposphere in all months and can explain up to 25% of the root mean square error. We also compare the extratropical ME distribution with those from other NWP centers; the NWP models exhibit spatial ME structures similar to one another. The spatial ME pattern is highly correlated with that of the anomaly, implying that the ME varies with season. For example, the correlation coefficient between the ME and the anomaly ranges from -0.51 to -0.85 by month. The pattern of the extratropical circulation also has a high correlation with the anomaly. The global model has trouble faithfully simulating extratropical cyclones and blockings in the medium-range forecast; in particular, it has difficulty simulating anomalous events. If an anomalous period is chosen for a test-bed experiment, a large error due to the anomaly will result.
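    The decomposition used above, MSE = spatial variance of (F − O) plus ME², follows directly from the definition of variance; a quick numerical check on toy fields (arbitrary numbers, not KMA data) is shown below.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    F = rng.normal(loc=1.2, scale=2.0, size=(181, 360))   # toy forecast field
    O = rng.normal(loc=0.0, scale=2.0, size=(181, 360))   # toy verifying analysis

    diff = F - O
    mse = np.mean(diff ** 2)
    me = diff.mean()                # systematic mean error
    var = diff.var()                # spatial variance of the discrepancy
    print(np.isclose(mse, var + me ** 2))   # True
    print(me ** 2 / mse)            # fraction of the MSE explained by the ME
    ```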

  36. Comparison of different interpolation methods for spatial distribution of soil organic carbon and some soil properties in the Black Sea backward region of Turkey

    NASA Astrophysics Data System (ADS)

    Göl, Ceyhun; Bulut, Sinan; Bolat, Ferhat

    2017-10-01

    The purpose of this research is to compare the spatial variability of soil organic carbon (SOC) in four adjacent land uses, including the cultivated area, the grassland area, the plantation area and the natural forest area, in the semi-arid Black Sea backward region of Turkey. Some of the soil properties, including total nitrogen, SOC, soil organic matter, and bulk density, were measured on a grid with a 50 m sampling distance in the top soil (0-15 cm depth). Accordingly, a total of 120 samples were taken from the four adjacent land uses. Data were analyzed using geostatistical methods. The methods used were: block kriging (BK), co-kriging (CK) with organic matter, total nitrogen and bulk density as auxiliary variables, and inverse distance weighting (IDW) methods with powers of 1, 2 and 4. The methods were compared using performance criteria that included the root mean square error (RMSE), the mean absolute error (MAE) and the coefficient of correlation (r). The one-way ANOVA test showed that differences between the natural (0.6653 ± 0.2901) and plantation forest (0.7109 ± 0.2729) areas and the grassland (1.3964 ± 0.6828) and cultivated (1.5851 ± 0.5541) areas were statistically significant at the 0.05 level (F = 28.462). The best model for describing the spatial variation of SOC was CK, with the lowest error criteria (RMSE = 0.3342, MAE = 0.2292) and the highest coefficient of correlation (r = 0.84). The spatial structure of SOC could be well described by the spherical model. The nugget effect indicated that SOC was moderately spatially dependent in the study area. The error distributions of the model showed that the improved model was unbiased in predicting the spatial distribution of SOC. This study's results revealed that an explanatory variable linked to SOC increased the success of the spatial interpolation methods. In subsequent studies, this should be taken into account to reach more accurate outputs.

  17. Uncertainty Analysis of Downscaled CMIP5 Precipitation Data for Louisiana, USA

    NASA Astrophysics Data System (ADS)

    Sumi, S. J.; Tamanna, M.; Chivoiu, B.; Habib, E. H.

    2014-12-01

    The downscaled CMIP3 and CMIP5 Climate and Hydrology Projections dataset contains fine-spatial-resolution translations of climate projections over the contiguous United States, developed using two downscaling techniques (monthly Bias Correction Spatial Disaggregation (BCSD) and daily Bias Correction Constructed Analogs (BCCA)). The objective of this study is to assess the uncertainty of the CMIP5 downscaled general circulation models (GCMs). We performed an analysis of the daily, monthly, seasonal and annual variability of precipitation downloaded from the Downscaled CMIP3 and CMIP5 Climate and Hydrology Projections website for the state of Louisiana, USA at 0.125° x 0.125° resolution. A data set of daily gridded precipitation observations over a rectangular boundary covering Louisiana is used to assess the validity of 21 downscaled GCMs for the 1950-1999 period. The following statistics are computed for the 21 models with respect to the observed dataset: the correlation coefficient, the bias, the normalized bias, the mean absolute error (MAE), the mean absolute percentage error (MAPE), and the root mean square error (RMSE). A measure of the variability simulated by each model is computed as the ratio of its standard deviation, in both space and time, to the corresponding standard deviation of the observations. The correlation and MAPE statistics are also computed for each of the nine climate divisions of Louisiana. Some of the patterns we observed are: 1) Average annual precipitation rate shows a similar spatial distribution for all the models, within a range of 3.27 to 4.75 mm/day from northwest to southeast. 2) The standard deviation of summer (JJA) precipitation (mm/day) for the models is lower than that of the observations, whereas the models have similar spatial patterns and ranges of values in winter (NDJ). 3) Correlation coefficients of annual precipitation of the models against the observations range from -0.48 to 0.36, with spatial distributions that vary by model. 4) Most of the models show negative correlation coefficients in summer and positive ones in winter. 5) MAE shows a similar spatial distribution for all the models, within a range of 5.20 to 7.43 mm/day from northwest to southeast of Louisiana. 6) The highest correlation coefficients are found at the seasonal scale, within a range of 0.36 to 0.46.

  18. Spatial correlation and irradiance statistics in a multiple-beam terrestrial free-space optical communication link.

    PubMed

    Anguita, Jaime A; Neifeld, Mark A; Vasic, Bane V

    2007-09-10

    By means of numerical simulations we analyze the statistical properties of the power fluctuations induced by the incoherent superposition of multiple transmitted laser beams in a terrestrial free-space optical communication link. The measured signals arising from different transmitted optical beams are found to be statistically correlated. This channel correlation increases with receiver aperture and propagation distance. We find a simple scaling rule for the spatial correlation coefficient in terms of the propagation distance and we are able to predict the scintillation reduction in previously reported experiments with good accuracy. We propose an approximation to the probability density function of the received power of a spatially correlated multiple-beam system in terms of the parameters of the single-channel gamma-gamma function. A bit-error-rate evaluation is also presented to demonstrate the improvement of a multibeam system over its single-beam counterpart.

  19. Implementation of a flow-dependent background error correlation length scale formulation in the NEMOVAR OSTIA system

    NASA Astrophysics Data System (ADS)

    Fiedler, Emma; Mao, Chongyuan; Good, Simon; Waters, Jennifer; Martin, Matthew

    2017-04-01

    OSTIA is the Met Office's Operational Sea Surface Temperature (SST) and Ice Analysis system, which produces L4 (globally complete, gridded) analyses on a daily basis. Work is currently being undertaken to replace the original OI (Optimal Interpolation) data assimilation scheme with NEMOVAR, a 3D-Var data assimilation method developed for use with the NEMO ocean model. A dual background error correlation length scale formulation is used for SST in OSTIA, as implemented in NEMOVAR. Short and long length scales are combined according to the ratio of the decomposition of the background error variances into short and long spatial correlations. The pre-defined background error variances vary spatially and seasonally, but not on shorter time-scales. If the derived length scales applied to the daily analysis are too long, SST features may be smoothed out. Therefore a flow-dependent component to determining the effective length scale has also been developed. The total horizontal gradient of the background SST field is used to identify regions where the length scale should be shortened. These methods together have led to an improvement in the resolution of SST features compared to the previous OI analysis system, without the introduction of spurious noise. This presentation will show validation results for feature resolution in OSTIA using the OI scheme, the dual length scale NEMOVAR scheme, and the flow-dependent implementation.

  20. Online and offline awareness deficits: Anosognosia for spatial neglect.

    PubMed

    Chen, Peii; Toglia, Joan

    2018-04-12

    Anosognosia for spatial neglect (ASN) can be offline or online. Offline ASN is a general unawareness of having experienced spatial deficits. Online ASN is an awareness deficit in which one underestimates spatial difficulties that are likely to occur in an upcoming task (anticipatory ASN) or have just occurred during the task (emergent ASN). We explored the relationships among spatial neglect, offline ASN, anticipatory ASN, and emergent ASN. Research Method/Design: Forty-four survivors of stroke answered questionnaires assessing offline and online self-awareness of spatial problems. The online questionnaire was administered immediately before and after each of 4 tests for spatial neglect: shape cancellation, address and sentence copying, telephone dialing, and indented paragraph reading. Participants were certain they had difficulties in daily spatial tasks (offline awareness), in the task they were about to perform (anticipatory awareness), and in the task they had just performed (emergent awareness). Nonetheless, they consistently overestimated their spatial abilities, indicating ASN. Offline and online ASN appeared independent. Online ASN improved after task execution. Neglect severity was not positively correlated with offline ASN, but greater neglect severity correlated with both greater anticipatory and emergent ASN. Regardless of neglect severity, we found task-specific differences in emergent ASN but not in anticipatory ASN. Individuals with spatial neglect acknowledge their spatial difficulty (certainty of error occurrence) but may not necessarily recognize the extent of their difficulty (accuracy of error estimation). Our findings suggest that offline and online ASN are independent. A potential implication of the study is that familiar and challenging tasks may facilitate the emergence of self-awareness. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  1. Rain radar measurement error estimation using data assimilation in an advection-based nowcasting system

    NASA Astrophysics Data System (ADS)

    Merker, Claire; Ament, Felix; Clemens, Marco

    2017-04-01

    The quantification of measurement uncertainty for rain radar data remains challenging. Radar reflectivity measurements are affected, amongst other things, by calibration errors, noise, blocking and clutter, and attenuation. Their combined impact on measurement accuracy is difficult to quantify due to incomplete process understanding and complex interdependencies. An improved quality assessment of rain radar measurements is of interest for applications both in meteorology and hydrology, for example for precipitation ensemble generation, rainfall-runoff simulations, or data assimilation for numerical weather prediction. A detailed description of the spatial and temporal structure of errors is especially beneficial in order to make best use of the areal precipitation information provided by radars. Radar precipitation ensembles are one promising approach to represent spatially variable radar measurement errors. We present a method combining ensemble radar precipitation nowcasting with data assimilation to estimate radar measurement uncertainty at each pixel. This combination of ensemble forecast and observation yields a consistent spatial and temporal evolution of the radar error field. We use an advection-based nowcasting method to generate an ensemble reflectivity forecast from initial data of a rain radar network. Subsequently, reflectivity data from single radars are assimilated into the forecast using the Local Ensemble Transform Kalman Filter. The spread of the resulting analysis ensemble provides a flow-dependent, spatially and temporally correlated reflectivity error estimate at each pixel. We will present first case studies that illustrate the method using data from a high-resolution X-band radar network.
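
    A toy sketch of the ensemble-spread idea is given below: an ensemble of reflectivity fields is advected with perturbed displacement vectors, and the per-pixel standard deviation of the ensemble is taken as a flow-dependent error estimate. The reflectivity field, advection vectors and perturbation size are assumptions, and the LETKF assimilation step itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic reflectivity field (dBZ) with a single rain cell (illustrative only).
ny, nx = 100, 100
y, x = np.mgrid[0:ny, 0:nx]
refl = 40 * np.exp(-((x - 30)**2 + (y - 50)**2) / 200.0)

n_members, n_steps = 20, 6
mean_shift = np.array([0.0, 3.0])                # "true" advection per step (pixels)

ensemble = []
for _ in range(n_members):
    member = refl.copy()
    # Perturb the advection vector to represent an uncertain motion estimate.
    shift = mean_shift + rng.normal(scale=1.0, size=2)
    step = tuple(np.round(shift).astype(int))
    for _ in range(n_steps):
        member = np.roll(member, shift=step, axis=(0, 1))
    ensemble.append(member)
ensemble = np.stack(ensemble)

# Per-pixel ensemble spread as a spatially correlated error estimate.
spread = ensemble.std(axis=0)
print("max spread (dBZ):", spread.max().round(1))
```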

  2. Total ozone trend significance from space time variability of daily Dobson data

    NASA Technical Reports Server (NTRS)

    Wilcox, R. W.

    1981-01-01

    Estimates of standard errors of total ozone time and area means, as derived from ozone's natural temporal and spatial variability and autocorrelation in middle latitudes determined from daily Dobson data are presented. Assessing the significance of apparent total ozone trends is equivalent to assessing the standard error of the means. Standard errors of time averages depend on the temporal variability and correlation of the averaged parameter. Trend detectability is discussed, both for the present network and for satellite measurements.

  3. Assessing the resolution-dependent utility of tomograms for geostatistics

    USGS Publications Warehouse

    Day-Lewis, F. D.; Lane, J.W.

    2004-01-01

    Geophysical tomograms are used increasingly as auxiliary data for geostatistical modeling of aquifer and reservoir properties. The correlation between tomographic estimates and hydrogeologic properties is commonly based on laboratory measurements, co-located measurements at boreholes, or petrophysical models. The inferred correlation is assumed uniform throughout the interwell region; however, tomographic resolution varies spatially due to acquisition geometry, regularization, data error, and the physics underlying the geophysical measurements. Blurring and inversion artifacts are expected in regions traversed by few or only low-angle raypaths. In the context of radar traveltime tomography, we derive analytical models for (1) the variance of tomographic estimates, (2) the spatially variable correlation with a hydrologic parameter of interest, and (3) the spatial covariance of tomographic estimates. Synthetic examples demonstrate that tomograms of qualitative value may have limited utility for geostatistics; moreover, the imprint of regularization may preclude inference of meaningful spatial statistics from tomograms.

  4. Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models

    USGS Publications Warehouse

    Phillips, D.L.; Marks, D.G.

    1996-01-01

    In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.
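
    A minimal sketch of the Monte Carlo propagation step is shown below, assuming a placeholder PET formula (not the authors' model), the kriging SDs quoted in the abstract, and an invented cross-correlation matrix for the interpolation errors; correlated error terms are drawn via a Cholesky factor and pushed through the PET function to obtain a coefficient of variation.

```python
import numpy as np

rng = np.random.default_rng(3)

def pet_mm_day(temp_c, rh_pct, wind_ms):
    """Placeholder PET model (not the paper's); stands in for any PET formula."""
    return np.maximum(0.0, 0.3 * temp_c - 0.02 * rh_pct + 0.5 * wind_ms)

# Kriged estimates and kriging SDs at one grid point (values are illustrative).
kriged = np.array([12.0, 55.0, 3.0])       # temperature (C), RH (%), wind (m/s)
sd = np.array([2.6, 8.7, 0.38])            # kriging SDs quoted in the abstract
corr = np.array([[1.0, -0.5, 0.2],         # assumed cross-correlation of errors
                 [-0.5, 1.0, -0.1],
                 [0.2, -0.1, 1.0]])
cov = corr * np.outer(sd, sd)
L = np.linalg.cholesky(cov)

n_mc = 100
samples = kriged + rng.normal(size=(n_mc, 3)) @ L.T   # correlated error terms
pet = pet_mm_day(samples[:, 0], samples[:, 1], samples[:, 2])
cv = pet.std() / pet.mean()
print(f"PET mean={pet.mean():.2f} mm/day, CV={100 * cv:.0f}%")
```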

  5. Implementations of geographically weighted lasso in spatial data with multicollinearity (Case study: Poverty modeling of Java Island)

    NASA Astrophysics Data System (ADS)

    Setiyorini, Anis; Suprijadi, Jadi; Handoko, Budhi

    2017-03-01

    Geographically Weighted Regression (GWR) is a regression model that takes into account the effect of spatial heterogeneity. In applications of GWR, inference on regression coefficients is often of interest, as are estimation and prediction of the response variable. Empirical studies have demonstrated that local correlation between explanatory variables can lead to estimated regression coefficients in GWR that are strongly correlated, a condition named multicollinearity. This in turn produces large standard errors on the estimated regression coefficients and is therefore problematic for inference on relationships between variables. Geographically Weighted Lasso (GWL) is a method capable of dealing with spatial heterogeneity and local multicollinearity in spatial data sets. GWL is a further development of GWR that adds a LASSO (Least Absolute Shrinkage and Selection Operator) constraint to the parameter estimation. In this study, GWL is applied with a fixed exponential kernel weight matrix to build a poverty model of Java Island, Indonesia. The results of applying GWL to the poverty data sets show that this method stabilizes regression coefficients in the presence of multicollinearity and produces lower prediction and estimation error of the response variable than GWR does.
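
    A minimal sketch of a GWL fit at a single focal location is given below. It uses fixed exponential kernel weights and implements the weighted lasso by rescaling rows with the square roots of the weights before an ordinary scikit-learn Lasso fit; the data, bandwidth and penalty are invented for illustration (and the intercept is penalized here purely for brevity), so this is not the authors' estimation code.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)

# Synthetic spatial data set: coordinates, locally collinear covariates, response.
n, p = 100, 4
coords = rng.uniform(0, 10, size=(n, 2))
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)          # induces multicollinearity
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=n)

def gwl_at(focal_xy, bandwidth=2.0, alpha=0.05):
    """Geographically weighted lasso coefficients at one focal location."""
    d = np.linalg.norm(coords - focal_xy, axis=1)
    w = np.exp(-d / bandwidth)                        # fixed exponential kernel
    sw = np.sqrt(w)
    Xa = np.column_stack([np.ones(n), X])             # explicit intercept column
    # Weighted lasso via row rescaling by sqrt(weights); the intercept is
    # penalized here too, which is acceptable only for a rough sketch.
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    model.fit(Xa * sw[:, None], y * sw)
    return model.coef_[1:]                            # local slope coefficients

print(gwl_at(np.array([5.0, 5.0])))
```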

  6. Use of forecasting signatures to help distinguish periodicity, randomness, and chaos in ripples and other spatial patterns

    USGS Publications Warehouse

    Rubin, D.M.

    1992-01-01

    Forecasting of one-dimensional time series previously has been used to help distinguish periodicity, chaos, and noise. This paper presents two-dimensional generalizations for making such distinctions for spatial patterns. The techniques are evaluated using synthetic spatial patterns and then are applied to a natural example: ripples formed in sand by blowing wind. Tests with the synthetic patterns demonstrate that the forecasting techniques can be applied to two-dimensional spatial patterns, with the same utility and limitations as when applied to one-dimensional time series. One limitation is that some combinations of periodicity and randomness exhibit forecasting signatures that mimic those of chaos. For example, sine waves distorted with correlated phase noise have forecasting errors that increase with forecasting distance, errors that are minimized using nonlinear models at moderate embedding dimensions, and forecasting properties that differ significantly between the original and surrogates. Ripples formed in sand by flowing air or water typically vary in geometry from one to another, even when formed in a flow that is uniform on a large scale; each ripple modifies the local flow or sand-transport field, thereby influencing the geometry of the next ripple downcurrent. Spatial forecasting was used to evaluate the hypothesis that such a deterministic process - rather than randomness or quasiperiodicity - is responsible for the variation between successive ripples. This hypothesis is supported by a forecasting error that increases with forecasting distance, a greater accuracy of nonlinear relative to linear models, and significant differences between forecasts made with the original ripples and those made with surrogate patterns. Forecasting signatures cannot be used to distinguish ripple geometry from sine waves with correlated phase noise, but this kind of structure can be ruled out by two geometric properties of the ripples: successive ripples are highly correlated in wavelength, and ripple crests display dislocations such as branchings and mergers. © 1992 American Institute of Physics.

  7. Common mode error in Antarctic GPS coordinate time series and its effect on bedrock-uplift estimates

    NASA Astrophysics Data System (ADS)

    Liu, Bin; King, Matt; Dai, Wujiao

    2018-05-01

    Spatially-correlated common mode error (CME) always exists in regional or larger GPS networks. We applied independent component analysis (ICA) to GPS vertical coordinate time series in Antarctica from 2010 to 2014 and made a comparison with principal component analysis (PCA). Using PCA/ICA, the time series can be decomposed into a set of temporal components and their spatial responses, and we assume the components with common spatial responses are CME. An average reduction of ~40% in the RMS values was achieved with both PCA and ICA filtering. However, the common mode components obtained from the two approaches have different spatial and temporal features, and the ICA time series present interesting correlations with modeled atmospheric and non-tidal ocean loading displacements. A white noise (WN) plus power law noise (PL) model was adopted in the GPS velocity estimation using maximum likelihood estimation (MLE), with a ~55% reduction of the velocity uncertainties after filtering using ICA. Meanwhile, spatiotemporal filtering reduces the amplitude of the PL and periodic terms in the GPS time series. Finally, we compare the GPS uplift velocities, after correction for elastic effects, with recent models of glacial isostatic adjustment (GIA). The agreement between the GPS-observed velocities and four GIA models is generally improved after the spatiotemporal filtering, with a mean reduction of ~0.9 mm/yr in the WRMS values, possibly allowing for more confident separation of the various GIA model predictions.
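
    The sketch below shows the common-mode filtering idea on synthetic station residuals: stack the vertical time series, extract the leading component with PCA, treat it as CME because of its common spatial response, and subtract its reconstruction. FastICA from scikit-learn could be substituted for PCA; the station count, signal shapes and noise levels are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)

n_days, n_stations = 1500, 12
t = np.arange(n_days)

# Synthetic detrended vertical residuals: a shared (common mode) signal plus
# station-specific noise.
common = 3.0 * np.sin(2 * np.pi * t / 350.0) + rng.normal(scale=0.5, size=n_days)
series = (np.outer(common, rng.uniform(0.8, 1.2, n_stations))
          + rng.normal(scale=2.0, size=(n_days, n_stations)))

pca = PCA(n_components=3)
scores = pca.fit_transform(series)

# Treat the leading component (common spatial response) as common mode error
# and subtract its reconstruction; FastICA could be used here instead of PCA.
cme = np.outer(scores[:, 0], pca.components_[0])
filtered = series - cme

rms_before = np.sqrt((series**2).mean())
rms_after = np.sqrt((filtered**2).mean())
print(f"RMS reduction: {100 * (1 - rms_after / rms_before):.0f}%")
```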

  8. Airborne electromagnetic data levelling using principal component analysis based on flight line difference

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Peng, Cong; Lu, Yiming; Wang, Hao; Zhu, Kaiguang

    2018-04-01

    A novel technique is developed to level airborne geophysical data using principal component analysis based on flight line differences. Flight line differencing is introduced to enhance the features of the levelling error in airborne electromagnetic (AEM) data and to improve the correlation between pseudo tie lines; levelling is therefore applied to the flight line difference data rather than directly to the original AEM data. Pseudo tie lines are selected so that they are distributed across the profile direction while avoiding anomalous regions. Since the levelling errors of the selected pseudo tie lines are highly correlated, principal component analysis is applied to extract the local levelling errors by low-order principal component reconstruction. The levelling errors of the original AEM data are then obtained through inverse differencing after spatial interpolation. This levelling method requires neither flying tie lines nor designing a levelling fitting function. Its effectiveness is demonstrated by the levelling results of survey data, compared with the results from tie-line levelling and flight-line correlation levelling.

  9. The effect of the dynamic wet troposphere on radio interferometric measurements

    NASA Technical Reports Server (NTRS)

    Treuhaft, R. N.; Lanyi, G. E.

    1987-01-01

    A statistical model of water vapor fluctuations is used to describe the effect of the dynamic wet troposphere on radio interferometric measurements. It is assumed that the spatial structure of refractivity is approximated by Kolmogorov turbulence theory, and that the temporal fluctuations are caused by spatial patterns moved over a site by the wind, and these assumptions are examined for the VLBI delay and delay rate observables. The results suggest that the delay rate measurement error is usually dominated by water vapor fluctuations, and water vapor induced VLBI parameter errors and correlations are determined as a function of the delay observable errors. A method is proposed for including the water vapor fluctuations in the parameter estimation method to obtain improved parameter estimates and parameter covariances.

  10. A method to map errors in the deformable registration of 4DCT images

    PubMed Central

    Vaman, Constantin; Staub, David; Williamson, Jeffrey; Murphy, Martin J.

    2010-01-01

    Purpose: To present a new approach to the problem of estimating errors in deformable image registration (DIR) applied to sequential phases of a 4DCT data set. Methods: A set of displacement vector fields (DVFs) are made by registering a sequence of 4DCT phases. The DVFs are assumed to display anatomical movement, with the addition of errors due to the imaging and registration processes. The positions of physical landmarks in each CT phase are measured as ground truth for the physical movement in the DVF. Principal component analysis of the DVFs and the landmarks is used to identify and separate the eigenmodes of physical movement from the error eigenmodes. By subtracting the physical modes from the principal components of the DVFs, the registration errors are exposed and reconstructed as DIR error maps. The method is demonstrated via a simple numerical model of 4DCT DVFs that combines breathing movement with simulated maps of spatially correlated DIR errors. Results: The principal components of the simulated DVFs were observed to share the basic properties of principal components for actual 4DCT data. The simulated error maps were accurately recovered by the estimation method. Conclusions: Deformable image registration errors can have complex spatial distributions. Consequently, point-by-point landmark validation can give unrepresentative results that do not accurately reflect the registration uncertainties away from the landmarks. The authors are developing a method for mapping the complete spatial distribution of DIR errors using only a small number of ground truth validation landmarks. PMID:21158288
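
    A schematic version of the mode-separation step is sketched below on synthetic data: principal components of the DVF matrix whose temporal scores track the landmark-derived breathing signal are attributed to physical motion, and the remaining components are reconstructed as a registration error map. The breathing signal, mode shapes and correlation threshold are assumptions, not the authors' values.

```python
import numpy as np

rng = np.random.default_rng(6)

n_phases, n_voxels = 10, 4000
breathing = np.sin(2 * np.pi * np.arange(n_phases) / n_phases)   # landmark-derived signal

# Synthetic DVFs (one displacement component): physical motion mode + correlated error.
motion_mode = rng.normal(size=n_voxels)
error_mode = rng.normal(size=n_voxels)
error_scores = rng.normal(scale=0.3, size=n_phases)
dvf = np.outer(breathing, motion_mode) + np.outer(error_scores, error_mode)

# PCA of the DVF matrix via SVD of the phase-centered data.
dvf_c = dvf - dvf.mean(axis=0)
U, S, Vt = np.linalg.svd(dvf_c, full_matrices=False)
scores = U * S                                                    # temporal scores per mode

# Modes whose scores correlate strongly with the breathing signal are "physical".
corr = np.array([abs(np.corrcoef(scores[:, k], breathing)[0, 1]) for k in range(len(S))])
physical = corr > 0.9

# Reconstruct the error map from the remaining (non-physical) modes.
error_map = scores[:, ~physical] @ Vt[~physical, :]
print("modes flagged as physical motion:", int(physical.sum()))
```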

  11. Orbit error characteristic and distribution of TLE using CHAMP orbit data

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-li; Xiong, Yong-qing

    2018-02-01

    Space object orbital covariance data are required for collision risk assessments, but publicly accessible two line element (TLE) data do not provide orbital error information. This paper compares historical TLE data with GPS precision ephemerides of CHAMP to assess TLE orbit accuracy from 2002 to 2008, inclusive. TLE error variations with longitude and latitude were calculated to analyze the error characteristics and spatial distribution. The results indicate that TLE orbit data are systematically biased owing to the limitations of the SGP4 model. The biases can reach the level of kilometers, and their sign and magnitude correlate significantly with longitude.

  12. The effects of spatial autoregressive dependencies on inference in ordinary least squares: a geometric approach

    NASA Astrophysics Data System (ADS)

    Smith, Tony E.; Lee, Ka Lok

    2012-01-01

    There is a common belief that the presence of residual spatial autocorrelation in ordinary least squares (OLS) regression leads to inflated significance levels in beta coefficients and, in particular, inflated levels relative to the more efficient spatial error model (SEM). However, our simulations show that this is not always the case. Hence, the purpose of this paper is to examine this question from a geometric viewpoint. The key idea is to characterize the OLS test statistic in terms of angle cosines and examine the geometric implications of this characterization. Our first result is to show that if the explanatory variables in the regression exhibit no spatial autocorrelation, then the distribution of test statistics for individual beta coefficients in OLS is independent of any spatial autocorrelation in the error term. Hence, inferences about betas exhibit all the optimality properties of the classic uncorrelated error case. However, a second more important series of results show that if spatial autocorrelation is present in both the dependent and explanatory variables, then the conventional wisdom is correct. In particular, even when an explanatory variable is statistically independent of the dependent variable, such joint spatial dependencies tend to produce "spurious correlation" that results in over-rejection of the null hypothesis. The underlying geometric nature of this problem is clarified by illustrative examples. The paper concludes with a brief discussion of some possible remedies for this problem.
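
    The over-rejection effect described above can be reproduced with a small simulation, sketched below under assumed settings: both y and an irrelevant regressor x are given a simultaneous autoregressive (SAR) structure on a grid, and the empirical size of the nominal 5% OLS t-test on the x coefficient is counted.

```python
import numpy as np

rng = np.random.default_rng(7)

# Rook-adjacency weight matrix for a small grid, row-normalized.
side = 12
n = side * side
W = np.zeros((n, n))
for i in range(side):
    for j in range(side):
        k = i * side + j
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < side and 0 <= jj < side:
                W[k, ii * side + jj] = 1.0
W /= W.sum(axis=1, keepdims=True)

rho = 0.8
A = np.linalg.inv(np.eye(n) - rho * W)       # SAR transform: u = (I - rho W)^-1 e

def sar_field():
    return A @ rng.normal(size=n)

rejections, n_sim, crit = 0, 500, 1.96
for _ in range(n_sim):
    x = sar_field()                          # spatially autocorrelated regressor
    y = sar_field()                          # independent of x, also autocorrelated
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    rejections += abs(beta[1] / se) > crit
print("empirical size of nominal 5% test:", rejections / n_sim)
```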

  13. A Patch Density Recommendation based on Convergence Studies for Vehicle Panel Vibration Response resulting from Excitation by a Diffuse Acoustic Field

    NASA Technical Reports Server (NTRS)

    Smith, Andrew; LaVerde, Bruce; Jones, Douglas; Towner, Robert; Hunt, Ron

    2013-01-01

    Fluid-structure interaction problems that estimate panel vibration from an applied pressure field excitation are quite dependent on the spatial correlation of the pressure field. There is a danger of either overestimating the low frequency response or underpredicting the broadband panel response in the more modally dense bands if the spatial correlation of the pressure field is not accounted for adequately. Even when the analyst elects to use a fitted function for the spatial correlation, an error may be introduced if the patch density is not fine enough to represent the more continuous spatial correlation function throughout the intended frequency range of interest. Both qualitative and quantitative illustrations evaluating the adequacy of different patch density assumptions to approximate the fitted spatial correlation function are provided. The response of a typical vehicle panel system is then evaluated in a convergence study in which the patch density assumptions are varied over the same finite element model. The convergence study results illustrate the impact of a poor choice of patch density. The fitted correlation function used in this study represents a Diffuse Acoustic Field (DAF) excitation of the panel producing the vibration response.
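
    For a diffuse acoustic field, the spatial correlation between two points a distance r apart is commonly modeled as sin(kr)/(kr), with k the acoustic wavenumber. The sketch below builds that correlation matrix for coarse and fine patch-center grids on an assumed panel at an assumed frequency, to show how a coarse patch spacing begins to undersample the correlation lobes; the geometry and frequency are not taken from the study.

```python
import numpy as np

c = 343.0                      # speed of sound (m/s)
freq = 1000.0                  # analysis frequency (Hz), illustrative
k = 2 * np.pi * freq / c       # acoustic wavenumber

def daf_correlation_matrix(centers):
    """sin(kr)/(kr) correlation between patch centers for a diffuse field."""
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    return np.sinc(k * r / np.pi)          # np.sinc(x) = sin(pi x)/(pi x)

def patch_centers(panel_lx, panel_ly, nx, ny):
    x = (np.arange(nx) + 0.5) * panel_lx / nx
    y = (np.arange(ny) + 0.5) * panel_ly / ny
    gx, gy = np.meshgrid(x, y)
    return np.column_stack([gx.ravel(), gy.ravel()])

coarse = daf_correlation_matrix(patch_centers(0.6, 0.4, 4, 3))
fine = daf_correlation_matrix(patch_centers(0.6, 0.4, 12, 8))

# A coarse patch grid samples the sin(kr)/(kr) lobes poorly once the patch
# spacing approaches half an acoustic wavelength.
print("wavelength (m):", round(c / freq, 3))
print("coarse patch spacing (m):", 0.6 / 4, " fine:", round(0.6 / 12, 3))
print("neighbor correlation, coarse:", round(float(coarse[0, 1]), 2),
      " fine:", round(float(fine[0, 1]), 2))
```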

  14. Characterizing and estimating noise in InSAR and InSAR time series with MODIS

    USGS Publications Warehouse

    Barnhart, William D.; Lohman, Rowena B.

    2013-01-01

    InSAR time series analysis is increasingly used to image subcentimeter displacement rates of the ground surface. The precision of InSAR observations is often affected by several noise sources, including spatially correlated noise from the turbulent atmosphere. Under ideal scenarios, InSAR time series techniques can substantially mitigate these effects; however, in practice the temporal distribution of InSAR acquisitions over much of the world exhibits seasonal biases, long temporal gaps, and insufficient acquisitions to confidently obtain the precisions desired for tectonic research. Here, we introduce a technique for constraining the magnitude of errors expected from atmospheric phase delays on the ground displacement rates inferred from an InSAR time series, using independent observations of precipitable water vapor from MODIS. We implement a Monte Carlo error estimation technique based on multiple (100+) MODIS-based time series that sample date ranges close to the acquisition times of the available SAR imagery. This stochastic approach allows evaluation of the significance of signals present in the final time series product, in particular their correlation with topography and seasonality. We find that topographically correlated noise in individual interferograms is not spatially stationary, even over short spatial scales (<10 km). Overall, MODIS-inferred displacements and velocities exhibit errors of similar magnitude to the variability within an InSAR time series. We examine the MODIS-based confidence bounds in regions with a range of inferred displacement rates and find we are capable of resolving velocities as low as 1.5 mm/yr, with uncertainties increasing to ∼6 mm/yr in regions with higher topographic relief.

  15. Spatial and temporal variability of fine particle composition and source types in five cities of Connecticut and Massachusetts

    PubMed Central

    Lee, Hyung Joo; Gent, Janneane F.; Leaderer, Brian P.; Koutrakis, Petros

    2011-01-01

    To protect public health from PM2.5 air pollution, it is critical to identify the source types of PM2.5 mass and chemical components associated with higher risks of adverse health outcomes. Source apportionment modeling using Positive Matrix Factorization (PMF), was used to identify PM2.5 source types and quantify the source contributions to PM2.5 in five cities of Connecticut and Massachusetts. Spatial and temporal variability of PM2.5 mass, components and source contributions were investigated. PMF analysis identified five source types: regional pollution as traced by sulfur, motor vehicle, road dust, oil combustion and sea salt. The sulfur-related regional pollution and traffic source type were major contributors to PM2.5. Due to sparse ground-level PM2.5 monitoring sites, current epidemiological studies are susceptible to exposure measurement errors. The higher correlations in concentrations and source contributions between different locations suggest less spatial variability, resulting in less exposure measurement errors. When concentrations and/or contributions were compared to regional averages, correlations were generally higher than between-site correlations. This suggests that for assigning exposures for health effects studies, using regional average concentrations or contributions from several PM2.5 monitors is more reliable than using data from the nearest central monitor. PMID:21429560

  16. Using Bayesian hierarchical models to better understand nitrate sources and sinks in agricultural watersheds.

    PubMed

    Xia, Yongqiu; Weller, Donald E; Williams, Meghan N; Jordan, Thomas E; Yan, Xiaoyuan

    2016-11-15

    Export coefficient models (ECMs) are often used to predict nutrient sources and sinks in watersheds because ECMs can flexibly incorporate processes and have minimal data requirements. However, ECMs do not quantify uncertainties in model structure, parameters, or predictions; nor do they account for spatial and temporal variability in land characteristics, weather, and management practices. We applied Bayesian hierarchical methods to address these problems in ECMs used to predict nitrate concentration in streams. We compared four model formulations: a basic ECM and three models with additional terms that represent competing hypotheses about the sources of error in ECMs and about spatial and temporal variability of coefficients, namely an ADditive Error Model (ADEM), a SpatioTemporal Parameter Model (STPM), and a Dynamic Parameter Model (DPM). The DPM incorporates a first-order random walk to represent spatial correlation among parameters and a dynamic linear model to accommodate temporal correlation. We tested the modeling approach in a proof of concept using watershed characteristics and nitrate export measurements from watersheds in the Coastal Plain physiographic province of the Chesapeake Bay drainage. Among the four models, the DPM was the best: it had the lowest mean error, explained the most variability (R² = 0.99), had the narrowest prediction intervals, and provided the most effective tradeoff between fit and complexity (its deviance information criterion, DIC, was 45.6 units lower than that of any other model, indicating overwhelming support for the DPM). The superiority of the DPM supports its underlying hypothesis that the main source of error in ECMs is their failure to account for parameter variability rather than structural error. Analysis of the fitted DPM coefficients for cropland export and instream retention revealed some of the factors controlling nitrate concentration: cropland nitrate exports were positively related to stream flow and watershed average slope, while instream nitrate retention was positively correlated with nitrate concentration. By quantifying spatial and temporal variability in sources and sinks, the DPM provides new information to better target management actions to the most effective times and places. Given the wide use of ECMs as research and management tools, our approach can be broadly applied in other watersheds and to other materials. Copyright © 2016 Elsevier Ltd. All rights reserved.
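
    A data-generating sketch of the DPM's central idea, a cropland export coefficient that follows a first-order random walk across spatially ordered watersheds rather than being a single global constant, is given below. The land-cover fractions, flows, retention factor and noise scales are all invented, and the actual Bayesian fitting is omitted.

```python
import numpy as np

rng = np.random.default_rng(8)

n_watersheds = 30
cropland_frac = rng.uniform(0.1, 0.8, n_watersheds)      # synthetic land cover
flow = rng.lognormal(mean=0.0, sigma=0.3, size=n_watersheds)

# Cropland export coefficient follows a first-order random walk across
# (spatially ordered) watersheds instead of being a single global constant.
beta = np.empty(n_watersheds)
beta[0] = 5.0
for w in range(1, n_watersheds):
    beta[w] = beta[w - 1] + rng.normal(scale=0.3)

# In-stream retention reduces the delivered load (illustrative form).
retention = 0.2
nitrate = beta * cropland_frac * (1.0 - retention) / flow

# A basic ECM would force a single beta for all watersheds; comparing its fit to
# the spatially varying version mimics the DPM-versus-basic-ECM comparison.
beta_global = np.mean(beta)
basic = beta_global * cropland_frac * (1.0 - retention) / flow
print("RMSE of basic ECM vs spatially varying coefficients:",
      round(float(np.sqrt(np.mean((basic - nitrate)**2))), 3))
```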

  17. Characterization of identification errors and uses in localization of poor modal correlation

    NASA Astrophysics Data System (ADS)

    Martin, Guillaume; Balmes, Etienne; Chancelier, Thierry

    2017-05-01

    While modal identification is a mature subject, very few studies address the characterization of errors associated with components of a mode shape. This is particularly important in test/analysis correlation procedures, where the Modal Assurance Criterion is used to pair modes and to localize at which sensors discrepancies occur. Poor correlation is usually attributed to modeling errors, but clearly identification errors also occur. In particular with 3D Scanning Laser Doppler Vibrometer measurement, many transfer functions are measured. As a result individual validation of each measurement cannot be performed manually in a reasonable time frame and a notable fraction of measurements is expected to be fairly noisy leading to poor identification of the associated mode shape components. The paper first addresses measurements and introduces multiple criteria. The error measures the difference between test and synthesized transfer functions around each resonance and can be used to localize poorly identified modal components. For intermediate error values, diagnostic of the origin of the error is needed. The level evaluates the transfer function amplitude in the vicinity of a given mode and can be used to eliminate sensors with low responses. A Noise Over Signal indicator, product of error and level, is then shown to be relevant to detect poorly excited modes and errors due to modal property shifts between test batches. Finally, a contribution is introduced to evaluate the visibility of a mode in each transfer. Using tests on a drum brake component, these indicators are shown to provide relevant insight into the quality of measurements. In a second part, test/analysis correlation is addressed with a focus on the localization of sources of poor mode shape correlation. The MACCo algorithm, which sorts sensors by the impact of their removal on a MAC computation, is shown to be particularly relevant. Combined with the error it avoids keeping erroneous modal components. Applied after removal of poor modal components, it provides spatial maps of poor correlation, which help localizing mode shape correlation errors and thus prepare the selection of model changes in updating procedures.

  18. A Patch Density Recommendation based on Convergence Studies for Vehicle Panel Vibration Response resulting from Excitation by a Diffuse Acoustic Field

    NASA Technical Reports Server (NTRS)

    Smith, Andrew; LaVerde, Bruce; Jones, Douglas; Towner, Robert; Waldon, James; Hunt, Ron

    2013-01-01

    Fluid-structure interaction estimates of panel vibration from an applied pressure field excitation are quite dependent on the spatial correlation of the pressure field. There is a danger of either overestimating the low frequency response or underpredicting the broadband panel response in the more modally dense bands if the spatial correlation of the pressure field is not accounted for adequately. It is a useful practice to simulate the spatial correlation of the applied pressure field over a 2D surface using a matrix of small patch area regions on a finite element model (FEM). Use of a fitted function for the spatial correlation between patch centers can result in an error if the patch density is not fine enough to represent the more continuous spatial correlation function throughout the intended frequency range of interest. Several patch density assumptions to approximate the fitted spatial correlation function are first evaluated using both qualitative and quantitative illustrations. The response of a typical vehicle panel system FEM is then examined in a convergence study in which the patch density assumptions are varied over the same model. The convergence study results illustrate the impact a poor choice of patch density can have on the analytical response estimate. The fitted correlation function used in this study represents a diffuse acoustic field (DAF) excitation of the panel producing the vibration response.

  19. Systematic ionospheric electron density tilts (SITs) at mid-latitudes and their associated HF bearing errors

    NASA Astrophysics Data System (ADS)

    Tedd, B. L.; Strangeways, H. J.; Jones, T. B.

    1985-11-01

    Systematic ionospheric tilts (SITs) at midlatitudes and the diurnal variation of bearing error for different transmission paths are examined. An explanation of the diurnal variation of bearing error based on the dependence of ionospheric tilt on solar zenith angle and plasma transport processes is presented, and the effects of vertical ion drift and the momentum transfer of neutral winds are investigated. During the daytime, transmissions are reflected at low heights and photochemical processes control SITs; at night, transmissions are reflected at greater heights, and spatial and temporal variations of plasma transport processes influence SITs. An HF ray-tracing technique that uses a prediction-based three-dimensional ionospheric model to simulate SIT-induced bearing errors is described; it correlates poorly with experimental data, and the causes of this are studied. A second model, based on measured vertical-sounder data, is proposed; it is applicable for predicting bearing error over a range of transmission paths and correlates well with experimental data.

  20. Analysis of Correlation between Ionospheric Spatial Gradients and Space Weather Intensity under Nominal Conditions for Ground-Based Augmentation Systems

    NASA Astrophysics Data System (ADS)

    Lee, J.

    2013-12-01

    Ground-Based Augmentation Systems (GBAS) support aircraft precision approach and landing by providing differential GPS corrections to aviation users. For GBAS applications, most of ionospheric errors are removed by applying the differential corrections. However, ionospheric correction errors may exist due to ionosphere spatial decorrelation between GBAS ground facility and users. Thus, the standard deviation of ionosphere spatial decorrelation (σvig) is estimated and included in the computation of error bounds on user position solution. The σvig of 4mm/km, derived for the Conterminous United States (CONUS), bounds one-sigma ionospheric spatial gradients under nominal conditions (including active, but not stormy condition) with an adequate safety margin [1]. The conservatism residing in the current σvig by fixing it to a constant value for all non-stormy conditions could be mitigated by subdividing ionospheric conditions into several classes and using different σvig for each class. This new concept, real-time σvig adaptation, will be possible if the level of ionospheric activity can be well classified based on space weather intensity. This paper studies correlation between the statistics of nominal ionospheric spatial gradients and space weather indices. The analysis was carried out using two sets of data collected from Continuous Operating Reference Station (CORS) Network; 9 consecutive (nominal and ionospherically active) days in 2004 and 19 consecutive (relatively 'quiet') days in 2010. Precise ionospheric delay estimates are obtained using the simplified truth processing method and vertical ionospheric gradients are computed using the well-known 'station pair method' [2]. The remaining biases which include carrier-phase leveling errors and Inter-frequency Bias (IFB) calibration errors are reduced by applying linear slip detection thresholds. The σvig was inflated to overbound the distribution of vertical ionospheric gradients with the required confidence level. Using the daily maximum values of σvig, day-to-day variations of spatial gradients are compared to those of two space weather indices; Disturbance, Storm Time (Dst) index and Interplanetary Magnetic Field Bz (IMF Bz). The day-to-day variations of both space weather indices showed a good agreement with those of daily maximum σvig. The results demonstrate that ionospheric gradient statistics are highly correlated with space weather indices on nominal and off-nominal days. Further investigation on this relationship would facilitate prediction of upcoming ionospheric behavior based on space weather information and adjusting σvig in real time. Consequently it will improve GBAS availability by adding external information to operation. [1] Lee, J., S. Pullen, S. Datta-Barua, and P. Enge (2007), Assessment of ionosphere spatial decorrelation for GPS-based aircraft landing systems, J. Aircraft, 44(5), 1662-1669, doi:10.2514/1.28199. [2] Jung, S., and J. Lee (2012), Long-term ionospheric anomaly monitoring for ground based augmentation systems, Radio Sci., 47, RS4006, doi:10.1029/2012RS005016.

  1. Spatial perception predicts laparoscopic skills on virtual reality laparoscopy simulator.

    PubMed

    Hassan, I; Gerdes, B; Koller, M; Dick, B; Hellwig, D; Rothmund, M; Zielke, A

    2007-06-01

    This study evaluates the influence of visual-spatial perception on the laparoscopic performance of novices using a virtual reality simulator (LapSim(R)). Twenty-four novices completed standardized tests of visual-spatial perception (Lameris Toegepaste Natuurwetenschappelijk Onderzoek [TNO] Test(R) and Stumpf-Fay Cube Perspectives Test(R)), and laparoscopic skills were assessed objectively during 1-h practice sessions on the LapSim(R) comprising coordination, cutting, and clip application tasks. Outcome variables included time to complete the tasks, economy of motion, and total error scores. The degree of visual-spatial perception correlated significantly with laparoscopic performance on the LapSim(R). Participants with a high degree of spatial perception (Group A) performed the tasks faster than those with a low degree of spatial perception (Group B) (p = 0.001). Individuals with a high degree of spatial perception also scored better for economy of motion (p = 0.021), tissue damage (p = 0.009), and total error (p = 0.007). Among novices, visual-spatial perception is associated with manual skills performed on a virtual reality simulator. This result may be important for educators developing training programs that can be individually adapted.

  2. Applying Metrological Techniques to Satellite Fundamental Climate Data Records

    NASA Astrophysics Data System (ADS)

    Woolliams, Emma R.; Mittaz, Jonathan PD; Merchant, Christopher J.; Hunt, Samuel E.; Harris, Peter M.

    2018-02-01

    Quantifying long-term environmental variability, including climatic trends, requires decadal-scale time series of observations. The reliability of such trend analysis depends on the long-term stability of the data record and on understanding the sources of uncertainty in historic, current and future sensors. We give a brief overview of how metrological techniques can be applied to historical satellite data sets. In particular, we discuss the implications of error correlation at different spatial and temporal scales and the forms such correlation takes, and consider how uncertainty is propagated with partial correlation. We give a form of the Law of Propagation of Uncertainties that considers the propagation of uncertainties associated with common errors, yielding the covariance associated with Earth observations in different spectral channels.
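
    The sketch below gives a minimal numeric illustration of the Law of Propagation of Uncertainties with a correlation term, for a quantity that differences two spectral channels; the sensitivity coefficients, channel uncertainties and correlation values are assumed, not taken from any particular sensor.

```python
import numpy as np

# y = f(x1, x2): e.g., a brightness-temperature difference between two channels.
c = np.array([1.0, -1.0])          # sensitivity coefficients df/dx_i (assumed)
u = np.array([0.15, 0.12])         # standard uncertainties of the two channels (K)

def combined_uncertainty(r12):
    """u_c^2 = sum_i sum_j c_i c_j r_ij u_i u_j."""
    R = np.array([[1.0, r12], [r12, 1.0]])
    cov = R * np.outer(u, u)
    return float(np.sqrt(c @ cov @ c))

for r12 in (0.0, 0.5, 1.0):        # uncorrelated, partially, fully correlated errors
    print(f"r = {r12:.1f}: u_c = {combined_uncertainty(r12):.3f} K")
```

    With these assumed sensitivities, a fully correlated (common) error largely cancels in the channel difference, which is exactly the kind of effect that propagating uncertainties without the correlation term would miss.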

  3. Quantifying drivers of wild pig movement across multiple spatial and temporal scales

    USGS Publications Warehouse

    Kay, Shannon L.; Fischer, Justin W.; Monaghan, Andrew J.; Beasley, James C; Boughton, Raoul; Campbell, Tyler A; Cooper, Susan M; Ditchkoff, Stephen S.; Hartley, Stephen B.; Kilgo, John C; Wisely, Samantha M; Wyckoff, A Christy; Vercauteren, Kurt C.; Pipen, Kim M

    2017-01-01

    The analytical framework we present can be used to assess movement patterns arising from multiple data sources for a range of species while accounting for spatio-temporal correlations. Our analyses show the magnitude by which reaction norms can change based on the temporal scale of response data, illustrating the importance of appropriately defining temporal scales of both the movement response and covariates depending on the intended implications of research (e.g., predicting effects of movement due to climate change versus planning local-scale management). We argue that consideration of multiple spatial scales within the same framework (rather than comparing across separate studies post-hoc) gives a more accurate quantification of cross-scale spatial effects by appropriately accounting for error correlation.

  4. A spatially adaptive spectral re-ordering technique for lossless coding of hyper-spectral images

    NASA Technical Reports Server (NTRS)

    Memon, Nasir D.; Galatsanos, Nikolas

    1995-01-01

    In this paper, we propose a new approach, applicable to lossless compression of hyper-spectral images, that alleviates some limitations of linear prediction as applied to this problem. According to this approach, an adaptive re-ordering of the spectral components of each pixel is performed prior to prediction and encoding. This re-ordering adaptively exploits, on a pixel-by pixel basis, the presence of inter-band correlations for prediction. Furthermore, the proposed approach takes advantage of spatial correlations, and does not introduce any coding overhead to transmit the order of the spectral bands. This is accomplished by using the assumption that two spatially adjacent pixels are expected to have similar spectral relationships. We thus have a simple technique to exploit spectral and spatial correlations in hyper-spectral data sets, leading to compression performance improvements as compared to our previously reported techniques for lossless compression. We also look at some simple error modeling techniques for further exploiting any structure that remains in the prediction residuals prior to entropy coding.
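
    A toy sketch of the re-ordering idea follows: each pixel's bands are re-ordered according to the sorted values of the spatially adjacent, previously decoded pixel, and each band is then predicted from the previous band in that order, so no band-order side information is needed. The synthetic scanline and the comparison against a fixed band order are assumptions for illustration; entropy coding is omitted.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic hyperspectral scanline: each pixel's spectrum drifts slowly from its
# neighbor's, so adjacent pixels have similar spectral relationships.
n_pixels, n_bands = 64, 32
base = np.cumsum(rng.normal(size=n_bands))
scanline = base[None, :] + np.cumsum(rng.normal(scale=0.2, size=(n_pixels, n_bands)), axis=0)

def residuals_with_reordering(scanline):
    res = []
    for i in range(1, scanline.shape[0]):
        order = np.argsort(scanline[i - 1])        # order taken from the left neighbor only
        cur = scanline[i][order]                   # reorder the current pixel's bands
        res.append(np.diff(cur))                   # predict each band from the previous one
    return np.concatenate(res)

def residuals_plain(scanline):
    return np.diff(scanline[1:], axis=1).ravel()   # fixed band order for comparison

for name, r in (("reordered", residuals_with_reordering(scanline)),
                ("fixed order", residuals_plain(scanline))):
    print(f"{name}: mean |residual| = {np.abs(r).mean():.3f}")
```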

  5. Task relevance modulates the behavioural and neural effects of sensory predictions

    PubMed Central

    Friston, Karl J.; Nobre, Anna C.

    2017-01-01

    The brain is thought to generate internal predictions to optimize behaviour. However, it is unclear whether prediction signalling is an automatic brain function or depends on task demands. Here, we manipulated the spatial/temporal predictability of visual targets and the relevance of the spatial/temporal information provided by auditory cues. We used magnetoencephalography (MEG) to measure participants' brain activity during task performance. Task relevance modulated the influence of predictions on behaviour: spatial/temporal predictability improved spatial/temporal discrimination accuracy, but not vice versa. To explain these effects, we used behavioural responses to estimate subjective predictions under an ideal-observer model. Model-based time series of predictions and prediction errors (PEs) were associated with dissociable neural responses: predictions correlated with cue-induced beta-band activity in auditory regions and alpha-band activity in visual regions, while stimulus-bound PEs correlated with gamma-band activity in posterior regions. Crucially, task relevance modulated these spectral correlates, suggesting that current goals influence prediction and PE signalling. PMID:29206225

  6. Route learning in amnesia: a comparison of trial-and-error and errorless learning in patients with the Korsakoff syndrome.

    PubMed

    Kessels, Roy P C; van Loon, Eke; Wester, Arie J

    2007-10-01

    To examine the errorless learning approach using a procedural memory task (i.e., learning of actual routes) in patients with amnesia, as compared to trial-and-error learning. Counterbalanced, self-controlled case series. Psychiatric hospital (Korsakoff clinic). A convenience sample of 10 patients with the Korsakoff amnestic syndrome. All patients learned a route in four sessions on separate days using an errorless approach and a different route using trial and error. Error rate was scored during route learning, and standard neuropsychological tests were administered (i.e., the route recall subtest of the Rivermead Behavioural Memory Test (RBMT) and the Dutch version of the California Verbal Learning Test (VLGT)). A significant learning effect was found in the trial-and-error condition over consecutive sessions (P = 0.006), but no performance difference was found between errorless and trial-and-error learning of the routes. VLGT performance was significantly correlated with a trial-and-error advantage (P < 0.05); no significant correlation was found between the RBMT subtest and the learning conditions. Errorless learning was no more successful than trial-and-error learning of a procedural spatial task in patients with the Korsakoff syndrome (severe amnesia).

  7. Effects of Heterogeneity and Uncertainties in Sources and Initial and Boundary Conditions on Spatiotemporal Variations of Groundwater Levels

    NASA Astrophysics Data System (ADS)

    Zhang, Y. K.; Liang, X.

    2014-12-01

    The effects of aquifer heterogeneity and of uncertainties in source/sink and in initial and boundary conditions in a groundwater flow model on the spatiotemporal variations of groundwater level, h(x,t), were investigated. Analytical solutions for the variance and covariance of h(x,t) in an unconfined aquifer described by a linearized Boussinesq equation with a white noise source/sink and a random transmissivity field were derived. It was found that in a typical aquifer the error in h(x,t) at early time is mainly caused by the random initial condition; this error decreases with time and approaches a constant value at later times. The period during which the effect of the random initial condition is significant may last a few hundred days in most aquifers. The constant error in groundwater level at later time is due to the combined effects of the uncertain source/sink and flux boundary: the closer to the flux boundary, the larger the error. The error caused by the uncertain head boundary is limited to a narrow zone near that boundary but remains more or less constant over time. The effect of heterogeneity is to increase the variation of groundwater level, and the maximum effect occurs close to the constant head boundary because of the linear mean hydraulic gradient. The correlation of groundwater level decreases with temporal interval and spatial distance. In addition, heterogeneity enhances the correlation of groundwater level, especially at larger time intervals and small spatial distances.

  8. Sampling design for spatially distributed hydrogeologic and environmental processes

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1992-01-01

    A methodology for the design of sampling networks over space is proposed. The methodology is based on spatial random field representations of nonhomogeneous natural processes and on optimal spatial estimation techniques. One of the most important results of random field theory for the physical sciences is its rationalization of correlations in the spatial variability of natural processes. This correlation is extremely important both for interpreting spatially distributed observations and for predictive performance. The extent of site sampling and the types of data to be collected will depend on the relationship of subsurface variability to predictive uncertainty. While hypothesis formulation and initial identification of spatial variability characteristics are based on scientific understanding (such as knowledge of the physics of the underlying phenomena, geological interpretations, intuition and experience), the support offered by field data is statistically modelled. This model is not limited by the geometric nature of sampling and covers a wide range of subsurface uncertainties. A factorization scheme of the sampling error variance is derived, which possesses certain attractive properties allowing significant savings in computations. By means of this scheme, a practical sampling design procedure providing suitable indices of the sampling error variance is established. These indices can be used by way of multiobjective decision criteria to obtain the best sampling strategy. Neither the actual implementation of in-situ sampling nor the solution of the large spatial estimation systems of equations is necessary. The required values of the accuracy parameters involved in the network design are derived using reference charts (readily available for various combinations of data configurations and spatial variability parameters) and certain simple yet accurate analytical formulas. Insight is gained by applying the proposed sampling procedure to realistic examples related to sampling problems in two dimensions.

  9. Spatial and temporal variability of fine particle composition and source types in five cities of Connecticut and Massachusetts.

    PubMed

    Lee, Hyung Joo; Gent, Janneane F; Leaderer, Brian P; Koutrakis, Petros

    2011-05-01

    To protect public health from PM(2.5) air pollution, it is critical to identify the source types of PM(2.5) mass and chemical components associated with higher risks of adverse health outcomes. Source apportionment modeling using Positive Matrix Factorization (PMF), was used to identify PM(2.5) source types and quantify the source contributions to PM(2.5) in five cities of Connecticut and Massachusetts. Spatial and temporal variability of PM(2.5) mass, components and source contributions were investigated. PMF analysis identified five source types: regional pollution as traced by sulfur, motor vehicle, road dust, oil combustion and sea salt. The sulfur-related regional pollution and traffic source type were major contributors to PM(2.5). Due to sparse ground-level PM(2.5) monitoring sites, current epidemiological studies are susceptible to exposure measurement errors. The higher correlations in concentrations and source contributions between different locations suggest less spatial variability, resulting in less exposure measurement errors. When concentrations and/or contributions were compared to regional averages, correlations were generally higher than between-site correlations. This suggests that for assigning exposures for health effects studies, using regional average concentrations or contributions from several PM(2.5) monitors is more reliable than using data from the nearest central monitor. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Exploring Visuospatial Thinking in Chemistry Learning

    ERIC Educational Resources Information Center

    Wu, Hsin-Kai; Shah, Priti

    2004-01-01

    In this article, we examine the role of visuospatial cognition in chemistry learning. We review three related kinds of literature: correlational studies of spatial abilities and chemistry learning, students' conceptual errors and difficulties understanding visual representations, and visualization tools that have been designed to help overcome…

  11. Normalized Movement Quality Measures for Therapeutic Robots Strongly Correlate With Clinical Motor Impairment Measures

    PubMed Central

    Celik, Ozkan; O’Malley, Marcia K.; Boake, Corwin; Levin, Harvey S.; Yozbatiran, Nuray; Reistetter, Timothy A.

    2016-01-01

    In this paper, we analyze the correlations between four clinical measures (Fugl–Meyer upper extremity scale, Motor Activity Log, Action Research Arm Test, and Jebsen-Taylor Hand Function Test) and four robotic measures (smoothness of movement, trajectory error, average number of target hits per minute, and mean tangential speed), used to assess motor recovery. Data were gathered as part of a hybrid robotic and traditional upper extremity rehabilitation program for nine stroke patients. Smoothness of movement and trajectory error, temporally and spatially normalized measures of movement quality defined for point-to-point movements, were found to have significant moderate to strong correlations with all four of the clinical measures. The strong correlations suggest that smoothness of movement and trajectory error may be used to compare outcomes of different rehabilitation protocols and devices effectively, provide improved resolution for tracking patient progress compared to only pre- and post-treatment measurements, enable accurate adaptation of therapy based on patient progress, and deliver immediate and useful feedback to the patient and therapist. PMID:20388607

  12. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    USGS Publications Warehouse

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

    Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately on wet days. This process reproduces precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
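
    A minimal sketch of the two-step idea, assuming scikit-learn is available and using purely synthetic predictors and precipitation amounts (the feature set and model forms here are illustrative, not those of the study):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(1)
# Hypothetical predictors for a target grid point, e.g. precipitation at
# nearby gauges and an elevation difference (features are illustrative only).
X = rng.standard_normal((2000, 3))
wet = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(2000)) > 0.5
amount = np.where(wet, np.exp(0.8 * X[:, 0] + 0.3 * X[:, 2]
                              + 0.2 * rng.standard_normal(2000)), 0.0)

# Step 1: model precipitation occurrence (wet/dry).
occ = LogisticRegression().fit(X, wet)

# Step 2: model precipitation amount on wet days only (log-transformed).
amt = LinearRegression().fit(X[wet], np.log(amount[wet]))

# Combined estimate: expected amount = P(wet) * predicted wet-day amount.
p_wet = occ.predict_proba(X)[:, 1]
est = p_wet * np.exp(amt.predict(X))
print("correlation with synthetic truth:", np.corrcoef(est, amount)[0, 1].round(2))
```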

  13. Sound localization skills in children who use bilateral cochlear implants and in children with normal acoustic hearing

    PubMed Central

    Grieco-Calub, Tina M.; Litovsky, Ruth Y.

    2010-01-01

    Objectives To measure sound source localization in children who have sequential bilateral cochlear implants (BICIs); to determine if localization accuracy correlates with performance on a right-left discrimination task (i.e., spatial acuity); to determine if there is a measurable bilateral benefit on a sound source identification task (i.e., localization accuracy) by comparing performance under bilateral and unilateral listening conditions; and to determine if sound source localization continues to improve with longer durations of bilateral experience. Design Two groups of children participated in this study: a group of 21 children who received BICIs in sequential procedures (5–14 years old) and a group of 7 typically-developing children with normal acoustic hearing (5 years old). Testing was conducted in a large sound-treated booth with loudspeakers positioned on a horizontal arc with a radius of 1.2 m. Children participated in two experiments that assessed spatial hearing skills. Spatial hearing acuity was assessed with a discrimination task in which listeners determined if a sound source was presented on the right or left side of center; the smallest angle at which performance on this task was reliably above chance is the minimum audible angle (MAA). Sound localization accuracy was assessed with a sound source identification task in which children identified the perceived position of the sound source from a multi-loudspeaker array (7 or 15 loudspeakers); errors are quantified using the root-mean-square (RMS) error. Results Sound localization accuracy was highly variable among the children with BICIs, with RMS errors ranging from 19° to 56°. Performance of the normal-hearing (NH) group, with RMS errors ranging from 9° to 29°, was significantly better. Within the BICI group, in 11/21 children RMS errors were smaller in the bilateral vs. unilateral listening condition, indicating bilateral benefit. There was a significant correlation between spatial acuity and sound localization accuracy (R2=0.68, p<0.01), suggesting that children who achieve small RMS errors tend to have the smallest MAAs. Although there was large intersubject variability, testing of 11 children in the BICI group at two sequential visits revealed a subset of children who show improvement in spatial hearing skills over time. Conclusions A subset of children who use sequential BICIs can acquire sound localization abilities, even after long intervals between activation of hearing in the first- and second-implanted ears. This suggests that children who receive activation of the second implant later in life may be capable of developing spatial hearing abilities. The large variability in performance among the children with BICIs suggests that maturation of sound localization abilities in children with BICIs may depend on various individual subject factors such as age of implantation and chronological age. PMID:20592615

  14. Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, Ronald M.

    2015-01-01

    The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA GMAO). A global numerical weather prediction model, the Goddard Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low wavenumber errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.

  15. A quantitative comparison of simultaneous BOLD fMRI and NIRS recordings during functional brain activation

    NASA Technical Reports Server (NTRS)

    Strangman, Gary; Culver, Joseph P.; Thompson, John H.; Boas, David A.; Sutton, J. P. (Principal Investigator)

    2002-01-01

    Near-infrared spectroscopy (NIRS) has been used to noninvasively monitor adult human brain function in a wide variety of tasks. While rough spatial correspondences with maps generated from functional magnetic resonance imaging (fMRI) have been found in such experiments, the amplitude correspondences between the two recording modalities have not been fully characterized. To do so, we simultaneously acquired NIRS and blood-oxygenation level-dependent (BOLD) fMRI data and compared Δ(1/BOLD) (approximately R2*) to changes in oxyhemoglobin, deoxyhemoglobin, and total hemoglobin concentrations derived from the NIRS data from subjects performing a simple motor task. We expected the correlation with deoxyhemoglobin to be strongest, due to the causal relation between changes in deoxyhemoglobin concentrations and BOLD signal. Instead we found highly variable correlations, suggesting the need to account for individual subject differences in our NIRS calculations. We argue that the variability resulted from systematic errors associated with each of the signals, including: (1) partial volume errors due to focal concentration changes, (2) wavelength dependence of this partial volume effect, (3) tissue model errors, and (4) possible spatial incongruence between oxy- and deoxyhemoglobin concentration changes. After such effects were accounted for, strong correlations were found between fMRI changes and all optical measures, with oxyhemoglobin providing the strongest correlation. Importantly, this finding held even when including scalp, skull, and inactive brain tissue in the average BOLD signal. This may reflect, at least in part, the superior contrast-to-noise ratio for oxyhemoglobin relative to deoxyhemoglobin (from optical measurements), rather than physiology related to BOLD signal interpretation.

  16. A priori discretization error metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar

    2016-12-01

    Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.
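
    To make the notion of a priori aggregation information loss concrete, the following hedged sketch computes a simple proxy: the fraction of catchment area whose land-cover class is misrepresented when each aggregation unit is assigned its dominant class. The published subbasin and HRU metrics are more elaborate; this only illustrates the kind of quantity that can be computed before any model run.

```python
import numpy as np

def aggregation_info_loss(cell_class, cell_area, unit_id):
    """Fraction of total area whose land-cover class changes when each
    aggregation unit is represented by its dominant (area-weighted) class.
    A simple a priori proxy for discretization information loss."""
    loss_area = 0.0
    for u in np.unique(unit_id):
        in_u = unit_id == u
        classes, inv = np.unique(cell_class[in_u], return_inverse=True)
        class_area = np.bincount(inv, weights=cell_area[in_u])
        dominant = classes[np.argmax(class_area)]
        loss_area += cell_area[in_u][cell_class[in_u] != dominant].sum()
    return loss_area / cell_area.sum()

# Hypothetical example: 10,000 fine cells with a striped 3-class land-cover
# pattern, and two candidate discretizations (100 units vs 10 units).
cls = (np.arange(10_000) // 50) % 3
area = np.ones(10_000)
fine_units = np.repeat(np.arange(100), 100)
coarse_units = np.repeat(np.arange(10), 1000)
print("loss, 100 units:", round(aggregation_info_loss(cls, area, fine_units), 2))
print("loss,  10 units:", round(aggregation_info_loss(cls, area, coarse_units), 2))
```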

  17. The reduced serum free triiodothyronine and increased dorsal hippocampal SNAP-25 and Munc18-1 had existed in middle-aged CD-1 mice with mild spatial cognitive impairment.

    PubMed

    Cao, Lei; Jiang, Wei; Wang, Fang; Yang, Qi-Gang; Wang, Chao; Chen, Yong-Ping; Chen, Gui-Hai

    2013-12-02

    Changes of synaptic proteins in highlighted brain regions and decreased serum thyroid hormones (THs) have been implicated in age-related learning and memory decline. Previously, we showed significant pairwise correlations among markedly impaired spatial learning and memory ability, decreased serum free triiodothyronine (FT3) and increased hippocampal SNAP-25 and Munc18-1 in old Kunming mice. However, whether these changes and the correlations occur in middle-aged mice remains unclear. Since this age is one of the best stages to study age-related cognitive decline, we explored the spatial learning and memory ability, serum THs, cerebral SNAP-25 and Munc18-1 levels and their relationships in middle-aged mice in this study. The learning and memory abilities of 35 CD-1 mice (19 mice aged 6 months and 16 mice aged 12 months) were measured with a radial six-arm water maze (RAWM). The SNAP-25 and Munc18-1 levels were semi-quantified by Western blotting and the serum THs were detected by radioimmunoassay. The results showed that the middle-aged mice had decreased serum FT3, increased dorsal hippocampal (DH) SNAP-25 and Munc18-1, and more errors and longer latencies in both the learning and memory phases of the RAWM. The Pearson's correlation test showed that the DH SNAP-25 and Munc18-1 levels were positively correlated with the number of errors and the latency in the learning phases of the RAWM. Meanwhile, the DH SNAP-25 and Munc18-1 levels were negatively correlated with the serum FT3 level. These results suggest that reduced FT3 together with increased DH SNAP-25 and Munc18-1 levels might be involved in the decline of spatial learning ability in middle-aged mice. © 2013 Elsevier B.V. All rights reserved.

  18. A PRIOR EVALUATION OF TWO-STAGE CLUSTER SAMPLING FOR ACCURACY ASSESSMENT OF LARGE-AREA LAND-COVER MAPS

    EPA Science Inventory

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, withi...

  19. A consistent hierarchy of generalized kinetic equation approximations to the master equation applied to surface catalysis.

    PubMed

    Herschlag, Gregory J; Mitran, Sorin; Lin, Guang

    2015-06-21

    We develop a hierarchy of approximations to the master equation for systems that exhibit translational invariance and finite-range spatial correlation. Each approximation within the hierarchy is a set of ordinary differential equations that considers spatial correlations of varying lattice distance; the assumption is that the full system will have finite spatial correlations and thus the behavior of the models within the hierarchy will approach that of the full system. We provide evidence of this convergence in the context of one- and two-dimensional numerical examples. Lower levels within the hierarchy that consider shorter spatial correlations are shown to be up to three orders of magnitude faster than traditional kinetic Monte Carlo methods (KMC) for one-dimensional systems, while predicting similar system dynamics and steady states as KMC methods. We then test the hierarchy on a two-dimensional model for the oxidation of CO on RuO2(110), showing that low-order truncations of the hierarchy efficiently capture the essential system dynamics. By considering sequences of models in the hierarchy that account for longer spatial correlations, successive model predictions may be used to establish empirical approximations of error estimates. The hierarchy may be thought of as a class of generalized phenomenological kinetic models, since each element of the hierarchy approximates the master equation and the lowest level in the hierarchy is identical to a simple existing phenomenological kinetic model.

  20. Problems with small area surveys: lensing covariance of supernova distance measurements.

    PubMed

    Cooray, Asantha; Huterer, Dragan; Holz, Daniel E

    2006-01-20

    While luminosity distances from type Ia supernovae (SNe) are a powerful probe of cosmology, the accuracy with which these distances can be measured is limited by cosmic magnification due to gravitational lensing by the intervening large-scale structure. Spatial clustering of foreground mass leads to correlated errors in SNe distances. By including the full covariance matrix of SNe, we show that future wide-field surveys will remain largely unaffected by lensing correlations. However, "pencil beam" surveys, and those with narrow (but possibly long) fields of view, can be strongly affected. For a survey with 30 arcmin mean separation between SNe, lensing covariance leads to an approximately 45% increase in the expected errors in dark energy parameters.

  1. Predictive accuracy of a ground-water model--Lessons from a postaudit

    USGS Publications Warehouse

    Konikow, Leonard F.

    1986-01-01

    Hydrogeologic studies commonly include the development, calibration, and application of a deterministic simulation model. To help assess the value of using such models to make predictions, a postaudit was conducted on a previously studied area in the Salt River and lower Santa Cruz River basins in central Arizona. A deterministic, distributed-parameter model of the ground-water system in these alluvial basins was calibrated by Anderson (1968) using about 40 years of data (1923–64). The calibrated model was then used to predict future water-level changes during the next 10 years (1965–74). Examination of actual water-level changes in 77 wells from 1965–74 indicates a poor correlation between observed and predicted water-level changes. The differences have a mean of 73 ft (that is, predicted declines consistently exceeded those observed) and a standard deviation of 47 ft. The bias in the predicted water-level change can be accounted for by the large error in the assumed total pumpage during the prediction period. However, the spatial distribution of errors in predicted water-level change does not correlate with the spatial distribution of errors in pumpage. Consequently, the lack of precision probably is not related only to errors in assumed pumpage, but may indicate the presence of other sources of error in the model, such as the two-dimensional representation of a three-dimensional problem or the lack of consideration of land-subsidence processes. This type of postaudit is a valuable method of verifying a model, and an evaluation of predictive errors can provide an increased understanding of the system and aid in assessing the value of undertaking development of a revised model.

  2. Spectral characteristics of background error covariance and multiscale data assimilation

    DOE PAGES

    Li, Zhijin; Cheng, Xiaoping; Gustafson, Jr., William I.; ...

    2016-05-17

    The spatial resolutions of numerical atmospheric and oceanic circulation models have increased steadily over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. Difficulties of current data assimilation algorithms for such fine resolution models are numerically and theoretically examined. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine resolution models. Lastly, within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.

  3. Improving emissions inventories in North America through systematic analysis of model performance during ICARTT and MILAGRO

    NASA Astrophysics Data System (ADS)

    Mena, Marcelo Andres

    During 2004 and 2006 the University of Iowa provided air quality forecast support for flight planning of the ICARTT and MILAGRO field campaigns. A method for improving model performance in comparison to observations is shown. The method allows identifying sources of model error from boundary conditions and emissions inventories. Simultaneous analysis of horizontal interpolation of model error and error covariance showed that the error in ozone modeling is highly correlated to the error in its precursors, and that there is also a geographical correlation. During ICARTT, ozone modeling error was reduced by updating the National Emissions Inventory from 1999 to 2001, and further by updating large point source emissions from continuous monitoring data. Further improvements were achieved by reducing area emissions of NOx by 60% for states in the Southeast United States. Ozone error was highly correlated to NOy error during this campaign. Ozone production in the United States was also most sensitive to NOx emissions. During MILAGRO, model performance in terms of correlation coefficients was higher, but model error in ozone modeling was large due to overestimation of NOx and VOC emissions in Mexico City during forecasting. Large model improvements were obtained by decreasing NOx emissions in Mexico City by 50% and VOC emissions by 60%. Recurring ozone error is spatially correlated to CO and NOy error. Sensitivity studies show that Mexico City aerosol can reduce regional photolysis rates by 40% and ozone formation by 5-10%. Mexico City emissions can enhance NOy and O3 concentrations over the Gulf of Mexico by up to 10-20%. Mexico City emissions can convert regional ozone production regimes from VOC-limited to NOx-limited. A method of interpolating observations along flight tracks is shown, which can be used to infer the direction of outflow plumes. Ratios such as O3/NOy and NOx/NOy can provide information on chemical characteristics of the plume, such as age and ozone production regime. Interpolated MTBE observations can be used as a tracer of urban mobile source emissions. Finally, procedures for estimating and gridding emissions inventories in Brazil and Mexico are presented.

  4. Panel data models with spatial correlation: Estimation theory and an empirical investigation of the United States wholesale gasoline industry

    NASA Astrophysics Data System (ADS)

    Kapoor, Mudit

    The first part of my dissertation considers the estimation of a panel data model with error components that are both spatially and time-wise correlated. The dissertation combines a widely used model for spatial correlation (Cliff and Ord (1973, 1981)) with the classical error component panel data model. I introduce generalizations of the generalized moments (GM) procedure suggested in Kelejian and Prucha (1999) for estimating the spatial autoregressive parameter in the case of a single cross section. I then use those estimators to define feasible generalized least squares (GLS) procedures for the regression parameters. I give formal large sample results concerning the consistency of the proposed GM procedures, as well as the consistency and asymptotic normality of the proposed feasible GLS procedures. The new estimators remain computationally feasible even in large samples. The second part of my dissertation employs a Cliff-Ord-type model to empirically estimate the nature and extent of price competition in the US wholesale gasoline industry. I use data on average weekly wholesale gasoline prices for 289 terminals (distribution facilities) in the US. Data on demand factors, cost factors and market structure that affect price are also used. I consider two time periods, a high demand period (August 1999) and a low demand period (January 2000). I find a high level of competition in prices between neighboring terminals. In particular, the price at one terminal is significantly and positively correlated with the price at its neighboring terminals. Moreover, I find this correlation to be much higher during the low demand period than during the high demand period. In contrast to previous work, I include for each terminal the characteristics of the marginal customer by controlling for demand factors in the neighboring location. I find these demand factors to be important during the high demand period and insignificant during the low demand period. Furthermore, I have also considered spatial correlation in unobserved factors that affect price. I find it to be high and significant only during the low demand period. Not correcting for it leads to incorrect inferences regarding the exogenous explanatory variables.
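
    A hedged sketch of the Cliff-Ord spatial error process that the proposed GM and feasible GLS procedures are designed to handle; the weights matrix, parameter values, and the absence of individual effects are all simplifications for illustration, and the code does not implement the dissertation's estimators.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, rho, beta = 100, 5, 0.6, np.array([1.0, -2.0])

# Row-standardized nearest-neighbour weights on a line of N units
# (a stand-in for the Cliff-Ord contiguity matrix; purely illustrative).
W = np.zeros((N, N))
for i in range(N):
    for j in (i - 1, i + 1):
        if 0 <= j < N:
            W[i, j] = 1.0
W /= W.sum(axis=1, keepdims=True)

# Spatially correlated disturbances: u_t = rho * W u_t + eps_t,
# i.e. u_t = (I - rho W)^{-1} eps_t, the Cliff-Ord spatial error process.
A_inv = np.linalg.inv(np.eye(N) - rho * W)
X = rng.standard_normal((T, N, 2))
u = (A_inv @ rng.standard_normal((T, N, 1))).squeeze(-1)
y = X @ beta + u

# Pooled OLS remains consistent for beta here, but its usual standard
# errors ignore the spatial correlation in u.
Xs, ys = X.reshape(-1, 2), y.reshape(-1)
b_ols = np.linalg.lstsq(Xs, ys, rcond=None)[0]
print("OLS estimate of beta:", b_ols.round(2))
```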

  5. Monitoring of land subsidence and ground fissures in Xian, China 2005-2006: Mapped by sar Interferometry

    USGS Publications Warehouse

    Zhao, C.Y.; Zhang, Q.; Ding, X.-L.; Lu, Z.; Yang, C.S.; Qi, X.M.

    2009-01-01

    The City of Xian, China, has been experiencing significant land subsidence and ground fissure activity since the 1960s, which has brought various severe geohazards including damage to buildings, bridges and other facilities. Monitoring of land subsidence and ground fissure activity can provide useful information for assessing the extent of, and mitigating, such geohazards. In order to achieve robust Synthetic Aperture Radar Interferometry (InSAR) results, six interferometric pairs of Envisat ASAR data covering 2005–2006 are first collected to analyze the InSAR processing errors, such as temporal and spatial decorrelation error, external DEM error, atmospheric error and unwrapping error. Then the annual subsidence rate during 2005–2006 is calculated by weighted averaging of two pairs of D-InSAR results with similar time spans. Lastly, GPS measurements are applied to calibrate the InSAR results, and centimeter precision is achieved. For ground fissure monitoring, five InSAR cross-sections are designed to demonstrate the relative subsidence difference across ground fissures. In conclusion, the final InSAR subsidence map for 2005–2006 shows four large subsidence zones in Xian hi-tech zones in the western, eastern and southern suburbs of Xian City, among which two subsidence cones are newly detected, and two ground fissures are deduced to have extended westward in the Yuhuazhai subsidence cone. This study shows that the land subsidence and ground fissures are highly correlated spatially and temporally, and that both are correlated with hi-tech zone construction in Xian during 2005–2006.

  6. JPEG2000-coded image error concealment exploiting convex sets projections.

    PubMed

    Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio

    2005-04-01

    Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of error is the most annoying but can be concealed by exploiting the signal's spatial correlation, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the latter is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged by proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It was observed that uniform LP filtering introduced some undesired side effects that offset the advantages. This problem has been overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrate the efficiency of the proposed approach.
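
    A minimal sketch of the projections-onto-convex-sets iteration described above, assuming PyWavelets and SciPy are available. It uses a fixed uniform low-pass filter rather than the paper's adaptive, edge-map-driven mask, and treats the set of correctly decoded wavelet coefficients as known.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def pocs_conceal(coeff_arr, known_mask, slices, wavelet="db2", iters=20):
    """Conceal damaged wavelet coefficients by alternating two projections:
    (1) low-pass filtering in the spatial domain (smoothness constraint), and
    (2) restoring the coefficients known to have been decoded correctly.

    coeff_arr  : packed coefficient array as decoded (damaged entries may be
                 zeros or garbage); known_mask marks the correct entries;
    slices     : layout returned by pywt.coeffs_to_array for this decomposition.
    """
    level = len(slices) - 1
    arr = coeff_arr.copy()
    for _ in range(iters):
        img = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                            wavelet)
        img = uniform_filter(img, size=3)                 # projection 1
        arr, _ = pywt.coeffs_to_array(pywt.wavedec2(img, wavelet, level=level))
        arr[known_mask] = coeff_arr[known_mask]           # projection 2
    return pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                         wavelet)

# Toy usage: damage 5% of the coefficients of a smooth 256 x 256 image.
rng = np.random.default_rng(0)
image = np.cumsum(rng.standard_normal((256, 256)), axis=1)
arr0, slices = pywt.coeffs_to_array(pywt.wavedec2(image, "db2", level=3))
known = rng.random(arr0.shape) > 0.05
received = np.where(known, arr0, 0.0)
restored = pocs_conceal(received, known, slices)
```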

  7. Impact of spatially correlated pore-scale heterogeneity on drying porous media

    NASA Astrophysics Data System (ADS)

    Borgman, Oshri; Fantinel, Paolo; Lühder, Wieland; Goehring, Lucas; Holtzman, Ran

    2017-07-01

    We study the effect of spatially-correlated heterogeneity on isothermal drying of porous media. We combine a minimal pore-scale model with microfluidic experiments with the same pore geometry. Our simulated drying behavior compares favorably with experiments, considering the large sensitivity of the emergent behavior to the uncertainty associated with even small manufacturing errors. We show that increasing the correlation length in particle sizes promotes preferential drying of clusters of large pores, prolonging liquid connectivity and surface wetness and thus higher drying rates for longer periods. Our findings improve our quantitative understanding of how pore-scale heterogeneity impacts drying, which plays a role in a wide range of processes ranging from fuel cells to curing of paints and cements to global budgets of energy, water and solutes in soils.

  8. The potential of 2D Kalman filtering for soil moisture data assimilation

    USDA-ARS?s Scientific Manuscript database

    We examine the potential for parameterizing a two-dimensional (2D) land data assimilation system using spatial error auto-correlation statistics gleaned from a triple collocation analysis and the triplet of: (1) active microwave-, (2) passive microwave- and (3) land surface model-based surface soil ...

  9. Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction

    NASA Technical Reports Server (NTRS)

    Kolodzijczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.

    2013-01-01

    The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches where applying systematic error correction algorithms to the pixel time series, rather than the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré pattern biases, greater sensitivity to radiation-induced pixel sensitivity drops (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing the stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients derived in the fit of pixel time series to the CBV as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series which is correlated with the CBV, as well as relative pixel gain, proper motion and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.

  10. Error Estimation of An Ensemble Statistical Seasonal Precipitation Prediction Model

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Gui-Long

    2001-01-01

    This NASA Technical Memorandum describes an optimal ensemble canonical correlation forecasting model for seasonal precipitation. Each individual forecast is based on canonical correlation analysis (CCA) in spectral spaces whose bases are empirical orthogonal functions (EOFs). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto the EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. Since the new CCA scheme is derived for continuous fields of predictor and predictand, an area factor is automatically included. Thus our model is an improvement of the spectral CCA scheme of Barnett and Preisendorfer. The improvements include (1) the use of the area factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to seasonal forecasting of the United States (US) precipitation field. The predictor is the sea surface temperature (SST). The US Climate Prediction Center's reconstructed SST is used as the predictor's historical data. The US National Center for Environmental Prediction's optimally interpolated precipitation (1951-2000) is used as the predictand's historical data. Our forecast experiments show that the new ensemble canonical correlation scheme renders reasonable forecasting skill. For example, when using September-October-November SST to predict the following December-January-February precipitation, the spatial pattern correlation between the observed and predicted fields is positive in 46 of the 50 years of experiments. The positive correlations are close to or greater than 0.4 in 29 years, which indicates excellent performance of the forecasting model. The forecasting skill can be further enhanced when several predictors are used.
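
    The role of the mean square error in the ensemble weighting can be sketched as follows; weighting each member by the inverse of its estimated MSE is a simplification (it ignores covariances between member errors) and is not the memorandum's exact derivation.

```python
import numpy as np

def inverse_mse_ensemble(forecasts, mse):
    """Combine member forecasts with weights proportional to 1/MSE.

    forecasts: array (n_members, n_points); mse: array (n_members,).
    A simplification that ignores covariances between member errors.
    """
    w = 1.0 / np.asarray(mse, dtype=float)
    w /= w.sum()
    return w @ forecasts

# Hypothetical example: three seasonal-precipitation forecasts with
# estimated mean square errors of 1.0, 2.0 and 4.0.
members = np.array([[1.2, 0.3, -0.5],
                    [0.8, 0.1, -0.2],
                    [1.5, 0.6, -0.9]])
print(inverse_mse_ensemble(members, [1.0, 2.0, 4.0]))
```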

  11. Multiple window spatial registration error of a gamma camera: 133Ba point source as a replacement of the NEMA procedure.

    PubMed

    Bergmann, Helmar; Minear, Gregory; Raith, Maria; Schaffarich, Peter M

    2008-12-09

    The accuracy of multiple window spatial registration characterises the performance of a gamma camera for dual isotope imaging. In the present study we investigate an alternative method to the standard NEMA procedure for measuring this performance parameter. A long-lived 133Ba point source with gamma energies close to those of 67Ga and a single-bore lead collimator were used to measure the multiple window spatial registration error. Calculation of the positions of the point source in the images used the NEMA algorithm. The results were validated against the values obtained by the standard NEMA procedure, which uses a collimated liquid 67Ga source. Of the source-collimator configurations under investigation, an optimum collimator geometry, consisting of a 5 mm thick lead disk with a diameter of 46 mm and a 5 mm central bore, was selected. The multiple window spatial registration errors obtained by the 133Ba method showed excellent reproducibility (standard deviation < 0.07 mm). The values were compared with the results from the NEMA procedure obtained at the same locations and showed small differences, with a correlation coefficient of 0.51 (p < 0.05). In addition, the 133Ba point source method proved to be much easier to use. A Bland-Altman analysis showed that the 133Ba and 67Ga methods can be used interchangeably. The 133Ba point source method measures the multiple window spatial registration error with essentially the same accuracy as the NEMA-recommended procedure, but is easier and safer to use and has the potential to replace the current standard procedure.

  12. Generalized Bootstrap Method for Assessment of Uncertainty in Semivariogram Inference

    USGS Publications Warehouse

    Olea, R.A.; Pardo-Iguzquiza, E.

    2011-01-01

    The semivariogram and its related function, the covariance, play a central role in classical geostatistics for modeling the average continuity of spatially correlated attributes. Whereas all methods are formulated in terms of the true semivariogram, in practice what can be used are estimated semivariograms and models based on samples. A generalized form of the bootstrap method to properly model spatially correlated data is used to advance knowledge about the reliability of empirical semivariograms and semivariogram models based on a single sample. Among several methods available to generate spatially correlated resamples, we selected a method based on the LU decomposition and used several examples to illustrate the approach. The first is a synthetic, isotropic, exhaustive sample following a normal distribution; the second is also synthetic but follows a non-Gaussian random field; and the third is an empirical sample consisting of actual raingauge measurements. Results show wider confidence intervals than those found previously by others with inadequate application of the bootstrap. Also, even for the Gaussian example, distributions for estimated semivariogram values and model parameters are positively skewed. In this sense, bootstrap percentile confidence intervals, which are not centered around the empirical semivariogram and do not require distributional assumptions for their construction, provide an achieved coverage similar to the nominal coverage. The latter cannot be achieved by symmetrical confidence intervals based on the standard error, regardless of whether the standard error is estimated from a parametric equation or from the bootstrap. © 2010 International Association for Mathematical Geosciences.
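
    A hedged sketch of the LU-decomposition simulation step at the heart of the generalized bootstrap: draw resamples whose spatial correlation honors a fitted covariance model. The exponential model and its parameters are illustrative, and the full procedure in the paper involves additional steps (for example, transformations of the observed data) not shown here.

```python
import numpy as np

def correlated_resamples(coords, n_resamples, sill=1.0, corr_range=30.0,
                         nugget=1e-6, seed=0):
    """Gaussian resamples whose spatial correlation follows an (assumed)
    exponential covariance model, via the LU/Cholesky approach:
    C = L L^T, z = L e with e ~ N(0, I)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = sill * np.exp(-d / corr_range) + nugget * np.eye(len(coords))
    L = np.linalg.cholesky(C)
    e = np.random.default_rng(seed).standard_normal((len(coords), n_resamples))
    return (L @ e).T                      # shape (n_resamples, n_locations)

# 200 correlated resamples at 50 sample locations; each resample can feed an
# empirical semivariogram estimate to build bootstrap confidence intervals.
coords = np.random.default_rng(1).uniform(0, 100, size=(50, 2))
resamples = correlated_resamples(coords, 200)
print(resamples.shape)  # (200, 50)
```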

  13. Hindcasting of decadal‐timescale estuarine bathymetric change with a tidal‐timescale model

    USGS Publications Warehouse

    Ganju, Neil K.; Schoellhamer, David H.; Jaffe, Bruce E.

    2009-01-01

    Hindcasting decadal-timescale bathymetric change in estuaries is prone to error due to limited data for initial conditions, boundary forcing, and calibration; computational limitations further hinder efforts. We developed and calibrated a tidal-timescale model to bathymetric change in Suisun Bay, California, over the 1867–1887 period. A general, multiple-timescale calibration ensured robustness over all timescales; two input reduction methods, the morphological hydrograph and the morphological acceleration factor, were applied at the decadal timescale. The model was calibrated to net bathymetric change in the entire basin; average error for bathymetric change over individual depth ranges was 37%. On a model cell-by-cell basis, performance for spatial amplitude correlation was poor over the majority of the domain, though spatial phase correlation was better, with 61% of the domain correctly indicated as erosional or depositional. Poor agreement was likely caused by the specification of initial bed composition, which was unknown during the 1867–1887 period. Cross-sectional bathymetric change between channels and flats, driven primarily by wind wave resuspension, was modeled with higher skill than longitudinal change, which is driven in part by gravitational circulation. The accelerated response of depth may have prevented gravitational circulation from being represented properly. As performance criteria became more stringent in a spatial sense, the error of the model increased. While these methods are useful for estimating basin-scale sedimentation changes, they may not be suitable for predicting specific locations of erosion or deposition. They do, however, provide a foundation for realistic estuarine geomorphic modeling applications.

  14. Integration of imagery and cartographic data through a common map base

    NASA Technical Reports Server (NTRS)

    Clark, J.

    1983-01-01

    Several disparate data types are integrated by using control points as the basis for spatially registering the data to a map base. The data are reprojected to match the coordinates of the reference UTM (Universal Transverse Mercator) map projection, as expressed in lines and samples. Control point selection is the most critical aspect of integrating the Thematic Mapper Simulator MSS imagery with the cartographic data. It is noted that control points chosen from the imagery are subject to error from mislocated points, either points that did not correlate well to the reference map or minor pixel offsets caused by interactive cursoring errors. Errors are also introduced in map control points when points are improperly located and digitized, leading to inaccurate latitude and longitude coordinates. Nonsystematic aircraft platform variations, such as yaw, pitch, and roll, affect the spatial fidelity of the imagery in comparison with the quadrangles. Features in adjacent flight paths do not always correspond properly owing to the systematic panorama effect and alteration of flightline direction, as well as platform variations.

  15. A heteroskedastic error covariance matrix estimator using a first-order conditional autoregressive Markov simulation for deriving asympotical efficient estimates from ecological sampled Anopheles arabiensis aquatic habitat covariates

    PubMed Central

    Jacob, Benjamin G; Griffith, Daniel A; Muturi, Ephantus J; Caamano, Erick X; Githure, John I; Novak, Robert J

    2009-01-01

    Background Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecological sampled Anopheles aquatic habitat covariates. A test for diagnostic checking of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods Field and remote-sampled data were collected during July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations and distributions, and to generate global autocorrelation statistics from the ecological sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's Indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e., negative binomial regression). The eigenfunction values from the spatial configuration matrices were then used to define expectations for prior distributions using a Markov chain Monte Carlo (MCMC) algorithm. A set of posterior means were defined in WinBUGS 1.4.3®. After the model had converged, samples from the conditional distributions were used to summarize the posterior distribution of the parameters. Thereafter, a spatial residual trend analysis was used to evaluate variance uncertainty propagation in the model using an autocovariance error matrix. Results By specifying coefficient estimates in a Bayesian framework, the covariate number of tillers was found to be a significant predictor, positively associated with An. arabiensis aquatic habitats. The spatial filter models accounted for approximately 19% redundant locational information in the ecological sampled An. arabiensis aquatic habitat data. In the residual error estimation model there was significant positive autocorrelation (i.e., clustering of habitats in geographic space) based on log-transformed larval/pupal data and the sampled covariate depth of habitat. Conclusion An autocorrelation error covariance matrix and a spatial filter analysis can prioritize mosquito control strategies by providing a computationally attractive and feasible description of variance uncertainty estimates for correctly identifying clusters of prolific An. arabiensis aquatic habitats based on larval/pupal productivity. PMID:19772590
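
    For readers unfamiliar with the global autocorrelation index used here, a minimal sketch of Moran's I on hypothetical sampled habitat values follows; the weights specification (inverse distance, row-standardized) and the data are illustrative and do not reproduce the SAS/GIS and eigenfunction spatial filtering workflow of the study.

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I: I = (n / S0) * (z' W z) / (z' z),
    with z the deviations from the mean and S0 the sum of all weights."""
    z = values - values.mean()
    s0 = W.sum()
    return len(values) / s0 * (z @ W @ z) / (z @ z)

# Hypothetical sampled habitats: coordinates and log-transformed larval counts
# with a spatial trend, so positive autocorrelation is expected.
rng = np.random.default_rng(7)
xy = rng.uniform(0, 10, size=(80, 2))
log_counts = 0.3 * xy[:, 0] + rng.standard_normal(80)

d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
W = np.zeros_like(d)
np.divide(1.0, d, out=W, where=d > 0)      # inverse-distance weights, zero diagonal
W /= W.sum(axis=1, keepdims=True)          # row-standardize
print("Moran's I:", round(morans_i(log_counts, W), 3))  # > 0 indicates clustering
```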

  16. Correlated Errors in the Surface Code

    NASA Astrophysics Data System (ADS)

    Lopez, Daniel; Mucciolo, E. R.; Novais, E.

    2012-02-01

    A milestone step in the development of quantum information technology would be the ability to design and operate a reliable quantum memory. The greatest obstacle to creating such a device has been decoherence due to the unavoidable interaction between the quantum system and its environment. Quantum error correction is therefore an essential ingredient of any quantum information device. A great deal of attention has been given to surface codes, since they have very good scaling properties. In this seminar, we discuss the time evolution of a qubit encoded in the logical basis of a surface code. The system is interacting with a bosonic environment at zero temperature. Our results show how detrimental spatial and temporal correlations can be to the efficiency of the code.

  17. Importance of spatial autocorrelation in modeling bird distributions at a continental scale

    USGS Publications Warehouse

    Bahn, V.; O'Connor, R.J.; Krohn, W.B.

    2006-01-01

    Spatial autocorrelation in species' distributions has been recognized as inflating the probability of a type I error in hypothesis tests, causing biases in variable selection, and violating the assumption of independence of error terms in models such as correlation or regression. However, it remains unclear whether these problems occur at all spatial resolutions and extents, and under which conditions spatially explicit modeling techniques are superior. Our goal was to determine whether spatial models were superior at large extents and across many different species. In addition, we investigated the importance of purely spatial effects in distribution patterns relative to the variation that could be explained through environmental conditions. We studied distribution patterns of 108 bird species in the conterminous United States using ten years of data from the Breeding Bird Survey. We compared the performance of spatially explicit regression models with non-spatial regression models using Akaike's information criterion. In addition, we partitioned the variance in species distributions into an environmental, a purely spatial and a shared component. The spatially explicit conditional autoregressive regression models strongly outperformed the ordinary least squares regression models. In addition, partialling out the spatial component underlying the species' distributions showed that an average of 17% of the explained variation could be attributed to purely spatial effects independent of the spatial autocorrelation induced by the underlying environmental variables. We concluded that location in the range and neighborhood play an important role in the distribution of species. Spatially explicit models are expected to yield better predictions, especially for mobile species such as birds, even in coarse-grained models with a large extent. © Ecography.

  18. Evaluation of the predicted error of the soil moisture retrieval from C-band SAR by comparison against modelled soil moisture estimates over Australia

    PubMed Central

    Doubková, Marcela; Van Dijk, Albert I.J.M.; Sabel, Daniel; Wagner, Wolfgang; Blöschl, Günter

    2012-01-01

    The Sentinel-1 will carry onboard a C-band radar instrument that will map the European continent once every four days and the global land surface at least once every twelve days, with a finest spatial resolution of 5 × 20 m. The high temporal sampling rate and operational configuration make Sentinel-1 of interest for operational soil moisture monitoring. Currently, updated soil moisture data are made available at 1 km spatial resolution as a demonstration service using Global Mode (GM) measurements from the Advanced Synthetic Aperture Radar (ASAR) onboard ENVISAT. The service demonstrates the potential of C-band observations to monitor variations in soil moisture. Importantly, a retrieval error estimate is also available; these estimates are needed to assimilate observations into models. The retrieval error is estimated by propagating sensor errors through the retrieval model. In this work, the existing ASAR GM retrieval error product is evaluated using independent top soil moisture estimates produced by the grid-based landscape hydrological model (AWRA-L) developed within the Australian Water Resources Assessment system (AWRA). The ASAR GM retrieval error estimate, an assumed prior AWRA-L error estimate and the variance in the respective datasets were used to spatially predict the root mean square error (RMSE) and the Pearson correlation coefficient R between the two datasets. These were compared with the RMSE and R calculated directly from the two datasets. The predicted and computed RMSE showed a very high level of agreement in spatial patterns as well as good quantitative agreement; the RMSE was predicted within an accuracy of 4% of saturated soil moisture over 89% of the Australian land mass. Predicted and calculated R maps corresponded within an accuracy of 10% over 61% of the continent. The strong correspondence between the predicted and calculated RMSE and R builds confidence in the retrieval error model and the derived ASAR GM error estimates. The ASAR GM and Sentinel-1 have the same basic physical measurement characteristics, and therefore a very similar retrieval error estimation method can be applied. Because of the expected improvements in radiometric resolution of the Sentinel-1 backscatter measurements, soil moisture estimation errors can be expected to be an order of magnitude smaller than those for ASAR GM. This opens the possibility of operationally available medium resolution soil moisture estimates with very well-specified errors that can be assimilated into hydrological or crop yield models, with potentially large benefits for land-atmosphere fluxes, crop growth, and water balance monitoring and modelling. PMID:23483015
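
    The error-propagation idea behind predicting RMSE and R can be sketched with the standard additive, independent error model; the numbers below are hypothetical and the study's exact formulation may differ.

```python
import numpy as np

def predicted_agreement(var_truth, err_sd_a, err_sd_b):
    """Predicted RMSE and Pearson R between two datasets that both observe
    the same true field with independent, zero-mean additive errors."""
    rmse = np.sqrt(err_sd_a**2 + err_sd_b**2)
    r = var_truth / np.sqrt((var_truth + err_sd_a**2) * (var_truth + err_sd_b**2))
    return rmse, r

# Illustrative numbers: truth variance 0.004 (m3/m3)^2, retrieval error
# standard deviation 0.06, model error standard deviation 0.03 (hypothetical).
rmse, r = predicted_agreement(0.004, 0.06, 0.03)
print(f"predicted RMSE = {rmse:.3f}, predicted R = {r:.2f}")
```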

  19. Assessment of averaging spatially correlated noise for 3-D radial imaging.

    PubMed

    Stobbe, Robert W; Beaulieu, Christian

    2011-07-01

    Any measurement of signal intensity obtained from an image will be corrupted by noise. If the measurement is from one voxel, an error bound associated with noise can be assigned if the standard deviation of noise in the image is known. If voxels are averaged together within a region of interest (ROI) and the image noise is uncorrelated, the error bound associated with noise will be reduced in proportion to the square root of the number of voxels in the ROI. However, when 3-D radial images are created, the image noise will be spatially correlated. In this paper, an equation is derived and verified with simulated noise for the computation of noise averaging when image noise is correlated, facilitating the assessment of noise characteristics for different 3-D radial imaging methodologies. It is already known that if the radial evolution of projections is altered such that constant sampling density is produced in k-space, the signal-to-noise ratio (SNR) inefficiency of standard radial imaging (SR) can effectively be eliminated (assuming a uniform transfer function is desired). However, it is shown in this paper that the low-frequency noise power reduction of SR will produce beneficial (anti-)correlation of noise and enhanced noise averaging characteristics. If an ROI contains only one voxel, a radial-evolution-altered uniform k-space sampling technique such as twisted projection imaging (TPI) will produce an error bound ~35% less with respect to noise than SR; however, for an ROI containing 16 voxels the SR methodology will facilitate an error bound ~20% less than TPI. If a filtering transfer function is desired, it is shown that designing the sampling density to create the filter shape has both SNR and noise correlation advantages over sampling k-space uniformly. In this context SR is also beneficial. Two sets of 48 images produced from a saline phantom with sodium MRI at 4.7 T are used to experimentally measure the noise averaging characteristics of radial imaging, and good agreement with theory is obtained.
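
    The core point about averaging correlated noise can be written down directly: the variance of an ROI mean is the sum of all entries of the voxel noise covariance matrix divided by N squared. A hedged numerical sketch (not the paper's k-space derivation) follows.

```python
import numpy as np

def roi_noise_sd(voxel_sd, corr):
    """Standard deviation of the mean over N voxels with common per-voxel
    noise SD and correlation matrix `corr`:
    var(mean) = (sigma^2 / N^2) * sum_ij corr_ij."""
    n = corr.shape[0]
    return voxel_sd * np.sqrt(corr.sum()) / n

n, sigma = 16, 1.0
print("uncorrelated:", roi_noise_sd(sigma, np.eye(n)))              # sigma / 4 = 0.25
print("positively correlated (r=0.2):",
      roi_noise_sd(sigma, np.full((n, n), 0.2) + 0.8 * np.eye(n)))  # worse than 0.25
print("anti-correlated (r=-0.02):",
      roi_noise_sd(sigma, np.full((n, n), -0.02) + 1.02 * np.eye(n)))  # better than 0.25
```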

  20. Improving the Non-Hydrostatic Numerical Dust Model by Integrating Soil Moisture and Greenness Vegetation Fraction Data with Different Spatiotemporal Resolutions.

    PubMed

    Yu, Manzhu; Yang, Chaowei

    2016-01-01

    Dust storms are devastating natural disasters that cost billions of dollars and many human lives every year. Using the Non-Hydrostatic Mesoscale Dust Model (NMM-dust), this research studies how different spatiotemporal resolutions of two input parameters (soil moisture and greenness vegetation fraction) impact the sensitivity and accuracy of a dust model. Experiments are conducted by simulating dust concentration during July 1-7, 2014, for the target area covering part of Arizona and California (31, 37, -118, -112), with a resolution of ~3 km. Using ground-based and satellite observations, this research validates the temporal evolution and spatial distribution of the dust storm output from NMM-dust, and quantifies model error using four evaluation metrics (mean bias error, root mean square error, correlation coefficient and fractional gross error). Results showed that the default configuration of NMM-dust (with a low spatiotemporal resolution of both input parameters) generates an overestimation of Aerosol Optical Depth (AOD). Although it is able to qualitatively reproduce the temporal trend of the dust event, the default configuration of NMM-dust cannot fully capture its actual spatial distribution. Adjusting the spatiotemporal resolution of the soil moisture and vegetation cover datasets showed that the model is sensitive to both parameters. Increasing the spatiotemporal resolution of soil moisture effectively reduces the model's overestimation of AOD, while increasing the spatiotemporal resolution of vegetation cover changes the spatial distribution of the reproduced dust storm. Adjusting both parameters enables NMM-dust to capture the spatial distribution of dust storms, as well as to reproduce more accurate dust concentrations.
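
    A small sketch of the four evaluation metrics named above, using a common definition of the fractional gross error (the paper's exact formulation may differ) and hypothetical paired AOD values.

```python
import numpy as np

def evaluation_metrics(model, obs):
    """MBE, RMSE, Pearson R and fractional gross error for paired series."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    mbe = np.mean(model - obs)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    r = np.corrcoef(model, obs)[0, 1]
    fge = 2.0 * np.mean(np.abs(model - obs) / (model + obs))   # common definition
    return mbe, rmse, r, fge

# Hypothetical hourly AOD pairs at one ground site.
obs = np.array([0.12, 0.30, 0.55, 0.42, 0.25])
mod = np.array([0.20, 0.42, 0.70, 0.50, 0.33])
print([round(v, 3) for v in evaluation_metrics(mod, obs)])
```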

  1. Space-Time Data Fusion

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Nguyen, Hai; Olsen, Edward; Cressie, Noel

    2011-01-01

    Space-time Data Fusion (STDF) is a methodology for combining heterogeneous remote sensing data to optimally estimate the true values of a geophysical field of interest, and to obtain uncertainties for those estimates. The input data sets may have different observing characteristics including different footprints, spatial resolutions and fields of view, orbit cycles, biases, and noise characteristics. Despite these differences, all observed data can be linked to the underlying field, and therefore to each other, by a statistical model. Differences in footprints and other geometric characteristics are accounted for by parameterizing pixel-level remote sensing observations as spatial integrals of true field values lying within pixel boundaries, plus measurement error. Both spatial and temporal correlations in the true field and in the observations are estimated and incorporated through the use of a space-time random effects (STRE) model. Once the model's parameters are estimated, we use it to derive expressions for optimal (minimum mean squared error and unbiased) estimates of the true field at any arbitrary location of interest, computed from the observations. Standard errors of these estimates are also produced, allowing confidence intervals to be constructed. The procedure is carried out on a fine spatial grid to approximate a continuous field. We demonstrate STDF by applying it to the problem of estimating CO2 concentration in the lower atmosphere using data from the Atmospheric Infrared Sounder (AIRS) and the Japanese Greenhouse Gases Observing Satellite (GOSAT) over one year for the continental US.

  2. Accounting for Limited Detection Efficiency and Localization Precision in Cluster Analysis in Single Molecule Localization Microscopy

    PubMed Central

    Shivanandan, Arun; Unnikrishnan, Jayakrishnan; Radenovic, Aleksandra

    2015-01-01

    Single Molecule Localization Microscopy techniques like PhotoActivated Localization Microscopy, with their sub-diffraction-limit spatial resolution, have been popularly used to characterize the spatial organization of membrane proteins by means of quantitative cluster analysis. However, such quantitative studies remain challenged by the techniques' inherent sources of error, such as a limited detection efficiency of less than 60%, due to incomplete photo-conversion, and a limited localization precision in the range of 10-30 nm that varies across the detected molecules, depending mainly on the number of photons collected from each. We provide analytical methods to estimate the effect of these errors on cluster analysis and to correct for them. These methods, based on Ripley's L(r) - r or the pair correlation function popularly used by the community, can facilitate breakthrough results in quantitative biology by providing a more accurate and precise quantification of protein spatial organization. PMID:25794150
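
    For orientation, the sketch below is a naive Ripley's L(r) - r estimator for a 2-D point pattern. It applies no edge correction and none of the detection-efficiency or localization-precision corrections that the paper develops, so it should be read as an assumption-laden illustration rather than the authors' corrected estimator.

    ```python
    import numpy as np

    def ripley_L_minus_r(points, radii, area):
        """Naive O(n^2) Ripley's L(r) - r for a 2-D point pattern in a region of
        known area; edge effects are ignored, which biases large r.  For complete
        spatial randomness, L(r) - r stays near zero."""
        points = np.asarray(points, dtype=float)
        n = len(points)
        d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
        np.fill_diagonal(d, np.inf)              # exclude self-pairs
        lam = n / area                           # intensity (points per unit area)
        out = []
        for r in radii:
            K = np.count_nonzero(d <= r) / (lam * n)   # Ripley's K(r)
            out.append(np.sqrt(K / np.pi) - r)
        return np.array(out)

    # Completely random pattern in a 1000 nm x 1000 nm region (illustrative)
    rng = np.random.default_rng(0)
    pts = rng.uniform(0.0, 1000.0, size=(200, 2))
    print(ripley_L_minus_r(pts, radii=[50.0, 100.0, 200.0], area=1.0e6))
    ```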

  3. Catching ghosts with a coarse net: use and abuse of spatial sampling data in detecting synchronization

    PubMed Central

    2017-01-01

    Synchronization of population dynamics in different habitats is a frequently observed phenomenon. A common mathematical tool to reveal synchronization is the (cross)correlation coefficient between time courses of values of the population size of a given species where the population size is evaluated from spatial sampling data. The corresponding sampling net or grid is often coarse, i.e. it does not resolve all details of the spatial configuration, and the evaluation error—i.e. the difference between the true value of the population size and its estimated value—can be considerable. We show that this estimation error can make the value of the correlation coefficient very inaccurate or even irrelevant. We consider several population models to show that the value of the correlation coefficient calculated on a coarse sampling grid rarely exceeds 0.5, even if the true value is close to 1, so that the synchronization is effectively lost. We also observe ‘ghost synchronization’ when the correlation coefficient calculated on a coarse sampling grid is close to 1 but in reality the dynamics are not correlated. Finally, we suggest a simple test to check the sampling grid coarseness and hence to distinguish between the true and artifactual values of the correlation coefficient. PMID:28202589
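
    The attenuation effect described above is easy to reproduce with a toy example (below): adding independent evaluation error to two perfectly synchronized series pulls the sample correlation well below 1. This is only a schematic illustration, not one of the population models used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Two perfectly synchronized "true" population time series
    true_a = 1000 + 200 * np.sin(np.linspace(0, 6 * np.pi, 100))
    true_b = true_a.copy()

    # Coarse-grid evaluation adds independent estimation error to each series
    sigma = 300.0
    est_a = true_a + rng.normal(0.0, sigma, true_a.size)
    est_b = true_b + rng.normal(0.0, sigma, true_b.size)

    print("true correlation:     ", np.corrcoef(true_a, true_b)[0, 1])  # exactly 1.0
    print("estimated correlation:", np.corrcoef(est_a, est_b)[0, 1])    # much lower
    ```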

  4. Comparison of cosmology and seabed acoustics measurements using statistical inference from maximum entropy

    NASA Astrophysics Data System (ADS)

    Knobles, David; Stotts, Steven; Sagers, Jason

    2012-03-01

    Why can one obtain from similar measurements a greater amount of information about cosmological parameters than about seabed parameters in ocean waveguides? The cosmological measurements are in the form of a power spectrum constructed from spatial correlations of temperature fluctuations within the microwave background radiation. The seabed acoustic measurements are in the form of spatial correlations along the length of a spatial aperture. This study explores the above question from the perspective of posterior probability distributions obtained from maximizing a relative entropy functional. Part of the answer is that the seabed in shallow ocean environments generally has large temporal and spatial inhomogeneities, whereas the early universe was a nearly homogeneous cosmological soup with small but important fluctuations. Acoustic propagation models used in shallow water acoustics generally do not capture spatial and temporal variability sufficiently well, which leads to model error dominating the statistical inference problem. This is not the case in cosmology. Further, the physics of the acoustic modes in cosmology is that of a standing wave with simple initial conditions, whereas for underwater acoustics it is a traveling wave in a strongly inhomogeneous bounded medium.

  5. Optimizing dynamic downscaling in one-way nesting using a regional ocean model

    NASA Astrophysics Data System (ADS)

    Pham, Van Sy; Hwang, Jin Hwan; Ku, Hyeyun

    2016-10-01

    Dynamical downscaling with nested regional oceanographic models has been demonstrated to be an effective approach both for operational regional-scale marine weather forecasting and for projections of future climate change and its impact on the ocean. However, when nesting procedures are carried out in dynamic downscaling from a larger-scale model or set of observations to a smaller scale, errors are unavoidable due to the differences in grid sizes and updating intervals. The present work assesses the impact of errors produced by nesting procedures on the downscaled results from Ocean Regional Circulation Models (ORCMs). Errors are identified and evaluated based on their sources and characteristics by employing the Big-Brother Experiment (BBE). The BBE uses the same model to produce both the nesting and nested simulations, so it addresses each error source separately (i.e., without combining the contributions of errors from different sources). Here, we focus on errors resulting from differences in spatial grids, updating intervals and domain sizes. After the BBE was run separately for diverse cases, a Taylor diagram was used to analyze the results and recommend an optimal combination of grid size, updating period and domain size. Finally, suggested setups for the downscaling were evaluated by examining the spatial correlations of variables and the relative magnitudes of variances between the nested model and the original data.
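
    A Taylor diagram summarizes three related statistics of a nested field against the Big-Brother reference. The sketch below computes them; the relation in the comment is the standard one (Taylor, 2001), and the random fields are purely illustrative.

    ```python
    import numpy as np

    def taylor_statistics(nested, reference):
        """Standard deviations, correlation, and centred RMS difference between a
        nested (downscaled) field and the Big-Brother reference field."""
        x = np.asarray(nested, dtype=float).ravel()
        r = np.asarray(reference, dtype=float).ravel()
        sx, sr = x.std(), r.std()
        corr = np.corrcoef(x, r)[0, 1]
        crmsd = np.sqrt(np.mean(((x - x.mean()) - (r - r.mean())) ** 2))
        # Taylor (2001): crmsd**2 == sx**2 + sr**2 - 2*sx*sr*corr (up to rounding)
        return {"std_nested": sx, "std_ref": sr, "corr": corr, "cRMSD": crmsd}

    rng = np.random.default_rng(0)
    ref = rng.normal(size=(50, 50))                  # stand-in for the Big-Brother field
    nested = 0.8 * ref + 0.2 * rng.normal(size=(50, 50))
    print(taylor_statistics(nested, ref))
    ```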

  6. Linear-time general decoding algorithm for the surface code

    NASA Astrophysics Data System (ADS)

    Darmawan, Andrew S.; Poulin, David

    2018-05-01

    A quantum error correcting protocol can be substantially improved by taking into account features of the physical noise process. We present an efficient decoder for the surface code which can account for general noise features, including coherences and correlations. We demonstrate that the decoder significantly outperforms the conventional matching algorithm on a variety of noise models, including non-Pauli noise and spatially correlated noise. The algorithm is based on an approximate calculation of the logical channel using a tensor-network description of the noisy state.

  7. Do hospitals respond to rivals' quality and efficiency? A spatial panel econometric analysis.

    PubMed

    Longo, Francesco; Siciliani, Luigi; Gravelle, Hugh; Santos, Rita

    2017-09-01

    We investigate whether hospitals in the English National Health Service change their quality or efficiency in response to changes in quality or efficiency of neighbouring hospitals. We first provide a theoretical model that predicts that a hospital will not respond to changes in the efficiency of its rivals but may change its quality or efficiency in response to changes in the quality of rivals, though the direction of the response is ambiguous. We use data on eight quality measures (including mortality, emergency readmissions, patient reported outcome, and patient satisfaction) and six efficiency measures (including bed occupancy, cancelled operations, and costs) for public hospitals between 2010/11 and 2013/14 to estimate both spatial cross-sectional and spatial fixed- and random-effects panel data models. We find that although quality and efficiency measures are unconditionally spatially correlated, the spatial regression models suggest that a hospital's quality or efficiency does not respond to its rivals' quality or efficiency, except for a hospital's overall mortality that is positively associated with that of its rivals. The results are robust to allowing for spatially correlated covariates and errors and to instrumenting rivals' quality and efficiency. Copyright © 2017 John Wiley & Sons, Ltd.

  8. Spatial regression test for ensuring temperature data quality in southern Spain

    NASA Astrophysics Data System (ADS)

    Estévez, J.; Gavilán, P.; García-Marín, A. P.

    2018-01-01

    Quality assurance of meteorological data is crucial for ensuring the reliability of applications and models that use such data as input variables, especially in the field of environmental sciences. Spatial validation of meteorological data is based on the application of quality control procedures using data from neighbouring stations to assess the validity of data from a candidate station (the station of interest). These kinds of tests, which are referred to in the literature as spatial consistency tests, take data from neighbouring stations in order to estimate the corresponding measurement at the candidate station. These estimations can be made by weighting values according to the distance between the stations or to the coefficient of correlation, among other methods. The test applied in this study relies on statistical decision-making and uses a weighting based on the standard error of the estimate. This paper summarizes the results of the application of this test to maximum, minimum and mean temperature data from the Agroclimatic Information Network of Andalusia (southern Spain). This quality control procedure includes a decision based on a factor f, the fraction of potential outliers for each station across the region. Using GIS techniques, the geographic distribution of the detected errors has also been analysed. Finally, the performance of the test was assessed by evaluating its effectiveness in detecting known errors.
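
    A hedged sketch of one common formulation of such a spatial consistency test is given below: each neighbour predicts the candidate value through a simple linear regression fitted on a shared historical period, predictions are combined with weights inversely proportional to the squared standard error of estimate, and the observation is flagged if it falls outside k_se pooled standard errors. The exact weighting and confidence-band formula used in the study may differ, and the factor f mentioned in the abstract (fraction of potential outliers) is a separate tuning element not shown here.

    ```python
    import numpy as np

    def spatial_regression_test(candidate_value, neighbours, k_se=3.0):
        """Flag a candidate observation if it disagrees with a weighted estimate
        built from neighbouring stations.  `neighbours` is a list of tuples
        (history_neighbour, history_candidate, value_today); each neighbour
        contributes a linear-regression estimate weighted by the inverse squared
        standard error of estimate (one common formulation, assumed here)."""
        estimates, weights = [], []
        for hist_n, hist_c, value_today in neighbours:
            hist_n = np.asarray(hist_n, dtype=float)
            hist_c = np.asarray(hist_c, dtype=float)
            slope, intercept = np.polyfit(hist_n, hist_c, 1)       # candidate ~ neighbour
            resid = hist_c - (slope * hist_n + intercept)
            se = np.sqrt(np.sum(resid ** 2) / (len(hist_c) - 2))   # standard error of estimate
            estimates.append(slope * value_today + intercept)
            weights.append(1.0 / se ** 2)
        estimates, weights = np.array(estimates), np.array(weights)
        combined = np.sum(weights * estimates) / np.sum(weights)
        s_pooled = np.sqrt(len(neighbours) / np.sum(weights))      # pooled standard error
        flagged = abs(candidate_value - combined) > k_se * s_pooled
        return combined, s_pooled, flagged
    ```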

  9. Quantifying and correcting motion artifacts in MRI

    NASA Astrophysics Data System (ADS)

    Bones, Philip J.; Maclaren, Julian R.; Millane, Rick P.; Watts, Richard

    2006-08-01

    Patient motion during magnetic resonance imaging (MRI) can produce significant artifacts in a reconstructed image. Since measurements are made in the spatial frequency domain ('k-space'), rigid-body translational motion results in phase errors in the data samples while rotation causes location errors. A method is presented to detect and correct these errors via a modified sampling strategy, thereby achieving more accurate image reconstruction. The strategy involves sampling vertical and horizontal strips alternately in k-space and employs phase correlation within the overlapping segments to estimate translational motion. An extension, also based on correlation, is employed to estimate rotational motion. Results from simulations with computer-generated phantoms suggest that the algorithm is robust up to realistic noise levels. The work is being extended to physical phantoms. Provided that a reference image is available and the object is of limited extent, it is shown that a measure related to the amount of energy outside the support can be used to objectively compare the severity of motion-induced artifacts.
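
    The translational part of the strategy rests on phase correlation between overlapping segments. The sketch below shows the core of phase correlation for an integer-pixel shift between two images; it omits the strip-wise k-space sampling and the rotation estimation described in the abstract.

    ```python
    import numpy as np

    def estimate_translation(img_ref, img_shifted):
        """Integer-pixel translation of img_shifted relative to img_ref via phase
        correlation: the phase of the cross-power spectrum inverse-transforms to a
        peak at the shift."""
        F_ref = np.fft.fft2(img_ref)
        F_shift = np.fft.fft2(img_shifted)
        cross_power = F_shift * np.conj(F_ref)
        cross_power /= np.abs(cross_power) + 1e-12      # keep phase information only
        corr = np.fft.ifft2(cross_power).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # wrap shifts larger than half the image size to negative values
        if dy > img_ref.shape[0] // 2:
            dy -= img_ref.shape[0]
        if dx > img_ref.shape[1] // 2:
            dx -= img_ref.shape[1]
        return dy, dx

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    shifted = np.roll(np.roll(img, 5, axis=0), -3, axis=1)
    print(estimate_translation(img, shifted))            # approximately (5, -3)
    ```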

  10. Classification of radiological errors in chest radiographs, using support vector machine on the spatial frequency features of false-negative and false-positive regions

    NASA Astrophysics Data System (ADS)

    Pietrzyk, Mariusz W.; Donovan, Tim; Brennan, Patrick C.; Dix, Alan; Manning, David J.

    2011-03-01

    Aim: To optimize automated classification of radiological errors during lung nodule detection from chest radiographs (CxR) using a support vector machine (SVM) run on the spatial frequency features extracted from the local background of selected regions. Background: The majority of unreported pulmonary nodules are visually detected but not recognized, as shown by the prolonged dwell times at false-negative regions. Similarly, locations falsely reported as nodules capture substantial amounts of foveal attention. Spatial frequency properties of selected local backgrounds are correlated with human observer responses, either in terms of accuracy in indicating abnormality position or in the precision of visually sampling the medical images. Methods: Seven radiologists participated in eye-tracking experiments conducted under conditions of pulmonary nodule detection from a set of 20 postero-anterior CxR. The locations with the longest dwell times were identified and subjected to spatial frequency (SF) analysis. The image-based features of selected ROIs were extracted with an un-decimated Wavelet Packet Transform. An analysis of variance was run to select SF features, and an SVM schema was implemented to classify false-negative and false-positive regions from all ROIs. Results: A relatively high overall accuracy was obtained for each individually developed Wavelet-SVM algorithm, with over 90% average correct recognition of errors from all prolonged dwell locations. Conclusion: The preliminary results show that combined eye-tracking and image-based features can be used for automated detection of radiological errors with an SVM. The work is still in progress and not all analytical procedures have been completed, which might have an effect on the specificity of the algorithm.

  11. Quasi-Likelihood Techniques in a Logistic Regression Equation for Identifying Simulium damnosum s.l. Larval Habitats Intra-cluster Covariates in Togo.

    PubMed

    Jacob, Benjamin G; Novak, Robert J; Toe, Laurent; Sanfo, Moussa S; Afriyie, Abena N; Ibrahim, Mohammed A; Griffith, Daniel A; Unnasch, Thomas R

    2012-01-01

    The standard methods for regression analyses of clustered riverine larval habitat data of Simulium damnosum s.l., a major black-fly vector of onchocerciasis, postulate models relating observational ecologically sampled parameter estimators to prolific habitats without accounting for residual intra-cluster error correlation effects. Generally, this correlation comes from two sources: (1) the design of the random effects and their assumed covariance from the multiple levels within the regression model; and (2) the correlation structure of the residuals. Unfortunately, inconspicuous errors in residual intra-cluster correlation estimates can overstate the precision of forecasted S. damnosum s.l. riverine larval habitat explanatory attributes regardless of how the residuals are treated (e.g., independent, autoregressive, Toeplitz, etc.). In this research, the geographical locations of multiple riverine-based S. damnosum s.l. larval ecosystem habitats sampled from two pre-established epidemiological sites in Togo were identified and recorded from July 2009 to June 2010. Initially the data were aggregated in PROC GENMOD. An agglomerative hierarchical residual cluster-based analysis was then performed. The sampled clustered study site data were then analyzed for statistical correlations using monthly biting rates (MBR). Euclidean distance measurements and terrain-related geomorphological statistics were then generated in ArcGIS. A digital overlay was then performed, also in ArcGIS, using the georeferenced ground coordinates of high- and low-density clusters stratified by annual biting rates (ABR). These data were overlain onto multitemporal sub-meter pixel resolution satellite data (i.e., QuickBird 0.61 m wavebands). Orthogonal spatial filter eigenvectors were then generated in SAS/GIS. Univariate and non-linear regression-based models (i.e., logistic, Poisson and negative binomial) were also employed to determine probability distributions and to identify statistically significant parameter estimators from the sampled data. Thereafter, Durbin-Watson test statistics were used to test the null hypothesis that the regression residuals were not autocorrelated against the alternative that the residuals followed an autoregressive process, in PROC AUTOREG. Bayesian uncertainty matrices were also constructed employing normal priors for each of the sampled estimators in PROC MCMC. The residuals revealed both spatially structured and unstructured error effects in the high- and low-ABR-stratified clusters. The analyses also revealed that the estimators levels of turbidity and presence of rocks were statistically significant for the high-ABR-stratified clusters, while the estimators distance between habitats and floating vegetation were important for the low-ABR-stratified cluster. Varying- and constant-coefficient regression models, ABR-stratified GIS-generated clusters, sub-meter resolution satellite imagery, a robust residual intra-cluster diagnostic test, MBR-based histograms, eigendecomposition spatial filter algorithms and Bayesian matrices can enable accurate autoregressive estimation of latent uncertainty effects and other residual error probabilities (i.e., heteroskedasticity) for testing correlations between georeferenced S. damnosum s.l. riverine larval habitat estimators. The asymptotic distribution of the resulting residual-adjusted intra-cluster predictor error autocovariate coefficients can thereafter be established, while estimates of the asymptotic variance can lead to the construction of approximate confidence intervals for accurately targeting productive S. damnosum s.l. habitats based on spatiotemporal field-sampled count data.
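
    Among the many steps above, the Durbin-Watson check is the most self-contained. The sketch below computes the statistic for a residual series and shows how an AR(1) process drives it well below 2; the full workflow in the record relies on SAS AUTOREG rather than this hand-rolled version.

    ```python
    import numpy as np

    def durbin_watson(residuals):
        """Durbin-Watson statistic for residuals ordered along the sampling
        sequence: values near 2 mean no first-order autocorrelation, values well
        below 2 mean positive autocorrelation."""
        e = np.asarray(residuals, dtype=float)
        return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

    # AR(1)-like residuals drive the statistic well below 2; white noise sits near 2
    rng = np.random.default_rng(0)
    e = np.zeros(200)
    for t in range(1, 200):
        e[t] = 0.8 * e[t - 1] + rng.normal()
    print(durbin_watson(e), durbin_watson(rng.normal(size=200)))
    ```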

  12. Local indicators of geocoding accuracy (LIGA): theory and application

    PubMed Central

    Jacquez, Geoffrey M; Rommel, Robert

    2009-01-01

    Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot. Herein lies a paradox for spatial analysis: For a given level of positional error increasing sample density to more accurately follow the underlying population distribution increases perturbability and introduces error into the spatial weights matrix. In some studies positional error may not impact the statistical results, and in others it might invalidate the results. We therefore must understand the relationships between positional accuracy and the perturbability of the spatial weights in order to have confidence in a study's results. PMID:19863795

  13. Fine-scale landscape genetics of the American badger (Taxidea taxus): disentangling landscape effects and sampling artifacts in a poorly understood species

    PubMed Central

    Kierepka, E M; Latch, E K

    2016-01-01

    Landscape genetics is a powerful tool for conservation because it identifies landscape features that are important for maintaining genetic connectivity between populations within heterogeneous landscapes. However, using landscape genetics in poorly understood species presents a number of challenges, namely, limited life history information for the focal population and spatially biased sampling. Both obstacles can reduce power in statistics, particularly in individual-based studies. In this study, we genotyped 233 American badgers in Wisconsin at 12 microsatellite loci to identify alternative statistical approaches that can be applied to poorly understood species in an individual-based framework. Badgers are protected in Wisconsin owing to an overall lack in life history information, so our study utilized partial redundancy analysis (RDA) and spatially lagged regressions to quantify how three landscape factors (Wisconsin River, Ecoregions and land cover) impacted gene flow. We also performed simulations to quantify errors created by spatially biased sampling. Statistical analyses first found that geographic distance was an important influence on gene flow, mainly driven by fine-scale positive spatial autocorrelations. After controlling for geographic distance, both RDA and regressions found that Wisconsin River and Agriculture were correlated with genetic differentiation. However, only Agriculture had an acceptable type I error rate (3–5%) to be considered biologically relevant. Collectively, this study highlights the benefits of combining robust statistics and error assessment via simulations and provides a method for hypothesis testing in individual-based landscape genetics. PMID:26243136

  14. Exploring the Relationship Between Students' Visual Spatial Abilities and Comprehension in STEM Fields

    NASA Astrophysics Data System (ADS)

    Cid, Ximena; Lopez, Ramon

    2011-10-01

    It is well known that students have difficulties with concepts in physics and space science, as well as in other STEM fields. Some of these difficulties may be rooted in student conceptual errors, whereas others may arise from issues with visual cognition and spatial intelligence. It has also been suggested that some aspects of high attrition rates from STEM fields can be attributed to students' visual spatial abilities. We will be presenting data collected from introductory courses in the College of Engineering, Department of Physics, Department of Chemistry, and the Department of Mathematics at the University of Texas at Arlington. These data examine the relationship between students' visual spatial abilities and comprehension of the subject matter. Where correlations are found to exist, visual spatial interventions can be implemented to reduce the attrition rates.

  15. Phase measurement error in summation of electron holography series.

    PubMed

    McLeod, Robert A; Bergen, Michael; Malac, Marek

    2014-06-01

    Off-axis electron holography is a method for the transmission electron microscope (TEM) that measures the electric and magnetic properties of a specimen. The electrostatic and magnetic potentials modulate the electron wavefront phase. The error in measurement of the phase therefore determines the smallest observable changes in electric and magnetic properties. Here we explore the summation of a hologram series to reduce the phase error and thereby improve the sensitivity of electron holography. Summation of hologram series requires independent registration and correction of image drift and phase wavefront drift, the consequences of which are discussed. Optimization of the electro-optical configuration of the TEM for the double biprism configuration is examined. An analytical model of image and phase drift, composed of a combination of linear drift and Brownian random-walk, is derived and experimentally verified. The accuracy of image registration via cross-correlation and phase registration is characterized by simulated hologram series. The model of series summation errors allows the optimization of phase error as a function of exposure time and fringe carrier frequency for a target spatial resolution. An experimental example of hologram series summation is provided on WS2 fullerenes. A metric is provided to measure the object phase error from experimental results and compared to analytical predictions. The ultimate experimental object root-mean-square phase error is 0.006 rad (2π/1050) at a spatial resolution less than 0.615 nm and a total exposure time of 900 s. The ultimate phase error in vacuum adjacent to the specimen is 0.0037 rad (2π/1700). The analytical prediction of phase error differs with the experimental metrics by +7% inside the object and -5% in the vacuum, indicating that the model can provide reliable quantitative predictions. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.

  16. Cross correlation calculations and neutron scattering analysis for a portable solid state neutron detection system

    NASA Astrophysics Data System (ADS)

    Saltos, Andrea

    In efforts to perform accurate dosimetry, Oakes et al. [Nucl. Instrum. Methods (2013)] introduced a new portable solid state neutron rem meter based on an adaptation of the Bonner sphere and the position-sensitive long counter. The system utilizes high thermal efficiency neutron detectors to generate a linear combination of measurement signals that are used to estimate the incident neutron spectra. The inverse problem of deducing dose from the counts in individual detector elements is addressed by applying a cross-correlation method which allows estimation of dose with average errors of less than 15%. In this work, the evaluation of the performance of this system was extended to take into account new correlation techniques and the neutron scattering contribution. To test the effectiveness of the correlations, the distance correlation, the Pearson product-moment correlation, and their weighted versions were computed between measured spatial detector responses obtained from nine different test spectra and the spatial responses of library functions generated by MCNPX. Results indicate that there is no advantage to using the distance correlation over the Pearson correlation, and that weighted versions of these correlations do not improve their performance in evaluating dose. Both correlations were shown to work well even at low integrated doses measured over short periods of time. To evaluate the contribution of room-return neutrons to the dosimeter response, MCNPX was used to simulate dosimeter responses for five isotropic neutron sources placed inside rectangular concrete rooms of different sizes. Results show that the contribution of scattered neutrons to the response of the dosimeter can be significant, so that for most cases the dose is overpredicted, with errors as large as 500%. A possible method to correct for the contribution of room-return neutrons is also assessed and can be used as a good initial estimate of how to approach the problem.
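
    The comparison between Pearson and distance correlation can be reproduced in a few lines. The sketch below implements both for a measured detector response against a library response; the numbers are made up and merely stand in for the MCNPX-generated library functions.

    ```python
    import numpy as np

    def pearson(x, y):
        return np.corrcoef(x, y)[0, 1]

    def distance_correlation(x, y):
        """Sample distance correlation of two 1-D vectors, computed from
        double-centred pairwise distance matrices."""
        x = np.asarray(x, dtype=float)[:, None]
        y = np.asarray(y, dtype=float)[:, None]
        a = np.abs(x - x.T)
        b = np.abs(y - y.T)
        A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
        B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
        dcov2 = (A * B).mean()
        return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

    # Made-up detector and library responses, for illustration only
    measured = np.array([12.0, 30.0, 55.0, 61.0, 40.0, 18.0])
    library = np.array([10.0, 28.0, 52.0, 65.0, 38.0, 20.0])
    print(pearson(measured, library), distance_correlation(measured, library))
    ```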

  17. Confronting weather and climate models with observational data from soil moisture networks over the United States

    PubMed Central

    Dirmeyer, Paul A.; Wu, Jiexia; Norton, Holly E.; Dorigo, Wouter A.; Quiring, Steven M.; Ford, Trenton W.; Santanello, Joseph A.; Bosilovich, Michael G.; Ek, Michael B.; Koster, Randal D.; Balsamo, Gianpaolo; Lawrence, David M.

    2018-01-01

    Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison. PMID:29645013
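
    As an illustration of the "memory" metric mentioned above, the sketch below estimates an e-folding timescale from lagged autocorrelations of a daily soil moisture series; the exact memory definition used in the study may differ (e.g., a fitted exponential decay), so treat this as an assumption.

    ```python
    import numpy as np

    def lagged_autocorrelation(x, max_lag):
        """Lag-k autocorrelations of a gap-free daily soil moisture anomaly series."""
        x = np.asarray(x, dtype=float)
        x = x - x.mean()
        denom = np.sum(x ** 2)
        return np.array([np.sum(x[:-k] * x[k:]) / denom for k in range(1, max_lag + 1)])

    def efolding_memory(x, max_lag=90):
        """Memory as the first lag (in days) at which autocorrelation drops below 1/e."""
        ac = lagged_autocorrelation(x, max_lag)
        below = np.where(ac < 1.0 / np.e)[0]
        return int(below[0]) + 1 if below.size else None

    # AR(1) toy series with coefficient 0.95: theoretical e-folding time ~ 19-20 days
    rng = np.random.default_rng(0)
    sm = np.zeros(365)
    for t in range(1, 365):
        sm[t] = 0.95 * sm[t - 1] + rng.normal()
    print(efolding_memory(sm))
    ```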

  18. Confronting Weather and Climate Models with Observational Data from Soil Moisture Networks over the United States

    NASA Technical Reports Server (NTRS)

    Dirmeyer, Paul A.; Wu, Jiexia; Norton, Holly E.; Dorigo, Wouter A.; Quiring, Steven M.; Ford, Trenton W.; Santanello, Joseph A., Jr.; Bosilovich, Michael G.; Ek, Michael B.; Koster, Randal Dean; Balsamo, Gianpaolo; Lawrence, David M.

    2016-01-01

    Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison.

  19. Confronting weather and climate models with observational data from soil moisture networks over the United States.

    PubMed

    Dirmeyer, Paul A; Wu, Jiexia; Norton, Holly E; Dorigo, Wouter A; Quiring, Steven M; Ford, Trenton W; Santanello, Joseph A; Bosilovich, Michael G; Ek, Michael B; Koster, Randal D; Balsamo, Gianpaolo; Lawrence, David M

    2016-04-01

    Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison.

  20. The influence of landscape characteristics and home-range size on the quantification of landscape-genetics relationships

    Treesearch

    Tabitha A. Graves; Tzeidle N. Wasserman; Milton Cezar Ribeiro; Erin L. Landguth; Stephen F. Spear; Niko Balkenhol; Colleen B. Higgins; Marie-Josee Fortin; Samuel A. Cushman; Lisette P. Waits

    2012-01-01

    A common approach used to estimate landscape resistance involves comparing correlations of ecological and genetic distances calculated among individuals of a species. However, the location of sampled individuals may contain some degree of spatial uncertainty due to the natural variation of animals moving through their home range or measurement error in plant or animal...

  1. Linear models for airborne-laser-scanning-based operational forest inventory with small field sample size and highly correlated LiDAR data

    USGS Publications Warehouse

    Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.

    2015-01-01

    Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%–15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model’s lack of fit.
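
    A minimal sketch of regularized regression through the SVD of the predictor matrix is shown below; it illustrates how damping small singular values controls multicollinearity and overfitting, but it is not the authors' full Bayesian model, and the data are synthetic.

    ```python
    import numpy as np

    def ridge_via_svd(X, y, alpha):
        """Ridge-regularized coefficients computed through the SVD of the centred
        predictor matrix: small singular values (the multicollinear directions)
        are damped by alpha."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        shrink = s / (s ** 2 + alpha)          # replaces 1/s of ordinary least squares
        return Vt.T @ (shrink * (U.T @ y))

    # Few plots, many highly correlated LiDAR-style predictors (all synthetic)
    rng = np.random.default_rng(0)
    n_plots, n_predictors = 50, 40
    base = rng.normal(size=(n_plots, 5))
    X = base @ rng.normal(size=(5, n_predictors)) + 0.01 * rng.normal(size=(n_plots, n_predictors))
    y = X[:, 0] + rng.normal(scale=0.1, size=n_plots)
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    beta = ridge_via_svd(Xc, yc, alpha=1.0)
    print(beta.shape)                          # (40,) coefficient vector
    ```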

  2. Patients with chronic insomnia have selective impairments in memory that are modulated by cortisol.

    PubMed

    Chen, Gui-Hai; Xia, Lan; Wang, Fang; Li, Xue-Wei; Jiao, Chuan-An

    2016-10-01

    Memory impairment is a frequent complaint in insomniacs; however, it is not consistently demonstrated. It is unknown whether memory impairment in insomniacs involves neuroendocrine dysfunction. The participants in this study were selected from the clinical setting and included 21 patients with chronic insomnia disorder (CID), 25 patients with insomnia and comorbid depressive disorder (CDD), and 20 control participants without insomnia. We evaluated spatial working and reference memory, object working and reference memory, and object recognition memory using the Nine Box Maze Test. We also evaluated serum neuroendocrine hormone levels. Compared to the controls, the CID patients made significantly more errors in spatial working and object recognition memory (p < .05), whereas the CDD patients performed poorly in all the assessed memory types (p < .05). In addition, the CID patients had higher levels (mean difference [95% CI]) of corticotrophin-releasing hormone, cortisol (31.98 [23.97, 39.98] μg/l), total triiodothyronine (667.58 [505.71, 829.45] μg/l), and total thyroxine (41.49 [33.23, 49.74] μg/l) (p < .05), and lower levels of thyrotropin-releasing hormone (-35.93 [-38.83, -33.02] ng/l), gonadotropin-releasing hormone (-4.50 [-5.02, -3.98] ng/l) (p < .05), and adrenocorticotropic hormone compared to the CDD patients. After controlling for confounding variables, the partial correlation analysis revealed that the levels of cortisol positively correlated with the errors in object working memory (r = .534, p = .033) and negatively correlated with the errors in object recognition memory (r = -.659, p = .006) in the CID patients. The results suggest that the CID patients had selective memory impairment, which may be mediated by increased cortisol levels. © 2016 Society for Psychophysiological Research.

  3. The effect of the dynamic wet troposphere on VLBI measurements

    NASA Technical Reports Server (NTRS)

    Treuhaft, R. N.; Lanyi, G. E.

    1986-01-01

    Calculations using a statistical model of water vapor fluctuations yield the effect of the dynamic wet troposphere on Very Long Baseline Interferometry (VLBI) measurements. The statistical model arises from two primary assumptions: (1) the spatial structure of refractivity fluctuations can be closely approximated by elementary (Kolmogorov) turbulence theory, and (2) temporal fluctuations are caused by spatial patterns which are moved over a site by the wind. The consequences of these assumptions are outlined for the VLBI delay and delay rate observables. For example, wet troposphere induced rms delays for Deep Space Network (DSN) VLBI at 20-deg elevation are about 3 cm of delay per observation, which is smaller, on the average, than other known error sources in the current DSN VLBI data set. At 20-deg elevation for 200-s time intervals, water vapor induces approximately 1.5 × 10^-13 s/s in the Allan standard deviation of interferometric delay, which is a measure of the delay rate observable error. In contrast to the delay error, the delay rate measurement error is dominated by water vapor fluctuations. Water vapor induced VLBI parameter errors and correlations are calculated. For the DSN, baseline length parameter errors due to water vapor fluctuations are in the range of 3 to 5 cm. The above physical assumptions also lead to a method for including the water vapor fluctuations in the parameter estimation procedure, which is used to extract baseline and source information from the VLBI observables.
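
    The Allan standard deviation quoted above can be estimated from a delay time series with second differences. The sketch below uses the standard estimator for time-error data; the sampling interval, noise level and averaging time are hypothetical and do not reproduce the study's turbulence-based derivation.

    ```python
    import numpy as np

    def allan_deviation_from_delay(delay, dt, m):
        """Allan deviation of a delay (time-error) series sampled every dt seconds,
        at averaging interval tau = m*dt, using second differences of the delay:
        sigma_y(tau) = sqrt( mean( (x[k+2m] - 2*x[k+m] + x[k])**2 ) / (2*tau**2) )."""
        x = np.asarray(delay, dtype=float)
        tau = m * dt
        d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
        return np.sqrt(np.mean(d2 ** 2) / (2.0 * tau ** 2))

    # White-noise delay with ~3 cm rms (expressed in seconds), sampled every 10 s,
    # evaluated at a 200 s interval; all numbers are illustrative only.
    rng = np.random.default_rng(0)
    delay = rng.normal(0.0, 0.03 / 3.0e8, size=5000)
    print(allan_deviation_from_delay(delay, dt=10.0, m=20))
    ```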

  4. Hybrid inversions of CO2 fluxes at regional scale applied to network design

    NASA Astrophysics Data System (ADS)

    Kountouris, Panagiotis; Gerbig, Christoph; Koch, Frank-Thomas

    2013-04-01

    Long-term observations from atmospheric greenhouse gas measuring stations, located at representative regions over the continent, improve our understanding of greenhouse gas sources and sinks. These mixing ratio measurements can be linked to surface fluxes by atmospheric transport inversions. Within the upcoming years new stations are to be deployed, which requires decision-making tools with respect to the location and the density of the network. We are developing a method to assess potential greenhouse gas observing networks in terms of their ability to recover specific target quantities. As target quantities we use CO2 fluxes aggregated to specific spatial and temporal scales. We introduce a high-resolution inverse modeling framework, which attempts to combine advantages of pixel-based inversions with those of a carbon cycle data assimilation system (CCDAS). The hybrid inversion system consists of the Lagrangian transport model STILT, the diagnostic biosphere model VPRM and a Bayesian inversion scheme. We aim to retrieve the spatiotemporal distribution of net ecosystem exchange (NEE) at a high spatial resolution (10 km x 10 km) by inverting for spatially and temporally varying scaling factors for gross ecosystem exchange (GEE) and respiration (R) rather than solving for the fluxes themselves. Thus the state space includes parameters controlling photosynthesis and respiration, but unlike in a CCDAS it allows for spatial and temporal variations, which can be expressed as NEE(x,y,t) = λ_G(x,y,t)·GEE(x,y,t) + λ_R(x,y,t)·R(x,y,t). We apply spatially and temporally correlated uncertainties by using error covariance matrices with non-zero off-diagonal elements. Synthetic experiments will test our system and select the optimal a priori error covariance by using different spatial and temporal correlation lengths in the a priori error statistics and comparing the optimized fluxes against the 'known truth'. As the 'known truth' we use independent fluxes generated from a different biosphere model (BIOME-BGC). Initially we perform single-station inversions for the Ochsenkopf tall tower located in Germany. Further expansion of the inversion framework to multiple stations and its application to network design will address the questions of how well a set of network stations can constrain a given target quantity, and whether there are objective criteria to select an optimal configuration for new stations that maximizes the uncertainty reduction.
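
    The spatially correlated prior uncertainties enter through off-diagonal covariance terms. Below is a minimal sketch of an exponentially decaying error covariance matrix of the kind such inversions typically use; the exponential form, the grid and the correlation length are assumptions for illustration, not values from the study.

    ```python
    import numpy as np

    def exponential_covariance(coords, sigma, corr_length):
        """Prior error covariance with exponentially decaying spatial correlation,
        C_ij = sigma_i * sigma_j * exp(-d_ij / L); the off-diagonal terms encode
        the spatially correlated prior uncertainties."""
        coords = np.asarray(coords, dtype=float)
        d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
        sig = np.full(len(coords), float(sigma)) if np.isscalar(sigma) else np.asarray(sigma, float)
        return np.outer(sig, sig) * np.exp(-d / corr_length)

    # 10 km cells, unit prior flux uncertainty, 200 km correlation length (all illustrative)
    xy = np.array([[x, y] for x in range(0, 100, 10) for y in range(0, 100, 10)], dtype=float)
    C = exponential_covariance(xy, sigma=1.0, corr_length=200.0)
    print(C.shape, round(C[0, 1], 3))
    ```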

  5. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks.

    PubMed

    Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong

    2015-08-07

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
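
    A stripped-down sketch of the PCA step with an error-bound check is given below (the clustering and cluster-head selection are omitted); the error bound is interpreted here as a mean-squared reconstruction error, which is an assumption since the record does not specify the exact criterion.

    ```python
    import numpy as np

    def pca_compress(cluster_data, error_bound):
        """Keep the fewest principal components of one cluster's readings
        (rows = time samples, columns = sensors) such that the mean squared
        reconstruction error stays below error_bound."""
        X = np.asarray(cluster_data, dtype=float)
        mean = X.mean(axis=0)
        Xc = X - mean
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        for k in range(1, s.size + 1):
            scores = U[:, :k] * s[:k]                   # compressed representation
            mse = np.mean((Xc - scores @ Vt[:k]) ** 2)  # reconstruction error
            if mse <= error_bound:
                return scores, Vt[:k], mean, mse        # transmit these instead of X
        return U * s, Vt, mean, 0.0                     # full-rank fallback

    # Eight strongly correlated sensors driven by one common signal (synthetic)
    rng = np.random.default_rng(0)
    common = rng.normal(size=(500, 1))
    data = common @ np.ones((1, 8)) + 0.05 * rng.normal(size=(500, 8))
    scores, components, mean, mse = pca_compress(data, error_bound=0.01)
    print(scores.shape, components.shape, round(mse, 4))
    ```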

  6. Spatial probabilistic pulsatility model for enhancing photoplethysmographic imaging systems

    NASA Astrophysics Data System (ADS)

    Amelard, Robert; Clausi, David A.; Wong, Alexander

    2016-11-01

    Photoplethysmographic imaging (PPGI) is a widefield noncontact biophotonic technology able to remotely monitor cardiovascular function over anatomical areas. Although spatial context can provide insight into physiologically relevant sampling locations, existing PPGI systems rely on coarse spatial averaging with no anatomical priors for assessing arterial pulsatility. Here, we developed a continuous probabilistic pulsatility model for importance-weighted blood pulse waveform extraction. Using a data-driven approach, the model was constructed using a 23 participant sample with a large demographic variability (11/12 female/male, age 11 to 60 years, BMI 16.4 to 35.1 kg·m^-2). Using time-synchronized ground-truth blood pulse waveforms, spatial correlation priors were computed and projected into a coaligned importance-weighted Cartesian space. A modified Parzen-Rosenblatt kernel density estimation method was used to compute the continuous resolution-agnostic probabilistic pulsatility model. The model identified locations that consistently exhibited pulsatility across the sample. Blood pulse waveform signals extracted with the model exhibited significantly stronger temporal correlation (W=35, p<0.01) and spectral SNR (W=31, p<0.01) compared to uniform spatial averaging. Heart rate estimation was in strong agreement with true heart rate [r^2=0.9619, error (μ,σ)=(0.52, 1.69) bpm].

  7. Inversion of multi-frequency electromagnetic induction data for 3D characterization of hydraulic conductivity

    USGS Publications Warehouse

    Brosten, Troy R.; Day-Lewis, Frederick D.; Schultz, Gregory M.; Curtis, Gary P.; Lane, John W.

    2011-01-01

    Electromagnetic induction (EMI) instruments provide rapid, noninvasive, and spatially dense data for characterization of soil and groundwater properties. Data from multi-frequency EMI tools can be inverted to provide quantitative electrical conductivity estimates as a function of depth. In this study, multi-frequency EMI data collected across an abandoned uranium mill site near Naturita, Colorado, USA, are inverted to produce vertical distribution of electrical conductivity (EC) across the site. The relation between measured apparent electrical conductivity (ECa) and hydraulic conductivity (K) is weak (correlation coefficient of 0.20), whereas the correlation between the depth dependent EC obtained from the inversions, and K is sufficiently strong to be used for hydrologic estimation (correlation coefficient of − 0.62). Depth-specific EC values were correlated with co-located K measurements to develop a site-specific ln(EC)–ln(K) relation. This petrophysical relation was applied to produce a spatially detailed map of K across the study area. A synthetic example based on ECa values at the site was used to assess model resolution and correlation loss given variations in depth and/or measurement error. Results from synthetic modeling indicate that optimum correlation with K occurs at ~ 0.5 m followed by a gradual correlation loss of 90% at 2.3 m. These results are consistent with an analysis of depth of investigation (DOI) given the range of frequencies, transmitter–receiver separation, and measurement errors for the field data. DOIs were estimated at 2.0 ± 0.5 m depending on the soil conductivities. A 4-layer model, with varying thicknesses, was used to invert the ECa to maximize available information within the aquifer region for improved correlations with K. Results show improved correlation between K and the corresponding inverted EC at similar depths, underscoring the importance of inversion in using multi-frequency EMI data for hydrologic estimation.

  8. Inversion of multi-frequency electromagnetic induction data for 3D characterization of hydraulic conductivity

    USGS Publications Warehouse

    Brosten, T.R.; Day-Lewis, F. D.; Schultz, G.M.; Curtis, G.P.; Lane, J.W.

    2011-01-01

    Electromagnetic induction (EMI) instruments provide rapid, noninvasive, and spatially dense data for characterization of soil and groundwater properties. Data from multi-frequency EMI tools can be inverted to provide quantitative electrical conductivity estimates as a function of depth. In this study, multi-frequency EMI data collected across an abandoned uranium mill site near Naturita, Colorado, USA, are inverted to produce vertical distribution of electrical conductivity (EC) across the site. The relation between measured apparent electrical conductivity (ECa) and hydraulic conductivity (K) is weak (correlation coefficient of 0.20), whereas the correlation between the depth dependent EC obtained from the inversions, and K is sufficiently strong to be used for hydrologic estimation (correlation coefficient of -0.62). Depth-specific EC values were correlated with co-located K measurements to develop a site-specific ln(EC)-ln(K) relation. This petrophysical relation was applied to produce a spatially detailed map of K across the study area. A synthetic example based on ECa values at the site was used to assess model resolution and correlation loss given variations in depth and/or measurement error. Results from synthetic modeling indicate that optimum correlation with K occurs at ~0.5 m followed by a gradual correlation loss of 90% at 2.3 m. These results are consistent with an analysis of depth of investigation (DOI) given the range of frequencies, transmitter-receiver separation, and measurement errors for the field data. DOIs were estimated at 2.0 ± 0.5 m depending on the soil conductivities. A 4-layer model, with varying thicknesses, was used to invert the ECa to maximize available information within the aquifer region for improved correlations with K. Results show improved correlation between K and the corresponding inverted EC at similar depths, underscoring the importance of inversion in using multi-frequency EMI data for hydrologic estimation. © 2011.

  9. Application of objective clinical human reliability analysis (OCHRA) in assessment of technical performance in laparoscopic rectal cancer surgery.

    PubMed

    Foster, J D; Miskovic, D; Allison, A S; Conti, J A; Ockrim, J; Cooper, E J; Hanna, G B; Francis, N K

    2016-06-01

    Laparoscopic rectal resection is technically challenging, with outcomes dependent upon technical performance. No robust objective assessment tool exists for laparoscopic rectal resection surgery. This study aimed to investigate the application of the objective clinical human reliability analysis (OCHRA) technique for assessing technical performance of laparoscopic rectal surgery and to explore the validity and reliability of this technique. Laparoscopic rectal cancer resection operations were described in the format of a hierarchical task analysis. Potential technical errors were defined. The OCHRA technique was used to identify technical errors enacted in videos of twenty consecutive laparoscopic rectal cancer resection operations from a single site. The procedural task, spatial location, and circumstances of all identified errors were logged. Clinical validity was assessed through correlation with clinical outcomes; reliability was assessed by test-retest. A total of 335 execution errors were identified, with a median of 15 per operation. More errors were observed during pelvic tasks compared with abdominal tasks (p < 0.001). Within the pelvis, more errors were observed during dissection on the right side than the left (p = 0.03). Test-retest confirmed reliability (r = 0.97, p < 0.001). A significant correlation was observed between error frequency and mesorectal specimen quality (rs = 0.52, p = 0.02) and with blood loss (rs = 0.609, p = 0.004). OCHRA offers a valid and reliable method for evaluating technical performance of laparoscopic rectal surgery.

  10. Distortion of Digital Image Correlation (DIC) Displacements and Strains from Heat Waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, E. M. C.; Reu, P. L.

    “Heat waves” is a colloquial term used to describe convective currents in air formed when different objects in an area are at different temperatures. In the context of Digital Image Correlation (DIC) and other optical-based image processing techniques, imaging an object of interest through heat waves can significantly distort the apparent location and shape of the object. There are many potential heat sources in DIC experiments, including but not limited to lights, cameras, hot ovens, and sunlight, yet error caused by heat waves is often overlooked. This paper first briefly presents three practical situations in which heat waves contributed significant error to DIC measurements, to motivate the investigation of heat waves in more detail. Then the theoretical background of how light is refracted through heat waves is presented, and the effects of heat waves on displacements and strains computed from DIC are characterized in detail. Finally, different filtering methods are investigated to reduce the displacement and strain errors caused by imaging through heat waves. The overarching conclusions from this work are that errors caused by heat waves are significantly higher than typical noise floors for DIC measurements, and that the errors are difficult to filter because the temporal and spatial frequencies of the errors are in the same range as those of typical signals of interest. Eliminating or mitigating the effects of heat sources in a DIC experiment is therefore the best solution for minimizing errors caused by heat waves.

  11. Distortion of Digital Image Correlation (DIC) Displacements and Strains from Heat Waves

    DOE PAGES

    Jones, E. M. C.; Reu, P. L.

    2017-11-28

    “Heat waves” is a colloquial term used to describe convective currents in air formed when different objects in an area are at different temperatures. In the context of Digital Image Correlation (DIC) and other optical-based image processing techniques, imaging an object of interest through heat waves can significantly distort the apparent location and shape of the object. There are many potential heat sources in DIC experiments, including but not limited to lights, cameras, hot ovens, and sunlight, yet error caused by heat waves is often overlooked. This paper first briefly presents three practical situations in which heat waves contributed significant error to DIC measurements, to motivate the investigation of heat waves in more detail. Then the theoretical background of how light is refracted through heat waves is presented, and the effects of heat waves on displacements and strains computed from DIC are characterized in detail. Finally, different filtering methods are investigated to reduce the displacement and strain errors caused by imaging through heat waves. The overarching conclusions from this work are that errors caused by heat waves are significantly higher than typical noise floors for DIC measurements, and that the errors are difficult to filter because the temporal and spatial frequencies of the errors are in the same range as those of typical signals of interest. Eliminating or mitigating the effects of heat sources in a DIC experiment is therefore the best solution for minimizing errors caused by heat waves.

  12. Spatial heterogeneity study of vegetation coverage at Heihe River Basin

    NASA Astrophysics Data System (ADS)

    Wu, Lijuan; Zhong, Bo; Guo, Liyu; Zhao, Xiangwei

    2014-11-01

    Spatial heterogeneity of the animal-landscape system has three major components: heterogeneity of resource distributions in the physical environment, heterogeneity of plant tissue chemistry, and heterogeneity of movement modes of the animal. Furthermore, these three types of heterogeneity interact with each other and can either reinforce or offset one another, thereby affecting system stability and dynamics. In previous studies, study areas were investigated by field sampling, which requires a large amount of manpower. In addition, uncertainty in sampling affects the quality of field data, which leads to unsatisfactory results throughout the study. In this study, remote sensing data are used to guide the sampling for research on the heterogeneity of vegetation coverage, to avoid errors caused by the randomness of field sampling. Semivariance and fractal dimension analysis are used to analyze the spatial heterogeneity of vegetation coverage at Heihe River Basin. A spherical model with nugget is used to fit the semivariogram of vegetation coverage. Based on this experiment, it is found that: (1) there is a strong correlation between vegetation coverage and the distance between vegetation populations within the range of 0-28051.3188 m at Heihe River Basin, but the correlation is lost abruptly when the distance is greater than 28051.3188 m; (2) the degree of spatial heterogeneity of vegetation coverage at Heihe River Basin is medium; (3) spatial distribution variability of vegetation occurs mainly on small scales; and (4) the degree of spatial autocorrelation is 72.29%, between 25% and 75%, which means that the spatial correlation of vegetation coverage at Heihe River Basin is moderately high.
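
    For reference, the spherical semivariogram with nugget used above has a simple closed form; the sketch below evaluates it, with the nugget and sill values chosen purely for illustration and only the ~28 km range taken from the record.

    ```python
    import numpy as np

    def spherical_variogram(h, nugget, sill, a):
        """Spherical semivariogram with nugget: rises as 1.5*(h/a) - 0.5*(h/a)**3
        from the nugget toward the sill, and stays at the sill beyond the range a
        (the distance at which spatial correlation is lost)."""
        h = np.asarray(h, dtype=float)
        gamma = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
        return np.where(h <= a, gamma, sill)

    # Range taken from the record (~28 km); nugget and sill values are illustrative
    print(spherical_variogram([0.0, 14000.0, 28051.0, 40000.0],
                              nugget=0.02, sill=0.10, a=28051.3188))
    ```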

  13. Spatial Representativeness of Surface-Measured Variations of Downward Solar Radiation

    NASA Astrophysics Data System (ADS)

    Schwarz, M.; Folini, D.; Hakuba, M. Z.; Wild, M.

    2017-12-01

    When using time series of ground-based surface solar radiation (SSR) measurements in combination with gridded data, the spatial and temporal representativeness of the point observations must be considered. We use SSR data from surface observations and high-resolution (0.05°) satellite-derived data to infer the spatiotemporal representativeness of observations for monthly and longer time scales in Europe. The correlation analysis shows that the squared correlation coefficients (R2) between SSR time series decrease linearly with increasing distance between the surface observations. For deseasonalized monthly mean time series, R2 ranges from 0.85 for distances up to 25 km between the stations to 0.25 at distances of 500 km. A decorrelation length (i.e., the e-folding distance of R2) on the order of 400 km (with a spread of 100-600 km) was found. R2 from correlations between point observations and colocated grid box area means determined from satellite data was found to be 0.80 for a 1° grid. To quantify the error which arises when using a point observation as a surrogate for the area mean SSR of larger surroundings, we calculated a spatial sampling error (SSE) for a 1° grid of 8 (3) W/m2 for monthly (annual) time series. The SSE based on a 1° grid, therefore, is of the same magnitude as the measurement uncertainty. The analysis generally reveals that monthly mean (or longer temporally aggregated) point observations of SSR capture the larger-scale variability well. This finding shows that comparing time series of SSR measurements with gridded data is feasible for those time scales.
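
    A minimal sketch of how an e-folding decorrelation length can be read off pairwise station statistics; the distance/R2 pairs below are invented to roughly mimic the reported decay and are not taken from the study.

        import numpy as np

        # hypothetical pairwise statistics (station separation in km, squared correlation)
        dist = np.array([25.0, 100.0, 200.0, 300.0, 400.0, 500.0])
        r2 = np.array([0.85, 0.70, 0.52, 0.40, 0.32, 0.25])

        # decorrelation length: distance at which R^2 has dropped to 1/e of its
        # short-range value (np.interp needs the x-coordinates in increasing order)
        target = r2[0] / np.e
        length = np.interp(target, r2[::-1], dist[::-1])
        print(f"decorrelation length ≈ {length:.0f} km")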

  14. Comparisons of neural networks to standard techniques for image classification and correlation

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1994-01-01

    Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification of a Landsat scene of Tucson, Arizona, than maximum-likelihood classification. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
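
    The 3x3-window idea can be sketched as follows: each pixel of a multispectral image is represented by the flattened window around it, so local texture enters the classifier without computing an explicit texture measure. This is a generic illustration (hypothetical band count and image size), not the original implementation; it assumes NumPy 1.20+ for sliding_window_view.

        import numpy as np
        from numpy.lib.stride_tricks import sliding_window_view

        def window_features(image, w=3):
            # image: (rows, cols, bands); returns (rows, cols, w*w*bands) features
            pad = w // 2
            padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
            wins = sliding_window_view(padded, (w, w), axis=(0, 1))  # (rows, cols, bands, w, w)
            return wins.reshape(image.shape[0], image.shape[1], -1)

        scene = np.random.rand(100, 100, 6).astype(np.float32)   # hypothetical 6-band scene
        X = window_features(scene).reshape(-1, 3 * 3 * 6)         # one 54-element vector per pixel
        print(X.shape)                                            # (10000, 54)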

  15. Students’ Errors in Geometry Viewed from Spatial Intelligence

    NASA Astrophysics Data System (ADS)

    Riastuti, N.; Mardiyana, M.; Pramudya, I.

    2017-09-01

    Geometry is one of the difficult materials because students must have ability to visualize, describe images, draw shapes, and know the kind of shapes. This study aim is to describe student error based on Newmans’ Error Analysis in solving geometry problems viewed from spatial intelligence. This research uses descriptive qualitative method by using purposive sampling technique. The datas in this research are the result of geometri material test and interview by the 8th graders of Junior High School in Indonesia. The results of this study show that in each category of spatial intelligence has a different type of error in solving the problem on the material geometry. Errors are mostly made by students with low spatial intelligence because they have deficiencies in visual abilities. Analysis of student error viewed from spatial intelligence is expected to help students do reflection in solving the problem of geometry.

  16. Improvements in GRACE Gravity Fields Using Regularization

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or none of the systematic observation residuals that frequently result from signal suppression by regularization. Up to degree 14, the signal in the regularized solutions shows a correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial-extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using the special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in the small river basins, like Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or spatial smoothing.
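
    The actual GRACE processing and the L-ribbon choice of the regularization parameter are far more involved than can be shown here, but the basic regularized least-squares step the abstract refers to has this generic (Tikhonov) form; the matrices below are toy stand-ins, not GRACE normal equations.

        import numpy as np

        def tikhonov(A, y, lam):
            # minimise ||A x - y||^2 + lam * ||x||^2
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

        rng = np.random.default_rng(0)
        A = rng.standard_normal((200, 50)) @ np.diag(np.logspace(0, -6, 50))  # ill-conditioned
        x_true = rng.standard_normal(50)
        y = A @ x_true + 1e-3 * rng.standard_normal(200)
        x_reg = tikhonov(A, y, lam=1e-6)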

  17. Geographically correlated errors observed from a laser-based short-arc technique

    NASA Astrophysics Data System (ADS)

    Bonnefond, P.; Exertier, P.; Barlier, F.

    1999-07-01

    The laser-based short-arc technique has been developed in order to avoid local errors which affect the dynamical orbit computation, such as those due to mismodeling in the geopotential. It is based on a geometric method and consists of fitting short arcs (about 4000 km), derived from a global orbit, to satellite laser ranging tracking measurements from a ground station network. Ninety-two TOPEX/Poseidon (T/P) cycles of laser-based short-arc orbits have then been compared to JGM-2 and JGM-3 T/P orbits computed by the Precise Orbit Determination (POD) teams (Service d'Orbitographie Doris/Centre National d'Etudes Spatiales and Goddard Space Flight Center/NASA) over two areas: (1) the Mediterranean area and (2) a part of the Pacific (including California and Hawaii) called hereafter the U.S. area. Geographically correlated orbit errors in these areas are clearly evidenced: for example, -2.6 cm and +0.7 cm for the Mediterranean and U.S. areas, respectively, relative to JGM-3 orbits. However, geographically correlated errors (GCE), which are commonly linked to errors in the gravity model, can also be due to systematic errors in the reference frame and/or to biases in the tracking measurements. Although the short-arc technique is very sensitive to such error sources, our analysis demonstrates that the induced geographical systematic effects are at the level of 1-2 cm on the radial orbit component. Results are also compared with those obtained with the GPS-based reduced dynamic technique. The time-dependent part of GCE has also been studied. Over 6 years of T/P data, coherent signals in the radial component of T/P Precise Orbit Ephemeris (POE) are clearly evidenced with a time period of about 6 months. In addition, the impact of time-varying error sources coming from the reference frame and the tracking data accuracy has been analyzed, showing a possible linear trend of about 0.5-1 mm/yr in the radial component of T/P POE.

  18. A multi-pixel InSAR time series analysis method: Simultaneous estimation of atmospheric noise, orbital errors and deformation

    NASA Astrophysics Data System (ADS)

    Jolivet, R.; Simons, M.

    2016-12-01

    InSAR time series analysis allows reconstruction of ground deformation with meter-scale spatial resolution and high temporal sampling. For instance, the ESA Sentinel-1 Constellation is capable of providing 6-day temporal sampling, thereby opening a new window on the spatio-temporal behavior of tectonic processes. However, due to computational limitations, most time series methods rely on a pixel-by-pixel approach. This limitation is a concern because (1) accounting for orbital errors requires referencing all interferograms to a common set of pixels before reconstruction of the time series and (2) spatially correlated atmospheric noise due to tropospheric turbulence is ignored. Decomposing interferograms into statistically independent wavelets will mitigate issues of correlated noise, but prior estimation of orbital uncertainties will still be required. Here, we explore a method that considers all pixels simultaneously when solving for the spatio-temporal evolution of interferometric phase. Our method is based on a massively parallel implementation of a conjugate direction solver. We consider an interferogram as the sum of the phase difference between 2 SAR acquisitions and the corresponding orbital errors. In addition, we fit the temporal evolution with a physically parameterized function while accounting for spatially correlated noise in the data covariance. We assume noise is isotropic for any given InSAR pair with a covariance described by an exponential function that decays with increasing separation distance between pixels. We regularize our solution in space using a similar exponential function as model covariance. Given the problem size, we avoid matrix multiplications of the full covariances by computing convolutions in the Fourier domain. We first solve the unregularized least squares problem using the LSQR algorithm to approach the final solution, then run our conjugate direction solver to account for data and model covariances. We present synthetic tests showing the efficiency of our method. We then reconstruct a 20-year continuous time series covering Northern Chile. Without input from any additional GNSS data, we recover the secular deformation rate, seasonal oscillations and the deformation fields from the 2005 Mw 7.8 Tarapaca and 2007 Mw 7.7 Tocopilla earthquakes.
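
    The step described above, applying an isotropic exponential covariance to a 2-D field by convolution in the Fourier domain instead of a full matrix multiplication, can be sketched as below. The grid size, pixel spacing and correlation length are made up, and a periodic domain is assumed, which a real implementation would handle with padding.

        import numpy as np

        def apply_exp_covariance(field, dx, sigma2, corr_len):
            # convolve field with C(r) = sigma2 * exp(-r / corr_len) via FFT
            ny, nx = field.shape
            y = (np.arange(ny) - ny // 2) * dx
            x = (np.arange(nx) - nx // 2) * dx
            xx, yy = np.meshgrid(x, y)
            kernel = sigma2 * np.exp(-np.hypot(xx, yy) / corr_len)
            return np.real(np.fft.ifft2(np.fft.fft2(field) *
                                        np.fft.fft2(np.fft.ifftshift(kernel))))

        phase = np.random.randn(256, 256)                      # toy phase screen
        out = apply_exp_covariance(phase, dx=90.0, sigma2=1.0, corr_len=2000.0)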

  19. Optical correlation based pose estimation using bipolar phase grayscale amplitude spatial light modulators

    NASA Astrophysics Data System (ADS)

    Outerbridge, Gregory John, II

    Pose estimation techniques have been developed on both optical and digital correlator platforms to aid in the autonomous rendezvous and docking of spacecraft. This research has focused on the optical architecture, which utilizes high-speed bipolar-phase grayscale-amplitude spatial light modulators as the image and correlation filter devices. The optical approach has the primary advantage of optical parallel processing: an extremely fast and efficient way of performing complex correlation calculations. However, the constraints imposed on optically implementable filters make optical-correlator-based pose estimation technically incompatible with the popular weighted composite filter designs successfully used on the digital platform. This research employs a much simpler "bank of filters" approach to optical pose estimation that exploits the inherent efficiency of optical correlation devices. A novel logarithmically mapped optically implementable matched filter combined with a pose search algorithm resulted in sub-degree standard deviations in angular pose estimation error. These filters were extremely simple to generate, requiring no complicated training sets, and resulted in excellent performance even in the presence of significant background noise. Common edge detection and scaling of the input image was the only image pre-processing necessary for accurate pose detection at all alignment distances of interest.

  20. Spatial regression analysis of traffic crashes in Seoul.

    PubMed

    Rhee, Kyoung-Ah; Kim, Joon-Ki; Lee, Young-ihn; Ulfarsson, Gudmundur F

    2016-06-01

    Traffic crashes can be spatially correlated events and the analysis of the distribution of traffic crash frequency requires evaluation of parameters that reflect spatial properties and correlation. Typically this spatial aspect of crash data is not used in everyday practice by planning agencies and this contributes to a gap between research and practice. A database of traffic crashes in Seoul, Korea, in 2010 was developed at the traffic analysis zone (TAZ) level with a number of GIS developed spatial variables. Practical spatial models using available software were estimated. The spatial error model was determined to be better than the spatial lag model and an ordinary least squares baseline regression. A geographically weighted regression model provided useful insights about localization of effects. The results found that an increased length of roads with speed limit below 30 km/h and a higher ratio of residents below age of 15 were correlated with lower traffic crash frequency, while a higher ratio of residents who moved to the TAZ, more vehicle-kilometers traveled, and a greater number of access points with speed limit difference between side roads and mainline above 30 km/h all increased the number of traffic crashes. This suggests, for example, that better control or design for merging lower speed roads with higher speed roads is important. A key result is that the length of bus-only center lanes had the largest effect on increasing traffic crashes. This is important as bus-only center lanes with bus stop islands have been increasingly used to improve transit times. Hence the potential negative safety impacts of such systems need to be studied further and mitigated through improved design of pedestrian access to center bus stop islands. Copyright © 2016 Elsevier Ltd. All rights reserved.
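
    For reference, the two competing specifications compared above are usually written as follows, with W the spatial weight matrix over TAZs (typically row-standardized); this is the textbook form, shown only to make the distinction concrete, not the paper's exact notation.

        \begin{aligned}
        \text{spatial lag model:}   &\quad y = \rho W y + X\beta + \varepsilon,\\
        \text{spatial error model:} &\quad y = X\beta + u,\quad u = \lambda W u + \varepsilon,\quad \varepsilon \sim N(0,\sigma^{2} I).
        \end{aligned}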

  1. Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary

    NASA Astrophysics Data System (ADS)

    Anugu, N.; Garcia, P.

    2016-04-01

    Wave front sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak Poyneer (2003); Löfdahl (2010). However, the peak-finding algorithm results are usually biased towards the integer pixels; these errors are called systematic bias errors Sjödahl (1994). These errors are caused by the low pixel sampling of the images. The amplitude of these errors depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed using a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola Poyneer (2003); quadratic polynomial Löfdahl (2010); threshold center of gravity Bailey (2003); Gaussian Nobach & Honkanen (2005); and pyramid Bailey (2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis reveals that the threshold center of gravity behaves better at low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both systematic and RMS error reduction. To overcome this problem, a new solution is proposed. In this solution, the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is implemented at the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed on a sub-pixel level grid by limiting the field of search to 4 × 4 pixels centered at the initial position delivered by the first step. The generation of these sub-pixel-grid region-of-interest images is achieved with bi-cubic interpolation. Correlation matching with a sub-pixel grid technique was previously reported in electronic speckle photography Sjödahl (1994). This technique is applied here to solar wavefront sensing. A large dynamic range and a better accuracy in the measurements are achieved with the combination of the original-pixel-grid correlation matching over a large field of view and sub-pixel interpolated-image-grid correlation matching within a small field of view. The results revealed that the proposed method outperforms all the different peak-finding algorithms studied in the first approach. It reduces both the systematic error and the RMS error by a factor of 5 (i.e., 75% systematic error reduction) when 5-times-improved image sampling is used. This measurement is achieved at the expense of twice the computational cost. With the 5-times-improved image sampling, the wave front accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wave front sensing in solar telescopes, particularly for measuring the large dynamic image shifts involved in open-loop adaptive optics. Also, by choosing an appropriate increment of image sampling as a trade-off between computational speed limitations and the targeted sub-pixel image shift accuracy, it can be employed in closed-loop adaptive optics. The study is extended to three other classes of sub-aperture images (a point source, a laser guide star, and a Galactic Center extended scene). The results are planned to be submitted to the Optical Express journal.
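
    A simplified sketch of the two-step idea: an integer-pixel cross-correlation peak is found first, and the search is then refined on a bicubically interpolated grid around that peak. For brevity the interpolation is applied to the correlation surface rather than to the sub-aperture images themselves as in the paper, the peak is assumed not to sit on the image border, and SciPy is assumed available.

        import numpy as np
        from scipy.signal import fftconvolve
        from scipy.ndimage import zoom

        def subpixel_shift(ref, img, upsample=5, half=2):
            # step 1: integer-pixel cross-correlation peak (ref and img have equal shapes)
            c = fftconvolve(img - img.mean(), (ref - ref.mean())[::-1, ::-1], mode="same")
            iy, ix = np.unravel_index(np.argmax(c), c.shape)
            # step 2: bicubic upsampling of a small region of interest around the peak
            roi = c[iy - half:iy + half + 1, ix - half:ix + half + 1]
            fine = zoom(roi, upsample, order=3)
            fy, fx = np.unravel_index(np.argmax(fine), fine.shape)
            # approximate sub-pixel offset of the refined peak, in original pixels
            dy = iy - c.shape[0] // 2 + fy * (roi.shape[0] - 1) / (fine.shape[0] - 1) - half
            dx = ix - c.shape[1] // 2 + fx * (roi.shape[1] - 1) / (fine.shape[1] - 1) - half
            return dy, dx

        ref = np.random.rand(64, 64)
        img = np.roll(ref, (3, -2), axis=(0, 1))   # known integer shift for a quick check
        print(subpixel_shift(ref, img))            # close to (3, -2)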

  2. Canceling the momentum in a phase-shifting algorithm to eliminate spatially uniform errors.

    PubMed

    Hibino, Kenichi; Kim, Yangjin

    2016-08-10

    In phase-shifting interferometry, phase modulation nonlinearity causes both spatially uniform and nonuniform errors in the measured phase. Conventional linear-detuning error-compensating algorithms only eliminate the spatially variable error component. The uniform error is proportional to the inertial momentum of the data-sampling weight of a phase-shifting algorithm. This paper proposes a design approach to cancel the momentum by using characteristic polynomials in the Z-transform space and shows that an arbitrary M-frame algorithm can be modified to a new (M+2)-frame algorithm that acquires new symmetry to eliminate the uniform error.

  3. A study of the application of differential techniques to the global positioning system for a helicopter precision approach

    NASA Technical Reports Server (NTRS)

    Mccall, D. L.

    1984-01-01

    The results of a simulation study to define the functional characteristics of an airborne and ground reference GPS receiver for use in a Differential GPS system are documented. The operation of a variety of receiver types (sequential single-channel, continuous multi-channel, etc.) is evaluated for a typical civil helicopter mission scenario. The math model of each receiver type incorporated representative system errors, including intentional degradation. The results include a discussion of the receivers' relative performance, the spatial correlative properties of individual range error sources, and the navigation algorithm used to smooth the position data.

  4. Neural mechanisms underlying spatial realignment during adaptation to optical wedge prisms.

    PubMed

    Chapman, Heidi L; Eramudugolla, Ranmalee; Gavrilescu, Maria; Strudwick, Mark W; Loftus, Andrea; Cunnington, Ross; Mattingley, Jason B

    2010-07-01

    Visuomotor adaptation to a shift in visual input produced by prismatic lenses is an example of dynamic sensory-motor plasticity within the brain. Prism adaptation is readily induced in healthy individuals, and is thought to reflect the brain's ability to compensate for drifts in spatial calibration between different sensory systems. The neural correlate of this form of functional plasticity is largely unknown, although current models predict the involvement of parieto-cerebellar circuits. Recent studies that have employed event-related functional magnetic resonance imaging (fMRI) to identify brain regions associated with prism adaptation have discovered patterns of parietal and cerebellar modulation as participants corrected their visuomotor errors during the early part of adaptation. However, the role of these regions in the later stage of adaptation, when 'spatial realignment' or true adaptation is predicted to occur, remains unclear. Here, we used fMRI to quantify the distinctive patterns of parieto-cerebellar activity as visuomotor adaptation develops. We directly contrasted activation patterns during the initial error correction phase of visuomotor adaptation with that during the later spatial realignment phase, and found significant recruitment of the parieto-cerebellar network--with activations in the right inferior parietal lobe and the right posterior cerebellum. These findings provide the first evidence of both cerebellar and parietal involvement during the spatial realignment phase of prism adaptation. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  5. Psychological symptoms and spatial orientation during the first 3 months after acute unilateral vestibular lesion.

    PubMed

    Gómez-Alvarez, Fatima B; Jáuregui-Renaud, Kathrine

    2011-02-01

    We undertook this study to assess the correlation between the results of simple tests of spatial orientation and the occurrence of common psychological symptoms during the first 3 months after an acute, unilateral, peripheral, vestibular lesion. Ten vestibular patients were selected and accepted to participate in the study. During a 3-month follow-up, we recorded the static visual vertical (VV), the estimation error of reorientation in the yaw plane and the responses to a standardized questionnaire of balance symptoms, the Dizziness Handicap Inventory (DHI), the depersonalization/derealization inventory by Cox and Swinson (DD), the Dissociative Experiences Scale (DES), the 12-item General Health Questionnaire (GHQ-12), the Zung Instrument for Anxiety Disorders and the Hamilton Depression Rating Scale. At week 1, all patients showed a VV >2° and failed to reorient themselves effectively. They reported several balance symptoms and handicap as well as DD symptoms, including attention/concentration difficulties; 80% of the patients had a Hamilton score ≥8. At this time the balance symptom score correlated with the DHI. After 3 months, all scores decreased. Multiple regression analysis of the differences from baseline showed that the DD score difference was related to the difference on the balance score, the reorientation error and the DHI score (p <0.01). No other linear relationships were observed (p >0.5). During the acute phase of a unilateral, peripheral, vestibular lesion, patients may show poor spatial orientation concurrent with DD symptoms including attention/concentration difficulties, and somatic depression symptoms. After vestibular rehabilitation, DD symptoms decrease as the spatial orientation improves, even if somatic symptoms of depression persist. Copyright © 2011 IMSS. Published by Elsevier Inc. All rights reserved.

  6. Precipitation From a Multiyear Database of Convection-Allowing WRF Simulations

    NASA Astrophysics Data System (ADS)

    Goines, D. C.; Kennedy, A. D.

    2018-03-01

    Convection-allowing models (CAMs) have become frequently used for operational forecasting and, more recently, have been utilized for general circulation model downscaling. CAM forecasts have typically been analyzed for a few case studies or over short time periods, but this limits the ability to judge the overall skill of deterministic simulations. Analysis over long time periods can yield a better understanding of systematic model error. Four years of warm-season (April-August, 2010-2013) simulated precipitation have been accumulated from two Weather Research and Forecasting (WRF) models with 4 km grid spacing. The simulations were provided by the National Center for Environmental Prediction (NCEP) and the National Severe Storms Laboratory (NSSL), each with different dynamic cores and parameterization schemes. These simulations are evaluated against the NCEP Stage-IV precipitation data set with similar 4 km grid spacing. The spatial distribution and diurnal cycle of precipitation in the central United States are analyzed using Hovmöller diagrams, grid point correlations, and traditional verification skill scoring (i.e., the Equitable Threat Score, ETS). Although NCEP-WRF had a high positive error in total precipitation, spatial characteristics were similar to observations. For example, the spatial distribution of NCEP-WRF precipitation correlated better than NSSL-WRF for the Northern Plains. Hovmöller results exposed a delay in initiation and decay of diurnal precipitation by NCEP-WRF while both models had difficulty in reproducing the timing and location of propagating precipitation. ETS was highest for NSSL-WRF in all domains at all times. ETS was also higher in areas of propagating precipitation compared to areas of unorganized diurnal scattered precipitation. Monthly analysis identified unique differences between the two models in their abilities to correctly simulate the spatial distribution and zonal motion of precipitation through the warm season.
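
    The Equitable Threat Score used for the verification above is computed from a simple contingency table; the sketch below applies the standard definition to hypothetical forecast and observed accumulations on a common grid (the gamma-distributed fields and the 1 mm threshold are placeholders).

        import numpy as np

        def equitable_threat_score(forecast, observed, threshold):
            # ETS (Gilbert skill score) for exceedance of a precipitation threshold
            f = forecast >= threshold
            o = observed >= threshold
            hits = np.sum(f & o)
            misses = np.sum(~f & o)
            false_alarms = np.sum(f & ~o)
            hits_random = (hits + misses) * (hits + false_alarms) / f.size
            return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

        fcst = np.random.gamma(0.5, 2.0, size=(200, 200))   # hypothetical 4 km grids (mm)
        obs = np.random.gamma(0.5, 2.0, size=(200, 200))
        print(equitable_threat_score(fcst, obs, threshold=1.0))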

  7. Accounting for the measurement error of spectroscopically inferred soil carbon data for improved precision of spatial predictions.

    PubMed

    Somarathna, P D S N; Minasny, Budiman; Malone, Brendan P; Stockmann, Uta; McBratney, Alex B

    2018-08-01

    Spatial modelling of environmental data commonly only considers spatial variability as the single source of uncertainty. In reality however, the measurement errors should also be accounted for. In recent years, infrared spectroscopy has been shown to offer low cost, yet invaluable information needed for digital soil mapping at meaningful spatial scales for land management. However, spectrally inferred soil carbon data are known to be less accurate compared to laboratory analysed measurements. This study establishes a methodology to filter out the measurement error variability by incorporating the measurement error variance in the spatial covariance structure of the model. The study was carried out in the Lower Hunter Valley, New South Wales, Australia where a combination of laboratory measured, and vis-NIR and MIR inferred topsoil and subsoil soil carbon data are available. We investigated the applicability of residual maximum likelihood (REML) and Markov Chain Monte Carlo (MCMC) simulation methods to generate parameters of the Matérn covariance function directly from the data in the presence of measurement error. The results revealed that the measurement error can be effectively filtered-out through the proposed technique. When the measurement error was filtered from the data, the prediction variance almost halved, which ultimately yielded a greater certainty in spatial predictions of soil carbon. Further, the MCMC technique was successfully used to define the posterior distribution of measurement error. This is an important outcome, as the MCMC technique can be used to estimate the measurement error if it is not explicitly quantified. Although this study dealt with soil carbon data, this method is amenable for filtering the measurement error of any kind of continuous spatial environmental data. Copyright © 2018 Elsevier B.V. All rights reserved.
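
    The key point above, that a known measurement-error variance can be carried on the diagonal of the spatial covariance alongside the nugget so that it is filtered out of the predictions, can be sketched as follows. The Matérn parameters and site locations are placeholders, and the REML/MCMC estimation used in the study is not reproduced here.

        import numpy as np
        from scipy.spatial.distance import cdist
        from scipy.special import gamma, kv

        def matern_cov(h, sigma2, rho, nu):
            # Matérn covariance evaluated on a matrix of distances h
            h = np.where(h == 0.0, 1e-10, h)
            u = np.sqrt(2.0 * nu) * h / rho
            return sigma2 * (2.0 ** (1.0 - nu) / gamma(nu)) * u ** nu * kv(nu, u)

        def simple_kriging_weights(coords, x0, sigma2, rho, nu, tau2, me_var):
            # the diagonal carries both the nugget tau2 and the measurement-error variance
            K = matern_cov(cdist(coords, coords), sigma2, rho, nu)
            K[np.diag_indices_from(K)] = sigma2 + tau2 + me_var
            k0 = matern_cov(cdist(coords, np.atleast_2d(x0)), sigma2, rho, nu).ravel()
            return np.linalg.solve(K, k0)

        coords = np.random.rand(30, 2) * 1000.0          # hypothetical site locations (m)
        w = simple_kriging_weights(coords, x0=[500.0, 500.0], sigma2=1.0, rho=300.0,
                                   nu=0.5, tau2=0.1,
                                   me_var=0.2)           # larger for spectra than for lab data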

  8. Localization of extended brain sources from EEG/MEG: the ExSo-MUSIC approach.

    PubMed

    Birot, Gwénaël; Albera, Laurent; Wendling, Fabrice; Merlet, Isabelle

    2011-05-01

    We propose a new MUSIC-like method, called 2q-ExSo-MUSIC (q ≥ 1). This method is an extension of the 2q-MUSIC (q ≥ 1) approach for solving the EEG/MEG inverse problem, when spatially-extended neocortical sources ("ExSo") are considered. It introduces a novel ExSo-MUSIC principle. The novelty is two-fold: i) the parameterization of the spatial source distribution that leads to an appropriate metric in the context of distributed brain sources and ii) the introduction of an original, efficient and low-cost way of optimizing this metric. In 2q-ExSo-MUSIC, the possible use of higher order statistics (q ≥ 2) offers a better robustness with respect to Gaussian noise of unknown spatial coherence and modeling errors. As a result we reduced the penalizing effects of both the background cerebral activity that can be seen as a Gaussian and spatially correlated noise, and the modeling errors induced by the non-exact resolution of the forward problem. Computer results on simulated EEG signals obtained with physiologically-relevant models of both the sources and the volume conductor show a highly increased performance of our 2q-ExSo-MUSIC method as compared to the classical 2q-MUSIC algorithms. Copyright © 2011 Elsevier Inc. All rights reserved.

  9. Transmission network of the 2014-2015 Ebola epidemic in Sierra Leone.

    PubMed

    Yang, Wan; Zhang, Wenyi; Kargbo, David; Yang, Ruifu; Chen, Yong; Chen, Zeliang; Kamara, Abdul; Kargbo, Brima; Kandula, Sasikiran; Karspeck, Alicia; Liu, Chao; Shaman, Jeffrey

    2015-11-06

    Understanding the growth and spatial expansion of (re)emerging infectious disease outbreaks, such as Ebola and avian influenza, is critical for the effective planning of control measures; however, such efforts are often compromised by data insufficiencies and observational errors. Here, we develop a spatial-temporal inference methodology using a modified network model in conjunction with the ensemble adjustment Kalman filter, a Bayesian inference method equipped to handle observational errors. The combined method is capable of revealing the spatial-temporal progression of infectious disease, while requiring only limited, readily compiled data. We use this method to reconstruct the transmission network of the 2014-2015 Ebola epidemic in Sierra Leone and identify source and sink regions. Our inference suggests that, in Sierra Leone, transmission within the network introduced Ebola to neighbouring districts and initiated self-sustaining local epidemics; two of the more populous and connected districts, Kenema and Port Loko, facilitated two independent transmission pathways. Epidemic intensity differed by district, was highly correlated with population size (r = 0.76, p = 0.0015) and a critical window of opportunity for containing local Ebola epidemics at the source (ca one month) existed. This novel methodology can be used to help identify and contain the spatial expansion of future (re)emerging infectious disease outbreaks. © 2015 The Author(s).

  10. Development of an Asset Value Map for Disaster Risk Assessment in China by Spatial Disaggregation Using Ancillary Remote Sensing Data.

    PubMed

    Wu, Jidong; Li, Ying; Li, Ning; Shi, Peijun

    2018-01-01

    The extent of economic losses due to a natural hazard and disaster depends largely on the spatial distribution of asset values in relation to the hazard intensity distribution within the affected area. Given that statistical data on asset value are collected by administrative units in China, generating spatially explicit asset exposure maps remains a key challenge for rapid postdisaster economic loss assessment. The goal of this study is to introduce a top-down (or downscaling) approach to disaggregate administrative-unit level asset value to grid-cell level. To do so, finding the highly correlated "surrogate" indicators is the key. A combination of three data sets (nighttime light grid, LandScan population grid, and road density grid) is used as ancillary asset density distribution information for spatializing the asset value. As a result, a high spatial resolution asset value map of China for 2015 is generated. The spatial data set contains aggregated economic value at risk at 30 arc-second spatial resolution. Accuracy of the spatial disaggregation reflects redistribution errors introduced by the disaggregation process as well as errors from the original ancillary data sets. The overall accuracy of the results proves to be promising. The example of using the developed disaggregated asset value map in exposure assessment of watersheds demonstrates that the data set offers immense analytical flexibility for overlay analysis according to the hazard extent. This product will help current efforts to analyze spatial characteristics of exposure and to uncover the contributions of both physical and social drivers of natural hazard and disaster across space and time. © 2017 Society for Risk Analysis.
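
    The top-down allocation itself is simple once a weight surface has been built from the ancillary layers; the sketch below spreads one administrative unit's total over its grid cells in proportion to a combined weight. The multiplicative combination of the three layers and all numbers are illustrative only; the paper's actual weighting scheme may differ.

        import numpy as np

        def disaggregate(unit_total, weights):
            # spread an administrative-unit total across grid cells in proportion to
            # an ancillary weight surface; fall back to uniform if all weights are 0
            w = np.asarray(weights, dtype=float)
            if w.sum() == 0.0:
                w = np.ones_like(w)
            return unit_total * w / w.sum()

        # hypothetical county described by four grid cells and three ancillary layers
        light = np.array([3.0, 1.0, 0.0, 2.0])
        pop = np.array([5.0, 2.0, 1.0, 2.0])
        road = np.array([1.0, 1.0, 0.0, 2.0])
        cells = disaggregate(1.2e9, light * pop * road)
        print(cells, cells.sum())    # cell values preserve the county total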

  11. Assessing uncertainty in high-resolution spatial climate data across the US Northeast.

    PubMed

    Bishop, Daniel A; Beier, Colin M

    2013-01-01

    Local and regional-scale knowledge of climate change is needed to model ecosystem responses, assess vulnerabilities and devise effective adaptation strategies. High-resolution gridded historical climate (GHC) products address this need, but come with multiple sources of uncertainty that are typically not well understood by data users. To better understand this uncertainty in a region with a complex climatology, we conducted a ground-truthing analysis of two 4 km GHC temperature products (PRISM and NRCC) for the US Northeast using 51 Cooperative Network (COOP) weather stations utilized by both GHC products. We estimated GHC prediction error for monthly temperature means and trends (1980-2009) across the US Northeast and evaluated any landscape effects (e.g., elevation, distance from coast) on those prediction errors. Results indicated that station-based prediction errors for the two GHC products were similar in magnitude, but on average, the NRCC product predicted cooler than observed temperature means and trends, while PRISM was cooler for means and warmer for trends. We found no evidence for systematic sources of uncertainty across the US Northeast, although errors were largest at high elevations. Errors in the coarse-scale (4 km) digital elevation models used by each product were correlated with temperature prediction errors, more so for NRCC than PRISM. In summary, uncertainty in spatial climate data has many sources and we recommend that data users develop an understanding of uncertainty at the appropriate scales for their purposes. To this end, we demonstrate a simple method for utilizing weather stations to assess local GHC uncertainty and inform decisions among alternative GHC products.

  12. Error quantification of a high-resolution coupled hydrodynamic-ecosystem coastal-ocean model: Part 2. Chlorophyll-a, nutrients and SPM

    NASA Astrophysics Data System (ADS)

    Allen, J. Icarus; Holt, Jason T.; Blackford, Jerry; Proctor, Roger

    2007-12-01

    Marine systems models are becoming increasingly complex and sophisticated, but far too little attention has been paid to model errors and the extent to which model outputs actually relate to ecosystem processes. Here we describe the application of summary error statistics to a complex 3D model (POLCOMS-ERSEM) run for the period 1988-1989 in the southern North Sea utilising information from the North Sea Project, which collected a wealth of observational data. We demonstrate that to understand model data misfit and the mechanisms creating errors, we need to use a hierarchy of techniques, including simple correlations, model bias, model efficiency, binary discriminator analysis and the distribution of model errors to assess model errors spatially and temporally. We also demonstrate that a linear cost function is an inappropriate measure of misfit. This analysis indicates that the model has some skill for all variables analysed. A summary plot of model performance indicates that model performance deteriorates as we move through the ecosystem from the physics, to the nutrients and plankton.
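
    Two of the summary statistics named above, model bias and model efficiency, have simple definitions that the sketch below applies to hypothetical paired model/observation values; the bias and Nash-Sutcliffe-style efficiency shown here are common conventions assumed for illustration, not necessarily the paper's exact formulations.

        import numpy as np

        def model_bias(model, obs):
            # one common definition: mean difference between model and observations
            return np.mean(model) - np.mean(obs)

        def model_efficiency(model, obs):
            # efficiency: 1 = perfect, 0 = no better than the observed mean
            obs = np.asarray(obs, dtype=float)
            return 1.0 - np.sum((obs - model) ** 2) / np.sum((obs - obs.mean()) ** 2)

        obs = np.array([1.2, 2.5, 3.1, 2.2, 4.0, 3.3])     # hypothetical chlorophyll-a values
        model = np.array([1.0, 2.9, 2.8, 2.6, 3.5, 3.9])
        print(model_bias(model, obs), model_efficiency(model, obs))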

  13. Reinforcement Learning Models and Their Neural Correlates: An Activation Likelihood Estimation Meta-Analysis

    PubMed Central

    Kumar, Poornima; Eickhoff, Simon B.; Dombrovski, Alexandre Y.

    2015-01-01

    Reinforcement learning describes motivated behavior in terms of two abstract signals. The representation of discrepancies between expected and actual rewards/punishments – prediction error – is thought to update the expected value of actions and predictive stimuli. Electrophysiological and lesion studies suggest that mesostriatal prediction error signals control behavior through synaptic modification of cortico-striato-thalamic networks. Signals in the ventromedial prefrontal and orbitofrontal cortex are implicated in representing expected value. To obtain unbiased maps of these representations in the human brain, we performed a meta-analysis of functional magnetic resonance imaging studies that employed algorithmic reinforcement learning models, across a variety of experimental paradigms. We found that the ventral striatum (medial and lateral) and midbrain/thalamus represented reward prediction errors, consistent with animal studies. Prediction error signals were also seen in the frontal operculum/insula, particularly for social rewards. In Pavlovian studies, striatal prediction error signals extended into the amygdala, while instrumental tasks engaged the caudate. Prediction error maps were sensitive to the model-fitting procedure (fixed or individually-estimated) and to the extent of spatial smoothing. A correlate of expected value was found in a posterior region of the ventromedial prefrontal cortex, caudal and medial to the orbitofrontal regions identified in animal studies. These findings highlight a reproducible motif of reinforcement learning in the cortico-striatal loops and identify methodological dimensions that may influence the reproducibility of activation patterns across studies. PMID:25665667

  14. Limitations of Dower's inverse transform for the study of atrial loops during atrial fibrillation.

    PubMed

    Guillem, María S; Climent, Andreu M; Bollmann, Andreas; Husser, Daniela; Millet, José; Castells, Francisco

    2009-08-01

    Spatial characteristics of atrial fibrillatory waves have been extracted by using a vectorcardiogram (VCG) during atrial fibrillation (AF). However, the VCG is usually not recorded in clinical practice and atrial loops are derived from the 12-lead electrocardiogram (ECG). We evaluated the suitability of the reconstruction of orthogonal leads from the 12-lead ECG for fibrillatory waves in AF. We used the Physikalisch-Technische Bundesanstalt diagnostic ECG database, which contains 15 simultaneously recorded signals (12-lead ECG and three Frank orthogonal leads) of 13 patients during AF. Frank leads were derived from the 12-lead ECG by using Dower's inverse transform. Derived leads were then compared to true Frank leads in terms of the relative error achieved. We calculated the orientation of AF loops of both recorded orthogonal leads and derived leads and measured the difference in estimated orientation. Also, we investigated the relationship of errors in derivation with fibrillatory wave amplitude, frequency, wave residuum, and fit to a plane of the AF loops. Errors in derivation of AF loops were 68 +/- 31% and errors in the estimation of orientation were 35.85 +/- 20.43 degrees . We did not find any correlation among these errors and amplitude, frequency, or other parameters. In conclusion, Dower's inverse transform should not be used for the derivation of orthogonal leads from the 12-lead ECG for the analysis of fibrillatory wave loops in AF. Spatial parameters obtained after this derivation may differ from those obtained from recorded orthogonal leads.

  15. Noncontact methods for measuring water-surface elevations and velocities in rivers: Implications for depth and discharge extraction

    USGS Publications Warehouse

    Nelson, Jonathan M.; Kinzel, Paul J.; McDonald, Richard R.; Schmeeckle, Mark

    2016-01-01

    Recently developed optical and videographic methods for measuring water-surface properties in a noninvasive manner hold great promise for extracting river hydraulic and bathymetric information. This paper describes such a technique, concentrating on the method of infrared videog- raphy for measuring surface velocities and both acoustic (laboratory-based) and laser-scanning (field-based) techniques for measuring water-surface elevations. In ideal laboratory situations with simple flows, appropriate spatial and temporal averaging results in accurate water-surface elevations and water-surface velocities. In test cases, this accuracy is sufficient to allow direct inversion of the governing equations of motion to produce estimates of depth and discharge. Unlike other optical techniques for determining local depth that rely on transmissivity of the water column (bathymetric lidar, multi/hyperspectral correlation), this method uses only water-surface information, so even deep and/or turbid flows can be investigated. However, significant errors arise in areas of nonhydrostatic spatial accelerations, such as those associated with flow over bedforms or other relatively steep obstacles. Using laboratory measurements for test cases, the cause of these errors is examined and both a simple semi-empirical method and computational results are presented that can potentially reduce bathymetric inversion errors.

  16. Accounting for rate instability and spatial patterns in the boundary analysis of cancer mortality maps

    PubMed Central

    Goovaerts, Pierre

    2006-01-01

    Boundary analysis of cancer maps may highlight areas where causative exposures change through geographic space, the presence of local populations with distinct cancer incidences, or the impact of different cancer control methods. Too often, such analysis ignores the spatial pattern of incidence or mortality rates and overlooks the fact that rates computed from sparsely populated geographic entities can be very unreliable. This paper proposes a new methodology that accounts for the uncertainty and spatial correlation of rate data in the detection of significant edges between adjacent entities or polygons. Poisson kriging is first used to estimate the risk value and the associated standard error within each polygon, accounting for the population size and the risk semivariogram computed from raw rates. The boundary statistic is then defined as half the absolute difference between kriged risks. Its reference distribution, under the null hypothesis of no boundary, is derived through the generation of multiple realizations of the spatial distribution of cancer risk values. This paper presents three types of neutral models generated using methods of increasing complexity: the common random shuffle of estimated risk values, a spatial re-ordering of these risks, or p-field simulation that accounts for the population size within each polygon. The approach is illustrated using age-adjusted pancreatic cancer mortality rates for white females in 295 US counties of the Northeast (1970–1994). Simulation studies demonstrate that Poisson kriging yields more accurate estimates of the cancer risk and how its value changes between polygons (i.e. boundary statistic), relative to the use of raw rates or a local empirical Bayes smoother. When used in conjunction with spatial neutral models generated by p-field simulation, the boundary analysis based on Poisson kriging estimates minimizes the proportion of type I errors (i.e. edges wrongly declared significant) while the frequency of these errors is predicted well by the p-value of the statistical test. PMID:19023455
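
    The boundary statistic and the simplest of the three neutral models (the random shuffle of estimated risks) can be sketched as below; the risks and adjacency list are invented, and the paper's Poisson kriging step and the more sophisticated spatially constrained neutral models are not reproduced.

        import numpy as np

        def boundary_pvalues(risk, adjacency, n_sim=999, seed=0):
            # boundary statistic = half the absolute difference between adjacent risks;
            # p-values from the simplest neutral model: a random shuffle of the risks
            rng = np.random.default_rng(seed)
            risk = np.asarray(risk, dtype=float)
            stat = np.array([0.5 * abs(risk[i] - risk[j]) for i, j in adjacency])
            exceed = np.zeros(len(adjacency))
            for _ in range(n_sim):
                shuffled = rng.permutation(risk)
                sim = np.array([0.5 * abs(shuffled[i] - shuffled[j]) for i, j in adjacency])
                exceed += sim >= stat
            return (exceed + 1.0) / (n_sim + 1.0)   # upper-tail p-value per edge

        # hypothetical kriged risks for five counties arranged in a chain
        print(boundary_pvalues([3.1, 2.9, 5.6, 3.0, 3.2], [(0, 1), (1, 2), (2, 3), (3, 4)]))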

  17. Assessment of imputation methods using varying ecological information to fill the gaps in a tree functional trait database

    NASA Astrophysics Data System (ADS)

    Poyatos, Rafael; Sus, Oliver; Vilà-Cabrera, Albert; Vayreda, Jordi; Badiella, Llorenç; Mencuccini, Maurizio; Martínez-Vilalta, Jordi

    2016-04-01

    Plant functional traits are increasingly being used in ecosystem ecology thanks to the growing availability of large ecological databases. However, these databases usually contain a large fraction of missing data because measuring plant functional traits systematically is labour-intensive and because most databases are compilations of datasets with different sampling designs. As a result, within a given database, there is an inevitable variability in the number of traits available for each data entry and/or the species coverage in a given geographical area. The presence of missing data may severely bias trait-based analyses, such as the quantification of trait covariation or trait-environment relationships, and may hamper efforts towards trait-based modelling of ecosystem biogeochemical cycles. Several data imputation (i.e. gap-filling) methods have been recently tested on compiled functional trait databases, but the performance of imputation methods applied to a functional trait database with a regular spatial sampling has not been thoroughly studied. Here, we assess the effects of data imputation on five tree functional traits (leaf biomass to sapwood area ratio, foliar nitrogen, maximum height, specific leaf area and wood density) in the Ecological and Forest Inventory of Catalonia, an extensive spatial database (covering 31900 km2). We tested the performance of species mean imputation, single imputation by the k-nearest neighbors algorithm (kNN) and a multiple imputation method, Multivariate Imputation with Chained Equations (MICE), at different levels of missing data (10%, 30%, 50%, and 80%). We also assessed the changes in imputation performance when additional predictors (species identity, climate, forest structure, spatial structure) were added to kNN and MICE imputations. We evaluated the imputed datasets using a battery of indexes describing departure from the complete dataset in trait distribution, in the mean prediction error, in the correlation matrix and in selected bivariate trait relationships. MICE yielded imputations which better preserved the variability and covariance structure of the data and provided an estimate of between-imputation uncertainty. We found that adding species identity as a predictor in MICE and kNN improved imputation for all traits, but adding climate did not lead to any appreciable improvement. However, forest structure and spatial structure did reduce imputation errors in maximum height and in leaf biomass to sapwood area ratios, respectively. Although species mean imputations showed the lowest error for 3 out of the 5 studied traits, dataset-averaged errors were lowest for MICE imputations with all additional predictors, when missing data levels were 50% or lower. Species mean imputations always resulted in larger errors in the correlation matrix and appreciably altered the studied bivariate trait relationships. In conclusion, MICE imputations using species identity, climate, forest structure and spatial structure as predictors emerged as the most suitable method of the ones tested here, but it was also evident that imputation performance deteriorates at high levels of missing data (80%).
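
    A rough scikit-learn analogue of the kNN and MICE comparison described above (the study itself presumably used different software): KNNImputer gives a single kNN imputation, and repeated runs of IterativeImputer with posterior sampling mimic multiple imputation, whose between-run spread gives the uncertainty mentioned in the abstract. The small trait table is invented; extra predictors such as species identity or climate would simply be added as further columns.

        import numpy as np
        import pandas as pd
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer, KNNImputer

        traits = pd.DataFrame({
            "SLA":        [12.1, np.nan, 9.8, 14.0, np.nan],
            "wood_dens":  [0.58, 0.61, np.nan, 0.49, 0.55],
            "max_height": [24.0, 31.0, 18.0, np.nan, 27.0],
        })

        knn_filled = KNNImputer(n_neighbors=3).fit_transform(traits)

        # several stochastic runs as a stand-in for multiple imputation (MICE)
        runs = [IterativeImputer(sample_posterior=True, random_state=s).fit_transform(traits)
                for s in range(5)]
        mice_mean = np.mean(runs, axis=0)
        mice_sd = np.std(runs, axis=0)       # between-imputation uncertainty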

  18. Spatial durbin error model for human development index in Province of Central Java.

    NASA Astrophysics Data System (ADS)

    Septiawan, A. R.; Handajani, S. S.; Martini, T. S.

    2018-05-01

    The Human Development Index (HDI) is an indicator used to measure success in building the quality of human life, describing how people access development outcomes in terms of income, health and education. Every year the HDI in Central Java has improved. In 2016, the HDI in Central Java was 69.98%, an increase of 0.49% over the previous year. The objective of this study was to apply the spatial Durbin error model, with queen-contiguity spatial weights, to model HDI in Central Java Province. The spatial Durbin error model is used because it accounts for both spatial dependence in the errors and spatial dependence in the independent variables. The factors used are life expectancy, mean years of schooling, expected years of schooling, and purchasing power parity. Based on the results of the research, we obtain a spatial Durbin error model for HDI in Central Java whose influencing factors are life expectancy, mean years of schooling, expected years of schooling, and purchasing power parity.
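
    Written out, the specification estimated here (with W the queen-contiguity, row-standardized weight matrix and X the four factors listed above) takes the usual spatial Durbin error form; the equation is shown only to make the model structure explicit, not as the authors' exact notation.

        y = X\beta + W X\theta + u, \qquad u = \lambda W u + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^{2} I)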

  19. Bayesian learning for spatial filtering in an EEG-based brain-computer interface.

    PubMed

    Zhang, Haihong; Yang, Huijuan; Guan, Cuntai

    2013-07-01

    Spatial filtering for EEG feature extraction and classification is an important tool in brain-computer interface. However, there is generally no established theory that links spatial filtering directly to Bayes classification error. To address this issue, this paper proposes and studies a Bayesian analysis theory for spatial filtering in relation to Bayes error. Following the maximum entropy principle, we introduce a gamma probability model for describing single-trial EEG power features. We then formulate and analyze the theoretical relationship between Bayes classification error and the so-called Rayleigh quotient, which is a function of spatial filters and basically measures the ratio in power features between two classes. This paper also reports our extensive study that examines the theory and its use in classification, using three publicly available EEG data sets and state-of-the-art spatial filtering techniques and various classifiers. Specifically, we validate the positive relationship between Bayes error and Rayleigh quotient in real EEG power features. Finally, we demonstrate that the Bayes error can be practically reduced by applying a new spatial filter with lower Rayleigh quotient.
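
    The quantity the paper ties to Bayes error, the Rayleigh quotient of a spatial filter with respect to the two class covariance (power) matrices, is minimized by the smallest generalized eigenvector, as in the sketch below; the random covariances are placeholders and the paper's gamma model of power features is not reproduced.

        import numpy as np
        from scipy.linalg import eigh

        def min_rayleigh_filter(C1, C2):
            # generalized eigenvector of (C1, C2) with the smallest eigenvalue
            # minimizes R(w) = (w' C1 w) / (w' C2 w)
            vals, vecs = eigh(C1, C2)   # eigenvalues returned in ascending order
            w = vecs[:, 0]
            return w, (w @ C1 @ w) / (w @ C2 @ w)

        rng = np.random.default_rng(0)
        A1, A2 = rng.standard_normal((2, 8, 8))
        C1 = A1 @ A1.T + 8.0 * np.eye(8)        # hypothetical class-1 covariance
        C2 = A2 @ A2.T + 8.0 * np.eye(8)        # hypothetical class-2 covariance
        w, quotient = min_rayleigh_filter(C1, C2)
        print(quotient)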

  20. Fourier decomposition of spatial localization errors reveals an idiotropic dominance of an internal model of gravity.

    PubMed

    De Sá Teixeira, Nuno Alexandre

    2014-12-01

    Given its conspicuous nature, gravity has been acknowledged by several research lines as a prime factor in structuring the spatial perception of one's environment. One such line of enquiry has focused on errors in spatial localization aimed at the vanishing location of moving objects - it has been systematically reported that humans mislocalize spatial positions forward, in the direction of motion (representational momentum) and downward in the direction of gravity (representational gravity). Moreover, spatial localization errors were found to evolve dynamically with time in a pattern congruent with an anticipated trajectory (representational trajectory). The present study attempts to ascertain the degree to which vestibular information plays a role in these phenomena. Human observers performed a spatial localization task while tilted to varying degrees and referring to the vanishing locations of targets moving along several directions. A Fourier decomposition of the obtained spatial localization errors revealed that although spatial errors were increased "downward" mainly along the body's longitudinal axis (idiotropic dominance), the degree of misalignment between the latter and physical gravity modulated the time course of the localization responses. This pattern is surmised to reflect increased uncertainty about the internal model when faced with conflicting cues regarding the perceived "downward" direction.

  1. Theta oscillations during holeboard training in rats: different learning strategies entail different context-dependent modulations in the hippocampus.

    PubMed

    Woldeit, M L; Korz, V

    2010-02-03

    A functional connection between theta rhythms, information processing, learning and memory formation is well documented by studies focusing on the impact of theta waves on motor activity, global context or phase coding in spatial learning. In the present study we analyzed theta oscillations during a spatial learning task and assessed which specific behavioral contexts were connected to changes in theta power and to the formation of memory. Therefore, we measured hippocampal dentate gyrus theta modulations in male rats that were allowed to establish a long-term spatial reference memory in a holeboard (fixed pattern of baited holes) in comparison to rats that underwent similar training conditions but could not form a reference memory (randomly baited holes). The first group established a pattern specific learning strategy, while the second developed an arbitrary search strategy, visiting increasingly more holes during training. Theta power was equally influenced during the training course in both groups, but was significantly higher when compared to untrained controls. A detailed behavioral analysis, however, revealed behavior- and context-specific differences within the experimental groups. In spatially trained animals theta power correlated with the amounts of reference memory errors in the context of the inspection of unbaited holes and exploration in which, as suggested by time frequency analyses, also slow wave (delta) power was increased. In contrast, in randomly trained animals positive correlations with working memory errors were found in the context of rearing behavior. These findings indicate a contribution of theta/delta to long-lasting memory formation in spatially trained animals, whereas in pseudo trained animals theta seems to be related to attention in order to establish trial specific short-term working memory. Implications for differences in neuronal plasticity found in earlier studies are discussed. Copyright 2010 IBRO. Published by Elsevier Ltd. All rights reserved.

  2. A geostatistical state-space model of animal densities for stream networks.

    PubMed

    Hocking, Daniel J; Thorson, James T; O'Neil, Kyle; Letcher, Benjamin H

    2018-06-21

    Population dynamics are often correlated in space and time due to correlations in environmental drivers as well as synchrony induced by individual dispersal. Many statistical analyses of populations ignore potential autocorrelations and assume that survey methods (distance and time between samples) eliminate these correlations, allowing samples to be treated independently. If these assumptions are incorrect, results and therefore inference may be biased and uncertainty under-estimated. We developed a novel statistical method to account for spatio-temporal correlations within dendritic stream networks, while accounting for imperfect detection in the surveys. Through simulations, we found this model decreased predictive error relative to standard statistical methods when data were spatially correlated based on stream distance and performed similarly when data were not correlated. We found that increasing the number of years surveyed substantially improved the model accuracy when estimating spatial and temporal correlation coefficients, especially from 10 to 15 years. Increasing the number of survey sites within the network improved the performance of the non-spatial model but only marginally improved the density estimates in the spatio-temporal model. We applied this model to Brook Trout data from the West Susquehanna Watershed in Pennsylvania collected over 34 years from 1981 - 2014. We found the model including temporal and spatio-temporal autocorrelation best described young-of-the-year (YOY) and adult density patterns. YOY densities were positively related to forest cover and negatively related to spring temperatures with low temporal autocorrelation and moderately-high spatio-temporal correlation. Adult densities were less strongly affected by climatic conditions and less temporally variable than YOY but with similar spatio-temporal correlation and higher temporal autocorrelation. This article is protected by copyright. All rights reserved.

  3. Statistical Quality Control of Moisture Data in GEOS DAS

    NASA Technical Reports Server (NTRS)

    Dee, D. P.; Rukhovets, L.; Todling, R.

    1999-01-01

    A new statistical quality control algorithm was recently implemented in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The final step in the algorithm consists of an adaptive buddy check that either accepts or rejects outlier observations based on a local statistical analysis of nearby data. A basic assumption in any such test is that the observed field is spatially coherent, in the sense that nearby data can be expected to confirm each other. However, the buddy check resulted in excessive rejection of moisture data, especially during the Northern Hemisphere summer. The analysis moisture variable in GEOS DAS is water vapor mixing ratio. Observational evidence shows that the distribution of mixing ratio errors is far from normal. Furthermore, spatial correlations among mixing ratio errors are highly anisotropic and difficult to identify. Both factors contribute to the poor performance of the statistical quality control algorithm. To alleviate the problem, we applied the buddy check to relative humidity data instead. This variable explicitly depends on temperature and therefore exhibits a much greater spatial coherence. As a result, reject rates of moisture data are much more reasonable and homogeneous in time and space.
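
    A generic buddy check can be sketched as follows. This is an illustrative toy, not the GEOS DAS algorithm: each observation is compared with the mean of its neighbours, and the neighbourhood radius, minimum buddy count, and rejection threshold are assumed values.

```python
import numpy as np

def buddy_check(lon, lat, values, radius=2.0, n_sigma=3.0):
    """Flag observations that disagree with nearby data.

    Illustrative buddy check: each value is compared with the mean of its
    neighbours within `radius` degrees and rejected when the residual
    exceeds `n_sigma` times the local standard deviation.
    """
    values = np.asarray(values, dtype=float)
    keep = np.ones(values.size, dtype=bool)
    for i in range(values.size):
        d = np.hypot(lon - lon[i], lat - lat[i])
        nbr = (d < radius) & (d > 0)
        if nbr.sum() < 3:          # too few buddies to decide; accept
            continue
        resid = values[i] - values[nbr].mean()
        spread = values[nbr].std(ddof=1)
        if spread > 0 and abs(resid) > n_sigma * spread:
            keep[i] = False
    return keep

# toy example: a relative-humidity-like field with one gross outlier
rng = np.random.default_rng(0)
lon = rng.uniform(0, 10, 200); lat = rng.uniform(0, 10, 200)
rh = 60 + 5 * np.sin(lon) + rng.normal(0, 2, 200)
rh[17] = 5.0                        # gross error
print("rejected indices:", np.where(~buddy_check(lon, lat, rh))[0])
```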

  4. [Prediction of soil nutrients spatial distribution based on neural network model combined with geostatistics].

    PubMed

    Li, Qi-Quan; Wang, Chang-Quan; Zhang, Wen-Jiang; Yu, Yong; Li, Bing; Yang, Juan; Bai, Gen-Chuan; Cai, Yan

    2013-02-01

    In this study, a radial basis function neural network model combined with ordinary kriging (RBFNN_OK) was adopted to predict the spatial distribution of soil nutrients (organic matter and total N) in a typical hilly region of Sichuan Basin, Southwest China, and the performance of this method was compared with that of ordinary kriging (OK) and regression kriging (RK). All three methods produced similar soil nutrient maps. However, as compared with those obtained by the multiple linear regression model, the correlation coefficients between the measured values and the predicted values of soil organic matter and total N obtained by the neural network model increased by 12.3% and 16.5%, respectively, suggesting that the neural network model could more accurately capture the complicated relationships between soil nutrients and quantitative environmental factors. The error analyses of the prediction values of 469 validation points indicated that the mean absolute error (MAE), mean relative error (MRE), and root mean squared error (RMSE) of RBFNN_OK were 6.9%, 7.4%, and 5.1% (for soil organic matter), and 4.9%, 6.1%, and 4.6% (for soil total N) smaller than those of OK (P<0.01), and 2.4%, 2.6%, and 1.8% (for soil organic matter), and 2.1%, 2.8%, and 2.2% (for soil total N) smaller than those of RK, respectively (P<0.05).

  5. Multi-photon self-error-correction hyperentanglement distribution over arbitrary collective-noise channels

    NASA Astrophysics Data System (ADS)

    Gao, Cheng-Yan; Wang, Guan-Yu; Zhang, Hao; Deng, Fu-Guo

    2017-01-01

    We present a self-error-correction spatial-polarization hyperentanglement distribution scheme for N-photon systems in a hyperentangled Greenberger-Horne-Zeilinger state over arbitrary collective-noise channels. In our scheme, the errors of spatial entanglement can be first averted by encoding the spatial-polarization hyperentanglement into the time-bin entanglement with identical polarization and defined spatial modes before it is transmitted over the fiber channels. After transmission over the noisy channels, the polarization errors introduced by the depolarizing noise can be corrected resorting to the time-bin entanglement. Finally, the parties in quantum communication can in principle share maximally hyperentangled states with a success probability of 100%.

  6. Spatial Representativeness Error in the Ground-Level Observation Networks for Black Carbon Radiation Absorption

    NASA Astrophysics Data System (ADS)

    Wang, Rong; Andrews, Elisabeth; Balkanski, Yves; Boucher, Olivier; Myhre, Gunnar; Samset, Bjørn Hallvard; Schulz, Michael; Schuster, Gregory L.; Valari, Myrto; Tao, Shu

    2018-02-01

    There is high uncertainty in the direct radiative forcing of black carbon (BC), an aerosol that strongly absorbs solar radiation. The observation-constrained estimate, which is several times larger than the bottom-up estimate, is influenced by the spatial representativeness error due to the mesoscale inhomogeneity of the aerosol fields and the relatively low resolution of global chemistry-transport models. Here we evaluated the spatial representativeness error for two widely used observational networks (AErosol RObotic NETwork and Global Atmosphere Watch) by downscaling the geospatial grid in a global model of BC aerosol absorption optical depth to 0.1° × 0.1°. Comparing the models at a spatial resolution of 2° × 2° with BC aerosol absorption at AErosol RObotic NETwork sites (which are commonly located near emission hot spots) tends to cause a global spatial representativeness error of 30%, as a positive bias for the current top-down estimate of global BC direct radiative forcing. By contrast, the global spatial representativeness error will be 7% for the Global Atmosphere Watch network, because the sites are located in such a way that there are almost an equal number of sites with positive or negative representativeness error.

  7. A multistate dynamic site occupancy model for spatially aggregated sessile communities

    USGS Publications Warehouse

    Fukaya, Keiichi; Royle, J. Andrew; Okuda, Takehiro; Nakaoka, Masahiro; Noda, Takashi

    2017-01-01

    Estimation of transition probabilities of sessile communities seems easy in principle but may still be difficult in practice because resampling error (i.e. a failure to resample exactly the same location at fixed points) may cause significant estimation bias. Previous studies have developed novel analytical methods to correct for this estimation bias. However, they did not consider the local structure of community composition induced by the aggregated distribution of organisms that is typically observed in sessile assemblages and is very likely to affect observations. We developed a multistate dynamic site occupancy model to estimate transition probabilities that accounts for resampling errors associated with local community structure. The model applies a nonparametric multivariate kernel smoothing methodology to the latent occupancy component to estimate the local state composition near each observation point, which is assumed to determine the probability distribution of data conditional on the occurrence of resampling error. By using computer simulations, we confirmed that an observation process that depends on local community structure may bias inferences about transition probabilities. By applying the proposed model to a real data set of intertidal sessile communities, we also showed that estimates of transition probabilities and of the properties of community dynamics may differ considerably when spatial dependence is taken into account. Results suggest the importance of accounting for resampling error and local community structure for developing management plans that are based on Markovian models. Our approach provides a solution to this problem that is applicable to broad sessile communities. It can even accommodate an anisotropic spatial correlation of species composition, and may also serve as a basis for inferring complex nonlinear ecological dynamics.

  8. Prediction skill of rainstorm events over India in the TIGGE weather prediction models

    NASA Astrophysics Data System (ADS)

    Karuna Sagar, S.; Rajeevan, M.; Vijaya Bhaskara Rao, S.; Mitra, A. K.

    2017-12-01

    Extreme rainfall events pose a serious threat, as they can lead to severe floods in many countries worldwide. Therefore, advance prediction of their occurrence and spatial distribution is essential. In this paper, an analysis has been made to assess the skill of numerical weather prediction models in predicting rainstorms over India. Using a gridded daily rainfall data set and objective criteria, 15 rainstorms were identified during the monsoon season (June to September). The analysis was made using three TIGGE (THe Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble) models. The models considered are the European Centre for Medium-Range Weather Forecasts (ECMWF), National Centre for Environmental Prediction (NCEP) and the UK Met Office (UKMO). Verification of the TIGGE models for 43 observed rainstorm days from 15 rainstorm events has been made for the period 2007-2015. The comparison reveals that rainstorm events are predictable up to 5 days in advance, although with biases in spatial distribution and intensity. Statistical parameters such as the mean error (ME, or bias), root mean square error (RMSE) and correlation coefficient (CC) have been computed over the rainstorm region using the multi-model ensemble (MME) mean. The study reveals that the spread is large in ECMWF and UKMO, followed by the NCEP model. Though the ensemble spread is quite small in NCEP, the ensemble member averages are not well predicted. The rank histograms suggest that the forecasts suffer from under-prediction. The modified Contiguous Rain Area (CRA) technique was used to verify the spatial as well as the quantitative skill of the TIGGE models. Overall, the contribution of the displacement and pattern errors to the total RMSE is found to be the largest in magnitude. The volume error increases from the 24 hr forecast to the 48 hr forecast in all three models.
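
    The bulk verification statistics named above are straightforward to compute; the sketch below uses synthetic rainfall fields as stand-ins for the observed and multi-model ensemble mean grids, so the numbers carry no meaning beyond illustration.

```python
import numpy as np

def verification_scores(forecast, observed):
    """Mean error (bias), RMSE and Pearson correlation over a rainstorm region."""
    f = np.asarray(forecast, float).ravel()
    o = np.asarray(observed, float).ravel()
    me = np.mean(f - o)
    rmse = np.sqrt(np.mean((f - o) ** 2))
    cc = np.corrcoef(f, o)[0, 1]
    return me, rmse, cc

rng = np.random.default_rng(1)
obs = rng.gamma(shape=2.0, scale=20.0, size=(40, 40))     # synthetic rainfall (mm/day)
fcst = 0.8 * obs + rng.normal(0, 10, obs.shape)           # biased, noisy "forecast"
print("ME=%.2f  RMSE=%.2f  CC=%.2f" % verification_scores(fcst, obs))
```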

  9. Channel correlation and BER performance analysis of coherent optical communication systems with receive diversity over moderate-to-strong non-Kolmogorov turbulence.

    PubMed

    Fu, Yulong; Ma, Jing; Tan, Liying; Yu, Siyuan; Lu, Gaoyuan

    2018-04-10

    In this paper, new expressions of the channel-correlation coefficient and its components (the large- and small-scale channel-correlation coefficients) for a plane wave are derived for a horizontal link in moderate-to-strong non-Kolmogorov turbulence using a generalized effective atmospheric spectrum which includes finite-turbulence inner and outer scales and high-wave-number "bump". The closed-form expression of the average bit error rate (BER) of the coherent free-space optical communication system is derived using the derived channel-correlation coefficients and an α-μ distribution to approximate the sum of the square root of arbitrarily correlated Gamma-Gamma random variables. Analytical results are provided to investigate the channel correlation and evaluate the average BER performance. The validity of the proposed approximation is illustrated by Monte Carlo simulations. This work will help with further investigation of the fading correlation in spatial diversity systems.

  10. Spatial and temporal variability of the overall error of National Atmospheric Deposition Program measurements determined by the USGS collocated-sampler program, water years 1989-2001

    USGS Publications Warehouse

    Wetherbee, G.A.; Latysh, N.E.; Gordon, J.D.

    2005-01-01

    Data from the U.S. Geological Survey (USGS) collocated-sampler program for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) are used to estimate the overall error of NADP/NTN measurements. Absolute errors are estimated by comparison of paired measurements from collocated instruments. Spatial and temporal differences in absolute error were identified and are consistent with longitudinal distributions of NADP/NTN measurements and spatial differences in precipitation characteristics. The magnitude of error for calcium, magnesium, ammonium, nitrate, and sulfate concentrations, specific conductance, and sample volume is of minor environmental significance to data users. Data collected after a 1994 sample-handling protocol change are prone to less absolute error than data collected prior to 1994. Absolute errors are smaller during non-winter months than during winter months for selected constituents at sites where frozen precipitation is common. Minimum resolvable differences are estimated for different regions of the USA to aid spatial and temporal watershed analyses.

  11. Verbal and spatial working memory load have similarly minimal effects on speech production.

    PubMed

    Lee, Ogyoung; Redford, Melissa A

    2015-08-10

    The goal of the present study was to test the effects of working memory on speech production. Twenty American-English speaking adults produced syntactically complex sentences in tasks that taxed either verbal or spatial working memory. Sentences spoken under load were produced with more errors, fewer prosodic breaks, and at faster rates than sentences produced in the control conditions, but other acoustic correlates of rhythm and intonation did not change. Verbal and spatial working memory had very similar effects on production, suggesting that the different span tasks used to tax working memory merely shifted speakers' attention away from the act of speaking. This finding runs counter to the hypothesis of incremental phonological/phonetic encoding, which predicts the manipulation of information in verbal working memory during speech production.

  12. A Study of the Groundwater Level Spatial Variability in the Messara Valley of Crete

    NASA Astrophysics Data System (ADS)

    Varouchakis, E. A.; Hristopulos, D. T.; Karatzas, G. P.

    2009-04-01

    The island of Crete (Greece) has a dry sub-humid climate and marginal groundwater resources, which are extensively used for agricultural activities and human consumption. The Messara valley is located in the south of the Heraklion prefecture, covers an area of 398 km2, and is the largest and most productive valley of the island. Over-exploitation during the past thirty (30) years has led to a dramatic decrease of thirty five (35) meters in the groundwater level. Possible future climatic changes in the Mediterranean region, potential desertification, population increase, and extensive agricultural activity generate concern over the sustainability of the water resources of the area. The accurate estimation of the water table depth is important for an integrated groundwater resource management plan. This study focuses on the Mires basin of the Messara valley for reasons of hydro-geological data availability and geological homogeneity. The research goal is to model and map the spatial variability of the basin's groundwater level accurately. The data used in this study consist of seventy (70) piezometric head measurements for the hydrological year 2001-2002. These are unevenly distributed and mostly concentrated along a temporary river that crosses the basin. The range of piezometric heads varies from an extreme low value of 9.4 meters above sea level (masl) to 62 masl for the wet period of the year (October to April). An initial goal of the study is to develop spatial models for the accurate generation of static maps of groundwater level. At a second stage, these models should be extended to dynamic (space-time) situations for the prediction of future water levels. Preliminary data analysis shows that the piezometric head variations are not normally distributed. Several methods, including the Box-Cox transformation and a modified version of it, trans-Gaussian kriging, and Gaussian anamorphosis, have been used to obtain a spatial model for the piezometric head. A trend model was constructed that accounted for the distance of the wells from the river bed. The spatial dependence of the fluctuations was studied by fitting isotropic and anisotropic empirical variograms with classical models, the Matérn model and the Spartan variogram family (Hristopulos, 2003; Hristopulos and Elogne, 2007). The most accurate results, with a mean absolute prediction error of 4.57 masl, were obtained using the modified Box-Cox transform of the original data. The exponential and the isotropic Spartan variograms provided the best fits to the experimental variogram. Using Ordinary Kriging with either variogram function gave a mean absolute estimation error of 4.57 masl based on leave-one-out cross validation. The bias error of the predictions was -0.38 masl and the correlation coefficient of the predictions with respect to the original data was 0.8. The estimates located on the borders of the study domain presented a higher prediction error, varying from 8 to 14 masl, due to the limited number of neighboring data. The maximum estimation error, observed at the extreme low value, was 23 masl. The method of locally weighted regression (LWR) (NIST/SEMATECH, 2009) was also investigated as an alternative approach for spatial modeling. The trend calculated from a second-order LWR method showed a remarkably good fit to the original data, with a mean absolute estimation error of 4.4 masl.
The bias prediction error was -0.16 masl and the correlation coefficient between predicted and original data was 0.88. Higher estimation errors were found at the same locations and varied within the same range. The error at the extreme low value improved to 21 masl. Plans for future research include the incorporation of spatial anisotropy in the kriging algorithm, the investigation of kernel functions other than the tricube in LWR, as well as the use of locally adapted bandwidth values. Furthermore, pumping rates for fifty eight (58) of the seventy (70) wells are available and display a correlation coefficient of -0.6 with the respective groundwater levels. A Digital Elevation Model (DEM) of the area will provide additional information about the unsampled locations of the basin. The pumping rates and the DEM will be used as secondary information in a co-kriging approach, leading to more accurate estimation of the basin's water table. References: NIST/SEMATECH e-Handbook of Statistical Methods, http://www.itl.nist.gov/div898/handbook/, accessed 12/01/09. D.T. Hristopulos, "Spartan Gibbs random field models for geostatistical applications," SIAM J. Scient. Comput., vol. 24, no. 6, pp. 2125-2162, 2003. D.T. Hristopulos and S. Elogne, "Analytic properties and covariance functions for a new class of generalized Gibbs random fields," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4667-4679, 2007.
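
    A compact sketch of the workflow described above (Box-Cox transform, ordinary kriging under an exponential covariance, and leave-one-out cross-validation), using synthetic well data rather than the Mires basin measurements; the covariance parameters, trend, and noise level are assumptions.

```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

def ordinary_kriging(xy, z, x0, sigma2=1.0, a=2000.0, nugget=1e-6):
    """Ordinary kriging prediction at x0 with an exponential covariance
    C(h) = sigma2 * exp(-h / a); the unbiasedness constraint enters via a
    Lagrange multiplier (last row/column of the kriging system)."""
    n = len(z)
    h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    C = sigma2 * np.exp(-h / a) + nugget * np.eye(n)
    c0 = sigma2 * np.exp(-np.linalg.norm(xy - x0, axis=1) / a)
    A = np.zeros((n + 1, n + 1)); A[:n, :n] = C; A[:n, n] = 1.0; A[n, :n] = 1.0
    b = np.append(c0, 1.0)
    w = np.linalg.solve(A, b)[:n]
    return w @ z

# synthetic "piezometric head" data (not the Mires basin measurements)
rng = np.random.default_rng(3)
xy = rng.uniform(0, 10000, size=(70, 2))                  # well coordinates (m)
head = 30 + 0.002 * xy[:, 0] + rng.normal(0, 5, 70)       # head (masl), positive
head_bc, lam = boxcox(head)                               # Box-Cox transform

# leave-one-out cross-validation on the transformed variable
errs = []
for i in range(len(head)):
    mask = np.arange(len(head)) != i
    pred_bc = ordinary_kriging(xy[mask], head_bc[mask], xy[i])
    errs.append(inv_boxcox(pred_bc, lam) - head[i])
errs = np.asarray(errs)
print("MAE=%.2f masl  bias=%.2f masl  r=%.2f"
      % (np.abs(errs).mean(), errs.mean(),
         np.corrcoef(head + errs, head)[0, 1]))
```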

  13. Design fluency and neuroanatomical correlates in 54 neurosurgical patients with lesions to the right hemisphere.

    PubMed

    Marin, Dario; Madotto, Eleonora; Fabbro, Franco; Skrap, Miran; Tomasino, Barbara

    2017-10-01

    We investigated the neuroanatomical correlates of visuo-spatial design fluency, a measure of the ability to generate/plan a series of new abstract combinations in a flexible way, in 54 right-brain-damaged neurosurgical patients. 22.2% of the patients were impaired. They failed the task because they did not use strategic behavior; in particular, they used the rotational strategy to a significantly lower extent and produced a significantly higher rate of perseverative errors. Overall performance did not correlate with neuropsychological tests, suggesting that proficient performance was independent of other cognitive domains. Performance significantly correlated with use of rotational strategy. Tasks related to executive functions such as psychomotor speed and capacity to shift were positively correlated with the number of strategies used to solve the task. Lesion analysis showed that the maximum density of the patients' lesions-obtained by subtracting the overlap of lesions of spared patients from the overlap of lesions of impaired patients-overlaps with the precentral gyrus, rolandic operculum/insula, superior/middle temporal gyrus/hippocampus and, at the subcortical level, with part of the superior longitudinal fasciculus, external capsule, retrolenticular part of the internal capsule and sagittal stratum (inferior longitudinal fasciculus and inferior fronto-occipital fasciculus). These areas are part of the fronto-parietal-temporal network known to be involved in top-down control of visuo-spatial attention, suggesting that the mechanisms and the strategies needed for proficient performance are essentially visuo-spatial in nature.

  14. Field evaluation of the error arising from inadequate time averaging in the standard use of depth-integrating suspended-sediment samplers

    USGS Publications Warehouse

    Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.

    2011-01-01

    Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. Averaging over time scales >1 minute is the likely minimum duration required to result in substantial decreases in this error. During standard two-way depth integration, a depth-integrating suspended-sediment sampler collects a sample of the water-sediment mixture during two transits at each vertical in a cross section: one transit while moving from the water surface to the bed, and another transit while moving from the bed to the water surface. As the number of transits is doubled at an individual vertical, this error is reduced by ~30 percent in each size class of suspended sediment. For a given size class of suspended sediment, the error arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration depends only on the number of verticals collected, whereas the error arising from inadequate time averaging depends on both the number of verticals collected and the number of transits collected at each vertical. Summing these two errors in quadrature yields a total uncertainty in an equal-discharge-increment (EDI) or equal-width-increment (EWI) measurement of the time-averaged velocity-weighted suspended-sediment concentration in a river cross section (exclusive of any laboratory-processing errors). By virtue of how the number of verticals and transits influences the two individual errors within this total uncertainty, the error arising from inadequate time averaging slightly dominates that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. 
Adding verticals to an EDI or EWI measurement is slightly more effective in reducing the total uncertainty than adding transits only at each vertical, because a new vertical contributes both temporal and spatial information. However, because collection of depth-integrated samples at more transits at each vertical is generally easier and faster than at more verticals, addition of a combination of verticals and transits is likely a more practical approach to reducing the total uncertainty in most field situations.
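
    The quadrature combination of the two error sources can be sketched as follows. The starting error magnitudes and the 1/sqrt(n) scalings with verticals and transits are illustrative assumptions consistent with the ~30 percent reduction per doubling of transits noted above, not values fitted to the Colorado River data.

```python
import numpy as np

def total_uncertainty(sigma_time_1, sigma_space_1, n_verticals, n_transits):
    """Combine the time-averaging and cross-stream errors in quadrature.

    Illustrative scaling assumptions (not the paper's fitted values):
      - the cross-stream error shrinks with the number of verticals only,
      - the time-averaging error shrinks with the total number of transits
        (verticals x transits per vertical),
    both as 1/sqrt(n). Doubling the transits at each vertical therefore
    reduces the time-averaging error by ~30 percent (1 - 1/sqrt(2)).
    """
    sigma_space = sigma_space_1 / np.sqrt(n_verticals)
    sigma_time = sigma_time_1 / np.sqrt(n_verticals * n_transits)
    return np.hypot(sigma_time, sigma_space)

# compare adding verticals versus adding transits, starting from 5 verticals x 2 transits
base = total_uncertainty(20.0, 15.0, 5, 2)            # percent errors for one vertical/transit
more_verticals = total_uncertainty(20.0, 15.0, 10, 2)
more_transits = total_uncertainty(20.0, 15.0, 5, 4)
print(f"base {base:.1f}%  +verticals {more_verticals:.1f}%  +transits {more_transits:.1f}%")
```

    Under these assumptions, adding verticals lowers the combined uncertainty more than adding the same number of extra transits, consistent with the discussion above.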

  15. Retrieval of interseismic displacement from multi-temporal InSAR measurements: challenges and solutions

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Ding, X.; Lu, Z.; Wen, Y.; Hu, J.

    2016-12-01

    High-resolution measurements of interseismic displacement are critical for understanding the earthquake cycle and for assessing earthquake hazard. Compared with sparsely located GNSS sites, it is well known that by jointly analyzing a set of data over the same area acquired on different dates, multi-temporal InSAR (MTInSAR) is capable of remotely imaging interseismic deformation at an unprecedented level of spatial resolution. However, conventional MTInSAR cannot reliably retrieve interseismic deformation in tectonically active zones, where complicated atmospheric delays, orbital errors, and localized seasonal ground fluctuations commonly exist. The aim of this study is to develop reliable solutions to correct or suppress these unwanted signals and thereby improve the accuracy of the mapped interseismic displacement. Our technical innovations lie in the following aspects. Based on their different spatial-temporal characteristics, a joint model that takes both orbit errors and interseismic displacement as parameters is designed to isolate long-wavelength motion from orbit errors, even when these two types of signals exhibit similar spatial patterns. To suppress the localized impacts (e.g., a portion of atmospheric artifacts and small-scale anthropogenic deformation), spatial correlation is employed as a constraint during the parameter estimation. The proposed solutions are evaluated by synthetic tests and applied to map the interseismic displacement over Eastern Turkey, which spans the Arabia-Eurasia plate boundary zone, from a large set of radar images acquired by Envisat/ASAR and Sentinel-1. The derived interseismic displacement, validated by GPS data, is further used to invert the slip rate and locking depth for the North and East Anatolian Faults. A cross-comparison with published results is also conducted.

  16. Retrieving accurate temporal and spatial information about Taylor slug flows from non-invasive NIR photometry measurements

    NASA Astrophysics Data System (ADS)

    Helmers, Thorben; Thöming, Jorg; Mießner, Ulrich

    2017-11-01

    In this article, we introduce a novel approach to retrieve spatially and temporally resolved Taylor slug flow information from a single non-invasive photometric flow sensor. The presented approach uses disperse-phase surface properties to retrieve the instantaneous velocity information from a single sensor's time-scaled signal. For this purpose, a photometric sensor system is simulated using a ray-tracing algorithm to calculate spatially resolved near-infrared transmission signals. At the signal position corresponding to the rear droplet cap, a correlation factor of the droplet's geometric properties is retrieved and used to extract the instantaneous droplet velocity from the real sensor's temporal transmission signal. Furthermore, a correlation for the rear cap geometry based on the a priori known total superficial flow velocity is developed, because the cap curvature is itself velocity sensitive. Our model for velocity derivation is validated, and measurements of a first prototype showcase the capability of the device. Long-term measurements reveal systematic fluctuations in droplet lengths, velocities, and frequencies that, without observation on a longer timescale, could otherwise have been mistaken for measurement errors rather than systematic phenomena.

  17. Estimating Prediction Uncertainty from Geographical Information System Raster Processing: A User's Manual for the Raster Error Propagation Tool (REPTool)

    USGS Publications Warehouse

    Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.

    2009-01-01

    The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
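
    A generic sketch of the Latin Hypercube approach to raster error propagation that REPTool describes (this is not REPTool or its API): draw an LHS design over the uncertain inputs, map it onto error distributions, push each draw through a toy raster model, and summarize the per-cell spread. The rasters, error magnitudes, and model are assumptions.

```python
import numpy as np
from scipy.stats import qmc, norm

rng = np.random.default_rng(0)
recharge = rng.uniform(50, 150, size=(40, 40))     # toy input raster 1 (mm/yr)
slope = rng.uniform(0, 10, size=(40, 40))          # toy input raster 2 (degrees)

def toy_model(recharge, slope, coef=0.8):
    """A stand-in geospatial model: output raster from two inputs and a coefficient."""
    return coef * recharge * np.exp(-0.05 * slope)

# Latin Hypercube design over the uncertain quantities: two spatially invariant
# additive raster errors (Gaussian) and one model coefficient
n_draws = 200
lhs = qmc.LatinHypercube(d=3, seed=1).random(n_draws)        # uniform [0,1) design
recharge_err = norm.ppf(lhs[:, 0], loc=0.0, scale=10.0)      # mm/yr
slope_err = norm.ppf(lhs[:, 1], loc=0.0, scale=1.0)          # degrees
coef = norm.ppf(lhs[:, 2], loc=0.8, scale=0.05)

stack = np.stack([toy_model(recharge + recharge_err[k],
                            slope + slope_err[k],
                            coef[k]) for k in range(n_draws)])

cell_mean = stack.mean(axis=0)                               # per-cell expected output
cell_sd = stack.std(axis=0)                                  # per-cell prediction uncertainty
print("mean output %.1f, mean per-cell SD %.1f" % (cell_mean.mean(), cell_sd.mean()))
```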

  18. Entropy of space-time outcome in a movement speed-accuracy task.

    PubMed

    Hsieh, Tsung-Yu; Pacheco, Matheus Maia; Newell, Karl M

    2015-12-01

    The experiment reported here was set up to investigate the space-time entropy of movement outcome as a function of a range of spatial (10, 20 and 30 cm) and temporal (250-2500 ms) criteria in a discrete aiming task. Considered separately, the variability and information entropy of the spatial and temporal movement errors respectively increased and decreased as movement velocity increased. However, the joint space-time entropy was lowest when the relative contribution of spatial and temporal task criteria was comparable (i.e., mid-range of space-time constraints), and it increased with a greater trade-off between spatial and temporal task demands, revealing a U-shaped function across space-time task criteria. The traditional speed-accuracy functions of spatial error and temporal error considered independently mapped to this joint space-time U-shaped entropy function. The trade-off in movement tasks with joint space-time criteria is between spatial error and timing error, rather than movement speed and accuracy. Copyright © 2015 Elsevier B.V. All rights reserved.
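
    A hypothetical sketch of how the discretized entropies of the spatial error, the temporal error, and their joint space-time distribution can be computed from trial outcomes; the synthetic error distributions and bin counts are assumptions, not the study's data.

```python
import numpy as np

def shannon_entropy(counts):
    """Discrete Shannon entropy (bits) from histogram counts."""
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(2)
n_trials = 500
spatial_err = rng.normal(0, 8, n_trials)       # mm from the target (synthetic)
temporal_err = rng.normal(0, 40, n_trials)     # ms from the criterion time (synthetic)

s_counts, _ = np.histogram(spatial_err, bins=20)
t_counts, _ = np.histogram(temporal_err, bins=20)
joint_counts, _, _ = np.histogram2d(spatial_err, temporal_err, bins=20)

print("H(space)       = %.2f bits" % shannon_entropy(s_counts))
print("H(time)        = %.2f bits" % shannon_entropy(t_counts))
print("H(space, time) = %.2f bits" % shannon_entropy(joint_counts.ravel()))
```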

  19. Computer Simulations to Study Diffraction Effects of Stacking Faults in Beta-SiC: II. Experimental Verification

    NASA Technical Reports Server (NTRS)

    Pujar, Vijay V.; Cawley, James D.; Levine, S. (Technical Monitor)

    2000-01-01

    Earlier results from computer simulation studies suggest a correlation between the spatial distribution of stacking errors in the Beta-SiC structure and features observed in X-ray diffraction patterns of the material. Reported here are experimental results obtained from two types of nominally Beta-SiC specimens, which yield distinct XRD data. These samples were analyzed using high-resolution transmission electron microscopy (HRTEM) and the stacking error distribution was directly determined. The HRTEM results compare well to those deduced by matching the XRD data with simulated spectra, confirming the hypothesis that the XRD data are indicative not only of the presence and density of stacking errors, but also that they can yield information regarding their distribution. In addition, the stacking error population in both specimens is related to their synthesis conditions and appears to follow the relation developed by others to explain the formation of the corresponding polytypes.

  20. Evaluate error correction ability of magnetorheological finishing by smoothing spectral function

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Fan, Bin; Wan, Yongjian; Shi, Chunyan; Zhuo, Bin

    2014-08-01

    Power Spectral Density (PSD) is entrenched in optics design and manufacturing as a characterization of mid-to-high spatial frequency (MHSF) errors. The Smoothing Spectral Function (SSF) is a newly proposed parameter, based on the PSD, for evaluating the error-correction ability of computer controlled optical surfacing (CCOS) technologies. As a typical deterministic, sub-aperture finishing technology based on CCOS, magnetorheological finishing (MRF) inevitably introduces MHSF errors. SSF is employed to study the ability of the MRF process to correct errors at different spatial frequencies. The surface figures and PSD curves of work-pieces machined by MRF are presented. By calculating the SSF curve, the correction ability of MRF for errors at different spatial frequencies is expressed as a normalized numerical value.
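
    The general idea behind a PSD-ratio evaluation can be sketched as follows: estimate the PSD of a surface error profile before and after a smoothing pass and take the frequency-by-frequency ratio. The profile, the moving-average "tool", and the frequency band below are synthetic assumptions, not MRF data or the authors' exact SSF definition.

```python
import numpy as np

def psd_1d(profile, dx):
    """Power spectral density estimate (simple periodogram) of a 1-D profile."""
    n = len(profile)
    spec = np.fft.rfft(profile - profile.mean())
    freq = np.fft.rfftfreq(n, d=dx)                  # spatial frequency (1/mm)
    psd = (np.abs(spec) ** 2) * dx / n
    return freq[1:], psd[1:]                         # drop the DC term

# synthetic surface error profile (mm) with mid-spatial-frequency ripple
dx = 0.1                                             # sample spacing (mm)
x = np.arange(0, 100, dx)
rng = np.random.default_rng(4)
before = 5e-4 * np.sin(2 * np.pi * x / 8) + 1e-4 * rng.standard_normal(x.size)

# stand-in for one smoothing pass: a moving average over the tool footprint
footprint = 25                                       # samples (~2.5 mm)
after = np.convolve(before, np.ones(footprint) / footprint, mode="same")

f, psd_before = psd_1d(before, dx)
_, psd_after = psd_1d(after, dx)
ratio = psd_after / psd_before                       # <1 where errors are smoothed out
band = (f > 0.05) & (f < 0.5)                        # assumed mid-spatial-frequency band
print("mean PSD ratio in the MSF band: %.2f" % ratio[band].mean())
```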

  1. Correction of mid-spatial-frequency errors by smoothing in spin motion for CCOS

    NASA Astrophysics Data System (ADS)

    Zhang, Yizhong; Wei, Chaoyang; Shao, Jianda; Xu, Xueke; Liu, Shijie; Hu, Chen; Zhang, Haichao; Gu, Haojin

    2015-08-01

    Smoothing is a convenient and efficient way to correct mid-spatial-frequency errors. Quantifying the smoothing effect allows improvements in efficiency when finishing precision optics. A series of experiments in spin motion was performed to study the smoothing effect on the correction of mid-spatial-frequency errors. Some experiments used the same pitch tool at different spinning speeds, and others used different tools at the same spinning speed. Shu's model was introduced and improved to describe and compare the smoothing efficiency at different spinning speeds and with different tools. From the experimental results, the mid-spatial-frequency errors on the initial surface were nearly smoothed out after the process in spin motion, and the number of smoothing passes can be estimated by the model before the process. This method was also applied to smooth an aspherical component that had an obvious mid-spatial-frequency error after Magnetorheological Finishing processing. As a result, a high-precision aspheric optical component was obtained with PV=0.1λ and RMS=0.01λ.

  2. Influence of the quality of intraoperative fluoroscopic images on the spatial positioning accuracy of a CAOS system.

    PubMed

    Wang, Junqiang; Wang, Yu; Zhu, Gang; Chen, Xiangqian; Zhao, Xiangrui; Qiao, Huiting; Fan, Yubo

    2018-06-01

    Spatial positioning accuracy is a key issue in a computer-assisted orthopaedic surgery (CAOS) system. Since intraoperative fluoroscopic images are among the most important input data to the CAOS system, the quality of these images should have a significant influence on the accuracy of the CAOS system. However, how and to what extent the quality of intraoperative images influences the accuracy of a CAOS system has yet to be studied. Two typical spatial positioning methods - a C-arm calibration-based method and a bi-planar positioning method - are used to study the influence of different image quality parameters, such as resolution, distortion, contrast and signal-to-noise ratio, on positioning accuracy. The propagation of image errors through the different spatial positioning methods is analyzed by the Monte Carlo method. Correlation analysis showed that resolution and distortion had a significant influence on spatial positioning accuracy. In addition, the C-arm calibration-based method was more sensitive to image distortion, while the bi-planar positioning method was more susceptible to image resolution. The image contrast and signal-to-noise ratio had no significant influence on the spatial positioning accuracy. The result of the Monte Carlo analysis proved that, in general, the bi-planar positioning method was more sensitive to image quality than the C-arm calibration-based method. The quality of intraoperative fluoroscopic images is a key issue in the spatial positioning accuracy of a CAOS system. Although the two typical positioning methods have very similar mathematical principles, they showed different sensitivities to different image quality parameters. The results of this research may help to create a realistic standard for intraoperative fluoroscopic images for CAOS systems. Copyright © 2018 John Wiley & Sons, Ltd.
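
    A toy Monte Carlo in the spirit of the analysis above: perturb the projections of a point in two ideal orthogonal fluoroscopic views with pixel-level noise and record the spread of the reconstructed 3-D position as the pixel size coarsens. The geometry, noise model, and numbers are assumptions, not the paper's C-arm or bi-planar models.

```python
import numpy as np

def biplanar_error(pixel_size_mm, sigma_px=0.5, n_trials=5000, seed=0):
    """Monte Carlo spread of the reconstructed 3-D position for two ideal
    orthogonal (orthographic) views, given pixel-level detection noise.

    View A measures (x, z), view B measures (y, z); z is averaged over
    both views. Geometry and noise model are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    truth = np.array([10.0, -5.0, 30.0])                      # mm
    noise = rng.normal(0, sigma_px * pixel_size_mm, size=(n_trials, 4))
    xa, za = truth[0] + noise[:, 0], truth[2] + noise[:, 1]   # view A
    yb, zb = truth[1] + noise[:, 2], truth[2] + noise[:, 3]   # view B
    est = np.column_stack([xa, yb, 0.5 * (za + zb)])
    return np.sqrt(((est - truth) ** 2).sum(axis=1)).mean()   # mean 3-D error (mm)

for px in (0.2, 0.4, 0.8):                                    # coarser pixels = lower resolution
    print(f"pixel size {px:.1f} mm -> mean 3-D error {biplanar_error(px):.3f} mm")
```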

  3. Iowa radon leukaemia study: a hierarchical population risk model for spatially correlated exposure measured with error.

    PubMed

    Smith, Brian J; Zhang, Lixun; Field, R William

    2007-11-10

    This paper presents a Bayesian model that allows for the joint prediction of county-average radon levels and estimation of the associated leukaemia risk. The methods are motivated by radon data from an epidemiologic study of residential radon in Iowa that include 2726 outdoor and indoor measurements. Prediction of county-average radon is based on a geostatistical model for the radon data which assumes an underlying continuous spatial process. In the radon model, we account for uncertainties due to incomplete spatial coverage, spatial variability, characteristic differences between homes, and detector measurement error. The predicted radon averages are, in turn, included as a covariate in Poisson models for incident cases of acute lymphocytic (ALL), acute myelogenous (AML), chronic lymphocytic (CLL), and chronic myelogenous (CML) leukaemias reported to the Iowa cancer registry from 1973 to 2002. Since radon and leukaemia risk are modelled simultaneously in our approach, the resulting risk estimates accurately reflect uncertainties in the predicted radon exposure covariate. Posterior mean (95 per cent Bayesian credible interval) estimates of the relative risk associated with a 1 pCi/L increase in radon for ALL, AML, CLL, and CML are 0.91 (0.78-1.03), 1.01 (0.92-1.12), 1.06 (0.96-1.16), and 1.12 (0.98-1.27), respectively. Copyright 2007 John Wiley & Sons, Ltd.

  4. Improving Evapotranspiration Estimates Using Multi-Platform Remote Sensing

    NASA Astrophysics Data System (ADS)

    Knipper, Kyle; Hogue, Terri; Franz, Kristie; Scott, Russell

    2016-04-01

    Understanding the linkages between energy and water cycles through evapotranspiration (ET) is uniquely challenging given its dependence on a range of climatological parameters and surface/atmospheric heterogeneity. A number of methods have been developed to estimate ET from primarily remote-sensing observations, from in-situ measurements, or from a combination of the two. However, the scale of many of these methods may be too coarse to provide needed information about the spatial and temporal variability of ET that can occur over regions with acute or chronic land cover change and precipitation-driven fluxes. The current study aims to improve the spatial and temporal resolution of ET estimates utilizing only satellite-based observations by incorporating a potential evapotranspiration (PET) methodology with satellite-based downscaled soil moisture estimates in southern Arizona, USA. Initially, soil moisture estimates from AMSR2 and SMOS are downscaled to 1 km through a triangular relationship between MODIS land surface temperature (MYD11A1), vegetation indices (MOD13Q1/MYD13Q1), and brightness temperature. Downscaled soil moisture values are then used to scale PET to actual ET (AET) at a daily, 1 km resolution. Derived AET estimates are compared to observed flux tower estimates, the North American Land Data Assimilation System (NLDAS) model output (i.e. Variable Infiltration Capacity (VIC) Macroscale Hydrologic Model, Mosaic Model, and Noah Model simulations), the Operational Simplified Surface Energy Balance Model (SSEBop), and a calibrated empirical ET model created specifically for the region. Preliminary results indicate a strong increase in correlation when applying the downscaling technique to the original AMSR2 and SMOS soil moisture values, with the added benefit of being able to resolve small-scale heterogeneity in soil moisture (riparian versus desert grassland). AET results show strong correlations with relatively low error and bias when compared to flux tower estimates. In addition, AET results show improved bias relative to those reported by SSEBop, with similar correlations and errors when compared to the empirical ET model. Spatial patterns of estimated AET display patterns representative of the basin's elevation and vegetation characteristics, with improved spatial resolution and temporal heterogeneity when compared to previous models.

  5. Spatial Representativeness Error in the Ground‐Level Observation Networks for Black Carbon Radiation Absorption

    PubMed Central

    Andrews, Elisabeth; Balkanski, Yves; Boucher, Olivier; Myhre, Gunnar; Samset, Bjørn Hallvard; Schulz, Michael; Schuster, Gregory L.; Valari, Myrto; Tao, Shu

    2018-01-01

    There is high uncertainty in the direct radiative forcing of black carbon (BC), an aerosol that strongly absorbs solar radiation. The observation‐constrained estimate, which is several times larger than the bottom‐up estimate, is influenced by the spatial representativeness error due to the mesoscale inhomogeneity of the aerosol fields and the relatively low resolution of global chemistry‐transport models. Here we evaluated the spatial representativeness error for two widely used observational networks (AErosol RObotic NETwork and Global Atmosphere Watch) by downscaling the geospatial grid in a global model of BC aerosol absorption optical depth to 0.1° × 0.1°. Comparing the models at a spatial resolution of 2° × 2° with BC aerosol absorption at AErosol RObotic NETwork sites (which are commonly located near emission hot spots) tends to cause a global spatial representativeness error of 30%, as a positive bias for the current top‐down estimate of global BC direct radiative forcing. By contrast, the global spatial representativeness error will be 7% for the Global Atmosphere Watch network, because the sites are located in such a way that there are almost an equal number of sites with positive or negative representativeness error. PMID:29937603

  6. Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks

    NASA Astrophysics Data System (ADS)

    Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.

    2015-03-01

    The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which is to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.
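
    The error-propagation step used by the indirect approach can be illustrated with the standard first-order formulas: relative standard errors of independent multiplicative components add in quadrature, and absolute errors of summed stocks add in quadrature. The component values below are placeholders, not the study's rasters.

```python
import numpy as np

def product_se(values, ses):
    """Standard error of a product of independent components (first-order
    propagation): relative errors add in quadrature."""
    values, ses = np.asarray(values, float), np.asarray(ses, float)
    rel = np.sqrt(np.sum((ses / values) ** 2, axis=0))
    return np.prod(values, axis=0) * rel

# toy per-cell components for the topsoil stock (indirect approach):
# SOC concentration (kg/kg), bulk density (kg/m^3), layer thickness (m)
conc, bd, depth = 0.02, 1400.0, 0.3
se_conc, se_bd, se_depth = 0.004, 100.0, 0.02

topsoil_stock = conc * bd * depth                       # kg m^-2
topsoil_se = product_se([conc, bd, depth], [se_conc, se_bd, se_depth])

# combining topsoil and subsoil stocks: absolute errors add in quadrature
subsoil_stock, subsoil_se = 4.5, 1.1                    # kg m^-2 (placeholder)
total_stock = topsoil_stock + subsoil_stock
total_se = np.hypot(topsoil_se, subsoil_se)
print(f"topsoil {topsoil_stock:.2f} ± {topsoil_se:.2f}  total {total_stock:.2f} ± {total_se:.2f} kg m^-2")
```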

  7. Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks

    NASA Astrophysics Data System (ADS)

    Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.

    2014-11-01

    The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which are to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.

  8. Brain signaling and behavioral responses induced by exposure to (56)Fe-particle radiation

    NASA Technical Reports Server (NTRS)

    Denisova, N. A.; Shukitt-Hale, B.; Rabin, B. M.; Joseph, J. A.

    2002-01-01

    Previous experiments have demonstrated that exposure to 56Fe-particle irradiation (1.5 Gy, 1 GeV) produced aging-like accelerations in neuronal and behavioral deficits. Astronauts on long-term space flights will be exposed to similar heavy-particle radiations that might have similar deleterious effects on neuronal signaling and cognitive behavior. Therefore, the present study evaluated whether radiation-induced spatial learning and memory behavioral deficits are associated with region-specific brain signaling deficits by measuring signaling molecules previously found to be essential for behavior [pre-synaptic vesicle proteins, synaptobrevin and synaptophysin, and protein kinases, calcium-dependent PRKCs (also known as PKCs) and PRKA (PRKA RIIbeta)]. The results demonstrated a significant radiation-induced increase in reference memory errors. The increases in reference memory errors were significantly negatively correlated with striatal synaptobrevin and frontal cortical synaptophysin expression. Both synaptophysin and synaptobrevin are synaptic vesicle proteins that are important in cognition. Striatal PRKA, a memory signaling molecule, was also significantly negatively correlated with reference memory errors. Overall, our findings suggest that radiation-induced pre-synaptic facilitation may contribute to some previously reported radiation-induced decrease in striatal dopamine release and for the disruption of the central dopaminergic system integrity and dopamine-mediated behavior.

  9. Brain signaling and behavioral responses induced by exposure to (56)Fe-particle radiation.

    PubMed

    Denisova, N A; Shukitt-Hale, B; Rabin, B M; Joseph, J A

    2002-12-01

    Previous experiments have demonstrated that exposure to 56Fe-particle irradiation (1.5 Gy, 1 GeV) produced aging-like accelerations in neuronal and behavioral deficits. Astronauts on long-term space flights will be exposed to similar heavy-particle radiations that might have similar deleterious effects on neuronal signaling and cognitive behavior. Therefore, the present study evaluated whether radiation-induced spatial learning and memory behavioral deficits are associated with region-specific brain signaling deficits by measuring signaling molecules previously found to be essential for behavior [pre-synaptic vesicle proteins, synaptobrevin and synaptophysin, and protein kinases, calcium-dependent PRKCs (also known as PKCs) and PRKA (PRKA RIIbeta)]. The results demonstrated a significant radiation-induced increase in reference memory errors. The increases in reference memory errors were significantly negatively correlated with striatal synaptobrevin and frontal cortical synaptophysin expression. Both synaptophysin and synaptobrevin are synaptic vesicle proteins that are important in cognition. Striatal PRKA, a memory signaling molecule, was also significantly negatively correlated with reference memory errors. Overall, our findings suggest that radiation-induced pre-synaptic facilitation may contribute to some previously reported radiation-induced decrease in striatal dopamine release and for the disruption of the central dopaminergic system integrity and dopamine-mediated behavior.

  10. Performance Evaluation of the Geostationary Synthetic Thinned Array Radiometer (GeoSTAR) Demonstrator Instrument

    NASA Technical Reports Server (NTRS)

    Tanner, Alan B.; Wilson, William J.; Lambrigsten, Bjorn H.; Dinardo, Steven J.; Brown, Shannon T.; Kangaslahti, Pekka P.; Gaier, Todd C.; Ruf, C. S.; Gross, S. M.; Lim, B. H.

    2006-01-01

    The design, error budget, and preliminary test results of a 50-56 GHz synthetic aperture radiometer demonstration system are presented. The instrument consists of a fixed 24-element array of correlation interferometers, and is capable of producing calibrated images with 0.8 degree spatial resolution within a 17 degree wide field of view. This system has been built to demonstrate performance and a design which can be scaled to a much larger geostationary earth imager. As a baseline, such a system would consist of about 300 elements, and would be capable of providing contiguous, full hemispheric images of the earth with 1 Kelvin of radiometric precision and 50 km spatial resolution.

  11. Upscaling NZ-DNDC using a regression based meta-model to estimate direct N2O emissions from New Zealand grazed pastures.

    PubMed

    Giltrap, Donna L; Ausseil, Anne-Gaëlle E

    2016-01-01

    The availability of detailed input data frequently limits the application of process-based models at large scale. In this study, we produced simplified meta-models of the simulated nitrous oxide (N2O) emission factors (EF) using NZ-DNDC. Monte Carlo simulations were performed and the results investigated using multiple regression analysis to produce simplified meta-models of EF. These meta-models were then used to estimate direct N2O emissions from grazed pastures in New Zealand. New Zealand EF maps were generated using the meta-models with data from national-scale soil maps. Direct emissions of N2O from grazed pasture were calculated by multiplying the EF map with a nitrogen (N) input map. Three meta-models were considered. Model 1 included only the soil organic carbon in the top 30 cm (SOC30), Model 2 also included a clay content factor, and Model 3 added the interaction between SOC30 and clay. The median annual national direct N2O emissions from grazed pastures estimated using each model (assuming model errors were purely random) were: 9.6 Gg N (Model 1), 13.6 Gg N (Model 2), and 11.9 Gg N (Model 3). These values corresponded to an average EF of 0.53%, 0.75% and 0.63% respectively, while the corresponding average EF using New Zealand national inventory values was 0.67%. If the model error can be assumed to be independent for each pixel, then the 95% confidence interval for the N2O emissions is of the order of ±0.4-0.7%, which is much narrower than that obtained with existing methods. However, spatial correlations in the model errors could invalidate this assumption. Under the extreme assumption that the model error for each pixel is identical, the 95% confidence interval is approximately ±100-200%. Therefore further work is needed to assess the degree of spatial correlation in the model errors. Copyright © 2015 Elsevier B.V. All rights reserved.
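
    The sensitivity of the confidence interval to spatial correlation in the per-pixel model errors can be illustrated with a toy Monte Carlo that compares the two extremes discussed above (fully independent versus identical errors at every pixel); the pixel count, N inputs, emission factor, and 50% relative error are placeholders, not NZ-DNDC outputs.

```python
import numpy as np

rng = np.random.default_rng(5)
n_pixels = 10_000
n_input = rng.uniform(50, 200, n_pixels)             # toy per-pixel N input (kg N)
ef_true = 0.006                                      # toy emission factor
rel_err = 0.5                                        # 50% relative model error per pixel

def total_emission_ci(correlated, n_draws=2000):
    """95% CI (as % of the mean) of the national total under two extremes:
    independent per-pixel errors vs identical (fully correlated) errors."""
    if correlated:
        eps = rng.normal(0, rel_err, size=(n_draws, 1))            # one shared error per draw
    else:
        eps = rng.normal(0, rel_err, size=(n_draws, n_pixels))     # independent per-pixel errors
    totals = ((1 + eps) * ef_true * n_input).sum(axis=1)
    lo, hi = np.percentile(totals, [2.5, 97.5])
    return 100 * (hi - lo) / 2 / totals.mean()

print("independent errors: ±%.1f%%" % total_emission_ci(False))
print("fully correlated errors: ±%.1f%%" % total_emission_ci(True))
```

    With these placeholder settings the independent-error interval is on the order of ±1%, while the fully correlated case approaches ±100%, mirroring the contrast reported above.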

  12. Temporal and Spatial Simulation of Atmospheric Pollutant PM2.5 Changes and Risk Assessment of Population Exposure to Pollution Using Optimization Algorithms of the Back Propagation-Artificial Neural Network Model and GIS

    PubMed Central

    Zhang, Ping; Hong, Bo; He, Liang; Cheng, Fei; Zhao, Peng; Wei, Cailiang; Liu, Yunhui

    2015-01-01

    PM2.5 pollution has become of increasing public concern because of its relative importance and sensitivity to population health risks. Accurate predictions of PM2.5 pollution and population exposure risks are crucial to developing effective air pollution control strategies. We simulated and predicted the temporal and spatial changes of PM2.5 concentration and population exposure risks, by coupling optimization algorithms of the Back Propagation-Artificial Neural Network (BP-ANN) model and a geographical information system (GIS) in Xi’an, China, for 2013, 2020, and 2025. Results indicated that PM2.5 concentration was positively correlated with GDP, SO2, and NO2, while it was negatively correlated with population density, average temperature, precipitation, and wind speed. Principal component analysis of the PM2.5 concentration and its influencing factors’ variables extracted four components that accounted for 86.39% of the total variance. Correlation coefficients of the Levenberg-Marquardt (trainlm) and elastic (trainrp) algorithms were more than 0.8, the index of agreement (IA) ranged from 0.541 to 0.863 and from 0.502 to 0.803 by trainrp and trainlm algorithms, respectively; mean bias error (MBE) and Root Mean Square Error (RMSE) indicated that the predicted values were very close to the observed values, and the accuracy of trainlm algorithm was better than the trainrp. Compared to 2013, temporal and spatial variation of PM2.5 concentration and risk of population exposure to pollution decreased in 2020 and 2025. The high-risk areas of population exposure to PM2.5 were mainly distributed in the northern region, where there is downtown traffic, abundant commercial activity, and more exhaust emissions. A moderate risk zone was located in the southern region associated with some industrial pollution sources, and there were mainly low-risk areas in the western and eastern regions, which are predominantly residential and educational areas. PMID:26426030

  13. Temporal and Spatial Simulation of Atmospheric Pollutant PM2.5 Changes and Risk Assessment of Population Exposure to Pollution Using Optimization Algorithms of the Back Propagation-Artificial Neural Network Model and GIS.

    PubMed

    Zhang, Ping; Hong, Bo; He, Liang; Cheng, Fei; Zhao, Peng; Wei, Cailiang; Liu, Yunhui

    2015-09-29

    PM2.5 pollution has become of increasing public concern because of its relative importance and sensitivity to population health risks. Accurate predictions of PM2.5 pollution and population exposure risks are crucial to developing effective air pollution control strategies. We simulated and predicted the temporal and spatial changes of PM2.5 concentration and population exposure risks, by coupling optimization algorithms of the Back Propagation-Artificial Neural Network (BP-ANN) model and a geographical information system (GIS) in Xi'an, China, for 2013, 2020, and 2025. Results indicated that PM2.5 concentration was positively correlated with GDP, SO₂, and NO₂, while it was negatively correlated with population density, average temperature, precipitation, and wind speed. Principal component analysis of the PM2.5 concentration and its influencing factors' variables extracted four components that accounted for 86.39% of the total variance. Correlation coefficients of the Levenberg-Marquardt (trainlm) and elastic (trainrp) algorithms were more than 0.8, the index of agreement (IA) ranged from 0.541 to 0.863 and from 0.502 to 0.803 by trainrp and trainlm algorithms, respectively; mean bias error (MBE) and Root Mean Square Error (RMSE) indicated that the predicted values were very close to the observed values, and the accuracy of trainlm algorithm was better than the trainrp. Compared to 2013, temporal and spatial variation of PM2.5 concentration and risk of population exposure to pollution decreased in 2020 and 2025. The high-risk areas of population exposure to PM2.5 were mainly distributed in the northern region, where there is downtown traffic, abundant commercial activity, and more exhaust emissions. A moderate risk zone was located in the southern region associated with some industrial pollution sources, and there were mainly low-risk areas in the western and eastern regions, which are predominantly residential and educational areas.

  14. Preserving subject variability in group fMRI analysis: performance evaluation of GICA vs. IVA

    PubMed Central

    Michael, Andrew M.; Anderson, Mathew; Miller, Robyn L.; Adalı, Tülay; Calhoun, Vince D.

    2014-01-01

    Independent component analysis (ICA) is a widely applied technique to derive functionally connected brain networks from fMRI data. Group ICA (GICA) and Independent Vector Analysis (IVA) are extensions of ICA that enable users to perform group fMRI analyses; however, a full comparison of the performance limits of GICA and IVA has not been investigated. Recent interest in resting-state fMRI data with a potentially higher degree of subject variability makes the evaluation of the above techniques important. In this paper we compare component estimation accuracies of GICA and an improved version of IVA using simulated fMRI datasets. We systematically change the degree of inter-subject spatial variability of components and evaluate estimation accuracy over all spatial maps (SMs) and time courses (TCs) of the decomposition. Our results indicate the following: (1) at low levels of SM variability or when just one SM is varied, both GICA and IVA perform well, (2) at higher levels of SM variability or when more than one SM is varied, IVA continues to perform well but GICA yields SM estimates that are composites of other SMs with errors in TCs, (3) both GICA and IVA remove spatial correlations of overlapping SMs and introduce artificial correlations in their TCs, (4) if the number of SMs is overestimated, IVA continues to perform well but GICA introduces artifacts in the varying and extra SMs with artificial correlations in the TCs of extra components, and (5) in the absence or presence of SMs unique to one subject, GICA produces errors in TCs whereas IVA estimates are accurate. In summary, our simulation experiments (both simplistic and realistic) and our holistic analysis approach indicate that IVA produces results that are closer to ground truth and thereby better preserves subject variability. The improved version of IVA is now packaged into the GIFT toolbox (http://mialab.mrn.org/software/gift). PMID:25018704

  15. A map overlay error model based on boundary geometry

    USGS Publications Warehouse

    Gaeuman, D.; Symanzik, J.; Schmidt, J.C.

    2005-01-01

    An error model for quantifying the magnitudes and variability of errors generated in the areas of polygons during spatial overlay of vector geographic information system layers is presented. Numerical simulation of polygon boundary displacements was used to propagate coordinate errors to spatial overlays. The model departs from most previous error models in that it incorporates spatial dependence of coordinate errors at the scale of the boundary segment. It can be readily adapted to match the scale of error-boundary interactions responsible for error generation on a given overlay. The area of error generated by overlay depends on the sinuosity of polygon boundaries, as well as the magnitude of the coordinate errors on the input layers. Asymmetry in boundary shape has relatively little effect on error generation. Overlay errors are affected by real differences in boundary positions on the input layers, as well as errors in the boundary positions. Real differences between input layers tend to compensate for much of the error generated by coordinate errors. Thus, the area of change measured on an overlay layer produced by the XOR overlay operation will be more accurate if the area of real change depicted on the overlay is large. The model presented here considers these interactions, making it especially useful for estimating errors in studies of landscape change over time. © 2005 The Ohio State University.

  16. Spatial heterogeneity of type I error for local cluster detection tests

    PubMed Central

    2014-01-01

    Background Just as power, type I error of cluster detection tests (CDTs) should be spatially assessed. Indeed, CDTs’ type I error and power have both a spatial component as CDTs both detect and locate clusters. In the case of type I error, the spatial distribution of wrongly detected clusters (WDCs) can be particularly affected by edge effect. This simulation study aims to describe the spatial distribution of WDCs and to confirm and quantify the presence of edge effect. Methods A simulation of 40 000 datasets has been performed under the null hypothesis of risk homogeneity. The simulation design used realistic parameters from survey data on birth defects, and in particular, two baseline risks. The simulated datasets were analyzed using the Kulldorff’s spatial scan as a commonly used test whose behavior is otherwise well known. To describe the spatial distribution of type I error, we defined the participation rate for each spatial unit of the region. We used this indicator in a new statistical test proposed to confirm, as well as quantify, the edge effect. Results The predefined type I error of 5% was respected for both baseline risks. Results showed strong edge effect in participation rates, with a descending gradient from center to edge, and WDCs more often centrally situated. Conclusions In routine analysis of real data, clusters on the edge of the region should be carefully considered as they rarely occur when there is no cluster. Further work is needed to combine results from power studies with this work in order to optimize CDTs performance. PMID:24885343

  17. A priori evaluation of two-stage cluster sampling for accuracy assessment of large-area land-cover maps

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.

    2004-01-01

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.

  18. Aspects of spatial and temporal aggregation in estimating regional carbon dioxide fluxes from temperate forest soils

    NASA Technical Reports Server (NTRS)

    Kicklighter, David W.; Melillo, Jerry M.; Peterjohn, William T.; Rastetter, Edward B.; Mcguire, A. David; Steudler, Paul A.; Aber, John D.

    1994-01-01

    We examine the influence of aggregation errors on developing estimates of regional soil-CO2 flux from temperate forests. We find daily soil-CO2 fluxes to be more sensitive to changes in soil temperatures (Q₁₀ = 3.08) than air temperatures (Q₁₀ = 1.99). The direct use of mean monthly air temperatures with a daily flux model underestimates regional fluxes by approximately 4%. Temporal aggregation error varies with spatial resolution. Overall, our calibrated modeling approach reduces spatial aggregation error by 9.3% and temporal aggregation error by 15.5%. After minimizing spatial and temporal aggregation errors, mature temperate forest soils are estimated to contribute 12.9 Pg C/yr to the atmosphere as carbon dioxide. Georeferenced model estimates agree well with annual soil-CO2 fluxes measured during chamber studies in mature temperate forest stands around the globe.
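
    For readers unfamiliar with the Q₁₀ notation, the sketch below shows the standard exponential temperature-response form it implies; the reference flux, reference temperature, and example values are assumptions for illustration, not numbers from the study.

```python
def q10_flux(temperature_c, flux_ref, q10, t_ref_c=10.0):
    """Soil-CO2 flux under a Q10 temperature response: the flux is multiplied
    by `q10` for every 10 degree C rise above the reference temperature."""
    return flux_ref * q10 ** ((temperature_c - t_ref_c) / 10.0)

# Hypothetical reference flux of 2.0 g C m-2 d-1 at 10 degrees C:
# with Q10 = 3.08 the predicted flux roughly triples by 20 degrees C.
print(q10_flux(20.0, flux_ref=2.0, q10=3.08))   # ~6.16
```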

  19. Spatial sampling considerations of the CERES (Clouds and Earth Radiant Energy System) instrument

    NASA Astrophysics Data System (ADS)

    Smith, G. L.; Manalo-Smith, Natividdad; Priestley, Kory

    2014-10-01

    The CERES (Clouds and Earth Radiant Energy System) instrument is a scanning radiometer with three channels for measuring Earth radiation budget. At present CERES models are operating aboard the Terra, Aqua and Suomi/NPP spacecraft, and flights of CERES instruments are planned for the JPSS-1 spacecraft and its successors. CERES scans from one limb of the Earth to the other and back. The footprint size grows with distance from nadir simply due to geometry, so the size of the smallest features that can be resolved from the data increases and spatial sampling errors increase with nadir angle. This paper presents an analysis of the effect of nadir angle on spatial sampling errors of the CERES instrument. The analysis is performed in the Fourier domain. Spatial sampling errors are created by smoothing (blurring) of features that are the size of the footprint and smaller, and by inadequate sampling, which causes aliasing errors. These spatial sampling errors are computed in terms of the system transfer function, which is the Fourier transform of the point response function, the spacing of data points and the spatial spectrum of the radiance field.

  20. The impact of model prediction error in designing geodetic networks for crustal deformation applications

    NASA Astrophysics Data System (ADS)

    Murray, J. R.

    2017-12-01

    Earth surface displacements measured at Global Navigation Satellite System (GNSS) sites record crustal deformation due, for example, to slip on faults underground. A primary objective in designing geodetic networks to study crustal deformation is to maximize the ability to recover parameters of interest like fault slip. Given Green's functions (GFs) relating observed displacement to motion on buried dislocations representing a fault, one can use various methods to estimate spatially variable slip. However, assumptions embodied in the GFs, e.g., use of a simplified elastic structure, introduce spatially correlated model prediction errors (MPE) not reflected in measurement uncertainties (Duputel et al., 2014). In theory, selection algorithms should incorporate inter-site correlations to identify measurement locations that give unique information. I assess the impact of MPE on site selection by expanding existing methods (Klein et al., 2017; Reeves and Zhe, 1999) to incorporate this effect. Reeves and Zhe's algorithm sequentially adds or removes a predetermined number of data according to a criterion that minimizes the sum of squared errors (SSE) on parameter estimates. Adapting this method to GNSS network design, Klein et al. select new sites that maximize model resolution, using trade-off curves to determine when additional resolution gain is small. Their analysis uses uncorrelated data errors and GFs for a uniform elastic half space. I compare results using GFs for spatially variable strike slip on a discretized dislocation in a uniform elastic half space, a layered elastic half space, and a layered half space with inclusion of MPE. I define an objective criterion to terminate the algorithm once the next site removal would increase SSE more than the expected incremental SSE increase if all sites had equal impact. Using a grid of candidate sites with 8 km spacing, I find the relative value of the selected sites (defined by the percent increase in SSE that further removal of each site would cause) is more uniform when MPE is included. However, the number and distribution of selected sites depends primarily on site location relative to the fault. For this test case, inclusion of MPE has minimal practical impact; I will investigate whether these findings hold for more densely spaced candidate grids and dipping faults.

  1. Entropy of Movement Outcome in Space-Time.

    PubMed

    Lai, Shih-Chiung; Hsieh, Tsung-Yu; Newell, Karl M

    2015-07-01

    Information entropy of the joint spatial and temporal (space-time) probability of discrete movement outcome was investigated in two experiments as a function of different movement strategies (space-time, space, and time instructional emphases), task goals (point-aiming and target-aiming) and movement speed-accuracy constraints. The variance of the movement spatial and temporal errors was reduced by instructional emphasis on the respective spatial or temporal dimension, but increased on the other dimension. The space-time entropy was lower in the target-aiming task than in the point-aiming task but did not differ between instructional emphases. However, the joint probabilistic measure of spatial and temporal entropy showed that spatial error is traded for timing error in tasks with space-time criteria and that the pattern of movement error depends on the dimension of the measurement process. The unified entropy measure of movement outcome in space-time reveals a new relation for the speed-accuracy trade-off.
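
    As a rough illustration of a joint space-time outcome entropy (a generic discretized Shannon entropy, not the authors' exact estimator), one can bin the spatial and temporal errors jointly and sum -p log2 p over occupied cells; the bin count and synthetic data below are assumptions.

```python
import numpy as np

def space_time_entropy(spatial_err, temporal_err, bins=10):
    """Shannon entropy (bits) of the joint distribution of discretized
    spatial and temporal movement errors."""
    counts, _, _ = np.histogram2d(spatial_err, temporal_err, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                       # ignore empty cells
    return float(-np.sum(p * np.log2(p)))

# Synthetic trial outcomes (hypothetical errors in mm and ms).
rng = np.random.default_rng(0)
spatial = rng.normal(0.0, 5.0, size=200)
temporal = rng.normal(0.0, 30.0, size=200)
print(f"joint space-time entropy: {space_time_entropy(spatial, temporal):.2f} bits")
```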

  2. The Effect of Error Correlation on Interfactor Correlation in Psychometric Measurement

    ERIC Educational Resources Information Center

    Westfall, Peter H.; Henning, Kevin S. S.; Howell, Roy D.

    2012-01-01

    This article shows how interfactor correlation is affected by error correlations. Theoretical and practical justifications for error correlations are given, and a new equivalence class of models is presented to explain the relationship between interfactor correlation and error correlations. The class allows simple, parsimonious modeling of error…

  3. Automated measurement and classification of pulmonary blood-flow velocity patterns using phase-contrast MRI and correlation analysis.

    PubMed

    van Amerom, Joshua F P; Kellenberger, Christian J; Yoo, Shi-Joon; Macgowan, Christopher K

    2009-01-01

    An automated method was evaluated to detect blood flow in small pulmonary arteries and classify each as artery or vein, based on a temporal correlation analysis of their blood-flow velocity patterns. The method was evaluated using velocity-sensitive phase-contrast magnetic resonance data collected in vitro with a pulsatile flow phantom and in vivo in 11 human volunteers. The accuracy of the method was validated in vitro, which showed relative velocity errors of 12% at low spatial resolution (four voxels per diameter) that were reduced to 5% at increased spatial resolution (16 voxels per diameter). The performance of the method was evaluated in vivo according to its reproducibility and agreement with manual velocity measurements by an experienced radiologist. In all volunteers, the correlation analysis was able to detect and segment peripheral pulmonary vessels and distinguish arterial from venous velocity patterns. The intrasubject variability of repeated measurements was approximately 10% of peak velocity, or 2.8 cm/s root-mean-variance, demonstrating the high reproducibility of the method. Excellent agreement was obtained between the correlation analysis and radiologist measurements of pulmonary velocities, with a correlation of R² = 0.98 (P < .001) and a slope of 0.99 ± 0.01.
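
    A minimal sketch of the kind of temporal correlation analysis described here (not the published pipeline) is to correlate each voxel's velocity-time curve with reference arterial and venous waveforms and label it by the stronger match; the threshold value and array shapes are assumptions.

```python
import numpy as np

def classify_vessels(velocity_ts, arterial_ref, venous_ref, detect_thresh=0.5):
    """Label each voxel's velocity-time curve as artery, vein, or background
    by correlating it with reference arterial and venous waveforms.
    velocity_ts: (n_voxels, n_frames); refs: (n_frames,)."""
    labels = []
    for ts in velocity_ts:
        if np.std(ts) == 0:                      # no flow signal in this voxel
            labels.append("background")
            continue
        r_art = np.corrcoef(ts, arterial_ref)[0, 1]
        r_ven = np.corrcoef(ts, venous_ref)[0, 1]
        if max(r_art, r_ven) < detect_thresh:    # detection threshold (assumed)
            labels.append("background")
        else:
            labels.append("artery" if r_art >= r_ven else "vein")
    return labels
```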

  4. Precoded spatial multiplexing MIMO system with spatial component interleaver.

    PubMed

    Gao, Xiang; Wu, Zhanji

    In this paper, the performance of a precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with a spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the proposed limited-feedback precoded scheme with a linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.

  5. Diffraction analysis of sidelobe characteristics of optical elements with ripple error

    NASA Astrophysics Data System (ADS)

    Zhao, Lei; Luo, Yupeng; Bai, Jian; Zhou, Xiangdong; Du, Juan; Liu, Qun; Luo, Yujie

    2018-03-01

    The ripple errors of a lens lead to optical damage in high-energy laser systems. The analysis of the sidelobe on the focal plane caused by ripple error provides a reference to evaluate the error and the imaging quality. In this paper, we analyze the diffraction characteristics of the sidelobes of optical elements with ripple errors. First, we analyze the characteristics of ripple error and build the relationship between ripple error and sidelobe. The sidelobe results from the diffraction of ripple errors. The ripple error tends to be periodic due to the fabrication method on the optical surface. The simulated experiments are carried out based on the angular spectrum method by characterizing ripple error as rotationally symmetric periodic structures. The influence of two major ripple parameters, spatial frequency and peak-to-valley value, on the sidelobe is discussed. The results indicate that spatial frequency and peak-to-valley value both impact the sidelobe at the image plane. The peak-to-valley value is the major factor affecting the energy proportion of the sidelobe. The spatial frequency is the major factor affecting the distribution of the sidelobe at the image plane.
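
    The link between a periodic ripple and focal-plane sidelobes can be illustrated with a simple one-dimensional Fourier (far-field) propagation; this is a simplified stand-in for the paper's angular spectrum simulations, and the wavelength, aperture, and ripple parameters below are assumed values (refractive-index factors are omitted).

```python
import numpy as np

# 1-D illustration: an aperture carrying a periodic wavefront ripple produces
# focal-plane sidelobes whose position scales with the ripple spatial frequency
# and whose energy grows with its peak-to-valley (PV) value.
wavelength = 1.053e-6        # m, assumed laser wavelength
half_width = 0.05            # m, half-width of the square aperture (assumed)
n = 4096
x = np.linspace(-0.1, 0.1, n)
beam = (np.abs(x) <= half_width).astype(float)

ripple_freq = 200.0          # cycles per metre (assumed ripple spatial frequency)
ripple_pv = 50e-9            # m, peak-to-valley wavefront ripple (assumed)
phase = 2 * np.pi / wavelength * (ripple_pv / 2) * np.sin(2 * np.pi * ripple_freq * x)

field = beam * np.exp(1j * phase)
far_field = np.fft.fftshift(np.fft.fft(field))
intensity = np.abs(far_field) ** 2
intensity /= intensity.max()           # main lobe = 1; sidelobes appear at +/- ripple_freq
```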

  6. Comparing ordinary kriging and inverse distance weighting for soil As pollution in Beijing.

    PubMed

    Qiao, Pengwei; Lei, Mei; Yang, Sucai; Yang, Jun; Guo, Guanghui; Zhou, Xiaoyong

    2018-06-01

    Spatial interpolation is the basis of soil heavy metal pollution assessment and remediation, but the existing evaluation indices for interpolation accuracy do not reflect the actual situation, and the choice of interpolation method needs to be based on the specific research purpose and the characteristics of the research object. In this paper, As pollution in soils of Beijing was taken as an example. The prediction accuracy of ordinary kriging (OK) and inverse distance weighting (IDW) was evaluated based on cross-validation results and the spatial distribution characteristics of influencing factors. The results showed that, under the condition of specific spatial correlation, the cross-validation results of OK and IDW for every soil point and the prediction accuracy of the spatial distribution trend are similar. However, the prediction accuracy of OK for the maximum and minimum is less than that of IDW, and the number of high-pollution areas identified by OK is smaller than that identified by IDW. It is difficult for OK to identify the high-pollution areas fully, which shows that the smoothing effect of OK is obvious. In addition, as the spatial correlation of the As concentration increases, the cross-validation errors of OK and IDW decrease, and the high-pollution areas identified by OK approach the result of IDW, which can identify the high-pollution areas more comprehensively. However, because the semivariogram required by OK is more subjective to construct and requires a larger number of soil samples, IDW is more suitable for spatial prediction of heavy metal pollution in soils.
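
    A hedged sketch of the comparison framework is given below: IDW plus leave-one-out cross-validation (ordinary kriging would additionally require fitting a semivariogram model, for example with a geostatistics package). The power parameter, array layout, and example concentrations are assumptions.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted prediction at the query locations."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w @ z_known) / w.sum(axis=1)

def loo_rmse(xy, z, power=2.0):
    """Leave-one-out cross-validation RMSE for IDW."""
    idx = np.arange(len(z))
    errs = []
    for i in idx:
        mask = idx != i
        pred = idw(xy[mask], z[mask], xy[i][None, :], power)[0]
        errs.append(pred - z[i])
    return float(np.sqrt(np.mean(np.square(errs))))

# Hypothetical As concentrations (mg/kg) at five sampling points.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
z = np.array([8.0, 9.5, 7.2, 12.0, 10.1])
print(loo_rmse(xy, z))
```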

  7. Building on crossvalidation for increasing the quality of geostatistical modeling

    USGS Publications Warehouse

    Olea, R.A.

    2012-01-01

    The random function is a mathematical model commonly used in the assessment of uncertainty associated with a spatially correlated attribute that has been partially sampled. There are multiple algorithms for modeling such random functions, all sharing the requirement of specifying various parameters that have critical influence on the results. The importance of finding ways to compare the methods and set parameters to obtain results that better model uncertainty has increased as these algorithms have grown in number and complexity. Crossvalidation has been used in spatial statistics, mostly in kriging, for the analysis of mean square errors. An appeal of this approach is its ability to work with the same empirical sample available for running the algorithms. This paper goes beyond checking estimates by formulating a function sensitive to conditional bias. Under ideal conditions, such a function turns into a straight line, which can be used as a reference for preparing measures of performance. Applied to kriging, deviations from the ideal line provide sensitivity to the semivariogram lacking in crossvalidation of kriging errors and are more sensitive to conditional bias than analyses of errors. In terms of stochastic simulation, in addition to finding better parameters, the deviations allow comparison of the realizations resulting from the applications of different methods. Examples show improvements of about 30% in the deviations and approximately 10% in the square root of mean square errors between reasonable starting models and the solutions according to the new criteria. © 2011 US Government.

  8. Simplified planar model of a car steering system with rack and pinion and McPherson suspension

    NASA Astrophysics Data System (ADS)

    Knapczyk, J.; Kucybała, P.

    2016-09-01

    The paper presents the analysis and optimization of a steering system with rack and pinion and McPherson suspension using a spatial model and an equivalent simplified planar model. The dimensions of the steering linkage that give the minimum steering error can be estimated using the planar model. The steering error is defined as the difference between the actual angle made by the outer front wheel during steering manoeuvres and the calculated angle for the same wheel based on the Ackermann principle. For a given linear rack displacement, the corresponding steering-arm angular displacements are determined while simultaneously ensuring the best transmission-angle characteristics (i) without and (ii) with imposing a linear correlation between input and output. Numerical examples are used to illustrate the proposed method.
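
    The steering error defined in this record can be written down directly from the Ackermann condition cot(δ_outer) - cot(δ_inner) = track/wheelbase. The sketch below is a generic implementation of that definition, with an assumed geometry and a hypothetical linkage characteristic rather than the authors' mechanism.

```python
import numpy as np

def ackermann_outer_angle(delta_inner, wheelbase, track):
    """Ideal outer-wheel angle (rad) for a given inner-wheel angle, from the
    Ackermann condition cot(d_out) - cot(d_in) = track / wheelbase."""
    return np.arctan(1.0 / (1.0 / np.tan(delta_inner) + track / wheelbase))

def steering_error(delta_inner, delta_outer_actual, wheelbase, track):
    """Actual outer-wheel angle minus the Ackermann-ideal angle."""
    return delta_outer_actual - ackermann_outer_angle(delta_inner, wheelbase, track)

# Assumed geometry (2.6 m wheelbase, 1.5 m track) and a hypothetical linkage
# whose outer angle is 98% of the inner angle.
inner = np.radians(np.linspace(1.0, 35.0, 8))
outer_actual = 0.98 * inner
print(np.degrees(steering_error(inner, outer_actual, wheelbase=2.6, track=1.5)))
```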

  9. Geographically correlated orbit error

    NASA Technical Reports Server (NTRS)

    Rosborough, G. W.

    1989-01-01

    The dominant error source in estimating the orbital position of a satellite from ground-based tracking data is the modeling of the Earth's gravity field. The resulting orbit errors due to gravity field model errors are predominantly long wavelength in nature. This results in an orbit error signature that is strongly correlated over distances on the size of ocean basins. Anderle and Hoskin (1977) have shown that the orbit error along a given ground track also is correlated to some degree with the orbit error along adjacent ground tracks. This cross track correlation is verified here and is found to be significant out to nearly 1000 kilometers in the case of TOPEX/POSEIDON when using the GEM-T1 gravity model. Finally, it was determined that even the orbit error at points where ascending and descending ground traces cross is somewhat correlated. The implication of these various correlations is that the orbit error due to gravity error is geographically correlated. Such correlations have direct implications when using altimetry to recover oceanographic signals.

  10. PKMζ Differentially Utilized between Sexes for Remote Long-Term Spatial Memory

    PubMed Central

    Sebastian, Veronica; Vergel, Tatyana; Baig, Raheela; Schrott, Lisa M.; Serrano, Peter A.

    2013-01-01

    It is well established that male rats have an advantage in acquiring place-learning strategies, allowing them to learn spatial tasks more readily than female rats. However, many of these differences have been examined solely during acquisition or in 24h memory retention. Here, we investigated whether sex differences exist in remote long-term memory, lasting 30d after training, and whether there are differences in the expression pattern of molecular markers associated with long-term memory maintenance. Specifically, we analyzed the expression of protein kinase M zeta (PKMζ) and the α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptor subunit GluA2. To adequately evaluate memory retention, we used a robust training protocol to attenuate sex differences in acquisition and found differential effects in memory retention 1d and 30d after training. Female cohorts tested for memory retention 1d after 60 training trials outperformed males by making significantly fewer reference memory errors at test. In contrast, male cohorts tested 30d after 60 training trials outperformed females of the same condition, making fewer reference memory errors and achieving significantly higher retention test scores. Furthermore, given 60 training trials, females tested 30d later showed significantly worse memory compared to females tested 1d later, while males tested 30d later did not differ from males tested 1d later. Together these data suggest that with robust training males do not retain spatial information as well as females do 24h post-training but maintain this spatial information for longer. Males also showed a significant increase in synaptic PKMζ expression and a positive correlation with retention test scores, while females did not. Interestingly, both sexes showed a positive correlation between retention test scores and synaptic GluA2 expression. Furthermore, the increased expression of synaptic PKMζ, associated with male memory but not with female memory, identifies another potential sex-mediated difference in memory processing. PMID:24244733

  11. Evaluation of Pan-Sharpening Methods for Automatic Shadow Detection in High Resolution Images of Urban Areas

    NASA Astrophysics Data System (ADS)

    de Azevedo, Samara C.; Singh, Ramesh P.; da Silva, Erivaldo A.

    2017-04-01

    The finer spatial resolution of areas with tall objects within the urban environment causes intense shadows that lead to wrong information in urban mapping. Due to these shadows, automatic detection of objects (such as buildings, trees, structures, and towers) and estimation of surface coverage from high-spatial-resolution imagery are difficult. Thus, automatic shadow detection is the first necessary preprocessing step to improve the outcome of many remote sensing applications, particularly for high spatial resolution images. Efforts have been made to explore spatial and spectral information to evaluate such shadows. In this paper, we have used morphological attribute filtering to extract contextual relations in an efficient multilevel approach for high resolution images. The attribute selected for the filtering was the area estimated from the shadow spectral feature using the Normalized Saturation-Value Difference Index (NSVDI) derived from pan-sharpened images. In order to assess the quality of the fusion products and their influence on the shadow detection algorithm, we evaluated three pan-sharpening methods, Intensity-Hue-Saturation (IHS), Principal Components (PC), and Gram-Schmidt (GS), through the image quality measures: Correlation Coefficient (CC), Root Mean Square Error (RMSE), Relative Dimensionless Global Error in Synthesis (ERGAS) and Universal Image Quality Index (UIQI). Experimental results over a Worldview II scene of São Paulo city (Brazil) show that the GS method provides good correlation with the original multispectral bands and no radiometric or contrast distortion. The automatic method using GS pan-sharpening for NSVDI generation provides a clear distinction between shadow and non-shadow pixels with an overall accuracy of more than 90%. The experimental results confirm the effectiveness of the proposed approach, which could be used for further shadow removal and is reliable for object recognition, land-cover mapping, 3D reconstruction, etc., especially in developing countries where land use and land cover are rapidly changing with tall objects within urban areas.
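
    The NSVDI step described above can be sketched as follows, assuming the common definition NSVDI = (S - V)/(S + V) in HSV space and an Otsu threshold in place of the paper's attribute-filtering stage; scikit-image is used here for convenience.

```python
import numpy as np
from skimage.color import rgb2hsv
from skimage.filters import threshold_otsu

def shadow_mask_nsvdi(rgb):
    """Boolean shadow mask from a pan-sharpened RGB image using
    NSVDI = (S - V) / (S + V) computed in HSV space."""
    hsv = rgb2hsv(rgb)                       # channels: hue, saturation, value
    s, v = hsv[..., 1], hsv[..., 2]
    nsvdi = (s - v) / (s + v + 1e-6)         # shadows: high saturation, low value
    return nsvdi > threshold_otsu(nsvdi)     # Otsu split used as a simple stand-in
```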

  12. A Framework of Temporal-Spatial Descriptors-Based Feature Extraction for Improved Myoelectric Pattern Recognition.

    PubMed

    Khushaba, Rami N; Al-Timemy, Ali H; Al-Ani, Ahmed; Al-Jumaily, Adel

    2017-10-01

    The extraction of accurate and efficient descriptors of muscular activity plays an important role in tackling the challenging problem of myoelectric control of powered prostheses. In this paper, we present a new feature extraction framework that aims to give an enhanced representation of muscular activities by increasing the amount of information that can be extracted from individual and combined electromyogram (EMG) channels. We propose to use time-domain descriptors (TDDs) in estimating the EMG signal power spectrum characteristics; a step that preserves the computational power required for the construction of spectral features. Subsequently, TDD is used in a process that involves: 1) representing the temporal evolution of the EMG signals by progressively tracking the correlation between the TDD extracted from each analysis time window and a nonlinearly mapped version of it across the same EMG channel and 2) representing the spatial coherence between the different EMG channels, which is achieved by calculating the correlation between the TDD extracted from the differences of all possible combinations of pairs of channels and their nonlinearly mapped versions. The proposed temporal-spatial descriptors (TSDs) are validated on multiple sparse and high-density (HD) EMG datasets collected from a number of intact-limbed and amputee subjects performing a large number of hand and finger movements. Classification results showed significant reductions in the achieved error rates in comparison to other methods, with an improvement of at least 8% on average across all subjects. Additionally, the proposed TSDs performed significantly well on HD-EMG problems, with average classification errors of <5% across all subjects using window lengths of only 50 ms.

  13. Rainfall Observed Over Bangladesh 2000-2008: A Comparison of Spatial Interpolation Methods

    NASA Astrophysics Data System (ADS)

    Pervez, M.; Henebry, G. M.

    2010-12-01

    In preparation for a hydrometeorological study of freshwater resources in the greater Ganges-Brahmaputra region, we compared the results of four methods of spatial interpolation applied to point measurements of daily rainfall over Bangladesh during the period 2000-2008. Two univariate methods (inverse distance weighting and splines, in regularized and tension forms) and two multivariate geostatistical methods (ordinary kriging and kriging with external drift) were used to interpolate daily observations from a network of 221 rain gauges across Bangladesh spanning an area of 143,000 sq km. Elevation and topographic index were used as the covariates in the geostatistical methods. The validity of the interpolated maps was analyzed through cross-validation. The quality of the methods was assessed through the Pearson and Spearman correlations and root mean square error measurements of accuracy in cross-validation. Preliminary results indicated that the univariate methods performed better than the geostatistical methods at daily scales, likely due to the relatively densely sampled point measurements and a weak correlation between the rainfall and covariates at daily scales in this region. Inverse distance weighting produced better results than the splines. For the days with extreme or high rainfall, spatially and quantitatively, the correlation between observed and interpolated estimates appeared to be high (r² ≈ 0.6, RMSE ≈ 10 mm), although for low rainfall days the correlations were poor (r² ≈ 0.1, RMSE ≈ 3 mm). The performance of these methods was influenced by the density of the sample point measurements, the quantity and spatial extent of the observed rainfall, and an appropriate search radius defining the neighboring points. Results indicated that interpolated rainfall estimates at daily scales may introduce uncertainties into the subsequent hydrometeorological analysis. Interpolations at 5-day, 10-day, 15-day, and monthly time scales are currently under investigation.
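
    The cross-validation scoring used to rank the interpolators can be reproduced with standard routines; a minimal sketch, assuming paired arrays of observed values and cross-validated estimates, is:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def cv_scores(observed, interpolated):
    """Cross-validation scores used to compare interpolation methods."""
    o = np.asarray(observed, dtype=float)
    p = np.asarray(interpolated, dtype=float)
    rmse = float(np.sqrt(np.mean((p - o) ** 2)))      # same units as the input (mm here)
    return {"pearson_r": pearsonr(o, p)[0],
            "spearman_rho": spearmanr(o, p)[0],
            "rmse": rmse}
```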

  14. Spatial Assessment of Model Errors from Four Regression Techniques

    Treesearch

    Lianjun Zhang; Jeffrey H. Gove; Jeffrey H. Gove

    2005-01-01

    Forest modelers have attempted to account for the spatial autocorrelations among trees in growth and yield models by applying alternative regression techniques such as linear mixed models (LMM), generalized additive models (GAM), and geographically weighted regression (GWR). However, the model errors are commonly assessed using average errors across the entire study...

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasylkivska, Veronika S.; Huerta, Nicolas J.

    Determining the spatiotemporal characteristics of natural and induced seismic events holds the opportunity to gain new insights into why these events occur. Linking the seismicity characteristics with other geologic, geographic, natural, or anthropogenic factors could help to identify the causes and suggest mitigation strategies that reduce the risk associated with such events. The nearest-neighbor approach utilized in this work represents a practical first step toward identifying statistically correlated clusters of recorded earthquake events. Detailed study of the Oklahoma earthquake catalog’s inherent errors, empirical model parameters, and model assumptions is presented. We found that the cluster analysis results are stable with respect to empirical parameters (e.g., fractal dimension) but were sensitive to epicenter location errors and seismicity rates. Most critically, we show that the patterns in the distribution of earthquake clusters in Oklahoma are primarily defined by spatial relationships between events. This observation is a stark contrast to California (also known for induced seismicity) where a comparable cluster distribution is defined by both spatial and temporal interactions between events. These results highlight the difficulty in understanding the mechanisms and behavior of induced seismicity but provide insights for future work.
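
    One widely used nearest-neighbor proximity for earthquake catalogs is the Zaliapin-style space-time-magnitude metric; the sketch below follows that common form and is not necessarily the exact implementation used in this work. The fractal dimension, b-value, and flat-Earth distance approximation are assumptions.

```python
import numpy as np

def nearest_neighbor_proximity(times, lons, lats, mags, b=1.0, d_f=1.6):
    """For each event j in a time-sorted catalog, find the earlier event i that
    minimizes eta_ij = t_ij * r_ij**d_f * 10**(-b * m_i), with t in years and
    r in km (flat-Earth approximation). Returns parent indices and eta values."""
    times, lons, lats, mags = map(np.asarray, (times, lons, lats, mags))
    n = len(times)
    parents = np.full(n, -1)
    etas = np.full(n, np.inf)
    km_per_deg = 111.2
    for j in range(1, n):
        dt = times[j] - times[:j]                                  # years (catalog sorted)
        dx = (lons[j] - lons[:j]) * km_per_deg * np.cos(np.radians(lats[j]))
        dy = (lats[j] - lats[:j]) * km_per_deg
        r = np.maximum(np.hypot(dx, dy), 0.1)                      # avoid zero distances
        eta = dt * r ** d_f * 10.0 ** (-b * mags[:j])
        i = int(np.argmin(eta))
        parents[j], etas[j] = i, eta[i]
    return parents, etas
```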

  16. Re-assessing acalculia: Distinguishing spatial and purely arithmetical deficits in right-hemisphere damaged patients.

    PubMed

    Benavides-Varela, S; Piva, D; Burgio, F; Passarini, L; Rolma, G; Meneghello, F; Semenza, C

    2017-03-01

    Arithmetical deficits in right-hemisphere damaged patients have been traditionally considered secondary to visuo-spatial impairments, although the exact relationship between the two deficits has rarely been assessed. The present study implemented a voxelwise lesion analysis among 30 right-hemisphere damaged patients and a controlled, matched-sample, cross-sectional analysis with 35 cognitively normal controls regressing three composite cognitive measures on standardized numerical measures. The results showed that patients and controls differed significantly in Number comprehension, Transcoding, and Written operations, particularly subtractions and multiplications. The percentage of patients performing below the cutoffs ranged between 27% and 47% across these tasks. Spatial errors were associated with extensive lesions in fronto-temporo-parietal regions, which frequently lead to neglect, whereas pure arithmetical errors appeared related to more confined lesions in the right angular gyrus and its proximity. Stepwise regression models consistently revealed that spatial errors were primarily predicted by composite measures of visuo-spatial attention/neglect and representational abilities. Conversely, specific errors of an arithmetic nature were linked to representational abilities only. Crucially, the proportion of arithmetical errors (ranging from 65% to 100% across tasks) was higher than that of spatial ones. These findings thus suggest that unilateral right-hemisphere lesions can directly affect core numerical/arithmetical processes, and that right-hemisphere acalculia is not only ascribable to visuo-spatial deficits as traditionally thought. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Accounting for substitution and spatial heterogeneity in a labelled choice experiment.

    PubMed

    Lizin, S; Brouwer, R; Liekens, I; Broeckx, S

    2016-10-01

    Many environmental valuation studies using stated preferences techniques are single-site studies that ignore essential spatial aspects, including possible substitution effects. In this paper substitution effects are captured explicitly in the design of a labelled choice experiment and the inclusion of different distance variables in the choice model specification. We test the effect of spatial heterogeneity on welfare estimates and transfer errors for minor and major river restoration works, and the transferability of river specific utility functions, accounting for key variables such as site visitation, spatial clustering and income. River specific utility functions appear to be transferable, resulting in low transfer errors. However, ignoring spatial heterogeneity increases transfer errors. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.

    PubMed

    Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan

    2017-02-27

    Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but does not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10¹⁶ electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed method is better than the ordinary Kriging and polynomial interpolation by about 1.2 TECU and 0.7 TECU, respectively. The root mean squared error of the proposed new Kriging with variance components is within 1.5 TECU and is smaller than those from other methods under comparison by about 1 TECU. When compared with ionospheric grid points, the mean squared error of the proposed method is within 6 TECU and smaller than Kriging, indicating that the proposed method can produce more accurate ionospheric delays and better estimation accuracy over China regional area.
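
    The TEC semivariogram analysis mentioned here starts from the binned empirical semivariogram; a generic sketch of that step (not the paper's variance-component estimator) is shown below, with binning choices as assumptions.

```python
import numpy as np

def empirical_semivariogram(xy, values, n_bins=15, max_lag=None):
    """Binned empirical semivariogram gamma(h) = 0.5 * mean[(z_i - z_j)^2]
    over station pairs separated by lag distance h."""
    xy = np.asarray(xy, dtype=float)
    z = np.asarray(values, dtype=float)
    i, j = np.triu_indices(len(z), k=1)
    lags = np.linalg.norm(xy[i] - xy[j], axis=1)
    half_sq_diff = 0.5 * (z[i] - z[j]) ** 2
    if max_lag is None:
        max_lag = lags.max()
    edges = np.linspace(0.0, max_lag, n_bins + 1)
    which = np.clip(np.digitize(lags, edges) - 1, 0, n_bins - 1)
    gamma = np.array([half_sq_diff[which == b].mean() if np.any(which == b) else np.nan
                      for b in range(n_bins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, gamma
```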

  19. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction

    PubMed Central

    Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan

    2017-01-01

    Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but does not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10¹⁶ electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed method is better than the ordinary Kriging and polynomial interpolation by about 1.2 TECU and 0.7 TECU, respectively. The root mean squared error of the proposed new Kriging with variance components is within 1.5 TECU and is smaller than those from other methods under comparison by about 1 TECU. When compared with ionospheric grid points, the mean squared error of the proposed method is within 6 TECU and smaller than Kriging, indicating that the proposed method can produce more accurate ionospheric delays and better estimation accuracy over China regional area. PMID:28264424

  20. Analysis of a spatial tracking subsystem for optical communications

    NASA Technical Reports Server (NTRS)

    Win, Moe Z.; Chen, CHIEN-C.

    1992-01-01

    Spatial tracking plays a very critical role in designing optical communication systems because of the small angular beamwidth associated with the optical signal. One possible solution for spatial tracking is to use a nutating mirror which dithers the incoming beam at a rate much higher than the mechanical disturbances. A power detector then senses the change in detected power as the signal is reflected off the nutating mirror. This signal is then correlated with the nutator driver signals to obtain estimates of the azimuth and elevation tracking signals to control the fast scanning mirrors. A theoretical analysis is performed for a spatial tracking system using a nutator disturbed by shot noise and mechanical vibrations. Contributions of shot noise and mechanical vibrations to the total tracking error variance are derived. Given the vibration spectrum and the expected signal power, there exists an optimal amplitude for the nutation which optimizes the receiver performance. The expected performance of a nutator based system is estimated based on the choice of nutation amplitude.
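
    A hedged sketch of the correlation step described above (lock-in style demodulation of the detected power against the nutator drive signals) is shown below; the drive waveforms, gain constant, and sign conventions are assumptions rather than the analyzed receiver design.

```python
import numpy as np

def tracking_error_estimates(power, t, f_nutation, gain=1.0):
    """Estimate azimuth and elevation pointing errors by correlating the
    detected optical power with the nutator drive signals (lock-in style
    demodulation). `gain` converts correlation amplitude to angle and would
    be calibrated for a real receiver."""
    ref_az = np.cos(2.0 * np.pi * f_nutation * t)
    ref_el = np.sin(2.0 * np.pi * f_nutation * t)
    p = power - np.mean(power)              # remove the DC component
    az_err = gain * 2.0 * np.mean(p * ref_az)
    el_err = gain * 2.0 * np.mean(p * ref_el)
    return az_err, el_err
```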

  1. Covariance analyses of satellite-derived mesoscale wind fields

    NASA Technical Reports Server (NTRS)

    Maddox, R. A.; Vonder Haar, T. H.

    1979-01-01

    Statistical structure functions have been computed independently for nine satellite-derived mesoscale wind fields that were obtained on two different days. Small cumulus clouds were tracked at 5 min intervals, but since these clouds occurred primarily in the warm sectors of midlatitude cyclones the results cannot be considered representative of the circulations within cyclones in general. The field structure varied considerably with time and was especially affected if mesoscale features were observed. The wind fields on the 2 days studied were highly anisotropic with large gradients in structure occurring approximately normal to the mean flow. Structure function calculations for the combined set of satellite winds were used to estimate random error present in the fields. It is concluded for these data that the random error in vector winds derived from cumulus cloud tracking using high-frequency satellite data is less than 1.75 m/s. Spatial correlation functions were also computed for the nine data sets. Normalized correlation functions were considerably different for u and v components and decreased rapidly as data point separation increased for both components. The correlation functions for transverse and longitudinal components decreased less rapidly as data point separation increased.
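
    A generic structure-function computation for scattered wind observations (one velocity component at a time) looks like the sketch below; under the usual assumption of uncorrelated observation errors, the intercept of D(r) extrapolated to zero separation estimates twice the random error variance, which is the kind of random-error estimate mentioned in the abstract. Binning choices are assumptions.

```python
import numpy as np

def structure_function(xy, component, n_bins=12, max_sep=None):
    """Binned structure function D(r) = <[u(x) - u(x + r)]^2> for one wind
    component measured at scattered points xy (e.g., cloud-track winds)."""
    xy = np.asarray(xy, dtype=float)
    u = np.asarray(component, dtype=float)
    i, j = np.triu_indices(len(u), k=1)
    sep = np.linalg.norm(xy[i] - xy[j], axis=1)
    sq_diff = (u[i] - u[j]) ** 2
    if max_sep is None:
        max_sep = sep.max()
    edges = np.linspace(0.0, max_sep, n_bins + 1)
    idx = np.clip(np.digitize(sep, edges) - 1, 0, n_bins - 1)
    d_r = np.array([sq_diff[idx == b].mean() if np.any(idx == b) else np.nan
                    for b in range(n_bins)])
    return 0.5 * (edges[:-1] + edges[1:]), d_r
```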

  2. A multiple-objective optimal exploration strategy

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1988-01-01

    Exploration for natural resources is accomplished through partial sampling of extensive domains. Such imperfect knowledge is subject to sampling error. Complex systems of equations resulting from modelling based on the theory of correlated random fields are reduced to simple analytical expressions providing global indices of estimation variance. The indices are utilized by multiple objective decision criteria to find the best sampling strategies. The approach is not limited by geometric nature of the sampling, covers a wide range in spatial continuity and leads to a step-by-step procedure. ?? 1988.

  3. Collocation mismatch uncertainties in satellite aerosol retrieval validation

    NASA Astrophysics Data System (ADS)

    Virtanen, Timo H.; Kolmonen, Pekka; Sogacheva, Larisa; Rodríguez, Edith; Saponaro, Giulia; de Leeuw, Gerrit

    2018-02-01

    Satellite-based aerosol products are routinely validated against ground-based reference data, usually obtained from sun photometer networks such as AERONET (AEROsol RObotic NETwork). In a typical validation exercise a spatial sample of the instantaneous satellite data is compared against a temporal sample of the point-like ground-based data. The observations do not correspond to exactly the same column of the atmosphere at the same time, and the representativeness of the reference data depends on the spatiotemporal variability of the aerosol properties in the samples. The associated uncertainty is known as the collocation mismatch uncertainty (CMU). The validation results depend on the sampling parameters. While small samples involve less variability, they are more sensitive to the inevitable noise in the measurement data. In this paper we study systematically the effect of the sampling parameters in the validation of AATSR (Advanced Along-Track Scanning Radiometer) aerosol optical depth (AOD) product against AERONET data and the associated collocation mismatch uncertainty. To this end, we study the spatial AOD variability in the satellite data, compare it against the corresponding values obtained from densely located AERONET sites, and assess the possible reasons for observed differences. We find that the spatial AOD variability in the satellite data is approximately 2 times larger than in the ground-based data, and the spatial variability correlates only weakly with that of AERONET for short distances. We interpreted that only half of the variability in the satellite data is due to the natural variability in the AOD, and the rest is noise due to retrieval errors. However, for larger distances (˜ 0.5°) the correlation is improved as the noise is averaged out, and the day-to-day changes in regional AOD variability are well captured. Furthermore, we assess the usefulness of the spatial variability of the satellite AOD data as an estimate of CMU by comparing the retrieval errors to the total uncertainty estimates including the CMU in the validation. We find that accounting for CMU increases the fraction of consistent observations.

  4. Novel probabilistic models of spatial genetic ancestry with applications to stratification correction in genome-wide association studies.

    PubMed

    Bhaskar, Anand; Javanmard, Adel; Courtade, Thomas A; Tse, David

    2017-03-15

    Genetic variation in human populations is influenced by geographic ancestry due to spatial locality in historical mating and migration patterns. Spatial population structure in genetic datasets has been traditionally analyzed using either model-free algorithms, such as principal components analysis (PCA) and multidimensional scaling, or using explicit spatial probabilistic models of allele frequency evolution. We develop a general probabilistic model and an associated inference algorithm that unify the model-based and data-driven approaches to visualizing and inferring population structure. Our spatial inference algorithm can also be effectively applied to the problem of population stratification in genome-wide association studies (GWAS), where hidden population structure can create fictitious associations when population ancestry is correlated with both the genotype and the trait. Our algorithm Geographic Ancestry Positioning (GAP) relates local genetic distances between samples to their spatial distances, and can be used for visually discerning population structure as well as accurately inferring the spatial origin of individuals on a two-dimensional continuum. On both simulated and several real datasets from diverse human populations, GAP exhibits substantially lower error in reconstructing spatial ancestry coordinates compared to PCA. We also develop an association test that uses the ancestry coordinates inferred by GAP to accurately account for ancestry-induced correlations in GWAS. Based on simulations and analysis of a dataset of 10 metabolic traits measured in a Northern Finland cohort, which is known to exhibit significant population structure, we find that our method has superior power to current approaches. Our software is available at https://github.com/anand-bhaskar/gap . abhaskar@stanford.edu or ajavanma@usc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  5. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
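
    To make the role of the total-error covariance concrete, the sketch below builds a correlated (AR(1)) covariance from calibration residuals, evaluates a Gaussian negative log-likelihood with it, and converts per-model criteria into averaging weights. This is a simplified stand-in for the paper's iterative two-stage maximum-likelihood procedure; the AR(1) form and the AIC-style weighting are assumptions.

```python
import numpy as np

def ar1_covariance(residuals, sigma2=None):
    """Covariance of temporally correlated total errors, modelled here (for
    illustration) as an AR(1) process fitted to calibration residuals."""
    r = np.asarray(residuals, dtype=float)
    rho = np.corrcoef(r[:-1], r[1:])[0, 1]        # lag-1 autocorrelation
    sigma2 = r.var() if sigma2 is None else sigma2
    lags = np.abs(np.subtract.outer(np.arange(len(r)), np.arange(len(r))))
    return sigma2 * rho ** lags

def neg_log_likelihood(residuals, cov):
    """Gaussian negative log-likelihood of residuals under covariance `cov`."""
    r = np.asarray(residuals, dtype=float)
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (logdet + r @ np.linalg.solve(cov, r) + len(r) * np.log(2.0 * np.pi))

def averaging_weights(nlls, n_params):
    """AIC-style model-averaging weights from per-model negative log-likelihoods."""
    aic = 2.0 * np.asarray(n_params) + 2.0 * np.asarray(nlls)
    w = np.exp(-0.5 * (aic - aic.min()))
    return w / w.sum()
```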

  6. A new polishing process for large-aperture and high-precision aspheric surface

    NASA Astrophysics Data System (ADS)

    Nie, Xuqing; Li, Shengyi; Dai, Yifan; Song, Ci

    2013-07-01

    A high-precision aspheric surface is hard to achieve due to the mid-spatial frequency (MSF) error introduced in the finishing step. The influence of the mid-spatial frequency error is studied through simulations and experiments. In this paper, a new polishing process based on magnetorheological finishing (MRF), smooth polishing (SP) and ion beam figuring (IBF) is proposed. A 400 mm aperture parabolic surface is polished with this new process. Smooth polishing is applied after rough machining to control the MSF error. In the middle finishing step, most of the low-spatial frequency error is removed rapidly by MRF, then the mid-spatial frequency error is restricted by SP, and finally ion beam figuring is used to finish the surface. The surface accuracy is improved from the initial 37.691 nm (rms, 95% aperture) to a final 4.195 nm. The results show that the new polishing process is effective for manufacturing large-aperture and high-precision aspheric surfaces.

  7. Deterministic error correction for nonlocal spatial-polarization hyperentanglement

    PubMed Central

    Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu

    2016-01-01

    Hyperentanglement is an effective quantum source for quantum communication network due to its high capacity, low loss rate, and its unusual character in teleportation of quantum particle fully. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatial-defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the error rejection of the spatial entanglement during the transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others in long-distance quantum communication. PMID:26861681

  8. Deterministic error correction for nonlocal spatial-polarization hyperentanglement.

    PubMed

    Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu

    2016-02-10

    Hyperentanglement is an effective quantum source for quantum communication network due to its high capacity, low loss rate, and its unusual character in teleportation of quantum particle fully. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatial-defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the error rejection of the spatial entanglement during the transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others in long-distance quantum communication.

  9. Application and evaluation of ISVR method in QuickBird image fusion

    NASA Astrophysics Data System (ADS)

    Cheng, Bo; Song, Xiaolu

    2014-05-01

    QuickBird satellite images are widely used in many fields, and applications have put forward high requirements for the integration of the spatial and spectral information of the imagery. A fusion method for high-resolution remote sensing images based on ISVR is presented in this study. The core principle of ISVR is to take advantage of radiometric conversion to remove the effect of the differing gains and errors of the satellite sensors. After transformation from DN to radiance, the multispectral image's energy is used to simulate the panchromatic band. Linear regression is carried out in the simulation process to find a new synthetic panchromatic image that is highly linearly correlated with the original panchromatic image. In order to evaluate, test and compare the algorithm results, this paper used ISVR and two other fusion methods in a comparative study of spatial and spectral information, taking the average gradient and the correlation coefficient as indicators. Experiments showed that this method can significantly improve the quality of the fused image, especially in preserving spectral information, maximizing the spectral information of the original multispectral images while maintaining abundant spatial information.
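
    As a rough illustration of the regression step described above, the sketch below simulates a panchromatic band as a linear combination of multispectral radiance bands by least squares and scores the result with the correlation coefficient and the average gradient. All data are synthetic stand-ins for real QuickBird imagery, and the band weighting is an assumption.

    ```python
    # Regression-based simulation of a panchromatic band from multispectral
    # radiance bands, evaluated with correlation coefficient and average gradient.
    import numpy as np

    rng = np.random.default_rng(0)
    rows, cols, n_bands = 128, 128, 4

    ms = rng.random((rows, cols, n_bands))            # multispectral radiance (toy data)
    true_w = np.array([0.15, 0.25, 0.35, 0.25])       # unknown sensor weighting (toy)
    pan = ms @ true_w + 0.01 * rng.standard_normal((rows, cols))  # observed pan band

    # Least-squares regression of the pan band on the multispectral bands
    X = ms.reshape(-1, n_bands)
    y = pan.ravel()
    w, *_ = np.linalg.lstsq(np.column_stack([X, np.ones_like(y)]), y, rcond=None)
    pan_sim = (X @ w[:-1] + w[-1]).reshape(rows, cols)  # synthetic panchromatic image

    def correlation(a, b):
        """Pearson correlation coefficient between two images (spectral fidelity)."""
        return np.corrcoef(a.ravel(), b.ravel())[0, 1]

    def average_gradient(img):
        """Mean magnitude of local gradients (spatial detail indicator)."""
        gy, gx = np.gradient(img)
        return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

    print("corr(pan, pan_sim)   =", round(correlation(pan, pan_sim), 4))
    print("avg gradient of pan  =", round(average_gradient(pan), 4))
    print("avg gradient of sim  =", round(average_gradient(pan_sim), 4))
    ```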

  10. Functional Additive Mixed Models

    PubMed Central

    Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja

    2014-01-01

    We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach. PMID:26347592

  11. Functional Additive Mixed Models.

    PubMed

    Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja

    2015-04-01

    We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach.
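
    The framework itself ships as the pffr() function of the R package refund; the Python sketch below only illustrates the underlying idea of a function-on-scalar regression with smooth coefficient functions estimated by penalized least squares, using toy data and an arbitrary basis and penalty.

    ```python
    # Minimal function-on-scalar regression sketch (not the pffr() implementation):
    # responses y_i(t) on a grid, a scalar covariate x_i, and smooth coefficient
    # functions estimated by ridge-penalized least squares on a Gaussian bump basis.
    import numpy as np

    rng = np.random.default_rng(1)
    n, T = 60, 50
    t = np.linspace(0, 1, T)
    x = rng.standard_normal(n)

    beta0 = np.sin(2 * np.pi * t)              # true intercept function (toy)
    beta1 = np.exp(-(t - 0.5) ** 2 / 0.02)     # true covariate effect (toy)
    Y = beta0 + np.outer(x, beta1) + 0.2 * rng.standard_normal((n, T))

    # Gaussian bump basis over t for both smooth functions
    centers = np.linspace(0, 1, 12)
    B = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * 0.05 ** 2))   # (T, K)

    # Design: for each curve, [B | x_i * B]; stack all curves and solve ridge LS
    K = B.shape[1]
    X_design = np.vstack([np.hstack([B, xi * B]) for xi in x])            # (n*T, 2K)
    y_vec = Y.ravel()
    lam = 1.0
    coef = np.linalg.solve(X_design.T @ X_design + lam * np.eye(2 * K),
                           X_design.T @ y_vec)

    beta0_hat = B @ coef[:K]
    beta1_hat = B @ coef[K:]
    print("RMSE of beta1(t) estimate:",
          round(float(np.sqrt(np.mean((beta1_hat - beta1) ** 2))), 3))
    ```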

  12. Spatial Multiplexing of Atom-Photon Entanglement Sources using Feedforward Control and Switching Networks.

    PubMed

    Tian, Long; Xu, Zhongxiao; Chen, Lirong; Ge, Wei; Yuan, Haoxiang; Wen, Yafei; Wang, Shengzhi; Li, Shujing; Wang, Hai

    2017-09-29

    The light-matter quantum interface that can create quantum correlations or entanglement between a photon and one atomic collective excitation is a fundamental building block for a quantum repeater. The intrinsic limit is that the probability of preparing such nonclassical atom-photon correlations has to be kept low in order to suppress multiexcitation. To enhance this probability without introducing multiexcitation errors, a promising scheme is to apply multimode memories to the interface. Significant progress has been made in temporal, spectral, and spatial multiplexing memories, but the enhanced probability for generating the entangled atom-photon pair has not been experimentally realized. Here, by using six spin-wave-photon entanglement sources, a switching network, and feedforward control, we build a multiplexed light-matter interface and then demonstrate a ∼sixfold (∼fourfold) probability increase in generating entangled atom-photon (photon-photon) pairs. The measured compositive Bell parameter for the multiplexed interface is 2.49±0.03 combined with a memory lifetime of up to ∼51  μs.

  13. Evaluation of Fuzzy-Logic Framework for Spatial Statistics Preserving Methods for Estimation of Missing Precipitation Data

    NASA Astrophysics Data System (ADS)

    El Sharif, H.; Teegavarapu, R. S.

    2012-12-01

    Spatial interpolation methods used for estimation of missing precipitation data at a site seldom check for their ability to preserve site and regional statistics. Such statistics are primarily defined by spatial correlations and other site-to-site statistics in a region. Preservation of site and regional statistics represents a means of assessing the validity of missing precipitation estimates at a site. This study evaluates the efficacy of a fuzzy-logic methodology for infilling missing historical daily precipitation data in preserving site and regional statistics. Rain gauge sites in the state of Kentucky, USA, are used as a case study for evaluation of this newly proposed method in comparison to traditional data infilling techniques. Several error and performance measures will be used to evaluate the methods and trade-offs in accuracy of estimation and preservation of site and regional statistics.
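
    Below is a minimal sketch of one traditional infilling baseline of the kind referred to above, inverse-distance weighting (IDW), together with a check of whether site-to-site correlations are preserved. The gauge layout, data and IDW power are hypothetical, and the fuzzy-logic method itself is not reproduced.

    ```python
    # IDW infilling of missing daily values at a target gauge from neighbours,
    # followed by a check of how well site-to-site correlations are preserved.
    import numpy as np

    rng = np.random.default_rng(2)
    n_days, n_sites = 365, 5
    coords = rng.random((n_sites, 2)) * 100.0        # gauge coordinates in km (toy)

    # Spatially correlated synthetic daily precipitation
    base = rng.gamma(shape=0.6, scale=5.0, size=n_days)
    precip = np.maximum(0.0, base[:, None] + rng.normal(0, 2.0, (n_days, n_sites)))

    target = 0
    missing_days = rng.choice(n_days, size=60, replace=False)
    obs = precip.copy()
    obs[missing_days, target] = np.nan               # knock out 60 days at the target

    d = np.linalg.norm(coords - coords[target], axis=1)
    neighbours = [j for j in range(n_sites) if j != target]
    w = 1.0 / d[neighbours] ** 2                     # IDW weights (power = 2)
    w = w / w.sum()

    filled = obs[:, target].copy()
    filled[missing_days] = obs[missing_days][:, neighbours] @ w

    # Does infilling preserve the correlation between the target and a neighbour?
    for j in neighbours[:2]:
        r_true = np.corrcoef(precip[:, target], precip[:, j])[0, 1]
        r_fill = np.corrcoef(filled, precip[:, j])[0, 1]
        print(f"site {j}: true r = {r_true:.3f}, after infilling r = {r_fill:.3f}")
    ```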

  14. Ice tracking techniques, implementation, performance, and applications

    NASA Technical Reports Server (NTRS)

    Rothrock, D. A.; Carsey, F. D.; Curlander, J. C.; Holt, B.; Kwok, R.; Weeks, W. F.

    1992-01-01

    Present techniques of ice tracking make use both of cross-correlation and of edge tracking, the former being more successful in heavy pack ice, the latter being critical for the broken ice of the pack margins. Algorithms must assume some constraints on the spatial variations of displacements to eliminate outliers (fliers), but must avoid introducing any errors into the spatial statistics of the measured displacement field. We draw our illustrations from the implementation of an automated tracking system for kinematic analyses of ERS-1 and JERS-1 SAR imagery at the University of Alaska - the Alaska SAR Facility's Geophysical Processor System. Analyses of the ice kinematic data that might be of general interest to analysts of cloud-derived wind fields include the spatial structure of the fields and the evaluation and variability of average deformation and its invariants: divergence, vorticity and shear. Many problems in sea ice dynamics and mechanics can be addressed with the kinematic data from SAR.

  15. The dorsal stream contribution to phonological retrieval in object naming

    PubMed Central

    Faseyitan, Olufunsho; Kim, Junghoon; Coslett, H. Branch

    2012-01-01

    Meaningful speech, as exemplified in object naming, calls on knowledge of the mappings between word meanings and phonological forms. Phonological errors in naming (e.g. GHOST named as ‘goath’) are commonly seen in persisting post-stroke aphasia and are thought to signal impairment in retrieval of phonological form information. We performed a voxel-based lesion-symptom mapping analysis of 1718 phonological naming errors collected from 106 individuals with diverse profiles of aphasia. Voxels in which lesion status correlated with phonological error rates localized to dorsal stream areas, in keeping with classical and contemporary brain-language models. Within the dorsal stream, the critical voxels were concentrated in premotor cortex, pre- and postcentral gyri and supramarginal gyrus with minimal extension into auditory-related posterior temporal and temporo-parietal cortices. This challenges the popular notion that error-free phonological retrieval requires guidance from sensory traces stored in posterior auditory regions and points instead to sensory-motor processes located further anterior in the dorsal stream. In a separate analysis, we compared the lesion maps for phonological and semantic errors and determined that there was no spatial overlap, demonstrating that the brain segregates phonological and semantic retrieval operations in word production. PMID:23171662

  16. Improving the Quality of Low-Cost GPS Receiver Data for Monitoring Using Spatial Correlations

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Schwieger, Volker

    2016-06-01

    Investigations of low-cost single-frequency GPS receivers at the Institute of Engineering Geodesy (IIGS) show that u-blox LEA-6T GPS receivers combined with Trimble Bullet III GPS antennas containing self-constructed L1-optimized choke rings can already obtain an accuracy in the range of millimeters, which meets the requirements of geodetic precise monitoring applications (see [27]). However, the quality (accuracy and reliability) of low-cost GPS receiver data, particularly in shadowing environments, should still be improved, since multipath effects are the major error source for short baselines. For this purpose, several adjacent stations with low-cost GPS receivers and antennas were set up next to the metal wall on the roof of the IIGS building and measured statically for several days. The time series of three-dimensional coordinates of the GPS receivers were analyzed. Spatial correlations between the adjacent stations, possibly caused by multipath effects, are taken into account. The coordinates of one station can be corrected using the spatial correlations of the adjacent stations, so that the quality of the GPS measurements is improved. The developed algorithms are based on the coordinates, and the results are delivered in near-real-time (in about 30 minutes), so that they are suitable for structural health monitoring applications.
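
    The abstract does not spell out the IIGS correction algorithm, so the following is only an illustrative stand-in for the idea: subtract from a target station's coordinate residuals the correlation-weighted mean of the adjacent stations' residuals.

    ```python
    # Illustrative correction of one station's height residuals using the
    # spatially correlated deviations observed at adjacent stations.
    import numpy as np

    rng = np.random.default_rng(3)
    n_epochs, n_sta = 2880, 4                       # e.g. 30 s epochs over one day
    common = 0.004 * np.sin(np.linspace(0, 20 * np.pi, n_epochs))  # shared multipath-like signal (m)
    noise = 0.002 * rng.standard_normal((n_epochs, n_sta))
    height = common[:, None] + noise                # de-trended height residuals (m)

    target, neighbours = 0, [1, 2, 3]
    r = np.array([np.corrcoef(height[:, target], height[:, j])[0, 1] for j in neighbours])
    w = np.clip(r, 0, None) / np.clip(r, 0, None).sum()   # weights from spatial correlations

    corrected = height[:, target] - height[:, neighbours] @ w
    print("std before correction: %.4f m" % height[:, target].std())
    print("std after  correction: %.4f m" % corrected.std())
    ```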

  17. Comparison between GSTAR and GSTAR-Kalman Filter models on inflation rate forecasting in East Java

    NASA Astrophysics Data System (ADS)

    Rahma Prillantika, Jessica; Apriliani, Erna; Wahyuningsih, Nuri

    2018-03-01

    Data that are correlated in both time and location, also known as spatial (space-time) data, are now common. The inflation rate is one type of spatial data because it is related not only to events at previous times but also to conditions at other locations. In this research, we compare the GSTAR model and the GSTAR-Kalman Filter model to obtain predictions with a small error rate. The Kalman Filter is an estimator that estimates state changes in the presence of white noise. The final result shows that the Kalman Filter is able to improve the GSTAR forecast results. This is shown through simulation results in the form of graphs and confirmed by smaller RMSE values.
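
    A minimal scalar Kalman filter sketch showing the predict/update cycle that refines noisy observations is given below; the GSTAR forecasting model itself is not reproduced, and the noise variances are arbitrary assumptions.

    ```python
    # Minimal scalar Kalman filter: refine noisy observations of a latent series
    # with a random-walk state model and compare RMSE before and after filtering.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 100
    truth = np.cumsum(0.1 * rng.standard_normal(n)) + 5.0   # latent inflation rate (toy)
    obs = truth + 0.3 * rng.standard_normal(n)              # noisy observations

    Q, R = 0.01, 0.09          # process and observation noise variances (assumed)
    x_est, P = obs[0], 1.0     # initial state and variance
    filtered = np.empty(n)
    filtered[0] = x_est

    for k in range(1, n):
        # Predict (random-walk state model)
        x_pred, P_pred = x_est, P + Q
        # Update with the observation
        K_gain = P_pred / (P_pred + R)
        x_est = x_pred + K_gain * (obs[k] - x_pred)
        P = (1 - K_gain) * P_pred
        filtered[k] = x_est

    rmse_raw = np.sqrt(np.mean((obs - truth) ** 2))
    rmse_kf = np.sqrt(np.mean((filtered - truth) ** 2))
    print(f"RMSE of raw observations:    {rmse_raw:.3f}")
    print(f"RMSE after Kalman filtering: {rmse_kf:.3f}")
    ```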

  18. Classification Model for Forest Fire Hotspot Occurrences Prediction Using ANFIS Algorithm

    NASA Astrophysics Data System (ADS)

    Wijayanto, A. K.; Sani, O.; Kartika, N. D.; Herdiyeni, Y.

    2017-01-01

    This study proposed the application of a data mining technique, namely the Adaptive Neuro-Fuzzy Inference System (ANFIS), to forest fire hotspot data to develop classification models for hotspot occurrence in Central Kalimantan. A hotspot is a point that is indicated as the location of a fire. In this study, the hotspot distribution is categorized into true alarms and false alarms. ANFIS is a soft computing method in which a given input-output data set is expressed in a fuzzy inference system (FIS). The FIS implements a nonlinear mapping from its input space to the output space. The method classified hotspots as target objects by correlating spatial attribute data, using three folds in the ANFIS algorithm to obtain the best model. The best result, obtained from the 3rd fold, provided low training error (error = 0.0093676) and a similarly low testing error (error = 0.0093676). Distance to road is the most influential attribute on the probability of true and false alarms, reflecting the higher level of human activity captured by this attribute. This classification model can be used to develop an early warning system for forest fires.

  19. Assessment of the role of aptitude in the acquisition of advanced laparoscopic surgical skill sets: results from a virtual reality-based laparoscopic colectomy training programme.

    PubMed

    Nugent, Emmeline; Hseino, Hazem; Boyle, Emily; Mehigan, Brian; Ryan, Kieran; Traynor, Oscar; Neary, Paul

    2012-09-01

    The surgeons of the future will need to have advanced laparoscopic skills. The current challenge in surgical education is to teach these skills and to identify factors that may have a positive influence on training curriculums. The primary aim of this study was to determine if fundamental aptitude impacts on the ability to perform a laparoscopic colectomy. A practical laparoscopic colectomy course was held by the National Surgical Training Centre at the Royal College of Surgeons in Ireland. The course consisted of didactics, warm-up and the performance of a laparoscopic sigmoid colectomy on the simulator. Objective metrics such as time and motion analysis were recorded. Each candidate had their psychomotor and visual spatial aptitude assessed. The colectomy trays were assessed by blinded experts post procedure for errors. Ten trainee surgeons who were novices with respect to advanced laparoscopic procedures attended the course. A significant correlation was found between psychomotor and visual spatial aptitude and performance on both the warm-up session and laparoscopic colectomy (r > 0.7, p < 0.05). Performance on the warm-up session correlated with performance of the laparoscopic colectomy (r = 0.8, p = 0.04). There was also a significant correlation between the number of tray errors and time taken to perform the laparoscopic colectomy (r = 0.83, p = 0.001). The results have demonstrated that there is a relationship between aptitude and ability to perform both basic laparoscopic tasks and laparoscopic colectomy on a simulator. The findings suggest that there may be a role for the consideration of an individual's inherent baseline ability when trying to design and optimise technical teaching curricula for advanced laparoscopic procedures.

  20. Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations

    PubMed Central

    Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T.; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P.; Rötter, Reimund P.; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank

    2016-01-01

    We show the error in water-limited yields simulated by crop models which is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, varying largely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range or larger than the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations. PMID:27055028

  1. Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations.

    PubMed

    Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P; Rötter, Reimund P; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank

    2016-01-01

    We show the error in water-limited yields simulated by crop models which is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, varying largely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range or larger than the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations.

  2. Experimental power spectral density analysis for mid- to high-spatial frequency surface error control.

    PubMed

    Hoyo, Javier Del; Choi, Heejoo; Burge, James H; Kim, Geon-Hee; Kim, Dae Wook

    2017-06-20

    The control of surface errors as a function of spatial frequency is critical during the fabrication of modern optical systems. Large-scale surface figure error is controlled by a guided removal process, such as computer-controlled optical surfacing. Smaller-scale surface errors are controlled by polishing process parameters. Surface errors with spatial periods of only a few millimeters may degrade the performance of an optical system, causing background noise from scattered light and reducing imaging contrast for large optical systems. Conventionally, the microsurface roughness is given as the root mean square over a high spatial frequency range, evaluated within a 0.5×0.5 mm local surface map with 500×500 pixels. This surface specification is not adequate to fully describe the characteristics of advanced optical systems. The process for controlling and minimizing mid- to high-spatial frequency surface errors with periods of up to ∼2-3 mm was investigated for many optical fabrication conditions using the measured surface power spectral density (PSD) of a finished Zerodur optical surface. The surface PSD was then systematically related to various fabrication process parameters, such as the grinding methods, polishing interface materials, and polishing compounds. The retraceable experimental polishing conditions and processes used to produce an optimal optical surface PSD are presented.
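
    A hedged sketch of a surface-PSD check for a 1-D profile follows: estimate the PSD with Welch's method and integrate the RMS content in a mid-spatial-frequency band. The profile, sampling and band limits are illustrative, not data from the cited study.

    ```python
    # Surface PSD of a 1-D profile and band-limited RMS in a mid-spatial-frequency
    # band (here 1/3 to 2 mm^-1, i.e. periods of roughly 0.5-3 mm).
    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(5)
    dx = 0.001                       # sample spacing in mm (1 um)
    n = 20000
    x = np.arange(n) * dx            # 20 mm long profile

    # Toy surface in nm: low-order figure + a 1.5 mm-period ripple + roughness
    profile = 3.0 * np.sin(2 * np.pi * x / 20.0) \
              + 0.8 * np.sin(2 * np.pi * x / 1.5) \
              + 0.2 * rng.standard_normal(n)

    f, psd = welch(profile, fs=1.0 / dx, nperseg=4096, detrend="linear")
    # f is spatial frequency in mm^-1, psd in nm^2 per mm^-1

    band = (f >= 1.0 / 3.0) & (f <= 2.0)            # mid-spatial-frequency band
    rms_band = np.sqrt(np.sum(psd[band]) * (f[1] - f[0]))
    print(f"RMS in the 0.5-3 mm period band: {rms_band:.3f} nm")
    ```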

  3. Spatial abstraction for autonomous robot navigation.

    PubMed

    Epstein, Susan L; Aroor, Anoop; Evanusa, Matthew; Sklar, Elizabeth I; Parsons, Simon

    2015-09-01

    Optimal navigation for a simulated robot relies on a detailed map and explicit path planning, an approach problematic for real-world robots that are subject to noise and error. This paper reports on autonomous robots that rely on local spatial perception, learning, and commonsense rationales instead. Despite realistic actuator error, learned spatial abstractions form a model that supports effective travel.

  4. A stochastic-dynamic model for global atmospheric mass field statistics

    NASA Technical Reports Server (NTRS)

    Ghil, M.; Balgovind, R.; Kalnay-Rivas, E.

    1981-01-01

    A model that yields the spatial correlation structure of atmospheric mass field forecast errors was developed. The model is governed by the potential vorticity equation forced by random noise. An expansion in spherical harmonics was used, and the correlation function was computed analytically from the expansion coefficients. The finite difference equivalent was solved using a fast Poisson solver, and the correlation function was computed using stratified sampling of the individual realizations of F(omega) and hence of phi(omega). A higher order equation for gamma was derived and solved directly in finite differences by two successive applications of the fast Poisson solver. The methods were compared for accuracy and efficiency, and the third method was chosen as clearly superior. The results agree well with the latitude dependence of observed atmospheric correlation data. The value of the parameter c_0 which gives the best fit to the data is close to the value expected from dynamical considerations.
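
    As a loose illustration of the Monte Carlo ingredient described above, the sketch below solves a Poisson-type equation forced by random noise with an FFT-based fast Poisson solver on a periodic planar grid (standing in for the spherical geometry of the actual model) and estimates the spatial correlation of the solution across realizations.

    ```python
    # Monte Carlo estimate of the spatial correlation of phi, where
    # Laplacian(phi) = F with random forcing F, solved via an FFT Poisson solver.
    import numpy as np

    rng = np.random.default_rng(6)
    n = 64
    k = np.fft.fftfreq(n) * 2 * np.pi
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                                   # avoid division by zero

    def solve_poisson(forcing):
        """Solve Laplacian(phi) = forcing on a periodic grid via FFT."""
        f_hat = np.fft.fft2(forcing)
        phi_hat = -f_hat / k2
        phi_hat[0, 0] = 0.0                          # remove the mean
        return np.real(np.fft.ifft2(phi_hat))

    # Monte Carlo: correlation of phi between a reference point and points along x
    n_real = 300
    ref = (n // 2, n // 2)
    samples = np.empty((n_real, n))
    for m in range(n_real):
        phi = solve_poisson(rng.standard_normal((n, n)))
        samples[m] = phi[ref[0], :]                  # row through the reference point

    ref_vals = samples[:, ref[1]]
    corr = np.array([np.corrcoef(ref_vals, samples[:, j])[0, 1] for j in range(n)])
    print("correlation at lags 0, 4, 8, 16:",
          np.round(corr[ref[1] + np.array([0, 4, 8, 16])], 3))
    ```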

  5. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    PubMed

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For the error amount modelled for CO, a range of error types was simulated and the effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate the direction and magnitude of the effects of error over a range of error types.
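
    Below is a hedged Monte Carlo sketch of the classical-error case only: multiplicative exposure error added to a synthetic series attenuates the slope of a Poisson GLM toward the null. This is not the Atlanta analysis; the error magnitude, baseline rate and effect size are arbitrary.

    ```python
    # Classical-type multiplicative exposure error biasing a Poisson time-series
    # risk ratio toward the null (synthetic data).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n_days = 2000
    beta = 0.05                                      # true log risk ratio per unit exposure
    x_true = np.exp(rng.normal(0.0, 0.4, n_days))    # true daily exposure
    counts = rng.poisson(np.exp(3.0 + beta * x_true))  # daily ED visit counts (toy)

    def fit_rr(x):
        """Fit a Poisson GLM of counts on exposure and return the slope estimate."""
        X = sm.add_constant(x)
        res = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
        return res.params[1]

    # Classical error: measured exposure = true exposure times log-normal noise
    sigma_err = 0.4
    x_meas = x_true * np.exp(rng.normal(0.0, sigma_err, n_days))

    b_true = fit_rr(x_true)
    b_meas = fit_rr(x_meas)
    print(f"slope with true exposure:     {b_true:.4f}")
    print(f"slope with error-prone data:  {b_meas:.4f}  (typically attenuated toward 0)")
    ```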

  6. Documentation of procedures for textural/spatial pattern recognition techniques

    NASA Technical Reports Server (NTRS)

    Haralick, R. M.; Bryant, W. F.

    1976-01-01

    A C-130 aircraft was flown over the Sam Houston National Forest on March 21, 1973 at 10,000 feet altitude to collect multispectral scanner (MSS) data. Existing textural and spatial automatic processing techniques were used to classify the MSS imagery into specified timber categories. Several classification experiments were performed on this data using features selected from the spectral bands and a textural transform band. The results indicate that (1) spatial post-processing a classified image can cut the classification error to 1/2 or 1/3 of its initial value, (2) spatial post-processing the classified image using combined spectral and textural features produces a resulting image with less error than post-processing a classified image using only spectral features and (3) classification without spatial post processing using the combined spectral textural features tends to produce about the same error rate as a classification without spatial post processing using only spectral features.
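
    One generic form of spatial post-processing, a 3×3 majority (modal) filter applied to a classified label image, is sketched below; it illustrates why such filtering can cut the classification error, but it is not necessarily the exact procedure used in the 1976 study.

    ```python
    # 3x3 majority filter on a classified label image: relabel each pixel with the
    # most common class in its neighbourhood and compare error rates.
    import numpy as np
    from scipy.ndimage import generic_filter

    rng = np.random.default_rng(8)

    # Toy "true" class map: two timber classes split down the middle
    truth = np.zeros((60, 60), dtype=int)
    truth[:, 30:] = 1

    # Simulated per-pixel classification with 15% salt-and-pepper label errors
    noisy = truth.copy()
    flip = rng.random(truth.shape) < 0.15
    noisy[flip] = 1 - noisy[flip]

    def majority(values):
        values = values.astype(int)
        return np.bincount(values).argmax()

    smoothed = generic_filter(noisy, majority, size=3, mode="nearest")

    err_before = np.mean(noisy != truth)
    err_after = np.mean(smoothed != truth)
    print(f"error before post-processing: {err_before:.3f}")
    print(f"error after  post-processing: {err_after:.3f}")
    ```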

  7. Calibration and filtering strategies for frequency domain electromagnetic data

    USGS Publications Warehouse

    Minsley, Burke J.; Smith, Bruce D.; Hammack, Richard; Sams, James I.; Veloski, Garret

    2010-01-01

    Techniques for processing frequency-domain electromagnetic (FDEM) data that address systematic instrument errors and random noise are presented, improving the ability to invert these data for meaningful earth models that can be quantitatively interpreted. A least-squares calibration method, originally developed for airborne electromagnetic datasets, is implemented for a ground-based survey in order to address systematic instrument errors, and new insights are provided into the importance of calibration for preserving spectral relationships within the data that lead to more reliable inversions. An alternative filtering strategy based on principal component analysis, which takes advantage of the strong correlation observed in FDEM data, is introduced to help address random noise in the data without imposing somewhat arbitrary spatial smoothing.
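
    A sketch of the PCA-based filtering idea follows: decompose the stations-by-frequencies data matrix with an SVD, keep the leading components that carry the spatially coherent, strongly correlated signal, and reconstruct. The data and the number of retained components are illustrative assumptions.

    ```python
    # PCA/SVD filtering of multi-frequency EM soundings.
    import numpy as np

    rng = np.random.default_rng(9)
    n_sta, n_freq = 400, 6
    x = np.linspace(0, 10, n_sta)

    # Smooth, strongly inter-correlated responses across frequencies + random noise
    signal = np.outer(np.sin(x) + 0.3 * np.cos(3 * x), np.linspace(1.0, 0.4, n_freq))
    data = signal + 0.15 * rng.standard_normal((n_sta, n_freq))

    mean = data.mean(axis=0)
    U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)

    k = 2                                        # retained principal components
    filtered = mean + U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    print("singular values:", np.round(s, 2))
    print("RMS misfit to noise-free signal, raw      :",
          round(float(np.sqrt(np.mean((data - signal) ** 2))), 4))
    print("RMS misfit to noise-free signal, filtered :",
          round(float(np.sqrt(np.mean((filtered - signal) ** 2))), 4))
    ```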

  8. Improving Genomic Prediction in Cassava Field Experiments Using Spatial Analysis.

    PubMed

    Elias, Ani A; Rabbi, Ismail; Kulakow, Peter; Jannink, Jean-Luc

    2018-01-04

    Cassava (Manihot esculenta Crantz) is an important staple food in sub-Saharan Africa. Breeding experiments in cassava were conducted at the International Institute of Tropical Agriculture to select elite parents. Taking into account the heterogeneity in the field while evaluating these trials can increase the accuracy of estimated breeding values. We used an exploratory approach with the parametric spatial kernels Power, Spherical, and Gaussian to determine the best kernel for a given scenario. The spatial kernel was fit simultaneously with a genomic kernel in a genomic selection model. The predictability of these models was tested through a 10-fold cross-validation method repeated five times. The best model was chosen as the one with the lowest prediction root mean squared error compared to that of the base model having no spatial kernel. Results from our real and simulated data studies indicated that predictability can be increased by accounting for spatial variation irrespective of the heritability of the trait. In real data scenarios we observed that the accuracy can be increased by a median value of 3.4%. Through simulations, we showed that a 21% increase in accuracy can be achieved. We also found that Range (row) directional spatial kernels, mostly Gaussian, explained the spatial variance in 71% of the scenarios in which spatial correlation was significant. Copyright © 2018 Elias et al.
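
    A hedged sketch of how a Gaussian spatial kernel can be built from plot row/column positions, the kind of covariance structure fitted alongside a genomic kernel, is given below; the field layout and range parameter are assumptions.

    ```python
    # Gaussian spatial kernel over a rectangular field trial layout.
    import numpy as np

    rows, cols = 10, 20
    rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    coords = np.column_stack([rr.ravel(), cc.ravel()]).astype(float)   # one row per plot

    # Pairwise distances between plots
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))

    range_par = 3.0                                # spatial range (assumed)
    K_spatial = np.exp(-(dist / range_par) ** 2)   # Gaussian kernel

    print("kernel shape:", K_spatial.shape)
    print("correlation at 1, 3 and 6 plot spacings:",
          np.round(np.exp(-(np.array([1.0, 3.0, 6.0]) / range_par) ** 2), 3))
    ```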

  9. Global spectroscopic survey of cloud thermodynamic phase at high spatial resolution, 2005-2015

    NASA Astrophysics Data System (ADS)

    Thompson, David R.; Kahn, Brian H.; Green, Robert O.; Chien, Steve A.; Middleton, Elizabeth M.; Tran, Daniel Q.

    2018-02-01

    The distribution of ice, liquid, and mixed phase clouds is important for Earth's planetary radiation budget, impacting cloud optical properties, evolution, and solar reflectivity. Most remote orbital thermodynamic phase measurements observe kilometer scales and are insensitive to mixed phases. This under-constrains important processes with outsize radiative forcing impact, such as spatial partitioning in mixed phase clouds. To date, the fine spatial structure of cloud phase has not been measured at global scales. Imaging spectroscopy of reflected solar energy from 1.4 to 1.8 µm can address this gap: it directly measures ice and water absorption, a robust indicator of cloud top thermodynamic phase, with spatial resolution of tens to hundreds of meters. We report the first such global high spatial resolution survey based on data from 2005 to 2015 acquired by the Hyperion imaging spectrometer onboard NASA's Earth Observer 1 (EO-1) spacecraft. Seasonal and latitudinal distributions corroborate observations by the Atmospheric Infrared Sounder (AIRS). For extratropical cloud systems, just 25 % of variance observed at GCM grid scales of 100 km was related to irreducible measurement error, while 75 % was explained by spatial correlations possible at finer resolutions.

  10. Characterizing China's energy consumption with selective economic factors and energy-resource endowment: a spatial econometric approach

    NASA Astrophysics Data System (ADS)

    Jiang, Lei; Ji, Minhe; Bai, Ling

    2015-06-01

    Coupled with intricate regional interactions, the provincial disparity of energy-resource endowment and other economic conditions in China has created spatially complex energy consumption patterns that require analyses beyond the traditional ones. To distill the spatial effect out of the resource and economic factors on China's energy consumption, this study recast the traditional econometric model in a spatial context. Several analytic steps were taken to reveal different aspects of the issue. Per capita energy consumption (AVEC) at the provincial level was first mapped to reveal spatial clusters of high energy consumption located in either well developed or energy-resourceful regions. This visual spatial autocorrelation pattern of AVEC was quantitatively tested to confirm its existence among Chinese provinces. A Moran scatterplot was employed to further display a relatively centralized trend occurring in those provinces that had parallel AVEC, revealing a spatial structure with attraction among high-high or low-low regions and repellency among high-low or low-high regions. By comparing the ordinary least squares (OLS) model with its spatial econometric counterparts, a spatial error model (SEM) was selected to analyze the impact of major economic determinants on AVEC. While the analytic results revealed a significant positive correlation between AVEC and economic development, other determinants showed some intricate influential patterns. The provinces endowed with rich energy reserves were inclined to consume much more energy than those without such endowments, whereas changing the economic structure by increasing the proportion of secondary and tertiary industries also tended to consume more energy. Both situations seem to reflect the fact that these provinces largely remained in economies supported by technologies of low energy efficiency during the period, while other parts of the country were rapidly modernized by adopting advanced technologies and more efficient industries. On the other hand, institutional change (i.e., marketization) and innovation (i.e., technological progress) exerted positive impacts on AVEC improvement, as expected in this and other studies. Finally, the model comparison indicated that SEM was capable of separating the spatial effect from the error term of OLS, thereby improving goodness-of-fit and the significance level of individual determinants.
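
    A small self-contained sketch of the global Moran's I statistic that underlies the spatial autocorrelation test and Moran scatterplot described above is given below, computed on a toy lattice of regions with a row-standardized contiguity matrix (not the actual provincial data).

    ```python
    # Global Moran's I on a toy lattice of "provinces" with rook contiguity.
    import numpy as np

    rng = np.random.default_rng(10)

    nr, nc = 5, 6
    n = nr * nc
    W = np.zeros((n, n))
    for i in range(nr):
        for j in range(nc):
            a = i * nc + j
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < nr and 0 <= jj < nc:
                    W[a, ii * nc + jj] = 1.0
    W = W / W.sum(axis=1, keepdims=True)            # row-standardize

    # Spatially clustered per-capita consumption (toy east-west gradient + noise)
    x = np.repeat(np.linspace(1.0, 3.0, nc)[None, :], nr, axis=0).ravel()
    x = x + 0.2 * rng.standard_normal(n)

    z = x - x.mean()
    moran_i = (n / W.sum()) * (z @ W @ z) / (z @ z)
    print(f"global Moran's I = {moran_i:.3f} "
          f"(expected approx {-1 / (n - 1):.3f} under no autocorrelation)")
    ```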

  11. Comparison of retracked coastal altimetry sea levels against high frequency radar on the continental shelf of the Great Barrier Reef, Australia

    NASA Astrophysics Data System (ADS)

    Idris, Nurul Hazrina; Deng, Xiaoli; Idris, Nurul Hawani

    2017-07-01

    A comparison of Jason-1 retracked altimetry sea levels and high-frequency (HF) radar velocities is examined within the region of the Great Barrier Reef, Australia. The comparison between the two datasets is not direct because the altimetry provides only the geostrophic component, while the HF radar velocity includes both geostrophic and ageostrophic components, such as tides and winds. The comparison of altimetry and HF radar data is therefore performed on the surface velocity inferred from both datasets. The results show that 48% (10 out of 21 cases) of the data have high (≥0.5) spatial correlation. The mean spatial correlation over all 21 cases is 0.43. This value is within the range (0.42 to 0.5) observed in other studies. Low correlation is observed where the velocity signals disagree in trend, sometimes with contradictions in signal direction or with the position of the peak shifted. In terms of standard deviation of the difference and root mean square error, the two datasets show reasonable agreement, with values ≤2.5 cm s-1.

  12. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, R.L.; Christensen, S.

    2006-01-01

    Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Yβ*, where Y is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Yβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Yβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Yβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test the robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.

  13. Anatomic, clinical, and neuropsychological correlates of spelling errors in primary progressive aphasia.

    PubMed

    Shim, Hyungsub; Hurley, Robert S; Rogalski, Emily; Mesulam, M-Marsel

    2012-07-01

    This study evaluates spelling errors in the three subtypes of primary progressive aphasia (PPA): agrammatic (PPA-G), logopenic (PPA-L), and semantic (PPA-S). Forty-one PPA patients and 36 age-matched healthy controls were administered a test of spelling. The total number of errors and types of errors in spelling to dictation of regular words, exception words and nonwords, were recorded. Error types were classified based on phonetic plausibility. In the first analysis, scores were evaluated by clinical diagnosis. Errors in spelling exception words and phonetically plausible errors were seen in PPA-S. Conversely, PPA-G was associated with errors in nonword spelling and phonetically implausible errors. In the next analysis, spelling scores were correlated to other neuropsychological language test scores. Significant correlations were found between exception word spelling and measures of naming and single word comprehension. Nonword spelling correlated with tests of grammar and repetition. Global language measures did not correlate significantly with spelling scores, however. Cortical thickness analysis based on MRI showed that atrophy in several language regions of interest were correlated with spelling errors. Atrophy in the left supramarginal gyrus and inferior frontal gyrus (IFG) pars orbitalis correlated with errors in nonword spelling, while thinning in the left temporal pole and fusiform gyrus correlated with errors in exception word spelling. Additionally, phonetically implausible errors in regular word spelling correlated with thinning in the left IFG pars triangularis and pars opercularis. Together, these findings suggest two independent systems for spelling to dictation, one phonetic (phoneme to grapheme conversion), and one lexical (whole word retrieval). Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. A New Methodology of Spatial Cross-Correlation Analysis

    PubMed Central

    Chen, Yanguang

    2015-01-01

    Spatial correlation modeling comprises both spatial autocorrelation and spatial cross-correlation processes. The spatial autocorrelation theory has been well-developed. It is necessary to advance the method of spatial cross-correlation analysis to supplement the autocorrelation analysis. This paper presents a set of models and analytical procedures for spatial cross-correlation analysis. By analogy with Moran’s index newly expressed in a spatial quadratic form, a theoretical framework is derived for geographical cross-correlation modeling. First, two sets of spatial cross-correlation coefficients are defined, including a global spatial cross-correlation coefficient and local spatial cross-correlation coefficients. Second, a pair of scatterplots of spatial cross-correlation is proposed, and the plots can be used to visually reveal the causality behind spatial systems. Based on the global cross-correlation coefficient, Pearson’s correlation coefficient can be decomposed into two parts: direct correlation (partial correlation) and indirect correlation (spatial cross-correlation). As an example, the methodology is applied to the relationships between China’s urbanization and economic development to illustrate how to model spatial cross-correlation phenomena. This study is an introduction to developing the theory of spatial cross-correlation, and future geographical spatial analysis might benefit from these models and indexes. PMID:25993120

  15. A new methodology of spatial cross-correlation analysis.

    PubMed

    Chen, Yanguang

    2015-01-01

    Spatial correlation modeling comprises both spatial autocorrelation and spatial cross-correlation processes. The spatial autocorrelation theory has been well-developed. It is necessary to advance the method of spatial cross-correlation analysis to supplement the autocorrelation analysis. This paper presents a set of models and analytical procedures for spatial cross-correlation analysis. By analogy with Moran's index newly expressed in a spatial quadratic form, a theoretical framework is derived for geographical cross-correlation modeling. First, two sets of spatial cross-correlation coefficients are defined, including a global spatial cross-correlation coefficient and local spatial cross-correlation coefficients. Second, a pair of scatterplots of spatial cross-correlation is proposed, and the plots can be used to visually reveal the causality behind spatial systems. Based on the global cross-correlation coefficient, Pearson's correlation coefficient can be decomposed into two parts: direct correlation (partial correlation) and indirect correlation (spatial cross-correlation). As an example, the methodology is applied to the relationships between China's urbanization and economic development to illustrate how to model spatial cross-correlation phenomena. This study is an introduction to developing the theory of spatial cross-correlation, and future geographical spatial analysis might benefit from these models and indexes.
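
    As one concrete instance of the class of coefficients discussed above, the sketch below computes a Moran-type global spatial cross-correlation coefficient (a bivariate Moran statistic) in quadratic form on toy data; it may differ in detail from the paper's exact definitions.

    ```python
    # Moran-type global spatial cross-correlation between two variables observed
    # on the same lattice, compared with the ordinary Pearson correlation.
    import numpy as np

    rng = np.random.default_rng(11)

    def row_standardized_lattice_weights(nr, nc):
        """Rook-contiguity spatial weights on an nr x nc lattice, row-standardized."""
        n = nr * nc
        W = np.zeros((n, n))
        for i in range(nr):
            for j in range(nc):
                a = i * nc + j
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < nr and 0 <= jj < nc:
                        W[a, ii * nc + jj] = 1.0
        return W / W.sum(axis=1, keepdims=True)

    def spatial_cross_correlation(x, y, W):
        """Global Moran-type cross-correlation: zx' W zy / n for standardized zx, zy."""
        zx = (x - x.mean()) / x.std()
        zy = (y - y.mean()) / y.std()
        return float(zx @ W @ zy) / len(x)

    nr, nc = 8, 8
    W = row_standardized_lattice_weights(nr, nc)

    # Toy urbanization and economic-development surfaces sharing a spatial trend
    trend = np.add.outer(np.linspace(0, 1, nr), np.linspace(0, 1, nc)).ravel()
    urban = trend + 0.2 * rng.standard_normal(nr * nc)
    econ = 0.8 * trend + 0.2 * rng.standard_normal(nr * nc)

    print("Pearson r                :", round(float(np.corrcoef(urban, econ)[0, 1]), 3))
    print("spatial cross-correlation:", round(spatial_cross_correlation(urban, econ, W), 3))
    ```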

  16. A Simple and Universal Aerosol Retrieval Algorithm for Landsat Series Images Over Complex Surfaces

    NASA Astrophysics Data System (ADS)

    Wei, Jing; Huang, Bo; Sun, Lin; Zhang, Zhaoyang; Wang, Lunche; Bilal, Muhammad

    2017-12-01

    Operational aerosol optical depth (AOD) products are available at coarse spatial resolutions from several to tens of kilometers. These resolutions limit the application of these products for monitoring atmospheric pollutants at the city level. Therefore, a simple, universal, and high-resolution (30 m) Landsat aerosol retrieval algorithm over complex urban surfaces is developed. The surface reflectance is estimated from a combination of top of atmosphere reflectance at short-wave infrared (2.22 μm) and Landsat 4-7 surface reflectance climate data records over densely vegetated areas and bright areas. The aerosol type is determined using the historical aerosol optical properties derived from the local urban Aerosol Robotic Network (AERONET) site (Beijing). AERONET ground-based sun photometer AOD measurements from five sites located in urban and rural areas are obtained to validate the AOD retrievals. Terra MODerate resolution Imaging Spectrometer Collection (C) 6 AOD products (MOD04) including the dark target (DT), the deep blue (DB), and the combined DT and DB (DT&DB) retrievals at 10 km spatial resolution are obtained for comparison purposes. Validation results show that the Landsat AOD retrievals at a 30 m resolution are well correlated with the AERONET AOD measurements (R2 = 0.932) and that approximately 77.46% of the retrievals fall within the expected error with a low mean absolute error of 0.090 and a root-mean-square error of 0.126. Comparison results show that Landsat AOD retrievals are overall better and less biased than MOD04 AOD products, indicating that the new algorithm is robust and performs well in AOD retrieval over complex surfaces. The new algorithm can provide continuous and detailed spatial distributions of AOD during both low and high aerosol loadings.

  17. Errors and uncertainties in regional climate simulations of rainfall variability over Tunisia: a multi-model and multi-member approach

    NASA Astrophysics Data System (ADS)

    Fathalli, Bilel; Pohl, Benjamin; Castel, Thierry; Safi, Mohamed Jomâa

    2018-02-01

    Temporal and spatial variability of rainfall over Tunisia (at 12 km spatial resolution) is analyzed in a multi-year (1992-2011) ten-member ensemble simulation performed using the WRF model, and a sample of regional climate hindcast simulations from Euro-CORDEX. RCM errors and skills are evaluated against a dense network of local rain gauges. Uncertainties arising, on the one hand, from the different model configurations and, on the other hand, from internal variability are furthermore quantified and ranked at different timescales using simple spread metrics. Overall, the WRF simulation shows good skill for simulating spatial patterns of rainfall amounts over Tunisia, marked by strong altitudinal and latitudinal gradients, as well as the rainfall interannual variability, in spite of systematic errors. Mean rainfall biases are wet in both DJF and JJA seasons for the WRF ensemble, while they are dry in winter and wet in summer for most of the used Euro-CORDEX models. The sign of mean annual rainfall biases over Tunisia can also change from one member of the WRF ensemble to another. Skills in regionalizing precipitation over Tunisia are season dependent, with better correlations and weaker biases in winter. Larger inter-member spreads are observed in summer, likely because of (1) an attenuated large-scale control on Mediterranean and Tunisian climate, and (2) a larger contribution of local convective rainfall to the seasonal amounts. Inter-model uncertainties are globally stronger than those attributed to model's internal variability. However, inter-member spreads can be of the same magnitude in summer, emphasizing the important stochastic nature of the summertime rainfall variability over Tunisia.

  18. Effect of correlated observation error on parameters, predictions, and uncertainty

    USGS Publications Warehouse

    Tiedeman, Claire; Green, Christopher T.

    2013-01-01

    Correlations among observation errors are typically omitted when calculating observation weights for model calibration by inverse methods. We explore the effects of omitting these correlations on estimates of parameters, predictions, and uncertainties. First, we develop a new analytical expression for the difference in parameter variance estimated with and without error correlations for a simple one-parameter two-observation inverse model. Results indicate that omitting error correlations from both the weight matrix and the variance calculation can either increase or decrease the parameter variance, depending on the values of error correlation (ρ) and the ratio of dimensionless scaled sensitivities (rdss). For small ρ, the difference in variance is always small, but for large ρ, the difference varies widely depending on the sign and magnitude of rdss. Next, we consider a groundwater reactive transport model of denitrification with four parameters and correlated geochemical observation errors that are computed by an error-propagation approach that is new for hydrogeologic studies. We compare parameter estimates, predictions, and uncertainties obtained with and without the error correlations. Omitting the correlations modestly to substantially changes parameter estimates, and causes both increases and decreases of parameter variances, consistent with the analytical expression. Differences in predictions for the models calibrated with and without error correlations can be greater than parameter differences when both are considered relative to their respective confidence intervals. These results indicate that including observation error correlations in weighting for nonlinear regression can have important effects on parameter estimates, predictions, and their respective uncertainties.
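
    Below is a hedged numerical illustration in the spirit of the one-parameter, two-observation case: the true variance of the weighted estimator when the error correlation rho is included in the weight matrix versus when it is ignored. The sensitivities and error sizes are arbitrary, and the paper's analytical expression is not reproduced.

    ```python
    # Variance of a weighted least-squares estimate with a full versus a diagonal
    # weight matrix, for two observations with correlated errors.
    import numpy as np

    def param_variance(X, C, W):
        """Variance of b = (X'WX)^-1 X'W y when the true error covariance is C."""
        A = np.linalg.inv(X.T @ W @ X) @ X.T @ W          # estimator is A @ y
        return (A @ C @ A.T).item()

    sigma = 1.0
    for rho in (0.1, 0.9):
        for s2 in (0.5, 2.0):                             # second observation's sensitivity
            X = np.array([[1.0], [s2]])                   # sensitivities of the 2 observations
            C = sigma ** 2 * np.array([[1.0, rho], [rho, 1.0]])   # true error covariance
            var_full = param_variance(X, C, np.linalg.inv(C))     # correlation in the weights
            var_diag = param_variance(X, C, np.eye(2))            # correlation omitted
            print(f"rho={rho:.1f}, sensitivity ratio={s2:.1f}: "
                  f"var(full W)={var_full:.3f}, var(diagonal W)={var_diag:.3f}")
    ```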

  19. Model Errors in Simulating Precipitation and Radiation fields in the NARCCAP Hindcast Experiment

    NASA Astrophysics Data System (ADS)

    Kim, J.; Waliser, D. E.; Mearns, L. O.; Mattmann, C. A.; McGinnis, S. A.; Goodale, C. E.; Hart, A. F.; Crichton, D. J.

    2012-12-01

    The relationship between the model errors in simulating precipitation and radiation fields including the surface insolation and OLR, is examined from the multi-RCM NARCCAP hindcast experiment for the conterminous U.S. region. Findings in this study suggest that the RCM biases in simulating precipitation are related with those in simulating radiation fields. For a majority of RCMs participated in the NARCCAP hindcast experiment as well as their ensemble, the spatial pattern of the insolation bias is negatively correlated with that of the precipitation bias, suggesting that the biases in precipitation and surface insolation are systematically related, most likely via the cloud fields. The relationship varies according to seasons as well with stronger relationship between the simulated precipitation and surface insolation during winter. This suggests that the RCM biases in precipitation and radiation are related via cloud fields. Additional analysis on the RCM errors in OLR is underway to examine more details of this relationship.

  20. Real-time correction of beamforming time delay errors in abdominal ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Rigby, K. W.

    2000-04-01

    The speed of sound varies with tissue type, yet commercial ultrasound imagers assume a constant sound speed. Sound speed variation in abdominal fat and muscle layers is widely believed to be largely responsible for poor contrast and resolution in some patients. The simplest model of the abdominal wall assumes that it adds a spatially varying time delay to the ultrasound wavefront. The adequacy of this model is controversial. We describe an adaptive imaging system consisting of a GE LOGIQ 700 imager connected to a multi-processor computer. Arrival time errors for each beamforming channel, estimated by correlating each channel signal with the beamsummed signal, are used to correct the imager's beamforming time delays at the acoustic frame rate. A multi-row transducer provides two-dimensional sampling of arrival time errors. We observe significant improvement in abdominal images of healthy male volunteers: increased contrast of blood vessels, increased visibility of the renal capsule, and increased brightness of the liver.
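
    A sketch of the core estimation step described above follows: cross-correlate each channel with the beamsummed signal and take the lag of the correlation peak as that channel's arrival-time error. The pulse shape, sampling rate and simulated delays are assumptions, not parameters of the cited system.

    ```python
    # Per-channel arrival-time error estimation by cross-correlation with the beamsum.
    import numpy as np

    rng = np.random.default_rng(12)
    fs = 40e6                                     # sample rate (Hz), assumed
    n = 1024
    t = np.arange(n) / fs

    def pulse(delay):
        """Broadband Gaussian pulse arriving near 8 us, shifted by `delay` seconds."""
        tc = 8e-6 + delay
        return np.exp(-((t - tc) ** 2) / (2 * (0.2e-6) ** 2))

    true_delays = np.array([0.0, 3.0, -2.0, 5.0, -4.0, 1.0]) / fs   # per-channel errors (s)
    channels = np.array([pulse(d) for d in true_delays])
    channels += 0.02 * rng.standard_normal(channels.shape)
    beamsum = channels.sum(axis=0)

    est = []
    for ch in channels:
        xc = np.correlate(ch, beamsum, mode="full")
        lag = np.argmax(xc) - (n - 1)             # samples by which ch lags the beamsum
        est.append(lag / fs)
    est = np.array(est)

    # Channel delays are only defined up to a common offset relative to the beamsum
    print("true delays (samples, de-meaned):", np.round((true_delays - true_delays.mean()) * fs, 1))
    print("estimated   (samples, de-meaned):", np.round((est - est.mean()) * fs, 1))
    ```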

  1. Spatial measurement error and correction by spatial SIMEX in linear regression models when using predicted air pollution exposures.

    PubMed

    Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent

    2016-04-01

    Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
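
    A hedged sketch of the generic (non-spatial) SIMEX idea follows: add extra simulated measurement error at levels lambda, refit at each level, and extrapolate the coefficient back to lambda = -1. The paper's spatial SIMEX handles spatially correlated exposure error, which is not reproduced here; the error variance and effect size are arbitrary.

    ```python
    # Generic SIMEX correction for classical measurement error in a linear model.
    import numpy as np

    rng = np.random.default_rng(13)
    n = 5000
    beta = 0.5
    sigma_u = 0.6                                   # measurement error SD (assumed known)

    x_true = rng.standard_normal(n)
    y = beta * x_true + 0.5 * rng.standard_normal(n)
    w = x_true + sigma_u * rng.standard_normal(n)   # error-prone exposure

    def slope(x, y):
        return np.polyfit(x, y, 1)[0]

    lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    n_sim = 50
    b_lambda = []
    for lam in lambdas:
        sims = [slope(w + np.sqrt(lam) * sigma_u * rng.standard_normal(n), y)
                for _ in range(n_sim)]
        b_lambda.append(np.mean(sims))

    # Quadratic extrapolation of b(lambda) back to lambda = -1 (no measurement error)
    coef = np.polyfit(lambdas, b_lambda, 2)
    b_simex = np.polyval(coef, -1.0)

    print(f"naive slope (lambda=0): {b_lambda[0]:.3f}")
    print(f"SIMEX-corrected slope:  {b_simex:.3f}")
    print(f"true slope:             {beta:.3f}")
    ```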

  2. Assessing the short-term clock drift of early broadband stations with burst events of the 26 s persistent and localized microseism

    NASA Astrophysics Data System (ADS)

    Xie, Jun; Ni, Sidao; Chu, Risheng; Xia, Yingjie

    2018-01-01

    An accurate seismometer clock plays an important role in seismological studies including earthquake location and tomography. However, some seismic stations may have clock drift larger than 1 s (e.g. GSC in 1992), especially in the early days of global seismic networks. The 26 s Persistent Localized (PL) microseism in the Gulf of Guinea sometimes excites strong and coherent signals, and can be used as a repeating source for assessing the stability of seismometer clocks. Taking stations GSC, PAS and PFO in the TERRAscope network as an example, the 26 s PL signal can be easily observed in the ambient noise cross-correlation function between these stations and the remote station OBN, with an interstation distance of about 9700 km. The travel-time variation of this 26 s signal in the ambient noise cross-correlation function is used to infer clock error. A drastic clock error is detected during June 1992 for station GSC, but not for stations PAS and PFO. This short-term clock error is confirmed by both teleseismic and local earthquake records and has a magnitude of 25 s. Averaged over the three stations, the accuracy of the ambient noise cross-correlation function method with the 26 s source is about 0.3-0.5 s. Using this PL source, the clock can be validated for historical records of sparsely distributed stations, where the usual cross-correlation of short-period (<20 s) ambient noise might be less effective due to its attenuation over long interstation distances. However, this method suffers from a cycle ambiguity problem, and should be verified with teleseismic/local P waves. Further studies are also needed to investigate whether the 26 s source moves spatially and what effect this has on clock drift detection.
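
    A sketch of measuring the apparent travel-time shift of a narrowband ~26 s signal between a reference cross-correlation function and a later one is shown below, with a parabolic sub-sample refinement of the peak lag; the waveforms are synthetic and this is not the authors' full workflow.

    ```python
    # Travel-time shift of a 26 s wave packet between two cross-correlation
    # functions, interpreted here as clock error.
    import numpy as np

    fs = 1.0                                        # samples per second
    t = np.arange(-600, 600, 1.0 / fs)

    def ccf_signal(shift):
        """26 s wave packet centred near +300 s lag, delayed by `shift` seconds."""
        env = np.exp(-((t - 300 - shift) ** 2) / (2 * 60.0 ** 2))
        return env * np.cos(2 * np.pi * (t - shift) / 26.0)

    ref = ccf_signal(0.0)
    cur = ccf_signal(2.3)                           # simulate a 2.3 s clock drift

    xc = np.correlate(cur, ref, mode="full")
    i = np.argmax(xc)
    # Parabolic interpolation around the peak for sub-sample precision
    y0, y1, y2 = xc[i - 1], xc[i], xc[i + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    lag = (i - (len(ref) - 1) + frac) / fs

    print(f"measured shift: {lag:.2f} s (simulated drift was 2.30 s)")
    ```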

  3. Soil pH Errors Propagation from Measurements to Spatial Predictions - Cost Benefit Analysis and Risk Assessment Implications for Practitioners and Modelers

    NASA Astrophysics Data System (ADS)

    Owens, P. R.; Libohova, Z.; Seybold, C. A.; Wills, S. A.; Peaslee, S.; Beaudette, D.; Lindbo, D. L.

    2017-12-01

    The measurement errors and spatial prediction uncertainties of soil properties in the modeling community are usually assessed against measured values when available. However, of equal importance is the assessment of error and uncertainty impacts on cost-benefit analyses and risk assessments. Soil pH was selected as one of the most commonly measured soil properties used for liming recommendations. The objective of this study was to assess the size of errors from different sources and their implications with respect to management decisions. Error sources include measurement methods, laboratory sources, pedotransfer functions, database transactions, spatial aggregations, etc. Several databases of measured and predicted soil pH were used for this study, including the United States National Cooperative Soil Survey Characterization Database (NCSS-SCDB) and the US Soil Survey Geographic (SSURGO) Database. The distribution of errors among the different sources, from measurement methods to spatial aggregation, showed a wide range of values. The greatest RMSE of 0.79 pH units was from spatial aggregation (SSURGO vs Kriging), while the measurement methods had the lowest RMSE of 0.06 pH units. Assuming the order of data acquisition based on the transaction distance, i.e. from measurement method to spatial aggregation, the RMSE increased from 0.06 to 0.8 pH units, suggesting an "error propagation". This has major implications for practitioners and the modeling community. Most soil liming rate recommendations are based on 0.1 pH unit increments, while the desired soil pH level increments are based on 0.4 to 0.5 pH units. Thus, even when the measured and desired target soil pH are the same, most guidelines recommend 1 ton ha-1 of lime, which translates into about 111 ha-1 that the farmer has to factor into the cost-benefit analysis. However, this analysis needs to be based on uncertainty predictions (0.5-1.0 pH units) rather than measurement errors (0.1 pH units), which would translate into an investment of 555-1,111 that needs to be assessed against the risk. The modeling community can benefit from such analyses; however, error size and spatial distribution for global and regional predictions need to be assessed against the variability of other drivers and their impact on management decisions.

  4. Design Considerations of Polishing Lap for Computer-Controlled Cylindrical Polishing Process

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Arnold, William; Ramsey, Brian D.

    2009-01-01

    This paper establishes a relationship between the polishing process parameters and the generation of mid spatial-frequency error. Design considerations for the polishing lap, together with optimization of the process parameters (speeds, stroke, etc.) to keep the residual mid spatial-frequency error to a minimum, are also presented.

  5. A reverberation-time-aware DNN approach leveraging spatial information for microphone array dereverberation

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Yang, Minglei; Li, Kehuang; Huang, Zhen; Siniscalchi, Sabato Marco; Wang, Tong; Lee, Chin-Hui

    2017-12-01

    A reverberation-time-aware deep-neural-network (DNN)-based multi-channel speech dereverberation framework is proposed to handle a wide range of reverberation times (RT60s). There are three key steps in designing a robust system. First, to accomplish simultaneous speech dereverberation and beamforming, we propose a framework, namely DNNSpatial, that selectively concatenates log-power spectral (LPS) input features of reverberant speech from multiple microphones in an array and maps them into the expected output LPS features of anechoic reference speech using a single deep neural network (DNN). Next, the temporal auto-correlation function of received signals at different RT60s is investigated to show that RT60-dependent temporal-spatial contexts in feature selection are needed in the DNNSpatial training stage in order to optimize the system performance in diverse reverberant environments. Finally, the RT60 is estimated to select the proper temporal and spatial contexts before feeding the log-power spectrum features to the trained DNNs for speech dereverberation. The experimental evidence gathered in this study indicates that the proposed framework outperforms the state-of-the-art signal processing dereverberation algorithm weighted prediction error (WPE) and conventional DNNSpatial systems that do not take the reverberation time into account, even for extremely weak and severe reverberant conditions. The proposed technique generalizes well to unseen room size, array geometry and loudspeaker position, and is robust to reverberation time estimation error.

  6. Unique characteristics of motor adaptation during walking in young children.

    PubMed

    Musselman, Kristin E; Patrick, Susan K; Vasudevan, Erin V L; Bastian, Amy J; Yang, Jaynie F

    2011-05-01

    Children show precocious ability in the learning of languages; is this the case with motor learning? We used split-belt walking to probe motor adaptation (a form of motor learning) in children. Data from 27 children (ages 8-36 mo) were compared with those from 10 adults. Children walked with the treadmill belts at the same speed (tied belt), followed by walking with the belts moving at different speeds (split belt) for 8-10 min, followed again by tied-belt walking (postsplit). Initial asymmetries in temporal coordination (i.e., double support time) induced by split-belt walking were slowly reduced, with most children showing an aftereffect (i.e., asymmetry in the opposite direction to the initial) in the early postsplit period, indicative of learning. In contrast, asymmetries in spatial coordination (i.e., center of oscillation) persisted during split-belt walking and no aftereffect was seen. Step length, a measure of both spatial and temporal coordination, showed intermediate effects. The time course of learning in double support and step length was slower in children than in adults. Moreover, there was a significant negative correlation between the size of the initial asymmetry during early split-belt walking (called error) and the aftereffect for step length. Hence, children may have more difficulty learning when the errors are large. The findings further suggest that the mechanisms controlling temporal and spatial adaptation are different and mature at different times.

  7. Fringe order correction for the absolute phase recovered by two selected spatial frequency fringe projections in fringe projection profilometry.

    PubMed

    Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun

    2017-08-01

    The performance of two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors occur in the fringe order, leading to significant errors in the recovered absolute phase map. In this paper, we propose a method to detect and correct such wrong fringe orders. Two constraints are introduced during the fringe order determination of the two selected spatial frequency phase unwrapping methods, and a strategy to detect and correct the wrong fringe orders is described. Compared with existing methods, we do not need to estimate a threshold on the absolute phase values to determine the fringe order error, which makes the approach more reliable and avoids a search procedure when detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by the experimental results.
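
    The standard two-frequency unwrapping step that such methods build on can be sketched as below. This is a simplified hierarchical version, assuming the lower frequency has a single fringe across the field so that its phase is already absolute; the paper's contribution concerns detecting and correcting the fringe-order flips this step produces when the phase noise exceeds the bound.

    ```python
    import numpy as np

    def unwrap_two_freq(phi_low_abs, phi_high_wrapped, f_low, f_high):
        """Hierarchical two-frequency unwrapping (simplified sketch): the absolute
        low-frequency phase predicts the high-frequency phase, and the fringe
        order k is the nearest integer offset. Noise beyond a bound flips k by
        +/-1, which is the error the paper detects and corrects."""
        phi_pred = phi_low_abs * (f_high / f_low)
        k = np.round((phi_pred - phi_high_wrapped) / (2.0 * np.pi))
        return phi_high_wrapped + 2.0 * np.pi * k

    # Toy example: a linear absolute phase ramp observed at 1 and 16 fringes.
    x = np.linspace(0.0, 1.0, 1000, endpoint=False)
    true_high = 2 * np.pi * 16 * x
    phi_low = 2 * np.pi * 1 * x                  # single fringe: already absolute
    phi_high = np.mod(true_high, 2 * np.pi)      # wrapped high-frequency phase
    print(np.allclose(unwrap_two_freq(phi_low, phi_high, 1, 16), true_high))
    ```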

  8. Mismeasurement and the resonance of strong confounders: correlated errors.

    PubMed

    Marshall, J R; Hastrup, J L; Ross, J S

    1999-07-01

    Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.
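
    As a toy illustration of the effect described (not the paper's analysis), the simulation below regresses an outcome on a strong risk factor X and an inconsequential correlate Z, both measured with errors whose correlation is varied; all coefficients and noise levels are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000

    # True structure: the outcome depends only on the strong risk factor X;
    # Z is correlated with X but has no effect of its own.
    x = rng.normal(size=n)
    z = 0.6 * x + 0.8 * rng.normal(size=n)
    y = 1.0 * x + rng.normal(size=n)

    def fit(y, X):
        """Ordinary least squares slopes (intercept included)."""
        A = np.column_stack([np.ones(len(y)), X])
        return np.linalg.lstsq(A, y, rcond=None)[0][1:]

    for rho in (0.0, 0.7):
        # Measurement errors on X and Z with correlation rho.
        cov = [[1.0, rho], [rho, 1.0]]
        e = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        x_obs, z_obs = x + e[:, 0], z + e[:, 1]
        bx, bz = fit(y, np.column_stack([x_obs, z_obs]))
        # Compare how beta_Z (truly zero) shifts when the errors become correlated.
        print(f"error correlation {rho}: beta_X = {bx:.3f}, beta_Z = {bz:.3f}")
    ```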

  9. Effects of Correlated Errors on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, Andres; Jacobs, C. S.

    2011-01-01

    As thermal errors are reduced, instrumental and tropospheric correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.

  10. Positron Emission Tomography for Pre-Clinical Sub-Volume Dose Escalation

    NASA Astrophysics Data System (ADS)

    Bass, Christopher Paul

    Purpose: This dissertation focuses on establishment of pre-clinical methods facilitating the use of PET imaging for selective sub-volume dose escalation. Specifically the problems addressed are 1.) The difficulties associated with comparing multiple PET images, 2.) The need for further validation of novel PET tracers before their implementation in dose escalation schema and 3.) The lack of concrete pre-clinical data supporting the use of PET images for guidance of selective sub-volume dose escalations. Methods and materials: In order to compare multiple PET images the confounding effects of mispositioning and anatomical change between imaging sessions needed to be alleviated. To mitigate the effects of these sources of error, deformable image registration was employed. A deformable registration algorithm was selected and the registration error was evaluated via the introduction of external fiducials to the tumor. Once a method for image registration was established, a procedure for validating the use of novel PET tracers with FDG was developed. Nude mice were used to perform in-vivo comparisons of the spatial distributions of two PET tracers, FDG and FLT. The spatial distributions were also compared across two separate tumor lines to determine the effects of tumor morphology on spatial distribution. Finally, the research establishes a method for acquiring pre-clinical data supporting the use of PET for image-guidance in selective dose escalation. Nude mice were imaged using only FDG PET/CT and the resulting images were used to plan PET-guided dose escalations to a 5 mm sub-volume within the tumor that contained the highest PET tracer uptake. These plans were then delivered using the Small Animal Radiation Research Platform (SARRP) and the efficacy of the PET-guided plans was observed. Results and Conclusions: The analysis of deformable registration algorithms revealed that the BRAINSFit B-spline deformable registration algorithm available in SLICER3D was capable of registering small animal PET/CT data sets in less than 5 minutes with an average registration error of .3 mm. The methods used in chapter 3 allowed for the comparison of the spatial distributions of multiple PET tracers imaged at different times. A comparison of FDG and FLT showed that both are positively correlated but that tumor morphology does significantly affect the correlation between the two tracers. An overlap analysis of the high intensity PET regions of FDG and FLT showed that FLT offers additional spatial information to that seen with FDG. In chapter 4 the SARRP allowed for the delivery of planned PET-guided selective dose escalations to a pre-clinical tumor model. This will facilitate future research validating the use of PET for clinical selective dose escalation.

  11. Novel modes and adaptive block scanning order for intra prediction in AV1

    NASA Astrophysics Data System (ADS)

    Hadar, Ofer; Shleifer, Ariel; Mukherjee, Debargha; Joshi, Urvang; Mazar, Itai; Yuzvinsky, Michael; Tavor, Nitzan; Itzhak, Nati; Birman, Raz

    2017-09-01

    The demand for streaming video content is rising exponentially. Network bandwidth is very costly, and there is therefore a constant effort to improve video compression rates so that reduced data volumes can be sent while retaining quality of experience (QoE). One basic feature that exploits the spatial correlation of pixels for video compression is intra prediction, which determines the codec's compression efficiency. Intra prediction enables a significant reduction of the intra-frame (I-frame) size and therefore contributes to efficient exploitation of bandwidth. In this presentation, we propose new intra-prediction algorithms that improve the AV1 prediction model and provide better compression ratios. Two types of methods are considered: (1) a new scanning order method that maximizes spatial correlation in order to reduce prediction error; and (2) new intra-prediction modes implemented in AV1. Modern video coding standards, including the AV1 codec, use fixed scan orders when processing blocks during intra coding. Fixed scan orders typically result in residual blocks with high prediction error, mainly in blocks with edges. This means that fixed scan orders cannot fully exploit the content-adaptive spatial correlations between adjacent blocks, so the bitrate after compression tends to be large. To reduce the bitrate induced by inaccurate intra prediction, the proposed approach adaptively chooses the scanning order of blocks according to the criterion of first predicting blocks with the maximum number of surrounding, already inter-predicted blocks. Using the modified scanning order method and the new modes reduced the MSE by up to five times compared to the conventional TM mode with raster scan, and by up to two times compared to the conventional CALIC mode with raster scan, depending on the image characteristics (which determine the percentage of blocks predicted with inter prediction and, in turn, the efficiency of the new scanning method). For the same cases, the PSNR improved by up to 7.4 dB and up to 4 dB, respectively. The new modes yielded a 5% improvement in BD-rate over traditionally used modes when run on key frames, which is expected to yield about 1% overall improvement.
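
    A schematic sketch of the adaptive scanning idea (illustrative only, not the AV1 implementation): blocks are coded in order of how many already-reconstructed neighbours they have, so that intra prediction can draw on the maximum number of surrounding predictors.

    ```python
    import numpy as np

    def adaptive_scan_order(inter_mask):
        """Greedy block scanning order: at each step, pick the not-yet-coded intra
        block with the most already-available neighbours (inter-predicted or
        previously coded intra). inter_mask is a 2-D boolean array marking blocks
        already reconstructed by inter prediction."""
        h, w = inter_mask.shape
        available = inter_mask.copy()
        todo = {(r, c) for r in range(h) for c in range(w) if not inter_mask[r, c]}
        order = []
        while todo:
            def n_avail(rc):
                r, c = rc
                nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                return sum(1 for rr, cc in nbrs
                           if 0 <= rr < h and 0 <= cc < w and available[rr, cc])
            best = max(todo, key=n_avail)        # most surrounding predictors first
            order.append(best)
            available[best] = True
            todo.remove(best)
        return order

    # Toy 4x4 example: True = block already inter-predicted.
    mask = np.array([[1, 0, 1, 0],
                     [0, 0, 0, 1],
                     [1, 0, 1, 0],
                     [0, 1, 0, 0]], dtype=bool)
    print(adaptive_scan_order(mask))
    ```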

  12. An Improved GRACE Terrestrial Water Storage Assimilation System For Estimating Large-Scale Soil Moisture and Shallow Groundwater

    NASA Astrophysics Data System (ADS)

    Girotto, M.; De Lannoy, G. J. M.; Reichle, R. H.; Rodell, M.

    2015-12-01

    The Gravity Recovery And Climate Experiment (GRACE) mission is unique because it provides highly accurate column integrated estimates of terrestrial water storage (TWS) variations. Major limitations of GRACE-based TWS observations are related to their monthly temporal and coarse spatial resolution (around 330 km at the equator), and to the vertical integration of the water storage components. These challenges can be addressed through data assimilation. To date, it is still not obvious how best to assimilate GRACE-TWS observations into a land surface model, in order to improve hydrological variables, and many details have yet to be worked out. This presentation discusses specific recent features of the assimilation of gridded GRACE-TWS data into the NASA Goddard Earth Observing System (GEOS-5) Catchment land surface model to improve soil moisture and shallow groundwater estimates at the continental scale. The major recent advancements introduced by the presented work with respect to earlier systems include: 1) the assimilation of gridded GRACE-TWS data product with scaling factors that are specifically derived for data assimilation purposes only; 2) the assimilation is performed through a 3D assimilation scheme, in which reasonable spatial and temporal error standard deviations and correlations are exploited; 3) the analysis step uses an optimized calculation and application of the analysis increments; 4) a poor-man's adaptive estimation of a spatially variable measurement error. This work shows that even if they are characterized by a coarse spatial and temporal resolution, the observed column integrated GRACE-TWS data have potential for improving our understanding of soil moisture and shallow groundwater variations.

  13. Optimal configurations of spatial scale for grid cell firing under noise and uncertainty

    PubMed Central

    Towse, Benjamin W.; Barry, Caswell; Bush, Daniel; Burgess, Neil

    2014-01-01

    We examined the accuracy with which the location of an agent moving within an environment could be decoded from the simulated firing of systems of grid cells. Grid cells were modelled with Poisson spiking dynamics and organized into multiple ‘modules’ of cells, with firing patterns of similar spatial scale within modules and a wide range of spatial scales across modules. The number of grid cells per module, the spatial scaling factor between modules and the size of the environment were varied. Errors in decoded location can take two forms: small errors of precision and larger errors resulting from ambiguity in decoding periodic firing patterns. With enough cells per module (e.g. eight modules of 100 cells each) grid systems are highly robust to ambiguity errors, even over ranges much larger than the largest grid scale (e.g. over a 500 m range when the maximum grid scale is 264 cm). Results did not depend strongly on the precise organization of scales across modules (geometric, co-prime or random). However, independent spatial noise across modules, which would occur if modules receive independent spatial inputs and might increase with spatial uncertainty, dramatically degrades the performance of the grid system. This effect of spatial uncertainty can be mitigated by uniform expansion of grid scales. Thus, in the realistic regimes simulated here, the optimal overall scale for a grid system represents a trade-off between minimizing spatial uncertainty (requiring large scales) and maximizing precision (requiring small scales). Within this view, the temporary expansion of grid scales observed in novel environments may be an optimal response to increased spatial uncertainty induced by the unfamiliarity of the available spatial cues. PMID:24366144

  14. Cognitive factors in the close visual and magnetic particle inspection of welds underwater.

    PubMed

    Leach, J; Morris, P E

    1998-06-01

    Underwater close visual inspection (CVI) and magnetic particle inspection (MPI) are major components of the commercial diver's job of nondestructive testing and the maintenance of subsea structures. We explored the accuracy of CVI in Experiment 1 and that of MPI in Experiment 2 and observed high error rates (47% and 24%, respectively). Performance was strongly correlated with embedded figures and visual search tests and was unrelated to length of professional diving experience, formal inspection qualification, or age. Cognitive tests of memory for designs, spatial relations, dotted outlines, and block design failed to correlate with performance. Actual or potential applications of this research include more reliable inspection reporting, increased effectiveness from current inspection techniques, and directions for the refinement of subsea inspection equipment.

  15. Performance evaluation of receive-diversity free-space optical communications over correlated Gamma-Gamma fading channels.

    PubMed

    Yang, Guowei; Khalighi, Mohammad-Ali; Ghassemlooy, Zabih; Bourennane, Salah

    2013-08-20

    The efficacy of spatial diversity in practical free-space optical communication systems is impaired by the fading correlation among the underlying subchannels. We consider in this paper the generation of correlated Gamma-Gamma random variables in view of evaluating the system outage probability and bit-error-rate under the condition of correlated fading. Considering the case of receive-diversity systems with intensity modulation and direct detection, we propose a set of criteria for setting the correlation coefficients on the small- and large-scale fading components based on scintillation theory. We verify these criteria using wave-optics simulations and further show through Monte Carlo simulations that we can effectively neglect the correlation corresponding to the small-scale turbulence in most practical systems, irrespective of the specific turbulence conditions. This has not been clarified before, to the best of our knowledge. We then present some numerical results to illustrate the effect of fading correlation on the system performance. Our conclusions can be generalized to the cases of multiple-beam and multiple-beam multiple-aperture systems.
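
    Gamma-Gamma irradiance is commonly modelled as the product of independent large- and small-scale Gamma components; one common way to impose correlation across receive apertures, not necessarily the authors' construction, is a Gaussian copula applied to each component separately, as sketched below with illustrative parameters.

    ```python
    import numpy as np
    from scipy import stats

    def correlated_gamma_gamma(alpha, beta, rho_large, rho_small, n, n_rx=2, seed=0):
        """Correlated Gamma-Gamma irradiance samples for n_rx receive apertures via
        a Gaussian copula (a common construction, not necessarily the paper's).
        alpha/beta: large-/small-scale Gamma shape parameters; rho_*: copula
        correlation applied to each scale separately."""
        rng = np.random.default_rng(seed)

        def correlated_gamma(shape, rho):
            cov = np.full((n_rx, n_rx), rho) + (1 - rho) * np.eye(n_rx)
            g = rng.multivariate_normal(np.zeros(n_rx), cov, size=n)  # correlated normals
            u = stats.norm.cdf(g)                                     # -> uniforms
            return stats.gamma.ppf(u, a=shape, scale=1.0 / shape)     # unit-mean Gamma

        x = correlated_gamma(alpha, rho_large)   # large-scale component
        y = correlated_gamma(beta, rho_small)    # small-scale component
        return x * y                             # Gamma-Gamma irradiance, unit mean

    I = correlated_gamma_gamma(alpha=4.0, beta=1.9, rho_large=0.6, rho_small=0.1, n=50_000)
    print(np.corrcoef(I.T)[0, 1])                # empirical irradiance correlation
    ```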

  16. Seismic gradiometry using ambient seismic noise in an anisotropic Earth

    NASA Astrophysics Data System (ADS)

    de Ridder, S. A. L.; Curtis, A.

    2017-05-01

    We introduce a wavefield gradiometry technique to estimate both isotropic and anisotropic local medium characteristics from short recordings of seismic signals by inverting a wave equation. The method exploits the information in the spatial gradients of a seismic wavefield that are calculated using dense deployments of seismic arrays. The application of the method uses the surface wave energy in the ambient seismic field. To estimate isotropic and anisotropic medium properties we invert an elliptically anisotropic wave equation. The spatial derivatives of the recorded wavefield are evaluated by calculating finite differences over nearby recordings, which introduces a systematic anisotropic error. A two-step approach corrects this error: finite difference stencils are first calibrated, then the output of the wave-equation inversion is corrected using the linearized impulse response to the inverted velocity anomaly. We test the procedure on ambient seismic noise recorded in a large and dense ocean bottom cable array installed over Ekofisk field. The estimated azimuthal anisotropy forms a circular geometry around the production-induced subsidence bowl. This conforms with results from studies employing controlled sources, and with interferometry correlating long records of seismic noise. Yet in this example, the results were obtained using only a few minutes of ambient seismic noise.
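
    The core idea of wavefield gradiometry can be sketched by least-squares fitting a local wave equation, regressing the second time derivative of the recorded wavefield against its Laplacian. The sketch below is isotropic and uses illustrative numbers, whereas the paper inverts an elliptically anisotropic equation and explicitly corrects the finite-difference bias visible in the example.

    ```python
    import numpy as np

    def local_velocity(u, dx, dt):
        """Estimate a phase velocity from a wavefield u[t, y, x] recorded on a dense
        array by least-squares fitting u_tt = c**2 * (u_xx + u_yy). Isotropic sketch
        only; finite-difference derivatives introduce a small systematic bias."""
        u_tt = np.gradient(np.gradient(u, dt, axis=0), dt, axis=0)
        u_yy = np.gradient(np.gradient(u, dx, axis=1), dx, axis=1)
        u_xx = np.gradient(np.gradient(u, dx, axis=2), dx, axis=2)
        lap = (u_xx + u_yy).ravel()
        c2 = np.dot(lap, u_tt.ravel()) / np.dot(lap, lap)   # least-squares slope
        return np.sqrt(c2)

    # Synthetic 1 Hz plane wave with c = 2 km/s crossing a 20 x 20 station grid.
    dx, dt, c = 0.1, 0.01, 2.0
    x = np.arange(20) * dx
    t = np.arange(200) * dt
    T, Y, X = np.meshgrid(t, x, x, indexing="ij")
    u = np.cos(2 * np.pi * 1.0 * (T - X / c))
    # Prints a value close to 2; the residual bias comes from the finite-difference
    # stencils, i.e. the systematic error the paper calibrates and corrects.
    print(local_velocity(u, dx, dt))
    ```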

  17. Spatio-temporal distribution of Oklahoma earthquakes: Exploring relationships using a nearest-neighbor approach: Nearest-neighbor analysis of Oklahoma

    DOE PAGES

    Vasylkivska, Veronika S.; Huerta, Nicolas J.

    2017-06-24

    Determining the spatiotemporal characteristics of natural and induced seismic events holds the opportunity to gain new insights into why these events occur. Linking the seismicity characteristics with other geologic, geographic, natural, or anthropogenic factors could help to identify the causes and suggest mitigation strategies that reduce the risk associated with such events. The nearest-neighbor approach utilized in this work represents a practical first step toward identifying statistically correlated clusters of recorded earthquake events. Detailed study of the Oklahoma earthquake catalog’s inherent errors, empirical model parameters, and model assumptions is presented. We found that the cluster analysis results are stable with respect to empirical parameters (e.g., fractal dimension) but were sensitive to epicenter location errors and seismicity rates. Most critically, we show that the patterns in the distribution of earthquake clusters in Oklahoma are primarily defined by spatial relationships between events. This observation is a stark contrast to California (also known for induced seismicity) where a comparable cluster distribution is defined by both spatial and temporal interactions between events. These results highlight the difficulty in understanding the mechanisms and behavior of induced seismicity but provide insights for future work.

  18. Evolution of Altimetry Calibration and Future Challenges

    NASA Technical Reports Server (NTRS)

    Fu, Lee-Lueng; Haines, Bruce J.

    2012-01-01

    Over the past 20 years, altimetry calibration has evolved from an engineering-oriented exercise to a multidisciplinary endeavor driving the state of the art. This evolution has been spurred by the developing promise of altimetry to capture the large-scale, but small-amplitude, changes of the ocean surface containing the expression of climate change. The scope of altimeter calibration/validation programs has expanded commensurately. Early efforts focused on determining a constant range bias and verifying basic compliance of the data products with mission requirements. Contemporary investigations capture, with increasing accuracies, the spatial and temporal characteristics of errors in all elements of the measurement system. Dedicated calibration sites still provide the fundamental service of estimating absolute bias, but also enable long-term monitoring of the sea-surface height and constituent measurements. The use of a network of island and coastal tide gauges has provided the best perspective on the measurement stability, and revealed temporal variations of altimeter measurement system drift. The cross-calibration between successive missions provided fundamentally new information on the performance of altimetry systems. Spatially and temporally correlated errors pose challenges for future missions, underscoring the importance of cross-calibration of new measurements against the established record.

  19. Spatiotemporal distribution of Oklahoma earthquakes: Exploring relationships using a nearest-neighbor approach

    NASA Astrophysics Data System (ADS)

    Vasylkivska, Veronika S.; Huerta, Nicolas J.

    2017-07-01

    Determining the spatiotemporal characteristics of natural and induced seismic events holds the opportunity to gain new insights into why these events occur. Linking the seismicity characteristics with other geologic, geographic, natural, or anthropogenic factors could help to identify the causes and suggest mitigation strategies that reduce the risk associated with such events. The nearest-neighbor approach utilized in this work represents a practical first step toward identifying statistically correlated clusters of recorded earthquake events. Detailed study of the Oklahoma earthquake catalog's inherent errors, empirical model parameters, and model assumptions is presented. We found that the cluster analysis results are stable with respect to empirical parameters (e.g., fractal dimension) but were sensitive to epicenter location errors and seismicity rates. Most critically, we show that the patterns in the distribution of earthquake clusters in Oklahoma are primarily defined by spatial relationships between events. This observation is a stark contrast to California (also known for induced seismicity) where a comparable cluster distribution is defined by both spatial and temporal interactions between events. These results highlight the difficulty in understanding the mechanisms and behavior of induced seismicity but provide insights for future work.

  20. Spatio-temporal distribution of Oklahoma earthquakes: Exploring relationships using a nearest-neighbor approach: Nearest-neighbor analysis of Oklahoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasylkivska, Veronika S.; Huerta, Nicolas J.

    Determining the spatiotemporal characteristics of natural and induced seismic events holds the opportunity to gain new insights into why these events occur. Linking the seismicity characteristics with other geologic, geographic, natural, or anthropogenic factors could help to identify the causes and suggest mitigation strategies that reduce the risk associated with such events. The nearest-neighbor approach utilized in this work represents a practical first step toward identifying statistically correlated clusters of recorded earthquake events. Detailed study of the Oklahoma earthquake catalog’s inherent errors, empirical model parameters, and model assumptions is presented. We found that the cluster analysis results are stable with respect to empirical parameters (e.g., fractal dimension) but were sensitive to epicenter location errors and seismicity rates. Most critically, we show that the patterns in the distribution of earthquake clusters in Oklahoma are primarily defined by spatial relationships between events. This observation is a stark contrast to California (also known for induced seismicity) where a comparable cluster distribution is defined by both spatial and temporal interactions between events. These results highlight the difficulty in understanding the mechanisms and behavior of induced seismicity but provide insights for future work.

  1. Uncertainties and coupled error covariances in the CERA-20C, ECMWF's first coupled reanalysis ensemble

    NASA Astrophysics Data System (ADS)

    Feng, Xiangbo; Haines, Keith

    2017-04-01

    ECMWF has produced its first ensemble ocean-atmosphere coupled reanalysis, the 20th century Coupled ECMWF ReAnalysis (CERA-20C), with 10 ensemble members at 3-hour resolution. Here the analysis uncertainties (ensemble spread) of lower atmospheric variables and sea surface temperature (SST), and their correlations, are quantified on diurnal, seasonal and longer timescales. The 2-m air temperature (T2m) spread is always larger than the SST spread at high frequencies, but smaller on monthly timescales, except in deep convection areas, indicating increasing SST control at longer timescales. Spatially, the T2m-SST ensemble correlations are strongest where ocean mixed layers are shallow and can respond to atmospheric variability. Where atmospheric convection is strong with a deep precipitating boundary layer, T2m-SST correlations are greatly reduced. As the 20th century progresses, more observations become available and ensemble spreads decline at all variability timescales. The T2m-SST correlations increase through the 20th century, except in the tropics. As winds become better constrained over the oceans, with less spread, T2m and SST become more correlated. In the tropics, strong ENSO-related inter-annual variability is found in the correlations, as atmospheric convection centres move. These ensemble spreads have been used to provide background errors for the assimilation throughout the reanalysis, have implications for the weights given to observations, and are a general measure of the uncertainties in the analysed product. Although cross boundary covariances are not currently used, they offer considerable potential for strengthening the ocean-atmosphere coupling in future reanalyses.

  2. Discrete distributed strain sensing of intelligent structures

    NASA Technical Reports Server (NTRS)

    Anderson, Mark S.; Crawley, Edward F.

    1992-01-01

    Techniques are developed for the design of discrete highly distributed sensor systems for use in intelligent structures. First the functional requirements for such a system are presented. Discrete spatially averaging strain sensors are then identified as satisfying the functional requirements. A variety of spatial weightings for spatially averaging sensors are examined, and their wave number characteristics are determined. Preferable spatial weightings are identified. Several numerical integration rules used to integrate such sensors in order to determine the global deflection of the structure are discussed. A numerical simulation is conducted using point and rectangular sensors mounted on a cantilevered beam under static loading. Gage factor and sensor position uncertainties are incorporated to assess the absolute error and standard deviation of the error in the estimated tip displacement found by numerically integrating the sensor outputs. An experiment is carried out using a statically loaded cantilevered beam with five point sensors. It is found that in most cases the actual experimental error is within one standard deviation of the absolute error as found in the numerical simulation.

  3. Improved characterisation of measurement errors in electrical resistivity tomography (ERT) surveys

    NASA Astrophysics Data System (ADS)

    Tso, C. H. M.; Binley, A. M.; Kuras, O.; Graham, J.

    2016-12-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe a statistical model of data errors before inversion. Wrongly prescribed error levels can lead to over- or under-fitting of data, yet commonly used models of measurement error are relatively simplistic. With heightened interest in uncertainty estimation across hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide more reliable estimates of uncertainty. We have analysed two time-lapse electrical resistivity tomography (ERT) datasets; one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe, while the other is a year-long cross-borehole survey at a UK nuclear site with over 50,000 daily measurements. Our study included the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and covariance analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used. This agrees with reported speculation in previous literature that ERT errors could be somewhat correlated. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to the transfer resistance. The new model fits the observed measurement errors better and shows superior inversion and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the number of the four electrodes used to make each measurement. The new model can be readily applied to the diagonal data weighting matrix commonly used in classical inversion methods, as well as to the data covariance matrix in the Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
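
    A minimal sketch of a grouped linear error model of the kind described, with hypothetical variable names: reciprocal-error magnitudes are fitted against transfer resistance separately for the measurements that share each electrode.

    ```python
    import numpy as np

    def fit_grouped_error_model(resistance, reciprocal_error, electrodes, n_electrodes):
        """Fit |error| ~ a_g + b_g * |R| separately for groups of measurements that
        share an electrode (a simplified version of the grouped model described;
        names are illustrative).
        resistance, reciprocal_error: 1-D arrays per measurement (ohm)
        electrodes: (n_meas, 4) int array of the four electrode numbers used."""
        params = {}
        for e in range(n_electrodes):
            idx = np.any(electrodes == e, axis=1)      # measurements using electrode e
            if idx.sum() < 10:                         # skip poorly sampled groups
                continue
            R = np.abs(resistance[idx])
            err = np.abs(reciprocal_error[idx])
            A = np.column_stack([np.ones(R.size), R])
            a, b = np.linalg.lstsq(A, err, rcond=None)[0]
            params[e] = (a, b)                         # per-electrode intercept, slope
        return params

    # Synthetic usage with random electrode combinations and a proportional error.
    rng = np.random.default_rng(1)
    n, ne = 2000, 32
    elec = rng.integers(0, ne, size=(n, 4))
    R = rng.lognormal(mean=0.0, sigma=1.0, size=n)
    err = 0.001 + 0.02 * R + rng.normal(0, 0.005, size=n)
    print(len(fit_grouped_error_model(R, err, elec, ne)), "electrode groups fitted")
    ```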

  4. Frontotemporal correlates of impulsivity and machine learning in retired professional athletes with a history of multiple concussions.

    PubMed

    Goswami, R; Dufort, P; Tartaglia, M C; Green, R E; Crawley, A; Tator, C H; Wennberg, R; Mikulis, D J; Keightley, M; Davis, Karen D

    2016-05-01

    The frontotemporal cortical network is associated with behaviours such as impulsivity and aggression. The health of the uncinate fasciculus (UF) that connects the orbitofrontal cortex (OFC) with the anterior temporal lobe (ATL) may be a crucial determinant of behavioural regulation. Behavioural changes can emerge after repeated concussion and thus we used MRI to examine the UF and connected gray matter as it relates to impulsivity and aggression in retired professional football players who had sustained multiple concussions. Behaviourally, athletes had faster reaction times and an increased error rate on a go/no-go task, and increased aggression and mania compared to controls. MRI revealed that the athletes had (1) cortical thinning of the ATL, (2) negative correlations of OFC thickness with aggression and task errors, indicative of impulsivity, (3) negative correlations of UF axial diffusivity with error rates and aggression, and (4) elevated resting-state functional connectivity between the ATL and OFC. Using machine learning, we found that UF diffusion imaging differentiates athletes from healthy controls with significant classifiers based on UF mean and radial diffusivity showing 79-84 % sensitivity and specificity, and 0.8 areas under the ROC curves. The spatial pattern of classifier weights revealed hot spots at the orbitofrontal and temporal ends of the UF. These data implicate the UF system in the pathological outcomes of repeated concussion as they relate to impulsive behaviour. Furthermore, a support vector machine has potential utility in the general assessment and diagnosis of brain abnormalities following concussion.
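
    A generic sketch of the classification analysis described, using scikit-learn with synthetic placeholder features standing in for the UF mean and radial diffusivity values; this is not the study's data or code.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_predict, StratifiedKFold
    from sklearn.metrics import roc_auc_score

    # Placeholder data: rows = subjects, columns = two diffusion metrics
    # (synthetic values standing in for the study's measurements).
    rng = np.random.default_rng(0)
    X_controls = rng.normal([0.80, 0.55], 0.05, size=(20, 2))
    X_athletes = rng.normal([0.86, 0.60], 0.05, size=(20, 2))
    X = np.vstack([X_controls, X_athletes])
    y = np.array([0] * 20 + [1] * 20)           # 1 = retired athlete

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
    print("cross-validated ROC AUC:", round(roc_auc_score(y, scores), 2))
    ```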

  5. Projection correlation based view interpolation for cone beam CT: primary fluence restoration in scatter measurement with a moving beam stop array.

    PubMed

    Yan, Hao; Mou, Xuanqin; Tang, Shaojie; Xu, Qiong; Zankl, Maria

    2010-11-07

    Scatter correction is an open problem in x-ray cone beam (CB) CT. Measuring the scatter intensity with a moving beam stop array (BSA) is a promising technique that offers a low patient dose and accurate scatter measurement. However, when restoring the blocked primary fluence behind the BSA, spatial interpolation cannot restore the high-frequency part well, causing streaks in the reconstructed image. To address this problem, we derive a projection correlation (PC) to utilize the redundancy (over-determined information) in neighbouring CB views. PC indicates that the main high-frequency information is contained in neighbouring angular projections, rather than in the current projection itself, which provides a guiding principle for high-frequency information restoration. On this basis, we present the projection correlation based view interpolation (PC-VI) algorithm and validate that it outperforms spatial interpolation alone. The PC-VI based moving BSA method is then developed: PC-VI is employed instead of spatial interpolation, and new moving modes are designed, which greatly improve the reliability and practicability of the moving BSA method. The evaluation uses a high-resolution voxel-based human phantom and realistically includes the entire procedure of scatter measurement with a moving BSA, simulated by analytical ray-tracing plus Monte Carlo simulation with EGSnrc. With the proposed method, we obtain visually artefact-free images approaching the ideal correction. Compared with the spatial interpolation based method, the relative mean square error is reduced by a factor of 6.05-15.94 for different slices. PC-VI does well in CB redundancy mining and therefore has further potential in CBCT studies.

  6. Spatial analysis of highway incident durations in the context of Hurricane Sandy.

    PubMed

    Xie, Kun; Ozbay, Kaan; Yang, Hong

    2015-01-01

    The objectives of this study are (1) to develop an incident duration model which can account for the spatial dependence of duration observations, and (2) to investigate the impacts of a hurricane on incident duration. Highway incident data from New York City and its surrounding regions before and after Hurricane Sandy was used for the study. Moran's I statistics confirmed that durations of the neighboring incidents were spatially correlated. Moreover, Lagrange Multiplier tests suggested that the spatial dependence should be captured in a spatial lag specification. A spatial error model, a spatial lag model and a standard model without consideration of spatial effects were developed. The spatial lag model is found to outperform the others by capturing the spatial dependence of incident durations via a spatially lagged dependent variable. It was further used to assess the effects of hurricane-related variables on incident duration. The results show that the incidents during and post the hurricane are expected to have 116.3% and 79.8% longer durations than those that occurred in the regular time. However, no significant increase in incident duration is observed in the evacuation period before Sandy's landfall. Results of temporal stability tests further confirm the existence of the significant changes in incident duration patterns during and post the hurricane. Those findings can provide insights to aid in the development of hurricane evacuation plans and emergency management strategies. Copyright © 2014 Elsevier Ltd. All rights reserved.
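
    Moran's I, the statistic used to confirm the spatial dependence of durations, can be computed with plain numpy as below (binary distance-band weights and illustrative data; in practice a spatial econometrics package would be used for both Moran's I and the spatial lag model).

    ```python
    import numpy as np

    def morans_i(values, coords, max_dist=5.0):
        """Global Moran's I for incident durations at point locations.
        values: 1-D array (e.g. durations); coords: (n, 2) coordinates (km).
        Binary weights: 1 for neighbours within max_dist, row-standardised."""
        z = values - values.mean()
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
        w = ((d > 0) & (d <= max_dist)).astype(float)
        w /= np.maximum(w.sum(axis=1, keepdims=True), 1e-12)   # row-standardise
        n = len(values)
        return (n / w.sum()) * (z @ w @ z) / (z @ z)

    # Toy check: a smooth north-south trend gives a clearly positive Moran's I.
    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 50, size=(300, 2))
    dur = 30 + 0.5 * xy[:, 1] + rng.normal(0, 3, size=300)
    print(round(morans_i(dur, xy), 3))
    ```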

  7. Three-dimensional FLASH Laser Radar Range Estimation via Blind Deconvolution

    DTIC Science & Technology

    2009-10-01

    ... scene can result in errors due to several factors including the optical spatial impulse response, detector blurring, photon noise, timing jitter, and ... estimation error include spatial blur, detector blurring, noise, timing jitter, and inter-sample targets. Unlike previous research, this paper accounts for pixel coupling by defining the range image mathematical model as a 2D convolution between the system spatial impulse response and the object (target ...

  8. Are Books Like Number Lines? Children Spontaneously Encode Spatial-Numeric Relationships in a Novel Spatial Estimation Task.

    PubMed

    Thompson, Clarissa A; Morris, Bradley J; Sidney, Pooja G

    2017-01-01

    Do children spontaneously represent spatial-numeric features of a task, even when it does not include printed numbers (Mix et al., 2016)? Sixty first grade students completed a novel spatial estimation task by seeking and finding pages in a 100-page book without printed page numbers. Children were shown pages 1 through 6 and 100, and then were asked, "Can you find page X?" Children's precision of estimates on the page finder task and a 0-100 number line estimation task was calculated with the Percent Absolute Error (PAE) formula (Siegler and Booth, 2004), in which lower PAE indicated more precise estimates. Children's numerical knowledge was further assessed with: (1) numeral identification (e.g., What number is this: 57?), (2) magnitude comparison (e.g., Which is larger: 54 or 57?), and (3) counting on (e.g., Start counting from 84 and count up 5 more). Children's accuracy on these tasks was correlated with their number line PAE. Children's number line estimation PAE predicted their page finder PAE, even after controlling for age and accuracy on the other numerical tasks. Children's estimates on the page finder and number line tasks appear to tap a general magnitude representation. However, the page finder task did not correlate with numeral identification and counting-on performance, likely because these tasks do not measure children's magnitude knowledge. Our results suggest that the novel page finder task is a useful measure of children's magnitude knowledge, and that books have similar spatial-numeric affordances as number lines and numeric board games.

  9. Fixed Pattern Noise pixel-wise linear correction for crime scene imaging CMOS sensor

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Messinger, David W.; Dube, Roger R.; Ientilucci, Emmett J.

    2017-05-01

    Filtered multispectral imaging might be a potential method for crime scene documentation and evidence detection due to its abundant spectral information as well as its non-contact and non-destructive nature. A low-cost, portable multispectral crime scene imaging device would be highly useful and efficient. The second-generation crime scene imaging system uses a CMOS imaging sensor to capture the spatial scene and bandpass interference filters (IFs) to capture spectral information. Unfortunately, CMOS sensors suffer from severe spatial non-uniformity compared to CCD sensors, and the major cause is Fixed Pattern Noise (FPN). IFs suffer from a "blue shift" effect and introduce spatially and spectrally correlated errors. FPN correction is therefore critical to enhance crime scene image quality and is also helpful for spatial-spectral noise de-correlation. In this paper, a pixel-wise linear radiance to Digital Count (DC) conversion model is constructed for the crime scene imaging CMOS sensor. The pixel-wise conversion gain Gi,j and Dark Signal Non-Uniformity (DSNU) Zi,j are calculated. The conversion gain is divided into four components: an FPN row component, an FPN column component, a defects component and the effective photo-response signal component. The conversion gain is then corrected by averaging out the FPN column and row components and the defects component so that the sensor conversion gain is uniform. Based on the corrected conversion gain and the image incident radiance estimated by inverting the pixel-wise linear radiance-to-DC model, the spatial uniformity of the corrected image is enhanced to 7 times that of the raw image, and the larger the image DC value within its dynamic range, the better the enhancement.
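
    A simplified sketch of the pixel-wise linear model: estimate a per-pixel gain G and offset Z from flat-field frames at several known radiance levels, then correct a raw frame by re-applying a spatially uniform gain. The row/column FPN decomposition is omitted here, and all names and numbers are illustrative.

    ```python
    import numpy as np

    def calibrate_pixelwise(frames, radiances):
        """Fit DC[i,j] = G[i,j] * L + Z[i,j] per pixel from mean flat-field frames
        acquired at known radiance levels (simplified version of the model)."""
        L = np.asarray(radiances, dtype=float)
        F = np.stack(frames).astype(float)               # (n_levels, H, W)
        Lm, Fm = L.mean(), F.mean(axis=0)
        G = ((L[:, None, None] - Lm) * (F - Fm)).sum(axis=0) / ((L - Lm) ** 2).sum()
        Z = Fm - G * Lm                                  # dark-signal non-uniformity
        return G, Z

    def correct_frame(raw, G, Z):
        """Estimate the radiance per pixel, then re-apply a uniform (mean) gain."""
        L_hat = (raw.astype(float) - Z) / G
        return G.mean() * L_hat + Z.mean()

    # Synthetic demo: gain with a column-FPN stripe pattern.
    rng = np.random.default_rng(0)
    H, W = 64, 64
    G_true = 1.0 + 0.05 * np.sin(np.arange(W) / 3.0)[None, :] + 0.01 * rng.normal(size=(H, W))
    Z_true = 10 + rng.normal(0, 0.5, size=(H, W))
    levels = [20, 40, 60, 80]
    flats = [G_true * L + Z_true for L in levels]
    G, Z = calibrate_pixelwise(flats, levels)
    raw = G_true * 50 + Z_true
    print(np.std(raw), ">", np.std(correct_frame(raw, G, Z)))   # uniformity improves
    ```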

  10. High-resolution observations of the globular cluster NGC 7099

    NASA Astrophysics Data System (ADS)

    Sams, Bruce Jones, III

    The globular cluster NGC 7099 is a prototypical collapsed core cluster. Through a series of instrumental, observational, and theoretical investigations, I have resolved its core structure using a ground-based telescope. The core has a radius of 2.15 arcsec when imaged with a V band spatial resolution of 0.35 arcsec. Initial attempts at speckle imaging produced images of inadequate signal to noise and resolution. To explain these results, a new, fully general signal-to-noise model has been developed. It properly accounts for all sources of noise in a speckle observation, including aliasing of high spatial frequencies by inadequate sampling of the image plane. The model, called Full Speckle Noise (FSN), can be used to predict the outcome of any speckle imaging experiment. A new high resolution imaging technique called ACT (Atmospheric Correlation with a Template) was developed to create sharper astronomical images. ACT compensates for image motion due to atmospheric turbulence. ACT is similar to the Shift and Add algorithm, but uses a priori spatial knowledge about the image to further constrain the shifts. In this instance, the final images of NGC 7099 have resolutions of 0.35 arcsec from data taken in 1 arcsec seeing. The PAPA (Precision Analog Photon Address) camera was used to record data. It is subject to errors when imaging cluster cores in a large field of view. The origin of these errors is explained, and several ways to avoid them are proposed. New software was created for the PAPA camera to properly take flat-field images in a large field of view. Absolute photometry measurements of NGC 7099 made with the PAPA camera are accurate to 0.1 magnitude. Luminosity sampling errors dominate surface brightness profiles of the central few arcsec in a collapsed core cluster. These errors set limits on the ultimate spatial accuracy of surface brightness profiles.

  11. An approach for real-time fast point positioning of the BeiDou Navigation Satellite System using augmentation information

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Zhang, Rui; Zhang, Pengfei; Liu, Jinhai; Lu, Xiaochun

    2018-07-01

    This study proposes an approach to facilitate real-time fast point positioning of the BeiDou Navigation Satellite System (BDS) based on regional augmentation information. We term this the precise positioning based on augmentation information (BPP) approach. The coordinates of the reference stations were tightly constrained to extract the augmentation information, which contains not only the satellite orbit and clock errors, correlated with the satellite running state, but also the atmosphere error and unmodeled error, which are correlated with the spatial and temporal states. Based on these mixed augmentation corrections, a precise point positioning (PPP) model could be used for the coordinate estimation of the user stations, and the float ambiguity could be easily fixed for the single difference between satellites. Thus, this technique provides a quick and high-precision positioning service. Three different datasets with small, medium, and large baselines (0.6 km, 30 km and 136 km) were used to validate the feasibility and effectiveness of the proposed BPP method. The validations showed that, using the BPP model, a 1–2 cm positioning service can be provided over a 100 km wide area after just 2 s of initialization. Because the proposed approach not only capitalizes on the advantages of both PPP and RTK but also provides consistent application, it can be used for area augmentation positioning.

  12. Quantifying the impact of material-model error on macroscale quantities-of-interest using multiscale a posteriori error-estimation techniques

    DOE PAGES

    Brown, Judith A.; Bishop, Joseph E.

    2016-07-20

    An a posteriori error-estimation framework is introduced to quantify and reduce modeling errors resulting from approximating complex mesoscale material behavior with a simpler macroscale model. Such errors may be prevalent when modeling welds and additively manufactured structures, where spatial variations and material textures may be present in the microstructure. We consider a case where a <100> fiber texture develops in the longitudinal scanning direction of a weld. Transversely isotropic elastic properties are obtained through homogenization of a microstructural model with this texture and are considered the reference weld properties within the error-estimation framework. Conversely, isotropic elastic properties are considered approximate weld properties since they contain no representation of texture. Errors introduced by using isotropic material properties to represent a weld are assessed through a quantified error bound in the elastic regime. Lastly, an adaptive error reduction scheme is used to determine the optimal spatial variation of the isotropic weld properties to reduce the error bound.

  13. Seasonal Differences in Spatial Scales of Chlorophyll-A Concentration in Lake TAIHU,CHINA

    NASA Astrophysics Data System (ADS)

    Bao, Y.; Tian, Q.; Sun, S.; Wei, H.; Tian, J.

    2012-08-01

    The spatial distribution of chlorophyll-a (chla) concentration in Lake Taihu is non-uniform and shows seasonal variability. Chla concentration retrieval algorithms were separately established using measured data and remote sensing images (HJ-1 CCD and MODIS data) from October 2010, March 2011, and September 2011. Semi-variance parameters were then calculated at the 30 m, 250 m and 500 m scales to analyze spatial heterogeneity in different seasons. Finally, based on the definitions of lumped chla (chlaL) and distributed chla (chlaD), a seasonal model of the chla concentration scale error was built. The results indicated that the spatial distribution of chla concentration in spring was more uniform. In summer and autumn, chla concentration in the north of the lake, such as Meiliang Bay and Zhushan Bay, was higher than in the south of Lake Taihu. Chla concentration at different scales showed a similar structure within the same season, but different structures across seasons. Chla concentration retrieved from MODIS 500 m data had a greater scale error. The spatial scale error changed with the seasons and was higher in summer and autumn than in spring. The maximum relative error reached 23%.
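
    The semi-variance analysis referred to can be sketched as an empirical semivariogram computed with plain numpy; the grid, pixel size and chl-a values below are illustrative.

    ```python
    import numpy as np

    def empirical_semivariogram(coords, values, bin_edges):
        """Empirical semivariance gamma(h) = mean of 0.5*(z_i - z_j)**2 over pixel
        pairs whose separation falls in each distance bin (illustrative sketch)."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
        sq = 0.5 * (values[:, None] - values[None, :]) ** 2
        iu = np.triu_indices(len(values), k=1)            # count each pair once
        d, sq = d[iu], sq[iu]
        gamma = [sq[(d >= lo) & (d < hi)].mean()
                 for lo, hi in zip(bin_edges[:-1], bin_edges[1:])]
        return np.array(gamma)

    # Toy example on a 30 x 30 grid of chl-a values with a smooth spatial pattern.
    rng = np.random.default_rng(0)
    gx, gy = np.meshgrid(np.arange(30), np.arange(30))
    coords = np.column_stack([gx.ravel(), gy.ravel()]).astype(float) * 30.0  # 30 m pixels
    chla = 20 + 5 * np.sin(gx.ravel() / 6.0) + rng.normal(0, 1, gx.size)
    print(empirical_semivariogram(coords, chla, np.arange(0, 300, 60)))
    ```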

  14. Efficient heralding of O-band passively spatial-multiplexed photons for noise-tolerant quantum key distribution.

    PubMed

    Liu, Mao Tong; Lim, Han Chuen

    2014-09-22

    When implementing O-band quantum key distribution on optical fiber transmission lines carrying C-band data traffic, noise photons that arise from spontaneous Raman scattering or insufficient filtering of the classical data channels could cause the quantum bit-error rate to exceed the security threshold. In this case, a photon heralding scheme may be used to reject the uncorrelated noise photons in order to restore the quantum bit-error rate to a low level. However, the secure key rate would suffer unless one uses a heralded photon source with sufficiently high heralding rate and heralding efficiency. In this work we demonstrate a heralded photon source that has a heralding efficiency that is as high as 74.5%. One disadvantage of a typical heralded photon source is that the long deadtime of the heralding detector results in a significant drop in the heralding rate. To counter this problem, we propose a passively spatial-multiplexed configuration at the heralding arm. Using two heralding detectors in this configuration, we obtain an increase in the heralding rate by 37% and a corresponding increase in the heralded photon detection rate by 16%. We transmit the O-band photons over 10 km of noisy optical fiber to observe the relation between quantum bit-error rate and noise-degraded second-order correlation function of the transmitted photons. The effects of afterpulsing when we shorten the deadtime of the heralding detectors are also observed and discussed.

  15. A global map of rainfed cropland areas (GMRCA) at the end of last millennium using remote sensing

    USGS Publications Warehouse

    Biradar, C.M.; Thenkabail, P.S.; Noojipady, P.; Li, Y.; Dheeravath, V.; Turral, H.; Velpuri, M.; Gumma, M.K.; Gangalakunta, O.R.P.; Cai, X.L.; Xiao, X.; Schull, M.A.; Alankara, R.D.; Gunasinghe, S.; Mohideen, S.

    2009-01-01

    The overarching goal of this study was to produce a global map of rainfed cropland areas (GMRCA) and calculate country-by-country rainfed area statistics using remote sensing data. A suite of spatial datasets, methods and protocols for mapping GMRCA were described. These consist of: (a) data fusion and composition of multi-resolution time-series mega-file data-cube (MFDC), (b) image segmentation based on precipitation, temperature, and elevation zones, (c) spectral correlation similarity (SCS), (d) protocols for class identification and labeling through uses of SCS R2-values, bi-spectral plots, space-time spiral curves (ST-SCs), rich source of field-plot data, and zoom-in-views of Google Earth (GE), and (e) techniques for resolving mixed classes by decision tree algorithms, and spatial modeling. The outcome was a 9-class GMRCA from which country-by-country rainfed area statistics were computed for the end of the last millennium. The global rainfed cropland area estimate from the GMRCA 9-class map was 1.13 billion hectares (Bha). The total global cropland area (rainfed plus irrigated) was 1.53 Bha, which was close to national statistics compiled by FAOSTAT (1.51 Bha). The accuracies and errors of GMRCA were assessed using field-plot and Google Earth data points. The accuracy varied between 92 and 98% with a kappa value of about 0.76, errors of omission of 2-8%, and errors of commission of 19-36%. © 2008 Elsevier B.V.

  16. Multiple Velocity Profile Measurements in Hypersonic Flows Using Sequentially-Imaged Fluorescence Tagging

    NASA Technical Reports Server (NTRS)

    Bathel, Brett F.; Danehy, Paul M.; Inman, Jennifer A.; Jones, Stephen B.; Ivey, Christopher B.; Goyne, Christopher P.

    2010-01-01

    Nitric-oxide planar laser-induced fluorescence (NO PLIF) was used to perform velocity measurements in hypersonic flows by generating multiple tagged lines which fluoresce as they convect downstream. For each laser pulse, a single interline, progressive scan intensified CCD (charge-coupled device) camera was used to obtain two sequential images of the NO molecules that had been tagged by the laser. The CCD configuration allowed for sub-microsecond acquisition of both images, resulting in sub-microsecond temporal resolution as well as sub-mm spatial resolution (0.5-mm horizontal, 0.7-mm vertical). Determination of axial velocity was made by application of a cross-correlation analysis of the horizontal shift of individual tagged lines. A numerical study of measured velocity error due to a uniform and linearly-varying collisional rate distribution was performed. Quantification of systematic errors, the contribution of gating/exposure duration errors, and the influence of collision rate on temporal uncertainty were made. Quantification of the spatial uncertainty depended upon the signal-to-noise ratio of the acquired profiles. This velocity measurement technique has been demonstrated for two hypersonic flow experiments: (1) a reaction control system (RCS) jet on an Orion Crew Exploration Vehicle (CEV) wind tunnel model and (2) a 10-degree half-angle wedge containing a 2-mm tall, 4-mm wide cylindrical boundary layer trip. The experiments were performed at the NASA Langley Research Center's 31-Inch Mach 10 Air Tunnel.
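
    The axial-velocity extraction can be illustrated by cross-correlating a tagged-line intensity profile between the two sequential exposures; this is a sketch with illustrative numbers, not the analysis code used in the experiments.

    ```python
    import numpy as np

    def axial_velocity(profile_t0, profile_t1, pixel_size_mm, dt_us):
        """Velocity (m/s) from the pixel shift that maximises the cross-correlation
        of a tagged-line intensity profile between two sequential exposures."""
        a = profile_t1 - profile_t1.mean()
        b = profile_t0 - profile_t0.mean()
        xc = np.correlate(a, b, mode="full")
        shift_px = np.argmax(xc) - (len(b) - 1)          # lag of the peak
        return shift_px * pixel_size_mm * 1e-3 / (dt_us * 1e-6)

    # Synthetic tagged line (Gaussian profile) displaced by 3 pixels in 1 us.
    x = np.arange(256, dtype=float)
    line0 = np.exp(-((x - 100) / 3.0) ** 2)
    line1 = np.exp(-((x - 103) / 3.0) ** 2)
    print(axial_velocity(line0, line1, pixel_size_mm=0.5, dt_us=1.0))   # ~1500 m/s
    ```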

  17. An alternative way to evaluate chemistry-transport model variability

    NASA Astrophysics Data System (ADS)

    Menut, Laurent; Mailler, Sylvain; Bessagnet, Bertrand; Siour, Guillaume; Colette, Augustin; Couvidat, Florian; Meleux, Frédérik

    2017-03-01

    A simple and complementary model evaluation technique for regional chemistry transport is discussed. The methodology is based on the concept that we can learn about model performance by comparing the simulation results with observational data available for time periods other than the period originally targeted. First, the statistical indicators selected in this study (spatial and temporal correlations) are computed for a given time period, using colocated observation and simulation data in time and space. Second, the same indicators are used to calculate scores for several other years while conserving the spatial locations and Julian days of the year. The difference between the results provides useful insights on the model capability to reproduce the observed day-to-day and spatial variability. In order to synthesize the large amount of results, a new indicator is proposed, designed to compare several error statistics between all the years of validation and to quantify whether the period and area being studied were well captured by the model for the correct reasons.

  18. The terminator "toy" chemistry test: A simple tool to assess errors in transport schemes

    DOE PAGES

    Lauritzen, P. H.; Conley, A. J.; Lamarque, J. -F.; ...

    2015-05-04

    This test extends the evaluation of transport schemes from prescribed advection of inert scalars to reactive species. The test consists of transporting two interacting chemical species in the Nair and Lauritzen 2-D idealized flow field. The sources and sinks for these two species are given by a simple, but non-linear, "toy" chemistry that represents combination (X + X -> X2) and dissociation (X2 -> X + X). This chemistry mimics photolysis-driven conditions near the solar terminator, where strong gradients in the spatial distribution of the species develop near its edge. Despite the large spatial variations in each species, the weighted sum XT = X + 2*X2 should always be preserved at spatial scales at which molecular diffusion is excluded. The terminator test demonstrates how well the advection-transport scheme preserves linear correlations. Chemistry-transport (physics-dynamics) coupling can also be studied with this test. Examples of the consequences of this test are shown for illustration.
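
    Assuming the usual mass-action form of the toy reactions (combination at rate k1*X**2 and dissociation at rate k2*X2; the rate values below are illustrative, not those of the test specification), the invariance of XT = X + 2*X2 can be verified numerically:

    ```python
    import numpy as np

    def toy_chemistry_rhs(X, X2, k1, k2):
        """Mass-action form of the 'toy' chemistry (illustrative rates):
        combination X + X -> X2 at k1*X**2, dissociation X2 -> X + X at k2*X2."""
        r = k1 * X**2 - k2 * X2
        return -2.0 * r, r                     # dX/dt, dX2/dt

    # Forward-Euler integration: XT = X + 2*X2 should stay constant.
    k1, k2, dt = 1.0, 0.3, 1e-3
    X, X2 = 0.8, 0.1
    for _ in range(20_000):
        dX, dX2 = toy_chemistry_rhs(X, X2, k1, k2)
        X, X2 = X + dt * dX, X2 + dt * dX2
    print(X + 2 * X2)                          # stays at 1.0 up to rounding
    ```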

  19. Anxiety and spatial attention moderate the electrocortical response to aversive pictures.

    PubMed

    MacNamara, Annmarie; Hajcak, Greg

    2009-11-01

    Aversive stimuli capture attention and elicit increased neural activity, as indexed by behavioral, electrocortical and hemodynamic measures; moreover, individual differences in anxiety relate to a further increased sensitivity to threatening stimuli. Evidence has been mixed, however, as to whether aversive pictures elicit increased neural response when presented in unattended spatial locations. In the current study, ERP and behavioral data were recorded from 49 participants as aversive and neutral pictures were simultaneously presented in spatially attended and unattended locations; on each trial, participants made same/different judgments about pictures presented in attended locations. Aversive images presented in unattended locations resulted in increased error rate and reaction time. The late positive potential (LPP) component of the ERP was only larger when aversive images were presented in attended locations, and this increase was positively correlated with self-reported state anxiety. Findings are discussed in regard to the sensitivity of ERP and behavioral responses to aversive distracters, and in terms of increased neural processing of threatening stimuli in anxiety.

  20. Spatial Resolution, Grayscale, and Error Diffusion Trade-offs: Impact on Display System Design

    NASA Technical Reports Server (NTRS)

    Gille, Jennifer L. (Principal Investigator)

    1996-01-01

    We examine technology trade-offs related to grayscale resolution, spatial resolution, and error diffusion for tessellated display systems. We present new empirical results from our psychophysical study of these trade-offs and compare them to the predictions of a model of human vision.

  1. Electrostatic potential of B-DNA: effect of interionic correlations.

    PubMed Central

    Gavryushov, S; Zielenkiewicz, P

    1998-01-01

    Modified Poisson-Boltzmann (MPB) equations have been numerically solved to study ionic distributions and mean electrostatic potentials around a macromolecule of arbitrarily complex shape and charge distribution. Results for DNA are compared with those obtained by classical Poisson-Boltzmann (PB) calculations. The comparisons were made for 1:1 and 2:1 electrolytes at ionic strengths up to 1 M. It is found that ion-image charge interactions and interionic correlations, which are neglected by the PB equation, have relatively weak effects on the electrostatic potential at charged groups of the DNA. The PB equation predicts errors in the long-range electrostatic part of the free energy that are only approximately 1.5 kJ/mol per nucleotide even in the case of an asymmetrical electrolyte. In contrast, the spatial correlations between ions drastically affect the electrostatic potential at significant separations from the macromolecule leading to a clearly predicted effect of charge overneutralization. PMID:9826596

  2. Rigorous covariance propagation of geoid errors to geodetic MDT estimates

    NASA Astrophysics Data System (ADS)

    Pail, R.; Albertella, A.; Fecher, T.; Savcenko, R.

    2012-04-01

    The mean dynamic topography (MDT) is defined as the difference between the mean sea surface (MSS) derived from satellite altimetry, averaged over several years, and the static geoid. Assuming geostrophic conditions, the ocean surface velocities, an important component of the global ocean circulation, can be derived from the MDT. Due to the availability of GOCE gravity field models, for the very first time MDT can now be derived solely from satellite observations (altimetry and gravity) down to spatial length-scales of 100 km and even below. Global gravity field models, parameterized in terms of spherical harmonic coefficients, are complemented by the full variance-covariance matrix (VCM). Therefore, for the geoid component a realistic statistical error estimate is available, while the error description of the altimetric component is still an open issue and is, if at all, tackled empirically. In this study we attempt to perform rigorous error propagation, based on the full gravity VCM, to the derived geostrophic surface velocities, thus also taking all correlations into account. For the definition of the static geoid we use the third release of the time-wise GOCE model, as well as the satellite-only combination model GOCO03S. In detail, we investigate the velocity errors resulting from the geoid component as a function of harmonic degree, and the impact of using or not using covariances on the MDT errors and their correlations. When deriving an MDT, it is spectrally filtered to a certain maximum degree, which is usually driven by the signal content of the geoid model, by applying isotropic or non-isotropic filters. Since this filtering also acts on the geoid component, the consistent integration of this filter process into the covariance propagation shall be performed, and its impact shall be quantified. The study will be performed for MDT estimates in specific test areas of particular oceanographic interest.

  3. Investigating the role of background and observation error correlations in improving a model forecast of forest carbon balance using four dimensional variational data assimilation.

    NASA Astrophysics Data System (ADS)

    Pinnington, Ewan; Casella, Eric; Dance, Sarah; Lawless, Amos; Morison, James; Nichols, Nancy; Wilkinson, Matthew; Quaife, Tristan

    2016-04-01

    Forest ecosystems play an important role in sequestering human-emitted carbon dioxide from the atmosphere and therefore greatly reduce the effect of anthropogenically induced climate change. For that reason, understanding their response to climate change is of great importance. Efforts to implement variational data assimilation routines with functional ecology models and land surface models have been limited, with sequential and Markov chain Monte Carlo data assimilation methods being prevalent. When data assimilation has been used with models of carbon balance, background "prior" errors and observation errors have largely been treated as independent and uncorrelated. Correlations between background errors have long been known to be a key aspect of data assimilation in numerical weather prediction. More recently, it has been shown that accounting for correlated observation errors in the assimilation algorithm can considerably improve data assimilation results and forecasts. In this paper we implement a 4D-Var scheme with a simple model of forest carbon balance, for joint parameter and state estimation, and assimilate daily observations of Net Ecosystem CO2 Exchange (NEE) taken at the Alice Holt forest CO2 flux site in Hampshire, UK. We then investigate the effect of specifying correlations between parameter and state variables in the background error statistics and the effect of specifying correlations in time between observation error statistics. The idea of including these correlations in time is new and has not been previously explored in carbon balance model data assimilation. In data assimilation, background and observation error statistics are often described by the background error covariance matrix and the observation error covariance matrix. We outline novel methods for creating correlated versions of these matrices, using a set of previously postulated dynamical constraints to include correlations in the background error statistics and a Gaussian correlation function to include time correlations in the observation error statistics. The methods used in this paper allow the inclusion of time correlations between many different observation types in the assimilation algorithm, meaning that previously neglected information can be accounted for. In our experiments we compared the results using our new correlated background and observation error covariance matrices with those using diagonal covariance matrices. We found that using the new correlated matrices reduced the root mean square error in the 14-year forecast of daily NEE by 44 %, decreasing from 4.22 g C m-2 day-1 to 2.38 g C m-2 day-1.
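    A minimal sketch, assuming hypothetical values for the error standard deviation and correlation length, of a time-correlated observation error covariance matrix built with a Gaussian correlation function over daily observation times, alongside its diagonal counterpart:

    ```python
    import numpy as np

    def gaussian_time_correlated_R(n_obs, sigma_o, length_scale_days):
        """Observation-error covariance with Gaussian correlations in time:
        R_ij = sigma_o**2 * exp(-0.5 * ((t_i - t_j) / L)**2) for daily observations."""
        t = np.arange(n_obs, dtype=float)                     # observation times (days)
        dt = t[:, None] - t[None, :]
        C = np.exp(-0.5 * (dt / length_scale_days) ** 2)      # correlation matrix
        return sigma_o**2 * C

    # Hypothetical values: 30 daily NEE observations, 0.5 g C m-2 day-1 error std,
    # 4-day correlation length.
    R = gaussian_time_correlated_R(30, sigma_o=0.5, length_scale_days=4.0)
    R_diag = np.diag(np.diag(R))                              # uncorrelated counterpart

    # In a 4D-Var cost function the observation term is 0.5*(y - Hx)^T R^-1 (y - Hx);
    # swapping R_diag for R is the kind of comparison the study describes.
    print(R[0, :4])
    ```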

  4. Simulation of wave propagation in three-dimensional random media

    NASA Astrophysics Data System (ADS)

    Coles, Wm. A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.

    1995-04-01

    Quantitative error analyses for the simulation of wave propagation in three-dimensional random media, when narrow angular scattering is assumed, are presented for plane-wave and spherical-wave geometry. This includes the errors that result from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive indices of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared with the spatial spectra of

  5. Displacement fields from point cloud data: Application of particle imaging velocimetry to landslide geodesy

    USGS Publications Warehouse

    Aryal, Arjun; Brooks, Benjamin A.; Reid, Mark E.; Bawden, Gerald W.; Pawlak, Geno

    2012-01-01

    Acquiring spatially continuous ground-surface displacement fields from Terrestrial Laser Scanners (TLS) will allow better understanding of the physical processes governing landslide motion at detailed spatial and temporal scales. Problems arise, however, when estimating continuous displacement fields from TLS point-clouds because reflecting points from sequential scans of moving ground are not defined uniquely, thus repeat TLS surveys typically do not track individual reflectors. Here, we implemented the cross-correlation-based Particle Image Velocimetry (PIV) method to derive a surface deformation field using TLS point-cloud data. We estimated associated errors using the shape of the cross-correlation function and tested the method's performance with synthetic displacements applied to a TLS point cloud. We applied the method to the toe of the episodically active Cleveland Corral Landslide in northern California using TLS data acquired in June 2005–January 2007 and January–May 2010. Estimated displacements ranged from decimeters to several meters and they agreed well with independent measurements at better than 9% root mean squared (RMS) error. For each of the time periods, the method provided a smooth, nearly continuous displacement field that coincides with independently mapped boundaries of the slide and permits further kinematic and mechanical inference. For the 2010 data set, for instance, the PIV-derived displacement field identified a diffuse zone of displacement that preceded by over a month the development of a new lateral shear zone. Additionally, the upslope and downslope displacement gradients delineated by the dense PIV field elucidated the non-rigid behavior of the slide.

  6. Background-Error Correlation Model Based on the Implicit Solution of a Diffusion Equation

    DTIC Science & Technology

    2010-01-01

    Background-Error Correlation Model Based on the Implicit Solution of a Diffusion Equation, by Matthew J. Carrier and Hans Ngodock. The report builds on earlier work (2001) that sought to model error correlations based on the explicit solution of a generalized diffusion equation, and instead develops a correlation model based on the implicit solution.
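    As a generic 1-D illustration of the underlying idea (not the report's actual formulation), repeatedly applying an implicit diffusion step to a unit impulse yields a smooth, correlation-function-like response; the grid size, diffusivity-times-timestep product, and step count below are arbitrary:

    ```python
    import numpy as np

    # Applying M implicit diffusion steps, i.e. (I - kappa*dt*Laplacian)^(-M), to a unit
    # impulse produces a smooth correlation-shaped response without the time-step
    # restriction that an explicit scheme would impose.
    n, kappa_dt, M = 101, 4.0, 10          # hypothetical grid size, kappa*dt, steps
    lap = (np.diag(-2.0 * np.ones(n)) +
           np.diag(np.ones(n - 1), 1) +
           np.diag(np.ones(n - 1), -1))    # 1-D Laplacian, unit grid spacing
    A = np.eye(n) - kappa_dt * lap         # implicit step matrix

    e = np.zeros(n)
    e[n // 2] = 1.0                        # unit impulse at the domain centre
    for _ in range(M):
        e = np.linalg.solve(A, e)          # one implicit diffusion step
    e /= e.max()                           # normalize to 1 at the centre

    print(e[n // 2 - 3: n // 2 + 4])       # smooth, symmetric correlation-like shape
    ```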

  7. Stochastic simulation of spatially correlated geo-processes

    USGS Publications Warehouse

    Christakos, G.

    1987-01-01

    In this study, developments in the theory of stochastic simulation are discussed. The unifying element is the notion of Radon projection in Euclidean spaces. This notion provides a natural way of reconstructing the real process from a corresponding process observable on a reduced dimensionality space, where analysis is theoretically easier and computationally tractable. Within this framework, the concept of space transformation is defined and several of its properties, which are of significant importance within the context of spatially correlated processes, are explored. The turning bands operator is shown to follow from this. This strengthens considerably the theoretical background of the geostatistical method of simulation, and some new results are obtained in both the space and frequency domains. The inverse problem is solved generally and the applicability of the method is extended to anisotropic as well as integrated processes. Some ill-posed problems of the inverse operator are discussed. Effects of the measurement error and impulses at origin are examined. Important features of the simulated process as described by geomechanical laws, the morphology of the deposit, etc., may be incorporated in the analysis. The simulation may become a model-dependent procedure and this, in turn, may provide numerical solutions to spatial-temporal geologic models. Because the spatial simulation may be technically reduced to unidimensional simulations, various techniques of generating one-dimensional realizations are reviewed. To link theory and practice, an example is computed in detail. © 1987 International Association for Mathematical Geology.
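    As a generic illustration of simulating a spatially correlated Gaussian process (a plain covariance/Cholesky construction, not the Radon-projection/turning-bands machinery this record develops), with a hypothetical exponential covariance and grid:

    ```python
    import numpy as np

    def simulate_correlated_field(coords, sill, corr_range, n_realizations, seed=0):
        """Simulate Gaussian realizations with an exponential covariance
        C(h) = sill * exp(-h / corr_range) on arbitrary 2-D coordinates."""
        h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        C = sill * np.exp(-h / corr_range)
        L = np.linalg.cholesky(C + 1e-10 * np.eye(len(coords)))   # small jitter for stability
        rng = np.random.default_rng(seed)
        return L @ rng.normal(size=(len(coords), n_realizations))

    # Hypothetical 20 x 20 grid with unit spacing.
    x, y = np.meshgrid(np.arange(20.0), np.arange(20.0))
    coords = np.column_stack([x.ravel(), y.ravel()])
    fields = simulate_correlated_field(coords, sill=1.0, corr_range=5.0, n_realizations=3)
    print(fields.shape)   # (400, 3): three correlated realizations on the grid
    ```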

  8. A high-order time-accurate interrogation method for time-resolved PIV

    NASA Astrophysics Data System (ADS)

    Lynch, Kyle; Scarano, Fulvio

    2013-03-01

    A novel method is introduced for increasing the accuracy and extending the dynamic range of time-resolved particle image velocimetry (PIV). The approach extends the concept of particle tracking velocimetry by multiple frames to the pattern tracking by cross-correlation analysis as employed in PIV. The working principle is based on tracking the patterned fluid element, within a chosen interrogation window, along its individual trajectory throughout an image sequence. In contrast to image-pair interrogation methods, the fluid trajectory correlation concept deals with variable velocity along curved trajectories and non-zero tangential acceleration during the observed time interval. As a result, the velocity magnitude and its direction are allowed to evolve in a nonlinear fashion along the fluid element trajectory. The continuum deformation (namely spatial derivatives of the velocity vector) is accounted for by adopting local image deformation. The principle offers important reductions of the measurement error based on three main points: by enlarging the temporal measurement interval, the relative error becomes reduced; secondly, the random and peak-locking errors are reduced by the use of least-squares polynomial fits to individual trajectories; finally, the introduction of high-order (nonlinear) fitting functions provides the basis for reducing the truncation error. Lastly, the instantaneous velocity is evaluated as the temporal derivative of the polynomial representation of the fluid parcel position in time. The principal features of this algorithm are compared with a single-pair iterative image deformation method. Synthetic image sequences are considered with steady flow (translation, shear and rotation), illustrating the increase of measurement precision. An experimental data set obtained by time-resolved PIV measurements of a circular jet is used to verify the robustness of the method on image sequences affected by camera noise and three-dimensional motions. In both cases, it is demonstrated that the measurement time interval can be significantly extended without compromising the correlation signal-to-noise ratio and with no increase of the truncation error. The increase of velocity dynamic range scales more than linearly with the number of frames included in the analysis, and exceeds that of pair correlation by window deformation by one order of magnitude. The main factors influencing the performance of the method are discussed, namely the number of images composing the sequence and the polynomial order chosen to represent the motion throughout the trajectory.
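    A minimal sketch of the trajectory-fitting idea, assuming a hypothetical tracked window position over a short frame sequence: a least-squares polynomial is fitted to position versus time, and the instantaneous velocity is taken as its analytic derivative:

    ```python
    import numpy as np

    def trajectory_velocity(times, positions, order=2):
        """Fit x(t) with a polynomial of the given order and return the
        instantaneous velocity dx/dt evaluated at the centre of the sequence."""
        coeffs = np.polyfit(times, positions, order)   # least-squares polynomial fit
        deriv = np.polyder(coeffs)                     # analytic time derivative
        t_mid = 0.5 * (times[0] + times[-1])
        return np.polyval(deriv, t_mid)

    # Hypothetical tracked window positions (mm) over 7 frames at 1 kHz,
    # with curvature and a little measurement noise.
    t = np.arange(7) * 1e-3
    x_true = 2.0 + 1500.0 * t + 4.0e5 * t**2
    rng = np.random.default_rng(1)
    x_meas = x_true + rng.normal(scale=0.02, size=t.size)

    v = trajectory_velocity(t, x_meas, order=2)
    print(f"velocity at mid-sequence: {v:.0f} mm/s")   # close to 1500 + 2*4e5*0.003 = 3900
    ```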

  9. Colour image compression by grey to colour conversion

    NASA Astrophysics Data System (ADS)

    Drew, Mark S.; Finlayson, Graham D.; Jindal, Abhilash

    2011-03-01

    Instead of de-correlating image luminance from chrominance, some use has been made of the correlation between the luminance component of an image and its chromatic components, or of the correlation between colour components, for colour image compression. In one approach, the Green colour channel was taken as a base, and the other colour channels or their DCT subbands were approximated as polynomial functions of the base inside image windows. This paper points out that we can do better if we introduce an addressing scheme into the image description such that similar colours are grouped together spatially. With a Luminance component base, we test several colour spaces and rearrangement schemes, including segmentation, and settle on a log-geometric-mean colour space. Along with PSNR versus bits-per-pixel, we found that spatially-keyed s-CIELAB colour error better identifies problem regions. Instead of segmentation, we found that rearranging on sorted chromatic components has almost equal performance and better compression. Here, we sort on each of the chromatic components and separately encode windows of each. The result consists of the original greyscale plane plus the polynomial coefficients of windows of rearranged chromatic values, which are then quantized. The simplicity of the method produces a fast and simple scheme for colour image and video compression, with excellent results.

  10. Observation and simulation of net primary productivity in Qilian Mountain, western China.

    PubMed

    Zhou, Y; Zhu, Q; Chen, J M; Wang, Y Q; Liu, J; Sun, R; Tang, S

    2007-11-01

    We modeled net primary productivity (NPP) at high spatial resolution using an advanced spaceborne thermal emission and reflection radiometer (ASTER) image of a Qilian Mountain study area using the boreal ecosystem productivity simulator (BEPS). Two key driving variables of the model, leaf area index (LAI) and land cover type, were derived from ASTER and moderate resolution imaging spectroradiometer (MODIS) data. Other spatially explicit inputs included daily meteorological data (radiation, precipitation, temperature, humidity), available soil water holding capacity (AWC), and forest biomass. NPP was estimated for coniferous forests and other land cover types in the study area. The result showed that NPP of coniferous forests in the study area was about 4.4 t C ha-1 y-1. The correlation coefficient between the modeled NPP and ground measurements was 0.84, with a mean relative error of about 13.9%.

  11. Introduction to CAUSES: Description of Weather and Climate Models and Their Near-Surface Temperature Errors in 5 day Hindcasts Near the Southern Great Plains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morcrette, C. J.; Van Weverberg, K.; Ma, H. -Y.

    The Clouds Above the United States and Errors at the Surface (CAUSES) project is aimed at gaining a better understanding of the physical processes that lead to the creation of warm screen-temperature biases over the American Midwest, which are seen in many numerical models. Here in Part 1, a series of 5-day hindcasts, each initialised from re-analyses and performed by 11 different models, is evaluated against screen-temperature observations. All the models have a warm bias over parts of the Midwest. Several ways of quantifying the impact of the initial conditions on the evolution of the simulations are presented, showing that within a day or so all models have produced a warm bias that is representative of their bias after 5 days and not closely tied to the conditions at the initial time. Although the surface temperature biases sometimes coincide with locations where the re-analyses themselves have a bias, there are many regions in each of the models where biases grow over the course of 5 days or are larger than the biases present in the re-analyses. At the Southern Great Plains (SGP) site, the model biases are shown not to be confined to the surface, but to extend several kilometres into the atmosphere. In most of the models there is a strong diurnal cycle in the screen-temperature bias; in some models the biases are largest around midday, while in others they are largest during the night. While the different physical processes contributing to a given model's screen-temperature error will be discussed in more detail in the companion papers (Parts 2 and 3), the spatial coherence in the phase of the diurnal cycle of the error across wide regions, and the numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with that at SGP, suggest that the detailed evaluations of the role of different processes in contributing to errors at SGP will be representative of errors that are prevalent over a much larger spatial scale.

  12. Introduction to CAUSES: Description of weather and climate models and their near-surface temperature errors in 5-day hindcasts near the Southern Great Plains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morcrette, Cyril J.; Van Weverberg, Kwinten; Ma, H

    2018-02-16

    The Clouds Above the United States and Errors at the Surface (CAUSES) project is aimed at gaining a better understanding of the physical processes that lead to the creation of warm screen-temperature biases over the American Midwest, which are seen in many numerical models. Here in Part 1, a series of 5-day hindcasts, each initialised from re-analyses and performed by 11 different models, is evaluated against screen-temperature observations. All the models have a warm bias over parts of the Midwest. Several ways of quantifying the impact of the initial conditions on the evolution of the simulations are presented, showing that within a day or so all models have produced a warm bias that is representative of their bias after 5 days and not closely tied to the conditions at the initial time. Although the surface temperature biases sometimes coincide with locations where the re-analyses themselves have a bias, there are many regions in each of the models where biases grow over the course of 5 days or are larger than the biases present in the re-analyses. At the Southern Great Plains (SGP) site, the model biases are shown not to be confined to the surface, but to extend several kilometres into the atmosphere. In most of the models there is a strong diurnal cycle in the screen-temperature bias; in some models the biases are largest around midday, while in others they are largest during the night. While the different physical processes contributing to a given model's screen-temperature error will be discussed in more detail in the companion papers (Parts 2 and 3), the spatial coherence in the phase of the diurnal cycle of the error across wide regions, and the numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with that at SGP, suggest that the detailed evaluations of the role of different processes in contributing to errors at SGP will be representative of errors that are prevalent over a much larger spatial scale.

  13. Decorrelation of the true and estimated classifier errors in high-dimensional settings.

    PubMed

    Hanczar, Blaise; Hua, Jianping; Dougherty, Edward R

    2007-01-01

    The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity which refers to the precision of error estimation is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular, the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three error estimators commonly used (leave-one-out cross-validation, k-fold cross-validation, and .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) known-feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes. We will observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection or using all features, with the better correlation between the latter two showing no general trend, but differing for different models.

  14. A weakly-constrained data assimilation approach to address rainfall-runoff model structural inadequacy in streamflow prediction

    NASA Astrophysics Data System (ADS)

    Lee, Haksu; Seo, Dong-Jun; Noh, Seong Jin

    2016-11-01

    This paper presents a simple yet effective weakly-constrained (WC) data assimilation (DA) approach for hydrologic models which accounts for model structural inadequacies associated with rainfall-runoff transformation processes. Compared to the strongly-constrained (SC) DA, WC DA adjusts the control variables less while producing a similarly or more accurate analysis. Hence the adjusted model states are dynamically more consistent with those of the base model. The inadequacy of a rainfall-runoff model was modeled as an additive error to runoff components prior to routing and penalized in the objective function. Two example modeling applications, distributed and lumped, were carried out to investigate the effects of the WC DA approach on DA results. For distributed modeling, the distributed Sacramento Soil Moisture Accounting (SAC-SMA) model was applied to the TIFM7 Basin in Missouri, USA. For lumped modeling, the lumped SAC-SMA model was applied to nineteen basins in Texas. In both cases, the variational DA (VAR) technique was used to assimilate discharge data at the basin outlet. For distributed SAC-SMA, spatially homogeneous error modeling yielded updated states that are spatially much more similar to the a priori states, as quantified by the Earth Mover's Distance (EMD), than spatially heterogeneous error modeling, by up to ∼10 times. DA experiments using both lumped and distributed SAC-SMA modeling indicated that assimilating outlet flow using the WC approach generally produces a smaller mean absolute difference as well as a higher correlation between the a priori and the updated states than the SC approach, while producing a similar or smaller root mean square error of streamflow analysis and prediction. Large differences were found in both lumped and distributed modeling cases between the updated and the a priori lower zone tension and primary free water contents for both the WC and SC approaches, indicating possible model structural deficiency in describing low flows or evapotranspiration processes for the catchments studied. Also presented are the findings from this study and key issues relevant to WC DA approaches using hydrologic models.

  15. Continuous Sub-daily Rainfall Simulation for Regional Flood Risk Assessment - Modelling of Spatio-temporal Correlation Structure of Extreme Precipitation in the Austrian Alps

    NASA Astrophysics Data System (ADS)

    Salinas, J. L.; Nester, T.; Komma, J.; Bloeschl, G.

    2017-12-01

    Generation of realistic synthetic spatial rainfall is of pivotal importance for assessing regional hydroclimatic hazard as the input for long term rainfall-runoff simulations. The correct reproduction of observed rainfall characteristics, such as regional intensity-duration-frequency curves and spatial and temporal correlations, is necessary to adequately model the magnitude and frequency of the flood peaks, by reproducing antecedent soil moisture conditions before extreme rainfall events and the joint probability of flood waves at confluences. In this work, a modification of the model presented by Bardossy and Platte (1992) is used, where precipitation is first modeled on a station basis as a multivariate autoregressive model (mAr) in a Normal space. The spatial and temporal correlation structures are imposed in the Normal space, allowing for a different temporal autocorrelation parameter for each station, and simultaneously ensuring the positive-definiteness of the correlation matrix of the mAr errors. The Normal rainfall is then transformed to a Gamma-distributed space, with parameters varying monthly according to a sinusoidal function, in order to adapt to the observed rainfall seasonality. One of the main differences with the original model is the simulation time-step, reduced from 24 h to 6 h. Due to a larger availability of daily rainfall data, as opposed to sub-daily (e.g. hourly) data, the parameters of the Gamma distributions are calibrated to reproduce simultaneously a series of daily rainfall characteristics (mean daily rainfall, standard deviation of daily rainfall, and 24 h intensity-duration-frequency [IDF] curves), as well as other aggregated rainfall measures (mean annual rainfall and monthly rainfall). The calibration of the spatial and temporal correlation parameters is performed in such a way that the catchment-averaged IDF curves aggregated at different temporal scales fit the measured ones. The rainfall model is used to generate 10,000 years of synthetic precipitation, fed into a rainfall-runoff model to derive the flood frequency in the Tirolean Alps in Austria. Given the number of generated events, the simulation framework is able to generate a large variety of rainfall patterns, as well as reproduce the variograms of relevant extreme rainfall events in the region of interest.
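    A minimal single-station sketch of the Normal-to-Gamma construction described here, assuming SciPy is available and using hypothetical lag-1 correlation and Gamma parameters (the multivariate spatial part and the seasonal parameter variation are omitted):

    ```python
    import numpy as np
    from scipy import stats

    def simulate_station_rainfall(n_steps, rho, gamma_shape, gamma_scale, seed=0):
        """AR(1) series in Normal space transformed to Gamma-distributed 6-hourly depths."""
        rng = np.random.default_rng(seed)
        z = np.empty(n_steps)
        z[0] = rng.normal()
        for t in range(1, n_steps):                     # temporal correlation in Normal space
            z[t] = rho * z[t - 1] + np.sqrt(1.0 - rho**2) * rng.normal()
        u = stats.norm.cdf(z)                           # map to uniform quantiles
        return stats.gamma.ppf(u, a=gamma_shape, scale=gamma_scale)   # Gamma-distributed rainfall

    # Hypothetical parameters: 0.7 lag-1 correlation, Gamma(shape 0.6, scale 3.0 mm) marginal,
    # ten years of 6-hourly values.
    rain = simulate_station_rainfall(4 * 365 * 10, rho=0.7, gamma_shape=0.6, gamma_scale=3.0)
    print(rain.mean(), np.corrcoef(rain[:-1], rain[1:])[0, 1])
    ```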

  16. Perceptual Color Characterization of Cameras

    PubMed Central

    Vazquez-Corral, Javier; Connah, David; Bertalmío, Marcelo

    2014-01-01

    Color camera characterization, mapping outputs from the camera sensors to an independent color space, such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measures. PMID:25490586
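    For context, a minimal sketch of the baseline least-squares 3 × 3 characterization this record improves on (not the spherical-sampling perceptual search itself), with hypothetical training patches:

    ```python
    import numpy as np

    def fit_characterization_matrix(rgb, xyz):
        """Least-squares 3x3 matrix M such that xyz ≈ rgb @ M.T, the usual baseline
        that perceptual characterization methods aim to improve on."""
        M_T, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
        return M_T.T

    # Hypothetical training patches: camera responses and measured XYZ values.
    rng = np.random.default_rng(2)
    true_M = np.array([[0.6, 0.3, 0.1],
                       [0.2, 0.7, 0.1],
                       [0.0, 0.1, 0.9]])
    rgb = rng.uniform(size=(24, 3))                      # e.g. a 24-patch chart
    xyz = rgb @ true_M.T + rng.normal(scale=0.005, size=(24, 3))

    M = fit_characterization_matrix(rgb, xyz)
    print(np.round(M, 2))                                # recovers true_M approximately
    ```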

  17. Examining Impulse-Variability in Kicking.

    PubMed

    Chappell, Andrew; Molina, Sergio L; McKibben, Jonathon; Stodden, David F

    2016-07-01

    This study examined variability in kicking speed and spatial accuracy to test the impulse-variability theory prediction of an inverted-U function and the speed-accuracy trade-off. Twenty-eight 18- to 25-year-old adults kicked a playground ball at various percentages (50-100%) of their maximum speed at a wall target. Speed variability and spatial error were analyzed using repeated-measures ANOVA with built-in polynomial contrasts. Results indicated a significant inverse linear trajectory for speed variability (p < .001, η2= .345) where 50% and 60% maximum speed had significantly higher variability than the 100% condition. A significant quadratic fit was found for spatial error scores of mean radial error (p < .0001, η2 = .474) and subject-centroid radial error (p < .0001, η2 = .453). Findings suggest variability and accuracy of multijoint, ballistic skill performance may not follow the general principles of impulse-variability theory or the speed-accuracy trade-off.

  18. The Relationship between Spatial and Temporal Magnitude Estimation of Scientific Concepts at Extreme Scales

    NASA Astrophysics Data System (ADS)

    Price, Aaron; Lee, H.

    2010-01-01

    Many astronomical objects, processes, and events exist and occur at extreme scales of spatial and temporal magnitudes. Our research draws upon the psychological literature, replete with evidence of linguistic and metaphorical links between the spatial and temporal domains, to compare how students estimate spatial and temporal magnitudes associated with objects and processes typically taught in science class. We administered spatial and temporal scale estimation tests, with many astronomical items, to 417 students enrolled in 12 undergraduate science courses. Results show that while the temporal test was more difficult, students’ overall performance patterns between the two tests were mostly similar. However, asymmetrical correlations between the two tests indicate that students think of the extreme ranges of spatial and temporal scales in different ways, which is likely influenced by their classroom experience. When making incorrect estimations, students tended to underestimate the difference between the everyday scale and the extreme scales on both tests. This suggests the use of a common logarithmic mental number line for both spatial and temporal magnitude estimation. However, there are differences between the two tests in the errors students make in the everyday range. Among the implications discussed is the use of spatio-temporal reference frames, instead of smooth bootstrapping, to help students maneuver between scales of magnitude and the use of logarithmic transformations between reference frames. Implications for astronomy range from learning about spectra to large scale galaxy structure.

  19. Left neglect dyslexia: Perseveration and reading error types.

    PubMed

    Ronchi, Roberta; Algeri, Lorella; Chiapella, Laura; Gallucci, Marcello; Spada, Maria Simonetta; Vallar, Giuseppe

    2016-08-01

    Right-brain-damaged patients may show a reading disorder termed neglect dyslexia. Patients with left neglect dyslexia omit letters on the left-hand side (the beginning, when reading left-to-right) of the letter string, substitute them with other letters, and add letters to the left of the string. The aim of this study was to investigate the pattern of association, if any, between error types in patients with left neglect dyslexia and recurrent perseveration (a productive visuo-motor deficit characterized by the addition of marks) in target cancellation. Specifically, we aimed at assessing whether different productive symptoms (relative to the reading and the visuo-motor domains) could be associated in patients with left spatial neglect. Fifty-four right-brain-damaged patients took part in the study: 50 out of the 54 patients showed left spatial neglect, with 27 of them also exhibiting left neglect dyslexia. Neglect dyslexic patients who showed perseveration produced mainly substitution neglect errors in reading. Conversely, omissions were the prevailing reading error pattern in neglect dyslexic patients without perseveration. Addition reading errors were infrequent. Different functional pathological mechanisms may underlie the omission and substitution reading errors committed by right-brain-damaged patients with left neglect dyslexia. One such mechanism, involving the defective stopping of inappropriate responses, may contribute to both recurrent perseveration in target cancellation and substitution errors in reading. Productive pathological phenomena, together with deficits of spatial attention to events taking place on the left-hand side of space, shape the manifestations of neglect dyslexia, and, more generally, of spatial neglect.

  20. Wave Gradiometry for the Central U.S

    NASA Astrophysics Data System (ADS)

    liu, Y.; Holt, W. E.

    2013-12-01

    Wave gradiometry is a new technique utilizing the shape of seismic wave fields captured by USArray transportable stations to determine fundamental wave propagation characteristics. The horizontal and vertical wave displacements, spatial gradients and time derivatives of displacement are linearly linked by two coefficients which can be used to infer wave slowness, back azimuth, radiation pattern and geometrical spreading. The reducing velocity method from Langston [2007] is applied to pre-process our data. Spatial gradients of the shifted displacement fields are estimated using bi-cubic splines [Beavan and Haines, 2001]. Using singular value decomposition, the spatial gradients are then inverted to iteratively solve for the wave parameters mentioned above. Numerical experiments with synthetic data sets provided by Princeton University's Near Real Time Global Seismicity Portal are conducted to test the algorithm stability and evaluate errors. Our results based on real records in the central U.S. show that the average Rayleigh wave phase velocity ranges from 3.8 to 4.2 km/s for periods from 60-125 s, and 3.6 to 4.0 km/s for periods from 25-60 s, which is consistent with reference Earth models. Geometrical spreading and radiation pattern show similar features between different frequency bands. Azimuth variations are partially correlated with phase velocity changes. Finally, we calculated waveform amplitude and spatial gradient uncertainties to determine formal errors in the estimated wave parameters. Further effort will be put into calculating shear wave velocity structure with respect to depth in the studied area. The wave gradiometry method is now being employed across the USArray using real observations, and results obtained to date are for stations in the eastern portion of the U.S., including Rayleigh wave phase velocities derived from the August 20, 2011 Vanuatu earthquake for periods from 100-125 s.

  1. Hyperspectral imaging-based spatially-resolved technique for accurate measurement of the optical properties of horticultural products

    NASA Astrophysics Data System (ADS)

    Cen, Haiyan

    Hyperspectral imaging-based spatially-resolved technique is promising for determining the optical properties and quality attributes of horticultural and food products. However, considerable challenges still exist for accurate determination of spectral absorption and scattering properties from intact horticultural products. The objective of this research was, therefore, to develop and optimize hyperspectral imaging-based spatially-resolved technique for accurate measurement of the optical properties of horticultural products. Monte Carlo simulations and experiments for model samples of known optical properties were performed to optimize the inverse algorithm of a single-layer diffusion model and the optical designs, for extracting the absorption (μa) and reduced scattering (μs') coefficients from spatially-resolved reflectance profiles. The logarithm and integral data transformation and the relative weighting methods were found to greatly improve the parameter estimation accuracy, with relative errors of 10.4%, 10.7%, and 11.4% for μa, and 6.6%, 7.0%, and 7.1% for μs', respectively. More accurate measurements of optical properties were obtained when the light beam was of Gaussian type with a diameter of less than 1 mm, and the minimum and maximum source-detector distances were 1.5 mm and 10-20 transport mean free paths, respectively. An optical property measuring prototype was built, based on the optimization results, and evaluated for automatic measurement of absorption and reduced scattering coefficients for the wavelengths of 500-1,000 nm. The instrument was used to measure the optical properties, and assess quality/maturity, of 500 'Redstar' peaches and 1039 'Golden Delicious' (GD) and 1040 'Delicious' (RD) apples. A separate study was also conducted on confocal laser scanning and scanning electron microscopic image analysis and compression tests of fruit tissue specimens to measure the structural and mechanical properties of 'Golden Delicious' and 'Granny Smith' (GS) apples under accelerated softening at high temperature (22 °C)/high humidity (95%) for up to 30 days. The absorption spectra of peach and apple fruit were characterized by the absorption peaks of major pigments (i.e., chlorophylls and anthocyanin) and water, while the reduced scattering coefficient generally decreased with increasing wavelength. Partial least squares regression resulted in various levels of correlation of μa and μs' with the firmness, soluble solids content, and skin and flesh color parameters of peaches (r = 0.204-0.855) and apples (r = 0.460-0.885), and the combination of the two optical parameters generally gave higher correlations (up to 0.893). The mean values of μa and μs' for GD and GS apples for each storage date were positively correlated with acoustic/impact firmness, Young's modulus, and cell parameters (r = 0.585-0.948 for GD and r = 0.292-0.993 for GS). A two-layer diffusion model for determining the optical properties of fruit skin and flesh was further investigated using solid model samples. The average errors of determining two and four optical parameters were 6.8% and 15.3%, respectively, for the Monte Carlo reflectance data. The errors of determining the first or surface layer of the model samples were approximately 23.0% for μa and 18.4% for μs', indicating the difficulty and also the potential of applying the two-layer diffusion model to fruit.
This research has demonstrated the usefulness of hyperspectral imaging-based spatially-resolved technique for determining the optical properties and maturity/quality of fruits. However, further research is needed to reduce measurement variability or error caused by irregular or rough surface of fruit and the presence of fruit skin, and apply the technique to other foods and biological materials.

  2. Divided spatial attention and feature-mixing errors.

    PubMed

    Golomb, Julie D

    2015-11-01

    Spatial attention is thought to play a critical role in feature binding. However, often multiple objects or locations are of interest in our environment, and we need to shift or split attention between them. Recent evidence has demonstrated that shifting and splitting spatial attention results in different types of feature-binding errors. In particular, when two locations are simultaneously sharing attentional resources, subjects are susceptible to feature-mixing errors; that is, they tend to report a color that is a subtle blend of the target color and the color at the other attended location. The present study was designed to test whether these feature-mixing errors are influenced by target-distractor similarity. Subjects were cued to split attention across two different spatial locations, and were subsequently presented with an array of colored stimuli, followed by a postcue indicating which color to report. Target-distractor similarity was manipulated by varying the distance in color space between the two attended stimuli. Probabilistic modeling in all cases revealed shifts in the response distribution consistent with feature-mixing errors; however, the patterns differed considerably across target-distractor color distances. With large differences in color, the findings replicated the mixing result, but with small color differences, repulsion was instead observed, with the reported target color shifted away from the other attended color.

  3. Non-airborne conflicts: The causes and effects of runway transgressions

    NASA Technical Reports Server (NTRS)

    Tarrel, Richard J.

    1985-01-01

    The 1210 ASRS runway transgression reports are studied and expanded to yield descriptive statistics. Additionally, a one-in-three subset was studied in detail for purposes of evaluating the causes, risks, and consequences behind transgression events. Occurrences are subdivided by enabling factor and flight phase designations. It is concluded that a larger risk of collision is associated with controller-enabled departure transgressions than with all other categories. The influence of this type is especially evident during the period following the air traffic controllers' strike of 1981. Causal analysis indicates that, coincidentally, controller-enabled departure transgressions also show the strongest correlations between causal factors. It shows that departure errors occur more often when visibility is reduced, and when multiple takeoff runways or intersection takeoffs are employed. In general, runway transgressions attributable to both pilot and controller errors arise from three problem areas: information transfer, awareness, and spatial judgement. Enhanced awareness by controllers will probably reduce controller-enabled incidents.

  4. Supplemental optical specifications for imaging systems: parameters of phase gradient

    NASA Astrophysics Data System (ADS)

    Xuan, Bin; Li, Jun-Feng; Wang, Peng; Chen, Xiao-Ping; Song, Shu-Mei; Xie, Jing-Jiang

    2009-12-01

    Specifications of phase error, peak to valley (PV), and root mean square (rms) are not able to represent the properties of a wavefront adequately because they do not account for spatial frequencies. Power spectral density is a parameter that is especially effective at indicating the frequency regime; however, it is not convenient for opticians to implement. Parameters of phase gradient, PV gradient, and rms gradient are most correlated with the point-spread function of an imaging system, and they can provide clear guidance for manufacture. The algorithms of the gradient parameters have been modified in order to represent the image quality better. In order to demonstrate the analyses, an experimental spherical mirror has been worked out. It is clear that imaging performance can be maintained while manufacturing difficulty is decreased when a reasonable trade-off between specifications of phase error and phase gradient is made.

  5. Prototype Development of a Geostationary Synthetic Thinned Aperture Radiometer, GeoSTAR

    NASA Technical Reports Server (NTRS)

    Tanner, Alan B.; Wilson, William J.; Kangaslahti, Pekka P.; Lambrigsten, Bjorn H.; Dinardo, Steven J.; Piepmeier, Jeffrey R.; Ruf, Christopher S.; Rogacki, Steven; Gross, S. M.; Musko, Steve

    2004-01-01

    Preliminary details of a 2-D synthetic aperture radiometer prototype operating from 50 to 58 GHz will be presented. The instrument is being developed as a laboratory testbed, and the goal of this work is to demonstrate the technologies needed to do atmospheric soundings with high spatial resolution from Geostationary orbit. The concept is to deploy a large sparse aperture Y-array from a geostationary satellite, and to use aperture synthesis to obtain images of the earth without the need for a large mechanically scanned antenna. The laboratory prototype consists of a Y-array of 24 horn antennas, MMIC receivers, and a digital cross-correlation sub-system. System studies are discussed, including an error budget which has been derived from numerical simulations. The error budget defines key requirements, such as null offsets, phase calibration, and antenna pattern knowledge. Details of the instrument design are discussed in the context of these requirements.

  6. Accuracy of Robotic Radiosurgical Liver Treatment Throughout the Respiratory Cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winter, Jeff D.; Wong, Raimond; Swaminath, Anand

    Purpose: To quantify random uncertainties in robotic radiosurgical treatment of liver lesions with real-time respiratory motion management. Methods and Materials: We conducted a retrospective analysis of 27 liver cancer patients treated with robotic radiosurgery over 118 fractions. The robotic radiosurgical system uses orthogonal x-ray images to determine internal target position and correlates this position with an external surrogate to provide robotic corrections of linear accelerator positioning. Verification and update of this internal–external correlation model was achieved using periodic x-ray images collected throughout treatment. To quantify random uncertainties in targeting, we analyzed logged tracking information and isolated x-ray images collected immediately before beam delivery. For translational correlation errors, we quantified the difference between the correlation model–estimated target position and the actual position determined by periodic x-ray imaging. To quantify prediction errors, we computed the mean absolute difference between the predicted coordinates and the actual modeled position calculated 115 milliseconds later. We estimated overall random uncertainty by quadratically summing correlation, prediction, and end-to-end targeting errors. We also investigated relationships between tracking errors and motion amplitude using linear regression. Results: The 95th percentile absolute correlation errors in each direction were 2.1 mm left–right, 1.8 mm anterior–posterior, 3.3 mm cranio–caudal, and 3.9 mm 3-dimensional radial, whereas 95th percentile absolute radial prediction errors were 0.5 mm. Overall 95th percentile random uncertainty was 4 mm in the radial direction. Prediction errors were strongly correlated with modeled target amplitude (r=0.53-0.66, P<.001), whereas only weak correlations existed for correlation errors. Conclusions: Study results demonstrate that correlation model errors are the primary random source of uncertainty in Cyberknife liver treatment and, unlike prediction errors, are not strongly correlated with target motion amplitude. Aggregate 3-dimensional radial position errors presented here suggest the target will be within 4 mm of the target volume for 95% of the beam delivery.
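    A minimal sketch of the two summary statistics used here (95th-percentile absolute errors and their quadrature sum) with hypothetical per-beam correlation-error samples; only the 0.5 mm radial prediction error is taken from the record, the other numbers are illustrative:

    ```python
    import numpy as np

    def p95_abs(errors_mm):
        """95th percentile of absolute targeting errors (mm)."""
        return np.percentile(np.abs(errors_mm), 95)

    # Hypothetical per-beam correlation-model errors (model-estimated minus imaged
    # target position, in mm) for the left-right, anterior-posterior and cranio-caudal axes.
    rng = np.random.default_rng(3)
    corr_err = rng.normal(scale=[1.0, 0.9, 1.6], size=(500, 3))
    radial_corr = np.linalg.norm(corr_err, axis=1)

    p95_radial_corr = p95_abs(radial_corr)
    p95_radial_pred = 0.5          # radial prediction error, value quoted in the record
    p95_end_to_end = 0.95          # hypothetical end-to-end targeting error

    # Quadrature sum of the (assumed independent) error sources, as described above.
    overall = np.sqrt(p95_radial_corr**2 + p95_radial_pred**2 + p95_end_to_end**2)
    print(f"overall 95th-percentile radial uncertainty ≈ {overall:.1f} mm")
    ```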

  7. Spatiotemporal integration for tactile localization during arm movements: a probabilistic approach.

    PubMed

    Maij, Femke; Wing, Alan M; Medendorp, W Pieter

    2013-12-01

    It has been shown that people make systematic errors in the localization of a brief tactile stimulus that is delivered to the index finger while they are making an arm movement. Here we modeled these spatial errors with a probabilistic approach, assuming that they follow from temporal uncertainty about the occurrence of the stimulus. In the model, this temporal uncertainty converts into a spatial likelihood about the external stimulus location, depending on arm velocity. We tested the prediction of the model that the localization errors depend on arm velocity. Participants (n = 8) were instructed to localize a tactile stimulus that was presented to their index finger while they were making either slow- or fast-targeted arm movements. Our results confirm the model's prediction that participants make larger localization errors when making faster arm movements. The model, which was used to fit the errors for both slow and fast arm movements simultaneously, accounted very well for all the characteristics of these data with temporal uncertainty in stimulus processing as the only free parameter. We conclude that spatial errors in dynamic tactile perception stem from the temporal precision with which tactile inputs are processed.

  8. Benefit transfer and spatial heterogeneity of preferences for water quality improvements.

    PubMed

    Martin-Ortega, J; Brouwer, R; Ojea, E; Berbel, J

    2012-09-15

    The improvement in water quality resulting from the implementation of the EU Water Framework Directive is expected to generate substantial non-market benefits. Widespread estimation of these benefits across Europe will require the application of benefit transfer. We use a spatially explicit valuation design that accounts for the spatial heterogeneity of preferences to help generate lower transfer errors. A map-based choice experiment is applied in the Guadalquivir River Basin (Spain), accounting simultaneously for the spatial distribution of water quality improvements and beneficiaries. Our results show that accounting for the spatial heterogeneity of preferences generally produces lower transfer errors.

  9. Spatiotemporal Path-Matching for Comparisons Between Ground- Based and Satellite Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Berkoff, Timothy A.; Valencia, Sandra; Welton, Ellsworth J.; Spinhirne, James D.

    2005-01-01

    The spatiotemporal sampling differences between ground-based and satellite lidar data can contribute to significant errors for direct measurement comparisons. Improvement in sample correspondence is examined by the use of radiosonde wind velocity to vary the time average in ground-based lidar data to spatially match coincident satellite lidar measurements. Results are shown for the 26 February 2004 GLAS/ICESat overflight of a ground-based lidar stationed at NASA GSFC. Statistical analysis indicates that improvement in signal correlation is expected under certain conditions, even when a ground-based observation is mismatched in directional orientation to the satellite track.

  10. Image quality assessment by preprocessing and full reference model combination

    NASA Astrophysics Data System (ADS)

    Bianco, S.; Ciocca, G.; Marini, F.; Schettini, R.

    2009-01-01

    This paper focuses on full-reference image quality assessment and presents different computational strategies aimed at improving the robustness and accuracy of some well-known and widely used state-of-the-art models, namely the Structural Similarity approach (SSIM) by Wang and Bovik and the S-CIELAB spatial-color model by Zhang and Wandell. We investigate the hypothesis that combining error images with a visual attention model could allow a better fit of the psycho-visual data of the LIVE Image Quality Assessment Database Release 2. We show that the proposed quality assessment metric correlates better with the experimental data.
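    A minimal sketch of one plausible way to combine an error image with a visual attention model (attention-weighted pooling); this illustrates the general idea under hypothetical maps, not the specific combination strategy of the paper:

    ```python
    import numpy as np

    def attention_weighted_score(error_map, attention_map):
        """Pool a per-pixel error map into one score, weighting errors by a
        normalized visual-attention map so that salient regions count more."""
        w = attention_map / attention_map.sum()
        return float(np.sum(w * error_map))

    # Hypothetical maps: an S-CIELAB-style error image and a saliency map that
    # concentrates attention near the image centre.
    rng = np.random.default_rng(4)
    err = rng.uniform(0, 5, size=(64, 64))
    yy, xx = np.mgrid[0:64, 0:64]
    sal = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 12.0**2))

    print("unweighted mean error:", err.mean())
    print("attention-weighted error:", attention_weighted_score(err, sal))
    ```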

  11. Testing a dynamic-field account of interactions between spatial attention and spatial working memory.

    PubMed

    Johnson, Jeffrey S; Spencer, John P

    2016-05-01

    Studies examining the relationship between spatial attention and spatial working memory (SWM) have shown that discrimination responses are faster for targets appearing at locations that are being maintained in SWM, and that location memory is impaired when attention is withdrawn during the delay. These observations support the proposal that sustained attention is required for successful retention in SWM: If attention is withdrawn, memory representations are likely to fail, increasing errors. In the present study, this proposal was reexamined in light of a neural-process model of SWM. On the basis of the model's functioning, we propose an alternative explanation for the observed decline in SWM performance when a secondary task is performed during retention: SWM representations drift systematically toward the location of targets appearing during the delay. To test this explanation, participants completed a color discrimination task during the delay interval of a spatial-recall task. In the critical shifting-attention condition, the color stimulus could appear either toward or away from the midline reference axis, relative to the memorized location. We hypothesized that if shifting attention during the delay leads to the failure of SWM representations, there should be an increase in the variance of recall errors, but no change in directional errors, regardless of the direction of the shift. Conversely, if shifting attention induces drift of SWM representations-as predicted by the model-systematic changes in the patterns of spatial-recall errors should occur that would depend on the direction of the shift. The results were consistent with the latter possibility-recall errors were biased toward the locations of discrimination targets appearing during the delay.

  12. Testing a Dynamic Field Account of Interactions between Spatial Attention and Spatial Working Memory

    PubMed Central

    Johnson, Jeffrey S.; Spencer, John P.

    2016-01-01

    Studies examining the relationship between spatial attention and spatial working memory (SWM) have shown that discrimination responses are faster for targets appearing at locations that are being maintained in SWM, and that location memory is impaired when attention is withdrawn during the delay. These observations support the proposal that sustained attention is required for successful retention in SWM: if attention is withdrawn, memory representations are likely to fail, increasing errors. In the present study, this proposal is reexamined in light of a neural process model of SWM. On the basis of the model's functioning, we propose an alternative explanation for the observed decline in SWM performance when a secondary task is performed during retention: SWM representations drift systematically toward the location of targets appearing during the delay. To test this explanation, participants completed a color-discrimination task during the delay interval of a spatial recall task. In the critical shifting attention condition, the color stimulus could appear either toward or away from the memorized location relative to a midline reference axis. We hypothesized that if shifting attention during the delay leads to the failure of SWM representations, there should be an increase in the variance of recall errors but no change in directional error, regardless of the direction of the shift. Conversely, if shifting attention induces drift of SWM representations—as predicted by the model—there should be systematic changes in the pattern of spatial recall errors depending on the direction of the shift. Results were consistent with the latter possibility—recall errors were biased toward the location of discrimination targets appearing during the delay. PMID:26810574

  13. Quantifying the effect of disruptions to temporal coherence on the intelligibility of compressed American Sign Language video

    NASA Astrophysics Data System (ADS)

    Ciaramello, Frank M.; Hemami, Sheila S.

    2009-02-01

    Communication of American Sign Language (ASL) over mobile phones would be very beneficial to the Deaf community. ASL video encoded to achieve the rates provided by current cellular networks must be heavily compressed, and appropriate assessment techniques are required to analyze the intelligibility of the compressed video. As an extension to a purely spatial measure of intelligibility, this paper quantifies the effect of temporal compression artifacts on sign language intelligibility. These artifacts can be the result of motion-compensation errors that distract the observer or of frame rate reductions. They reduce the perception of smooth motion and disrupt the temporal coherence of the video. Motion-compensation errors that affect temporal coherence are identified by measuring the block-level correlation between co-located macroblocks in adjacent frames. The impact of frame rate reductions was quantified through experimental testing. A subjective study was performed in which fluent ASL participants rated the intelligibility of sequences encoded at a range of 5 different frame rates and with 3 different levels of distortion. The subjective data are used to parameterize an objective intelligibility measure which is highly correlated with subjective ratings at multiple frame rates.
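
    As a concrete illustration of the block-level correlation measure described above, here is a minimal NumPy sketch; the 16×16 macroblock size and the use of the Pearson correlation are assumptions about the general approach, not details taken from the paper.

    ```python
    import numpy as np

    def colocated_block_correlations(prev: np.ndarray, curr: np.ndarray, block: int = 16) -> np.ndarray:
        """Pearson correlation between co-located `block` x `block` macroblocks of two
        consecutive grayscale frames. Low values flag possible temporal-coherence breaks."""
        h, w = prev.shape
        rows, cols = h // block, w // block
        out = np.empty((rows, cols))
        for r in range(rows):
            for c in range(cols):
                a = prev[r*block:(r+1)*block, c*block:(c+1)*block].ravel().astype(float)
                b = curr[r*block:(r+1)*block, c*block:(c+1)*block].ravel().astype(float)
                a -= a.mean(); b -= b.mean()
                denom = np.sqrt((a*a).sum() * (b*b).sum())
                out[r, c] = (a*b).sum() / denom if denom > 0 else 0.0
        return out
    ```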

  14. Post-error action control is neurobehaviorally modulated under conditions of constant speeded response.

    PubMed

    Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki

    2014-01-01

    Post-error slowing (PES) is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimuli disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke impulsive behaviors.

  15. Post-error action control is neurobehaviorally modulated under conditions of constant speeded response

    PubMed Central

    Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki

    2015-01-01

    Post-error slowing (PES) is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimuli disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke impulsive behaviors. PMID:25674058

  16. Evaluation of normalization methods for cDNA microarray data by k-NN classification

    PubMed Central

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, I Saira; Bissell, Mina J

    2005-01-01

    Background Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Results Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either spatial-dependent dye bias (referred later as spatial effect) or intensity-dependent dye bias (referred later as intensity effect) moderately reduce LOOCV classification errors; whereas double-bias-removal techniques which remove both spatial- and intensity effect reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed intensity effect globally and spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Conclusion Using LOOCV error of k-NNs as the evaluation criterion, three double-bias-removal normalization strategies, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, outperform other strategies for removing spatial effect, intensity effect and scale differences from cDNA microarray data. The apparent sensitivity of k-NN LOOCV classification error to dye biases suggests that this criterion provides an informative measure for evaluating normalization methods. All the computational tools used in this study were implemented using the R language for statistical computing and graphics. PMID:16045803

  17. Evaluation of normalization methods for cDNA microarray data by k-NN classification.

    PubMed

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, I Saira; Bissell, Mina J

    2005-07-26

    Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either spatial-dependent dye bias (referred later as spatial effect) or intensity-dependent dye bias (referred later as intensity effect) moderately reduce LOOCV classification errors; whereas double-bias-removal techniques which remove both spatial- and intensity effect reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed intensity effect globally and spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Using LOOCV error of k-NNs as the evaluation criterion, three double-bias-removal normalization strategies, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, outperform other strategies for removing spatial effect, intensity effect and scale differences from cDNA microarray data. The apparent sensitivity of k-NN LOOCV classification error to dye biases suggests that this criterion provides an informative measure for evaluating normalization methods. All the computational tools used in this study were implemented using the R language for statistical computing and graphics.
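
    The evaluation criterion described in this record (LOOCV error of a k-NN classifier on normalized data) can be sketched with scikit-learn as below; the study's own tools were written in R, so the function, variable names, and the toy data here are purely illustrative.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    def loocv_knn_error(X: np.ndarray, y: np.ndarray, k: int = 3) -> float:
        """Leave-one-out cross-validation error rate of a k-NN classifier.
        X holds normalized expression profiles (samples x genes), y the class labels."""
        scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=LeaveOneOut())
        return 1.0 - scores.mean()

    # toy usage: compare two normalizations of the same data by their LOOCV error
    rng = np.random.default_rng(1)
    X_norm_a = rng.normal(size=(30, 200))
    y = rng.integers(0, 2, size=30)
    print(loocv_knn_error(X_norm_a, y))
    ```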

  18. Advances in Parameter and Uncertainty Quantification Using Bayesian Hierarchical Techniques with a Spatially Referenced Watershed Model (Invited)

    NASA Astrophysics Data System (ADS)

    Alexander, R. B.; Boyer, E. W.; Schwarz, G. E.; Smith, R. A.

    2013-12-01

    Estimating water and material stores and fluxes in watershed studies is frequently complicated by uncertainties in quantifying hydrological and biogeochemical effects of factors such as land use, soils, and climate. Although these process-related effects are commonly measured and modeled in separate catchments, researchers are especially challenged by their complexity across catchments and diverse environmental settings, leading to a poor understanding of how model parameters and prediction uncertainties vary spatially. To address these concerns, we illustrate the use of Bayesian hierarchical modeling techniques with a dynamic version of the spatially referenced watershed model SPARROW (SPAtially Referenced Regression On Watershed attributes). The dynamic SPARROW model is designed to predict streamflow and other water cycle components (e.g., evapotranspiration, soil and groundwater storage) for monthly varying hydrological regimes, using mechanistic functions, mass conservation constraints, and statistically estimated parameters. In this application, the model domain includes nearly 30,000 NHD (National Hydrography Dataset) stream reaches and their associated catchments in the Susquehanna River Basin. We report the results of our comparisons of alternative models of varying complexity, including models with different explanatory variables as well as hierarchical models that account for spatial and temporal variability in model parameters and variance (error) components. The model errors are evaluated for changes with season and catchment size and correlations in time and space. The hierarchical models consist of a two-tiered structure in which climate forcing parameters are modeled as random variables, conditioned on watershed properties. Quantification of spatial and temporal variations in the hydrological parameters and model uncertainties in this approach leads to more efficient (lower variance) and less biased model predictions throughout the river network. Moreover, predictions of water-balance components are reported according to probabilistic metrics (e.g., percentiles, prediction intervals) that include both parameter and model uncertainties. These improvements in predictions of streamflow dynamics can inform the development of more accurate predictions of spatial and temporal variations in biogeochemical stores and fluxes (e.g., nutrients and carbon) in watersheds.

  19. Impact of temporal upscaling and chemical transport model horizontal resolution on reducing ozone exposure misclassification

    NASA Astrophysics Data System (ADS)

    Xu, Yadong; Serre, Marc L.; Reyes, Jeanette M.; Vizuete, William

    2017-10-01

    We have developed a Bayesian Maximum Entropy (BME) framework that integrates observations from a surface monitoring network and predictions from a Chemical Transport Model (CTM) to create improved exposure estimates that can be resolved into any spatial and temporal resolution. The flexibility of the framework allows for input of data in any choice of time scales and CTM predictions of any spatial resolution with varying associated degrees of estimation error and cost in terms of implementation and computation. This study quantifies the impact on exposure estimation error due to these choices by first comparing estimation errors when BME relied on ozone concentration data either as an hourly average, the daily maximum 8-h average (DM8A), or the daily 24-h average (D24A). Our analysis found that the use of DM8A and D24A data, although less computationally intensive, reduced estimation error more when compared to the use of hourly data. This was primarily due to the poorer CTM model performance in the hourly average predicted ozone. Our second analysis compared spatial variability and estimation errors when BME relied on CTM predictions with a grid cell resolution of 12 × 12 km² versus a coarser resolution of 36 × 36 km². Our analysis found that integrating the finer grid resolution CTM predictions not only reduced estimation error, but also increased the spatial variability in daily ozone estimates by a factor of 5. This improvement was due to the improved spatial gradients and model performance found in the finer resolved CTM simulation. The integration of observational data and model predictions that is permitted in a BME framework continues to be a powerful approach for improving exposure estimates of ambient air pollution. The results of this analysis demonstrate the importance of also understanding model performance variability and its implications for exposure error.
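
    The daily metrics compared in this record can be computed from hourly data as sketched below; the simplified window convention (8-h means starting at each of hours 0-16 of the same day) is an assumption standing in for the full regulatory definition of the DM8A.

    ```python
    import numpy as np

    def dm8a_and_d24a(hourly_o3: np.ndarray) -> tuple[float, float]:
        """Daily max 8-h average and daily 24-h average from 24 hourly ozone values.
        Simplified: 8-h windows start at hours 0..16 of the same calendar day."""
        assert hourly_o3.shape == (24,)
        eight_hr_means = np.array([hourly_o3[s:s + 8].mean() for s in range(17)])
        return float(eight_hr_means.max()), float(hourly_o3.mean())

    day = np.array([30, 28, 27, 26, 25, 27, 33, 40, 48, 55, 60, 64,
                    66, 67, 65, 62, 58, 52, 47, 42, 38, 35, 33, 31], dtype=float)
    print(dm8a_and_d24a(day))  # (DM8A, D24A) in the same units as the input (e.g., ppb)
    ```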

  20. Estimating surface soil moisture from SMAP observations using a Neural Network technique.

    PubMed

    Kolassa, J; Reichle, R H; Liu, Q; Alemohammad, S H; Gentine, P; Aida, K; Asanuma, J; Bircher, S; Caldwell, T; Colliander, A; Cosh, M; Collins, C Holifield; Jackson, T J; Martínez-Fernández, J; McNairn, H; Pacheco, A; Thibeault, M; Walker, J P

    2018-01-01

    A Neural Network (NN) algorithm was developed to estimate global surface soil moisture for April 2015 to March 2017 with a 2-3 day repeat frequency using passive microwave observations from the Soil Moisture Active Passive (SMAP) satellite, surface soil temperatures from the NASA Goddard Earth Observing System Model version 5 (GEOS-5) land modeling system, and Moderate Resolution Imaging Spectroradiometer-based vegetation water content. The NN was trained on GEOS-5 soil moisture target data, making the NN estimates consistent with the GEOS-5 climatology, such that they may ultimately be assimilated into this model without further bias correction. Evaluated against in situ soil moisture measurements, the average unbiased root mean square error (ubRMSE), correlation and anomaly correlation of the NN retrievals were 0.037 m³ m⁻³, 0.70 and 0.66, respectively, against SMAP core validation site measurements and 0.026 m³ m⁻³, 0.58 and 0.48, respectively, against International Soil Moisture Network (ISMN) measurements. At the core validation sites, the NN retrievals have a significantly higher skill than the GEOS-5 model estimates and a slightly lower correlation skill than the SMAP Level-2 Passive (L2P) product. The feasibility of the NN method was reflected by a lower ubRMSE compared to the L2P retrievals as well as a higher skill when ancillary parameters in physically-based retrievals were uncertain. Against ISMN measurements, the skill of the two retrieval products was more comparable. A triple collocation analysis against Advanced Microwave Scanning Radiometer 2 (AMSR2) and Advanced Scatterometer (ASCAT) soil moisture retrievals showed that the NN and L2P retrieval errors have a similar spatial distribution, but the NN retrieval errors are generally lower in densely vegetated regions and transition zones.
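
    The two headline skill metrics in this record, ubRMSE and (anomaly) correlation, can be sketched in a few lines of NumPy; treating the anomaly as a departure from a supplied climatology is an assumption about the exact definition used.

    ```python
    import numpy as np

    def ubrmse(est: np.ndarray, obs: np.ndarray) -> float:
        """Unbiased RMSE: RMSE after removing each series' mean (i.e., the bias)."""
        e = est - est.mean()
        o = obs - obs.mean()
        return float(np.sqrt(np.mean((e - o) ** 2)))

    def anomaly_correlation(est, obs, clim_est, clim_obs) -> float:
        """Correlation of anomalies relative to supplied climatologies (assumed definition)."""
        return float(np.corrcoef(est - clim_est, obs - clim_obs)[0, 1])
    ```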

  1. Impact of random pointing and tracking errors on the design of coherent and incoherent optical intersatellite communication links

    NASA Technical Reports Server (NTRS)

    Chen, Chien-Chung; Gardner, Chester S.

    1989-01-01

    Given the rms transmitter pointing error and the desired probability of bit error (PBE), it can be shown that an optimal transmitter antenna gain exists which minimizes the required transmitter power. Given the rms local oscillator tracking error, an optimum receiver antenna gain can be found which optimizes the receiver performance. The impact of pointing and tracking errors on the design of direct-detection pulse-position modulation (PPM) and heterodyne noncoherent frequency-shift keying (NCFSK) systems is then analyzed in terms of constraints on the antenna size and the power penalty incurred. It is shown that in the limit of large spatial tracking errors, the advantage in receiver sensitivity for the heterodyne system is quickly offset by the smaller antenna gain and the higher power penalty due to tracking errors. In contrast, for systems with small spatial tracking errors, the heterodyne system is superior because of the higher receiver sensitivity.

  2. A Robust State Estimation Framework Considering Measurement Correlations and Imperfect Synchronization

    DOE PAGES

    Zhao, Junbo; Wang, Shaobu; Mili, Lamine; ...

    2018-01-08

    Here, this paper develops a robust power system state estimation framework with the consideration of measurement correlations and imperfect synchronization. In the framework, correlations of SCADA and Phasor Measurement Unit (PMU) measurements are calculated separately through unscented transformation and a Vector Auto-Regression (VAR) model. In particular, PMU measurements during the waiting period between two SCADA measurement scans are buffered to develop the VAR model with robustly estimated parameters using a projection statistics approach. The latter takes into account the temporal and spatial correlations of PMU measurements and provides redundant measurements to suppress bad data and mitigate imperfect synchronization. In cases where the SCADA and PMU measurements are not time synchronized, either the forecasted PMU measurements or the prior SCADA measurements from the last estimation run are leveraged to restore system observability. Then, a robust generalized maximum-likelihood (GM) estimator is extended to integrate measurement error correlations and to handle the outliers in the SCADA and PMU measurements. Simulation results that stem from a comprehensive comparison with other alternatives under various conditions demonstrate the benefits of the proposed framework.

  3. A Robust State Estimation Framework Considering Measurement Correlations and Imperfect Synchronization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Junbo; Wang, Shaobu; Mili, Lamine

    Here, this paper develops a robust power system state estimation framework with the consideration of measurement correlations and imperfect synchronization. In the framework, correlations of SCADA and Phasor Measurement Unit (PMU) measurements are calculated separately through unscented transformation and a Vector Auto-Regression (VAR) model. In particular, PMU measurements during the waiting period between two SCADA measurement scans are buffered to develop the VAR model with robustly estimated parameters using a projection statistics approach. The latter takes into account the temporal and spatial correlations of PMU measurements and provides redundant measurements to suppress bad data and mitigate imperfect synchronization. In cases where the SCADA and PMU measurements are not time synchronized, either the forecasted PMU measurements or the prior SCADA measurements from the last estimation run are leveraged to restore system observability. Then, a robust generalized maximum-likelihood (GM) estimator is extended to integrate measurement error correlations and to handle the outliers in the SCADA and PMU measurements. Simulation results that stem from a comprehensive comparison with other alternatives under various conditions demonstrate the benefits of the proposed framework.
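
    A first-order VAR fitted to the buffered PMU samples, as described in this record, can be sketched with ordinary least squares as below; the paper's robust, projection-statistics-based estimation of the VAR parameters is not reproduced here, and the names are illustrative.

    ```python
    import numpy as np

    def fit_var1(buffer: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        """Fit x_t = A x_{t-1} + e_t by least squares.
        `buffer` is (T, d): T buffered PMU samples of dimension d (T >= 3).
        Returns the coefficient matrix A and the residual covariance."""
        X_prev, X_next = buffer[:-1], buffer[1:]
        A_ls, *_ = np.linalg.lstsq(X_prev, X_next, rcond=None)   # solves X_prev @ A_ls ≈ X_next
        resid = X_next - X_prev @ A_ls
        return A_ls.T, np.cov(resid.T)

    def forecast_next(A: np.ndarray, x_last: np.ndarray) -> np.ndarray:
        """One-step forecast used to bridge the gap until the next SCADA scan."""
        return A @ x_last
    ```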

  4. Air pollution modeling over very complex terrain: An evaluation of WRF-Chem over Switzerland for two 1-year periods

    NASA Astrophysics Data System (ADS)

    Ritter, Mathias; Müller, Mathias D.; Tsai, Ming-Yi; Parlow, Eberhard

    2013-10-01

    The fully coupled chemistry module (WRF-Chem) within the Weather Research and Forecasting (WRF) model has been implemented over a Swiss domain for the years 2002 and 1991. The very complex terrain requires a high horizontal resolution (2 × 2 km²), which is achieved by nesting the Swiss domain into a coarser European one. The temporal and spatial distribution of O3, NO2 and PM10 as well as temperature and solar radiation are evaluated against ground-based measurements. The model performs well for the meteorological parameters with Pearson correlation coefficients of 0.92 for temperature and 0.88-0.89 for solar radiation. Temperature has root mean square errors (RMSE) of 3.30 K and 3.51 K for 2002 and 1991, and solar radiation has RMSEs of 122.92 and 116.35 for 2002 and 1991, respectively. For the modeled air pollutants, a multi-linear regression post-processing was used to eliminate systematic bias. Seasonal variations of post-processed air pollutants are represented correctly. However, short-term peaks of several days are not captured by the model. Averaged daily maximum and daily values of O3 achieved Pearson correlation coefficients of 0.69-0.77, whereas averaged NO2 and PM10 had the highest correlations for yearly average values (0.68-0.78). The spatial distribution reveals the importance of PM10 advection from the Po valley to southern Switzerland (Ticino). The absolute errors range from −10 to 15 μg/m³ for ozone, −9 to 3 μg/m³ for NO2 and −4 to 3 μg/m³ for PM10. However, larger errors occur along heavily trafficked roads, in street canyons or on mountains. We also compare yearly modeled results against a dedicated Swiss dispersion model for NO2 and PM10. The dedicated dispersion model has a slightly better statistical performance, but WRF-Chem is capable of computing the temporal evolution of three-dimensional data for a variety of air pollutants and meteorological parameters. Overall, WRF-Chem with the application of post-processing algorithms can produce encouraging statistical values over very complex terrain which are competitive with similar studies.
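
    The multi-linear regression post-processing mentioned above can be sketched as an ordinary least-squares fit of station observations on a small set of modeled predictors; which predictors enter the regression is an assumption here, not a detail from the paper.

    ```python
    import numpy as np

    def mlr_postprocess(predictors: np.ndarray, observed: np.ndarray) -> np.ndarray:
        """Fit observed ≈ [1, predictors] @ beta and return the corrected series.
        `predictors` is (n_times, n_vars), e.g. raw modeled pollutant, temperature, radiation."""
        X = np.column_stack([np.ones(len(observed)), predictors])
        beta, *_ = np.linalg.lstsq(X, observed, rcond=None)
        return X @ beta   # bias-corrected (post-processed) model values at the station
    ```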

  5. Simulating and quantifying legacy topographic data uncertainty: an initial step to advancing topographic change analyses

    NASA Astrophysics Data System (ADS)

    Wasklewicz, Thad; Zhu, Zhen; Gares, Paul

    2017-12-01

    Rapid technological advances, sustained funding, and a greater recognition of the value of topographic data have helped develop an increasing archive of topographic data sources. Advances in basic and applied research related to Earth surface changes require researchers to integrate recent high-resolution topography (HRT) data with the legacy datasets. Several technical challenges and data uncertainty issues persist to date when integrating legacy datasets with more recent HRT data. The disparate data sources required to extend the topographic record back in time are often stored in formats that are not readily compatible with more recent HRT data. Legacy data may also contain unknown or unreported errors that make accounting for data uncertainty difficult. There are also cases of known deficiencies in legacy datasets, which can significantly bias results. Finally, scientists are faced with the daunting challenge of definitively deriving the extent to which a landform or landscape has or will continue to change in response to natural and/or anthropogenic processes. Here, we examine the question: how do we evaluate and portray data uncertainty from the varied topographic legacy sources and combine this uncertainty with current spatial data collection techniques to detect meaningful topographic changes? We view topographic uncertainty as a stochastic process that takes into consideration spatial and temporal variations from a numerical simulation and physical modeling experiment. The numerical simulation incorporates numerous topographic data sources typically found across the range from legacy data to present-day high-resolution data, while the physical model focuses on more recent HRT data acquisition techniques. Elevation uncertainties observed from anchor points in the digital terrain models are modeled using "states" in a stochastic estimator. Stochastic estimators trace the temporal evolution of the uncertainties and are natively capable of incorporating sensor measurements observed at various times in history. The geometric relationship between the anchor point and the sensor measurement can be approximated via spatial correlation even when a sensor does not directly observe an anchor point. Findings from a numerical simulation indicate the estimated error coincides with the actual error using certain sensors (Kinematic GNSS, ALS, TLS, and SfM-MVS). Data from 2D imagery and static GNSS did not perform as well at the time the sensor is integrated into the estimator, largely as a result of the low density of data added from these sources. The estimator provides a history of DEM estimation as well as the uncertainties and cross correlations observed on anchor points. Our work provides preliminary evidence that our approach is valid for integrating legacy data with HRT and warrants further exploration and field validation.
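
    The "states" in a stochastic estimator described above behave like a Kalman filter tracking each anchor point's elevation, with uncertainty growing between sensor epochs and shrinking at each measurement. The scalar sketch below illustrates that behavior; the noise values and the reduction to a single scalar state are assumptions for illustration, not the study's estimator.

    ```python
    def kalman_update(z_est, var_est, z_meas, var_meas, q_process=0.0):
        """One predict/update step for a single anchor-point elevation state.
        var_est grows by q_process between epochs, then shrinks with each measurement."""
        var_pred = var_est + q_process                 # uncertainty grows with elapsed time
        k = var_pred / (var_pred + var_meas)           # Kalman gain
        z_new = z_est + k * (z_meas - z_est)           # blend prior and new sensor elevation
        var_new = (1.0 - k) * var_pred
        return z_new, var_new

    # e.g., legacy DEM elevation (large variance) updated by a TLS observation (small variance)
    print(kalman_update(z_est=102.0, var_est=1.0, z_meas=101.4, var_meas=0.01, q_process=0.05))
    ```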

  6. Spatial regression methods capture prediction uncertainty in species distribution model projections through time

    Treesearch

    Alan K. Swanson; Solomon Z. Dobrowski; Andrew O. Finley; James H. Thorne; Michael K. Schwartz

    2013-01-01

    The uncertainty associated with species distribution model (SDM) projections is poorly characterized, despite its potential value to decision makers. Error estimates from most modelling techniques have been shown to be biased due to their failure to account for spatial autocorrelation (SAC) of residual error. Generalized linear mixed models (GLMM) have the ability to...

  7. Self-error-rejecting photonic qubit transmission in polarization-spatial modes with linear optical elements

    NASA Astrophysics Data System (ADS)

    Jiang, YuXiao; Guo, PengLiang; Gao, ChengYan; Wang, HaiBo; Alzahrani, Faris; Hobiny, Aatef; Deng, FuGuo

    2017-12-01

    We present an original self-error-rejecting photonic qubit transmission scheme for both the polarization and spatial states of photon systems transmitted over collective noise channels. In our scheme, we use simple linear-optical elements, including half-wave plates, 50:50 beam splitters, and polarization beam splitters, to convert spatial-polarization modes into different time bins. By using postselection in different time bins, the success probability of obtaining the uncorrupted states approaches 1/4 for single-photon transmission, which is not influenced by the coefficients of noisy channels. Our self-error-rejecting transmission scheme can be generalized to hyperentangled n-photon systems and is useful in practical high-capacity quantum communications with photon systems in two degrees of freedom.

  8. Correlated errors in geodetic time series: Implications for time-dependent deformation

    USGS Publications Warehouse

    Langbein, J.; Johnson, H.

    1997-01-01

    Analysis of frequent trilateration observations from the two-color electronic distance measuring networks in California demonstrates that the noise power spectra are dominated by white noise at higher frequencies and power law behavior at lower frequencies. In contrast, Earth scientists typically have assumed that only white noise is present in a geodetic time series, since a combination of infrequent measurements and low precision usually precludes identifying the time-correlated signature in such data. After removing a linear trend from the two-color data, it becomes evident that there are primarily two recognizable types of time-correlated noise present in the residuals. The first type is a seasonal variation in displacement which is probably a result of measuring to shallow surface monuments installed in clayey soil which responds to seasonally occurring rainfall; this noise is significant only for a small fraction of the sites analyzed. The second type of correlated noise becomes evident only after spectral analysis of line length changes and shows a functional relation at long periods between power and frequency of the form P(f) ∝ 1/f^α, where f is frequency and α ≈ 2. With α = 2, this type of correlated noise is termed random-walk noise, and its source is mainly thought to be small random motions of geodetic monuments with respect to the Earth's crust, though other sources are possible. Because the line length changes in the two-color networks are measured at irregular intervals, power spectral techniques cannot reliably estimate the level of 1/f^α noise. Rather, we also use here a maximum likelihood estimation technique which assumes that there are only two sources of noise in the residual time series (white noise and random-walk noise) and estimates the amount of each. From this analysis we find that the random-walk noise level averages about 1.3 mm/√yr and that our estimates of the white noise component confirm theoretical limitations of the measurement technique. In addition, the seasonal noise can be as large as 3 mm in amplitude but typically is less than 0.5 mm. Because of the presence of random-walk noise in these time series, modeling and interpretation of the geodetic data must account for this source of error. By way of example we show that estimating the time-varying strain tensor (a form of spatial averaging) from geodetic data having both random-walk and white noise error components results in seemingly significant variations in the rate of strain accumulation; spatial averaging does reduce the size of both noise components but not their relative influence on the resulting strain accumulation model. Copyright 1997 by the American Geophysical Union.
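
    The maximum likelihood step described above can be sketched by writing the residual covariance as a white-noise term plus the standard random-walk term σ²_rw·min(t_i, t_j) and minimizing the negative log-likelihood over the two amplitudes. Everything below (names, toy data, optimizer settings) is illustrative, not the authors' code.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_likelihood(params, t, r):
        """params = log(sigma_white), log(sigma_rw). Covariance of residuals r at times t
        (years): C = sigma_white^2 * I + sigma_rw^2 * min(t_i, t_j) (random walk)."""
        sw, srw = np.exp(params)                       # optimize in log space to stay positive
        C = sw**2 * np.eye(len(t)) + srw**2 * np.minimum.outer(t, t)
        sign, logdet = np.linalg.slogdet(C)
        alpha = np.linalg.solve(C, r)
        return 0.5 * (logdet + r @ alpha + len(t) * np.log(2 * np.pi))

    # toy irregularly sampled series: white noise plus a random-walk-like drift
    rng = np.random.default_rng(2)
    t = np.sort(rng.uniform(0, 8, 60))                 # observation times in years
    r = rng.normal(0, 1.0, 60) + np.cumsum(rng.normal(0, 0.4, 60))
    fit = minimize(neg_log_likelihood, x0=np.log([1.0, 1.0]), args=(t, r))
    print(np.exp(fit.x))                               # estimated (white, random-walk) amplitudes
    ```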

  9. Are Books Like Number Lines? Children Spontaneously Encode Spatial-Numeric Relationships in a Novel Spatial Estimation Task

    PubMed Central

    Thompson, Clarissa A.; Morris, Bradley J.; Sidney, Pooja G.

    2017-01-01

    Do children spontaneously represent spatial-numeric features of a task, even when it does not include printed numbers (Mix et al., 2016)? Sixty first grade students completed a novel spatial estimation task by seeking and finding pages in a 100-page book without printed page numbers. Children were shown pages 1 through 6 and 100, and then were asked, “Can you find page X?” Children’s precision of estimates on the page finder task and a 0-100 number line estimation task was calculated with the Percent Absolute Error (PAE) formula (Siegler and Booth, 2004), in which lower PAE indicated more precise estimates. Children’s numerical knowledge was further assessed with: (1) numeral identification (e.g., What number is this: 57?), (2) magnitude comparison (e.g., Which is larger: 54 or 57?), and (3) counting on (e.g., Start counting from 84 and count up 5 more). Children’s accuracy on these tasks was correlated with their number line PAE. Children’s number line estimation PAE predicted their page finder PAE, even after controlling for age and accuracy on the other numerical tasks. Children’s estimates on the page finder and number line tasks appear to tap a general magnitude representation. However, the page finder task did not correlate with numeral identification and counting-on performance, likely because these tasks do not measure children’s magnitude knowledge. Our results suggest that the novel page finder task is a useful measure of children’s magnitude knowledge, and that books have similar spatial-numeric affordances as number lines and numeric board games. PMID:29312084
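
    The PAE formula cited in the record is simple enough to state directly; the sketch below uses the 0-100 scale of the book and number-line tasks.

    ```python
    def percent_absolute_error(estimate: float, target: float, scale: float = 100.0) -> float:
        """PAE = |estimate - target| / scale * 100 (lower = more precise)."""
        return abs(estimate - target) / scale * 100.0

    # e.g., a child asked for page 72 who opens the book at page 60
    print(percent_absolute_error(60, 72))   # 12.0 percent absolute error
    ```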

  10. Correlates of county-level nonviral sexually transmitted infection hot spots in the US: application of hot spot analysis and spatial logistic regression.

    PubMed

    Chang, Brian A; Pearson, William S; Owusu-Edusei, Kwame

    2017-04-01

    We used a combination of hot spot analysis (HSA) and spatial regression to examine county-level hot spot correlates for the most commonly reported nonviral sexually transmitted infections (STIs) in the 48 contiguous states in the United States (US). We obtained reported county-level total case rates of chlamydia, gonorrhea, and primary and secondary (P&S) syphilis in all counties in the 48 contiguous states from national surveillance data and computed temporally smoothed rates using 2008-2012 data. Covariates were obtained from county-level multiyear (2008-2012) American Community Surveys from the US census. We conducted HSA to identify hot spot counties for all three STIs. We then applied spatial logistic regression with the spatial error model to determine the association between the identified hot spots and the covariates. HSA indicated that ≥84% of hot spots for each STI were in the South. Spatial regression results indicated that a 10-unit increase in the percentage of Black non-Hispanics was associated with a ≈42% (P < 0.01) [≈22% (P < 0.01) for Hispanics] increase in the odds of being a hot spot county for chlamydia and gonorrhea, and ≈27% (P < 0.01) [≈11% (P < 0.01) for Hispanics] for P&S syphilis. Compared with the other regions (West, Midwest, and Northeast), counties in the South were 6.5 (P < 0.01; chlamydia), 9.6 (P < 0.01; gonorrhea), and 4.7 (P < 0.01; P&S syphilis) times more likely to be hot spots. Our study provides important information on hot spot clusters of nonviral STIs in the entire United States, including associations between hot spot counties and sociodemographic factors. Published by Elsevier Inc.
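
    The percentage changes in odds quoted above follow directly from the fitted logit coefficients via exp(Δx·β) − 1; the sketch below back-calculates an illustrative per-unit coefficient, which is not a value reported in the paper.

    ```python
    import math

    def odds_change_percent(beta_per_unit: float, delta: float = 10.0) -> float:
        """Percent change in odds for a `delta`-unit increase in a logit predictor."""
        return (math.exp(beta_per_unit * delta) - 1.0) * 100.0

    # a per-unit coefficient of ~0.035 reproduces the ~42% rise in odds for a
    # 10-percentage-point increase in the predictor (illustrative value only)
    print(round(odds_change_percent(0.035), 1))
    ```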

  11. What Is the Evidence for Inter-laminar Integration in a Prefrontal Cortical Minicolumn?

    PubMed

    Opris, Ioan; Chang, Stephano; Noga, Brian R

    2017-01-01

    The objective of this perspective article is to examine columnar inter-laminar integration during the executive control of behavior. The integration hypothesis posits that perceptual and behavioral signals are integrated within the prefrontal cortical inter-laminar microcircuits. Inter-laminar minicolumnar activity previously recorded from the dorsolateral prefrontal cortex (dlPFC) of nonhuman primates, trained in a visual delay match-to-sample (DMS) task, was re-assessed from an integrative perspective. Biomorphic multielectrode arrays (MEAs) played a unique role in the in vivo recording of columnar cell firing in the dlPFC layers 2/3 and 5/6. Several integrative aspects stem from these experiments: 1. Functional integration of perceptual and behavioral signals across cortical layers during executive control. The integrative effect of dlPFC minicolumns was shown by: (i) increased correlated firing on correct vs. error trials; (ii) decreased correlated firing when the number of non-matching images increased; and (iii) similar spatial firing preference across cortical-striatal cells during spatial-trials, and less on object-trials. 2. Causal relations to integration of cognitive signals by the minicolumnar turbo-engines. The inter-laminar integration between the perceptual and executive circuits was facilitated by stimulating the infra-granular layers with firing patterns obtained from supra-granular layers that enhanced spatial preference of percent correct performance on spatial trials. 3. Integration across hierarchical levels of the brain. The integration of intention signals (visual spatial, direction) with movement preparation (timing, velocity) in striatum and with the motor command and posture in midbrain is also discussed. These findings provide evidence for inter-laminar integration of executive control signals within brain's prefrontal cortical microcircuits.

  12. Improved characterisation and modelling of measurement errors in electrical resistivity tomography (ERT) surveys

    NASA Astrophysics Data System (ADS)

    Tso, Chak-Hau Michael; Kuras, Oliver; Wilkinson, Paul B.; Uhlemann, Sebastian; Chambers, Jonathan E.; Meldrum, Philip I.; Graham, James; Sherlock, Emma F.; Binley, Andrew

    2017-11-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe or assume a statistical model of data errors before inversion. Wrongly prescribed errors can lead to over- or under-fitting of data; however, the derivation of models of data errors is often neglected. With the heightening interest in uncertainty estimation within hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide improved image appraisal. Here we focus on the role of measurement errors in electrical resistivity tomography (ERT). We have analysed two time-lapse ERT datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe; the other is a two-year-long cross-borehole survey at a UK nuclear site with 246 sets of over 50,000 measurements. Our study includes the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and correlation coefficient analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used, i.e. errors may not be uncorrelated as often assumed. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model explains the observed measurement errors better and shows superior inversion results and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the electrodes used to make the measurements. The new model can be readily applied to the diagonal data weighting matrix widely used in common inversion methods, as well as to the data covariance matrix in a Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
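
    A minimal sketch of the kind of error model described above: a linear fit of absolute reciprocal error against absolute transfer resistance, estimated separately within groups of measurements that share an electrode. The grouping key and the plain least-squares fit are assumptions about the general approach, not the authors' exact implementation.

    ```python
    import numpy as np
    from collections import defaultdict

    def grouped_linear_error_model(resistance, reciprocal_error, group_ids):
        """Fit |err| ≈ a + b*|R| separately for each group (e.g., measurements sharing
        an electrode). Returns {group_id: (a, b)}."""
        buckets = defaultdict(list)
        for R, e, g in zip(resistance, reciprocal_error, group_ids):
            buckets[g].append((abs(R), abs(e)))
        params = {}
        for g, pairs in buckets.items():
            R_abs, e_abs = np.array(pairs).T
            X = np.column_stack([np.ones_like(R_abs), R_abs])
            (a, b), *_ = np.linalg.lstsq(X, e_abs, rcond=None)
            params[g] = (float(a), float(b))
        return params
    ```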

  13. Background Error Correlation Modeling with Diffusion Operators

    DTIC Science & Technology

    2013-01-01

    Book chapter (Chapter 8): Background error correlation modeling with diffusion operators. Only report-form metadata and a text fragment survive in this record; the fragment notes that such a diffusion-operator structure simulates enhanced diffusive transport of model errors in regions of strong currents.

  14. The distance discordance metric - A novel approach to quantifying spatial uncertainties in intra- and inter-patient deformable image registration

    PubMed Central

    Saleh, Ziad H.; Apte, Aditya P.; Sharp, Gregory C.; Shusharina, Nadezhda P.; Wang, Ya; Veeraraghavan, Harini; Thor, Maria; Muren, Ludvig P.; Rao, Shyam S.; Lee, Nancy Y.; Deasy, Joseph O.

    2014-01-01

    Previous methods to estimate the inherent accuracy of deformable image registration (DIR) have typically been performed relative to a known ground truth, such as tracking of anatomic landmarks or known deformations in a physical or virtual phantom. In this study, we propose a new approach to estimate the spatial geometric uncertainty of DIR using statistical sampling techniques that can be applied to the resulting deformation vector fields (DVFs) for a given registration. The proposed DIR performance metric, the distance discordance metric (DDM), is based on the variability in the distance between corresponding voxels from different images, which are co-registered to the same voxel at location (X) in an arbitrarily chosen “reference” image. The DDM value at location (X) in the reference image represents the mean dispersion between these voxels when the images are registered to other images in the image set. The method requires at least four registered images to estimate the uncertainty of the DIRs, both for inter- and intra-patient DIR. To validate the proposed method, we generated an image set by deforming a software phantom with known DVFs. The registration error was computed at each voxel in the “reference” phantom and then compared to DDM, inverse consistency error (ICE), and transitivity error (TE) over the entire phantom. The DDM showed a higher Pearson correlation (Rp) with the actual error (Rp ranged from 0.6 to 0.9) in comparison with ICE and TE (Rp ranged from 0.2 to 0.8). In the resulting spatial DDM map, regions with distinct intensity gradients had a lower discordance and therefore less variability relative to regions with uniform intensity. Subsequently, we applied DDM for intra-patient DIR in an image set of 10 longitudinal computed tomography (CT) scans of one prostate cancer patient and for inter-patient DIR in an image set of 10 planning CT scans of different head and neck cancer patients. For both intra- and inter-patient DIR, the spatial DDM map showed large variation over the volume of interest (the pelvis for the prostate patient and the head for the head and neck patients). The highest discordance was observed in the soft tissues, such as the brain, bladder, and rectum, due to higher variability in the registration. The smallest DDM values were observed in the bony structures in the pelvis and the base of the skull. The proposed metric, DDM, provides a quantitative tool to evaluate the performance of DIR when a set of images is available. Therefore, DDM can be used to estimate and visualize the uncertainty of intra- and/or inter-patient DIR based on the variability of the registration rather than the absolute registration error. PMID:24440838
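
    Read schematically, the DDM at one reference location is the dispersion of the corresponding points contributed by the different images in the set. The sketch below takes that dispersion as the mean pairwise distance between those points; this reduction, and the idea of passing in the already-mapped positions, are simplifying assumptions rather than the authors' implementation.

    ```python
    import numpy as np
    from itertools import combinations

    def distance_discordance(positions: np.ndarray) -> float:
        """Schematic DDM at one reference-voxel location: `positions` (n, 3) holds where
        the corresponding point from each of n source images lands after registration to
        a common target image; DDM is taken as the mean pairwise distance (dispersion)."""
        dists = [np.linalg.norm(p - q) for p, q in combinations(positions, 2)]
        return float(np.mean(dists))

    # four images whose corresponding points nearly agree -> small DDM (low discordance)
    pts = np.array([[10.0, 20.0, 5.0], [10.5, 19.8, 5.2], [9.8, 20.1, 4.9], [10.2, 20.0, 5.1]])
    print(distance_discordance(pts))
    ```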

  15. Fast Face-Recognition Optical Parallel Correlator Using High Accuracy Correlation Filter

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Kodate, Kashiko

    2005-11-01

    We designed and fabricated a fully automatic fast face recognition optical parallel correlator [E. Watanabe and K. Kodate: Appl. Opt. 44 (2005) 5666] based on the VanderLugt principle. The implementation of an as-yet unattained ultra high-speed system was aided by reconfiguring the system to make it suitable for easier parallel processing, as well as by composing a higher-accuracy correlation filter and a high-speed ferroelectric liquid crystal spatial light modulator (FLC-SLM). In running trial experiments using this system (dubbed FARCO), we succeeded in acquiring remarkably low error rates of 1.3% for false match rate (FMR) and 2.6% for false non-match rate (FNMR). Given the results of our experiments, the aim of this paper is to examine methods of designing correlation filters and arranging database image arrays for even faster parallel correlation, underlining the issues of calculation technique, quantization bit rate, pixel size and shift from the optical axis. The correlation filter proved to perform excellently, with higher precision than classical correlation and the joint transform correlator (JTC). Moreover, arranging multi-object reference images yields 10-channel correlation signals as sharply marked as those of a single channel. These experimental results demonstrate great potential for achieving a processing speed of 10,000 faces/s.

  16. Integrated model for predicting rice yield with climate change

    NASA Astrophysics Data System (ADS)

    Park, Jin-Ki; Das, Amrita; Park, Jong-Hwa

    2018-04-01

    Rice is the chief agricultural product and one of the primary food sources. For this reason, it is of pivotal importance for the worldwide economy and development. Forecasting yield is therefore vital, both in decision-support systems for farmers and in the planning and management of the country's economy. However, crop yield, which depends on the soil-bio-atmospheric system, is difficult to represent in statistical language. This paper describes a novel approach for predicting rice yield using artificial neural network, spatial interpolation, remote sensing and GIS methods. Herein, the variation in the yield is attributed to climatic parameters and crop health, and the normalized difference vegetation index from MODIS is used as an indicator of plant health and growth. Due importance was given to scaling up the input parameters using spatial interpolation and GIS and minimising the sources of error in every step of the modelling. The low percentage error (2.91) and high correlation (0.76) signify the robust performance of the proposed model. This simple but effective approach is then used to estimate the influence of climate change on South Korean rice production. Under the RCP8.5 scenario, the projected rise in temperature may increase rice yield throughout South Korea.

  17. Prefrontal vulnerabilities and whole brain connectivity in aging and depression.

    PubMed

    Lamar, Melissa; Charlton, Rebecca A; Ajilore, Olusola; Zhang, Aifeng; Yang, Shaolin; Barrick, Thomas R; Rhodes, Emma; Kumar, Anand

    2013-07-01

    Studies exploring the underpinnings of age-related neurodegeneration suggest fronto-limbic alterations that are increasingly vulnerable in the presence of disease including late life depression. Less work has assessed the impact of this specific vulnerability on widespread brain circuitry. Seventy-nine older adults (healthy controls=45; late life depression=34) completed translational tasks shown in non-human primates to rely on fronto-limbic networks involving dorsolateral (Self-Ordered Pointing Task) or orbitofrontal (Object Alternation Task) cortices. A sub-sample of participants also completed diffusion tensor imaging for white matter tract quantification (uncinate and cingulum bundle; n=58) and whole brain tract-based spatial statistics (n=62). Despite task associations to specific white matter tracts across both groups, only healthy controls demonstrated significant correlations between widespread tract integrity and cognition. Thus, increasing Object Alternation Task errors were associated with decreasing fractional anisotropy in the uncinate in late life depression; however, only in healthy controls was the uncinate incorporated into a larger network of white matter vulnerability associating fractional anisotropy with Object Alternation Task errors using whole brain tract-based spatial statistics. It appears that the whole brain impact of specific fronto-limbic vulnerabilities in aging may be eclipsed in the presence of disease-specific neuropathology like that seen in late life depression. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Multiple Velocity Profile Measurements in Hypersonic Flows using Sequentially-Imaged Fluorescence Tagging

    NASA Technical Reports Server (NTRS)

    Bathel, Brett F.; Danehy, Paul M.; Inman, Jennifer A.; Jones, Stephen B.; Ivey, Christopher B.; Goyne, Christopher P.

    2010-01-01

    Nitric-oxide planar laser-induced fluorescence (NO PLIF) was used to perform velocity measurements in hypersonic flows by generating multiple tagged lines which fluoresce as they convect downstream. For each laser pulse, a single interline, progressive scan intensified CCD camera was used to obtain separate images of the initial undelayed and delayed NO molecules that had been tagged by the laser. The CCD configuration allowed for sub-microsecond acquisition of both images, resulting in sub-microsecond temporal resolution as well as sub-mm spatial resolution (0.5 mm × 0.7 mm). Axial velocity was determined by applying a cross-correlation analysis to the horizontal shift of individual tagged lines. The systematic errors, the contribution of gating/exposure-duration errors, and the influence of collision rate on fluorescence were quantified with respect to the temporal uncertainty. Quantification of the spatial uncertainty depended upon the analysis technique and the signal-to-noise ratio of the acquired profiles. This investigation focused on two hypersonic flow experiments: (1) a reaction control system (RCS) jet on an Orion Crew Exploration Vehicle (CEV) wind tunnel model and (2) a 10-degree half-angle wedge containing a 2-mm tall, 4-mm wide cylindrical boundary layer trip.
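
    The cross-correlation step described above amounts to finding the pixel lag that best aligns a tagged-line intensity profile between the undelayed and delayed images, then converting that lag to a velocity. The sketch below does this in 1-D; the pixel pitch, delay, and Gaussian test profiles are placeholders, not experimental values.

    ```python
    import numpy as np

    def line_shift_velocity(profile_0: np.ndarray, profile_dt: np.ndarray,
                            pixel_size_m: float, dt_s: float) -> float:
        """Axial velocity from the pixel shift that maximizes the cross-correlation of a
        tagged-line intensity profile between the undelayed and delayed images."""
        a = profile_0 - profile_0.mean()
        b = profile_dt - profile_dt.mean()
        corr = np.correlate(b, a, mode="full")
        shift_px = corr.argmax() - (len(a) - 1)      # positive = profile moved downstream
        return shift_px * pixel_size_m / dt_s

    # toy example: a Gaussian line displaced by 12 pixels, 0.1 mm pixels, 1 µs delay
    x = np.arange(200)
    line0 = np.exp(-0.5 * ((x - 80) / 3.0) ** 2)
    line1 = np.exp(-0.5 * ((x - 92) / 3.0) ** 2)
    print(line_shift_velocity(line0, line1, pixel_size_m=1e-4, dt_s=1e-6))  # ≈ 1200 m/s
    ```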

  19. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
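
    The binarization step described above (onshore vs. offshore wind on a grid) can be sketched as below; treating winds from within ±90° of due east as onshore is an assumption for an east-facing coastline, not a detail taken from the CEM software.

    ```python
    import numpy as np

    def binarize_onshore(wind_dir_deg: np.ndarray, onshore_from_deg: float = 90.0) -> np.ndarray:
        """Binarize gridded wind direction for CEM-style input: 1 = onshore, 0 = offshore.
        `wind_dir_deg` is the meteorological direction the wind blows FROM; winds within
        ±90° of `onshore_from_deg` (default: east, for an east-facing coast) are onshore."""
        diff = (wind_dir_deg - onshore_from_deg + 180.0) % 360.0 - 180.0
        return (np.abs(diff) < 90.0).astype(np.uint8)

    # D(i, j; n)-style input: binarized forecast wind directions on a grid at one time step
    forecast_dirs = np.array([[80.0, 95.0, 250.0], [120.0, 300.0, 60.0]])
    print(binarize_onshore(forecast_dirs))
    ```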

  20. Spontaneous axial myopia and emmetropization in a strain of wild-type guinea pig (Cavia porcellus).

    PubMed

    Jiang, Liqin; Schaeffel, Frank; Zhou, Xiangtian; Zhang, Sen; Jin, Xi; Pan, Miaozhen; Ye, Lingying; Wu, Xiaomin; Huang, Qinzhu; Lu, Fan; Qu, Jia

    2009-03-01

    To describe a wild-type guinea pig strain with an incidence of spontaneous axial myopia, minimal pupil responses, lack of accommodation, and apparently normal spatial vision. Such a strain is of interest because it may permit the exploration of defective emmetropization and mapping of the underlying quantitative trait loci. Twenty-eight guinea pigs were selected from 220 animals based on binocular myopia (exceeding -1.50 diopter [D]) or anisometropia (difference between both eyes exceeding 10 D) at 4 weeks of age. Refractions and pupil responses were measured with eccentric infrared photoretinoscopy, corneal curvature by modified conventional keratometer, and axial lengths by A-scan ultrasonography once a week. Twenty-one guinea pigs were raised under a normal 12-hour light/12-hour dark cycle. From a sample of 18 anisometropic guinea pigs, 11 were raised under normal light cycle and 7 were raised in the dark to determine the extent to which visual input guides emmetropization. Spatial vision was tested in an automated optomotor drum. In 10 guinea pigs with myopia in both eyes, refractive errors ranged from -15.67 D to -1.50 D at 3 weeks with a high interocular correlation (R = 0.82); axial length and corneal curvature grew almost linearly over time. Strikingly, two patterns of recovery were observed in anisometropic guinea pigs: in 12 (67%) anisometropia persisted, and in 6 (33%) it declined over time. These ratios remained similar in dark-reared guinea pigs. Unlike published strains, all guinea pigs of this strain showed weak pupil responses and no signs of accommodation but up to 3 cyc/deg of spatial resolution. This strain of guinea pigs has spontaneous axial refractive errors that may be genetically or epigenetically determined. Interestingly, it differs from other published strains that show no refractive errors, vivid accommodation, or pupil responses.

  1. Assessment of the spatial variability in tall wheatgrass forage using LANDSAT 8 satellite imagery to delineate potential management zones.

    PubMed

    Cicore, Pablo; Serrano, João; Shahidian, Shakib; Sousa, Adelia; Costa, José Luis; da Silva, José Rafael Marques

    2016-09-01

    Little information is available on the degree of within-field variability of potential production of Tall wheatgrass (Thinopyrum ponticum) forage under unirrigated conditions. The aim of this study was to characterize the spatial variability of the accumulated biomass (AB) without nutritional limitations through vegetation indexes, and then use this information to determine potential management zones. A 27-×-27-m grid cell size was chosen and 84 biomass sampling areas (BSA), each 2 m² in size, were georeferenced. Nitrogen and phosphorus fertilizers were applied after an initial cut at 3 cm height. At 500 °C·day of accumulated thermal time, the AB from each sampling area was collected and evaluated. The spatial variability of AB was estimated more accurately using the Normalized Difference Vegetation Index (NDVI), calculated from LANDSAT 8 images obtained on 24 November 2014 (NDVInov) and 10 December 2014 (NDVIdec), because the potential AB was highly associated with NDVInov and NDVIdec (r² = 0.85 and 0.83, respectively). The models relating potential AB to NDVI were evaluated by root mean squared error (RMSE) and relative root mean squared error (RRMSE); the RRMSE was 12 and 15 % for NDVInov and NDVIdec, respectively. Potential AB and NDVI spatial correlation were quantified with semivariograms. The spatial dependence of AB was low. Six classes of NDVI were analyzed for comparison, and two management zones (MZ) were established from them. To evaluate whether the NDVI method allows delimitation of MZ with different attainable yields, the AB estimated for these MZ were compared through an ANOVA test. The potential AB had significant differences among MZ. Based on these findings, it can be concluded that NDVI obtained from LANDSAT 8 images can be reliably used for creating MZ in soils under permanent pastures dominated by Tall wheatgrass.
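
    A short sketch of the RMSE and relative RMSE (RRMSE) metrics used above to evaluate the AB-NDVI regression models. The biomass values and function names are illustrative, not the study's data.

      # Sketch of RMSE / RRMSE model-evaluation metrics (illustrative data only).
      import numpy as np

      def rmse(observed, predicted):
          return float(np.sqrt(np.mean((np.asarray(observed) - np.asarray(predicted)) ** 2)))

      def rrmse(observed, predicted):
          # Relative RMSE, expressed as a percentage of the mean observation
          return 100.0 * rmse(observed, predicted) / float(np.mean(observed))

      ab_obs = np.array([2100.0, 2550.0, 1800.0, 3050.0])   # kg DM/ha, hypothetical
      ab_pred = np.array([2000.0, 2700.0, 1900.0, 2900.0])  # from an NDVI regression
      print(rmse(ab_obs, ab_pred), rrmse(ab_obs, ab_pred))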

  2. A Spatial Perspective of Droughts and Pluvials in the Tropics and their Relationships to ENSO in CMIP5 Model Simulations

    NASA Astrophysics Data System (ADS)

    Perez Arango, J. D.; Lintner, B. R.; Lyon, B.

    2016-12-01

    Although many aspects of the tropical response to ENSO are well-known, the spatial characteristics of the rainfall response to ENSO remain relatively unexplored. Moreover, in current generation climate models, the spatial signatures of the ENSO tropical teleconnection are more uncertain than other aspects of ENSO variability, such as the amplitude of rainfall anomalies. Following the approach of Lyon (2004) and Lyon and Barnston (2005), we analyze here integrated measures of the spatial extent of drought and pluvial conditions in the tropics and their relationship to ENSO in observations as well as simulations of Phase 5 of the Coupled Model Intercomparison Project (CMIP5) with prescribed SST forcing. We compute diagnostics including the model ensemble-means and standard deviations of moderate, intermediate, and severe droughts and pluvials and the lagged correlations with respect to ENSO-based SST indices like NINO3. Overall, in a tropics-wide sense, the models generally capture the areal extent of observed droughts and pluvials and their phasing with respect to ENSO. However, at more local scales, e.g., tropical South America, the simulated metrics agree less strongly with observations, underscoring the role of errors in the spatial patterns of ENSO-induced rainfall anomalies.

  3. Development of an adaptive bilateral filter for evaluating color image difference

    NASA Astrophysics Data System (ADS)

    Wang, Zhaohui; Hardeberg, Jon Yngve

    2012-04-01

    Spatial filtering, which aims to mimic the contrast sensitivity function (CSF) of the human visual system (HVS), has previously been combined with color difference formulae for measuring color image reproduction errors. These spatial filters attenuate imperceptible information in images, unfortunately including high frequency edges, which are believed to be crucial in the process of scene analysis by the HVS. The adaptive bilateral filter represents a novel approach, which avoids the undesirable loss of edge information introduced by CSF-based filtering. The bilateral filter employs two Gaussian smoothing filters in different domains, i.e., spatial domain and intensity domain. We propose a method to decide the parameters, which are designed to be adaptive to the corresponding viewing conditions, and the quantity and homogeneity of information contained in an image. Experiments and discussions are given to support the proposal. A series of perceptual experiments were conducted to evaluate the performance of our approach. The experimental sample images were reproduced with variations in six image attributes: lightness, chroma, hue, compression, noise, and sharpness/blurriness. The Pearson's correlation values between the model-predicted image difference and the observed difference were employed to evaluate the performance, and compare it with that of spatial CIELAB and image appearance model.
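
    A minimal sketch of a bilateral filter with two Gaussian kernels, one in the spatial domain and one in the intensity domain, as described above. The adaptive parameter selection proposed in the paper is not reproduced; sigma_s, sigma_r, and the window radius are fixed, illustrative values.

      # Naive bilateral filter: spatial Gaussian x intensity (range) Gaussian.
      import numpy as np

      def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
          """Naive bilateral filter for a 2-D float image with values in [0, 1]."""
          out = np.zeros_like(img)
          ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
          spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
          padded = np.pad(img, radius, mode="reflect")
          for i in range(img.shape[0]):
              for j in range(img.shape[1]):
                  patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                  # Intensity-domain kernel, centered on the current pixel value
                  rng = np.exp(-((patch - img[i, j]) ** 2) / (2.0 * sigma_r**2))
                  w = spatial * rng
                  out[i, j] = np.sum(w * patch) / np.sum(w)
          return out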

  4. Spatially coupled low-density parity-check error correction for holographic data storage

    NASA Astrophysics Data System (ADS)

    Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro

    2017-09-01

    The spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage. The superiority of SC-LDPC was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number, and when the lifting number is over 100, SC-LDPC shows better error correctability than irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. In simulation, the error-free point is near 2.8 dB and error rates of over 10⁻¹ can be corrected. From these simulation results, this error correction code can be applied to actual holographic data storage test equipment. Results showed that an error rate of 8 × 10⁻² can be corrected; furthermore, the code works effectively and shows good error correctability.

  5. Spatial averaging errors in creating hemispherical reflectance (albedo) maps from directional reflectance data

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.; Kerber, A. G.; Sellers, P. J.

    1993-01-01

    Spatial averaging errors that may occur when creating hemispherical reflectance maps for different cover types using a direct nadir technique to estimate the hemispherical reflectance are assessed by comparing the results with those obtained with a knowledge-based system called VEG (Kimes et al., 1991, 1992). It was found that the hemispherical reflectance errors obtained using VEG are much smaller than those obtained with the direct nadir technique, depending on conditions. Suggestions are made concerning sampling and averaging strategies for creating hemispherical reflectance maps for photosynthetic, carbon cycle, and climate change studies.

  6. Analysis of Darwin Rainfall Data: Implications on Sampling Strategy

    NASA Technical Reports Server (NTRS)

    Li, Qihang; Bras, Rafael L.; Veneziano, Daniele

    1996-01-01

    Rainfall data collected by radar in the vicinity of Darwin, Australia, have been analyzed in terms of their mean, variance, autocorrelation of area-averaged rain rate, and diurnal variation. It is found that, when compared with the well-studied GATE (Global Atmospheric Research Program Atlantic Tropical Experiment) data, Darwin rainfall has a larger coefficient of variation (CV), a faster reduction of CV with increasing area size, weaker temporal correlation, and a strong diurnal cycle and intermittence. The coefficient of variation for Darwin rainfall has larger magnitude and exhibits larger spatial variability over the sea portion than over the land portion within the area of radar coverage. Stationary and nonstationary models have been used to study the sampling errors associated with space-based rainfall measurement. The nonstationary model shows that the sampling error is sensitive to the starting sampling time for some sampling frequencies, due to the diurnal cycle of rain, but not for others. Sampling experiments using the data also show such sensitivity. When the errors are averaged over starting time, the results of the experiments and of the stationary and nonstationary models match each other very closely. In the small areas for which data are available for both Darwin and GATE, the sampling error is expected to be larger for Darwin due to its larger CV.

  7. High spatial precision nano-imaging of polarization-sensitive plasmonic particles

    NASA Astrophysics Data System (ADS)

    Liu, Yunbo; Wang, Yipei; Lee, Somin Eunice

    2018-02-01

    Precise polarimetric imaging of polarization-sensitive nanoparticles is essential for resolving their accurate spatial positions beyond the diffraction limit. However, conventional technologies currently suffer from beam deviation errors which cannot be corrected beyond the diffraction limit. To overcome this issue, we experimentally demonstrate a spatially stable nano-imaging system for polarization-sensitive nanoparticles. In this study, we show that by integrating a voltage-tunable imaging variable polarizer with optical microscopy, we are able to suppress beam deviation errors. We expect that this nano-imaging system should allow for acquisition of accurate positional and polarization information from individual nanoparticles in applications where real-time, high precision spatial information is required.

  8. Postoperative pain impairs subsequent performance on a spatial memory task via effects on N-methyl-D-aspartate receptor in aged rats.

    PubMed

    Chi, Haidong; Kawano, Takashi; Tamura, Takahiko; Iwata, Hideki; Takahashi, Yasuhiro; Eguchi, Satoru; Yamazaki, Fumimoto; Kumagai, Naoko; Yokoyama, Masataka

    2013-12-18

    Pain may be associated with postoperative cognitive dysfunction (POCD); however, this relationship remains underinvestigated. Therefore, we examined the impact of postoperative pain on cognitive function in aged animals. Rats were allocated to the following groups: control (C), 1.2 % isoflurane for 2 hours alone (I), I with laparotomy (IL), IL with analgesia using local ropivacaine (IL+R), and IL with analgesia using systemic morphine (IL+M). Pain was assessed by the rat grimace scale (RGS). Spatial memory was evaluated using a radial maze from postoperative days (POD) 3 to 14. NMDA receptor (NR) 2 subunits in the hippocampus were measured by ELISA. Finally, the effects of memantine, a low-affinity uncompetitive N-methyl-D-aspartate (NMDA) receptor antagonist, on postoperative cognitive performance were tested. Postoperative RGS was increased in Group IL, but not in the other groups. The number of memory errors in Group I was comparable to that in Group C, whereas errors in Group IL were increased. Importantly, in Groups IL+R and IL+M, cognitive impairment was not found. The memory errors were positively correlated with the levels of NMDA receptor 2 subunits in the hippocampus. Prophylactic treatment with memantine could prevent the development of the memory deficits observed in Group IL without an analgesic effect. Postoperative pain contributes to the development of memory deficits after anesthesia and surgery via up-regulation of hippocampal NMDA receptors. Our findings suggest that postoperative pain management may be important for the prevention of POCD in elderly patients.

  9. Correlations between Preoperative Angle Parameters and Postoperative Unpredicted Refractive Errors after Cataract Surgery in Open Angle Glaucoma (AOD 500).

    PubMed

    Lee, Wonseok; Bae, Hyoung Won; Lee, Si Hyung; Kim, Chan Yun; Seong, Gong Je

    2017-03-01

    To assess the accuracy of intraocular lens (IOL) power prediction for cataract surgery with open angle glaucoma (OAG) and to identify preoperative angle parameters correlated with postoperative unpredicted refractive errors. This study comprised 45 eyes from 45 OAG subjects and 63 eyes from 63 non-glaucomatous cataract subjects (controls). We investigated differences in preoperative predicted refractive errors and postoperative refractive errors for each group. Preoperative predicted refractive errors were obtained by biometry (IOL-master) and compared to postoperative refractive errors measured by auto-refractometer 2 months postoperatively. Anterior angle parameters were determined using swept source optical coherence tomography. We investigated correlations between preoperative angle parameters [angle open distance (AOD); trabecular iris surface area (TISA); angle recess area (ARA); trabecular iris angle (TIA)] and postoperative unpredicted refractive errors. In patients with OAG, significant differences were noted between preoperative predicted and postoperative real refractive errors, with more myopia than predicted. No significant differences were recorded in controls. Angle parameters (AOD, ARA, TISA, and TIA) at the superior and inferior quadrant were significantly correlated with differences between predicted and postoperative refractive errors in OAG patients (-0.321 to -0.408, p<0.05). Superior quadrant AOD 500 was significantly correlated with postoperative refractive differences in multivariate linear regression analysis (β=-2.925, R²=0.404). Clinically unpredicted refractive errors after cataract surgery were more common in OAG than in controls. Certain preoperative angle parameters, especially AOD 500 at the superior quadrant, were significantly correlated with these unpredicted errors.

  10. Correlations between Preoperative Angle Parameters and Postoperative Unpredicted Refractive Errors after Cataract Surgery in Open Angle Glaucoma (AOD 500)

    PubMed Central

    Lee, Wonseok; Bae, Hyoung Won; Lee, Si Hyung; Kim, Chan Yun

    2017-01-01

    Purpose To assess the accuracy of intraocular lens (IOL) power prediction for cataract surgery with open angle glaucoma (OAG) and to identify preoperative angle parameters correlated with postoperative unpredicted refractive errors. Materials and Methods This study comprised 45 eyes from 45 OAG subjects and 63 eyes from 63 non-glaucomatous cataract subjects (controls). We investigated differences in preoperative predicted refractive errors and postoperative refractive errors for each group. Preoperative predicted refractive errors were obtained by biometry (IOL-master) and compared to postoperative refractive errors measured by auto-refractometer 2 months postoperatively. Anterior angle parameters were determined using swept source optical coherence tomography. We investigated correlations between preoperative angle parameters [angle open distance (AOD); trabecular iris surface area (TISA); angle recess area (ARA); trabecular iris angle (TIA)] and postoperative unpredicted refractive errors. Results In patients with OAG, significant differences were noted between preoperative predicted and postoperative real refractive errors, with more myopia than predicted. No significant differences were recorded in controls. Angle parameters (AOD, ARA, TISA, and TIA) at the superior and inferior quadrant were significantly correlated with differences between predicted and postoperative refractive errors in OAG patients (-0.321 to -0.408, p<0.05). Superior quadrant AOD 500 was significantly correlated with postoperative refractive differences in multivariate linear regression analysis (β=-2.925, R²=0.404). Conclusion Clinically unpredicted refractive errors after cataract surgery were more common in OAG than in controls. Certain preoperative angle parameters, especially AOD 500 at the superior quadrant, were significantly correlated with these unpredicted errors. PMID:28120576

  11. Spatial effects, sampling errors, and task specialization in the honey bee.

    PubMed

    Johnson, B R

    2010-05-01

    Task allocation patterns should depend on the spatial distribution of work within the nest, variation in task demand, and the movement patterns of workers; however, relatively little research has focused on these topics. This study uses a spatially explicit agent-based model to determine whether such factors alone can generate biases in task performance at the individual level in the honey bee, Apis mellifera. Specialization (bias in task performance) is shown to result from strong sampling error due to localized task demand, slow worker movement relative to nest size, and strong spatial variation in task demand. To date, specialization has been primarily interpreted with the response threshold concept, which focuses on intrinsic (typically genotypic) differences between workers. Response threshold variation and sampling error due to spatial effects are not mutually exclusive, however, and this study suggests that both contribute to patterns of task bias at the individual level. While spatial effects are strong enough to explain some documented cases of specialization, they are relatively short-term and do not explain long-term cases of specialization. In general, this study suggests that the spatial layout of tasks and fluctuations in their demand must be explicitly controlled for in studies aimed at identifying genotypic specialists.

  12. Correlation matching method for high-precision position detection of optical vortex using Shack-Hartmann wavefront sensor.

    PubMed

    Huang, Chenxi; Huang, Hongxin; Toyoda, Haruyoshi; Inoue, Takashi; Liu, Huafeng

    2012-11-19

    We propose a new method for realizing high-spatial-resolution detection of singularity points in optical vortex beams. The method uses a Shack-Hartmann wavefront sensor (SHWS) to record a Hartmanngram. A map of evaluation values related to phase slope is then calculated from the Hartmanngram. The position of an optical vortex is determined by comparing the map with reference maps that are calculated from numerically created spiral phases having various positions. Optical experiments were carried out to verify the method. We displayed various spiral phase distribution patterns on a phase-only spatial light modulator and measured the resulting singularity point using the proposed method. The results showed good linearity in detecting the position of singularity points. The RMS error of the measured position of the singularity point was approximately 0.056, in units normalized to the lens size of the lenslet array used in the SHWS.

  13. SINGLE NEURON ACTIVITY AND THETA MODULATION IN POSTRHINAL CORTEX DURING VISUAL OBJECT DISCRIMINATION

    PubMed Central

    Furtak, Sharon C.; Ahmed, Omar J.; Burwell, Rebecca D.

    2012-01-01

    Postrhinal cortex, the rodent homolog of the primate parahippocampal cortex, processes spatial and contextual information. Our hypothesis of postrhinal function is that it serves to encode context, in part, by forming representations that link objects to places. We recorded postrhinal neuronal activity and local field potentials (LFPs) in rats trained on a two-choice, visual discrimination task. As predicted, a large proportion of postrhinal neurons signaled object-location conjunctions. In addition, postrhinal LFPs exhibited strong oscillatory rhythms in the theta band, and many postrhinal neurons were phase locked to theta. Although correlated with running speed, theta power was lower than predicted by speed alone immediately before and after choice. However, theta power was significantly increased following incorrect decisions, suggesting a role in signaling error. These findings provide evidence that postrhinal cortex encodes representations that link objects to places and suggest that postrhinal theta modulation extends to cognitive as well as spatial functions. PMID:23217745

  14. Effect of surface thickness on the wetting front velocity during jet impingement surface cooling

    NASA Astrophysics Data System (ADS)

    Agrawal, Chitranjan; Gotherwal, Deepesh; Singh, Chandradeep; Singh, Charan

    2017-02-01

    A hot stainless steel (SS-304) surface at an initial temperature of 450 ± 10 °C is cooled with a normally impinging round water jet. The experiments have been performed for surfaces of different thicknesses, e.g., 1, 2, and 3 mm, and jet Reynolds numbers in the range Re = 26,500-48,000. The cooling performance of the hot test surface is evaluated on the basis of the wetting front velocity, which is determined for downstream spatial locations 10-40 mm away from the stagnation point. It has been observed that the wetting front velocity increases with the jet flow rate but diminishes toward the downstream spatial locations and with increasing surface thickness. The proposed correlation for the dimensionless wetting front velocity predicts the experimental data well within an error band of ±30 %, and 75 % of the experimental data lie within ±20 %.

  15. A Gridded Daily Min/Max Temperature Dataset With 0.1° Resolution for the Yangtze River Valley and its Error Estimation

    NASA Astrophysics Data System (ADS)

    Xiong, Qiufen; Hu, Jianglin

    2013-05-01

    The minimum/maximum (Min/Max) temperature in the Yangtze River valley is decomposed into a climatic mean and an anomaly component. A spatial interpolation is developed which combines a 3D thin-plate spline scheme for the climatological mean with a 2D Barnes scheme for the anomaly component to create a daily Min/Max temperature dataset. The climatic mean field is obtained by the 3D thin-plate spline scheme because the decrease of Min/Max temperature with elevation is robust and reliable on long time-scales. The anomaly field is only weakly related to elevation, and the anomaly component is adequately analyzed by the 2D Barnes procedure, which is computationally efficient and readily tunable. With this hybrid interpolation method, a daily Min/Max temperature dataset covering the domain from 99°E to 123°E and from 24°N to 36°N with 0.1° longitudinal and latitudinal resolution is obtained by utilizing daily Min/Max temperature data from three kinds of station observations: national reference climatological stations, basic meteorological observing stations, and ordinary meteorological observing stations in 15 provinces and municipalities in the Yangtze River valley from 1971 to 2005. The error of the gridded dataset is assessed by examining cross-validation statistics. The results show that the daily Min/Max temperature interpolation not only has a high correlation coefficient (0.99) and interpolation efficiency (0.98), but also a mean bias error of 0.00 °C. For the maximum temperature, the root mean square error is 1.1 °C and the mean absolute error is 0.85 °C. For the minimum temperature, the root mean square error is 0.89 °C and the mean absolute error is 0.67 °C. Thus, the new dataset provides the distribution of Min/Max temperature over the Yangtze River valley as realistic, continuous gridded data with 0.1° × 0.1° spatial resolution at a daily temporal scale. The primary factors influencing the dataset precision are elevation and terrain complexity. In general, the gridded dataset has relatively high precision in plains and flatlands and relatively low precision in mountainous areas.
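
    A hedged sketch of the hybrid idea described above: the long-term climatological mean is interpolated with a 3-D thin-plate spline in (lon, lat, elevation), and the daily anomaly with a simple single-pass Barnes-style Gaussian weighting in (lon, lat). The function name interpolate_daily_tmax, the station arrays, and the kappa parameter are illustrative assumptions, not the paper's configuration.

      # Hybrid climatology + anomaly interpolation sketch (scipy >= 1.7).
      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def interpolate_daily_tmax(stn_lon, stn_lat, stn_elev, stn_clim, stn_anom,
                                 grid_lon, grid_lat, grid_elev, kappa=0.25):
          # (1) climatological mean: 3-D thin-plate spline including elevation
          clim_rbf = RBFInterpolator(
              np.column_stack([stn_lon, stn_lat, stn_elev]), stn_clim,
              kernel="thin_plate_spline")
          grid_pts = np.column_stack([grid_lon.ravel(), grid_lat.ravel(),
                                      grid_elev.ravel()])
          clim = clim_rbf(grid_pts).reshape(grid_lon.shape)

          # (2) anomaly: single-pass Barnes (Gaussian-weighted) analysis in 2-D
          anom = np.zeros_like(grid_lon, dtype=float)
          for idx in np.ndindex(grid_lon.shape):
              d2 = (grid_lon[idx] - stn_lon) ** 2 + (grid_lat[idx] - stn_lat) ** 2
              w = np.exp(-d2 / kappa)
              anom[idx] = np.sum(w * stn_anom) / np.sum(w)

          return clim + anom   # gridded daily temperature estimate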

  16. Reducing representativeness and sampling errors in radio occultation-radiosonde comparisons

    NASA Astrophysics Data System (ADS)

    Gilpin, Shay; Rieckh, Therese; Anthes, Richard

    2018-05-01

    Radio occultation (RO) and radiosonde (RS) comparisons provide a means of analyzing errors associated with both observational systems. Since RO and RS observations are not taken at the exact same time or location, temporal and spatial sampling errors resulting from atmospheric variability can be significant and inhibit error analysis of the observational systems. In addition, the vertical resolutions of RO and RS profiles vary and vertical representativeness errors may also affect the comparison. In RO-RS comparisons, RO observations are co-located with RS profiles within a fixed time window and distance, i.e. within 3-6 h and circles of radii ranging between 100 and 500 km. In this study, we first show that vertical filtering of RO and RS profiles to a common vertical resolution reduces representativeness errors. We then test two methods of reducing horizontal sampling errors during RO-RS comparisons: restricting co-location pairs to within ellipses oriented along the direction of wind flow rather than circles and applying a spatial-temporal sampling correction based on model data. Using data from 2011 to 2014, we compare RO and RS differences at four GCOS Reference Upper-Air Network (GRUAN) RS stations in different climatic locations, in which co-location pairs were constrained to a large circle ( ˜ 666 km radius), small circle ( ˜ 300 km radius), and ellipse parallel to the wind direction ( ˜ 666 km semi-major axis, ˜ 133 km semi-minor axis). We also apply a spatial-temporal sampling correction using European Centre for Medium-Range Weather Forecasts Interim Reanalysis (ERA-Interim) gridded data. Restricting co-locations to within the ellipse reduces root mean square (RMS) refractivity, temperature, and water vapor pressure differences relative to RMS differences within the large circle and produces differences that are comparable to or less than the RMS differences within circles of similar area. Applying the sampling correction shows the most significant reduction in RMS differences, such that RMS differences are nearly identical to the sampling correction regardless of the geometric constraints. We conclude that implementing the spatial-temporal sampling correction using a reliable model will most effectively reduce sampling errors during RO-RS comparisons; however, if a reliable model is not available, restricting spatial comparisons to within an ellipse parallel to the wind flow will reduce sampling errors caused by horizontal atmospheric variability.
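
    A sketch of the elliptical co-location criterion described above: an RO-RS pair is accepted only if the RO point falls inside an ellipse centered on the RS station with its semi-major axis aligned with the wind direction. The flat-Earth local projection, the angle convention, and the function name inside_wind_ellipse are simplifying assumptions.

      # Wind-aligned ellipse co-location test (approximate, local Cartesian frame).
      import numpy as np

      def inside_wind_ellipse(ro_lat, ro_lon, rs_lat, rs_lon, wind_dir_deg,
                              a_km=666.0, b_km=133.0):
          """True if the RO point lies within the wind-aligned ellipse around the RS site."""
          # Local Cartesian offsets (km), small-angle approximation
          dy = (ro_lat - rs_lat) * 111.0
          dx = (ro_lon - rs_lon) * 111.0 * np.cos(np.radians(rs_lat))
          # Rotate into a frame whose x-axis points along the wind flow
          theta = np.radians(90.0 - wind_dir_deg)   # assumed angle convention
          x = dx * np.cos(theta) + dy * np.sin(theta)
          y = -dx * np.sin(theta) + dy * np.cos(theta)
          return (x / a_km) ** 2 + (y / b_km) ** 2 <= 1.0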

  17. Decreased Leftward ‘Aiming’ Motor-Intentional Spatial Cuing in Traumatic Brain Injury

    PubMed Central

    Wagner, Daymond; Eslinger, Paul J.; Barrett, A. M.

    2016-01-01

    Objective To characterize the mediation of attention and action in space following traumatic brain injury (TBI). Method Two exploratory analyses were performed to determine the influence of spatial ‘Aiming’ motor versus spatial ‘Where’ bias on line bisection in TBI participants. The first experiment compared performance according to severity and location of injury in TBI. The second experiment examined bisection performance in a larger TBI sample against a matched control group. In both experiments, participants bisected lines in near and far space using an apparatus that allowed for the fractionation of spatial Aiming versus Where error components. Results In the first experiment, participants with severe injuries tended to incur rightward error when starting from the right in far space, compared with participants with mild injuries. In the second experiment, when performance was examined at the individual level, more participants with TBI tended to incur rightward motor error compared to controls. Conclusions TBI may cause frontal-subcortical cognitive dysfunction and asymmetric motor perseveration, affecting spatial Aiming bias on line bisection. Potential effects on real-world function need further investigation. PMID:27571220

  18. Language-specific dysgraphia in Korean patients with right brain stroke: influence of unilateral spatial neglect.

    PubMed

    Jang, Dae-Hyun; Kim, Min-Wook; Park, Kyoung Ha; Lee, Jae Woo

    2015-03-01

    The purpose of the present study was to investigate the relationship between Korean language-specific dysgraphia and unilateral spatial neglect in 31 right brain stroke patients. All patients were tested for writing errors in spontaneous writing, dictation, and copying tests. The dysgraphic errors were classified into visuospatial omission, visuospatial destruction, syllabic tilting, stroke omission, stroke addition, and stroke tilting. Twenty-three (77.4%) of the 31 patients exhibited dysgraphia and 18 (58.1%) demonstrated unilateral spatial neglect. Visuospatial omission was the most common error, followed by stroke addition and omission errors. The highest number of errors was made in the copying test and the lowest in the spontaneous writing test. Patients with unilateral spatial neglect made a significantly higher number of dysgraphic errors in the copying test than those without. We identified specific dysgraphia features, such as right-side space omission and vertical stroke addition, in Korean right brain stroke patients. In conclusion, unilateral spatial neglect influences the copying of written Korean in patients with right brain stroke.

  19. Optimal mapping of terrestrial gamma dose rates using geological parent material and aerogeophysical survey data.

    PubMed

    Rawlins, B G; Scheib, C; Tyler, A N; Beamish, D

    2012-12-01

    Regulatory authorities need ways to estimate natural terrestrial gamma radiation dose rates (nGy h⁻¹) across the landscape accurately, to assess its potential deleterious health effects. The primary method for estimating outdoor dose rate is to use an in situ detector supported 1 m above the ground, but such measurements are costly and cannot capture the landscape-scale variation in dose rates which are associated with changes in soil and parent material mineralogy. We investigate the potential for improving estimates of terrestrial gamma dose rates across Northern Ireland (13,542 km²) using measurements from 168 sites and two sources of ancillary data: (i) a map based on a simplified classification of soil parent material, and (ii) dose estimates from a national-scale, airborne radiometric survey. We used the linear mixed modelling framework in which the two ancillary variables were included in separate models as fixed effects, plus a correlation structure which captures the spatially correlated variance component. We used a cross-validation procedure to determine the magnitude of the prediction errors for the different models. We removed a random subset of 10 terrestrial measurements and formed the model from the remainder (n = 158), and then used the model to predict values at the other 10 sites. We repeated this procedure 50 times. The measurements of terrestrial dose vary between 1 and 103 (nGy h⁻¹). The median absolute model prediction errors (nGy h⁻¹) for the three models declined in the following order: no ancillary data (10.8) > simple geological classification (8.3) > airborne radiometric dose (5.4) as a single fixed effect. Estimates of airborne radiometric gamma dose rate can significantly improve the spatial prediction of terrestrial dose rate.
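
    A hedged sketch of the cross-validation scheme described above: repeatedly hold out 10 random terrestrial measurements, refit on the remainder, predict at the held-out sites, and summarize the absolute prediction errors. The callable fit_and_predict is a stand-in for whichever spatial model is being assessed; it and the function name cross_validate are illustrative.

      # Repeated hold-out cross-validation with median absolute prediction error.
      import numpy as np

      def cross_validate(X, y, fit_and_predict, n_rounds=50, n_holdout=10, seed=0):
          rng = np.random.default_rng(seed)
          abs_errors = []
          for _ in range(n_rounds):
              test = rng.choice(len(y), size=n_holdout, replace=False)
              train = np.setdiff1d(np.arange(len(y)), test)
              y_hat = fit_and_predict(X[train], y[train], X[test])
              abs_errors.extend(np.abs(y[test] - y_hat))
          return float(np.median(abs_errors))   # median absolute prediction error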

  20. Electrophysiological evidence for right frontal lobe dominance in spatial visuomotor learning.

    PubMed

    Lang, W; Lang, M; Kornhuber, A; Kornhuber, H H

    1986-02-01

    Slow negative potential shifts were recorded together with the error made in motor performance when two different groups of 14 students tracked visual stimuli with their right hand. Various visuomotor tasks were compared. A tracking task (T) in which subjects had to track the stimulus directly, showed no decrease of error in motor performance during the experiment. In a distorted tracking task (DT) a continuous horizontal distortion of the visual feedback had to be compensated. The additional demands of this task required visuomotor learning. Another learning condition was a mirrored-tracking task (horizontally inverted tracking, hIT), i.e. an elementary function, such as the concept of changing left and right was interposed between perception and action. In addition, subjects performed a no-tracking control task (NT) in which they started the visual stimulus without tracking it. A slow negative potential shift was associated with the visuomotor performance (TP: tracking potential). In the learning tasks (DT and hIT) this negativity was significantly enhanced over the anterior midline and in hIT frontally and precentrally over both hemispheres. Comparing hIT and T for every subject, the enhancement of the tracking potential in hIT was correlated with the success in motor learning in frontomedial and bilaterally in frontolateral recordings (r = 0.81-0.88). However, comparing DT and T, such a correlation was only found in frontomedial and right frontolateral electrodes (r = 0.5-0.61), but not at the left frontolateral electrode. These experiments are consistent with previous findings and give further neurophysiological evidence for frontal lobe activity in visuomotor learning. The hemispherical asymmetry is discussed in respect to hemispherical specialization (right frontal lobe dominance in spatial visuomotor learning).

  1. Rotational wind indicator enhances control of rotated displays

    NASA Technical Reports Server (NTRS)

    Cunningham, H. A.; Pavel, Misha

    1991-01-01

    Rotation by 108 deg of the spatial mapping between a visual display and a manual input device produces large spatial errors in a discrete aiming task. These errors are not easily corrected by voluntary mental effort, but the central nervous system does adapt gradually to the new mapping. Bernotat (1970) showed that adding true hand position to a 90 deg rotated display improved performance of a compensatory tracking task, but tracking error rose again upon removal of the explicit cue. This suggests that the explicit error signal did not induce changes in the neural mapping, but rather allowed the operator to reduce tracking error using a higher mental strategy. In this report, we describe an explicit visual display enhancement applied to a 108 deg rotated discrete aiming task. A 'wind indicator' corresponding to the effect of the mapping rotation is displayed on the operator-controlled cursor. The human operator is instructed to oppose the virtual force represented by the indicator, as one would do if flying an airplane in a crosswind. This enhancement reduces spatial aiming error in the first 10 minutes of practice by an average of 70 percent when compared to a no enhancement control condition. Moreover, it produces adaptation aftereffect, which is evidence of learning by neural adaptation rather than by mental strategy. Finally, aiming error does not rise upon removal of the explicit cue.

  2. Investigating different filter and rescaling methods on simulated GRACE-like TWS variations for hydrological applications

    NASA Astrophysics Data System (ADS)

    Zhang, Liangjing; Dahle, Christoph; Neumayer, Karl-Hans; Dobslaw, Henryk; Flechtner, Frank; Thomas, Maik

    2016-04-01

    Terrestrial water storage (TWS) variations obtained from GRACE play an increasingly important role in various hydrological and hydro-meteorological applications. Since monthly-mean gravity fields are contaminated by errors caused by a number of sources with distinct spatial correlation structures, filtering is needed to remove in particular high frequency noise. Subsequently, bias and leakage caused by the filtering need to be corrected before the final results are interpreted as GRACE-based observations of TWS. Knowledge about the reliability and performance of different post-processing methods is highly important for the GRACE users. In this contribution, we re-assess a number of commonly used post-processing methods using a simulated GRACE-like gravity field time-series based on realistic orbits and instrument error assumptions as well as background error assumptions out of the updated ESA Earth System Model. Two non-isotropic filter methods from Kusche (2007) and Swenson and Wahr (2006) are tested. Rescaling factors estimated from five different hydrological models and the ensemble median are applied to the post-processed simulated GRACE-like TWS estimates to correct the bias and leakage. Since TWS anomalies out of the post-processed simulation results can be readily compared to the time-variable Earth System Model initially used as "truth" during the forward simulation step, we are able to thoroughly check the plausibility of our error estimation assessment and will subsequently recommend a processing strategy that shall also be applied to planned GRACE and GRACE-FO Level-3 products for hydrological applications provided by GFZ. Kusche, J. (2007): Approximate decorrelation and non-isotropic smoothing of time-variable GRACE-type gravity field models. J. Geodesy, 81 (11), 733-749, doi:10.1007/s00190-007-0143-3. Swenson, S. and Wahr, J. (2006): Post-processing removal of correlated errors in GRACE data. Geophysical Research Letters, 33(8):L08402.

  3. Spatial Dynamics and Determinants of County-Level Education Expenditure in China

    ERIC Educational Resources Information Center

    Gu, Jiafeng

    2012-01-01

    In this paper, a multivariate spatial autoregressive model of local public education expenditure determination with autoregressive disturbance is developed and estimated. The existence of spatial interdependence is tested using Moran's I statistic and Lagrange multiplier test statistics for both the spatial error and spatial lag models. The full…
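
    A short sketch of the Moran's I statistic used above to test for spatial interdependence. The weights matrix W is assumed to be any (for example, row-standardized) spatial weights matrix; the function name morans_i is illustrative.

      # Moran's I: I = (n / S0) * (z' W z) / (z' z), with z the centered values.
      import numpy as np

      def morans_i(x, W):
          """Moran's I for values x (length n) and spatial weights W (n x n)."""
          x = np.asarray(x, dtype=float)
          z = x - x.mean()
          n = x.size
          s0 = W.sum()
          return (n / s0) * (z @ W @ z) / (z @ z)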

  4. Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, A. F.; Jacobs, C. S.

    2011-01-01

    The standard VLBI analysis models measurement noise as purely thermal errors following uncorrelated Gaussian distributions. As the price of recording bits steadily decreases, thermal errors will soon no longer dominate. It is therefore expected that troposphere and instrumentation/clock errors will become increasingly dominant. Given that both of these error sources have correlated spectra, properly modeling the error distributions will become more relevant for optimal analysis. This paper discusses the advantages of including the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow model pioneered by Treuhaft and Lanyi. We show examples of applying these correlated noise spectra to the weighting of VLBI data analysis.

  5. Using local correlation tracking to recover solar spectral information from a slitless spectrograph

    NASA Astrophysics Data System (ADS)

    Courrier, Hans T.; Kankelborg, Charles C.

    2018-01-01

    The Multi-Order Solar EUV Spectrograph (MOSES) is a sounding rocket instrument that utilizes a concave spherical diffraction grating to form simultaneous images in the diffraction orders m=0, +1, and -1. MOSES is designed to capture high-resolution cotemporal spectral and spatial information of solar features over a large two-dimensional field of view. Our goal is to estimate the Doppler shift as a function of position for every MOSES exposure. Since the instrument is designed to operate without an entrance slit, this requires disentangling overlapping spectral and spatial information in the m=±1 images. Dispersion in these images leads to a field-dependent displacement that is proportional to Doppler shift. We identify these Doppler shift-induced displacements for the single bright emission line in the instrument passband by comparing images from each spectral order. We demonstrate the use of local correlation tracking as a means to quantify these differences between a pair of cotemporal image orders. The resulting vector displacement field is interpreted as a measurement of the Doppler shift. Since three image orders are available, we generate three Doppler maps from each exposure. These may be compared to produce an error estimate.
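
    A minimal sketch of local correlation tracking between two co-temporal image orders: for each tile of image A, find the integer shift that maximizes the normalized cross-correlation with the corresponding neighborhood of image B. Tile size, search range, and the function name lct_displacement are illustrative; the MOSES pipeline's actual windowing and apodization are not reproduced.

      # Tile-wise normalized cross-correlation to recover local displacements.
      import numpy as np

      def lct_displacement(img_a, img_b, tile=16, search=4):
          ny, nx = img_a.shape
          shifts = np.zeros((ny // tile, nx // tile, 2))
          for ti in range(ny // tile):
              for tj in range(nx // tile):
                  a = img_a[ti * tile:(ti + 1) * tile, tj * tile:(tj + 1) * tile]
                  a = (a - a.mean()) / (a.std() + 1e-12)
                  best, best_dxy = -np.inf, (0, 0)
                  for dy in range(-search, search + 1):
                      for dx in range(-search, search + 1):
                          y0, x0 = ti * tile + dy, tj * tile + dx
                          if y0 < 0 or x0 < 0 or y0 + tile > ny or x0 + tile > nx:
                              continue
                          b = img_b[y0:y0 + tile, x0:x0 + tile]
                          b = (b - b.mean()) / (b.std() + 1e-12)
                          c = np.mean(a * b)      # normalized cross-correlation
                          if c > best:
                              best, best_dxy = c, (dy, dx)
                  shifts[ti, tj] = best_dxy
          # Per-tile (dy, dx); dispersion maps this displacement to a Doppler shift.
          return shifts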

  6. Spatial serial order processing in schizophrenia.

    PubMed

    Fraser, David; Park, Sohee; Clark, Gina; Yohanna, Daniel; Houk, James C

    2004-10-01

    The aim of this study was to examine serial order processing deficits in 21 schizophrenia patients and 16 age- and education-matched healthy controls. In a spatial serial order working memory task, one to four spatial targets were presented in a randomized sequence. Subjects were required to remember the locations and the order in which the targets were presented. Patients showed a marked deficit in ability to remember the sequences compared with controls. Increasing the number of targets within a sequence resulted in poorer memory performance for both control and schizophrenia subjects, but the effect was much more pronounced in the patients. Targets presented at the end of a long sequence were more vulnerable to memory error in schizophrenia patients. Performance deficits were not attributable to motor errors, but to errors in target choice. The results support the idea that the memory errors seen in schizophrenia patients may be due to saturating the working memory network at relatively low levels of memory load.

  7. Aging and the intrusion superiority effect in visuo-spatial working memory.

    PubMed

    Cornoldi, Cesare; Bassani, Chiara; Berto, Rita; Mammarella, Nicola

    2007-01-01

    This study investigated the active component of visuo-spatial working memory (VSWM) in younger and older adults, testing the hypotheses that elderly individuals perform more poorly than younger ones and that errors in active VSWM tasks depend, at least partially, on difficulties in avoiding intrusions (i.e., avoiding already activated information). In two experiments, participants were presented with sequences of matrices on which three positions were pointed out sequentially: their task was to process all the positions but indicate only the final position of each sequence. Results showed poorer performance in the elderly group compared to the younger group and a higher number of intrusion errors (errors due to activated but irrelevant positions) than invention errors (errors consisting of pointing out a position never indicated by the experimenter). The number of errors increased when a concurrent task was introduced (Experiment 1) and was affected by different patterns of matrices (Experiment 2). In general, the results show that elderly people have impaired VSWM and produce a large number of errors due to inhibition failures. However, both the younger and the older adults' visuo-spatial working memory was affected by the presence of activated irrelevant information, the reduction of available resources, and task constraints.

  8. Visualizing Uncertainty of Point Phenomena by Redesigned Error Ellipses

    NASA Astrophysics Data System (ADS)

    Murphy, Christian E.

    2018-05-01

    Visualizing uncertainty remains one of the great challenges in modern cartography. There is no overarching strategy for displaying the nature of uncertainty, as an effective and efficient visualization depends heavily on the type of uncertainty as well as on the spatial data feature type. This work presents a design strategy to visualize uncertainty connected to point features. The error ellipse, well known from mathematical statistics, is adapted to display the uncertainty of point information originating from spatial generalization. Modified designs of the error ellipse show the potential of quantitative and qualitative symbolization and simultaneous point-based uncertainty symbolization. The user can intuitively depict the centers of gravity and the major orientation of the point arrays, as well as estimate the extents and possible spatial distributions of multiple point phenomena. The error ellipse represents uncertainty in an intuitive way, particularly suitable for laymen. Furthermore, it is shown how applicable an adapted design of the error ellipse is for displaying the uncertainty of point features originating from incomplete data. The suitability of the error ellipse to display the uncertainty of point information is demonstrated within two showcases: (1) the analysis of formations of association football players, and (2) uncertain positioning of events on maps for the media.
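
    A sketch of how an error ellipse is derived from the 2 x 2 covariance matrix of a point sample: the eigenvectors give the ellipse orientation and the square roots of the eigenvalues its semi-axes, scaled to a chosen confidence level. The function name error_ellipse and the example data are illustrative.

      # Error ellipse parameters from a 2-D covariance matrix.
      import numpy as np

      def error_ellipse(cov, confidence_scale=2.4477):   # ~95 % for a 2-D Gaussian
          """Return (semi_major, semi_minor, angle_deg) of the error ellipse."""
          eigvals, eigvecs = np.linalg.eigh(np.asarray(cov, dtype=float))
          order = np.argsort(eigvals)[::-1]
          eigvals, eigvecs = eigvals[order], eigvecs[:, order]
          semi_major, semi_minor = confidence_scale * np.sqrt(eigvals)
          angle_deg = np.degrees(np.arctan2(eigvecs[1, 0], eigvecs[0, 0]))
          return semi_major, semi_minor, angle_deg

      pts = np.random.default_rng(1).normal(size=(200, 2)) @ np.array([[2.0, 0.6],
                                                                       [0.0, 1.0]])
      print(error_ellipse(np.cov(pts.T)))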

  9. A spatial error model with continuous random effects and an application to growth convergence

    NASA Astrophysics Data System (ADS)

    Laurini, Márcio Poletti

    2017-10-01

    We propose a spatial error model with continuous random effects based on Matérn covariance functions and apply this model for the analysis of income convergence processes (β -convergence). The use of a model with continuous random effects permits a clearer visualization and interpretation of the spatial dependency patterns, avoids the problems of defining neighborhoods in spatial econometrics models, and allows projecting the spatial effects for every possible location in the continuous space, circumventing the existing aggregations in discrete lattice representations. We apply this model approach to analyze the economic growth of Brazilian municipalities between 1991 and 2010 using unconditional and conditional formulations and a spatiotemporal model of convergence. The results indicate that the estimated spatial random effects are consistent with the existence of income convergence clubs for Brazilian municipalities in this period.
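
    A short sketch of the Matérn covariance function that underlies such continuous random effects: C(h) depends only on the separation distance h, a range parameter, a smoothness parameter nu, and a variance (partial sill). The parameter values and the function name matern_cov are illustrative.

      # Matérn covariance: C(h) = sill * 2^(1-nu)/Gamma(nu) * s^nu * K_nu(s),
      # with s = sqrt(2*nu) * h / range.
      import numpy as np
      from scipy.special import gamma, kv   # kv: modified Bessel function, 2nd kind

      def matern_cov(h, sill=1.0, range_=1.0, nu=1.5):
          h = np.asarray(h, dtype=float)
          c = np.full_like(h, sill)                 # C(0) = sill
          pos = h > 0
          scaled = np.sqrt(2.0 * nu) * h[pos] / range_
          c[pos] = sill * (2.0 ** (1.0 - nu) / gamma(nu)) * scaled**nu * kv(nu, scaled)
          return c

      print(matern_cov(np.array([0.0, 0.5, 1.0, 2.0]), sill=2.0, range_=1.0, nu=0.5))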

  10. Improving estimates of water resources in a semi-arid region by assimilating GRACE data into the PCR-GLOBWB hydrological model

    NASA Astrophysics Data System (ADS)

    Tangdamrongsub, Natthachet; Steele-Dunne, Susan C.; Gunter, Brian C.; Ditmar, Pavel G.; Sutanudjaja, Edwin H.; Sun, Yu; Xia, Ting; Wang, Zhongjing

    2017-04-01

    An accurate estimation of water resources dynamics is crucial for proper management of both agriculture and the local ecology, particularly in semi-arid regions. Imperfections in model physics, uncertainties in model land parameters and meteorological data, as well as the human impact on land changes often limit the accuracy of hydrological models in estimating water storages. To mitigate this problem, this study investigated the assimilation of terrestrial water storage variation (TWSV) estimates derived from the Gravity Recovery And Climate Experiment (GRACE) data using an ensemble Kalman filter (EnKF) approach. The region considered was the Hexi Corridor in northern China. The hydrological model used for the analysis was PCR-GLOBWB, driven by satellite-based forcing data from April 2002 to December 2010. The impact of the GRACE data assimilation (DA) scheme was evaluated in terms of the TWSV, as well as the variation of individual hydrological storage estimates. The capability of GRACE DA to adjust the storage level was apparent not only for the entire TWSV but also for the groundwater component. In this study, spatially correlated errors in GRACE data were taken into account, utilizing the full error variance-covariance matrices provided as a part of the GRACE data product. The benefits of this approach were demonstrated by comparing the EnKF results obtained with and without taking into account error correlations. The results were validated against in situ groundwater data from five well sites. On average, the experiments showed that GRACE DA improved the accuracy of groundwater storage estimates by as much as 25 %. The inclusion of error correlations provided an equal or greater improvement in the estimates. In contrast, a validation against in situ streamflow data from two river gauges showed no significant benefits of GRACE DA. This is likely due to the limited spatial and temporal resolution of GRACE observations. Finally, results of the GRACE DA study were used to assess the status of water resources over the Hexi Corridor over the considered 9-year time interval. Areally averaged values revealed that TWS, soil moisture, and groundwater storages over the region decreased with an average rate of approximately 0.2, 0.1, and 0.1 cm yr-1 in terms of equivalent water heights, respectively. A particularly rapid decline in TWS (approximately -0.4 cm yr-1) was seen over the Shiyang River basin located in the southeastern part of Hexi Corridor. The reduction mostly occurred in the groundwater layer. An investigation of the relationship between water resources and agricultural activities suggested that groundwater consumption required to maintain crop yield in the growing season for this specific basin was likely the cause of the groundwater depletion.
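
    A hedged sketch of the ensemble Kalman filter update used for GRACE data assimilation, with a full (spatially correlated) observation error covariance R rather than a diagonal one. Array shapes, the function name enkf_update, and the perturbed-observation variant are illustrative; the actual PCR-GLOBWB state vector and GRACE TWSV observation operator are not reproduced.

      # Stochastic EnKF update with a full observation error covariance R.
      import numpy as np

      def enkf_update(X, y, H, R, rng=np.random.default_rng(0)):
          """X: state ensemble (n_state, n_ens); y: observations (n_obs,);
          H: observation operator (n_obs, n_state); R: obs error covariance."""
          n_ens = X.shape[1]
          Xp = X - X.mean(axis=1, keepdims=True)          # ensemble perturbations
          HXp = H @ Xp
          Pyy = (HXp @ HXp.T) / (n_ens - 1) + R           # innovation covariance
          Pxy = (Xp @ HXp.T) / (n_ens - 1)                # state-obs cross covariance
          K = Pxy @ np.linalg.inv(Pyy)                    # Kalman gain
          # Perturbed observations drawn from N(y, R) preserve ensemble spread
          Y = rng.multivariate_normal(y, R, size=n_ens).T
          return X + K @ (Y - H @ X)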

  11. Evaluation of a data fusion approach to estimate daily PM2.5 levels in North China

    PubMed Central

    Liang, Fengchao; Gao, Meng; Xiao, Qingyang; Carmichael, Gregory R.

    2017-01-01

    PM2.5 air pollution has been a growing concern worldwide. Previous studies have applied several techniques to estimate PM2.5 exposure spatiotemporally in China, but all of these have limitations. This study aimed to develop a data fusion approach and compare it with kriging and a chemistry model. Two techniques, kriging with an external drift (KED) and the Weather Research and Forecasting model with its Chemistry Module (WRF-Chem), were applied to create daily spatial coverage of PM2.5 in grid cells with a resolution of 10 km in North China in 2013. A data fusion technique was developed by fusing the PM2.5 concentrations predicted by KED and WRF-Chem, accounting for the distance from the center of each grid cell to the nearest ground observations and for the daily spatial correlations between WRF-Chem and the observations. Model performances were evaluated by comparison with ground observations and by the spatial prediction errors. KED and data fusion performed better at monitoring sites, with daily model R² of 0.95 and 0.94, respectively, whereas PM2.5 was overestimated by WRF-Chem (R² = 0.51). KED and data fusion performed better around the ground monitors; WRF-Chem performed relatively worse, with high prediction errors in the center of the study domain. In our study, both the KED and data fusion techniques provided highly accurate PM2.5 estimates. The current monitoring network in North China was dense enough to provide a reliable PM2.5 prediction by an interpolation technique. PMID:28599195
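
    The abstract does not give the exact weighting formula, so the sketch below shows only one plausible way to blend a kriged surface with a chemistry-model surface using (i) distance to the nearest monitor and (ii) the day's spatial correlation between WRF-Chem and the observations. The function fuse_pm25, the weighting form, and decay_km are assumptions, not the paper's scheme.

      # Illustrative distance- and correlation-weighted fusion of two PM2.5 fields.
      import numpy as np

      def fuse_pm25(pm_ked, pm_wrf, dist_to_monitor_km, daily_corr, decay_km=50.0):
          # Weight toward the kriged field near monitors; farther away, trust the
          # model in proportion to how well it matches that day's observations.
          w_near = np.exp(-dist_to_monitor_km / decay_km)
          w_ked = w_near + (1.0 - w_near) * (1.0 - max(daily_corr, 0.0))
          return w_ked * pm_ked + (1.0 - w_ked) * pm_wrf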

  12. The Representation of Three-Dimensional Space in Fish

    PubMed Central

    Burt de Perera, Theresa; Holbrook, Robert I.; Davis, Victoria

    2016-01-01

    In mammals, the so-called “seat of the cognitive map” is located in place cells within the hippocampus. Recent work suggests that the shape of place cell fields might be defined by the animals’ natural movement; in rats the fields appear to be laterally compressed (meaning that the spatial map of the animal is more highly resolved in the horizontal dimensions than in the vertical), whereas the place cell fields of bats are statistically spherical (which should result in a spatial map that is equally resolved in all three dimensions). It follows that navigational error should be equal in the horizontal and vertical dimensions in animals that travel freely through volumes, whereas surface-bound animals would demonstrate greater vertical error. Here, we describe behavioral experiments on pelagic fish in which we investigated the way that fish encode three-dimensional space, and we make inferences about the underlying processing. Our work suggests that fish, like mammals, have a higher-order representation of space that assembles incoming sensory information into a neural unit that can be used to determine position and heading in three dimensions. Further, our results are consistent with this representation being encoded isotropically, as would be expected for animals that move freely through volumes. Definitive evidence for spherical place fields in fish will not only reveal the neural correlates of space to be a deep-seated vertebrate trait, but will also help address the question of the degree to which environmental spatial ecology has shaped cognitive processes and their underlying neural mechanisms. PMID:27014002

  13. Evaluation of a data fusion approach to estimate daily PM2.5 levels in North China.

    PubMed

    Liang, Fengchao; Gao, Meng; Xiao, Qingyang; Carmichael, Gregory R; Pan, Xiaochuan; Liu, Yang

    2017-10-01

    PM2.5 air pollution has been a growing concern worldwide. Previous studies have applied several techniques to estimate PM2.5 exposure spatiotemporally in China, but all of these have limitations. This study aimed to develop a data fusion approach and compare it with kriging and a chemistry model. Two techniques, kriging with an external drift (KED) and the Weather Research and Forecasting model with its Chemistry Module (WRF-Chem), were applied to create daily spatial coverage of PM2.5 in grid cells with a resolution of 10 km in North China in 2013. A data fusion technique was developed by fusing the PM2.5 concentrations predicted by KED and WRF-Chem, accounting for the distance from the center of each grid cell to the nearest ground observations and for the daily spatial correlations between WRF-Chem and the observations. Model performances were evaluated by comparison with ground observations and by the spatial prediction errors. KED and data fusion performed better at monitoring sites, with daily model R² of 0.95 and 0.94, respectively, whereas PM2.5 was overestimated by WRF-Chem (R² = 0.51). KED and data fusion performed better around the ground monitors; WRF-Chem performed relatively worse, with high prediction errors in the center of the study domain. In our study, both the KED and data fusion techniques provided highly accurate PM2.5 estimates. The current monitoring network in North China was dense enough to provide a reliable PM2.5 prediction by an interpolation technique.

  14. Asteroid (21) Lutetia: Semi-Automatic Impact Craters Detection and Classification

    NASA Astrophysics Data System (ADS)

    Jenerowicz, M.; Banaszkiewicz, M.

    2018-05-01

    The need to develop an automated method, independent of lighting and surface conditions, for the identification and measurement of impact craters, as well as the creation of a reliable and efficient tool, motivated our studies. This paper presents a methodology for the detection of impact craters based on their spectral and spatial features. The analysis aims to evaluate the algorithm's capability to determine the spatial parameters of impact craters presented in a time series. In this way, time-consuming visual interpretation of images would be reduced to special cases. The developed algorithm is tested on a set of OSIRIS high-resolution images of the asteroid Lutetia's surface, which is characterized by varied landforms and an abundance of craters created by collisions with smaller bodies of the solar system. The proposed methodology consists of three main steps: characterisation of objects of interest on a limited set of data, semi-automatic extraction of impact craters performed for the total set of data by applying Mathematical Morphology image processing (Serra, 1988; Soille, 2003), and finally, creation of libraries of spatial and spectral parameters for the extracted impact craters, i.e., the coordinates of the crater center, the semi-major and semi-minor axes, the shadow length, and the cross-section. The overall accuracy of the proposed method is 98 %, the Kappa coefficient is 0.84, the correlation coefficient is ∼0.80, the omission error is 24.11 %, and the commission error is 3.45 %. The obtained results show that methods based on Mathematical Morphology operators are effective even with a limited number of data and low-contrast images.
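
    A minimal sketch (with recent scikit-image) of the Mathematical Morphology idea referenced above: a top-hat transform highlights crater-scale features, which are then thresholded and labeled to extract per-region geometry. The structuring-element size, threshold, and the function name detect_craters are illustrative assumptions, not the paper's tuned pipeline.

      # Morphological top-hat + labeling to extract candidate crater regions.
      import numpy as np
      from skimage.morphology import disk, white_tophat
      from skimage.measure import label, regionprops

      def detect_craters(image, se_radius=15, thresh=0.1):
          """image: 2-D float array in [0, 1]; returns a list of candidate regions."""
          tophat = white_tophat(image, footprint=disk(se_radius))  # small bright features
          mask = tophat > thresh
          regions = regionprops(label(mask))
          return [
              {"centroid": r.centroid,
               "semi_major": r.major_axis_length / 2.0,
               "semi_minor": r.minor_axis_length / 2.0}
              for r in regions
          ]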

  15. Accounting for Non-Gaussian Sources of Spatial Correlation in Parametric Functional Magnetic Resonance Imaging Paradigms I: Revisiting Cluster-Based Inferences.

    PubMed

    Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Sathian, K

    2018-02-01

    In a recent study, Eklund et al. employed resting-state functional magnetic resonance imaging data as a surrogate for null functional magnetic resonance imaging (fMRI) datasets and posited that cluster-wise family-wise error (FWE) rate-corrected inferences made by using parametric statistical methods in fMRI studies over the past two decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001; this was principally because the spatial autocorrelation functions (sACF) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggested otherwise. Here, we show that accounting for non-Gaussian signal components such as those arising from resting-state neural activity as well as physiological responses and motion artifacts in the null fMRI datasets yields first- and second-level general linear model analysis residuals with nearly uniform and Gaussian sACF. Further comparison with nonparametric permutation tests indicates that cluster-based FWE corrected inferences made with Gaussian spatial noise approximations are valid.

  16. Development and assessment of a higher-spatial-resolution (4.4 km) MISR aerosol optical depth product using AERONET-DRAGON data

    NASA Astrophysics Data System (ADS)

    Garay, Michael J.; Kalashnikova, Olga V.; Bull, Michael A.

    2017-04-01

    Since early 2000, the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite has been acquiring data that have been used to produce aerosol optical depth (AOD) and particle property retrievals at 17.6 km spatial resolution. Capitalizing on the capabilities provided by multi-angle viewing, the current operational (Version 22) MISR algorithm performs well, with about 75 % of MISR AOD retrievals globally falling within 0.05 or 20 % × AOD of paired validation data from the ground-based Aerosol Robotic Network (AERONET). This paper describes the development and assessment of a prototype version of a higher-spatial-resolution 4.4 km MISR aerosol optical depth product compared against multiple AERONET Distributed Regional Aerosol Gridded Observations Network (DRAGON) deployments around the globe. In comparisons with AERONET-DRAGON AODs, the 4.4 km resolution retrievals show improved correlation (r = 0.9595), smaller RMSE (0.0768), reduced bias (-0.0208), and a larger fraction within the expected error envelope (80.92 %) relative to the Version 22 MISR retrievals.

  17. A spatial model for a stream networks of Citarik River with the environmental variables: potential of hydrogen (PH) and temperature

    NASA Astrophysics Data System (ADS)

    Bachrudin, A.; Mohamed, N. B.; Supian, S.; Sukono; Hidayat, Y.

    2018-03-01

    The application of existing geostatistical theory to stream networks raises a number of interesting and challenging problems. Most statistical tools in traditional geostatistics, such as autocovariance functions, are based on Euclidean distance, which is not permissible for stream data because these deal with stream distance. To overcome this, an autocovariance model based on stream (flow) distance was developed using a convolution kernel approach (moving average construction). Spatial models for stream networks are widely used for environmental monitoring on river networks. In a case study of a river in the province of West Java, the objective of this paper is to analyze the predictive capability of ordinary kriging for two environmental variables, potential of hydrogen (pH) and temperature. The empirical results show that: (1) the best-fitting autocovariance function for temperature and potential of hydrogen (pH) of the Citarik River is linear, which also yields the smallest root mean squared prediction error (RMSPE); (2) the spatial correlation between locations on the upstream and downstream reaches of the Citarik River decreases with distance.
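
    A bare-bones ordinary-kriging sketch with a linear variogram may help make the prediction step concrete. It assumes Euclidean rather than stream distance, and the station coordinates, pH values, slope and nugget are hypothetical; the Citarik study would substitute hydrologic distance along the network.

```python
import numpy as np

def ordinary_kriging(xy_obs, z_obs, xy_new, slope=1.0, nugget=0.0):
    """Ordinary kriging with a linear variogram gamma(h) = nugget + slope * h."""
    n = len(z_obs)
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    gamma = nugget + slope * d_obs
    # Ordinary-kriging system with a Lagrange multiplier for unbiasedness.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma
    A[n, n] = 0.0
    preds = []
    for p in xy_new:
        d0 = np.linalg.norm(xy_obs - p, axis=1)
        b = np.append(nugget + slope * d0, 1.0)
        w = np.linalg.solve(A, b)
        preds.append(w[:n] @ z_obs)
    return np.array(preds)

# Toy example: pH measured at four stations, predicted at one new location.
xy = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
ph = np.array([6.8, 7.1, 6.9, 7.3])
print(ordinary_kriging(xy, ph, np.array([[0.5, 0.5]])))
```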

  18. ICE-Based Custom Full-Mesh Network for the CHIME High Bandwidth Radio Astronomy Correlator

    NASA Astrophysics Data System (ADS)

    Bandura, K.; Cliche, J. F.; Dobbs, M. A.; Gilbert, A. J.; Ittah, D.; Mena Parra, J.; Smecher, G.

    2016-03-01

    New-generation radio interferometers encode signals from thousands of antenna feeds across a large bandwidth. Channelizing and correlating these data requires networking capabilities that can handle unprecedented data rates at reasonable cost. The Canadian Hydrogen Intensity Mapping Experiment (CHIME) correlator processes 8 bits from N = 2,048 digitizer inputs across 400 MHz of bandwidth. Measured in N² × bandwidth, it is the largest radio correlator currently being commissioned. Its digital back-end must exchange and reorganize the 6.6 terabit/s produced by its 128 digitizing and channelizing nodes, and feed it to the 256 graphics processing unit (GPU) node spatial correlator in such a way that each node obtains data from all digitizer inputs but across a small fraction of the bandwidth (i.e. a 'corner-turn'). In order to maximize performance and reliability of the corner-turn system while minimizing cost, a custom networking solution has been implemented. The system makes use of Field Programmable Gate Array (FPGA) transceivers to implement direct, passive-copper, full-mesh, high-speed serial connections between sixteen circuit boards in a crate, to exchange data between crates, and to offload the data to a cluster of 256 GPU nodes using standard 10 Gbit/s Ethernet links. The GPU nodes complete the corner-turn by combining data from all crates and then computing visibilities. Eye diagrams and frame error counters confirm error-free operation of the corner-turn network in both the currently operating CHIME Pathfinder telescope (a prototype for the full CHIME telescope) and a representative fraction of the full CHIME hardware, providing an end-to-end system validation. An analysis of an equivalent corner-turn system built with Ethernet switches instead of custom passive data links is also provided.
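
    The corner-turn itself is just an index permutation: data ordered by (input, frequency) must be regrouped so that each GPU node holds every input for its own frequency slice. A reduced-size numpy sketch of that permutation (not the FPGA implementation) is shown below.

```python
import numpy as np

# Illustrative (much reduced) dimensions; the real system has 2,048 inputs
# and 256 GPU nodes.
n_inputs, n_freq, n_nodes = 16, 8, 4

# Before the corner-turn: model the whole array as [input, frequency].
data = np.arange(n_inputs * n_freq).reshape(n_inputs, n_freq)

# After the corner-turn: each GPU node must hold ALL inputs for a
# contiguous slice of frequencies.
freq_per_node = n_freq // n_nodes
corner_turned = data.reshape(n_inputs, n_nodes, freq_per_node).transpose(1, 0, 2)

# Node k now sees every input, restricted to its frequency slice.
assert corner_turned.shape == (n_nodes, n_inputs, freq_per_node)
assert np.array_equal(corner_turned[1], data[:, freq_per_node:2 * freq_per_node])
print(corner_turned.shape)
```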

  19. EMC: Air Quality Forecast Home page

    Science.gov Websites


  20. Phase-aberration correction with a 3-D ultrasound scanner: feasibility study.

    PubMed

    Ivancevich, Nikolas M; Dahl, Jeremy J; Trahey, Gregg E; Smith, Stephen W

    2006-08-01

    We tested the feasibility of using adaptive imaging, namely phase-aberration correction, with two-dimensional (2-D) arrays and real-time, 3-D ultrasound. Because of the high spatial frequency content of aberrators, 2-D arrays, which generally have smaller pitch and thus higher spatial sampling frequency, and 3-D imaging show potential to improve the performance of adaptive imaging. Phase-correction algorithms improve image quality by compensating for tissue-induced errors in beamforming. Using the illustrative example of transcranial ultrasound, we have evaluated our ability to perform adaptive imaging with a real-time, 3-D scanner. We have used a polymer casting of a human temporal bone, root-mean-square (RMS) phase variation of 45.0 ns, full-width-half-maximum (FWHM) correlation length of 3.35 mm, and an electronic aberrator, 100 ns RMS, 3.76 mm correlation, with tissue phantoms as illustrative examples of near-field, phase-screen aberrators. Using the multilag, least-squares, cross-correlation method, we have shown the ability of 3-D adaptive imaging to increase anechoic cyst identification, image brightness, contrast-to-speckle ratio (CSR), and, in 3-D color Doppler experiments, the ability to visualize flow. For a physical aberrator skull casting we saw CSR increase by 13% from 1.01 to 1.14, while the number of detectable cysts increased from 4.3 to 7.7.

  1. Objectified quantification of uncertainties in Bayesian atmospheric inversions

    NASA Astrophysics Data System (ADS)

    Berchet, A.; Pison, I.; Chevallier, F.; Bousquet, P.; Bonne, J.-L.; Paris, J.-D.

    2015-05-01

    Classical Bayesian atmospheric inversions process atmospheric observations and prior emissions, the two being connected by an observation operator representing mainly the atmospheric transport. These inversions rely on prescribed errors in the observations, the prior emissions and the observation operator. When data are sparse, inversion results are very sensitive to the prescribed error distributions, which are not accurately known. The classical Bayesian framework experiences difficulties in quantifying the impact of mis-specified error distributions on the optimized fluxes. In order to cope with this issue, we rely on recent research results to enhance the classical Bayesian inversion framework through a marginalization over a large set of plausible errors that can be prescribed in the system. The marginalization consists of computing inversions for all possible error distributions, weighted by the probability of occurrence of each error distribution. The posterior distribution of the fluxes calculated by the marginalization is not explicitly describable. As a consequence, we carry out Monte Carlo sampling based on an approximation of the probability of occurrence of the error distributions. This approximation is deduced from the well-tested method of maximum likelihood estimation. Thus, the marginalized inversion relies on an automatic, objectified diagnosis of the error statistics, without any prior knowledge about the matrices. It robustly accounts for the uncertainties in the error distributions, contrary to what is classically done with frozen expert-knowledge error statistics. Some expert knowledge is still used in the method for the choice of an emission aggregation pattern and of a sampling protocol in order to reduce the computational cost. The relevance and robustness of the method are tested on a case study: the inversion of methane surface fluxes at the mesoscale with virtual observations from a realistic network in Eurasia. Observing system simulation experiments are carried out with different transport patterns, flux distributions and total prior amounts of emitted methane. The method proves to consistently reproduce the known "truth" in most cases, with satisfactory tolerance intervals. Additionally, the method explicitly provides influence scores and posterior correlation matrices. An in-depth interpretation of the inversion results is then possible. The more objective quantification of the influence of the observations on the fluxes proposed here allows us to evaluate the impact of the observation network on the characterization of the surface fluxes. The explicit correlations between emission aggregates reveal the mis-separated regions, and hence the typical temporal and spatial scales the inversion can analyse. These scales are consistent with the chosen aggregation patterns.
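
    A toy sketch of the marginalization idea, not the paper's actual algorithm: a small linear-Gaussian inversion is repeated over a grid of plausible error-covariance settings, and each posterior is weighted by an approximate marginal likelihood. All matrices, dimensions and values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear observation operator: 5 observations of 3 flux aggregates.
H = rng.normal(size=(5, 3))
x_true = np.array([1.0, -0.5, 2.0])
y = H @ x_true + rng.normal(scale=0.3, size=5)
x_prior = np.zeros(3)

def analysis(sigma_b, sigma_r):
    """Standard Gaussian/Bayesian update for one prescribed error setting."""
    B = sigma_b**2 * np.eye(3)
    R = sigma_r**2 * np.eye(5)
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # Kalman-type gain
    return x_prior + K @ (y - H @ x_prior)

def log_marginal_likelihood(sigma_b, sigma_r):
    """log p(y | sigma_b, sigma_r), used to weight each error hypothesis."""
    S = sigma_b**2 * H @ H.T + sigma_r**2 * np.eye(5)
    resid = y - H @ x_prior
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (logdet + resid @ np.linalg.solve(S, resid))

# Marginalize: average the single-inversion posteriors over a grid of
# plausible error statistics, weighted by their likelihood of producing y.
grid = [(sb, sr) for sb in (0.5, 1.0, 2.0) for sr in (0.1, 0.3, 1.0)]
logw = np.array([log_marginal_likelihood(sb, sr) for sb, sr in grid])
w = np.exp(logw - logw.max())
w /= w.sum()
x_marginal = sum(wi * analysis(sb, sr) for wi, (sb, sr) in zip(w, grid))
print(x_marginal)
```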

  2. Bayesian spatio-temporal discard model in a demersal trawl fishery

    NASA Astrophysics Data System (ADS)

    Grazia Pennino, M.; Muñoz, Facundo; Conesa, David; López-Quílez, Antonio; Bellido, José M.

    2014-07-01

    Spatial management of discards has recently been proposed as a useful tool for the protection of juveniles, since it reduces discard rates and can act as a buffer against management errors and recruitment failure. In this study, Bayesian hierarchical spatial models were used to analyze about 440 trawl fishing operations of two different metiers, sampled between 2009 and 2012, in order to improve our understanding of the factors that influence the quantity of discards and to identify their spatio-temporal distribution in the study area. Our analysis showed that the relative importance of each variable differed between metiers, with a few similarities. In particular, the random vessel effect and seasonal variability were identified as the main driving variables for both metiers. Predictive maps of the abundance of discards and maps of the posterior mean of the spatial component show several hot spots with high discard concentration for each metier. We discuss how the seasonal/spatial effects, and the knowledge of the factors that influence discarding, could be exploited as potential mitigation measures for future fisheries management strategies. However, misidentification of hotspots and uncertain predictions can culminate in inappropriate mitigation practices which can sometimes be irreversible. The proposed Bayesian spatial method overcomes these issues, since it offers a unified approach which allows the incorporation of spatial random-effect terms, spatial correlation of the variables and the uncertainty of the parameters in the modeling process, resulting in better quantification of uncertainty and more accurate predictions.

  3. Neuropsychology of selective attention and magnetic cortical stimulation.

    PubMed

    Sabatino, M; Di Nuovo, S; Sardo, P; Abbate, C S; La Grutta, V

    1996-01-01

    Informed volunteers were asked to perform different neuropsychological tests involving selective attention under control conditions and during transcranial magnetic cortical stimulation. The tests chosen involved the recognition of a specific letter among different letters (verbal test) and the search for three different spatial orientations of an appendage to a square (visuo-spatial test). For each test the total time taken and the error rate were calculated. Results showed that cortical stimulation did not cause a worsening in performance. Moreover, magnetic stimulation of the temporal lobe neither modified completion time in both verbal and visuo-spatial tests nor changed error rate. In contrast, magnetic stimulation of the pre-frontal area induced a significant reduction in the performance time of both the verbal and visuo-spatial tests always without an increase in the number of errors. The experimental findings underline the importance of the pre-frontal area in performing tasks requiring a high level of controlled attention and suggest the need to adopt an interdisciplinary approach towards the study of neurone/mind interface mechanisms.

  4. Evaluation of spatial filtering on the accuracy of wheat area estimate

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Moreira, M. A.; Chen, S. C.; Delima, A. M.

    1982-01-01

    A 3 x 3 pixel spatial filter for postclassification was applied to wheat classification to evaluate the effects of this procedure on the accuracy of area estimation using LANDSAT digital data obtained from a single pass. Quantitative analyses were carried out in five test sites (approx. 40 sq km each), and t tests showed that filtering with threshold values significantly decreased errors of commission and omission. In area estimation, filtering reduced the overestimate from 4.5% to 2.7%, and the root-mean-square error decreased from 126.18 ha to 107.02 ha. Extrapolating the same procedure of automatic classification with spatial filtering for postclassification to the whole study area, the overestimate in the area estimate was reduced from 10.9% to 9.7%. It is concluded that when single-pass LANDSAT data are used for crop identification and area estimation, the postclassification procedure using a spatial filter provides a more accurate area estimate by reducing classification errors.

  5. Void Growth and Coalescence Simulations

    DTIC Science & Technology

    2013-08-01

    distortion and damage, minimum time step, and appropriate material model parameters. Further, a temporal and spatial convergence study was used to estimate errors; thus, this study helps to provide guidelines for modeling of materials with voids. Finally, we use a Gurson model with Johnson-Cook...

  6. Remembering forward: Neural correlates of memory and prediction in human motor adaptation

    PubMed Central

    Scheidt, Robert A; Zimbelman, Janice L; Salowitz, Nicole M G; Suminski, Aaron J; Mosier, Kristine M; Houk, James; Simo, Lucia

    2011-01-01

    We used functional MR imaging (FMRI), a robotic manipulandum and systems identification techniques to examine neural correlates of predictive compensation for spring-like loads during goal-directed wrist movements in neurologically-intact humans. Although load changed unpredictably from one trial to the next, subjects nevertheless used sensorimotor memories from recent movements to predict and compensate upcoming loads. Prediction enabled subjects to adapt performance so that the task was accomplished with minimum effort. Population analyses of functional images revealed a distributed, bilateral network of cortical and subcortical activity supporting predictive load compensation during visual target capture. Cortical regions - including prefrontal, parietal and hippocampal cortices - exhibited trial-by-trial fluctuations in BOLD signal consistent with the storage and recall of sensorimotor memories or “states” important for spatial working memory. Bilateral activations in associative regions of the striatum demonstrated temporal correlation with the magnitude of kinematic performance error (a signal that could drive reward-optimizing reinforcement learning and the prospective scaling of previously learned motor programs). BOLD signal correlations with load prediction were observed in the cerebellar cortex and red nuclei (consistent with the idea that these structures generate adaptive fusimotor signals facilitating cancellation of expected proprioceptive feedback, as required for conditional feedback adjustments to ongoing motor commands and feedback error learning). Analysis of single subject images revealed that predictive activity was at least as likely to be observed in more than one of these neural systems as in just one. We conclude therefore that motor adaptation is mediated by predictive compensations supported by multiple, distributed, cortical and subcortical structures. PMID:21840405

  7. Characterizing spatial heterogeneity based on the b-value and fractal analyses of the 2015 Nepal earthquake sequence

    NASA Astrophysics Data System (ADS)

    Nampally, Subhadra; Padhy, Simanchal; Dimri, Vijay P.

    2018-01-01

    The nature of the spatial distribution of heterogeneities in the source area of the 2015 Nepal earthquake is characterized based on the seismic b-value and fractal analysis of its aftershocks. The earthquake size distribution of the aftershocks gives a b-value of 1.11 ± 0.08, possibly representing the highly heterogeneous and low-stress state of the region. The aftershocks exhibit a fractal structure characterized by a spectrum of generalized dimensions, Dq, varying from D2 = 1.66 to D22 = 0.11. The existence of a fractal structure suggests that the spatial distribution of aftershocks is not a random phenomenon but self-organizes into a critical state, exhibiting a scale-independent structure governed by a power-law scaling, where a small perturbation in stress is sufficient to trigger aftershocks. In order to assess the bias in fractal dimensions resulting from finite data size, we compared the multifractal spectra of the real data and of random simulations. On comparison, we found that the lower limit of the bias in D2 is 0.44. The similarity of the multifractal spectra suggests a lack of long-range correlation in the data, which is only weakly multifractal, or even monofractal with a single correlation dimension D2. The minimum number of events required to characterize a multifractal process with an acceptable error is discussed. We also tested for a possible correlation between changes in D2 and the energy released during the earthquakes. The values of D2 rise during the two largest earthquakes (M > 7.0) in the sequence. The b- and D2-values are related by D2 = 1.45b, which corresponds to intermediate to large earthquakes. Our results provide useful constraints on the spatial distribution of b- and D2-values, which are useful for seismic hazard assessment in the aftershock area of a large earthquake.

  8. Modeling spatial patterns of soil respiration in maize fields from vegetation and soil property factors with the use of remote sensing and geographical information system.

    PubMed

    Huang, Ni; Wang, Li; Guo, Yiqiang; Hao, Pengyu; Niu, Zheng

    2014-01-01

    To examine the method for estimating the spatial patterns of soil respiration (Rs) in agricultural ecosystems using remote sensing and geographical information system (GIS), Rs rates were measured at 53 sites during the peak growing season of maize in three counties in North China. Through Pearson's correlation analysis, leaf area index (LAI), canopy chlorophyll content, aboveground biomass, soil organic carbon (SOC) content, and soil total nitrogen content were selected as the factors that affected spatial variability in Rs during the peak growing season of maize. The use of a structural equation modeling approach revealed that only LAI and SOC content directly affected Rs. Meanwhile, other factors indirectly affected Rs through LAI and SOC content. When three greenness vegetation indices were extracted from an optical image of an environmental and disaster mitigation satellite in China, enhanced vegetation index (EVI) showed the best correlation with LAI and was thus used as a proxy for LAI to estimate Rs at the regional scale. The spatial distribution of SOC content was obtained by extrapolating the SOC content at the plot scale based on the kriging interpolation method in GIS. When data were pooled for 38 plots, a first-order exponential analysis indicated that approximately 73% of the spatial variability in Rs during the peak growing season of maize can be explained by EVI and SOC content. Further test analysis based on independent data from 15 plots showed that the simple exponential model had acceptable accuracy in estimating the spatial patterns of Rs in maize fields on the basis of remotely sensed EVI and GIS-interpolated SOC content, with R2 of 0.69 and root-mean-square error of 0.51 µmol CO2 m−2 s−1. The conclusions from this study provide valuable information for estimates of Rs during the peak growing season of maize in three counties in North China.
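
    A minimal sketch of fitting an exponential Rs model is given below. It assumes a multiplicative form Rs = a·exp(b·EVI + c·SOC) and entirely hypothetical plot values; the paper's exact functional form and data are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical plot-level data: EVI, SOC content (g/kg) and measured Rs
# (µmol CO2 m-2 s-1); values are placeholders, not the study's data.
evi = np.array([0.45, 0.52, 0.60, 0.66, 0.71, 0.75, 0.58, 0.63])
soc = np.array([8.2, 9.1, 10.5, 11.8, 12.6, 13.4, 9.8, 11.0])
rs  = np.array([2.1, 2.6, 3.3, 4.0, 4.6, 5.2, 3.0, 3.7])

def rs_model(X, a, b, c):
    """Assumed exponential model: Rs = a * exp(b*EVI + c*SOC)."""
    evi, soc = X
    return a * np.exp(b * evi + c * soc)

params, _ = curve_fit(rs_model, (evi, soc), rs, p0=(1.0, 1.0, 0.01))
pred = rs_model((evi, soc), *params)
rmse = np.sqrt(np.mean((pred - rs) ** 2))
print(params, rmse)
```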

  9. Accounting for Non-Gaussian Sources of Spatial Correlation in Parametric Functional Magnetic Resonance Imaging Paradigms II: A Method to Obtain First-Level Analysis Residuals with Uniform and Gaussian Spatial Autocorrelation Function and Independent and Identically Distributed Time-Series.

    PubMed

    Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Lacey, Simon; Sathian, K

    2018-02-01

    In a recent study, Eklund et al. have shown that cluster-wise family-wise error (FWE) rate-corrected inferences made in parametric statistical method-based functional magnetic resonance imaging (fMRI) studies over the past couple of decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001; principally because the spatial autocorrelation functions (sACFs) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggest otherwise. Hence, the residuals from general linear model (GLM)-based fMRI activation estimates in these studies may not have possessed a homogeneously Gaussian sACF. Here we propose a method based on the assumption that heterogeneity and non-Gaussianity of the sACF of the first-level GLM analysis residuals, as well as temporal autocorrelations in the first-level voxel residual time-series, are caused by unmodeled MRI signal from neuronal and physiological processes as well as motion and other artifacts, which can be approximated by appropriate decompositions of the first-level residuals with principal component analysis (PCA), and removed. We show that application of this method yields GLM residuals with significantly reduced spatial correlation, nearly Gaussian sACF and uniform spatial smoothness across the brain, thereby allowing valid cluster-based FWE-corrected inferences based on assumption of Gaussian spatial noise. We further show that application of this method renders the voxel time-series of first-level GLM residuals independent and identically distributed across time (which is a necessary condition for appropriate voxel-level GLM inference), without having to fit ad hoc stochastic colored noise models. Furthermore, the detection power of individual subject brain activation analysis is enhanced. This method will be especially useful for case studies, which rely on first-level GLM analysis inferences.
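
    A schematic of the PCA-denoising idea, under the assumption that the strongest principal components of the residuals carry the shared structured signal; the number of components to remove and the synthetic residuals below are placeholders, not the authors' selection criteria.

```python
import numpy as np

def remove_principal_components(residuals, n_remove=5):
    """Strip the strongest spatially structured components from GLM residuals.

    residuals : array of shape (n_timepoints, n_voxels), assumed mean-centred
    over time. Schematic only; the criteria for choosing how many components
    to remove are not reproduced here.
    """
    # Economy SVD: columns of U are temporal modes, rows of Vt spatial maps.
    U, s, Vt = np.linalg.svd(residuals, full_matrices=False)
    # Zero out the leading modes, which carry most of the shared (structured)
    # signal from unmodeled neural, physiological and motion sources.
    s_clean = s.copy()
    s_clean[:n_remove] = 0.0
    return (U * s_clean) @ Vt

# Toy residuals: 100 time points x 500 voxels with one shared component.
rng = np.random.default_rng(1)
shared = np.outer(np.sin(np.linspace(0, 8 * np.pi, 100)), rng.normal(size=500))
resid = shared + rng.normal(scale=0.5, size=(100, 500))
cleaned = remove_principal_components(resid, n_remove=1)
print(np.std(resid), np.std(cleaned))  # structured variance is reduced
```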

  10. Modeling Spatial Patterns of Soil Respiration in Maize Fields from Vegetation and Soil Property Factors with the Use of Remote Sensing and Geographical Information System

    PubMed Central

    Huang, Ni; Wang, Li; Guo, Yiqiang; Hao, Pengyu; Niu, Zheng

    2014-01-01

    To examine the method for estimating the spatial patterns of soil respiration (Rs) in agricultural ecosystems using remote sensing and geographical information system (GIS), Rs rates were measured at 53 sites during the peak growing season of maize in three counties in North China. Through Pearson's correlation analysis, leaf area index (LAI), canopy chlorophyll content, aboveground biomass, soil organic carbon (SOC) content, and soil total nitrogen content were selected as the factors that affected spatial variability in Rs during the peak growing season of maize. The use of a structural equation modeling approach revealed that only LAI and SOC content directly affected Rs. Meanwhile, other factors indirectly affected Rs through LAI and SOC content. When three greenness vegetation indices were extracted from an optical image of an environmental and disaster mitigation satellite in China, enhanced vegetation index (EVI) showed the best correlation with LAI and was thus used as a proxy for LAI to estimate Rs at the regional scale. The spatial distribution of SOC content was obtained by extrapolating the SOC content at the plot scale based on the kriging interpolation method in GIS. When data were pooled for 38 plots, a first-order exponential analysis indicated that approximately 73% of the spatial variability in Rs during the peak growing season of maize can be explained by EVI and SOC content. Further test analysis based on independent data from 15 plots showed that the simple exponential model had acceptable accuracy in estimating the spatial patterns of Rs in maize fields on the basis of remotely sensed EVI and GIS-interpolated SOC content, with R2 of 0.69 and root-mean-square error of 0.51 µmol CO2 m−2 s−1. The conclusions from this study provide valuable information for estimates of Rs during the peak growing season of maize in three counties in North China. PMID:25157827

  11. Evaluation of land performance in Senegal using multi-temporal NDVI and rainfall series

    USGS Publications Warehouse

    Li, Ji; Lewis, J.; Rowland, James; Tappan, G.; Tieszen, L.L.

    2004-01-01

    Time series of rainfall data and normalized difference vegetation index (NDVI) were used to evaluate land cover performance in Senegal, Africa, for the period 1982–1997, including analysis of woodland/forest, agriculture, savanna, and steppe land cover types. A strong relationship exists between annual rainfall and season-integrated NDVI for all of Senegal (r=0.74 to 0.90). For agriculture, savanna, and steppe areas, high positive correlations portray ‘normal’ land cover performance in relation to the rainfall/NDVI association. Regions of low correlation might indicate areas impacted by human influence. However, in the woodland/forest area, a negative or low correlation (with high NDVI) may reflect ‘normal’ land cover performance, due in part to the saturation effect of the rainfall/NDVI association. The analysis identified three areas of poor performance, where degradation has occurred over many years. Use of the ‘Standard Error of the Estimate’ provided essential information for detecting spatial anomalies associated with land degradation.

  12. Evaluation of 2 cognitive abilities tests in a dual-task environment

    NASA Technical Reports Server (NTRS)

    Vidulich, M. A.; Tsang, P. S.

    1986-01-01

    Most real-world operators are required to perform multiple tasks simultaneously. In some cases, such as flying a high-performance aircraft or troubleshooting a failing nuclear power plant, the operator's ability to "time share" or process in parallel can be driven to extremes. This has created interest in selection tests of cognitive abilities. Two tests that have been suggested are the Dichotic Listening Task and the Cognitive Failures Questionnaire. Correlations between these test results and time-sharing performance were obtained, and the validity of these tests was examined. The primary task was a tracking task with dynamically varying bandwidth. This was performed either alone or concurrently with either another tracking task or a spatial transformation task. The results were: (1) an unexpected negative correlation was detected between the two tests; (2) the lack of correlation between either test and task performance made the predictive utility of the test scores appear questionable; (3) pilots made more errors on the Dichotic Listening Task than college students.

  13. BaTMAn: Bayesian Technique for Multi-image Analysis

    NASA Astrophysics Data System (ADS)

    Casado, J.; Ascasibar, Y.; García-Benito, R.; Guidi, G.; Choudhury, O. S.; Bellocchi, E.; Sánchez, S. F.; Díaz, A. I.

    2016-12-01

    Bayesian Technique for Multi-image Analysis (BaTMAn) characterizes any astronomical dataset containing spatial information and performs a tessellation based on the measurements and errors provided as input. The algorithm iteratively merges spatial elements as long as they are statistically consistent with carrying the same information (i.e. identical signal within the errors). The output segmentations successfully adapt to the underlying spatial structure, regardless of its morphology and/or the statistical properties of the noise. BaTMAn identifies (and keeps) all the statistically-significant information contained in the input multi-image (e.g. an IFS datacube). The main aim of the algorithm is to characterize spatially-resolved data prior to their analysis.

  14. Using CO2:CO Correlations to Improve Inverse Analyses of Carbon Fluxes

    NASA Technical Reports Server (NTRS)

    Palmer, Paul I.; Suntharalingam, Parvadha; Jones, Dylan B. A.; Jacob, Daniel J.; Streets, David G.; Fu, Qingyan; Vay, Stephanie A.; Sachse, Glen W.

    2006-01-01

    Observed correlations between atmospheric concentrations of CO2 and CO represent potentially powerful information for improving CO2 surface flux estimates through coupled CO2-CO inverse analyses. We explore the value of these correlations in improving estimates of regional CO2 fluxes in east Asia by using aircraft observations of CO2 and CO from the TRACE-P campaign over the NW Pacific in March 2001. Our inverse model uses regional CO2 and CO surface fluxes as the state vector, separating biospheric and combustion contributions to CO2. CO2-CO error correlation coefficients are included in the inversion as off-diagonal entries in the a priori and observation error covariance matrices. We derive error correlations in a priori combustion source estimates of CO2 and CO by propagating error estimates of fuel consumption rates and emission factors. However, we find that these correlations are weak because CO source uncertainties are mostly determined by emission factors. Observed correlations between atmospheric CO2 and CO concentrations imply corresponding error correlations in the chemical transport model used as the forward model for the inversion. These error correlations in excess of 0.7, as derived from the TRACE-P data, enable a coupled CO2-CO inversion to achieve significant improvement over a CO2-only inversion for quantifying regional fluxes of CO2.
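
    A toy coupled update illustrating where the CO2-CO error correlations enter: as off-diagonal terms in the prior and observation error covariance matrices of a linear Bayesian inversion. The state vector, observation operator and all numbers are invented for illustration and are not the study's configuration.

```python
import numpy as np

# State vector: [CO2 biospheric flux, CO2 combustion flux, CO flux] (toy units).
x_prior = np.array([0.0, 10.0, 5.0])
B = np.diag([4.0, 4.0, 1.0])
# Hypothetical prior error correlation between combustion CO2 and CO
# (weak, as the paper finds, because emission-factor errors dominate CO).
rho_prior = 0.2
B[1, 2] = B[2, 1] = rho_prior * np.sqrt(B[1, 1] * B[2, 2])

# One paired CO2/CO observation; H maps fluxes to concentrations (toy values).
H = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
y = np.array([13.0, 7.0])
R = np.diag([1.0, 0.5])
# Transport-error correlation between the CO2 and CO simulations
# (in excess of 0.7 in the TRACE-P analysis) enters as an off-diagonal term.
rho_obs = 0.7
R[0, 1] = R[1, 0] = rho_obs * np.sqrt(R[0, 0] * R[1, 1])

# Standard Bayesian (best linear unbiased) update.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_post = x_prior + K @ (y - H @ x_prior)
A = (np.eye(3) - K @ H) @ B           # posterior error covariance
print(x_post)
print(np.sqrt(np.diag(A)))            # compare with np.sqrt(np.diag(B))
```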

  15. Simulation of wave propagation in three-dimensional random media

    NASA Technical Reports Server (NTRS)

    Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.

    1993-01-01

    Quantitative error analysis for simulation of wave propagation in three dimensional random media assuming narrow angular scattering are presented for the plane wave and spherical wave geometry. This includes the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.

  16. Functional transcranial Doppler sonography and a spatial orientation paradigm identify the non-dominant hemisphere.

    PubMed

    Dorst, J; Haag, A; Knake, S; Oertel, W H; Hamer, H M; Rosenow, F

    2008-10-01

    Functional transcranial Doppler sonography (fTCD) during word generation is well established for language lateralization. In this study, we evaluated a fTCD paradigm to reliably identify the non-dominant hemisphere. Twenty-nine right-handed healthy subjects (27.1+/-7.6 years) performed the 'cube perspective test' [Stumpf, H., & Fay, E. (1983). Schlauchfiguren: Ein Test zur Beurteilung des räumlichen Vorstellungsvermögens. Verlag für Psychologie Dr. C. J. Hogrefe, Göttingen, Toronto, Zürich] a spatial orientation task, while the cerebral blood flow velocity (CBFV) was simultaneously measured in both middle cerebral arteries (MCAs). In addition, the established word generation paradigm for language lateralization was performed. Subjects with atypical language representation were excluded. Data were analysed offline with the software Average, which performed a heart-cycle integration and a baseline-correction and calculated a lateralization index (LI) with its standard error of the mean increase in CBFV separately for both MCAs. Twenty-one of 29 subjects (72.4%) lateralized to the right hemisphere (chi2=5.828, p=0.016). The mean LI of the spatial orientation paradigm pointed to the right hemisphere (x =-1.9+/-3.2) and was different from the LI of word generation (x =3.9+/-2.2;p<0.001). There was no correlation between the LI of spatial orientation and word generation (R=0.095, p=0.624). Age of the subjects did not correlate with the LI during spatial orientation (p>0.05) but negatively with the LI during word generation (R=-0.468, p=0.010). The maximum increase of CBFV was greater in the spatial orientation (14.0%+/-3.6%) than in the word generation paradigm (9.4%+/-4.0%; p<0.001). In more than two thirds of the subjects with left-sided language dominance, the spatial orientation paradigm was able to identify the non-dominant hemisphere. The results suggest both paradigms to be independent of each other. The spatial orientation paradigm, therefore, appears to be a non-verbal fTCD paradigm with possible clinical relevance.

  17. Haptic spatial matching in near peripersonal space.

    PubMed

    Kaas, Amanda L; Mier, Hanneke I van

    2006-04-01

    Research has shown that haptic spatial matching at intermanual distances over 60 cm is prone to large systematic errors. The error pattern has been explained by the use of reference frames intermediate between egocentric and allocentric coding. This study investigated haptic performance in near peripersonal space, i.e. at intermanual distances of 60 cm and less. Twelve blindfolded participants (six males and six females) were presented with two turn bars at equal distances from the midsagittal plane, 30 or 60 cm apart. Different orientations (vertical/horizontal or oblique) of the left bar had to be matched by adjusting the right bar to either a mirror symmetric (/ \\) or parallel (/ /) position. The mirror symmetry task can in principle be performed accurately in both an egocentric and an allocentric reference frame, whereas the parallel task requires an allocentric representation. Results showed that parallel matching induced large systematic errors which increased with distance. Overall error was significantly smaller in the mirror task. The task difference also held for the vertical orientation at 60 cm distance, even though this orientation required the same response in both tasks, showing a marked effect of task instruction. In addition, men outperformed women on the parallel task. Finally, contrary to our expectations, systematic errors were found in the mirror task, predominantly at 30 cm distance. Based on these findings, we suggest that haptic performance in near peripersonal space might be dominated by different mechanisms than those which come into play at distances over 60 cm. Moreover, our results indicate that both inter-individual differences and task demands affect task performance in haptic spatial matching. Therefore, we conclude that the study of haptic spatial matching in near peripersonal space might reveal important additional constraints for the specification of adequate models of haptic spatial performance.

  18. Measurement Error Correction for Predicted Spatiotemporal Air Pollution Exposures.

    PubMed

    Keller, Joshua P; Chang, Howard H; Strickland, Matthew J; Szpiro, Adam A

    2017-05-01

    Air pollution cohort studies are frequently analyzed in two stages, first modeling exposure then using predicted exposures to estimate health effects in a second regression model. The difference between predicted and unobserved true exposures introduces a form of measurement error in the second stage health model. Recent methods for spatial data correct for measurement error with a bootstrap and by requiring the study design ensure spatial compatibility, that is, monitor and subject locations are drawn from the same spatial distribution. These methods have not previously been applied to spatiotemporal exposure data. We analyzed the association between fine particulate matter (PM2.5) and birth weight in the US state of Georgia using records with estimated date of conception during 2002-2005 (n = 403,881). We predicted trimester-specific PM2.5 exposure using a complex spatiotemporal exposure model. To improve spatial compatibility, we restricted to mothers residing in counties with a PM2.5 monitor (n = 180,440). We accounted for additional measurement error via a nonparametric bootstrap. Third trimester PM2.5 exposure was associated with lower birth weight in the uncorrected (-2.4 g per 1 μg/m3 difference in exposure; 95% confidence interval [CI]: -3.9, -0.8) and bootstrap-corrected (-2.5 g, 95% CI: -4.2, -0.8) analyses. Results for the unrestricted analysis were attenuated (-0.66 g, 95% CI: -1.7, 0.35). This study presents a novel application of measurement error correction for spatiotemporal air pollution exposures. Our results demonstrate the importance of spatial compatibility between monitor and subject locations and provide evidence of the association between air pollution exposure and birth weight.
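
    The bootstrap idea, stripped of the spatiotemporal exposure model and the compatibility restriction, can be sketched as follows: resample monitors and subjects, refit both stages, and take percentile intervals of the second-stage coefficient. All data and the simple linear exposure model are synthetic placeholders, not the study's models.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: 50 monitors (location, measured PM2.5) and 500 subjects
# (location, birth weight). All values are synthetic placeholders.
mon_x = rng.uniform(0, 100, 50)
mon_pm = 10 + 0.05 * mon_x + rng.normal(scale=1.0, size=50)
sub_x = rng.uniform(0, 100, 500)
true_exposure = 10 + 0.05 * sub_x
bw = 3400 - 2.5 * true_exposure + rng.normal(scale=300, size=500)

def two_stage(mon_x, mon_pm, sub_x, bw):
    """Stage 1: linear exposure model; stage 2: health regression on predictions."""
    b1 = np.polyfit(mon_x, mon_pm, 1)          # exposure model
    exposure_hat = np.polyval(b1, sub_x)       # predicted exposures
    b2 = np.polyfit(exposure_hat, bw, 1)       # health model
    return b2[0]                               # slope: grams per unit exposure

beta_hat = two_stage(mon_x, mon_pm, sub_x, bw)

# Nonparametric bootstrap: resample monitors and subjects, redo both stages,
# so that exposure-model uncertainty propagates into the health-effect CI.
boot = []
for _ in range(500):
    mi = rng.integers(0, len(mon_x), len(mon_x))
    si = rng.integers(0, len(sub_x), len(sub_x))
    boot.append(two_stage(mon_x[mi], mon_pm[mi], sub_x[si], bw[si]))
ci = np.percentile(boot, [2.5, 97.5])
print(beta_hat, ci)
```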

  19. Evaluation of Satellite and Model Precipitation Products Over Turkey

    NASA Astrophysics Data System (ADS)

    Yilmaz, M. T.; Amjad, M.

    2017-12-01

    Satellite-based remote sensing, gauge stations, and models are the three major platforms for acquiring precipitation datasets. Among them, satellites and models have the advantage of retrieving spatially and temporally continuous and consistent datasets, while the uncertainty estimates of these retrievals are often required for many hydrological studies to understand the source and the magnitude of the uncertainty in hydrological response parameters. In this study, satellite and model precipitation data products are validated over various temporal scales (daily, 3-daily, 7-daily, 10-daily and monthly) using in-situ precipitation observations from a network of 733 gauges from all over Turkey. Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 version 7 and European Centre for Medium-Range Weather Forecasts (ECMWF) model estimates (daily, 3-daily, 7-daily and 10-daily accumulated forecasts) are used in this study. Retrievals are evaluated for their mean and standard deviation, and their accuracies are evaluated via bias, root mean square error, error standard deviation and correlation coefficient statistics. Intensity vs frequency analysis and some contingency table statistics, such as percent correct, probability of detection, false alarm ratio and critical success index, are determined using daily time series. Both ECMWF forecasts and TRMM observations, on average, overestimate precipitation compared to gauge estimates; wet biases are 10.26 mm/month and 8.65 mm/month, respectively, for ECMWF and TRMM. RMSE values of ECMWF forecasts and TRMM estimates are 39.69 mm/month and 41.55 mm/month, respectively. Monthly correlations between Gauges-ECMWF, Gauges-TRMM and ECMWF-TRMM are 0.76, 0.73 and 0.81, respectively. The model and satellite error statistics are further compared against the gauge error statistics based on inverse distance weighting (IDW) analysis. Both the model and satellite data have smaller IDW errors (14.72 mm/month and 10.75 mm/month, respectively) compared to the gauge IDW error (21.58 mm/month). These results show that, on average, ECMWF forecast data have higher skill than TRMM observations. Overall, both ECMWF forecast data and TRMM observations show good potential for catchment-scale hydrological analysis.
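
    The continuous and categorical scores mentioned above can be computed from a daily series and a 2x2 contingency table, as in the sketch below; the rain-day threshold and the synthetic data are assumptions, not the study's settings.

```python
import numpy as np

def daily_verification(pred, obs, event_threshold=1.0):
    """Continuous and categorical skill scores for a daily precipitation series.

    pred, obs : 1-D arrays (e.g. mm/day); event_threshold defines a 'rain day'.
    """
    bias = np.mean(pred - obs)
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    corr = np.corrcoef(pred, obs)[0, 1]

    # 2x2 contingency table for the chosen event threshold.
    p, o = pred >= event_threshold, obs >= event_threshold
    hits = np.sum(p & o)
    false_alarms = np.sum(p & ~o)
    misses = np.sum(~p & o)
    correct_negatives = np.sum(~p & ~o)

    pc  = (hits + correct_negatives) / len(obs)        # percent correct
    pod = hits / (hits + misses)                       # probability of detection
    far = false_alarms / (hits + false_alarms)         # false alarm ratio
    csi = hits / (hits + misses + false_alarms)        # critical success index
    return dict(bias=bias, rmse=rmse, corr=corr, pc=pc, pod=pod, far=far, csi=csi)

rng = np.random.default_rng(3)
obs = rng.gamma(shape=0.8, scale=4.0, size=365)
pred = np.clip(obs + rng.normal(scale=2.0, size=365), 0, None)
print(daily_verification(pred, obs))
```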

  20. Validation of spatial variability in downscaling results from the VALUE perfect predictor experiment

    NASA Astrophysics Data System (ADS)

    Widmann, Martin; Bedia, Joaquin; Gutiérrez, Jose Manuel; Maraun, Douglas; Huth, Radan; Fischer, Andreas; Keller, Denise; Hertig, Elke; Vrac, Mathieu; Wibig, Joanna; Pagé, Christian; Cardoso, Rita M.; Soares, Pedro MM; Bosshard, Thomas; Casado, Maria Jesus; Ramos, Petra

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. Within VALUE, a systematic validation framework has been developed to enable the assessment and comparison of both dynamical and statistical downscaling methods. In the first validation experiment, the downscaling methods are validated in a setup with perfect predictors taken from the ERA-Interim reanalysis for the period 1997-2008. This allows the isolated skill of the downscaling methods to be investigated without further error contributions from the large-scale predictors. One aspect of the validation is the representation of spatial variability. As part of the VALUE validation we have compared various properties of the spatial variability of downscaled daily temperature and precipitation with the corresponding properties in observations. We have used two test validation datasets: one Europe-wide set of 86 stations, and one higher-density network of 50 stations in Germany. Here we present results based on three approaches, namely the analysis of (i) correlation matrices, (ii) pairwise joint threshold exceedances, and (iii) regions of similar variability. We summarise the information contained in correlation matrices by calculating the dependence of the correlations on distance and deriving decorrelation lengths, as well as by determining the independent degrees of freedom. Probabilities of joint threshold exceedances and (where appropriate) non-exceedances are calculated for various user-relevant thresholds related, for instance, to extreme precipitation or to frost and heat days. The dependence of these probabilities on distance is again characterised by calculating typical length scales that separate dependent from independent exceedances. Regionalisation is based on rotated Principal Component Analysis. The results indicate which downscaling methods are preferable when the dependency of variability at different locations is relevant for the user.
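
    One way to derive a decorrelation length is to bin pairwise inter-station correlations by distance and find where they drop below 1/e; the sketch below assumes that convention and synthetic station data, and the VALUE analysis may differ in detail.

```python
import numpy as np

def decorrelation_length(coords, series):
    """Estimate the e-folding decorrelation length from station time series.

    coords : (n_stations, 2) positions (e.g. km); series : (n_time, n_stations).
    Pairwise correlations are binned by distance and the length scale is taken
    where the binned correlation first drops below 1/e.
    """
    corr = np.corrcoef(series, rowvar=False)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(len(coords), k=1)
    d, r = dist[iu], corr[iu]
    bins = np.linspace(0, d.max(), 20)
    mids = 0.5 * (bins[:-1] + bins[1:])
    binned = np.array([r[(d >= lo) & (d < hi)].mean()
                       if np.any((d >= lo) & (d < hi)) else np.nan
                       for lo, hi in zip(bins[:-1], bins[1:])])
    below = np.where(binned < 1.0 / np.e)[0]
    return mids[below[0]] if below.size else np.inf

# Synthetic example: correlation decays with an imposed 100 km length scale.
rng = np.random.default_rng(4)
coords = rng.uniform(0, 500, size=(40, 2))
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
cov = np.exp(-d / 100.0)
series = rng.multivariate_normal(np.zeros(40), cov, size=2000)
print(decorrelation_length(coords, series))
```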

  1. Mapping the spatial pattern of temperate forest above ground biomass by integrating airborne lidar with Radarsat-2 imagery via geostatistical models

    NASA Astrophysics Data System (ADS)

    Li, Wang; Niu, Zheng; Gao, Shuai; Wang, Cheng

    2014-11-01

    Light Detection and Ranging (LiDAR) and Synthetic Aperture Radar (SAR) are two competitive active remote sensing techniques for forest above-ground biomass estimation, which is important for forest management and global climate change studies. This study aims to further explore their capabilities in temperate forest above-ground biomass (AGB) estimation by emphasizing the spatial autocorrelation of variables obtained from these two remote sensing tools, an aspect that is usually overlooked in remote sensing applications to vegetation studies. Remote sensing variables, including airborne LiDAR metrics, backscattering coefficients for different SAR polarizations, and their ratio variables from Radarsat-2 imagery, were calculated. First, simple linear regression (SLR) models were established between the field-estimated above-ground biomass and the remote sensing variables. Pearson's correlation coefficient (R2) was used to find which LiDAR metric showed the most significant correlation with the regression residuals and could be selected as a co-variable in regression co-kriging (RCoKrig). Second, regression co-kriging was conducted by choosing the regression residuals as the dependent variable and the LiDAR metric with the highest R2 (Hmean) as the co-variable. Third, above-ground biomass over the study area was estimated using the SLR model and the RCoKrig model, respectively. The results for these two models were validated using the same ground points. Results showed that both methods achieved satisfactory prediction accuracy, while regression co-kriging showed the lower estimation error. It is shown that the regression co-kriging model is feasible and effective for mapping the spatial pattern of AGB in the temperate forest using Radarsat-2 data calibrated by airborne LiDAR metrics.

  2. MODEST: A Tool for Geodesy and Astronomy

    NASA Technical Reports Server (NTRS)

    Sovers, Ojars J.; Jacobs, Christopher S.; Lanyi, Gabor E.

    2004-01-01

    Features of the JPL VLBI modeling and estimation software "MODEST" are reviewed. Its main advantages include thoroughly documented model physics, portability, and detailed error modeling. Two unique models are included: modeling of source structure and modeling of both spatial and temporal correlations in tropospheric delay noise. History of the code parallels the development of the astrometric and geodetic VLBI technique and the software retains many of the models implemented during its advancement. The code has been traceably maintained since the early 1980s, and will continue to be updated with recent IERS standards. Scripts are being developed to facilitate user-friendly data processing in the era of e-VLBI.

  3. Introduction to CAUSES: Description of Weather and Climate Models and Their Near-Surface Temperature Errors in 5 day Hindcasts Near the Southern Great Plains

    DOE PAGES

    Morcrette, C. J.; Van Weverberg, K.; Ma, H. -Y.; ...

    2018-02-16

    We introduce the Clouds Above the United States and Errors at the Surface (CAUSES) project with its aim of better understanding the physical processes leading to warm screen temperature biases over the American Midwest in many numerical models. In this first of four companion papers, 11 different models, from nine institutes, perform a series of 5 day hindcasts, each initialized from reanalyses. After describing the common experimental protocol and detailing each model configuration, a gridded temperature data set is derived from observations and used to show that all the models have a warm bias over parts of the Midwest. Additionally, a strong diurnal cycle in the screen temperature bias is found in most models. In some models the bias is largest around midday, while in others it is largest during the night. At the Department of Energy Atmospheric Radiation Measurement Southern Great Plains (SGP) site, the model biases are shown to extend several kilometers into the atmosphere. Finally, to provide context for the companion papers, in which observations from the SGP site are used to evaluate the different processes contributing to errors there, it is shown that there are numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with the diurnal cycle of the error at SGP. This suggests that conclusions drawn from detailed evaluation of models using instruments located at SGP will be representative of errors that are prevalent over a larger spatial scale.

  4. Introduction to CAUSES: Description of Weather and Climate Models and Their Near-Surface Temperature Errors in 5 day Hindcasts Near the Southern Great Plains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morcrette, C. J.; Van Weverberg, K.; Ma, H. -Y.

    We introduce the Clouds Above the United States and Errors at the Surface (CAUSES) project with its aim of better understanding the physical processes leading to warm screen temperature biases over the American Midwest in many numerical models. In this first of four companion papers, 11 different models, from nine institutes, perform a series of 5 day hindcasts, each initialized from reanalyses. After describing the common experimental protocol and detailing each model configuration, a gridded temperature data set is derived from observations and used to show that all the models have a warm bias over parts of the Midwest. Additionally, a strong diurnal cycle in the screen temperature bias is found in most models. In some models the bias is largest around midday, while in others it is largest during the night. At the Department of Energy Atmospheric Radiation Measurement Southern Great Plains (SGP) site, the model biases are shown to extend several kilometers into the atmosphere. Finally, to provide context for the companion papers, in which observations from the SGP site are used to evaluate the different processes contributing to errors there, it is shown that there are numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with the diurnal cycle of the error at SGP. This suggests that conclusions drawn from detailed evaluation of models using instruments located at SGP will be representative of errors that are prevalent over a larger spatial scale.

  5. Introduction to CAUSES: Description of Weather and Climate Models and Their Near-Surface Temperature Errors in 5 day Hindcasts Near the Southern Great Plains

    NASA Astrophysics Data System (ADS)

    Morcrette, C. J.; Van Weverberg, K.; Ma, H.-Y.; Ahlgrimm, M.; Bazile, E.; Berg, L. K.; Cheng, A.; Cheruy, F.; Cole, J.; Forbes, R.; Gustafson, W. I.; Huang, M.; Lee, W.-S.; Liu, Y.; Mellul, L.; Merryfield, W. J.; Qian, Y.; Roehrig, R.; Wang, Y.-C.; Xie, S.; Xu, K.-M.; Zhang, C.; Klein, S.; Petch, J.

    2018-03-01

    We introduce the Clouds Above the United States and Errors at the Surface (CAUSES) project with its aim of better understanding the physical processes leading to warm screen temperature biases over the American Midwest in many numerical models. In this first of four companion papers, 11 different models, from nine institutes, perform a series of 5 day hindcasts, each initialized from reanalyses. After describing the common experimental protocol and detailing each model configuration, a gridded temperature data set is derived from observations and used to show that all the models have a warm bias over parts of the Midwest. Additionally, a strong diurnal cycle in the screen temperature bias is found in most models. In some models the bias is largest around midday, while in others it is largest during the night. At the Department of Energy Atmospheric Radiation Measurement Southern Great Plains (SGP) site, the model biases are shown to extend several kilometers into the atmosphere. Finally, to provide context for the companion papers, in which observations from the SGP site are used to evaluate the different processes contributing to errors there, it is shown that there are numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with the diurnal cycle of the error at SGP. This suggests that conclusions drawn from detailed evaluation of models using instruments located at SGP will be representative of errors that are prevalent over a larger spatial scale.

  6. Optimal interpolation schemes to constrain PM2.5 in regional modeling over the United States

    NASA Astrophysics Data System (ADS)

    Sousan, Sinan Dhia Jameel

    This thesis presents the use of data assimilation with optimal interpolation (OI) to develop atmospheric aerosol concentration estimates for the United States at high spatial and temporal resolutions. Concentration estimates are highly desirable for a wide range of applications, including visibility, climate, and human health. OI is a viable data assimilation method that can be used to improve Community Multiscale Air Quality (CMAQ) model fine particulate matter (PM2.5) estimates. PM2.5 is the mass of solid and liquid particles with diameters less than or equal to 2.5 µm suspended in the gas phase. OI was employed by combining model estimates with satellite and surface measurements. The satellite data assimilation combined 36 x 36 km aerosol concentrations from CMAQ with aerosol optical depth (AOD) measured by MODIS and AERONET over the continental United States for 2002. Posterior model concentrations generated by the OI algorithm were compared with surface PM2.5 measurements to evaluate a number of possible data assimilation parameters, including model error, observation error, and temporal averaging assumptions. Evaluation was conducted separately for six geographic U.S. regions in 2002. Variability in model error and MODIS biases limited the effectiveness of a single data assimilation system for the entire continental domain. The best combinations of four settings and three averaging schemes led to a domain-averaged improvement in fractional error from 1.2 to 0.97 and from 0.99 to 0.89 at respective IMPROVE and STN monitoring sites. For 38% of OI results, MODIS OI degraded the forward model skill due to biases and outliers in MODIS AOD. Surface data assimilation combined 36 × 36 km aerosol concentrations from the CMAQ model with surface PM2.5 measurements over the continental United States for 2002. The model error covariance matrix was constructed by using the observational method. The observation error covariance matrix included site representation that scaled the observation error by land use (i.e. urban or rural locations). In theory, urban locations should have less effect on surrounding areas than rural sites, which can be controlled using site representation error. The annual evaluations showed substantial improvements in model performance with increases in the correlation coefficient from 0.36 (prior) to 0.76 (posterior), and decreases in the fractional error from 0.43 (prior) to 0.15 (posterior). In addition, the normalized mean error decreased from 0.36 (prior) to 0.13 (posterior), and the RMSE decreased from 5.39 µg m-3 (prior) to 2.32 µg m-3 (posterior). OI decreased model bias for both large spatial areas and point locations, and could be extended to more advanced data assimilation methods. The current work will be applied to a five year (2000-2004) CMAQ simulation aimed at improving aerosol model estimates. The posterior model concentrations will be used to inform exposure studies over the U.S. that relate aerosol exposure to mortality and morbidity rates. Future improvements for the OI techniques used in the current study will include combining both surface and satellite data to improve posterior model estimates. Satellite data have high spatial and temporal resolutions in comparison to surface measurements, which are scarce but more accurate than model estimates. The satellite data are subject to noise affected by location and season of retrieval. 
The implementation of OI to combine satellite and surface data sets has the potential to improve posterior model estimates for locations that have no direct measurements.
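
    The OI analysis step itself reduces to a best-linear-unbiased (Kalman-type) update; a small sketch on a hypothetical 1-D grid with three monitors is shown below, with toy covariances standing in for the observational-method background errors and the land-use-scaled observation errors described above.

```python
import numpy as np

def optimal_interpolation(x_b, obs, H, B, R):
    """Optimal interpolation (OI) analysis step.

    x_b : background/model state (n,), obs : observations (m,),
    H : (m, n) observation operator, B : (n, n) background error covariance,
    R : (m, m) observation error covariance. Returns the posterior state.
    """
    # OI/BLUE gain: weight observations by the relative sizes of B and R.
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return x_b + K @ (obs - H @ x_b)

# Toy 1-D "domain" of 10 grid cells with 3 surface monitors.
n = 10
x_b = np.full(n, 12.0)                       # prior PM2.5 field (µg/m3)
obs = np.array([15.0, 9.0, 14.0])
H = np.zeros((3, n))
H[0, 1] = H[1, 5] = H[2, 8] = 1.0
# Background errors correlated over a 2-cell length scale; a larger
# observation error (site representation) could be assigned to urban monitors.
idx = np.arange(n)
B = 4.0 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 2.0)
R = np.diag([1.0, 1.0, 2.0])
print(optimal_interpolation(x_b, obs, H, B, R))
```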

  7. Error correcting mechanisms during antisaccades: contribution of online control during primary saccades and offline control via secondary saccades.

    PubMed

    Bedi, Harleen; Goltz, Herbert C; Wong, Agnes M F; Chandrakumar, Manokaraananthan; Niechwiej-Szwedo, Ewa

    2013-01-01

    Errors in eye movements can be corrected during the ongoing saccade through in-flight modifications (i.e., online control), or by programming a secondary eye movement (i.e., offline control). In a reflexive saccade task, the oculomotor system can use extraretinal information (i.e., efference copy) online to correct errors in the primary saccade, and offline retinal information to generate a secondary corrective saccade. The purpose of this study was to examine the error correction mechanisms in the antisaccade task. The roles of extraretinal and retinal feedback in maintaining eye movement accuracy were investigated by presenting visual feedback at the spatial goal of the antisaccade. We found that online control for antisaccade is not affected by the presence of visual feedback; that is whether visual feedback is present or not, the duration of the deceleration interval was extended and significantly correlated with reduced antisaccade endpoint error. We postulate that the extended duration of deceleration is a feature of online control during volitional saccades to improve their endpoint accuracy. We found that secondary saccades were generated more frequently in the antisaccade task compared to the reflexive saccade task. Furthermore, we found evidence for a greater contribution from extraretinal sources of feedback in programming the secondary "corrective" saccades in the antisaccade task. Nonetheless, secondary saccades were more corrective for the remaining antisaccade amplitude error in the presence of visual feedback of the target. Taken together, our results reveal a distinctive online error control strategy through an extension of the deceleration interval in the antisaccade task. Target feedback does not improve online control, rather it improves the accuracy of secondary saccades in the antisaccade task.

  8. Error Correcting Mechanisms during Antisaccades: Contribution of Online Control during Primary Saccades and Offline Control via Secondary Saccades

    PubMed Central

    Bedi, Harleen; Goltz, Herbert C.; Wong, Agnes M. F.; Chandrakumar, Manokaraananthan; Niechwiej-Szwedo, Ewa

    2013-01-01

    Errors in eye movements can be corrected during the ongoing saccade through in-flight modifications (i.e., online control), or by programming a secondary eye movement (i.e., offline control). In a reflexive saccade task, the oculomotor system can use extraretinal information (i.e., efference copy) online to correct errors in the primary saccade, and offline retinal information to generate a secondary corrective saccade. The purpose of this study was to examine the error correction mechanisms in the antisaccade task. The roles of extraretinal and retinal feedback in maintaining eye movement accuracy were investigated by presenting visual feedback at the spatial goal of the antisaccade. We found that online control for antisaccades is not affected by the presence of visual feedback; that is, whether visual feedback is present or not, the duration of the deceleration interval was extended and significantly correlated with reduced antisaccade endpoint error. We postulate that the extended duration of deceleration is a feature of online control during volitional saccades to improve their endpoint accuracy. We found that secondary saccades were generated more frequently in the antisaccade task compared to the reflexive saccade task. Furthermore, we found evidence for a greater contribution from extraretinal sources of feedback in programming the secondary “corrective” saccades in the antisaccade task. Nonetheless, secondary saccades were more corrective for the remaining antisaccade amplitude error in the presence of visual feedback of the target. Taken together, our results reveal a distinctive online error control strategy through an extension of the deceleration interval in the antisaccade task. Target feedback does not improve online control; rather, it improves the accuracy of secondary saccades in the antisaccade task. PMID:23936308

  9. Assessing uncertainty in SRTM elevations for global flood modelling

    NASA Astrophysics Data System (ADS)

    Hawker, L. P.; Rougier, J.; Neal, J. C.; Bates, P. D.

    2017-12-01

    The SRTM DEM is widely used as the topography input to flood models in data-sparse locations. Understanding spatial error in the SRTM product is crucial in constraining uncertainty about elevations and assessing the impact of these upon flood prediction. Assessment of SRTM error was carried out by Rodriguez et al. (2006), but this did not explicitly quantify the spatial structure of vertical errors in the DEM, nor did it distinguish between errors over different types of landscape. As a result, there is a lack of information about the spatial structure of vertical errors of the SRTM in the landscape that matters most to flood models - the floodplain. Therefore, this study attempts this task by comparing SRTM, an error-corrected SRTM product (the MERIT DEM of Yamazaki et al., 2017) and near-truth LIDAR elevations for 3 deltaic floodplains (Mississippi, Po, Wax Lake) and a large lowland region (the Fens, UK). Using the error covariance function, calculated by comparing SRTM elevations to the near-truth LIDAR, perturbations of the 90 m SRTM DEM were generated, producing a catalogue of plausible DEMs. This allows modellers to simulate a suite of plausible DEMs at any aggregated block size above native SRTM resolution. Finally, the generated DEMs were input into a hydrodynamic model of the Mekong Delta, built using the LISFLOOD-FP hydrodynamic model, to assess how DEM error affects the hydrodynamics and inundation extent across the domain. The end product of this is an inundation map with the probability of each pixel being flooded based on the catalogue of DEMs. In a world of increasing computer power, but a lack of detailed datasets, this powerful approach can be used throughout natural hazard modelling to understand how errors in the SRTM DEM can impact the hazard assessment.
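
    The perturbation step described above can be sketched as sampling spatially correlated vertical-error fields from an assumed covariance model and adding them to the DEM. The exponential covariance, 90 m cell size, sill, and correlation length below are illustrative assumptions, not values derived from the LIDAR comparison in the study.

```python
# Illustrative sketch (not the authors' code): generate spatially correlated
# vertical-error fields for a DEM tile from an assumed exponential covariance,
# producing a small catalogue of plausible DEMs.
import numpy as np

def correlated_error_field(nx, ny, cell, sill, corr_len, n_samples, seed=0):
    """Sample error fields with covariance sill*exp(-d/corr_len) via Cholesky."""
    rng = np.random.default_rng(seed)
    xx, yy = np.meshgrid(np.arange(nx) * cell, np.arange(ny) * cell)
    pts = np.column_stack([xx.ravel(), yy.ravel()])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    C = sill * np.exp(-d / corr_len) + 1e-8 * np.eye(len(pts))  # jitter for PD
    L = np.linalg.cholesky(C)
    z = rng.standard_normal((len(pts), n_samples))
    return (L @ z).T.reshape(n_samples, ny, nx)

dem = np.zeros((20, 20))                         # placeholder 90 m DEM tile
perturbations = correlated_error_field(20, 20, 90.0, sill=3.0**2,
                                        corr_len=500.0, n_samples=10)
plausible_dems = dem + perturbations             # catalogue of plausible DEMs
```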

  10. Dual Systems for Spatial Updating in Immediate and Retrieved Environments: Evidence from Bias Analysis.

    PubMed

    Liu, Chuanjun; Xiao, Chengli

    2018-01-01

    The spatial updating and memory systems are employed during updating in both the immediate and retrieved environments. However, these dual systems seem to work differently, as the differences in pointing latency and absolute error between the two systems vary across environments. To address this issue, the present study employed the bias analysis of signed errors based on the hypothesis that the transformed representation will bias toward the original one. Participants learned a spatial layout and then either stayed in the learning location or were transferred to a neighboring room directly or after being disoriented. After that, they performed spatial judgments from perspectives aligned with the learning direction, aligned with the direction they faced during the test, or a novel direction misaligned with the two above-mentioned directions. The patterns of signed error bias were consistent across environments. Responses for memory aligned perspectives were unbiased, whereas responses for sensorimotor aligned perspectives were biased away from the memory aligned perspective, and responses for misaligned perspectives were biased toward sensorimotor aligned perspectives. These findings indicate that the spatial updating system is consistently independent of the spatial memory system regardless of the environment, but the updating system becomes less accessible as the environment changes from an immediate to a retrieved one.

  11. Dual Systems for Spatial Updating in Immediate and Retrieved Environments: Evidence from Bias Analysis

    PubMed Central

    Liu, Chuanjun; Xiao, Chengli

    2018-01-01

    The spatial updating and memory systems are employed during updating in both the immediate and retrieved environments. However, these dual systems seem to work differently, as the differences in pointing latency and absolute error between the two systems vary across environments. To address this issue, the present study employed the bias analysis of signed errors based on the hypothesis that the transformed representation will bias toward the original one. Participants learned a spatial layout and then either stayed in the learning location or were transferred to a neighboring room directly or after being disoriented. After that, they performed spatial judgments from perspectives aligned with the learning direction, aligned with the direction they faced during the test, or a novel direction misaligned with the two above-mentioned directions. The patterns of signed error bias were consistent across environments. Responses for memory aligned perspectives were unbiased, whereas responses for sensorimotor aligned perspectives were biased away from the memory aligned perspective, and responses for misaligned perspectives were biased toward sensorimotor aligned perspectives. These findings indicate that the spatial updating system is consistently independent of the spatial memory system regardless of the environment, but the updating system becomes less accessible as the environment changes from an immediate to a retrieved one. PMID:29467698

  12. PTV margin determination in conformal SRT of intracranial lesions

    PubMed Central

    Parker, Brent C.; Shiu, Almon S.; Maor, Moshe H.; Lang, Frederick F.; Liu, H. Helen; White, R. Allen; Antolak, John A.

    2002-01-01

    The planning target volume (PTV) includes the clinical target volume (CTV) to be irradiated and a margin to account for uncertainties in the treatment process. Uncertainties in miniature multileaf collimator (mMLC) leaf positioning, CT scanner spatial localization, CT‐MRI image fusion spatial localization, and Gill‐Thomas‐Cosman (GTC) relocatable head frame repositioning were quantified for the purpose of determining a minimum PTV margin that still delivers a satisfactory CTV dose. The measured uncertainties were then incorporated into a simple Monte Carlo calculation for evaluation of various margin and fraction combinations. Satisfactory CTV dosimetric criteria were selected to be a minimum CTV dose of 95% of the PTV dose and at least 95% of the CTV receiving 100% of the PTV dose. The measured uncertainties were assumed to be Gaussian distributions. Systematic errors were added linearly and random errors were added in quadrature assuming no correlation to arrive at the total combined error. The Monte Carlo simulation written for this work examined the distribution of cumulative dose volume histograms for a large patient population using various margin and fraction combinations to determine the smallest margin required to meet the established criteria. The program examined 5 and 30 fraction treatments, since those are the only fractionation schemes currently used at our institution. The fractionation schemes were evaluated using no margin, a margin of just the systematic component of the total uncertainty, and a margin of the systematic component plus one standard deviation of the total uncertainty. It was concluded that (i) a margin of the systematic error plus one standard deviation of the total uncertainty is the smallest PTV margin necessary to achieve the established CTV dose criteria, and (ii) it is necessary to determine the uncertainties introduced by the specific equipment and procedures used at each institution since the uncertainties may vary among locations. PACS number(s): 87.53.Kn, 87.53.Ly PMID:12132939
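
    A minimal sketch of the error bookkeeping described above, assuming three systematic components added linearly and three random components added in quadrature; the numbers are placeholders, not the measured uncertainties from this study.

```python
# Hedged sketch of the margin arithmetic: systematic components summed linearly,
# random components summed in quadrature; values below are placeholders.
import math

systematic = [0.6, 0.4, 0.3]            # mm, e.g. fusion / localization offsets
random_sd = [0.5, 0.7, 0.4]             # mm, 1-SD random components

total_systematic = sum(systematic)                          # linear sum
total_random_sd = math.sqrt(sum(s**2 for s in random_sd))   # quadrature sum
margin = total_systematic + total_random_sd                 # "systematic + 1 SD" margin
print(f"PTV margin ~ {margin:.2f} mm")
```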

  13. Underestimating the effects of spatial heterogeneity due to individual movement and spatial scale: infectious disease as an example

    USGS Publications Warehouse

    Cross, Paul C.; Caillaud, Damien; Heisey, Dennis M.

    2013-01-01

    Many ecological and epidemiological studies occur in systems with mobile individuals and heterogeneous landscapes. Using a simulation model, we show that the accuracy of inferring an underlying biological process from observational data depends on movement and spatial scale of the analysis. As an example, we focused on estimating the relationship between host density and pathogen transmission. Observational data can result in highly biased inference about the underlying process when individuals move among sampling areas. Even without sampling error, the effect of host density on disease transmission is underestimated by approximately 50 % when one in ten hosts move among sampling areas per lifetime. Aggregating data across larger regions causes minimal bias when host movement is low, and results in less biased inference when movement rates are high. However, increasing data aggregation reduces the observed spatial variation, which would lead to the misperception that a spatially targeted control effort may not be very effective. In addition, averaging over the local heterogeneity will result in underestimating the importance of spatial covariates. Minimizing the bias due to movement is not just about choosing the best spatial scale for analysis, but also about reducing the error associated with using the sampling location as a proxy for an individual’s spatial history. This error associated with the exposure covariate can be reduced by choosing sampling regions with less movement, including longitudinal information of individuals’ movements, or reducing the window of exposure by using repeated sampling or younger individuals.

  14. Patient identification using a near-infrared laser scanner

    NASA Astrophysics Data System (ADS)

    Manit, Jirapong; Bremer, Christina; Schweikard, Achim; Ernst, Floris

    2017-03-01

    We propose a new biometric approach where the tissue thickness of a person's forehead is used as a biometric feature. Given that the spatial registration of two 3D laser scans of the same human face usually produces a low error value, the principle of point cloud registration and its error metric can be applied to human classification techniques. However, by only considering the spatial error, it is not possible to reliably verify a person's identity. We propose to use a novel near-infrared laser-based head tracking system to determine an additional feature, the tissue thickness, and include this in the error metric. Using MRI as a ground truth, data from the foreheads of 30 subjects were collected, from which a 4D reference point cloud was created for each subject. The measurements from the near-infrared system were registered with all reference point clouds using the ICP algorithm. Afterwards, the spatial and tissue thickness errors were extracted, forming a 2D feature space. For all subjects, the lowest feature distance resulted from the registration of a measurement and the reference point cloud of the same person. The combined registration error features yielded two clusters in the feature space, one from the same subject and another from the other subjects. When only the tissue thickness error was considered, these clusters were less distinct but still present. These findings could help to raise safety standards for head and neck cancer patients and lay the foundation for a future human identification technique.

  15. A comparative experimental evaluation of uncertainty estimation methods for two-component PIV

    NASA Astrophysics Data System (ADS)

    Boomsma, Aaron; Bhattacharya, Sayantan; Troolin, Dan; Pothos, Stamatios; Vlachos, Pavlos

    2016-09-01

    Uncertainty quantification in planar particle image velocimetry (PIV) measurement is critical for proper assessment of the quality and significance of reported results. New uncertainty estimation methods have been recently introduced generating interest about their applicability and utility. The present study compares and contrasts current methods, across two separate experiments and three software packages in order to provide a diversified assessment of the methods. We evaluated the performance of four uncertainty estimation methods, primary peak ratio (PPR), mutual information (MI), image matching (IM) and correlation statistics (CS). The PPR method was implemented and tested in two processing codes, using in-house open source PIV processing software (PRANA, Purdue University) and Insight4G (TSI, Inc.). The MI method was evaluated in PRANA, as was the IM method. The CS method was evaluated using DaVis (LaVision, GmbH). Utilizing two PIV systems for high and low-resolution measurements and a laser doppler velocimetry (LDV) system, data were acquired in a total of three cases: a jet flow and a cylinder in cross flow at two Reynolds numbers. LDV measurements were used to establish a point validation against which the high-resolution PIV measurements were validated. Subsequently, the high-resolution PIV measurements were used as a reference against which the low-resolution PIV data were assessed for error and uncertainty. We compared error and uncertainty distributions, spatially varying RMS error and RMS uncertainty, and standard uncertainty coverages. We observed that qualitatively, each method responded to spatially varying error (i.e. higher error regions resulted in higher uncertainty predictions in that region). However, the PPR and MI methods demonstrated reduced uncertainty dynamic range response. In contrast, the IM and CS methods showed better response, but under-predicted the uncertainty ranges. The standard coverages (68% confidence interval) ranged from approximately 65%-77% for PPR and MI methods, 40%-50% for IM and near 50% for CS. These observations illustrate some of the strengths and weaknesses of the methods considered herein and identify future directions for development and improvement.
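
    The standard-coverage check reported above can be sketched as the fraction of vectors whose error relative to the reference falls within the reported one-sigma uncertainty (ideally about 68% for Gaussian errors). The synthetic velocities and uncertainty values below are assumptions for illustration, not data from the experiments.

```python
# Sketch of a standard (1-sigma) coverage check for reported PIV uncertainties,
# using synthetic reference and measured velocity fields.
import numpy as np

def standard_coverage(u_measured, u_reference, sigma_u):
    """Fraction of vectors whose absolute error is within the reported 1-sigma."""
    err = np.abs(u_measured - u_reference)
    return np.mean(err <= sigma_u)

rng = np.random.default_rng(1)
u_ref = rng.normal(size=10_000)                            # "high-resolution" reference
u_meas = u_ref + rng.normal(scale=0.1, size=u_ref.size)    # synthetic measurement error
sigma = np.full(u_ref.size, 0.1)                           # reported uncertainty
print(f"coverage = {standard_coverage(u_meas, u_ref, sigma):.2%}")   # ~68% ideally
```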

  16. Switching between Spatial Stimulus-Response Mappings: A Developmental Study of Cognitive Flexibility

    ERIC Educational Resources Information Center

    Crone, Eveline A.; Ridderinkhof, K. Richard; Worm, Mijkje; Somsen, Riek J. M.; van der Molen, Maurits W.

    2004-01-01

    Four different age groups (8-9-year-olds, 11-12-year-olds, 13-15-year-olds and young adults) performed a spatial rule-switch task in which the sorting rule had to be detected on the basis of feedback or on the basis of switch cues. Performance errors were examined on the basis of a recently introduced method of error scoring for the Wisconsin Card…

  17. Precise method of compensating radiation-induced errors in a hot-cathode-ionization gauge with correcting electrode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saeki, Hiroshi, E-mail: saeki@spring8.or.jp; Magome, Tamotsu, E-mail: saeki@spring8.or.jp

    2014-10-06

    To compensate for pressure-measurement errors caused by a synchrotron radiation environment, a precise method using a hot-cathode-ionization-gauge head with a correcting electrode was developed and tested in a simulation experiment with excess electrons in the SPring-8 storage ring. This precise method to improve the measurement accuracy can correctly reduce the pressure-measurement errors caused by electrons originating from the external environment and from the primary gauge filament influenced by spatial conditions of the installed vacuum-gauge head. As a result of the simulation experiment to confirm the performance in reducing the errors caused by the external environment, the pressure-measurement error using this method was less than several percent in the pressure range from 10−5 Pa to 10−8 Pa. After the experiment, to confirm the performance in reducing the error caused by spatial conditions, an additional experiment was carried out using a sleeve and showed that the improved function was available.

  18. Lexical and phonological variability in preschool children with speech sound disorder.

    PubMed

    Macrae, Toby; Tyler, Ann A; Lewis, Kerry E

    2014-02-01

    The authors of this study examined relationships between measures of word and speech error variability and between these and other speech and language measures in preschool children with speech sound disorder (SSD). In this correlational study, 18 preschool children with SSD, age-appropriate receptive vocabulary, and normal oral motor functioning and hearing were assessed across 2 sessions. Experimental measures included word and speech error variability, receptive vocabulary, nonword repetition (NWR), and expressive language. Pearson product–moment correlation coefficients were calculated among the experimental measures. The correlation between word and speech error variability was slight and nonsignificant. The correlation between word variability and receptive vocabulary was moderate and negative, although nonsignificant. High word variability was associated with small receptive vocabularies. The correlations between speech error variability and NWR and between speech error variability and the mean length of children's utterances were moderate and negative, although both were nonsignificant. High speech error variability was associated with poor NWR and language scores. High word variability may reflect unstable lexical representations, whereas high speech error variability may reflect indistinct phonological representations. Preschool children with SSD who show abnormally high levels of different types of speech variability may require slightly different approaches to intervention.

  19. Mathematics skills in good readers with hydrocephalus.

    PubMed

    Barnes, Marcia A; Pengelly, Sarah; Dennis, Maureen; Wilkinson, Margaret; Rogers, Tracey; Faulkner, Heather

    2002-01-01

    Children with hydrocephalus have poor math skills. We investigated the nature of their arithmetic computation errors by comparing written subtraction errors in good readers with hydrocephalus, typically developing good readers of the same age, and younger children matched for math level to the children with hydrocephalus. Children with hydrocephalus made more procedural errors (although not more fact retrieval or visual-spatial errors) than age-matched controls; they made the same number of procedural errors as younger, math-level matched children. We also investigated a broad range of math abilities, and found that children with hydrocephalus performed more poorly than age-matched controls on tests of geometry and applied math skills such as estimation and problem solving. Computation deficits in children with hydrocephalus reflect delayed development of procedural knowledge. Problems in specific math domains such as geometry and applied math, were associated with deficits in constituent cognitive skills such as visual spatial competence, memory, and general knowledge.

  20. Adaptive optics system performance approximations for atmospheric turbulence correction

    NASA Astrophysics Data System (ADS)

    Tyson, Robert K.

    1990-10-01

    Analysis of adaptive optics system behavior often can be reduced to a few approximations and scaling laws. For atmospheric turbulence correction, the deformable mirror (DM) fitting error is most often used to determine a priori the interactuator spacing and the total number of correction zones required. This paper examines the mirror fitting error in terms of its most commonly used exponential form. The explicit constant in the error term is dependent on the deformable mirror influence function shape and actuator geometry. The method of least squares fitting of discrete influence functions to the turbulent wavefront is compared to the linear spatial filtering approximation of system performance. It is found that the spatial filtering method overestimates the correctability of the adaptive optics system by a small amount. Evaluating the fitting error for a number of DM configurations, actuator geometries, and influence functions yields fitting error constants that verify some earlier investigations.
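
    For context, the commonly used exponential form of the DM fitting error referred to above is sigma_fit^2 = a_F (d/r0)^(5/3), where d is the interactuator spacing, r0 the Fried parameter, and a_F the fitting error constant that depends on influence function shape and actuator geometry. The sketch below simply evaluates this scaling law; the value of a_F shown is a typical literature figure, not one determined in this paper.

```python
# Fitting-error scaling law sketch; a_F, d, and r0 values are illustrative only.
def fitting_error_variance(a_F, d, r0):
    """Residual wavefront variance in rad^2: sigma^2 = a_F * (d / r0)**(5/3)."""
    return a_F * (d / r0) ** (5.0 / 3.0)

print(fitting_error_variance(a_F=0.28, d=0.1, r0=0.05))   # rad^2, example numbers
```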

  1. Performance evaluation of spatial compounding in the presence of aberration and adaptive imaging

    NASA Astrophysics Data System (ADS)

    Dahl, Jeremy J.; Guenther, Drake; Trahey, Gregg E.

    2003-05-01

    Spatial compounding has been used for years to reduce speckle in ultrasonic images and to resolve anatomical features hidden behind the grainy appearance of speckle. Adaptive imaging restores image contrast and resolution by compensating for beamforming errors caused by tissue-induced phase errors. Spatial compounding represents a form of incoherent imaging, whereas adaptive imaging attempts to maintain a coherent, diffraction-limited aperture in the presence of aberration. Using a Siemens Antares scanner, we acquired single channel RF data on a commercially available 1-D probe. Individual channel RF data was acquired on a cyst phantom in the presence of a near field electronic phase screen. Simulated data was also acquired for both a 1-D and a custom built 8x96, 1.75-D probe (Tetrad Corp.). The data was compounded using a receive spatial compounding algorithm; a widely used algorithm because it takes advantage of parallel beamforming to avoid reductions in frame rate. Phase correction was also performed by using a least mean squares algorithm to estimate the arrival time errors. We present simulation and experimental data comparing the performance of spatial compounding to phase correction in contrast and resolution tasks. We evaluate spatial compounding and phase correction, and combinations of the two methods, under varying aperture sizes, aperture overlaps, and aberrator strength to examine the optimum configuration and conditions in which spatial compounding will provide a similar or better result than adaptive imaging. We find that, in general, phase correction is hindered at high aberration strengths and spatial frequencies, whereas spatial compounding is helped by these aberrators.

  2. Exchange-Correlation Effects for Noncovalent Interactions in Density Functional Theory.

    PubMed

    Otero-de-la-Roza, A; DiLabio, Gino A; Johnson, Erin R

    2016-07-12

    In this article, we develop an understanding of how errors from exchange-correlation functionals affect the modeling of noncovalent interactions in dispersion-corrected density-functional theory. Computed CCSD(T) reference binding energies for a collection of small-molecule clusters are decomposed via a molecular many-body expansion and are used to benchmark density-functional approximations, including the effect of semilocal approximation, exact-exchange admixture, and range separation. Three sources of error are identified. Repulsion error arises from the choice of semilocal functional approximation. This error affects intermolecular repulsions and is present in all n-body exchange-repulsion energies with a sign that alternates with the order n of the interaction. Delocalization error is independent of the choice of semilocal functional but does depend on the exact exchange fraction. Delocalization error misrepresents the induction energies, leading to overbinding in all induction n-body terms, and underestimates the electrostatic contribution to the 2-body energies. Deformation error affects only monomer relaxation (deformation) energies and behaves similarly to bond-dissociation energy errors. Delocalization and deformation errors affect systems with significant intermolecular orbital interactions (e.g., hydrogen- and halogen-bonded systems), whereas repulsion error is ubiquitous. Many-body errors from the underlying exchange-correlation functional greatly exceed in general the magnitude of the many-body dispersion energy term. A functional built to accurately model noncovalent interactions must contain a dispersion correction, semilocal exchange, and correlation components that minimize the repulsion error independently and must also incorporate exact exchange in such a way that delocalization error is absent.
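
    For reference, the molecular many-body expansion used above to decompose the reference binding energies has the generic form below (notation assumed, not necessarily the authors' exact convention):

```latex
% Generic many-body expansion of a cluster binding energy:
% one-body (monomer), two-body, three-body, ... contributions.
E_{\mathrm{cluster}} = \sum_i E_i + \sum_{i<j} \Delta E_{ij}
                     + \sum_{i<j<k} \Delta E_{ijk} + \cdots,
\qquad \Delta E_{ij} = E_{ij} - E_i - E_j .
```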

  3. Ground and surface temperature variability for remote sensing of soil moisture in a heterogeneous landscape

    USGS Publications Warehouse

    Giraldo, M.A.; Bosch, D.; Madden, M.; Usery, L.; Finn, M.

    2009-01-01

    At the Little River Watershed (LRW) heterogeneous landscape near Tifton, Georgia, USA, an in situ network of stations operated by the US Department of Agriculture-Agriculture Research Service-Southeast Watershed Research Lab (USDA-ARS-SEWRL) was established in 2003 for the long-term study of climatic and soil biophysical processes. To develop an accurate interpolation of the in situ readings that can be used to produce distributed representations of soil moisture (SM) and energy balances at the landscape scale for remote sensing studies, we studied (1) the temporal and spatial variations of ground temperature (GT) and infrared temperature (IRT) within 30 by 30 m plots around selected network stations; (2) the relationship between the readings from the eight 30 by 30 m plots and the point reading of the network stations for the variables SM, GT and IRT; and (3) the spatial and temporal variation of GT and IRT within agricultural landuses: grass, orchard, peanuts, cotton and bare soil in the surrounding landscape. The results showed high correlations between the station readings and the adjacent 30 by 30 m plot average value for SM; high seasonal independent variation in the GT and IRT behavior among the eight 30 by 30 m plots; and site-specific, in-field homogeneity in each 30 by 30 m plot. We found statistical differences in the GT and IRT between the different landuses as well as high correlations between GT and IRT regardless of the landuse. Greater standard deviations for IRT than for GT (in the range of 2-4) were found within the 30 by 30 m plots, suggesting that when a single point reading for this variable is selected for the validation of either remote sensing data or water-energy models, errors may occur. The results confirmed that in this landscape homogeneous 30 by 30 m plots can be used as landscape spatial units for soil moisture and ground temperature studies. Under these landscape conditions, small plots can account for local expressions of environmental processes, decreasing the errors and uncertainties in remote sensing estimates caused by landscape heterogeneity.

  4. Changes in precipitation isotope-climate relationships from temporal grouping and aggregation of weekly-resolved USNIP data: impacts on paleoclimate and environmental applications

    NASA Astrophysics Data System (ADS)

    Akers, P. D.; Welker, J. M.

    2015-12-01

    Spatial variations in precipitation isotopes have been the focus of much recent research, but relatively less work has explored changes at various temporal scales. This is partly because most spatially-diverse and long-term isotope databases are offered at a monthly resolution, while daily or event-level records are spatially and temporally limited by cost and logistics. A subset of 25 United States Network for Isotopes in Precipitation (USNIP) sites with weekly-resolution in the east-central United States was analyzed for site-specific relationships between δ18O and δD (the local meteoric water line/LMWL), δ18O and surface temperature, and δ18O and precipitation amount. Weekly data were then aggregated into monthly and seasonal data to examine the effect of aggregation on correlation and slope values for each of the relationships. Generally, increasing aggregation improved correlations (>25% for some sites) due to a reduced effect of extreme values, but estimates on regression variable error increased (>100%) because of reduced sample sizes. Aggregation resulted in small, but significant drops (5-25%) in relationship slope values for some sites. Weekly data were also grouped by month and season to explore changes in relationships throughout the year. Significant subannual variability exists in slope values and correlations even for sites with very strong overall correlations. LMWL slopes are highest in winter and lowest in summer, while the δ18O-surface temperature relationship is strongest in spring. Despite these overall trends, a high level of month-to-month and season-to-season variability is the norm for these sites. Researchers blindly applying overall relationships drawn from monthly-resolved databases to paleoclimate or environmental research risk assuming these relationships apply at all temporal resolutions. When possible, researchers should match the temporal resolution used to calculate an isotopic relationship with the temporal resolution of their applied proxy.

  5. Visuospatial impairment in Parkinson's disease: the role of laterality.

    PubMed

    Karádi, Kázmér; Lucza, Tivadar; Aschermann, Zsuzsanna; Komoly, Sámuel; Deli, Gabriella; Bosnyák, Edit; Acs, Péter; Horváth, Réka; Janszky, József; Kovács, Norbert

    2015-01-01

    Asymmetry is one of the unique and mysterious features of Parkinson's disease (PD). Motor symptoms develop unilaterally either on the left (LPD) or the right side (RPD). Incongruent data are available on whether the side of onset has an impact on cognition in PD. The objective of this study is to compare the visuospatial performance of RPD and LPD patients. Seventy-one non-demented, non-depressive and right-handed patients were categorized into RPD (n = 36) and LPD (n = 35) groups. The Rey-Osterrieth Complex Figure Test (ROCF) was evaluated with both the Taylor and Loring scoring systems. Subsequently, we also performed subgroup analyses on patients having short disease duration (≤5 years, 15 RPD and 15 LPD patients). The standard analysis of ROCF (Taylor's system) did not reveal any differences; however, the utilization of the Loring system demonstrated that LPD patients had significantly worse visuospatial performance than the RPD subjects (3.0 vs. 2.0 points, median, p = 0.002). Correlation between the number of spatial errors and the degree of asymmetry was significant (r = -0.437, p = 0.001). However, this difference could not be observed in PD patients with short disease duration. LPD patients had worse visuospatial performance than the RPD subjects and the number of errors tightly correlated with the degree of asymmetry and long disease duration.

  6. Study of the Effect of Temporal Sampling Frequency on DSCOVR Observations Using the GEOS-5 Nature Run Results (Part I): Earth's Radiation Budget

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Yang, Yuekui

    2016-01-01

    Satellites always sample the Earth-atmosphere system at a finite temporal resolution. This study investigates the effect of sampling frequency on the satellite-derived Earth radiation budget, with the Deep Space Climate Observatory (DSCOVR) as an example. The output from NASA's Goddard Earth Observing System Version 5 (GEOS-5) Nature Run is used as the truth. The Nature Run is a high spatial and temporal resolution atmospheric simulation spanning a two-year period. The effect of temporal resolution on potential DSCOVR observations is assessed by sampling the full Nature Run data with 1-h to 24-h frequencies. The uncertainty associated with a given sampling frequency is measured by computing means over daily, monthly, seasonal and annual intervals and determining the spread across different possible starting points. The skill with which a particular sampling frequency captures the structure of the full time series is measured using correlations and normalized errors. Results show that higher sampling frequency gives more information and less uncertainty in the derived radiation budget. A sampling frequency coarser than every 4 h results in significant error. Correlations between true and sampled time series also decrease more rapidly for sampling frequencies coarser than every 4 h.
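
    The subsampling experiment can be illustrated with a toy time series: compare daily means computed from a full hourly record against those from records sampled every few hours. The synthetic series below is an assumption for illustration, not GEOS-5 Nature Run output.

```python
# Toy illustration of the sampling-frequency test: subsample an hourly "truth"
# series at coarser intervals and compare the resulting daily means.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
truth = 240 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

def daily_mean_error(series, step):
    """RMS difference between daily means from the full vs subsampled series."""
    full = series.reshape(365, 24).mean(axis=1)
    sub = series.reshape(365, 24)[:, ::step].mean(axis=1)
    return np.sqrt(np.mean((full - sub) ** 2))

for step in (1, 2, 4, 8, 24):
    print(f"every {step:2d} h: RMS error of daily mean = {daily_mean_error(truth, step):.2f}")
```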

  7. Smoothing effect for spatially distributed renewable resources and its impact on power grid robustness.

    PubMed

    Nagata, Motoki; Hirata, Yoshito; Fujiwara, Naoya; Tanaka, Gouhei; Suzuki, Hideyuki; Aihara, Kazuyuki

    2017-03-01

    In this paper, we show that spatial correlation of renewable energy outputs greatly influences the robustness of the power grids against large fluctuations of the effective power. First, we evaluate the spatial correlation among renewable energy outputs. We find that the spatial correlation of renewable energy outputs depends on the locations, while the influence of the spatial correlation of renewable energy outputs on power grids is not well known. Thus, second, by employing the topology of the power grid in eastern Japan, we analyze the robustness of the power grid with spatial correlation of renewable energy outputs. The analysis is performed by using a realistic differential-algebraic equations model. The results show that the spatial correlation of the energy resources strongly degrades the robustness of the power grid. Our results suggest that we should consider the spatial correlation of the renewable energy outputs when estimating the stability of power grids.

  8. Technical Note: Atmospheric CO2 inversions on the mesoscale using data-driven prior uncertainties: methodology and system evaluation

    NASA Astrophysics Data System (ADS)

    Kountouris, Panagiotis; Gerbig, Christoph; Rödenbeck, Christian; Karstens, Ute; Koch, Thomas Frank; Heimann, Martin

    2018-03-01

    Atmospheric inversions are widely used in the optimization of surface carbon fluxes on a regional scale using information from atmospheric CO2 dry mole fractions. In many studies the prior flux uncertainty applied to the inversion schemes does not directly reflect the true flux uncertainties but is used to regularize the inverse problem. Here, we aim to implement an inversion scheme using the Jena inversion system and applying a prior flux error structure derived from a model-data residual analysis using high spatial and temporal resolution over a full-year period in the European domain. We analyzed the performance of the inversion system with a synthetic experiment, in which the flux constraint is derived following the same residual analysis but applied to the model-model mismatch. The synthetic study showed quite good agreement between posterior and true fluxes on European, country, annual and monthly scales. Posterior monthly and country-aggregated fluxes improved their correlation coefficient with the known truth by 7 % relative to the prior estimates, with a mean correlation of 0.92. The ratio of the SD between the posterior and reference and between the prior and reference was also reduced by 33 %, with a mean value of 1.15. We identified temporal and spatial scales on which the inversion system maximizes the derived information; monthly temporal scales at around 200 km spatial resolution seem to maximize the information gain.

  9. Modifying Spearman's Attenuation Equation to Yield Partial Corrections for Measurement Error--With Application to Sample Size Calculations

    ERIC Educational Resources Information Center

    Nicewander, W. Alan

    2018-01-01

    Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either or both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient, which translates into "increasing the reliability of…
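
    For context, Spearman's classical correction (before the partial modification discussed in the article) estimates the true-score correlation from the observed correlation and the two reliabilities. A minimal sketch with placeholder values:

```python
# Spearman's classical correction for attenuation (standard textbook form);
# the article's "partial" modification is not reproduced here.
def disattenuated_r(r_xy, rel_x, rel_y):
    """True-score correlation estimate: r_xy / sqrt(rel_x * rel_y)."""
    return r_xy / (rel_x * rel_y) ** 0.5

print(disattenuated_r(r_xy=0.40, rel_x=0.80, rel_y=0.70))   # ~0.53
```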

  10. High quality InSAR data linked to seasonal change in hydraulic head for an agricultural area in the San Luis Valley, Colorado

    NASA Astrophysics Data System (ADS)

    Reeves, Jessica A.; Knight, Rosemary; Zebker, Howard A.; Schreüder, Willem A.; Shanker Agram, Piyush; Lauknes, Tom R.

    2011-12-01

    In the San Luis Valley (SLV), Colorado, legislation passed in 2004 requires that hydraulic head levels in the confined aquifer system stay within the range experienced in the years 1978-2000. While some measurements of hydraulic head exist, greater spatial and temporal sampling would be very valuable in understanding the behavior of the system. Interferometric synthetic aperture radar (InSAR) data provide fine spatial resolution measurements of Earth surface deformation, which can be related to hydraulic head change in the confined aquifer system. However, change in cm-scale crop structure with time leads to signal decorrelation, resulting in low quality data. Here we apply small baseline subset (SBAS) analysis to InSAR data collected from 1992 to 2001. We are able to show high levels of correlation, denoting high quality data, in areas between the center pivot irrigation circles, where the lack of water results in little surface vegetation. At three well locations we see a seasonal variation in the InSAR data that mimics the hydraulic head data. We use measured values of the elastic skeletal storage coefficient to estimate hydraulic head from the InSAR data. In general, the magnitudes of the estimated and measured head agree to within the calculated error. However, the errors are unacceptably large due to both errors in the InSAR data and uncertainty in the measured value of the elastic skeletal storage coefficient. We conclude that InSAR is capturing the seasonal head variation, but that further research is required to obtain accurate hydraulic head estimates from the InSAR deformation measurements.
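
    The conversion from deformation to head change sketched below assumes a purely elastic confined-aquifer response, delta_h = delta_b / S_ke, with placeholder values for the deformation and the elastic skeletal storage coefficient; it is not the authors' full estimation procedure.

```python
# Hedged sketch: head change from vertical deformation under an elastic
# confined-aquifer assumption. Values are placeholders, not SLV measurements.
def head_change_from_deformation(delta_b_m, s_ke):
    """Hydraulic head change (m) = deformation (m) / elastic skeletal storage coefficient (-)."""
    return delta_b_m / s_ke

print(head_change_from_deformation(delta_b_m=0.01, s_ke=5e-4))   # ~20 m of head
```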

  11. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    PubMed

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
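
    The "lossy plus residual" guarantee can be sketched with a uniform residual quantizer of step 2*delta + 1 on integer samples, which bounds the reconstruction error by delta. The stand-in lossy layer below is an assumption for illustration, not the matrix/tensor coder of the paper.

```python
# Sketch of near-lossless "lossy plus residual" coding: quantize the residual
# with step 2*delta+1 (integer signals) so the reconstruction error never
# exceeds delta. Not the paper's actual codec.
import numpy as np

def near_lossless_residual(x, x_lossy, delta):
    residual = x - x_lossy
    q = np.round(residual / (2 * delta + 1)).astype(int)    # coarse residual code
    x_hat = x_lossy + q * (2 * delta + 1)                   # reconstruction
    assert np.max(np.abs(x - x_hat)) <= delta
    return q, x_hat

x = np.random.default_rng(2).integers(-500, 500, size=1000)   # stand-in "EEG" samples
x_lossy = (x // 8) * 8                                         # stand-in lossy layer
q, x_hat = near_lossless_residual(x, x_lossy, delta=2)
print(np.max(np.abs(x - x_hat)))                               # <= 2
```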

  12. Contribution of Modis Satellite Image to Estimate the Daily Air Temperature in the Casablanca City, Morocco

    NASA Astrophysics Data System (ADS)

    Bahi, Hicham; Rhinane, Hassan; Bensalmia, Ahmed

    2016-10-01

    Air temperature is considered to be an essential variable for the study and analysis of meteorological regimes and time series. However, the implementation of daily monitoring of this variable is very difficult to achieve: it requires a sufficient density of measurement stations, meteorological parks and favourable logistics. The present work aims to establish the relationship between day and night land surface temperatures from MODIS data and daily measurements of air temperature acquired during 2011-2012 and provided by the Department of National Meteorology [DMN] of Casablanca, Morocco. The results of the statistical analysis show significant interdependence for night observations, with a correlation coefficient of R2=0.921 and a Root Mean Square Error RMSE=1.503 for Tmin, while the quantity estimated from daytime MODIS observations shows a relatively coarse error, with R2=0.775 and RMSE=2.037 for Tmax. A method based on Gaussian process regression was applied to compute the spatial distribution of air temperature from MODIS throughout the city of Casablanca.
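
    A minimal sketch of a Gaussian process regression of air temperature on MODIS land surface temperature, using scikit-learn with an assumed RBF-plus-noise kernel and synthetic data; the study's actual predictors, kernel choice, and preprocessing are not reproduced here.

```python
# Hedged sketch: GP regression of daily minimum air temperature on night LST.
# All data and kernel choices below are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
lst_night = rng.uniform(5, 30, size=(200, 1))                   # MODIS night LST (degC)
t_min = 0.9 * lst_night[:, 0] + 1.5 + rng.normal(0, 1.2, 200)   # "observed" Tmin

gpr = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(1.0),
                               normalize_y=True).fit(lst_night, t_min)
t_pred, t_std = gpr.predict(np.array([[18.0]]), return_std=True)
print(f"Tmin ~ {t_pred[0]:.1f} +/- {t_std[0]:.1f} degC")
```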

  13. Climbing fibers predict movement kinematics and performance errors.

    PubMed

    Streng, Martha L; Popa, Laurentiu S; Ebner, Timothy J

    2017-09-01

    Requisite for understanding cerebellar function is a complete characterization of the signals provided by complex spike (CS) discharge of Purkinje cells, the output neurons of the cerebellar cortex. Numerous studies have provided insights into CS function, with the most predominant view being that they are evoked by error events. However, several reports suggest that CSs encode other aspects of movements and do not always respond to errors or unexpected perturbations. Here, we evaluated CS firing during a pseudo-random manual tracking task in the monkey ( Macaca mulatta ). This task provides extensive coverage of the work space and relative independence of movement parameters, delivering a robust data set to assess the signals that activate climbing fibers. Using reverse correlation, we determined feedforward and feedback CSs firing probability maps with position, velocity, and acceleration, as well as position error, a measure of tracking performance. The direction and magnitude of the CS modulation were quantified using linear regression analysis. The major findings are that CSs significantly encode all three kinematic parameters and position error, with acceleration modulation particularly common. The modulation is not related to "events," either for position error or kinematics. Instead, CSs are spatially tuned and provide a linear representation of each parameter evaluated. The CS modulation is largely predictive. Similar analyses show that the simple spike firing is modulated by the same parameters as the CSs. Therefore, CSs carry a broader array of signals than previously described and argue for climbing fiber input having a prominent role in online motor control. NEW & NOTEWORTHY This article demonstrates that complex spike (CS) discharge of cerebellar Purkinje cells encodes multiple parameters of movement, including motor errors and kinematics. The CS firing is not driven by error or kinematic events; instead it provides a linear representation of each parameter. In contrast with the view that CSs carry feedback signals, the CSs are predominantly predictive of upcoming position errors and kinematics. Therefore, climbing fibers carry multiple and predictive signals for online motor control. Copyright © 2017 the American Physiological Society.

  14. Temporal and spatial deviation in F2 peak parameters derived from FORMOSAT-3/COSMIC

    NASA Astrophysics Data System (ADS)

    Kumar, Sanjay; Singh, R. P.; Tan, Eng Leong; Singh, A. K.; Ghodpage, R. N.; Siingh, Devendraa

    2016-06-01

    The plasma frequency profiles derived from the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) radio occultation measurements are compared with ground-based ionosonde data during the year 2013. Five equatorial and midlatitude stations located in the Northern and Southern Hemispheres are considered: Jicamarca, Jeju, Darwin, Learmonth, and Juliusruh. The aim is to validate the COSMIC-derived data with ground-based measurements and to estimate the difference in plasma frequency (which represents electron density) and height of the F2 layer peak during the daytime/nighttime and during different seasons by comparing the two data sets. Analysis showed that the nighttime data are better correlated than the daytime data, and the maximum difference occurs at the equatorial ionospheric anomaly (EIA) station as compared to lower and midlatitude stations during the equinox months. The difference between daytime and nighttime correlations becomes insignificant at midlatitude stations. The statistical analysis of computed errors in foF2 (hmF2) showed a Gaussian nature, with the most probable error range of ±15% (±10%) at the equatorial and EIA stations and ±9% (±7%) outside the EIA region, which reduced to ±8% (±6%) at midlatitude stations. The reduction in error at midlatitudes is attributed to the decrease in latitudinal electron density gradients. Comparing the analyzed data during the three geomagnetic storms and quiet days of the same months, it is observed that the differences are significantly enhanced during storm periods and the magnitude of the difference in foF2 increases with the intensity of the geomagnetic storm.

  15. Surface Downward Longwave Radiation Retrieval Algorithm for GEO-KOMPSAT-2A/AMI

    NASA Astrophysics Data System (ADS)

    Ahn, Seo-Hee; Lee, Kyu-Tae; Rim, Se-Hun; Zo, Il-Sung; Kim, Bu-Yo

    2018-05-01

    This study contributes to the development of an algorithm to retrieve the Earth's surface downward longwave radiation (DLR) for the 2nd Geostationary Earth Orbit KOrea Multi-Purpose SATellite (GEO-KOMPSAT-2A; GK-2A)/Advanced Meteorological Imager (AMI). Regarding simulation data for algorithm development, we referred to Clouds and the Earth's Radiant Energy System (CERES) and European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis data. The clear-sky DLR calculations were in good agreement with the Gangneung-Wonju National University (GWNU) Line-By-Line (LBL) model. Compared with CERES data, the Root Mean Square Error (RMSE) was 10.14 W m-2. In the case of cloudy-sky DLR, we estimated the cloud base temperature empirically by utilizing the cloud liquid water content (LWC) according to the cloud type. As a result, the correlation coefficients with CERES all-sky DLRs were greater than 0.99. However, the RMSE between the calculated DLR and CERES data was about 16.67 W m-2, due to ice clouds and problems of mismatched spatial and temporal resolutions for the input data. This error may be reduced when GK-2A is launched and its products can be used as input data. Accordingly, further study is needed to improve the accuracy of the DLR calculation by using high-resolution input data. In addition, when the retrieved all-sky DLR was compared with BSRN surface-based observational data, the correlation coefficient was 0.86 and the RMSE was 31.55 W m-2, which indicates relatively high accuracy. It is expected that increasing the number of experimental cases will reduce the error.

  16. Regional GRACE-based estimates of water mass variations over Australia: validation and interpretation

    NASA Astrophysics Data System (ADS)

    Seoane, L.; Ramillien, G.; Frappart, F.; Leblanc, M.

    2013-04-01

    Time series of regional 2°-by-2° GRACE solutions have been computed from 2003 to 2011 with a 10-day resolution by using an energy integral method over Australia [112° E 156° E; 44° S 10° S]. This approach uses the dynamical orbit analysis of GRACE Level 1 measurements, and especially accurate along-track K-Band Range Rate (KBRR) residuals (1 μm s-1 level of error), to estimate the total water mass over continental regions. The advantages of regional solutions are a significant reduction of GRACE aliasing errors (i.e., north-south stripes), providing a more accurate estimation of water mass balance for hydrological applications. In this paper, the validation of these regional solutions over Australia is presented, as well as their ability to describe water mass change as a response to climate forcings such as El Niño. Principal component analysis of GRACE-derived total water storage maps shows spatial and temporal patterns that are consistent with independent datasets (e.g., rainfall, climate indices and in-situ observations). Regional TWS show higher spatial correlations with in-situ water table measurements over the Murray-Darling drainage basin (80-90%), and they offer a better localization of hydrological structures than classical GRACE global solutions (i.e., Level 2 GRGS products and 400 km ICA solutions as a linear combination of GFZ, CSR and JPL GRACE solutions).

  17. Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR

    PubMed Central

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-01-01

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868

  18. Motion field estimation for a dynamic scene using a 3D LiDAR.

    PubMed

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-09-09

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively.
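
    The per-grid filtering step described in the two records above can be sketched with a minimal constant-velocity Kalman filter for a single cell along one axis; the polar-grid projection, data association, and spatial smoothing of the papers are omitted, and the noise parameters are assumptions.

```python
# Minimal constant-velocity Kalman filter for one grid cell's motion state
# (illustrative only). State: [position, velocity] along one axis.
import numpy as np

def kalman_step(x, P, z, dt, q=0.5, r=0.2):
    F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity model
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])            # process noise
    H = np.array([[1.0, 0.0]])                     # position-only observation
    R = np.array([[r]])
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # update
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for z in [0.9, 2.1, 3.0, 3.9]:                     # noisy positions, 1 s apart
    x, P = kalman_step(x, P, np.array([z]), dt=1.0)
print(x)                                           # ~[4, 1]: position and velocity
```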

  19. Microseismic source locations with deconvolution migration

    NASA Astrophysics Data System (ADS)

    Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu

    2018-03-01

    Identifying and locating microseismic events are critical problems in hydraulic fracturing monitoring for unconventional resources exploration. In contrast to active seismic data, microseismic data are usually recorded with unknown source excitation time and source location. In this study, we introduce deconvolution migration by combining deconvolution interferometry with interferometric cross-correlation migration (CCM). This method avoids the need for the source excitation time and enhances both the spatial resolution and robustness by eliminating the square term of the source wavelets from CCM. The proposed algorithm is divided into the following three steps: (1) generate the virtual gathers by deconvolving the master trace with all other traces in the microseismic gather to remove the unknown excitation time; (2) migrate the virtual gather to obtain a single image of the source location and (3) stack all of these images together to get the final estimation image of the source location. We test the proposed method on complex synthetic and field data set from the surface hydraulic fracturing monitoring, and compare the results with those obtained by interferometric CCM. The results demonstrate that the proposed method can obtain a 50 per cent higher spatial resolution image of the source location, and more robust estimation with smaller errors of the localization especially in the presence of velocity model errors. This method is also beneficial for source mechanism inversion and global seismology applications.
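
    The deconvolution step can be sketched as a water-level-regularized spectral division of each trace by the master trace, which removes the unknown source wavelet and excitation time before migration. The synthetic traces and regularization fraction below are assumptions for illustration, not the authors' processing.

```python
# Sketch of deconvolution interferometry: build a virtual trace by spectral
# division of a receiver trace by a master trace, with water-level damping.
import numpy as np

def deconvolve_trace(trace, master, eps_frac=0.01):
    T, M = np.fft.rfft(trace), np.fft.rfft(master)
    eps = eps_frac * np.mean(np.abs(M) ** 2)              # water level
    D = T * np.conj(M) / (np.abs(M) ** 2 + eps)           # removes wavelet / excitation time
    return np.fft.irfft(D, n=len(trace))

rng = np.random.default_rng(4)
master = rng.normal(size=512)
trace = np.roll(master, 25) + 0.1 * rng.normal(size=512)  # delayed copy plus noise
virtual = deconvolve_trace(trace, master)
print(int(np.argmax(virtual)))                            # ~25-sample lag recovered
```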

  20. High-resolution mapping of the NO2 spatial distribution over Belgian urban areas based on airborne APEX remote sensing

    NASA Astrophysics Data System (ADS)

    Tack, Frederik; Merlaud, Alexis; Iordache, Marian-Daniel; Danckaert, Thomas; Yu, Huan; Fayt, Caroline; Meuleman, Koen; Deutsch, Felix; Fierens, Frans; Van Roozendael, Michel

    2017-05-01

    We present retrieval results of tropospheric nitrogen dioxide (NO2) vertical column densities (VCDs), mapped at high spatial resolution over three Belgian cities, based on the DOAS analysis of Airborne Prism EXperiment (APEX) observations. APEX, developed by a Swiss-Belgian consortium on behalf of ESA (European Space Agency), is a pushbroom hyperspectral imager characterised by a high spatial resolution and high spectral performance. APEX data have been acquired under clear-sky conditions over the two largest and most heavily polluted Belgian cities, i.e. Antwerp and Brussels on 15 April and 30 June 2015. Additionally, a number of background sites have been covered for the reference spectra. The APEX instrument was mounted in a Dornier DO-228 aeroplane, operated by Deutsches Zentrum für Luft- und Raumfahrt (DLR). NO2 VCDs were retrieved from spatially aggregated radiance spectra allowing urban plumes to be resolved at the resolution of 60 × 80 m2. The main sources in the Antwerp area appear to be related to the (petro)chemical industry while traffic-related emissions dominate in Brussels. The NO2 levels observed in Antwerp range between 3 and 35 × 1015 molec cm-2, with a mean VCD of 17.4 ± 3.7 × 1015 molec cm-2. In the Brussels area, smaller levels are found, ranging between 1 and 20 × 1015 molec cm-2 and a mean VCD of 7.7 ± 2.1 × 1015 molec cm-2. The overall errors on the retrieved NO2 VCDs are on average 21 and 28 % for the Antwerp and Brussels data sets. Low VCD retrievals are mainly limited by noise (1σ slant error), while high retrievals are mainly limited by systematic errors. Compared to coincident car mobile-DOAS measurements taken in Antwerp and Brussels, both data sets are in good agreement with correlation coefficients around 0.85 and slopes close to unity. APEX retrievals tend to be, on average, 12 and 6 % higher for Antwerp and Brussels, respectively. Results demonstrate that the NO2 distribution in an urban environment, and its fine-scale variability, can be mapped accurately with high spatial resolution and in a relatively short time frame, and the contributing emission sources can be resolved. High-resolution quantitative information about the atmospheric NO2 horizontal variability is currently rare, but can be very valuable for (air quality) studies at the urban scale.

  1. A bottom up approach to on-road CO2 emissions estimates: improved spatial accuracy and applications for regional planning.

    PubMed

    Gately, Conor K; Hutyra, Lucy R; Wing, Ian Sue; Brondfield, Max N

    2013-03-05

    On-road transportation is responsible for 28% of all U.S. fossil-fuel CO2 emissions. Mapping vehicle emissions at regional scales is challenging due to data limitations. Existing emission inventories use spatial proxies such as population and road density to downscale national or state-level data. Such procedures introduce errors where the proxy variables and actual emissions are weakly correlated, and they limit analysis of the relationship between emissions and demographic trends at local scales. We develop an on-road emission inventory product for Massachusetts based on roadway-level traffic data obtained from the Highway Performance Monitoring System (HPMS). We provide annual estimates of on-road CO2 emissions at a 1 × 1 km grid scale for the years 1980 through 2008. We compared our results with on-road emissions estimates from the Emissions Database for Global Atmospheric Research (EDGAR), with the Vulcan Product, and with estimates derived from state fuel consumption statistics reported by the Federal Highway Administration (FHWA). Our model differs from FHWA estimates by less than 8.5% on average and is within 5% of Vulcan estimates. We found that EDGAR estimates systematically exceed FHWA estimates by an average of 22.8%. Panel regression analysis of per-mile CO2 emissions on population density at the town scale shows a statistically significant correlation that varies systematically in sign and magnitude as population density increases. Population density has a positive correlation with per-mile CO2 emissions for densities below 2000 persons km⁻², above which increasing density correlates negatively with per-mile emissions.

  2. Effects of Head Rotation on Space- and Word-Based Reading Errors in Spatial Neglect

    ERIC Educational Resources Information Center

    Reinhart, Stefan; Keller, Ingo; Kerkhoff, Georg

    2010-01-01

    Patients with right hemisphere lesions often omit or misread words on the left side of a text or the beginning letters of single words, which is termed neglect dyslexia (ND). Two types of reading errors are typically observed in ND: omissions and word-based reading errors. The former are considered space-based omission errors on the…

  3. Nanoscale deformation analysis with high-resolution transmission electron microscopy and digital image correlation

    DOE PAGES

    Wang, Xueju; Pan, Zhipeng; Fan, Feifei; ...

    2015-09-10

    We present an application of the digital image correlation (DIC) method to high-resolution transmission electron microscopy (HRTEM) images for nanoscale deformation analysis. The combination of DIC and HRTEM offers both the ultrahigh spatial resolution and high displacement detection sensitivity that are not possible with other microscope-based DIC techniques. We demonstrate the accuracy and utility of the HRTEM-DIC technique through displacement and strain analysis on amorphous silicon. Two types of error sources resulting from the transmission electron microscopy (TEM) image noise and electromagnetic-lens distortions are quantitatively investigated via rigid-body translation experiments. The local and global DIC approaches are applied for the analysis of diffusion- and reaction-induced deformation fields in electrochemically lithiated amorphous silicon. As a result, the DIC technique coupled with HRTEM provides a new avenue for the deformation analysis of materials at the nanometer length scales.
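
    The subset-matching idea behind DIC can be illustrated with a minimal integer-pixel search based on the zero-normalised cross-correlation (ZNCC). This is a generic sketch, not the authors' HRTEM-DIC pipeline; the subset size, search radius and function names are illustrative assumptions, and sub-pixel refinement and strain computation are omitted.

    ```python
    import numpy as np

    def zncc(a, b):
        """Zero-normalised cross-correlation of two equally sized subsets."""
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

    def match_subset(ref, cur, top_left, size=21, search=5):
        """Integer-pixel displacement of one subset between a reference and
        a current image, found by exhaustive ZNCC search."""
        r0, c0 = top_left
        template = ref[r0:r0 + size, c0:c0 + size]
        best, best_uv = -2.0, (0, 0)
        for du in range(-search, search + 1):
            for dv in range(-search, search + 1):
                r, c = r0 + du, c0 + dv
                if r < 0 or c < 0:
                    continue
                window = cur[r:r + size, c:c + size]
                if window.shape != template.shape:
                    continue
                score = zncc(template, window)
                if score > best:
                    best, best_uv = score, (du, dv)
        return best_uv, best
    ```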

  4. Spatial and decadal variations in satellite-based terrestrial evapotranspiration and drought over Inner Mongolia Autonomous Region of China during 1982-2009

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaolu; Kang, Hui; Yao, Yunjun; Fadhil, Ayad M.; Zhang, Yuhu; Jia, Kun

    2017-12-01

    Evapotranspiration (ET) plays an important role in the water budget and carbon cycle of the Inner Mongolia Autonomous Region of China (IMARC). However, spatial and decadal variations in terrestrial ET and drought over the IMARC have previously been calculated using only sparse, point-based meteorological data and therefore remain quite uncertain. In this study, by combining satellite and meteorological datasets, a satellite-based semi-empirical Penman ET (SEMI-PM) algorithm is used to estimate regional ET and the evaporative wet index (EWI), calculated as the ratio of ET to potential ET (PET), over the IMARC. Validation results show that the squared correlation coefficient (R²) for the four sites varies from 0.45 to 0.84 and the root-mean-square error (RMSE) is 0.78 mm. We found that ET decreased by an average of 4.8 mm per decade (p=0.10) over the entire IMARC during 1982-2009 and that the EWI decreased by an average of 1.1% per decade (p=0.08) during the study period. Importantly, the patterns of monthly EWI anomalies show good spatial and temporal correlation with the Palmer Drought Severity Index (PDSI) anomalies from 1982 to 2009, indicating that the EWI can be used to monitor regional surface drought at high spatial resolution. In the high-latitude ecosystems of the northeast region of the IMARC, both air temperature (Ta) and incident solar radiation (Rs) are the most important parameters in determining ET. However, in the semiarid and arid areas of the central and southwest regions of the IMARC, both relative humidity (RH) and the normalized difference vegetation index (NDVI) are the most important factors controlling the annual variation of ET.

  5. Using SMOS brightness temperature and derived surface-soil moisture to characterize surface conditions and validate land surface models.

    NASA Astrophysics Data System (ADS)

    Polcher, Jan; Barella-Ortiz, Anaïs; Piles, Maria; Gelati, Emiliano; de Rosnay, Patricia

    2017-04-01

    The SMOS satellite, operated by ESA, observes the surface in the L-band. Over continental surfaces these observations are sensitive to moisture, and in particular to surface-soil moisture (SSM). In this presentation we explore how the observations of this satellite can be exploited over the Iberian Peninsula by comparing its results with two land surface models: ORCHIDEE and HTESSEL. Measured and modelled brightness temperatures show good agreement in their temporal evolution, but their spatial structures are not consistent. An empirical orthogonal function analysis of the brightness temperature error identifies a dominant structure over the south-west of the Iberian Peninsula which evolves during the year and is maximum in autumn and winter. Hypotheses concerning forcing-induced biases and assumptions made in the radiative transfer model are analysed to explain this inconsistency, but no candidate is found to be responsible for the weak spatial correlations. The analysis of spatial inconsistencies between modelled and measured TBs is important, as these can affect the estimation of geophysical variables and TB assimilation in operational models, as well as result in misleading validation studies. When comparing the surface-soil moisture of the models with the product derived operationally by ESA from SMOS observations, similar results are found. The spatial correlation over the IP between SMOS and ORCHIDEE SSM estimates is poor (ρ ≈ 0.3). A singular value decomposition (SVD) analysis of rainfall and SSM shows that the co-varying patterns of these variables are in reasonable agreement between both products. Moreover, the first three SVD soil moisture patterns explain over 80% of the SSM variance simulated by the model, while the explained fraction is only 52% for the remotely sensed values. These results suggest that the rainfall-driven soil moisture variability may not account for the poor spatial correlation between SMOS and ORCHIDEE products. Other reasons have to be sought to explain the poor agreement in spatial patterns between satellite-derived and modelled SSM. This presentation will hopefully contribute to the discussion of how SMOS and other observations can be used to prepare, carry out and exploit a field campaign over the Iberian Peninsula which aims at improving our understanding of semi-arid land surface processes.
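
    The empirical orthogonal function analysis mentioned above can be sketched generically as a singular value decomposition of a (time × space) anomaly matrix. This is a standard EOF recipe, not the specific processing applied to the SMOS brightness temperatures; the function and variable names are assumptions.

    ```python
    import numpy as np

    def eof_analysis(field, n_modes=3):
        """EOF analysis of a (time, space) anomaly matrix via SVD.

        Returns the leading spatial patterns (EOFs), their principal
        component time series, and the fraction of variance explained.
        """
        anomalies = field - field.mean(axis=0)      # remove the time mean
        u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
        explained = (s ** 2) / np.sum(s ** 2)
        eofs = vt[:n_modes]                         # spatial patterns
        pcs = u[:, :n_modes] * s[:n_modes]          # time series
        return eofs, pcs, explained[:n_modes]
    ```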

  6. Cross-comparison and evaluation of air pollution field estimation methods

    NASA Astrophysics Data System (ADS)

    Yu, Haofei; Russell, Armistead; Mulholland, James; Odman, Talat; Hu, Yongtao; Chang, Howard H.; Kumar, Naresh

    2018-04-01

    Accurate estimates of human exposure are critical for air pollution health studies, and a variety of methods are currently being used to assign pollutant concentrations to populations. Results from these methods may differ substantially, which can affect the outcomes of health impact assessments. Here, we applied 14 methods for developing spatiotemporal air pollutant concentration fields of eight pollutants to the Atlanta, Georgia region. These methods include eight methods relying mostly on air quality observations (CM: central monitor; SA: spatial average; IDW: inverse distance weighting; KRIG: kriging; TESS-D: discontinuous tessellation; TESS-NN: natural neighbor tessellation with interpolation; LUR: land use regression; AOD: downscaled satellite-derived aerosol optical depth), one using the RLINE dispersion model, and five using a chemical transport model (CMAQ), with and without observational data to constrain results. The derived fields were evaluated and compared. Overall, all methods generally perform better in urban than in rural areas, and for secondary than for primary pollutants. We found that the CM and SA methods may be appropriate only for small domains and for secondary pollutants, though the SA method led to large negative spatial correlations when using data withholding for PM2.5 (spatial correlation coefficient R = -0.81). The TESS-D method was found to have major limitations. Results of the IDW, KRIG and TESS-NN methods are similar. They are found to be better suited for secondary pollutants because of their satisfactory temporal performance (e.g. average temporal R2 > 0.85 for PM2.5 but less than 0.35 for the primary pollutant NO2). In addition, they are suitable only for areas with relatively dense monitoring networks, given their limited ability to capture spatial concentration variability, as indicated by the negative spatial R (lower than -0.2 for PM2.5 when assessed using data withholding). The performance of the LUR and AOD methods was similar to that of kriging. Using RLINE and CMAQ fields without fusing observational data led to substantial errors and biases, though the CMAQ model captured spatial gradients reasonably well (spatial R = 0.45 for PM2.5). Two unique tests conducted here included quantifying the autocorrelation of method biases (which can be important in time series analyses) and how well the methods capture the observed interspecies correlations (which is of particular importance in multipollutant health assessments). Autocorrelation of method biases lasted longest, and interspecies correlations of primary pollutants were higher than observed, when air quality models were used without data fusion. Use of hybrid methods that combine air quality model outputs with observational data overcomes some of these limitations and is better suited for health studies. Results from this study contribute to a better understanding of the strengths and weaknesses of different methods for estimating human exposure.

  7. Ordinary kriging vs inverse distance weighting: spatial interpolation of the sessile community of Madagascar reef, Gulf of Mexico.

    PubMed

    Zarco-Perello, Salvador; Simões, Nuno

    2017-01-01

    Information about the distribution and abundance of the habitat-forming sessile organisms in marine ecosystems is of great importance for conservation and natural resource managers. Spatial interpolation methodologies can be useful to generate this information from in situ sampling points, especially in circumstances where remote sensing methodologies cannot be applied due to small-scale spatial variability of the natural communities and low light penetration in the water column. Interpolation methods are widely used in environmental sciences; however, published studies using these methodologies in coral reef science are scarce. We compared the accuracy of the two most commonly used interpolation methods in all disciplines, inverse distance weighting (IDW) and ordinary kriging (OK), to predict the distribution and abundance of hard corals, octocorals, macroalgae, sponges and zoantharians and identify hotspots of these habitat-forming organisms using data sampled at three different spatial scales (5, 10 and 20 m) in Madagascar reef, Gulf of Mexico. The deeper sandy environments of the leeward and windward regions of Madagascar reef were dominated by macroalgae and seconded by octocorals. However, the shallow rocky environments of the reef crest had the highest richness of habitat-forming groups of organisms; here, we registered high abundances of octocorals and macroalgae, with sponges, Millepora alcicornis and zoantharians dominating in some patches, creating high levels of habitat heterogeneity. IDW and OK generated similar maps of distribution for all the taxa; however, cross-validation tests showed that IDW outperformed OK in the prediction of their abundances. When the sampling distance was at 20 m, both interpolation techniques performed poorly, but as the sampling was done at shorter distances prediction accuracies increased, especially for IDW. OK had higher mean prediction errors and failed to correctly interpolate the highest abundance values measured in situ, except for macroalgae, whereas IDW had lower mean prediction errors and high correlations between predicted and measured values in all cases when sampling was every 5 m. The accurate spatial interpolations created using IDW allowed us to see the spatial variability of each taxon at a biological and spatial resolution that remote sensing would not have been able to produce. Our study sets the basis for further research projects and conservation management in Madagascar reef and encourages similar studies in the region and other parts of the world where remote sensing technologies are not suitable for use.
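
    A minimal sketch of inverse distance weighting together with the leave-one-out cross-validation used to compare interpolators is given below. The kriging side of the comparison (typically done with a geostatistics package) is omitted; the power parameter and function names are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def idw_predict(xy_known, z_known, xy_new, power=2.0):
        """Inverse-distance-weighted prediction at new locations."""
        d = np.linalg.norm(xy_new[:, None, :] - xy_known[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)            # avoid division by zero
        w = 1.0 / d ** power
        return (w * z_known).sum(axis=1) / w.sum(axis=1)

    def loo_mean_abs_error(xy, z, power=2.0):
        """Leave-one-out cross-validation: mean absolute prediction error."""
        errors = []
        for i in range(len(z)):
            mask = np.arange(len(z)) != i
            pred = idw_predict(xy[mask], z[mask], xy[i:i + 1], power)[0]
            errors.append(abs(pred - z[i]))
        return float(np.mean(errors))
    ```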

  8. Ordinary kriging vs inverse distance weighting: spatial interpolation of the sessile community of Madagascar reef, Gulf of Mexico

    PubMed Central

    Simões, Nuno

    2017-01-01

    Information about the distribution and abundance of the habitat-forming sessile organisms in marine ecosystems is of great importance for conservation and natural resource managers. Spatial interpolation methodologies can be useful to generate this information from in situ sampling points, especially in circumstances where remote sensing methodologies cannot be applied due to small-scale spatial variability of the natural communities and low light penetration in the water column. Interpolation methods are widely used in environmental sciences; however, published studies using these methodologies in coral reef science are scarce. We compared the accuracy of the two most commonly used interpolation methods in all disciplines, inverse distance weighting (IDW) and ordinary kriging (OK), to predict the distribution and abundance of hard corals, octocorals, macroalgae, sponges and zoantharians and identify hotspots of these habitat-forming organisms using data sampled at three different spatial scales (5, 10 and 20 m) in Madagascar reef, Gulf of Mexico. The deeper sandy environments of the leeward and windward regions of Madagascar reef were dominated by macroalgae and seconded by octocorals. However, the shallow rocky environments of the reef crest had the highest richness of habitat-forming groups of organisms; here, we registered high abundances of octocorals and macroalgae, with sponges, Millepora alcicornis and zoantharians dominating in some patches, creating high levels of habitat heterogeneity. IDW and OK generated similar maps of distribution for all the taxa; however, cross-validation tests showed that IDW outperformed OK in the prediction of their abundances. When the sampling distance was at 20 m, both interpolation techniques performed poorly, but as the sampling was done at shorter distances prediction accuracies increased, especially for IDW. OK had higher mean prediction errors and failed to correctly interpolate the highest abundance values measured in situ, except for macroalgae, whereas IDW had lower mean prediction errors and high correlations between predicted and measured values in all cases when sampling was every 5 m. The accurate spatial interpolations created using IDW allowed us to see the spatial variability of each taxa at a biological and spatial resolution that remote sensing would not have been able to produce. Our study sets the basis for further research projects and conservation management in Madagascar reef and encourages similar studies in the region and other parts of the world where remote sensing technologies are not suitable for use. PMID:29204321

  9. Estimating pixel variances in the scenes of staring sensors

    DOEpatents

    Simonson, Katherine M [Cedar Crest, NM; Ma, Tian J [Albuquerque, NM

    2012-01-24

    A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
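
    The general idea described in the abstract, screening a raw difference frame against per-pixel error estimates that grow with the local intensity gradient, can be sketched as follows. This is an illustrative simplification, not the patented method; the assumed jitter amplitude and the 3-sigma threshold are arbitrary choices for the example.

    ```python
    import numpy as np

    def difference_with_gradient_error(reference, current, jitter_px=0.5):
        """Raw difference frame plus a per-pixel error estimate proportional
        to the local spatial intensity gradient, used to flag changes that
        exceed what camera jitter alone could explain."""
        diff = current.astype(float) - reference.astype(float)
        gy, gx = np.gradient(reference.astype(float))
        grad_mag = np.hypot(gx, gy)
        pixel_error = jitter_px * grad_mag          # assumed jitter of ~0.5 px
        significant = np.abs(diff) > 3.0 * pixel_error
        return diff, pixel_error, significant
    ```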

  10. Practical guidance on characterizing availability in resource selection functions under a use-availability design

    USGS Publications Warehouse

    Northrup, Joseph M.; Hooten, Mevin B.; Anderson, Charles R.; Wittemyer, George

    2013-01-01

    Habitat selection is a fundamental aspect of animal ecology, the understanding of which is critical to management and conservation. Global positioning system data from animals allow fine-scale assessments of habitat selection and typically are analyzed in a use-availability framework, whereby animal locations are contrasted with random locations (the availability sample). Although most use-availability methods are in fact spatial point process models, they often are fit using logistic regression. This framework offers numerous methodological challenges, for which the literature provides little guidance. Specifically, the size and spatial extent of the availability sample influences coefficient estimates potentially causing interpretational bias. We examined the influence of availability on statistical inference through simulations and analysis of serially correlated mule deer GPS data. Bias in estimates arose from incorrectly assessing and sampling the spatial extent of availability. Spatial autocorrelation in covariates, which is common for landscape characteristics, exacerbated the error in availability sampling leading to increased bias. These results have strong implications for habitat selection analyses using GPS data, which are increasingly prevalent in the literature. We recommend researchers assess the sensitivity of their results to their availability sample and, where bias is likely, take care with interpretations and use cross validation to assess robustness.
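
    A use-availability resource selection function of the kind discussed here is commonly fitted as a logistic regression of used versus available locations. The sketch below is a generic illustration under that assumption (using scikit-learn), not the authors' analysis; re-fitting with availability samples of different sizes or spatial extents is one simple way to run the sensitivity check they recommend.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def fit_rsf(used_covs, avail_covs):
        """Fit a use-availability RSF: logistic regression of used (1)
        versus available (0) locations on habitat covariates.

        used_covs, avail_covs : arrays of shape (n_points, n_covariates)
        """
        X = np.vstack([used_covs, avail_covs])
        y = np.concatenate([np.ones(len(used_covs)), np.zeros(len(avail_covs))])
        model = LogisticRegression(max_iter=1000).fit(X, y)
        return model.coef_.ravel()

    # Sensitivity check: compare coefficients from availability samples of
    # different sizes/extents; large shifts signal the bias discussed above.
    ```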

  11. Every photon counts: improving low, mid, and high-spatial frequency errors on astronomical optics and materials with MRF

    NASA Astrophysics Data System (ADS)

    Maloney, Chris; Lormeau, Jean Pierre; Dumas, Paul

    2016-07-01

    Many astronomical sensing applications operate in low-light conditions; for these applications every photon counts. Controlling mid-spatial frequencies and surface roughness on astronomical optics is critical for mitigating scattering effects such as flare and energy loss. By improving these two frequency regimes, higher-contrast images can be collected with improved efficiency. Classically, Magnetorheological Finishing (MRF) has offered an optical fabrication technique to correct low-order errors as well as quilting/print-through errors left over in light-weighted optics from conventional polishing techniques. MRF is a deterministic, sub-aperture polishing process that has been used to improve figure on an ever-expanding assortment of optical geometries, such as planos, spheres, on- and off-axis aspheres, primary mirrors and freeform optics. Precision optics are routinely manufactured by this technology with sizes ranging from 5 to 2,000 mm in diameter. MRF can be used for form corrections, turning a sphere into an asphere or freeform, but more commonly for figure corrections, achieving figure errors as low as 1 nm RMS when careful metrology setups are used. Recent advancements in MRF technology have improved the polishing performance expected for astronomical optics in the low, mid and high spatial frequency regimes. Deterministic figure correction with MRF is compatible with most materials, including some recent examples on Silicon Carbide and RSA905 Aluminum. MRF also has the ability to produce 'perfectly-bad' compensating surfaces, which may be used to compensate for measured or modeled optical deformation from sources such as gravity or mounting. In addition, recent advances in MRF technology allow for corrections of mid-spatial wavelengths as small as 1 mm simultaneously with form error correction. Efficient mid-spatial frequency corrections make use of optimized process conditions, including raster polishing in combination with a small tool size. Furthermore, a novel MRF fluid, called C30, has been developed to finish surfaces to ultra-low roughness (ULR) and has been used as the low-removal-rate fluid required for fine figure correction of mid-spatial frequency errors. This novel MRF fluid is able to achieve <4 Å RMS on Nickel-plated Aluminum and even <1.5 Å RMS roughness on Silicon, Fused Silica and other materials. C30 fluid is best utilized within a fine figure correction process to target mid-spatial frequency errors as well as to smooth surface roughness 'for free', all in one step. In this paper we discuss recent advancements in MRF technology, the ability to meet requirements for precision optics in the low, mid and high spatial frequency regimes, and how improved MRF performance addresses the tight specifications required for astronomical optics.

  12. New decoding methods of interleaved burst error-correcting codes

    NASA Astrophysics Data System (ADS)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high burst-error-correction capability with less decoding delay. By generalizing this method, a probabilistic method of multiple (m-fold) burst error correction can be obtained. After estimating the burst error positions using the syndrome correlation of the subcodes, which are interleaved m-fold burst-error-detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.

  13. Stitching-error reduction in gratings by shot-shifted electron-beam lithography

    NASA Technical Reports Server (NTRS)

    Dougherty, D. J.; Muller, R. E.; Maker, P. D.; Forouhar, S.

    2001-01-01

    Calculations of the grating spatial-frequency spectrum and the filtering properties of multiple-pass electron-beam writing demonstrate a tradeoff between stitching-error suppression and minimum pitch separation. High-resolution measurements of optical-diffraction patterns show a 25-dB reduction in stitching-error side modes.

  14. Merging gauge and satellite rainfall with specification of associated uncertainty across Australia

    NASA Astrophysics Data System (ADS)

    Woldemeskel, Fitsum M.; Sivakumar, Bellie; Sharma, Ashish

    2013-08-01

    Accurate estimation of spatial rainfall is crucial for modelling hydrological systems and for planning and management of water resources. While spatial rainfall can be estimated using either rain gauge-based or satellite-based measurements, such estimates are subject to uncertainties due to various sources of error in either case, including interpolation and retrieval errors. The purpose of the present study is twofold: (1) to investigate the benefit of merging rain gauge measurements and satellite rainfall data for Australian conditions and (2) to produce a database of retrospective rainfall along with a new uncertainty metric for each grid location at any time step. The analysis involves four steps. First, a comparison of rain gauge measurements and the Tropical Rainfall Measuring Mission (TRMM) 3B42 data at the rain gauge locations is carried out. Second, gridded monthly rain gauge rainfall is determined using thin plate smoothing splines (TPSS) and a modified inverse distance weighting (MIDW) method. Third, the gridded rain gauge rainfall is merged with the monthly accumulated TRMM 3B42 using a linearised weighting procedure, the weights at each grid cell being calculated from the error variances of each dataset. Finally, cross-validation (CV) errors at rain gauge locations and standard errors at gridded locations are estimated for each time step. The CV error statistics indicate that merging the two datasets improves the estimation of spatial rainfall, and more so where the rain gauge network is sparse. The provision of spatio-temporal standard errors with the retrospective dataset is particularly useful for subsequent modelling applications, where knowledge of input errors can help reduce the uncertainty associated with modelling outcomes.
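
    A minimal sketch of the linearised weighting step, under the common assumption that the errors of the two datasets are independent, is shown below; the function and variable names are illustrative and not taken from the paper.

    ```python
    import numpy as np

    def merge_fields(gauge, satellite, var_gauge, var_satellite):
        """Merge gridded gauge and satellite rainfall with weights inversely
        proportional to their error variances, and return the standard error
        of the merged estimate at each grid cell."""
        w_g = 1.0 / var_gauge
        w_s = 1.0 / var_satellite
        merged = (w_g * gauge + w_s * satellite) / (w_g + w_s)
        std_error = np.sqrt(1.0 / (w_g + w_s))
        return merged, std_error
    ```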

  15. Combined fabrication technique for high-precision aspheric optical windows

    NASA Astrophysics Data System (ADS)

    Hu, Hao; Song, Ci; Xie, Xuhui

    2016-07-01

    Specifications for optical components are becoming more and more stringent with the performance improvement of modern optical systems. These strict requirements involve not only low-spatial-frequency surface accuracy and mid- and high-spatial-frequency surface errors, but also surface smoothness and other properties. This presentation focuses on a fabrication process for a square aspheric window which combines accurate grinding, magnetorheological finishing (MRF) and smoothing polishing (SP). In order to remove the low-spatial-frequency surface errors and subsurface defects left after accurate grinding, the deterministic polishing method MRF, with its high convergence and stable material removal rate, is applied. Then SP with a pseudo-random path is adopted to eliminate the mid- and high-spatial-frequency surface ripples and high-slope errors, which are a weakness of MRF. Additionally, the coordinate measurement method and interferometry are combined in different phases. An acid-etching method and ion beam figuring (IBF) are also investigated for observing and reducing subsurface defects. Actual fabrication results indicate that the combined fabrication technique leads to high machining efficiency in manufacturing high-precision, high-quality aspheric optical windows.

  16. A New Stratified Sampling Procedure which Decreases Error Estimation of Varroa Mite Number on Sticky Boards.

    PubMed

    Kretzschmar, A; Durand, E; Maisonnasse, A; Vallon, J; Le Conte, Y

    2015-06-01

    A new stratified sampling procedure is proposed in order to establish an accurate estimate of Varroa destructor populations on the sticky bottom boards of the hive. It is based on spatial sampling theory, which recommends regular-grid stratification in the case of a spatially structured process. Because the distribution of varroa mites on sticky boards is observed to be spatially structured, we designed a sampling scheme based on a regular grid with circles centered on each grid element. This new procedure is then compared with a former method using partially random sampling. Relative error improvements are presented on the basis of a large sample of simulated sticky boards (n=20,000) that covers a complete range of spatial structures, from random to highly frame-driven. The improvement in the estimation of varroa mite numbers is then measured by the percentage of counts with an error greater than a given level. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved.
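
    The regular-grid stratification with circles centred on each grid element can be sketched as follows for a board represented as a gridded count image. The raster representation and all parameter values are assumptions made for illustration; the actual protocol counts mites visually inside drawn circles.

    ```python
    import numpy as np

    def stratified_circle_estimate(board, n_rows=5, n_cols=10, radius_frac=0.4):
        """Estimate the total mite count on a sticky board from circular
        sub-samples centred in the cells of a regular grid, scaling the
        sampled count by the fraction of the board that was sampled."""
        H, W = board.shape
        cell_h, cell_w = H / n_rows, W / n_cols
        radius = radius_frac * min(cell_h, cell_w)
        yy, xx = np.mgrid[0:H, 0:W]
        sampled_count, sampled_area = 0.0, 0.0
        for i in range(n_rows):
            for j in range(n_cols):
                cy, cx = (i + 0.5) * cell_h, (j + 0.5) * cell_w
                mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
                sampled_count += board[mask].sum()
                sampled_area += mask.sum()
        return sampled_count * (board.size / sampled_area)
    ```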

  17. Spatial interpolation of GPS PWV and meteorological variables over the west coast of Peninsular Malaysia during 2013 Klang Valley Flash Flood

    NASA Astrophysics Data System (ADS)

    Suparta, Wayan; Rahman, Rosnani

    2016-02-01

    Global Positioning System (GPS) receivers are widely installed throughout Peninsular Malaysia, but their use in weather-hazard monitoring systems, such as flash flood warning, is still not optimal. To increase the benefit for meteorological applications, GPS receivers should be installed in collocation with meteorological sensors so that precipitable water vapor (PWV) can be measured. The distribution of PWV is a key element of the Earth's climate and is important for quantitative precipitation estimation as well as flash flood forecasting. The accuracy of this parameter depends to a large extent on the number of GPS receivers and meteorological sensors installed in the targeted area. Because of cost constraints, a spatial interpolation method is proposed to address these issues. In this paper, we investigated the spatial distribution of GPS PWV and meteorological variables (surface temperature, relative humidity, and rainfall) by using thin plate spline (TPS) and ordinary kriging (Krig) interpolation techniques over the Klang Valley in Peninsular Malaysia (longitude: 99.5°-102.5°E and latitude: 2.0°-6.5°N). Three flash flood cases in September, October, and December 2013 were studied. The analysis was performed using the mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R²) to determine the accuracy and reliability of the interpolation techniques. Results evaluated at different phases (pre-flood, onset, and post-flood) showed that the TPS interpolation technique is more accurate, reliable, and highly correlated in estimating GPS PWV and relative humidity, whereas Krig is more reliable for predicting temperature and rainfall before flash flood events. During the onset of flash flood events, both methods interpolated all meteorological parameters with high accuracy and reliability. The findings suggest that the proposed spatial interpolation techniques are capable of handling limited data sources with high accuracy, which in turn can be used to predict future floods.

  18. Distribution of kriging errors, the implications and how to communicate them

    NASA Astrophysics Data System (ADS)

    Li, Hong Yi; Milne, Alice; Webster, Richard

    2016-04-01

    Kriging in one form or another has become perhaps the most popular method for spatial prediction in environmental science. Each prediction is unbiased and of minimum variance, which itself is estimated. The kriging variances depend on the mathematical model chosen to describe the spatial variation; different models, however plausible, give rise to different minimized variances. Practitioners often compare models by so-called cross-validation before finally choosing the most appropriate for their kriging. One proceeds as follows. One removes a unit (a sampling point) from the whole set, kriges the value there and compares the kriged value with the value observed to obtain the deviation or error. One repeats the process for each and every point in turn and for all plausible models. One then computes the mean errors (MEs) and the mean of the squared errors (MSEs). Ideally a squared error should equal the corresponding kriging variance (σ_K²), and so one is advised to choose the model for which on average the squared errors most nearly equal the kriging variances, i.e. the ratio MSDR = MSE/σ_K² ≈ 1. Maximum likelihood estimation of models almost guarantees that the MSDR equals 1, and so the kriging variances are unbiased predictors of the squared error across the region. The method is based on the assumption that the errors have a normal distribution. The squared deviation ratio (SDR) should therefore be distributed as χ² with one degree of freedom, with a median of 0.455. We have found that often the median of the SDR (MedSDR) is less, in some instances much less, than 0.455 even though the mean of the SDR is close to 1. It seems that in these cases the distributions of the errors are leptokurtic, i.e. they have an excess of predictions close to the true values, excesses near the extremes and a dearth of predictions in between. In these cases the kriging variances are poor measures of the uncertainty at individual sites. The uncertainty is typically under-estimated for the extreme observations and compensated for by over-estimating for other observations. Statisticians must tell users of this when they present maps of predictions. We illustrate the situation with results from mapping salinity in land reclaimed from the Yangtze delta in the Gulf of Hangzhou, China. There the apparent electrical conductivity (ECa) of the topsoil was measured at 525 points in a field of 2.3 ha. The marginal distribution of the observations was strongly positively skewed, and so the observed ECas were transformed to their logarithms to give an approximately symmetric distribution. That distribution was strongly platykurtic, with short tails and no evident outliers. The logarithms were analysed as a mixed model of quadratic drift plus correlated random residuals with a spherical variogram. The kriged predictions deviated from their true values with an MSDR of 0.993, but with a MedSDR of 0.324. The coefficient of kurtosis of the deviations was 1.45, i.e. substantially larger than the 0 of a normal distribution. The reasons for this behaviour are being sought. The most likely explanation is that there are spatial outliers, i.e. points at which the observed values differ markedly from those at their closest neighbours.
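
    The cross-validation diagnostics named above (ME, MSE, MSDR and MedSDR) can be computed with a short routine such as the sketch below; the function name and the dictionary output are assumptions made for illustration.

    ```python
    import numpy as np

    def kriging_cv_diagnostics(observed, predicted, kriging_variance):
        """Cross-validation diagnostics for a kriging model.

        SDR_i = (z_i - zhat_i)**2 / sigma_K_i**2; with normally distributed
        errors the SDR follows chi-square with 1 degree of freedom, so the
        MSDR should be close to 1 and the MedSDR close to 0.455.
        """
        errors = np.asarray(observed, float) - np.asarray(predicted, float)
        sdr = errors ** 2 / np.asarray(kriging_variance, float)
        return {
            "ME": float(np.mean(errors)),
            "MSE": float(np.mean(errors ** 2)),
            "MSDR": float(np.mean(sdr)),
            "MedSDR": float(np.median(sdr)),
        }
    ```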

  19. Distribution of kriging errors, the implications and how to communicate them

    NASA Astrophysics Data System (ADS)

    Li, HongYi; Milne, Alice; Webster, Richard

    2015-04-01

    Kriging in one form or another has become perhaps the most popular method for spatial prediction in environmental science. Each prediction is unbiased and of minimum variance, which itself is estimated. The kriging variances depend on the mathematical model chosen to describe the spatial variation; different models, however plausible, give rise to different minimized variances. Practitioners often compare models by so-called cross-validation before finally choosing the most appropriate for their kriging. One proceeds as follows. One removes a unit (a sampling point) from the whole set, kriges the value there and compares the kriged value with the value observed to obtain the deviation or error. One repeats the process for each and every point in turn and for all plausible models. One then computes the mean errors (MEs) and the mean of the squared errors (MSEs). Ideally a squared error should equal the corresponding kriging variance (σ_K²), and so one is advised to choose the model for which on average the squared errors most nearly equal the kriging variances, i.e. the ratio MSDR = MSE/σ_K² ≈ 1. Maximum likelihood estimation of models almost guarantees that the MSDR equals 1, and so the kriging variances are unbiased predictors of the squared error across the region. The method is based on the assumption that the errors have a normal distribution. The squared deviation ratio (SDR) should therefore be distributed as χ² with one degree of freedom, with a median of 0.455. We have found that often the median of the SDR (MedSDR) is less, in some instances much less, than 0.455 even though the mean of the SDR is close to 1. It seems that in these cases the distributions of the errors are leptokurtic, i.e. they have an excess of predictions close to the true values, excesses near the extremes and a dearth of predictions in between. In these cases the kriging variances are poor measures of the uncertainty at individual sites. The uncertainty is typically under-estimated for the extreme observations and compensated for by over-estimating for other observations. Statisticians must tell users of this when they present maps of predictions. We illustrate the situation with results from mapping salinity in land reclaimed from the Yangtze delta in the Gulf of Hangzhou, China. There the apparent electrical conductivity (EC_a) of the topsoil was measured at 525 points in a field of 2.3 ha. The marginal distribution of the observations was strongly positively skewed, and so the observed EC_a values were transformed to their logarithms to give an approximately symmetric distribution. That distribution was strongly platykurtic, with short tails and no evident outliers. The logarithms were analysed as a mixed model of quadratic drift plus correlated random residuals with a spherical variogram. The kriged predictions deviated from their true values with an MSDR of 0.993, but with a MedSDR of 0.324. The coefficient of kurtosis of the deviations was 1.45, i.e. substantially larger than the 0 of a normal distribution. The reasons for this behaviour are being sought. The most likely explanation is that there are spatial outliers, i.e. points at which the observed values differ markedly from those at their closest neighbours.

  20. Comparison of Spatial Correlation Parameters between Full and Model Scale Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Kenny, Jeremy; Giacomoni, Clothilde

    2016-01-01

    The current vibro-acoustic analysis tools require specific spatial correlation parameters as input to define the liftoff acoustic environment experienced by the launch vehicle. Until recently these parameters have not been very well defined. A comprehensive set of spatial correlation data were obtained during a scale model acoustic test conducted in 2014. From these spatial correlation data, several parameters were calculated: the decay coefficient, the diffuse to propagating ratio, and the angle of incidence. Spatial correlation data were also collected on the EFT-1 flight of the Delta IV vehicle which launched on December 5th, 2014. A comparison of the spatial correlation parameters from full scale and model scale data will be presented.

  1. The formulation and estimation of a spatial skew-normal generalized ordered-response model.

    DOT National Transportation Integrated Search

    2016-06-01

    This paper proposes a new spatial generalized ordered response model with skew-normal kernel error terms and an associated estimation method. It contributes to the spatial analysis field by allowing a flexible and parametric skew-normal distribut...

  2. Effects of errors and gaps in spatial data sets on assessment of conservation progress.

    PubMed

    Visconti, P; Di Marco, M; Álvarez-Romero, J G; Januchowski-Hartley, S R; Pressey, R L; Weeks, R; Rondinini, C

    2013-10-01

    Data on the location and extent of protected areas, ecosystems, and species' distributions are essential for determining gaps in biodiversity protection and identifying future conservation priorities. However, these data sets always come with errors in the maps and associated metadata. Errors are often overlooked in conservation studies, despite their potential negative effects on the reported extent of protection of species and ecosystems. We used 3 case studies to illustrate the implications of 3 sources of errors in reporting progress toward conservation objectives: protected areas with unknown boundaries that are replaced by buffered centroids, propagation of multiple errors in spatial data, and incomplete protected-area data sets. As of 2010, the frequency of protected areas with unknown boundaries in the World Database on Protected Areas (WDPA) caused the estimated extent of protection of 37.1% of the terrestrial Neotropical mammals to be overestimated by an average 402.8% and of 62.6% of species to be underestimated by an average 10.9%. Estimated level of protection of the world's coral reefs was 25% higher when using recent finer-resolution data on coral reefs as opposed to globally available coarse-resolution data. Accounting for additional data sets not yet incorporated into WDPA contributed up to 6.7% of additional protection to marine ecosystems in the Philippines. We suggest ways for data providers to reduce the errors in spatial and ancillary data and ways for data users to mitigate the effects of these errors on biodiversity assessments. © 2013 Society for Conservation Biology.

  3. Accuracy and Spatial Variability in GPS Surveying for Landslide Mapping on Road Inventories at a Semi-Detailed Scale: the Case in Colombia

    NASA Astrophysics Data System (ADS)

    Murillo Feo, C. A.; Martínez Martínez, L. J.; Correa Muñoz, N. A.

    2016-06-01

    The accuracy of locating attributes on topographic surfaces when using GPS in mountainous areas is affected by obstacles to wave propagation. As part of this research on the semi-automatic detection of landslides, we evaluated the accuracy and spatial distribution of the horizontal error in GPS positioning on the tertiary road network of six municipalities located in mountainous areas of the department of Cauca, Colombia, using geo-referencing with GPS mapping equipment and static-fast and pseudo-kinematic methods. We obtained quality parameters for the GPS surveys with differential correction, using a post-processing method. The consolidated database underwent exploratory analyses to determine the statistical distribution, a multivariate analysis to establish relationships and associations between the variables, and an analysis of the spatial variability and of accuracy, considering the effect of non-Gaussian error distributions. The evaluation of the internal validity of the data provided metrics, at a 95% confidence level, of between 1.24 and 2.45 m in the static-fast mode and between 0.86 and 4.2 m in the pseudo-kinematic mode. The external validity had an absolute error of 4.69 m, indicating that this descriptor is more critical than precision. Based on the ASPRS standard, the scale obtained with the evaluated equipment was on the order of 1:20,000, the level of detail expected in the landslide-mapping project. Modelling the spatial variability of the horizontal errors from the empirical semi-variogram analysis showed prediction errors close to the external validity of the devices.

  4. Comparison of HSPF and PRMS model simulated flows using different temporal and spatial scales in the Black Hills, South Dakota

    USGS Publications Warehouse

    Chalise, D. R.; Haj, Adel E.; Fontaine, T.A.

    2018-01-01

    The hydrological simulation program Fortran (HSPF) [Hydrological Simulation Program Fortran version 12.2 (Computer software). USEPA, Washington, DC] and the precipitation runoff modeling system (PRMS) [Precipitation Runoff Modeling System version 4.0 (Computer software). USGS, Reston, VA] models are semidistributed, deterministic hydrological tools for simulating the impacts of precipitation, land use, and climate on basin hydrology and streamflow. Both models have been applied independently to many watersheds across the United States. This paper reports the statistical results assessing various temporal (daily, monthly, and annual) and spatial (small versus large watershed) scale biases in HSPF and PRMS simulations using two watersheds in the Black Hills, South Dakota. The Nash-Sutcliffe efficiency (NSE), Pearson correlation coefficient (r), and coefficient of determination (R²) statistics for the daily, monthly, and annual flows were used to evaluate the models' performance. Results from the HSPF models showed that the HSPF consistently simulated the annual flows for both large and small basins better than the monthly and daily flows, and the simulated flows for the small watershed better than flows for the large watershed. In comparison, the PRMS model results show that the PRMS simulated the monthly flows for both the large and small watersheds better than the daily and annual flows, and the range of statistical error in the PRMS models was greater than that in the HSPF models. Moreover, it can be concluded that the statistical error in the HSPF and PRMS daily, monthly, and annual flow estimates for watersheds in the Black Hills was influenced by both temporal and spatial scale variability.
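
    The three evaluation statistics named above can be computed as in the sketch below; note that R² is taken here simply as the square of the Pearson correlation, which is an assumption about how the statistic was defined in the paper.

    ```python
    import numpy as np

    def flow_statistics(observed, simulated):
        """Nash-Sutcliffe efficiency (NSE), Pearson correlation (r) and
        coefficient of determination (R^2) for simulated vs observed flows."""
        obs = np.asarray(observed, dtype=float)
        sim = np.asarray(simulated, dtype=float)
        nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
        r = np.corrcoef(obs, sim)[0, 1]
        return {"NSE": float(nse), "r": float(r), "R2": float(r ** 2)}
    ```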

  5. Functional CAR models for large spatially correlated functional datasets.

    PubMed

    Zhang, Lin; Baladandayuthapani, Veerabhadran; Zhu, Hongxiao; Baggerly, Keith A; Majewski, Tadeusz; Czerniak, Bogdan A; Morris, Jeffrey S

    2016-01-01

    We develop a functional conditional autoregressive (CAR) model for spatially correlated data for which functions are collected on areal units of a lattice. Our model performs functional response regression while accounting for spatial correlations with potentially nonseparable and nonstationary covariance structure, in both the space and functional domains. We show theoretically that our construction leads to a CAR model at each functional location, with spatial covariance parameters varying and borrowing strength across the functional domain. Using basis transformation strategies, the nonseparable spatial-functional model is computationally scalable to enormous functional datasets, generalizable to different basis functions, and can be used on functions defined on higher dimensional domains such as images. Through simulation studies, we demonstrate that accounting for the spatial correlation in our modeling leads to improved functional regression performance. Applied to a high-throughput spatially correlated copy number dataset, the model identifies genetic markers not identified by comparable methods that ignore spatial correlations.

  6. A technique for evaluating the influence of spatial sampling on the determination of global mean total columnar ozone

    NASA Technical Reports Server (NTRS)

    Tolson, R. H.

    1981-01-01

    A technique is described for evaluating the influence of spatial sampling on the determination of global mean total columnar ozone. First- and second-order statistics are derived for each term in a spherical harmonic expansion representing the ozone field, and these statistics are used to estimate systematic and random errors in the estimates of total ozone. A finite number of coefficients in the expansion are determined, and the truncated part of the expansion is shown to contribute an error to the estimate that depends strongly on the spatial sampling and is relatively insensitive to data noise.

  7. The modulating effect of personality traits on neural error monitoring: evidence from event-related FMRI.

    PubMed

    Sosic-Vasic, Zrinka; Ulrich, Martin; Ruchsow, Martin; Vasic, Nenad; Grön, Georg

    2012-01-01

    The present study investigated the association between the traits of the Five Factor Model of Personality (Neuroticism, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness) and neural correlates of error monitoring obtained from a combined Eriksen-Flanker-Go/NoGo task during event-related functional magnetic resonance imaging in 27 healthy subjects. Individual expressions of personality traits were measured using the NEO-PI-R questionnaire. Conscientiousness correlated positively with error signaling in the left inferior frontal gyrus and adjacent anterior insula (IFG/aI). A second strong positive correlation was observed in the anterior cingulate gyrus (ACC). Neuroticism was negatively correlated with error signaling in the inferior frontal cortex, possibly reflecting the negative inter-correlation between the two scales observed at the behavioral level. Under the present statistical thresholds, no significant results were obtained for the remaining scales. Aligning the personality trait of Conscientiousness with task-accomplishment striving, the correlation in the left IFG/aI possibly reflects inter-individual differences in involvement whenever task-set-related memory representations are violated by the occurrence of errors. The strong correlations in the ACC may indicate that more conscientious subjects were more strongly affected by these violations of a given task set, expressed by individually different, negatively valenced signals conveyed by the ACC upon the occurrence of an error. The present results illustrate that underlying personality traits should be taken into account when predicting individual responses to errors, and they also lend external validity to the personality trait approach, suggesting that personality constructs reflect more than mere descriptive taxonomies.

  8. Identifying presence of correlated errors in GRACE monthly harmonic coefficients using machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Piretzidis, Dimitrios; Sra, Gurveer; Karantaidis, George; Sideris, Michael G.

    2017-04-01

    A new method for identifying correlated errors in Gravity Recovery and Climate Experiment (GRACE) monthly harmonic coefficients has been developed and tested. Correlated errors are present in the differences between monthly GRACE solutions and can be suppressed using a de-correlation filter. In principle, the de-correlation filter should be applied only to coefficient series with correlated errors, to avoid losing useful geophysical information. In previous studies, two main methods of implementing the de-correlation filter have been utilized. In the first, the de-correlation filter is implemented from a specific minimum order up to the maximum order of the monthly solution examined. In the second, the de-correlation filter is implemented only on specific coefficient series, the selection of which is based on statistical testing. The method proposed in the present study exploits the capabilities of supervised machine learning algorithms such as neural networks and support vector machines (SVMs). The pattern of correlated errors can be described by several numerical and geometric features of the harmonic coefficient series. The features of extreme cases of both correlated and uncorrelated coefficients are extracted and used for training the machine learning algorithms. The trained algorithms are later used to identify correlated errors and provide the probability that a coefficient series is correlated. Regarding the SVM algorithms, an extensive study was performed with various kernel functions in order to find the optimal training model for prediction. The selection of the optimal training model is based on the classification accuracy of the trained SVM algorithm on the same samples used for training. Results show excellent performance of all algorithms, with a classification accuracy of 97%-100% on a pre-selected set of training samples, both in the validation stage of the training procedure and in the subsequent use of the trained algorithms to classify independent coefficients. This accuracy is also confirmed by external validation of the trained algorithms using the GLDAS NOAH hydrology model. The proposed method meets the requirement of identifying and de-correlating only coefficients with correlated errors. Also, there is no need to apply statistical testing or other techniques that require prior de-correlation of the harmonic coefficients.
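
    A generic sketch of the supervised classification step, using scikit-learn's SVC with a radial-basis-function kernel, is given below. The features extracted here are placeholders, since the paper's numerical and geometric features are only described qualitatively; the feature definitions and function names are therefore assumptions.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def extract_features(series):
        """Toy features of a harmonic-coefficient series (placeholders for
        the paper's numerical and geometric features)."""
        diffs = np.diff(series)
        sign_changes = float(np.mean(np.sign(diffs[:-1]) != np.sign(diffs[1:])))
        lag1 = float(np.corrcoef(series[:-1], series[1:])[0, 1])
        return [float(np.std(diffs)), sign_changes, lag1]

    def train_classifier(series_list, labels, kernel="rbf"):
        """Train an SVM that returns the probability of a coefficient
        series being affected by correlated errors."""
        X = np.array([extract_features(s) for s in series_list])
        clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, probability=True))
        clf.fit(X, np.asarray(labels))
        return clf      # clf.predict_proba(...) gives class probabilities
    ```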

  9. The Contributions of Near Work and Outdoor Activity to the Correlation Between Siblings in the Collaborative Longitudinal Evaluation of Ethnicity and Refractive Error (CLEERE) Study

    PubMed Central

    Jones-Jordan, Lisa A.; Sinnott, Loraine T.; Graham, Nicholas D.; Cotter, Susan A.; Kleinstein, Robert N.; Manny, Ruth E.; Mutti, Donald O.; Twelker, J. Daniel; Zadnik, Karla

    2014-01-01

    Purpose. We determined the correlation between sibling refractive errors adjusted for shared and unique environmental factors using data from the Collaborative Longitudinal Evaluation of Ethnicity and Refractive Error (CLEERE) Study. Methods. Refractive error from subjects' last study visits was used to estimate the intraclass correlation coefficient (ICC) between siblings. The correlation models used environmental factors (diopter-hours and outdoor/sports activity) assessed annually from parents by survey to adjust for shared and unique environmental exposures when estimating the heritability of refractive error (2*ICC). Results. Data from 700 families contributed to the between-sibling correlation for spherical equivalent refractive error. The mean age of the children at the last visit was 13.3 ± 0.90 years. Siblings engaged in similar amounts of near and outdoor activities (correlations ranged from 0.40–0.76). The ICC for spherical equivalent, controlling for age, sex, ethnicity, and site was 0.367 (95% confidence interval [CI] = 0.304, 0.420), with an estimated heritability of no more than 0.733. After controlling for these variables, and near and outdoor/sports activities, the resulting ICC was 0.364 (95% CI = 0.304, 0.420; estimated heritability no more than 0.728, 95% CI = 0.608, 0.850). The ICCs did not differ significantly between male–female and single sex pairs. Conclusions. Adjusting for shared family and unique, child-specific environmental factors only reduced the estimate of refractive error correlation between siblings by 0.5%. Consistent with a lack of association between myopia progression and either near work or outdoor/sports activity, substantial common environmental exposures had little effect on this correlation. Genetic effects appear to have the major role in determining the similarity of refractive error between siblings. PMID:25205866

  10. Crime Modeling using Spatial Regression Approach

    NASA Astrophysics Data System (ADS)

    Saleh Ahmar, Ansari; Adiatma; Kasim Aidid, M.

    2018-01-01

    Acts of criminality in Indonesia increase in both variety and quantity every year: murder, rape, assault, vandalism, theft, fraud, fencing, and other offences make people feel unsafe. The risk of society being exposed to crime is measured here by the number of cases reported to the police; the higher the number of reports to the police, the higher the level of crime in a region. In this research, criminality in South Sulawesi, Indonesia, is modelled with society's exposure to crime risk as the dependent variable. Modelling is carried out with an areal approach using the Spatial Autoregressive (SAR) and Spatial Error Model (SEM) methods. The independent variables used are population density, the number of poor inhabitants, GDP per capita, unemployment and the human development index (HDI). The spatial regression analysis shows that there is no spatial dependence, in either the lag or the error terms, in South Sulawesi.
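
    For reference, the two model forms named above are conventionally specified as follows; these are the standard textbook specifications, not equations quoted from the paper.

    ```latex
    % Spatial Autoregressive (lag) model and Spatial Error Model
    \begin{align*}
      \text{SAR:}\quad & y = \rho W y + X\beta + \varepsilon \\
      \text{SEM:}\quad & y = X\beta + u, \qquad u = \lambda W u + \varepsilon
    \end{align*}
    ```

    Here W is the spatial weights matrix, ρ the spatial lag parameter and λ the spatial error parameter; the finding of no spatial dependence corresponds to ρ and λ not differing significantly from zero.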

  11. Neural Mechanisms of Cognitive Dissonance (Revised): An EEG Study.

    PubMed

    Colosio, Marco; Shestakova, Anna; Nikulin, Vadim V; Blagovechtchenski, Evgeny; Klucharev, Vasily

    2017-05-17

    Cognitive dissonance theory suggests that our preferences are modulated by the mere act of choosing. A choice between two similarly valued alternatives creates psychological tension (cognitive dissonance) that is reduced by a postdecisional reevaluation of the alternatives. We measured EEG of human subjects during rest and free-choice paradigm. Our study demonstrates that choices associated with stronger cognitive dissonance trigger a larger negative frontocentral evoked response similar to error-related negativity, which has in turn been implicated in general performance monitoring. Furthermore, the amplitude of the evoked response is correlated with the reevaluation of the alternatives. We also found a link between individual neural dynamics (long-range temporal correlations) of the frontocentral cortices during rest and follow-up neural and behavioral effects of cognitive dissonance. Individuals with stronger resting-state long-range temporal correlations demonstrated a greater postdecisional reevaluation of the alternatives and larger evoked brain responses associated with stronger cognitive dissonance. Thus, our results suggest that cognitive dissonance is reflected in both resting-state and choice-related activity of the prefrontal cortex as part of the general performance-monitoring circuitry. SIGNIFICANCE STATEMENT Contrary to traditional decision theory, behavioral studies repeatedly demonstrate that our preferences are modulated by the mere act of choosing. Difficult choices generate psychological (cognitive) dissonance, which is reduced by the postdecisional devaluation of unchosen options. We found that decisions associated with a higher level of cognitive dissonance elicited a stronger negative frontocentral deflection that peaked ∼60 ms after the response. This activity shares similar spatial and temporal features as error-related negativity, the electrophysiological correlate of performance monitoring. Furthermore, the frontocentral resting-state activity predicted the individual magnitude of preference change and the strength of cognitive dissonance-related neural activity. Copyright © 2017 Colosio et al.

  13. Joint transform correlators with spatially incoherent illumination

    NASA Astrophysics Data System (ADS)

    Bykovsky, Yuri A.; Karpiouk, Andrey B.; Markilov, Anatoly A.; Rodin, Vladislav G.; Starikov, Sergey N.

    1997-03-01

    Two variants of joint transform correlators with monochromatic spatially incoherent illumination are considered. The Fourier holograms of the reference and recognized images are recorded, either simultaneously or at different times, on the same spatial light modulator directly by monochromatic spatially incoherent light. To create the mutual correlation signal of the images, a nonlinear transformation must be performed when the hologram is illuminated by coherent light. In the first correlator scheme this was achieved by a double pass of the restoring coherent wave through the hologram; in the second variant, the nonlinearity of the spatial light modulator's recording characteristic was used. Experimental schemes and results of processing test images with both variants of joint transform correlators with monochromatic spatially incoherent illumination are presented. The use of spatially incoherent light at the input of joint transform correlators relaxes the requirements on the optical quality of the elements and on their positioning accuracy, and expands the range of devices suitable for image input into correlators.
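
    The correlation geometry described above can be illustrated digitally: the reference and test images are placed side by side, the joint power spectrum is formed, and a second Fourier transform produces off-axis cross-correlation peaks displaced by the image separation. The sketch below is a coherent, noise-free numerical toy, not a model of the incoherent-illumination schemes in the paper; the function name jtc_correlation_plane and the random test image are illustrative.

    ```python
    # Digital joint transform correlator toy: side-by-side input, |FFT|^2, FFT again.
    import numpy as np

    def jtc_correlation_plane(reference, test, gap=16):
        h, w = reference.shape
        H, W = 2 * h, 4 * w
        plane = np.zeros((H, W))
        y0 = (H - h) // 2
        x_ref = W // 2 - gap // 2 - w                      # left edge of the reference
        x_test = W // 2 + gap // 2                         # left edge of the test image
        plane[y0:y0 + h, x_ref:x_ref + w] = reference
        plane[y0:y0 + h, x_test:x_test + w] = test
        jps = np.abs(np.fft.fft2(plane)) ** 2              # joint power spectrum
        return np.abs(np.fft.fftshift(np.fft.fft2(jps)))   # correlation plane

    rng = np.random.default_rng(3)
    ref = rng.random((32, 32))
    corr = jtc_correlation_plane(ref, ref)                 # matched reference and test
    cy, cx = corr.shape[0] // 2, corr.shape[1] // 2
    corr[:, cx - 32:cx + 32] = 0                           # suppress the on-axis zero-order term
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    print("off-axis correlation peak at x-offset", px - cx)   # ~= +/-(w + gap) = 48
    ```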

  14. The Infinitesimal Jackknife with Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…
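
    For readers unfamiliar with jackknife-style standard errors, the sketch below uses the ordinary delete-one jackknife applied to a Pearson correlation on simulated data. This is deliberately a simpler stand-in for the infinitesimal jackknife applied to rotated loadings and factor correlations in the article, not a reproduction of the article's method.

    ```python
    # Ordinary delete-one jackknife standard error of a Pearson correlation
    # (a simpler stand-in for the infinitesimal jackknife discussed above).
    import numpy as np

    def jackknife_se_corr(x, y):
        n = len(x)
        loo = np.array([np.corrcoef(np.delete(x, i), np.delete(y, i))[0, 1]
                        for i in range(n)])                # leave-one-out correlations
        return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

    rng = np.random.default_rng(4)
    x = rng.normal(size=200)
    y = 0.5 * x + rng.normal(size=200)
    print(f"r = {np.corrcoef(x, y)[0, 1]:.3f}, jackknife SE = {jackknife_se_corr(x, y):.3f}")
    ```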

  15. Canopy reflectance modelling of semiarid vegetation

    NASA Technical Reports Server (NTRS)

    Franklin, Janet

    1994-01-01

    Three different types of remote sensing algorithms for estimating vegetation amount and other land surface biophysical parameters were tested for semiarid environments. These included statistical linear models, the Li-Strahler geometric-optical canopy model, and linear spectral mixture analysis. The two study areas were the National Science Foundation's Jornada Long Term Ecological Research site near Las Cruces, NM, in the northern Chihuahuan desert, and the HAPEX-Sahel site near Niamey, Niger, in West Africa, comprising semiarid rangeland and subtropical crop land. The statistical approach (simple and multiple regression) resulted in high correlations between SPOT satellite spectral reflectance and shrub and grass cover, although these correlations varied with the spatial scale of aggregation of the measurements. The Li-Strahler model produced estimates of shrub size and density for both study sites with large standard errors. In the Jornada, the estimates were accurate enough to be useful for characterizing structural differences among three shrub strata. In Niger, the range of shrub cover and size in short-fallow shrublands is so low that the necessity of spatially distributed estimation of shrub size and density is questionable. Spectral mixture analysis of multiscale, multitemporal, multispectral radiometer data and imagery for Niger showed a positive relationship between fractions of spectral endmembers and surface parameters of interest including soil cover, vegetation cover, and leaf area index.
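
    Linear spectral mixture analysis, as referenced above, can be sketched as a per-pixel least-squares problem: each pixel spectrum is modeled as a sum-to-one combination of endmember spectra. The example below uses synthetic four-band endmembers (soil, green vegetation, shade) as stand-ins; it is not the study's processing chain and omits the non-negativity constraint often imposed in practice.

    ```python
    # Linear spectral unmixing sketch: least-squares endmember fractions
    # with a soft sum-to-one constraint, on synthetic endmember spectra.
    import numpy as np

    def unmix(pixel_spectrum, endmembers):
        """Solve pixel ~= endmembers @ f with an extra equation sum(f) = 1."""
        n_bands, n_end = endmembers.shape
        A = np.vstack([endmembers, np.ones((1, n_end))])   # append the constraint row
        b = np.append(pixel_spectrum, 1.0)
        fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
        return fractions

    # Synthetic 4-band endmember spectra (columns: soil, green vegetation, shade)
    E = np.array([[0.30, 0.05, 0.02],
                  [0.35, 0.08, 0.02],
                  [0.40, 0.45, 0.03],
                  [0.45, 0.30, 0.03]])
    true_fractions = np.array([0.5, 0.4, 0.1])
    pixel = E @ true_fractions + np.random.default_rng(5).normal(0.0, 0.005, size=4)
    print("recovered fractions:", np.round(unmix(pixel, E), 3))
    ```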

  16. High-precision spatial localization of mouse vocalizations during social interaction.

    PubMed

    Heckman, Jesse J; Proville, Rémi; Heckman, Gert J; Azarfar, Alireza; Celikel, Tansu; Englitz, Bernhard

    2017-06-07

    Mice display a wide repertoire of vocalizations that varies with age, sex, and context. Especially during courtship, mice emit ultrasonic vocalizations (USVs) of high complexity, whose detailed structure is poorly understood. As animals of both sexes vocalize, the study of social vocalizations requires attributing single USVs to individuals. The state of the art in sound localization for USVs allows spatial localization at centimeter resolution; however, animals interact at closer ranges, involving tactile, snout-to-snout exploration. Hence, improved algorithms are required to reliably assign USVs. We develop multiple solutions to USV localization and derive an analytical solution for arbitrary vertical microphone positions. The algorithms are compared on wideband acoustic noise and single-mouse vocalizations, and applied to social interactions with optically tracked mouse positions. A novel (frequency-)envelope-weighted generalised cross-correlation outperforms classical cross-correlation techniques. It achieves a median error of ~1.4 mm for noise and ~4-8.5 mm for vocalizations. Using this algorithm in combination with a level criterion, we can improve USV assignment for interacting mice. We report significant differences in mean USV properties between CBA mice of different sexes during social interaction. Hence, the improved USV attribution to individuals lays the basis for a deeper understanding of social vocalizations, in particular sequences of USVs.
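
    The generalised cross-correlation framework the authors build on estimates the time difference of arrival between a pair of microphones from a weighted cross-power spectrum. The sketch below uses the standard PHAT weighting on synthetic signals as a stand-in; the paper's envelope weighting, microphone geometry, and the subsequent spatial triangulation are not reproduced here, and all names and parameters are illustrative.

    ```python
    # Generalised cross-correlation (PHAT-weighted) time-delay estimation sketch.
    import numpy as np

    def gcc_delay(sig_a, sig_b, fs, weighting="phat"):
        """Delay of sig_a relative to sig_b, in seconds."""
        n = len(sig_a) + len(sig_b)
        A, B = np.fft.rfft(sig_a, n), np.fft.rfft(sig_b, n)
        cross = A * np.conj(B)
        if weighting == "phat":
            cross = cross / (np.abs(cross) + 1e-12)        # phase-transform weighting
        cc = np.fft.irfft(cross, n)
        # Re-order circular lags to run from -(len_b - 1) to +(len_a - 1)
        cc = np.concatenate((cc[-(len(sig_b) - 1):], cc[:len(sig_a)]))
        return (np.argmax(np.abs(cc)) - (len(sig_b) - 1)) / fs

    fs = 250_000                                           # ultrasonic sampling rate, Hz
    rng = np.random.default_rng(6)
    usv = rng.normal(size=2000)                            # stand-in for a broadband USV
    delay = 37                                             # samples
    mic_a = np.concatenate((np.zeros(delay), usv))         # channel that hears the call later
    mic_b = np.concatenate((usv, np.zeros(delay)))
    print(f"estimated delay: {gcc_delay(mic_a, mic_b, fs) * 1e6:.1f} us "
          f"(true {delay / fs * 1e6:.1f} us)")
    ```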

  17. Speeding up 3D speckle tracking using PatchMatch

    NASA Astrophysics Data System (ADS)

    Zontak, Maria; O'Donnell, Matthew

    2016-03-01

    Echocardiography provides valuable information to diagnose heart dysfunction. A typical exam records several minutes of real-time cardiac images. To enable complete analysis of 3D cardiac strains, 4-D (3-D+t) echocardiography is used. This results in a huge dataset and requires effective automated analysis. Ultrasound speckle tracking is an effective method for tissue motion analysis. It involves correlation of a 3D kernel (block) around a voxel with kernels in later frames. The search region is usually confined to a local neighborhood, due to biomechanical and computational constraints. For high strains and moderate frame-rates, however, this search region will remain large, leading to a considerable computational burden. Moreover, speckle decorrelation (due to high strains) leads to errors in tracking. To solve this, spatial motion coherency between adjacent voxels should be imposed, e.g., by averaging their correlation functions [1]. This requires storing correlation functions for neighboring voxels, thus increasing memory demands. In this work, we propose an efficient search using PatchMatch [2], a powerful method to find correspondences between images. Here we adopt PatchMatch for 3D volumes and radio-frequency signals. As opposed to an exact search, PatchMatch performs random sampling of the search region and propagates successive matches among neighboring voxels. We show that: 1) Inherently smooth offset propagation in PatchMatch contributes to spatial motion coherence without any additional processing or memory demand. 2) For typical scenarios, PatchMatch is at least 20 times faster than the exact search, while maintaining comparable tracking accuracy.
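
    The two ingredients named above, random sampling of candidate offsets and propagation of good offsets between neighbouring positions, are easiest to see in a small 2-D version. The sketch below is a toy 2-D PatchMatch block matcher on synthetic images, not the authors' 3-D radio-frequency implementation; all function names and parameters are illustrative.

    ```python
    # Toy 2-D PatchMatch block matcher: random initialisation, propagation from
    # already-visited neighbours, and random search with a shrinking radius.
    import numpy as np

    def patch_cost(a, b, ya, xa, yb, xb, k):
        """Sum of squared differences between two k x k patches (top-left corners)."""
        return np.sum((a[ya:ya + k, xa:xa + k] - b[yb:yb + k, xb:xb + k]) ** 2)

    def patchmatch(a, b, k=5, iters=4, max_radius=16, seed=0):
        rng = np.random.default_rng(seed)
        H, W = a.shape[0] - k, a.shape[1] - k              # valid patch corners
        ys, xs = np.mgrid[0:H, 0:W]
        off = rng.integers(-max_radius, max_radius + 1, size=(H, W, 2))
        off[..., 0] = np.clip(ys + off[..., 0], 0, H - 1) - ys   # keep targets inside b
        off[..., 1] = np.clip(xs + off[..., 1], 0, W - 1) - xs
        cost = np.array([[patch_cost(a, b, y, x, y + off[y, x, 0], x + off[y, x, 1], k)
                          for x in range(W)] for y in range(H)])

        def try_offset(y, x, dy, dx):
            ty, tx = y + dy, x + dx
            if 0 <= ty < H and 0 <= tx < W:
                c = patch_cost(a, b, y, x, ty, tx, k)
                if c < cost[y, x]:
                    cost[y, x], off[y, x] = c, (dy, dx)

        for it in range(iters):
            step = 1 if it % 2 == 0 else -1                # alternate scan direction
            rows = range(H) if step == 1 else range(H - 1, -1, -1)
            cols = range(W) if step == 1 else range(W - 1, -1, -1)
            for y in rows:
                for x in cols:
                    # Propagation: adopt the offsets of already-visited neighbours
                    if 0 <= y - step < H:
                        try_offset(y, x, *off[y - step, x])
                    if 0 <= x - step < W:
                        try_offset(y, x, *off[y, x - step])
                    # Random search around the current best offset
                    r = max_radius
                    while r >= 1:
                        try_offset(y, x,
                                   off[y, x, 0] + rng.integers(-r, r + 1),
                                   off[y, x, 1] + rng.integers(-r, r + 1))
                        r //= 2
        return off

    # Toy usage: b is a cyclically shifted by (2, 3); recovered offsets should be ~(2, 3)
    a = np.random.default_rng(1).random((40, 40))
    b = np.roll(a, shift=(2, 3), axis=(0, 1))
    offsets = patchmatch(a, b)
    print("median offset:", np.median(offsets[..., 0]), np.median(offsets[..., 1]))
    ```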

  18. Extreme Universe Space Observatory (EUSO) Optics Module

    NASA Technical Reports Server (NTRS)

    Young, Roy; Christl, Mark

    2008-01-01

    A demonstration part will be manufactured in Japan on one of the large Toshiba machines with a diameter of 2.5 meters. This will be a flat PMMA disk that is cut between 0.5 and 1.25 meters radius. The cut should demonstrate manufacturing of the most difficult parts of the 2.5 meter Fresnel pattern and the blazed grating on the diffractive surface. Optical simulations, validated with the subscale prototype, will be used to determine the limits on manufacturing errors (tolerances) that will result in optics that meet EUSO's requirements. There will be limits on surface roughness (errors at high spatial frequencies), on radial and azimuthal slope errors (at lower spatial frequencies), and on plunge-cut depth errors in the blazed grating. The demonstration part will be measured to determine whether it was made within the allowable tolerances.
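
    The tolerance categories listed above separate by spatial frequency: high-frequency height errors are treated as surface roughness, while lower-frequency components determine slope errors. The sketch below illustrates that split on a synthetic radial height-error profile; the cutoff, units, and function name are assumptions for illustration, not EUSO's metrology procedure.

    ```python
    # Illustrative split of a radial height-error profile into high-frequency
    # roughness and low-frequency slope error (synthetic data, assumed cutoff).
    import numpy as np

    def roughness_and_slope_error(r_mm, height_err_um, cutoff_mm=10.0):
        """RMS roughness (um) and RMS slope error (urad) from a radial error profile."""
        dr = r_mm[1] - r_mm[0]
        spectrum = np.fft.rfft(height_err_um)
        freqs = np.fft.rfftfreq(len(r_mm), d=dr)             # cycles per mm
        low = spectrum.copy()
        low[freqs > 1.0 / cutoff_mm] = 0.0                   # keep spatial periods > cutoff
        low_profile = np.fft.irfft(low, len(r_mm))           # low-frequency figure error
        roughness_rms = np.sqrt(np.mean((height_err_um - low_profile) ** 2))
        slope_rad = np.gradient(low_profile * 1e-3, r_mm)    # um -> mm, then d(height)/d(r)
        return roughness_rms, np.sqrt(np.mean(slope_rad ** 2)) * 1e6   # urad

    r = np.linspace(500.0, 1250.0, 4096)                     # radial positions on the part, mm
    rng = np.random.default_rng(7)
    err = 0.5 * np.sin(2 * np.pi * r / 80.0) + 0.05 * rng.normal(size=r.size)   # um
    rms_rough, rms_slope = roughness_and_slope_error(r, err)
    print(f"RMS roughness: {rms_rough:.3f} um, RMS slope error: {rms_slope:.1f} urad")
    ```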

  19. Effects of Shame and Guilt on Error Reporting Among Obstetric Clinicians.

    PubMed

    Zabari, Mara Lynne; Southern, Nancy L

    2018-04-17

    To understand how the experiences of shame and guilt, coupled with organizational factors, affect error reporting by obstetric clinicians. Descriptive cross-sectional. A sample of 84 obstetric clinicians from three maternity units in Washington State. In this quantitative inquiry, a variant of the Test of Self-Conscious Affect was used to measure proneness to guilt and shame. In addition, we developed questions to assess attitudes regarding concerns about damaging one's reputation if an error was reported and the choice to keep an error to oneself. Both assessments were analyzed separately and then correlated to identify relationships between constructs. Interviews were used to identify organizational factors that affect error reporting. As a group, mean scores indicated that obstetric clinicians would not choose to keep errors to themselves. However, bivariate correlations showed that proneness to shame was positively correlated to concerns about one's reputation if an error was reported, and proneness to guilt was negatively correlated with keeping errors to oneself. Interview data analysis showed that Past Experience with Responses to Errors, Management and Leadership Styles, Professional Hierarchy, and Relationships With Colleagues were influential factors in error reporting. Although obstetric clinicians want to report errors, their decisions to report are influenced by their proneness to guilt and shame and perceptions of the degree to which organizational factors facilitate or create barriers to restore their self-images. Findings underscore the influence of the organizational context on clinicians' decisions to report errors. Copyright © 2018 AWHONN, the Association of Women’s Health, Obstetric and Neonatal Nurses. Published by Elsevier Inc. All rights reserved.

  20. Voxel-based statistical analysis of uncertainties associated with deformable image registration

    NASA Astrophysics Data System (ADS)

    Li, Shunshan; Glide-Hurst, Carri; Lu, Mei; Kim, Jinkoo; Wen, Ning; Adams, Jeffrey N.; Gordon, James; Chetty, Indrin J.; Zhong, Hualiang

    2013-09-01

    Deformable image registration (DIR) algorithms have inherent uncertainties in their displacement vector fields (DVFs). The purpose of this study is to develop an optimal metric to estimate DIR uncertainties. Six computational phantoms have been developed from the CT images of lung cancer patients using a finite element method (FEM). The FEM-generated DVFs were used as a standard for registrations performed on each of these phantoms. A mechanics-based metric, unbalanced energy (UE), was developed to evaluate these registration DVFs. The potential correlation between UE and DIR errors was explored using multivariate analysis, and the results were validated by a landmark approach and compared with two other error metrics: DVF inverse consistency (IC) and image intensity difference (ID). Landmark-based validation was performed using the POPI model. The results show that the Pearson correlation coefficient between UE and DIR error is r(UE, error) = 0.50. This is higher than r(IC, error) = 0.29 for IC and r(ID, error) = 0.37 for ID. The Pearson correlation coefficient between UE and the product of the DIR displacements and errors is r(UE, error × DVF) = 0.62 for the six patients and 0.73 for the POPI model data. It has been demonstrated that UE has a strong correlation with DIR errors, and the UE metric outperforms the IC and ID metrics in estimating DIR uncertainties. The quantified UE metric can be a useful tool for adaptive treatment strategies, including probability-based adaptive treatment planning.
