NASA Technical Reports Server (NTRS)
Kicklighter, David W.; Melillo, Jerry M.; Peterjohn, William T.; Rastetter, Edward B.; Mcguire, A. David; Steudler, Paul A.; Aber, John D.
1994-01-01
We examine the influence of aggregation errors on developing estimates of regional soil-CO2 flux from temperate forests. We find daily soil-CO2 fluxes to be more sensitive to changes in soil temperatures (Q10 = 3.08) than air temperatures (Q10 = 1.99). The direct use of mean monthly air temperatures with a daily flux model underestimates regional fluxes by approximately 4%. Temporal aggregation error varies with spatial resolution. Overall, our calibrated modeling approach reduces spatial aggregation error by 9.3% and temporal aggregation error by 15.5%. After minimizing spatial and temporal aggregation errors, mature temperate forest soils are estimated to contribute 12.9 Pg C/yr to the atmosphere as carbon dioxide. Georeferenced model estimates agree well with annual soil-CO2 fluxes measured during chamber studies in mature temperate forest stands around the globe.
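The temperature sensitivity above follows the standard Q10 scaling. The sketch below (illustrative Python; only the Q10 value of 3.08 comes from the abstract, the temperatures are invented) shows why driving a daily flux model with mean temperatures underestimates the aggregate flux: the Q10 response is convex, so the flux evaluated at the mean temperature falls below the mean of the daily fluxes.

```python
import numpy as np

def q10_flux(temp_c, base_flux=1.0, q10=3.08, ref_temp=10.0):
    """Soil-CO2 flux scaled by a Q10 temperature response (arbitrary flux units)."""
    return base_flux * q10 ** ((temp_c - ref_temp) / 10.0)

# Synthetic daily temperatures: because the Q10 response is convex, the flux
# at the mean temperature understates the mean of the daily fluxes -- the
# temporal aggregation error described above.
daily_t = np.array([5.0, 8.0, 12.0, 15.0, 10.0, 20.0])
mean_of_daily_fluxes = q10_flux(daily_t).mean()
flux_at_mean_temp = q10_flux(daily_t.mean())
```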
On representation of temporal variability in electricity capacity planning models
Merrick, James H.
2016-08-23
This study systematically investigates how to represent intra-annual temporal variability in models of optimum electricity capacity investment. Inappropriate aggregation of temporal resolution can introduce substantial error into model outputs and associated economic insight. The mechanisms underlying the introduction of this error are shown. How many representative periods are needed to fully capture the variability is then investigated. For a sample dataset, a scenario-robust aggregation of hourly (8760) resolution is possible with on the order of 10 representative hours when electricity demand is the only source of variability. The inclusion of wind and solar supply variability increases the resolution of the robust aggregation to on the order of 1000. A similar scale of expansion is shown for representative days and weeks. These concepts can be applied to any such temporal dataset, providing, at the least, a benchmark that any other aggregation method can aim to emulate. Finally, how prior information about peak pricing hours can potentially reduce resolution further is also discussed.
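A minimal sketch of the representative-period idea, assuming a simple 1-D k-means over synthetic hourly demand (the 8760-hour resolution comes from the abstract; the demand profile and cluster count are made up). Each cluster center is a representative hour, weighted by how many real hours it stands in for:

```python
import numpy as np

def representative_hours(demand, k=10, iters=50, seed=0):
    """Cluster hourly demand into k representative hours (1-D k-means sketch)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(demand, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(demand[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = demand[labels == j].mean()
    weights = np.bincount(labels, minlength=k)  # hours represented per center
    return centers, weights

# Synthetic 8760-hour demand with daily and seasonal cycles (arbitrary units)
h = np.arange(8760)
demand = 50 + 10 * np.sin(2 * np.pi * h / 24) + 5 * np.sin(2 * np.pi * h / 8760)
centers, weights = representative_hours(demand, k=10)
# The weighted mean of representative hours reproduces the annual mean demand.
approx_mean = (centers * weights).sum() / weights.sum()
```

With supply variability the state per hour becomes multi-dimensional (demand, wind, solar), which is where the required number of representative periods grows toward the order of 1000 noted above.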
NASA Astrophysics Data System (ADS)
Akers, P. D.; Welker, J. M.
2015-12-01
Spatial variations in precipitation isotopes have been the focus of much recent research, but relatively less work has explored changes at various temporal scales. This is partly because most spatially-diverse and long-term isotope databases are offered at a monthly resolution, while daily or event-level records are spatially and temporally limited by cost and logistics. A subset of 25 United States Network for Isotopes in Precipitation (USNIP) sites with weekly resolution in the east-central United States was analyzed for site-specific relationships between δ18O and δD (the local meteoric water line/LMWL), δ18O and surface temperature, and δ18O and precipitation amount. Weekly data were then aggregated into monthly and seasonal data to examine the effect of aggregation on correlation and slope values for each of the relationships. Generally, increasing aggregation improved correlations (>25% for some sites) due to a reduced effect of extreme values, but error estimates for the regression variables increased (>100%) because of reduced sample sizes. Aggregation resulted in small but significant drops (5-25%) in relationship slope values for some sites. Weekly data were also grouped by month and season to explore changes in relationships throughout the year. Significant subannual variability exists in slope values and correlations even for sites with very strong overall correlations. LMWL slopes are highest in winter and lowest in summer, while the δ18O-surface temperature relationship is strongest in spring. Despite these overall trends, a high level of month-to-month and season-to-season variability is the norm for these sites. Researchers who blindly apply overall relationships drawn from monthly-resolved databases to paleoclimate or environmental research risk wrongly assuming that these relationships hold at all temporal resolutions.
When possible, researchers should match the temporal resolution used to calculate an isotopic relationship with the temporal resolution of their applied proxy.
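The aggregation experiment above can be sketched as follows (synthetic weekly data; the δ18O-temperature slope and noise level are invented for illustration). Averaging weekly samples into monthly blocks damps independent noise, which tends to raise the correlation while leaving fewer samples for the slope estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
weeks = np.arange(104)                        # two years of weekly samples
temp = 15 + 10 * np.sin(2 * np.pi * weeks / 52)          # surface temperature (deg C)
d18o = -12 + 0.4 * temp + rng.normal(0, 2, weeks.size)   # noisy isotope signal

def aggregate(x, n=4):
    """Average consecutive blocks of n values (weekly -> roughly monthly)."""
    return x[: x.size // n * n].reshape(-1, n).mean(axis=1)

r_weekly = np.corrcoef(temp, d18o)[0, 1]
r_monthly = np.corrcoef(aggregate(temp), aggregate(d18o))[0, 1]
slope_weekly = np.polyfit(temp, d18o, 1)[0]
slope_monthly = np.polyfit(aggregate(temp), aggregate(d18o), 1)[0]
# Aggregation averages out independent noise, so r_monthly typically exceeds
# r_weekly, while the smaller sample widens the uncertainty on the slope.
```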
Dirmeyer, Paul A.; Wu, Jiexia; Norton, Holly E.; Dorigo, Wouter A.; Quiring, Steven M.; Ford, Trenton W.; Santanello, Joseph A.; Bosilovich, Michael G.; Ek, Michael B.; Koster, Randal D.; Balsamo, Gianpaolo; Lawrence, David M.
2018-01-01
Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison. PMID:29645013
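The three evaluation metrics above can be computed in a quick sketch (synthetic AR(1) station records standing in for daily soil moisture; all parameters are invented). Two stations share a common forcing, so they are spatially correlated but carry independent noise:

```python
import numpy as np

def temporal_memory(series, lag=1):
    """Lag-1 autocorrelation as a simple proxy for soil moisture memory."""
    return np.corrcoef(series[:-lag], series[lag:])[0, 1]

rng = np.random.default_rng(0)
n = 365
forcing = rng.normal(0, 1, n)  # shared atmospheric forcing

def ar1(alpha, noise_scale):
    """AR(1) soil-moisture-like record driven by the shared forcing."""
    s = np.zeros(n)
    for t in range(1, n):
        s[t] = alpha * s[t - 1] + forcing[t] + rng.normal(0, noise_scale)
    return s

sta_a, sta_b = ar1(0.9, 0.3), ar1(0.9, 0.3)

metrics = {
    "std": sta_a.std(),                               # standard deviation in time
    "memory": temporal_memory(sta_a),                 # temporal correlation
    "spatial_corr": np.corrcoef(sta_a, sta_b)[0, 1],  # station-to-station correlation
}
# Averaging stations before computing statistics barely changes the variance
# estimate but can shift the apparent memory, as the comparison above notes.
agg_memory = temporal_memory((sta_a + sta_b) / 2.0)
```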
Sreenivas, K; Sekhar, N Seshadri; Saxena, Manoj; Paliwal, R; Pathak, S; Porwal, M C; Fyzee, M A; Rao, S V C Kameswara; Wadodkar, M; Anasuya, T; Murthy, M S R; Ravisankar, T; Dadhwal, V K
2015-09-15
The present study aims at analysis of spatial and temporal variability in agricultural land cover during 2005-06 and 2011-12 from an ongoing program of annual land use mapping using multidate Advanced Wide Field Sensor (AWiFS) data aboard Resourcesat-1 and 2. About 640-690 multi-temporal AWiFS quadrant data products per year (depending on cloud cover) were co-registered and radiometrically normalized to prepare state (administrative unit) mosaics. An 18-fold classification was adopted in this project. Rule-based techniques along with a maximum-likelihood algorithm were employed to derive land cover information as well as changes within agricultural land cover classes. The agricultural land cover classes include kharif (June-October), rabi (November-April), zaid (April-June), area sown more than once, fallow lands, and plantation crops. Mean kappa accuracy of these estimates varied from 0.87 to 0.96 for various classes. Standard error of estimate was computed for each class annually and the area estimates were corrected using the standard error of estimate. The corrected estimates range between 99 and 116 Mha for kharif and 77-91 Mha for rabi. The kharif, rabi and net sown area were aggregated to a 10 km × 10 km grid on an annual basis for all of India, and the coefficient of variation (CV) was computed at each grid cell from the temporal series of spatially aggregated area. This spatial variability of agricultural land cover classes was analyzed across meteorological zones, irrigated command areas and administrative boundaries. The results indicate that, of the various states/meteorological zones, Punjab was consistently cropped during both kharif and rabi seasons. Of all irrigated commands, the Tawa irrigated command was consistently cropped during the rabi season.
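The per-cell CV computation can be sketched as follows (synthetic areas; the 10 km grid and multi-year aggregation follow the abstract, the numbers are made up). Cells that are cropped consistently year after year show a low CV:

```python
import numpy as np

def gridded_cv(area, axis=0):
    """Coefficient of variation of cropped area per grid cell over years.

    `area` has shape (years, ny, nx); cells with zero mean area get CV = 0.
    """
    mean = area.mean(axis=axis)
    std = area.std(axis=axis)
    return np.divide(std, mean, out=np.zeros_like(std), where=mean > 0)

rng = np.random.default_rng(2)
years, ny, nx = 7, 4, 4
area = rng.uniform(50, 100, (years, ny, nx))  # kha cropped per 10 km cell, per year
area[:, 0, 0] = 80.0                          # a consistently cropped cell
cv = gridded_cv(area)
# The consistently cropped cell has CV 0; variable cells have CV > 0.
```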
NASA Astrophysics Data System (ADS)
Gong, Caixia; Chen, Xinjun; Gao, Feng; Tian, Siquan
2014-12-01
Temporal and spatial scales play important roles in fishery ecology, and an inappropriate spatio-temporal scale may result in large errors in modeling fish distribution. The objective of this study is to evaluate the roles of spatio-temporal scales in habitat suitability modeling, with the western stock of the winter-spring cohort of neon flying squid (Ommastrephes bartramii) in the northwest Pacific Ocean as an example. In this study, the fishery-dependent data from the Chinese Mainland Squid Jigging Technical Group and sea surface temperature (SST) from remote sensing during August to October of 2003-2008 were used. We evaluated the differences in a habitat suitability index model resulting from aggregating data at 36 different spatio-temporal scales: a combination of three latitude scales (0.5°, 1° and 2°), four longitude scales (0.5°, 1°, 2° and 4°), and three temporal scales (week, fortnight, and month). The coefficients of variation (CV) of the weekly, biweekly and monthly suitability index (SI) were compared to determine which temporal and spatial scales yield the more precise SI model. This study shows that the optimal temporal and spatial scales with the lowest CV are month, and 0.5° latitude by 0.5° longitude, for O. bartramii in the northwest Pacific Ocean. A suitability index model developed at the optimal scale can improve fishing-ground forecasts cost-effectively without requiring excessive sampling effort. We suggest that the uncertainty associated with the spatial and temporal scales used in data aggregation needs to be considered in habitat suitability modeling.
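The scale-selection step can be sketched as follows: compute the CV of a suitability index under several temporal aggregation windows and pick the scale with the lowest CV (synthetic SI values; window lengths in weeks loosely mirror the study's week/fortnight/month scales):

```python
import numpy as np

def cv_by_scale(si_weekly, windows=(1, 2, 4)):
    """CV of a suitability index after averaging over different temporal windows."""
    out = {}
    for w in windows:
        agg = si_weekly[: si_weekly.size // w * w].reshape(-1, w).mean(axis=1)
        out[w] = agg.std() / agg.mean()
    return out

rng = np.random.default_rng(3)
si = np.clip(0.6 + rng.normal(0, 0.15, 12), 0, 1)  # synthetic weekly SI in [0, 1]
cvs = cv_by_scale(si)
best = min(cvs, key=cvs.get)  # the scale with the lowest CV, as in the study
```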
Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations
Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T.; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P.; Rötter, Reimund P.; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank
2016-01-01
We show the error in water-limited yields simulated by crop models which is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, varying largely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range of, or larger than, the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations. PMID:27055028
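The rMAE used above can be sketched as follows (synthetic yields; the <15% bias threshold is from the abstract, the error magnitude is invented). High-resolution simulated yields are compared against the same cells driven by aggregated inputs:

```python
import numpy as np

def rmae(y_ref, y_agg):
    """Relative mean absolute error of aggregated-input yields vs the reference."""
    return np.mean(np.abs(y_agg - y_ref)) / np.mean(y_ref)

rng = np.random.default_rng(4)
y_1km = rng.uniform(6, 10, 1000)             # t/ha, high-resolution simulation
y_100km = y_1km + rng.normal(0, 0.8, 1000)   # same cells with aggregated inputs
err = rmae(y_1km, y_100km)                   # compare against, e.g., a 15% threshold
```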
Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong
2015-08-07
Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inter-node communication consumes most of the power, efficient data compression schemes are needed to reduce data transmission and prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing, using a novel similarity measure metric. Next, sensor data in one cluster are aggregated at the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data while retaining a defined amount of the variance. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
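A minimal sketch of PCA compression with an error-bound guarantee, assuming the bound is expressed as a residual-variance fraction (the clustering and adaptive head-selection steps are omitted; data and bound are synthetic). The compressor keeps the fewest components whose residual variance stays under the bound:

```python
import numpy as np

def pca_compress(X, err_bound=0.05):
    """Keep the fewest principal components such that the residual variance
    fraction stays below err_bound (sketch of PCA with an error-bound guarantee)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    residual = 1.0 - np.cumsum(s**2) / np.sum(s**2)   # decreasing sequence
    k = int(np.argmax(residual <= err_bound)) + 1     # first index meeting the bound
    return Xc @ Vt[:k].T, Vt[:k], mu                  # scores, components, mean

def pca_decompress(scores, components, mu):
    """Lossy reconstruction of the sensor readings from compressed scores."""
    return scores @ components + mu

# Synthetic cluster of 8 sensors sharing one signal plus independent noise
rng = np.random.default_rng(5)
t = np.linspace(0, 4 * np.pi, 200)
X = np.column_stack([np.sin(t) + rng.normal(0, 0.05, t.size) for _ in range(8)])
scores, comps, mu = pca_compress(X, err_bound=0.05)
Xhat = pca_decompress(scores, comps, mu)
```

One shared component suffices here, so each reading of 8 values compresses to a single score, and the reconstruction error is bounded by construction.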
An online detection system for aggregate sizes and shapes based on digital image processing
NASA Astrophysics Data System (ADS)
Yang, Jianhong; Chen, Sijia
2017-02-01
Traditional aggregate size measuring methods are time-consuming, taxing, and do not deliver online measurements. A new online detection system for determining aggregate size and shape, based on a digital camera with a charge-coupled device and subsequent digital image processing, has been developed to overcome these problems. The system captures images of aggregates while falling and flat lying. Using these data, the particle size and shape distribution can be obtained in real time. Here, we calibrate this method using standard globules. Our experiments show that the maximum particle size distribution error was only 3 wt%, while the maximum particle shape distribution error was only 2 wt% for data derived from falling aggregates, having good dispersion. In contrast, the data for flat-lying aggregates had a maximum particle size distribution error of 12 wt%, and a maximum particle shape distribution error of 10 wt%; their accuracy was clearly lower than for falling aggregates. However, they performed well for single-graded aggregates, and did not require a dispersion device. Our system is low-cost and easy to install. It can successfully achieve online detection of aggregate size and shape with good reliability, and it has great potential for aggregate quality assurance.
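The size measurement can be sketched as connected-component labeling of a binary image followed by equivalent-circle diameters (a toy pure-Python/NumPy version; a real system would first segment the camera frames):

```python
import numpy as np

def particle_sizes(binary):
    """Label 4-connected particles in a binary image and return
    their equivalent-circle diameters in pixels (illustrative sketch)."""
    binary = np.asarray(binary, dtype=bool)
    seen = np.zeros_like(binary)
    sizes = []
    for i, j in zip(*np.nonzero(binary)):
        if seen[i, j]:
            continue
        stack, area = [(i, j)], 0
        seen[i, j] = True
        while stack:                      # flood fill one particle
            y, x = stack.pop()
            area += 1
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    stack.append((ny, nx))
        sizes.append(2 * np.sqrt(area / np.pi))  # diameter of equal-area circle
    return sizes

img = np.zeros((8, 8), dtype=int)
img[1:3, 1:3] = 1   # 4-pixel particle
img[5:8, 5:8] = 1   # 9-pixel particle
diams = particle_sizes(img)
```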
Spatial and Temporal Uncertainty of Crop Yield Aggregations
NASA Technical Reports Server (NTRS)
Porwollik, Vera; Mueller, Christoph; Elliott, Joshua; Chryssanthacopoulos, James; Iizumi, Toshichika; Ray, Deepak K.; Ruane, Alex C.; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe;
2016-01-01
The aggregation of simulated gridded crop yields to national or regional scale requires information on temporal and spatial patterns of crop-specific harvested areas. This analysis estimates the uncertainty of simulated gridded yield time series related to the aggregation with four different harvested area data sets. We compare aggregated yield time series from the Global Gridded Crop Model Inter-comparison project for four crop types from 14 models at global, national, and regional scale to determine aggregation-driven differences in mean yields and temporal patterns as measures of uncertainty. The quantity and spatial patterns of harvested areas differ for individual crops among the four datasets applied for the aggregation. Simulated spatial yield patterns also differ among the 14 models. These differences in harvested areas and simulated yield patterns lead to differences in aggregated productivity estimates, both in mean yield and in the temporal dynamics. Among the four investigated crops, wheat yield (17% relative difference) is most affected by the uncertainty introduced by the aggregation at the global scale. The correlation of temporal patterns of globally aggregated yield time series can be as low as r = 0.28 (soybean). For the majority of countries, mean relative differences of nationally aggregated yields account for 10% or less. The spatial and temporal differences can be substantially higher for individual countries. Of the top-10 crop producers, aggregated national multi-annual mean relative differences of yields can be up to 67% (maize, South Africa), 43% (wheat, Pakistan), 51% (rice, Japan), and 427% (soybean, Bolivia). Correlations of differently aggregated yield time series can be as low as r = 0.56 (maize, India), r = 0.05 (wheat, Russia), r = 0.13 (rice, Vietnam), and r = -0.01 (soybean, Uruguay).
The aggregation to sub-national scale in comparison to country scale shows that spatial uncertainties can cancel out in countries with large harvested areas per crop type. We conclude that the aggregation uncertainty can be substantial for crop productivity and production estimations in the context of food security, impact assessment, and model evaluation exercises.
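The aggregation step itself is a production-weighted mean. The sketch below (synthetic yields and two hypothetical harvested-area datasets) shows how the choice of area weights alone shifts the aggregated yield, which is the uncertainty quantified above:

```python
import numpy as np

def aggregate_yield(yield_grid, area_grid):
    """Production-weighted national yield from gridded yields and harvested areas."""
    return (yield_grid * area_grid).sum() / area_grid.sum()

rng = np.random.default_rng(6)
y = rng.uniform(2, 8, (10, 10))                    # simulated gridded yield, t/ha
area_a = rng.uniform(0, 1, (10, 10))               # harvested-area dataset A (Mha)
area_b = area_a * rng.uniform(0.5, 1.5, (10, 10))  # dataset B: same crop, different areas
rel_diff = (abs(aggregate_yield(y, area_a) - aggregate_yield(y, area_b))
            / aggregate_yield(y, area_a))
# Different area weights alone shift the aggregated mean yield.
```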
Objectified quantification of uncertainties in Bayesian atmospheric inversions
NASA Astrophysics Data System (ADS)
Berchet, A.; Pison, I.; Chevallier, F.; Bousquet, P.; Bonne, J.-L.; Paris, J.-D.
2015-05-01
Classical Bayesian atmospheric inversions process atmospheric observations and prior emissions, the two being connected by an observation operator picturing mainly the atmospheric transport. These inversions rely on prescribed errors in the observations, the prior emissions and the observation operator. When data are sparse, inversion results are very sensitive to the prescribed error distributions, which are not accurately known. The classical Bayesian framework experiences difficulties in quantifying the impact of mis-specified error distributions on the optimized fluxes. In order to cope with this issue, we rely on recent research results to enhance the classical Bayesian inversion framework through a marginalization over a large set of plausible errors that can be prescribed in the system. The marginalization consists in computing inversions for all possible error distributions, weighted by the probability of occurrence of each error distribution. The posterior distribution of the fluxes calculated by the marginalization is not explicitly describable. As a consequence, we carry out a Monte Carlo sampling based on an approximation of the probability of occurrence of the error distributions. This approximation is deduced from the well-tested method of maximum likelihood estimation. Thus, the marginalized inversion relies on an automatic, objectified diagnosis of the error statistics, without any prior knowledge about the matrices. It robustly accounts for the uncertainties on the error distributions, contrary to what is classically done with frozen expert-knowledge error statistics. Some expert knowledge is still used in the method for the choice of an emission aggregation pattern and of a sampling protocol in order to reduce the computation cost. The relevance and the robustness of the method are tested on a case study: the inversion of methane surface fluxes at the mesoscale with virtual observations on a realistic network in Eurasia.
Observing system simulation experiments are carried out with different transport patterns, flux distributions and total prior amounts of emitted methane. The method proves to consistently reproduce the known "truth" in most cases, with satisfactory tolerance intervals. Additionally, the method explicitly provides influence scores and posterior correlation matrices. An in-depth interpretation of the inversion results is then possible. The more objective quantification of the influence of the observations on the fluxes proposed here allows us to evaluate the impact of the observation network on the characterization of the surface fluxes. The explicit correlations between emission aggregates reveal the mis-separated regions, hence the typical temporal and spatial scales the inversion can analyse. These scales are consistent with the chosen aggregation patterns.
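A toy version of the marginalization idea, assuming a linear Gaussian inversion and a crude Monte Carlo over a few candidate observation-error variances (the transport operator, fluxes, and error values are all synthetic; a real system would weight the samples by their estimated likelihood rather than equally):

```python
import numpy as np

def bayes_inversion(H, y, xb, B, R):
    """Posterior mean of a standard linear Gaussian Bayesian inversion."""
    Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
    A = H.T @ Rinv @ H + Binv
    return np.linalg.solve(A, H.T @ Rinv @ y + Binv @ xb)

def marginalized_inversion(H, y, xb, B, r_samples):
    """Average posterior means over sampled observation-error variances,
    a crude stand-in for marginalizing over the error statistics."""
    posts = [bayes_inversion(H, y, xb, B, r * np.eye(len(y))) for r in r_samples]
    return np.mean(posts, axis=0)

rng = np.random.default_rng(7)
n_obs, n_flux = 20, 3
H = rng.uniform(0, 1, (n_obs, n_flux))      # toy transport (observation) operator
x_true = np.array([1.0, 2.0, 3.0])          # "true" regional fluxes
y = H @ x_true + rng.normal(0, 0.1, n_obs)  # virtual observations
xb = np.zeros(n_flux)                       # prior fluxes
B = np.eye(n_flux) * 10.0                   # loose prior error covariance
x_hat = marginalized_inversion(H, y, xb, B, r_samples=[0.005, 0.01, 0.05])
```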
Balancing aggregation and smoothing errors in inverse models
Turner, A. J.; Jacob, D. J.
2015-06-30
Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
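The grid-coarsening reduction (method 1) can be sketched as follows. The example shows how disaggregating a coarse solution with uniform prior ratios re-imposes prior structure on the native grid, which is the aggregation error (values are illustrative):

```python
import numpy as np

def coarsen_operator(n, factor):
    """Aggregation matrix merging blocks of `factor` adjacent state elements."""
    m = n // factor
    G = np.zeros((m, n))
    for i in range(m):
        G[i, i * factor:(i + 1) * factor] = 1.0 / factor
    return G

x_native = np.array([1.0, 3.0, 2.0, 2.0, 5.0, 1.0])  # native-resolution fluxes
G = coarsen_operator(6, 2)
x_coarse = G @ x_native                               # reduced state vector
# Disaggregating with (wrong) uniform prior ratios re-imposes prior structure:
x_back = np.repeat(x_coarse, 2)
aggregation_error = np.abs(x_back - x_native)  # zero only where blocks were uniform
```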
Multiscale measurement error models for aggregated small area health data.
Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin
2016-08-01
Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in analysing aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we apply measurement error models within the multiscale framework. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates. © The Author(s) 2016.
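The direction of the bias reported above (ignoring measurement error underestimates the regression coefficient) is the classical attenuation effect, which a toy simulation can illustrate. This is not the paper's hierarchical model; all numbers are assumed.

```python
import numpy as np

# Toy illustration (assumed numbers, not the paper's model): ignoring
# measurement error in a covariate biases the OLS slope toward zero.
rng = np.random.default_rng(0)
n = 20000
beta_true = 2.0
x_true = rng.normal(0.0, 1.0, n)              # error-free predictor
y = beta_true * x_true + rng.normal(0.0, 0.5, n)

sigma_u = 1.0                                 # measurement-error std dev
x_obs = x_true + rng.normal(0.0, sigma_u, n)  # error-prone predictor

def ols_slope(x, y):
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

naive = ols_slope(x_obs, y)                   # attenuated estimate, ~1.0 here
# Classical correction: divide by the reliability ratio var(x)/(var(x)+var(u))
reliability = np.var(x_true) / (np.var(x_true) + sigma_u**2)
corrected = naive / reliability               # recovers ~beta_true

print(round(float(naive), 2), round(float(corrected), 2))
```

With equal predictor and error variances the reliability ratio is one half, so the naive slope lands near half the true coefficient; accounting for the measurement error removes the bias, as in the abstract's simulation results.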
On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)
NASA Astrophysics Data System (ADS)
Huffman, G. J.
2013-12-01
Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an on-going problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°x2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°x2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate. 
Is this sufficient, or is it necessary to aggregate the precipitation error estimates across the time/space data cube used for averaging? At least for small time/space data cubes it would seem that the detailed variables that affect each precipitation error estimate in the aggregation, such as sensor type, land/ocean surface type, convective/stratiform type, and so on, drive variations that must be accounted for explicitly.
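The qualitative behaviour described above, where averaging pulls scatter toward the 1:1 line, can be reproduced with a toy error model (assumed here, not GPCP's): skewed, intermittent "precipitation" with multiplicative retrieval error, correlated against truth at fine and coarse aggregation.

```python
import numpy as np

# Toy demonstration (assumed error model): time-averaging skewed,
# intermittent precipitation estimates reduces random error, so the
# satellite vs. validation relationship improves at coarser aggregation.
rng = np.random.default_rng(1)
n = 8192
truth = rng.gamma(shape=0.2, scale=5.0, size=n)      # skewed, mostly near zero
satellite = truth * rng.lognormal(0.0, 0.8, size=n)  # multiplicative error

def block_mean(a, k):
    """Aggregate a series into non-overlapping k-sample averages."""
    return a[: len(a) // k * k].reshape(-1, k).mean(axis=1)

corr_fine = np.corrcoef(truth, satellite)[0, 1]
corr_coarse = np.corrcoef(block_mean(truth, 32), block_mean(satellite, 32))[0, 1]
print(round(float(corr_fine), 2), round(float(corr_coarse), 2))
```

The many near-zero fine-scale pairs sit near the axes of a scatter plot; 32-sample averaging moves them toward the 1:1 line and raises the correlation, which is the random-error reduction the presentation describes.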
Büttner, Kathrin; Salau, Jennifer; Krieter, Joachim
2016-07-01
Recent analyses of animal movement networks focused on the static aggregation of trade contacts over different time windows, which neglects the system's temporal variation. In terms of disease spread, ignoring the temporal dynamics can lead to an over- or underestimation of an outbreak's speed and extent. This becomes particularly evident if the static aggregation allows for the existence of more paths compared to the number of time-respecting paths (i.e. paths in the right chronological order). Therefore, the aim of this study was to reveal differences between static and temporal representations of an animal trade network and to assess the quality of the static aggregation in comparison to the temporal counterpart. Contact data from a pig trade network (2006-2009) of a producer community in Northern Germany were analysed. The results show that a median value of 8.7% (4.6-14.1%) of the nodes and 3.1% (1.6-5.5%) of the edges were active at a weekly resolution. No clear fluctuations in the activity patterns were observed. Furthermore, 50% of the nodes already had one trade contact after approximately six months. For an accumulation window with increasing size (one day each), the accumulation rate, i.e. the relative increase in the number of nodes or edges, stayed relatively constant below 0.07% for the nodes and 0.12% for the edges. The temporal distances had a much wider distribution than the topological distances. 84% of the temporal distances were smaller than 90 days. The maximum temporal distance was 1000 days, which corresponds to the temporal diameter of the present network. The median temporal correlation coefficient, which measures the probability for an edge to persist across two consecutive time steps, was 0.47, with a maximum value of 0.63 at an accumulation window of 88 days. The causal fidelity measures the fraction of static paths which can also be taken in the temporal network. 
For the whole observation period, causal fidelity was relatively high, indicating that 67% of the static paths could also be taken in the temporal network. An increase to 0.87 (0.82-0.88) and 0.92 (0.80-0.98), respectively, was observed for yearly and monthly aggregation windows. The results show that the investigated pig trade network in its static aggregation represents the temporal dynamics of the system sufficiently well. Therefore, the methodology for analysing static instead of dynamic networks can be used without losing too much information. Copyright © 2016 Elsevier B.V. All rights reserved.
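The gap between static and time-respecting paths can be made concrete on a tiny invented edge list. The sketch below uses a reachability ratio as a simplified proxy for the path-count definition of causal fidelity used in the study; each edge is a (source, target, time) triple.

```python
from collections import defaultdict

# Tiny invented temporal network: an edge (u, v, t) is active at time t.
edges = [("A", "B", 1), ("B", "C", 2), ("C", "D", 1)]

def static_reachable(edges, start):
    """Nodes reachable from start, ignoring edge times."""
    adj = defaultdict(set)
    for u, v, _ in edges:
        adj[u].add(v)
    seen, stack = set(), [start]
    while stack:
        for v in adj[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def temporal_reachable(edges, start):
    """Nodes reachable via edges used in strictly increasing time order."""
    arrival = {start: 0}              # earliest arrival time per node
    changed = True
    while changed:
        changed = False
        for u, v, t in edges:
            if u in arrival and arrival[u] < t and (v not in arrival or t < arrival[v]):
                arrival[v] = t
                changed = True
    return set(arrival) - {start}

s = static_reachable(edges, "A")
t = temporal_reachable(edges, "A")
fidelity = len(t) / len(s)
print(sorted(s), sorted(t), fidelity)
```

Here D is reachable in the static aggregation (A to B to C to D) but not temporally, because the edge C to D fires before C is ever reached, so the ratio falls below one. This is exactly the over-counting of paths that makes static aggregations optimistic about outbreak extent.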
Calibration of Safecast dose rate measurements.
Cervone, Guido; Hultquist, Carolynne
2018-10-01
A methodology is presented to calibrate contributed Safecast dose rate measurements acquired between 2011 and 2016 in the Fukushima prefecture of Japan. The Safecast data are calibrated using observations acquired by the U.S. Department of Energy at the time of the 2011 Fukushima Daiichi power plant nuclear accident. The methodology performs a series of interpolations between the U.S. government and contributed datasets at specific temporal windows and at corresponding spatial locations. The coefficients found for all the different temporal windows are aggregated and interpolated using quadratic regressions to generate a time dependent calibration function. Normal background radiation, decay rates, and missing values are taken into account during the analysis. Results show that the standard Safecast static transformation function overestimates the official measurements because it fails to capture the presence of two different Cesium isotopes and their changing magnitudes with time. A model is created to predict the ratio of the isotopes from the time of the accident through 2020. The proposed time dependent calibration takes into account this Cesium isotopes ratio, and it is shown to reduce the error between U.S. government and contributed data. The proposed calibration is needed through 2020, after which date the errors introduced by ignoring the presence of different isotopes will become negligible. Copyright © 2018 Elsevier Ltd. All rights reserved.
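The reason a static transformation drifts with time can be sketched from decay physics alone: the two caesium isotopes decay at very different rates, so their activity ratio changes year by year. The half-lives below are approximate literature values and an assumption of this example, as is the unit initial ratio.

```python
import math

# Approximate half-lives (assumed literature values) of the two isotopes
# released in 2011; their activity ratio decays with the difference of the
# decay constants.
HALF_LIFE_CS134 = 2.06    # years
HALF_LIFE_CS137 = 30.17   # years

def activity_ratio(years_since_accident, initial_ratio=1.0):
    """Cs-134 / Cs-137 activity ratio, starting from initial_ratio at t = 0."""
    lam134 = math.log(2) / HALF_LIFE_CS134
    lam137 = math.log(2) / HALF_LIFE_CS137
    return initial_ratio * math.exp(-(lam134 - lam137) * years_since_accident)

ratios = {t: round(activity_ratio(t), 3) for t in (0, 1, 5, 9)}
print(ratios)
```

By roughly a decade after the accident the Cs-134 contribution is small, consistent with the abstract's claim that the time-dependent correction matters through about 2020 and becomes negligible afterwards.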
NASA Astrophysics Data System (ADS)
Villoria, Nelson B.; Elliott, Joshua; Müller, Christoph; Shin, Jaewoo; Zhao, Lan; Song, Carol
2018-01-01
Access to climate and spatial datasets by non-specialists is restricted by technical barriers involving hardware, software and data formats. We discuss an open-source online tool that facilitates downloading the climate data from the global circulation models used by the Inter-Sectoral Impacts Model Intercomparison Project. The tool also offers temporal and spatial aggregation capabilities for incorporating future climate scenarios in applications where spatial aggregation is important. We hope that streamlined access to these data facilitates analysis of climate related issues while considering the uncertainties derived from future climate projections and temporal aggregation choices.
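The two aggregation capabilities mentioned above can be sketched in a few lines of numpy; the array shapes and month length here are illustrative, not the tool's actual interface.

```python
import numpy as np

# Minimal sketch of temporal then spatial aggregation of a gridded dataset
# (shapes are illustrative): daily fields -> monthly mean -> coarser grid.
rng = np.random.default_rng(0)
daily = rng.random((30, 8, 8))          # 30 daily fields on an 8x8 grid

monthly = daily.mean(axis=0)            # temporal aggregation -> one 8x8 field

def coarsen(field, factor):
    """Average non-overlapping factor x factor blocks of a 2-D field."""
    ny, nx = field.shape
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

coarse = coarsen(monthly, 4)            # spatial aggregation -> 2x2 grid
print(monthly.shape, coarse.shape)
```

Block-averaging preserves the domain mean exactly, but, as the surrounding abstracts emphasise, any non-linear downstream model will respond differently to the aggregated and native-resolution inputs.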
NASA Astrophysics Data System (ADS)
Salinas, J. L.; Nester, T.; Komma, J.; Bloeschl, G.
2017-12-01
Generation of realistic synthetic spatial rainfall is of pivotal importance for assessing regional hydroclimatic hazard, as the input for long-term rainfall-runoff simulations. The correct reproduction of observed rainfall characteristics, such as regional intensity-duration-frequency curves and spatial and temporal correlations, is necessary to adequately model the magnitude and frequency of flood peaks, by reproducing antecedent soil moisture conditions before extreme rainfall events and the joint probability of flood waves at confluences. In this work, we present a modification of the model of Bardossy and Platte (1992), in which precipitation is first modeled on a station basis as a multivariate autoregressive (mAr) process in Normal space. The spatial and temporal correlation structures are imposed in the Normal space, allowing for a different temporal autocorrelation parameter for each station while simultaneously ensuring the positive-definiteness of the correlation matrix of the mAr errors. The Normal rainfall is then transformed to a Gamma-distributed space, with parameters varying monthly according to a sinusoidal function, in order to reproduce the observed rainfall seasonality. One of the main differences with the original model is the simulation time step, reduced from 24 h to 6 h. Because daily rainfall data are more widely available than sub-daily (e.g. hourly) data, the parameters of the Gamma distributions are calibrated to reproduce simultaneously a series of daily rainfall characteristics (mean daily rainfall, standard deviation of daily rainfall, and 24 h intensity-duration-frequency [IDF] curves), as well as other aggregated rainfall measures (mean annual rainfall and monthly rainfall). The spatial and temporal correlation parameters are calibrated so that the catchment-averaged IDF curves aggregated at different temporal scales fit the measured ones. 
The rainfall model is used to generate 10,000 years of synthetic precipitation, fed into a rainfall-runoff model to derive the flood frequency in the Tirolean Alps in Austria. Given the number of simulated events, the framework produces a large variety of rainfall patterns and reproduces the variograms of relevant extreme rainfall events in the region of interest.
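The core building block of such a generator, a multivariate AR(1) process in Normal space with prescribed temporal and inter-station correlation, can be sketched as follows. This is an assumption-laden illustration, not the authors' code; station count, correlations, and the 6 h step are invented.

```python
import numpy as np

# Sketch of a multivariate AR(1) in Normal space at a 6 h step, with
# prescribed lag-1 autocorrelation and inter-station correlation (assumed).
rng = np.random.default_rng(42)
n_stations, n_steps = 3, 20000
rho_time = 0.6     # lag-1 temporal autocorrelation per 6 h step
rho_space = 0.5    # pairwise inter-station correlation

# Innovation covariance scaled so the marginal variance stays at 1
spatial_corr = np.full((n_stations, n_stations), rho_space)
np.fill_diagonal(spatial_corr, 1.0)
L = np.linalg.cholesky(spatial_corr * (1.0 - rho_time**2))

z = np.zeros((n_steps, n_stations))
for t in range(1, n_steps):
    z[t] = rho_time * z[t - 1] + L @ rng.standard_normal(n_stations)

lag1 = np.corrcoef(z[1:, 0], z[:-1, 0])[0, 1]     # sample temporal autocorr.
spatial = np.corrcoef(z[:, 0], z[:, 1])[0, 1]     # sample spatial correlation
print(round(float(lag1), 2), round(float(spatial), 2))
```

Each Normal series would then be mapped to a Gamma-distributed rainfall series through the probability integral transform, with the Gamma parameters varying monthly as the abstract describes.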
NASA Astrophysics Data System (ADS)
Wiese, D. N.; McCullough, C. M.
2017-12-01
Studies have shown that both single pair low-low satellite-to-satellite tracking (LL-SST) and dual-pair LL-SST hypothetical future satellite gravimetry missions utilizing improved onboard measurement systems relative to the Gravity Recovery and Climate Experiment (GRACE) will be limited by temporal aliasing errors; that is, the error introduced through deficiencies in models of high frequency mass variations required for the data processing. Here, we probe the spatio-temporal characteristics of temporal aliasing errors to understand their impact on satellite gravity retrievals using high fidelity numerical simulations. We find that while aliasing errors are dominant at long wavelengths and multi-day timescales, improving knowledge of high frequency mass variations at these resolutions translates into only modest improvements (i.e. spatial resolution/accuracy) in the ability to measure temporal gravity variations at monthly timescales. This result highlights the reliance on accurate models of high frequency mass variations for gravity processing, and the difficult nature of reducing temporal aliasing errors and their impact on satellite gravity retrievals.
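The aliasing mechanism the abstract refers to, high-frequency mass variations folding onto the slow signal being retrieved, can be illustrated with a one-line toy signal rather than a gravity simulation; the frequencies and sampling interval are invented.

```python
import numpy as np

# Toy aliasing illustration (invented numbers): a signal varying faster
# than the Nyquist frequency of the sampling folds onto a spurious low
# frequency, which is why unmodeled high-frequency variations corrupt the
# retrieved slow signal.
f_true = 0.9                 # signal frequency, cycles/day (above Nyquist)
dt = 1.0                     # one sample per day -> Nyquist = 0.5 cycles/day
t = np.arange(0, 200, dt)
samples = np.sin(2 * np.pi * f_true * t)

alias_freq = f_true - round(f_true * dt) / dt    # folds to -0.1 cycles/day
aliased = np.sin(2 * np.pi * alias_freq * t)

# At the sample times the two signals are indistinguishable
print(bool(np.max(np.abs(samples - aliased)) < 1e-9))
```

Once the fast variation has folded in, no amount of later processing can separate it from a genuine slow signal, which is why the abstract stresses accurate models of the high-frequency mass variations themselves.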
NASA Astrophysics Data System (ADS)
Stewart, Michael K.; Morgenstern, Uwe; Gusyev, Maksym A.; Małoszewski, Piotr
2017-09-01
Kirchner (2016a) demonstrated that aggregation errors due to spatial heterogeneity, represented by two homogeneous subcatchments, could cause severe underestimation of the mean transit times (MTTs) of water travelling through catchments when simple lumped parameter models were applied to interpret seasonal tracer cycle data. Here we examine the effects of such errors on the MTTs and young water fractions estimated using tritium concentrations in two-part hydrological systems. We find that MTTs derived from tritium concentrations in streamflow are just as susceptible to aggregation bias as those from seasonal tracer cycles. Likewise, groundwater wells or springs fed by two or more water sources with different MTTs will also have aggregation bias. However, the transit times over which the biases are manifested are different because the two methods are applicable over different time ranges, up to 5 years for seasonal tracer cycles and up to 200 years for tritium concentrations. Our virtual experiments with two water components show that the aggregation errors are larger when the MTT differences between the components are larger and the amounts of the components are each close to 50 % of the mixture. We also find that young water fractions derived from tritium (based on a young water threshold of 18 years) are almost immune to aggregation errors as were those derived from seasonal tracer cycles with a threshold of about 2 months.
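The direction of the aggregation bias can be reproduced with a deliberately simple virtual experiment: piston-flow transit in place of full lumped-parameter models, and assumed component MTTs. Because radioactive decay is non-linear in transit time, fitting a two-component mixture with a single-store model biases the inferred MTT.

```python
import math

# Virtual two-component experiment (assumed numbers; piston flow instead of
# full lumped-parameter models): decay is convex in transit time, so a
# single-store fit to a mixture underestimates the true mean transit time.
TRITIUM_HALF_LIFE = 12.32                     # years
lam = math.log(2) / TRITIUM_HALF_LIFE

def tracer(mtt):
    """Relative tritium concentration after a transit time of mtt years."""
    return math.exp(-lam * mtt)

mtt_young, mtt_old = 5.0, 100.0               # the two subsystems
true_mean = 0.5 * (mtt_young + mtt_old)       # 52.5 years

mixed = 0.5 * tracer(mtt_young) + 0.5 * tracer(mtt_old)
apparent = -math.log(mixed) / lam             # MTT a single-store fit infers

print(round(true_mean, 1), round(apparent, 1))
```

The apparent MTT (about 17 years here) falls far below the true mean of 52.5 years, and the bias is largest exactly where the abstract places it: a large MTT contrast between the components and near-equal mixing fractions.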
Mueller, Matthias Y; Moritz, Robin FA; Kraus, F Bernhard
2012-01-01
Drone aggregations are a widespread phenomenon in many stingless bee species (Meliponini), but the ultimate and proximate causes for their formation are still not well understood. One adaptive explanation for this phenomenon is the avoidance of inbreeding, which is especially detrimental for stingless bees due to the combined effects of the complementary sex-determining system and the small effective population size caused by eusociality and monandry. We analyzed the temporal genetic dynamics of a drone aggregation of the stingless bee Scaptotrigona mexicana with microsatellite markers over a time window of four weeks. We estimated the drones of the aggregation to originate from a total of 55 colonies using sibship re-construction. There was no detectable temporal genetic differentiation or sub-structuring in the aggregation. Most important, we could exclude all colonies in close proximity of the aggregation as origin of the drones in the aggregation, implicating that they originate from more distant colonies. We conclude that the diverse genetic composition and the distant origin of the drones of the S. mexicana drone congregation provides an effective mechanism to avoid mating among close relatives. PMID:22833802
NASA Astrophysics Data System (ADS)
Lorite, I. J.; Mateos, L.; Fereres, E.
2005-01-01
The simulations of dynamic, spatially distributed non-linear models are impacted by the degree of spatial and temporal aggregation of their input parameters and variables. This paper deals with the impact of these aggregations on the assessment of irrigation scheme performance by simulating water use and crop yield. The analysis was carried out on a 7000 ha irrigation scheme located in Southern Spain. Four irrigation seasons differing in rainfall patterns were simulated (from 1996/1997 to 1999/2000), with the actual soil parameters and with hypothetical soil parameters representing wider ranges of soil variability. Three spatial aggregation levels were considered: (I) individual parcels (about 800), (II) command areas (83) and (III) the whole irrigation scheme. Equally, five temporal aggregation levels were defined: daily, weekly, monthly, quarterly and annually. The results showed little impact of spatial aggregation on the predictions of irrigation requirements and of crop yield for the scheme. The impact of aggregation was greater in rainy years, for deep-rooted crops (sunflower) and in scenarios with heterogeneous soils. The highest impact on irrigation requirement estimates occurred in the scenario with the most heterogeneous soil and in 1999/2000, a year with frequent rainfall during the irrigation season: a difference of 7% between aggregation levels I and III was found. Equally, it was found that temporal aggregation had a significant impact on irrigation requirement predictions only for time steps longer than 4 months. In general, simulated annual irrigation requirements decreased as the time step increased. The impact was greater in rainy years (especially with abundant and concentrated rain events) and for crops whose cycles coincide in part with the rainy season (garlic, winter cereals and olive). 
It is concluded that in this case, average, representative values for the main inputs of the model (crop, soil properties and sowing dates) can generate results within 1% of those obtained by providing spatially specific values for about 800 parcels.
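The reported direction of the temporal-aggregation effect, simulated irrigation requirements decreasing as the time step grows, follows from soil-store overflow, which a toy water-balance bucket can demonstrate. All parameters below are assumed, not the paper's model.

```python
import numpy as np

# Toy soil-water bucket (assumed parameters): averaging rainfall over long
# time steps removes the intense events that overflow the soil store, so
# simulated irrigation requirements decrease.
rng = np.random.default_rng(3)
days = 120
et = np.full(days, 4.0)                        # crop water demand, mm/day
# intermittent rain: occasional heavy events rather than steady drizzle
rain_daily = np.where(rng.random(days) < 0.1, rng.gamma(2.0, 15.0, days), 0.0)

def irrigation_requirement(rain, et, capacity=60.0):
    store, req = capacity / 2.0, 0.0
    for r, e in zip(rain, et):
        store = min(store + r, capacity)       # rain above capacity is lost
        if store >= e:
            store -= e
        else:
            req += e - store                   # irrigate the unmet demand
            store = 0.0
    return req

daily_req = irrigation_requirement(rain_daily, et)
# temporal aggregation: spread each 30-day rainfall total evenly over its month
rain_monthly = np.repeat(rain_daily.reshape(4, 30).mean(axis=1), 30)
monthly_req = irrigation_requirement(rain_monthly, et)
print(round(daily_req), round(monthly_req))
```

Smoothed rainfall never exceeds the bucket capacity, so all of it is effective and the simulated requirement drops, mirroring the finding that the error is largest in years with abundant, concentrated rain events.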
Topographical gradients of semantics and phonology revealed by temporal lobe stimulation.
Miozzo, Michele; Williams, Alicia C; McKhann, Guy M; Hamberger, Marla J
2017-02-01
Word retrieval is a fundamental component of oral communication, and it is well established that this function is supported by left temporal cortex. Nevertheless, the specific temporal areas mediating word retrieval and the particular linguistic processes these regions support have not been well delineated. Toward this end, we analyzed over 1000 naming errors induced by left temporal cortical stimulation in epilepsy surgery patients. Errors were primarily semantic (lemon → "pear"), phonological (horn → "corn"), non-responses, and delayed responses (correct responses after a delay), and each error type appeared predominantly in a specific region: semantic errors in mid-middle temporal gyrus (TG), phonological errors and delayed responses in middle and posterior superior TG, and non-responses in anterior inferior TG. To the extent that semantic errors, phonological errors and delayed responses reflect disruptions in different processes, our results imply topographical specialization of semantic and phonological processing. Specifically, results revealed an inferior-to-superior gradient, with more superior regions associated with phonological processing. Further, errors were increasingly semantically related to targets toward posterior temporal cortex. We speculate that detailed semantic input is needed to support phonological retrieval, and thus, the specificity of semantic input increases progressively toward posterior temporal regions implicated in phonological processing. Hum Brain Mapp 38:688-703, 2017. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
Genetic particle filter application to land surface temperature downscaling
NASA Astrophysics Data System (ADS)
Mechri, Rihab; Ottlé, Catherine; Pannekoucke, Olivier; Kallel, Abdelaziz
2014-03-01
Thermal infrared data are widely used for surface flux estimation, giving the possibility to assess water and energy budgets through land surface temperature (LST). Many applications require both high spatial resolution (HSR) and high temporal resolution (HTR), which are not presently available from space. It is therefore necessary to develop methodologies that use the coarse-spatial/high-temporal resolution LST remote-sensing products for a better monitoring of fluxes at appropriate scales. For that purpose, a data assimilation method was developed to downscale LST based on particle filtering. The basic tenet of our approach is to constrain LST dynamics simulated at both HSR and HTR through the optimization of aggregated temperatures at the coarse observation scale. Thus, a genetic particle filter (GPF) data assimilation scheme was implemented and applied to a land surface model which simulates prior subpixel temperatures. First, the GPF downscaling scheme was tested on pseudo-observations generated in the framework of the study area landscape (Crau-Camargue, France) and climate for the year 2006. The GPF performances were evaluated against observation errors and temporal sampling. Results show that GPF outperforms prior model estimations. Finally, the GPF method was applied to Spinning Enhanced Visible and InfraRed Imager time series and evaluated against HSR data provided by an Advanced Spaceborne Thermal Emission and Reflection Radiometer image acquired on 26 July 2006. The temperatures of seven land cover classes present in the study area were estimated with root-mean-square errors less than 2.4 K, which is a very promising result for downscaling LST satellite products.
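The central constraint of such a scheme, weighting candidate sub-pixel fields by how well their aggregate matches the coarse observation and then resampling, can be shown schematically. All numbers below are assumed; this is one weight-and-resample step, not the full genetic particle filter.

```python
import numpy as np

# Schematic particle-filter step (assumed numbers): particles are candidate
# sub-pixel temperature fields, weighted by the likelihood of their
# aggregate given the coarse observation, then resampled.
rng = np.random.default_rng(0)
n_particles, n_subpixels = 500, 4
truth = np.array([300.0, 305.0, 295.0, 310.0])      # unknown sub-pixel LST, K
sigma_obs = 0.5
coarse_obs = truth.mean() + rng.normal(0.0, sigma_obs)

# biased, spread-out prior ensemble of sub-pixel fields
particles = truth + 3.0 + rng.normal(0.0, 5.0, (n_particles, n_subpixels))

misfit = particles.mean(axis=1) - coarse_obs        # aggregate vs. observation
weights = np.exp(-0.5 * (misfit / sigma_obs) ** 2)
weights /= weights.sum()

# multinomial resampling step of the particle filter
posterior = particles[rng.choice(n_particles, size=n_particles, p=weights)]

prior_err = abs(particles.mean() - truth.mean())
post_err = abs(posterior.mean() - truth.mean())
print(round(float(prior_err), 2), round(float(post_err), 2))
```

The resampled ensemble's aggregate collapses onto the coarse observation, removing the prior bias; the spatial pattern within the pixel remains informed by the prior model, which is why the quality of the land surface model matters for the downscaled detail.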
Water quality modeling in the dead end sections of drinking water distribution networks.
Abokifa, Ahmed A; Yang, Y Jeffrey; Lo, Cynthia S; Biswas, Pratim
2016-02-01
Dead-end sections of drinking water distribution networks are known to be problematic zones in terms of water quality degradation. Extended residence time due to water stagnation leads to rapid reduction of disinfectant residuals, allowing the regrowth of microbial pathogens. Water quality models developed so far apply spatial aggregation and temporal averaging techniques for hydraulic parameters by assigning hourly averaged water demands to the main nodes of the network. Although this practice has generally resulted in minimal loss of accuracy for the predicted disinfectant concentrations in main water transmission lines, this is not the case for the peripheries of the distribution network. This study proposes a new approach for simulating disinfectant residuals in dead-end pipes while accounting for both spatial and temporal variability in hydraulic and transport parameters. A stochastic demand generator was developed to represent residential water pulses based on a non-homogeneous Poisson process. Dispersive solute transport was considered using highly dynamic dispersion rates. A genetic algorithm was used to calibrate the axial hydraulic profile of the dead-end pipe based on the different demand shares of the withdrawal nodes. A parametric sensitivity analysis was done to assess the model performance under variation of different simulation parameters. A group of Monte-Carlo ensembles was carried out to investigate the influence of spatial and temporal variations in flow demands on the simulation accuracy. A set of three correction factors was analytically derived to adjust residence time, dispersion rate and wall demand to overcome the simulation error caused by the spatial aggregation approximation. The current model results show better agreement with field-measured concentrations of conservative fluoride tracer and free chlorine disinfectant than the simulations of recent advection dispersion reaction models published in the literature. 
Accuracy of the simulated concentration profiles depended significantly more on the spatial distribution of the flow demands than on their temporal variation. Copyright © 2015 Elsevier Ltd. All rights reserved.
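A stochastic demand generator of the kind described can be sketched with Lewis-Shedler thinning for a non-homogeneous Poisson process; the rate function and all parameters below are illustrative, not the paper's calibrated values.

```python
import random

# Sketch of a residential-demand pulse generator (illustrative parameters):
# arrival times from a non-homogeneous Poisson process via thinning, with a
# morning and an evening peak in the rate function.
def rate(t):
    """Expected pulses per hour at time-of-day t (hours)."""
    hour = t % 24
    return 0.5 + (4.0 if 6 <= hour < 9 else 0.0) + (3.0 if 18 <= hour < 22 else 0.0)

def sample_pulse_times(duration, rate_fn, lam_max, rng):
    """Thinning: propose at rate lam_max, accept with probability rate/lam_max."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)      # candidate inter-arrival time
        if t >= duration:
            return times
        if rng.random() < rate_fn(t) / lam_max:
            times.append(t)

rng = random.Random(7)
pulses = sample_pulse_times(24.0, rate, lam_max=4.5, rng=rng)
morning = sum(1 for t in pulses if 6 <= t < 9)
night = sum(1 for t in pulses if t < 3)
print(len(pulses), morning, night)
```

The thinning bound lam_max must dominate the rate everywhere (here 0.5 + 4.0 = 4.5); each accepted pulse would then be assigned a volume and duration to build the dead-end pipe's highly intermittent flow profile.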
Estimating instream constituent loads using replicate synoptic sampling, Peru Creek, Colorado
NASA Astrophysics Data System (ADS)
Runkel, Robert L.; Walton-Day, Katherine; Kimball, Briant A.; Verplanck, Philip L.; Nimick, David A.
2013-05-01
The synoptic mass balance approach is often used to evaluate constituent mass loading in streams affected by mine drainage. Spatial profiles of constituent mass load are used to identify sources of contamination and prioritize sites for remedial action. This paper presents a field scale study in which replicate synoptic sampling campaigns are used to quantify the aggregate uncertainty in constituent load that arises from (1) laboratory analyses of constituent and tracer concentrations, (2) field sampling error, and (3) temporal variation in concentration from diel constituent cycles and/or source variation. Consideration of these factors represents an advance in the application of the synoptic mass balance approach by placing error bars on estimates of constituent load and by allowing all sources of uncertainty to be quantified in aggregate; previous applications of the approach have provided only point estimates of constituent load and considered only a subset of the possible errors. Given estimates of aggregate uncertainty, site specific data and expert judgement may be used to qualitatively assess the contributions of individual factors to uncertainty. This assessment can be used to guide the collection of additional data to reduce uncertainty. Further, error bars provided by the replicate approach can aid the investigator in the interpretation of spatial loading profiles and the subsequent identification of constituent source areas within the watershed. The replicate sampling approach is applied to Peru Creek, a stream receiving acidic, metal-rich effluent from the Pennsylvania Mine. Other sources of acidity and metals within the study reach include a wetland area adjacent to the mine and tributary inflow from Cinnamon Gulch. Analysis of data collected under low-flow conditions indicates that concentrations of Al, Cd, Cu, Fe, Mn, Pb, and Zn in Peru Creek exceed aquatic life standards. 
Constituent loading within the study reach is dominated by effluent from the Pennsylvania Mine, with over 50% of the Cd, Cu, Fe, Mn, and Zn loads attributable to a collapsed adit near the top of the study reach. These estimates of mass load may underestimate the effect of the Pennsylvania Mine as leakage from underground mine workings may contribute to metal loads that are currently attributed to the wetland area. This potential leakage confounds the evaluation of remedial options and additional research is needed to determine the magnitude and location of the leakage.
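The mechanics of the replicate approach can be sketched with invented numbers: discharge from tracer dilution, constituent load as discharge times concentration, and replicate campaigns supplying an empirical error bar.

```python
import statistics

# Sketch with invented numbers: tracer-dilution discharge, load = Q x C,
# and replicate synoptic campaigns giving an empirical error bar.
INJECTION_RATE_G_S = 2.0                     # tracer injection rate, g/s

def discharge_l_s(plateau_mg_l, background_mg_l=0.0):
    """Dilution gauging: Q (L/s) = injection rate / concentration rise."""
    return INJECTION_RATE_G_S * 1000.0 / (plateau_mg_l - background_mg_l)

# (tracer plateau mg/L, Zn mg/L) from three hypothetical replicate campaigns
replicates = [(4.1, 0.52), (3.9, 0.48), (4.3, 0.55)]

loads = [discharge_l_s(trc) * zn * 86400 / 1e6 for trc, zn in replicates]  # kg/day

mean_load = statistics.mean(loads)
stdev_load = statistics.stdev(loads)         # aggregate uncertainty estimate
print(round(mean_load, 1), round(stdev_load, 2))
```

The spread across replicates bundles laboratory, sampling, and temporal-variation error into a single empirical uncertainty, which is precisely the "quantified in aggregate" advance the abstract describes.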
NASA Astrophysics Data System (ADS)
Peres, David J.; Cancelliere, Antonino; Greco, Roberto; Bogaard, Thom A.
2018-03-01
Uncertainty in rainfall datasets and landslide inventories is known to have negative impacts on the assessment of landslide-triggering thresholds. In this paper, we perform a quantitative analysis of the impacts of uncertain knowledge of landslide initiation instants on the assessment of rainfall intensity-duration landslide early warning thresholds. The analysis is based on a synthetic database of rainfall and landslide information, generated by coupling a stochastic rainfall generator and a physically based hydrological and slope stability model, and is therefore error-free in terms of knowledge of triggering instants. This dataset is then perturbed according to hypothetical reporting scenarios that allow simulation of possible errors in landslide-triggering instants as retrieved from historical archives. The impact of these errors is analysed jointly using different criteria to single out rainfall events from a continuous series and two typical temporal aggregations of rainfall (hourly and daily). The analysis shows that the impacts of the above uncertainty sources can be significant, especially when errors exceed 1 day or the actual instants follow the erroneous ones. Errors generally lead to underestimated thresholds, i.e. lower than those that would be obtained from an error-free dataset. Potentially, the amount of the underestimation can be enough to induce an excessive number of false positives, hence limiting possible landslide mitigation benefits. Moreover, the uncertain knowledge of triggering rainfall limits the possibility to set up links between thresholds and physio-geographical factors.
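An intensity-duration threshold of the form I = a * D**b is typically fitted in log-log space to the (duration, intensity) pairs of triggering events; the sketch below does this with synthetic numbers, not the paper's dataset.

```python
import math

# Sketch (synthetic numbers): fit I = a * D**b by least squares in log-log
# space to (duration h, mean intensity mm/h) pairs of triggering events.
events = [(1, 38.0), (3, 16.2), (6, 9.5), (12, 5.6), (24, 3.3), (48, 1.9)]

xs = [math.log(d) for d, _ in events]
ys = [math.log(i) for _, i in events]
n = len(events)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
a = math.exp(mean_y - b * mean_x)
print(round(a, 1), round(b, 2))
```

With this picture in mind, the reported bias direction is easy to see: misplacing the triggering instants attaches lower intensities or the wrong durations to the triggering events, which drags the fitted threshold downward and inflates false positives.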
Hybrid inversions of CO2 fluxes at regional scale applied to network design
NASA Astrophysics Data System (ADS)
Kountouris, Panagiotis; Gerbig, Christoph; Koch, Frank-Thomas
2013-04-01
Long term observations of atmospheric greenhouse gas measuring stations, located at representative regions over the continent, improve our understanding of greenhouse gas sources and sinks. These mixing ratio measurements can be linked to surface fluxes by atmospheric transport inversions. Within the upcoming years new stations are to be deployed, which requires decision making tools with respect to the location and the density of the network. We are developing a method to assess potential greenhouse gas observing networks in terms of their ability to recover specific target quantities. As target quantities we use CO2 fluxes aggregated to specific spatial and temporal scales. We introduce a high resolution inverse modeling framework, which attempts to combine advantages from pixel based inversions with those of a carbon cycle data assimilation system (CCDAS). The hybrid inversion system consists of the Lagrangian transport model STILT, the diagnostic biosphere model VPRM and a Bayesian inversion scheme. We aim to retrieve the spatiotemporal distribution of net ecosystem exchange (NEE) at a high spatial resolution (10 km x 10 km) by inverting for spatially and temporally varying scaling factors for gross ecosystem exchange (GEE) and respiration (R) rather than solving for the fluxes themselves. Thus the state space includes parameters for controlling photosynthesis and respiration, but unlike in a CCDAS it allows for spatial and temporal variations, which can be expressed as NEE(x,y,t) = λG(x,y,t) GEE(x,y,t) + λR(x,y,t) R(x,y,t) . We apply spatially and temporally correlated uncertainties by using error covariance matrices with non-zero off-diagonal elements. Synthetic experiments will test our system and select the optimal a priori error covariance by using different spatial and temporal correlation lengths on the error statistics of the a priori covariance and comparing the optimized fluxes against the 'known truth'. 
As 'known truth' we use independent fluxes generated from a different biosphere model (BIOME-BGC). Initially we perform single-station inversions for Ochsenkopf tall tower located in Germany. Further expansion of the inversion framework to multiple stations and its application to network design will address the questions of how well a set of network stations can constrain a given target quantity, and whether there are objective criteria to select an optimal configuration for new stations that maximizes the uncertainty reduction.
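The state formulation above can be sketched in a few lines. The grid coordinates, flux values and correlation length below are illustrative assumptions, not values from the study:

```python
import math

# Sketch of the hybrid-inversion state: NEE per grid cell is
# lambda_G * GEE + lambda_R * R, with prior errors on the scaling factors
# correlated in space through an exponential covariance (non-zero
# off-diagonal elements, as described above). All numbers are made up.
def exp_covariance(coords, sigma, corr_length):
    """Prior error covariance with exp(-d / L) off-diagonal structure."""
    n = len(coords)
    cov = [[0.0] * n for _ in range(n)]
    for i, (xi, yi) in enumerate(coords):
        for j, (xj, yj) in enumerate(coords):
            d = math.hypot(xi - xj, yi - yj)
            cov[i][j] = sigma ** 2 * math.exp(-d / corr_length)
    return cov

coords = [(0, 0), (10, 0), (100, 0)]     # grid-cell centres, km
cov = exp_covariance(coords, sigma=0.3, corr_length=50.0)

# NEE from unitless scaling factors applied to diagnostic-model fluxes:
lam_G, lam_R = 1.1, 0.9
gee, resp = -6.0, 4.0                    # example fluxes, umol m-2 s-1
nee = lam_G * gee + lam_R * resp
```

Nearby cells (10 km apart) get a much larger error covariance than distant ones (100 km apart), which is how the correlation length enters the synthetic experiments.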
NASA Astrophysics Data System (ADS)
McIntyre, N.; Keir, G.
2014-12-01
Water supply systems typically encompass components of both natural systems (e.g. catchment runoff, aquifer interception) and engineered systems (e.g. process equipment, water storages and transfers). Many physical processes of varying spatial and temporal scales are contained within these hybrid systems models. The need to aggregate and simplify system components has been recognised for reasons of parsimony and comprehensibility; and the use of probabilistic methods for modelling water-related risks also prompts the need to seek computationally efficient up-scaled conceptualisations. How to manage the up-scaling errors in such hybrid systems models has not been well-explored, compared to research in the hydrological process domain. Particular challenges include the non-linearity introduced by decision thresholds and non-linear relations between water use, water quality, and discharge strategies. Using a case study of a mining region, we explore the nature of up-scaling errors in water use, water quality and discharge, and we illustrate an approach to identification of a scale-adjusted model including an error model. Ways forward for efficient modelling of such complex, hybrid systems are discussed, including interactions with human, energy and carbon systems models.
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros
2004-01-01
The MODIS Level-3 optical thickness and effective radius cloud product is a gridded 1 deg. x 1 deg. dataset derived by aggregating and subsampling at 5 km the 1-km-resolution Level-2 orbital swath data (Level-2 granules). This study examines the impact of the 5 km subsampling on the mean, standard deviation and inhomogeneity parameter statistics of optical thickness and effective radius. The methodology is simple and consists of estimating mean errors for a large collection of Terra and Aqua Level-2 granules by taking the difference of the statistics at the original and subsampled resolutions. It is shown that the Level-3 sampling does not affect the various quantities investigated to the same degree, with second order moments suffering greater subsampling errors, as expected. Mean errors drop dramatically when averages over a sufficient number of regions (e.g., monthly and/or latitudinal averages) are taken, pointing to a dominance of errors that are of random nature. When histograms built from subsampled data with the same binning rules as in the Level-3 dataset are used to reconstruct the quantities of interest, the mean errors do not deteriorate significantly. The results in this paper provide guidance to users of MODIS Level-3 optical thickness and effective radius cloud products on the range of errors due to subsampling they should expect and perhaps account for, in scientific work with this dataset. In general, subsampling errors should not be a serious concern when moderate temporal and/or spatial averaging is performed.
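The difference-of-statistics methodology can be mimicked on synthetic data. The field below is randomly generated, not MODIS data; only the comparison pattern (full resolution vs. every-fifth-pixel subsample) follows the paper:

```python
import random
import statistics

# Toy illustration of the subsampling-error methodology: the error is the
# difference between a statistic at full resolution and the same statistic
# from every-5th-pixel sampling (5 km subsample of 1 km pixels).
random.seed(0)
tau = [max(0.0, random.gauss(10.0, 4.0)) for _ in range(5000)]  # synthetic optical thickness

full_mean = statistics.mean(tau)
sub_mean = statistics.mean(tau[::5])
mean_error = sub_mean - full_mean          # error in the first moment

full_sd = statistics.pstdev(tau)
sub_sd = statistics.pstdev(tau[::5])
sd_error = sub_sd - full_sd                # error in the second moment
```

Because the subsampling error is largely random, averaging such errors over many granules (or months, or latitude bands) drives the mean error toward zero, as the abstract notes.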
Spatio-temporal dynamics of a fish spawning aggregation and its fishery in the Gulf of California
Erisman, Brad; Aburto-Oropeza, Octavio; Gonzalez-Abraham, Charlotte; Mascareñas-Osorio, Ismael; Moreno-Báez, Marcia; Hastings, Philip A.
2012-01-01
We engaged in cooperative research with fishers and stakeholders to characterize the fine-scale, spatio-temporal characteristics of spawning behavior in an aggregating marine fish (Cynoscion othonopterus: Sciaenidae) and coincident activities of its commercial fishery in the Upper Gulf of California. Approximately 1.5–1.8 million fish are harvested annually from spawning aggregations of C. othonopterus during 21–25 days of fishing and within an area of 1,149 km2 of a biosphere reserve. Spawning and fishing are synchronized on a semi-lunar cycle, with peaks in both occurring 5 to 2 days before the new and full moon, and fishing intensity and catch are highest at the spawning grounds within a no-take reserve. Results of this study demonstrate the benefits of combining GPS data loggers, fisheries data, biological surveys, and cooperative research with fishers to produce spatio-temporally explicit information relevant to the science and management of fish spawning aggregations and the spatial planning of marine reserves. PMID:22359736
Downscaling Land Surface Temperature in an Urban Area: A Case Study for Hamburg, Germany
NASA Astrophysics Data System (ADS)
Bechtel, Benjamin; Zakšek, Klemen
2013-04-01
Land surface temperature (LST) is an important parameter for the urban radiation and heat balance and a boundary condition for the atmospheric urban heat island (UHI). The increase in urban surface temperatures compared to the surrounding area (surface urban heat island, SUHI) has been described and analysed with satellite-based measurements for several decades. Besides continuous progress in the development of new sensors, an operational monitoring is still severely limited by physical constraints regarding the spatial and temporal resolution of the satellite data. Essentially, two measurement concepts must be distinguished: Sensors on geostationary platforms have high temporal (several times per hour) and poor spatial resolution (~ 5 km) while those on low earth orbiters have high spatial (~ 100-1000 m) resolution and a long return period (one day to several weeks). To enable an observation with high temporal and spatial resolution, a downscaling scheme for LST from the Spinning Enhanced Visible Infra-Red Imager (SEVIRI) sensor onboard the geostationary meteorological Meteosat 9 to spatial resolutions between 100 and 1000 m was developed and tested for Hamburg in this case study. Therefore, various predictor sets (including parameters derived from multi-temporal thermal data, NDVI, and morphological parameters) were tested. The relationship between predictors and LST was empirically calibrated in the low resolution domain and then transferred to the high resolution domain. The downscaling was validated with LST data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) for the same time. Aggregated parameters from multi-temporal thermal data (in particular annual cycle parameters and principal components) proved particularly suitable. The results for the highest resolution of 100 m showed a high explained variance (R² = 0.71) and relatively low root mean square errors (RMSE = 2.2 K). 
Larger predictor sets resulted in higher errors, because they tended to overfit. As expected the results were better for coarser spatial resolutions (R² = 0.80, RMSE = 1.8 K for 500 m). These results are similar or slightly better than in previous studies, although we are not aware of any study with a comparably large downscaling factor. A considerable percentage of the error is systematic due to the different viewing geometry of the sensors (the high resolution LST was overestimated about 1.3 K). The study shows that downscaling of SEVIRI LST is possible up to a resolution of 100 m for urban areas and that multi-temporal thermal data are particularly suitable as predictors.
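The calibrate-coarse / apply-fine principle at the core of the downscaling can be sketched with a single predictor. The NDVI and LST values below are invented for illustration; the study used larger predictor sets and an empirical (not necessarily linear) relationship:

```python
# Sketch of statistical downscaling: fit an LST-predictor relation at the
# coarse (SEVIRI) scale, then transfer it to the fine-scale predictor field
# to estimate 100 m LST. Synthetic values; vegetated pixels are cooler.
coarse_ndvi = [0.2, 0.4, 0.6, 0.8]
coarse_lst = [305.0, 301.0, 297.0, 293.0]    # K

n = len(coarse_ndvi)
x_mean = sum(coarse_ndvi) / n
y_mean = sum(coarse_lst) / n
slope = sum((x - x_mean) * (y - y_mean)
            for x, y in zip(coarse_ndvi, coarse_lst)) / \
        sum((x - x_mean) ** 2 for x in coarse_ndvi)
intercept = y_mean - slope * x_mean

# Apply the coarse-scale relation in the high resolution domain:
fine_ndvi = [0.1, 0.5, 0.9]                  # 100 m pixels in one SEVIRI cell
fine_lst = [intercept + slope * v for v in fine_ndvi]
```

The transfer step assumes the predictor-LST relation is scale-invariant, which is exactly the assumption the ASTER validation in the study tests.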
The Impact of Measurement Error on the Accuracy of Individual and Aggregate SGP
ERIC Educational Resources Information Center
McCaffrey, Daniel F.; Castellano, Katherine E.; Lockwood, J. R.
2015-01-01
Student growth percentiles (SGPs) express students' current observed scores as percentile ranks in the distribution of scores among students with the same prior-year scores. A common concern about SGPs at the student level, and mean or median SGPs (MGPs) at the aggregate level, is potential bias due to test measurement error (ME). Shang,…
Bhattarai, Bishnu P.; Myers, Kurt S.; Bak-Jensen, Brigitte; ...
2017-05-17
This paper determines optimum aggregation areas for a given distribution network considering the spatial distribution of loads and the costs of aggregation. An elitist genetic algorithm combined with a hierarchical clustering and a Thevenin network reduction is implemented to compute strategic locations and aggregate demand within each area. The aggregation reduces large distribution networks having thousands of nodes to an equivalent network with a few aggregated loads, thereby significantly reducing the computational burden. Furthermore, it not only helps distribution system operators make faster operational decisions by showing during which time of day flexibility will be needed, from which specific area, and in which amount, but also enables the flexibilities stemming from small distributed resources to be traded in various power/energy markets. A combination of central and local aggregation schemes, where a central aggregator enables market participation while local aggregators materialize the accepted bids, is implemented to realize this concept. The effectiveness of the proposed method is evaluated by comparing network performances with and without aggregation. Finally, for a given network configuration, the steady-state performance of the aggregated network is highly accurate (≈ ±1.5% error) compared to the very high errors associated with forecasts of individual consumer demand.
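One ingredient of the approach, clustering loads into aggregation areas, can be sketched greedily. This is a generic agglomerative clustering on invented load data, not the paper's combined genetic-algorithm/Thevenin method:

```python
import math

# Rough sketch of hierarchical load aggregation (hypothetical data): merge
# the two closest clusters until the target number of areas remains, then
# each area carries the summed demand of its loads.
def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

loads = [((0, 0), 10.0), ((1, 0), 5.0), ((10, 10), 8.0), ((11, 9), 7.0)]
clusters = [([xy], demand) for xy, demand in loads]   # (points, total demand)

target_areas = 2
while len(clusters) > target_areas:
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            d = math.dist(centroid(clusters[i][0]), centroid(clusters[j][0]))
            if best is None or d < best[0]:
                best = (d, i, j)
    _, i, j = best
    merged = (clusters[i][0] + clusters[j][0], clusters[i][1] + clusters[j][1])
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]

aggregated_demand = sorted(d for _, d in clusters)
```

The two nearby load pairs collapse into two areas, each representing its member loads with a single aggregate demand, which is the reduction that cuts the computational burden.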
Temporal lobe stimulation reveals anatomic distinction between auditory naming processes.
Hamberger, M J; Seidel, W T; Goodman, R R; Perrine, K; McKhann, G M
2003-05-13
Language errors induced by cortical stimulation can provide insight into the function(s) supported by the area stimulated. The authors observed that some stimulation-induced errors during auditory description naming were characterized by tip-of-the-tongue responses or paraphasic errors, suggesting expressive difficulty, whereas others were qualitatively different, suggesting receptive difficulty. They hypothesized that these two response types reflected disruption at different stages of auditory verbal processing and that these "subprocesses" might be supported by anatomically distinct cortical areas. The objective was to explore the topographic distribution of error types in auditory verbal processing. Twenty-one patients requiring left temporal lobe surgery underwent preresection language mapping using direct cortical stimulation. Auditory naming was tested at temporal sites extending from 1 cm from the anterior tip to the parietal operculum. Errors were dichotomized as either "expressive" or "receptive," and the topographic distribution of error types was explored. Sites associated with the two error types were topographically distinct from one another. Most receptive sites were located in the middle portion of the superior temporal gyrus (STG), whereas most expressive sites fell outside this region, scattered along lateral temporal and temporoparietal cortex. Results raise clinical questions regarding the inclusion of the STG in temporal lobe epilepsy surgery and suggest that more detailed cortical mapping might enable better prediction of postoperative language decline. From a theoretical perspective, the results carry implications for the understanding of structure-function relations underlying temporal lobe mediation of auditory language processing.
The problem with simple lumped parameter models: Evidence from tritium mean transit times
NASA Astrophysics Data System (ADS)
Stewart, Michael; Morgenstern, Uwe; Gusyev, Maksym; Maloszewski, Piotr
2017-04-01
Simple lumped parameter models (LPMs) based on assuming homogeneity and stationarity in catchments and groundwater bodies are widely used to model and predict hydrological system outputs. However, most systems are not homogeneous or stationary, and errors resulting from disregard of the real heterogeneity and non-stationarity of such systems are not well understood and rarely quantified. As an example, mean transit times (MTTs) of streamflow are usually estimated from tracer data using simple LPMs. The MTT or transit time distribution of water in a stream reveals basic catchment properties such as water flow paths, storage and mixing. Importantly however, Kirchner (2016a) has shown that there can be large (several hundred percent) aggregation errors in MTTs inferred from seasonal cycles in conservative tracers such as chloride or stable isotopes when they are interpreted using simple LPMs (i.e. a range of gamma models or GMs). Here we show that MTTs estimated using tritium concentrations are similarly affected by aggregation errors due to heterogeneity and non-stationarity when interpreted using simple LPMs (e.g. GMs). The tritium aggregation error arises from the strong nonlinearity between tritium concentrations and MTT, whereas for seasonal tracer cycles it is due to the nonlinearity between tracer cycle amplitudes and MTT. In effect, water from young subsystems in the catchment outweighs water from old subsystems. The main difference between the aggregation errors with the different tracers is that with tritium it applies at much greater ages than it does with seasonal tracer cycles. We stress that the aggregation errors arise when simple LPMs are applied (with simple LPMs the hydrological system is assumed to be a homogeneous whole with parameters representing averages for the system). 
With well-chosen compound LPMs (which are combinations of simple LPMs) on the other hand, aggregation errors are very much smaller because young and old water flows are treated separately. "Well-chosen" means that the compound LPM is based on hydrologically- and geologically-validated information, and the choice can be assisted by matching simulations to time series of tritium measurements. References: Kirchner, J.W. (2016a): Aggregation in environmental systems - Part 1: Seasonal tracer cycles quantify young water fractions, but not mean transit times, in spatially heterogeneous catchments. Hydrol. Earth Syst. Sci. 20, 279-297. Stewart, M.K., Morgenstern, U., Gusyev, M.A., Maloszewski, P. 2016: Aggregation effects on tritium-based mean transit times and young water fractions in spatially heterogeneous catchments and groundwater systems, and implications for past and future applications of tritium. Submitted to Hydrol. Earth Syst. Sci., 10 October 2016, doi:10.5194/hess-2016-532.
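The aggregation error from the nonlinear concentration-age relation can be demonstrated numerically. The sketch below assumes a simple piston-flow (decay-only) response and equal mixing of two subsystems; the study's LPMs (e.g. gamma models) are more elaborate, so treat this as a conceptual illustration only:

```python
import math

# Sketch of tritium aggregation bias (assumed piston-flow mixing): two
# subsystems with MTTs of 5 and 50 years contribute equally, but inverting
# their mixed tritium concentration with one lumped model gives an apparent
# MTT well below the true average of 27.5 years.
HALF_LIFE = 12.32                        # tritium half-life, years
LAM = math.log(2) / HALF_LIFE

def concentration(mtt, c0=1.0):
    """Tritium concentration after mean transit time `mtt` (piston flow)."""
    return c0 * math.exp(-LAM * mtt)

c_mix = 0.5 * (concentration(5) + concentration(50))
apparent_mtt = -math.log(c_mix) / LAM    # single-LPM inversion of the mixture
true_mean_mtt = 0.5 * (5 + 50)

# apparent_mtt < true_mean_mtt: the young subsystem dominates the mixture,
# which is the "young water outweighs old water" effect described above.
```

A compound LPM that models the 5-year and 50-year stores separately would avoid this bias, which is the paper's recommendation.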
Exploring the structure and function of temporal networks with dynamic graphlets
Hulovatyy, Y.; Chen, H.; Milenković, T.
2015-01-01
Motivation: With the increasing availability of temporal real-world networks, the question arises of how to study these data efficiently. One can model a temporal network as a single aggregate static network, or as a series of time-specific snapshots, each an aggregate static network over the corresponding time window. One can then use established methods for static analysis on the resulting aggregate network(s), but this loses valuable temporal information, either completely or at the interfaces between snapshots, respectively. Here, we develop a novel approach for studying a temporal network more explicitly, by capturing inter-snapshot relationships. Results: We base our methodology on well-established graphlets (subgraphs), which have proven useful in numerous contexts in static network research. We develop new theory to allow for graphlet-based analyses of temporal networks. Our new notion of dynamic graphlets is different from existing dynamic network approaches that are based on temporal motifs (statistically significant subgraphs). The latter have limitations: their results depend on the choice of a null network model that is required to evaluate the significance of a subgraph, and choosing a good null model is non-trivial. Our dynamic graphlets overcome the limitations of the temporal motifs. Also, when we aim to characterize the structure and function of an entire temporal network or of individual nodes, our dynamic graphlets outperform the static graphlets. Clearly, accounting for temporal information helps. We apply dynamic graphlets to temporal age-specific molecular network data to deepen our limited knowledge about human aging. Availability and implementation: http://www.nd.edu/∼cone/DG. Contact: tmilenko@nd.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26072480
Nanocubes for real-time exploration of spatiotemporal datasets.
Lins, Lauro; Klosowski, James T; Scheidegger, Carlos
2013-12-01
Consider real-time exploration of large multidimensional spatiotemporal datasets with billions of entries, each defined by a location, a time, and other attributes. Are certain attributes correlated spatially or temporally? Are there trends or outliers in the data? Answering these questions requires aggregation over arbitrary regions of the domain and attributes of the data. Many relational databases implement the well-known data cube aggregation operation, which in a sense precomputes every possible aggregate query over the database. Data cubes are sometimes assumed to take a prohibitively large amount of space, and to consequently require disk storage. In contrast, we show how to construct a data cube that fits in a modern laptop's main memory, even for billions of entries; we call this data structure a nanocube. We present algorithms to compute and query a nanocube, and show how it can be used to generate well-known visual encodings such as heatmaps, histograms, and parallel coordinate plots. When compared to exact visualizations created by scanning an entire dataset, nanocube plots have bounded screen error across a variety of scales, thanks to a hierarchical structure in space and time. We demonstrate the effectiveness of our technique on a variety of real-world datasets, and present memory, timing, and network bandwidth measurements. We find that the timings for the queries in our examples are dominated by network and user-interaction latencies.
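The data-cube idea underlying nanocubes can be conveyed with a tiny in-memory sketch. This is only the precomputation flavor, on invented events, and not the nanocube's shared hierarchical structure that keeps memory bounded:

```python
from collections import defaultdict

# Minimal data-cube flavor (hypothetical events): precompute counts for every
# combination of coarse spatial bin, time bin and a categorical attribute
# (None = "all"), so range/roll-up queries become dictionary lookups.
events = [
    (40.7, -74.0, 3, "tweet"), (40.8, -73.9, 3, "photo"),
    (34.0, -118.2, 4, "tweet"), (40.7, -74.1, 4, "tweet"),
]

cube = defaultdict(int)
for lat, lon, hour, kind in events:
    cell = (round(lat), round(lon))            # coarse spatial bin
    for key in [(cell, hour, kind), (cell, None, kind),
                (None, hour, kind), (None, None, kind)]:
        cube[key] += 1                         # update every roll-up level

# "How many tweets overall?" is now a single lookup, no dataset scan:
total_tweets = cube[(None, None, "tweet")]
```

Nanocubes make this practical at billions of entries by sharing substructure across bins and nesting space, time and attributes hierarchically, rather than materializing every key as this sketch does.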
NASA Astrophysics Data System (ADS)
Jiang, C.; Xu, Q.; Gu, Y. K.; Qian, X. Y.; He, J. N.
2018-04-01
Aerosol Optical Depth (AOD) is of great value for studying air mass and its changes. In this paper, we studied the spatio-temporal changes of AOD and its driving factors in Jiangsu Province from 2007 to 2016, based on a spatial autocorrelation model, a gravity model and multiple regression analysis. The results showed that, in terms of spatial distribution, the southern AOD values are higher and the high-value aggregation areas are significant, while the northern AOD values are lower but the low-value aggregation areas constantly change. The AOD gravity centers showed a clear point-like aggregation. In terms of temporal changes, the overall AOD in Jiangsu Province increased year by year, with fluctuations. In terms of driving factors, the total number of vehicles, precipitation and temperature are important factors in the growth of AOD.
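One of the tools named above, spatial autocorrelation, is commonly measured with global Moran's I. The sketch below applies it to a tiny invented AOD transect with simple adjacency weights; the study's grid and weighting scheme are not specified here:

```python
# Global Moran's I on a hypothetical 1-D transect of AOD cells with
# adjacent-neighbour weights. I > 0 means similar values cluster in space.
def morans_i(values, weights):
    """values: n observations; weights: n x n spatial weight matrix."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_sum = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)

aod = [0.9, 0.8, 0.3, 0.2]        # high values adjacent to high, low to low
W = [
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
]
i_stat = morans_i(aod, W)          # positive: spatial clustering of AOD
```

A significantly positive I is what "high-value aggregation areas" in the abstract refers to; values near zero would indicate a spatially random AOD pattern.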
Performance of the Multi-Radar Multi-Sensor System over the Lower Colorado River, Texas
NASA Astrophysics Data System (ADS)
Bayabil, H. K.; Sharif, H. O.; Fares, A.; Awal, R.; Risch, E.
2017-12-01
Recently observed increases in the intensities and frequencies of climate extremes (e.g., floods, dam failure, and overtopping of river banks) necessitate the development of effective disaster prevention and mitigation strategies. Hydrologic models can be useful tools for predicting such events at different spatial and temporal scales. However, the accuracy and prediction capability of such models are often constrained by the availability of high-quality representative hydro-meteorological data (e.g., precipitation) required to calibrate and validate them. Improved technologies and products such as the Multi-Radar Multi-Sensor (MRMS) system, which allows the gathering and transmission of vast meteorological data, have been developed to meet such data needs. While the MRMS data are available at high spatial and temporal resolutions (1 km and 15 min, respectively), their accuracy in estimating precipitation is yet to be fully investigated. Therefore, the main objective of this study is to evaluate the performance of the MRMS system in capturing precipitation over the Lower Colorado River, Texas, using observations from a dense rain gauge network. Point-scale comparisons were made at 215 gauging locations using rain gauges and MRMS data from May 2015. In addition, the effects of temporal (30, 45, 60, 75, 90, 105, and 120 min) and spatial (4 to 50 km) data aggregation scales on the performance of the MRMS system were tested. Overall, the MRMS system (at 15 min temporal resolution) captured precipitation reasonably well, with an average R2 value of 0.65 and RMSE of 0.5 mm. In addition, spatial and temporal data aggregations resulted in increases in R2 values. However, reduction in RMSE was achieved only with an increase in spatial aggregation.
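The evaluation metrics and the aggregation effect can be reproduced on synthetic gauge/radar pairs. The numbers below are invented; only the pattern (R² rising under temporal aggregation as random errors cancel) mirrors the study's finding:

```python
# Sketch of the R^2 / RMSE evaluation at 15-min resolution and after
# aggregating to 30 min. Synthetic gauge and radar series (mm per 15 min).
gauge = [1.0, 0.0, 2.0, 3.0, 0.5, 1.5, 2.5, 0.0]
radar = [0.8, 0.2, 2.4, 2.7, 0.4, 1.8, 2.2, 0.1]

def r2_rmse(obs, est):
    n = len(obs)
    mean_obs = sum(obs) / n
    ss_res = sum((o - e) ** 2 for o, e in zip(obs, est))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - ss_res / ss_tot, (ss_res / n) ** 0.5

def aggregate(series, k):
    """Sum consecutive groups of k intervals (temporal aggregation)."""
    return [sum(series[i:i + k]) for i in range(0, len(series), k)]

r2_15, rmse_15 = r2_rmse(gauge, radar)
r2_30, rmse_30 = r2_rmse(aggregate(gauge, 2), aggregate(radar, 2))
# r2_30 > r2_15: offsetting radar errors cancel when intervals are summed.
```

Note that RMSE does not automatically shrink under temporal aggregation (summed totals are larger), consistent with the study's observation that only spatial aggregation reduced RMSE.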
NASA Astrophysics Data System (ADS)
Masselot, Pierre; Chebana, Fateh; Bélanger, Diane; St-Hilaire, André; Abdous, Belkacem; Gosselin, Pierre; Ouarda, Taha B. M. J.
2018-07-01
In environmental epidemiology studies, health response data (e.g. hospitalization or mortality) are often noisy because of hospital organization and other social factors. The noise in the data can hide the true signal related to the exposure. The signal can be unveiled by performing a temporal aggregation on health data and then using it as the response in regression analysis. From aggregated series, a general methodology is introduced to account for the particularities of an aggregated response in a regression setting. This methodology can be used with usually applied regression models in weather-related health studies, such as generalized additive models (GAM) and distributed lag nonlinear models (DLNM). In particular, the residuals are modelled using an autoregressive-moving average (ARMA) model to account for the temporal dependence. The proposed methodology is illustrated by modelling the influence of temperature on cardiovascular mortality in Canada. A comparison with classical DLNMs is provided and several aggregation methods are compared. Results show that there is an increase in the fit quality when the response is aggregated, and that the estimated relationship focuses more on the outcome over several days than the classical DLNM. More precisely, among various investigated aggregation schemes, it was found that an aggregation with an asymmetric Epanechnikov kernel is more suited for studying the temperature-mortality relationship.
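The kernel-based aggregation step can be sketched directly. The one-sided weighting below is a plausible reading of "asymmetric Epanechnikov kernel" (weights only over preceding days); the study's exact kernel placement and the mortality counts here are assumptions:

```python
# Sketch of temporal aggregation of a health series with a one-sided
# (asymmetric) Epanechnikov kernel over the preceding days, before the
# aggregated series enters a regression (e.g. a DLNM).
def epanechnikov_weights(width):
    """One-sided kernel over lags 0..width-1, normalized to sum to 1."""
    raw = [0.75 * (1 - (lag / width) ** 2) for lag in range(width)]
    s = sum(raw)
    return [w / s for w in raw]

def aggregate(series, width):
    w = epanechnikov_weights(width)
    out = []
    for t in range(width - 1, len(series)):
        out.append(sum(w[k] * series[t - k] for k in range(width)))
    return out

deaths = [12, 15, 11, 14, 20, 22, 18, 13]   # daily counts (made up)
smoothed = aggregate(deaths, width=3)        # weighted over days t, t-1, t-2
```

Recent days get the largest weight and the influence tapers off with lag, which matches the finding that the aggregated response reflects the outcome over several days rather than a single one.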
NASA Astrophysics Data System (ADS)
Rabaut, Marijn; Vincx, Magda; Degraer, Steven
2009-03-01
The positive effects of the tube dwelling polychaete Lanice conchilega on the associated benthic community emphasize this bio-engineer’s habitat structuring capacity (Rabaut et al. in Estuar Coastal Shelf Sci, 2007). Therefore, L. conchilega aggregations are often referred to as reefs. The reef building capacity of ecosystem engineers is important for marine management, as recognition as a reef builder will increase the protected status of the concerned species. To classify as reefs, however, bio-engineering activities need to significantly alter several habitat characteristics: elevation, sediment consolidation, spatial extent, patchiness, reef builder density, biodiversity, community structure, longevity and stability [guidelines to apply the EU reef-definition by Hendrick and Foster-Smith (J Mar Biol Assoc UK 86:665-677, 2006)]. This study investigates the physical and temporal characteristics of high density aggregations of L. conchilega. Results show that the elevation and sediment consolidation of the biogenic mounds were significantly higher compared to the surrounding unstructured sediment. Areas with L. conchilega aggregations tend to be extensive and patchiness is high (coverage 5-18%). The discussion evaluates whether L. conchilega aggregations can be considered as reefs, addressing physical, biological and temporal characteristics. Individual aggregations were found to persist for several years if yearly renewal of existing aggregations through juvenile settlement occurred. This renewal is enhanced by local hydrodynamic changes and the availability of attaching structures (adult tubes). We conclude that the application of the EU definition for reefs provides evidence that all physical and biological characteristics are present to classify L. conchilega as a reef builder. For temporal characteristics, this study shows that several mechanisms exist for reefs to persist for a longer period of time. 
However, a direct evidence of long-lived individual reefs does not exist. As a range of aggregation development exists, ‘reefiness’ is not equal for all aggregations and a scoring table to quantify L. conchilega reefiness is presented.
Spatio-temporal cluster detection of chickenpox in Valencia, Spain in the period 2008-2012.
Iftimi, Adina; Martínez-Ruiz, Francisco; Míguez Santiyán, Ana; Montes, Francisco
2015-05-18
Chickenpox is a highly contagious airborne disease caused by Varicella zoster, which affects nearly all non-immune children worldwide with an annual incidence estimated at 80-90 million cases. To analyze the spatio-temporal pattern of chickenpox incidence in the city of Valencia, Spain, two complementary statistical approaches were used. First, we evaluated the existence of clusters and spatio-temporal interaction; second, we used this information to find the locations of the spatio-temporal clusters via the space-time permutation model. The first method detects any aggregation in our data but does not provide the spatial and temporal information. The second method gives the locations, areas and time-frames of the spatio-temporal clusters. An overall decreasing time trend, a pronounced 12-monthly periodicity and two complementary periods were observed. Several areas with high incidence, surrounding the center of the city, were identified. The existence of aggregation in time and space was observed, and a number of spatio-temporal clusters were located.
Entropy of space-time outcome in a movement speed-accuracy task.
Hsieh, Tsung-Yu; Pacheco, Matheus Maia; Newell, Karl M
2015-12-01
The experiment reported was set up to investigate the space-time entropy of movement outcome as a function of a range of spatial (10, 20 and 30 cm) and temporal (250-2500 ms) criteria in a discrete aiming task. The variability and information entropy of the movement spatial and temporal errors, considered separately, respectively increased and decreased on their own dimension as movement velocity increased. However, the joint space-time entropy was lowest when the relative contributions of the spatial and temporal task criteria were comparable (i.e., mid-range of space-time constraints), and it increased with a greater trade-off between spatial or temporal task demands, revealing a U-shaped function across space-time task criteria. The traditional speed-accuracy functions of spatial error and temporal error considered independently mapped to this joint space-time U-shaped entropy function. The trade-off in movement tasks with joint space-time criteria is between spatial error and timing error, rather than movement speed and accuracy.
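The joint space-time entropy measure can be sketched by discretizing errors into bins and computing Shannon entropy over the joint distribution. The error values and bin widths below are invented; the study's exact binning is not reproduced here:

```python
import math
from collections import Counter

# Sketch of joint space-time outcome entropy (synthetic errors): bin spatial
# and temporal errors jointly, then compute Shannon entropy of the 2-D cells.
def joint_entropy(spatial_err, temporal_err, bin_s=1.0, bin_t=50.0):
    cells = Counter(
        (round(s / bin_s), round(t / bin_t))
        for s, t in zip(spatial_err, temporal_err)
    )
    n = sum(cells.values())
    return -sum((c / n) * math.log2(c / n) for c in cells.values())

spatial = [0.2, -1.1, 0.8, 2.3, -0.4, 1.9]   # spatial errors, cm
temporal = [30, -120, 45, 200, -15, 160]     # temporal errors, ms
h = joint_entropy(spatial, temporal)          # bits
```

Computing this quantity across a grid of spatial and temporal task criteria is what reveals the U-shaped function reported above: lowest entropy where the two constraints contribute comparably, higher at either extreme.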
LACIE performance predictor FOC users manual
NASA Technical Reports Server (NTRS)
1976-01-01
The LACIE Performance Predictor (LPP) is a computer simulation of the LACIE process for predicting worldwide wheat production. The simulation provides for the introduction of various errors into the system and provides estimates based on these errors, thus allowing the user to determine the impact of selected error sources. The FOC LPP simulates the acquisition of the sample segment data by the LANDSAT Satellite (DAPTS), the classification of the agricultural area within the sample segment (CAMS), the estimation of the wheat yield (YES), and the production estimation and aggregation (CAS). These elements include data acquisition characteristics, environmental conditions, classification algorithms, the LACIE aggregation and data adjustment procedures. The operational structure for simulating these elements consists of the following key programs: (1) LACIE Utility Maintenance Process, (2) System Error Executive, (3) Ephemeris Generator, (4) Access Generator, (5) Acquisition Selector, (6) LACIE Error Model (LEM), and (7) Post Processor.
Sensitivity of geographic information system outputs to errors in remotely sensed data
NASA Technical Reports Server (NTRS)
Ramapriyan, H. K.; Boyd, R. K.; Gunther, F. J.; Lu, Y. C.
1981-01-01
The sensitivity of the outputs of a geographic information system (GIS) to errors in inputs derived from remotely sensed data (RSD) is investigated using a suitability model with per-cell decisions and a gridded geographic data base whose cells are larger than the RSD pixels. The process of preparing RSD as input to a GIS is analyzed, and the errors associated with classification and registration are examined. In the case of the model considered, it is found that the errors caused during classification and registration are partially compensated by the aggregation of pixels. The compensation is quantified by means of an analytical model, a Monte Carlo simulation, and experiments with Landsat data. The results show that error reductions of the order of 50% occur because of aggregation when 25 pixels of RSD are used per cell in the geographic data base.
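The compensating effect of aggregation can be illustrated with a small Monte Carlo sketch. The parameters and the majority-vote rule below are hypothetical simplifications, not the authors' suitability model: each pixel is independently misclassified with some probability, and a cell's label is decided over its 25 pixels, so per-pixel errors partially cancel at the cell level.

```python
import numpy as np

# Hypothetical illustration (not the authors' model): if each pixel is
# independently misclassified with probability `pixel_error` and a cell takes
# the majority label of its 25 pixels, per-pixel errors partially cancel and
# the cell-level error rate drops sharply.
def cell_error_rate(pixel_error=0.15, pixels_per_cell=25, n_cells=20000,
                    seed=0):
    rng = np.random.default_rng(seed)
    errors = rng.random((n_cells, pixels_per_cell)) < pixel_error
    wrong_cells = errors.sum(axis=1) > pixels_per_cell // 2  # majority wrong
    return wrong_cells.mean()

print(f"pixel error 0.15 -> cell error {cell_error_rate():.4f}")
```

Majority voting compensates more strongly than the paper's per-cell suitability decisions; the analytical model and Landsat experiments reported error reductions of the order of 50% for 25 pixels per cell.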
Power law analysis of the human microbiome.
Ma, Zhanshan Sam
2015-11-01
Taylor's (1961, Nature, 189:732) power law, a power function (V = am^b) describing the scaling relationship between the mean and variance of population abundances of organisms, has been found to govern the population abundance distributions of single species in both space and time in macroecology. It is regarded as one of the few generalities in ecology, and its parameter b has been widely applied to characterize the spatial aggregation (i.e. heterogeneity) and temporal stability of single-species populations. Here, we test its applicability to bacterial populations in the human microbiome using extensive data sets generated by the US-NIH Human Microbiome Project (HMP). We further propose extending Taylor's power law from the population to the community level, and accordingly introduce four types of power-law extensions (PLEs): type I PLE for community spatial aggregation (heterogeneity), type II PLE for community temporal aggregation (stability), type III PLE for mixed-species population spatial aggregation (heterogeneity) and type IV PLE for mixed-species population temporal aggregation (stability). Our results show that fits of the four PLEs to HMP data were statistically highly significant and their parameters are ecologically sound, confirming the validity of the power law at both the population and community levels. These findings not only provide a powerful tool to characterize the aggregation of populations and communities in both time and space, offering important insights into community heterogeneity in space and/or stability in time, but also underscore the three general properties of power laws (scale invariance, no average and universality) and their specific manifestations in our four PLEs. © 2015 John Wiley & Sons Ltd.
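Taylor's power law V = am^b is conventionally estimated by least squares on the log-log scale, log V = log a + b log m. A minimal sketch with synthetic mean/variance pairs (the study itself fit per-taxon HMP abundance data):

```python
import numpy as np

# Sketch of fitting Taylor's power law V = a * m**b by linear regression on
# the log-log scale: log V = log a + b * log m. Data below are synthetic.
def fit_power_law(means, variances):
    b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
    return np.exp(log_a), b

rng = np.random.default_rng(42)
m = rng.uniform(1.0, 100.0, 200)                       # mean abundances
v = 2.0 * m**1.8 * np.exp(rng.normal(0.0, 0.05, 200))  # noisy V = 2 m^1.8
a, b = fit_power_law(m, v)
print(f"a ≈ {a:.2f}, b ≈ {b:.2f}")
```

A fitted b > 1 is the usual signature of spatial aggregation (heterogeneity) relative to a random (Poisson, b = 1) arrangement.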
Gauge-adjusted rainfall estimates from commercial microwave links
NASA Astrophysics Data System (ADS)
Fencl, Martin; Dohnal, Michal; Rieckermann, Jörg; Bareš, Vojtěch
2017-01-01
Increasing urbanization makes it more and more important to have accurate stormwater runoff predictions, especially with potentially severe weather and climatic changes on the horizon. Such stormwater predictions in turn require reliable rainfall information. Especially for urban centres, the problem is that the spatial and temporal resolution of rainfall observations should be substantially higher than commonly provided by weather services with their standard rainfall monitoring networks. Commercial microwave links (CMLs) are non-traditional sensors, which were proposed about a decade ago as a promising solution. CMLs are line-of-sight radio connections widely used by operators of mobile telecommunication networks. They are typically very dense in urban areas and can provide path-integrated rainfall observations at sub-minute resolution. Unfortunately, quantitative precipitation estimates (QPEs) from CMLs are often highly biased due to several epistemic uncertainties, which significantly limit their usability. In this manuscript we therefore suggest a novel method to reduce this bias by adjusting QPEs to existing rain gauges. The method has been specifically designed to produce reliable results even with comparably distant rain gauges or cumulative observations. This eliminates the need to install reference gauges and makes it possible to work with existing information. First, the method is tested on data from a dedicated experiment, where a CML has been specifically set up for rainfall monitoring experiments, as well as operational CMLs from an existing cellular network. Second, we assess the performance for several experimental layouts of ground truth
from rain gauges (RGs) with different spatial and temporal resolutions. The results suggest that CMLs adjusted by RGs with a temporal aggregation of up to 1 h (i) provide precise high-resolution QPEs (relative error < 7 %, Nash-Sutcliffe efficiency coefficient > 0.75) and (ii) that the combination of both sensor types clearly outperforms each individual monitoring system. Unfortunately, adjusting CML observations to RGs with longer aggregation intervals of up to 24 h has drawbacks. Although it substantially reduces bias, it unfavourably smoothes out rainfall peaks of high intensities, which is undesirable for stormwater management. A similar, but less severe, effect occurs due to spatial averaging when CMLs are adjusted to remote RGs. Nevertheless, even here, adjusted CMLs perform better than RGs alone. Furthermore, we provide the first evidence that the joint use of multiple CMLs together with RGs also reduces bias in their QPEs. In summary, we believe that our adjustment method has great potential to improve the space-time resolution of current urban rainfall monitoring networks. Nevertheless, future work should aim to better understand the reason for the observed systematic error in QPEs from CMLs.
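As a rough sketch of the gauge-adjustment idea (a simplification, not the authors' exact method), a multiplicative bias factor derived from a coarse rain-gauge accumulation can be applied to the high-resolution CML series, removing bias while preserving the CML's temporal pattern:

```python
import numpy as np

# Illustrative bias adjustment: scale the high-resolution CML rainfall series
# so its total matches a coarse rain-gauge accumulation. All values invented.
def adjust_cml(cml_mm, gauge_total_mm):
    cml_total = cml_mm.sum()
    if cml_total == 0.0:
        return cml_mm.copy()            # nothing to rescale in a dry period
    return cml_mm * (gauge_total_mm / cml_total)

cml = np.array([0.0, 1.2, 3.4, 2.0, 0.4])       # biased sub-hourly CML (mm)
adjusted = adjust_cml(cml, gauge_total_mm=5.0)  # 1 h gauge accumulation (mm)
print(f"{adjusted.sum():.1f} mm")               # matches the gauge total
```

Note the trade-off the abstract describes: the longer the gauge accumulation window used to derive the factor, the smoother (and less peaked) the adjusted series becomes.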
Embryonic Mutant Huntingtin Aggregate Formation in Mouse Models of Huntington's Disease.
Osmand, Alexander P; Bichell, Terry Jo; Bowman, Aaron B; Bates, Gillian P
2016-12-15
The role of aggregate formation in the pathophysiology of Huntington's disease (HD) remains uncertain. However, the temporal appearance of aggregates tends to correlate with the onset of symptoms and the numbers of neuropil aggregates correlate with the progression of clinical disease. Using highly sensitive immunohistochemical methods we have detected the appearance of diffuse aggregates during embryonic development in the R6/2 and YAC128 mouse models of HD. These are initially seen in developing axonal tracts and appear to spread throughout the cerebrum in the early neonate.
Masselot, Pierre; Chebana, Fateh; Bélanger, Diane; St-Hilaire, André; Abdous, Belkacem; Gosselin, Pierre; Ouarda, Taha B M J
2018-07-01
In environmental epidemiology studies, health response data (e.g. hospitalization or mortality) are often noisy because of hospital organization and other social factors. The noise in the data can hide the true signal related to the exposure. The signal can be unveiled by performing a temporal aggregation on health data and then using it as the response in regression analysis. From aggregated series, a general methodology is introduced to account for the particularities of an aggregated response in a regression setting. This methodology can be used with usually applied regression models in weather-related health studies, such as generalized additive models (GAM) and distributed lag nonlinear models (DLNM). In particular, the residuals are modelled using an autoregressive-moving average (ARMA) model to account for the temporal dependence. The proposed methodology is illustrated by modelling the influence of temperature on cardiovascular mortality in Canada. A comparison with classical DLNMs is provided and several aggregation methods are compared. Results show that there is an increase in the fit quality when the response is aggregated, and that the estimated relationship focuses more on the outcome over several days than the classical DLNM. More precisely, among various investigated aggregation schemes, it was found that an aggregation with an asymmetric Epanechnikov kernel is more suited for studying the temperature-mortality relationship. Copyright © 2018. Published by Elsevier B.V.
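One of the aggregation schemes compared in the study, a one-sided (past-only) Epanechnikov kernel, can be sketched as a normalized weighted sum over recent days. The bandwidth (7 lags) and the stand-in daily series are illustrative assumptions:

```python
import numpy as np

# Sketch of kernel-weighted temporal aggregation of a daily health series
# with an asymmetric (past-only) Epanechnikov kernel.
def epanechnikov_weights(n_lags):
    u = np.arange(n_lags) / n_lags       # lags 0..n_lags-1 mapped into [0, 1)
    w = 0.75 * (1.0 - u**2)              # Epanechnikov kernel shape
    return w / w.sum()                   # normalize weights to sum to 1

def aggregate(series, n_lags=7):
    w = epanechnikov_weights(n_lags)
    out = np.full(series.size, np.nan)
    for t in range(n_lags - 1, series.size):
        window = series[t - n_lags + 1:t + 1][::-1]  # index 0 = day t
        out[t] = w @ window
    return out

daily_counts = np.arange(30, dtype=float)   # stand-in for daily mortality
agg = aggregate(daily_counts)
```

The asymmetry encodes that only past and present outcomes can reflect a given day's exposure, which is why this kernel suited the temperature-mortality analysis.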
Approaching Error-Free Customer Satisfaction through Process Change and Feedback Systems
ERIC Educational Resources Information Center
Berglund, Kristin M.; Ludwig, Timothy D.
2009-01-01
Employee-based errors result in quality defects that can often impact customer satisfaction. This study examined the effects of a process change and feedback system intervention on error rates of 3 teams of retail furniture distribution warehouse workers. Archival records of error codes were analyzed and aggregated as the measure of quality. The…
Centrality measures in temporal networks with time series analysis
NASA Astrophysics Data System (ADS)
Huang, Qiangjuan; Zhao, Chengli; Zhang, Xue; Wang, Xiaojie; Yi, Dongyun
2017-05-01
The study of identifying important nodes in networks has wide application in different fields. However, current research is mostly based on static or aggregated networks. Recently, increasing attention to networks with time-varying structure has promoted the study of node centrality in temporal networks. In this paper, we define a supra-evolution matrix to depict the temporal network structure. Using time series analysis, the relationships between different time layers can be learned automatically. Based on the special form of the supra-evolution matrix, the eigenvector centrality calculation is turned into the calculation of eigenvectors of several low-dimensional matrices through iteration, which effectively reduces the computational complexity. Experiments are carried out on two real-world temporal networks, the Enron email communication network and the DBLP co-authorship network, and the results show that our method is more efficient at discovering important nodes than the common aggregating method.
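The building block that the supra-evolution approach generalizes is eigenvector centrality computed by power iteration on an adjacency matrix. A minimal sketch on a 4-node toy network (an assumption for illustration, not the Enron or DBLP data):

```python
import numpy as np

# Eigenvector centrality via power iteration: repeatedly apply A and
# renormalize until the leading eigenvector is reached.
def eigenvector_centrality(A, n_iter=200, tol=1e-10):
    x = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(n_iter):
        x_new = A @ x
        x_new /= np.linalg.norm(x_new)   # renormalize each iteration
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

A = np.array([[0, 1, 1, 1],              # node 0 is a hub (degree 3)
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
centrality = eigenvector_centrality(A)
print(int(np.argmax(centrality)))  # 0: the hub is most central
```

The paper's contribution is to avoid running such an iteration on one huge supra-matrix by exploiting its block structure, iterating over several low-dimensional matrices instead.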
Remmersmann, Christian; Stürwald, Stephan; Kemper, Björn; Langehanenberg, Patrik; von Bally, Gert
2009-03-10
In temporal phase-shifting-based digital holographic microscopy, high-resolution phase contrast imaging requires optimized conditions for hologram recording and phase retrieval. To optimize the phase resolution, for the example of a variable three-step algorithm, a theoretical analysis of statistical errors, digitization errors, uncorrelated errors, and errors due to a misaligned temporal phase shift is carried out. In a second step, the theoretically predicted results are compared to the measured phase noise obtained from comparative experimental investigations with several coherent and partially coherent light sources. Finally, the applicability for noise reduction is demonstrated by quantitative phase contrast imaging of pancreas tumor cells.
Wetherbee, G.A.; Latysh, N.E.; Gordon, J.D.
2005-01-01
Data from the U.S. Geological Survey (USGS) collocated-sampler program for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) are used to estimate the overall error of NADP/NTN measurements. Absolute errors are estimated by comparison of paired measurements from collocated instruments. Spatial and temporal differences in absolute error were identified and are consistent with longitudinal distributions of NADP/NTN measurements and spatial differences in precipitation characteristics. The magnitude of error for calcium, magnesium, ammonium, nitrate, and sulfate concentrations, specific conductance, and sample volume is of minor environmental significance to data users. Data collected after a 1994 sample-handling protocol change exhibit less absolute error than data collected prior to 1994. Absolute errors are smaller during non-winter months than during winter months for selected constituents at sites where frozen precipitation is common. Minimum resolvable differences are estimated for different regions of the USA to aid spatial and temporal watershed analyses.
NASA Astrophysics Data System (ADS)
Owens, P. R.; Libohova, Z.; Seybold, C. A.; Wills, S. A.; Peaslee, S.; Beaudette, D.; Lindbo, D. L.
2017-12-01
The measurement errors and spatial prediction uncertainties of soil properties in the modeling community are usually assessed against measured values when available. However, of equal importance is the assessment of the impacts of errors and uncertainty on cost-benefit analyses and risk assessments. Soil pH was selected as one of the most commonly measured soil properties used for liming recommendations. The objective of this study was to assess the error size from different sources and their implications for management decisions. Error sources include measurement methods, laboratory sources, pedotransfer functions, database transactions, spatial aggregations, etc. Several databases of measured and predicted soil pH were used for this study, including the United States National Cooperative Soil Survey Characterization Database (NCSS-SCDB) and the US Soil Survey Geographic (SSURGO) Database. The distribution of errors among different sources, from measurement methods to spatial aggregation, showed a wide range of values. The greatest RMSE, 0.79 pH units, was from spatial aggregation (SSURGO vs. kriging), while the measurement methods had the lowest RMSE, 0.06 pH units. Assuming the order of data acquisition is based on the transaction distance, i.e. from measurement method to spatial aggregation, the RMSE increased from 0.06 to 0.8 pH units, suggesting an "error propagation". This has major implications for practitioners and the modeling community. Most soil liming rate recommendations are based on 0.1 pH unit increments, while the desired soil pH level increments are based on 0.4 to 0.5 pH units. Thus, even when the measured and desired target soil pH are the same, most guidelines recommend 1 ton ha-1 lime, which translates into 111 ha-1 that the farmer has to factor into the cost-benefit analysis.
However, this analysis needs to be based on prediction uncertainties (0.5-1.0 pH units) rather than measurement errors (0.1 pH units), which would translate into a 555-1,111 investment that needs to be assessed against the risk. The modeling community can benefit from such analysis; however, the error size and spatial distribution for global and regional predictions need to be assessed against the variability of other drivers and the impact on management decisions.
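For intuition only (assuming independent error sources, which the abstract does not claim): independent RMSEs combine in quadrature, which makes it easy to see why the spatial-aggregation error dominates the total:

```python
import math

# Quadrature combination of the two reported RMSE extremes (pH units).
# Independence is an assumption made purely for illustration.
sources = {"measurement method": 0.06, "spatial aggregation": 0.79}
combined = math.sqrt(sum(v**2 for v in sources.values()))
print(f"combined RMSE ≈ {combined:.2f} pH units")  # aggregation dominates
```

The combined value is essentially the 0.79 pH-unit aggregation error alone; the 0.06 pH-unit measurement error contributes under 1% of the combined variance.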
Entropy of Movement Outcome in Space-Time.
Lai, Shih-Chiung; Hsieh, Tsung-Yu; Newell, Karl M
2015-07-01
Information entropy of the joint spatial and temporal (space-time) probability of discrete movement outcome was investigated in two experiments as a function of different movement strategies (space-time, space, and time instructional emphases), task goals (point-aiming and target-aiming) and movement speed-accuracy constraints. The variance of the movement spatial and temporal errors was reduced by instructional emphasis on the respective spatial or temporal dimension, but increased on the other dimension. The space-time entropy was lower in the target-aiming task than in the point-aiming task but did not differ between instructional emphases. However, the joint probabilistic measure of spatial and temporal entropy showed that spatial error is traded for timing error in tasks with space-time criteria and that the pattern of movement error depends on the dimension of the measurement process. The unified entropy measure of movement outcome in space-time reveals a new relation for the speed-accuracy trade-off.
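The joint space-time entropy measure can be sketched as the Shannon entropy of the two-dimensional histogram of (spatial error, temporal error) outcomes. The samples and bin count below are illustrative assumptions, not the experimental data:

```python
import numpy as np

# Shannon entropy (bits) of the joint 2-D distribution of movement outcomes.
def joint_entropy(spatial_err, temporal_err, bins=10):
    hist, _, _ = np.histogram2d(spatial_err, temporal_err, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                          # drop empty cells (0 log 0 = 0)
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(1)
spatial = rng.normal(0.0, 1.0, 500)       # spatial errors (e.g. mm)
temporal = rng.normal(0.0, 1.0, 500)      # temporal errors (e.g. ms)
h = joint_entropy(spatial, temporal)
print(f"joint space-time entropy: {h:.2f} bits")
```

Shrinking the spread on one dimension while inflating it on the other (the trade-off both abstracts describe) changes each marginal entropy but can leave this joint measure near its minimum when the two contributions are balanced.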
Temporal Prediction Errors Affect Short-Term Memory Scanning Response Time.
Limongi, Roberto; Silva, Angélica M
2016-11-01
The Sternberg short-term memory scanning task has been used to unveil cognitive operations involved in time perception. Participants produce time intervals during the task, and the researcher explores how task performance affects interval production - where time estimation error is the dependent variable of interest. The perspective of predictive behavior regards time estimation error as a temporal prediction error (PE), an independent variable that controls cognition, behavior, and learning. Based on this perspective, we investigated whether temporal PEs affect short-term memory scanning. Participants performed temporal predictions while they maintained information in memory. Model inference revealed that PEs affected memory scanning response time independently of the memory-set size effect. We discuss the results within the context of formal and mechanistic models of short-term memory scanning and predictive coding, a Bayes-based theory of brain function. We state the hypothesis that our finding could be associated with weak frontostriatal connections and weak striatal activity.
Pornography classification: The hidden clues in video space-time.
Moreira, Daniel; Avila, Sandra; Perez, Mauricio; Moraes, Daniel; Testoni, Vanessa; Valle, Eduardo; Goldenstein, Siome; Rocha, Anderson
2016-11-01
As web technologies and social networks become part of the general public's life, the problem of automatically detecting pornography is on every parent's mind - nobody feels completely safe when their children go online. In this paper, we focus on video-pornography classification, a hard problem in which traditional methods often employ still-image techniques - labeling frames individually prior to a global decision. Frame-based approaches, however, ignore significant cogent information brought by motion. Here, we introduce a space-temporal interest point detector and descriptor called Temporal Robust Features (TRoF). TRoF was custom-tailored for efficient (low processing time and memory footprint) and effective (high classification accuracy and low false negative rate) motion description, particularly suited to the task at hand. We aggregate local information extracted by TRoF into a mid-level representation using Fisher Vectors, the state-of-the-art model of Bags of Visual Words (BoVW). We evaluate our original strategy, contrasting it both to commercial pornography detection solutions, and to BoVW solutions based upon other space-temporal features from the scientific literature. The performance is assessed using the Pornography-2k dataset, a new challenging pornographic benchmark, comprising 2000 web videos and 140 h of video footage. The dataset is also a contribution of this work and is highly varied, including both professional and amateur content, and it depicts several genres of pornography, from cartoon to live action, with diverse behavior and ethnicity. The best approach, based on a dense application of TRoF, yields a classification error reduction of almost 79% when compared to the best commercial classifier. A sparse description relying on the TRoF detector is also noteworthy, for yielding a classification error reduction of over 69%, with a 19× smaller memory footprint than the dense solution, and yet can also be implemented to meet real-time requirements.
Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Kelbauskas, L; Dietel, W
2002-12-01
Amphiphilic sensitizers self-associate in aqueous environments and form aggregated species that exhibit no or only negligible photodynamic activity. However, amphiphilic photosensitizers number among the most potent agents of photodynamic therapy. The processes by which these sensitizers are internalized into tumor cells have yet to be fully elucidated and thus remain the subject of debate. In this study the uptake of photosensitizer aggregates into tumor cells was examined directly using subcellular time-resolved fluorescence spectroscopy with a high temporal resolution (20-30 ps) and high sensitivity (time-correlated single-photon counting). The investigations were performed on selected sensitizers that exhibit short fluorescence decay times (< 50 ps) in aggregated form. Derivatives of pyropheophorbide-a ether and chlorin e6 with varying lipophilicity were used for the study. The characteristic fluorescence decay times and spectroscopic features of the sensitizer aggregates measured in aqueous solution also could be observed in A431 human endothelial carcinoma cells to which these photosensitizers were administered. This shows that tumor cells can internalize sensitizers in aggregated form. Uptake of aggregates and their monomerization inside cells were demonstrated directly for the first time by means of fluorescence lifetime imaging with a high temporal resolution. Internalization of the aggregates seems to be endocytosis mediated. The degree of their monomerization in tumor cells is strongly influenced by the lipophilicity of the compounds.
Speaking and Listening with the Eyes: Gaze Signaling during Dyadic Interactions.
Ho, Simon; Foulsham, Tom; Kingstone, Alan
2015-01-01
Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest.
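The cross-correlational analysis of gaze and speech amounts to finding the lag that maximizes the correlation between two time series. A minimal sketch with synthetic binary signals; the 3-sample offset is an assumption for illustration, not the study's measured latency:

```python
import numpy as np

# Find the lag at which one binary signal (e.g. "speaking") best aligns with
# another (e.g. "gaze at partner").
def lagged_corr(x, y, lag):
    """Correlation between x[t] and y[t + lag]."""
    if lag >= 0:
        a, b = x[:len(x) - lag], y[lag:]
    else:
        a, b = x[-lag:], y[:len(y) + lag]
    return np.corrcoef(a, b)[0, 1]

def peak_lag(x, y, max_lag=10):
    return max(range(-max_lag, max_lag + 1),
               key=lambda lag: lagged_corr(x, y, lag))

rng = np.random.default_rng(7)
speech = (rng.random(200) < 0.5).astype(float)
gaze = np.roll(speech, 3)        # gaze pattern trails speech by 3 samples
print(peak_lag(speech, gaze))    # 3
```

A positive peak lag indicates the first signal leads the second, which is the kind of temporal asymmetry the study reports between gaze aversion and turn onset.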
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pau, G. S. H.; Bisht, G.; Riley, W. J.
2014-09-17
Existing land surface models (LSMs) describe physical and biological processes that occur over a wide range of spatial and temporal scales. For example, biogeochemical and hydrological processes responsible for carbon (CO2, CH4) exchanges with the atmosphere range from the molecular scale (pore-scale O2 consumption) to tens of kilometers (vegetation distribution, river networks). Additionally, many processes within LSMs are nonlinearly coupled (e.g., methane production and soil moisture dynamics), and therefore simple linear upscaling techniques can result in large prediction error. In this paper we applied a reduced-order modeling (ROM) technique known as the "proper orthogonal decomposition mapping method" that reconstructs temporally resolved fine-resolution solutions based on coarse-resolution solutions. We developed four different methods and applied them to four study sites in a polygonal tundra landscape near Barrow, Alaska. Coupled surface–subsurface isothermal simulations were performed for summer months (June–September) at fine (0.25 m) and coarse (8 m) horizontal resolutions. We used simulation results from three summer seasons (1998–2000) to build ROMs of the 4-D soil moisture field for the study sites individually (single-site) and aggregated (multi-site). The results indicate that the ROM produced a significant computational speedup (> 10³) with very small relative approximation error (< 0.1%) for 2 validation years not used in training the ROM. We also demonstrate that our approach (1) efficiently corrects for coarse-resolution model bias and (2) can be used for polygonal tundra sites not included in the training data set with relatively good accuracy (< 1.7% relative error), thereby allowing for the possibility of applying these ROMs across a much larger landscape. By coupling the ROMs constructed at different scales together hierarchically, this method has the potential to efficiently increase the resolution of land models for coupled climate simulations to spatial scales consistent with mechanistic physical process representation.
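The core of a proper-orthogonal-decomposition ROM can be sketched with an SVD of training snapshots: the leading left singular vectors form a low-rank basis onto which new solutions are projected and cheaply reconstructed. The snapshot field below is synthetic (exactly rank 2 by construction), not the Barrow soil moisture data:

```python
import numpy as np

# POD sketch: SVD of fine-resolution training snapshots gives a low-rank
# spatial basis; projecting onto it reconstructs the field cheaply.
t = np.linspace(0, 1, 50)                     # 50 "time" snapshots
x = np.linspace(0, 1, 200)                    # 200 spatial points
snapshots = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
             + 0.5 * np.outer(np.sin(2 * np.pi * x), np.sin(2 * np.pi * t)))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :2]                              # two POD modes suffice here
recon = basis @ (basis.T @ snapshots)         # project, then reconstruct
rel_err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
print(f"relative reconstruction error: {rel_err:.1e}")
```

Real fields need more modes, but the same economics apply: once the basis is trained, evaluating the ROM costs a few small matrix products instead of a fine-resolution simulation.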
LACIE performance predictor final operational capability program description, volume 3
NASA Technical Reports Server (NTRS)
1976-01-01
The requirements and processing logic for the LACIE Error Model program (LEM) are described. This program is an integral part of the Large Area Crop Inventory Experiment (LACIE) system. LEM is that portion of the LPP (LACIE Performance Predictor) which simulates the sample segment classification, strata yield estimation, and production aggregation. LEM controls repetitive Monte Carlo trials based on input error distributions to obtain statistical estimates of the wheat area, yield, and production at different levels of aggregation. LEM interfaces with the rest of the LPP through a set of data files.
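LEM's repetitive Monte Carlo logic can be caricatured as follows: perturb segment-level area and yield with assumed error distributions, aggregate production across segments on each trial, and read off the spread of the regional estimate. All segment values and error magnitudes here are invented for illustration:

```python
import numpy as np

# Toy Monte Carlo aggregation in the spirit of LEM: repeated trials with
# input error distributions yield a distribution of the production estimate.
def simulate_production(areas, yields, area_cv=0.05, yield_cv=0.10,
                        n_trials=5000, seed=0):
    rng = np.random.default_rng(seed)
    a = areas * (1 + area_cv * rng.standard_normal((n_trials, areas.size)))
    y = yields * (1 + yield_cv * rng.standard_normal((n_trials, yields.size)))
    production = (a * y).sum(axis=1)       # aggregate over sample segments
    return production.mean(), production.std()

areas = np.array([120.0, 80.0, 200.0])     # thousand ha per segment
yields = np.array([2.1, 2.6, 1.9])         # t/ha
mean, sd = simulate_production(areas, yields)
print(f"production estimate: {mean:.0f} ± {sd:.0f} kt")
```

The standard deviation across trials is the statistical estimate of uncertainty at the chosen aggregation level, which is exactly what LEM reports for area, yield, and production.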
Evaluating platelet aggregation dynamics from laser speckle fluctuations.
Hajjarian, Zeinab; Tshikudi, Diane M; Nadkarni, Seemantini K
2017-07-01
Platelets are key to maintaining hemostasis, and impaired platelet aggregation can lead to hemorrhage or thrombosis. We report a new approach that exploits laser speckle intensity fluctuations, emanating from a drop of platelet-rich plasma (PRP), to profile aggregation. The speckle fluctuation rate is quantified by the speckle intensity autocorrelation, g2(t), from which the aggregate size is deduced. We first apply this approach to evaluate polystyrene bead aggregation, triggered by salt. Next, we assess dose-dependent platelet aggregation and inhibition in human PRP spiked with adenosine diphosphate and clopidogrel. Additional spatio-temporal speckle analyses yield 2-dimensional maps of particle displacements to visualize platelet aggregate foci within minutes and quantify aggregation dynamics. These findings demonstrate a unique opportunity for assessing platelet health within minutes for diagnosing bleeding disorders and monitoring anti-platelet therapies.
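A minimal sketch of the speckle intensity autocorrelation, g2(t) = ⟨I(t')I(t'+t)⟩/⟨I⟩², computed from a synthetic single-pixel intensity series. The exponentially correlated log-normal series is a stand-in for measured speckle, whose decay rate tracks scatterer mobility (and hence aggregate size):

```python
import numpy as np

# Intensity autocorrelation g2(lag) from a time series; slower decay means
# slower speckle fluctuations (larger, less mobile scatterers).
def g2(intensity, max_lag):
    I = np.asarray(intensity, dtype=float)
    mean_sq = I.mean() ** 2
    return np.array([np.mean(I[:I.size - lag] * I[lag:]) / mean_sq
                     for lag in range(max_lag)])

rng = np.random.default_rng(3)
n, tau = 20000, 10.0                        # samples, correlation time
rho = np.exp(-1.0 / tau)
x = np.empty(n)
x[0] = rng.standard_normal()
for i in range(1, n):                       # AR(1) with correlation time tau
    x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
I = np.exp(x)                               # positive, speckle-like intensity
curve = g2(I, 50)
```

By construction g2(0) ≥ 1 and the curve decays toward 1 at lags long compared with the correlation time.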
Spatiotemporal integration for tactile localization during arm movements: a probabilistic approach.
Maij, Femke; Wing, Alan M; Medendorp, W Pieter
2013-12-01
It has been shown that people make systematic errors in the localization of a brief tactile stimulus that is delivered to the index finger while they are making an arm movement. Here we modeled these spatial errors with a probabilistic approach, assuming that they follow from temporal uncertainty about the occurrence of the stimulus. In the model, this temporal uncertainty converts into a spatial likelihood about the external stimulus location, depending on arm velocity. We tested the prediction of the model that the localization errors depend on arm velocity. Participants (n = 8) were instructed to localize a tactile stimulus that was presented to their index finger while they were making either slow- or fast-targeted arm movements. Our results confirm the model's prediction that participants make larger localization errors when making faster arm movements. The model, which was used to fit the errors for both slow and fast arm movements simultaneously, accounted very well for all the characteristics of these data with temporal uncertainty in stimulus processing as the only free parameter. We conclude that spatial errors in dynamic tactile perception stem from the temporal precision with which tactile inputs are processed.
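The model's central conversion, temporal uncertainty becoming spatial error in proportion to arm velocity, can be sketched in a few lines. The value of sigma_t and the two velocities are illustrative assumptions, not the fitted parameter:

```python
import numpy as np

# Temporal uncertainty sigma_t about stimulus onset converts into spatial
# localization spread of roughly v * sigma_t for (constant) arm velocity v.
def localization_spread(velocity, sigma_t=0.05, n=20000, seed=0):
    rng = np.random.default_rng(seed)
    time_error = rng.normal(0.0, sigma_t, n)   # s, perceived stimulus time
    spatial_error = velocity * time_error      # m, constant-velocity arm
    return spatial_error.std()

slow = localization_spread(0.2)   # slow arm movement, 0.2 m/s
fast = localization_spread(0.8)   # fast arm movement, 0.8 m/s
print(f"spread: slow {slow * 100:.2f} cm, fast {fast * 100:.2f} cm")
```

This reproduces the paper's qualitative prediction: with a single temporal precision parameter, faster movements necessarily yield proportionally larger spatial localization errors.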
Impact of geophysical model error for recovering temporal gravity field model
NASA Astrophysics Data System (ADS)
Zhou, Hao; Luo, Zhicai; Wu, Yihao; Li, Qiong; Xu, Chuang
2016-07-01
The impact of geophysical model error on recovered temporal gravity field models is assessed in this paper with both real and simulated GRACE observations. With real GRACE observations, we build four temporal gravity field models, i.e., HUST08a, HUST11a, HUST04 and HUST05. HUST08a and HUST11a are derived from different ocean tide models (EOT08a and EOT11a), while HUST04 and HUST05 are derived from different non-tidal models (AOD RL04 and AOD RL05). The statistical results show that the discrepancies in the annual mass variability amplitudes in six river basins between the HUST08a and HUST11a models and between the HUST04 and HUST05 models are all smaller than 1 cm, which demonstrates that geophysical model error only slightly affects current GRACE solutions. The impact of geophysical model error for future missions with more accurate satellite ranging is also assessed by simulation. The simulation results indicate that for the current mission, with a range-rate accuracy of 2.5 × 10⁻⁷ m/s, observation error is the main source of stripe error. However, when the range-rate accuracy improves to 5.0 × 10⁻⁸ m/s in a future mission, geophysical model error will become the main source of stripe error, which will limit the accuracy and spatial resolution of the temporal gravity model. Therefore, observation error should be the primary error source taken into account at the current range-rate accuracy level, while more attention should be paid to improving the accuracy of background geophysical models for future missions.
Healing assessment of tile sets for error tolerance in DNA self-assembly.
Hashempour, M; Mashreghian Arani, Z; Lombardi, F
2008-12-01
An assessment of the effectiveness of healing for error tolerance in DNA self-assembly tile sets for algorithmic/nano-manufacturing applications is presented. Initially, the conditions for correct binding of a tile to an existing aggregate are analysed using a Markovian approach; based on this analysis, it is proved that correct aggregation (as identified with a so-called ideal tile set) is not always achieved by the existing tile sets for nano-manufacturing. A metric for assessing tile sets for healing by utilising punctures is proposed. Tile sets are investigated and assessed with respect to features such as error (mismatched tile) movement, punctured area and bond types. Subsequently, it is shown that the proposed metric can comprehensively assess the healing effectiveness of a puncture type for a tile set and its capability to attain error tolerance for the desired pattern. Extensive simulation results are provided.
Metamodeling Techniques to Aid in the Aggregation Process of Large Hierarchical Simulation Models
2008-08-01
Aggregation and metamodeling of complexity (spatial, temporal, etc.) across mission- and campaign-level models and outputs are examined. Methods for reducing output variance, called variance reduction techniques (VRT) [Law, 2006], are considered; the implementation of some type of VRT can prove to be a very valuable tool…
NASA Astrophysics Data System (ADS)
Tiemann, L. K.; Grandy, S.; Marin-Spiotta, E.; Atkinson, E. E.
2012-12-01
Generally, there are positive relationships between plant species diversity and net primary production and other key ecosystem functions. However, the effects of aboveground diversity on soil microbial communities and ecosystem processes they mediate, such as soil C sequestration, remain unclear. In this study, we used an 11-y cropping diversity study where increases in diversity have increased crop yields. At the experimental site, temporal diversity is altered using combinations of annual crop rotations, while spatial diversity is altered using cover crop species. We used five treatments ranging in diversity from one to five species consisting of continuous corn with no cover crop or one cover crop and corn-soy-wheat rotations with no cover, one cover or two cover crop species. We collected soils from four replicate plots of each treatment and measured the distribution of mega- (>2 mm), macro- (0.25-2 mm), and micro- (0.053-0.25 mm) aggregates. Within each aggregate size class, we also measured total soil C and N, permanganate oxidizable C (POXC), extracellular enzyme activities (EEA), and microbial community structure with phospholipid fatty acid (PLFA) analysis. We use these data to address the impacts of both rotational and cover crop diversity on soil physical structure, associated microbial community structure and activity and soil C storage. As spatial diversity increased, we found concurrent increases in mega-aggregate abundance as well as increasing soil C in the mega- and micro-aggregates but not macro-aggregates. The proportion of total soil C in each aggregate size class that is relatively labile (POXC) was highest in the micro-aggregates, as was enzyme activity associated with labile C acquisition across all levels of diversity. 
Enzyme activity associated with more recalcitrant forms of soil C was highest in the mega-aggregate class, also across all diversity levels; however, the ratio of labile to recalcitrant EEA increased with increasing diversity in the mega- and micro-aggregates. In addition, soil N increased with diversity such that microbial C:N EEA simultaneously decreased in mega-aggregates. We also found that cropping diversity has created distinctive soil microbial communities, highlighted by variation in the abundance of gram positive bacteria and Actinomycetes. Further research will help us determine how these changes in community structure with increasing diversity are related to concomitant changes in aggregation and enzyme activities. We suggest that the additional organic matter inputs from cover crops in the high diversity treatments have increased aggregation processes and C pools. While microbial activity has also increased in association with this increased C availability, the activity of recalcitrant and N-acquiring enzymes has declined, suggesting an overall decrease in SOM mineralization with possible increased SOM stabilization. The addition of crop species in rotation (temporal diversity) had minimal influence on any of the measured parameters. We thus conclude that spatial diversity is a more important driver of soil structure and microbial activity, likely due to the high quality organic matter inputs derived from the leguminous cover crops; however, spatial diversity alone did not lead to the same level of C storage potential as mixtures of temporal and spatial diversity.
Dalsgaard, Lise; Astrup, Rasmus; Antón-Fernández, Clara; Borgen, Signe Kynding; Breidenbach, Johannes; Lange, Holger; Lehtonen, Aleksi; Liski, Jari
2016-01-01
Boreal forests contain 30% of the global forest carbon with the majority residing in soils. While challenging to quantify, soil carbon changes comprise a significant, and potentially increasing, part of the terrestrial carbon cycle. Thus, their estimation is important when designing forest-based climate change mitigation strategies and soil carbon change estimates are required for the reporting of greenhouse gas emissions. Organic matter decomposition varies with climate in complex nonlinear ways, rendering data aggregation nontrivial. Here, we explored the effects of temporal and spatial aggregation of climatic and litter input data on regional estimates of soil organic carbon stocks and changes for upland forests. We used the soil carbon and decomposition model Yasso07 with input from the Norwegian National Forest Inventory (11275 plots, 1960–2012). Estimates were produced at three spatial and three temporal scales. Results showed that a national level average soil carbon stock estimate varied by 10% depending on the applied spatial and temporal scale of aggregation. Higher stocks were found when applying plot-level input compared to country-level input and when long-term climate was used as compared to annual or 5-year mean values. A national level estimate for soil carbon change was similar across spatial scales, but was considerably (60–70%) lower when applying annual or 5-year mean climate compared to long-term mean climate reflecting the recent climatic changes in Norway. This was particularly evident for the forest-dominated districts in the southeastern and central parts of Norway and in the far north. We concluded that the sensitivity of model estimates to spatial aggregation will depend on the region of interest. Further, that using long-term climate averages during periods with strong climatic trends results in large differences in soil carbon estimates. 
The largest differences in this study were observed in central and northern regions with strongly increasing temperatures. PMID:26901763
Evaluating platelet aggregation dynamics from laser speckle fluctuations
Hajjarian, Zeinab; Tshikudi, Diane M.; Nadkarni, Seemantini K.
2017-01-01
Platelets are key to maintaining hemostasis and impaired platelet aggregation could lead to hemorrhage or thrombosis. We report a new approach that exploits laser speckle intensity fluctuations, emanated from a drop of platelet-rich-plasma (PRP), to profile aggregation. Speckle fluctuation rate is quantified by the speckle intensity autocorrelation, g2(t), from which the aggregate size is deduced. We first apply this approach to evaluate polystyrene bead aggregation, triggered by salt. Next, we assess dose-dependent platelet aggregation and inhibition in human PRP spiked with adenosine diphosphate and clopidogrel. Additional spatio-temporal speckle analyses yield 2-dimensional maps of particle displacements to visualize platelet aggregate foci within minutes and quantify aggregation dynamics. These findings demonstrate the unique opportunity for assessing platelet health within minutes for diagnosing bleeding disorders and monitoring anti-platelet therapies. PMID:28717586
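The central quantity here, the speckle intensity autocorrelation g2(t), can be computed from a measured intensity time series as sketched below. The signal is synthetic and merely speckle-like; faster decay of g2(t) would indicate faster intensity fluctuations (smaller, more mobile scatterers), slower decay the growth of aggregates:

```python
import numpy as np

def g2(intensity, max_lag):
    """Normalized intensity autocorrelation g2(tau) = <I(t) I(t+tau)> / <I>^2."""
    I = np.asarray(intensity, float)
    mean_sq = I.mean() ** 2
    return np.array([np.mean(I[:len(I) - lag] * I[lag:]) / mean_sq
                     for lag in range(max_lag)])

# Synthetic, slowly fluctuating positive intensity trace (illustrative only)
rng = np.random.default_rng(1)
noise = rng.standard_normal(5000)
slow = np.convolve(noise, np.ones(50) / 50, mode="same")  # smoothing -> temporal correlation
I = (slow - slow.min() + 0.1) ** 2                        # non-negative "intensity"
curve = g2(I, 200)
# curve[0] exceeds 1 by Var(I)/<I>^2 and decays toward 1 as the lag
# exceeds the correlation time of the fluctuations.
```

A characteristic decorrelation time extracted from such a curve is what allows the aggregate size to be deduced in the speckle approach described above.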
NASA Astrophysics Data System (ADS)
Marra, Francesco; Morin, Efrat
2017-04-01
Forecasting the occurrence of flash floods and debris flows is fundamental to saving lives and protecting infrastructure and property. These natural hazards are generated by high-intensity convective storms, on space-time scales that cannot be properly monitored by conventional instrumentation. Consequently, a number of early-warning systems are nowadays based on remote sensing precipitation observations, e.g. from weather radars or satellites, which have proved effective in a wide range of situations. However, the uncertainty affecting rainfall estimates represents an important issue undermining the operational use of early-warning systems. The uncertainty related to remote sensing estimates results from (a) an instrumental component, intrinsic to the measurement operation, and (b) a discretization component, caused by the discretization of the continuous rainfall process. Improved understanding of these sources of uncertainty will provide crucial information to modelers and decision makers. This study aims at advancing knowledge of the discretization component (b). To do so, we take advantage of an extremely high-resolution X-band weather radar (60 m, 1 min) recently installed in the Eastern Mediterranean. The instrument monitors a semiarid-to-arid transition area also covered by an accurate C-band weather radar and by a relatively sparse rain gauge network (~1 gauge per 450 km²). Radar quantitative precipitation estimation includes corrections reducing the errors due to ground echoes, orographic beam blockage and attenuation of the signal in heavy rain. Intense, convection-rich flooding events that recently occurred in the area serve as study cases. We (i) describe in very high detail the spatiotemporal characteristics of the convective cores, and (ii) quantify the uncertainty due to spatial aggregation (spatial discretization) and temporal sampling (temporal discretization) operated by coarser-resolution remote sensing instruments. We show that instantaneous rain intensity decreases very steeply with distance from the core of convection, with the intensity observed 1 km (2 km) from the core being 10-40% (1-20%) of the core value. The use of coarser temporal resolutions leads to gaps in the observed rainfall, and even relatively high resolutions (5 min) can be affected by this problem. We conclude by providing the final user with indications about the effects of the discretization component of estimation uncertainty and by suggesting viable ways to reduce them.
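The smoothing effect of spatial aggregation on a sharp convective core can be illustrated with a simple block-averaging sketch. The field, grid size, and peak parameters below are synthetic, chosen only to mimic a narrow core observed on a fine grid and re-observed on a ~16-times-coarser one:

```python
import numpy as np

def block_mean(field, k):
    """Aggregate a 2-D field to a k-times-coarser grid by block averaging."""
    h, w = field.shape
    return field[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

# Synthetic convective core: a narrow Gaussian peak on a fine grid (illustrative)
n = 128
y, x = np.mgrid[0:n, 0:n]
rain = 50.0 * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 4.0 ** 2))

coarse = block_mean(rain, 16)  # emulate a much coarser remote sensing product
# Block averaging spreads the core over the cell: the coarse product reports
# a far lower peak intensity than the fine-resolution observation.
print(rain.max(), coarse.max())
```

The same construction applied along the time axis illustrates the temporal sampling component: peak instantaneous intensities are diluted by averaging over the sampling interval.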
Pillay, Sara B.; Humphries, Colin J.; Gross, William L.; Graves, William W.; Book, Diane S.
2016-01-01
Patients with surface dyslexia have disproportionate difficulty pronouncing irregularly spelled words (e.g. pint), suggesting impaired use of lexical-semantic information to mediate phonological retrieval. Patients with this deficit also make characteristic ‘regularization’ errors, in which an irregularly spelled word is mispronounced by incorrect application of regular spelling-sound correspondences (e.g. reading plaid as ‘played’), indicating over-reliance on sublexical grapheme–phoneme correspondences. We examined the neuroanatomical correlates of this specific error type in 45 patients with left hemisphere chronic stroke. Voxel-based lesion–symptom mapping showed a strong positive relationship between the rate of regularization errors and damage to the posterior half of the left middle temporal gyrus. Semantic deficits on tests of single-word comprehension were generally mild, and these deficits were not correlated with the rate of regularization errors. Furthermore, the deep occipital-temporal white matter locus associated with these mild semantic deficits was distinct from the lesion site associated with regularization errors. Thus, in contrast to patients with surface dyslexia and semantic impairment from anterior temporal lobe degeneration, surface errors in our patients were not related to a semantic deficit. We propose that these patients have an inability to link intact semantic representations with phonological representations. The data provide novel evidence for a post-semantic mechanism mediating the production of surface errors, and suggest that the posterior middle temporal gyrus may compute an intermediate representation linking semantics with phonology. PMID:26966139
The Accuracy of Aggregate Student Growth Percentiles as Indicators of Educator Performance
ERIC Educational Resources Information Center
Castellano, Katherine E.; McCaffrey, Daniel F.
2017-01-01
Mean or median student growth percentiles (MGPs) are a popular measure of educator performance, but they lack rigorous evaluation. This study investigates the error in MGP due to test score measurement error (ME). Using analytic derivations, we find that errors in the commonly used MGP are correlated with average prior latent achievement: Teachers…
Speaking and Listening with the Eyes: Gaze Signaling during Dyadic Interactions
Ho, Simon; Foulsham, Tom; Kingstone, Alan
2015-01-01
Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest. PMID:26309216
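The cross-correlational analysis of the gaze and speech signals can be sketched as a lagged correlation search. The signals below are synthetic stand-ins (a smoothed random "speech" trace and a delayed, noisy "gaze" copy); the lag of 15 samples is an arbitrary illustration, not a value from the study:

```python
import numpy as np

def lagged_xcorr(a, b, max_lag):
    """Normalized cross-correlation of a and b at integer lags (b shifted by lag)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = np.arange(-max_lag, max_lag + 1)
    out = []
    for lag in lags:
        if lag < 0:
            out.append(np.mean(a[-lag:] * b[:lag]))
        elif lag > 0:
            out.append(np.mean(a[:-lag] * b[lag:]))
        else:
            out.append(np.mean(a * b))
    return lags, np.array(out)

rng = np.random.default_rng(2)
speech = np.convolve(rng.standard_normal(2000), np.ones(20) / 20, "same")
gaze = np.roll(speech, 15) + 0.1 * rng.standard_normal(2000)  # gaze trails speech
lags, r = lagged_xcorr(speech, gaze, 40)
best_lag = lags[np.argmax(r)]  # should recover a lag near 15 samples
```

The sign and position of the correlation peak is what licenses statements like "speakers end their turn with direct gaze" versus "listeners begin to speak with averted gaze": it shows which signal systematically leads the other, and by how much.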
Armbrecht, Anne-Simone; Wöhrmann, Anne; Gibbons, Henning; Stahl, Jutta
2010-09-01
The present electrophysiological study investigated the temporal development of response conflict and the effects of diverging conflict sources on error(-related) negativity (Ne). Eighteen participants performed a combined stop-signal flanker task, which comprised two different conflict sources: a left-right and a go-stop response conflict. It is assumed that the Ne reflects the activity of a conflict monitoring system and thus increases according to (i) the number of conflict sources and (ii) the temporal development of the conflict activity. No increase of the Ne amplitude was found after double errors (comprising two conflict sources) as compared to hand- and stop-errors (comprising one conflict source), whereas a higher Ne amplitude was observed after a delayed stop-signal onset. The results suggest that the Ne is not sensitive to an increase in the number of conflict sources, but to the temporal dynamics of a go-stop response conflict.
High Aggregate Stability Coefficients Can Be Obtained for Unstable Traits.
ERIC Educational Resources Information Center
Day, H. D.; Marshall, Dave
In the light of research by Epstein (1979) (which reported that error of measurement in the analysis of behavior stability may be reduced by examining the behavior of aggregate stability coefficients computed for measurements with known stability characteristics), this study examines stability coefficients for computer-generated data sets…
NASA Astrophysics Data System (ADS)
Dobronets, Boris S.; Popova, Olga A.
2018-05-01
The paper considers a new approach to regression modeling that uses aggregated data presented in the form of density functions. Approaches to improving the reliability of empirical data aggregation are considered, namely improving accuracy and estimating errors. We discuss data aggregation procedures as a preprocessing stage for subsequent regression modeling. An important feature of the study is a demonstration of how to represent the aggregated data: we propose piecewise polynomial models, including spline aggregate functions. We show that the proposed approach to data aggregation can be interpreted as a frequency distribution, whose properties are studied using the density function concept. Various types of mathematical models of data aggregation are discussed. For the construction of regression models, we propose data representation procedures based on piecewise polynomial models, as well as new approaches to modeling functional dependencies based on spline aggregation.
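A minimal sketch of the aggregation step described above: raw empirical observations are compressed into a normalized histogram, which can then be treated as a piecewise-constant density function (the simplest piecewise polynomial representation). The sample and bin count are arbitrary illustrations:

```python
import numpy as np

# Aggregate raw observations into a density estimate (normalized histogram),
# the preprocessing stage that precedes regression modeling in this approach.
rng = np.random.default_rng(4)
sample = rng.normal(10.0, 2.0, 100000)           # illustrative empirical data
counts, edges = np.histogram(sample, bins=50, density=True)
widths = np.diff(edges)

# The aggregated representation is a valid density: it integrates to 1,
# and each (edge interval, count) pair defines a piecewise-constant segment.
total_mass = (counts * widths).sum()
print(total_mass)
```

Higher-order piecewise polynomial (spline) representations replace the constant segments with smooth pieces while preserving the same integral constraint.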
NASA Astrophysics Data System (ADS)
Kountouris, Panagiotis; Gerbig, Christoph; Rödenbeck, Christian; Karstens, Ute; Koch, Thomas Frank; Heimann, Martin
2018-03-01
Atmospheric inversions are widely used in the optimization of surface carbon fluxes on a regional scale using information from atmospheric CO2 dry mole fractions. In many studies the prior flux uncertainty applied to the inversion scheme does not directly reflect the true flux uncertainties but is used to regularize the inverse problem. Here, we aim to implement an inversion scheme using the Jena inversion system, applying a prior flux error structure derived from a model-data residual analysis at high spatial and temporal resolution over a full year in the European domain. We analyzed the performance of the inversion system with a synthetic experiment, in which the flux constraint is derived following the same residual analysis but applied to the model-model mismatch. The synthetic study showed quite good agreement between posterior and true fluxes on European, country, annual and monthly scales. Posterior monthly country-aggregated fluxes improved their correlation with the known truth by 7% compared to the prior estimates, with a mean correlation of 0.92. The ratio of the standard deviation between the posterior and the reference to that between the prior and the reference was also reduced by 33%, with a mean value of 1.15. We identified the temporal and spatial scales on which the inversion system maximizes the derived information; monthly temporal scales at around 200 km spatial resolution seem to maximize the information gain.
Monitoring interannual variation in global crop yield using long-term AVHRR and MODIS observations
NASA Astrophysics Data System (ADS)
Zhang, Xiaoyang; Zhang, Qingyuan
2016-04-01
Advanced Very High Resolution Radiometer (AVHRR) and Moderate Resolution Imaging Spectroradiometer (MODIS) data have been extensively applied to crop yield prediction because of their daily temporal resolution and global coverage. This study investigated global crop yield using the daily two-band Enhanced Vegetation Index (EVI2) derived from AVHRR (1981-1999) and MODIS (2000-2013) observations at a spatial resolution of 0.05° (∼5 km). Specifically, the EVI2 temporal trajectory of crop growth was simulated using a hybrid piecewise logistic model (HPLM) for individual pixels, which was used to detect crop phenological metrics. The derived crop phenology was then applied to calculate crop greenness, defined as EVI2 amplitude and EVI2 integration during annual crop growing seasons, which was further aggregated over the croplands in each country. The interannual variations in EVI2 amplitude and EVI2 integration were then correlated with the variation in cereal yield from 1982 to 2012 for individual countries using a stepwise regression model. The results show that the confidence level of the established regression models was higher than 90% (P value < 0.1) in most countries in the northern hemisphere, although it was relatively poor in the southern hemisphere (mainly in Africa). The error in the yield prediction was relatively smaller in America, Europe and East Asia than in Africa. In the 10 countries with the largest cereal production across the world, the prediction error was less than 9% during the past three decades. This suggests that crop phenology-controlled greenness from coarse-resolution satellite data has the capability of predicting national crop yield across the world, which could provide timely and reliable crop information for global agricultural trade and policymakers.
Spatial Representativeness of Surface-Measured Variations of Downward Solar Radiation
NASA Astrophysics Data System (ADS)
Schwarz, M.; Folini, D.; Hakuba, M. Z.; Wild, M.
2017-12-01
When using time series of ground-based surface solar radiation (SSR) measurements in combination with gridded data, the spatial and temporal representativeness of the point observations must be considered. We use SSR data from surface observations and high-resolution (0.05°) satellite-derived data to infer the spatiotemporal representativeness of observations for monthly and longer time scales in Europe. The correlation analysis shows that the squared correlation coefficients (R²) between SSR time series decrease linearly with increasing distance between the surface observations. For deseasonalized monthly mean time series, R² ranges from 0.85 for distances up to 25 km between the stations to 0.25 at distances of 500 km. A decorrelation length (i.e., the e-folding distance of R²) on the order of 400 km (with a spread of 100-600 km) was found. R² from correlations between point observations and colocated grid box area means determined from satellite data was found to be 0.80 for a 1° grid. To quantify the error which arises when using a point observation as a surrogate for the area-mean SSR of larger surroundings, we calculated a spatial sampling error (SSE) of 8 (3) W/m² for monthly (annual) time series on a 1° grid. The SSE on a 1° grid, therefore, is of the same magnitude as the measurement uncertainty. The analysis generally reveals that monthly mean (or longer temporally aggregated) point observations of SSR capture the larger-scale variability well. This finding shows that comparing time series of SSR measurements with gridded data is feasible for those time scales.
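The decorrelation length (the e-folding distance of R²) can be estimated from correlation-versus-distance pairs by a log-linear fit. The data points below are synthetic, constructed to mimic the reported exponential decay with a 400 km scale; real station pairs would scatter around such a curve:

```python
import numpy as np

# Hypothetical R^2-vs-distance pairs (km), illustrative only
dist = np.array([25.0, 100.0, 200.0, 300.0, 400.0, 500.0])
r2 = np.exp(-dist / 400.0)  # built with a true e-folding distance of 400 km

# Fit R^2(d) = exp(-d / L): linear regression of log(R^2) on distance
slope = np.polyfit(dist, np.log(r2), 1)[0]
L = -1.0 / slope  # recovered decorrelation length (km)
print(L)
```

With noisy empirical pairs the same fit yields a best-estimate L plus a spread, matching the 100-600 km range quoted in the abstract.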
NASA Astrophysics Data System (ADS)
Malinova, Lidia I.; Simonenko, Georgy V.; Denisova, Tatyana P.; Dovgalevsky, Pavel Y.; Tuchin, Valery V.
2004-05-01
Our study protocol included men with acute myocardial infarction, stable angina pectoris of functional classes II and III, and unstable angina pectoris. Patients with arterial hypertension or disorders of carbohydrate metabolism were excluded from the study. Blood samples, taken under standardized conditions, were stabilized with 3.8% sodium citrate (1:9). The aggregation activity of erythrocytes and platelets under the influence of glucose (in vitro) was studied by means of a computer-aided microphotometer (visual analyzer). Erythrocytes and platelets were treated as a special subsystem of whole blood, and the temporal and functional characteristics of their aggregation were analyzed by constructing fragments of phase patterns. The data obtained indicate an interrelation between the aggregation processes of erythrocytes and platelets as the glucose concentration of the incubation medium increases; the temporal and functional characteristics of these processes may be used for diagnostics and prognosis of destabilized coronary blood flow in acute coronary syndrome.
Temporal trends in symptom experience predict the accuracy of recall PROs
Schneider, Stefan; Broderick, Joan E.; Junghaenel, Doerte U.; Schwartz, Joseph E.; Stone, Arthur A.
2013-01-01
Objective Patient-reported outcome measures with reporting periods of a week or more are often used to evaluate the change of symptoms over time, but the accuracy of recall in the context of change is not well understood. This study examined whether temporal trends in symptoms that occur during the reporting period impact the accuracy of 7-day recall reports. Methods Women with premenstrual symptoms (n = 95) completed daily reports of anger, depression, fatigue, and pain intensity for 4 weeks, as well as 7-day recall reports at the end of each week. Latent class growth analysis was used to categorize recall periods based on the direction and rate of change in the daily reports. Agreement (level differences and correlations) between 7-day recall and aggregated daily scores was compared for recall periods with different temporal trends. Results Recall periods with positive, negative, and flat temporal trends were identified and they varied in accordance with weeks of the menstrual cycle. Replicating previous research, 7-day recall scores were consistently higher than aggregated daily scores, but this level difference was more pronounced for recall periods involving positive and negative trends compared with flat trends. Moreover, correlations between 7-day recall and aggregated daily scores were lower in the presence of positive and negative trends compared with flat trends. These findings were largely consistent for anger, depression, fatigue, and pain intensity. Conclusion Temporal trends in symptoms can influence the accuracy of recall reports and this should be considered in research designs involving change. PMID:23915773
The error structure of the SMAP single and dual channel soil moisture retrievals
USDA-ARS?s Scientific Manuscript database
Knowledge of the temporal error structure for remotely-sensed surface soil moisture retrievals can improve our ability to exploit them for hydrology and climate studies. This study employs a triple collocation type analysis to investigate both the total variance and temporal auto-correlation of erro...
Timescales alter the inferred strength and temporal consistency of intraspecific diet specialization
Novak, Mark; Tinker, M. Tim
2015-01-01
Many populations consist of individuals that differ substantially in their diets. Quantification of the magnitude and temporal consistency of such intraspecific diet variation is needed to understand its importance, but the extent to which different approaches for doing so reflect instantaneous vs. time-aggregated measures of individual diets may bias inferences. We used direct observations of sea otter individuals (Enhydra lutris nereis) to assess how: (1) the timescale of sampling, (2) under-sampling, and (3) the incidence- vs. frequency-based consideration of prey species affect the inferred strength and consistency of intraspecific diet variation. Analyses of feeding observations aggregated over hourly to annual intervals revealed a substantial bias associated with time aggregation that decreases the inferred magnitude of specialization and increases the inferred consistency of individuals’ diets. Time aggregation also made estimates of specialization more sensitive to the consideration of prey frequency, which decreased estimates relative to the use of prey incidence; time aggregation did not affect the extent to which under-sampling contributed to its overestimation. Our analyses demonstrate the importance of studying intraspecific diet variation with an explicit consideration of time and thereby suggest guidelines for future empirical efforts. Failure to consider time will likely produce inconsistent predictions regarding the effects of intraspecific variation on predator–prey interactions.
Oxygen transport and stem cell aggregation in stirred-suspension bioreactor cultures.
Wu, Jincheng; Rostami, Mahboubeh Rahmati; Cadavid Olaya, Diana P; Tzanakakis, Emmanuel S
2014-01-01
Stirred-suspension bioreactors are a promising modality for large-scale culture of 3D aggregates of pluripotent stem cells and their progeny. Yet, cells within these clusters experience limitations in the transfer of factors, particularly O2, which is characterized by low solubility in aqueous media. Stem cells cultured under different O2 levels may exhibit significantly different proliferation, viability and differentiation potential. Here, a transient diffusion-reaction model was built encompassing the size distribution and ultrastructural characteristics of embryonic stem cell (ESC) aggregates. The model was coupled to experimental data from bioreactor and static cultures for extracting the effective diffusivity and kinetics of consumption of O2 within mouse (mESC) and human ESC (hESC) clusters. Under agitation, mESC aggregates exhibited a higher maximum consumption rate than hESC aggregates. Moreover, the reaction-diffusion model was integrated with a population balance equation (PBE) for the temporal distribution of ESC clusters changing due to aggregation and cell proliferation. Hypoxia was found to be negligible for ESC aggregates with radii smaller than 100 µm but became appreciable for aggregates larger than 300 µm. The integrated model not only captured the O2 profile both in the bioreactor bulk and inside ESC aggregates but also led to the calculation of the duration that fractions of cells experience a certain range of O2 concentrations. The approach described in this study can be employed for gaining a deeper understanding of the effects of O2 on the physiology of stem cells organized in 3D structures. Such frameworks can be extended to encompass the spatial and temporal availability of nutrients and differentiation factors and facilitate the design and control of relevant bioprocesses for the production of stem cell therapeutics.
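The size dependence of hypoxia reported above follows directly from a steady-state diffusion-reaction balance in a sphere. For zeroth-order O2 uptake, D (1/r²) d/dr(r² dC/dr) = q has the closed-form profile C(r) = C_s − q (R² − r²) / (6D), so the center deficit grows with the square of the aggregate radius. The parameter values below are illustrative, not the paper's fitted kinetics:

```python
def center_o2(radius_m, c_surface=0.2, D=2.0e-9, q=2.0e-2):
    """O2 concentration at the aggregate center (mol/m^3), clipped at zero.

    c_surface: surface O2 concentration; D: effective diffusivity (m^2/s);
    q: volumetric consumption rate (mol m^-3 s^-1). All values illustrative.
    """
    return max(c_surface - q * radius_m ** 2 / (6.0 * D), 0.0)

small = center_o2(100e-6)  # ~100 um radius: only a mild O2 drop at the center
large = center_o2(300e-6)  # ~300 um radius: pronounced central hypoxia
print(small, large)
```

The quadratic scaling explains why the transition from negligible to appreciable hypoxia happens over a narrow range of aggregate sizes.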
The successively temporal error concealment algorithm using error-adaptive block matching principle
NASA Astrophysics Data System (ADS)
Lee, Yu-Hsuan; Wu, Tsai-Hsing; Chen, Chao-Chyun
2014-09-01
Generally, the temporal error concealment (TEC) adopts the blocks around the corrupted block (CB) as the search pattern to find the best-match block in the previous frame. Once the CB is recovered, it is referred to as the recovered block (RB). Although an RB can serve as the search pattern to find the best-match block of another CB, the RB is not identical to its original block (OB). The error between the RB and its OB limits the performance of TEC. The successively temporal error concealment (STEC) algorithm is proposed to alleviate this error. The STEC procedure consists of tier-1 and tier-2. Tier-1 divides a corrupted macroblock into four corrupted 8 × 8 blocks and generates a recovering order for them. The corrupted 8 × 8 block in first place of the recovering order is recovered in tier-1, and the remaining 8 × 8 CBs are recovered in tier-2 along the recovering order. In tier-2, the error-adaptive block matching principle (EA-BMP) is proposed for using the RB as the search pattern to recover the remaining corrupted 8 × 8 blocks. The proposed STEC outperforms sophisticated TEC algorithms by at least 0.3 dB in average PSNR at a packet error rate of 20%.
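The boundary-matching idea underlying TEC can be sketched generically. The toy function below (the function name and the simple sum-of-absolute-differences cost are assumptions, not the paper's EA-BMP) recovers a corrupted block by matching its 1-pixel surround against candidate positions in the previous frame:

```python
import numpy as np

def conceal_block(prev, cur, y, x, bs=8, search=8):
    """Generic temporal error concealment by boundary matching: find
    the block in the previous frame whose 1-pixel outer border best
    matches the pixels surrounding the corrupted block in the current
    frame, then copy that block in. A simplified stand-in for the
    search-pattern idea in STEC, not the paper's EA-BMP."""
    H, W = cur.shape

    def border(img, by, bx):
        # 1-pixel frame around an bs x bs block starting at (by, bx)
        top = img[by - 1, bx:bx + bs]
        bot = img[by + bs, bx:bx + bs]
        left = img[by:by + bs, bx - 1]
        right = img[by:by + bs, bx + bs]
        return np.concatenate([top, bot, left, right]).astype(float)

    target = border(cur, y, x)          # intact surround of the CB
    best, best_cost = (y, x), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            py, px = y + dy, x + dx
            if py < 1 or px < 1 or py + bs >= H or px + bs >= W:
                continue
            cost = np.abs(border(prev, py, px) - target).sum()
            if cost < best_cost:
                best_cost, best = cost, (py, px)
    py, px = best
    out = cur.copy()
    out[y:y + bs, x:x + bs] = prev[py:py + bs, px:px + bs]
    return out
```

With zero motion between frames, the zero-cost match lies at the corrupted block's own position and the block is recovered exactly.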
Effects of recombinant protein misfolding and aggregation on bacterial membranes.
Ami, D; Natalello, A; Schultz, T; Gatti-Lafranconi, P; Lotti, M; Doglia, S M; de Marco, A
2009-02-01
The expression of recombinant proteins is known to induce a metabolic rearrangement in the host cell. We used aggregation-sensitive model systems to study the effects elicited in Escherichia coli cells by the aggregation of recombinant glutathione-S-transferase and its fusion with the green fluorescent protein that, according to the expression conditions, accumulate intracellularly as soluble protein, or soluble and insoluble aggregates. We show that the folding state of the recombinant protein and the complexity of the intracellular aggregates critically affect the cell response. Specifically, protein misfolding and aggregation induce changes in specific host proteins involved in lipid metabolism and oxidative stress, a reduction in the membrane permeability, as well as a rearrangement of its lipid composition. The temporal evolution of the host cell response and that of the aggregation process pointed out that the misfolded protein and soluble aggregates are responsible for the membrane modifications and the changes in the host protein levels. Interestingly, native recombinant protein and large insoluble aggregates do not seem to activate stress markers and membrane rearrangements.
Temporal correlation coefficient for directed networks.
Büttner, Kathrin; Salau, Jennifer; Krieter, Joachim
2016-01-01
Previous studies dealing with network theory focused mainly on the static aggregation of edges over specific time window lengths. Thus, most of the dynamic information gets lost. To assess the quality of such a static aggregation, the temporal correlation coefficient can be calculated. It measures the overall probability for an edge to persist between two consecutive snapshots. Until now, this measure had been defined only for undirected networks. Therefore, we introduce the adaptation of the temporal correlation coefficient to directed networks. This new methodology enables the distinction between ingoing and outgoing edges. Besides a small example network presenting the single calculation steps, we also calculated the proposed measures for a real pig trade network to emphasize the importance of considering the edge direction. The farm types at the beginning of the pork supply chain showed clearly higher values for the outgoing temporal correlation coefficient compared to the farm types at the end of the pork supply chain. These farm types showed higher values for the ingoing temporal correlation coefficient. The temporal correlation coefficient is a valuable tool to understand the structural dynamics of these systems, as it assesses the consistency of the edge configuration. The adaptation of this measure for directed networks may help to preserve meaningful additional information about the investigated network that might get lost if the edge directions are ignored.
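The measure can be sketched concretely. Below is a minimal implementation of the temporal correlation coefficient for a sequence of binary adjacency matrices, where the directed "outgoing" variant compares each node's out-edges (rows) across consecutive snapshots and the "ingoing" variant runs the same computation on the transposed snapshots; the zero-degree convention (contributing 0) is an assumption and may differ from the paper's treatment:

```python
import numpy as np

def temporal_correlation(snapshots, direction="out"):
    """Temporal correlation coefficient for a list of binary adjacency
    matrices. direction="out" compares outgoing edges (rows);
    direction="in" compares ingoing edges (columns). Returns the
    network-level coefficient and the per-node values. Terms with
    zero degree in either snapshot contribute 0 (one common
    convention)."""
    A = [np.asarray(a, dtype=float) for a in snapshots]
    if direction == "in":
        A = [a.T for a in A]
    T = len(A)
    n = A[0].shape[0]
    C = np.zeros(n)
    for t in range(T - 1):
        num = (A[t] * A[t + 1]).sum(axis=1)                  # persisting edges
        den = np.sqrt(A[t].sum(axis=1) * A[t + 1].sum(axis=1))
        C += np.divide(num, den, out=np.zeros(n), where=den > 0)
    C /= (T - 1)
    return C.mean(), C
```

For two identical snapshots, every node with nonzero out-degree gets an outgoing coefficient of 1, while its ingoing coefficient depends on its in-degree, illustrating why the directed adaptation preserves information an undirected version would mix together.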
Convective Self-Aggregation in Numerical Simulations: A Review
NASA Astrophysics Data System (ADS)
Wing, Allison A.; Emanuel, Kerry; Holloway, Christopher E.; Muller, Caroline
2017-11-01
Organized convection in the tropics occurs across a range of spatial and temporal scales and strongly influences cloud cover and humidity. One mode of organization found is "self-aggregation," in which moist convection spontaneously organizes into one or several isolated clusters despite spatially homogeneous boundary conditions and forcing. Self-aggregation is driven by interactions between clouds, moisture, radiation, surface fluxes, and circulation, and occurs in a wide variety of idealized simulations of radiative-convective equilibrium. Here we provide a review of convective self-aggregation in numerical simulations, including its character, causes, and effects. We describe the evolution of self-aggregation including its time and length scales and the physical mechanisms leading to its triggering and maintenance, and we also discuss possible links to climate and climate change.
Temporal information processing in short- and long-term memory of patients with schizophrenia.
Landgraf, Steffen; Steingen, Joerg; Eppert, Yvonne; Niedermeyer, Ulrich; van der Meer, Elke; Krueger, Frank
2011-01-01
Cognitive deficits of patients with schizophrenia have been largely recognized as core symptoms of the disorder. One neglected factor that contributes to these deficits is the comprehension of time. In the present study, we assessed temporal information processing and manipulation from short- and long-term memory in 34 patients with chronic schizophrenia and 34 matched healthy controls. On the short-term memory temporal-order reconstruction task, an incidental or intentional learning strategy was deployed. Patients showed worse overall performance than healthy controls. The intentional learning strategy led to dissociable performance improvement in both groups. Whereas healthy controls improved on a performance measure (serial organization), patients improved on an error measure (inappropriate semantic clustering) when using the intentional instead of the incidental learning strategy. On the long-term memory script-generation task, routine and non-routine events of everyday activities (e.g., buying groceries) had to be generated in either chronological or inverted temporal order. Patients were slower than controls at generating events in the chronological routine condition only. They also committed more sequencing and boundary errors in the inverted conditions. The number of irrelevant events was higher in patients in the chronological, non-routine condition. These results suggest that patients with schizophrenia imprecisely access temporal information from short- and long-term memory. In short-term memory, processing of temporal information led to a reduction in errors rather than, as was the case in healthy controls, to an improvement in temporal-order recall. When accessing temporal information from long-term memory, patients were slower and committed more sequencing, boundary, and intrusion errors. 
Together, these results suggest that time information can be accessed and processed only imprecisely by patients who provide evidence for impaired time comprehension. This could contribute to symptomatic cognitive deficits and strategic inefficiency in schizophrenia.
Total ozone trend significance from space time variability of daily Dobson data
NASA Technical Reports Server (NTRS)
Wilcox, R. W.
1981-01-01
Estimates are presented of the standard errors of total ozone time and area means, derived from ozone's natural temporal and spatial variability and autocorrelation in middle latitudes as determined from daily Dobson data. Assessing the significance of apparent total ozone trends is equivalent to assessing the standard error of the means. Standard errors of time averages depend on the temporal variability and correlation of the averaged parameter. Trend detectability is discussed, both for the present network and for satellite measurements.
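The dependence of a time average's standard error on autocorrelation can be made explicit. Assuming an AR(1) correlation structure (a simplification; the study works with empirically determined autocorrelations), the variance of the mean of n observations is inflated by the familiar factor 1 + 2·Σ(1 − k/n)·ρ^k:

```python
import math

def se_of_mean_ar1(sigma, n, rho):
    """Standard error of the time mean of n equally spaced observations
    with standard deviation sigma and lag-1 autocorrelation rho,
    assuming AR(1) correlations (rho_k = rho**k):
    Var(mean) = (sigma^2 / n) * [1 + 2*sum_{k=1}^{n-1} (1 - k/n) * rho^k]."""
    factor = 1.0 + 2.0 * sum((1 - k / n) * rho**k for k in range(1, n))
    return sigma * math.sqrt(factor / n)

# Independent data recover sigma/sqrt(n); positive autocorrelation
# inflates the standard error of, e.g., a monthly mean of daily data.
print(se_of_mean_ar1(10.0, 30, 0.0))
print(se_of_mean_ar1(10.0, 30, 0.6))
```

This is why trend significance cannot be judged from the raw scatter of daily values alone: correlated days carry less independent information than their count suggests.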
Little is known about how temporal changes in the physical–chemical properties of C60 aggregates formed in aqueous systems (termed aqu/C60) can impact transport pathways contributing to ecological exposures. In this study three aqu/C60 suspensions of short-term (100 days), interm...
ERIC Educational Resources Information Center
Ludtke, Oliver; Marsh, Herbert W.; Robitzsch, Alexander; Trautwein, Ulrich
2011-01-01
In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data…
Asymmetric reproductive character displacement in male aggregation behaviour
Pfennig, Karin S.; Stewart, Alyssa B.
2011-01-01
Reproductive character displacement—the evolution of traits that minimize reproductive interactions between species—can promote striking divergence in male signals or female mate preferences between populations that do and do not occur with heterospecifics. However, reproductive character displacement can affect other aspects of mating behaviour. Indeed, avoidance of heterospecific interactions might contribute to spatial (or temporal) aggregation of conspecifics. We examined this possibility in two species of hybridizing spadefoot toad (genus Spea). We found that in Spea bombifrons sympatric males were more likely than allopatric males to associate with calling males. Moreover, contrary to allopatric males, sympatric S. bombifrons males preferentially associated with conspecific male calls. By contrast, Spea multiplicata showed no differences between sympatry and allopatry in likelihood to associate with calling males. Further, sympatric and allopatric males did not differ in preference for conspecifics. However, allopatric S. multiplicata were more variable than sympatric males in their responses. Thus, in S. multiplicata, character displacement may have refined pre-existing aggregation behaviour. Our results suggest that heterospecific interactions can foster aggregative behaviour that might ultimately contribute to clustering of conspecifics. Such clustering can generate spatial or temporal segregation of reproductive activities among species and ultimately promote reproductive isolation. PMID:21177683
How and why do toxic conformers of aberrant proteins accumulate during ageing?
Josefson, Rebecca; Andersson, Rebecca; Nyström, Thomas
2017-07-15
Ageing can be defined as a gradual decline in cellular and physical functions accompanied by an increased sensitivity to the environment and risk of death. The increased risk of mortality is causally connected to a gradual, intracellular accumulation of so-called ageing factors, of which damaged and aggregated proteins are believed to be one. Such aggregated proteins also contribute to several age-related neurodegenerative disorders e.g. Alzheimer's, Parkinson's, and Huntington's diseases, highlighting the importance of protein quality control (PQC) in ageing and its associated diseases. PQC consists of two interrelated systems: the temporal control system aimed at refolding, repairing, and/or removing aberrant proteins and their aggregates and the spatial control system aimed at harnessing the potential toxicity of aberrant proteins by sequestering them at specific cellular locations. The accumulation of toxic conformers of aberrant proteins during ageing is often declared to be a consequence of an incapacitated temporal PQC system-i.e. a gradual decline in the activity of chaperones and proteases. Here, we review the current knowledge on PQC in relation to ageing and highlight that the breakdown of both temporal and spatial PQC may contribute to ageing and thus comprise potential targets for therapeutic interventions of the ageing process. © 2017 The Author(s). Published by Portland Press Limited on behalf of the Biochemical Society.
Multiscale analysis of river networks using the R package linbin
Welty, Ethan Z.; Torgersen, Christian E.; Brenkman, Samuel J.; Duda, Jeffrey J.; Armstrong, Jonathan B.
2015-01-01
Analytical tools are needed in riverine science and management to bridge the gap between GIS and statistical packages that were not designed for the directional and dendritic structure of streams. We introduce linbin, an R package developed for the analysis of riverscapes at multiple scales. With this software, riverine data on aquatic habitat and species distribution can be scaled and plotted automatically with respect to their position in the stream network or—in the case of temporal data—their position in time. The linbin package aggregates data into bins of different sizes as specified by the user. We provide case studies illustrating the use of the software for (1) exploring patterns at different scales by aggregating variables at a range of bin sizes, (2) comparing repeat observations by aggregating surveys into bins of common coverage, and (3) tailoring analysis to data with custom bin designs. Furthermore, we demonstrate the utility of linbin for summarizing patterns throughout an entire stream network, and we analyze the diel and seasonal movements of tagged fish past a stationary receiver to illustrate how linbin can be used with temporal data. In short, linbin enables more rapid analysis of complex data sets by fisheries managers and stream ecologists and can reveal underlying spatial and temporal patterns of fish distribution and habitat throughout a riverscape.
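linbin itself is an R package; as a language-neutral illustration of its core idea, here is a minimal Python sketch of aggregating point observations along a 1-D river axis into bins of a user-chosen size (the function name and toy data are hypothetical, and the real package additionally handles networks, events with extent, and custom bin designs):

```python
import math

def aggregate_into_bins(positions, values, bin_size, total_length):
    """Aggregate point observations along a 1-D axis into contiguous
    bins of a chosen size, returning (bin_start, mean) pairs; empty
    bins yield None. A minimal sketch of the binning idea behind
    linbin."""
    nbins = math.ceil(total_length / bin_size)
    sums = [0.0] * nbins
    counts = [0] * nbins
    for p, v in zip(positions, values):
        i = min(int(p // bin_size), nbins - 1)
        sums[i] += v
        counts[i] += 1
    return [(i * bin_size, sums[i] / counts[i] if counts[i] else None)
            for i in range(nbins)]

# The same data summarized at two bin sizes reveals patterns at two scales.
pos = [5, 15, 25, 35, 45, 55]
val = [1, 3, 5, 7, 9, 11]
print(aggregate_into_bins(pos, val, 20, 60))
print(aggregate_into_bins(pos, val, 30, 60))
```

Sweeping the bin size, as in the package's first case study, amounts to calling such a function over a range of `bin_size` values and comparing the resulting profiles.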
Annotating spatio-temporal datasets for meaningful analysis in the Web
NASA Astrophysics Data System (ADS)
Stasch, Christoph; Pebesma, Edzer; Scheider, Simon
2014-05-01
More and more environmental datasets that vary in space and time are available in the Web. This comes along with an advantage of using the data for other purposes than originally foreseen, but also with the danger that users may apply inappropriate analysis procedures due to lack of important assumptions made during the data collection process. In order to guide towards a meaningful (statistical) analysis of spatio-temporal datasets available in the Web, we have developed a Higher-Order-Logic formalism that captures some relevant assumptions in our previous work [1]. It allows to proof on meaningful spatial prediction and aggregation in a semi-automated fashion. In this poster presentation, we will present a concept for annotating spatio-temporal datasets available in the Web with concepts defined in our formalism. Therefore, we have defined a subset of the formalism as a Web Ontology Language (OWL) pattern. It allows capturing the distinction between the different spatio-temporal variable types, i.e. point patterns, fields, lattices and trajectories, that in turn determine whether a particular dataset can be interpolated or aggregated in a meaningful way using a certain procedure. The actual annotations that link spatio-temporal datasets with the concepts in the ontology pattern are provided as Linked Data. In order to allow data producers to add the annotations to their datasets, we have implemented a Web portal that uses a triple store at the backend to store the annotations and to make them available in the Linked Data cloud. Furthermore, we have implemented functions in the statistical environment R to retrieve the RDF annotations and, based on these annotations, to support a stronger typing of spatio-temporal datatypes guiding towards a meaningful analysis in R. [1] Stasch, C., Scheider, S., Pebesma, E., Kuhn, W. (2014): "Meaningful spatial prediction and aggregation", Environmental Modelling & Software, 51, 149-165.
Emmert, Susan Y.; Tindall, Kelly; Ding, Hongjian; Boetel, Mark A.; Rajabaskar, D.; Eigenbrode, Sanford D.
2017-01-01
Male-biased aggregations of sugar beet root maggot, Tetanops myopaeformis (Röder) (Diptera: Ulidiidae), flies were observed on utility poles near sugar beet (Beta vulgaris L. [Chenopodiaceae]) fields in southern Idaho; this contrasts with the approximately equal sex ratio typically observed within fields. Peak observation of mating pairs coincided with peak diurnal abundance of flies. Volatiles released by individual male and female flies were sampled from 08:00 to 24:00 hours in the laboratory using solid-phase microextraction and analyzed using gas chromatography/mass spectrometry (GC/MS). Eleven compounds were uniquely detected from males. Three of these compounds (2-undecanol, 2-decanol, and sec-nonyl acetate) were detected in greater quantities during 12:00–24:00 hours than during 08:00–12:00 hours. The remaining eight compounds uniquely detected from males did not exhibit temporal trends in release. Both sexes produced 2-nonanol, but males produced substantially higher (ca. 80-fold) concentrations of this compound than females, again peaking after 12:00 hours. The temporal synchrony among male aggregation behavior, peak mating rates, and release of certain volatile compounds by males suggest that T. myopaeformis flies exhibit lekking behavior and produce an associated pheromone. Field assays using synthetic blends of the putative aggregation pheromone showed evidence of attraction in both females and males. PMID:28423428
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pollesch, N.; Dale, V. H.
In order to aid the transition towards operations that promote sustainability goals, researchers and stakeholders use sustainability assessments. Although assessments take various forms, many utilize diverse sets of indicators that can number anywhere from two to over 2000. Indices, composite indicators, or aggregate values are used to simplify high-dimensional and complex data sets and to clarify assessment results. Although the choice of aggregation function is a key component in the development of an assessment, there are few examples in the literature to guide appropriate aggregation function selection. This paper develops a connection between the mathematical study of aggregation functions and sustainability assessment in order to help provide criteria for aggregation function selection. Relevant mathematical properties of aggregation functions are presented and interpreted. Lastly, we provide cases of these properties and their relation to previous sustainability assessment research. Examples show that mathematical aggregation properties can be used to address the topics of compensatory behavior and weak versus strong sustainability, aggregation of data under varying units of measurement, multiple-site multiple-indicator aggregation, and the determination of error bounds in aggregate output for normalized and non-normalized indicator measures.
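The point about compensatory behavior and weak versus strong sustainability can be made concrete with three common aggregation functions; the indicator profiles below are hypothetical normalized scores:

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)                   # fully compensatory

def geometric_mean(xs):
    return math.prod(xs) ** (1 / len(xs))      # penalizes low indicators

def minimum(xs):
    return min(xs)                             # non-compensatory

# Hypothetical normalized indicator profiles: one balanced,
# one with a single collapsed indicator.
balanced = [0.6, 0.6, 0.6]
collapsed = [0.9, 0.9, 0.0]

for f in (arithmetic_mean, geometric_mean, minimum):
    print(f.__name__, f(balanced), f(collapsed))
```

The arithmetic mean rates both profiles at about 0.6, letting strong indicators offset a failed one (a weak-sustainability stance), while the geometric mean and the minimum drive the collapsed profile's aggregate to zero (a strong-sustainability stance). This is exactly the kind of behavioral difference the mathematical properties formalize.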
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marks, Shawn M.; Lockhart, Samuel N.; Baker, Suzanne L.
Normal aging is associated with a decline in episodic memory and also with aggregation of the β-amyloid (Aβ) and tau proteins and atrophy of medial temporal lobe (MTL) structures crucial to memory formation. Although some evidence suggests that Aβ is associated with aberrant neural activity, the relationships among these two aggregated proteins, neural function, and brain structure are poorly understood. Using in vivo human Aβ and tau imaging, we demonstrate that increased Aβ and tau are both associated with aberrant fMRI activity in the MTL during memory encoding in cognitively normal older adults. This pathological neural activity was in turn associated with worse memory performance and atrophy within the MTL. A mediation analysis revealed that the relationship with regional atrophy was explained by MTL tau. These findings broaden the concept of cognitive aging to include evidence of Alzheimer's disease-related protein aggregation as an underlying mechanism of age-related memory impairment.
Biological framework for soil aggregation: Implications for ecological functions.
NASA Astrophysics Data System (ADS)
Ghezzehei, Teamrat; Or, Dani
2016-04-01
Soil aggregation is heuristically understood as agglomeration of primary particles bound together by biotic and abiotic cementing agents. The organization of aggregates is believed to be hierarchical in nature, whereby primary particles bond together to form secondary particles that subsequently merge to form larger aggregates. Soil aggregates are not permanent structures; they continuously change in response to internal and external forces and other drivers, including moisture, capillary pressure, temperature, biological activity, and human disturbances. Soil aggregation processes and the resulting functionality span multiple spatial and temporal scales. The intertwined biological and physical nature of soil aggregation, and the time scales involved, have precluded a universally applicable and quantifiable framework for characterizing the nature and function of soil aggregation. We introduce a biophysical framework of soil aggregation that considers the various modes and factors of the genesis, maturation, and degradation of soil aggregates, including wetting/drying cycles, soil mechanical processes, biological activity, and the nature of primary soil particles. The framework attempts to disentangle mechanical (compaction and soil fragmentation) from in-situ biophysical aggregation and provides a consistent description of aggregate size, hierarchical organization, and lifetime. It also enables quantitative description of biotic and abiotic functions of soil aggregates, including diffusion and storage of mass and energy as well as the role of aggregates as hot spots of nutrient accumulation, biodiversity, and biogeochemical cycles.
NASA Astrophysics Data System (ADS)
Rhodes, K. L.; Nemeth, R. S.; Kadison, E.; Joseph, E.
2014-09-01
Long-term and short-term underwater visual censuses using SCUBA, technical Nitrox, and closed circuit rebreathers (CCR) were carried out in Pohnpei, Micronesia, to define spatial and temporal dynamics within a semi-protected multi-species epinephelid (fish) spawning aggregation (FSA) of brown-marbled grouper, Epinephelus fuscoguttatus, camouflage grouper, Epinephelus polyphekadion, and squaretail coralgrouper, Plectropomus areolatus. Results identified species-specific patterns of habitat use, abundance, residency, and dispersal of FSAs. Fish spawning aggregations formed and dispersed monthly within a 21-160-d period after winter solstice within adjacent yet distinct outer reef habitats. The reproductive season coincided with periods of seasonally low sub-surface seawater temperature. Peaks in density varied among species both within the calendar year and relative to the winter solstice. Significant long-term declines in FSA density were observed for all three species, suggesting population-level fishery-induced impacts, similar to those previously reported for E. polyphekadion. Differences in density estimates were also observed between dive gear, with a threefold difference in densities measured by CCR for E. polyphekadion versus SCUBA that suggest a disturbance effect from exhaled SCUBA bubbles for this species. CCR also allowed surveys to be conducted over a larger area in a single dive, thereby improving the potential to gauge actual abundance and density within FSAs. Based on these findings, a combination of long-term and intensive short-term monitoring strategies is recommended to fully characterize trends in seasonal abundance and habitat use for aggregating species at single or multi-species FSA sites. 
Inherent variations in the timing and distribution of species within FSAs make fine-scale temporal management protocols less effective than blanket protective coverage of these species both at FSA sites (e.g., marine protected areas covering FSAs and adjacent migratory corridors) and away from them (e.g., temporal sales and catch restrictions).
Gravity field recovery in the framework of a Geodesy and Time Reference in Space (GETRIS)
NASA Astrophysics Data System (ADS)
Hauk, Markus; Schlicht, Anja; Pail, Roland; Murböck, Michael
2017-04-01
The study "Geodesy and Time Reference in Space" (GETRIS), funded by the European Space Agency (ESA), evaluates the potential and opportunities that come with a global space-borne infrastructure for data transfer, clock synchronization, and ranging. Gravity field recovery could be one of the first applications to benefit from such an infrastructure. This paper analyzes and evaluates two-way high-low satellite-to-satellite tracking as a novel method and long-term perspective for determining the Earth's gravitational field, using it as a synergy of one-way high-low combined with low-low satellite-to-satellite tracking in order to generate adequate de-aliasing products. Although first planned as a constellation of geostationary satellites, it turned out that integrating European Union Global Navigation Satellite System (Galileo) satellites (equipped with inter-Galileo links) into a Geostationary Earth Orbit (GEO) constellation would remarkably extend the capability of such a mission constellation. We report on simulations of different Galileo and Low Earth Orbiter (LEO) satellite constellations, computed using time-variable geophysical background models, to determine temporal changes in the Earth's gravitational field. Our work aims at an error analysis of this new satellite/instrument scenario by investigating the impact of different error sources. Compared to a low-low satellite-to-satellite tracking mission, results show reduced temporal aliasing errors due to a more isotropic error behavior caused by an improved observation geometry, predominantly in the near-radial direction of the inter-satellite links, as well as the potential for improved gravity recovery with higher spatial and temporal resolution. The major error contributors in temporal gravity retrieval are aliasing errors due to undersampling of high-frequency signals (mainly atmosphere, ocean, and ocean tides). In this context, we investigate adequate methods to reduce these errors.
We vary the number of Galileo and LEO satellites and show reduced errors in the temporal gravity field solutions for these enhanced inter-satellite links. Based on the GETRIS infrastructure, the multiplicity of satellites enables co-estimating short-period, long-wavelength gravity field signals, indicating that this is a powerful method for non-tidal aliasing reduction.
Reliability Estimation for Aggregated Data: Applications for Organizational Research.
ERIC Educational Resources Information Center
Hart, Roland J.; Bradshaw, Stephen C.
This report provides the statistical tools necessary to measure the extent of error that exists in organizational record data and group survey data. It is felt that traditional methods of measuring error are inappropriate or incomplete when applied to organizational groups, especially in studies of organizational change when the same variables are…
Effects of Correlated Errors on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, Andres; Jacobs, C. S.
2011-01-01
As thermal errors are reduced, instrumental and troposphere-correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.
A Psychological Model for Aggregating Judgments of Magnitude
NASA Astrophysics Data System (ADS)
Merkle, Edgar C.; Steyvers, Mark
In this paper, we develop and illustrate a psychologically-motivated model for aggregating judgments of magnitude across experts. The model assumes that experts' judgments are perturbed from the truth by both systematic biases and random error, and it provides aggregated estimates that are implicitly based on the application of nonlinear weights to individual judgments. The model is also easily extended to situations where experts report multiple quantile judgments. We apply the model to expert judgments concerning flange leaks in a chemical plant, illustrating its use and comparing it to baseline measures.
NASA Astrophysics Data System (ADS)
Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong
2017-11-01
Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the cost-benefit controversy. An alternative is to couple parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from deficiencies in the coupling methods, as well as from inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme, chosen for its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at the parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation. The efficiency of the coupled model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by an adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.
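The downward delivery step of such a one-way coupling can be sketched in one dimension. The function below (names and the toy parent solution are hypothetical; the study's models are of course multi-dimensional) interpolates the parent heads first in space and then in time onto a child boundary node:

```python
import bisect

def interp1(xs, ys, x):
    """Piecewise-linear interpolation with end-point clamping."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, x) - 1
    w = (x - xs[i]) / (xs[i + 1] - xs[i])
    return (1 - w) * ys[i] + w * ys[i + 1]

def child_boundary_head(parent_times, x_parent, parent_heads, t, x):
    """Deliver the parent head solution to a child boundary node at
    (t, x): linear interpolation in space, then in time. A 1-D sketch
    of the downward coupling step of a one-way scheme."""
    heads_at_levels = [interp1(x_parent, h, x) for h in parent_heads]
    return interp1(parent_times, heads_at_levels, t)

# Hypothetical parent solution on a coarse grid (2 time levels, 3 nodes):
times = [0.0, 10.0]
xs = [0.0, 100.0, 200.0]
heads = [[5.0, 4.0, 3.0],   # heads at t = 0
         [6.0, 5.0, 4.0]]   # heads at t = 10
print(child_boundary_head(times, xs, heads, 5.0, 50.0))  # -> 5.0
```

Refining the parent grid or time step, or the adaptive local time-stepping the abstract mentions, reduces the error this interpolation introduces at the child boundary.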
Che-Castaldo, Christian; Jenouvrier, Stephanie; Youngflesh, Casey; Shoemaker, Kevin T; Humphries, Grant; McDowall, Philip; Landrum, Laura; Holland, Marika M; Li, Yun; Ji, Rubao; Lynch, Heather J
2017-10-10
Colonially-breeding seabirds have long served as indicator species for the health of the oceans on which they depend. Abundance and breeding data are repeatedly collected at fixed study sites in the hopes that changes in abundance and productivity may be useful for adaptive management of marine resources, but their suitability for this purpose is often unknown. To address this, we fit a Bayesian population dynamics model that includes process and observation error to all known Adélie penguin abundance data (1982-2015) in the Antarctic, covering >95% of their population globally. We find that process error exceeds observation error in this system, and that continent-wide "year effects" strongly influence population growth rates. Our findings have important implications for the use of Adélie penguins in Southern Ocean feedback management, and suggest that aggregating abundance across space provides the fastest reliable signal of true population change for species whose dynamics are driven by stochastic processes.
Adélie penguins are a key Antarctic indicator species, but data patchiness has challenged efforts to link population dynamics to key drivers. Che-Castaldo et al. resolve this issue using a pan-Antarctic Bayesian model to infer missing data, and show that spatial aggregation leads to more robust inference regarding dynamics.
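The process-versus-observation-error distinction can be illustrated with a toy state-space simulation (everything here, including the parameter values, is a hypothetical illustration, not the paper's Bayesian model): shared "year effects" are process variation common to all sites, so spatial aggregation averages away observation error but not the shared signal.

```python
import random
import statistics

def simulate_sites(n_sites, n_years, proc_sd, obs_sd, year_sd, seed=0):
    """Toy state-space simulation: each site's log-abundance follows a
    random walk driven by a shared 'year effect' plus site-level
    process noise; surveys add observation error on top. Returns the
    RMS observation error for one site and for the spatial aggregate."""
    rng = random.Random(seed)
    level = [rng.gauss(5.0, 0.5) for _ in range(n_sites)]
    truth = [[0.0] * n_years for _ in range(n_sites)]
    obs = [[0.0] * n_years for _ in range(n_sites)]
    for t in range(n_years):
        year = rng.gauss(0.0, year_sd)            # continent-wide shock
        for s in range(n_sites):
            level[s] += year + rng.gauss(0.0, proc_sd)
            truth[s][t] = level[s]
            obs[s][t] = level[s] + rng.gauss(0.0, obs_sd)
    agg_truth = [statistics.mean(col) for col in zip(*truth)]
    agg_obs = [statistics.mean(col) for col in zip(*obs)]
    rmse_site = statistics.mean(
        (obs[0][t] - truth[0][t]) ** 2 for t in range(n_years)) ** 0.5
    rmse_agg = statistics.mean(
        (agg_obs[t] - agg_truth[t]) ** 2 for t in range(n_years)) ** 0.5
    return rmse_site, rmse_agg

# Observation error shrinks roughly with sqrt(n_sites) in the aggregate.
rmse_site, rmse_agg = simulate_sites(50, 30, 0.05, 0.3, 0.1)
print(round(rmse_site, 3), round(rmse_agg, 3))
```

This is the intuition behind the claim that spatially aggregated abundance gives the fastest reliable signal of true population change.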
Counteracting estimation bias and social influence to improve the wisdom of crowds.
Kao, Albert B; Berdahl, Andrew M; Hartnett, Andrew T; Lutz, Matthew J; Bak-Coleman, Joseph B; Ioannou, Christos C; Giam, Xingli; Couzin, Iain D
2018-04-01
Aggregating multiple non-expert opinions into a collective estimate can improve accuracy across many contexts. However, two sources of error can diminish collective wisdom: individual estimation biases and information sharing between individuals. Here, we measure individual biases and social influence rules in multiple experiments involving hundreds of individuals performing a classic numerosity estimation task. We first investigate how existing aggregation methods, such as calculating the arithmetic mean or the median, are influenced by these sources of error. We show that the mean tends to overestimate, and the median underestimate, the true value for a wide range of numerosities. Quantifying estimation bias, and mapping individual bias to collective bias, allows us to develop and validate three new aggregation measures that effectively counter sources of collective estimation error. In addition, we present results from a further experiment that quantifies the social influence rules that individuals employ when combining personal estimates with social information. We show that the corrected mean is remarkably robust to social influence, retaining high accuracy in the presence or absence of social influence, across numerosities and across different methods for averaging social information. Using knowledge of estimation biases and social influence rules may therefore be an inexpensive and general strategy to improve the wisdom of crowds. © 2018 The Author(s).
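A minimal sketch of the bias-corrected aggregation idea: the `bias_factor` below is a hypothetical calibration constant (as if measured on held-out trials), not one of the paper's fitted corrections.

```python
import statistics

def aggregate(estimates, bias_factor=1.25):
    """Compare simple aggregators with a bias-corrected mean.

    bias_factor is a hypothetical calibration constant standing in for the
    systematic overestimation measured on separate numerosity trials.
    """
    mean = statistics.mean(estimates)
    median = statistics.median(estimates)
    corrected = mean / bias_factor  # divide out the known overestimation
    return mean, median, corrected

guesses = [90, 110, 140, 180, 230]  # individual guesses for ~120 objects
mean, median, corrected = aggregate(guesses)
print(mean, median, corrected)  # 150 140 120.0
```

The point of the correction is that once the mapping from individual to collective bias is known, the aggregate can be adjusted without any knowledge of the true value on a given trial.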
Effect of spatial averaging on multifractal properties of meteorological time series
NASA Astrophysics Data System (ADS)
Hoffmann, Holger; Baranowski, Piotr; Krzyszczak, Jaromir; Zubik, Monika
2016-04-01
Introduction: Process-based models for large-scale simulations require input of agro-meteorological quantities that are often in the form of time series of coarse spatial resolution. Therefore, knowledge about their scaling properties is fundamental for transferring locally measured fluctuations to larger scales and vice versa. However, the scaling analysis of these quantities is complicated by the presence of localized trends and non-stationarities. Here we assess how spatially aggregating meteorological data to coarser resolutions affects the data's temporal scaling properties. While it is known that spatial aggregation may affect spatial data properties (Hoffmann et al., 2015), it is unknown how it affects temporal data properties. Therefore, the objective of this study was to characterize the aggregation effect (AE) with regard to both temporal and spatial input data properties, considering scaling properties (i.e. statistical self-similarity) of the chosen agro-meteorological time series through multifractal detrended fluctuation analysis (MFDFA).
Materials and Methods: Time series from the years 1982-2011 were spatially averaged from 1 km to 10, 25, 50 and 100 km resolution to assess the impact of spatial aggregation. Daily minimum, mean and maximum air temperature (2 m), precipitation, global radiation, wind speed and relative humidity (Zhao et al., 2015) were used. To reveal the multifractal structure of the time series, we used the procedure described in Baranowski et al. (2015). The diversity of the studied multifractals was evaluated by the parameters of the time series spectra. In order to analyse differences in multifractal properties relative to the 1 km resolution grids, data at coarser resolutions were disaggregated to 1 km.
Results and Conclusions: Analysing the effect of spatial averaging on multifractal properties, we observed that the spatial patterns of the multifractal spectrum (MS) of all meteorological variables differed from the 1 km grids, and MS parameters were biased by -29.1% (precipitation; width of MS) up to >4% (min. temperature, radiation; asymmetry of MS). The spatial variability of MS parameters was also strongly affected at the highest aggregation (100 km). The results confirm that spatial data aggregation may strongly affect temporal scaling properties, which should be taken into account when upscaling for large-scale studies.
Acknowledgements: The study was conducted within FACCE MACSUR. Please see Baranowski et al. (2015) for details on funding.
References: Baranowski, P., Krzyszczak, J., Sławiński, C. et al. (2015). Climate Research 65, 39-52. Hoffmann, H., Zhao, G., Van Bussel, L.G.J. et al. (2015). Climate Research 65, 53-69. Zhao, G., Siebert, S., Rezaei, E. et al. (2015). Agricultural and Forest Meteorology 200, 156-171.
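The basic spatial-aggregation operation the study starts from — block-averaging a fine grid to a coarser resolution — can be sketched as follows; the 4x4 grid is illustrative, and the MFDFA analysis itself is not reproduced here.

```python
import statistics

def block_average(grid, factor):
    """Aggregate a square 2-D grid to coarser resolution by block means."""
    n = len(grid)
    coarse = []
    for i in range(0, n, factor):
        row = []
        for j in range(0, n, factor):
            block = [grid[a][b] for a in range(i, i + factor)
                                for b in range(j, j + factor)]
            row.append(statistics.mean(block))
        coarse.append(row)
    return coarse

fine = [[1, 5, 2, 6],
        [3, 7, 4, 8],
        [2, 6, 1, 5],
        [4, 8, 3, 7]]
coarse = block_average(fine, 2)
# spatial variability of the aggregated field shrinks relative to the fine field
flat_fine = [v for row in fine for v in row]
flat_coarse = [v for row in coarse for v in row]
print(statistics.pstdev(flat_coarse) < statistics.pstdev(flat_fine))  # True
```

This loss of variability under averaging is exactly why the aggregated series can carry different temporal scaling properties than the 1 km series.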
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koeylue, U.O.
1997-05-01
An in situ particulate diagnostic/analysis technique is outlined based on the Rayleigh-Debye-Gans polydisperse fractal aggregate (RDG/PFA) scattering interpretation of absolute angular light scattering and extinction measurements. Using the proper particle refractive index, the proposed data analysis method can quantitatively yield all aggregate parameters (particle volume fraction, f_v, fractal dimension, D_f, primary particle diameter, d_p, particle number density, n_p, and aggregate size distribution, pdf(N)) without any prior knowledge about the particle-laden environment. The present optical diagnostic/interpretation technique was applied to two different soot-containing laminar and turbulent ethylene/air nonpremixed flames in order to assess its reliability. The aggregate interpretation of the optical measurements yielded D_f, d_p, and pdf(N) in excellent agreement with ex situ thermophoretic sampling/transmission electron microscope (TS/TEM) observations within experimental uncertainties. However, volume-equivalent single particle models (Rayleigh/Mie) overestimated d_p by about a factor of 3, causing an order of magnitude underestimation in n_p. Consequently, soot surface areas and growth rates were in error by a factor of 3, emphasizing that aggregation effects need to be taken into account when using optical diagnostics for a reliable understanding of soot formation/evolution mechanisms in flames. The results also indicated that total soot emissivities were generally underestimated by Rayleigh analysis (up to 50%), mainly due to uncertainties in soot refractive indices at infrared wavelengths. This suggests that aggregate considerations may not be essential for reasonable radiative heat transfer predictions from luminous flames because of fortuitous error cancellation, resulting in a typical net effect of 10 to 30%.
Kovalchik, Stephanie A; Cumberland, William G
2012-05-01
Subgroup analyses are important to medical research because they shed light on the heterogeneity of treatment effects. A treatment-covariate interaction in an individual patient data (IPD) meta-analysis is the most reliable means to estimate how a subgroup factor modifies a treatment's effectiveness. However, owing to the challenges in collecting participant data, an approach based on aggregate data might be the only option. In these circumstances, it would be useful to assess the relative efficiency and power loss of a subgroup analysis without patient-level data. We present methods that use aggregate data to estimate the standard error of an IPD meta-analysis' treatment-covariate interaction for regression models of a continuous or dichotomous patient outcome. Numerical studies indicate that the estimators have good accuracy. An application to a previously published meta-regression illustrates the practical utility of the methodology. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Chen, Shi; Ilany, Amiyaal; White, Brad J; Sanderson, Michael W; Lanzas, Cristina
2015-01-01
Animal social networks are key to understanding many ecological and epidemiological processes. We used a real-time location system (RTLS) to accurately track cattle positions, analyzed their proximity networks, and tested the hypotheses of temporal stationarity and spatial homogeneity in these networks during different daily time periods and in different areas of the pen. The network structure was analyzed at the hourly level using global network characteristics (network density), subgroup clustering (modularity), triadic properties (transitivity), and dyadic interactions (correlation coefficient from a quadratic assignment procedure). We demonstrated substantial spatial-temporal heterogeneity in these networks and a potential link between indirect animal-environment contact and direct animal-animal contact. However, such heterogeneity diminished when data were collected at lower spatial (aggregated at the entire-pen level) or temporal (aggregated at the daily level) resolution. The network structure (described by characteristics such as density, modularity, and transitivity) also changed substantially across times and locations. At certain times (feeding) and locations (hay), the proximity network structures were more consistent based on the dyadic interaction analysis. These results reveal new insights into animal network structure and spatial-temporal dynamics, provide more accurate descriptions of animal social networks, and allow more accurate modeling of multiple (both direct and indirect) disease transmission pathways.
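Two of the hourly network characteristics named above, density and transitivity, can be computed directly from a proximity adjacency structure; the four-animal toy network below is illustrative.

```python
def density(adj):
    """Network density: observed edges / possible edges (undirected graph)."""
    n = len(adj)
    edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    return edges / (n * (n - 1) / 2)

def transitivity(adj):
    """Fraction of connected triples that close into triangles."""
    triangles = triples = 0
    for v, neighbours in adj.items():
        nbrs = sorted(neighbours)
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                triples += 1
                if nbrs[j] in adj[nbrs[i]]:
                    triangles += 1
    return triangles / triples if triples else 0.0

# hourly proximity network: an edge when two animals were within contact range
hour_net = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": set()}
print(density(hour_net), transitivity(hour_net))  # 0.5 1.0
```

Aggregating contacts to the daily level merges many such hourly graphs into one denser graph, which is how the heterogeneity described above gets washed out.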
Applications of aggregation theory to sustainability assessment
Pollesch, N.; Dale, V. H.
2015-04-01
In order to aid in the transition towards operations that promote sustainability goals, researchers and stakeholders use sustainability assessments. Although assessments take various forms, many utilize diverse sets of indicators that can number anywhere from two to over 2000. Indices, composite indicators, or aggregate values are used to simplify high-dimensional and complex data sets and to clarify assessment results. Although the choice of aggregation function is a key component in the development of an assessment, there are few examples in the literature to guide appropriate aggregation function selection. This paper develops a connection between the mathematical study of aggregation functions and sustainability assessment in order to provide criteria for aggregation function selection. Relevant mathematical properties of aggregation functions are presented and interpreted. Lastly, we provide cases of these properties and their relation to previous sustainability assessment research. Examples show that mathematical aggregation properties can be used to address compensatory behavior and weak versus strong sustainability, aggregation of data under varying units of measurement, multiple-site multiple-indicator aggregation, and the determination of error bounds in aggregate output for normalized and non-normalized indicator measures.
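The compensatory-behavior question can be illustrated by comparing three classic aggregation functions on the same normalized indicator set; the indicator values are made up.

```python
import math

def arithmetic(xs):
    """Fully compensatory: a high score can offset a low one (weak sustainability)."""
    return sum(xs) / len(xs)

def geometric(xs):
    """Partially compensatory: a near-zero indicator drags the aggregate down hard."""
    return math.prod(xs) ** (1 / len(xs))

def minimum(xs):
    """Non-compensatory: the weakest indicator dominates (strong sustainability)."""
    return min(xs)

indicators = [0.9, 0.8, 0.1]  # normalized sustainability indicators
for agg in (arithmetic, geometric, minimum):
    print(agg.__name__, round(agg(indicators), 3))
```

The same data yield an aggregate of 0.6, roughly 0.42, or 0.1 depending on the function chosen, which is why the mathematical properties of the aggregation function matter for the assessment's conclusions.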
NASA Astrophysics Data System (ADS)
Alves, Renata M. S.; Vanaverbeke, Jan; Bouma, Tjeerd J.; Guarini, Jean-Marc; Vincx, Magda; Van Colen, Carl
2017-03-01
Ecosystem engineers contribute to ecosystem functioning by regulating key environmental attributes, such as habitat availability and sediment biogeochemistry. While autogenic engineers can increase habitat complexity passively and provide physical protection to other species, allogenic engineers can regulate sediment oxygenation and biogeochemistry through bioturbation and/or bioirrigation. Their effects rely on the physical attributes of the engineer and/or its biogenic constructs, such as abundance and/or size. The present study focused on tube aggregations of a sessile, tube-building polychaete that engineers marine sediments, Lanice conchilega. Its tube aggregations modulate water flow by dissipating energy, influencing sedimentary processes and increasing particle retention. These effects can be influenced by temporal fluctuations in population demographic processes. Here, we investigated the relationship between population processes and ecosystem engineering through an in-situ survey (1.5 years) of L. conchilega aggregations at the sandy beach of Boulogne-sur-Mer (France). We (1) evaluated temporal patterns in population structure, and (2) investigated how these are related to the ecosystem engineering of L. conchilega on marine sediments. During our survey, we assessed tube density, demographic structure, and sediment properties (surficial chl-a, EPS, TOM, median and mode grain size, sorting, and mud and water content) on a monthly basis for 12 intertidal aggregations. We found that the population was mainly composed of short-lived (6-10 months), small to medium-sized individuals. Mass mortality severely reduced population density during winter. However, the population persisted, likely due to recruits from other populations, which are associated with short- and long-term population dynamics. Two periods of recruitment were identified: spring/summer and autumn.
Population density was highest during the spring recruitment and significantly affected several environmental properties (i.e. EPS, TOM, mode grain size, and mud and water content), suggesting that demographic processes may be responsible for periods of pronounced ecosystem engineering at densities of approx. 30 000 ind·m-2.
Spatio-temporal error growth in the multi-scale Lorenz'96 model
NASA Astrophysics Data System (ADS)
Herrera, S.; Fernández, J.; Rodríguez, M. A.; Gutiérrez, J. M.
2010-07-01
The influence of multiple spatio-temporal scales on the error growth and predictability of atmospheric flows is analyzed. To this end, we consider the two-scale Lorenz'96 model and study the interplay of the slow and fast variables in the error growth dynamics. It is shown that when the coupling between slow and fast variables is weak, the slow variables dominate the evolution of fluctuations, whereas in the case of strong coupling the fast variables impose a non-trivial, complex error growth pattern on the slow variables, with two different regimes before and after saturation of the fast variables. This complex behavior is analyzed using the recently introduced Mean-Variance Logarithmic (MVL) diagram.
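The slow-fast interplay can be made concrete with the standard two-scale Lorenz'96 tendencies; the parameter values, small system size, and explicit Euler step below are illustrative choices, not the paper's configuration.

```python
def l96_two_scale_tendencies(X, Y, F=8.0, h=1.0, b=10.0, c=10.0):
    """Tendencies of the two-scale Lorenz'96 model (standard formulation).

    X: K slow variables; Y: flat list of J*K fast variables, with the block
    Y[k*J:(k+1)*J] coupled to X[k]. h controls the coupling strength whose
    weak/strong regimes the text discusses.
    """
    K = len(X)
    J = len(Y) // K
    dX = []
    for k in range(K):
        coupling = (h * c / b) * sum(Y[k * J:(k + 1) * J])
        dX.append(X[k - 1] * (X[(k + 1) % K] - X[k - 2]) - X[k] + F - coupling)
    n = len(Y)
    dY = []
    for j in range(n):
        k = j // J  # slow variable this fast variable is attached to
        dY.append(-c * b * Y[(j + 1) % n] * (Y[(j + 2) % n] - Y[j - 1])
                  - c * Y[j] + (h * c / b) * X[k])
    return dX, dY

# one explicit Euler step from a slightly perturbed state (illustration only)
X = [1.0, 1.01, 1.0, 1.0]
Y = [0.1] * 8  # J = 2 fast variables per slow variable
dX, dY = l96_two_scale_tendencies(X, Y)
X1 = [x + 0.001 * dx for x, dx in zip(X, dX)]
```

Error growth experiments of the kind described above integrate two copies of this system from nearby initial states and track how their separation evolves in the slow and fast subspaces.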
Satellite-based drought monitoring in Kenya in an operational setting
NASA Astrophysics Data System (ADS)
Klisch, A.; Atzberger, C.; Luminari, L.
2015-04-01
The University of Natural Resources and Life Sciences (BOKU) in Vienna (Austria), in cooperation with the National Drought Management Authority (NDMA) in Nairobi (Kenya), has set up an operational processing chain for mapping drought occurrence and strength for the territory of Kenya using the Moderate Resolution Imaging Spectroradiometer (MODIS) NDVI at 250 m ground resolution from 2000 onwards. The processing chain employs a modified Whittaker smoother providing consistent NDVI "Monday images" in near real-time (NRT) at a 7-day update interval. The approach constrains temporally extrapolated NDVI values based on reasonable temporal NDVI paths. In contrast to competing approaches, the processing chain provides a modelled uncertainty range for each pixel and time step. The uncertainties are calculated by a hindcast analysis of the NRT products against an "optimum" filtering. To detect droughts, the vegetation condition index (VCI) is calculated at the pixel level and is spatially aggregated to administrative units. Starting from weekly temporal resolution, the indicator is also aggregated over 1- and 3-monthly intervals, taking the available uncertainty information into account. Analysts at NDMA use the spatially/temporally aggregated VCI and basic image products for their monthly bulletins. Based on the provided bio-physical indicators as well as a number of socio-economic indicators, contingency funds are released by NDMA to support counties in drought conditions. The paper shows the successful application of the products within NDMA by providing a retrospective analysis applied to the droughts of 2006, 2009 and 2011. Some comparisons with alternative products (e.g. FEWS NET, the Famine Early Warning Systems Network) highlight the main differences.
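The VCI computation and its spatial aggregation to administrative units can be sketched as follows; the pixel values and the simple unweighted mean are illustrative assumptions, not the operational chain's exact implementation.

```python
def vci(ndvi, ndvi_min, ndvi_max):
    """Vegetation Condition Index (%): current NDVI positioned within the
    historical min/max envelope for that pixel and time of year."""
    return 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)

def unit_vci(pixel_records):
    """Spatial aggregation to an administrative unit: mean of pixel VCIs."""
    return sum(vci(n, lo, hi) for n, lo, hi in pixel_records) / len(pixel_records)

# (ndvi, historical min, historical max) per pixel -- illustrative values
pixels = [(0.35, 0.2, 0.6), (0.25, 0.2, 0.6), (0.50, 0.2, 0.6)]
print(round(unit_vci(pixels), 1))  # 41.7
```

Low unit-level VCI values (the second pixel here sits at 12.5%) are what flag a county as drought-affected in such a scheme.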
Cole, Sindy; McNally, Gavan P
2007-10-01
Three experiments studied temporal-difference (TD) prediction errors during Pavlovian fear conditioning. In Stage I, rats received conditioned stimulus A (CSA) paired with shock. In Stage II, they received pairings of CSA and CSB with shock that blocked learning to CSB. In Stage III, a serial overlapping compound, CSB --> CSA, was followed by shock. The change in intratrial durations supported fear learning to CSB but reduced fear of CSA, revealing the operation of TD prediction errors. N-methyl-D-aspartate (NMDA) receptor antagonism prior to Stage III prevented learning, whereas opioid receptor antagonism selectively affected predictive learning. These findings support a role for TD prediction errors in fear conditioning. They suggest that NMDA receptors contribute to fear learning by acting on the product of predictive error, whereas opioid receptors contribute to predictive error. (PsycINFO Database Record (c) 2007 APA, all rights reserved).
Takahashi, Yuji K.; Langdon, Angela J.; Niv, Yael; Schoenbaum, Geoffrey
2016-01-01
Dopamine neurons signal reward prediction errors. This requires accurate reward predictions. It has been suggested that the ventral striatum provides these predictions. Here we tested this hypothesis by recording from putative dopamine neurons in the VTA of rats performing a task in which prediction errors were induced by shifting reward timing or number. In controls, the neurons exhibited error signals in response to both manipulations. However, dopamine neurons in rats with ipsilateral ventral striatal lesions exhibited errors only to changes in number and failed to respond to changes in timing of reward. These results, supported by computational modeling, indicate that predictions about the temporal specificity and the number of expected rewards are dissociable, and that dopaminergic prediction-error signals rely on the ventral striatum for the former but not the latter. PMID:27292535
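The prediction-error signal at the heart of the two abstracts above is the standard temporal-difference error; the discount factor, learning rate, and toy values below are illustrative, not parameters from either study.

```python
def td_error(reward, v_next, v_current, gamma=0.95):
    """Temporal-difference prediction error: delta = r + gamma * V(s') - V(s)."""
    return reward + gamma * v_next - v_current

def td_update(v_current, delta, alpha=0.1):
    """Move the value estimate a step of size alpha along the prediction error."""
    return v_current + alpha * delta

# a fully predicted reward produces zero error (no learning signal);
# a larger-than-expected reward produces a positive error that drives learning
delta_expected = td_error(reward=1.0, v_next=0.0, v_current=1.0)
delta_surprise = td_error(reward=2.0, v_next=0.0, v_current=1.0)
print(delta_expected, delta_surprise)  # 0.0 1.0
```

Shifting reward timing changes which state the reward lands in, so a temporally specific value function V produces an error even when the reward amount is unchanged — the manipulation the recording study exploits.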
NASA Astrophysics Data System (ADS)
Sinha, T.; Arumugam, S.
2012-12-01
Seasonal streamflow forecasts contingent on climate forecasts can be effectively utilized to update water management plans and optimize hydroelectric power generation. Streamflow in rainfall-runoff dominated basins depends critically on forecasted precipitation, in contrast to snow-dominated basins, where initial hydrological conditions (IHCs) are more important. Since precipitation forecasts from Atmosphere-Ocean General Circulation Models are available at coarse scale (~2.8° by 2.8°), spatial and temporal downscaling of such forecasts is required to drive land surface models, which typically run at finer spatial and temporal scales. Consequently, multiple sources of error are introduced at various stages in predicting seasonal streamflow. Therefore, in this study, we address the following science questions: 1) How do we attribute the errors in monthly streamflow forecasts to various sources: (i) model errors, (ii) spatio-temporal downscaling, (iii) imprecise initial conditions, (iv) no forecasts, and (v) imprecise forecasts? 2) How do monthly streamflow forecast errors propagate with lead time over various seasons? In this study, the Variable Infiltration Capacity (VIC) model is calibrated over the Apalachicola River at Chattahoochee, FL, in the southeastern US and run with observed 1/8° daily forcings to estimate reference streamflow during 1981 to 2010. The VIC model is then forced with different schemes under updated IHCs prior to the forecasting period to estimate relative mean square errors due to: a) temporal disaggregation, b) spatial downscaling, c) reverse Ensemble Streamflow Prediction (imprecise IHCs), d) ESP (no forecasts), and e) ECHAM4.5 precipitation forecasts. Finally, error propagation under the different schemes is analyzed as a function of lead time over different seasons.
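One simple way to compare the error contributions of such schemes against the reference streamflow is a variance-normalized mean square error; the metric and the values below are illustrative, not the paper's exact formulation.

```python
def relative_mse(forecast, reference):
    """Mean square error of a forecast scheme, normalized by the variance of
    the reference series, so schemes are comparable across basins/seasons."""
    n = len(reference)
    mse = sum((f - r) ** 2 for f, r in zip(forecast, reference)) / n
    ref_mean = sum(reference) / n
    var = sum((r - ref_mean) ** 2 for r in reference) / n
    return mse / var

reference = [10.0, 20.0, 30.0, 40.0]  # reference monthly streamflow
schemes = {"imprecise IHCs": [12.0, 19.0, 33.0, 38.0],
           "no forecast (climatology)": [25.0, 25.0, 25.0, 25.0]}
for name, fc in schemes.items():
    print(name, round(relative_mse(fc, reference), 3))
```

A scheme scoring near 0 adds little error; a score of 1 means it is no better than predicting the reference mean, which is the natural baseline for a "no forecast" scheme.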
NASA Astrophysics Data System (ADS)
Shi, Pu; Thorlacius, Sigurdur; Keller, Thomas; Keller, Martin; Schulin, Rainer
2017-04-01
Soil aggregate breakdown under rainfall impact is an important process in interrill erosion, but is not represented explicitly in water erosion models. Aggregate breakdown not only reduces infiltration through surface sealing during rainfall, but also determines the size distribution of the disintegrated fragments and thus their availability for size-selective sediment transport and re-deposition. An adequate representation of the temporal evolution of fragment mass size distribution (FSD) during rainfall events and the dependence of this dynamics on factors such as rainfall intensity and soil moisture content may help improve mechanistic erosion models. Yet, little is known about the role of those factors in the dynamics of aggregate breakdown under field conditions. In this study, we conducted a series of artificial rainfall experiments on a field silt loam soil to investigate aggregate breakdown dynamics at different rainfall intensity (RI) and initial soil water content (IWC). We found that the evolution of FSD in the course of a rainfall event followed a consistent two-stage pattern in all treatments. The fragment mean weight diameter (MWD) drastically decreased in an approximately exponential way at the beginning of a rainfall event, followed by a further slow linear decrease in the second stage. We proposed an empirical model that describes this temporal pattern of MWD decrease during a rainfall event and accounts for the effects of RI and IWC on the rate parameters. The model was successfully tested using an independent dataset, showing its potential to be used in erosion models for the prediction of aggregate breakdown. The FSD at the end of the experimental rainfall events differed significantly among treatments, indicating that different aggregate breakdown mechanisms responded differently to the variation in initial soil moisture and rainfall intensity. 
These results provide evidence that aggregate breakdown dynamics needs to be considered in a case-specific manner in modelling sediment mobilization and transport during water erosion events.
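A minimal sketch of the kind of two-stage empirical MWD model described above: a rapid exponential drop followed by a slow linear decline. The functional form and all parameter values are hypothetical stand-ins, since the paper's fitted equation and RI/IWC dependence are not reproduced here.

```python
import math

def mwd(t, mwd0=2.5, mwd_break=1.0, k_exp=0.5, k_lin=0.005):
    """Hypothetical fragment mean weight diameter (mm) at rainfall time t (min):
    exponential decay toward mwd_break, plus a slow linear second stage.
    In the paper's framework, the rate parameters would depend on rainfall
    intensity (RI) and initial water content (IWC)."""
    return (mwd0 - mwd_break) * math.exp(-k_exp * t) + mwd_break - k_lin * t

early = mwd(1.0) - mwd(2.0)   # fast breakdown at the start of the event
late = mwd(40.0) - mwd(41.0)  # slow, nearly linear decline later on
print(early > late > 0)  # True
```

The two-stage shape matters for erosion modelling because most of the shift in the fragment size distribution, and hence in transportable sediment, happens in the first stage.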
NASA Technical Reports Server (NTRS)
Dong, D.; Fang, P.; Bock, F.; Webb, F.; Prawirondirdjo, L.; Kedar, S.; Jamason, P.
2006-01-01
Spatial filtering is an effective way to improve the precision of coordinate time series for regional GPS networks by reducing so-called common mode errors, thereby providing better resolution for detecting weak or transient deformation signals. The commonly used approach to regional filtering assumes that the common mode error is spatially uniform, which is a good approximation for networks of hundreds of kilometers in extent but breaks down as the spatial extent increases. A more rigorous approach should drop the assumption of a spatially uniform distribution and let the data themselves reveal the spatial distribution of the common mode error. Principal component analysis (PCA) and the Karhunen-Loeve expansion (KLE) both decompose network time series into a set of temporally varying modes and their spatial responses, and therefore provide a mathematical framework for spatiotemporal filtering. We apply the combination of PCA and KLE to daily station coordinate time series of the Southern California Integrated GPS Network (SCIGN) for the period 2000 to 2004. We demonstrate that spatially and temporally correlated common mode errors are the dominant error source in daily GPS solutions. The spatial characteristics of the common mode errors are close to uniform for the east, north, and vertical components, which implies a very long wavelength source for the common mode errors compared to the spatial extent of the GPS network in southern California. Furthermore, the common mode errors exhibit temporally nonrandom patterns.
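The commonly used uniform ("stacking") filter that the PCA/KLE approach generalizes can be sketched as follows; the station names and residual values are made up, and a full PCA decomposition is not reproduced here.

```python
import statistics

def common_mode_filter(series_by_station):
    """Spatial stacking filter assuming a spatially uniform common mode:
    the network-mean residual at each epoch is removed from every station.
    PCA/KLE instead lets the data determine each station's spatial response
    to each temporal mode; this sketch is the uniform special case."""
    n_epochs = len(next(iter(series_by_station.values())))
    common = [statistics.mean(s[t] for s in series_by_station.values())
              for t in range(n_epochs)]
    filtered = {name: [v - c for v, c in zip(s, common)]
                for name, s in series_by_station.items()}
    return filtered, common

# residual east-component time series (mm) sharing a common error signal
raw = {"P001": [2.0, -1.0, 3.0], "P002": [2.5, -0.5, 3.5], "P003": [1.5, -1.5, 2.5]}
filtered, mode = common_mode_filter(raw)
print(mode)  # [2.0, -1.0, 3.0]
```

When the true common mode really is near-uniform, as the abstract finds for southern California, stacking and the leading PCA mode remove nearly the same signal; over larger networks the spatially varying PCA responses become essential.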
ERIC Educational Resources Information Center
Spinelli, Simona; Vasa, Roma A.; Joel, Suresh; Nelson, Tess E.; Pekar, James J.; Mostofsky, Stewart H.
2011-01-01
Background: Error processing is reflected, behaviorally, by slower reaction times (RT) on trials immediately following an error (post-error). Children with attention-deficit hyperactivity disorder (ADHD) fail to show RT slowing and demonstrate increased intra-subject variability (ISV) on post-error trials. The neural correlates of these behavioral…
ERIC Educational Resources Information Center
Rocconi, Louis M.
2011-01-01
Hierarchical linear models (HLM) solve the problems associated with the unit of analysis problem such as misestimated standard errors, heterogeneity of regression and aggregation bias by modeling all levels of interest simultaneously. Hierarchical linear modeling resolves the problem of misestimated standard errors by incorporating a unique random…
ERIC Educational Resources Information Center
Schwarz, Wolf
2006-01-01
Paradigms used to study the time course of the redundant signals effect (RSE; J. O. Miller, 1986) and temporal order judgments (TOJs) share many important similarities and address related questions concerning the time course of sensory processing. The author of this article proposes and tests a new aggregate diffusion-based model to quantitatively…
Reactive gas transport in soil: kinetics versus local equilibrium approach
NASA Astrophysics Data System (ADS)
Geistlinger, Helmut; Jia, Ruijan
2010-05-01
Gas transport through the unsaturated soil zone was studied using an analytical solution of the gas transport model that is mathematically equivalent to the Two-Region model. The gas transport model includes diffusive and convective gas fluxes, interphase mass transfer between the gas and water phases, and biodegradation. The influence of non-equilibrium phenomena, spatially variable initial conditions, and transient boundary conditions is studied. The objective of this paper is to compare the kinetic approach for interphase mass transfer with the standard local equilibrium approach and to find conditions and time scales under which the local equilibrium approach is justified. The time scale of investigation was limited to the day scale, because this is the relevant scale for understanding gas emission from the soil zone with transient water saturation. For the first time, a generalized mass transfer coefficient is proposed that justifies the often-used steady-state Thin-Film mass transfer coefficient for small and medium water-saturated aggregates of about 10 mm. The main conclusion from this study is that non-equilibrium mass transfer depends strongly on the temporal and small-scale spatial distribution of water within the unsaturated soil zone. For regions with low water saturation and small water-saturated aggregates (radius about 1 mm), the local equilibrium approach can be used as a first approximation for diffusive gas transport. For higher water saturation and medium radii of water-saturated aggregates (radius about 10 mm), and for convective gas transport, the non-equilibrium effect becomes more and more important as the hydraulic residence time and the Damköhler number decrease. Relative errors can reach 100% or more. While for medium radii the local equilibrium approach describes the main features of both the spatial concentration profile and the time dependence of the emission rate, it fails completely for larger aggregates (radius about 100 mm).
From the comparative study of relevant scenarios with and without biodegradation it can be concluded that, under realistic field conditions, biodegradation within the immobile water phase is often mass-transfer limited and the local equilibrium approach assuming instantaneous mass transfer becomes rather questionable.
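The kinetic-versus-equilibrium contrast can be sketched with a first-order interphase mass transfer model; the closed-form solution below assumes a constant gas-phase concentration, and all parameter values are illustrative simplifications of the Two-Region setting.

```python
import math

def kinetic_aqueous_conc(c_gas, c0_aq, k_transfer, henry, t):
    """First-order kinetic interphase mass transfer toward Henry's-law
    equilibrium: dC_aq/dt = k * (C_gas / H - C_aq). Closed-form solution
    for a constant gas-phase concentration (illustrative simplification)."""
    c_eq = c_gas / henry  # local-equilibrium value
    return c_eq + (c0_aq - c_eq) * math.exp(-k_transfer * t)

# with fast transfer relative to transport (large Damkohler number), the
# kinetic solution collapses onto the local-equilibrium value C_gas / H;
# with slow transfer, the aqueous phase lags far behind equilibrium
slow = kinetic_aqueous_conc(1.0, 0.0, k_transfer=0.01, henry=2.0, t=10.0)
fast = kinetic_aqueous_conc(1.0, 0.0, k_transfer=10.0, henry=2.0, t=10.0)
print(round(fast, 3))  # 0.5
```

Large aggregates correspond to a small effective `k_transfer` (long diffusion path through the water-saturated region), which is why the local equilibrium approach fails there.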
Route Learning Impairment in Temporal Lobe Epilepsy
Bell, Brian D.
2012-01-01
Memory impairment on neuropsychological tests is relatively common in temporal lobe epilepsy (TLE) patients. But memory rarely has been evaluated in more naturalistic settings. This study assessed TLE (n = 19) and control (n = 32) groups on a real-world route learning (RL) test. Compared to the controls, the TLE group committed significantly more total errors across the three RL test trials. RL errors correlated significantly with standardized auditory and visual memory and visual-perceptual test scores in the TLE group. In the TLE subset for whom hippocampal data were available (n = 14), RL errors also correlated significantly with left hippocampal volume. This is one of the first studies to demonstrate real-world memory impairment in TLE patients and its association with both mesial temporal lobe integrity and standardized memory test performance. The results support the ecological validity of clinical neuropsychological assessment. PMID:23041173
Wakefield, C B
2010-10-01
Ichthyoplankton sampling and ovarian characteristics were used to elucidate whether the reproductive cycles of a spawning aggregation of snapper Pagrus auratus in a nearshore marine embayment were temporally and spatially specific and related to environmental conditions. The reproductive dynamics of this aggregation were studied over four consecutive years (2001-2004). Spawning occurred between September and January each year, when water temperatures ranged from 15·8 to 23·1° C. In all 4 years, the cumulative egg densities in Cockburn Sound were highest when water temperatures were within the narrow range of 19-20° C. The spawning fraction of females was bimodal within each month, peaking at 96-100% during new moons and c. 75% during full moons. The back-calculated ages of P. auratus eggs collected from 16 ichthyoplankton surveys demonstrated that P. auratus in Cockburn Sound spawn at night during the 3 h following high tide. The spatial distributions of P. auratus eggs in Cockburn Sound during the peak reproductive period were consistent in all 4 years, further implying that spawning was temporally and spatially specific. High concentrations of recently spawned eggs (8-16 h old) demonstrated that spawning also occurred within the adjacent marine embayments of Owen Anchorage and Warnbro Sound. Water circulation in Cockburn and Warnbro Sounds resembled an eddy that was most prominent during the period of highest egg densities, thereby facilitating the retention of eggs in these areas. The reproductive cycles of P. auratus described in this study have assisted managers in setting the appropriate temporal and spatial scale for a closed fishing season to protect these spawning aggregations. © 2010 The Author. Journal compilation © 2010 The Fisheries Society of the British Isles.
Mesoscale behavior study of collector aggregations in a wet dust scrubber.
Li, Xiaochuan; Wu, Xiang; Hu, Haibin; Jiang, Shuguang; Wei, Tao; Wang, Dongxue
2018-01-01
In order to address the bottleneck of low fine-particle removal efficiency in self-excited dust scrubbers, this paper focuses on the influence of intermittent gas-liquid two-phase flow on the mesoscale behavior of collector aggregations. The latter is investigated by applying high-speed dynamic imaging to a self-excited dust scrubber experimental setup. Real-time monitoring of the dust removal process is provided to clarify its operating mechanism at the mesoscale level. The results show that particulate capture in a self-excited dust scrubber is provided by liquid droplets, liquid films/curtains, bubbles, and their aggregations. Complex spatial and temporal structures are intrinsic to each kind of collector morphology, and these are considered the major factors controlling the dust removal mechanism of self-excited dust scrubbers. For the specific gas-liquid two-phase flow parameters under study, the evolution patterns of particular collectors reflect the intrinsically intermittent and complex character of the temporal structure. The intermittent initiation of the collectors and the cyclic formation and collapse of air holes provide time and space for fine dust to escape capture. These mesoscale experimental data provide more insight into the factors reducing the dust removal efficiency of self-excited dust scrubbers: as operating parameters change, the morphology and spatial distribution of the collectors change in diverse ways, and their evolution over time exhibits markedly intermittent and complex temporal structure.
Tallot, Lucille; Diaz-Mataix, Lorenzo; Perry, Rosemarie E.; Wood, Kira; LeDoux, Joseph E.; Mouly, Anne-Marie; Sullivan, Regina M.; Doyère, Valérie
2017-01-01
The updating of a memory is triggered whenever it is reactivated and a mismatch from what is expected (i.e., prediction error) is detected, a process that can be unraveled through the memory's sensitivity to protein synthesis inhibitors (i.e., reconsolidation). As noted in previous studies, in Pavlovian threat/aversive conditioning in adult rats, prediction error detection and its associated protein synthesis-dependent reconsolidation can be triggered by reactivating the memory with the conditioned stimulus (CS), but without the unconditioned stimulus (US), or by presenting a CS–US pairing with a different CS–US interval than during the initial learning. Whether similar mechanisms underlie memory updating in the young is not known. Using similar paradigms with rapamycin (an mTORC1 inhibitor), we show that preweaning rats (PN18–20) do form a long-term memory of the CS–US interval, and detect a 10-sec versus 30-sec temporal prediction error. However, the resulting updating/reconsolidation processes become adult-like after adolescence (PN30–40). Our results thus show that while temporal prediction error detection exists in preweaning rats, specific infant-type mechanisms are at play for associative learning and memory.
NASA Astrophysics Data System (ADS)
Liao, S.; Chen, L.; Li, J.; Xiong, W.; Wu, Q.
2015-07-01
Existing spatiotemporal databases support spatiotemporal aggregation queries over massive moving-object datasets. Because of the large data volumes and single-threaded processing methods, query speed cannot meet application requirements. Moreover, query efficiency is more sensitive to spatial variation than to temporal variation. In this paper, we propose a spatiotemporal aggregation query method that uses multi-threaded parallelism based on regional division, and we implement it on the server. Concretely, we divide the spatiotemporal domain into several spatiotemporal cubes, compute the spatiotemporal aggregation over all cubes using multi-threaded parallel processing, and then integrate the partial query results. Tests and analysis on real datasets show that this method improves query speed significantly.
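The cube-partition strategy described above can be sketched in a few lines. This is a minimal illustration, not the authors' server implementation: the records, cell sizes, and chunking below are placeholder assumptions, and the aggregate computed is a simple COUNT per cube.

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

# Hypothetical moving-object records: (x, y, t) positions.
points = [(1.2, 3.4, 5), (1.8, 3.9, 5), (7.5, 2.1, 12), (7.9, 2.4, 12)]

CELL, T_STEP = 2.0, 10  # assumed spatial cell size and temporal bin width

def cube_key(p):
    """Map a point to its spatiotemporal cube index (ix, iy, it)."""
    x, y, t = p
    return (int(x // CELL), int(y // CELL), int(t // T_STEP))

def aggregate_chunk(chunk):
    """COUNT aggregation over one chunk of points."""
    return Counter(cube_key(p) for p in chunk)

# Split the data, aggregate each chunk in a separate thread, merge partials.
chunks = [points[:2], points[2:]]
with ThreadPoolExecutor(max_workers=2) as pool:
    partials = list(pool.map(aggregate_chunk, chunks))

total = Counter()
for c in partials:
    total.update(c)
```

The merge step works because COUNT (like SUM, MIN, MAX) is decomposable: per-cube partial counts from each thread can simply be added together.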
Bruce, Jared; Echemendia, Ruben; Tangeman, Lindy; Meeuwisse, Willem; Comper, Paul; Hutchison, Michael; Aubry, Mark
2016-01-01
Computerized neuropsychological tests are frequently used to assist in return-to-play decisions following sports concussion. However, due to concerns about test reliability, the Centers for Disease Control and Prevention recommends yearly baseline testing. The standard practice that has developed in baseline/postinjury comparisons is to examine the difference between the most recent baseline test and postconcussion performance. Drawing from classical test theory, the present study investigated whether temporal stability could be improved by taking an alternate approach that uses the aggregate of 2 baselines to more accurately estimate baseline cognitive ability. One hundred fifteen English-speaking professional hockey players with 3 consecutive Immediate Postconcussion Assessment and Testing (ImPACT) baseline tests were extracted from a clinical program evaluation database overseen by the National Hockey League and National Hockey League Players' Association. The temporal stability of ImPACT composite scores was significantly increased by aggregating test performance during Sessions 1 and 2 to predict performance during Session 3. Using this approach, the 2-factor Memory (r = .72) and Speed (r = .79) composites of ImPACT showed acceptable long-term reliability. Using the aggregate of 2 baseline scores significantly improves temporal stability and allows for more accurate predictions of cognitive change following concussion. Clinicians are encouraged to estimate baseline abilities by taking into account all of an athlete's previous baseline scores.
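The aggregation idea can be illustrated with a toy calculation. All scores below are fabricated, not ImPACT data: averaging two baseline sessions gives a less noisy estimate of true ability, which should correlate more strongly with a later session than the most recent baseline alone.

```python
# Fabricated composite scores for 5 athletes across three baseline sessions.
s1 = [85, 90, 78, 92, 88]
s2 = [87, 88, 80, 94, 85]
s3 = [86, 89, 79, 93, 87]

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Aggregate of sessions 1 and 2: a less noisy baseline estimate.
agg = [(a + b) / 2 for a, b in zip(s1, s2)]

r_single = pearson(s1, s3)  # most-recent-baseline approach
r_agg = pearson(agg, s3)    # aggregated-baseline approach
```

Under classical test theory, averaging parallel measurements shrinks the error variance, which is why the aggregate predicts the third session more reliably.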
Martin, Markus; Dressing, Andrea; Bormann, Tobias; Schmidt, Charlotte S M; Kümmerer, Dorothee; Beume, Lena; Saur, Dorothee; Mader, Irina; Rijntjes, Michel; Kaller, Christoph P; Weiller, Cornelius
2017-08-01
The study aimed to elucidate areas involved in recognizing tool-associated actions, and to characterize the relationship between recognition and active performance of tool use. We performed voxel-based lesion-symptom mapping in a prospective cohort of 98 acute left-hemisphere ischemic stroke patients (68 male; age mean ± standard deviation, 65 ± 13 years; examined 4.4 ± 2 days post-stroke). In a video-based test, patients distinguished correct tool-related actions from actions with spatio-temporal (incorrect grip, kinematics, or tool orientation) or conceptual errors (incorrect tool-recipient matching, e.g., spreading jam on toast with a paintbrush). Moreover, spatio-temporal and conceptual errors were determined during actual tool use. Deficient spatio-temporal error discrimination followed lesions within a dorsal network in which the inferior parietal lobule (IPL) and the lateral temporal cortex (LTC) were specifically relevant for assessing functional hand postures and kinematics, respectively. Conversely, impaired recognition of conceptual errors resulted from damage to ventral stream regions including the anterior temporal lobe. Furthermore, LTC and IPL lesions impacted differently on action recognition and active tool use, respectively. In summary, recognition of tool-associated actions relies on a componential network. Our study particularly highlights the dissociable roles of LTC and IPL for the recognition of action kinematics and functional hand postures, respectively.
Errors Affect Hypothetical Intertemporal Food Choice in Women
Sellitto, Manuela; di Pellegrino, Giuseppe
2014-01-01
Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted high-error rate, and another food cue predicted low-error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of food associated with high-error rate. Moreover, taking into account that deprivational states affect sensitivity for food, we controlled for participants’ hunger. Results showed that pairing food with high-error likelihood decreased temporal discounting. This effect was modulated by hunger, indicating that, the lower the hunger level, the more participants showed reduced impulsive preference for the food previously associated with a high number of errors as compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes.
NASA Astrophysics Data System (ADS)
Huber, Franz J. T.; Will, Stefan; Daun, Kyle J.
2016-11-01
Inferring the size distribution of aerosolized fractal aggregates from the angular distribution of elastically scattered light is a mathematically ill-posed problem. This paper presents a procedure for analyzing Wide-Angle Light Scattering (WALS) data using Bayesian inference. The outcome is probability densities for the recovered size distribution and aggregate morphology parameters. This technique is applied to both synthetic data and experimental data collected on soot-laden aerosols, using a measurement equation derived from Rayleigh-Debye-Gans fractal aggregate (RDG-FA) theory. In the case of experimental data, the recovered aggregate size distribution parameters are generally consistent with TEM-derived values, but the accuracy is impaired by the well-known limited accuracy of RDG-FA theory. Finally, we show how this bias could potentially be avoided using the approximation error technique.
Block-accelerated aggregation multigrid for Markov chains with application to PageRank problems
NASA Astrophysics Data System (ADS)
Shen, Zhao-Li; Huang, Ting-Zhu; Carpentieri, Bruno; Wen, Chun; Gu, Xian-Ming
2018-06-01
Recently, the adaptive algebraic aggregation multigrid method has been proposed for computing stationary distributions of Markov chains. This method updates aggregates on every iterative cycle to keep coarse-level corrections highly accurate. Its fast convergence rate is thus well guaranteed, but a large proportion of the runtime is often consumed by the aggregation processes. In this paper, we show that the aggregates on each level of this method can be used to transform that level's probability equation into a block linear system. We then propose a Block-Jacobi relaxation that smooths error by operating on the block system of each level. Some theoretical analysis of this technique is presented, and the technique is also adapted to solve PageRank problems. Its purpose is to accelerate the adaptive aggregation multigrid method and its variants for solving Markov chains and PageRank problems, and to shed some light on ways of making aggregation processes more cost-effective in aggregation multigrid methods. Numerical experiments illustrate the effectiveness of the technique.
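As a hedged illustration of the relaxation named above, the sketch below applies a damped block-Jacobi sweep to the stationary-distribution equation A·pi = pi of a small column-stochastic chain. The matrix, the block partition, and the damping factor are invented for the example; this is not the paper's multigrid hierarchy.

```python
import numpy as np

# Column-stochastic transition matrix A (A[i, j] = prob of j -> i) of a
# 4-state chain; the stationary vector satisfies A @ pi = pi.
A = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.50, 0.25, 0.00, 0.25],
              [0.00, 0.25, 0.50, 0.25],
              [0.00, 0.25, 0.50, 0.25]])
blocks = [np.array([0, 1]), np.array([2, 3])]   # assumed block partition

x = np.array([0.4, 0.3, 0.2, 0.1])   # initial guess on the simplex
omega = 0.5                          # damping to suppress oscillatory modes

for _ in range(200):
    x_new = x.copy()
    for B in blocks:
        notB = np.setdiff1d(np.arange(4), B)
        M = np.eye(len(B)) - A[np.ix_(B, B)]    # (I - A) restricted to block B
        rhs = A[np.ix_(B, notB)] @ x[notB]      # coupling from the other blocks
        x_new[B] = np.linalg.solve(M, rhs)      # exact solve on the small block
    x = (1 - omega) * x + omega * x_new         # damped Jacobi update
    x = x / x.sum()                             # renormalize to a distribution
```

Each sweep solves only small dense block systems, which is the cost advantage a block relaxation aims for; the damped iterate converges to the uniform stationary vector of this particular chain.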
Martire, Kristy A; Growns, Bethany; Navarro, Danielle J
2018-04-17
Forensic handwriting examiners currently testify to the origin of questioned handwriting for legal purposes. However, forensic scientists are increasingly being encouraged to assign probabilities to their observations in the form of a likelihood ratio. This study is the first to examine whether handwriting experts are able to estimate the frequency of US handwriting features more accurately than novices. The results indicate that the absolute error for experts was lower than for novices, but the size of the effect is modest, and the overall error rate even for experts is large enough to raise questions about whether their estimates can be sufficiently trustworthy for presentation in court. When errors are separated into effects caused by miscalibration and those caused by imprecision, we find systematic differences between individuals. Finally, we consider several ways of aggregating predictions from multiple experts, suggesting that quite substantial improvements in expert predictions are possible when a suitable aggregation method is used.
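One simple family of aggregation methods can be sketched as follows. The estimates, the assumed true frequency, and the choice of log-odds pooling are illustrative assumptions, not the specific methods the authors evaluated.

```python
import math

# Hypothetical frequency estimates (probabilities) from three experts for the
# same handwriting feature; the "true" frequency is assumed for illustration.
estimates = [0.30, 0.10, 0.22]
true_freq = 0.20

def mean(xs):
    return sum(xs) / len(xs)

def logodds_mean(ps):
    """Pool probabilities by averaging in log-odds space, then invert."""
    z = mean([math.log(p / (1 - p)) for p in ps])
    return 1 / (1 + math.exp(-z))

simple = mean(estimates)        # linear opinion pool
pooled = logodds_mean(estimates)  # log-odds opinion pool

err_simple = abs(simple - true_freq)
err_pooled = abs(pooled - true_freq)
```

Pooling tends to cancel independent imprecision across experts, which is why aggregate estimates can beat the typical individual; systematic miscalibration shared by all experts, by contrast, survives any averaging.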
Treatment of ocean tide aliasing in the context of a next generation gravity field mission
NASA Astrophysics Data System (ADS)
Hauk, Markus; Pail, Roland
2018-07-01
Current temporal gravity field solutions from Gravity Recovery and Climate Experiment (GRACE) suffer from temporal aliasing errors due to undersampling of signal to be recovered (e.g. hydrology), uncertainties in the de-aliasing models (usually atmosphere and ocean) and imperfect ocean tide models. Especially the latter will be one of the most limiting factors in determining high-resolution temporal gravity fields from future gravity missions such as GRACE Follow-On and Next-Generation Gravity Missions (NGGM). In this paper a method to co-parametrize ocean tide parameters of the eight main tidal constituents over time spans of several years is analysed and assessed. Numerical closed-loop simulations of low-low satellite-to-satellite-tracking missions for a single polar pair and a double pair Bender-type formation are performed, using time variable geophysical background models and noise assumptions for new generation instrument technology. Compared to the single pair mission, results show a reduction of tide model errors up to 70 per cent for dedicated tidal constituents due to an enhanced spatial and temporal sampling and error isotropy for the double pair constellation. Extending the observation period from 1 to 3 yr leads to a further reduction of tidal errors up to 60 per cent for certain constituents, and considering non-tidal mass changes during the estimation process leads to reductions of tidal errors between 20 and 80 per cent. As part of a two-step approach, the estimated tide model is used for de-aliasing during gravity field retrieval in a second iteration, resulting in more than 50 per cent reduction of ocean tide aliasing errors for a NGGM Bender-type formation.
Rausch, R; MacDonald, K
1997-03-01
We used a protocol consisting of a continuous presentation of stimuli with associated response requests during an intracarotid sodium amobarbital procedure (IAP) to study the effects of hemisphere injected (speech dominant vs. nondominant) and seizure focus (left temporal lobe vs. right temporal lobe) on the pattern of behavioral response errors for three types of visual stimuli (pictures of common objects, words, and abstract forms). Injection of the left speech dominant hemisphere compared to the right nondominant hemisphere increased overall errors and affected the pattern of behavioral errors. The presence of a seizure focus in the contralateral hemisphere increased overall errors, particularly for the right temporal lobe seizure patients, but did not affect the pattern of behavioral errors. Left hemisphere injections disrupted both naming and reading responses at a rate similar to that of matching-to-sample performance. Also, a short-term memory deficit was observed with all three stimuli. Long-term memory testing following the left hemisphere injection indicated that only for pictures of common objects were there fewer errors during the early postinjection period than for the later long-term memory testing. Therefore, despite the inability to respond to picture stimuli, picture items, but not words or forms, could be sufficiently encoded for later recall. In contrast, right hemisphere injections resulted in few errors, with a pattern suggesting a mild general cognitive decrease. A selective weakness in learning unfamiliar forms was found. Our findings indicate that different patterns of behavioral deficits occur following the left vs. right hemisphere injections, with selective patterns specific to stimulus type.
Measurement Error and Bias in Value-Added Models. Research Report. ETS RR-17-25
ERIC Educational Resources Information Center
Kane, Michael T.
2017-01-01
By aggregating residual gain scores (the differences between each student's current score and a predicted score based on prior performance) for a school or a teacher, value-added models (VAMs) can be used to generate estimates of school or teacher effects. It is known that random errors in the prior scores will introduce bias into predictions of…
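The residual-gain construction can be made concrete with a small sketch (all scores hypothetical): predict each current score from the prior score by ordinary least squares, then average the residuals within each teacher.

```python
from collections import defaultdict

# Hypothetical records: (teacher, prior_score, current_score).
data = [
    ("A", 50, 55), ("A", 60, 66), ("A", 70, 74),
    ("B", 50, 52), ("B", 60, 61), ("B", 70, 71),
]

prior = [p for _, p, _ in data]
curr = [c for _, _, c in data]
n = len(data)

# OLS fit: current = a + b * prior (the prediction from prior performance).
mp, mc = sum(prior) / n, sum(curr) / n
b = (sum((p - mp) * (c - mc) for p, c in zip(prior, curr))
     / sum((p - mp) ** 2 for p in prior))
a = mc - b * mp

# Residual gain = observed - predicted; aggregate by teacher.
resid = defaultdict(list)
for t, p, c in data:
    resid[t].append(c - (a + b * p))
vam = {t: sum(r) / len(r) for t, r in resid.items()}
```

The bias the report discusses enters through `b`: random measurement error in the prior scores attenuates the fitted slope, so the "predicted" scores are systematically off, and those prediction errors do not wash out when residuals are aggregated over a teacher's students.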
An Aggregate of Four Anthrax Cases during the Dry Summer of 2011 in Epirus, Greece.
Gaitanis, Georgios; Lolis, Christos J; Tsartsarakis, Antonios; Kalogeropoulos, Chris; Leveidiotou-Stefanou, Stamatina; Bartzokas, Aristidis; Bassukas, Ioannis D
2016-01-01
Human anthrax is currently a sporadic disease in Europe, without significant regional clustering. Our objective was to report an unexpected aggregate of anthrax cases and to correlate local climatic factors with yearly anthrax admissions. We provide a clinical description of a geographical-temporal anthrax aggregate, a correlation of disease admissions with local weather data for the period 2001-2014, and a review of literature reports of anthrax clusters in Europe over the last 20 years. We identified 5 cases, all cutaneous: an unexpected aggregate of 4 cases in mid-summer 2011 (including a probable human-to-human transmission) and a sporadic case in August 2005, all in relatively dry periods (p < 0.05). Remarkably, 3 of 6 reports of human anthrax aggregates from Europe were observed in Balkan Peninsula countries in 2011. In light of predicted climate change, unexpected anthrax aggregates during dry periods in southern Europe underscore the risk of future anthrax re-emergence on this continent.
de Azevedo Neto, Raymundo Machado; Teixeira, Luis Augusto
2011-05-01
This investigation aimed at assessing the extent to which memory from practice in a specific condition of target displacement modulates temporal errors and movement timing of interceptive movements. We compared two groups practicing with certainty of future target velocity either in unchanged target velocity or in target velocity decrease. Following practice, both experimental groups were probed in the situations of unchanged target velocity and target velocity decrease either under the context of certainty or uncertainty about target velocity. Results from practice showed similar improvement of temporal accuracy between groups, revealing that target velocity decrease did not disturb temporal movement organization when fully predictable. Analysis of temporal errors in the probing trials indicated that both groups had higher timing accuracy in velocity decrease in comparison with unchanged velocity. Effect of practice was detected by increased temporal accuracy of the velocity decrease group in situations of decreased velocity; a trend consistent with the expected effect of practice was observed for temporal errors in the unchanged velocity group and in movement initiation at a descriptive level. An additional point of theoretical interest was the fast adaptation in both groups to a target velocity pattern different from that practiced. These points are discussed under the perspective of integration of vision and motor control by means of an internal forward model of external motion.
Event-related potentials in response to violations of content and temporal event knowledge.
Drummer, Janna; van der Meer, Elke; Schaadt, Gesa
2016-01-08
Scripts that store knowledge of everyday events are fundamentally important for managing daily routines. Content event knowledge (i.e., knowledge about which events belong to a script) and temporal event knowledge (i.e., knowledge about the chronological order of events in a script) constitute qualitatively different forms of knowledge. However, there is limited information about each distinct process and the time course involved in accessing content and temporal event knowledge. Therefore, we analyzed event-related potentials (ERPs) in response to either correctly presented event sequences or event sequences that contained a content or temporal error. We found an N400, which was followed by a posteriorly distributed P600 in response to content errors in event sequences. By contrast, we did not find an N400 but an anteriorly distributed P600 in response to temporal errors in event sequences. Thus, the N400 seems to be elicited as a response to a general mismatch between an event and the established event model. We assume that the expectancy violation of content event knowledge, as indicated by the N400, induces the collapse of the established event model, a process indicated by the posterior P600. The expectancy violation of temporal event knowledge is assumed to induce an attempt to reorganize the event model in working memory, a process indicated by the frontal P600.
Qiao, Jie; Papa, J.; Liu, X.
2015-09-24
Monolithic large-scale diffraction gratings are desired to improve the performance of high-energy laser systems and scale them to higher energies, but the surface deformation of these diffraction gratings induces spatio-temporal coupling that is detrimental to the focusability and compressibility of the output pulse. A new deformable-grating-based pulse compressor architecture with optimized actuator positions has been designed to correct the spatial and temporal aberrations induced by grating wavefront errors. An integrated optical model has been built to analyze the effect of grating wavefront errors on the spatio-temporal performance of a compressor based on four deformable gratings. Moreover, a 1.5-meter deformable grating has been optimized using an integrated finite-element-analysis and genetic-optimization model, leading to spatio-temporal performance similar to the baseline design with ideal gratings.
Animal movement constraints improve resource selection inference in the presence of telemetry error
Brost, Brian M.; Hooten, Mevin B.; Hanks, Ephraim M.; Small, Robert J.
2016-01-01
Multiple factors complicate the analysis of animal telemetry location data. Recent advancements address issues such as temporal autocorrelation and telemetry measurement error, but additional challenges remain. Difficulties introduced by complicated error structures or barriers to animal movement can weaken inference. We propose an approach for obtaining resource selection inference from animal location data that accounts for complicated error structures, movement constraints, and temporally autocorrelated observations. We specify a model for telemetry data observed with error conditional on unobserved true locations that reflects prior knowledge about constraints in the animal movement process. The observed telemetry data are modeled using a flexible distribution that accommodates extreme errors and complicated error structures. Although constraints to movement are often viewed as a nuisance, we use constraints to simultaneously estimate and account for telemetry error. We apply the model to simulated data, showing that it outperforms common ad hoc approaches used when confronted with measurement error and movement constraints. We then apply our framework to an Argos satellite telemetry data set on harbor seals (Phoca vitulina) in the Gulf of Alaska, a species that is constrained to move within the marine environment and adjacent coastlines.
Nakayama, Masataka; Saito, Satoru
2015-08-01
The present study investigated principles of phonological planning, a common serial ordering mechanism for speech production and phonological short-term memory. Nakayama and Saito (2014) investigated these principles using a speech-error induction technique, in which participants were exposed to an auditory distractor word immediately before an utterance of a target word. They demonstrated within-word adjacent mora exchanges and serial position effects on error rates. These findings support, respectively, the temporal distance and the edge principles at a within-word level. As this previous study induced errors using word distractors created by exchanging adjacent morae in the target words, it is possible that the speech errors are expressions of lexical intrusions reflecting interactive activation of phonological and lexical/semantic representations. To eliminate this possibility, the present study used nonword distractors that had no lexical or semantic representations. This approach successfully replicated the error patterns identified in the abovementioned study, further confirming that the temporal distance and edge principles are organizing precepts in phonological planning.
A priori discretization error metrics for distributed hydrologic modeling applications
NASA Astrophysics Data System (ADS)
Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar
2016-12-01
Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. 
Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.
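A toy analogue of an aggregation-induced information-loss metric, in the spirit of (but not identical to) the HRU error metric described above, can be written as an area-weighted error from lumping fine-resolution cells into HRUs. The property values, areas, and candidate schemes below are invented for illustration.

```python
def hru_error(values, areas):
    """Area-weighted mean absolute error from replacing each cell's property
    value with the HRU-level area-weighted mean."""
    total = sum(areas)
    mean = sum(v * a for v, a in zip(values, areas)) / total
    return sum(abs(v - mean) * a for v, a in zip(values, areas)) / total

def scheme_error(groups):
    """Area-weighted information loss over a candidate grouping of cells
    into HRUs; each group is a (values, areas) pair."""
    total = sum(sum(a) for _, a in groups)
    return sum(hru_error(v, a) * sum(a) for v, a in groups) / total

# Hypothetical soil-property values on four equal-area fine cells.
vals = [0.1, 0.1, 0.5, 0.5]
area = [1.0, 1.0, 1.0, 1.0]

coarse = scheme_error([(vals, area)])                               # one HRU
fine = scheme_error([(vals[:2], area[:2]), (vals[2:], area[2:])])   # two HRUs
```

The example reproduces the paper's qualitative finding: the coarser scheme loses more information, and a metric of this a priori form can rank candidate discretizations without running the hydrologic model.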
FPGA implementation of advanced FEC schemes for intelligent aggregation networks
NASA Astrophysics Data System (ADS)
Zou, Ding; Djordjevic, Ivan B.
2016-02-01
In state-of-the-art fiber-optic communication systems, a fixed forward error correction (FEC) scheme and constellation size are employed. While it is important to closely approach the Shannon limit by using turbo product codes (TPC) and low-density parity-check (LDPC) codes with soft-decision decoding (SDD), rate-adaptive techniques, which enable increased information rates over short links and reliable transmission over long links, are likely to become more important with ever-increasing network traffic demands. In this invited paper, we describe a rate-adaptive non-binary LDPC coding technique and demonstrate, by FPGA-based emulation, its flexibility and good performance, exhibiting no error floor at BERs down to 10^-15 over the entire code-rate range, making it a viable solution for next-generation high-speed intelligent aggregation networks.
NASA Technical Reports Server (NTRS)
Wang, James S.; Kawa, S. Randolph; Eluszkiewicz, Janusz; Collatz, G. J.; Mountain, Marikate; Henderson, John; Nehrkorn, Thomas; Aschbrenner, Ryan; Zaccheo, T. Scott
2012-01-01
Knowledge of the spatiotemporal variations in emissions and uptake of CO2 is hampered by sparse measurements. The recent advent of satellite measurements of CO2 concentrations is increasing the density of measurements, and the future mission ASCENDS (Active Sensing of CO2 Emissions over Nights, Days and Seasons) will provide even greater coverage and precision. Lagrangian atmospheric transport models run backward in time can quantify surface influences ("footprints") of diverse measurement platforms and are particularly well suited for inverse estimation of regional surface CO2 fluxes at high resolution based on satellite observations. We utilize the STILT Lagrangian particle dispersion model, driven by WRF meteorological fields at 40-km resolution, in a Bayesian synthesis inversion approach to quantify the ability of ASCENDS column CO2 observations to constrain fluxes at high resolution. This study focuses on land-based biospheric fluxes, whose uncertainties are especially large, in a domain encompassing North America. We present results based on realistic input fields for 2007. Pseudo-observation random errors are estimated from backscatter and optical depth measured by the CALIPSO satellite. We estimate a priori flux uncertainties based on output from the CASA-GFED (v.3) biosphere model and make simple assumptions about spatial and temporal error correlations. WRF-STILT footprints are convolved with candidate vertical weighting functions for ASCENDS. We find that at a horizontal flux resolution of 1 degree x 1 degree, ASCENDS observations are potentially able to reduce average weekly flux uncertainties by 0-8% in July, and 0-0.5% in January (assuming an error of 0.5 ppm at the Railroad Valley reference site). Aggregated to coarser resolutions, e.g. 5 degrees x 5 degrees, the uncertainty reductions are larger and more similar to those estimated in previous satellite data observing system simulation experiments.
NASA Astrophysics Data System (ADS)
Du, J.; Kimball, J. S.; Galantowicz, J. F.; Kim, S.; Chan, S.; Reichle, R. H.; Jones, L. A.; Watts, J. D.
2017-12-01
A method to monitor global land surface water (fw) inundation dynamics was developed by exploiting the enhanced fw sensitivity of L-band (1.4 GHz) passive microwave observations from the Soil Moisture Active Passive (SMAP) mission. The L-band fw (fwLBand) retrievals were derived using SMAP H-polarization brightness temperature (Tb) observations and predefined L-band reference microwave emissivities for water and land endmembers. Potential soil moisture and vegetation contributions to the microwave signal were represented from overlapping higher frequency Tb observations from AMSR2. The resulting fwLBand global record has high temporal sampling (1-3 days) and 36-km spatial resolution. The fwLBand annual averages corresponded favourably (R=0.84, p<0.001) with a 250-m resolution static global water map (MOD44W) aggregated at the same spatial scale, while capturing significant inundation variations worldwide. The monthly fwLBand averages also showed seasonal inundation changes consistent with river discharge records within six major US river basins. An uncertainty analysis indicated generally reliable fwLBand performance for major land cover areas and under low to moderate vegetation cover, but with lower accuracy for detecting water bodies covered by dense vegetation. Finer resolution (30-m) fwLBand results were obtained for three sub-regions in North America using an empirical downscaling approach and ancillary global Water Occurrence Dataset (WOD) derived from the historical Landsat record. The resulting 30-m fwLBand retrievals showed favourable classification accuracy for water (commission error 31.84%; omission error 28.08%) and land (commission error 0.82%; omission error 0.99%) and seasonal wet and dry periods when compared to independent water maps derived from Landsat-8 imagery. 
The new fwLBand algorithms and continuing SMAP and AMSR2 operations provide for near real-time, multi-scale monitoring of global surface water inundation dynamics, potentially benefiting hydrological monitoring, flood assessments, and global climate and carbon modeling.
Behavior-based aggregation of land categories for temporal change analysis
NASA Astrophysics Data System (ADS)
Aldwaik, Safaa Zakaria; Onsted, Jeffrey A.; Pontius, Robert Gilmore, Jr.
2015-03-01
Comparison between two time points of the same categorical variable for the same study extent can reveal changes among categories over time, such as transitions among land categories. If many categories exist, then analysis can be difficult to interpret. Category aggregation is the procedure that combines two or more categories to create a single broader category. Aggregation can simplify interpretation, and can also influence the sizes and types of changes. Some classifications have an a priori hierarchy to facilitate aggregation, but an a priori aggregation might make researchers blind to important category dynamics. We created an algorithm to aggregate categories in a sequence of steps based on the categories' behaviors in terms of gross losses and gross gains. The behavior-based algorithm aggregates net gaining categories with net gaining categories and aggregates net losing categories with net losing categories, but never aggregates a net gaining category with a net losing category. The behavior-based algorithm at each step in the sequence maintains net change and maximizes swap change. We present a case study where data from 2001 and 2006 for 64 land categories indicate change on 17% of the study extent. The behavior-based algorithm produces a set of 10 categories that maintains nearly the original amount of change. In contrast, an a priori aggregation produces 10 categories while reducing the change to 9%. We offer a free computer program to perform the behavior-based aggregation.
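The grouping rule described above can be sketched in a few lines; the land-category names and gross gain/loss figures below are hypothetical illustrations, not values from the case study.

```python
# Sketch of the behavior-based grouping rule: categories are classified by the
# sign of their net change (gross gain minus gross loss), and only categories
# with the same sign are eligible to merge. Names and numbers are hypothetical.

def net_change(gain, loss):
    return gain - loss

def can_aggregate(cat_a, cat_b, gains, losses):
    """Two categories may merge only if both are net gainers or both net losers."""
    a = net_change(gains[cat_a], losses[cat_a])
    b = net_change(gains[cat_b], losses[cat_b])
    return (a > 0) == (b > 0)

gains  = {"forest": 2.0, "crop": 9.0, "urban": 6.0, "wetland": 1.0}
losses = {"forest": 7.0, "crop": 4.0, "urban": 1.0, "wetland": 3.0}

gainers = [c for c in gains if net_change(gains[c], losses[c]) > 0]
losers  = [c for c in gains if net_change(gains[c], losses[c]) <= 0]
print(gainers)  # net gaining categories, eligible to merge with each other
print(losers)   # net losing categories, eligible to merge with each other
```

A full implementation would repeat this test at each step of the aggregation sequence, choosing the merge that best preserves net change while maximizing swap change.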
Digital Paper Technologies for Topographical Applications
2011-09-19
Measures examined were training time for each method, time for entry of features, procedural errors, handwriting recognition errors, and user preference. Form fields (e.g., a checkbox, or text restricted to a specific list of values) provide constraints to the handwriting recognizer when the user fills out a form.
NASA Astrophysics Data System (ADS)
Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.
2009-05-01
Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting such that the set of parameters that maximizes the high flow NS differs from the set of parameters that maximizes the low flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). 
The MESH model, currently under development by Environment Canada, is a coupled land-surface and hydrologic model. Results will demonstrate the conclusions a modeller might make regarding the value of additional watershed spatial discretization under both an aggregated (single-objective) and multi-objective model comparison framework.
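The Nash-Sutcliffe coefficient and the weighted-sum aggregation discussed above can be illustrated as follows; the streamflow values are synthetic stand-ins, not Wolf Creek data.

```python
# Nash-Sutcliffe (NS) efficiency, computed for a full series and separately
# for a "high flow" subset: NS = 1 - sum((obs - sim)^2) / sum((obs - mean_obs)^2).
# Streamflow values are synthetic illustrations.

def nash_sutcliffe(obs, sim):
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs = [1.0, 2.0, 8.0, 15.0, 4.0, 2.0]
sim = [1.2, 2.1, 7.0, 13.5, 4.5, 2.2]

ns_all = nash_sutcliffe(obs, sim)

high = [(o, s) for o, s in zip(obs, sim) if o >= 5.0]
ns_high = nash_sutcliffe([o for o, _ in high], [s for _, s in high])

# The single-objective alternative collapses the two metrics to one number,
# hiding the tradeoff that the bi-objective calibration preserves.
weighted = 0.5 * ns_all + 0.5 * ns_high
print(ns_all, ns_high, weighted)
```

The bi-objective approach instead keeps `ns_all` and `ns_high` separate and estimates the Pareto tradeoff between them for each candidate model.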
Niechwiej-Szwedo, Ewa; Gonzalez, David; Nouredanesh, Mina; Tung, James
2018-01-01
Kinematic analysis of upper limb reaching provides insight into the central nervous system control of movements. Until recently, kinematic examination of motor control has been limited to studies conducted in traditional research laboratories because the motion capture equipment used for data collection is expensive and not easily portable. A recently developed markerless system, the Leap Motion Controller (LMC), is a portable and inexpensive tracking device that allows recording of 3D hand and finger position. The main goal of this study was to assess the concurrent reliability and validity of the LMC as compared to the Optotrak, a criterion-standard motion capture system, for measures of temporal accuracy and peak velocity during the performance of upper limb, visually-guided movements. In experiment 1, 14 participants executed aiming movements to visual targets presented on a computer monitor. Bland-Altman analysis was conducted to assess the validity and limits of agreement for measures of temporal accuracy (movement time, duration of deceleration interval), peak velocity, and spatial accuracy (endpoint accuracy). In addition, a one-sample t-test was used to test the hypothesis that the error difference between measures obtained from Optotrak and LMC is zero. In experiment 2, 15 participants performed a Fitts' type aiming task in order to assess whether the LMC is capable of assessing a well-known speed-accuracy trade-off relationship. Experiment 3 assessed the temporal coordination pattern during the performance of a sequence consisting of a reaching, grasping, and placement task in 15 participants. Results from the t-test showed that the error difference in temporal measures was significantly different from zero. Based on the results from the three experiments, the average temporal error in movement time was 40±44 ms, and the error in peak velocity was 0.024±0.103 m/s.
The limits of agreement between the LMC and Optotrak for spatial accuracy measures ranged between 2-5 cm. Although the LMC system is a low-cost, highly portable system, which could facilitate collection of kinematic data outside of the traditional laboratory settings, the temporal and spatial errors may limit the use of the device in some settings.
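The Bland-Altman limits of agreement used in the study above are the mean paired difference (bias) plus or minus 1.96 standard deviations of the differences. A minimal sketch, with hypothetical movement times rather than the study's data:

```python
# Bland-Altman limits of agreement between two measurement devices:
# bias is the mean paired difference; the limits are bias +/- 1.96 SD of the
# differences. The movement times (ms) below are synthetic illustrations.
import statistics

def bland_altman(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical movement times (ms) from the two devices
lmc      = [510, 640, 480, 555, 700, 605]
optotrak = [470, 600, 445, 520, 650, 570]

bias, lower, upper = bland_altman(lmc, optotrak)
print(bias, lower, upper)
```

If roughly 95% of paired differences fall inside the limits and the limits are narrow enough for the application, the devices may be used interchangeably.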
Balachandran, Ramya; Labadie, Robert F.
2015-01-01
Purpose A minimally invasive approach for cochlear implantation involves drilling a narrow linear path through the temporal bone from the skull surface directly to the cochlea for insertion of the electrode array without the need for an invasive mastoidectomy. Potential drill positioning errors must be accounted for to predict the effectiveness and safety of the procedure. The drilling accuracy of a system used for this procedure was evaluated in bone surrogate material under a range of clinically relevant parameters. Additional experiments were performed to isolate the error at various points along the path to better understand why deflections occur. Methods An experimental setup to precisely position the drill press over a target was used. Custom bone surrogate test blocks were manufactured to resemble the mastoid region of the temporal bone. The drilling error was measured by creating divots in plastic sheets before and after drilling and using a microscope to localize the divots. Results The drilling error was within the tolerance needed to avoid vital structures and ensure accurate placement of the electrode; however, some parameter sets yielded errors that may impact the effectiveness of the procedure when combined with other error sources. The error increases when the lateral stage of the path terminates in an air cell and when the guide bushings are positioned further from the skull surface. At contact points due to air cells along the trajectory, higher errors were found for impact angles of 45° and higher as well as longer cantilevered drill lengths. Conclusion The results of these experiments can be used to define more accurate and safe drill trajectories for this minimally invasive surgical procedure. PMID:26183149
Nematode Damage Functions: The Problems of Experimental and Sampling Error
Ferris, H.
1984-01-01
The development and use of pest damage functions involves measurement and experimental errors associated with cultural, environmental, and distributional factors. Damage predictions are more valuable if considered with associated probability. Collapsing population densities into a geometric series of population classes allows a pseudo-replication removal of experimental and sampling error in damage function development. Recognition of the nature of sampling error for aggregated populations allows assessment of probability associated with the population estimate. The product of the probabilities incorporated in the damage function and in the population estimate provides a basis for risk analysis of the yield loss prediction and the ensuing management decision. PMID:19295865
Ramseyer, Fabian; Kupper, Zeno; Caspar, Franz; Znoj, Hansjörg; Tschacher, Wolfgang
2014-10-01
Processes occurring in the course of psychotherapy are characterized by the simple fact that they unfold in time and that the multiple factors engaged in change processes vary highly between individuals (idiographic phenomena). Previous research, however, has neglected the temporal perspective by its traditional focus on static phenomena, which were mainly assessed at the group level (nomothetic phenomena). To support a temporal approach, the authors introduce time-series panel analysis (TSPA), a statistical methodology explicitly focusing on the quantification of temporal, session-to-session aspects of change in psychotherapy. TSPA models are initially built at the level of individuals and are subsequently aggregated at the group level, thus allowing the exploration of prototypical models. TSPA is based on vector auto-regression (VAR), an extension of univariate auto-regression models to multivariate time-series data. The application of TSPA is demonstrated in a sample of 87 outpatient psychotherapy patients who were monitored by postsession questionnaires. Prototypical mechanisms of change were derived from the aggregation of individual multivariate models of psychotherapy process. In a second step, the associations between mechanisms of change (TSPA) and pre- to postsymptom change were explored. TSPA allowed a prototypical process pattern to be identified, in which the patient's alliance and self-efficacy were linked by a temporal feedback loop. Furthermore, the therapist's stability over time in both mastery and clarification interventions was positively associated with better outcomes. TSPA is a statistical tool that sheds new light on temporal mechanisms of change. Through this approach, clinicians may gain insight into prototypical patterns of change in psychotherapy. PsycINFO Database Record (c) 2014 APA, all rights reserved.
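The vector auto-regression underlying TSPA regresses each session's process variables on the previous session's values. A minimal sketch with two synthetic series standing in for process variables (not patient data):

```python
# Minimal VAR(1) sketch: x_t = A @ x_{t-1} + e_t, with A estimated by least
# squares from lagged pairs. The two series (e.g., alliance and self-efficacy)
# are synthetic stand-ins generated from a known coefficient matrix.
import numpy as np

rng = np.random.default_rng(0)
T = 200
A_true = np.array([[0.6, 0.2],
                   [0.1, 0.5]])  # auto- and cross-lagged coefficients

x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + rng.normal(scale=0.1, size=2)

# Least-squares estimate of A from lagged pairs (no intercept, for brevity)
X_prev, X_next = x[:-1], x[1:]
B, *_ = np.linalg.lstsq(X_prev, X_next, rcond=None)
A_hat = B.T  # so that x_t is approximately A_hat @ x_{t-1}
print(np.round(A_hat, 2))
```

In TSPA such per-individual coefficient matrices are then aggregated across patients to reveal prototypical cross-lagged patterns, such as the alliance/self-efficacy feedback loop reported above.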
An Environmental Data Set for Vector-Borne Disease Modeling and Epidemiology
Chabot-Couture, Guillaume; Nigmatulina, Karima; Eckhoff, Philip
2014-01-01
Understanding the environmental conditions of disease transmission is important in the study of vector-borne diseases. Low- and middle-income countries bear a significant portion of the disease burden; but data about weather conditions in those countries can be sparse and difficult to reconstruct. Here, we describe methods to assemble high-resolution gridded time series data sets of air temperature, relative humidity, land temperature, and rainfall for such areas; and we test these methods on the island of Madagascar. Air temperature and relative humidity were constructed using statistical interpolation of weather station measurements; the resulting median 95th percentile absolute errors were 2.75°C and 16.6%. Missing pixels from the MODIS11 remote sensing land temperature product were estimated using Fourier decomposition and time-series analysis; thus providing an alternative to the 8-day and 30-day aggregated products. The RFE 2.0 remote sensing rainfall estimator was characterized by comparing it with multiple interpolated rainfall products, and we observed significant differences in temporal and spatial heterogeneity relevant to vector-borne disease modeling. PMID:24755954
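The Fourier-decomposition gap-filling mentioned above can be sketched by fitting a mean plus an annual harmonic to a pixel's valid observations and evaluating the fit at the missing days; the temperatures below are synthetic, and real MODIS processing involves more harmonics and quality screening.

```python
# Harmonic gap-filling of a land-temperature time series: fit mean + first
# annual harmonic by least squares on valid samples, then evaluate at the gap.
import math
import numpy as np

days = np.arange(0, 365, 8.0)                      # 8-day sampling
truth = 25 + 6 * np.sin(2 * math.pi * days / 365)  # synthetic land temperature
mask = np.ones_like(days, dtype=bool)
mask[10:15] = False                                # a run of missing retrievals

def design(t):
    """Design matrix for mean + first annual harmonic."""
    w = 2 * math.pi * t / 365
    return np.column_stack([np.ones_like(t), np.sin(w), np.cos(w)])

coef, *_ = np.linalg.lstsq(design(days[mask]), truth[mask], rcond=None)
filled = design(days[~mask]) @ coef                # estimates for the gap
print(np.round(filled, 2))
```

Because the fitted model follows the seasonal cycle, the reconstructed pixels provide a daily-capable alternative to the 8-day and 30-day aggregated products.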
Impact of Temporal Masking of Flip-Flop Upsets on Soft Error Rates of Sequential Circuits
NASA Astrophysics Data System (ADS)
Chen, R. M.; Mahatme, N. N.; Diggins, Z. J.; Wang, L.; Zhang, E. X.; Chen, Y. P.; Liu, Y. N.; Narasimham, B.; Witulski, A. F.; Bhuva, B. L.; Fleetwood, D. M.
2017-08-01
Reductions in single-event (SE) upset (SEU) rates for sequential circuits due to temporal masking effects are evaluated. The impacts of supply voltage, combinational-logic delay, flip-flop (FF) SEU performance, and particle linear energy transfer (LET) values are analyzed for SE cross sections of sequential circuits. Alpha particles and heavy ions with different LET values are used to characterize the circuits fabricated at the 40-nm bulk CMOS technology node. Experimental results show that increasing the delay of the logic circuit present between FFs and decreasing the supply voltage are two effective ways of reducing SE error rates for sequential circuits for particles with low LET values due to temporal masking. SEU-hardened FFs benefit less from temporal masking than conventional FFs. Circuit hardening implications for SEU-hardened and unhardened FFs are discussed.
NASA Astrophysics Data System (ADS)
Merker, Claire; Ament, Felix; Clemens, Marco
2017-04-01
The quantification of measurement uncertainty for rain radar data remains challenging. Radar reflectivity measurements are affected, amongst other things, by calibration errors, noise, blocking and clutter, and attenuation. Their combined impact on measurement accuracy is difficult to quantify due to incomplete process understanding and complex interdependencies. An improved quality assessment of rain radar measurements is of interest for applications both in meteorology and hydrology, for example for precipitation ensemble generation, rainfall runoff simulations, or in data assimilation for numerical weather prediction. Especially a detailed description of the spatial and temporal structure of errors is beneficial in order to make best use of the areal precipitation information provided by radars. Radar precipitation ensembles are one promising approach to represent spatially variable radar measurement errors. We present a method combining ensemble radar precipitation nowcasting with data assimilation to estimate radar measurement uncertainty at each pixel. This combination of ensemble forecast and observation yields a consistent spatial and temporal evolution of the radar error field. We use an advection-based nowcasting method to generate an ensemble reflectivity forecast from initial data of a rain radar network. Subsequently, reflectivity data from single radars is assimilated into the forecast using the Local Ensemble Transform Kalman Filter. The spread of the resulting analysis ensemble provides a flow-dependent, spatially and temporally correlated reflectivity error estimate at each pixel. We will present first case studies that illustrate the method using data from a high-resolution X-band radar network.
Downscaling of land surface temperatures from SEVIRI
NASA Astrophysics Data System (ADS)
Bechtel, B.; Zaksek, K.
2013-12-01
Land surface temperature (LST) determines the radiance emitted by the surface and hence is an important boundary condition of the energy balance. In urban areas, detailed knowledge about the diurnal cycle in LST can contribute to understanding the urban heat island (UHI). Although the increased surface temperatures compared to the surrounding rural areas (surface urban heat island, SUHI) have been measured by satellites and analysed for several decades, an operational SUHI monitoring is still not available due to the lack of sensors with appropriate spatiotemporal resolution. While sensors on polar orbiting satellites are still restricted to approx. 100 m spatial resolution and coarse temporal coverage (about 1-2 weeks), sensors on geostationary platforms have high temporal (several times per hour) and poor spatial resolution (>3 km). Further, all polar orbiting satellites have a similar equator crossing time and hence the SUHI can at best be observed at two times a day. A downscaling (DS) scheme for LST from the Spinning Enhanced Visible Infra-Red Imager (SEVIRI) sensor onboard the geostationary meteorological Meteosat 8 to spatial resolutions between 100 and 1000 m was developed and tested for Hamburg. Various data were tested as predictors, including multispectral data and derived indices, morphological parameters from interferometric SAR and multitemporal thermal data. All predictors were upscaled to the coarse resolution approximating the point spread function of SEVIRI. Empirical relationships between the predictors and LST were then derived and transferred to the high-resolution domain, assuming they are scale invariant. For validation, LST data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Enhanced Thematic Mapper Plus (ETM+) for two dates were used. Aggregated parameters from multi-temporal thermal data (in particular annual cycle parameters and principal components) proved particularly suitable.
The results for the highest resolution of 100 m showed a high explained variance (R^2 = 0.71) and relatively low root mean square errors (RMSE = 2.2 K) for the ASTER scene and slightly higher errors (R^2 = 0.73, RMSE = 2.53 K) for the ETM+ scene. A considerable percentage of the error was systematic due to the different viewing geometry of the sensors (the high resolution LST was overestimated by about 1.3 K for ASTER and 0.66 K for ETM+). This shows that DS of SEVIRI LST is possible up to a resolution of 100 m for urban areas and that multitemporal thermal data are particularly suitable as predictors. Further, the scheme was used to produce an entire diurnal cycle in high resolution. While essential characteristics of the diurnal cycle were well reproduced, certain artefacts resulting from the multitemporal predictors from different seasons (like phenology and different water surface temperatures) were generated. Finally, the bias and its dependence on the viewing geometry and topography are currently under investigation.
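The core of the scheme above, fitting a predictor-to-LST relation at the coarse scale and transferring it to the fine grid under the scale-invariance assumption, can be sketched with synthetic fields (not SEVIRI data):

```python
# Downscaling sketch: fit a linear relation between a predictor (e.g., an
# annual-cycle parameter) and LST at coarse resolution, then apply it at fine
# resolution. All fields are synthetic; the "true" fine LST is hidden from
# the fit and used only to score the result.
import numpy as np

rng = np.random.default_rng(1)
fine_pred = rng.uniform(0.0, 1.0, size=(40, 40))   # fine-resolution predictor
fine_lst_true = 290.0 + 8.0 * fine_pred            # hidden "true" fine LST (K)

def upscale(field, factor=10):
    """Block-average a field, mimicking the coarse sensor's point spread."""
    h, w = field.shape
    return field.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

coarse_pred = upscale(fine_pred)
coarse_lst = upscale(fine_lst_true)                # what the coarse sensor sees

# Fit predictor -> LST at the coarse scale
slope, intercept = np.polyfit(coarse_pred.ravel(), coarse_lst.ravel(), 1)

# Transfer the relation to the fine grid (scale-invariance assumption)
fine_lst_est = intercept + slope * fine_pred
rmse = float(np.sqrt(np.mean((fine_lst_est - fine_lst_true) ** 2)))
print(round(rmse, 6))
```

Here the relation is exactly linear, so the transfer is lossless; with real data the residual RMSE (2.2-2.5 K above) measures how far the scale-invariance assumption and predictor choice fall short.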
Scaling water and energy fluxes in climate systems - Three land-atmospheric modeling experiments
NASA Technical Reports Server (NTRS)
Wood, Eric F.; Lakshmi, Venkataraman
1993-01-01
Three numerical experiments that investigate the scaling of land-surface processes - either of the inputs or parameters - are reported, and the aggregated processes are compared to the spatially variable case. The first is the aggregation of the hydrologic response in a catchment due to rainfall during a storm event and due to evaporative demands during interstorm periods. The second is the spatial and temporal aggregation of latent heat fluxes, as calculated from SiB. The third is the aggregation of remotely sensed land vegetation and latent and sensible heat fluxes using TM data from the FIFE experiment of 1987 in Kansas. In all three experiments it was found that the surface fluxes and land characteristics can be scaled, and that macroscale models based on effective parameters are sufficient to account for the small-scale heterogeneities investigated.
Comulada, W. Scott
2015-01-01
Stata’s mi commands provide powerful tools to conduct multiple imputation in the presence of ignorable missing data. In this article, I present Stata code to extend the capabilities of the mi commands to address two areas of statistical inference where results are not easily aggregated across imputed datasets. First, mi commands are restricted to covariate selection. I show how to address model fit to correctly specify a model. Second, the mi commands readily aggregate model-based standard errors. I show how standard errors can be bootstrapped for situations where model assumptions may not be met. I illustrate model specification and bootstrapping on frequency counts for the number of times that alcohol was consumed in data with missing observations from a behavioral intervention. PMID:26973439
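The aggregation that Stata's mi commands perform across imputed datasets follows Rubin's rules: average the point estimates, and combine within- and between-imputation variance. A minimal sketch with hypothetical per-dataset estimates (not the alcohol-count data):

```python
# Rubin's rules for pooling multiply-imputed results: the pooled point
# estimate is the mean over the m imputed datasets, and the pooled variance
# adds the between-imputation variance (inflated by 1 + 1/m) to the average
# within-imputation variance. Estimates below are hypothetical.
import statistics

def rubin_pool(estimates, variances):
    m = len(estimates)
    q_bar = statistics.mean(estimates)      # pooled point estimate
    u_bar = statistics.mean(variances)      # average within-imputation variance
    b = statistics.variance(estimates)      # between-imputation variance
    total_var = u_bar + (1 + 1 / m) * b
    return q_bar, total_var

est = [1.9, 2.1, 2.0, 2.2, 1.8]        # coefficient from each imputed dataset
var = [0.04, 0.05, 0.04, 0.06, 0.05]   # its model-based variance each time

q, v = rubin_pool(est, var)
print(q, v)
```

Bootstrapped standard errors replace the model-based `var` entries with resampling-based ones when model assumptions may not hold, but the pooling step is unchanged.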
Reducing representativeness and sampling errors in radio occultation-radiosonde comparisons
NASA Astrophysics Data System (ADS)
Gilpin, Shay; Rieckh, Therese; Anthes, Richard
2018-05-01
Radio occultation (RO) and radiosonde (RS) comparisons provide a means of analyzing errors associated with both observational systems. Since RO and RS observations are not taken at the exact same time or location, temporal and spatial sampling errors resulting from atmospheric variability can be significant and inhibit error analysis of the observational systems. In addition, the vertical resolutions of RO and RS profiles vary and vertical representativeness errors may also affect the comparison. In RO-RS comparisons, RO observations are co-located with RS profiles within a fixed time window and distance, i.e. within 3-6 h and circles of radii ranging between 100 and 500 km. In this study, we first show that vertical filtering of RO and RS profiles to a common vertical resolution reduces representativeness errors. We then test two methods of reducing horizontal sampling errors during RO-RS comparisons: restricting co-location pairs to within ellipses oriented along the direction of wind flow rather than circles and applying a spatial-temporal sampling correction based on model data. Using data from 2011 to 2014, we compare RO and RS differences at four GCOS Reference Upper-Air Network (GRUAN) RS stations in different climatic locations, in which co-location pairs were constrained to a large circle (~666 km radius), small circle (~300 km radius), and an ellipse parallel to the wind direction (~666 km semi-major axis, ~133 km semi-minor axis). We also apply a spatial-temporal sampling correction using European Centre for Medium-Range Weather Forecasts Interim Reanalysis (ERA-Interim) gridded data. Restricting co-locations to within the ellipse reduces root mean square (RMS) refractivity, temperature, and water vapor pressure differences relative to RMS differences within the large circle and produces differences that are comparable to or less than the RMS differences within circles of similar area.
Applying the sampling correction shows the most significant reduction in RMS differences, such that RMS differences are nearly identical to the sampling correction regardless of the geometric constraints. We conclude that implementing the spatial-temporal sampling correction using a reliable model will most effectively reduce sampling errors during RO-RS comparisons; however, if a reliable model is not available, restricting spatial comparisons to within an ellipse parallel to the wind flow will reduce sampling errors caused by horizontal atmospheric variability.
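The ellipse criterion above amounts to rotating the RO-RS separation vector into a wind-aligned frame and testing it against the ellipse equation. A minimal sketch on a local flat-earth plane, with the ~666 km by ~133 km semi-axes from the study:

```python
# Wind-aligned ellipse co-location test: rotate the east/north separation
# into along-wind and cross-wind components, then check
# (along/a)^2 + (cross/b)^2 <= 1. Distances are km on a local plane.
import math

def in_wind_ellipse(dx_km, dy_km, wind_dir_rad, a=666.0, b=133.0):
    """dx/dy: east/north separation; wind_dir_rad: direction the wind blows toward."""
    along = dx_km * math.cos(wind_dir_rad) + dy_km * math.sin(wind_dir_rad)
    cross = -dx_km * math.sin(wind_dir_rad) + dy_km * math.cos(wind_dir_rad)
    return (along / a) ** 2 + (cross / b) ** 2 <= 1.0

wind = math.radians(90.0)  # wind blowing toward the north
print(in_wind_ellipse(0.0, 500.0, wind))   # separation along the wind: accepted
print(in_wind_ellipse(500.0, 0.0, wind))   # same distance across the wind: rejected
```

The asymmetry reflects the physics: refractivity structures are elongated along the flow, so a co-location displaced downwind samples more similar air than one displaced crosswind at the same distance.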
Alfei, Joaquín M.; Ferrer Monti, Roque I.; Molina, Victor A.; Bueno, Adrián M.
2015-01-01
Different mnemonic outcomes have been observed when associative memories are reactivated by CS exposure and followed by amnestics. These outcomes include mere retrieval, destabilization–reconsolidation, a transitional period (which is insensitive to amnestics), and extinction learning. However, little is known about the interaction between initial learning conditions and these outcomes during a reinforced or nonreinforced reactivation. Here we systematically combined temporally specific memories with different reactivation parameters to observe whether these four outcomes are determined by the conditions established during training. First, we validated two training regimens with different temporal expectations about US arrival. Then, using Midazolam (MDZ) as an amnestic agent, fear memories in both learning conditions were submitted to retraining either under identical or different parameters to the original training. Destabilization (i.e., susceptibly to MDZ) occurred when reactivation was reinforced, provided the occurrence of a temporal prediction error about US arrival. In subsequent experiments, both treatments were systematically reactivated by nonreinforced context exposure of different lengths, which allowed to explore the interaction between training and reactivation lengths. These results suggest that temporal prediction error and trace dominance determine the extent to which reactivation produces the different outcomes. PMID:26179232
Jarosz, Jessica; Mecê, Pedro; Conan, Jean-Marc; Petit, Cyril; Paques, Michel; Meimon, Serge
2017-04-01
We formed a database gathering the wavefront aberrations of 50 healthy eyes measured with an original custom-built Shack-Hartmann aberrometer at a temporal frequency of 236 Hz, with 22 lenslets across a 7-mm diameter pupil, for a duration of 20 s. With this database, we draw statistics on the spatial and temporal behavior of the dynamic aberrations of the eye. Dynamic aberrations were studied on a 5-mm diameter pupil and on a 3.4 s sequence between blinks. We noted that, on average, temporal wavefront variance exhibits an n^-2 power-law with radial order n, and temporal spectra follow an f^-1.5 power-law with temporal frequency f. From these statistics, we then extract guidelines for designing an adaptive optics system. For instance, we show the residual wavefront error evolution as a function of the number of corrected modes and of the adaptive optics loop frame rate. In particular, we infer that adaptive optics performance rapidly increases with the loop frequency up to 50 Hz, with gain being more limited at higher rates.
ERIC Educational Resources Information Center
Vocat, Roland; Pourtois, Gilles; Vuilleumier, Patrik
2008-01-01
The detection of errors is known to be associated with two successive neurophysiological components in EEG, with an early time-course following motor execution: the error-related negativity (ERN/Ne) and late positivity (Pe). The exact cognitive and physiological processes contributing to these two EEG components, as well as their functional…
NASA Astrophysics Data System (ADS)
Valldecabres, L.; Friedrichs, W.; von Bremen, L.; Kühn, M.
2016-09-01
An analysis of the spatial and temporal power fluctuations of a simplified wind farm model is conducted on four offshore wind field data sets, two from lidar measurements and two from LES under unstable and neutral atmospheric conditions. The integral length scales of the horizontal wind speed computed in the streamwise and cross-stream directions revealed the elongation of the structures in the direction of the mean flow. To analyse the effect of these structures on the power output of a wind turbine, the aggregated equivalent power of two wind turbines with different spacing in the streamwise and cross-stream directions is analysed at time scales under 10 minutes. Summing the power of two wind turbines smooths out the fluctuations seen in the output of a single turbine. This effect, which strengthens with increasing spacing between turbines, is visible in the aggregation of the power of two wind turbines in the streamwise direction. Due to the anti-correlation of the coherent structures in the cross-stream direction, the smoothing effect is stronger when the aggregated power is computed from two wind turbines aligned orthogonally to the mean flow direction.
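The smoothing effect of aggregation, and its strengthening under anti-correlation, can be illustrated with synthetic power series (toy numbers, not the lidar or LES data):

```python
import random

random.seed(1)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Toy per-turbine power series: a common mean plus fluctuations.
n = 10_000
noise = [random.gauss(0.0, 1.0) for _ in range(n)]
p1 = [5.0 + e for e in noise]
p2_uncorr = [5.0 + random.gauss(0.0, 1.0) for _ in range(n)]
p2_anti = [5.0 - e for e in noise]     # perfectly anti-correlated fluctuations

agg_uncorr = [(a + b) / 2 for a, b in zip(p1, p2_uncorr)]
agg_anti = [(a + b) / 2 for a, b in zip(p1, p2_anti)]

print(variance(agg_uncorr) < variance(p1))        # True: aggregation smooths
print(variance(agg_anti) < variance(agg_uncorr))  # True: anti-correlation smooths more
```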
Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity.
Li, Jielin; Hassebrook, Laurence G; Guan, Chun
2003-01-01
Temporal frame-to-frame noise in multipattern structured light projection can significantly corrupt depth measurement repeatability. We present a rigorous stochastic analysis of phase-measuring-profilometry temporal noise as a function of the pattern parameters and the reconstruction coefficients. The analysis is used to optimize the two-frequency phase measurement technique. In phase-measuring profilometry, a sequence of phase-shifted sine-wave patterns is projected onto a surface. In two-frequency phase measurement, two sets of pattern sequences are used. The first, low-frequency set establishes a nonambiguous depth estimate, and the second, high-frequency set is unwrapped, based on the low-frequency estimate, to obtain an accurate depth estimate. If the second frequency is too low, then depth error is caused directly by temporal noise in the phase measurement. If the second frequency is too high, temporal noise triggers ambiguous unwrapping, resulting in depth measurement error. We present a solution for finding the second frequency, where intensity noise variance is at its minimum.
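The two-frequency unwrapping step can be sketched as follows, assuming phases of the form phi = 2*pi*f*z with a non-ambiguous low frequency (a simplified reading of the method; all symbols and values are illustrative):

```python
import math

def wrap(phase):
    """Wrap a phase to [0, 2*pi)."""
    return phase % (2 * math.pi)

def two_frequency_depth(phi_low, phi_high, f_low, f_high):
    """
    Recover depth z from phases measured at two pattern frequencies.
    The low frequency gives a coarse, non-ambiguous estimate; the wrapped
    high-frequency phase is unwrapped against it via an integer fringe order.
    """
    z_coarse = phi_low / (2 * math.pi * f_low)               # non-ambiguous
    k = round(f_high * z_coarse - phi_high / (2 * math.pi))  # fringe order
    return (phi_high / (2 * math.pi) + k) / f_high           # fine estimate

z_true = 0.7342                  # depth in working-range units, < 1/f_low
f_low, f_high = 1.0, 16.0
phi_low = wrap(2 * math.pi * f_low * z_true) + 0.01   # small temporal noise
phi_high = wrap(2 * math.pi * f_high * z_true)
z = two_frequency_depth(phi_low, phi_high, f_low, f_high)
print(abs(z - z_true) < 1e-9)    # True: the fine phase fixes the noisy coarse estimate
```

If the noise on phi_low were large enough to shift k by one, the depth error would jump by a full fringe, which is the ambiguous-unwrapping failure mode the abstract describes.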
NASA Technical Reports Server (NTRS)
Villarreal, James A.; Shelton, Robert O.
1992-01-01
Concept of space-time neural network affords distributed temporal memory enabling such network to model complicated dynamical systems mathematically and to recognize temporally varying spatial patterns. Digital filters replace synaptic-connection weights of conventional back-error-propagation neural network.
Generation, Validation, and Application of Abundance Map Reference Data for Spectral Unmixing
NASA Astrophysics Data System (ADS)
Williams, McKay D.
Reference data ("ground truth") maps traditionally have been used to assess the accuracy of imaging spectrometer classification algorithms. However, these reference data can be prohibitively expensive to produce, often do not include sub-pixel abundance estimates necessary to assess spectral unmixing algorithms, and lack published validation reports. Our research proposes methodologies to efficiently generate, validate, and apply abundance map reference data (AMRD) to airborne remote sensing scenes. We generated scene-wide AMRD for three different remote sensing scenes using our remotely sensed reference data (RSRD) technique, which spatially aggregates unmixing results from fine scale imagery (e.g., 1-m Ground Sample Distance (GSD)) to co-located coarse scale imagery (e.g., 10-m GSD or larger). We validated the accuracy of this methodology by estimating AMRD in 51 randomly-selected 10 m x 10 m plots, using seven independent methods and observers, including field surveys by two observers, imagery analysis by two observers, and RSRD using three algorithms. Results indicated statistically-significant differences between all versions of AMRD, suggesting that all forms of reference data need to be validated. Given these significant differences between the independent versions of AMRD, we proposed that the mean of all (MOA) versions of reference data for each plot and class were most likely to represent true abundances. We then compared each version of AMRD to MOA. Best case accuracy was achieved by a version of imagery analysis, which had a mean coverage area error of 2.0%, with a standard deviation of 5.6%. One of the RSRD algorithms was nearly as accurate, achieving a mean error of 3.0%, with a standard deviation of 6.3%, showing the potential of RSRD-based AMRD generation. 
Application of validated AMRD to specific coarse scale imagery involved three main parts: 1) spatial alignment of coarse and fine scale imagery, 2) aggregation of fine scale abundances to produce coarse scale imagery-specific AMRD, and 3) demonstration of comparisons between coarse scale unmixing abundances and AMRD. Spatial alignment was performed using our scene-wide spectral comparison (SWSC) algorithm, which aligned imagery with accuracy approaching the distance of a single fine scale pixel. We compared simple rectangular aggregation to coarse sensor point spread function (PSF) aggregation, and found that the PSF approach returned lower error, but that rectangular aggregation more accurately estimated true abundances at ground level. We demonstrated various metrics for comparing unmixing results to AMRD, including mean absolute error (MAE) and linear regression (LR). We additionally introduced reference data mean adjusted MAE (MA-MAE), and reference data confidence interval adjusted MAE (CIA-MAE), which account for known error in the reference data itself. MA-MAE analysis indicated that fully constrained linear unmixing of coarse scale imagery across all three scenes returned an error of 10.83% per class and pixel, with regression analysis yielding a slope = 0.85, intercept = 0.04, and R2 = 0.81. Our reference data research has demonstrated a viable methodology to efficiently generate, validate, and apply AMRD to specific examples of airborne remote sensing imagery, thereby enabling direct quantitative assessment of spectral unmixing performance.
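Comparison metrics such as MAE and the linear-regression fit reported above can be computed directly; a sketch with hypothetical per-plot abundances (the MA-MAE and CIA-MAE variants additionally adjust for known reference-data error and are omitted here):

```python
def mae(pred, ref):
    """Mean absolute error between predicted and reference abundances."""
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(pred)

def linreg(x, y):
    """Ordinary least squares y = a*x + b; returns (a, b, r2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((u - mx) ** 2 for u in x)
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((v - (a * u + b)) ** 2 for u, v in zip(x, y))
    ss_tot = sum((v - my) ** 2 for v in y)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical per-plot abundances: reference (AMRD) vs. unmixing output
ref  = [0.10, 0.25, 0.40, 0.55, 0.70, 0.85]
pred = [0.14, 0.22, 0.45, 0.50, 0.74, 0.80]
print(round(mae(pred, ref), 3))        # 0.043
a, b, r2 = linreg(ref, pred)           # slope, intercept, fit quality
```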
Light scattering method to measure red blood cell aggregation during incubation
NASA Astrophysics Data System (ADS)
Grzegorzewski, B.; Szołna-Chodór, A.; Baryła, J.; DreŻek, D.
2018-01-01
Red blood cell (RBC) aggregation can be observed both in vivo and in vitro. This process alters blood flow in the microvascular network, and enhanced RBC aggregation impairs the delivery of oxygen and nutrients. Measurements of RBC aggregation usually describe the process for a sample in which the state of the solution and cells is well defined and the system has reached equilibrium. Incubation of RBCs in various solutions is frequently used to study the effects of those solutions on RBC aggregation; the aggregation parameters are compared before and after incubation, while the detailed changes of the parameters during incubation remain unknown. In this paper we propose a method to measure red blood cell aggregation during incubation, based on the well-known technique in which backscattered light is used to assess the parameters of RBC aggregation. A Couette system consisting of two cylinders is adopted in the method, and the incubation is observed within it. The method uses the following rotation sequence: two minutes of rotation followed by a two-minute stop. In this way we obtain a time series of backscattered intensity consisting of signals corresponding to disaggregation and aggregation. It is shown that the temporal changes of the intensity reflect changes of RBC aggregation during incubation. To show the ability of the method to assess the effect of incubation time on RBC aggregation, results are presented both for solutions that increase RBC aggregation and for a case where aggregation is decreased.
New Methods for Assessing and Reducing Uncertainty in Microgravity Studies
NASA Astrophysics Data System (ADS)
Giniaux, J. M.; Hooper, A. J.; Bagnardi, M.
2017-12-01
Microgravity surveying, also known as dynamic or 4D gravimetry, is a time-dependent geophysical method used to detect mass fluctuations within the shallow crust by analysing temporal changes in relative gravity measurements. We present here a detailed uncertainty analysis of temporal gravity measurements, considering for the first time all possible error sources, including tilt, errors in drift estimation, and timing errors. We find that some error sources that are currently ignored can have a significant impact on the total error budget; it is therefore likely that some gravity signals have been misinterpreted in previous studies. Our analysis leads to new methods for reducing some of the uncertainties associated with residual gravity estimation. In particular, we propose different approaches to drift estimation and the free-air correction depending on the survey set-up. We also provide formulae to recalculate uncertainties for past studies and lay out a framework for best practice in future studies. We demonstrate our new approach on volcanic case studies, including Kilauea in Hawaii and Askja in Iceland.
NASA Astrophysics Data System (ADS)
Qin, Xuerong; van Sebille, Erik; Sen Gupta, Alexander
2014-04-01
Lagrangian particle tracking within ocean models is an important tool for the examination of ocean circulation, ventilation timescales and connectivity and is increasingly being used to understand ocean biogeochemistry. Lagrangian trajectories are obtained by advecting particles within velocity fields derived from hydrodynamic ocean models. For studies of ocean flows on scales ranging from mesoscale up to basin scales, the temporal resolution of the velocity fields should ideally not be more than a few days to capture the high frequency variability that is inherent in mesoscale features. However, in reality, the model output is often archived at much lower temporal resolutions. Here, we quantify the differences in the Lagrangian particle trajectories embedded in velocity fields of varying temporal resolution. Particles are advected from 3-day to 30-day averaged fields in a high-resolution global ocean circulation model. We also investigate whether adding lateral diffusion to the particle movement can compensate for the reduced temporal resolution. Trajectory errors reveal the expected degradation of accuracy in the trajectory positions when decreasing the temporal resolution of the velocity field. Divergence timescales associated with averaging velocity fields up to 30 days are faster than the intrinsic dispersion of the velocity fields but slower than the dispersion caused by the interannual variability of the velocity fields. In experiments focusing on the connectivity along major currents, including western boundary currents, the volume transport carried between two strategically placed sections tends to increase with increased temporal averaging. Simultaneously, the average travel times tend to decrease. Based on these two bulk measured diagnostics, Lagrangian experiments that use temporal averaging of up to nine days show no significant degradation in the flow characteristics for a set of six currents investigated in more detail. 
The addition of random-walk-style diffusion does not mitigate the errors introduced by temporal averaging for large-scale open ocean Lagrangian simulations.
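The advection-plus-random-walk scheme discussed above can be sketched in one dimension (illustrative velocity field and diffusivity, not the circulation-model output used in the study):

```python
import math, random

random.seed(0)

def advect(x0, velocity, dt, steps, k_h=0.0):
    """
    Integrate dx/dt = u(x, t) with an optional random-walk term representing
    lateral diffusivity k_h (m^2/s): dx += sqrt(2*k_h*dt) * N(0, 1).
    """
    x = x0
    for step in range(steps):
        x += velocity(x, step * dt) * dt
        if k_h > 0.0:
            x += math.sqrt(2.0 * k_h * dt) * random.gauss(0.0, 1.0)
    return x

# Uniform 0.1 m/s current, 3-day time step, 30 steps (90 days)
u = lambda x, t: 0.1
dt = 3 * 86400.0
x_det = advect(0.0, u, dt, 30)
print(round(x_det, 3))                     # 777600.0 m = 0.1 m/s over 90 days
x_dif = advect(0.0, u, dt, 30, k_h=100.0)  # with 100 m^2/s lateral diffusion
```

As the abstract notes, such a diffusion term spreads particles but cannot restore the high-frequency velocity structure lost to temporal averaging.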
Directional selection in temporally replicated studies is remarkably consistent.
Morrissey, Michael B; Hadfield, Jarrod D
2012-02-01
Temporal variation in selection is a fundamental determinant of evolutionary outcomes. A recent paper presented a synthetic analysis of temporal variation in selection in natural populations. The authors concluded that there is substantial variation in the strength and direction of selection over time, but acknowledged that sampling error would result in estimates of selection that were more variable than the true values. We reanalyze their dataset using techniques that account for the fact that sampling error necessarily inflates apparent levels of variation, and show that directional selection is remarkably constant over time, both in magnitude and direction. Thus we cannot claim that the available data support the existence of substantial temporal heterogeneity in selection. Nonetheless, we conjecture that temporal variation in selection could be important, but that there are good reasons why it may not appear in the available data. These new analyses highlight the importance of applying techniques that estimate parameters of the distribution of selection, rather than parameters of the distribution of estimated selection (which will reflect both sampling error and "real" variation in selection); indeed, despite the availability of methods for the former, focus on the latter has been common in synthetic reviews of selection in nature, and can lead to serious misinterpretations.
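The inflation of apparent variation by sampling error, and a simple method-of-moments correction, can be illustrated with simulated estimates (the authors use more sophisticated random-effects techniques; all numbers below are synthetic):

```python
import random

random.seed(42)

# Simulate selection-gradient estimates: true gradients vary little over
# time, but each estimate carries a known sampling error (standard error se).
true_sd, se = 0.05, 0.20
true_vals = [random.gauss(0.3, true_sd) for _ in range(2000)]
estimates = [t + random.gauss(0.0, se) for t in true_vals]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

raw = var(estimates)          # inflated: ~ true variance + se**2
corrected = raw - se ** 2     # method-of-moments correction
print(raw > 10 * true_sd ** 2)   # True: naive variance wildly overstates variation
```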
Evrendilek, Fatih
2007-12-12
This study aims at quantifying spatio-temporal dynamics of monthly mean daily incident photosynthetically active radiation (PAR) over a vast and complex terrain such as Turkey. The spatial interpolation method of universal kriging, and the combination of multiple linear regression (MLR) models and map algebra techniques, were implemented to generate surface maps of PAR with a grid resolution of 500 x 500 m as a function of five geographical and 14 climatic variables. Performance of the geostatistical and MLR models was compared using mean prediction error (MPE), root-mean-square prediction error (RMSPE), average standard prediction error (ASE), mean standardized prediction error (MSPE), root-mean-square standardized prediction error (RMSSPE), and adjusted coefficient of determination (R² adj.). The best-fit MLR- and universal kriging-generated models of monthly mean daily PAR were validated against an independent 37-year observed dataset of 35 climate stations, derived from 160 stations across Turkey, by the jackknifing method. The spatial variability patterns of monthly mean daily incident PAR were more accurately reflected in the surface maps created by the MLR-based models than in those created by the universal kriging method, in particular for spring (May) and autumn (November). The MLR-based spatial interpolation algorithms of PAR described in this study indicated the significance of the multifactor approach to understanding and mapping spatio-temporal dynamics of PAR for a complex terrain over meso-scales.
Secure data aggregation in wireless sensor networks using homomorphic encryption
NASA Astrophysics Data System (ADS)
Kumar, Manish; Verma, Shekhar; Lata, Kusum
2015-04-01
In a Wireless Sensor Network (WSN), aggregation exploits the correlation between spatially and temporally proximate sensor data to reduce the total data volume to be transmitted to the sink. Mobile agents (MAs) fit into this paradigm, and data can be aggregated and collected by an MA from different sensor nodes using context-specific codes. MA-based data collection suffers from the large size of a typical WSN and is prone to security problems. In this article, homomorphic encryption in a clustered WSN is proposed for secure and efficient data collection using MAs. The nodes keep encrypted data that are given to an MA for data aggregation tasks. The MA performs all data aggregation operations on the encrypted data as it migrates between nodes in a tree-like structure, in which the nodes are the leaves and the cluster head is the root. It returns the encrypted aggregated data to the cluster head after traversing all the intra-cluster nodes over a shortest-path route. Homomorphic encryption and aggregation processing in the encrypted domain make the data collection process secure. Simulation results confirm the effectiveness of the proposed secure data aggregation mechanism. In addition to security, the MA-based mechanism leads to lower delay and bandwidth requirements.
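The additive homomorphism that lets an MA aggregate without decrypting can be demonstrated with a toy Paillier scheme (tiny, insecure primes for illustration only; the article's concrete scheme and parameters may differ):

```python
import math, random

random.seed(11)

def keygen(p=293, q=433):
    """Toy Paillier key material from small primes (demo only, not secure)."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)          # valid because the generator is g = n + 1
    return n, lam, mu

def encrypt(n, m):
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(n, lam, mu, c):
    n2 = n * n
    return (pow(c, lam, n2) - 1) // n * mu % n

n, lam, mu = keygen()
readings = [21, 35, 17]                  # plaintext sensor values
ciphers = [encrypt(n, m) for m in readings]
agg = 1
for c in ciphers:                        # the MA multiplies ciphertexts
    agg = agg * c % (n * n)              # as it migrates between nodes
print(decrypt(n, lam, mu, agg))          # 73: the sum, recovered only at the root
```

Multiplying Paillier ciphertexts corresponds to adding the plaintexts, so intermediate nodes and the agent never see individual readings.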
Estimating abundance of adult striped bass in reservoirs using mobile hydroacoustics
Hightower, Joseph E.; Taylor, J. Christopher; Degan, Donald J.
2013-01-01
Hydroacoustic surveys have proven valuable for estimating reservoir forage fish abundance but are more challenging for adult predators such as striped bass Morone saxatilis. Difficulties in assessing striped bass in reservoirs include their low density and the inability to distinguish species with hydroacoustic data alone. Despite these difficulties, mobile hydroacoustic surveys have potential to provide useful data for management because of the large sample volume compared to traditional methods such as gill netting and the ability to target specific areas where striped bass are aggregated. Hydroacoustic estimates of reservoir striped bass have been made using mobile surveys, with data analysis using a threshold for target strength in order to focus on striped bass-sized targets, and auxiliary sampling with nets to obtain species composition. We provide recommendations regarding survey design, based in part on simulations that provide insight on the level of effort that would be required to achieve reasonable estimates of abundance. Future surveys may be able to incorporate telemetry or other sonar techniques such as side-scan or multibeam in order to focus survey efforts on productive habitats (within lake and vertically). However, species apportionment will likely remain the main source of error, and we see no hydroacoustic system on the horizon that will identify fish by species at the spatial and temporal scale required for most reservoir surveys. In situations where species composition can be reliably assessed using traditional gears, abundance estimates from hydroacoustic methods should be useful to fishery managers interested in developing harvest regulations, assessing survival of stocked juveniles, identifying seasonal aggregations, and examining predator–prey balance.
Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application
NASA Astrophysics Data System (ADS)
Chen, Jinduan; Boccelli, Dominic L.
2018-02-01
Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands using observed hydraulic measurements are generally not capable of forecasting demands and uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations to both total system demands and regional aggregated demands at a scale that would capture demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties with results compared to traditional time series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive-parameter estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate for real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.
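As a much simpler baseline than the double-seasonal time series model, daily and weekly cycles can be captured jointly by an hour-of-week profile, since the 168-hour period embeds the 24-hour one (synthetic demand series; this is not the study's seasonal autoregressive model):

```python
import math

# Synthetic hourly demand with daily (24 h) and weekly (168 h) cycles.
def demand(t):
    daily = 10 + 3 * math.sin(2 * math.pi * (t % 24) / 24)
    weekly = 2 * math.sin(2 * math.pi * (t % 168) / 168)
    return daily + weekly

hours = 168 * 8                          # eight weeks of hourly history
history = [demand(t) for t in range(hours)]

# Averaging by hour-of-week captures both seasonalities at once, because
# the 168-hour period is a multiple of the 24-hour period.
profile = [0.0] * 168
counts = [0] * 168
for t, y in enumerate(history):
    profile[t % 168] += y
    counts[t % 168] += 1
profile = [s / c for s, c in zip(profile, counts)]

forecast = [profile[t % 168] for t in range(hours, hours + 24)]
err = max(abs(f - demand(t))
          for t, f in zip(range(hours, hours + 24), forecast))
print(err < 1e-9)   # True: a purely periodic series is forecast exactly
```

Real demands also carry autocorrelated, non-periodic structure, which is what the seasonal ARIMA terms and adaptive parameter updates in the study address.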
NASA Astrophysics Data System (ADS)
Daras, Ilias; Pail, Roland
2017-09-01
Temporal aliasing effects have a large impact on the gravity field accuracy of current gravimetry missions and are also expected to dominate the error budget of Next Generation Gravimetry Missions (NGGMs). This paper focuses on aspects concerning their treatment in the context of Low-Low Satellite-to-Satellite Tracking NGGMs. Closed-loop full-scale simulations are performed for a two-pair Bender-type Satellite Formation Flight (SFF), by taking into account error models of new generation instrument technology. The enhanced spatial sampling and error isotropy enable a further reduction of temporal aliasing errors from the processing perspective. A parameterization technique is adopted where the functional model is augmented by low-resolution gravity field solutions coestimated at short time intervals, while the remaining higher-resolution gravity field solution is estimated at a longer time interval. Fine-tuning the parameterization choices leads to significant reduction of the temporal aliasing effects. The investigations reveal that the parameterization technique in case of a Bender-type SFF can successfully mitigate aliasing effects caused by undersampling of high-frequency atmospheric and oceanic signals, since their most significant variations can be captured by daily coestimated solutions. This amounts to a "self-dealiasing" method that differs significantly from the classical dealiasing approach used nowadays for Gravity Recovery and Climate Experiment processing, enabling NGGMs to retrieve the complete spectrum of Earth's nontidal geophysical processes, including, for the first time, high-frequency atmospheric and oceanic variations.
Pantzar, Mika; Ruckenstein, Minna; Mustonen, Veera
2017-01-01
A long-term research focus on the temporality of everyday life has become revitalised with new tracking technologies that allow methodological experimentation and innovation. This article approaches rhythms of daily lives with heart-rate variability measurements that use algorithms to discover physiological stress and recovery. In the spirit of the ‘social life of methods’ approach, we aggregated individual data (n = 35) in order to uncover temporal rhythms of daily lives. The visualisation of the aggregated data suggests both daily and weekly patterns. Daily stress was at its highest in the mornings and around eight o’clock in the evening. Weekend stress patterns were dissimilar, indicating a stress peak in the early afternoon especially for men. In addition to discussing our explorations using quantitative data, the more general aim of the article is to explore the potential of new digital and mobile physiological tracking technologies for contextualising the individual in the everyday.
Zhang, Zhijun; Ashraf, Muhammad; Sahn, David J; Song, Xubo
2014-05-01
Quantitative analysis of cardiac motion is important for evaluation of heart function. Three-dimensional (3D) echocardiography is among the most frequently used imaging modalities for motion estimation because it is convenient, real-time, low-cost, and nonionizing. However, motion estimation from 3D echocardiographic sequences is still a challenging problem due to low image quality and image corruption by noise and artifacts. The authors have developed a temporally diffeomorphic motion estimation approach in which the velocity field, instead of the displacement field, is optimized. The optimal velocity field optimizes a novel similarity function, which we call the intensity consistency error, defined over multiple consecutive frames evolving to each time point. The optimization problem is solved using the steepest descent method. Experiments with simulated datasets, images of an ex vivo rabbit phantom, images of in vivo open-chest pig hearts, and healthy human images were used to validate the authors' method. Tests on simulated and real cardiac sequences showed that the authors' method is more accurate than other competing temporal diffeomorphic methods. Tests with sonomicrometry showed that the tracked crystal positions are in good agreement with ground truth and that the authors' method has higher accuracy than the temporal diffeomorphic free-form deformation (TDFFD) method. Validation with an open-access human cardiac dataset showed that the authors' method has smaller feature tracking errors than both TDFFD and frame-to-frame methods. The authors proposed a diffeomorphic motion estimation method with temporal smoothness obtained by constraining the velocity field to have maximum local intensity consistency within multiple consecutive frames. The estimated motion has good temporal consistency and is more accurate than other temporally diffeomorphic motion estimation methods.
Corn rootworms (Coleoptera: Chrysomelidae) in space and time
NASA Astrophysics Data System (ADS)
Park, Yong-Lak
Spatial dispersion is a main characteristic of insect populations. The dispersion pattern provides useful information for developing effective sampling and scouting programs because it affects sampling accuracy, efficiency, and precision. Insect dispersion, however, is dynamic in space and time and largely dependent upon interactions among insect, plant, and environmental factors. This study investigated the spatial and temporal dynamics of corn rootworm dispersion at different spatial scales by using the global positioning system, the geographic information system, and geostatistics. The egg dispersion pattern was random or uniform in 8-ha cornfields, but could be aggregated at a smaller scale. The larval dispersion pattern was aggregated regardless of the spatial scales used in this study. Soil moisture positively affected corn rootworm egg and larval dispersion. Adult dispersion tended to be aggregated during the peak population period and random or uniform early and late in the season, and corn plant phenology was a major factor in determining dispersion patterns. The dispersion pattern of root injury from corn rootworm larval feeding was aggregated, and the degree of aggregation increased with root injury within the range observed in the microscale study. Between-year relationships in dispersion among eggs, larvae, adults, and environment provided a strategy for predicting potential root damage in the subsequent year; the best prediction map for the subsequent year's potential root damage was the dispersion map of adults at the population peak in the cornfield. The prediction map was used to develop site-specific pest management that can reduce chemical input and increase control efficiency by controlling pests only where management is needed. This study demonstrated the spatio-temporal dynamics of an insect population and the spatial interactions among insects, plants, and environment.
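A standard way to classify such dispersion patterns is the variance-to-mean ratio of quadrat counts (illustrative counts below; the study itself used geostatistics):

```python
def dispersion_index(counts):
    """Variance-to-mean ratio: ~1 random, >1 aggregated, <1 uniform."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean

uniform_counts = [5, 5, 4, 5, 5, 4, 5, 5]       # evenly spread eggs per quadrat
aggregated_counts = [0, 0, 14, 0, 1, 20, 0, 3]  # clumped larvae per quadrat
print(dispersion_index(uniform_counts) < 1)     # True: uniform pattern
print(dispersion_index(aggregated_counts) > 1)  # True: aggregated pattern
```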
Big Data and Large Sample Size: A Cautionary Note on the Potential for Bias
Chambers, David A.; Glasgow, Russell E.
2014-01-01
A number of commentaries have suggested that large studies are more reliable than smaller studies and there is a growing interest in the analysis of “big data” that integrates information from many thousands of persons and/or different data sources. We consider a variety of biases that are likely in the era of big data, including sampling error, measurement error, multiple comparisons errors, aggregation error, and errors associated with the systematic exclusion of information. Using examples from epidemiology, health services research, studies on determinants of health, and clinical trials, we conclude that it is necessary to exercise greater caution to be sure that big sample size does not lead to big inferential errors. Despite the advantages of big studies, large sample size can magnify the bias associated with error resulting from sampling or study design.
An audit of some processing effects in aggregated occurrence records.
Mesibov, Robert
2018-01-01
A total of ca 800,000 occurrence records from the Australian Museum (AM), Museums Victoria (MV) and the New Zealand Arthropod Collection (NZAC) were audited for changes in selected Darwin Core fields after processing by the Atlas of Living Australia (ALA; for AM and MV records) and the Global Biodiversity Information Facility (GBIF; for AM, MV and NZAC records). Formal taxon names in the genus- and species-groups were changed in 13-21% of AM and MV records, depending on dataset and aggregator. There was little agreement between the two aggregators on processed names, with names changed in two to three times as many records by one aggregator alone compared to records with names changed by both aggregators. The type status of specimen records did not change with name changes, resulting in confusion as to the name with which a type was associated. Data losses of up to 100% were found after processing in some fields, apparently due to programming errors. The taxonomic usefulness of occurrence records could be improved if aggregators included both original and the processed taxonomic data items for each record. It is recommended that end-users check original and processed records for data loss and name replacements after processing by aggregators.
Uncertainty aggregation and reduction in structure-material performance prediction
NASA Astrophysics Data System (ADS)
Hu, Zhen; Mahadevan, Sankaran; Ao, Dan
2018-02-01
An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
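The idea of updating only with observation segments where the model is reliable can be illustrated with a scalar conjugate-Gaussian sketch (a toy stand-in for the paper's Bayesian-network framework; all values are synthetic):

```python
import random

random.seed(7)

def normal_update(prior_mean, prior_var, obs, obs_var):
    """Conjugate Gaussian update of a scalar model parameter."""
    w = prior_var / (prior_var + obs_var / len(obs))
    mean = prior_mean + w * (sum(obs) / len(obs) - prior_mean)
    return mean, (1 - w) * prior_var

# Observations fall in two segments; suppose validation flags segment B,
# where the model form is poor, so only segment A is used for updating.
seg_a = [random.gauss(10.0, 0.5) for _ in range(20)]   # model reliable here
seg_b = [random.gauss(14.0, 0.5) for _ in range(20)]   # model biased here

m_all, _ = normal_update(9.0, 4.0, seg_a + seg_b, 0.25)
m_valid, _ = normal_update(9.0, 4.0, seg_a, 0.25)
print(abs(m_valid - 10.0) < abs(m_all - 10.0))  # True: filtered update lands closer
```

Updating with the biased segment drags the posterior away from the true value, which is the failure mode the adaptive validation step is designed to avoid.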
Frequency, probability, and prediction: easy solutions to cognitive illusions?
Griffin, D; Buehler, R
1999-02-01
Many errors in probabilistic judgment have been attributed to people's inability to think in statistical terms when faced with information about a single case. Prior theoretical analyses and empirical results imply that the errors associated with case-specific reasoning may be reduced when people make frequentistic predictions about a set of cases. In studies of three previously identified cognitive biases, we find that frequency-based predictions are different from, but no better than, case-specific judgments of probability. First, in studies of the "planning fallacy," we compare the accuracy of aggregate frequency and case-specific probability judgments in predictions of students' real-life projects. When aggregate and single-case predictions are collected from different respondents, there is little difference between the two: both are overly optimistic and show little predictive validity. However, in within-subject comparisons, the aggregate judgments are significantly more conservative than the single-case predictions, though still optimistically biased. Results from studies of overconfidence in general knowledge and base-rate neglect in categorical prediction underline a general conclusion: frequentistic predictions made for sets of events are no more statistically sophisticated, nor more accurate, than predictions made for individual events using subjective probability.
Structural Controllability and Controlling Centrality of Temporal Networks
Pan, Yujian; Li, Xiang
2014-01-01
Temporal networks are networks in which nodes and interactions may appear and disappear at various time scales. Given the evidence of the ubiquity of temporal networks in our economy, nature, and society, it is urgent and significant to study their structural controllability and its characteristics, which until now has remained an untouched topic. We develop graphic tools to study structural controllability and its characteristics, identifying the intrinsic mechanisms behind the ability of individuals to control a dynamic, large-scale temporal network. Classifying the temporal trees of a temporal network into different types, we give analytical upper and lower bounds on the controlling centrality, which are verified by numerical simulations of both artificial and empirical temporal networks. We find that the positive relationship between aggregated degree and controlling centrality, as well as the scale-free distribution of nodes' controlling centrality, are virtually independent of the time scale and the type of dataset, indicating the inherent robustness and heterogeneity of the controlling centrality of nodes within temporal networks. PMID:24747676
Ymeti, Irena; van der Werff, Harald; Shrestha, Dhruba Pikha; Jetten, Victor G.; Lievens, Caroline; van der Meer, Freek
2017-01-01
Remote sensing has shown its potential to assess soil properties and is a fast and non-destructive method for monitoring soil surface changes. In this paper, we monitor soil aggregate breakdown under natural conditions. From November 2014 to February 2015, images and weather data were collected on a daily basis from five soils susceptible to detachment (Silty Loam with various organic matter content, Loam and Sandy Loam). Three techniques that vary in image processing complexity and user interaction were tested for the ability of monitoring aggregate breakdown. Considering that the soil surface roughness causes shadow cast, the blue/red band ratio is utilized to observe the soil aggregate changes. Dealing with images with high spatial resolution, image texture entropy, which reflects the process of soil aggregate breakdown, is used. In addition, the Huang thresholding technique, which allows estimation of the image area occupied by soil aggregate, is performed. Our results show that all three techniques indicate soil aggregate breakdown over time. The shadow ratio shows a gradual change over time with no details related to weather conditions. Both the entropy and the Huang thresholding technique show variations of soil aggregate breakdown responding to weather conditions. Using data obtained with a regular camera, we found that freezing–thawing cycles are the cause of soil aggregate breakdown. PMID:28556803
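Of the three techniques, the image texture entropy is the most compact to sketch: the Shannon entropy of the grey-level histogram serves as a proxy for surface heterogeneity. A minimal version, assuming 8-bit grayscale imagery (the binning and the example patches are illustrative, not the paper's exact processing chain):

```python
import numpy as np

def texture_entropy(image, bins=256):
    """Shannon entropy (bits) of the grey-level histogram of an 8-bit image."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]  # empty bins contribute 0 * log(0) = 0 by convention
    return float(-np.sum(p * np.log2(p)))

flat = np.full((64, 64), 128)                       # single grey level
varied = np.arange(64 * 64).reshape(64, 64) % 256   # all 256 levels equally often
print(texture_entropy(flat))    # zero entropy for a uniform patch
print(texture_entropy(varied))  # maximal (8 bits) for a uniform histogram
```

Tracking this scalar over the daily image series would show the entropy changes that the abstract associates with aggregate breakdown.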
Guidance to Achieve Accurate Aggregate Quantitation in Biopharmaceuticals by SV-AUC.
Arthur, Kelly K; Kendrick, Brent S; Gabrielson, John P
2015-01-01
The levels and types of aggregates present in protein biopharmaceuticals must be assessed during all stages of product development, manufacturing, and storage of the finished product. Routine monitoring of aggregate levels in biopharmaceuticals is typically achieved by size exclusion chromatography (SEC) due to its high precision, speed, robustness, and ease of operation. However, SEC is error prone and requires careful method development to ensure accuracy of reported aggregate levels. Sedimentation velocity analytical ultracentrifugation (SV-AUC) is an orthogonal technique that can be used to measure protein aggregation without many of the potential inaccuracies of SEC. In this chapter, we discuss applications of SV-AUC during biopharmaceutical development and how characteristics of the technique make it better suited for some applications than others. We then discuss the elements of a comprehensive analytical control strategy for SV-AUC. Successful implementation of these analytical control elements ensures that SV-AUC provides continued value over the long time frames necessary to bring biopharmaceuticals to market. © 2015 Elsevier Inc. All rights reserved.
Talibov, Madar; Salmelin, Raili; Lehtinen-Jacks, Susanna; Auvinen, Anssi
2017-04-01
Job-exposure matrices (JEMs) are used for exposure assessment in occupational studies, but they can involve errors. We assessed agreement between the Nordic Occupational Cancer Studies JEM (NOCCA-JEM) and aggregate and individual dose estimates for cosmic radiation exposure among Finnish airline personnel. Cumulative cosmic radiation exposure for 5,022 airline crew members was compared between the JEM and aggregate and individual dose estimates. The NOCCA-JEM underestimated individual doses: compared with individual doses, the intraclass correlation coefficient was 0.37, the proportion of agreement 64%, and kappa 0.46. Higher agreement was achieved with aggregate dose estimates, that is, annual medians of individual doses and estimates adjusted for heliocentric potentials. The substantial disagreement between the NOCCA-JEM and individual dose estimates of cosmic radiation may lead to exposure misclassification and biased risk estimates in epidemiological studies. Using aggregate data may provide improved estimates. Am. J. Ind. Med. 60:386-393, 2017. © 2017 Wiley Periodicals, Inc.
Geometric error characterization and error budgets. [thematic mapper
NASA Technical Reports Server (NTRS)
Beyer, E.
1982-01-01
Procedures used in characterizing geometric error sources for a spaceborne imaging system are described, using the LANDSAT D thematic mapper ground segment processing as the prototype. Software was tested through simulation and is undergoing tests with the operational hardware as part of the prelaunch system evaluation. Geometric accuracy specifications, geometric correction, and control point processing are discussed. Cross-track and along-track errors are tabulated for the thematic mapper, the spacecraft, and ground processing to show the temporal registration error budget in pixels (42.5 microrad) at 90%.
Hasani, Mohammad; Sakieh, Yousef; Dezhkam, Sadeq; Ardakani, Tahereh; Salmanmahiny, Abdolrassoul
2017-04-01
A hierarchical intensity analysis of land-use change is applied to evaluate the dynamics of a coupled urban coastal system in Rasht County, Iran. Temporal land-use layers of 1987, 1999, and 2011 are employed, while spatial accuracy metrics are only available for the 2011 data (overall accuracy of 94%). The errors in the 1987 and 1999 layers are unknown, which can influence the accuracy of temporal change information. Such data were employed to examine the size and the type of errors that could justify deviations from uniform change intensities. Accordingly, errors comprising 3.31 and 7.47% of the 1999 and 2011 maps, respectively, could explain all differences from uniform gains, and errors comprising 5.21 and 1.81% of the 1987 and 1999 maps, respectively, could explain all deviations from uniform losses. Additional historical information is also applied for uncertainty assessment and to separate probable map errors from actual land-use changes. In this regard, historical processes in Rasht County can explain different types of transition that are either consistent or inconsistent with known processes. The intensity analysis assisted in identification of systematic transitions and detection of competitive categories, which cannot be investigated through conventional change detection methods. Based on the results, built-up area is the most active gaining category in the area, and the wetland category, despite its smaller areal extent, is more sensitive to intense land-use change processes. Uncertainty assessment results also indicated that there are no considerable classification errors in the temporal land-use data and that these imprecise layers can reliably provide implications for informed decision making.
Nock, Nl; Zhang, Lx
2011-11-29
Methods that can evaluate aggregate effects of rare and common variants are limited. Therefore, we applied a two-stage approach to evaluate aggregate gene effects in the 1000 Genomes Project data, which contain 24,487 single-nucleotide polymorphisms (SNPs) in 697 unrelated individuals from 7 populations. In stage 1, we identified potentially interesting genes (PIGs) as those having at least one SNP meeting Bonferroni correction using univariate, multiple regression models. In stage 2, we evaluated aggregate PIG effects on the trait Q1 by modeling each gene as a latent construct, defined by multiple common and rare variants, using the multivariate statistical framework of structural equation modeling (SEM). In stage 1, we found that PIGs varied markedly between a randomly selected replicate (replicate 137) and 100 other replicates, with the exception of FLT1, and that collapsing rare variants decreased false positives but increased false negatives. In stage 2, we developed a good-fitting SEM model that included all nine genes simulated to affect Q1 (FLT1, KDR, ARNT, ELAV4, FLT4, HIF1A, HIF3A, VEGFA, VEGFC) and found that FLT1 had the largest effect on Q1 (βstd = 0.33 ± 0.05). Using replicate 137 estimates as population values, we found that the mean relative bias in the parameters (loadings, paths, residuals) and their standard errors across 100 replicates was, on average, less than 5%. Our latent variable SEM approach provides a viable framework for modeling aggregate effects of rare and common variants in multiple genes, but more refined methods are needed in stage 1 to minimize type I and type II errors.
FHWA statistical program: a customer's guide to using highway statistics
DOT National Transportation Integrated Search
1995-08-01
The appropriate level of spatial and temporal data aggregation for highway vehicle emissions analyses is one of several important analytical questions that has received considerable interest following passage of the Clean Air Act Amendments (CAAA) of...
Linear regression crash prediction models: issues and proposed solutions.
DOT National Transportation Integrated Search
2010-05-01
The paper develops a linear regression model approach that can be applied to crash data to predict vehicle crashes. The proposed approach involves novel data aggregation to satisfy linear regression assumptions; namely, error structure normality ...
Wallace, Brian M; Krzic, Maja; Forge, Tom A; Broersma, Klaas; Newman, Reg F
2009-01-01
Biosolids application to rangelands and pastures recycles nutrients and organic matter back to soils. The effects of biosolids (20 and 60 dry Mg ha(-1)) and N+P fertilizer on soil aggregate stability, bulk density, aeration porosity, and total C and N of stable aggregates were evaluated 4 and 5 yr after surface application to a crested wheatgrass [Agropyron cristatum (L.) Gaertn.] pasture in the southern interior of British Columbia (BC). The experiment was established in 2001 in a randomized complete block design with four replications. The 60 Mg ha(-1) biosolids treatment (Bio 60) had a greater aggregate mean weight diameter (MWD) and proportion of water-stable soil aggregates > 1 mm relative to the control and fertilizer treatments. Temporal variation in aggregate stability was attributed to seasonal variations in soil water content. Surface application of 60 Mg ha(-1) of biosolids increased C concentrations within water-stable aggregates relative to the control from 29 to 104, 24 to 79, and 12 to 38 g kg(-1) for the 2 to 6, 1 to 2, and 0.25 to 1 mm size fractions, respectively. The concentration of N within aggregates increased in similar proportions to C. Neither soil bulk density nor aeration porosity was affected by biosolids application. Increased aggregation and the accumulation of soil C within aggregates following biosolids application create a potential for better soil C storage, soil water retention, nutrient availability, and ultimately the overall health of semiarid perennial pastures.
NASA Technical Reports Server (NTRS)
Hardy, E. E. (Principal Investigator); Skaley, J. E.; Dawson, C. P.; Weiner, G. D.; Phillips, E. S.; Fisher, R. A.
1975-01-01
The author has identified the following significant results. Three sites were evaluated for land use inventory: Finger Lakes - Tompkins County, Lower Hudson Valley - Newburgh, and Suffolk County - Long Island. Special photo enhancement processes were developed to standardize the density range and contrast among S190A negatives. Enhanced black and white enlargements were converted to color by contact printing onto diazo film. A color prediction model related the density values on each spectral band for each category of land use to the spectral properties of the various diazo dyes. The S190A multispectral system proved to be almost as effective as the S190B high resolution camera for inventorying land use. Aggregate error for Level 1 averaged about 12% while Level 2 aggregate error averaged about 25%. The S190A system proved to be much superior to LANDSAT in inventorying land use, primarily because of increased resolution.
Temporal Control over Transient Chemical Systems using Structurally Diverse Chemical Fuels.
Chen, Jack L-Y; Maiti, Subhabrata; Fortunati, Ilaria; Ferrante, Camilla; Prins, Leonard J
2017-08-25
The next generation of adaptive, intelligent chemical systems will rely on a continuous supply of energy to maintain the functional state. Such systems will require chemical methodology that provides precise control over the energy dissipation process, and thus, the lifetime of the transiently activated function. This manuscript reports on the use of structurally diverse chemical fuels to control the lifetime of two different systems under dissipative conditions: transient signal generation and the transient formation of self-assembled aggregates. The energy stored in the fuels is dissipated at different rates by an enzyme, which installs a dependence of the lifetime of the active system on the chemical structure of the fuel. In the case of transient signal generation, it is shown that different chemical fuels can be used to generate a vast range of signal profiles, allowing temporal control over two orders of magnitude. Regarding self-assembly under dissipative conditions, the ability to control the lifetime using different fuels turns out to be particularly important as stable aggregates are formed only at well-defined surfactant/fuel ratios, meaning that temporal control cannot be achieved by simply changing the fuel concentration. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Agent Based Modeling: Fine-Scale Spatio-Temporal Analysis of Pertussis
NASA Astrophysics Data System (ADS)
Mills, D. A.
2017-10-01
In epidemiology, spatial and temporal variables are used to compute vaccination efficacy and effectiveness. The chosen resolution and scale of a spatial or spatio-temporal analysis will affect the results. When calculating vaccination efficacy, for example, a simple environment that offers various ideal outcomes is often modeled using coarse-scale data aggregated on an annual basis. In contrast to this inadequate aggregated approach, this research uses agent-based modeling of fine-scale neighborhood data, centered on the interactions of infants in daycare and their families, to demonstrate an accurate reflection of vaccination capabilities. Recent studies suggest that, despite preventing major symptoms, the acellular Pertussis vaccine does not prevent the colonization and transmission of Bordetella pertussis bacteria. After vaccination, a treated individual becomes a potential asymptomatic carrier of the Pertussis bacteria rather than an immune individual. Agent-based modeling enables the measurable depiction of asymptomatic carriers that are otherwise unaccounted for when calculating vaccination efficacy and effectiveness. Using empirical data from a Florida Pertussis outbreak case study, the results of this model demonstrate that asymptomatic carriers bias the calculated vaccination efficacy and reveal a need to reconsider current methods that are widely used for calculating vaccination efficacy and effectiveness.
Temporal dynamics and impact of event interactions in cyber-social populations
NASA Astrophysics Data System (ADS)
Zhang, Yi-Qing; Li, Xiang
2013-03-01
The advance of information technologies provides powerful measures to digitize social interactions and facilitate quantitative investigations. To explore large-scale indoor interactions of a social population, we analyze 18 715 users' Wi-Fi access logs recorded in a Chinese university campus during 3 months, and define event interaction (EI) to characterize the concurrent interactions of multiple users inferred from their geographic coincidences (co-locating in the same small region at the same time). We propose three rules to construct a transmission graph, which depicts the topological and temporal features of event interactions. The vertex dynamics of the transmission graph show that the active durations of EIs follow truncated power-law distributions, independent of the number of involved individuals. The edge dynamics show that the transmission durations also present a truncated power-law pattern, independent of the daily and weekly periodicities. Moreover, in the aggregated transmission graph, low-degree vertices previously neglected in aggregated static networks may participate in large-degree EIs, which is verified by three data sets covering social populations of different sizes with various rendezvouses. This work highlights the temporal significance of event interactions in cyber-social populations.
NASA Technical Reports Server (NTRS)
Rigney, Matt; Jedlovec, Gary; LaFontaine, Frank; Shafer, Jaclyn
2010-01-01
Heat and moisture exchange between the ocean surface and the atmosphere plays an integral role in short-term, regional NWP. Current SST products lack the spatial and temporal resolution to accurately capture small-scale features that affect heat and moisture flux. NASA satellite data are used to produce a high spatial and temporal resolution SST analysis using an OI technique.
Rokicki, Slawa; Cohen, Jessica; Fink, Günther; Salomon, Joshua A; Landrum, Mary Beth
2018-01-01
Difference-in-differences (DID) estimation has become increasingly popular as an approach to evaluate the effect of a group-level policy on individual-level outcomes. Several statistical methodologies have been proposed to correct for the within-group correlation of model errors resulting from the clustering of data. Little is known about how well these corrections perform with the often small number of groups observed in health research using longitudinal data. First, we review the most commonly used modeling solutions in DID estimation for panel data, including generalized estimating equations (GEE), permutation tests, clustered standard errors (CSE), wild cluster bootstrapping, and aggregation. Second, we compare the empirical coverage rates and power of these methods using a Monte Carlo simulation study in scenarios in which we vary the degree of error correlation, the group size balance, and the proportion of treated groups. Third, we provide an empirical example using the Survey of Health, Ageing, and Retirement in Europe. When the number of groups is small, CSE are systematically biased downwards in scenarios when data are unbalanced or when there is a low proportion of treated groups. This can result in over-rejection of the null even when data are composed of up to 50 groups. Aggregation, permutation tests, bias-adjusted GEE, and wild cluster bootstrap produce coverage rates close to the nominal rate for almost all scenarios, though GEE may suffer from low power. In DID estimation with a small number of groups, analysis using aggregation, permutation tests, wild cluster bootstrap, or bias-adjusted GEE is recommended.
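The aggregation remedy recommended above can be illustrated with a toy panel: collapse individual outcomes to group-by-period means, then take the difference-in-differences of those means, which sidesteps the within-group error correlation at the individual level. A minimal sketch on synthetic data (the group counts, effect size of 2.0, and noise levels are invented for illustration; this omits the covariate adjustment a real DID analysis would include):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy panel: 20 groups (half treated), two periods, 50 individuals per cell.
n_groups, n_per, effect = 20, 50, 2.0
treated = np.arange(n_groups) < n_groups // 2
group_fe = rng.normal(0.0, 1.0, n_groups)  # group fixed effects

# Aggregate individual outcomes to group-by-period cell means.
cells = {}
for g in range(n_groups):
    for period in (0, 1):
        y = (group_fe[g] + 1.5 * period + effect * treated[g] * period
             + rng.normal(0.0, 1.0, n_per))
        cells[g, period] = y.mean()

# Difference-in-differences on the aggregated means:
# (treated post - pre) minus (control post - pre).
tr = [g for g in range(n_groups) if treated[g]]
ct = [g for g in range(n_groups) if not treated[g]]
did = (np.mean([cells[g, 1] - cells[g, 0] for g in tr])
       - np.mean([cells[g, 1] - cells[g, 0] for g in ct]))
print(round(did, 2))  # close to the true effect of 2.0
```

Because inference is then performed on one observation per group-period, the estimator no longer understates uncertainty the way naive individual-level standard errors do.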
Response Errors Explain the Failure of Independent-Channels Models of Perception of Temporal Order
García-Pérez, Miguel A.; Alcalá-Quintana, Rocío
2012-01-01
Independent-channels models of perception of temporal order (also referred to as threshold models or perceptual latency models) have been ruled out because two formal properties of these models (monotonicity and parallelism) are not borne out by data from ternary tasks in which observers must judge whether stimulus A was presented before, after, or simultaneously with stimulus B. These models generally assume that observed responses are authentic indicators of unobservable judgments, but blinks, lapses of attention, or errors in pressing the response keys (maybe, but not only, motivated by time pressure when reaction times are being recorded) may make observers misreport their judgments or simply guess a response. We present an extension of independent-channels models that considers response errors and we show that the model produces psychometric functions that do not satisfy monotonicity and parallelism. The model is illustrated by fitting it to data from a published study in which the ternary task was used. The fitted functions describe very accurately the absence of monotonicity and parallelism shown by the data. These characteristics of empirical data are thus consistent with independent-channels models when response errors are taken into consideration. The implications of these results for the analysis and interpretation of temporal order judgment data are discussed. PMID:22493586
Linger, Michele L; Ray, Glen E; Zachar, Peter; Underhill, Andrea T; LoBello, Steven G
2007-10-01
Studies of graduate students learning to administer the Wechsler scales have generally shown that training is not associated with the development of scoring proficiency. Many studies report on the reduction of aggregated administration and scoring errors, a strategy that does not highlight the reduction of errors on subtests identified as most prone to error. This study evaluated the development of scoring proficiency specifically on the Wechsler (WISC-IV and WAIS-III) Vocabulary, Comprehension, and Similarities subtests during training by comparing a set of 'early test administrations' to 'later test administrations.' Twelve graduate students enrolled in an intelligence-testing course participated in the study. Scoring errors (e.g., incorrect point assignment) were evaluated on the students' actual practice administration test protocols. Errors on all three subtests declined significantly when scoring errors on 'early' sets of Wechsler scales were compared to those made on 'later' sets. However, correcting these subtest scoring errors did not cause significant changes in subtest scaled scores. Implications for clinical instruction and future research are discussed.
ERIC Educational Resources Information Center
Walker, Grant M.; Schwartz, Myrna F.; Kimberg, Daniel Y.; Faseyitan, Olufunsho; Brecher, Adelyn; Dell, Gary S.; Coslett, H. Branch
2011-01-01
Semantic errors in aphasia (e.g., naming a horse as "dog") frequently arise from faulty mapping of concepts onto lexical items. A recent study by our group used voxel-based lesion-symptom mapping (VLSM) methods with 64 patients with chronic aphasia to identify voxels that carry an association with semantic errors. The strongest associations were…
Garcia Espinosa, Arlety; Andrade Machado, René; Borges González, Susana; García González, María Eugenia; Pérez Montoto, Ariadna; Toledo Sotomayor, Guillermo
2010-01-01
The goal of the study described here was to determine if executive dysfunction and impulsivity are related to risk for suicide and suicide attempts in patients with temporal lobe epilepsy. Forty-two patients with temporal lobe epilepsy were recruited. A detailed medical history, neurological examination, serial EEGs, the Mini-International Neuropsychiatric Interview, executive function, and MRI were assessed. Multiple regression analysis was carried out to examine predictive associations between clinical variables and Wisconsin Card Sorting Test measures. Patients' scores on the Risk for Suicide Scale (n=24) were greater than 7, which means they had the highest relative risk for suicide attempts. Family history of psychiatric disease, current major depressive episode, left temporal lobe epilepsy, and perseverative responses and total errors on the Wisconsin Card Sorting Test increased suicide risk and suicide attempts by factors of 6.3 and 7.5, respectively. Executive dysfunction (specifically, perseverative responses and more total errors) contributed greatly to suicide risk. Executive performance has a major impact on suicide risk and suicide attempts in patients with temporal lobe epilepsy. © 2009 Elsevier Inc. All rights reserved.
Implicit transfer of reversed temporal structure in visuomotor sequence learning.
Tanaka, Kanji; Watanabe, Katsumi
2014-04-01
Some spatio-temporal structures are easier to transfer implicitly in sequential learning. In this study, we investigated whether the consistent reversal of triads of learned components would support the implicit transfer of their temporal structure in visuomotor sequence learning. A triad comprised three sequential button presses ([1][2][3]), and seven consecutive triads comprised a sequence. Participants learned a sequence by trial and error until they could complete it 20 times without error. They then learned another sequence, in which each triad was reversed ([3][2][1]), partially reversed ([2][1][3]), or switched so as not to overlap with the other conditions ([2][3][1] or [3][1][2]). Even when the participants did not notice the alternation rule, the consistent reversal of the temporal structure of each triad led to better implicit transfer; this was confirmed in a subsequent experiment. These results suggest that the implicit transfer of the temporal structure of a learned sequence can be influenced by both the structure and the consistency of the change. Copyright © 2013 Cognitive Science Society, Inc.
Use of machine learning methods to reduce predictive error of groundwater models.
Xu, Tianfang; Valocchi, Albert J; Choi, Jaesik; Amir, Eyal
2014-01-01
Quantitative analyses of groundwater flow and transport typically rely on a physically-based model, which is inherently subject to error. Errors in model structure, parameters, and data lead to both random and systematic error, even in the output of a calibrated model. We develop complementary data-driven models (DDMs) to reduce the predictive error of physically-based groundwater models. Two machine learning techniques, instance-based weighting and support vector regression, are used to build the DDMs. This approach is illustrated using two real-world case studies of the Republican River Compact Administration model and the Spokane Valley-Rathdrum Prairie model. The two groundwater models have different hydrogeologic settings, parameterization, and calibration methods. In the first case study, cluster analysis is introduced for data preprocessing to make the DDMs more robust and computationally efficient. The DDMs reduce the root-mean-square error (RMSE) of the temporal, spatial, and spatiotemporal prediction of piezometric head of the groundwater model by 82%, 60%, and 48%, respectively. In the second case study, the DDMs reduce the RMSE of the temporal prediction of piezometric head of the groundwater model by 77%. It is further demonstrated that the effectiveness of the DDMs depends on the existence and extent of the structure in the error of the physically-based model. © 2013, National GroundWater Association.
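The core idea of a complementary data-driven model is to learn the structured part of the physical model's error and subtract it from the prediction. The study uses instance-based weighting and support vector regression; the sketch below substitutes a plain least-squares fit on polynomial features to stay self-contained, and the "groundwater" setup is entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: observations contain a quadratic term the physical model omits.
x = rng.uniform(0.0, 10.0, 200)          # a model input (entirely synthetic)
truth = 50.0 - 2.0 * x + 0.15 * x**2     # actual system response
physical = 50.0 - 2.0 * x                # calibrated but misspecified model
obs = truth + rng.normal(0.0, 0.3, 200)  # noisy observations

# Data-driven model: fit the physical model's error as a function of the input,
# then add the predicted error back onto the physical prediction.
X = np.column_stack([np.ones_like(x), x, x**2])
coef, *_ = np.linalg.lstsq(X, obs - physical, rcond=None)
corrected = physical + X @ coef

def rmse(e):
    return float(np.sqrt(np.mean(e**2)))

print(rmse(obs - physical), rmse(obs - corrected))  # correction cuts RMSE sharply
```

As the abstract's last sentence notes, this only works when the physical model's error has structure; purely random error cannot be learned away, which the sketch reproduces if the quadratic term is removed.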
The Error Structure of the SMAP Single and Dual Channel Soil Moisture Retrievals
NASA Astrophysics Data System (ADS)
Dong, Jianzhi; Crow, Wade T.; Bindlish, Rajat
2018-01-01
Knowledge of the temporal error structure for remotely sensed surface soil moisture retrievals can improve our ability to exploit them for hydrologic and climate studies. This study employs a triple collocation analysis to investigate both the total variance and temporal autocorrelation of errors in Soil Moisture Active and Passive (SMAP) products generated from two separate soil moisture retrieval algorithms, the vertically polarized brightness temperature-based single-channel algorithm (SCA-V, the current baseline SMAP algorithm) and the dual-channel algorithm (DCA). A key assumption made in SCA-V is that real-time vegetation opacity can be accurately captured using only a climatology for vegetation opacity. Results demonstrate that while SCA-V generally outperforms DCA, SCA-V can produce larger total errors when this assumption is significantly violated by interannual variability in vegetation health and biomass. Furthermore, larger autocorrelated errors in SCA-V retrievals are found in areas with relatively large vegetation opacity deviations from climatological expectations. This implies that a significant portion of the autocorrelated error in SCA-V is attributable to the violation of its vegetation opacity climatology assumption and suggests that utilizing a real (as opposed to climatological) vegetation opacity time series in the SCA-V algorithm would reduce the magnitude of autocorrelated soil moisture retrieval errors.
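The triple collocation analysis mentioned above estimates the error variance of each of three collocated products whose errors are mutually independent: for product x, the error variance is E[(x − y)(x − z)] after removing means, since the shared truth cancels in both differences. A minimal sketch on synthetic soil-moisture-like data (the signal statistics and error levels are invented; none of the SMAP-specific processing or the autocorrelation extension is reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Common truth plus independent zero-mean errors in three collocated products.
truth = rng.normal(0.25, 0.08, n)     # soil-moisture-like signal (synthetic)
sig = (0.04, 0.03, 0.05)              # true error standard deviations
x, y, z = (truth + rng.normal(0.0, s, n) for s in sig)

def tc_error_var(a, b, c):
    """Error variance of product a; assumes errors independent across products."""
    a, b, c = a - a.mean(), b - b.mean(), c - c.mean()
    return float(np.mean((a - b) * (a - c)))  # truth cancels in both differences

est = [tc_error_var(x, y, z), tc_error_var(y, x, z), tc_error_var(z, x, y)]
print([round(v**0.5, 3) for v in est])  # recovered error std devs, approx. sig
```

Rotating the roles of the three products, as in the last line, yields an error variance estimate for each of them from the same three time series.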
Model-free and model-based reward prediction errors in EEG.
Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy
2018-05-24
Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
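The model-free reward prediction error at the heart of this account is the temporal-difference error δ = r + γV(s′) − V(s), which drives all value updates without any knowledge of the task's transition structure. A minimal tabular TD(0) sketch on an invented three-state chain (the states, rewards, and learning rate are illustrative, not taken from the study):

```python
import numpy as np

def td0(episodes, n_states=3, alpha=0.1, gamma=0.9):
    """Tabular TD(0): values are updated only by the reward prediction error."""
    V = np.zeros(n_states)
    deltas = []
    for ep in episodes:                    # ep is a list of (s, r, s_next)
        for s, r, s_next in ep:
            target = r + (gamma * V[s_next] if s_next is not None else 0.0)
            delta = target - V[s]          # the model-free reward prediction error
            V[s] += alpha * delta
            deltas.append(delta)
    return V, deltas

# A fixed chain 0 -> 1 -> 2 with reward 1.0 on the final, terminal transition.
chain = [[(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, None)]] * 200
V, deltas = td0(chain)
print(np.round(V, 2))        # values propagate back, discounted by gamma
print(round(deltas[-1], 3))  # prediction errors shrink as learning converges
```

A model-based learner would instead compute values from a learned transition model; the EEG contrast in the abstract hinges on prediction errors arising from both kinds of value estimate.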
Temporal Decomposition of a Distribution System Quasi-Static Time-Series Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry A; Hunsberger, Randolph J
This paper documents the first phase of an investigation into reducing runtimes of complex OpenDSS models through parallelization. As the method seems promising, future work will quantify - and further mitigate - errors arising from this process. In this initial report, we demonstrate how, through the use of temporal decomposition, the run times of a complex distribution-system-level quasi-static time-series simulation can be reduced roughly in proportion to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control and voltage errors at the time-slice boundaries. All evaluations were performed using a real distribution circuit model with the addition of 50 PV systems - representing a mock complex PV impact study. We are able to reduce induced transition errors through the addition of controls initialization, though small errors persist. The time savings with parallelization are so significant that we feel additional investigation to reduce control errors is warranted.
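Temporal decomposition of a year-long quasi-static time-series horizon can be sketched as follows; the window count and the warm-up overlap used to initialize controls are hypothetical choices for illustration, not the paper's exact configuration:

```python
def decompose_horizon(n_hours=8760, n_workers=8, overlap=24):
    """Split a year-long QSTS horizon into n_workers windows that can be
    simulated in parallel. Each window is preceded by `overlap` warm-up
    hours whose results are discarded and used only to initialize
    controls, mitigating the control/voltage errors that arise at
    time-slice boundaries."""
    size = -(-n_hours // n_workers)  # ceiling division
    windows = []
    for k in range(n_workers):
        start, stop = k * size, min((k + 1) * size, n_hours)
        warm = max(0, start - overlap)
        # simulate hours warm..stop, but keep results only for start..stop
        windows.append((warm, start, stop))
    return windows
```

The kept intervals tile the full horizon exactly, so the parallel results can be concatenated into a single year-long output.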
Matsuda, F; Lan, W C; Tanimura, R
1999-02-01
In Matsuda's 1996 study, 4- to 11-yr.-old children (N = 133) watched two cars running on two parallel tracks on a CRT display and judged whether their durations and distances were equal and, if not, which was larger. In the present paper, the relative contributions of the four critical stimulus attributes (whether temporal starting points, temporal stopping points, spatial starting points, and spatial stopping points were the same or different between the two cars) to the production of errors were quantitatively estimated from the error-rate data obtained by Matsuda. The present analyses made it possible not only to quantify the qualitative characteristics of the critical attributes described by Matsuda, but also to add more detailed findings about them.
The loggerhead shrike, Lanius ludovicianus, is a declining songbird that forms breeding aggregations. Despite such reports from several populations, only one statistical analysis of loggerhead shrike territory distribution has been published to date. I use a spatio-temporal sim...
NASA Astrophysics Data System (ADS)
Dasgupta, Anushka
Many studies have suggested that oxidative stress plays an important role in the pathophysiology of both multiple sclerosis (MS) and its animal model experimental autoimmune encephalomyelitis (EAE). Yet, the mechanism by which oxidative stress leads to tissue damage in these disorders is unclear. Recent work from our laboratory has revealed that protein carbonylation, a major oxidative modification caused by severe and/or chronic oxidative stress conditions, is elevated in MS and EAE. Furthermore, protein carbonylation has been shown to alter protein structure leading to misfolding/aggregation. These findings prompted me to hypothesize that carbonylated proteins, formed as a consequence of oxidative stress and/or decreased proteasomal activity, promote protein aggregation to mediate neuronal apoptosis in vitro and in EAE. To test this novel hypothesis, I first characterized protein carbonylation, protein aggregation and apoptosis along the spinal cord during the course of myelin-oligodendrocyte glycoprotein (MOG)35-55 peptide-induced EAE in C57BL/6 mice [Chapter 2]. The results show that carbonylated proteins accumulate throughout the course of the disease, albeit by different mechanisms: increased oxidative stress in acute EAE and decreased proteasomal activity in chronic EAE. I discovered not only that there is a temporal correlation between protein carbonylation and apoptosis but also that carbonyl levels are significantly higher in apoptotic cells. A high number of juxta-nuclear and cytoplasmic protein aggregates containing the majority of the oxidized proteins are also present during the course of EAE, which seems to be due to reduced autophagy. In Chapter 3, I show that when glutathione levels are reduced to those in EAE spinal cord, both neuron-like PC12 (nPC12) cells and primary neuronal cultures accumulate carbonylated proteins and undergo cell death (both by necrosis and apoptosis). 
Immunocytochemical and biochemical studies also revealed a temporal/spatial relationship between carbonylation, protein aggregation and cellular apoptosis. Furthermore, the effectiveness of the carbonyl scavenger hydralazine, histidine hydrazide and methoxylamine at preventing cell death identifies protein carbonyls as the toxic species. Experiments using well-characterized apoptosis inhibitors place protein carbonylation downstream of the mitochondrial transition pore opening and upstream of caspase activation. These in vitro studies demonstrate for the first time a causal relationship between carbonylation, protein aggregation and apoptosis of neurons undergoing oxidative damage. This relationship was further strengthened with the experiments carried out in chapter 4, which show that inhibition of protein aggregation with congo red (CR) or 2-hydroxypropyl beta-cyclodextrin (HPCD) significantly reduced neuronal cell death without affecting the levels of oxidized proteins. Interestingly, large, juxta-nuclear aggregates are not formed upon GSH depletion, suggesting that the small protein aggregates are the cytotoxic species. Together, our data suggest that protein carbonylation causes protein aggregation to mediate neuronal apoptosis in vitro and that a similar mechanism might be contributing to neuronal/glial apoptosis in EAE. These studies provide the basis for testing protein carbonylation scavengers and protein aggregation inhibitors for the treatment of inflammatory demyelinating disorders.
Modeling the impact of soil aggregate size on selenium immobilization
NASA Astrophysics Data System (ADS)
Kausch, M. F.; Pallud, C. E.
2013-03-01
Soil aggregates are mm- to cm-sized microporous structures separated by macropores. Whereas fast advective transport prevails in macropores, advection is inhibited by the low permeability of intra-aggregate micropores. This can lead to mass transfer limitations and the formation of aggregate scale concentration gradients affecting the distribution and transport of redox sensitive elements. Selenium (Se) mobilized through irrigation of seleniferous soils has emerged as a major aquatic contaminant. In the absence of oxygen, the bioavailable oxyanions selenate, Se(VI), and selenite, Se(IV), can be microbially reduced to solid, elemental Se, Se(0), and anoxic microzones within soil aggregates are thought to promote this process in otherwise well-aerated soils. To evaluate the impact of soil aggregate size on selenium retention, we developed a dynamic 2-D reactive transport model of selenium cycling in a single idealized aggregate surrounded by a macropore. The model was developed based on flow-through-reactor experiments involving artificial soil aggregates (diameter: 2.5 cm) made of sand and containing Enterobacter cloacae SLD1a-1 that reduces Se(VI) via Se(IV) to Se(0). Aggregates were surrounded by a constant flow providing Se(VI) and pyruvate under oxic or anoxic conditions. In the model, reactions were implemented with double-Monod rate equations coupled to the transport of pyruvate, O2, and Se species. The spatial and temporal dynamics of the model were validated with data from experiments, and predictive simulations were performed covering aggregate sizes 1-2.5 cm in diameter. Simulations predict that selenium retention scales with aggregate size. Depending on O2, Se(VI), and pyruvate concentrations, selenium retention was 4-23 times higher in 2.5 cm aggregates compared to 1 cm aggregates. Under oxic conditions, aggregate size and pyruvate concentrations were found to have a positive synergistic effect on selenium retention. 
Promoting soil aggregation on seleniferous agricultural soils, through organic matter amendments and conservation tillage, may thus help decrease the impacts of selenium contaminated drainage water on downstream aquatic ecosystems.
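The double-Monod rate law named in the abstract above couples two limiting substrates multiplicatively; a minimal sketch (the parameter values in the usage below are arbitrary illustrations, not the paper's calibrated constants):

```python
def double_monod_rate(v_max, S, K_S, A, K_A):
    """Double-Monod kinetics: a microbial reduction rate jointly limited
    by an electron acceptor S (e.g., Se(VI)) with half-saturation K_S and
    an electron donor A (e.g., pyruvate) with half-saturation K_A."""
    return v_max * (S / (K_S + S)) * (A / (K_A + A))
```

In the reactive transport model this rate term is coupled to diffusive-advective transport of O2, Se species, and pyruvate within the aggregate; the sketch shows only the kinetic factor.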
Modeling the impact of soil aggregate size on selenium immobilization
NASA Astrophysics Data System (ADS)
Kausch, M. F.; Pallud, C. E.
2012-09-01
Soil aggregates are mm- to cm-sized microporous structures separated by macropores. Whereas fast advective transport prevails in macropores, advection is inhibited by the low permeability of intra-aggregate micropores. This can lead to mass transfer limitations and the formation of aggregate-scale concentration gradients affecting the distribution and transport of redox sensitive elements. Selenium (Se) mobilized through irrigation of seleniferous soils has emerged as a major aquatic contaminant. In the absence of oxygen, the bioavailable oxyanions selenate, Se(VI), and selenite, Se(IV), can be microbially reduced to solid, elemental Se, Se(0), and anoxic microzones within soil aggregates are thought to promote this process in otherwise well aerated soils. To evaluate the impact of soil aggregate size on selenium retention, we developed a dynamic 2-D reactive transport model of selenium cycling in a single idealized aggregate surrounded by a macropore. The model was developed based on flow-through-reactor experiments involving artificial soil aggregates (diameter: 2.5 cm) made of sand and containing Enterobacter cloacae SLD1a-1 that reduces Se(VI) via Se(IV) to Se(0). Aggregates were surrounded by a constant flow providing Se(VI) and pyruvate under oxic or anoxic conditions. In the model, reactions were implemented with double-Monod rate equations coupled to the transport of pyruvate, O2, and Se-species. The spatial and temporal dynamics of the model were validated with data from experiments and predictive simulations were performed covering aggregate sizes between 1 and 2.5 cm diameter. Simulations predict that selenium retention scales with aggregate size. Depending on O2, Se(VI), and pyruvate concentrations, selenium retention was 4-23 times higher in 2.5-cm-aggregates compared to 1-cm-aggregates. Under oxic conditions, aggregate size and pyruvate-concentrations were found to have a positive synergistic effect on selenium retention. 
Promoting soil aggregation on seleniferous agricultural soils, through organic matter amendments and conservation tillage, may thus help decrease the impacts of selenium contaminated drainage water on downstream aquatic ecosystems.
76 FR 55139 - Order Making Fiscal Year 2012 Annual Adjustments to Registration Fee Rates
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-06
... Congressional Budget Office (``CBO'') and Office of Management and Budget (``OMB'') to project the aggregate... given by exp(FLAAMOP_t + σ_n²/2), where σ_n denotes the standard error of the n-step...
NASA Astrophysics Data System (ADS)
Li, Zhongyu; Jin, Zhaohui; Kasatani, Kazuo
2005-01-01
The third-order optical nonlinearities and responses of thin films containing the J-aggregates of a cyanine dye or a squarylium dye were measured using the degenerate four-wave mixing (DFWM) technique under resonant conditions. The sol-gel silica coating films containing the J-aggregates of the cyanine dye, NK-3261, are stable at room temperature and durable against laser beam irradiation. The temporal profiles of the DFWM signal were measured with a time resolution of 0.3 ps, and were found to consist of at least three components, i.e., the coherent instantaneous nonlinear response and two slower responses with decay time constants of ca. 1.0 ps and ca. 5.6 ps. The contribution of the latter was small. The electronic component of the effective third-order optical nonlinear susceptibility of the film had a value as high as ca. 3.0 x 10^-7 esu. We also studied a neat film of squarylium dye J-aggregates. The temporal profile of the DFWM signal of the neat squarylium dye film was likewise found to consist of at least three components: the coherent instantaneous nonlinear response and delayed responses with decay time constants of ca. 0.6 ps and ca. 6.5 ps. The contribution of the slow tail was also very small. The electronic component of the effective third-order optical nonlinear susceptibility of the neat squarylium dye film had a value as high as ca. 3.6 x 10^-8 esu.
Vidal-Martínez, V M; Pal, P; Aguirre-Macedo, M L; May-Tec, A L; Lewis, J W
2014-03-01
Global climate change (GCC) is expected to affect key environmental variables such as temperature and rainfall, which in turn influence the infection dynamics of metazoan parasites in tropical aquatic hosts. Thus, our aim was to determine how temporal patterns of temperature and rainfall influence the mean abundance and aggregation of three parasite species of the fish Cichlasoma urophthalmus from Yucatán, México. We calculated mean abundance and the aggregation parameter of the negative binomial distribution k for the larval digeneans Oligogonotylus manteri and Ascocotyle (Phagicola) nana and the ectoparasite Argulus yucatanus monthly from April 2005 to December 2010. Fourier analysis of time series and cross-correlations were used to determine potential associations between mean abundance and k for the three parasite species with water temperature and rainfall. Both O. manteri and A. (Ph.) nana exhibited their highest frequency peaks in mean abundance at 6 and 12 months, respectively, while their peak in k occurred every 24 months. For A. yucatanus the frequency peaks in mean abundance and k occurred every 12 months. We suggest that the level of aggregation at 24 months of O. manteri increases the likelihood of fish mortality. Such a scenario is less likely for A. (Ph.) nana and A. yucatanus, due to their low infection levels. Our findings suggest that under the conditions of GCC it would be reasonable to expect higher levels of parasite aggregation in tropical aquatic hosts, in turn leading to a potential increase in parasite-induced host mortality.
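The aggregation parameter k of the negative binomial distribution used above is commonly estimated by the method of moments, k = mean² / (variance − mean), with smaller k indicating stronger aggregation. A small sketch of that estimator (illustrative; the authors' fitting procedure may have used maximum likelihood instead):

```python
from statistics import fmean, variance

def nb_aggregation_k(counts):
    """Method-of-moments estimate of the negative binomial aggregation
    parameter k from per-host parasite counts. Requires overdispersion
    (sample variance exceeding the mean); smaller k = more aggregated."""
    m = fmean(counts)
    v = variance(counts)  # sample variance (n - 1 denominator)
    if v <= m:
        raise ValueError("variance must exceed the mean (no overdispersion)")
    return m * m / (v - m)
```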
What Is Spatio-Temporal Data Warehousing?
NASA Astrophysics Data System (ADS)
Vaisman, Alejandro; Zimányi, Esteban
In recent years, extending OLAP (On-Line Analytical Processing) systems with spatial and temporal features has attracted the attention of the GIS (Geographic Information Systems) and database communities. However, there is no commonly agreed definition of what a spatio-temporal data warehouse is and what functionality such a data warehouse should support. Further, the solutions proposed in the literature vary considerably in the kind of data that can be represented as well as the kind of queries that can be expressed. In this paper we present a conceptual framework for defining spatio-temporal data warehouses using an extensible data type system. We also define a taxonomy of classes of queries of increasing expressive power, and show how to express such queries using an extension of the tuple relational calculus with aggregate functions.
Spatial and temporal patterns of coexistence between competing Aedes mosquitoes in urban Florida
Juliano, S. A.
2009-01-01
Understanding mechanisms fostering coexistence between invasive and resident species is important in predicting ecological, economic, or health impacts of invasive species. The mosquito Aedes aegypti coexists at some urban sites in the southeastern United States with invasive Aedes albopictus, which is often superior in interspecific competition. We tested predictions for three hypotheses of species coexistence: seasonal condition-specific competition, aggregation among individual water-filled containers, and colonization-competition tradeoff across spatially partitioned habitat patches (cemeteries) that have high densities of containers. We measured spatial and temporal patterns of abundance for both species among water-filled resident cemetery vases and experimentally positioned standard cemetery vases and ovitraps in metropolitan Tampa, Florida. Consistent with the seasonal condition-specific competition hypothesis, abundances of both species in resident and standard cemetery vases were higher early in the wet season (June) versus late in the wet season (September), but the proportional increase of A. albopictus was greater than that of A. aegypti, presumably due to higher dry-season egg mortality and strong wet-season competitive superiority of larval A. albopictus. Spatial partitioning was not evident among cemeteries, a result inconsistent with the colonization-competition tradeoff hypothesis, but both species were highly independently aggregated among standard cemetery vases and ovitraps, which is consistent with the aggregation hypothesis. Densities of A. aegypti but not A. albopictus differed among land use categories, with A. aegypti more abundant in ovitraps in residential areas compared to industrial and commercial areas. Spatial partitioning among land use types probably results from effects of land use on conditions in both terrestrial and aquatic-container environments. 
These results suggest that both temporal and spatial variation may contribute to local coexistence between these Aedes in urban areas. PMID:19263086
Spatial and temporal patterns of coexistence between competing Aedes mosquitoes in urban Florida.
Leisnham, Paul T; Juliano, S A
2009-05-01
Understanding mechanisms fostering coexistence between invasive and resident species is important in predicting ecological, economic, or health impacts of invasive species. The mosquito Aedes aegypti coexists at some urban sites in the southeastern United States with invasive Aedes albopictus, which is often superior in interspecific competition. We tested predictions for three hypotheses of species coexistence: seasonal condition-specific competition, aggregation among individual water-filled containers, and colonization-competition tradeoff across spatially partitioned habitat patches (cemeteries) that have high densities of containers. We measured spatial and temporal patterns of abundance for both species among water-filled resident cemetery vases and experimentally positioned standard cemetery vases and ovitraps in metropolitan Tampa, Florida. Consistent with the seasonal condition-specific competition hypothesis, abundances of both species in resident and standard cemetery vases were higher early in the wet season (June) versus late in the wet season (September), but the proportional increase of A. albopictus was greater than that of A. aegypti, presumably due to higher dry-season egg mortality and strong wet-season competitive superiority of larval A. albopictus. Spatial partitioning was not evident among cemeteries, a result inconsistent with the colonization-competition tradeoff hypothesis, but both species were highly independently aggregated among standard cemetery vases and ovitraps, which is consistent with the aggregation hypothesis. Densities of A. aegypti but not A. albopictus differed among land use categories, with A. aegypti more abundant in ovitraps in residential areas compared to industrial and commercial areas. Spatial partitioning among land use types probably results from effects of land use on conditions in both terrestrial and aquatic-container environments. 
These results suggest that both temporal and spatial variation may contribute to local coexistence between these Aedes in urban areas.
Avulsion research using flume experiments and highly accurate, temporally rich SfM datasets
NASA Astrophysics Data System (ADS)
Javernick, L.; Bertoldi, W.; Vitti, A.
2017-12-01
SfM's ability to produce high-quality, large-scale digital elevation models (DEMs) of complicated and rapidly evolving systems has made it a valuable technique for low-budget researchers and practitioners. While SfM has provided valuable datasets that capture single-flood event DEMs, there is an increasing scientific need to capture higher temporal resolution datasets that can quantify the evolutionary processes instead of pre- and post-flood snapshots. However, flood events' dangerous field conditions and image matching challenges (e.g. wind, rain) prevent quality SfM-image acquisition. Conversely, flume experiments offer opportunities to document flood events, but achieving consistent and accurate DEMs to detect subtle changes in dry and inundated areas remains a challenge for SfM (e.g. parabolic error signatures). This research aimed to investigate the impact of naturally occurring and manipulated avulsions on braided river morphology and on the encroachment of floodplain vegetation, using laboratory experiments. This required DEMs with millimeter accuracy and precision and a temporal resolution sufficient to capture the processes; SfM was chosen as it offered the most practical method. Through redundant local network design and a meticulous ground control point (GCP) survey with a Leica Total Station in red laser configuration (reported 2 mm accuracy), the SfM residual errors compared to separate ground truthing data produced mean errors of 1.5 mm (accuracy) and standard deviations of 1.4 mm (precision) without parabolic error signatures. Lighting conditions in the flume were limited to uniform, oblique, and filtered LED strips, which removed glint and thus improved bed elevation mean errors to 4 mm; errors were further reduced by means of open-source software for refraction correction. 
The obtained datasets have provided the ability to quantify how small flood events with avulsion can have similar morphologic and vegetation impacts as large flood events without avulsion. Further, this research highlights the potential application of SfM in the laboratory and ability to document physical and biological processes at greater spatial and temporal resolution. Marie Sklodowska-Curie Individual Fellowship: River-HMV, 656917
Optical Oversampled Analog-to-Digital Conversion
1992-06-29
hologram weights and interconnects in the digital image halftoning configuration. First, no temporal error diffusion occurs in the digital image... halftoning error diffusion architecture as demonstrated by Equation (6.1). Equation (6.2) ensures that the hologram weights sum to one so that the exact... optimum halftone image should be faster. Similarly, decreased convergence time suggests that an error diffusion filter with larger spatial dimensions
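The error diffusion referenced in this excerpt is, in its classic digital form, Floyd-Steinberg halftoning. A minimal sketch of that standard algorithm (illustrative of the general technique only, not of the optical oversampling architecture described in the report):

```python
import numpy as np

def error_diffusion_halftone(img):
    """Floyd-Steinberg error diffusion: quantize each pixel of a
    grayscale image (values in [0, 1]) to 0 or 1, then push the
    quantization error onto not-yet-processed neighbours so that local
    average intensity is preserved."""
    g = img.astype(float).copy()
    h, w = g.shape
    out = np.zeros_like(g)
    for y in range(h):
        for x in range(w):
            new = 1.0 if g[y, x] >= 0.5 else 0.0
            out[y, x] = new
            err = g[y, x] - new
            if x + 1 < w:
                g[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    g[y + 1, x - 1] += err * 3 / 16
                g[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    g[y + 1, x + 1] += err * 1 / 16
    return out
```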
Monitoring gait in multiple sclerosis with novel wearable motion sensors.
Moon, Yaejin; McGinnis, Ryan S; Seagers, Kirsten; Motl, Robert W; Sheth, Nirav; Wright, John A; Ghaffari, Roozbeh; Sosnoff, Jacob J
2017-01-01
Mobility impairment is common in people with multiple sclerosis (PwMS) and there is a need to assess mobility in remote settings. Here, we apply a novel wireless, skin-mounted, and conformal inertial sensor (BioStampRC, MC10 Inc.) to examine gait characteristics of PwMS under controlled conditions. We determine the accuracy and precision of BioStampRC in measuring gait kinematics by comparing to contemporary research-grade measurement devices. A total of 45 PwMS, who presented with diverse walking impairment (Mild MS = 15, Moderate MS = 15, Severe MS = 15), and 15 healthy control subjects participated in the study. Participants completed a series of clinical walking tests. During the tests participants were instrumented with BioStampRC and MTx (Xsens, Inc.) sensors on their shanks, as well as an activity monitor GT3X (Actigraph, Inc.) on their non-dominant hip. Shank angular velocity was simultaneously measured with the inertial sensors. Step number and temporal gait parameters were calculated from the data recorded by each sensor. Visual inspection and the MTx served as the reference standards for computing the step number and temporal parameters, respectively. Accuracy (error) and precision (variance of error) were assessed based on absolute and relative metrics. Temporal parameters were compared across groups using ANOVA. Mean accuracy±precision for the BioStampRC was 2±2 steps error for step number, 6±9 ms error for stride time and 6±7 ms error for step time (0.6-2.6% relative error). Swing time had the least accuracy±precision (25±19 ms error, 5±4% relative error) among the parameters. GT3X had the least accuracy±precision (8±14% relative error) in step number estimate among the devices. Both MTx and BioStampRC detected significantly distinct gait characteristics between PwMS with different disability levels (p<0.01). 
BioStampRC sensors accurately and precisely measure gait parameters in PwMS across diverse walking impairment levels and detected differences in gait characteristics by disability level in PwMS. This technology has the potential to provide granular monitoring of gait both inside and outside the clinic.
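Accuracy±precision summaries of the kind reported above pair a central error measure with the spread of the error against a reference device. A sketch of one common definition (illustrative; the study's exact error definitions may differ):

```python
def accuracy_precision(measured, reference):
    """Accuracy as the mean absolute error between a device and a
    reference, and precision as the sample standard deviation of the
    signed error (i.e., the variability of the error)."""
    errors = [m - r for m, r in zip(measured, reference)]
    n = len(errors)
    mae = sum(abs(e) for e in errors) / n
    mean_err = sum(errors) / n
    sd = (sum((e - mean_err) ** 2 for e in errors) / (n - 1)) ** 0.5
    return mae, sd
```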
Streak camera based SLR receiver for two color atmospheric measurements
NASA Technical Reports Server (NTRS)
Varghese, Thomas K.; Clarke, Christopher; Oldham, Thomas; Selden, Michael
1993-01-01
To realize accurate two-color differential measurements, an image digitizing system with variable spatial resolution was designed, built, and integrated with a photon-counting picosecond streak camera, yielding a temporal scan resolution better than 300 femtoseconds/pixel. The streak camera is configured to operate with 3 spatial channels; two of these support green (532 nm) and uv (355 nm) while the third accommodates reference pulses (764 nm) for real-time calibration. Critical parameters affecting differential timing accuracy such as pulse width and shape, number of received photons, streak camera/imaging system nonlinearities, dynamic range, and noise characteristics were investigated to optimize the system for accurate differential delay measurements. The streak camera output image consists of three image fields; each field is 1024 pixels along the time axis and 16 pixels across the spatial axis. Each of the image fields may be independently positioned across the spatial axis. Two of the image fields are used for the two wavelengths used in the experiment; the third window measures the temporal separation of a pair of diode laser pulses which verify the streak camera sweep speed for each data frame. The sum of the 16 pixel intensities across each of the 1024 temporal positions for the three data windows is used to extract the three waveforms. The waveform data are processed using an iterative three-point running-average filter (10 to 30 iterations are used) to remove high-frequency structure. The pulse pair separations are determined using half-max and centroid-type analyses. Rigorous experimental verification has demonstrated that this simplified process provides the best measurement accuracy. To calibrate the receiver system sweep, two laser pulses with precisely known temporal separation are scanned along the full length of the sweep axis. The experimental measurements are then modeled using polynomial regression to obtain a best fit to the data. 
Data aggregation using the normal-point approach has provided accurate data fitting and is found to be much more convenient than using the full-rate, single-shot data. The systematic errors from this model have been found to be less than 3 ps for normal points.
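The iterated three-point running average and centroid timing described above can be sketched as follows; the endpoint handling (holding the first and last samples fixed) is an assumption, not stated in the abstract:

```python
def smooth(waveform, iterations=10):
    """Iterated three-point running average; endpoints are held fixed.
    Each pass replaces interior samples with the mean of themselves and
    their two neighbours, progressively removing high-frequency structure."""
    w = list(waveform)
    for _ in range(iterations):
        w = [w[0]] + [(w[i - 1] + w[i] + w[i + 1]) / 3
                      for i in range(1, len(w) - 1)] + [w[-1]]
    return w

def centroid(waveform):
    """Intensity-weighted mean sample index of a pulse; the difference of
    two such centroids gives a pulse-pair separation in pixels."""
    total = sum(waveform)
    return sum(i * v for i, v in enumerate(waveform)) / total
```

Note that the smoothing is symmetric, so the centroid of a pulse is unchanged by it; that is one reason centroid-type timing tolerates heavy filtering.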
NASA Astrophysics Data System (ADS)
El Serafy, Ghada; Gaytan Aguilar, Sandra; Ziemba, Alexander
2016-04-01
There is an increasing use of process-based models in the investigation of ecological systems and scenario predictions. The accuracy and quality of these models improve when they are run with high spatial and temporal resolution data sets. However, ecological data can often be difficult to collect, which manifests itself through irregularities in the spatial and temporal domain of these data sets. Through the use of the Data INterpolating Empirical Orthogonal Functions (DINEOF) methodology, earth observation products can be improved to have full spatial coverage within the desired domain as well as increased temporal resolution to the daily and weekly time steps frequently required by process-based models [1]. The DINEOF methodology introduces a degree of error into the refined data product. In order to determine the degree of error introduced through this process, suspended particulate matter and chlorophyll-a data from MERIS are used with DINEOF to produce high-resolution products for the Wadden Sea. These new data sets are then compared with in-situ and other data sources to determine the error. Also, artificial cloud cover scenarios are conducted in order to substantiate the findings from the MERIS data experiments. Secondly, the accuracy of DINEOF is explored to evaluate the variance of the methodology. The degree of accuracy is combined with the overall error produced by the methodology and reported in an assessment of the quality of DINEOF when applied to resolution refinement of chlorophyll-a and suspended particulate matter in the Wadden Sea. References [1] Sirjacobs, D.; Alvera-Azcárate, A.; Barth, A.; Lacroix, G.; Park, Y.; Nechad, B.; Ruddick, K.G.; Beckers, J.-M. (2011). Cloud filling of ocean colour and sea surface temperature remote sensing products over the Southern North Sea by the Data Interpolating Empirical Orthogonal Functions methodology. J. Sea Res. 65(1): 114-130. dx.doi.org/10.1016/j.seares.2010.08.002
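The core of a DINEOF-style gap filling is an iterative truncated-SVD reconstruction; the following is a highly simplified sketch of that idea (the operational algorithm additionally selects the number of retained modes by cross-validation, which is omitted here):

```python
import numpy as np

def dineof_fill(X, n_modes=2, n_iter=50):
    """Minimal DINEOF-style gap filling for a 2-D (space x time) field
    with NaN gaps: initialize missing values with the field mean, then
    repeatedly reconstruct the matrix from its leading EOF modes
    (truncated SVD) and update only the missing entries."""
    mask = np.isnan(X)
    filled = np.where(mask, np.nanmean(X), X)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        filled[mask] = recon[mask]  # observed entries are never altered
    return filled
```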
Real Time Land-Surface Hydrologic Modeling Over Continental US
NASA Technical Reports Server (NTRS)
Houser, Paul R.
1998-01-01
The land surface component of the hydrological cycle is fundamental to the overall functioning of atmospheric and climate processes. Spatially and temporally variable rainfall and available energy, combined with land surface heterogeneity, cause complex variations in all processes related to surface hydrology. The characterization of the spatial and temporal variability of water and energy cycles is critical to improving our understanding of land surface-atmosphere interaction and the impact of land surface processes on climate extremes. Because accurate knowledge of these processes and their variability is important for climate predictions, most Numerical Weather Prediction (NWP) centers have incorporated land surface schemes in their models. However, errors in the NWP forcing accumulate in the surface water and energy stores, leading to incorrect surface water and energy partitioning and related processes. This has motivated NWP centers to impose ad hoc corrections to the land surface states to prevent this drift. A proposed methodology is to develop Land Data Assimilation Schemes (LDAS), which are uncoupled models forced with observations and therefore not affected by NWP forcing biases. The proposed research is being implemented as a real time operation using an existing Surface Vegetation Atmosphere Transfer Scheme (SVATS) model at a 40-km resolution across the United States to evaluate these critical science questions. The model will be forced with real time output from numerical prediction models, satellite data, and radar precipitation measurements. Model parameters will be derived from the existing GIS vegetation and soil coverages. The model results will be aggregated to various scales to assess water and energy balances and these will be validated with various in-situ observations.
NASA Astrophysics Data System (ADS)
Taucher, Jan; Stange, Paul; Algueró-Muñiz, María; Bach, Lennart T.; Nauendorf, Alice; Kolzenburg, Regina; Büdenbender, Jan; Riebesell, Ulf
2018-05-01
Particle aggregation and the consequent formation of marine snow alter important properties of biogenic particles (size, sinking rate, degradability), thus playing a key role in controlling the vertical flux of organic matter to the deep ocean. However, there are still large uncertainties about rates and mechanisms of particle aggregation, as well as the role of plankton community structure in modifying biomass transfer from small particles to large fast-sinking aggregates. Here we present data from a high-resolution underwater camera system that we used to observe particle size distributions and formation of marine snow (aggregates >0.5 mm) over the course of a 9-week in situ mesocosm experiment in the Eastern Subtropical North Atlantic. After an oligotrophic phase of almost 4 weeks, addition of nutrient-rich deep water (650 m) initiated the development of a pronounced diatom bloom and the subsequent formation of large marine snow aggregates in all 8 mesocosms. We observed a substantial time lag between the peaks of chlorophyll a and marine snow biovolume of 9-12 days, which is much longer than previously reported and indicates a marked temporal decoupling of phytoplankton growth and marine snow formation during our study. Despite this time lag, our observations revealed substantial transfer of biomass from small particle sizes (single phytoplankton cells and chains) to marine snow aggregates of up to 2.5 mm diameter (ESD), with most of the biovolume being contained in the 0.5-1 mm size range. Notably, the abundance and community composition of mesozooplankton had a substantial influence on the temporal development of particle size spectra and formation of marine snow aggregates: While higher copepod abundances were related to reduced aggregate formation and biomass transfer towards larger particle sizes, the presence of appendicularia and doliolids enhanced formation of large marine snow. 
Furthermore, we combined in situ particle size distributions with measurements of particle sinking velocity to compute instantaneous (potential) vertical mass flux. Somewhat surprisingly, however, we did not find a coherent relationship between our computed flux and measured vertical mass flux (collected by sediment traps at 15 m depth). Although the onset of measured vertical flux roughly coincided with the emergence of marine snow, we found substantial variability in mass flux among mesocosms that was not related to marine snow numbers, and was instead presumably driven by zooplankton-mediated alteration of sinking biomass and export of small particles (fecal pellets). Altogether, our findings highlight the role of zooplankton community composition and feeding interactions in shaping particle size spectra and the formation of marine snow aggregates, with important implications for our understanding of particle aggregation and vertical flux of organic matter in the ocean.
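The instantaneous (potential) flux computation described above combines a measured size spectrum with particle sinking speeds. A minimal sketch of that kind of calculation, assuming spherical particles and a power-law size-to-sinking-speed relation; all bin values and coefficients are illustrative placeholders, not data from the study:

```python
import numpy as np

# Hypothetical particle size bins (equivalent spherical diameter, mm)
# and number concentrations per bin -- illustrative values only.
d = np.array([0.1, 0.25, 0.5, 1.0, 2.0])      # bin midpoints, mm
n = np.array([500.0, 80.0, 10.0, 1.5, 0.2])   # abundance per bin, L^-1

# Biovolume per particle, assuming spheres: V = (pi/6) d^3  (mm^3)
volume = np.pi / 6.0 * d**3

# Sinking speed from an assumed power law w = A * d^B (m per day);
# A and B are placeholder coefficients, not values from the study.
A, B = 50.0, 0.6
w = A * d**B

# Instantaneous (potential) biovolume flux per bin and in total:
# material crossing a horizontal plane per unit time.
flux_per_bin = n * volume * w
total_flux = flux_per_bin.sum()
```

With a spectrum like this, the largest (rarest) particles dominate the computed flux, which is why a time lag in marine snow formation decouples flux from bulk chlorophyll.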
Zgliczynski, Brian J.; Teer, Bradford Z.; Laughlin, Joseph L.
2014-01-01
The giant bumphead parrotfish (Bolbometopon muricatum) has experienced precipitous population declines throughout its range due to its importance as a highly prized fishery target and cultural resource. Because of its diet, Bolbometopon may serve as a keystone species on Indo-Pacific coral reefs, yet comprehensive descriptions of its reproductive ecology do not exist. We used a variety of underwater visual census (UVC) methods to study an intact population of Bolbometopon at Wake Atoll, a remote and protected coral atoll in the west Pacific. Key observations include spawning activities in the morning around the full and last quarter moon, with possible spawning extending to the new moon. We observed peaks in aggregation size just prior to and following the full and last quarter moon, respectively, and observed a distinct break in spawning at the site that persisted for four days; individuals returned to the aggregation site one day prior to the last quarter moon and resumed spawning the following day. The mating system was lek-based, characterized by early male arrival at the spawning site followed by vigorous defense (including head-butting between large males) of small territories. These territories were apparently used to attract females that arrived later in large schools, causing substantial changes in the sex ratio on the aggregation site at any given time during the morning spawning period. Aggression between males and courtship of females led to pair spawning within the upper water column. Mating interference was not witnessed, but we noted instances suggesting that sperm competition might occur. Densities of Bolbometopon on the aggregation site averaged 10.07 (±3.24 SE) fish per hectare (ha), with maximum densities of 51.5 fish per ha.
By comparing our observations to the results of biennial surveys conducted by the National Oceanic and Atmospheric Administration (NOAA) Coral Reef Ecosystem Division (CRED), we confirmed spatial consistency of the aggregation across years as well as a temporal break in spawning activity and aggregation that occurred during the lunar phase. We estimated the area encompassed by the spawning aggregation to be 0.72 ha, suggesting that spawning site closures and temporal closures centered around the full to the new moon might form one component of a management and conservation plan for this species. Our study of the mating system and spawning aggregation behavior of Bolbometopon from the protected, relatively pristine population at Wake Atoll provides crucial baselines of population density, sex ratio composition, and productivity of a spawning aggregation site from an oceanic atoll. Such information is key for conservation efforts and provides a basic platform for the design of marine protected areas for this threatened iconic coral reef fish, as well as for species with similar ecological and life history characteristics. PMID:25469322
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-01-01
The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. 
Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks. PMID:27105653
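The reflecting-boundary idea can be illustrated on the simplest of the four example networks, the birth and death model. In the sketch below (not the authors' code; rates are arbitrary), the truncated dCME is solved directly for its steady-state distribution, and the probability mass on the reflecting boundary state serves as an a posteriori indicator of the truncation error:

```python
import numpy as np

# Birth-death dCME on the truncated state space {0, ..., N} with a
# reflecting upper boundary. Rates are illustrative placeholders.
k_birth = 5.0    # constant synthesis rate
k_death = 1.0    # per-molecule degradation rate

def steady_state(N):
    # Generator matrix A of the truncated chain (each column sums to zero),
    # so that dp/dt = A @ p.
    A = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        if n < N:                       # birth n -> n+1 (blocked at N)
            A[n + 1, n] += k_birth
            A[n, n] -= k_birth
        if n > 0:                       # death n -> n-1
            A[n - 1, n] += k_death * n
            A[n, n] -= k_death * n
    # Solve A p = 0 together with sum(p) = 1 via least squares.
    M = np.vstack([A, np.ones(N + 1)])
    b = np.zeros(N + 2)
    b[-1] = 1.0
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    return p

p = steady_state(25)
boundary_mass = p[-1]   # probability on the reflecting boundary state
```

For these rates the exact steady state is Poisson with mean k_birth/k_death = 5, so a truncation at N = 25 leaves negligible mass on the boundary; increasing the rates while keeping N fixed inflates `boundary_mass` and flags an inadequate truncation.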
Tracking calcification in tissue-engineered bone using synchrotron micro-FTIR and SEM.
Deegan, Anthony J; Cinque, Gianfelice; Wehbe, Katia; Konduru, Sandeep; Yang, Ying
2015-02-01
One novel tissue engineering approach to mimic in vivo bone formation is the use of aggregate or micromass cultures. Various qualitative and quantitative techniques, such as histochemical staining, protein assay kits and RT-PCR, have been used previously on cellular aggregate studies to investigate how these intricate arrangements lead to mature bone tissue. However, these techniques struggle to reveal the spatial and temporal distribution of proliferation and mineralization simultaneously. Synchrotron-based Fourier transform infrared microspectroscopy (micro-FTIR) offers a unique insight at the molecular scale by coupling high IR sensitivity to organic matter with the high spatial resolution allowed by a diffraction-limited SR microbeam. This study set out to investigate the effects of culture duration and aggregate size on the dynamics and spatial distribution of calcification in engineered bone aggregates by a combination of micro-FTIR and scanning electron microscopy (SEM)/energy-dispersive X-ray spectroscopy (EDX). A murine bone cell line was used, and small/large bone aggregates were induced using different chemically treated culture substrates. Our findings suggest that bone cell aggregate culturing can greatly increase levels of mineralization over short culture periods. The size of the aggregates influences mineralization rates, with larger aggregates mineralizing at a faster rate than their smaller counterparts. The micro-FTIR mapping demonstrated that mineralization in the larger aggregates initiated at the periphery and spread to the centre, whilst the smaller aggregates had more mineral in the centre at the early stage and deposited more at the periphery after further culturing, implying that aggregate size influences calcification distribution and development over time. SEM/EDX data correlate well with the micro-FTIR results for total mineral content. 
Thus, synchrotron-based micro-FTIR can accurately track the mineralization process in engineered bone.
EnKF with closed-eye period - bridging intermittent model structural errors in soil hydrology
NASA Astrophysics Data System (ADS)
Bauser, Hannes H.; Jaumann, Stefan; Berg, Daniel; Roth, Kurt
2017-04-01
The representation of soil water movement exposes uncertainties in all model components, namely dynamics, forcing, subscale physics and the state itself. Model structural errors in the description of the dynamics are especially difficult to represent and can lead to inconsistent estimation of the other components. We address the challenge of consistently aggregating information for a manageable specific hydraulic situation: a 1D soil profile with TDR-measured water contents over a time period of less than 2 months. We assess the uncertainties for this situation and identify the initial condition, soil hydraulic parameters, small-scale heterogeneity, the upper boundary condition, and (during rain events) the local equilibrium assumption underlying the Richards equation as the most important ones. We employ an iterative Ensemble Kalman Filter (EnKF) with an augmented state. Based on a single rain event, we are able to reduce all uncertainties directly, except for the intermittent violation of the local equilibrium assumption. We detect these times by analyzing the temporal evolution of the estimated parameters. By introducing a closed-eye period, during which we do not estimate parameters but only guide the state based on measurements, we can bridge these times. The introduced closed-eye period ensured constant parameters, suggesting that they resemble the presumed true material properties. The closed-eye period improves predictions during periods when the local equilibrium assumption is met, but consequently worsens predictions when the assumption is violated. Such a prediction requires a description of the dynamics during local non-equilibrium phases, which remains an open challenge.
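An augmented-state EnKF with a closed-eye switch can be sketched in a few lines. This is a schematic single analysis step, not the authors' implementation: the augmented vector stacks water contents with one hydraulic parameter, and during the closed-eye period the parameter rows of the Kalman gain are zeroed so only the state is guided by the measurement. All dimensions and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Augmented ensemble: 3 water contents plus 1 soil parameter.
n_state, n_ens = 3, 50
H = np.zeros((1, n_state + 1))
H[0, 1] = 1.0                  # observe the 2nd water content (TDR-like)
R = np.array([[1e-4]])         # observation error variance

def enkf_update(ens, y_obs, closed_eye=False):
    """One perturbed-observation EnKF analysis step.
    ens: (n_state+1, n_ens) augmented ensemble [water contents; parameter]."""
    X = ens - ens.mean(axis=1, keepdims=True)
    P = X @ X.T / (n_ens - 1)                     # ensemble covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    if closed_eye:
        K[-1, :] = 0.0          # freeze the parameter during closed-eye
    y_pert = y_obs + rng.normal(0.0, np.sqrt(R[0, 0]), size=(1, n_ens))
    return ens + K @ (y_pert - H @ ens)

ens = np.vstack([0.3 + 0.02 * rng.standard_normal((n_state, n_ens)),
                 2.0 + 0.1 * rng.standard_normal((1, n_ens))])
updated = enkf_update(ens.copy(), y_obs=0.35, closed_eye=True)
```

Zeroing the parameter rows of the gain is the simplest way to express "guide the state, do not estimate parameters"; the parameter ensemble passes through the rain event unchanged and parameter estimation resumes once the local equilibrium assumption holds again.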
The Impact of Temporal Aggregation of Land Surface Temperature Data for Urban Heat Island Monitoring
NASA Astrophysics Data System (ADS)
Hu, L.; Brunsell, N. A.
2012-12-01
Temporally composited remote sensing products are widely used in monitoring the urban heat island (UHI). To quantify the impact of temporal aggregation on UHI assessment, we examined 11 years of MODIS land surface temperature (LST) products for Houston, Texas and its surroundings. Using daily LST from 2000 to 2010, urban and rural daily LST were compared at 8-day and annual scales for both day and night. Statistics based on the rural-urban LST differences show that the 8-day composite mean UHI is generally more intense than that calculated from daily UHI images. Moreover, the seasonal pattern shows that the summer daytime UHI has the largest magnitude and variation, while nighttime UHI magnitudes are much smaller and less variable. Regression analyses reinforce these results, showing an apparently higher UHI derived from the 8-day composite dataset. Comparison of the summer mean UHI maps indicates a land-cover-related pattern. We introduced the yearly MODIS land cover type product to explore the spatial differences caused by temporal aggregation of the LST product. The mean bias associated with land cover type is approximately 0.5-0.7 K during the daytime and less than 0.1 K at night. The potential causes of the higher UHI are discussed. The analysis shows that land-atmosphere interactions, which drive regional cloud formation, are the primary cause.
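One way the compositing effect described above can arise is pure sampling bias: clear-sky composites omit cloudy days, and if the UHI is systematically weaker under cloud, the composite mean exceeds the all-days mean. A toy illustration with invented magnitudes (not the MODIS data used in the study):

```python
import numpy as np

rng = np.random.default_rng(1)

# 44 eight-day periods of synthetic daily urban-rural LST differences (K).
# Cloudy days are assigned a weaker UHI; clear-sky compositing then
# samples only the strong-UHI days. All magnitudes are invented.
n_days = 352
cloudy = rng.random(n_days) < 0.4          # ~40% of days are cloudy
uhi = np.where(cloudy,
               rng.normal(0.5, 0.3, n_days),   # weak UHI under cloud
               rng.normal(2.0, 0.3, n_days))   # strong UHI on clear days

all_days_mean = uhi.mean()                 # what daily data would give
composite_mean = uhi[~cloudy].mean()       # what a clear-sky composite gives
```

The composite mean is biased high simply because the low-UHI (cloudy) days never enter it, consistent with the paper's attribution of the discrepancy to regionally driven cloud formation.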
The chorus environment and the shape of communication systems in frogs
NASA Astrophysics Data System (ADS)
Marshall, Vince
2003-04-01
Many species of frogs breed in dense and structurally complex aggregations of calling males termed choruses. Females entering a chorus are faced with the tasks of recognizing and locating mates on the basis of their advertisement calls. The chorus environment poses particular challenges for communication, as signalers and receivers face high levels of background noise and interference between signals. For females, such conditions may decrease the efficiency of communication, with the consequences of increasing the time required to find a mate or of errors in mate choice. For males, it gives rise to intense competition for the attention of females. Additionally, the chorus environment for a species is not static, and will vary over both spatial and temporal scales. This complex and dynamic environment has shaped the signals and signaling behaviors of frogs in sometimes surprising ways. In this talk, some of the implications of the chorus environment for both receivers and signalers are discussed. In particular, examples from North American hylid frogs are drawn upon, and research on the role of signal timing in influencing the responses of females and on plasticity in aggressive behavior between chorus neighbors is discussed.
Synchrony between reanalysis-driven RCM simulations and observations: variation with time scale
NASA Astrophysics Data System (ADS)
de Elía, Ramón; Laprise, René; Biner, Sébastien; Merleau, James
2017-04-01
Unlike coupled global climate models (CGCMs) that run in a stand-alone mode, nested regional climate models (RCMs) are driven by either a CGCM or a reanalysis dataset. This feature makes high correlations between the RCM simulation and its driver possible. When the driving dataset is a reanalysis, time correlations between RCM output and observations are also common and to be expected. In certain situations time correlation between driver and driven RCM is of particular interest and techniques have been developed to increase it (e.g. large-scale spectral nudging). For such cases, a question that remains open is whether aggregating in time increases the correlation between RCM output and observations. That is, although the RCM may be unable to reproduce a given daily event, whether it will still be able to satisfactorily simulate an anomaly on a monthly or annual basis. This is a preconception that the authors of this work and others in the community have held, perhaps as a natural extension of the properties of upscaling or aggregating other statistics such as the mean squared error. Here we explore analytically four particular cases that help us partially answer this question. In addition, we use observations datasets and RCM-simulated data to illustrate our findings. Results indicate that time upscaling does not necessarily increase time correlations, and that those interested in achieving high monthly or annual time correlations between RCM output and observations may have to do so by increasing correlation as much as possible at the shortest time scale. This may indicate that even when only concerned with time correlations at large temporal scale, large-scale spectral nudging acting at the time-step level may have to be used.
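The central point, that time aggregation need not increase correlation, can be shown with a small counterexample: if the signal shared between model and observations is fast while the discrepancies vary slowly, monthly averaging removes the shared variance and keeps the errors. A synthetic sketch (illustrative, not RCM data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Shared fast (daily) signal plus independent slowly varying components
# that are constant within each month. Units are arbitrary.
n_months, days = 120, 30
fast = rng.standard_normal(n_months * days)               # shared daily signal
slow_obs = np.repeat(rng.standard_normal(n_months), days)  # independent slow
slow_mod = np.repeat(rng.standard_normal(n_months), days)  # monthly "biases"

obs = fast + slow_obs
mod = fast + slow_mod

daily_r = np.corrcoef(obs, mod)[0, 1]
monthly_r = np.corrcoef(obs.reshape(n_months, days).mean(axis=1),
                        mod.reshape(n_months, days).mean(axis=1))[0, 1]
```

Here the daily correlation is near 0.5 while the monthly-mean correlation collapses toward zero, because averaging suppresses exactly the component the two series share. This mirrors the paper's conclusion that high monthly or annual correlations must be earned at the shortest time scale.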
Jones, John W.
2015-01-01
The U.S. Geological Survey is developing new Landsat science products. One, named Dynamic Surface Water Extent (DSWE), is focused on the representation of ground surface inundation as detected in cloud-/shadow-/snow-free pixels for scenes collected over the U.S. and its territories. Characterization of DSWE uncertainty to facilitate its appropriate use in science and resource management is a primary objective. A unique evaluation dataset developed from data made publicly available through the Everglades Depth Estimation Network (EDEN) was used to evaluate one candidate DSWE algorithm that is relatively simple, requires no scene-based calibration data, and is intended to detect inundation in the presence of marshland vegetation. A conceptual model of expected algorithm performance in vegetated wetland environments was postulated, tested and revised. Agreement scores were calculated at the level of scenes and vegetation communities, vegetation index classes, water depths, and individual EDEN gage sites for a variety of temporal aggregations. Landsat Archive cloud cover attribution errors were documented. Cloud cover had some effect on model performance. Error rates increased with vegetation cover. Relatively low error rates for locations with little or no vegetation were unexpectedly dominated by omission errors due to variable substrates and mixed-pixel effects. Examined discrepancies between satellite and in situ modeled inundation demonstrated the utility of such comparisons for EDEN database improvement. Importantly, there seems to be no trend or bias in candidate algorithm performance as a function of time or general hydrologic conditions, an important finding for long-term monitoring. The developed database and knowledge gained from this analysis will be used for improved evaluation of candidate DSWE algorithms as well as other measurements made on Everglades surface inundation, surface water heights and vegetation using radar, lidar and hyperspectral instruments. 
Although no other sites have such an extensive in situ network or long-term records, the broader applicability of this and other candidate DSWE algorithms is being evaluated in other wetlands using this work as a guide. Continued interaction among DSWE producers and potential users will help determine whether the measured accuracies are adequate for practical utility in resource management.
NASA Astrophysics Data System (ADS)
Lund, M. T.; Samset, B. H.; Skeie, R. B.; Berntsen, T.
2017-12-01
Several recent studies have used observations from the HIPPO flight campaigns to constrain the modeled vertical distribution of black carbon (BC) over the Pacific. Results indicate a relatively linear relationship between global-mean atmospheric BC residence time, or lifetime, and bias in current models. A lifetime of less than 5 days is necessary for models to reasonably reproduce these observations. This is shorter than what many global models predict, which will in turn affect their estimates of BC climate impacts. Here we use the chemistry-transport model OsloCTM to examine whether this relationship between global BC lifetime and model skill also holds for a broader set of flight campaigns from 2009-2013 covering both remote marine and continental regions at a range of latitudes. We perform four sets of simulations with varying scavenging efficiency to obtain a spread in the modeled global BC lifetime and calculate the model error and bias for each campaign and region. Vertical BC profiles are constructed using an online flight simulator, as well as by averaging and interpolating monthly mean model output, allowing us to quantify sampling errors arising when measurements are compared with model output at different spatial and temporal resolutions. Using the OsloCTM coupled with a microphysical aerosol parameterization, we investigate the sensitivity of the modeled BC vertical distribution to uncertainties in the aerosol aging and scavenging processes in more detail. From this, we can quantify how model uncertainties in the BC life cycle propagate into uncertainties in its climate impacts. For most campaigns and regions, a short global-mean BC lifetime corresponds with the lowest model error and bias. On an aggregated level, sampling errors appear to be small, but larger differences are seen in individual regions. 
However, we also find that model-measurement discrepancies in BC vertical profiles cannot be uniquely attributed to uncertainties in a single process or parameter, at least in this model. Model development therefore needs to focus on improvements to individual processes, supported by a broad range of observational and experimental data, rather than tuning individual, effective parameters such as global BC lifetime.
Temporal prediction errors modulate task-switching performance
Limongi, Roberto; Silva, Angélica M.; Góngora-Costa, Begoña
2015-01-01
We have previously shown that temporal prediction errors (PEs, the differences between the expected and the actual stimulus’ onset times) modulate the effective connectivity between the anterior cingulate cortex and the right anterior insular cortex (rAI), causing the activity of the rAI to decrease. The activity of the rAI is associated with efficient performance under uncertainty (e.g., changing a prepared behavior when a change demand is not expected), which led us to hypothesize that temporal PEs might disrupt behavior-change performance under uncertainty. This hypothesis has not been tested at a behavioral level. In this work, we evaluated this hypothesis within the context of task switching and concurrent temporal predictions. Our participants performed temporal predictions while observing one moving ball striking a stationary ball which bounced off with a variable temporal gap. Simultaneously, they performed a simple color comparison task. In some trials, a change signal made the participants change their behaviors. Performance accuracy decreased as a function of both the temporal PE and the delay. Explaining these results without appealing to ad hoc concepts such as “executive control” is a challenge for cognitive neuroscience. We provide a predictive coding explanation. We hypothesize that exteroceptive and proprioceptive minimization of PEs would converge in a fronto-basal ganglia network which would include the rAI. Temporal gaps (i.e., uncertainty) and temporal PEs would respectively drive and modulate this network. Whereas the temporal gaps would drive the activity of the rAI, the temporal PEs would modulate the endogenous excitatory connections of the fronto-striatal network. We conclude that in the context of perceptual uncertainty, the system is not able to minimize perceptual PE, causing the ongoing behavior to terminate and, in consequence, disrupting task switching. PMID:26379568
The function of the left anterior temporal pole: evidence from acute stroke and infarct volume
Tsapkini, Kyrana; Frangakis, Constantine E.
2011-01-01
The role of the anterior temporal lobes in cognition and language has been much debated in the literature over the last few years. Most prevailing theories argue for an important role of the anterior temporal lobe as a semantic hub or a place for the representation of unique entities such as proper names of peoples and places. Lately, a few studies have investigated the role of the most anterior part of the left anterior temporal lobe, the left temporal pole in particular, and argued that the left anterior temporal pole is the area responsible for mapping meaning on to sound through evidence from tasks such as object naming. However, another recent study indicates that bilateral anterior temporal damage is required to cause a clinically significant semantic impairment. In the present study, we tested these hypotheses by evaluating patients with acute stroke before reorganization of structure–function relationships. We compared a group of 20 patients with acute stroke with anterior temporal pole damage to a group of 28 without anterior temporal pole damage matched for infarct volume. We calculated the average percent error in auditory comprehension and naming tasks as a function of infarct volume using a non-parametric regression method. We found that infarct volume was the only predictive variable in the production of semantic errors in both auditory comprehension and object naming tasks. This finding favours the hypothesis that left unilateral anterior temporal pole lesions, even acutely, are unlikely to cause significant deficits in mapping meaning to sound by themselves, although they contribute to networks underlying both naming and comprehension of objects. Therefore, the anterior temporal lobe may be a semantic hub for object meaning, but its role must be represented bilaterally and perhaps redundantly. PMID:21685458
A comparison of optical gradation analysis devices to current test methods--phase 2.
DOT National Transportation Integrated Search
2012-04-01
Optical devices are being developed to deliver accurate size and shape of aggregate particles with less labor, less consistency error, and greater reliability. This study was initiated to review the existing technology and generate basic data to ...
NASA Astrophysics Data System (ADS)
Bartkiewicz, Karol; Černoch, Antonín; Lemr, Karel; Miranowicz, Adam; Nori, Franco
2016-06-01
Temporal steering, which is a temporal analog of Einstein-Podolsky-Rosen steering, refers to temporal quantum correlations between the initial and final state of a quantum system. Our analysis of temporal steering inequalities in relation to the average quantum bit error rates reveals the interplay between temporal steering and quantum cloning, which guarantees the security of quantum key distribution based on mutually unbiased bases against individual attacks. The key distributions analyzed here include the Bennett-Brassard 1984 protocol and the six-state 1998 protocol by Bruss. Moreover, we define a temporal steerable weight, which enables us to identify a kind of monogamy of temporal correlation that is essential to quantum cryptography and useful for analyzing various scenarios of quantum causality.
Incorporating GIS and remote sensing for census population disaggregation
NASA Astrophysics Data System (ADS)
Wu, Shuo-Sheng 'Derek'
Census data are the primary source of demographic data for a variety of research and applications. For confidentiality and administrative purposes, census data are usually released to the public as aggregated areal units. In the United States, the smallest census unit is the census block. Due to data aggregation, users of census data may have problems visualizing population distribution within census blocks and estimating population counts for areas not coinciding with census block boundaries. The main purpose of this study is to develop methodology for estimating sub-block areal populations and assessing the estimation errors. The City of Austin, Texas was used as a case study area. Based on tax parcel boundaries and parcel attributes derived from ancillary GIS and remote sensing data, detailed urban land use classes were first classified using a per-field approach. After that, statistical models by land use class were built to infer population density from other predictor variables, including four census demographic statistics (the Hispanic percentage, the married percentage, the unemployment rate, and per capita income) and three physical variables derived from remote sensing images and building footprint vector data (a landscape heterogeneity statistic, a building pattern statistic, and a building volume statistic). In addition to statistical models, deterministic models were proposed to directly infer populations from building volumes and three housing statistics: the average space per housing unit, the housing unit occupancy rate, and the average household size. After the population models were derived or proposed, how well they predict populations for another set of sample blocks was assessed. The results show that the deterministic models were more accurate than the statistical models. 
Further, by simulating the base unit for modeling by aggregating blocks, I assessed how well the deterministic models estimate sub-unit-level populations. I also assessed the aggregation effects and the rescaling effects on sub-unit estimates. Lastly, from another set of mixed-land-use sample blocks, a mixed-land-use model was derived and compared with a residential-land-use model. The results of the per-field land use classification are satisfactory, with a Kappa statistic of 0.747. Model assessments by land use show that population estimates for multi-family land use areas have higher errors than those for single-family land use areas, and population estimates for mixed land use areas have higher errors than those for residential land use areas. The assessments of sub-unit estimates using a simulation approach indicate that smaller areas show higher estimation errors, that estimation errors do not relate to the base unit size, and that rescaling improves all levels of sub-unit estimates.
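The deterministic models described in this abstract infer population directly from building volume and housing statistics. A minimal sketch of that style of calculation (the function and parameter names are illustrative assumptions, not taken from the thesis):

```python
def deterministic_population(building_volume_m3, space_per_unit_m3,
                             occupancy_rate, avg_household_size):
    """Estimate the population of an area from residential building volume.

    Housing units are implied by total building volume divided by the
    average space per housing unit; occupied units are then scaled by
    the average household size.
    """
    housing_units = building_volume_m3 / space_per_unit_m3
    occupied_units = housing_units * occupancy_rate
    return occupied_units * avg_household_size
```

For example, 10,000 m³ of residential volume at 500 m³ per unit, 90% occupancy, and 2.5 persons per household yields an estimate of 45 people.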
Satellite-based high-resolution mapping of rainfall over southern Africa
NASA Astrophysics Data System (ADS)
Meyer, Hanna; Drönner, Johannes; Nauss, Thomas
2017-06-01
A spatially explicit mapping of rainfall over southern Africa is needed for eco-climatological studies and nowcasting, but accurate estimation remains a challenging task. This study presents a method to estimate hourly rainfall based on data from the Meteosat Second Generation (MSG) Spinning Enhanced Visible and Infrared Imager (SEVIRI). Rainfall measurements from about 350 weather stations from 2010-2014 served as ground truth for calibration and validation. SEVIRI and weather station data were used to train neural networks that allowed the estimation of rainfall area and rainfall quantities over all times of the day. The results revealed that 60 % of recorded rainfall events were correctly classified by the model (probability of detection, POD). However, the false alarm ratio (FAR) was high (0.80), leading to a Heidke skill score (HSS) of 0.18. Hourly rainfall quantities were estimated with an average hourly correlation of ρ = 0.33 and a root mean square error (RMSE) of 0.72. The correlation increased with temporal aggregation to 0.52 (daily), 0.67 (weekly) and 0.71 (monthly). The main weakness was the overestimation of rainfall events. The model results were compared to the Integrated Multi-satellitE Retrievals for GPM (IMERG) of the Global Precipitation Measurement (GPM) mission. Despite being a comparably simple approach, the presented MSG-based rainfall retrieval outperformed GPM IMERG in terms of rainfall area detection: GPM IMERG had a considerably lower POD. The HSS was not significantly different from that of the MSG-based retrieval due to a lower FAR of GPM IMERG. There were no further significant differences between the MSG-based retrieval and GPM IMERG in terms of correlation with the observed rainfall quantities. The MSG-based retrieval, however, provides rainfall at a higher spatial resolution. 
Though estimating rainfall from satellite data remains challenging, especially at high temporal resolutions, this study showed promising results towards improved spatio-temporal estimates of rainfall over southern Africa.
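The POD, FAR, and HSS scores quoted in this abstract are standard 2×2 contingency-table statistics for event detection; a minimal sketch using the standard textbook definitions (this is not code from the study):

```python
def verification_scores(hits, misses, false_alarms, correct_negatives):
    """Categorical verification scores for rain / no-rain classification.

    hits: events both observed and predicted; misses: observed only;
    false_alarms: predicted only; correct_negatives: neither.
    """
    n = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)                # probability of detection
    far = false_alarms / (hits + false_alarms)  # false alarm ratio
    # Heidke skill score: correct fraction relative to random chance
    expected = ((hits + misses) * (hits + false_alarms)
                + (misses + correct_negatives)
                * (false_alarms + correct_negatives)) / n
    hss = (hits + correct_negatives - expected) / (n - expected)
    return pod, far, hss
```

A hypothetical contingency table of 60 hits, 40 misses, 240 false alarms, and 660 correct negatives reproduces the combination of scores reported above: POD = 0.60, FAR = 0.80, HSS ≈ 0.18.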
Larson, Nicholas B; McDonnell, Shannon; Cannon Albright, Lisa; Teerlink, Craig; Stanford, Janet; Ostrander, Elaine A; Isaacs, William B; Xu, Jianfeng; Cooney, Kathleen A; Lange, Ethan; Schleutker, Johanna; Carpten, John D; Powell, Isaac; Bailey-Wilson, Joan E; Cussenot, Olivier; Cancel-Tassin, Geraldine; Giles, Graham G; MacInnis, Robert J; Maier, Christiane; Whittemore, Alice S; Hsieh, Chih-Lin; Wiklund, Fredrik; Catalona, William J; Foulkes, William; Mandal, Diptasri; Eeles, Rosalind; Kote-Jarai, Zsofia; Ackerman, Michael J; Olson, Timothy M; Klein, Christopher J; Thibodeau, Stephen N; Schaid, Daniel J
2017-05-01
Next-generation sequencing technologies have afforded unprecedented characterization of low-frequency and rare genetic variation. Due to low power for single-variant testing, aggregative methods are commonly used to combine observed rare variation within a single gene. Causal variation may also aggregate across multiple genes within relevant biomolecular pathways. Kernel-machine regression and adaptive testing methods for aggregative rare-variant association testing have been demonstrated to be powerful approaches for pathway-level analysis, although these methods tend to be computationally intensive at high-variant dimensionality and require access to complete data. An additional analytical issue in scans of large pathway definition sets is multiple testing correction. Gene set definitions may exhibit substantial genic overlap, and the impact of the resultant correlation in test statistics on Type I error rate control for large agnostic gene set scans has not been fully explored. Herein, we first outline a statistical strategy for aggregative rare-variant analysis using component gene-level linear kernel score test summary statistics as well as derive simple estimators of the effective number of tests for family-wise error rate control. We then conduct extensive simulation studies to characterize the behavior of our approach relative to direct application of kernel and adaptive methods under a variety of conditions. We also apply our method to two case-control studies, respectively, evaluating rare variation in hereditary prostate cancer and schizophrenia. Finally, we provide open-source R code for public use to facilitate easy application of our methods to existing rare-variant analysis results. © 2017 WILEY PERIODICALS, INC.
NASA Technical Reports Server (NTRS)
Li, Hui; Faruque, Fazlay; Williams, Worth; Al-Hamdan, Mohammad; Luvall, Jeffrey C.; Crosson, William; Rickman, Douglas; Limaye, Ashutosh
2009-01-01
Aerosol optical depth (AOD), an indirect estimate of particulate matter from satellite observations, has shown great promise in improving estimates of the PM2.5 air quality surface. Currently, few studies have explored the optimal way to apply AOD data to improve the accuracy of PM2.5 surface estimation in a real-time air quality system. We believe that two major aspects are worthy of consideration: 1) the approach to integrating satellite measurements with ground measurements in the pollution estimation, and 2) identification of an optimal temporal scale for calculating the correlation of AOD and ground measurements. This paper focuses on the second aspect: identifying the optimal temporal scale for correlating AOD with PM2.5. The following five temporal scales were chosen to evaluate their impact on model performance: 1) within the last 3 days, 2) within the last 10 days, 3) within the last 30 days, 4) within the last 90 days, and 5) the time period with the highest correlation in a year. The model performance is evaluated for accuracy, bias, and error using the following statistics: the Mean Bias, the Normalized Mean Bias, the Root Mean Square Error, the Normalized Mean Error, and the Index of Agreement. This research shows that the model with the temporal scale of within the last 30 days displays the best performance in this study area using 2004 and 2005 data sets.
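The five evaluation statistics listed in this abstract have standard definitions; a minimal sketch of computing them for paired observed/modeled values (function and variable names are illustrative, and the Index of Agreement here is Willmott's common form, which the paper may define slightly differently):

```python
import math

def performance_stats(obs, mod):
    """Mean Bias (MB), Normalized Mean Bias (NMB), RMSE,
    Normalized Mean Error (NME), and Index of Agreement (IOA)."""
    n = len(obs)
    diffs = [m - o for o, m in zip(obs, mod)]
    mb = sum(diffs) / n
    nmb = sum(diffs) / sum(obs)
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    nme = sum(abs(d) for d in diffs) / sum(obs)
    mean_obs = sum(obs) / n
    # IOA: 1 is perfect agreement; squared errors vs. "potential" error
    potential = sum((abs(m - mean_obs) + abs(o - mean_obs)) ** 2
                    for o, m in zip(obs, mod))
    ioa = 1 - sum(d * d for d in diffs) / potential
    return mb, nmb, rmse, nme, ioa
```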
Wilson, C R E; Baxter, M G; Easton, A; Gaffan, D
2008-04-01
Both frontal-inferotemporal disconnection and fornix transection (Fx) in the monkey impair object-in-place scene learning, a model of human episodic memory. If the contribution of the fornix to scene learning is via interaction with or modulation of frontal-temporal interaction--that is, if they form a unitary system--then Fx should have no further effect when added to frontal-temporal disconnection. However, if the contribution of the fornix is to some extent distinct, then fornix lesions may produce an additional deficit in scene learning beyond that caused by frontal-temporal disconnection. To distinguish between these possibilities, we trained three male rhesus monkeys on the object-in-place scene-learning task. We tested their learning on the task following frontal-temporal disconnection, achieved by crossed unilateral aspiration of the frontal cortex in one hemisphere and the inferotemporal cortex in the other, and again following the addition of Fx. The monkeys were significantly impaired in scene learning following frontal-temporal disconnection, and furthermore showed a significant increase in this impairment following the addition of Fx, from 32.8% error to 40.5% error (chance = 50%). The increased impairment following the addition of Fx provides evidence that the fornix and frontal-inferotemporal interaction make distinct contributions to episodic memory.
Characterizing local biological hotspots in the Gulf of Maine using remote sensing data
NASA Astrophysics Data System (ADS)
Ribera, Marta M.
Researchers increasingly advocate the use of ecosystem-based management (EBM) for managing complex marine ecosystems. This approach requires managers to focus on processes and cross-scale interactions, rather than individual components. However, they often lack appropriate tools and data sources to pursue this change in management approach. One method that has been proposed to understand the ecological complexity inherent in marine ecosystems is the study of biological hotspots. Biological hotspots are locations where organisms from different trophic levels aggregate to feed on abundant food supplies, and they are considered a first step toward understanding the processes driving spatial and temporal heterogeneity in marine systems. Biological hotspots are supported by phytoplankton aggregations, which are characterized by high spatial and temporal variability. As a result, methods developed to locate biological hotspots in relatively stable terrestrial systems are not well suited for more dynamic marine ecosystems. The main objective of this thesis is thus to identify and characterize local-scale biological hotspots on the western side of the Gulf of Maine. The first chapter describes a new methodological framework with the steps needed to locate these types of hotspots in marine ecosystems using remote sensing datasets. Then, in the second chapter these hotspots are characterized using a novel metric that uses time series information and spatial statistics to account for both the temporal variability and spatial structure of these marine aggregations. This metric redefines biological hotspots as areas with a high probability of exhibiting positive anomalies of productivity compared to the expected regional seasonal pattern. 
Finally, the third chapter compares the resulting biological hotspots to fishery-dependent abundance indices of surface and benthic predators to determine the effect of the location and magnitude of phytoplankton aggregations on the rest of the ecosystem. Analyses indicate that the spatial scale and magnitude of biological hotspots in the Gulf of Maine depend on the location and time of the year. Results also show that these hotspots change over time in response to both short-term oceanographic processes and long-term climatic cycles. Finally, the new metric presented here facilitates the spatial comparison between different trophic levels, thus allowing interdisciplinary ecosystem-wide studies.
Pickard, Alexandria E; Vaudo, Jeremy J; Wetherbee, Bradley M; Nemeth, Richard S; Blondeau, Jeremiah B; Kadison, Elizabeth A; Shivji, Mahmood S
2016-01-01
Understanding of species interactions within mesophotic coral ecosystems (MCEs; ~ 30-150 m) lags well behind that for shallow coral reefs. MCEs are often sites of fish spawning aggregations (FSAs) for a variety of species, including many groupers. Such reproductive fish aggregations represent temporal concentrations of potential prey that may be drivers of habitat use by predatory species, including sharks. We investigated movements of three species of sharks within a MCE and in relation to FSAs located on the shelf edge south of St. Thomas, United States Virgin Islands. Movements of 17 tiger (Galeocerdo cuvier), seven lemon (Negaprion brevirostris), and six Caribbean reef (Carcharhinus perezi) sharks tagged with acoustic transmitters were monitored within the MCE using an array of acoustic receivers spanning an area of 1,060 km2 over a five-year period. Receivers were concentrated around prominent grouper FSAs to monitor movements of sharks in relation to these temporally transient aggregations. Over 130,000 detections of telemetered sharks were recorded, with four sharks tracked in excess of 3 years. All three shark species were present within the MCE over long periods of time and detected frequently at FSAs, but patterns of MCE use and orientation towards FSAs varied both spatially and temporally among species. Lemon sharks moved over a large expanse of the MCE, but concentrated their activities around FSAs during grouper spawning and were present within the MCE significantly more during grouper spawning season. Caribbean reef sharks were present within a restricted portion of the MCE for prolonged periods of time, but were also absent for long periods. Tiger sharks were detected throughout the extent of the acoustic array, with the MCE representing only a portion of their habitat use, although a high degree of individual variation was observed. 
Our findings indicate that although patterns of use varied, all three species of sharks repeatedly utilized the MCE and as upper trophic level predators they are likely involved in a range of interactions with other members of MCEs.
Noise in two-color electronic distance meter measurements revisited
Langbein, J.
2004-01-01
Frequent, high-precision geodetic data have temporally correlated errors. Temporal correlations directly affect both the estimate of rate and its standard error; the rate of deformation is a key product from geodetic measurements made in tectonically active areas. Various models of temporally correlated errors are developed and these provide relations between the power spectral density and the data covariance matrix. These relations are applied to two-color electronic distance meter (EDM) measurements made frequently in California over the past 15-20 years. Previous analysis indicated that these data have significant random walk error. Analysis using the noise models developed here indicates that the random walk model is valid for about 30% of the data. A second 30% of the data can be better modeled with power law noise with a spectral index between 1 and 2, while another 30% of the data can be modeled with a combination of band-pass-filtered plus random walk noise. The remaining 10% of the data can be best modeled as a combination of band-pass-filtered plus power law noise. This band-pass-filtered noise is a product of an annual cycle that leaks into adjacent frequency bands. For time spans of more than 1 year these more complex noise models indicate that the precision in rate estimates is better than that inferred by just the simpler, random walk model of noise.
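The practical consequence of random-walk versus white-noise errors can be illustrated with a small Monte Carlo sketch: fitted deformation rates scatter far more, and improve far more slowly with longer time spans, when the noise is a random walk. The noise amplitudes and series lengths below are arbitrary illustrations, not the EDM data:

```python
import random
import statistics

def fitted_rate(series, dt=1.0):
    """Ordinary least-squares slope of an evenly sampled time series."""
    n = len(series)
    t = [i * dt for i in range(n)]
    tbar = sum(t) / n
    ybar = sum(series) / n
    num = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, series))
    den = sum((ti - tbar) ** 2 for ti in t)
    return num / den

def rate_scatter(noise, n_points, trials=400, seed=1):
    """Empirical standard deviation of fitted rates for a noise model."""
    rng = random.Random(seed)
    rates = []
    for _ in range(trials):
        if noise == "white":
            series = [rng.gauss(0.0, 1.0) for _ in range(n_points)]
        else:  # "walk": random walk = cumulative sum of white noise
            series, s = [], 0.0
            for _ in range(n_points):
                s += rng.gauss(0.0, 1.0)
                series.append(s)
        rates.append(fitted_rate(series))
    return statistics.pstdev(rates)
```

For equal single-epoch amplitude, the random-walk rate scatter is much larger than the white-noise scatter, which is why assuming a purely white error model overstates rate precision.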
Aagten-Murphy, David; Cappagli, Giulia; Burr, David
2014-03-01
Expert musicians are able to time their actions accurately and consistently during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises across different techniques and sensory modalities. We first compared various reproduction strategies and interval lengths, to examine the effects in general and to optimise experimental conditions for testing the effect of music, and found that the effects were robust and consistent across different paradigms. Focussing on a 'ready-set-go' paradigm, subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, Musicians performed more veridically than Non-Musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, Non-Musicians, particularly with visual stimuli, consistently exhibited a substantial and systematic regression towards the mean interval. When subjects judged intervals from distributions of longer total length they tended to regress more towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model that minimizes reproduction errors by incorporating a central tendency prior weighted by the subject's own temporal precision relative to the current distribution of intervals. Finally, a strong correlation was observed between all durations of formal musical training and total reproduction errors in both modalities (accounting for 30% of the variance). Taken together these results demonstrate that formal musical training improves temporal reproduction, and that this improvement transfers from audition to vision. 
They further demonstrate the flexibility of sensorimotor mechanisms in adapting to different task conditions to minimise temporal estimation errors. © 2013.
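The central-tendency model invoked in this abstract is the standard Gaussian precision-weighted combination of a sensory measurement with a prior over the current interval distribution; a minimal sketch (parameter values below are illustrative, not the study's fits):

```python
def bayes_reproduction(sample_ms, prior_mean_ms, sensory_sd_ms, prior_sd_ms):
    """Posterior mean for a Gaussian likelihood and Gaussian prior.

    The reproduced interval is a precision-weighted average of the sensed
    sample and the mean of the current interval distribution, so observers
    with poorer temporal precision regress more toward the mean.
    """
    w_sample = 1.0 / sensory_sd_ms ** 2   # precision of the sensory estimate
    w_prior = 1.0 / prior_sd_ms ** 2      # precision of the interval prior
    return (w_sample * sample_ms + w_prior * prior_mean_ms) / (w_sample + w_prior)
```

For example, a precise observer (sd 50 ms) reproducing a 700 ms sample against a prior centered at 400 ms (sd 100 ms) responds at 640 ms; halving the precision (sd 100 ms) pulls the response to 550 ms, i.e., stronger regression to the mean.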
SU-E-T-186: Cloud-Based Quality Assurance Application for Linear Accelerator Commissioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, J
2015-06-15
Purpose: To identify anomalies and safety issues during data collection and modeling for treatment planning systems. Methods: A cloud-based quality assurance system (AQUIRE - Automated QUalIty REassurance) has been developed to allow the uploading and analysis of beam data acquired during the treatment planning system commissioning process. In addition to comparing and aggregating measured data, tools have also been developed to extract dose from the treatment planning system for end-to-end testing. A gamma index is performed on the data to give a dose difference and distance-to-agreement for validation that a beam model is generating plans consistent with the beam data collection. Results: Over 20 linear accelerators have been commissioned using this platform, and a variety of errors and potential safety issues have been caught through the validation process. For example, the gamma index of 2% dose, 2 mm DTA is quite sufficient to see curves not corrected for effective point of measurement. Also, data imported into the database is analyzed against an aggregate of similar linear accelerators to show data points that are outliers. The resulting curves in the database exhibit a very small standard deviation and imply that a preconfigured beam model based on aggregated linear accelerators will be sufficient in most cases. Conclusion: With the use of this new platform for beam data commissioning, errors in beam data collection and treatment planning system modeling are greatly reduced. With the reduction in errors during acquisition, the resulting beam models are quite similar, suggesting that a common beam model may be possible in the future. Development is ongoing to create routine quality assurance tools to compare back to the beam data acquired during commissioning. I am a medical physicist for Alzyen Medical Physics, and perform commissioning services.
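The gamma index mentioned in this abstract combines a dose-difference criterion with a distance-to-agreement (DTA) criterion; a minimal 1-D global-gamma sketch (a simplified illustration, not AQUIRE's implementation):

```python
import math

def gamma_index(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.02, dta_mm=2.0):
    """1-D global gamma analysis.

    For each reference point, search the evaluated curve for the minimum
    combined metric of dose difference (normalized by dd times the global
    maximum dose) and spatial offset (normalized by dta_mm).  Gamma <= 1
    means the point passes the dd / dta_mm criterion.
    """
    dose_tol = dd * max(ref_dose)  # global dose-difference tolerance
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        g2 = min(((ep - rp) / dta_mm) ** 2 + ((ed - rd) / dose_tol) ** 2
                 for ep, ed in zip(eval_pos, eval_dose))
        gammas.append(math.sqrt(g2))
    return gammas
```

An evaluated curve identical to the reference gives gamma of 0 everywhere; in a sampled example, the same curve shifted by 1 mm gives gamma of 0.5 under a 2%/2 mm criterion (the 1 mm offset is half the 2 mm DTA tolerance).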
Senay, Gabriel; Gowda, Prasanna H.; Bohms, Stefanie; Howell, T.A.; Friedrichs, Mackenzie; Marek, T.H.; Verdin, James
2014-01-01
The operational Simplified Surface Energy Balance (SSEBop) approach was applied to 14 Landsat 5 thermal infrared images for mapping daily actual evapotranspiration (ETa) fluxes during the spring and summer seasons (March–October) in 2006 and 2007. Data from four large lysimeters, managed by the USDA-ARS Conservation and Production Research Laboratory, were used for evaluating the SSEBop-estimated ETa. Lysimeter fields are arranged in a 2 × 2 block pattern with two fields each managed under irrigated and dryland cropping systems. The modeled and observed daily ETa values were grouped as "irrigated" and "dryland" at four different aggregation periods (1-day, 2-day, 3-day, and seasonal) for evaluation. There was a strong linear relationship between observed and modeled ETa, with R2 values ranging from 0.87 to 0.97. The root mean square errors (RMSE), as a percentage of their respective mean values, decreased progressively to 28, 24, 16, and 12% at the 1-day, 2-day, 3-day, and seasonal aggregation periods, respectively. With a further correction of the underestimation bias (−11%), the seasonal RMSE was reduced from 12 to 6%. The random error contribution to the total error decreased from 86 to 20%, while the bias contribution increased from 14 to 80%, when aggregating from the daily to the seasonal scale. This study shows the reliable performance of the SSEBop approach on the Landsat data stream, with a transferable approach for use with the recently launched LDCM (Landsat Data Continuity Mission) Thermal InfraRed Sensor (TIRS) data. Thus, SSEBop can produce quick, reliable and useful ET estimates at various time scales, with higher seasonal accuracy, for use in regional water management decisions.
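The split of total error into random and systematic (bias) contributions reported in this abstract follows from the identity MSE = bias² + error variance; a minimal sketch (function name is illustrative):

```python
import math

def error_decomposition(obs, mod):
    """Decompose mean-square error into bias and random components.

    MSE = bias**2 + var(errors), so the two shares below sum to 1.
    """
    errors = [m - o for o, m in zip(obs, mod)]
    n = len(errors)
    bias = sum(errors) / n
    mse = sum(e * e for e in errors) / n
    random_var = mse - bias ** 2
    return {
        "rmse": math.sqrt(mse),
        "bias_share": bias ** 2 / mse,
        "random_share": random_var / mse,
    }
```

Aggregation over longer periods averages out random error but leaves systematic bias, which is consistent with the bias share growing from the daily to the seasonal scale.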
Scaling in the aggregation dynamics of a magnetorheological fluid.
Domínguez-García, P; Melle, Sonia; Pastor, J M; Rubio, M A
2007-11-01
We present experimental results on the aggregation dynamics of a magnetorheological fluid, namely, an aqueous suspension of micrometer-sized superparamagnetic particles, under the action of a constant uniaxial magnetic field using video microscopy and image analysis. We find a scaling behavior in several variables describing the aggregation kinetics. The data agree well with the Family-Vicsek scaling ansatz for diffusion-limited cluster-cluster aggregation. The kinetic exponents z and z' are obtained from the temporal evolution of the mean cluster size S(t) and the number of clusters N(t), respectively. The crossover exponent Delta is calculated in two ways: first, from the initial slope of the scaling function; second, from the evolution of the nonaggregated particles, n1(t). We report on results of Brownian two-dimensional dynamics simulations and compare the results with the experiments. Finally, we discuss the differences obtained between the kinetic exponents in terms of the variation in the crossover exponent and relate this behavior to the physical interpretation of the crossover exponent.
NASA Astrophysics Data System (ADS)
Sadegh, M.; Vrugt, J. A.
2013-12-01
The ever-increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex hydrologic models that simulate soil moisture flow, groundwater recharge, surface runoff, root water uptake, and river discharge at increasingly finer spatial and temporal scales. Reconciling these system models with field and remote sensing data is a difficult task, particularly because average measures of model/data similarity inherently lack the power to provide a meaningful comparative evaluation of the consistency in model form and function. The very construction of the likelihood function - as a summary variable of the (usually averaged) properties of the error residuals - dilutes and mixes the available information into an index having little remaining correspondence to specific behaviors of the system (Gupta et al., 2008). The quest for a more powerful method for model evaluation has inspired Vrugt and Sadegh [2013] to introduce "likelihood-free" inference as a vehicle for diagnostic model evaluation. This class of methods is also referred to as Approximate Bayesian Computation (ABC) and relaxes the need for an explicit likelihood function in favor of one or multiple different summary statistics rooted in hydrologic theory that together have a much stronger and compelling diagnostic power than some aggregated measure of the size of the error residuals. Here, we will introduce an efficient ABC sampling method that is orders of magnitude faster in exploring the posterior parameter distribution than commonly used rejection and Population Monte Carlo (PMC) samplers. Our methodology uses Markov Chain Monte Carlo simulation with DREAM, and takes advantage of a simple computational trick to resolve discontinuity problems with the application of set-theoretic summary statistics. 
We will also demonstrate a set of summary statistics that are rather insensitive to errors in the forcing data. This enhances prospects of detecting model structural deficiencies.
ERIC Educational Resources Information Center
Pourtois, Gilles; Vocat, Roland; N'Diaye, Karim; Spinelli, Laurent; Seeck, Margitta; Vuilleumier, Patrik
2010-01-01
We studied error monitoring in a human patient with unique implantation of depth electrodes in both the left dorsal cingulate gyrus and medial temporal lobe prior to surgery. The patient performed a speeded go/nogo task and made a substantial number of commission errors (false alarms). As predicted, intracranial Local Field Potentials (iLFPs) in…
Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2015-11-01
The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reductions in spatial and temporal scales at higher Reynolds numbers lead to notable changes in the optimal pressure evaluation parameters. The effect of smaller scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
USDA-ARS?s Scientific Manuscript database
Soil-structural stability (expressed in terms of aggregate stability and pore size distribution) depends on (i) soil inherent properties, (ii) extrinsic condition prevailing in the soil that may vary temporally and spatially, and (iii) addition of soil amendments. Different soil management practices...
Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) hel...
77 FR 55240 - Order Making Fiscal Year 2013 Annual Adjustments to Registration Fee Rates
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-07
... Management and Budget (``OMB'') to project the aggregate offering price for purposes of the fiscal year 2012... AAMOP is given by exp(FLAAMOP_t + σ_n²/2), where σ_n denotes the standard error of the n...
NASA Astrophysics Data System (ADS)
Bast, A.; Wilcke, W.; Graf, F.; Lüscher, P.; Gärtner, H.
2016-08-01
Steep vegetation-free talus slopes in high mountain environments are prone to superficial slope failures and surface erosion. Eco-engineering measures can reduce slope instabilities and thus contribute to risk mitigation. In a field experiment, we established mycorrhizal and nonmycorrhizal research plots and determined their biophysical contribution to small-scale soil fixation. Mycorrhizal inoculation impact on plant survival, aggregate stability, and fine root development was analyzed. Here we present plant survival (ntotal = 1248) and soil core (ntotal = 108) analyses of three consecutive years in the Swiss Alps. Soil cores were assayed for their aggregate stability coefficient (ASC), root length density (RLD), and mean root diameter (MRD). Inoculation improved plant survival significantly, but it delayed aggregate stabilization relative to the noninoculated site. Higher aggregate stability occurred only after three growing seasons; by then, RLD also tended to be higher and MRD had increased significantly at the mycorrhiza-treated site. There was a positive correlation between RLD, ASC, and roots <0.5 mm, which had the strongest impact on soil aggregation. Our results revealed a temporal offset between inoculation effects tested in laboratory and field experiments. Consequently, we recommend establishing intermediate- to long-term field monitoring before transferring laboratory results to the field.
Complementary roles for amygdala and periaqueductal gray in temporal-difference fear learning.
Cole, Sindy; McNally, Gavan P
2009-01-01
Pavlovian fear conditioning is not a unitary process. At the neurobiological level, multiple brain regions and neurotransmitters contribute to fear learning. At the behavioral level, many variables contribute to fear learning, including the physical salience of the events being learned about, the direction and magnitude of predictive error, and the rate at which these are learned about. These experiments used a serial compound conditioning design to determine the roles of basolateral amygdala (BLA) NMDA receptors and ventrolateral midbrain periaqueductal gray (vlPAG) mu-opioid receptors (MOR) in predictive fear learning. Rats were trained in a three-stage design that arranged both positive and negative prediction errors, producing bidirectional changes in fear learning within the same subjects during the test stage. Intra-BLA infusion of the NR2B receptor antagonist ifenprodil prevented all learning. In contrast, intra-vlPAG infusion of the MOR antagonist CTAP enhanced learning in response to positive predictive error but impaired learning in response to negative predictive error, a pattern similar to Hebbian learning and an indication that fear learning had been divorced from predictive error. These findings identify complementary but dissociable roles for amygdala NMDA receptors and vlPAG MOR in temporal-difference predictive fear learning.
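The temporal-difference account invoked here rests on a simple error-driven update rule; a minimal Python sketch (the function and numbers are illustrative, not the authors' behavioral model):

```python
def td_update(v, outcome, alpha=0.1):
    """One temporal-difference update: associative strength v moves
    toward the observed outcome by a fraction alpha of the
    prediction error (outcome minus expectation)."""
    delta = outcome - v          # prediction error, positive or negative
    return v + alpha * delta, delta

# Positive prediction error: outcome exceeds expectation, learning goes up
v_pos, d_pos = td_update(0.2, 1.0)
# Negative prediction error: outcome falls short, learning goes down
v_neg, d_neg = td_update(0.8, 0.0)
```

The bidirectional design in the abstract corresponds to probing both the `d_pos > 0` and `d_neg < 0` cases within the same subjects.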
Rong, Hao; Tian, Jin; Zhao, Tingdi
2016-01-01
In traditional approaches to human reliability assessment (HRA), the definition of the error-producing conditions (EPCs) and the supporting guidance are such that some conditions (especially organizational or managerial ones) can hardly be included; the analysis is therefore incomplete and fails to reflect the temporal trend of human reliability. A method based on system dynamics (SD), which highlights interrelationships among the technical and organizational aspects that may contribute to human error, is presented to facilitate quantitative estimation of the human error probability (HEP) and its related variables as they change over a long period. Taking the 2008 Minuteman III missile accident as a case, the proposed HRA method is applied to assess HEP during missile operations over 50 years by analyzing the interactions among the variables involved in human-related risks; the critical factors are also determined in terms of the impact the variables have on risk in different time periods. The results indicate that both technical and organizational aspects must be addressed to minimize human error in the long run. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Microfluidic-Based Measurement Method of Red Blood Cell Aggregation under Hematocrit Variations
2017-01-01
Red blood cell (RBC) aggregation and erythrocyte sedimentation rate (ESR) are considered to be promising biomarkers for effectively monitoring blood rheology at extremely low shear rates. In this study, a microfluidic-based measurement technique is suggested to evaluate RBC aggregation under the hematocrit variations caused by continuous ESR. After the pipette tip is tightly fitted into an inlet port, a disposable suction pump is connected to the outlet port through a polyethylene tube. After dropping blood (approximately 0.2 mL) into the pipette tip, the blood flow can be started and stopped by periodically operating a pinch valve. To evaluate variations in RBC aggregation due to the continuous ESR, an erythrocyte-sedimentation-rate aggregation index (EAI) based on temporal variations of image intensity is proposed. To demonstrate the proposed method, the dynamic characteristics of the disposable suction pump are first quantified while varying the hematocrit levels and the cavity volume of the suction pump. Next, variations in RBC aggregation and ESR are quantified at varying hematocrit levels. The conventional aggregation index (AI) remains constant regardless of hematocrit, whereas the EAI decreases significantly with increasing hematocrit. Thus, the EAI is more effective than the AI for monitoring variations in RBC aggregation due to the ESR. Lastly, the proposed method is employed to detect aggregated blood and thermally induced blood. The EAI gradually increased as the concentration of a dextran solution increased, and it decreased significantly for thermally induced blood. This experimental demonstration shows that the proposed method can effectively measure variations in RBC aggregation under continuous hematocrit variations, especially by quantifying the EAI. PMID:28878199
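The EAI is built from temporal variations of image intensity, but its exact formula is not given in the abstract; the sketch below therefore uses a hypothetical stand-in (`aggregation_index`, a normalized intensity range) purely to illustrate the idea of an intensity-based aggregation measure:

```python
def aggregation_index(intensities):
    """Illustrative stand-in for an intensity-based aggregation index:
    the range of mean image intensity over the observation window,
    normalized by the peak value. (The paper's EAI differs; this is
    only a sketch of the general approach.)"""
    lo, hi = min(intensities), max(intensities)
    return (hi - lo) / hi if hi else 0.0

# Hypothetical mean-intensity time series as RBCs aggregate and settle
series = [110, 118, 127, 140, 152]
idx = aggregation_index(series)
```

A larger index would indicate stronger temporal intensity variation over the window, the raw signal the EAI exploits.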
Analysis of all-optical temporal integrator employing phase-shifted DFB-SOA.
Jia, Xin-Hong; Ji, Xiao-Ling; Xu, Cong; Wang, Zi-Nan; Zhang, Wei-Li
2014-11-17
An all-optical temporal integrator using a phase-shifted distributed-feedback semiconductor optical amplifier (DFB-SOA) is investigated. The influences of system parameters on its energy transmittance and integration error are explored in detail. The numerical analysis shows that enhanced energy transmittance and a longer integration time window can be achieved simultaneously by increasing the injection current toward the lasing threshold. We find that the range of input pulse widths with low integration error is highly sensitive to the injected optical power, owing to gain saturation and the induced detuning deviation. The initial frequency detuning should also be chosen carefully to suppress deviation of the integrated output from the ideal waveform.
Music Recognition in Frontotemporal Lobar Degeneration and Alzheimer Disease
Johnson, Julene K; Chang, Chiung-Chih; Brambati, Simona M; Migliaccio, Raffaella; Gorno-Tempini, Maria Luisa; Miller, Bruce L; Janata, Petr
2013-01-01
Objective To compare music recognition in patients with frontotemporal dementia, semantic dementia, Alzheimer disease, and controls and to evaluate the relationship between music recognition and brain volume. Background Recognition of familiar music depends on several levels of processing. There are few studies about how patients with dementia recognize familiar music. Methods Subjects were administered tasks that assess pitch and melody discrimination, detection of pitch errors in familiar melodies, and naming of familiar melodies. Results There were no group differences on pitch and melody discrimination tasks. However, patients with semantic dementia had considerable difficulty naming familiar melodies and also scored the lowest when asked to identify pitch errors in the same melodies. Naming familiar melodies, but not other music tasks, was strongly related to measures of semantic memory. Voxel-based morphometry analysis of brain MRI showed that difficulty in naming songs was associated with the bilateral temporal lobes and inferior frontal gyrus, whereas difficulty in identifying pitch errors in familiar melodies correlated with primarily the right temporal lobe. Conclusions The results support a view that the anterior temporal lobes play a role in familiar melody recognition, and that musical functions are affected differentially across forms of dementia. PMID:21617528
Xia, Yongqiu; Weller, Donald E; Williams, Meghan N; Jordan, Thomas E; Yan, Xiaoyuan
2016-11-15
Export coefficient models (ECMs) are often used to predict nutrient sources and sinks in watersheds because ECMs can flexibly incorporate processes and have minimal data requirements. However, ECMs do not quantify uncertainties in model structure, parameters, or predictions; nor do they account for spatial and temporal variability in land characteristics, weather, and management practices. We applied Bayesian hierarchical methods to address these problems in ECMs used to predict nitrate concentration in streams. We compared four model formulations: a basic ECM and three models with additional terms representing competing hypotheses about the sources of error in ECMs and about spatial and temporal variability of coefficients: an ADditive Error Model (ADEM), a SpatioTemporal Parameter Model (STPM), and a Dynamic Parameter Model (DPM). The DPM incorporates a first-order random walk to represent spatial correlation among parameters and a dynamic linear model to accommodate temporal correlation. We tested the modeling approach in a proof of concept using watershed characteristics and nitrate export measurements from watersheds in the Coastal Plain physiographic province of the Chesapeake Bay drainage. Among the four models, the DPM was the best: it had the lowest mean error, explained the most variability (R^2 = 0.99), had the narrowest prediction intervals, and provided the most effective tradeoff between fit and complexity (its deviance information criterion, DIC, was 45.6 units lower than that of any other model, indicating overwhelming support for the DPM). The superiority of the DPM supports its underlying hypothesis that the main source of error in ECMs is their failure to account for parameter variability rather than structural error.
Analysis of the fitted DPM coefficients for cropland export and instream retention revealed some of the factors controlling nitrate concentration: cropland nitrate exports were positively related to stream flow and watershed average slope, while instream nitrate retention was positively correlated with nitrate concentration. By quantifying spatial and temporal variability in sources and sinks, the DPM provides new information to better target management actions to the most effective times and places. Given the wide use of ECMs as research and management tools, our approach can be broadly applied in other watersheds and to other materials. Copyright © 2016 Elsevier Ltd. All rights reserved.
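The DPM's first-order random walk for spatial correlation among coefficients can be sketched as below; the standalone simulation and its parameters (`init`, `sd`) are illustrative assumptions, not the paper's fitted Bayesian hierarchy:

```python
import random

def random_walk_params(n, init=1.0, sd=0.05, seed=42):
    """First-order random walk over spatially ordered watersheds:
    each export coefficient equals its neighbor's value plus Gaussian
    noise, encoding spatial correlation among parameters.
    (Illustrative prior simulation only.)"""
    rng = random.Random(seed)
    params = [init]
    for _ in range(n - 1):
        params.append(params[-1] + rng.gauss(0.0, sd))
    return params

# Coefficients for 10 hypothetical neighboring watersheds
coeffs = random_walk_params(10)
```

In the full model such a walk would be a prior, with the coefficients updated against observed nitrate exports rather than simulated forward.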
Spatial and temporal temperature distribution optimization for a geostationary antenna
NASA Technical Reports Server (NTRS)
Tsuyuki, G.; Miyake, R.
1992-01-01
The Geostationary Microwave Precipitation Radiometer antenna is considered and a thermal design analysis is performed to determine a design that would minimize on-orbit antenna temporal and spatial temperature gradients. The final design is based on an optically opaque radome which covered the antenna. The average orbital antenna temperature is found to be 9 C with maximum temporal and spatial variations of 34 C and 1 C, respectively. An independent thermal distortion analysis showed that this temporal variation would give an antenna figure error of 14 microns.
Role of data aggregation in biosurveillance detection strategies with applications from ESSENCE.
Burkom, Howard S; Elbert, Y; Feldman, A; Lin, J
2004-09-24
Syndromic surveillance systems are used to monitor daily electronic data streams for anomalous counts of features of varying specificity. The monitored quantities might be counts of clinical diagnoses, sales of over-the-counter influenza remedies, school absenteeism among a given age group, and so forth. Basic data-aggregation decisions for these systems include determining which records to count and how to group them in space and time. This paper discusses the application of spatial and temporal data-aggregation strategies for multiple data streams to alerting algorithms appropriate to the surveillance region and public health threat of interest. Such a strategy was applied and evaluated for a complex, authentic, multisource, multiregion environment, including >2 years of data records from a system-evaluation exercise for the Defense Advanced Research Project Agency (DARPA). Multivariate and multiple univariate statistical process control methods were adapted and applied to the DARPA data collection. Comparative parametric analyses based on temporal aggregation were used to optimize the performance of these algorithms for timely detection of a set of outbreaks identified in the data by a team of epidemiologists. The sensitivity and timeliness of the most promising detection methods were tested at empirically calculated thresholds corresponding to multiple practical false-alert rates. Even at the strictest false-alert rate, all but one of the outbreaks were detected by the best method, and the best methods achieved a 1-day median time before alert over the set of test outbreaks. These results indicate that a biosurveillance system can provide a substantial alerting-timeliness advantage over traditional public health monitoring for certain outbreaks. Comparative analyses of individual algorithm results indicate further achievable improvement in sensitivity and specificity.
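The statistical process control methods applied above include detectors in the CUSUM family; a minimal one-sided CUSUM sketch over daily counts (the baseline, `k`, and `h` values are hypothetical, not ESSENCE's tuned parameters):

```python
def cusum_alerts(counts, baseline, k=0.5, h=4.0):
    """One-sided CUSUM: standardize each daily count against a fixed
    baseline (mean, sd), accumulate excesses above slack k, and flag
    an alert whenever the cumulative statistic exceeds threshold h."""
    mean, sd = baseline
    s, alerts = 0.0, []
    for day, c in enumerate(counts):
        z = (c - mean) / sd
        s = max(0.0, s + z - k)
        if s > h:
            alerts.append(day)
            s = 0.0              # reset after signaling
    return alerts

# Quiet days followed by an outbreak-like surge (hypothetical counts)
daily = [10, 11, 9, 10, 12, 25, 30, 28]
alerts = cusum_alerts(daily, baseline=(10.0, 2.0))
```

Lowering `h` trades a higher false-alert rate for earlier detection, which is exactly the sensitivity/timeliness tradeoff the study quantifies.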
Notelaers, Kristof; Smisdom, Nick; Rocha, Susana; Janssen, Daniel; Meier, Jochen C; Rigo, Jean-Michel; Hofkens, Johan; Ameloot, Marcel
2012-12-01
The spatio-temporal membrane behavior of glycine receptors (GlyRs) is known to influence receptor homeostasis and functionality. In this work, an elaborate fluorimetric strategy was applied to study the GlyR α3K and L isoforms. Previously established differential clustering, desensitization and synaptic localization of these isoforms imply that membrane behavior is crucial in determining GlyR α3 physiology. Therefore, diffusion and aggregation of homomeric α3 isoform-containing GlyRs were studied in HEK 293 cells. A unique combination of multiple diffraction-limited ensemble average methods and subdiffraction single particle techniques was used in order to achieve an integrated view of receptor properties. Static measurements of aggregation were performed with image correlation spectroscopy (ICS) and, single particle based, direct stochastic optical reconstruction microscopy (dSTORM). Receptor diffusion was measured by means of raster image correlation spectroscopy (RICS), temporal image correlation spectroscopy (TICS), fluorescence recovery after photobleaching (FRAP) and single particle tracking (SPT). The results show a significant difference in diffusion coefficient and cluster size between the isoforms. This reveals a positive correlation between desensitization and diffusion and disproves the notion that receptor aggregation is a universal mechanism for accelerated desensitization. The difference in diffusion coefficient between the clustering GlyR α3L and the non-clustering GlyR α3K cannot be explained by normal diffusion. SPT measurements indicate that the α3L receptors undergo transient trapping and directed motion, while the GlyR α3K displays mild hindered diffusion. These findings are suggestive of differential molecular interaction of the isoforms after incorporation in the membrane. Copyright © 2012 Elsevier B.V. All rights reserved.
Interactions between commercial fishing and walleye pollock aggregations
NASA Astrophysics Data System (ADS)
Stienessen, Sarah; Wilson, Chris D.; Hollowed, Anne B.
2002-05-01
Scientists with the Alaska Fisheries Science Center are conducting a multiyear field experiment off the eastern side of Kodiak Island in the Gulf of Alaska to determine whether commercial fishing activities significantly affect the distribution and abundance of walleye pollock (Theragra chalcogramma), an important prey species of endangered Steller sea lions (Eumetopias jubatus). In support of this activity, spatio-temporal patterns were described for pollock aggregations. Acoustic-trawl surveys were conducted in two adjacent submarine troughs in August 2001. One trough served as a control site where fishing was prohibited and the other as a treatment site where fishing was allowed. Software, which included patch recognition algorithms, was used to extract acoustic data and generate patch size and shape-related variables to analyze fish aggregations. Important patch related descriptors included skewness, kurtosis, length, height, and density. Estimates of patch fractal dimensions, which relate school perimeter to school area, were less for juvenile than for adult aggregations, indicating a more complex school shape for adults. Comparisons of other patch descriptors were made between troughs and in the presence and absence of the fishery to determine whether trends in pollock aggregation dynamics were a result of the fishery or of naturally occurring events.
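The patch fractal dimension relating school perimeter to area is commonly computed as D = 2·ln(P/4)/ln(A); the study's exact formulation may differ, but the standard version can be sketched as:

```python
import math

def patch_fractal_dimension(perimeter, area):
    """Perimeter-area fractal dimension, D = 2*ln(P/4)/ln(A).
    D near 1 indicates a smooth, compact patch outline; larger D
    indicates a more convoluted (complex) school shape."""
    return 2.0 * math.log(perimeter / 4.0) / math.log(area)

compact = patch_fractal_dimension(40.0, 100.0)   # square-like patch
ragged = patch_fractal_dimension(120.0, 100.0)   # convoluted outline
```

Under this metric, the adult aggregations' higher fractal dimension corresponds to the `ragged` case: more perimeter for the same area.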
A functional model for characterizing long-distance movement behaviour
Buderman, Frances E.; Hooten, Mevin B.; Ivan, Jacob S.; Shenk, Tanya M.
2016-01-01
Advancements in wildlife telemetry techniques have made it possible to collect large data sets of highly accurate animal locations at a fine temporal resolution. These data sets have prompted the development of a number of statistical methodologies for modelling animal movement. Telemetry data sets are often collected for purposes other than fine-scale movement analysis. These data sets may differ substantially from those that are collected with technologies suitable for fine-scale movement modelling and may consist of locations that are irregular in time, are temporally coarse or have large measurement error. These data sets are time-consuming and costly to collect but may still provide valuable information about movement behaviour. We developed a Bayesian movement model that accounts for error from multiple data sources as well as movement behaviour at different temporal scales. The Bayesian framework allows us to calculate derived quantities that describe temporally varying movement behaviour, such as residence time, speed and persistence in direction. The model is flexible, easy to implement and computationally efficient. We apply this model to data from Colorado Canada lynx (Lynx canadensis) and use derived quantities to identify changes in movement behaviour.
A framework for simulating map error in ecosystem models
Sean P. Healey; Shawn P. Urbanski; Paul L. Patterson; Chris Garrard
2014-01-01
The temporal depth and spatial breadth of observations from platforms such as Landsat provide unique perspective on ecosystem dynamics, but the integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential map errors in broader...
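The Monte Carlo idea, propagating per-class map error probabilities into a downstream ecosystem statistic, can be sketched as follows; the one-dimensional "map" and confusion probabilities are hypothetical:

```python
import random

def mc_map_realizations(class_map, confusion, n=100, seed=1):
    """Monte Carlo accounting for map error: draw many realizations
    of a classified map, resampling each pixel's label from its
    mapped class's error probabilities, then summarize a downstream
    statistic (here, forest pixel count) across realizations."""
    rng = random.Random(seed)
    areas = []
    for _ in range(n):
        area = 0
        for label in class_map:
            classes, probs = zip(*confusion[label].items())
            drawn = rng.choices(classes, probs)[0]
            area += drawn == "forest"
        areas.append(area)
    return areas

# Toy 1-D "map" and confusion matrix: mapped label -> probabilities
# of the true label (values are hypothetical accuracies)
cmap = ["forest"] * 80 + ["grass"] * 20
conf = {"forest": {"forest": 0.9, "grass": 0.1},
        "grass": {"forest": 0.2, "grass": 0.8}}
areas = mc_map_realizations(cmap, conf)
```

The spread of `areas` across realizations is the map-error contribution to uncertainty in the derived quantity, which is the empirical accounting the passage describes.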
Matsushima, Ken; Komune, Noritaka; Matsuo, Satoshi; Kohno, Michihiro
2017-07-01
The use of the retrosigmoid approach has recently been expanded by several modifications, including the suprameatal, transmeatal, suprajugular, and inframeatal extensions. Intradural temporal bone drilling without damaging vital structures inside or beside the bone, such as the internal carotid artery and jugular bulb, is a key step for these extensions. This study aimed to examine the microsurgical and endoscopic anatomy of the extensions of the retrosigmoid approach and to evaluate the clinical feasibility of an electromagnetic navigation system during intradural temporal bone drilling. Five temporal bones and 8 cadaveric cerebellopontine angles were examined to clarify the anatomy of retrosigmoid intradural temporal bone drilling. Twenty additional cerebellopontine angles were dissected in a clinical setting with an electromagnetic navigation system while measuring the target registration errors at 8 surgical landmarks on and inside the temporal bone. Retrosigmoid intradural temporal bone drilling expanded the surgical exposure to allow access to the petroclival and parasellar regions (suprameatal), internal acoustic meatus (transmeatal), upper jugular foramen (suprajugular), and petrous apex (inframeatal). The electromagnetic navigation continuously guided the drilling without line of sight limitation, and its small devices were easily manipulated in the deep and narrow surgical field in the posterior fossa. Mean target registration error was less than 0.50 mm during these procedures. The combination of endoscopic and microsurgical techniques aids in achieving optimal exposure for retrosigmoid intradural temporal bone drilling. The electromagnetic navigation system had clear advantages with acceptable accuracy including the usability of small devices without line of sight limitation. Copyright © 2017 Elsevier Inc. All rights reserved.
Neural dynamics of reward probability coding: a Magnetoencephalographic study in humans
Thomas, Julie; Vanni-Mercier, Giovanna; Dreher, Jean-Claude
2013-01-01
Prediction of future rewards and discrepancy between actual and expected outcomes (prediction error) are crucial signals for adaptive behavior. In humans, a number of fMRI studies demonstrated that reward probability modulates these two signals in a large brain network. Yet, the spatio-temporal dynamics underlying the neural coding of reward probability remain unknown. Here, using magnetoencephalography, we investigated the neural dynamics of prediction and reward prediction error computations while subjects learned to associate cues of slot machines with monetary rewards with different probabilities. We showed that event-related magnetic fields (ERFs) arising from the visual cortex coded the expected reward value 155 ms after the cue, demonstrating that reward value signals emerge early in the visual stream. Moreover, a prediction error was reflected in an ERF peaking 300 ms after the rewarded outcome, with amplitude decreasing as reward probability increased. This prediction error signal was generated in a network including the anterior and posterior cingulate cortex. These findings pinpoint the spatio-temporal characteristics underlying reward probability coding. Together, our results provide insights into the neural dynamics underlying the ability to learn probabilistic stimuli-reward contingencies. PMID:24302894
VAUD: A Visual Analysis Approach for Exploring Spatio-Temporal Urban Data.
Chen, Wei; Huang, Zhaosong; Wu, Feiran; Zhu, Minfeng; Guan, Huihua; Maciejewski, Ross
2017-10-02
Urban data is massive, heterogeneous, and spatio-temporal, posing a substantial challenge for visualization and analysis. In this paper, we design and implement a novel visual analytics approach, Visual Analyzer for Urban Data (VAUD), that supports the visualization, querying, and exploration of urban data. Our approach allows for cross-domain correlation from multiple data sources by leveraging spatial-temporal and social inter-connectedness features. Through our approach, the analyst is able to select, filter, and aggregate across multiple data sources and extract information that would be hidden to a single data subset. To illustrate the effectiveness of our approach, we provide case studies on a real urban dataset that contains the cyber-, physical-, and social information of 14 million citizens over 22 days.
Temporal and spatial scaling impacts on extreme precipitation
NASA Astrophysics Data System (ADS)
Eggert, B.; Berg, P.; Haerter, J. O.; Jacob, D.; Moseley, C.
2015-01-01
Both in the current climate and in the light of climate change, understanding of the causes and risk of precipitation extremes is essential for protection of human life and adequate design of infrastructure. Precipitation extreme events depend qualitatively on the temporal and spatial scales at which they are measured, in part due to the distinct types of rain formation processes that dominate extremes at different scales. To capture these differences, we first filter large datasets of high-resolution radar measurements over Germany (5 min temporally and 1 km spatially) using synoptic cloud observations, to distinguish convective and stratiform rain events. In a second step, for each precipitation type, the observed data are aggregated over a sequence of time intervals and spatial areas. The resulting matrix allows a detailed investigation of the resolutions at which convective or stratiform events are expected to contribute most to the extremes. We analyze where the statistics of the two types differ and discuss at which resolutions transitions occur between dominance of either of the two precipitation types. We characterize the scales at which the convective or stratiform events will dominate the statistics. For both types, we further develop a mapping between pairs of spatially and temporally aggregated statistics. The resulting curve is relevant when deciding on data resolutions where statistical information in space and time is balanced. Our study may hence also serve as a practical guide for modelers, and for planning the space-time layout of measurement campaigns. We also describe a mapping between different pairs of resolutions, possibly relevant when working with mismatched model and observational resolutions, such as in statistical bias correction.
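The space-time aggregation matrix described above rests on re-aggregating fine-resolution data into coarser windows and re-extracting the extremes; a minimal temporal-only sketch with hypothetical 5-minute intensities:

```python
def aggregate_maxima(series, window):
    """Aggregate a fine-resolution rain series into non-overlapping
    windows (summing within each window) and return the extreme
    (maximum) aggregated value -- the statistic whose dependence on
    resolution the study maps out."""
    sums = [sum(series[i:i + window])
            for i in range(0, len(series) - window + 1, window)]
    return max(sums)

# Hypothetical 5-min accumulations: a short convective burst
rain = [0, 0, 12, 30, 8, 0, 1, 1, 1, 1, 1, 1]
peak_5min = aggregate_maxima(rain, 1)    # native resolution
peak_30min = aggregate_maxima(rain, 6)   # 6 x 5 min = 30 min windows
```

Repeating this over a grid of temporal windows and spatial areas yields the matrix from which the convective/stratiform dominance transitions are read off.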
Monitoring gait in multiple sclerosis with novel wearable motion sensors
McGinnis, Ryan S.; Seagers, Kirsten; Motl, Robert W.; Sheth, Nirav; Wright, John A.; Ghaffari, Roozbeh; Sosnoff, Jacob J.
2017-01-01
Background Mobility impairment is common in people with multiple sclerosis (PwMS) and there is a need to assess mobility in remote settings. Here, we apply a novel wireless, skin-mounted, and conformal inertial sensor (BioStampRC, MC10 Inc.) to examine gait characteristics of PwMS under controlled conditions. We determine the accuracy and precision of BioStampRC in measuring gait kinematics by comparing to contemporary research-grade measurement devices. Methods A total of 45 PwMS, who presented with diverse walking impairment (Mild MS = 15, Moderate MS = 15, Severe MS = 15), and 15 healthy control subjects participated in the study. Participants completed a series of clinical walking tests. During the tests participants were instrumented with BioStampRC and MTx (Xsens, Inc.) sensors on their shanks, as well as an activity monitor GT3X (Actigraph, Inc.) on their non-dominant hip. Shank angular velocity was simultaneously measured with the inertial sensors. Step number and temporal gait parameters were calculated from the data recorded by each sensor. Visual inspection and the MTx served as the reference standards for computing the step number and temporal parameters, respectively. Accuracy (error) and precision (variance of error) was assessed based on absolute and relative metrics. Temporal parameters were compared across groups using ANOVA. Results Mean accuracy±precision for the BioStampRC was 2±2 steps error for step number, 6±9ms error for stride time and 6±7ms error for step time (0.6–2.6% relative error). Swing time had the least accuracy±precision (25±19ms error, 5±4% relative error) among the parameters. GT3X had the least accuracy±precision (8±14% relative error) in step number estimate among the devices. Both MTx and BioStampRC detected significantly distinct gait characteristics between PwMS with different disability levels (p<0.01). 
Conclusion BioStampRC sensors accurately and precisely measure gait parameters in PwMS across diverse walking impairment levels and detected differences in gait characteristics by disability level in PwMS. This technology has the potential to provide granular monitoring of gait both inside and outside the clinic. PMID:28178288
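The accuracy±precision summaries above correspond to the mean and spread of per-step errors against the reference standard; a sketch with hypothetical stride times in milliseconds (not the study's data):

```python
def accuracy_precision(measured, reference):
    """Accuracy as mean absolute error and precision as the standard
    deviation of the errors, mirroring 'accuracy±precision' style
    summaries of sensor-derived gait parameters."""
    errors = [m - r for m, r in zip(measured, reference)]
    n = len(errors)
    mean_abs = sum(abs(e) for e in errors) / n
    mean_err = sum(errors) / n
    sd = (sum((e - mean_err) ** 2 for e in errors) / n) ** 0.5
    return mean_abs, sd

ref = [1100, 1120, 1080, 1150]    # reference stride times (ms)
meas = [1106, 1114, 1085, 1152]   # sensor-derived stride times (ms)
acc, prec = accuracy_precision(meas, ref)
```

Relative error as reported in the abstract would then be `acc` divided by the mean reference stride time.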
A novel multiple description scalable coding scheme for mobile wireless video transmission
NASA Astrophysics Data System (ADS)
Zheng, Haifeng; Yu, Lun; Chen, Chang Wen
2005-03-01
We propose in this paper a novel multiple description scalable coding (MDSC) scheme based on the in-band motion compensated temporal filtering (IBMCTF) technique in order to achieve high video coding performance and robust video transmission. The input video sequence is first split into equal-sized groups of frames (GOFs). Within a GOF, each frame is hierarchically decomposed by discrete wavelet transform. Since there is a direct relationship between wavelet coefficients and what they represent in the image content after wavelet decomposition, we are able to reorganize the spatial orientation trees to generate multiple bit-streams, and we employ the SPIHT algorithm to achieve high coding efficiency. We have shown that multiple bit-stream transmission is very effective in combating error propagation in both Internet video streaming and mobile wireless video. Furthermore, we adopt the IBMCTF scheme to remove redundancy between frames along the temporal direction using motion compensated temporal filtering, so high coding performance and flexible scalability can be provided in this scheme. In order to make compressed video resilient to channel error and to guarantee robust video transmission over mobile wireless channels, we add redundancy to each bit-stream and apply an error concealment strategy for lost motion vectors. Unlike traditional multiple description schemes, the integration of these techniques enables us to generate more than two bit-streams, which may be more appropriate for multiple antenna transmission of compressed video. Simulation results on standard video sequences show that the proposed scheme provides a flexible tradeoff between coding efficiency and error resilience.
Automation of aggregate characterization using laser profiling and digital image analysis
NASA Astrophysics Data System (ADS)
Kim, Hyoungkwan
2002-08-01
Particle morphological properties such as size, shape, angularity, and texture are key properties that are frequently used to characterize aggregates. The characteristics of aggregates are crucial to the strength, durability, and serviceability of the structure in which they are used. Thus, it is important to select aggregates that have proper characteristics for each specific application. Use of improper aggregate can cause rapid deterioration or even failure of the structure. The current standard aggregate test methods are generally labor-intensive, time-consuming, and subject to human errors. Moreover, important properties of aggregates may not be captured by the standard methods due to a lack of an objective way of quantifying critical aggregate properties. Increased quality expectations of products along with recent technological advances in information technology are motivating new developments to provide fast and accurate aggregate characterization. The resulting information can enable a real time quality control of aggregate production as well as lead to better design and construction methods of portland cement concrete and hot mix asphalt. This dissertation presents a system to measure various morphological characteristics of construction aggregates effectively. Automatic measurement of various particle properties is of great interest because it has the potential to solve such problems in manual measurements as subjectivity, labor intensity, and slow speed. The main efforts of this research are placed on three-dimensional (3D) laser profiling, particle segmentation algorithms, particle measurement algorithms, and generalized particle descriptors. First, true 3D data of aggregate particles obtained by laser profiling are transformed into digital images. 
Second, a segmentation algorithm and a particle measurement algorithm are developed to separate particles and process each particle data individually with the aid of various kinds of digital image technologies. Finally, in order to provide a generalized, quantitative, and representative way to characterize aggregate particles, 3D particle descriptors are developed using the multi-resolution analysis feature of wavelet transforms. Verification tests show that this approach could characterize various aggregate properties in a fast, accurate, and reliable way. When implemented, this ability to automatically analyze multiple characteristics of an aggregate sample is expected to provide not only economic but also intangible strategic gains.
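The multi-resolution idea behind the wavelet-based particle descriptors can be illustrated with a single Haar step on a 1-D particle profile (hypothetical radii; the dissertation works with full 3-D laser data):

```python
def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise averages
    capture coarse shape, pairwise differences capture finer-scale
    angularity and texture -- the multi-resolution decomposition
    underlying the particle descriptors (1-D illustrative sketch)."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, det

# Hypothetical radial profile of a particle outline
profile = [3.0, 3.2, 4.0, 4.4, 5.0, 4.8, 3.9, 3.7]
coarse, detail = haar_step(profile)
```

Applying further steps to `coarse` yields descriptors at successively coarser scales: size and overall shape in the averages, angularity and texture in the details.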
NASA Astrophysics Data System (ADS)
Boschetti, Fabio; Thouret, Valerie; Nedelec, Philippe; Chen, Huilin; Gerbig, Christoph
2015-04-01
Airborne platforms have their main strength in the ability to collect mixing ratio and meteorological data at different heights across a vertical profile, allowing insight into the internal structure of the atmosphere. However, rental airborne platforms are usually expensive, limiting the number of flights that can be afforded and hence the amount of data that can be collected. To avoid this disadvantage, the MOZAIC/IAGOS (Measurements of Ozone and water vapor by Airbus In-service airCraft/In-service Aircraft for a Global Observing System) program makes use of commercial airliners, providing data on a regular basis. It is therefore considered an important tool in atmospheric investigations. However, due to the nature of said platforms, MOZAIC/IAGOS's profiles are located near international airports, which are usually significant emission sources, and are in most cases close to major urban settlements, characterized by higher anthropogenic emissions compared to rural areas. When running transport models at finite resolution, these local emissions can heavily affect measurements, resulting in biases in model/observation mismatch. Model/observation mismatch can include different aspects in both horizontal and vertical direction, for example spatial and temporal resolution of the modeled fluxes, or poorly represented convective transport or turbulent mixing in the boundary layer. In the framework of the IGAS (IAGOS for GMES Atmospheric Service) project, whose aim is to improve connections between data collected by MOZAIC/IAGOS and the Copernicus Atmospheric Service, the present study is focused on the effect of the spatial resolution of emission fluxes, referred to here as representation error. To investigate this, the Lagrangian transport model STILT (Stochastic Time Inverted Lagrangian Transport) was coupled with the EDGAR (Emission Database for Global Atmospheric Research) version-4.3 emission inventory at the European regional scale.
Simulated fluxes of CO, CO2 and CH4 from EDGAR, with a spatial resolution of 10x10 km for the time frame 2006-2011, were aggregated into progressively coarser grid cells in order to evaluate the representation error at different spatial scales. The dependence of the representation error on wind direction and month of the year was evaluated for different locations in the European domain, for both the random and bias components. The representation error was then validated against the model-data mismatch derived from comparing MACC (Monitoring Atmospheric Composition and Climate) reanalysis with IAGOS observations for CO, to investigate its suitability for modeling applications. We found that the random and bias components of the representation error show a similar pattern dependent on wind direction. In addition, we found a clear linear relationship between the representation error and the model-data mismatch for both (random and bias) components, indicating that about 50% of the model-data mismatch is related to the representation error. This suggests that the representation error derived using STILT provides useful information for better understanding the causes of model-data mismatch.
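The coarsening step described in this abstract — aggregating a fine-resolution flux field to coarser grid cells and quantifying the bias and random components of the resulting error — can be sketched as follows. This is a minimal illustration on a synthetic field, not the STILT/EDGAR pipeline; the function names and the toy "point source" are assumptions for illustration only.

```python
import numpy as np

def block_aggregate(flux, factor):
    """Average a 2-D flux field over factor x factor blocks, then
    broadcast the block means back to the fine grid for comparison."""
    ny, nx = flux.shape
    coarse = flux.reshape(ny // factor, factor,
                          nx // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

def representation_error(flux, factor):
    """Bias (mean) and random (std) components of the error introduced
    by representing a fine-resolution field at coarser resolution."""
    diff = block_aggregate(flux, factor) - flux
    return diff.mean(), diff.std()

# Toy example: a strong local emission (e.g. near an airport) embedded
# in a smooth background; coarsening smears the point source.
rng = np.random.default_rng(0)
fine = rng.normal(1.0, 0.1, size=(8, 8))
fine[3, 3] += 10.0
bias, random = representation_error(fine, 4)
```

Because block means preserve the domain total, the domain-wide bias is zero by construction; the interesting quantity at a single receptor location is the local difference, which grows with the heterogeneity of nearby emissions.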
75 FR 63106 - Correction of Administrative Errors
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-14
... million or more by state, local, and tribal governments, in the aggregate, or by the private sector... contains notices to the public of the proposed issuance of rules and regulations. The purpose of these... was established by the Federal Employees' Retirement System Act of 1986 (FERSA), Public Law 99-335...
Centler, Florian; Thullner, Martin
2015-01-01
Substrate competition is a common mode of microbial interaction in natural environments. While growth properties play an important and well-studied role in competition, we here focus on the influence of motility. In a simulated two-strain community populating a homogeneous two-dimensional environment, strains competed for a common substrate and differed only in their chemotactic preference, either responding more sensitively to a chemoattractant excreted by themselves or responding more sensitively to substrate. Starting from homogeneous distributions, three possible behaviors were observed depending on the competitors' chemotactic preferences: (i) distributions remained homogeneous, (ii) patterns formed but dissolved at a later time point, resulting in a shifted community composition, and (iii) patterns emerged and led to the extinction of one strain. When patterns formed, the more aggregating strain populated the core of microbial aggregates where starving conditions prevailed, while the less aggregating strain populated the more productive zones at the fringe of or outside aggregates, leading to a competitive advantage of the less aggregating strain. The presence of a competitor was found to modulate a strain's behavior, either suppressing or promoting aggregate formation. This observation provides a potential mechanism by which an aggregated lifestyle might evolve even if it is initially disadvantageous: adverse effects can be avoided as a competitor hinders aggregate formation by a strain which has just acquired this ability. The presented results highlight both the importance of microbial motility for competition and pattern formation, and the importance of the temporal evolution, or history, of microbial communities when trying to explain an observed distribution.
Davis, Matthew H.
2016-01-01
Successful perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. In speech perception, as in other domains, two functionally distinct coding schemes have been proposed for how expectations influence representation of sensory evidence. Traditional models suggest that expected features of the speech input are enhanced or sharpened via interactive activation (Sharpened Signals). Conversely, Predictive Coding suggests that expected features are suppressed so that unexpected features of the speech input (Prediction Errors) are processed further. The present work is aimed at distinguishing between these two accounts of how prior knowledge influences speech perception. By combining behavioural, univariate, and multivariate fMRI measures of how sensory detail and prior expectations influence speech perception with computational modelling, we provide evidence in favour of Prediction Error computations. Increased sensory detail and informative expectations have additive behavioural and univariate neural effects: both improve the accuracy of word report and reduce the BOLD signal in lateral temporal lobe regions. However, sensory detail and informative expectations have interacting effects on speech representations shown by multivariate fMRI in the posterior superior temporal sulcus. When prior knowledge was absent, increased sensory detail enhanced the amount of speech information measured in superior temporal multivoxel patterns, but with informative expectations, increased sensory detail reduced the amount of measured information. Computational simulations of Sharpened Signals and Prediction Errors during speech perception could both explain these behavioural and univariate fMRI observations. However, the multivariate fMRI observations were uniquely simulated by a Prediction Error and not a Sharpened Signal model.
The interaction between prior expectation and sensory detail provides evidence for a Predictive Coding account of speech perception. Our work establishes methods that can be used to distinguish representations of Prediction Error and Sharpened Signals in other perceptual domains. PMID:27846209
Memory and betweenness preference in temporal networks induced from time series
NASA Astrophysics Data System (ADS)
Weng, Tongfeng; Zhang, Jie; Small, Michael; Zheng, Rui; Hui, Pan
2017-02-01
We construct temporal networks from time series by unfolding the temporal information into an additional topological dimension of the networks. Thus, we are able to introduce memory entropy analysis to unravel the memory effect within the considered signal. We find distinct patterns in the entropy growth rate of the aggregate network at different memory scales for time series with different dynamics, ranging from white noise, 1/f noise, and autoregressive processes to periodic and chaotic dynamics. Interestingly, for a chaotic time series, an exponential scaling emerges in the memory entropy analysis. We demonstrate that the memory exponent can successfully characterize bifurcation phenomena and differentiate the human cardiac system in healthy and pathological states. Moreover, we show that betweenness preference analysis of these temporal networks can further characterize dynamical systems and separate distinct electrocardiogram recordings. Our work explores the memory effect and betweenness preference in temporal networks constructed from time series data, providing a new perspective for understanding the underlying dynamical systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finley, Cathy
2014-04-30
This report contains the results from research aimed at improving short-range (0-6 hour) hub-height wind forecasts in the NOAA weather forecast models through additional data assimilation and model physics improvements for use in wind energy forecasting. Additional meteorological observing platforms including wind profilers, sodars, and surface stations were deployed for this study by NOAA and DOE, and additional meteorological data at or near wind turbine hub height were provided by South Dakota State University and WindLogics/NextEra Energy Resources over a large geographical area in the U.S. Northern Plains for assimilation into NOAA research weather forecast models. The resulting improvements in wind energy forecasts based on the research weather forecast models (with the additional data assimilation and model physics improvements) were examined in many different ways and compared with wind energy forecasts based on the current operational weather forecast models to quantify the forecast improvements important to power grid system operators and wind plant owners/operators participating in energy markets. Two operational weather forecast models (OP_RUC, OP_RAP) and two research weather forecast models (ESRL_RAP, HRRR) were used as the base wind forecasts for generating several different wind power forecasts for the NextEra Energy wind plants in the study area. Power forecasts were generated from the wind forecasts in a variety of ways, from very simple to quite sophisticated, as they might be used by a wide range of both general users and commercial wind energy forecast vendors. The error characteristics of each of these types of forecasts were examined and quantified using bulk error statistics for both the local wind plant and the system aggregate forecasts. The wind power forecast accuracy was also evaluated separately for high-impact wind energy ramp events.
The overall bulk error statistics, calculated over the first six hours of the forecasts at both the individual wind plant and the system-wide aggregate level over the one-year study period, showed that the research weather model-based power forecasts (all types) had lower overall error rates than the current operational weather model-based power forecasts. The bulk error statistics of the various model-based power forecasts were also calculated by season and by model runtime/forecast hour, as power system operations are more sensitive to wind energy forecast errors during certain times of year and certain times of day. The results showed significant differences in seasonal forecast errors between the various model-based power forecasts. The analysis of the wind power forecast errors by model runtime and forecast hour showed that errors were largest during the times of day that have increased significance to power system operators (the overnight hours and the morning/evening boundary layer transition periods), but the research weather model-based power forecasts showed improvement over the operational weather model-based power forecasts at these times.
Ueki, Takeshi; Yoshida, Ryo
2014-06-14
Herein, we summarise recent developments in self-oscillating polymeric materials based on the concepts of supramolecular chemistry, in which aggregates of molecular building blocks held together by non-covalent bonds develop temporal or spatiotemporal structure. By utilising the rhythmic oscillation of the association/dissociation of molecular aggregates coupled with the redox oscillation of the BZ reaction, novel soft materials that express functions similar to those of living matter can be achieved. Further, from the viewpoint of materials science, we introduce our recent approach to preparing self-oscillating materials that operate long-term under mild conditions.
McKenna, Erin; Bray, Laurence C Jayet; Zhou, Weiwei; Joiner, Wilsaan M
2017-10-01
Delays in transmitting and processing sensory information require correctly associating delayed feedback to issued motor commands for accurate error compensation. The flexibility of this alignment between motor signals and feedback has been demonstrated for movement recalibration to visual manipulations, but the alignment dependence for adapting movement dynamics is largely unknown. Here we examined the effect of visual feedback manipulations on force-field adaptation. Three subject groups used a manipulandum while experiencing a lag in the corresponding cursor motion (0, 75, or 150 ms). When the offset was applied at the start of the session (continuous condition), adaptation was not significantly different between groups. However, these similarities may be due to acclimation to the offset before motor adaptation. We tested additional subjects who experienced the same delays concurrent with the introduction of the perturbation (abrupt condition). In this case adaptation was statistically indistinguishable from the continuous condition, indicating that acclimation to feedback delay was not a factor. In addition, end-point errors were not significantly different across the delay or onset conditions, but end-point correction (e.g., deceleration duration) was influenced by the temporal offset. As an additional control, we tested a group of subjects who performed without visual feedback and found comparable movement adaptation results. These results suggest that visual feedback manipulation (absence or temporal misalignment) does not affect adaptation to novel dynamics, independent of both acclimation and perceptual awareness. These findings could have implications for modeling how the motor system adjusts to errors despite concurrent delays in sensory feedback information. 
NEW & NOTEWORTHY A temporal offset between movement and distorted visual feedback (e.g., visuomotor rotation) influences the subsequent motor recalibration, but the effects of this offset for altered movement dynamics are largely unknown. Here we examined the influence of 1) delayed and 2) removed visual feedback on the adaptation to novel movement dynamics. These results contribute to understanding of the control strategies that compensate for movement errors when there is a temporal separation between motion state and sensory information. Copyright © 2017 the American Physiological Society.
Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly; ...
2017-01-07
Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would normally be expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.
Feedbacks Between Soil Structure and Microbial Activities in Soil
NASA Astrophysics Data System (ADS)
Bailey, V. L.; Smith, A. P.; Fansler, S.; Varga, T.; Kemner, K. M.; McCue, L. A.
2017-12-01
Soil structure provides the physical framework for soil microbial habitats. The connectivity and size distribution of soil pores controls microbial access to nutrient resources for growth and metabolism. Thus, a crucial component of soil research is how a soil's three-dimensional structure and organization influences its biological potential on a multitude of spatial and temporal scales. In an effort to understand microbial processes at a scale more consistent with that of a microbial community, we have used soil aggregates as discrete units of soil microbial habitats. Our research has shown that the mean pore diameter (x-ray computed tomography) of soil aggregates varies with the aggregate diameter itself. Analyzing both the bacterial composition (16S) and enzyme activities of individual aggregates showed significant differences in the relative abundances of key members of the microbial communities associated with high enzyme activities compared to those with low activities, even though we observed no differences in the size of the biomass, nor in the overall richness or diversity of these communities. We hypothesize that resources and substrates have stimulated key populations in the aggregates identified as highly active, and as such, we conducted further research exploring how such key populations (i.e. fungal- or bacterial-dominated populations) alter pathways of C accumulation in aggregate size domains and microbial C utilization. Fungi support and stabilize soil structure through both physical and chemical effects of their hyphal networks. In contrast, bacterial-dominated communities are purported to facilitate micro- and fine-aggregate stabilization. Here we quantify the direct effects of fungal- versus bacterial-dominated communities on aggregate formation (both the rate of aggregation and the quality, quantity and distribution of SOC contained within aggregates).
A quantitative understanding of the different mechanisms through which fungi or bacteria shape aggregate formation could alter how we currently treat our predictions of soil biogeochemistry. Current predictions are largely site- or biome-specific; quantitative mechanisms could underpin "rules" that operate at the pore-scale leading to more robust, mechanistic models.
NASA Astrophysics Data System (ADS)
Philip, S.; Martin, R. V.; Keller, C. A.
2015-11-01
Chemical transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemical transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to temporal resolution. Subsequently, we compare the tracers simulated with operator durations from 10 to 60 min as typically used by global chemical transport models, and identify the timesteps that optimize both computational expense and simulation accuracy. We found that longer transport timesteps increase concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production at longer transport timesteps. Longer chemical timesteps decrease sulfate and ammonium but increase nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by an order of magnitude from fine (5 min) to coarse (60 min) temporal resolution. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, ozone, carbon monoxide and secondary inorganic aerosols with a finer temporal or spatial resolution taken as truth. Simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) temporal resolution. Chemical timesteps twice that of the transport timestep offer more simulation accuracy per unit computation. However, simulation error from coarser spatial resolution generally exceeds that from longer timesteps; e.g. 
degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different temporal resolutions in offline chemical transport models. We encourage chemical transport model users to specify the durations of operators in publications, given their effects on simulation accuracy.
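The error metric described in this abstract — a root-mean-square difference in simulated ground-level concentrations, with a finer-resolution run taken as truth — can be sketched as follows. The function name and the toy concentration series are illustrative assumptions, not GEOS-Chem output.

```python
import numpy as np

def rms_difference(test_run, reference_run):
    """Root-mean-square difference between concentrations from a
    coarse-timestep run and a finer-resolution reference run
    that is taken as truth."""
    test_run = np.asarray(test_run, dtype=float)
    reference_run = np.asarray(reference_run, dtype=float)
    return float(np.sqrt(np.mean((test_run - reference_run) ** 2)))

# Illustrative only: hourly ozone (ppb) from a hypothetical 60-min
# operator-duration run versus a 5-min reference run.
coarse = [41.0, 44.5, 47.0, 45.5]
reference = [40.0, 45.0, 48.0, 45.0]
error = rms_difference(coarse, reference)
```

Computing this metric for each candidate operator duration, and dividing by the corresponding wall-clock cost, gives the "accuracy per unit computation" trade-off the study optimizes.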
Are there meaningful individual differences in temporal inconsistency in self-reported personality?
Soubelet, Andrea; Salthouse, Timothy A; Oishi, Shigehiro
2014-11-01
The current project had three goals. The first was to examine whether it is meaningful to refer to across-time variability in self-reported personality as an individual differences characteristic. The second was to investigate whether negative affect was associated with variability in self-reported personality, while controlling for mean levels, and correcting for measurement errors. The third goal was to examine whether variability in self-reported personality would be larger among young adults than among older adults, and whether the relation of variability with negative affect would be stronger at older ages than at younger ages. Two moderately large samples of participants completed the International Item Pool Personality questionnaire assessing the Big Five personality dimensions either twice or thrice, in addition to several measures of negative affect. Results were consistent with the hypothesis that within-person variability in self-reported personality is a meaningful individual difference characteristic. Some people exhibited greater across-time variability than others after removing measurement error, and people who showed temporal instability in one trait also exhibited temporal instability across the other four traits. However, temporal variability was not related to negative affect, and there was no evidence that either temporal variability or its association with negative affect varied with age.
NASA Astrophysics Data System (ADS)
Edirisinghe, Asoka; Clark, Dave; Waugh, Deanne
2012-06-01
Pasture biomass is a vital input for management of dairy systems in New Zealand. Accurate pasture biomass information is required for calculating the feed budget, on which decisions are made for farm practices such as conservation, nitrogen use, rotation lengths and supplementary feeding, leading to profitability and sustainable use of pasture resources. Traditional field-based methods of measuring pasture biomass, such as rising plate meters (RPM), are largely inefficient in providing timely information at the spatial extent and temporal frequency demanded by commercial environments. In recent times remote sensing has emerged as an alternative tool. In this paper we examine the Normalised Difference Vegetation Index (NDVI) derived from medium-resolution imagery of the SPOT-4 and SPOT-5 satellite sensors to predict the pasture biomass of intensively grazed dairy pastures. In the space and time domain analysis we found a significant dependency on time over the season and no dependency on space across the scene at a given time for the relationship between NDVI and field-based pasture biomass. We established a positive correlation (81%) between the two variables in a pixel-scale analysis. Applying the model to 2 selected farms over 3 images and aggregating the predicted biomass to paddock scale produced paddock-average pasture biomass values with a coefficient of determination of 0.71 and a standard error of 260 kg DM ha-1 in the field-observed range between 1500 and 3500 kg DM ha-1. This result indicates a high potential for operational use of remotely sensed data to predict the pasture biomass of intensively grazed dairy pastures.
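The core of the approach in this abstract — computing NDVI from red and near-infrared reflectance and fitting a linear NDVI-biomass model against field observations — can be sketched as follows. The calibration numbers below are invented for illustration and are not the SPOT/RPM data from the study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index from near-infrared
    and red reflectance."""
    return (nir - red) / (nir + red)

# Hypothetical calibration pairs: field pasture biomass (kg DM/ha)
# against NDVI at the same pixels. Illustrative values only.
ndvi_obs = np.array([0.45, 0.55, 0.60, 0.68, 0.74])
biomass = np.array([1600.0, 2100.0, 2400.0, 2900.0, 3300.0])

# Least-squares fit of a linear NDVI-to-biomass model
slope, intercept = np.polyfit(ndvi_obs, biomass, 1)

def predict_biomass(nir, red):
    """Predict pasture biomass (kg DM/ha) from band reflectances."""
    return slope * ndvi(nir, red) + intercept
```

In an operational setting, per-pixel predictions like these would then be averaged within paddock boundaries to give the paddock-scale estimates evaluated in the abstract.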
Multilevel Space-Time Aggregation for Bright Field Cell Microscopy Segmentation and Tracking
Inglis, Tiffany; De Sterck, Hans; Sanders, Geoffrey; Djambazian, Haig; Sladek, Robert; Sundararajan, Saravanan; Hudson, Thomas J.
2010-01-01
A multilevel aggregation method is applied to the problem of segmenting live cell bright field microscope images. The method employed is a variant of the so-called “Segmentation by Weighted Aggregation” technique, which itself is based on Algebraic Multigrid methods. The variant of the method used is described in detail, and it is explained how it is tailored to the application at hand. In particular, a new scale-invariant “saliency measure” is proposed for deciding when aggregates of pixels constitute salient segments that should not be grouped further. It is shown how segmentation based on multilevel intensity similarity alone does not lead to satisfactory results for bright field cells. However, the addition of multilevel intensity variance (as a measure of texture) to the feature vector of each aggregate leads to correct cell segmentation. Preliminary results are presented for applying the multilevel aggregation algorithm in space time to temporal sequences of microscope images, with the goal of obtaining space-time segments (“object tunnels”) that track individual cells. The advantages and drawbacks of the space-time aggregation approach for segmentation and tracking of live cells in sequences of bright field microscope images are presented, along with a discussion on how this approach may be used in future work as a building block in a complete and robust segmentation and tracking system. PMID:20467468
Exploiting autoregressive properties to develop prospective urban arson forecasts by target
Jeffrey P. Prestemon; David T. Butry; Douglas Thomas
2013-01-01
Municipal fire departments responded to approximately 53,000 intentionally set fires annually from 2003 to 2007, according to National Fire Protection Association figures. A disproportionate number of these fires occur in spatio-temporal clusters, making them predictable and, perhaps, preventable. The objective of this research is to evaluate how the aggregation of...
Temporal Aggregation and Testing For Timber Price Behavior
Jeffrey P. Prestemon; John M. Pye; Thomas P. Holmes
2004-01-01
Different harvest timing models make different assumptions about timber price behavior. Those seeking to optimize harvest timing are thus first faced with a decision regarding which assumption of price behavior is appropriate for their market, particularly regarding the presence of a unit root in the timber price time series. Unfortunately for landowners and investors...
Seabird aggregative patterns: a new tool for offshore wind energy risk assessment.
Christel, Isadora; Certain, Grégoire; Cama, Albert; Vieites, David R; Ferrer, Xavier
2013-01-15
The emerging development of offshore wind energy has raised public concern over its impact on seabird communities, and there is a need for an adequate methodology to determine its potential impacts on seabirds. Environmental Impact Assessments (EIAs) mostly rely on a succession of plain density maps without integrated interpretation of seabird spatio-temporal variability. Using Taylor's power law coupled with mixed-effect models, the spatio-temporal variability of species' distributions can be synthesized in a measure of the aggregation levels of individuals over time and space. Applying the method to a seabird aerial survey in the Ebro Delta, NW Mediterranean Sea, we were able to make an explicit distinction between transitional and feeding areas to define and map the potential impacts of an offshore wind farm project. We use the Ebro Delta study case to discuss the advantages of potential impact maps over density maps, and to illustrate how these potential impact maps can be applied to inform concern levels, optimal EIA design, and monitoring in the assessment of local offshore wind energy projects. Copyright © 2012 Elsevier Ltd. All rights reserved.
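Taylor's power law, the basis of the aggregation measure in this abstract, states that the variance of counts across sites scales as a power of the mean, variance = a * mean**b, with b near 1 for randomly distributed individuals and b > 1 for aggregated ones. A minimal sketch of estimating the exponent by log-log regression is given below; the survey layout (repeated counts per site) and function name are assumptions, not the mixed-effect model of the study.

```python
import numpy as np

def taylor_exponent(counts):
    """Estimate Taylor's power-law exponent b from repeated counts.
    counts has shape (n_surveys, n_sites); variance = a * mean**b is
    fitted by linear regression in log-log space across sites."""
    means = counts.mean(axis=0)
    variances = counts.var(axis=0, ddof=1)
    keep = (means > 0) & (variances > 0)
    b, _log_a = np.polyfit(np.log(means[keep]), np.log(variances[keep]), 1)
    return float(b)

# Synthetic surveys: Poisson counts (randomly distributed individuals,
# variance equal to mean) should give b close to 1.
rng = np.random.default_rng(1)
site_means = np.linspace(1.0, 20.0, 30)
poisson_counts = rng.poisson(lam=site_means, size=(200, 30))
b_random = taylor_exponent(poisson_counts)
```

Mapping a local version of b over a survey region is one way such an aggregation measure can separate areas birds merely transit (low aggregation) from feeding areas (high aggregation).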
Allan Cheyne, J; Solman, Grayden J F; Carriere, Jonathan S A; Smilek, Daniel
2009-04-01
We present arguments and evidence for a three-state attentional model of task engagement/disengagement. The model postulates three states of mind-wandering: occurrent task inattention, generic task inattention, and response disengagement. We hypothesize that all three states are both causes and consequences of task performance outcomes and apply across a variety of experimental and real-world tasks. We apply this model to the analysis of a widely used GO/NOGO task, the Sustained Attention to Response Task (SART). We identify three performance characteristics of the SART that map onto the three states of the model: RT variability, anticipations, and omissions. Predictions based on the model are tested, and largely corroborated, via regression and lag-sequential analyses of both successful and unsuccessful withholding on NOGO trials as well as self-reported mind-wandering and everyday cognitive errors. The results revealed theoretically consistent temporal associations among the state indicators and between these and SART errors as well as with self-report measures. Lag analysis was consistent with the hypotheses that temporal transitions among states are often extremely abrupt and that the association between mind-wandering and performance is bidirectional. The bidirectional effects suggest that errors constitute important occasions for reactive mind-wandering. The model also enables concrete phenomenological, behavioral, and physiological predictions for future research.
Example MODIS Global Cloud Optical and Microphysical Properties: Comparisons between Terra and Aqua
NASA Technical Reports Server (NTRS)
Hubanks, P. A.; Platnick, S.; King, M. D.; Ackerman, S. A.; Frey, R. A.
2003-01-01
MODIS observations from the NASA EOS Terra spacecraft (launched in December 1999, 1030 local time equatorial crossing) have provided a unique data set of Earth observations. With the launch of the NASA Aqua spacecraft in May 2002 (1330 local time), two daytime (sunlit) and two nighttime MODIS observations are now available in each 24-hour period, allowing for some measure of diurnal variability. We report on an initial analysis of several operational global (Level-3) cloud products from the two platforms. The MODIS atmosphere Level-3 products, which include clear-sky and aerosol products in addition to cloud products, are available as three separate files providing daily, eight-day, and monthly aggregations; each temporal aggregation is spatially aggregated to a 1-degree grid. The files contain approximately 600 statistical datasets (from simple means and standard deviations to 1- and 2-dimensional histograms). Operational cloud products include detection (cloud fraction), cloud-top properties, and daytime-only cloud optical thickness and particle effective radius for both water and ice clouds. We will compare example global Terra and Aqua cloud fraction, optical thickness, and effective radius aggregations.
Effects of Convective Aggregation on Radiative Cooling and Precipitation in a CRM
NASA Astrophysics Data System (ADS)
Naegele, A. C.; Randall, D. A.
2017-12-01
In the global energy budget, the atmospheric radiative cooling (ARC) is approximately balanced by latent heating, but on regional scales, the ARC and precipitation rates are inversely related. We use a cloud-resolving model to explore how the relationship between precipitation and the ARC is affected by convective aggregation, in which the convective activity is confined to a small portion of the domain that is surrounded by a much larger region of dry, subsiding air. Sensitivity tests show that the precipitation rate and ARC are highly sensitive to both SST and microphysics; a higher SST and 1-moment microphysics both act to increase the domain-averaged ARC and precipitation rates. In all simulations, both the domain-averaged ARC and precipitation rates increased due to convective aggregation, resulting in a positive temporal correlation. Furthermore, the radiative effect of clouds in these simulations is to decrease the ARC. This finding is consistent with our observational results of the cloud effect on the ARC, and has implications for convective aggregation and the geographic extent in which it can occur.
Temporal Correlations and Neural Spike Train Entropy
NASA Astrophysics Data System (ADS)
Schultz, Simon R.; Panzeri, Stefano
2001-06-01
Sampling considerations limit the experimental conditions under which information-theoretic analyses of neurophysiological data yield reliable results. We develop a procedure for computing the full temporal entropy and information of ensembles of neural spike trains, which performs reliably for limited samples of data. This approach also yields insight into the role of correlations between spikes in temporal coding mechanisms. The method, when applied to recordings from complex cells of the monkey primary visual cortex, results in lower rms error information estimates in comparison to a ``brute force'' approach.
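For reference, the ``brute force'' plug-in estimate that such procedures improve upon can be sketched in a few lines, assuming a binarized spike train and a chosen word length. The function name is hypothetical, and this naive estimator is exactly the one that suffers from the limited-sampling bias the paper addresses.

```python
import numpy as np
from collections import Counter

def word_entropy(spikes, word_len):
    """Plug-in ('brute force') entropy estimate, in bits, of binary
    spike words of length `word_len` drawn from a 0/1 spike train."""
    # Slide a window over the train to collect overlapping words.
    words = [tuple(spikes[i:i + word_len])
             for i in range(len(spikes) - word_len + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    # Shannon entropy of the empirical word distribution.
    return float(-(p * np.log2(p)).sum())
```

With few samples relative to the 2^word_len possible words, this estimator is biased downward, which is precisely the sampling problem motivating the corrected procedure.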
Price, Weather, and `Acreage Abandonment' in Western Great Plains Wheat Culture.
NASA Astrophysics Data System (ADS)
Michaels, Patrick J.
1983-07-01
Multivariate analyses of acreage abandonment patterns in the U.S. Great Plains winter wheat region indicate that the major mode of variation is an in-phase oscillation confined to the western half of the overall area, which is also the area with the lowest average yields. This is one of the more agroclimatically marginal environments in the United States, with wide interannual fluctuations in both climate and profitability. We developed a multiple regression model to determine the relative roles of weather and expected price in the decision not to harvest. The overall model explained 77% of the spatial and temporal variation in abandonment. Of the non-spatial variation, 36.5% was explained by two simple transformations of climatic data from three monthly aggregates: September-October, November-February, and March-April. Price factors, expressed as indexed future delivery quotations, were barely significant, with only between 3 and 5% of the non-spatial variation explained, depending upon the model. The model was based upon weather, climate, and price data from 1932 through 1975. It was tested by sequentially withholding three-year blocks of data and using the respecified regression coefficients, along with observed weather and price, to estimate abandonment in the withheld years. Error analyses indicate no loss of model fidelity in the test mode. Also, prediction errors in the 1970-75 period, characterized by widely fluctuating prices, were not different from those in the rest of the model. The overall results suggest that the perceived quality of the crop, as influenced by weather, is a much more important determinant of the abandonment decision than are expected returns based upon price considerations.
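The validation scheme described, sequentially withholding three-year blocks, respecifying the regression, and predicting the withheld years, is a blocked cross-validation. A generic sketch (not the authors' code; the design matrix and block size are assumptions):

```python
import numpy as np

def block_holdout_errors(X, y, years, block=3):
    """Sequentially withhold `block`-year spans, refit an OLS model on
    the remaining years, and collect prediction errors for the
    withheld years (a blocked cross-validation)."""
    errors = []
    uniq = np.unique(years)
    for start in range(0, len(uniq), block):
        held = uniq[start:start + block]       # the withheld year-block
        test = np.isin(years, held)
        train = ~test
        # Respecify the regression coefficients without the held years.
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errors.extend(y[test] - X[test] @ beta)
    return np.array(errors)
```

Comparing the distribution of these out-of-block errors against in-sample residuals is the "no loss of model fidelity" check the abstract refers to.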
Prediction of human errors by maladaptive changes in event-related brain networks.
Eichele, Tom; Debener, Stefan; Calhoun, Vince D; Specht, Karsten; Engel, Andreas K; Hugdahl, Kenneth; von Cramon, D Yves; Ullsperger, Markus
2008-04-22
Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error-preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve approximately 30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations. PMID:18427123
Real-Time Charging Strategies for an Electric Vehicle Aggregator to Provide Ancillary Services
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenzel, George; Negrete-Pincetic, Matias; Olivares, Daniel E.
2017-03-13
Real-time charging strategies, in the context of vehicle-to-grid (V2G) technology, are needed to enable the use of electric vehicle (EV) fleet batteries to provide ancillary services (AS). Here, we develop tools to manage charging and discharging in a fleet so that, when aggregated, it tracks an Automatic Generation Control (AGC) signal. We also propose a real-time controller that considers bidirectional charging efficiency and extend it to study the effect of looking ahead when implementing Model Predictive Control (MPC). Simulations show that the controller reduces tracking error compared with benchmark scheduling algorithms, while also improving regulation capacity and battery cycling.
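One dispatch step of such an aggregator can be illustrated with a naive proportional rule: split the regulation request across vehicles in proportion to their usable headroom while applying a one-way charging/discharging efficiency. All names, limits, and the proportional rule are assumptions, and this is far simpler than the paper's MPC controller.

```python
import numpy as np

def allocate_agc(signal_kw, soc, capacity_kwh, p_max_kw, dt_h=1/3600, eta=0.95):
    """Split a regulation request (kW) across EVs in proportion to
    usable headroom, with efficiency `eta` applied on each direction.
    Positive signal = fleet charges (absorbs power).
    Returns per-vehicle power and updated state of charge (kWh)."""
    soc = np.asarray(soc, float)
    if signal_kw >= 0:   # charging: limited by power rating and empty space
        room = np.minimum(p_max_kw, (capacity_kwh - soc) / dt_h / eta)
    else:                # discharging: limited by power rating and stored energy
        room = np.minimum(p_max_kw, soc / dt_h * eta)
    total = room.sum()
    if total <= 0:
        return np.zeros_like(soc), soc
    # Proportional share, capped so the fleet never exceeds the request.
    p = np.sign(signal_kw) * room * min(abs(signal_kw) / total, 1.0)
    new_soc = soc + np.where(p > 0, p * eta, p / eta) * dt_h
    return p, new_soc
```

An MPC controller would instead optimize this allocation over a look-ahead horizon, trading off tracking error against battery cycling, which is the extension the paper studies.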
Improving patient safety through quality assurance.
Raab, Stephen S
2006-05-01
Anatomic pathology laboratories use several quality assurance tools to detect errors and to improve patient safety. Here we review some of the quality assurance practices that anatomic pathology laboratories use for patient safety, drawing on different standards and measures in anatomic pathology quality assurance and patient safety. Outcome measures included the frequency of anatomic pathology laboratory error, variability in the use of specific quality assurance practices, and the use of data for error reduction initiatives. Anatomic pathology error frequencies vary according to the detection method used. Based on secondary review, a College of American Pathologists Q-Probes study showed that the mean laboratory error frequency was 6.7%. A College of American Pathologists Q-Tracks study measuring frozen section discrepancy found that laboratories improved the longer they monitored and shared data. There is a lack of standardization across laboratories, even for governmentally mandated quality assurance practices such as cytologic-histologic correlation. The National Institutes of Health funded a consortium of laboratories to benchmark laboratory error frequencies, perform root cause analysis, and design error reduction initiatives using quality assurance data. Based on the cytologic-histologic correlation process, these laboratories found an aggregate nongynecologic error frequency of 10.8%. Based on gynecologic error data, the laboratory at my institution used Toyota production system processes to lower gynecologic error frequencies and to improve Papanicolaou test metrics. Laboratory quality assurance practices have been used to track error rates, and laboratories are starting to use these data for error reduction initiatives.
Community temporal variability increases with fluctuating resource availability
Li, Wei; Stevens, M. Henry H.
2017-01-01
An increase in the quantity of available resources is known to affect the temporal variability of aggregate community properties. However, it is unclear how fluctuations in resource availability might alter community-level temporal variability. Here we conduct a microcosm experiment with a laboratory protist community subjected to manipulated resource pulses that vary in intensity, duration and time of supply, and examine the impact of fluctuating resource availability on the temporal variability of the recipient community. The results showed that the temporal variation of total protist abundance increased with the magnitude of resource pulses: a protist community receiving infrequent resource pulses (i.e., high-magnitude nutrient additions per pulse) was relatively more unstable than a community receiving multiple resource pulses (i.e., low-magnitude nutrient additions per pulse), although the same total amounts of nutrients were added to each community. Meanwhile, the timing of fluctuating resources did not significantly alter community temporal variability. Further analysis showed that fluctuating resource availability increased community temporal variability by increasing the degree of community-wide species synchrony and decreasing the stabilizing effects of dominant species. Hence, the importance of fluctuating resource availability in influencing community stability and the regulatory mechanisms merit more attention, especially when global ecosystems are experiencing high rates of anthropogenic nutrient inputs. PMID:28345592
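The two quantities at the heart of this analysis, community-wide species synchrony and the temporal variability of an aggregate property, can be computed from a (time × species) abundance matrix. A sketch using the widely cited synchrony index of Loreau and de Mazancourt (2008); whether the paper uses exactly this metric is an assumption.

```python
import numpy as np

def synchrony(abund):
    """Community-wide synchrony (Loreau & de Mazancourt 2008):
    variance of total abundance over the squared sum of species
    standard deviations. 1 = perfect synchrony; values near 0
    indicate compensatory dynamics. `abund` is (time, species)."""
    var_tot = abund.sum(axis=1).var()
    sd_sum = abund.std(axis=0).sum()
    return var_tot / sd_sum**2

def temporal_cv(abund):
    """Temporal variability of the aggregate property:
    coefficient of variation of total community abundance."""
    tot = abund.sum(axis=1)
    return tot.std() / tot.mean()
```

When species fluctuate in phase, synchrony approaches 1 and the aggregate CV is high; compensatory (out-of-phase) dynamics push synchrony toward 0 and stabilize the total, which is the mechanism the abstract invokes.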
How does aging affect the types of error made in a visual short-term memory ‘object-recall’ task?
Sapkota, Raju P.; van der Linde, Ian; Pardhan, Shahina
2015-01-01
This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76), and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates for reporting the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits. PMID:25653615
A facile in vitro model to study rapid mineralization in bone tissues.
Deegan, Anthony J; Aydin, Halil M; Hu, Bin; Konduru, Sandeep; Kuiper, Jan Herman; Yang, Ying
2014-09-16
Mineralization in bone tissue involves stepwise cell-cell and cell-ECM interaction. Regulation of osteoblast culture microenvironments can tailor osteoblast proliferation and mineralization rate, and the quality and/or quantity of the final calcified tissue. An in vitro model to investigate these influencing factors is therefore needed. We developed a facile in vitro model in which an osteoblast cell line and aggregate culture (through the modification of culture well surfaces) were used to mimic intramembranous bone mineralization. The effect of culture environment, including culture duration (up to 72 hours for this rapid mineralization study) and aggregate size (monolayer culture as control), on mineralization rate and mineral quantity/quality was examined by osteogenic gene expression (PCR) and mineral markers (histological staining, SEM-EDX and micro-CT). Aggregates of two sizes (on average, 745 μm for large and 79 μm for small aggregates) were obtained with high yield by this facile technique. Cells in aggregate culture generated visible and quantifiable mineralized matrix within 24 hours, whereas cells in monolayer failed to do so by 72 hours. The gene expression of important ECM molecules for bone formation, including collagen type I, alkaline phosphatase, osteopontin and osteocalcin, varied temporally, differed between monolayer and aggregate cultures, and depended on aggregate size. Monolayer specimens stayed in a proliferation phase for the first 24 hours, and remained in matrix synthesis up to 72 hours; whereas the small aggregates were in the maturation phase for the first 24 and 48 hour cultures and then jumped to a mineralization phase at 72 hours. Large aggregates were in a mineralization phase at all three time points and produced 36% larger bone nodules with a higher calcium content than those in the small aggregates after just 72 hours in culture.
This study confirms that aggregate culture is sufficient to induce rapid mineralization and that aggregate size determines the mineralization rate. Mineral content depended on aggregate size and culture duration. Thus, our culture system may provide a good model to study regulation factors at different development phases of the osteoblastic lineage.
Dopamine reward prediction-error signalling: a two-component response
Schultz, Wolfram
2017-01-01
Environmental stimuli and objects, including rewards, are often processed sequentially in the brain. Recent work suggests that the phasic dopamine reward prediction-error response follows a similar sequential pattern. An initial brief, unselective and highly sensitive increase in activity unspecifically detects a wide range of environmental stimuli, then quickly evolves into the main response component, which reflects subjective reward value and utility. This temporal evolution allows the dopamine reward prediction-error signal to optimally combine speed and accuracy. PMID:26865020
Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, A. F.; Jacobs, C. S.
2011-01-01
The standard VLBI analysis models measurement noise as purely thermal errors drawn from uncorrelated Gaussian distributions. As the price of recording bits steadily decreases, thermal errors will soon no longer dominate. It is therefore expected that troposphere and instrumentation/clock errors will increasingly become more dominant. Given that both of these errors have correlated spectra, properly modeling the error distributions will become more relevant for optimal analysis. This paper will discuss the advantages of including the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow model pioneered by Treuhaft and Lanyi. We will show examples of applying these correlated noise spectra to the weighting of VLBI data analysis.
Why Is Rainfall Error Analysis Requisite for Data Assimilation and Climate Modeling?
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Zhang, Sara Q.
2004-01-01
Given the large temporal and spatial variability of precipitation processes, errors in rainfall observations are difficult to quantify yet crucial to making effective use of rainfall data for improving atmospheric analysis, weather forecasting, and climate modeling. We highlight the need for developing a quantitative understanding of systematic and random errors in precipitation observations by examining explicit examples of how each type of error can affect forecasts and analyses in global data assimilation. We characterize the error information needed from the precipitation measurement community and how it may be used to improve data usage within the general framework of analysis techniques, as well as accuracy requirements from the perspective of climate modeling and global data assimilation.
Barrera-Ocampo, Alvaro; Arlt, Sönke; Matschke, Jakob; Hartmann, Ursula; Puig, Berta; Ferrer, Isidre; Zürbig, Petra; Glatzel, Markus; Sepulveda-Falla, Diego; Jahn, Holger
2016-09-01
The mechanisms leading to amyloid-β (Aβ) accumulation in sporadic Alzheimer disease (AD) are unknown, but increased production, impaired clearance, or both likely contribute to aggregation. To understand the potential roles of the extracellular matrix proteoglycan Testican-1 in the pathophysiology of AD, we used samples from AD patients and controls and an in vitro approach. Protein expression analysis showed increased levels of Testican-1 in frontal and temporal cortex of AD patients; histological analysis showed that Testican-1 accumulates and co-aggregates with Aβ plaques in the frontal, temporal and entorhinal cortices of AD patients. Proteomic analysis identified 10 fragments of Testican-1 in cerebrospinal fluid (CSF) from AD patients. HEK293T cells expressing human wild type or mutant Aβ precursor protein (APP) were transfected with Testican-1. The co-expression of both proteins modified the sorting of Testican-1 into the endocytic pathway, leading to its transient accumulation in the Golgi, which seemed to affect APP processing, as indicated by reduced Aβ40 and Aβ42 levels in APP mutant cells. In conclusion, patient data reflect a clearance impairment that may favor Aβ accumulation in AD brains, and our in vitro model supports the notion that the interaction between APP and Testican-1 may be a key step in the production and aggregation of Aβ species.
Valdivia, Nelson; Golléty, Claire; Migné, Aline; Davoult, Dominique; Molis, Markus
2012-01-01
The temporal stability of aggregate community properties depends on the dynamics of the component species. Since species growth can compensate for the decline of other species, synchronous species dynamics can maintain stability (i.e. invariability) in aggregate properties such as community abundance and metabolism. In field experiments we tested the separate and interactive effects of two stressors associated with storminess (loss of a canopy-forming species and mechanical disturbance) on species synchrony and community respiration of intertidal hard-bottom communities on Helgoland Island, NE Atlantic. Treatments consisted of regular removal of the canopy-forming seaweed Fucus serratus and a mechanical disturbance applied once at the onset of the experiment in March 2006. The level of synchrony in species abundances was assessed from estimates of species percentage cover every three months until September 2007. Experiments at two sites consistently showed that canopy loss significantly reduced species synchrony. Mechanical disturbance had neither separate nor interactive effects on species synchrony. Accordingly, in situ measurements of CO2-fluxes showed that canopy loss, but not mechanical disturbance, significantly reduced net primary productivity and temporal variation in community respiration during emersion periods. Our results support the idea that compensatory dynamics may stabilise aggregate properties. They further suggest that the ecological consequences of the loss of a single structurally important species may be stronger than those derived from smaller-scale mechanical disturbances in natural ecosystems. PMID:22574181
The cerebellum predicts the temporal consequences of observed motor acts.
Avanzino, Laura; Bove, Marco; Pelosin, Elisa; Ogliastro, Carla; Lagravinese, Giovanna; Martino, Davide
2015-01-01
It is increasingly clear that we extract patterns of temporal regularity between events to optimize information processing. The ability to extract temporal patterns and regularity of events is referred to as temporal expectation. Temporal expectation activates the same cerebral network usually engaged in action selection, including the cerebellum. However, it is unclear whether the cerebellum is directly involved in temporal expectation when timing information is processed to make predictions on the outcome of a motor act. Healthy volunteers received one session of either active (inhibitory, 1 Hz) or sham repetitive transcranial magnetic stimulation over the right lateral cerebellum prior to the execution of a temporal expectation task. Subjects were asked to predict the end of a visually perceived human body motion (right-hand handwriting) and of an inanimate object motion (a moving circle reaching a target). Videos representing the movements were shown in full; the actual tasks consisted of watching the same videos, but interrupted after a variable interval from onset by a dark interval of variable duration. During the dark interval, subjects were asked to indicate when the movement represented in the video reached its end by clicking the spacebar of the keyboard. Performance on the timing task was analyzed by measuring the absolute value of the timing error, the coefficient of variability and the percentage of anticipation responses. The active group exhibited greater absolute timing error compared with the sham group only in the human body motion task. Our findings suggest that the cerebellum is engaged in cognitive and perceptual domains that are strictly connected to motor control.
In vitro evaluation of the imaging accuracy of C-arm conebeam CT in cerebral perfusion imaging
Ganguly, A.; Fieselmann, A.; Boese, J.; Rohkohl, C.; Hornegger, J.; Fahrig, R.
2012-01-01
Purpose: The authors have developed a method to enable cerebral perfusion CT imaging using C-arm based conebeam CT (CBCT). This allows intraprocedural monitoring of brain perfusion during treatment of stroke. Briefly, the technique consists of acquiring multiple scans (each comprising six sweeps) at different time delays with respect to the start of the x-ray contrast agent injection. The projections are then reconstructed into angular blocks and interpolated at desired time points. The authors have previously demonstrated its feasibility in vivo using an animal model. In this paper, the authors describe an in vitro technique to evaluate the accuracy of their method for measuring the relevant temporal signals. Methods: The authors’ evaluation method is based on the concept that any temporal signal can be represented by a Fourier series of weighted sinusoids. A sinusoidal phantom was developed by varying the concentration of iodine in successive steps of a sine wave, each step corresponding to a different dilution of iodine contrast solution contained in partitions along a cylinder. By translating the phantom along its axis at different velocities, sinusoidal signals at different frequencies were generated. Using their image acquisition and reconstruction algorithm, these sinusoidal signals were imaged with a C-arm system and the 3D volumes were reconstructed. The average value in a slice was plotted as a function of time. The phantom was also imaged using a clinical CT system with 0.5 s rotation. C-arm CBCT results using 6, 3, 2, and 1 scan sequences were compared to those obtained using CT. Data were compared for linear velocities of the phantom ranging from 0.6 to 1 cm/s. This covers temporal frequencies up to 0.16 Hz, a frequency range within which 99% of the spectral energy for all temporal signals in cerebral perfusion imaging is contained.
Results: The errors in measurement of temporal frequencies are mostly below 2% for all multiscan sequences. For single scan sequences, the errors increase sharply beyond 0.10 Hz. The amplitude errors increase with frequency and with decrease in the number of scans used. Conclusions: Our multiscan perfusion CT approach allows low errors in signal frequency measurement. Increasing the number of scans reduces the amplitude errors. A two-scan sequence appears to offer the best compromise between accuracy and the associated total x-ray and iodine dose. PMID:23127059
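The core measurement in this evaluation, comparing the recovered sinusoid against its known input, can be illustrated with a simple FFT-based estimate of the dominant frequency and amplitude. This is a hypothetical stand-in for the authors' analysis, assuming a uniformly sampled slice-average signal.

```python
import numpy as np

def dominant_sinusoid(signal, dt):
    """Recover the frequency (Hz) and amplitude of the dominant
    sinusoid in a uniformly sampled signal via the FFT."""
    n = len(signal)
    # Remove the mean so the DC bin does not dominate.
    spec = np.fft.rfft(signal - np.mean(signal))
    freqs = np.fft.rfftfreq(n, d=dt)
    k = np.argmax(np.abs(spec))
    amp = 2.0 * np.abs(spec[k]) / n   # single-sided amplitude scaling
    return freqs[k], amp
```

Frequency and amplitude errors of the kind reported (sub-2% frequency error, amplitude error growing with frequency) would then be the relative differences between these recovered values and the phantom's known input sinusoid.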
Linning, Shannon J; Andresen, Martin A; Brantingham, Paul J
2017-12-01
This study investigates whether crime patterns fluctuate periodically throughout the year, using data on different property crime types in two Canadian cities with differing climates. Using police report data, a series of ordinary least squares (OLS; Vancouver, British Columbia) and negative binomial (Ottawa, Ontario) regressions were employed to examine the corresponding temporal patterns of property crime in Vancouver (2003-2013) and Ottawa (2006-2008). Moreover, both aggregate and disaggregate models were run to examine whether different weather and temporal variables had a distinct impact on particular offences. Overall, results suggest that cities that experience greater variations in weather throughout the year have more distinct increases in property offences in the summer months and that different climate variables affect certain crime types, thus advocating for disaggregate analysis in the future.
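A toy version of such a seasonal regression fits monthly means with least squares on month indicator variables. This is illustrative only; the study's models also include weather covariates and a negative binomial specification for the Ottawa counts, neither of which is reproduced here.

```python
import numpy as np

def monthly_effects(counts, months):
    """Fit monthly crime counts with OLS on 12 month indicators
    (months coded 1-12). Returns the fitted mean for each month."""
    counts = np.asarray(counts, float)
    X = np.zeros((len(counts), 12))
    X[np.arange(len(counts)), np.asarray(months) - 1] = 1.0
    # Minimum-norm least squares; months absent from the data get 0.
    beta, *_ = np.linalg.lstsq(X, counts, rcond=None)
    return beta
```

A seasonal (summer) increase would show up as larger fitted means for June through August; adding temperature or precipitation columns to X is the natural extension toward the disaggregate models the abstract advocates.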
75 FR 74607 - Correction of Administrative Errors
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-01
... established for private-sector employees under section 401(k) of the Internal Revenue Code (26 U.S.C. 401(k... regulation on state, local, and tribal governments and the private sector have been assessed. This regulation... governments, in the aggregate, or by the private sector. Therefore, a statement under section 1532 is not...
Carkovic, Athena B; Pastén, Pablo A; Bonilla, Carlos A
2015-04-15
Water erosion is a leading cause of soil degradation and a major nonpoint-source pollution problem. Many efforts have been undertaken to estimate the amount and size distribution of the sediment leaving the field. Multi-size-class water erosion models subdivide eroded soil into different size classes and estimate aggregate composition based on empirical equations derived from agricultural soils. The objective of this study was to evaluate these equations on soil samples collected from natural landscapes (uncultivated) and fire-affected soils. Analyses of chemical and physical properties, soil fractions, and aggregate composition were performed on samples collected in the Chilean Patagonia and later compared with the equations' estimates. The results showed that the empirical equations were not suitable for predicting the sediment fractions. Fine particles, including primary clay, primary silt, and small aggregates (<53 μm), were over-estimated, and large aggregates (>53 μm) and primary sand were under-estimated. The uncultivated and fire-affected soils showed a reduced fraction of fine particles in the sediment, as clay and silt were mostly in the form of large aggregates. Thus, a new set of equations was developed for these soils, where small aggregates were defined as particles with sizes between 53 μm and 250 μm and large aggregates as particles >250 μm. With r² values between 0.47 and 0.98, the new equations provided better estimates for primary sand and large aggregates. Aggregate composition was also well predicted, especially the silt and clay fractions in the large aggregates from uncultivated soils (r² = 0.63 and 0.83, respectively) and the fractions of silt in the small aggregates (r² = 0.84) and clay in the large aggregates (r² = 0.78) from fire-affected soils.
Overall, these new equations proved to be better predictors of sediment and aggregate composition in uncultivated and fire-affected soils, and they reduce the error when estimating soil loss in natural landscapes.
Povarova, Natalia V.; Petri, Natalia D.; Blokhina, Anna E.; Bogdanov, Alexey M.; Lukyanov, Konstantin A.
2017-01-01
Despite great advances in practical applications of fluorescent proteins (FPs), their natural function is poorly understood. FPs display complex spatio-temporal expression patterns in living Anthozoa coral polyps. Here we applied confocal microscopy, specifically, the fluorescence recovery after photobleaching (FRAP) technique to analyze intracellular localization and mobility of endogenous FPs in live tissues. We observed three distinct types of protein distributions in living tissues. One type of distribution, characteristic for Anemonia, Discosoma and Zoanthus, is free, highly mobile cytoplasmic localization. Another pattern is seen in FPs localized to numerous intracellular vesicles, observed in Clavularia. The third most intriguing type of intracellular localization is with respect to the spindle-shaped aggregates and lozenge crystals several micrometers in size observed in Zoanthus samples. No protein mobility within those structures was detected by FRAP. This finding encouraged us to develop artificial aggregating FPs. We constructed “trio-FPs” consisting of three tandem copies of tetrameric FPs and demonstrated that they form multiple bright foci upon expression in mammalian cells. High brightness of the aggregates is advantageous for early detection of weak promoter activities. Simultaneously, larger aggregates can induce significant cytostatic and cytotoxic effects and thus such tags are not suitable for long-term and high-level expression. PMID:28704934
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly
Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would be normally expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.
Chen, David D; Pei, Laura; Chan, John S Y; Yan, Jin H
2012-10-01
Recent research using deliberate amplification of spatial errors to increase motor learning leads to the question of whether amplifying temporal errors may also facilitate learning. We investigated transfer effects caused by manipulating temporal constraints on learning a two-choice reaction time (CRT) task with varying degrees of stimulus-response compatibility. Thirty-four participants were randomly assigned to one of three groups and completed 120 trials during acquisition. For every fourth trial, one group was instructed to decrease CRT by 50 msec. relative to the previous trial and a second group was instructed to increase CRT by 50 msec. The third group (the control) was told not to change their responses. After a 5-min. break, participants completed a 40-trial no-feedback transfer test. A 40-trial delayed transfer test was administered 24 hours later. During acquisition, the Decreased Reaction Time group responded faster than the other two groups, but also made more errors. In the 5-min. delayed test (immediate transfer), the Decreased Reaction Time group had faster reaction times than the other two groups, while in the 24-hr. delayed test (delayed transfer), both the Decreased and Increased Reaction Time groups had significantly faster reaction times than the control. Analyses of error scores in the transfer tests revealed no significant group differences. Results were discussed with regard to the notion of practice variability and goal-setting benefits.
ERIC Educational Resources Information Center
Unsworth, Nash
2008-01-01
Retrieval dynamics in free recall were explored based on a two-stage search model that relies on temporal-contextual cues. Participants were tested on both delayed and final free recall and correct recalls, errors, and latency measures were examined. In delayed free recall, participants began recall with the first word presented and tended to…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendoza, D.; Gurney, Kevin R.; Geethakumar, Sarath
2013-04-01
In this study we present onroad fossil fuel CO2 emissions estimated by the Vulcan Project, an effort quantifying fossil fuel CO2 emissions for the U.S. in high spatial and temporal resolution. This high-resolution data, aggregated at the state-level and classified in broad road and vehicle type categories, is compared to a commonly used national-average approach. We find that the use of national averages incurs state-level biases for road groupings that are almost twice as large as for vehicle groupings. The uncertainty for all groups exceeds the bias, and both quantities are positively correlated with total state emissions. States with the largest emissions totals are typically similar to one another in terms of emissions fraction distribution across road and vehicle groups, while smaller-emitting states have a wider range of variation in all groups. Errors in reduction estimates as large as ±60% corresponding to ±0.2 MtC are found for a national-average emissions mitigation strategy focused on a 10% emissions reduction from a single vehicle class, such as passenger gas vehicles or heavy diesel trucks. Recommendations are made for reducing CO2 emissions uncertainty by addressing its main drivers: VMT and fuel efficiency uncertainty.
NASA Astrophysics Data System (ADS)
Ciaramello, Frank M.; Hemami, Sheila S.
2009-02-01
Communication of American Sign Language (ASL) over mobile phones would be very beneficial to the Deaf community. ASL video encoded to achieve the rates provided by current cellular networks must be heavily compressed, and appropriate assessment techniques are required to analyze the intelligibility of the compressed video. As an extension to a purely spatial measure of intelligibility, this paper quantifies the effect of temporal compression artifacts on sign language intelligibility. These artifacts can be the result of motion-compensation errors that distract the observer or of frame rate reductions. They reduce the perception of smooth motion and disrupt the temporal coherence of the video. Motion-compensation errors that affect temporal coherence are identified by measuring the block-level correlation between co-located macroblocks in adjacent frames. The impact of frame rate reductions was quantified through experimental testing. A subjective study was performed in which fluent ASL participants rated the intelligibility of sequences encoded at 5 different frame rates and with 3 different levels of distortion. The subjective data is used to parameterize an objective intelligibility measure which is highly correlated with subjective ratings at multiple frame rates.
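The block-level temporal-coherence measurement described above can be sketched in a few lines; this is a minimal illustration (not the paper's exact metric), using plain Pearson correlation between co-located 16x16 macroblocks of consecutive grayscale frames:

```python
import numpy as np

def block_correlation(prev_frame, curr_frame, block=16):
    """Pearson correlation between co-located blocks in adjacent frames.
    A low correlation for a block suggests a motion-compensation error
    that disrupts temporal coherence. Flat blocks (zero variance) are NaN."""
    h, w = prev_frame.shape
    scores = np.full((h // block, w // block), np.nan)
    for by in range(h // block):
        for bx in range(w // block):
            a = prev_frame[by*block:(by+1)*block, bx*block:(bx+1)*block].ravel()
            b = curr_frame[by*block:(by+1)*block, bx*block:(bx+1)*block].ravel()
            if a.std() > 0 and b.std() > 0:
                scores[by, bx] = np.corrcoef(a, b)[0, 1]
    return scores

rng = np.random.default_rng(0)
f = rng.random((64, 64))
# identical frames give correlation 1 wherever blocks have variance
print(np.nanmin(block_correlation(f, f)))
```

Per-block scores like these could then be thresholded or pooled into a sequence-level coherence score.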
Murdoch, Maureen; Pryor, John B; Griffin, Joan M; Ripley, Diane Cowper; Gackstetter, Gary D; Polusny, Melissa A; Hodges, James S
2011-01-01
The Department of Defense's "gold standard" sexual harassment measure, the Sexual Harassment Core Measure (SHCore), is based on an earlier measure that was developed primarily in college women. Furthermore, the SHCore requires a reading grade level of 9.1. This may be higher than some troops' reading abilities and could generate unreliable estimates of their sexual harassment experiences. Results from 108 male and 96 female soldiers showed that the SHCore's temporal stability and alternate-forms reliability was significantly worse (a) in soldiers without college experience compared to soldiers with college experience and (b) in men compared to women. For men without college experience, almost 80% of the temporal variance in SHCore scores was attributable to error. A plain language version of the SHCore had mixed effects on temporal stability depending on education and gender. The SHCore may be particularly ill suited for evaluating population trends of sexual harassment in military men without college experience.
Stochastic goal-oriented error estimation with memory
NASA Astrophysics Data System (ADS)
Ackmann, Jan; Marotzke, Jochem; Korn, Peter
2017-11-01
We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
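As a rough sketch of the memory mechanism (an assumption for illustration, not the paper's exact formulation), time-correlated truncation-error fluctuations can be modeled as an AR(1) process, with the stationary standard deviation sigma and correlation rho playing the role of quantities estimated from high-resolution near-initial-time information:

```python
import numpy as np

def ar1_errors(n_steps, sigma, rho, rng):
    """Local truncation errors as time-correlated (AR(1)) random variables:
        e_t = rho * e_{t-1} + sqrt(1 - rho**2) * sigma * w_t,  w_t ~ N(0, 1).
    The stationary standard deviation is sigma; rho controls the memory
    (rho = 0 recovers the memory-less case)."""
    e = np.zeros(n_steps)
    for t in range(1, n_steps):
        e[t] = rho * e[t - 1] + np.sqrt(1 - rho**2) * sigma * rng.standard_normal()
    return e

rng = np.random.default_rng(1)
e = ar1_errors(50_000, sigma=0.1, rho=0.6, rng=rng)
# A dual-weighted estimator would weight these per-step errors by the
# goal's sensitivities before summing them over time.
```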
Yuen, Elaine Y.L.
2015-01-01
Terrestrial predators have been shown to aggregate along stream margins during periods when the emergence of adult aquatic insects is high. Such aggregation may be especially evident when terrestrial surroundings are relatively unproductive, and there are steep productivity gradients across riparia. In tropical forests, however, the productivity of inland terrestrial habitats may decrease the resource gradient across riparia, thus lessening any tendency of terrestrial predators to aggregate along stream margins. We elucidated the spatio-temporal variability in the distribution of ground-dwelling spiders and terrestrial arthropod prey within the riparia of two forest streams in tropical Hong Kong by sampling arthropods along transects at different distances from the streams during the wet and dry seasons. Environmental variables that may have influenced spider distributions were also measured. The vast majority of ground-dwelling predators along all transects at both sites were spiders. Of the three most abundant spiders captured along stream margins, Heteropoda venatoria (Sparassidae) and Draconarius spp. (Agelenidae) were terrestrially inclined and abundant during both seasons. Only Pardosa sumatrana (Lycosidae) showed some degree of aggregation at the stream banks, indicating a potential reliance on aquatic insect prey. Circumstantial evidence supports this notion, as P. sumatrana was virtually absent during the dry season when aquatic insect emergence was low. In general, forest-stream riparia in Hong Kong did not appear to be feeding hotspots for ground-dwelling predators. The lack of aggregation in ground-dwelling spiders in general may be attributed to the low rates of emergence of aquatic insects from the study streams compared to counterpart systems, as well as the potentially high availability of terrestrial insect prey in the surrounding forest. 
Heteropoda venatoria, the largest of the three spiders, maintained a high biomass (up to 28 mg dry weight/m²) in stream riparia, exceeding the total standing stock of all other spiders by 2–80 times. The biomass and inland distribution of H. venatoria could make it a likely conduit for the stream-to-land transfer of energy. PMID:26246974
Ni, Xinzhi; Wilson, Jeffrey P; Toews, Michael D; Buntin, G David; Lee, R Dewey; Li, Xin; Lei, Zhongren; He, Kanglai; Xu, Wenwei; Li, Xianchun; Huffaker, Alisa; Schmelz, Eric A
2014-10-01
Spatial and temporal patterns of insect damage in relation to aflatoxin contamination in a corn field with plants of uniform genetic background are not well understood. After previous examination of spatial patterns of insect damage and aflatoxin in pre-harvest corn fields, we further examined both spatial and temporal patterns of cob- and kernel-feeding insect damage, and aflatoxin level with two samplings at pre-harvest in 2008 and 2009. The feeding damage by each of the ear/kernel-feeding insects (i.e., corn earworm/fall armyworm damage on the silk/cob, and discoloration of corn kernels by stink bugs) and maize weevil population were assessed at each grid point with five ears. Sampling data showed a field edge effect in both insect damage and aflatoxin contamination in both years. Maize weevils tended toward an aggregated distribution more frequently than either corn earworm or stink bug damage in both years. The frequency of detecting aggregated distribution for aflatoxin level was less than any of the insect damage assessments. Stink bug damage and maize weevil number were more closely associated with aflatoxin level than was corn earworm damage. In addition, the indices of spatial-temporal association (χ) demonstrated that the number of maize weevils was associated between the first (4 weeks pre-harvest) and second (1 week pre-harvest) samplings in both years on all fields. In contrast, corn earworm damage between the first and second samplings from the field on the Belflower Farm, and aflatoxin level and corn earworm damage from the field on the Lang Farm were dissociated in 2009. Published 2012. This article is a U.S. Government work and is in the public domain in the USA.
Gorsich, Erin E; Luis, Angela D; Buhnerkempe, Michael G; Grear, Daniel A; Portacci, Katie; Miller, Ryan S; Webb, Colleen T
2016-11-01
The application of network analysis to cattle shipments broadens our understanding of shipment patterns beyond pairwise interactions to the network as a whole. Such a quantitative description of cattle shipments in the U.S. can identify trade communities, describe temporal shipment patterns, and inform the design of disease surveillance and control strategies. Here, we analyze a longitudinal dataset of beef and dairy cattle shipments from 2009 to 2011 in the United States to characterize communities within the broader cattle shipment network, which are groups of counties that ship mostly to each other. Because shipments occur over time, we aggregate the data at various temporal scales to examine the consistency of network and community structure over time. Our results identified nine large (>50 counties) communities based on shipments of beef cattle in 2009 aggregated into an annual network and nine large communities based on shipments of dairy cattle. The size and connectance of the shipment network was highly dynamic; monthly networks were smaller than yearly networks and revealed seasonal shipment patterns consistent across years. Comparison of the shipment network over time showed largely consistent shipping patterns, such that communities identified on annual networks of beef and dairy shipments from 2009 still represented 41-95% of shipments in monthly networks from 2009 and 41-66% of shipments from networks in 2010 and 2011. The temporal aspects of cattle shipments suggest that future applications of the U.S. cattle shipment network should consider seasonal shipment patterns. However, the consistent within-community shipping patterns indicate that yearly communities could provide a reasonable way to group regions for management. Copyright © 2016 Elsevier B.V. All rights reserved.
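The temporal aggregation step can be sketched minimally; this assumes a hypothetical record layout of (date, origin county, destination county) tuples, not the study's actual data format:

```python
from collections import defaultdict

def aggregate_network(shipments, scale="year"):
    """Aggregate shipment records into weighted edge lists at a yearly or
    monthly scale. Each record is ((year, month, day), origin, dest);
    the returned mapping is period -> {(origin, dest): shipment count}."""
    nets = defaultdict(lambda: defaultdict(int))
    for (year, month, _day), orig, dest in shipments:
        key = year if scale == "year" else (year, month)
        nets[key][(orig, dest)] += 1
    return nets

records = [((2009, 1, 5), "A", "B"), ((2009, 1, 9), "A", "B"),
           ((2009, 7, 2), "B", "C")]
print(aggregate_network(records, "month")[(2009, 1)][("A", "B")])  # 2
```

Community detection would then run on each period's weighted edge list, allowing the comparison of monthly versus yearly structure described above.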
Vogel, Curtis R; Tyler, Glenn A; Wittich, Donald J
2014-07-01
We introduce a framework for modeling, analysis, and simulation of aero-optics wavefront aberrations that is based on spatial-temporal covariance matrices extracted from wavefront sensor measurements. Within this framework, we present a quasi-homogeneous structure function to analyze nonhomogeneous, mildly anisotropic spatial random processes, and we use this structure function to show that phase aberrations arising in aero-optics are, for an important range of operating parameters, locally Kolmogorov. This strongly suggests that the d^(5/3) power law for adaptive optics (AO) deformable mirror fitting error, where d denotes actuator separation, holds for certain important aero-optics scenarios. This framework also allows us to compute bounds on AO servo lag error and predictive control error. In addition, it provides us with the means to accurately simulate AO systems for the mitigation of aero-effects, and it may provide insight into underlying physical processes associated with turbulent flow. The techniques introduced here are demonstrated using data obtained from the Airborne Aero-Optics Laboratory.
ERIC Educational Resources Information Center
Tallot, Lucille; Diaz-Mataix, Lorenzo; Perry, Rosemarie E.; Wood, Kira; LeDoux, Joseph E.; Mouly, Anne-Marie; Sullivan, Regina M.; Doyère, Valérie
2017-01-01
The updating of a memory is triggered whenever it is reactivated and a mismatch from what is expected (i.e., prediction error) is detected, a process that can be unraveled through the memory's sensitivity to protein synthesis inhibitors (i.e., reconsolidation). As noted in previous studies, in Pavlovian threat/aversive conditioning in adult rats,…
Using R for analysing spatio-temporal datasets: a satellite-based precipitation case study
NASA Astrophysics Data System (ADS)
Zambrano-Bigiarini, Mauricio
2017-04-01
Increasing computer power and the availability of remote-sensing data measuring different environmental variables have led to unprecedented opportunities for Earth sciences in recent decades. However, dealing with hundreds or thousands of files, usually in different vectorial and raster formats and measured with different temporal frequencies, imposes high computational challenges to take full advantage of all the available data. R is a language and environment for statistical computing and graphics which includes several functions for data manipulation, calculation and graphical display, which are particularly well suited for Earth sciences. In this work I describe how R was used to exhaustively evaluate seven state-of-the-art satellite-based rainfall estimate (SRE) products (TMPA 3B42v7, CHIRPSv2, CMORPH, PERSIANN-CDR, PERSIANN-CCS-adj, MSWEPv1.1 and PGFv3) over the complex topography and diverse climatic gradients of Chile. First, built-in functions were used to automatically download the satellite images in different raster formats and spatial resolutions and to clip them to the Chilean spatial extent where necessary. Second, the raster package was used to read, plot, and conduct an exploratory data analysis on selected files of each SRE product, in order to detect unexpected problems (rotated spatial domains, order of variables in NetCDF files, etc.). Third, raster was used along with the hydroTSM package to aggregate SRE files into different temporal scales (daily, monthly, seasonal, annual). Finally, the hydroTSM and hydroGOF packages were used to carry out a point-to-pixel comparison between precipitation time series measured at 366 stations and the corresponding grid cell of each SRE.
The modified Kling-Gupta index of model performance was used to identify possible sources of systematic errors in each SRE, while five categorical indices (PC, POD, FAR, ETS, fBIAS) were used to assess the ability of each SRE to correctly identify different precipitation intensities. In the end, R proved to be an efficient environment for dealing with thousands of raster, vectorial and time series files with different spatial and temporal resolutions and spatial reference systems. In addition, the use of well-documented R scripts made code readable and re-usable, facilitating reproducible research, which is essential to build trust in stakeholders and the scientific community.
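The modified Kling-Gupta index has a standard closed form (Kling et al., 2012): KGE' = 1 - sqrt((r-1)² + (β-1)² + (γ-1)²), where r is the correlation, β the bias ratio of means, and γ the ratio of coefficients of variation. A minimal Python sketch follows (the study itself used R packages such as hydroGOF):

```python
import numpy as np

def kge_prime(sim, obs):
    """Modified Kling-Gupta efficiency KGE' (1.0 = perfect agreement).
    r: linear correlation; beta: mean(sim)/mean(obs);
    gamma: CV(sim)/CV(obs), i.e. the variability ratio."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    beta = sim.mean() / obs.mean()
    gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())
    return 1.0 - np.sqrt((r - 1)**2 + (beta - 1)**2 + (gamma - 1)**2)

obs = np.array([1.0, 2.0, 3.0, 4.0])
print(kge_prime(obs, obs))  # perfect agreement -> 1.0
```

Decomposing KGE' into its r, β, and γ terms is what allows attributing a product's error to correlation, bias, or variability, as done in the evaluation above.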
NASA Astrophysics Data System (ADS)
Reyes, J.; Vizuete, W.; Serre, M. L.; Xu, Y.
2015-12-01
The EPA employs a vast monitoring network to measure ambient PM2.5 concentrations across the United States, with one of its goals being to quantify exposure within the population. However, there are several areas of the country with sparse monitoring, spatially and temporally. One means to fill in these monitoring gaps is to use PM2.5 modeled estimates from Chemical Transport Models (CTMs), specifically the Community Multi-scale Air Quality (CMAQ) model. CMAQ is able to provide complete spatial coverage but is subject to systematic and random error due to model uncertainty. Due to the deterministic nature of CMAQ, these uncertainties are often not quantified. Much effort is employed to quantify the efficacy of these models through different metrics of model performance. Currently, evaluation is specific only to locations with observed data. Multiyear studies across the United States are challenging because the error and model performance of CMAQ are not uniform over such large space/time domains; error changes regionally and temporally. Because of the complex mix of species that constitute PM2.5, CMAQ error is also a function of increasing PM2.5 concentration. To address this issue we introduce a model performance evaluation for PM2.5 CMAQ that is regionalized and non-linear, leading to error quantification for each CMAQ grid cell, so that areas and time periods of error are better characterized. The regionalized error correction approach is non-linear and is therefore more flexible at characterizing model performance than approaches that rely on linearity assumptions and assume homoscedasticity of CMAQ prediction errors. Corrected CMAQ data are then incorporated into the modern geostatistical framework of Bayesian Maximum Entropy (BME). Through cross-validation it is shown that incorporating error-corrected CMAQ data leads to more accurate estimates than using observed data by themselves.
A comparative analysis of errors in long-term econometric forecasts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tepel, R.
1986-04-01
The growing body of literature that documents forecast accuracy falls generally into two parts. The first is prescriptive and is carried out by modelers who use simulation analysis as a tool for model improvement. These studies are ex post, that is, they make use of known values for exogenous variables and generate an error measure wholly attributable to the model. The second type of analysis is descriptive and seeks to measure errors, identify patterns among errors and variables and compare forecasts from different sources. Most descriptive studies use an ex ante approach, that is, they evaluate model outputs based on estimated (or forecasted) exogenous variables. In this case, it is the forecasting process, rather than the model, that is under scrutiny. This paper uses an ex ante approach to measure errors in forecast series prepared by Data Resources Incorporated (DRI), Wharton Econometric Forecasting Associates (Wharton), and Chase Econometrics (Chase) and to determine if systematic patterns of errors can be discerned between services, types of variables (by degree of aggregation), length of forecast and time at which the forecast is made. Errors are measured as the percent difference between actual and forecasted values for the historical period of 1971 to 1983.
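The error measure is simple to state; a minimal sketch follows, assuming (as is conventional, though the paper does not spell it out here) that the percent difference is taken relative to the actual value:

```python
def percent_error(actual, forecast):
    """Ex ante forecast error as the percent difference between actual and
    forecasted values. Positive values mean the forecast was too low."""
    return 100.0 * (actual - forecast) / actual

# e.g. actual GNP 110, forecast 100 -> forecast under-predicted by ~9.1%
print(round(percent_error(110.0, 100.0), 1))
```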
NASA Astrophysics Data System (ADS)
Ampil, L. J. Y.; Yao, J. G.; Lagrosas, N.; Lorenzo, G. R. H.; Simpas, J.
2017-12-01
The Global Precipitation Measurement (GPM) mission is a group of satellites that provides global observations of precipitation. Satellite-based observations act as an alternative if ground-based measurements are inadequate or unavailable. Data provided by satellites, however, must be validated to be reliable and used effectively. In this study, the Integrated Multisatellite Retrievals for GPM (IMERG) Final Run v3 half-hourly product is validated by comparison against interpolated ground measurements derived from sixteen ground stations in Metro Manila. The area considered in this study is the region 14.4° - 14.8° latitude and 120.9° - 121.2° longitude, subdivided into twelve 0.1° x 0.1° grid squares. Satellite data from June 1 - August 31, 2014, aggregated to 1-day temporal resolution, are used in this study. The satellite data are also compared directly to measurements from individual ground stations, to isolate the effect of the interpolation by contrast with the comparison against interpolated measurements. The comparisons are calculated by taking a fractional root-mean-square error (F-RMSE) between two datasets. The results show that interpolation improves errors compared to using raw station data except during days with very small amounts of rainfall. F-RMSE reaches extreme values of up to 654 without a rainfall threshold. A rainfall threshold is inferred to remove extreme error values and make the distribution of F-RMSE more consistent. Results show that the rainfall threshold varies slightly per month. The threshold for June is inferred to be 0.5 mm, reducing the maximum F-RMSE to 9.78, while the threshold for July and August is inferred to be 0.1 mm, reducing the maximum F-RMSE to 4.8 and 10.7, respectively. The maximum F-RMSE is reduced further as the threshold is increased. Maximum F-RMSE is reduced to 3.06 when a rainfall threshold of 10 mm is applied over the entire duration of JJA.
These results indicate that IMERG performs well for moderate to high intensity rainfall and that the interpolation remains effective only when rainfall exceeds a certain threshold value. Over Metro Manila, a rainfall threshold of 0.5 mm indicated better correspondence between ground-measured and satellite-measured rainfall.
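A minimal sketch of an F-RMSE computation with a rainfall threshold; normalizing the RMSE by the mean of the ground reference is an assumption for illustration (the record above does not reproduce the study's exact definition), and the threshold excludes low-rainfall days as described:

```python
import numpy as np

def f_rmse(sat, ground, threshold=0.0):
    """Fractional RMSE between satellite and ground rainfall series:
    RMSE normalized by the mean of the ground reference (assumed
    definition). Days with ground rainfall below `threshold` (mm) are
    excluded, mirroring the rainfall threshold used to suppress the
    extreme error values seen on near-zero rainfall days."""
    sat, ground = np.asarray(sat, float), np.asarray(ground, float)
    mask = ground >= threshold
    sat, ground = sat[mask], ground[mask]
    rmse = np.sqrt(np.mean((sat - ground) ** 2))
    return rmse / ground.mean()

ground = np.array([0.0, 1.0, 2.0, 4.0])  # daily rainfall, mm
print(f_rmse(ground, ground, threshold=0.5))  # identical series -> 0.0
```

Dividing by a near-zero mean is what drives F-RMSE to extreme values without a threshold, which motivates the per-month thresholds inferred above.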
Velikina, Julia V; Samsonov, Alexey A
2015-11-01
To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models preestimated from training data. We introduce the model consistency condition (MOCCO) technique, which utilizes temporal models to regularize reconstruction without constraining the solution to be low-rank, as is performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Our method was compared with a standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE-MRA) and cardiac CINE imaging. We studied the sensitivity of all methods to rank reduction and temporal subspace modeling errors. MOCCO demonstrated reduced sensitivity to modeling errors compared with the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE-MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. © 2014 Wiley Periodicals, Inc.
Velikina, Julia V.; Samsonov, Alexey A.
2014-01-01
Purpose To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models pre-estimated from training data. Theory We introduce the MOdel Consistency COndition (MOCCO) technique that utilizes temporal models to regularize the reconstruction without constraining the solution to be low-rank as performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Methods Our method was compared to standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE MRA) and cardiac CINE imaging. We studied sensitivity of all methods to rank-reduction and temporal subspace modeling errors. Results MOCCO demonstrated reduced sensitivity to modeling errors compared to the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. Conclusions MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. PMID:25399724
Improving the accuracy of livestock distribution estimates through spatial interpolation.
Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy
2012-11-01
Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels.
However, when this step is preceded by a spatial interpolation to fill in missing values in non-sampled areas, accuracy improves remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P < 0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level). Whether the same observations apply at a lower spatial scale should be further investigated.
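The error-reduction effect of aggregation described above can be illustrated with a minimal simulation. Everything here is synthetic and hypothetical (district/parish counts, the 30% error level, the gamma population model are all assumptions, not the study's data); the point is only that summing noisy parish estimates into districts averages out under- and over-estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setting: 100 districts, each with 20 parishes.
n_districts, parishes_per = 100, 20
true_counts = rng.gamma(shape=2.0, scale=500.0,
                        size=(n_districts, parishes_per))

# Noisy survey estimates at parish level (~30% multiplicative error).
estimates = true_counts * rng.lognormal(mean=0.0, sigma=0.3,
                                        size=true_counts.shape)

def median_relative_error(est, true):
    return float(np.median(np.abs(est - true) / true))

parish_err = median_relative_error(estimates, true_counts)
district_err = median_relative_error(estimates.sum(axis=1),
                                     true_counts.sum(axis=1))

# Aggregation averages out under- and over-estimates, so the
# district-level error is smaller than the parish-level error.
assert district_err < parish_err
```

With independent parish errors, the district-level relative error shrinks roughly with the square root of the number of parishes aggregated, which is the mechanism behind the significance results quoted in the abstract.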
From GCM grid cell to agricultural plot: scale issues affecting modelling of climate impact
Baron, Christian; Sultan, Benjamin; Balme, Maud; Sarr, Benoit; Traore, Seydou; Lebel, Thierry; Janicot, Serge; Dingkuhn, Michael
2005-01-01
General circulation models (GCM) are increasingly capable of making relevant predictions of seasonal and long-term climate variability, thus improving prospects of predicting impact on crop yields. This is particularly important for semi-arid West Africa where climate variability and drought threaten food security. Translating GCM outputs into attainable crop yields is difficult because GCM grid boxes are of larger scale than the processes governing yield, involving partitioning of rain among runoff, evaporation, transpiration, drainage and storage at plot scale. This study analyses the bias introduced to crop simulation when climatic data is aggregated spatially or in time, resulting in loss of relevant variation. A detailed case study was conducted using historical weather data for Senegal, applied to the crop model SARRA-H (version for millet). The study was then extended to a 10°N–17° N climatic gradient and a 31 year climate sequence to evaluate yield sensitivity to the variability of solar radiation and rainfall. Finally, a down-scaling model called LGO (Lebel–Guillot–Onibon), generating local rain patterns from grid cell means, was used to restore the variability lost by aggregation. Results indicate that forcing the crop model with spatially aggregated rainfall causes yield overestimations of 10–50% in dry latitudes, but nearly none in humid zones, due to a biased fraction of rainfall available for crop transpiration. Aggregation of solar radiation data caused significant bias in wetter zones where radiation was limiting yield. Where climatic gradients are steep, these two situations can occur within the same GCM grid cell. Disaggregation of grid cell means into a pattern of virtual synoptic stations having high-resolution rainfall distribution removed much of the bias caused by aggregation and gave realistic simulations of yield. 
It is concluded that coupling of GCM outputs with plot level crop models can cause large systematic errors due to scale incompatibility. These errors can be avoided by transforming GCM outputs, especially rainfall, to simulate the variability found at plot level. PMID:16433096
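The dry-zone yield overestimation from rainfall aggregation is a Jensen's-inequality effect: the plot-level yield response to water is concave, so feeding the grid-cell mean rainfall into it overstates yield. A minimal sketch with a toy (assumed, not SARRA-H) water-limited response:

```python
import numpy as np

rng = np.random.default_rng(0)

def yield_fraction(rain_mm, demand_mm=400.0):
    """Toy crop response: yield rises with rainfall until crop water
    demand is met, then saturates -- concave, like real yield curves.
    The 400 mm demand is an illustrative assumption."""
    return np.minimum(rain_mm / demand_mm, 1.0)

# Hypothetical dry-latitude grid cell with patchy convective rainfall.
plot_rain = rng.exponential(scale=300.0, size=10_000)

yield_plot_scale = yield_fraction(plot_rain).mean()   # plot-level forcing
yield_aggregated = yield_fraction(plot_rain.mean())   # grid-cell mean forcing

# Applying the concave response to the spatial mean overestimates
# yield (Jensen's inequality) -- the dry-zone bias the study reports.
assert yield_aggregated > yield_plot_scale
```

In a humid zone, where most plots sit on the saturated part of the curve, the two numbers converge, matching the near-zero aggregation bias found there.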
Brébion, G; Ohlsen, R I; Bressan, R A; David, A S
2012-12-01
Previous research has shown associations between source memory errors and hallucinations in patients with schizophrenia. We bring together here findings from a broad memory investigation to specify better the type of source memory failure that is associated with auditory and visual hallucinations. Forty-one patients with schizophrenia and 43 healthy participants underwent a memory task involving recall and recognition of lists of words, recognition of pictures, memory for temporal and spatial context of presentation of the stimuli, and remembering whether target items were presented as words or pictures. False recognition of words and pictures was associated with hallucination scores. The extra-list intrusions in free recall were associated with verbal hallucinations whereas the intra-list intrusions were associated with a global hallucination score. Errors in discriminating the temporal context of word presentation and the spatial context of picture presentation were associated with auditory hallucinations. The tendency to remember verbal labels of items as pictures of these items was associated with visual hallucinations. Several memory errors were also inversely associated with affective flattening and anhedonia. Verbal and visual hallucinations are associated with confusion between internal verbal thoughts or internal visual images and perception. In addition, auditory hallucinations are associated with failure to process or remember the context of presentation of the events. Certain negative symptoms have an opposite effect on memory errors.
Possin, Katherine L; Chester, Serana K; Laluz, Victor; Bostrom, Alan; Rosen, Howard J; Miller, Bruce L; Kramer, Joel H
2012-09-01
On tests of design fluency, an examinee draws as many different designs as possible in a specified time limit while avoiding repetition. The neuroanatomical substrates and diagnostic group differences of design fluency repetition errors and total correct scores were examined in 110 individuals diagnosed with dementia, 53 with mild cognitive impairment (MCI), and 37 neurologically healthy controls. The errors correlated significantly with volumes in the right and left orbitofrontal cortex (OFC), the right and left superior frontal gyrus, the right inferior frontal gyrus, and the right striatum, but did not correlate with volumes in any parietal or temporal lobe regions. Regression analyses indicated that the lateral OFC may be particularly crucial for preventing these errors, even after excluding patients with behavioral variant frontotemporal dementia (bvFTD) from the analysis. Total correct correlated more diffusely with volumes in the right and left frontal and parietal cortex, the right temporal cortex, and the right striatum and thalamus. Patients diagnosed with bvFTD made significantly more repetition errors than patients diagnosed with MCI, Alzheimer's disease, semantic dementia, progressive supranuclear palsy, or corticobasal syndrome. In contrast, total correct design scores did not differentiate the dementia patients. These results highlight the frontal-anatomic specificity of design fluency repetitions. In addition, the results indicate that the propensity to make these errors supports the diagnosis of bvFTD. (JINS, 2012, 18, 1-11).
NASA Astrophysics Data System (ADS)
Wang, C.; Platnick, S. E.; Meyer, K.; Zhang, Z.
2014-12-01
We developed an optimal estimation (OE)-based method using infrared (IR) observations to retrieve ice cloud optical thickness (COT), cloud effective radius (CER), and cloud top height (CTH) simultaneously. The OE-based retrieval is coupled with a fast IR radiative transfer model (RTM) that simulates observations of different sensors, and corresponding Jacobians in cloudy atmospheres. Ice cloud optical properties are calculated using the MODIS Collection 6 (C6) ice crystal habit (severely roughened hexagonal column aggregates). The OE-based method can be applied to various IR space-borne and airborne sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and the enhanced MODIS Airborne Simulator (eMAS), by optimally selecting IR bands with high information content. Four major error sources (i.e., the measurement error, fast RTM error, model input error, and pre-assumed ice crystal habit error) are taken into account in our OE retrieval method. We show that measurement error and fast RTM error have little impact on cloud retrievals, whereas errors from the model input and pre-assumed ice crystal habit significantly increase retrieval uncertainties when the cloud is optically thin. Comparisons between the OE-retrieved ice cloud properties and other operational cloud products (e.g., the MODIS C6 and CALIOP cloud products) are shown.
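The optimal-estimation machinery can be sketched with a toy linear forward model. All numbers below (the Jacobian, covariances, prior) are illustrative assumptions standing in for the paper's IR radiative transfer model; for a linear model the OE cost function has the closed-form minimiser shown.

```python
import numpy as np

# Toy linear forward model y = K x + noise (three channels, three
# state variables loosely playing the roles of COT, CER, CTH).
K = np.array([[1.0, 0.4, 0.1],
              [0.2, 1.0, 0.3],
              [0.1, 0.5, 1.0]])          # Jacobian (fixed, linear case)
x_true = np.array([2.0, 0.5, 8.0])       # true state (toy units)
S_eps = np.diag([0.05, 0.05, 0.05])      # measurement + fast-RTM error cov.
S_a = np.diag([4.0, 4.0, 4.0])           # prior covariance
x_a = np.zeros(3)                        # prior state

rng = np.random.default_rng(1)
y = K @ x_true + rng.multivariate_normal(np.zeros(3), S_eps)

# Closed-form OE solution for a linear model:
#   x_hat = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - K x_a)
Se_inv, Sa_inv = np.linalg.inv(S_eps), np.linalg.inv(S_a)
S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)   # retrieval covariance
x_hat = x_a + S_hat @ K.T @ Se_inv @ (y - K @ x_a)

# S_hat propagates the assumed error sources into retrieval
# uncertainty, analogous to the paper's error budget.
assert np.allclose(x_hat, x_true, atol=1.0)
```

In the real retrieval the forward model is nonlinear, so this update is iterated (Gauss-Newton style) with the Jacobian re-evaluated at each step, and additional error terms (model input, ice habit) are folded into the effective measurement covariance.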
A multistate dynamic site occupancy model for spatially aggregated sessile communities
Fukaya, Keiichi; Royle, J. Andrew; Okuda, Takehiro; Nakaoka, Masahiro; Noda, Takashi
2017-01-01
Estimation of transition probabilities of sessile communities seems easy in principle but may still be difficult in practice because resampling error (i.e. a failure to resample exactly the same location at fixed points) may cause significant estimation bias. Previous studies have developed novel analytical methods to correct for this estimation bias. However, they did not consider the local structure of community composition induced by the aggregated distribution of organisms that is typically observed in sessile assemblages and is very likely to affect observations. We developed a multistate dynamic site occupancy model to estimate transition probabilities that accounts for resampling errors associated with local community structure. The model applies a nonparametric multivariate kernel smoothing methodology to the latent occupancy component to estimate the local state composition near each observation point, which is assumed to determine the probability distribution of data conditional on the occurrence of resampling error. By using computer simulations, we confirmed that an observation process that depends on local community structure may bias inferences about transition probabilities. By applying the proposed model to a real data set of intertidal sessile communities, we also showed that estimates of transition probabilities and of the properties of community dynamics may differ considerably when spatial dependence is taken into account. Results suggest the importance of accounting for resampling error and local community structure for developing management plans that are based on Markovian models. Our approach provides a solution to this problem that is applicable to broad sessile communities. It can even accommodate an anisotropic spatial correlation of species composition, and may also serve as a basis for inferring complex nonlinear ecological dynamics.
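The kernel-smoothed "local state composition" idea can be sketched as follows. The data, state labels, and Gaussian kernel bandwidth below are all illustrative assumptions; the point is that a resampling error near a point is modelled as drawing from the distance-weighted mix of states around it.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical survey points with 3 community states (0 = bare rock,
# 1 = barnacle, 2 = mussel), spatially aggregated by construction.
coords = rng.uniform(0, 10, size=(200, 2))
states = (coords[:, 0] > 5).astype(int) + (coords[:, 1] > 5).astype(int)

def local_composition(point, coords, states, n_states=3, bandwidth=1.0):
    """Gaussian-kernel estimate of the state composition near `point`,
    in the spirit of the model's smoothed latent occupancy field."""
    d2 = np.sum((coords - point) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    comp = np.array([w[states == s].sum() for s in range(n_states)])
    return comp / comp.sum()

# Near (1, 1) almost all neighbours are state 0, so a resampling
# error at that point would most likely still record state 0.
comp = local_composition(np.array([1.0, 1.0]), coords, states)
assert comp.argmax() == 0
```

Conditioning the observation model on this local mix, rather than on the community-wide average, is what lets the occupancy model correct the bias that aggregation would otherwise induce.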
Phase stabilization of multidimensional amplification architectures for ultrashort pulses
NASA Astrophysics Data System (ADS)
Müller, M.; Kienel, M.; Klenke, A.; Eidam, T.; Limpert, J.; Tünnermann, A.
2015-03-01
The active phase stabilization of spatially and temporally combined ultrashort pulses is investigated theoretically and experimentally. In particular, for a combining scheme applying 2 amplifier channels and 4 divided-pulse replicas, a bistable behavior is observed. The reason is the mutual influence of the optical error signals, which is intrinsic to temporal polarization beam combining. A successful mitigation strategy is proposed and analyzed theoretically and experimentally.
Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding
Gardner, Brian; Grüning, André
2016-01-01
Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, rules that have a theoretical basis and yet can be considered biologically relevant are still lacking. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one relying on an instantaneous error signal to modify synaptic weights in a network (INST rule), and the other relying on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule’s error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, and most notably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism. PMID:27532262
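The INST/FILT distinction can be sketched numerically: the instantaneous error is nonzero only at spike times, while convolving it with a causal exponential kernel spreads the error over the surrounding interval. The spike times, time constant, and kernel normalisation below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Spike trains as 0/1 vectors on a 1 ms grid.
dt, tau = 1.0, 10.0            # time step (ms), filter time constant (ms)
T = 100
target = np.zeros(T); target[[20, 60]] = 1.0   # desired spike times
actual = np.zeros(T); actual[[22, 70]] = 1.0   # actual output spikes

inst_error = target - actual                   # INST: instantaneous error

# FILT: low-pass the error with a causal exponential kernel, giving a
# smoother signal to drive synaptic weight updates.
kernel = np.exp(-np.arange(0, 5 * tau, dt) / tau)
filt_error = np.convolve(inst_error, kernel)[:T] * (dt / tau)

# The filtered error is nonzero *between* spike times, supporting the
# smoother convergence attributed to the FILT rule, whereas the
# instantaneous error is zero almost everywhere.
assert np.count_nonzero(filt_error) > np.count_nonzero(inst_error)
```

A weight update proportional to the filtered error times the presynaptic activity trace then nudges output spikes toward the target times on every step, rather than only at the isolated instants where spikes occur.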
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yueqi; Lava, Pascal; Reu, Phillip; ...
2015-12-23
This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D digital image correlation. A generalized solution to estimate the random error of displacement measurements is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of the displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.
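The dependence on image noise and on the summed intensity gradient can be sketched with the widely used simplified DIC error predictor (a reduced form; the paper's generalized solution additionally includes the subpixel displacement and interpolation-scheme terms). The synthetic subset and noise level below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simplified subset-DIC random-error predictor for the x-displacement:
#   sigma_u ~ sqrt(2) * sigma_noise / sqrt(SSSIG),
# where SSSIG is the sum of squared intensity gradients in the subset.
subset = rng.uniform(0, 255, size=(21, 21))   # synthetic speckle subset
sigma_noise = 2.0                             # grey-level noise std

grad_x = np.gradient(subset, axis=1)          # intensity gradient in x
sssig = np.sum(grad_x ** 2)
sigma_u = np.sqrt(2.0) * sigma_noise / np.sqrt(sssig)

# More texture (larger SSSIG) or less image noise gives a smaller
# random displacement error, consistent with the solution's structure.
assert sigma_u > 0
```

For a well-textured 21x21 subset this predicts sub-hundredth-of-a-pixel random error, which is the regime the virtual DIC validation tests operate in.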
ERIC Educational Resources Information Center
Buchsbaum, Bradley R.; Baldo, Juliana; Okada, Kayoko; Berman, Karen F.; Dronkers, Nina; D'Esposito, Mark; Hickok, Gregory
2011-01-01
Conduction aphasia is a language disorder characterized by frequent speech errors, impaired verbatim repetition, a deficit in phonological short-term memory, and naming difficulties in the presence of otherwise fluent and grammatical speech output. While traditional models of conduction aphasia have typically implicated white matter pathways,…
Aggregate and Individual Replication Probability within an Explicit Model of the Research Process
ERIC Educational Resources Information Center
Miller, Jeff; Schwarz, Wolf
2011-01-01
We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…
18F-AV-1451 tau PET imaging correlates strongly with tau neuropathology in MAPT mutation carriers
Puschmann, Andreas; Schöll, Michael; Ohlsson, Tomas; van Swieten, John; Honer, Michael; Englund, Elisabet
2016-01-01
Tau positron emission tomography ligands provide the novel possibility to image tau pathology in vivo. However, little is known about how in vivo brain uptake of tau positron emission tomography ligands relates to tau aggregates observed post-mortem. We performed tau positron emission tomography imaging with 18F-AV-1451 in three patients harbouring a p.R406W mutation in the MAPT gene, encoding tau. This mutation results in 3- and 4-repeat tau aggregates similar to those in Alzheimer’s disease, and many of the mutation carriers initially suffer from memory impairment and temporal lobe atrophy. Two patients with short disease duration and isolated memory impairment exhibited 18F-AV-1451 uptake mainly in the hippocampus and adjacent temporal lobe regions, correlating with glucose hypometabolism in corresponding regions. One patient died after 26 years of disease duration with dementia and behavioural deficits. Pre-mortem, there was 18F-AV-1451 uptake in the temporal and frontal lobes, as well as in the basal ganglia, which strongly correlated with the regional extent and amount of tau pathology in post-mortem brain sections. Amyloid-β (18F-flutemetamol) positron emission tomography scans were negative in all cases, as were stainings of brain sections for amyloid. This provides strong evidence that 18F-AV-1451 positron emission tomography can be used to accurately quantify in vivo the regional distribution of hyperphosphorylated tau protein. PMID:27357347
Kapfer, Paul M.; Streby, Henry M.; Gurung, B.; Simcharoen, A.; McDougal, C.C.; Smith, J.L.D.
2011-01-01
Attempts to conserve declining tiger Panthera tigris populations and distributions have experienced limited success. The poaching of tiger prey is a key threat to tiger persistence; a clear understanding of tiger diet is a prerequisite to conserve dwindling populations. We used unpublished data on tiger diet in combination with two previously published studies to examine fine-scale spatio-temporal changes in tiger diet relative to prey abundance in Chitwan National Park, Nepal, and aggregated data from the three studies to examine the effect that study duration and the size of the study area have on estimates of tiger diet. Our results correspond with those of previous studies: in all three studies, tiger diet was dominated by members of Cervidae; small to medium-sized prey was important in one study. Tiger diet was unrelated to prey abundance, and the aggregation of studies indicates that increasing study duration and study area size both result in increased dietary diversity in terms of prey categories consumed, and increasing study duration changed which prey species contributed most to tiger diet. Based on our results, we suggest that managers focus their efforts on minimizing the poaching of all tiger prey, and that future studies of tiger diet be of long duration and large spatial extent to improve our understanding of spatio-temporal variation in estimates of tiger diet. © 2011 Wildlife Biology, NKV.
Void Growth and Coalescence Simulations
2013-08-01
distortion and damage, minimum time step, and appropriate material model parameters. Further, a temporal and spatial convergence study was used to estimate errors; thus, this study helps to provide guidelines for modeling of materials with voids. Finally, we use a Gurson model with Johnson-Cook…
NASA Astrophysics Data System (ADS)
Ndehedehe, Christopher E.; Agutu, Nathan O.; Okwuashi, Onuwa; Ferreira, Vagner G.
2016-09-01
Owing to insufficient published ground observations, Lake Chad has recently been perceived to be completely desiccated and almost extinct. Given the high spatial variability of rainfall in the region, and the fact that extreme climatic conditions (for example, droughts) could be intensifying in the Lake Chad basin (LCB) due to human activities, a spatio-temporal approach to drought analysis becomes essential. This study employed independent component analysis (ICA), a method based on fourth-order cumulant statistics, to decompose standardised precipitation index (SPI), standardised soil moisture index (SSI), and terrestrial water storage (TWS) derived from the Gravity Recovery and Climate Experiment (GRACE) into spatial and temporal patterns over the LCB. In addition, this study uses satellite altimetry data to estimate variations in the Lake Chad water levels, and further employs relevant climate teleconnection indices (El-Niño Southern Oscillation-ENSO, Atlantic Multi-decadal Oscillation-AMO, and Atlantic Meridional Mode-AMM) to examine their links to the observed drought temporal patterns over the basin. From the spatio-temporal drought analysis, temporal evolutions of SPI at 12 month aggregation show relatively wet conditions in the last two decades (although with marked alterations) with the 2012-2014 period being the wettest. In addition to the improved rainfall conditions during this period, there was a statistically significant increase of 0.04 m/yr in altimetry water levels observed over Lake Chad between 2008 and 2014, which confirms a shift in the hydrological conditions of the basin. Observed trend in TWS changes during the 2002-2014 period shows a statistically insignificant increase of 3.0 mm/yr at the centre of the basin, coinciding with soil moisture deficit indicated by the temporal evolutions of SSI at all monthly accumulations during the 2002-2003 and 2009-2012 periods.
Further, SPI at 3 and 6 month scales indicated fluctuating drought conditions at the extreme south of the basin, coinciding with a statistically insignificant decline in TWS of about 4.5 mm/yr at the southern catchment of the basin. Finally, correlation analyses indicate that ENSO, AMO, and AMM are associated with extreme rainfall conditions in the basin, with AMO showing the strongest association (statistically significant correlation of 0.55) with SPI 12 month aggregation. Therefore, this study provides a framework that will support drought monitoring in the LCB.
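The multi-scale SPI used above can be sketched with a simplified computation. Operational SPI fits a gamma distribution to the aggregated precipitation before converting to standard-normal quantiles; the sketch below substitutes a plain z-score and synthetic rainfall, so the series, wet-period timing, and parameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def spi_like(monthly_precip, scale=12):
    """Simplified SPI: standardise rolling sums at the chosen monthly
    aggregation scale (a z-score stands in for the gamma fit)."""
    n = len(monthly_precip) - scale + 1
    sums = np.array([monthly_precip[i:i + scale].sum() for i in range(n)])
    return (sums - sums.mean()) / sums.std()

# Hypothetical 20-year monthly rainfall series with a wet final 3 years.
precip = rng.gamma(2.0, 30.0, size=240)
precip[-36:] *= 1.5

spi12 = spi_like(precip, scale=12)
# Positive SPI-12 at the end of the record flags the wetter conditions,
# analogous to the 2012-2014 wet period reported for the LCB.
assert spi12[-12:].mean() > 0
```

Shorter scales (SPI-3, SPI-6) use the same computation with smaller windows, which is why they react to short fluctuations that the 12-month aggregation smooths out.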
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, X; Li, Z; Zheng, D
Purpose: In the context of evaluating dosimetric impacts of a variety of uncertainties involved in HDR Tandem-and-Ovoid treatment, to study the correlations between conventional point doses and 3D volumetric doses. Methods: For 5 cervical cancer patients treated with HDR T&O, 150 plans were retrospectively created to study dosimetric impacts of the following uncertainties: (1) inter-fractional applicator displacement between two treatment fractions within a single insertion by applying Fraction#1 plan to Fraction#2 CT; (2) positional dwell error simulated from −5mm to 5mm in 1mm steps; (3) simulated temporal dwell error of 0.05s, 0.1s, 0.5s, and 1s. The original plans were based on point dose prescription, from which the volume covered by the prescription dose was generated as the pseudo target volume to study the 3D target dose effect. OARs were contoured. The point and volumetric dose errors were calculated by taking the differences between original and simulated plans. The correlations between the point and volumetric dose errors were analyzed. Results: For the most clinically relevant positional dwell uncertainty of 1mm, temporal uncertainty of 0.05s, and inter-fractional applicator displacement within the same insertion, the mean target D90 and V100 deviation were within 1%. Among these uncertainties, the applicator displacement showed the largest potential target coverage impact (2.6% on D90) as well as the OAR dose impact (2.5% and 3.4% on bladder D2cc and rectum D2cc). The Spearman correlation analysis shows a correlation coefficient of 0.43 with a p-value of 0.11 between target D90 coverage and H point dose. Conclusion: With the most clinically relevant positional and temporal dwell uncertainties and patient inter-fractional applicator displacement within the same insertion, the dose error is within the clinically acceptable range.
The lack of correlation between H point and 3D volumetric dose errors is a motivator for the use of 3D treatment planning in cervical HDR brachytherapy.
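The rank-correlation check between point-dose and volumetric-dose errors can be sketched with a small numpy-only Spearman computation. The data below are synthetic stand-ins (the coupling strength and sample size are assumptions, not the study's 150 plans).

```python
import numpy as np

rng = np.random.default_rng(11)

def spearman_rho(x, y):
    """Spearman rank correlation as the Pearson correlation of ranks
    (no tie handling needed for continuous synthetic data)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical point-dose vs volumetric-dose errors, weakly coupled.
h_point_err = rng.normal(0.0, 1.0, 200)
d90_err = 0.4 * h_point_err + rng.normal(0.0, 1.0, 200)

rho = spearman_rho(h_point_err, d90_err)
# A weak-to-moderate rho (the study found 0.43 with p = 0.11) argues
# against using the H point dose as a surrogate for 3D coverage.
assert -1.0 <= rho <= 1.0
```

The weak, statistically non-significant association is precisely why the conclusion recommends full 3D treatment planning over point-dose surrogates.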
Selecting a Separable Parametric Spatiotemporal Covariance Structure for Longitudinal Imaging Data
George, Brandon; Aban, Inmaculada
2014-01-01
Longitudinal imaging studies allow great insight into how the structure and function of a subject’s internal anatomy change over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures, and the spatial from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on Type I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the Type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be done in practice, as well as how covariance structure choice can change inferences about fixed effects. PMID:25293361
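A separable spatiotemporal covariance is the Kronecker product of a spatial and a temporal correlation matrix. The sketch below builds one from an exponential spatial model and an AR(1) temporal model (two of the families named in the abstract); the site locations, range, and autocorrelation values are illustrative assumptions.

```python
import numpy as np

def exponential_corr(d, rho):
    """Exponential spatial correlation: exp(-distance / range)."""
    return np.exp(-d / rho)

def ar1_corr(n_times, phi):
    """AR(1) temporal correlation: phi^|lag|."""
    lags = np.abs(np.subtract.outer(np.arange(n_times), np.arange(n_times)))
    return phi ** lags

sites = np.array([0.0, 1.0, 2.5])               # spatial locations (toy)
D = np.abs(np.subtract.outer(sites, sites))     # pairwise distances
R_space = exponential_corr(D, rho=2.0)
R_time = ar1_corr(4, phi=0.6)

sigma2 = 1.5
V = sigma2 * np.kron(R_time, R_space)           # separable structure

# Separability keeps the full 12x12 covariance positive definite and
# lets it be inverted via the two small factors, which is what makes
# these models tractable for imaging data.
assert np.all(np.linalg.eigvalsh(V) > 0)
```

Model selection then amounts to fitting each candidate (spatial, temporal) pair by maximum likelihood and comparing information criteria such as AIC or BIC, as the simulation study does.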
NASA Astrophysics Data System (ADS)
Bernard, A. M.; Feldheim, K. A.; Nemeth, R.; Kadison, E.; Blondeau, J.; Semmens, B. X.; Shivji, M. S.
2016-03-01
The Nassau grouper (Epinephelus striatus) has sustained large declines across its distribution, including extirpation of many of its fish spawning aggregations (FSAs). Within US Virgin Islands (USVI) waters, Nassau grouper FSAs were overfished until their disappearance in the 1970s and 1980s. In the early 2000s, however, Nassau grouper were found gathering at Grammanik Bank, USVI, a mesophotic coral reef adjacent to one of the extinct aggregation sites, and regulatory protective measures were implemented to protect this fledgling FSA. The population genetic dynamics of this rapid FSA deterioration followed by protection-facilitated, incipient recovery are unknown. We addressed two objectives: (1) we explored which factors (i.e., local vs. external recruitment) might be key in shaping the USVI FSA recovery; and (2) we examined the consequences of severe past overfishing on this FSA's current genetic status. We genotyped individuals (15 microsatellites) from the USVI FSA comprising three successive spawning years (2008-2010), as well as individuals from a much larger, presumably less impacted, Nassau grouper FSA in the Cayman Islands, to assess their comparative population dynamics. No population structure was detected between the USVI and Cayman FSAs (FST = -0.0004); however, a temporally waning, genetic bottleneck signal was detected in the USVI FSA. Parentage analysis failed to identify any parent-offspring matches between USVI FSA adults and nearby juveniles, and relatedness analysis showed low levels of genetic relatedness among USVI FSA individuals. Genetic diversity across USVI FSA temporal collections was relatively high, and no marked differences were found between the USVI and Cayman FSAs. These collective results suggest that external recruitment is an important driver of the USVI FSA recovery. Furthermore, despite an apparent genetic bottleneck, the genetic diversity of USVI Nassau grouper has not been severely compromised.
Our findings also provide a baseline for future genetic monitoring of the nascent USVI aggregation.
IPUMS: Detailed global data on population characteristics
NASA Astrophysics Data System (ADS)
Kugler, T.
2017-12-01
Many new and exciting sources of data on human population distributions based on remote sensing, mobile technology, and other mechanisms are becoming available. These new data sources often provide fine scale spatial and/or temporal resolution. However, they typically focus on the location of population, with little or no information on population characteristics. The large and growing collection of data available through the IPUMS family of products complements datasets that provide spatial and temporal detail but little attribute detail by providing the full depth of characteristics covered by population censuses, including demographic, household structure, economic, employment, education, and housing characteristics. IPUMS International provides census microdata for 85 countries. Microdata provide the responses to every census question for each individual in a sample of households. Microdata identify the sub-national geographic unit in which a household is located, but for confidentiality reasons, identified units must include a minimum population, typically 20,000 people. Small-area aggregate data often describe much smaller geographic units, enabling study of detailed spatial patterns of population characteristics. However the structure of aggregate data tables is highly heterogeneous across countries, census years, and even topics within a given census, making these data difficult to work with in any systematic way. A recently funded project will assemble small-area aggregate population and agricultural census data published by national statistical offices. Through preliminary work collecting and cataloging over 10,000 tables, we have identified a small number of structural families that can be used to organize the many different structures. These structural families will form the basis for software tools to document and standardize the tables for ingest into a common database. 
Both the microdata and aggregate data are made available through IPUMS Terra, facilitating integration with land use, land cover, climate, and other environmental data. These data can be used to address pressing global challenges, such as food and water security, development and deforestation, and environmentally-influenced migration.
NASA Astrophysics Data System (ADS)
Mano, T.; Guo, X.; Fujii, N.; Yoshie, N.; Takeoka, H.
2016-02-01
Jellyfishes often form dense aggregations that cause a variety of social problems, such as clogging the seawater intakes of power plants and breaking fisheries nets. Understanding of jellyfish aggregation remains insufficient owing to the difficulty of observing this phenomenon. In this study, high-resolution observations using a scientific echo sounder and an underwater camera were carried out to reveal the fine structure of moon jellyfish distribution in 3D space, as well as its abundance and temporal variation. In addition, water temperature, salinity and current speed were measured to infer formation mechanisms of jellyfish aggregation. The field observations targeting moon jellyfish were carried out in August 2013 and August 2014 in a semi-enclosed bay in Japan. The ship equipped with the scientific echo sounder cruised over the entire bay to reveal the distribution and form of the moon jellyfish aggregations. In August 2013, the jellyfish aggregations presented a high density (maximum: 70 ind./m3) and their outline showed a spherical or zonal shape with a hollow structure. In August 2014, the jellyfish aggregations presented a low density (maximum: 20 ind./m3) and the jellyfishes were distributed in a layer structure over a wide area. The depth of jellyfish aggregation was consistent with the thermocline. During three days of observations in 2014, the average population density of jellyfish fell to one-tenth, suggesting that jellyfish abundance in a bay may vary significantly on a short timescale of several days. Not only the active swimming of jellyfishes but also the ambient flow field associated with internal waves or Langmuir circulation may contribute to the jellyfish aggregations. To clarify the mechanisms for the formation of high-density patchy aggregations, we plan to perform more detailed observations and numerical simulations able to capture the fine structure of these physical processes.
Quantifying Errors in TRMM-Based Multi-Sensor QPE Products Over Land in Preparation for GPM
NASA Technical Reports Server (NTRS)
Peters-Lidard, Christa D.; Tian, Yudong
2011-01-01
Determining uncertainties in satellite-based multi-sensor quantitative precipitation estimates over land is of fundamental importance to both data producers and hydroclimatological applications. Evaluating TRMM-era products also lays the groundwork and sets the direction for algorithm and applications development for future missions, including GPM. QPE uncertainties result mostly from the interplay of systematic and random errors. In this work, we synthesize our recent results quantifying the error characteristics of satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL and GSMaP). For systematic errors, we devised an error decomposition scheme that separates errors in precipitation estimates into three independent components: hit bias, missed precipitation and false precipitation. This decomposition reveals hydroclimatologically relevant error features and provides a better link to the error sources than conventional analysis, in which these components tend to cancel one another when aggregated or averaged in space or time. For the random errors, we calculated the measurement spread from the ensemble of these six quasi-independent products, producing a global map of measurement uncertainties. The map yields a global view of the error characteristics and their regional and seasonal variations, reveals many undocumented error features over areas with no validation data available, and provides better guidance for global assimilation of satellite-based precipitation data. Insights gained from these results, and how they could help with GPM, are highlighted.
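The decomposition described above can be sketched in a few lines. This is a minimal illustration, not the authors' actual scheme: it assumes paired satellite and reference (e.g. gauge) rain totals and a simple rain/no-rain threshold, and splits the total bias into hit bias, missed precipitation, and false precipitation, which sum to the total by construction:

```python
def decompose_errors(sat, ref, thresh=0.0):
    """Split the total bias (sat - ref) of paired precipitation totals into
    hit bias (both detect rain), missed precipitation (reference rains but the
    satellite misses it; negative) and false precipitation (satellite rains
    but the reference does not; positive)."""
    hit = miss = false = 0.0
    for s, r in zip(sat, ref):
        if s > thresh and r > thresh:
            hit += s - r
        elif r > thresh:
            miss -= r
        elif s > thresh:
            false += s
    return hit, miss, false

sat = [2.0, 0.0, 1.5, 3.0]   # satellite estimates
ref = [1.0, 2.0, 0.0, 3.5]   # reference (e.g. gauge) totals
h, m, f = decompose_errors(sat, ref)
# the three components sum to the total bias by construction
assert abs((sum(sat) - sum(ref)) - (h + m + f)) < 1e-9
```

With these toy values the aggregated bias is exactly zero (h = 0.5, m = -2.0, f = 1.5 cancel), illustrating the abstract's point that the components are more informative than the conventional aggregated bias.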
Aguado-Giménez, Felipe; Eguía-Martínez, Sergio; Cerezo-Valverde, Jesús; García-García, Benjamín
2018-06-14
Ichthyophagous birds aggregate at cage fish farms attracted by caged and associated wild fish. Spatio-temporal variability of such birds was studied for a year through seasonal visual counts at eight farms in the western Mediterranean. Correlation with farm and location descriptors was assessed. Considerable spatio-temporal variability in fish-eating bird density and assemblage structure was observed among farms and seasons. Bird density increased from autumn to winter, with the great cormorant being the most abundant species, also accounting largely for differences among farms. Grey heron and little egret were also numerous at certain farms during the coldest seasons. Cattle egret was only observed at one farm. No shags were observed during winter. During spring and summer, bird density decreased markedly and only shags and little egrets were observed at only a few farms. Season and distance from farms to bird breeding/wintering grounds helped to explain some of the spatio-temporal variability. Copyright © 2018 Elsevier Ltd. All rights reserved.
Bambha, Ray P.; Michelsen, Hope A.
2015-07-03
We have used a Single-Particle Soot Photometer (SP2) to measure time-resolved laser-induced incandescence (LII) and laser scatter from combustion-generated mature soot with a fractal dimension of 1.88 extracted from a burner. We have also made measurements on restructured mature-soot particles with a fractal dimension of 2.3–2.4. We reproduced the LII and laser-scatter temporal profiles with an energy- and mass-balance model, which accounted for heating of particles passed through a CW-laser beam over laser–particle interaction times of ~10 μs. Furthermore, the results demonstrate a strong influence of aggregate size and morphology on LII and scattering signals. Conductive cooling competes with absorptive heating on these time scales; the effects are reduced with increasing aggregate size and fractal dimension. These effects can lead to a significant delay in the onset of the LII signal and may explain an apparent low bias in the SP2 measurements for small particle sizes, particularly for fresh, mature soot. The results also reveal significant perturbations to the measured scattering signal from LII interference and suggest rapid expansion of the aggregates during sublimation.
Decomposition of Sources of Errors in Seasonal Streamflow Forecasting over the U.S. Sunbelt
NASA Technical Reports Server (NTRS)
Mazrooei, Amirhossein; Sinah, Tusshar; Sankarasubramanian, A.; Kumar, Sujay V.; Peters-Lidard, Christa D.
2015-01-01
Seasonal streamflow forecasts, contingent on climate information, can be utilized to ensure water supply for multiple uses, including municipal demands, hydroelectric power generation, and planning of agricultural operations. However, uncertainties in streamflow forecasts pose significant challenges to their utilization in real-time operations. In this study, we systematically decompose the sources of error in developing seasonal streamflow forecasts from two Land Surface Models (LSMs), Noah3.2 and CLM2, which are forced with downscaled and disaggregated climate forecasts. In particular, the study quantifies the relative contributions of errors from the LSMs, the climate forecasts, and the downscaling/disaggregation techniques. For this purpose, three-month-ahead seasonal precipitation forecasts from the ECHAM4.5 general circulation model (GCM) were statistically downscaled from 2.8° to 1/8° spatial resolution using principal component regression (PCR) and then temporally disaggregated from monthly to daily time steps using a kernel-nearest-neighbor (K-NN) approach. For the other climatic forcings, excluding precipitation, we used the North American Land Data Assimilation System version 2 (NLDAS-2) hourly climatology over the years 1979 to 2010. The selected LSMs were then forced with the precipitation forecasts and the NLDAS-2 hourly climatology to develop retrospective seasonal streamflow forecasts over a period of 20 years (1991-2010). Finally, the performance of the LSMs in forecasting streamflow under different schemes was analyzed to quantify the relative contribution of each source of error. Our results indicate that the dominant source of error during the winter and fall seasons is the ECHAM4.5 precipitation forecasts, while the temporal disaggregation scheme contributes the most error during the summer season.
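The temporal disaggregation step can be illustrated with a deliberately simplified K = 1 sketch (the study's K-NN scheme draws on several weighted neighbors and richer predictors; the 4-day "months" below are invented for the example). A monthly total borrows the daily pattern of the most similar historical month and is rescaled to match:

```python
def knn_disaggregate(month_total, history):
    """Disaggregate a monthly total to daily values by borrowing the daily
    pattern of the most similar historical month (K = 1 here for brevity),
    rescaled so that the borrowed pattern sums to the forecast total."""
    neighbor = min(history, key=lambda days: abs(sum(days) - month_total))
    total = sum(neighbor)
    if total == 0:
        return [month_total / len(neighbor)] * len(neighbor)
    return [d * month_total / total for d in neighbor]

# toy 4-day "months" of historical daily precipitation
history = [[0.0, 5.0, 5.0, 0.0], [2.0, 2.0, 2.0, 2.0], [10.0, 0.0, 0.0, 0.0]]
daily = knn_disaggregate(8.5, history)   # nearest neighbor is the 8.0 month
```

The rescaling guarantees the daily values reproduce the forecast monthly total while inheriting a realistic intra-month pattern.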
Interactions of timing and prediction error learning.
Kirkpatrick, Kimberly
2014-01-01
Timing and prediction error learning have historically been treated as independent processes, but growing evidence indicates that they are not orthogonal. Timing emerges at the earliest time point when conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value, alter the timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Banerjee, Torsha
Unlike conventional networks, wireless sensor networks (WSNs) are limited in power, have much smaller memory buffers, and possess relatively slower processing speeds. These characteristics necessitate minimal transfer and storage of information in order to prolong the network lifetime. In this dissertation, we exploit the spatio-temporal nature of sensor data to approximate the current value of a sensor based on readings from the sensor itself and its neighbors. We propose a Tree-based polynomial REGression algorithm (TREG) that addresses the problem of data compression in wireless sensor networks. Instead of aggregated data, a polynomial function (P) is computed by the regression function, TREG. The coefficients of P are then passed to achieve the following goals: (i) the sink can obtain attribute values in regions devoid of sensor nodes, and (ii) readings over any portion of the region can be obtained at one time by querying the root of the tree. As the size of the data packet from each tree node to its parent remains constant, the proposed scheme scales very well with growing network density or increased coverage area. Since physical attributes exhibit gradual change over time, we propose an iterative scheme, UPDATE_COEFF, which obviates the need to perform the regression repeatedly by using approximations based on previous readings. Extensive simulations are performed on real-world data to demonstrate the effectiveness of our proposed aggregation algorithm, TREG. Results reveal that for a network density of 0.0025 nodes/m2, a complete binary tree of depth 4 keeps the absolute error below 6%. A data compression ratio of about 0.02 is achieved, almost independent of the tree depth. In addition, our proposed updating scheme makes the aggregation process faster while maintaining the desired error bounds.
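The core idea of passing regression coefficients instead of raw readings can be sketched as follows. This is a hedged illustration, not the TREG algorithm itself: it fits a first-order polynomial P(x, y) = c0 + c1·x + c2·y to a node's (x, y, value) readings by ordinary least squares, so only three coefficients travel up the tree and the sink can later evaluate P at locations devoid of sensors:

```python
def lstsq(A, b):
    """Solve min ||Ax - b|| via the normal equations (A^T A) x = A^T b,
    using Gauss-Jordan elimination (fine for the tiny systems used here)."""
    n, m = len(A[0]), len(A)
    M = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         + [sum(A[k][i] * b[k] for k in range(m))] for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))   # partial pivoting
        M[i], M[p] = M[p], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_plane(readings):
    """Fit P(x, y) = c0 + c1*x + c2*y to (x, y, value) readings; only these
    three coefficients travel up the tree, not the raw data."""
    A = [[1.0, x, y] for x, y, _ in readings]
    b = [v for _, _, v in readings]
    return lstsq(A, b)

# toy readings lying exactly on the plane v = 1 + 2x + 3y
pts = [(0.0, 0.0, 1.0), (1.0, 0.0, 3.0), (0.0, 1.0, 4.0), (1.0, 1.0, 6.0)]
c0, c1, c2 = fit_plane(pts)
```

Because the coefficient vector has fixed size regardless of how many readings a subtree aggregates, the per-link payload is constant, which is the property that makes the scheme scale with network density.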
We also propose a Polynomial-based scheme that addresses the problem of Event Region Detection (PERD) for WSNs. When a single event occurs, a child of the tree sends a Flagged Polynomial (FP) to its parent if the readings it approximates fall outside the data range defining the existing phenomenon. After the aggregation process is over, the root, holding the two polynomials P and FP, can be queried for FP (approximating the new event region) instead of flooding the whole network. For multiple such events, instead of computing a polynomial for each new event, areas with the same data range are combined by the corresponding tree nodes and the aggregated coefficients are passed on. Results reveal that a new event can be detected by PERD while the detection error remains constant and below a threshold of 10%. As node density increases, the accuracy and delay of event detection remain almost constant, making PERD highly scalable. Whenever an event occurs in a WSN, data is generated by nearby sensors, and relaying the data to the base station (BS) makes sensors closer to the BS run out of energy much faster than sensors in other parts of the network. This unequal distribution of residual energy causes sensors with lower remaining energy to die much earlier than others. We propose a scheme for enhancing network lifetime using mobile cluster heads (CHs) in a WSN. To keep the remaining energy more evenly distributed, some energy-rich nodes are designated as CHs, which move in a controlled manner towards sensors rich in energy and data. This eliminates the multihop transmission required of the static sensors and thus increases the overall lifetime of the WSN. We combine the ideas of clustering and mobile CHs to first form clusters of static sensor nodes; a collaborative strategy among the CHs further increases the lifetime of the network.
The time taken to transmit data to the BS is reduced further by making the CHs follow a connectivity strategy that always maintains a connected path to the BS. Spatial correlation of sensor data can be further exploited for dynamic channel selection in cellular communication. In such a scenario, wireless sensors can be deployed within a licensed band (each sensor tuned to a channel frequency at a particular time) to sense the interference power of the frequency band. In an ideal channel, the interference temperature (IT), which is directly proportional to the interference power, can be assumed to vary spatially with the frequency of the subchannel. We propose a scheme that fits the subchannel frequencies and corresponding ITs to a regression model for calculating the IT of a random subchannel, for further analysis of channel interference at the base station. Our scheme, based on the readings reported by sensors, helps in Dynamic Channel Selection (S-DCS) in the extended C-band for assignment to unlicensed secondary users. S-DCS proves economical in terms of energy consumption and achieves accuracy within an error bound of 6.8%. Moreover, users are assigned empty subchannels without actually probing them, incurring minimal delay in the process. The overall channel throughput is maximized along with fairness to individual users.
Video error concealment using block matching and frequency selective extrapolation algorithms
NASA Astrophysics Data System (ADS)
P. K., Rajani; Khaparde, Arti
2017-06-01
Error concealment (EC) is a decoder-side technique for hiding transmission errors by analyzing the spatial or temporal information in available video frames. Recovering distorted video is important because video is used in applications such as video telephony, video conferencing, TV, DVD, internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but they add delay and redundant data, so error concealment is often the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both methods are applied to frames with manually introduced errors. The parameters used for objective quality measurement were PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index). The original video frames along with the corrupted frames are processed with both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM than the Block Matching algorithm.
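PSNR, one of the two quality measures used above, is straightforward to compute; here is a minimal sketch for 8-bit frames flattened to lists of pixel values (SSIM is considerably more involved and omitted here):

```python
import math

def psnr(orig, recon, peak=255.0):
    """Peak Signal-to-Noise Ratio (dB) between an original frame and its
    error-concealed reconstruction, both flattened to lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    if mse == 0:
        return float("inf")          # identical frames
    return 10.0 * math.log10(peak * peak / mse)

orig  = [100, 120, 130, 140]
recon = [101, 119, 131, 139]         # concealed frame, off by 1 per pixel
quality = psnr(orig, recon)          # ~48 dB for this unit squared error
```

Higher PSNR means the concealed frame is closer to the original, which is how the two algorithms above are ranked.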
An Imperfect Dopaminergic Error Signal Can Drive Temporal-Difference Learning
Potjans, Wiebke; Diesmann, Markus; Morrison, Abigail
2011-01-01
An open problem in the field of computational neuroscience is how to link synaptic plasticity to system-level learning. A promising framework in this context is temporal-difference (TD) learning. Experimental evidence that supports the hypothesis that the mammalian brain performs temporal-difference learning includes the resemblance of the phasic activity of the midbrain dopaminergic neurons to the TD error and the discovery that cortico-striatal synaptic plasticity is modulated by dopamine. However, as the phasic dopaminergic signal does not reproduce all the properties of the theoretical TD error, it is unclear whether it is capable of driving behavior adaptation in complex tasks. Here, we present a spiking temporal-difference learning model based on the actor-critic architecture. The model dynamically generates a dopaminergic signal with realistic firing rates and exploits this signal to modulate the plasticity of synapses as a third factor. The predictions of our proposed plasticity dynamics are in good agreement with experimental results with respect to dopamine, pre- and post-synaptic activity. An analytical mapping from the parameters of our proposed plasticity dynamics to those of the classical discrete-time TD algorithm reveals that the biological constraints of the dopaminergic signal entail a modified TD algorithm with self-adapting learning parameters and an adapting offset. We show that the neuronal network is able to learn a task with sparse positive rewards as fast as the corresponding classical discrete-time TD algorithm. However, the performance of the neuronal network is impaired with respect to the traditional algorithm on a task with both positive and negative rewards and breaks down entirely on a task with purely negative rewards. Our model demonstrates that the asymmetry of a realistic dopaminergic signal enables TD learning when learning is driven by positive rewards but not when driven by negative rewards. PMID:21589888
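The classical discrete-time TD algorithm that the spiking model is mapped onto can be sketched in a few lines. The `rectify` flag below is a crude stand-in of this sketch, not the paper's model: it clips negative errors, loosely mimicking a dopaminergic signal whose low baseline firing rate limits its ability to encode negative TD errors:

```python
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9, rectify=False):
    """One step of tabular TD(0): delta = r + gamma*V[s'] - V[s], then
    V[s] += alpha*delta. rectify=True clips negative errors, a crude
    stand-in for an asymmetric dopaminergic error signal."""
    delta = r + gamma * V[s_next] - V[s]
    if rectify:
        delta = max(delta, 0.0)
    V[s] += alpha * delta
    return delta

V = {0: 0.0, 1: 0.0}
d_pos = td_update(V, 0, 1.0, 1)                  # positive error updates V[0]

V2 = {0: 0.5, 1: 0.0}
d_neg = td_update(V2, 0, 0.0, 1, rectify=True)   # negative error is clipped
```

The clipped variant still learns from unexpected rewards but cannot unlearn from omitted ones, echoing the paper's finding that the asymmetric signal supports learning driven by positive but not purely negative rewards.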
Toward a formal definition of water scarcity in natural human systems
W.K. Jaeger; A.J. Plantinga; H. Chang; K. Dello; G. Grant; D. Hulse; J.J. McDonnell; S. Lancaster; H. Moradkhani; A.T. Morzillo; P. Mote; A. Nolin; M. Santlemann; J. Wu
2013-01-01
Water scarcity may appear to be a simple concept, but it can be difficult to apply to complex natural-human systems. While aggregate scarcity indices are straightforward to compute, they do not adequately represent the spatial and temporal variations in water scarcity that arise from complex systems interactions. The uncertain effects of future climate change on water...
Temporal requirements of insulin/IGF-1 signaling for proteotoxicity protection.
Cohen, Ehud; Du, Deguo; Joyce, Derek; Kapernick, Erik A; Volovik, Yuli; Kelly, Jeffery W; Dillin, Andrew
2010-04-01
Toxic protein aggregation (proteotoxicity) is a unifying feature in the development of late-onset human neurodegenerative disorders. Reduction of insulin/IGF-1 signaling (IIS), a prominent lifespan, developmental and reproductive regulatory pathway, protects worms from proteotoxicity associated with the aggregation of the Alzheimer's disease-linked Abeta peptide. We utilized transgenic nematodes that express human Abeta and found that late life IIS reduction efficiently protects from Abeta toxicity without affecting development, reproduction or lifespan. To alleviate proteotoxic stress in the animal, the IIS requires heat shock factor (HSF)-1 to modulate a protein disaggregase, while DAF-16 regulates a presumptive active aggregase, raising the question of how these opposing activities could be co-regulated. One possibility is that HSF-1 and DAF-16 have distinct temporal requirements for protection from proteotoxicity. Using a conditional RNAi approach, we found an early requirement for HSF-1 that is distinct from the adult functions of DAF-16 for protection from proteotoxicity. Our data also indicate that late life IIS reduction can protect from proteotoxicity when it can no longer promote longevity, strengthening the prospect that IIS reduction might be a promising strategy for the treatment of neurodegenerative disorders caused by proteotoxicity.
Ren, Hao; Zhang, Yu; Guo, Sibei; ...
2017-10-31
The aggregation of amyloid beta (Aβ) peptides plays a crucial role in the pathology and etiology of Alzheimer's disease. Experimental evidence shows that copper ion is an aggregation-prone species with the ability to coordinately bind to Aβ and further induce the formation of neurotoxic Aβ oligomers. However, the detailed structures of Cu(II)–Aβ complexes have not been illustrated, and the kinetics and dynamics of the Cu(II) binding are not well understood. Two Cu(II)–Aβ complexes have been proposed to exist under physiological conditions, and another two might exist at higher pH values. By using ab initio simulations for the spontaneous resonance Raman and time domain stimulated resonance Raman spectroscopy signals, we obtained the characteristic Raman vibronic features of each complex. Finally, these signals contain rich structural information with high temporal resolution, enabling the characterization of transient states during the fast Cu–Aβ binding and interconversion processes.
A sound worth saving: acoustic characteristics of a massive fish spawning aggregation.
Erisman, Brad E; Rowell, Timothy J
2017-12-01
Group choruses of marine animals can produce extraordinarily loud sounds that markedly elevate levels of the ambient soundscape. We investigated sound production in the Gulf corvina ( Cynoscion othonopterus ), a soniferous marine fish with a unique reproductive behaviour threatened by overfishing, to compare with sounds produced by other marine animals. We coupled echosounder and hydrophone surveys to estimate the magnitude of the aggregation and sounds produced during spawning. We characterized individual calls and documented changes in the soundscape generated by the presence of as many as 1.5 million corvina within a spawning aggregation spanning distances up to 27 km. We show that calls by male corvina represent the loudest sounds recorded in a marine fish, and the spatio-temporal magnitude of their collective choruses are among the loudest animal sounds recorded in aquatic environments. While this wildlife spectacle is at great risk of disappearing due to overfishing, regional conservation efforts are focused on other endangered marine animals. © 2017 The Author(s).
Maufroy, Alexandra; Chassot, Emmanuel; Joo, Rocío; Kaplan, David Michael
2015-01-01
Since the 1990s, massive use of drifting Fish Aggregating Devices (dFADs) to aggregate tropical tunas has strongly modified global purse-seine fisheries. For the first time, a large data set of GPS positions from buoys deployed by French purse-seiners to monitor dFADs is analysed to provide information on spatio-temporal patterns of dFAD use in the Atlantic and Indian Oceans during 2007-2011. First, we select among four classification methods the model that best separates "at sea" from "on board" buoy positions. A random forest model had the best performance, both in terms of the rate of false "at sea" predictions and the amount of over-segmentation of "at sea" trajectories (i.e., artificial division of trajectories into multiple, shorter pieces due to misclassification). Performance is improved via post-processing removing unrealistically short "at sea" trajectories. Results derived from the selected model enable us to identify the main areas and seasons of dFAD deployment and the spatial extent of their drift. We find that dFADs drift at sea on average for 39.5 days, with time at sea being shorter and distance travelled longer in the Indian than in the Atlantic Ocean. 9.9% of all trajectories end with a beaching event, suggesting that 1,500-2,000 may be lost onshore each year, potentially impacting sensitive habitat areas, such as the coral reefs of the Maldives, the Chagos Archipelago, and the Seychelles.
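As a hedged illustration of the classification problem (the study itself uses a random forest over several predictors), a minimal speed-threshold classifier with the kind of post-processing described, removal of unrealistically short "at sea" runs, might look like this; the threshold and minimum run length are invented for the example:

```python
def label_positions(speeds_kn, at_sea_max=3.0, min_run=3):
    """Label each buoy position 'at sea' or 'on board' from speed (knots):
    drifting dFADs move slowly, while buoys stored on a vessel travel at
    cruise speed. Runs of 'at sea' shorter than min_run are relabelled,
    suppressing the over-segmentation the study corrects in post-processing."""
    labels = ["at sea" if s <= at_sea_max else "on board" for s in speeds_kn]
    i = 0
    while i < len(labels):
        j = i
        while j < len(labels) and labels[j] == "at sea":
            j += 1                       # j now bounds the 'at sea' run
        if labels[i] == "at sea" and (j - i) < min_run:
            for k in range(i, j):        # too short to be a real drift
                labels[k] = "on board"
        i = max(j, i + 1)
    return labels

speeds = [1.0, 0.5, 2.0, 9.0, 1.0, 8.5, 9.2]   # knots, hypothetical track
labels = label_positions(speeds)
```

The lone slow fix between two fast transits is relabelled "on board", just as the study's post-processing removes unrealistically short "at sea" trajectories.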
NASA Astrophysics Data System (ADS)
Tuozzolo, S.; Frasson, R. P. M.; Durand, M. T.
2017-12-01
We analyze a multi-temporal dataset of in-situ and airborne water surface measurements from the March 2015 AirSWOT field campaign on the Willamette River in western Oregon, which included six days of AirSWOT flights over a 75 km stretch of the river. We examine systematic errors associated with dark water and layover effects in the AirSWOT dataset, and test the efficacy of different filtering and spatial averaging techniques at reconstructing the water surface profile. Finally, we generate a spatially averaged time series of water surface elevation and water surface slope. These AirSWOT-derived reach-averaged values are ingested into a prospective SWOT discharge algorithm to assess its performance on SWOT-like data collected from a borderline SWOT-measurable river (mean width = 90 m).
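Reach averaging of this kind can be sketched simply: given along-stream distance and water-surface-elevation samples, the reach-averaged elevation is the mean and the slope is an ordinary least-squares line fit. This is a generic illustration with invented numbers, not the AirSWOT processing chain:

```python
def reach_average(dist_m, wse_m):
    """Reach-averaged water surface elevation (mean) and slope (ordinary
    least-squares line fit) from along-stream (distance, elevation) samples."""
    n = len(dist_m)
    mx = sum(dist_m) / n
    my = sum(wse_m) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(dist_m, wse_m))
    var = sum((x - mx) ** 2 for x in dist_m)
    return my, cov / var

dist = [0.0, 1000.0, 2000.0, 3000.0]        # m along stream
wse  = [50.0, 49.9, 49.8, 49.7]             # m; drops 0.1 m per km
mean_wse, slope = reach_average(dist, wse)  # slope ~ -1e-4 (dimensionless)
```

Averaging over a reach suppresses the pixel-scale noise (dark water, layover) that dominates individual height measurements, which is why reach-averaged height and slope are the quantities fed to discharge algorithms.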
Spatial-temporal features of thermal images for Carpal Tunnel Syndrome detection
NASA Astrophysics Data System (ADS)
Estupinan Roldan, Kevin; Ortega Piedrahita, Marco A.; Benitez, Hernan D.
2014-02-01
Disorders associated with repeated trauma account for about 60% of all occupational illnesses, with Carpal Tunnel Syndrome (CTS) the most frequently seen today. Infrared Thermography (IT) has come to play an important role in medicine: it is non-invasive and detects disease by measuring temperature variations. IT represents a possible alternative to the prevalent methods for diagnosing CTS (i.e., nerve conduction studies and electromyography). This work presents a set of spatial-temporal features extracted from thermal images of healthy and ill patients. Support Vector Machine (SVM) classifiers are evaluated on this feature space using leave-one-out (LOO) validation error. The results of the proposed approach show linear separability and lower validation errors compared to features used in previous works that do not account for spatial temperature variability.
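Leave-one-out validation is easy to sketch. Here a nearest-centroid classifier stands in for the SVM (an assumption of this example, chosen to keep it dependency-free) and the 1-D thermal feature values are hypothetical:

```python
def loo_error(feats, labels):
    """Leave-one-out validation of a nearest-centroid classifier: each sample
    is held out in turn, class means are computed from the remaining samples,
    and the held-out sample is assigned to the nearest class mean."""
    errors = 0
    for i, (x, y) in enumerate(zip(feats, labels)):
        rest = [(s, l) for j, (s, l) in enumerate(zip(feats, labels)) if j != i]
        best, pred = float("inf"), None
        for cls in set(labels):
            pts = [s for s, l in rest if l == cls]
            if pts:
                d = abs(x - sum(pts) / len(pts))
                if d < best:
                    best, pred = d, cls
        errors += pred != y
    return errors / len(feats)

# hypothetical 1-D thermal feature, well separated between groups
feats  = [0.10, 0.20, 0.15, 0.90, 1.00, 0.95]
labels = ["healthy"] * 3 + ["cts"] * 3
err = loo_error(feats, labels)
```

LOO is the natural choice for the small patient cohorts typical of clinical thermography studies, since it uses all but one sample for training at every step.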
Knowledge-rich temporal relation identification and classification in clinical notes
D’Souza, Jennifer; Ng, Vincent
2014-01-01
Motivation: We examine the task of temporal relation classification for the clinical domain. Our approach to this task departs from existing ones in that it is (i) ‘knowledge-rich’, employing sophisticated knowledge derived from discourse relations as well as both domain-independent and domain-dependent semantic relations, and (ii) ‘hybrid’, combining the strengths of rule-based and learning-based approaches. Evaluation results on the i2b2 Clinical Temporal Relations Challenge corpus show that our approach yields a 17–24% and 8–14% relative reduction in error over a state-of-the-art learning-based baseline system when gold-standard and automatically identified temporal relations are used, respectively. Database URL: http://www.hlt.utdallas.edu/~jld082000/temporal-relations/ PMID:25414383
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.
2015-04-01
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr⁻¹ in the 1960s to 0.3 Pg C yr⁻¹ in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr⁻¹ in the 1960s to almost 1.0 Pg C yr⁻¹ during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades.
Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
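The effect of temporally correlated random error on an aggregated estimate can be illustrated with an AR(1) error model, a common assumption for persistent reporting errors (this is a generic sketch, not the paper's exact framework): with correlation, the standard error of an n-year mean no longer shrinks like sigma/sqrt(n):

```python
def decadal_sigma(annual_sigma, rho, n=10):
    """Standard error of an n-year mean when annual errors follow an AR(1)
    process with lag-1 autocorrelation rho; rho = 0 recovers sigma/sqrt(n).
    Uses Var(mean) = (1/n^2) * sum_{i,j} sigma^2 * rho^|i-j|."""
    var = 0.0
    for i in range(n):
        for j in range(n):
            var += annual_sigma ** 2 * rho ** abs(i - j)
    return (var / n ** 2) ** 0.5

iid = decadal_sigma(0.5, 0.0)    # independent annual errors
cor = decadal_sigma(0.5, 0.95)   # strongly persistent errors, e.g. inventory biases
```

Persistent errors barely average out over a decade, which is why temporally correlated emission errors matter so much for decadal budget uncertainties.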
Using First Differences to Reduce Inhomogeneity in Radiosonde Temperature Datasets.
NASA Astrophysics Data System (ADS)
Free, Melissa; Angell, James K.; Durre, Imke; Lanzante, John; Peterson, Thomas C.; Seidel, Dian J.
2004-11-01
The utility of a “first difference” method for producing temporally homogeneous large-scale mean time series is assessed. Starting with monthly averages, the method involves dropping data around the time of suspected discontinuities and then calculating differences in temperature from one year to the next, resulting in a time series of year-to-year differences for each month at each station. These first difference time series are then combined to form large-scale means, and mean temperature time series are constructed from the first difference series. When applied to radiosonde temperature data, the method introduces random errors that decrease with the number of station time series used to create the large-scale time series and increase with the number of temporal gaps in the station time series. Root-mean-square errors for annual means of datasets produced with this method using over 500 stations are estimated at no more than 0.03 K, with errors in trends less than 0.02 K decade⁻¹ for 1960–97 at 500 mb. For a 50-station dataset, errors in trends in annual global means introduced by the first differencing procedure may be as large as 0.06 K decade⁻¹ (for six breaks per series), which is greater than the standard error of the trend. Although the first difference method offers significant resource and labor advantages over methods that attempt to adjust the data, it introduces an error in large-scale mean time series that may be unacceptable in some cases.
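The first-difference construction itself is simple to sketch. This toy version uses annual series with `None` marking data dropped around a suspected discontinuity, forms year-to-year differences only where both years survive, averages across stations, and accumulates (the zero point of the reconstructed series is arbitrary, since only differences are used):

```python
def first_differences(series):
    """Year-to-year differences for one station (None marks a gap); a
    difference is formed only when both adjacent years are present."""
    return [None if a is None or b is None else b - a
            for a, b in zip(series, series[1:])]

def large_scale_mean(stations):
    """Average the stations' first-difference series, then accumulate them
    into a large-scale mean time series (first-year zero point arbitrary)."""
    diffs = [first_differences(s) for s in stations]
    out = [0.0]
    for t in range(len(stations[0]) - 1):
        vals = [d[t] for d in diffs if d[t] is not None]
        out.append(out[-1] + sum(vals) / len(vals))
    return out

st1 = [10.0, 10.5, None, 11.5]   # one year dropped around a discontinuity
st2 = [20.0, 20.3, 20.6, 21.1]
mean_series = large_scale_mean([st1, st2])
```

Each gap removes two differences, which is the mechanism behind the abstract's finding that errors grow with the number of temporal gaps per station series.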
Forster, Sarah E; Zirnheld, Patrick; Shekhar, Anantha; Steinhauer, Stuart R; O'Donnell, Brian F; Hetrick, William P
2017-09-01
Signals carried by the mesencephalic dopamine system and conveyed to anterior cingulate cortex are critically implicated in probabilistic reward learning and performance monitoring. A common evaluative mechanism purportedly subserves both functions, giving rise to homologous medial frontal negativities in feedback- and response-locked event-related brain potentials (the feedback-related negativity (FRN) and the error-related negativity (ERN), respectively), reflecting dopamine-dependent prediction error signals to unexpectedly negative events. Consistent with this model, the dopamine receptor antagonist, haloperidol, attenuates the ERN, but effects on FRN have not yet been evaluated. ERN and FRN were recorded during a temporal interval learning task (TILT) following randomized, double-blind administration of haloperidol (3 mg; n = 18), diphenhydramine (an active control for haloperidol; 25 mg; n = 20), or placebo (n = 21) to healthy controls. Centroparietal positivities, the Pe and feedback-locked P300, were also measured and correlations between ERP measures and behavioral indices of learning, overall accuracy, and post-error compensatory behavior were evaluated. We hypothesized that haloperidol would reduce ERN and FRN, but that ERN would uniquely track automatic, error-related performance adjustments, while FRN would be associated with learning and overall accuracy. As predicted, ERN was reduced by haloperidol and in those exhibiting less adaptive post-error performance; however, these effects were limited to ERNs following fast timing errors. In contrast, the FRN was not affected by drug condition, although increased FRN amplitude was associated with improved accuracy. Significant drug effects on centroparietal positivities were also absent. Our results support a functional and neurobiological dissociation between the ERN and FRN.
Determination of the spectral behaviour of atmospheric soot using different particle models
NASA Astrophysics Data System (ADS)
Skorupski, Krzysztof
2017-08-01
In the atmosphere, black carbon aggregates interact with both organic and inorganic matter. In many studies they are modeled using different, less complex geometries; however, common simplifications can introduce substantial inaccuracies into subsequent light-scattering simulations. The goal of this study was to compare the spectral behavior of different, commonly used soot particle models. For light-scattering simulations in the visible spectrum, the ADDA algorithm was used. The results show that the relative extinction error δCext can, in some cases, be unexpectedly large. Therefore, before starting extensive simulations, it is important to know what error might occur.
Cross, Paul C.; Caillaud, Damien; Heisey, Dennis M.
2013-01-01
Many ecological and epidemiological studies occur in systems with mobile individuals and heterogeneous landscapes. Using a simulation model, we show that the accuracy of inferring an underlying biological process from observational data depends on movement and spatial scale of the analysis. As an example, we focused on estimating the relationship between host density and pathogen transmission. Observational data can result in highly biased inference about the underlying process when individuals move among sampling areas. Even without sampling error, the effect of host density on disease transmission is underestimated by approximately 50 % when one in ten hosts move among sampling areas per lifetime. Aggregating data across larger regions causes minimal bias when host movement is low, and results in less biased inference when movement rates are high. However, increasing data aggregation reduces the observed spatial variation, which would lead to the misperception that a spatially targeted control effort may not be very effective. In addition, averaging over the local heterogeneity will result in underestimating the importance of spatial covariates. Minimizing the bias due to movement is not just about choosing the best spatial scale for analysis, but also about reducing the error associated with using the sampling location as a proxy for an individual’s spatial history. This error associated with the exposure covariate can be reduced by choosing sampling regions with less movement, including longitudinal information of individuals’ movements, or reducing the window of exposure by using repeated sampling or younger individuals.
NASA Astrophysics Data System (ADS)
Kirchner, J. W.
2016-01-01
Environmental heterogeneity is ubiquitous, but environmental systems are often analyzed as if they were homogeneous instead, resulting in aggregation errors that are rarely explored and almost never quantified. Here I use simple benchmark tests to explore this general problem in one specific context: the use of seasonal cycles in chemical or isotopic tracers (such as Cl-, δ18O, or δ2H) to estimate timescales of storage in catchments. Timescales of catchment storage are typically quantified by the mean transit time, meaning the average time that elapses between parcels of water entering as precipitation and leaving again as streamflow. Longer mean transit times imply greater damping of seasonal tracer cycles. Thus, the amplitudes of tracer cycles in precipitation and streamflow are commonly used to calculate catchment mean transit times. Here I show that these calculations will typically be wrong by several hundred percent, when applied to catchments with realistic degrees of spatial heterogeneity. This aggregation bias arises from the strong nonlinearity in the relationship between tracer cycle amplitude and mean travel time. I propose an alternative storage metric, the young water fraction in streamflow, defined as the fraction of runoff with transit times of less than roughly 0.2 years. I show that this young water fraction (not to be confused with event-based "new water" in hydrograph separations) is accurately predicted by seasonal tracer cycles within a precision of a few percent, across the entire range of mean transit times from almost zero to almost infinity. Importantly, this relationship is also virtually free from aggregation error. That is, seasonal tracer cycles also accurately predict the young water fraction in runoff from highly heterogeneous mixtures of subcatchments with strongly contrasting transit-time distributions. 
Thus, although tracer cycle amplitudes yield biased and unreliable estimates of catchment mean travel times in heterogeneous catchments, they can be used to reliably estimate the fraction of young water in runoff.
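The aggregation bias described above can be reproduced from the standard damping relation for an exponential transit-time distribution, where the seasonal amplitude ratio is A_S/A_P = 1/sqrt(1 + (2·pi·f·tau)²). The sketch below assumes exponential distributions and equal-area mixing of two subcatchments; it is illustrative, not the paper's code.

```python
import math

def amplitude_ratio(tau, freq=1.0):
    """Seasonal tracer-cycle damping A_S/A_P for an exponential transit-time
    distribution with mean transit time tau (years); freq in cycles/year."""
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * freq * tau) ** 2)

def inferred_mtt(ratio, freq=1.0):
    """Mean transit time a homogeneous (single-store) model would infer."""
    return math.sqrt(1.0 / ratio ** 2 - 1.0) / (2.0 * math.pi * freq)

# Two equal-area subcatchments with strongly contrasting transit times:
tau_a, tau_b = 0.5, 10.0
mixed_ratio = 0.5 * (amplitude_ratio(tau_a) + amplitude_ratio(tau_b))
apparent_mtt = inferred_mtt(mixed_ratio)   # what the mixed tracer cycle suggests
true_mean_mtt = 0.5 * (tau_a + tau_b)      # the actual mean transit time
```

With these numbers the apparent mean transit time comes out under one year while the true mean is 5.25 years, i.e. an error of several hundred percent, which is the aggregation bias the abstract describes.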
Local and global evaluation for remote sensing image segmentation
NASA Astrophysics Data System (ADS)
Su, Tengfei; Zhang, Shengwei
2017-08-01
In object-based image analysis, producing an accurate segmentation is usually an important issue that must be solved before image classification or target recognition, and segmentation evaluation methods are key to solving it. Almost all existing evaluation strategies focus only on global performance assessment. However, such methods are ineffective when two segmentation results with very similar overall performance have very different local error distributions. To overcome this problem, this paper presents an approach that can quantify segmentation incorrectness both locally and globally. Region-overlapping metrics are used to quantify each reference geo-object's over- and under-segmentation error. These quantified error values are used to produce segmentation error maps, which effectively delineate local segmentation error patterns. The error values for all reference geo-objects are then aggregated using area-weighted summation to derive global indicators. An experiment using two scenes of very different high-resolution images showed that the global evaluation part of the proposed approach was almost as effective as two other global evaluation methods, and that the local part was a useful complement for comparing different segmentation results.
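The area-weighted aggregation step can be sketched as follows. Function names, the tuple layout, and the error definitions are illustrative assumptions, not the paper's own metrics.

```python
def global_error(reference_objects):
    """Area-weighted aggregation of per-object segmentation errors.

    reference_objects: list of (area, over_err, under_err) tuples, one per
    reference geo-object, with each error already normalized to [0, 1].
    Returns global over- and under-segmentation indicators.
    """
    total_area = sum(area for area, _, _ in reference_objects)
    g_over = sum(area * over for area, over, _ in reference_objects) / total_area
    g_under = sum(area * under for area, _, under in reference_objects) / total_area
    return g_over, g_under

# A small object segmented well, a large one segmented poorly:
objects = [(100.0, 0.10, 0.00), (300.0, 0.30, 0.20)]
g_over, g_under = global_error(objects)
```

Weighting by area means a badly segmented large object dominates the global score, while the per-object values retained before summation are what drive the local error maps.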
Detection of long duration cloud contamination in hyper-temporal NDVI imagery
NASA Astrophysics Data System (ADS)
Ali, A.; de Bie, C. A. J. M.; Skidmore, A. K.; Scarrott, R. G.
2012-04-01
NDVI time series imagery is commonly used as a reliable source for land use and land cover mapping and monitoring. However, long-duration cloud cover can significantly reduce its precision in areas where persistent clouds prevail. Quantifying errors related to cloud contamination is therefore essential for accurate land cover mapping and monitoring. This study aims to detect long-duration cloud contamination in hyper-temporal NDVI imagery used for land cover mapping and monitoring. MODIS-Terra NDVI imagery (250 m; 16-day; Feb'03-Dec'09) was used after the necessary pre-processing with quality flags and an upper-envelope filter (ASAVOGOL). Subsequently, the stacked MODIS-Terra NDVI image (161 layers) was classified into 10 to 100 clusters using ISODATA. After classification, the 97-cluster image was selected as the best result with the help of divergence statistics. To detect long-duration cloud contamination, the mean NDVI class profiles of the 97-cluster image were analyzed for temporal artifacts. Results showed that long-duration clouds disrupt the normal temporal progression of NDVI and cause anomalies. Of the 97 clusters, 32 showed cloud contamination, which was more prominent in areas with high rainfall. This study can help prevent the error propagation in regional land cover mapping and monitoring that is caused by long-duration cloud contamination.
Huang, Ying-Zu; Chang, Yao-Shun; Hsu, Miao-Ju; Wong, Alice M K; Chang, Ya-Ju
2015-01-01
Disrupted triphasic electromyography (EMG) patterns of agonist and antagonist muscle pairs during fast goal-directed movements have been found in patients with hypermetria. Since peripheral electrical stimulation (ES) and motor training may modulate motor cortical excitability through plasticity mechanisms, we aimed to investigate whether temporal ES-assisted movement training could influence premovement cortical excitability and alleviate hypermetria in patients with spinal cerebellar ataxia (SCA). The EMG of the agonist extensor carpi radialis muscle and antagonist flexor carpi radialis muscle, premovement motor evoked potentials (MEPs) of the flexor carpi radialis muscle, and the constant and variable errors of movements were assessed before and after 4 weeks of ES-assisted fast goal-directed wrist extension training in the training group and of general health education in the control group. After training, the premovement MEPs of the antagonist muscle were facilitated at 50 ms before the onset of movement. In addition, the EMG onset latency of the antagonist muscle shifted earlier and the constant error decreased significantly. In summary, temporal ES-assisted training alleviated hypermetria by restoring antagonist premovement and temporal triphasic EMG patterns in SCA patients. This technique may be applied to treat hypermetria in cerebellar disorders. (This trial is registered with NCT01983670.).
NASA Technical Reports Server (NTRS)
Holdaway, Daniel; Yang, Yuekui
2016-01-01
Satellites always sample the Earth-atmosphere system at a finite temporal resolution. This study investigates the effect of sampling frequency on the satellite-derived Earth radiation budget, with the Deep Space Climate Observatory (DSCOVR) as an example. The output from NASA's Goddard Earth Observing System Version 5 (GEOS-5) Nature Run is used as the truth. The Nature Run is a high spatial and temporal resolution atmospheric simulation spanning a two-year period. The effect of temporal resolution on potential DSCOVR observations is assessed by sampling the full Nature Run data with 1-h to 24-h frequencies. The uncertainty associated with a given sampling frequency is measured by computing means over daily, monthly, seasonal and annual intervals and determining the spread across different possible starting points. The skill with which a particular sampling frequency captures the structure of the full time series is measured using correlations and normalized errors. Results show that higher sampling frequency gives more information and less uncertainty in the derived radiation budget. A sampling frequency coarser than every 4 h results in significant error, and correlations between true and sampled time series also decrease more rapidly once sampling becomes coarser than every 4 h.
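The uncertainty measure described above (the spread of means across all possible sampling starting points) can be illustrated on a synthetic hourly series with a diurnal cycle. The data below are invented toy values, not GEOS-5 output.

```python
import numpy as np

def sampling_spread(series, step):
    """Spread of time-averaged means across all possible sampling offsets,
    when `series` (hourly) is sampled once every `step` hours."""
    means = [series[start::step].mean() for start in range(step)]
    return max(means) - min(means)

rng = np.random.default_rng(0)
hours = np.arange(24 * 30)                           # one month of hourly "truth"
flux = 240 + 40 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

spread_24h = sampling_spread(flux, 24)               # once per day: aliases the diurnal cycle
spread_4h = sampling_spread(flux, 4)                 # every 4 h: diurnal cycle averages out
```

Sampling once per day always hits the diurnal cycle at the same phase, so the monthly mean depends strongly on the starting hour; 4-hourly sampling covers the cycle and the spread collapses toward the noise level, consistent with the 4-h threshold reported in the abstract.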
Petersen, J.H.
2001-01-01
Predation by northern pikeminnow Ptychocheilus oregonensis on juvenile salmonids Oncorhynchus spp. probably occurred during brief feeding bouts, since diets were either dominated by salmonids (>80% by weight) or contained other prey types and few salmonids (<5%). In samples where salmonids had been consumed, large rather than small predators were more likely to have captured them. Transects with a higher catch per unit effort of predators also had higher incidences of salmonids in predator guts. Predators in two of three reservoir areas were distributed more contagiously if they had recently preyed on salmonids. Spatial and temporal patchiness of salmonid prey may be generating differences in the local density, aggregation, and body size of their predators in this large river.
Eliciting Naturalistic Cortical Responses with a Sensory Prosthesis via Optimized Microstimulation
2016-08-12
error and correlation as metrics amenable to highly efficient convex optimization. This study concentrates on characterizing the neural responses to both...spiking signal. For LFP, distance measures such as the traditional mean-squared error and cross- correlation can be used, whereas distances between spike...with parameters that describe their associated temporal dynamics and relations to the observed output. A description of the model follows, but we
Differential processing of melodic, rhythmic and simple tone deviations in musicians--an MEG study.
Lappe, Claudia; Lappe, Markus; Pantev, Christo
2016-01-01
Rhythm and melody are two basic characteristics of music. Performing musicians have to pay attention to both, and avoid errors in either aspect of their performance. To investigate the neural processes involved in detecting melodic and rhythmic errors from auditory input we tested musicians on both kinds of deviations in a mismatch negativity (MMN) design. We found that MMN responses to a rhythmic deviation occurred at shorter latencies than MMN responses to a melodic deviation. Beamformer source analysis showed that the melodic deviation activated superior temporal, inferior frontal and superior frontal areas whereas the activation pattern of the rhythmic deviation focused more strongly on inferior and superior parietal areas, in addition to superior temporal cortex. Activation in the supplementary motor area occurred for both types of deviations. We also recorded responses to similar pitch and tempo deviations in a simple, non-musical repetitive tone pattern. In this case, there was no latency difference between the MMNs and cortical activation was smaller and mostly limited to auditory cortex. The results suggest that prediction and error detection of musical stimuli in trained musicians involve a broad cortical network and that rhythmic and melodic errors are processed in partially different cortical streams. Copyright © 2015 Elsevier Inc. All rights reserved.
Smart Grid Privacy through Distributed Trust
NASA Astrophysics Data System (ADS)
Lipton, Benjamin
Though the smart electrical grid promises many advantages in efficiency and reliability, the risks to consumer privacy have impeded its deployment. Researchers have proposed protecting privacy by aggregating user data before it reaches the utility, using techniques of homomorphic encryption to prevent exposure of unaggregated values. However, such schemes generally require users to trust in the correct operation of a single aggregation server. We propose two alternative systems based on secret sharing techniques that distribute this trust among multiple service providers, protecting user privacy against a misbehaving server. We also provide an extensive evaluation of the systems considered, comparing their robustness to privacy compromise, error handling, computational performance, and data transmission costs. We conclude that while all the systems should be computationally feasible on smart meters, the two methods based on secret sharing require much less computation while also providing better protection against corrupted aggregators. Building systems using these techniques could help defend the privacy of electricity customers, as well as customers of other utilities as they move to a more data-driven architecture.
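Additive secret sharing, one of the standard techniques behind such schemes, can be sketched in a few lines. The modulus, share layout, and function names below are illustrative assumptions, not the specific systems evaluated in the paper.

```python
import secrets

P = 2 ** 61 - 1   # public prime modulus, comfortably larger than any aggregate

def share(reading, n_servers):
    """Split one meter reading into n additive shares mod P.
    Any n-1 shares are uniformly random and reveal nothing about the reading."""
    parts = [secrets.randbelow(P) for _ in range(n_servers - 1)]
    parts.append((reading - sum(parts)) % P)
    return parts

def aggregate(all_shares):
    """Each server sums the shares it holds; combining the per-server totals
    (mod P) yields the sum of all readings, never any individual reading."""
    server_totals = [sum(column) % P for column in zip(*all_shares)]
    return sum(server_totals) % P

readings = [523, 871, 302]                       # three customers' usage
all_shares = [share(r, 3) for r in readings]     # row i is customer i's 3 shares
total = aggregate(all_shares)
```

A single misbehaving server sees only uniformly random values, which is the trust-distribution property the paper contrasts with single-aggregator homomorphic schemes; the arithmetic here is only modular addition, consistent with the low computational cost noted for secret-sharing approaches.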
Refolding techniques for recovering biologically active recombinant proteins from inclusion bodies.
Yamaguchi, Hiroshi; Miyazaki, Masaya
2014-02-20
Biologically active proteins are useful for studying the biological functions of genes and for the development of therapeutic drugs and biomaterials in a biotechnology industry. Overexpression of recombinant proteins in bacteria, such as Escherichia coli, often results in the formation of inclusion bodies, which are protein aggregates with non-native conformations. As inclusion bodies contain relatively pure and intact proteins, protein refolding is an important process to obtain active recombinant proteins from inclusion bodies. However, conventional refolding methods, such as dialysis and dilution, are time consuming and, often, recovered yields of active proteins are low, and a trial-and-error process is required to achieve success. Recently, several approaches have been reported to refold these aggregated proteins into an active form. The strategies largely aim at reducing protein aggregation during the refolding procedure. This review focuses on protein refolding techniques using chemical additives and laminar flow in microfluidic chips for the efficient recovery of active proteins from inclusion bodies.
Galvín, Adela P; Ayuso, Jesús; Barbudo, Auxi; Cabrera, Manuel; López-Uceda, Antonio; Rosales, Julia
2017-12-27
In general terms, plant managers of sites producing construction wastes assess materials according to concise, legally recommended leaching tests that do not consider the compaction stage of the materials when they are applied on-site. Thus, the tests do not account for the real on-site physical conditions of the recycled aggregates used in civil works (e.g., roads or embankments), which leads to errors in estimating the pollutant potential of these materials. For that reason, in the present research an experimental procedure is designed as a leaching test for construction materials under compaction. The aim of this laboratory test (designed specifically for the granular materials used in civil engineering infrastructures) is to evaluate the release of pollutant elements when the recycled aggregate is tested at its commercial grain-size distribution and compacted under on-site conditions. Two recycled aggregates with different gypsum contents (0.95 and 2.57%) were used in this study. In addition to the designed leaching laboratory test, the conventional compliance leaching test and the Dutch percolation test were performed, and the results of the new leaching method were compared with the conventional leaching test results. The chromium and sulphate levels obtained from the newly designed test were lower than those obtained from the conventional leaching test; these are considered the most seriously polluting elements. This result confirms that the conventional test, in which the aggregate is crushed and only the finest fraction is used (an unrealistic situation for aggregates applied under on-site conditions), does not accurately assess the leaching behaviour of construction aggregates at their undisturbed on-site density.
Estimating top-of-atmosphere thermal infrared radiance using MERRA-2 atmospheric data
NASA Astrophysics Data System (ADS)
Kleynhans, Tania; Montanaro, Matthew; Gerace, Aaron; Kanan, Christopher
2017-05-01
Thermal infrared satellite images have been widely used in environmental studies. However, satellites have limited temporal resolution, e.g., 16 days for Landsat or 1 to 2 days for Terra MODIS. This paper investigates the use of the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) reanalysis data product, produced by NASA's Global Modeling and Assimilation Office (GMAO), to predict global top-of-atmosphere (TOA) thermal infrared radiance. The high temporal resolution of the MERRA-2 data product presents opportunities for novel research and applications. Various methods were applied to estimate TOA radiance from MERRA-2 variables, namely (1) a parameterized physics-based method, (2) linear regression models, and (3) non-linear Support Vector Regression. Model prediction accuracy was evaluated using temporally and spatially coincident Moderate Resolution Imaging Spectroradiometer (MODIS) thermal infrared data as reference data. This research found that Support Vector Regression with a radial basis function kernel produced the lowest error rates. Sources of error are discussed and defined. Further research is currently being conducted to train deep learning models to predict TOA thermal radiance.
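To keep the example self-contained, the sketch below illustrates the same family of kernel methods with an RBF kernel ridge regression implemented directly in NumPy rather than the paper's Support Vector Regression. The feature names and target function are invented stand-ins for illustration, not MERRA-2 variables.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_krr(X, y, gamma, lam):
    """Kernel ridge regression: solve (K + lam*I) alpha = y - mean(y)."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y - y.mean())
    return alpha, y.mean()

def predict_krr(X_train, alpha, offset, X_new, gamma):
    return rbf_kernel(X_new, X_train, gamma) @ alpha + offset

# Synthetic stand-in for "reanalysis state -> TOA radiance" (not MERRA-2 data):
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (200, 3))      # e.g. surface T, water vapour, cloud fraction
y = 280 + 30 * X[:, 0] - 15 * X[:, 2] + 5 * np.sin(4 * X[:, 1])

alpha, offset = fit_krr(X, y, gamma=2.0, lam=1e-4)
rmse = float(np.sqrt(np.mean((predict_krr(X, alpha, offset, X, 2.0) - y) ** 2)))
```

As with the RBF-kernel SVR the paper favors, the nonlinear kernel lets the regression capture smooth nonlinear dependence of radiance on the atmospheric state that a purely linear model would miss.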
The ADaptation and Anticipation Model (ADAM) of sensorimotor synchronization
van der Steen, M. C. (Marieke); Keller, Peter E.
2013-01-01
A constantly changing environment requires precise yet flexible timing of movements. Sensorimotor synchronization (SMS)—the temporal coordination of an action with events in a predictable external rhythm—is a fundamental human skill that contributes to optimal sensory-motor control in daily life. A large body of research related to SMS has focused on adaptive error correction mechanisms that support the synchronization of periodic movements (e.g., finger taps) with events in regular pacing sequences. The results of recent studies additionally highlight the importance of anticipatory mechanisms that support temporal prediction in the context of SMS with sequences that contain tempo changes. To investigate the role of adaptation and anticipatory mechanisms in SMS we introduce ADAM: an ADaptation and Anticipation Model. ADAM combines reactive error correction processes (adaptation) with predictive temporal extrapolation processes (anticipation) inspired by the computational neuroscience concept of internal models. The combination of simulations and experimental manipulations based on ADAM creates a novel and promising approach for exploring adaptation and anticipation in SMS. The current paper describes the conceptual basis and architecture of ADAM. PMID:23772211
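The adaptation (reactive error-correction) component of such models is often formalized as first-order linear phase correction, in which each tap subtracts a fraction alpha of the previous asynchrony. The sketch below, with assumed parameter values, shows only that component; it is not ADAM itself, which additionally includes anticipatory temporal extrapolation.

```python
import random

def simulate_taps(n_taps, alpha=0.5, noise_sd=10.0, start_asyn=40.0, seed=42):
    """First-order linear phase correction: each new asynchrony keeps a
    fraction (1 - alpha) of the previous one, plus motor timing noise (ms).
    A sketch of the 'adaptation' mechanism only."""
    rng = random.Random(seed)
    asyn, series = start_asyn, []
    for _ in range(n_taps):
        series.append(asyn)
        asyn = (1.0 - alpha) * asyn + rng.gauss(0.0, noise_sd)
    return series

taps = simulate_taps(200)   # an initial 40 ms error is corrected within a few taps
```

Pure adaptation of this kind tracks a regular metronome well but lags behind tempo changes, which is exactly the gap the anticipatory component of ADAM is designed to fill.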
Virtual Patterson Experiment - A Way to Access the Rheology of Aggregates and Melanges
NASA Astrophysics Data System (ADS)
Delannoy, Thomas; Burov, Evgueni; Wolf, Sylvie
2014-05-01
Understanding the mechanisms of lithospheric deformation requires bridging the gap between human-scale laboratory experiments and the huge geological objects they represent. Those experiments are limited in spatial and time scale as well as in choice of materials (e.g., mono-phase minerals, exaggerated temperatures and strain rates), which means that the resulting constitutive laws may not fully represent real rocks at geological spatial and temporal scales. We use the thermo-mechanical numerical modelling approach as a tool to link experiments and nature and hence better understand the rheology of the lithosphere, by enabling us to study the behavior of polymineralic aggregates and their impact on the localization of deformation. We have adapted the large-strain visco-elasto-plastic Flamar code to allow it to operate at all spatial and temporal scales, from sub-grain to geodynamic scale, and from seismic time scales to millions of years. Our first goal was to reproduce real rock mechanics experiments on the deformation of mono- and polymineralic aggregates in Patterson's load machine in order to deepen our understanding of the rheology of polymineralic rocks. In particular, we studied in detail the deformation of a 15x15 mm mica-quartz sample at 750 °C and 300 MPa. This mixture includes a molten phase and a solid phase in which shear bands develop as a result of interactions between ductile and brittle deformation and stress concentration at the boundaries between weak and strong phases. We used digitized x-ray scans of real samples as the initial configuration for the numerical models, so that the model-predicted deformation and stress-strain behavior can be matched to those observed in the laboratory experiment. Analyzing the numerical experiments that best match the press experiments, together with complementary models in which initial-state parameters are varied (strength contrast between the phases, proportions, microstructure, etc.), yields new insight into the mechanisms governing the localization of deformation across the aggregates. We next used stress-strain curves derived from the numerical experiments to study in detail the evolution of the rheological behavior of each mineral phase, as well as that of the mixtures, in order to formulate constitutive relations for mélanges and polymineralic aggregates. The next step of our approach would be to link the constitutive laws obtained at small scale (laws that govern the rheology of a polymineralic aggregate, the effect of the presence of a molten phase, etc.) to the large-scale behavior of the Earth by implementing them in lithosphere-scale models.
Possible artifacts in inferring seismic properties from X-ray data
NASA Astrophysics Data System (ADS)
Bosak, A.; Krisch, M.; Chumakov, A.; Abrikosov, I. A.; Dubrovinsky, L.
2016-11-01
We consider the experimental and computational artifacts relevant for the extraction of aggregate elastic properties of polycrystalline materials with particular emphasis on the derivation of seismic velocities. We use the case of iron as an example, and show that the improper use of definitions and neglecting the crystalline anisotropy can result in unexpectedly large errors up to a few percent.
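One "definition" issue of the kind this abstract alludes to is the choice of averaging scheme for polycrystalline aggregates: the Voigt (uniform-strain) and Reuss (uniform-stress) bounds on the shear modulus can differ enough to shift shear velocities by several percent, with the Voigt-Reuss-Hill average in between. A sketch for a cubic crystal follows, using approximate textbook elastic constants for ambient-condition bcc iron, assumed here purely for illustration.

```python
import math

def cubic_vrh_vs(c11, c12, c44, rho):
    """Voigt, Reuss, and Voigt-Reuss-Hill shear velocities (km/s) for a cubic
    crystal; elastic constants in GPa, density in kg/m^3."""
    g_voigt = (c11 - c12 + 3.0 * c44) / 5.0                              # uniform-strain bound
    g_reuss = 5.0 * (c11 - c12) * c44 / (4.0 * c44 + 3.0 * (c11 - c12))  # uniform-stress bound
    vs = lambda g: math.sqrt(g * 1e9 / rho) / 1000.0
    return vs(g_voigt), vs(g_reuss), vs(0.5 * (g_voigt + g_reuss))

# Approximate ambient-condition constants for bcc iron (assumed, for illustration):
v_voigt, v_reuss, v_hill = cubic_vrh_vs(230.0, 135.0, 117.0, 7874.0)
spread_pct = 100.0 * (v_voigt - v_reuss) / v_hill
```

The spread between the bounds grows with the crystal's elastic anisotropy, so silently picking one bound, or ignoring anisotropy altogether, propagates directly into the derived seismic velocities.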
Paul Dunham; Dale Weyermann; Dale Azuma
2002-01-01
Stratifications developed from National Land Cover Data (NLCD) and from photointerpretation (PI) were tested for effectiveness in reducing sampling error associated with estimates of timberland area and volume from FIA plots in western Oregon. Strata were created from NLCD through the aggregation of cover classes and the creation of 'edge' strata by...
Mechanisms of protein misfolding: Novel therapeutic approaches to protein-misfolding diseases
NASA Astrophysics Data System (ADS)
Salahuddin, Parveen; Siddiqi, Mohammad Khursheed; Khan, Sanaullah; Abdelhameed, Ali Saber; Khan, Rizwan Hasan
2016-11-01
In protein misfolding, a protein molecule acquires an incorrect tertiary structure, thereby inducing protein-misfolding diseases. Protein misfolding can occur through various mechanisms: changes in environmental conditions, oxidative stress, dominant negative mutations, errors in post-translational modifications, increased degradation rates, and trafficking errors. All of these factors cause protein misfolding and thereby lead to disease conditions. Both in vitro and in vivo observations suggest that partially unfolded or misfolded intermediates are particularly prone to aggregation. These partially misfolded intermediates aggregate via interactions with complementary intermediates, enhancing the formation of oligomers that grow into proto-fibrils and fibrils. Amyloid fibrils, for example, accumulate in the brain and central nervous system (CNS) as amyloid deposits in Parkinson's disease (PD), Alzheimer's disease (AD), prion diseases and Amyotrophic Lateral Sclerosis (ALS). Furthermore, tau protein adopts an intrinsically disordered conformation; its interaction with microtubules is therefore impaired and the protein undergoes aggregation, which is also an underlying cause of Alzheimer's and other neurodegenerative diseases. Treatment of such misfolding maladies is considered one of the most important challenges of the 21st century. Currently, several treatment strategies have been and are being discovered. These therapeutic interventions have partly reversed or prevented the pathological state. More recently, a new approach was discovered that employs nanobodies targeting multiple steps in the fibril-formation pathway, which may possibly cure these misfolding diseases completely. Keeping the above views in mind, in the current review we comprehensively discuss the different mechanisms underlying protein misfolding, the disease conditions they lead to, and their therapeutic interventions.
Irigoyen, Alejo J; Rojo, Irene; Calò, Antonio; Trobbiani, Gastón; Sánchez-Carnero, Noela; García-Charton, José A
2018-01-01
Underwater visual census (UVC) is the most common approach for estimating diversity, abundance and size of reef fishes in shallow and clear waters. Abundance estimation through UVC is particularly problematic in species occurring at low densities and/or highly aggregated because of their high variability at both spatial and temporal scales. The statistical power of experiments involving UVC techniques may be increased by augmenting the number of replicates or the area surveyed. In this work we present and test the efficiency of an UVC method based on diver towed GPS, the Tracked Roaming Transect (TRT), designed to maximize transect length (and thus the surveyed area) with respect to diving time invested in monitoring, as compared to Conventional Strip Transects (CST). Additionally, we analyze the effect of increasing transect width and length on the precision of density estimates by comparing TRT vs. CST methods using different fixed widths of 6 and 20 m (FW3 and FW10, respectively) and the Distance Sampling (DS) method, in which perpendicular distance of each fish or group of fishes to the transect line is estimated by divers up to 20 m from the transect line. The TRT was 74% more time and cost efficient than the CST (all transect widths considered together) and, for a given time, the use of TRT and/or increasing the transect width increased the precision of density estimates. In addition, since with the DS method distances of fishes to the transect line have to be estimated, and not measured directly as in terrestrial environments, errors in estimations of perpendicular distances can seriously affect DS density estimations. To assess the occurrence of distance estimation errors and their dependence on the observer's experience, a field experiment using wooden fish models was performed. We tested the precision and accuracy of density estimators based on fixed widths and the DS method. 
The accuracy of the estimates was measured comparing the actual total abundance with those estimated by divers using FW3, FW10, and DS estimators. Density estimates differed by 13% (range 0.1-31%) from the actual values (average = 13.09%; median = 14.16%). Based on our results we encourage the use of the Tracked Roaming Transect with Distance Sampling (TRT+DS) method for improving density estimates of species occurring at low densities and/or highly aggregated, as well as for exploratory rapid-assessment surveys in which divers could gather spatial ecological and ecosystem information on large areas during UVC.
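The sensitivity of fixed-width density estimates to distance errors is easy to see from the strip-transect estimator D = n / (2wL). The sketch below uses invented transect dimensions and counts purely for illustration.

```python
def strip_density(n_detected, length_m, half_width_m):
    """Fixed-width strip-transect density: count / area surveyed."""
    return n_detected / (2.0 * half_width_m * length_m)

# 12 fish on a 500 m transect, analysed with a nominal 3 m half-width ('FW3'):
d_assumed = strip_density(12, 500.0, 3.0)

# If divers in fact only detect fish out to ~2.5 m, the true density is higher,
# so using the nominal width underestimates it:
d_actual = strip_density(12, 500.0, 2.5)
bias_pct = 100.0 * (d_assumed - d_actual) / d_actual
```

Because the estimator scales inversely with the assumed strip width, a modest mismatch between the nominal width and the distance divers can actually judge translates directly into a proportional density bias, which is why the distance-estimation experiment with wooden fish models matters.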
NASA Astrophysics Data System (ADS)
Lechtenberg, Travis; McLaughlin, Craig A.; Locke, Travis; Krishna, Dhaval Mysore
2013-01-01
This paper examines atmospheric density estimated using precision orbit ephemerides (POE) from the CHAMP and GRACE satellites during short periods of greater atmospheric density variability. The results of calibrating CHAMP densities derived using POEs against those derived using accelerometers are examined for three different types of density perturbations [traveling atmospheric disturbances (TADs), geomagnetic cusp phenomena, and midnight density maxima] in order to determine the temporal resolution of POE solutions. In addition, the densities are compared to High-Accuracy Satellite Drag Model (HASDM) densities to compare the temporal resolution of both types of corrections. The resolution of these models of thermospheric density was found to be inadequate to sufficiently characterize the short-term density variations examined here. Also examined in this paper is the effect of differing density estimation schemes, assessed by propagating an initial orbit state forward in time and examining the induced errors. The propagated POE-derived densities incurred errors of a smaller magnitude than the empirical models, and errors on the same scale as or better than those incurred using the HASDM model.
Geological Carbon Sequestration: A New Approach for Near-Surface Assurance Monitoring
Wielopolski, Lucian
2011-01-01
There are two distinct objectives in monitoring geological carbon sequestration (GCS): deep monitoring of the reservoir's integrity and plume movement, and near-surface monitoring (NSM) to ensure public health and the safety of the environment. However, the minimum detection limits of the current instrumentation for NSM are too high for detecting weak signals embedded in the background levels of natural variation, and the data obtained represent point measurements in space and time. A new approach for NSM, based on gamma-ray spectroscopy induced by inelastic neutron scattering (INS), offers novel and unique characteristics providing the following: (1) high sensitivity with a reducible measurement error and detection limit, and (2) temporal and spatial integration of carbon in soil that results from underground CO2 seepage. Preliminary field results validated this approach, showing carbon suppression of 14% in the first year and 7% in the second year. In addition, the temporal behavior of the error propagation is presented, and it is shown that for a signal at the minimum detection level the error asymptotically approaches 47%. PMID:21556180
Historical spatial reconstruction of a spawning-aggregation fishery.
Buckley, Sarah M; Thurstan, Ruth H; Tobin, Andrew; Pandolfi, John M
2017-12-01
Aggregations of individual animals that form for breeding purposes are a critical ecological process for many species, yet these aggregations are inherently vulnerable to exploitation. Studies of the decline of exploited populations that form breeding aggregations tend to focus on catch rate and thus often overlook reductions in geographic range. We tested the hypothesis that catch rate and site occupancy of exploited fish-spawning aggregations (FSAs) decline in synchrony over time. We used the Spanish mackerel (Scomberomorus commerson) spawning-aggregation fishery in the Great Barrier Reef as a case study. Data were compiled from historical newspaper archives, fisher knowledge, and contemporary fishery logbooks to reconstruct catch rates and exploitation trends from the inception of the fishery. Our fine-scale analysis of catch and effort data spanned 103 years (1911-2013) and revealed a spatial expansion of fishing effort. Effort shifted offshore at a rate of 9.4 nm/decade, and newly targeted FSAs were reported at a rate of 2.9 per decade. Spatial expansion of effort masked the sequential exploitation, commercial extinction, and loss of 70% of exploited FSAs. After standardizing for improvements in fishing technology, average catch rates declined by 90.5% from 1934 to 2011 (from 119.4 to 11.41 fish/vessel/trip). Mean catch rate of Spanish mackerel and occupancy of exploited mackerel FSAs were not significantly related. Our study revealed a special kind of shifting spatial baseline in which a contraction in exploited FSAs occurred undetected. Knowledge of temporally and spatially explicit information on FSAs can be relevant for the conservation and management of FSA species. © 2017 Society for Conservation Biology.
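The reported decline in standardized catch rate is a simple relative change; a quick check of the figures quoted in the abstract (small differences from the reported 90.5% reflect rounding of the underlying data):

```python
def percent_decline(start, end):
    """Relative decline between two rates, as a percentage of the start value."""
    return 100.0 * (start - end) / start

# Standardized Spanish mackerel catch rates (fish/vessel/trip), 1934 vs. 2011.
drop = percent_decline(119.4, 11.41)   # ~90.4%
```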
NASA Astrophysics Data System (ADS)
Zheng, Donghui; Chen, Lei; Li, Jinpeng; Sun, Qinyuan; Zhu, Wenhua; Anderson, James; Zhao, Jian; Schülzgen, Axel
2018-03-01
Circular carrier squeezing interferometry (CCSI) is proposed and applied to suppress phase-shift error in a simultaneous phase-shifting point-diffraction interferometer (SPSPDI). By introducing a defocus, four phase-shifting point-diffraction interferograms with a circular carrier are acquired and then converted into linear-carrier interferograms by a coordinate transform. The transformed interferograms are rearranged into a spatial-temporal fringe (STF), so that the error lobe is separated from the phase lobe in the Fourier spectrum of the STF; filtering the phase lobe to calculate the extended phase, combined with the corresponding inverse coordinate transform, exactly retrieves the initial phase. Both simulations and experiments validate the ability of CCSI to suppress the ripple error generated by the phase-shift error. Compared with carrier squeezing interferometry (CSI), CCSI is effective in situations in which a linear carrier is difficult to introduce, with the added benefit of eliminating retrace error.
Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda
2015-01-01
The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity. PMID:26865735
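The idea of estimating and controlling the splitting error can be illustrated with step doubling on a toy linear reaction-diffusion problem: compare one full step against two half steps, accept when the difference is below tolerance, and adapt the timestep. This is a generic sketch of adaptive first-order Lie splitting, not the DFSP method itself, and all parameter values are arbitrary:

```python
import numpy as np

def lie_step(u, dt, D, k, dx):
    """One first-order Lie splitting step on u_t = D*u_xx - k*u (periodic grid):
    explicit-Euler diffusion, then an exact update for the linear decay."""
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    u = u + dt * D * lap
    return u * np.exp(-k * dt)

def adaptive_split(u, t_end, D=0.1, k=1.0, dx=0.1, dt=1e-3, tol=1e-6):
    """Step doubling: compare one dt step against two dt/2 steps and adapt dt,
    assuming O(dt^2) local error as for first-order operator splitting."""
    t = 0.0
    while t < t_end:
        dt = min(dt, t_end - t)
        coarse = lie_step(u, dt, D, k, dx)
        fine = lie_step(lie_step(u, 0.5 * dt, D, k, dx), 0.5 * dt, D, k, dx)
        err = np.max(np.abs(coarse - fine))       # local error estimate
        if err <= tol:                            # accept the more accurate solution
            u, t = fine, t + dt
        dt *= 0.9 * np.sqrt(tol / max(err, 1e-300))  # shrink on reject, grow on accept
    return u

x = np.linspace(0.0, 1.0, 32)
u0 = np.exp(-((x - 0.5) ** 2) / 0.01)             # Gaussian initial condition
u = adaptive_split(u0.copy(), t_end=0.1, dx=float(x[1] - x[0]))
```

The same accept/reject structure carries over when the substeps are stochastic simulations rather than PDE updates, with the error estimate replaced by the derived local-error estimators.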
Peripheral refraction in normal infant rhesus monkeys
Hung, Li-Fang; Ramamirtham, Ramkumar; Huang, Juan; Qiao-Grider, Ying; Smith, Earl L.
2008-01-01
Purpose: To characterize peripheral refractions in infant monkeys. Methods: Cross-sectional data for horizontal refractions were obtained from 58 normal rhesus monkeys at 3 weeks of age. Longitudinal data were obtained for both the vertical and horizontal meridians from 17 monkeys. Refractive errors were measured by retinoscopy along the pupillary axis and at eccentricities of 15, 30, and 45 degrees. Axial dimensions and corneal power were measured by ultrasonography and keratometry, respectively. Results: In infant monkeys, the degree of radial astigmatism increased symmetrically with eccentricity in all meridians. There were, however, initial nasal-temporal and superior-inferior asymmetries in the spherical-equivalent refractive errors. Specifically, the refractions in the temporal and superior fields were similar to the central ametropia, but the refractions in the nasal and inferior fields were more myopic than the central ametropia, and the relative nasal field myopia increased with the degree of central hyperopia. With age, the degree of radial astigmatism decreased in all meridians and the refractions became more symmetrical along both the horizontal and vertical meridians; small degrees of relative myopia were evident in all fields. Conclusions: As in adult humans, refractive error varied as a function of eccentricity in infant monkeys and the pattern of peripheral refraction varied with the central refractive error. With age, emmetropization occurred for both central and peripheral refractive errors, resulting in similar refractions across the central 45 degrees of the visual field, which may reflect the actions of vision-dependent growth-control mechanisms operating over a wide area of the posterior globe. PMID:18487366
NASA Astrophysics Data System (ADS)
Perčec Tadić, M.
2010-09-01
The increased availability of satellite products of high spatial and temporal resolution, together with developing user support, encourages climatologists to use these data in research and practice. Since climatologists are mainly interested in monthly or even annual averages or aggregates, this high temporal resolution, and hence large amount of data, can be challenging for less experienced users. Even if an attempt is made to aggregate, e.g., the 15' (temporal) MODIS LST (land surface temperature) to a daily temperature average, the development of the algorithm is not straightforward and should be done by experts. The recent development of many temporally aggregated products on daily, multi-day, or even monthly scales substantially decreases the amount of satellite data that needs to be processed and raises the possibility of developing various climatological applications. Here an attempt is presented at incorporating the MODIS satellite MOD11C3 product (Wan, 2009), the monthly CMG (climate modeling grid, 0.05-degree latitude/longitude) LST, as a predictor in the geostatistical interpolation of climatological data in Croatia. While in previous applications, e.g. in the Climate Atlas of Croatia (Zaninović et al. 2008), static predictors such as the digital elevation model, distance to the sea, latitude, and longitude were used for the interpolation of monthly, seasonal, and annual 30-year averages (reference climatology), here the monthly MOD11C3 is used to support the interpolation of individual monthly averages in the regression kriging framework. We believe that this can be a valuable showcase of incorporating remotely sensed data for climatological applications, especially in areas that are under-sampled by conventional observations. Zaninović K, Gajić-Čapka M, Perčec Tadić M et al (2008) Klimatski atlas Hrvatske / Climate atlas of Croatia 1961-1990, 1971-2000. Meteorological and Hydrological Service of Croatia, Zagreb, pp 200.
Wan Z, 2009: Collection-5 MODIS Land Surface Temperature Products Users' Guide, ICESS, University of California, Santa Barbara, pp 30.
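Regression kriging of a monthly temperature field with MODIS LST as a predictor follows a two-step recipe: regress the station observations on the predictors, then krige the residuals and add the two surfaces. A minimal sketch, assuming an exponential residual covariance with hand-picked (not fitted) variogram parameters; all names and values are illustrative:

```python
import numpy as np

def regression_kriging(X_obs, y_obs, coords_obs, X_new, coords_new,
                       sill=1.0, rng_km=50.0, nugget=0.1):
    """Regression kriging sketch: OLS trend on predictors (e.g. elevation,
    MODIS LST), then simple kriging of the residuals with an exponential
    covariance model C(h) = sill * exp(-h / rng_km)."""
    # 1. Trend: ordinary least squares on the predictor matrix (with intercept).
    A = np.column_stack([np.ones(len(X_obs)), X_obs])
    beta, *_ = np.linalg.lstsq(A, y_obs, rcond=None)
    resid = y_obs - A @ beta

    # 2. Simple kriging of the residuals.
    def cov(a, b):
        h = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sill * np.exp(-h / rng_km)

    C = cov(coords_obs, coords_obs) + nugget * np.eye(len(coords_obs))
    c0 = cov(coords_obs, coords_new)
    w = np.linalg.solve(C, c0)                # kriging weights
    resid_pred = w.T @ resid

    # 3. Prediction = trend at new locations + kriged residual.
    A_new = np.column_stack([np.ones(len(X_new)), X_new])
    return A_new @ beta + resid_pred
```

In practice the variogram parameters would be fitted to the residuals rather than assumed, and the predictor matrix would hold the MOD11C3 LST values sampled at station and grid locations.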
NASA Astrophysics Data System (ADS)
China, S.; Mazzoleni, C.; Dubey, M. K.; Chakrabarty, R. K.; Moosmuller, H.; Onasch, T. B.; Herndon, S. C.
2010-12-01
We present an analysis of the morphological characteristics of atmospheric aerosol collected during the MILAGRO (Megacity Initiative: Local and Global Research Observations) field campaign that took place in Mexico City in March 2006. The sampler was installed on the Aerodyne mobile laboratory. The aerosol samples were collected on Nuclepore clear polycarbonate filters mounted in Costar pop-top membrane holders. More than one hundred filters were collected at different ground sites with different atmospheric and geographical characteristics (urban, suburban, mountain-top, industrial, etc.) over a one-month period. Selected subsets of these filters were analyzed for aerosol morphology using a scanning electron microscope and image-analysis techniques. In this study we investigate spatial and temporal variations of aerosol shape descriptors, morphological parameters, and fractal dimension. We also compare the morphological results with other aerosol measurements, such as aerosol optical properties (scattering and absorption) and size distribution data. Atmospheric aerosols have different morphological characteristics depending on many parameters, such as emission sources, atmospheric formation pathways, aging processes, and aerosol mixing state. Aerosol morphology influences aerosol chemical and mechanical interactions with the environment, physical properties, and radiative effects. In this study, ambient aerosol particles have been classified into different shape groups: spherical, irregularly shaped, and fractal-like aggregates. Different morphological parameters, such as aspect ratio, roundness, and Feret diameter, have been estimated for irregularly shaped and spherical particles and for different kinds of soot particles, including fresh soot and collapsed and coated soot. Fractal geometry and image processing have been used to obtain the morphological characteristics of different soot particles.
The number of monomers constituting each aggregate and their diameters were measured and used to estimate an ensemble three-dimensional (3-d) fractal dimension. One-dimensional (1-d) and two-dimensional (2-d) fractal geometries have been measured using a power-law scaling relationship between 1-d and 2-d properties of projected images. Temporal variations in fractal dimension of soot-like aggregates have been observed at the mountaintop site and spatial variation of fractal dimension and other morphological descriptors of different shaped particles have been investigated for the different ground sites.
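A 3-d fractal dimension is typically recovered from the power-law relation N = k_f (2R_g/d_p)^D_f between monomer count N and normalized aggregate size, fitted in log-log space. A sketch with hypothetical aggregate data (all values invented for illustration):

```python
import numpy as np

# Hypothetical soot aggregates: monomer count N and radius of gyration Rg (nm).
d_p = 30.0                                       # assumed monomer diameter (nm)
Rg = np.array([60.0, 90.0, 140.0, 220.0, 350.0])
true_Df, true_kf = 1.8, 1.3                      # values used to fabricate N
N = true_kf * (2.0 * Rg / d_p) ** true_Df

# Fit D_f (slope) and k_f (exp of intercept) by least squares in log-log space.
slope, intercept = np.polyfit(np.log(2.0 * Rg / d_p), np.log(N), 1)
D_f, k_f = slope, np.exp(intercept)
```

With real image-analysis data, N and Rg carry measurement noise, so the fitted D_f is an ensemble estimate rather than a per-aggregate property.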
Spatial Modeling of Iron Transformations Within Artificial Soil Aggregates
NASA Astrophysics Data System (ADS)
Kausch, M.; Meile, C.; Pallud, C.
2008-12-01
Structured soils exhibit significant variations in transport characteristics at the aggregate scale. Preferential flow occurs through macropores while predominantly diffusive exchange takes place in intra-aggregate micropores. Such environments characterized by mass transfer limitations are conducive to the formation of small-scale chemical gradients and promote strong spatial variation in processes controlling the fate of redox-sensitive elements such as Fe. In this study, we present a reactive transport model used to spatially resolve iron bioreductive processes occurring within a spherical aggregate at the interface between advective and diffusive domains. The model is derived from current conceptual models of iron(hydr)oxide (HFO) transformations and constrained by literature and experimental data. Data were obtained from flow-through experiments on artificial soil aggregates inoculated with Shewanella putrefaciens strain CN32, and include the temporal evolution of the bulk solution composition, as well as spatial information on the final solid phase distribution within aggregates. With all iron initially in the form of ferrihydrite, spatially heterogeneous formation of goethite/lepidocrocite, magnetite and siderite was observed during the course of the experiments. These transformations were reproduced by the model, which ascribes a central role to divalent iron as a driver of HFO transformations and master variable in the rate laws of the considered reaction network. The predicted dissolved iron breakthrough curves also match the experimental ones closely. Thus, the computed chemical concentration fields help identify factors governing the observed trends in the solid phase distribution patterns inside the aggregate. 
Building on a mechanistic description of transformation reactions, fluid flow and solute transport, the model was able to describe the observations and hence illustrates the importance of small-scale gradients and dynamics of bioreductive processes for assessing bulk iron cycling. As HFOs are ubiquitous in soils, such process-level understanding of aggregate-scale iron dynamics has broad implications for the prediction of the subsurface fate of nutrients and contaminants that interact strongly with HFO surfaces.
Optimal Runge-Kutta Schemes for High-order Spatial and Temporal Discretizations
2015-06-01
using larger time steps versus lower-order time integration with smaller time steps. In the present work, an attempt is made to generalize these... generality and because of interest in multi-speed and high-Reynolds-number, wall-bounded flow regimes, a dual-time framework is adopted in the present work...errors of general combinations of high-order spatial and temporal discretizations. Different Runge-Kutta time integrators are applied to central
Quantifying drivers of wild pig movement across multiple spatial and temporal scales
Kay, Shannon L.; Fischer, Justin W.; Monaghan, Andrew J.; Beasley, James C; Boughton, Raoul; Campbell, Tyler A; Cooper, Susan M; Ditchkoff, Stephen S.; Hartley, Stephen B.; Kilgo, John C; Wisely, Samantha M; Wyckoff, A Christy; Vercauteren, Kurt C.; Pipen, Kim M
2017-01-01
The analytical framework we present can be used to assess movement patterns arising from multiple data sources for a range of species while accounting for spatio-temporal correlations. Our analyses show the magnitude by which reaction norms can change based on the temporal scale of response data, illustrating the importance of appropriately defining temporal scales of both the movement response and covariates depending on the intended implications of research (e.g., predicting effects of movement due to climate change versus planning local-scale management). We argue that consideration of multiple spatial scales within the same framework (rather than comparing across separate studies post-hoc) gives a more accurate quantification of cross-scale spatial effects by appropriately accounting for error correlation.
NASA Astrophysics Data System (ADS)
Yin, Ping; Mu, Lan; Madden, Marguerite; Vena, John E.
2014-10-01
Lung cancer is the second most commonly diagnosed cancer in both men and women in Georgia, USA. However, the spatio-temporal patterns of lung cancer risk in Georgia have not been fully studied. Hierarchical Bayesian models are used here to explore the spatio-temporal patterns of lung cancer incidence risk by race and gender in Georgia for the period of 2000-2007. With the census tract level as the spatial scale and the 2-year period aggregation as the temporal scale, we compare a total of seven Bayesian spatio-temporal models including two under a separate modeling framework and five under a joint modeling framework. One joint model outperforms others based on the deviance information criterion. Results show that the northwest region of Georgia has consistently high lung cancer incidence risk for all population groups during the study period. In addition, there are inverse relationships between the socioeconomic status and the lung cancer incidence risk among all Georgian population groups, and the relationships in males are stronger than those in females. By mapping more reliable variations in lung cancer incidence risk at a relatively fine spatio-temporal scale for different Georgian population groups, our study aims to better support healthcare performance assessment, etiological hypothesis generation, and health policy making.
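Model selection among the seven candidate spatio-temporal models used the deviance information criterion. In its standard form, DIC combines the posterior mean deviance with an effective-parameter penalty and can be computed from posterior samples as sketched here (inputs are illustrative):

```python
import numpy as np

def dic(deviance_samples, deviance_at_posterior_mean):
    """Deviance information criterion: DIC = Dbar + pD, with
    pD = Dbar - D(theta_bar). Lower DIC indicates a preferred model."""
    d_bar = np.mean(deviance_samples)                 # posterior mean deviance
    p_d = d_bar - deviance_at_posterior_mean          # effective number of parameters
    return d_bar + p_d
```

In an MCMC run, `deviance_samples` would be the deviance evaluated at each posterior draw and `deviance_at_posterior_mean` the deviance at the posterior means of the parameters.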
Selecting a separable parametric spatiotemporal covariance structure for longitudinal imaging data.
George, Brandon; Aban, Inmaculada
2015-01-15
Longitudinal imaging studies allow great insight into how the structure and function of a subject's internal anatomy change over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures and the spatial from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure, as well as the effects on type I and type II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to inflate the type I error rate or produce an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure selection procedure can be performed in practice, as well as how covariance structure choice can change inferences about fixed effects. Copyright © 2014 John Wiley & Sons, Ltd.
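A separable parametric spatiotemporal covariance is the Kronecker product of a temporal and a spatial correlation matrix. A small sketch with an exponential spatial model and an AR(1) temporal model (parameter values are illustrative, not from the study):

```python
import numpy as np

def exp_corr(dists, range_):
    """Exponential spatial correlation: rho(h) = exp(-h / range)."""
    return np.exp(-dists / range_)

def ar1_corr(n_times, rho):
    """AR(1) temporal correlation: rho(|t_i - t_j|) = rho ** lag."""
    lags = np.abs(np.subtract.outer(np.arange(n_times), np.arange(n_times)))
    return rho ** lags

site_d = np.abs(np.subtract.outer(np.arange(3.0), np.arange(3.0)))  # 3 sites on a line
S_space = exp_corr(site_d, range_=2.0)
S_time = ar1_corr(4, rho=0.5)              # 4 repeated imaging time points
Sigma = np.kron(S_time, S_space)           # separable 12x12 covariance
```

Separability keeps the parameter count small (one set per dimension) and lets the full matrix be inverted via the two small factors, which is what makes these structures practical for imaging data.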
Sparse Representation with Spatio-Temporal Online Dictionary Learning for Efficient Video Coding.
Dai, Wenrui; Shen, Yangmei; Tang, Xin; Zou, Junni; Xiong, Hongkai; Chen, Chang Wen
2016-07-27
Classical dictionary learning methods for video coding suffer from high computational complexity and impaired coding efficiency because they disregard the underlying distribution of the data. This paper proposes a spatio-temporal online dictionary learning (STOL) algorithm to speed up the convergence rate of dictionary learning with a guarantee on approximation error. The proposed algorithm incorporates stochastic gradient descent to form a dictionary of pairs of 3-D low-frequency and high-frequency spatio-temporal volumes. In each iteration of the learning process, it randomly selects one sample volume and updates the atoms of the dictionary by minimizing the expected cost, rather than optimizing the empirical cost over the complete training data as batch learning methods, e.g. K-SVD, do. Since the selected volumes are assumed to be i.i.d. samples from the underlying distribution, decomposition coefficients obtained from the trained dictionary are desirable for sparse representation. Theoretically, it is proved that the proposed STOL achieves better approximation for sparse representation than K-SVD while maintaining both structured sparsity and hierarchical sparsity. It is shown to outperform batch gradient descent methods (K-SVD) in convergence speed and computational complexity, and its upper bound on prediction error is asymptotically equal to the training error. With lower computational complexity, extensive experiments validate that the STOL-based coding scheme achieves performance improvements over H.264/AVC, HEVC, and existing super-resolution-based methods in rate-distortion performance and visual quality.
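The core of an online dictionary learning iteration — sparse-code one sample, take a stochastic gradient step on the reconstruction cost, renormalize the atoms — can be sketched as follows. This is a generic illustration, not the STOL algorithm's exact update or its 3-D volume pairs:

```python
import numpy as np

def sparse_code(D, x, k=2):
    """Crude sparse coding: pick the k atoms most correlated with x and
    least-squares fit their coefficients (stand-in for a proper solver)."""
    idx = np.argsort(-np.abs(D.T @ x))[:k]
    a = np.zeros(D.shape[1])
    coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
    a[idx] = coef
    return a

def online_update(D, x, lr=0.1):
    """One stochastic-gradient step on 0.5*||x - D a||^2 for a single sample x,
    followed by renormalizing the atoms to unit length."""
    a = sparse_code(D, x)
    r = D @ a - x                         # reconstruction residual
    D = D - lr * np.outer(r, a)           # gradient of the cost w.r.t. D is r a^T
    return D / np.linalg.norm(D, axis=0)  # keep atoms unit-norm
```

Batch methods such as K-SVD instead sweep all training samples per update, which is the complexity difference the abstract is exploiting.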
Illusory Reversal of Causality between Touch and Vision has No Effect on Prism Adaptation Rate.
Tanaka, Hirokazu; Homma, Kazuhiro; Imamizu, Hiroshi
2012-01-01
Learning, according to the Oxford Dictionary, is "to gain knowledge or skill by studying, from experience, from being taught, etc." In order to learn from experience, the central nervous system has to decide which action leads to which consequence, and temporal perception plays a critical role in determining the causality between actions and consequences. In motor adaptation, causality between action and consequence is implicitly assumed, so that a subject adapts to a new environment based on the consequences caused by his or her actions. Adaptation to visual displacement induced by prisms is a prime example; the visual error signal associated with the motor output contributes to the recovery of accurate reaching, and a delayed feedback of visual error can decrease the adaptation rate. The subjective feeling of the temporal order of action and consequence, however, can be modified or even reversed when the sense of simultaneity is manipulated with an artificially delayed feedback. Our previous study (Tanaka et al., 2011; Exp. Brain Res.) demonstrated that the rate of prism adaptation was unaffected when the subjective delay of visual feedback was shortened. This study asked whether subjects could adapt to prism displacement, and whether the rate of prism adaptation was affected, when the subjective temporal order was illusorily reversed. Adapting to an additional 100 ms delay and its sudden removal caused a positive shift of the point of simultaneity in a temporal order judgment experiment, indicating an illusory reversal of action and consequence. We found that, even in this case, the subjects were able to adapt to prism displacement with a learning rate that was statistically indistinguishable from that without temporal adaptation. This result provides further evidence for the dissociation between conscious temporal perception and motor adaptation.
NASA Astrophysics Data System (ADS)
Schurgers, G.; Arneth, A.; Hickler, T.
2011-11-01
Regional or global modeling studies of dynamic vegetation often represent vegetation by large functional units (plant functional types (PFTs)). For simulation of biogenic volatile organic compounds (BVOC) in these models, emission capacities, which give the emission under standardized conditions, are provided as an average value for a PFT. These emission capacities thus hide the known heterogeneity in emission characteristics that are not straightforwardly related to functional characteristics of plants. Here we study the effects of the aggregation of species-level information on emission characteristics at PFT level. The roles of temporal and spatial variability are assessed for Europe by comparing simulations that represent vegetation by dominant tree species on the one hand and by plant functional types on the other. We compare a number of time slices between the Last Glacial Maximum (21,000 years ago) and the present day to quantify the effects of dynamically changing vegetation on BVOC emissions. Spatial heterogeneity of emission factors is studied with present-day simulations. We show that isoprene and monoterpene emissions are of similar magnitude in Europe when the simulation represents dominant European tree species, which indicates that simulations applying typical global-scale emission capacities for PFTs tend to overestimate isoprene and underestimate monoterpene emissions. Moreover, both spatial and temporal variability affect emission capacities considerably, and by aggregating these to PFT level averages, one loses the information on local heterogeneity. Given the reactive nature of these compounds, accounting for spatial and temporal heterogeneity can be important for studies of their fate in the atmosphere.
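The aggregation step the study examines amounts to collapsing species-level emission capacities into one biomass-weighted PFT value, discarding the species-level spread. A toy illustration with invented numbers (the high Quercus and low Fagus isoprene capacities are only qualitatively realistic):

```python
# Species-level isoprene emission capacities (ug C g^-1 h^-1) and foliage
# shares -- invented, qualitatively realistic values only.
species_capacity = {"Quercus robur": 60.0, "Fagus sylvatica": 0.1, "Picea abies": 1.0}
foliage_share    = {"Quercus robur": 0.2, "Fagus sylvatica": 0.5, "Picea abies": 0.3}

# Biomass-weighted aggregation to a single PFT emission capacity.
pft_capacity = sum(species_capacity[s] * foliage_share[s] for s in species_capacity)

# Any stand dominated by the low-emitting Fagus is then overestimated by the
# PFT average, and any Quercus-dominated stand underestimated.
```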
NASA Astrophysics Data System (ADS)
Cameron, K. C.; Sirovic, A.; Jaffe, J. S.; Semmens, B.; Pattengill-Semmens, C.; Gibb, J.
2016-02-01
Fish spawning aggregation (FSA) sites are extremely vulnerable to over-exploitation. Accurate understanding of the spatial and temporal use of such sites is necessary for effective species management. The size of FSAs can be on the order of kilometers and peak spawning often occurs at night, posing challenges to visual observation. Passive acoustics are an alternative method for dealing with these challenges. An array of passive acoustic recorders and GoPro cameras were deployed during Nassau grouper (Epinephelus striatus) spawning from February 7th to 12th, 2015 at a multispecies spawning aggregation site in Little Cayman, Cayman Islands. In addition to Nassau grouper, at least 10 other species are known to spawn at this location including tiger grouper (Mycteroperca tigris), red hind (Epinephelus guttatus), black grouper (Mycteroperca bonaci), and yellowfin grouper (Mycteroperca venenosa). During 5 days of continuous recordings, over 21,000 fish calls were detected. These calls were classified into 15 common types. Species identification and behavioral context of unknown common call types were determined by coupling video recordings collected during this time with call localizations. There are distinct temporal patterns in call production of different species. For example, red hind and yellowfin grouper call predominantly at night, with yellowfin call rates increasing after midnight, and black grouper call primarily during dusk and dawn. In addition, localization methods were used to reveal how the FSA area was divided among species. These findings facilitate a better understanding of the behavior of these important reef fish species, allowing policymakers to more effectively manage and protect them.
Tranmer, Mark; Marcum, Christopher Steven; Morton, F Blake; Croft, Darren P; de Kort, Selvino R
2015-03-01
Social dynamics are of fundamental importance in animal societies. Studies on nonhuman animal social systems often aggregate social interaction event data into a single network within a particular time frame. Analysis of the resulting network can provide useful insight into the overall extent of interaction. However, through aggregation, information is lost about the order in which interactions occurred, and hence the sequences of actions over time. Many research hypotheses relate directly to the sequence of actions, such as the recency or rate of action, rather than to their overall volume or presence. Here, we demonstrate how the temporal structure of social interaction sequences can be quantified from disaggregated event data using the relational event model (REM). We first outline the REM, explaining why it is different from other models for longitudinal data, and how it can be used to model sequences of events unfolding in a network. We then discuss a case study on the European jackdaw, Corvus monedula, in which temporal patterns of persistence and reciprocity of action are of interest, and present and discuss the results of a REM analysis of these data. One of the strengths of a REM analysis is its ability to take into account different ways in which data are collected. Having explained how to take into account the way in which the data were collected for the jackdaw study, we briefly discuss the application of the model to other studies. We provide details of how the models may be fitted in the R statistical software environment and outline some recent extensions to the REM framework.
Comparison of Urban Human Movements Inferring from Multi-Source Spatial-Temporal Data
NASA Astrophysics Data System (ADS)
Cao, Rui; Tu, Wei; Cao, Jinzhou; Li, Qingquan
2016-06-01
The quantification of human movements is difficult because of the sparsity of traditional data and the labour-intensive nature of the data collection process. Recently, abundant spatial-temporal data have provided an opportunity to observe human movement. This research investigates the relationship between city-wide human movements inferred from two types of spatial-temporal data at the traffic analysis zone (TAZ) level. The first type of human movement is inferred from long-term smart card transaction data recording boarding actions. The second type of human movement is extracted from citywide time-sequenced mobile phone data at 30-minute intervals. Travel volume, travel distance and travel time are used to measure aggregated human movements in the city. To further examine the relationship between the two types of inferred movements, a linear correlation analysis is conducted on the hourly travel volume. The obtained results show that human movements inferred from smart card data and mobile phone data have a correlation of 0.635. However, there are still non-ignorable differences in some particular areas. This research not only reveals citywide spatial-temporal human dynamics but also aids understanding of the reliability of inferring human movements from big spatial-temporal data.
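The hourly correlation analysis reduces to a Pearson coefficient between the two inferred travel-volume series. A minimal sketch, using made-up hourly volumes rather than the study's data:

```python
import numpy as np

# Hypothetical hourly travel volumes inferred from the two data sources
# (illustrative placeholder values only; not the study's data).
smart_card = np.array([120, 340, 980, 1500, 1100, 760, 890, 1300], dtype=float)
mobile_phone = np.array([150, 420, 900, 1450, 1200, 700, 950, 1250], dtype=float)

# Pearson linear correlation between the two inferred movement series
r = np.corrcoef(smart_card, mobile_phone)[0, 1]
print(round(r, 3))
```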
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. C.; Alden, C.; White, J. W. C.
2014-10-01
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of C in the atmosphere, ocean, and land; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate error and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ error of the atmospheric growth rate has decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s, leading to a ~20% reduction in the overall uncertainty of net global C uptake by the biosphere. While fossil fuel emissions have increased by a factor of 4 over the last 5 decades, 2σ errors in fossil fuel emissions due to national reporting errors and differences in energy reporting practices have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s. At the same time land use emissions have declined slightly over the last 5 decades, but their relative errors remain high. Notably, errors associated with fossil fuel emissions have come to dominate uncertainty in the global C budget and are now comparable to the total emissions from land use; efforts to reduce errors in fossil fuel emissions are therefore necessary. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that C uptake has increased and 97% confident that C uptake by the terrestrial biosphere has increased over the last 5 decades. Although the persistence of future C sinks remains unknown and some ecosystem services may be compromised by this continued C uptake (e.g. 
ocean acidification), it is clear that arguably the greatest ecosystem service currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere.
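The effect of temporally correlated random error on the uncertainty of a multi-year mean can be illustrated with a small simulation. The AR(1) process and its parameters below are assumptions for illustration, not the paper's actual error model:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, rho, n_years, n_sim = 1.0, 0.95, 10, 5000

def decade_mean_errors(rho):
    """Mean over a decade of AR(1) errors with stationary std `sigma`
    (an illustrative stand-in for correlated emission-reporting errors)."""
    e = np.zeros((n_sim, n_years))
    e[:, 0] = rng.normal(0.0, sigma, n_sim)
    for t in range(1, n_years):
        e[:, t] = rho * e[:, t - 1] + rng.normal(
            0.0, sigma * np.sqrt(1.0 - rho**2), n_sim)
    return e.mean(axis=1)

sd_corr = decade_mean_errors(rho).std()
sd_indep = decade_mean_errors(0.0).std()
# Correlated errors average out far more slowly than independent ones,
# so decadal-mean uncertainty stays large.
print(sd_corr > sd_indep)
```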
Neuroanatomical dissociation for taxonomic and thematic knowledge in the human brain
Schwartz, Myrna F.; Kimberg, Daniel Y.; Walker, Grant M.; Brecher, Adelyn; Faseyitan, Olufunsho K.; Dell, Gary S.; Mirman, Daniel; Coslett, H. Branch
2011-01-01
It is thought that semantic memory represents taxonomic information differently from thematic information. This study investigated the neural basis for the taxonomic-thematic distinction in a unique way. We gathered picture-naming errors from 86 individuals with poststroke language impairment (aphasia). Error rates were determined separately for taxonomic errors (“pear” in response to apple) and thematic errors (“worm” in response to apple), and their shared variance was regressed out of each measure. With the segmented lesions normalized to a common template, we carried out voxel-based lesion-symptom mapping on each error type separately. We found that taxonomic errors localized to the left anterior temporal lobe and thematic errors localized to the left temporoparietal junction. This is an indication that the contribution of these regions to semantic memory cleaves along taxonomic-thematic lines. Our findings show that a distinction long recognized in the psychological sciences is grounded in the structure and function of the human brain. PMID:21540329
NASA Astrophysics Data System (ADS)
Mulsow, C.
2012-07-01
The paper describes the determination of the percentage area of bitumen on partly covered aggregate. This task is a typical issue in material testing in road construction. The asphalt components bitumen and aggregate are subjected to defined mechanical stress in the presence of water in order to test the affinity properties of the components. The degree to which the bitumen separates from the aggregate surface serves as an indicator for the quality of the affinity. Until now, examiners have judged the degree of coverage of samples by visual rating. Several research projects have attempted to replace this error-prone subjective assessment with automatic procedures. These procedures analyse the different chromaticities of aggregate and bitumen in RGB images. However, these approaches as a whole are not reliable enough because of the rather strict requirements placed on the environmental conditions at image capture (illumination, exclusion of extraneous light) and on the lab assistant (manual definition of training areas, management of camera and illumination parameters). Moreover, the analysis is not suitable for all types of rock because of the necessary difference in colour between bitumen and aggregate (e.g. dark rock samples). Contrary to previous approaches, the new multi-directional reflectance measurements use the different surface characteristics of bitumen and aggregate, instead of the chromaticities, as separation criteria. These differences are made visible by directional lighting with a laser. The diffuse reflection from the aggregate surface and the directional reflection from the optically smoother bitumen produce clearly distinguishable brightnesses in the image. Thus the colour of the material is of no significance. The approach was implemented in a measurement procedure and assessed. The paper presents the method itself, approaches for the elimination of reflections and first results. 
Moreover, the measuring principle is compared with existing procedures and benefits and drawbacks are outlined.
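The separation step can be sketched as a simple brightness threshold on a synthetic grey-value image in which bare aggregate reflects diffusely (bright) and the optically smoother bitumen reflects away from the camera (dark); the image, layout, and threshold value are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic grey-value image of a sample under directional laser lighting:
# dark pixels ~ bitumen-covered, bright pixels ~ bare aggregate (assumed setup).
image = rng.normal(50.0, 5.0, (200, 200))            # dark: bitumen-covered
image[:, :120] = rng.normal(180.0, 10.0, (200, 120))  # bright: bare aggregate

threshold = 115.0  # assumed grey value separating the two reflection regimes
bitumen_fraction = np.mean(image < threshold)
print(f"bitumen coverage: {bitumen_fraction:.0%}")
```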
Role of color memory in successive color constancy.
Ling, Yazhu; Hurlbert, Anya
2008-06-01
We investigate color constancy for real 2D paper samples using a successive matching paradigm in which the observer memorizes a reference surface color under neutral illumination and after a temporal interval selects a matching test surface under the same or different illumination. We find significant effects of the illumination, reference surface, and their interaction on the matching error. We characterize the matching error in the absence of illumination change as the "pure color memory shift" and introduce a new index for successive color constancy that compares this shift against the matching error under changing illumination. The index also incorporates the vector direction of the matching errors in chromaticity space, unlike the traditional constancy index. With this index, we find that color constancy is nearly perfect.
The cerebellum for jocks and nerds alike.
Popa, Laurentiu S; Hewitt, Angela L; Ebner, Timothy J
2014-01-01
Historically the cerebellum has been implicated in the control of movement. However, the cerebellum's role in non-motor functions, including cognitive and emotional processes, has also received increasing attention. Starting from the premise that the uniform architecture of the cerebellum underlies a common mode of information processing, this review examines recent electrophysiological findings on the motor signals encoded in the cerebellar cortex and then relates these signals to observations in the non-motor domain. Simple spike firing of individual Purkinje cells encodes performance errors, both predicting upcoming errors as well as providing feedback about those errors. Further, this dual temporal encoding of prediction and feedback involves a change in the sign of the simple spike modulation. Therefore, Purkinje cell simple spike firing both predicts and responds to feedback about a specific parameter, consistent with computing sensory prediction errors in which the predictions about the consequences of a motor command are compared with the feedback resulting from the motor command execution. These new findings are in contrast with the historical view that complex spikes encode errors. Evaluation of the kinematic coding in the simple spike discharge shows the same dual temporal encoding, suggesting this is a common mode of signal processing in the cerebellar cortex. Decoding analyses show the considerable accuracy of the predictions provided by Purkinje cells across a range of times. Further, individual Purkinje cells encode linearly and independently a multitude of signals, both kinematic and performance errors. Therefore, the cerebellar cortex's capacity to make associations across different sensory, motor and non-motor signals is large. 
The results from studying how Purkinje cells encode movement signals suggest that the cerebellar cortex circuitry can support associative learning, sequencing, working memory, and forward internal models in non-motor domains.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-05
... Management and Budget (``OMB'') to project aggregate offering price for purposes of the fiscal year 2010... methodology it developed in consultation with the CBO and OMB to project dollar volume for purposes of prior... AAMOP is given by exp(FLAAMOP_t + σ_n²/2), where σ_n denotes the standard error of the n...
Jones, Benjamin A; Stanton, Timothy K; Colosi, John A; Gauss, Roger C; Fialkowski, Joseph M; Michael Jech, J
2017-06-01
For horizontal-looking sonar systems operating at mid-frequencies (1-10 kHz), scattering by fish with resonant gas-filled swimbladders can dominate seafloor and surface reverberation at long ranges (i.e., distances much greater than the water depth). This source of scattering, which can be difficult to distinguish from other sources of scattering in the water column or at the boundaries, can add spatio-temporal variability to an already complex acoustic record. Sparsely distributed, spatially compact fish aggregations were measured in the Gulf of Maine using a long-range broadband sonar with continuous spectral coverage from 1.5 to 5 kHz. Observed echoes that are at least 15 dB above background levels in the horizontal-looking sonar data are classified spectrally, by their resonance features, as originating from swimbladder-bearing fish. Contemporaneous multi-frequency echosounder measurements (18, 38, and 120 kHz) and net samples are used in conjunction with physics-based acoustic models to validate this approach. Furthermore, the fish aggregations are statistically characterized in the long-range data by highly non-Rayleigh distributions of the echo magnitudes. These distributions are accurately predicted by a computationally efficient, physics-based model. The model accounts for beam-pattern and waveguide effects as well as the scattering response of aggregations of fish.
Collective Intelligence: Aggregation of Information from Neighbors in a Guessing Game.
Pérez, Toni; Zamora, Jordi; Eguíluz, Víctor M
2016-01-01
Complex systems show the capacity to aggregate information and to display coordinated activity. In the case of social systems the interaction of different individuals leads to the emergence of norms, trends in political positions, opinions, cultural traits, and even scientific progress. Examples of collective behavior can be observed in projects such as Wikipedia and Linux, where individuals aggregate their knowledge for the benefit of the community, and in citizen science, where the potential of collectives to solve complex problems is exploited. Here, we conducted an online experiment to investigate the performance of a collective when solving a guessing problem in which each actor is endowed with partial information and placed at the nodes of an interaction network. We measure the performance of the collective in terms of the temporal evolution of the accuracy, finding no statistical difference in the performance for two classes of networks, regular lattices and random networks. We also determine that a Bayesian description captures the behavior pattern the individuals follow in aggregating information from neighbors to make decisions. In comparison with other simple decision models, the strategy followed by the players reveals a suboptimal performance of the collective. Our contribution provides the basis for the micro-macro connection between individual based descriptions and collective phenomena.
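A Bayesian description of neighbor-information aggregation can be sketched as sequential posterior updating over candidate answers; the signal accuracy `p` and the signal sequence below are illustrative assumptions, not parameters from the experiment:

```python
import numpy as np

# Two candidate answers; each received signal is correct with probability p
# (an assumed accuracy for illustration).
p = 0.7
prior = np.array([0.5, 0.5])

def bayes_update(belief, signal):
    # Likelihood of the observed signal under each candidate answer,
    # then renormalize to obtain the posterior.
    like = np.where(np.arange(2) == signal, p, 1.0 - p)
    post = belief * like
    return post / post.sum()

belief = prior
for s in [0, 0, 1, 0]:   # own signal plus three neighbors' guesses
    belief = bayes_update(belief, s)
print(belief.argmax())   # aggregated best guess
```

Since three of the four signals favor answer 0, the posterior concentrates on 0.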
Thutupalli, Shashi; Sun, Mingzhai; Bunyak, Filiz; Palaniappan, Kannappan; Shaevitz, Joshua W.
2015-01-01
The formation of a collectively moving group benefits individuals within a population in a variety of ways. The surface-dwelling bacterium Myxococcus xanthus forms dynamic collective groups both to feed on prey and to aggregate during times of starvation. The latter behaviour, termed fruiting-body formation, involves a complex, coordinated series of density changes that ultimately lead to three-dimensional aggregates comprising hundreds of thousands of cells and spores. How a loose, two-dimensional sheet of motile cells produces a fixed aggregate has remained a mystery as current models of aggregation are either inconsistent with experimental data or ultimately predict unstable structures that do not remain fixed in space. Here, we use high-resolution microscopy and computer vision software to spatio-temporally track the motion of thousands of individuals during the initial stages of fruiting-body formation. We find that cells undergo a phase transition from exploratory flocking, in which unstable cell groups move rapidly and coherently over long distances, to a reversal-mediated localization into one-dimensional growing streams that are inherently stable in space. These observations identify a new phase of active collective behaviour and answer a long-standing open question in Myxococcus development by describing how motile cell groups can remain statistically fixed in a spatial location. PMID:26246416
Wu, S.-S.; Wang, L.; Qiu, X.
2008-01-01
This article presents a deterministic model for sub-block-level population estimation based on the total building volumes derived from geographic information system (GIS) building data and three census block-level housing statistics. To assess the model, we generated artificial blocks by aggregating census block areas and calculating the respective housing statistics. We then applied the model to estimate populations for sub-artificial-block areas and assessed the estimates with census populations of the areas. Our analyses indicate that the average percent error of population estimation for sub-artificial-block areas is comparable to those for sub-census-block areas of the same size relative to associated blocks. The smaller the sub-block-level areas, the higher the population estimation errors. For example, the average percent error for residential areas is approximately 0.11 percent for 100 percent block areas and 35 percent for 5 percent block areas.
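A minimal sketch of volume-proportional population disaggregation, a simplification of the article's model (which also incorporates three block-level housing statistics); all figures are hypothetical:

```python
# Allocate a block's census population to sub-block areas in proportion
# to total building volume derived from GIS building data.
block_population = 1200
sub_area_volumes = [50_000.0, 30_000.0, 20_000.0]  # m^3, hypothetical

total = sum(sub_area_volumes)
estimates = [block_population * v / total for v in sub_area_volumes]
print(estimates)  # [600.0, 360.0, 240.0]
```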
Tourism forecasting using modified empirical mode decomposition and group method of data handling
NASA Astrophysics Data System (ADS)
Yahya, N. A.; Samsudin, R.; Shabri, A.
2017-09-01
In this study, a hybrid model using modified Empirical Mode Decomposition (EMD) and the Group Method of Data Handling (GMDH) is proposed for tourism forecasting. This approach reconstructs the intrinsic mode functions (IMFs) produced by EMD using a trial-and-error method. The new component and the remaining IMFs are then predicted separately using the GMDH model. Finally, the forecasted results for each component are aggregated to construct an ensemble forecast. The data used in this experiment are monthly time series of tourist arrivals from China, Thailand and India to Malaysia from 2000 to 2016. The performance of the model is evaluated using the Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE), with the conventional GMDH model and the EMD-GMDH model used as benchmark models. Empirical results show that the proposed model produces better forecasts than the benchmark models.
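The two evaluation metrics can be computed directly; the arrival figures below are placeholders, not the study's data:

```python
import numpy as np

def rmse(actual, forecast):
    """Root Mean Square Error."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.sqrt(np.mean((actual - forecast) ** 2))

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Hypothetical monthly tourist-arrival figures and forecasts
y_true = [100.0, 200.0, 300.0]
y_pred = [110.0, 190.0, 330.0]
print(rmse(y_true, y_pred), mape(y_true, y_pred))
```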
Krefeld-Schwalb, Antonia; Witte, Erich H.; Zenker, Frank
2018-01-01
In psychology as elsewhere, the main statistical inference strategy to establish empirical effects is null-hypothesis significance testing (NHST). The recent failure to replicate allegedly well-established NHST-results, however, implies that such results lack sufficient statistical power, and thus feature unacceptably high error-rates. Using data-simulation to estimate the error-rates of NHST-results, we advocate the research program strategy (RPS) as a superior methodology. RPS integrates Frequentist with Bayesian inference elements, and leads from a preliminary discovery against a (random) H0-hypothesis to a statistical H1-verification. Not only do RPS-results feature significantly lower error-rates than NHST-results, RPS also addresses key-deficits of a “pure” Frequentist and a standard Bayesian approach. In particular, RPS aggregates underpowered results safely. RPS therefore provides a tool to regain the trust the discipline had lost during the ongoing replicability-crisis. PMID:29740363
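The link between low statistical power and high error rates can be shown with a small simulation. This is a plain two-sample z-test illustration with an assumed effect size and sample size, not the RPS procedure itself:

```python
import numpy as np

rng = np.random.default_rng(0)
# True effect d = 0.3, n = 20 per group, known sigma = 1 (illustrative values).
d, n, n_sim = 0.3, 20, 4000

a = rng.normal(0.0, 1.0, (n_sim, n))
b = rng.normal(d, 1.0, (n_sim, n))

# Two-sided z-test at alpha = 0.05 on each simulated study
z = (b.mean(axis=1) - a.mean(axis=1)) / np.sqrt(2.0 / n)
power = np.mean(np.abs(z) > 1.96)
print(1.0 - power)  # Type II error rate: most true effects go undetected
```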
Interspecific reciprocity explains mobbing behaviour of the breeding chaffinches, Fringilla coelebs.
Krams, Indrikis; Krama, Tatjana
2002-11-22
When prey animals discover a predator close by, they mob it while uttering characteristic sounds that attract other prey individuals to the vicinity. Mobbing causes a predator to vacate its immediate foraging area, which gives an opportunity for prey individuals to continue their interrupted daily activity. Besides the increased benefits, mobbing behaviour also has its costs owing to injuries or death. The initiator of mobbing may be at increased risk of predation by attracting the predator's attention, especially if not joined by other neighbouring prey individuals. Communities of breeding birds have always been considered as temporal aggregations. Since an altruist could not prevent cheaters from exploiting its altruism in an anonymous community, this excluded any possibility of explaining mobbing behaviour in terms of reciprocal altruism. However, sedentary birds may have become acquainted since the previous non-breeding season. Migrant birds, forming anonymous communities at the beginning of the breeding season, may also develop closer social ties during the course of the breeding season. We tested whether a male chaffinch, a migrant bird, would initiate active harassment of a predator both at the beginning of the breeding season and a week later when it has become a member of a non-anonymous multi-species aggregation of sedentary birds. We expected that male chaffinches would be less likely to initiate a mob at the beginning of the breeding season when part of an anonymous multi-species aggregation of migratory birds. However, their mobbing activity should increase as the breeding season advances. Our results support these predictions. Cooperation among individuals belonging to different species in driving the predator away may be explained as interspecific reciprocity based on interspecific recognition and temporal stability of the breeding communities.
2017-01-01
Normal aging is associated with a decline in episodic memory and also with aggregation of the β-amyloid (Aβ) and tau proteins and atrophy of medial temporal lobe (MTL) structures crucial to memory formation. Although some evidence suggests that Aβ is associated with aberrant neural activity, the relationships among these two aggregated proteins, neural function, and brain structure are poorly understood. Using in vivo human Aβ and tau imaging, we demonstrate that increased Aβ and tau are both associated with aberrant fMRI activity in the MTL during memory encoding in cognitively normal older adults. This pathological neural activity was in turn associated with worse memory performance and atrophy within the MTL. A mediation analysis revealed that the relationship with regional atrophy was explained by MTL tau. These findings broaden the concept of cognitive aging to include evidence of Alzheimer's disease-related protein aggregation as an underlying mechanism of age-related memory impairment. SIGNIFICANCE STATEMENT Alterations in episodic memory and the accumulation of Alzheimer's pathology are common in cognitively normal older adults. However, evidence of pathological effects on episodic memory has largely been limited to β-amyloid (Aβ). Because Aβ and tau often cooccur in older adults, previous research offers an incomplete understanding of the relationship between pathology and episodic memory. With the recent development of in vivo tau PET radiotracers, we show that Aβ and tau are associated with different aspects of memory encoding, leading to aberrant neural activity that is behaviorally detrimental. In addition, our results provide evidence linking Aβ- and tau-associated neural dysfunction to brain atrophy. PMID:28213439
Effect of Divided Attention on Children's Rhythmic Response
ERIC Educational Resources Information Center
Thomas, Jerry R.; Stratton, Richard K.
1977-01-01
Audio and visual interference did not significantly impair rhythmic response levels of second- and fourth-grade boys as measured by space error scores, though audio input resulted in significantly less consistent temporal performance. (MB)
The role of spatial aggregation in forensic entomology.
Fiene, Justin G; Sword, Gregory A; Van Laerhoven, Sherah L; Tarone, Aaron M
2014-01-01
A central concept in forensic entomology is that arthropod succession on carrion is predictable and can be used to estimate the postmortem interval (PMI) of human remains. However, most studies have reported significant variation in successional patterns, particularly among replicate carcasses, which has complicated estimates of PMIs. Several forensic entomology researchers have proposed that further integration of ecological and evolutionary theory in forensic entomology could help advance the application of succession data for producing PMI estimates. The purpose of this essay is to draw attention to the role of spatial aggregation of arthropods among carrion resources as a potentially important aspect to consider for understanding and predicting the assembly of arthropods on carrion over time. We review ecological literature related to spatial aggregation of arthropods among patchy and ephemeral resources, such as carrion, and when possible integrate these results with published forensic literature. We show that spatial aggregation of arthropods across resources is commonly reported and has been used to provide fundamental insight for understanding regional and local patterns of arthropod diversity and coexistence. Moreover, two suggestions are made for conducting future research. First, because intraspecific aggregation affects species frequency distributions across carcasses, data from replicate carcasses should not be combined, but rather statistically quantified to generate occurrence probabilities. Second, we identify a need for studies that tease apart the degree to which community assembly on carrion is spatially versus temporally structured, which will aid in developing mechanistic hypotheses on the ecological factors shaping community assembly on carcasses.
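The suggestion to quantify replicate carcasses statistically, rather than pool them, can be sketched as per-taxon occurrence probabilities; the presence/absence data are hypothetical:

```python
import numpy as np

# Presence (1) / absence (0) of three taxa across five replicate carcasses
# at one sampling time (hypothetical data).
presence = np.array([
    [1, 1, 0, 1, 1],   # taxon A
    [0, 1, 0, 0, 1],   # taxon B
    [1, 1, 1, 1, 1],   # taxon C
])

# Fraction of replicate carcasses occupied by each taxon
occurrence_prob = presence.mean(axis=1)
print(occurrence_prob)  # A: 0.8, B: 0.4, C: 1.0
```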
Kovatchev, Boris P; Clarke, William L; Breton, Marc; Brayman, Kenneth; McCall, Anthony
2005-12-01
Continuous glucose monitors (CGMs) collect detailed blood glucose (BG) time series, which carry significant information about the dynamics of BG fluctuations. In contrast, the methods for analysis of CGM data remain those developed for infrequent BG self-monitoring. As a result, important information about the temporal structure of the data is lost during the translation of raw sensor readings into clinically interpretable statistics and images. The following mathematical methods are introduced into the field of CGM data interpretation: (1) analysis of BG rate of change; (2) risk analysis using previously reported Low/High BG Indices and Poincare (lag) plot of risk associated with temporal BG variability; and (3) spatial aggregation of the process of BG fluctuations and its Markov chain visualization. The clinical application of these methods is illustrated by analysis of data of a patient with Type 1 diabetes mellitus who underwent islet transplantation and with data from clinical trials. Normative data [12,025 reference (YSI device, Yellow Springs Instruments, Yellow Springs, OH) BG determinations] in patients with Type 1 diabetes mellitus who underwent insulin and glucose challenges suggest that the 90%, 95%, and 99% confidence intervals of BG rate of change that could be maximally sustained over 15-30 min are [-2,2], [-3,3], and [-4,4] mg/dL/min, respectively. BG dynamics and risk parameters clearly differentiated the stages of transplantation and the effects of medication. Aspects of treatment were clearly visualized by graphs of BG rate of change and Low/High BG Indices, by a Poincare plot of risk for rapid BG fluctuations, and by a plot of the aggregated Markov process. Advanced analysis and visualization of CGM data allow for evaluation of dynamical characteristics of diabetes and reveal clinical information that is inaccessible via standard statistics, which do not take into account the temporal structure of the data. 
The use of such methods improves the assessment of patients' glycemic control.
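The blood-glucose rate-of-change analysis described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the function names are mine, and the ±4 mg/dL/min plausibility bound is taken from the 99% confidence band reported in the abstract:

```python
import numpy as np

def bg_rate_of_change(bg, t_min):
    """Per-sample BG rate of change (mg/dL/min) from a CGM time series."""
    bg = np.asarray(bg, dtype=float)
    t = np.asarray(t_min, dtype=float)
    return np.diff(bg) / np.diff(t)

def flag_implausible(rates, limit=4.0):
    """Flag rates outside the ~99% band [-4, 4] mg/dL/min reported above."""
    return np.abs(np.asarray(rates)) > limit
```

A sensor reading that jumps 40 mg/dL in 5 minutes (8 mg/dL/min) would be flagged as outside the maximally sustainable physiological range, whereas 2 mg/dL/min would not.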
NASA Astrophysics Data System (ADS)
Jolivet, R.; Simons, M.
2016-12-01
InSAR time series analysis allows reconstruction of ground deformation with meter-scale spatial resolution and high temporal sampling. For instance, the ESA Sentinel-1 Constellation is capable of providing 6-day temporal sampling, thereby opening a new window on the spatio-temporal behavior of tectonic processes. However, due to computational limitations, most time series methods rely on a pixel-by-pixel approach. This limitation is a concern because (1) accounting for orbital errors requires referencing all interferograms to a common set of pixels before reconstruction of the time series and (2) spatially correlated atmospheric noise due to tropospheric turbulence is ignored. Decomposing interferograms into statistically independent wavelets will mitigate issues of correlated noise, but prior estimation of orbital uncertainties will still be required. Here, we explore a method that considers all pixels simultaneously when solving for the spatio-temporal evolution of interferometric phase. Our method is based on a massively parallel implementation of a conjugate direction solver. We consider an interferogram as the sum of the phase difference between two SAR acquisitions and the corresponding orbital errors. In addition, we fit the temporal evolution with a physically parameterized function while accounting for spatially correlated noise in the data covariance. We assume noise is isotropic for any given InSAR pair with a covariance described by an exponential function that decays with increasing separation distance between pixels. We regularize our solution in space using a similar exponential function as model covariance. Given the problem size, we avoid matrix multiplications of the full covariances by computing convolutions in the Fourier domain. We first solve the unregularized least squares problem using the LSQR algorithm to approach the final solution, then run our conjugate direction solver to account for data and model covariances.
We present synthetic tests showing the efficiency of our method. We then reconstruct a 20-year continuous time series covering Northern Chile. Without input from any additional GNSS data, we recover the secular deformation rate, seasonal oscillations and the deformation fields from the 2005 Mw 7.8 Tarapaca and 2007 Mw 7.7 Tocopilla earthquakes.
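The Fourier-domain trick mentioned above (avoiding dense covariance matrix multiplications) can be sketched with numpy. This is a hedged illustration of the general technique, not the authors' code: it applies an exponential-decay covariance to a gridded field by FFT convolution, with zero-padding to suppress wrap-around; the grid size, correlation length, and padding scheme are illustrative choices.

```python
import numpy as np

def apply_exp_covariance(field, corr_len, spacing=1.0):
    """Compute C @ x for C_ij = exp(-|r_i - r_j| / L) on a regular grid,
    using FFT convolution instead of forming the dense matrix.
    Zero-padding to twice the grid size avoids circular wrap-around."""
    ny, nx = field.shape
    Py, Px = 2 * ny, 2 * nx
    # distance of each padded-grid offset from the origin (torus-minimal)
    y = np.minimum(np.arange(Py), Py - np.arange(Py)) * spacing
    x = np.minimum(np.arange(Px), Px - np.arange(Px)) * spacing
    kernel = np.exp(-np.hypot(y[:, None], x[None, :]) / corr_len)
    out = np.fft.irfft2(np.fft.rfft2(kernel) *
                        np.fft.rfft2(field, s=(Py, Px)), s=(Py, Px))
    return out[:ny, :nx]
```

The cost drops from O(N²) per matrix-vector product to O(N log N), which is what makes an all-pixels-simultaneously solve feasible at interferogram scale.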
Hierarchical Spatio-temporal Visual Analysis of Cluster Evolution in Electrocorticography Data
Murugesan, Sugeerth; Bouchard, Kristofer; Chang, Edward; ...
2016-10-02
Here, we present ECoG ClusterFlow, a novel interactive visual analysis tool for the exploration of high-resolution Electrocorticography (ECoG) data. Our system detects and visualizes dynamic high-level structures, such as communities, using the time-varying spatial connectivity network derived from the high-resolution ECoG data. ECoG ClusterFlow provides a multi-scale visualization of the spatio-temporal patterns underlying the time-varying communities using two views: 1) an overview summarizing the evolution of clusters over time and 2) a hierarchical glyph-based technique that uses data aggregation and small multiples techniques to visualize the propagation of clusters in their spatial domain. ECoG ClusterFlow makes it possible 1) to compare the spatio-temporal evolution patterns across various time intervals, 2) to compare the temporal information at varying levels of granularity, and 3) to investigate the evolution of spatial patterns without occluding the spatial context information. Lastly, we present case studies done in collaboration with neuroscientists on our team for both simulated and real epileptic seizure data aimed at evaluating the effectiveness of our approach.
An exploratory study of temporal integration in the peripheral retina of myopes
NASA Astrophysics Data System (ADS)
Macedo, Antonio F.; Encarnação, Tito J.; Vilarinho, Daniel; Baptista, António M. G.
2017-08-01
The visual system takes time to respond to visual stimuli; neurons need to accumulate information over a time span in order to fire. Visual information perceived by the peripheral retina might be impaired by imperfect peripheral optics leading to myopia development. This study explored the effect of eccentricity, moderate myopia and peripheral refraction on temporal visual integration. Myopes and emmetropes showed similar performance at detecting briefly flashed stimuli in different retinal locations. Our results show evidence that moderate myopes have normal visual integration when refractive errors are corrected with contact lenses; however, the tendency to increased temporal integration thresholds observed in myopes deserves further investigation.
Dynamic state estimation based on Poisson spike trains—towards a theory of optimal encoding
NASA Astrophysics Data System (ADS)
Susemihl, Alex; Meir, Ron; Opper, Manfred
2013-03-01
Neurons in the nervous system convey information to higher brain regions by the generation of spike trains. An important question in the field of computational neuroscience is how these sensory neurons encode environmental information in a way which may be simply analyzed by subsequent systems. Many aspects of the form and function of the nervous system have been understood using the concepts of optimal population coding. Most studies, however, have neglected the aspect of temporal coding. Here we address this shortcoming through a filtering theory of inhomogeneous Poisson processes. We derive exact relations for the minimal mean squared error of the optimal Bayesian filter and, by optimizing the encoder, obtain optimal codes for populations of neurons. We also show that a class of non-Markovian, smooth stimuli are amenable to the same treatment, and provide results for the filtering and prediction error which hold for a general class of stochastic processes. This sets a sound mathematical framework for a population coding theory that takes temporal aspects into account. It also formalizes a number of studies which discussed temporal aspects of coding using time-window paradigms, by stating them in terms of correlation times and firing rates. We propose that this kind of analysis allows for a systematic study of temporal coding and will bring further insights into the nature of the neural code.
Temporal characteristics of imagined and actual walking in frail older adults.
Nakano, Hideki; Murata, Shin; Shiraiwa, Kayoko; Iwase, Hiroaki; Kodama, Takayuki
2018-05-09
Mental chronometry, commonly used to evaluate motor imagery ability, measures the imagined time required for movements. Previous studies of mental chronometry during walking have focused on healthy older adults; mental chronometry in frail older adults has not yet been clarified. To investigate temporal characteristics of imagined and actual walking in frail older adults, we investigated the time required for imagined and actual walking along three walkways of different widths [width(s): 50, 25, 15 cm × length: 5 m] in 29 frail older adults and 20 young adults. Imagined walking was measured with mental chronometry. We observed significantly longer imagined and actual walking times along walkways of 50, 25, and 15 cm width in frail older adults compared with young adults. Moreover, temporal differences (absolute error) between imagined and actual walking were significantly greater in frail older adults than in young adults along walkways with a width of 25 and 15 cm. Furthermore, we observed significant differences in temporal differences (constant error) between frail older adults and young adults for walkways with a width of 25 and 15 cm. Frail older adults tended to underestimate actual walking time in imagined walking trials. Our results suggest that walkways of different widths may be a useful tool to evaluate age-related changes in imagined and actual walking in frail older adults.
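The two error measures used above are standard in mental chronometry and easy to make concrete. A minimal sketch (function name is mine; times in seconds are illustrative): absolute error ignores the direction of the mismatch, while constant error keeps its sign, so a negative constant error corresponds to the underestimation of actual walking time reported for the frail group.

```python
def chronometry_errors(imagined, actual):
    """Per-trial absolute error |imagined - actual| and constant (signed)
    error imagined - actual between imagined and actual walking times."""
    abs_err = [abs(i - a) for i, a in zip(imagined, actual)]
    const_err = [i - a for i, a in zip(imagined, actual)]
    return abs_err, const_err
```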
NASA Astrophysics Data System (ADS)
Ziemba, Alexander; El Serafy, Ghada
2016-04-01
Ecological modeling and water quality investigations are complex processes which can require a high level of parameterization and a multitude of varying data sets in order to properly execute the model in question. Since models are generally complex, their calibration and validation can benefit from the application of data and information fusion techniques. The data applied to ecological models comes from a wide range of sources such as remote sensing, earth observation, and in-situ measurements, resulting in a high variability in the temporal and spatial resolution of the various data sets available to water quality investigators. It is proposed that effective fusion into a comprehensive singular set will provide a more complete and robust data resource with which models can be calibrated, validated, and driven. Each individual product contains a unique valuation of error resulting from the method of measurement and application of pre-processing techniques. The uncertainty and error is further compounded when the data being fused is of varying temporal and spatial resolution. In order to have a reliable fusion based model and data set, the uncertainty of the results and confidence interval of the data being reported must be effectively communicated to those who would utilize the data product or model outputs in a decision making process [2]. Here we review an array of data fusion techniques applied to various remote sensing, earth observation, and in-situ data sets whose domains vary in spatial and temporal resolution. The data sets examined are combined in a manner so that the various classifications of data, complementary, redundant, and cooperative, are all assessed to determine each classification's impact on the propagation and compounding of error. In order to assess the error of the fused data products, a comparison is conducted with data sets containing a known confidence interval and quality rating.
We conclude with a quantification of the performance of the data fusion techniques and a recommendation on the feasibility of applying of the fused products in operating forecast systems and modeling scenarios. The error bands and confidence intervals derived can be used in order to clarify the error and confidence of water quality variables produced by prediction and forecasting models. References [1] F. Castanedo, "A Review of Data Fusion Techniques", The Scientific World Journal, vol. 2013, pp. 1-19, 2013. [2] T. Keenan, M. Carbone, M. Reichstein and A. Richardson, "The model-data fusion pitfall: assuming certainty in an uncertain world", Oecologia, vol. 167, no. 3, pp. 587-597, 2011.
Accurate chemical master equation solution using multi-finite buffers
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-01-01
The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104
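The core idea, truncating the state space with a finite buffer and solving the dCME directly rather than by simulation, can be illustrated on the simplest possible network, a single birth-death process. This is a toy sketch under my own assumptions (a generic least-squares null-space solve, not the ACME algorithm itself); births out of the buffer are dropped, which is where the quantifiable truncation error lives.

```python
import numpy as np

def birth_death_steady_state(birth, death, buffer_size):
    """Steady-state probabilities of a birth-death dCME truncated at
    `buffer_size` copies. `birth(n)` and `death(n)` are propensities."""
    n = buffer_size + 1
    A = np.zeros((n, n))                      # generator, column = source state
    for i in range(n):
        b = birth(i) if i < n - 1 else 0.0    # no birth out of the buffer
        d = death(i) if i > 0 else 0.0
        A[i, i] -= b + d
        if i < n - 1:
            A[i + 1, i] += b
        if i > 0:
            A[i - 1, i] += d
    # solve A p = 0 subject to sum(p) = 1
    M = np.vstack([A, np.ones(n)])
    rhs = np.zeros(n + 1)
    rhs[-1] = 1.0
    p, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return p
```

With constant birth rate 1 and death rate n, the solution approaches the Poisson(1) landscape as the buffer grows, and the probability mass stranded near the buffer boundary gives a direct handle on the truncation error the abstract discusses.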
Error correcting coding-theory for structured light illumination systems
NASA Astrophysics Data System (ADS)
Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben
2017-06-01
Intensity discrete structured light illumination systems project a series of projection patterns for the estimation of the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. The use of an error correcting code is advantageous in many ways: it allows reducing the effect of random intensity noise, it corrects outliers near the border of the fringe commonly present when using intensity discrete patterns, and it provides robustness in case of severe measurement errors (even for burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light as well as in critical safety applications, e.g. monitoring of deformations of components in nuclear power plants, where high reliability is ensured even in case of short measurement disruptions. A special form of burst errors is the so-called salt and pepper noise, which can largely be removed with error correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
Temporal rainfall estimation using input data reduction and model inversion
NASA Astrophysics Data System (ADS)
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower order decomposition structures was able to estimate the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contains sufficient information to estimate temporal rainfall and model parameter distributions. 
The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error combined with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
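The dimensionality-reduction step above, keeping only low-order wavelet approximation coefficients of the rainfall series, can be sketched with a hand-rolled Haar transform (the study's wavelet choice and decomposition depth are not reproduced here; this is a generic illustration in plain numpy):

```python
import numpy as np

def haar_approx(signal, levels):
    """Keep only the Haar DWT approximation coefficients after `levels`
    decompositions: length-N rainfall shrinks to N / 2**levels numbers."""
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # low-pass + dyadic downsample
    return a

def haar_reconstruct(approx, levels):
    """Invert with all detail coefficients set to zero, giving a
    piecewise-constant (coarse) proxy of the original rainfall."""
    a = np.asarray(approx, dtype=float)
    for _ in range(levels):
        a = np.repeat(a, 2) / np.sqrt(2.0)
    return a
```

It is this small coefficient vector, rather than the full hourly series, that is then estimated jointly with the model parameters in the MCMC inversion.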
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
Ballantyne, A. P.; Andres, R.; Houghton, R.; ...
2015-04-30
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr⁻¹ in the 1960s to 0.3 Pg C yr⁻¹ in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr⁻¹ in the 1960s to almost 1.0 Pg C yr⁻¹ during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades.
Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
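The practical consequence of temporally correlated error, central to the framework above, is that correlated annual errors do not average down as 1/√n when aggregated into decadal means. A Monte Carlo sketch under my own simplifying assumption of an AR(1) error process (the paper's actual error model may differ):

```python
import numpy as np

def decadal_mean_sd(sigma, rho, n_years=10, n_draws=200_000, seed=1):
    """Monte Carlo SD of an n-year mean when annual errors follow an AR(1)
    process with marginal SD `sigma` and lag-1 autocorrelation `rho`."""
    rng = np.random.default_rng(seed)
    e = np.empty((n_draws, n_years))
    e[:, 0] = rng.normal(0.0, sigma, n_draws)
    innov_sd = sigma * np.sqrt(1.0 - rho**2)   # keeps marginal SD at sigma
    for t in range(1, n_years):
        e[:, t] = rho * e[:, t - 1] + rng.normal(0.0, innov_sd, n_draws)
    return e.mean(axis=1).std()
```

With rho = 0 the decadal-mean SD recovers the white-noise σ/√10; with rho = 0.9 it is nearly three times larger, which is why ignoring temporal correlation understates emission-estimate uncertainty.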
Pilkington, Emma; Keidel, James; Kendrick, Luke T.; Saddy, James D.; Sage, Karen; Robson, Holly
2017-01-01
This study examined patterns of neologistic and perseverative errors during word repetition in fluent Jargon aphasia. The principal hypotheses accounting for Jargon production indicate that poor activation of a target stimulus leads to weakly activated target phoneme segments, which are outcompeted at the phonological encoding level. Voxel-lesion symptom mapping studies of word repetition errors suggest a breakdown in the translation from auditory-phonological analysis to motor activation. Behavioral analyses of repetition data were used to analyse the target relatedness (Phonological Overlap Index: POI) of neologistic errors and patterns of perseveration in 25 individuals with Jargon aphasia. Lesion-symptom analyses explored the relationship between neurological damage and jargon repetition in a group of 38 aphasia participants. Behavioral results showed that neologisms produced by 23 jargon individuals contained greater degrees of target lexico-phonological information than predicted by chance and that neologistic and perseverative production were closely associated. A significant relationship between jargon production and lesions to temporoparietal regions was identified. Region of interest regression analyses suggested that damage to the posterior superior temporal gyrus and superior temporal sulcus in combination was best predictive of a Jargon aphasia profile. Taken together, these results suggest that poor phonological encoding, secondary to impairment in sensory-motor integration, alongside impairments in self-monitoring result in jargon repetition. Insights for clinical management and future directions are discussed. PMID:28522967
Spatial configuration trends in coastal Louisiana from 1985 to 2010
Couvillion, Brady; Fischer, Michelle; Beck, Holly J.; Sleavin, William J.
2016-01-01
From 1932 to 2010, coastal Louisiana has experienced a net loss of 4877 km² of wetlands. As the area of these wetlands has changed, so too has the spatial configuration of the landscape. The resulting landscape is a mosaic of patches of wetlands and open water. This study examined the spatial and temporal variability of trajectories of landscape configuration and the relation of those patterns to the trajectories of land change in wetlands during a 1985–2010 observation period. Spatial configuration was quantified using multi-temporal satellite imagery and an aggregation index (AI). The results of this analysis indicate that coastal Louisiana experienced a reduction in the AI of coastal wetlands of 1.07 %. In general, forested wetland and fresh marsh types displayed the highest aggregation and stability. The remaining marsh types (intermediate, brackish, and saline) all experienced disaggregation during the time period, with increasing severity of disaggregation along an increasing salinity gradient. Finally, a correlation (r² = 0.5562) was found between AI and the land change rate for the subsequent period, indicating that fragmentation can increase the vulnerability of wetlands to further wetland loss. These results can help identify coastal areas which are susceptible to future wetland loss.
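The aggregation index used above has a simple definition for a binary land/water raster: the number of shared land-land cell edges divided by the maximum number possible for that land area (He et al.'s formulation). A sketch, with the caveat that the single-cell and empty-grid conventions are my own choices:

```python
import numpy as np

def aggregation_index(grid):
    """Aggregation index for a binary raster: observed shared land-land
    edges / maximum possible shared edges for the same land-cell count."""
    g = np.asarray(grid, dtype=bool)
    n = int(g.sum())
    if n < 2:
        return 1.0 if n else 0.0       # convention for degenerate cases
    # count horizontally and vertically adjacent land-cell pairs
    e = int((g[:, 1:] & g[:, :-1]).sum() + (g[1:, :] & g[:-1, :]).sum())
    m = int(np.floor(np.sqrt(n)))      # side of the largest full square
    if n == m * m:
        e_max = 2 * m * (m - 1)
    elif n <= m * (m + 1):
        e_max = 2 * m * (m - 1) + 2 * (n - m * m) - 1
    else:
        e_max = 2 * m * (m - 1) + 2 * (n - m * m) - 2
    return e / e_max
```

A solid block of wetland scores 1.0; the same number of land cells scattered with no shared edges scores 0.0, so a declining AI is exactly the fragmentation signal the study tracks.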
Batterman, Stuart
2015-01-01
Patterns of traffic activity, including changes in the volume and speed of vehicles, vary over time and across urban areas and can substantially affect vehicle emissions of air pollutants. Time-resolved activity at the street scale typically is derived using temporal allocation factors (TAFs) that allow the development of emissions inventories needed to predict concentrations of traffic-related air pollutants. This study examines the spatial and temporal variation of TAFs, and characterizes prediction errors resulting from their use. Methods are presented to estimate TAFs and their spatial and temporal variability and used to analyze total, commercial and non-commercial traffic in the Detroit, Michigan, U.S. metropolitan area. The variability of total volume estimates, quantified by the coefficient of variation (COV) representing the percentage departure from expected hourly volume, was 21, 33, 24 and 33% for weekdays, Saturdays, Sundays and holidays, respectively. Prediction errors mostly resulted from hour-to-hour variability on weekdays and Saturdays, and from day-to-day variability on Sundays and holidays. Spatial variability was limited across the study roads, most of which were large freeways. Commercial traffic had different temporal patterns and greater variability than noncommercial vehicle traffic, e.g., the weekday variability of hourly commercial volume was 28%. The results indicate that TAFs for a metropolitan region can provide reasonably accurate estimates of hourly vehicle volume on major roads. While vehicle volume is only one of many factors that govern on-road emission rates, air quality analyses would be strengthened by incorporating information regarding the uncertainty and variability of traffic activity. PMID:26688671
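The two quantities at the heart of the study, temporal allocation factors and the COV of hourly volume, can be sketched from a day-by-hour count matrix. This is one plausible reading of the definitions in the abstract (the study's exact estimation method is not reproduced here):

```python
import numpy as np

def temporal_allocation_factors(hourly_counts):
    """TAFs: each hour-of-day's share of the expected daily traffic volume.
    `hourly_counts` has shape (n_days, n_hours)."""
    counts = np.asarray(hourly_counts, dtype=float)
    mean_by_hour = counts.mean(axis=0)
    return mean_by_hour / mean_by_hour.sum()

def hourly_cov(hourly_counts):
    """Average percentage departure of observed hourly volume from its
    expected value (coefficient of variation, in %)."""
    counts = np.asarray(hourly_counts, dtype=float)
    mean_by_hour = counts.mean(axis=0)
    return 100.0 * (counts.std(axis=0) / mean_by_hour).mean()
```

On real data, separate TAF sets would be fit for weekdays, Saturdays, Sundays, and holidays, mirroring the day-type stratification the study reports.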
Biases in Time-Averaged Field and Paleosecular Variation Studies
NASA Astrophysics Data System (ADS)
Johnson, C. L.; Constable, C.
2009-12-01
Challenges to constructing time-averaged field (TAF) and paleosecular variation (PSV) models of Earth’s magnetic field over million year time scales are the uneven geographical and temporal distribution of paleomagnetic data and the absence of full vector records of the magnetic field variability at any given site. Recent improvements in paleomagnetic data sets now allow regional assessment of the biases introduced by irregular temporal sampling and the absence of full vector information. We investigate these effects over the past few Myr for regions with large paleomagnetic data sets, where the TAF and/or PSV have been of previous interest (e.g., significant departures of the TAF from the field predicted by a geocentric axial dipole). We calculate the effects of excluding paleointensity data from TAF calculations, and find these to be small. For example, at Hawaii, we find that for the past 50 ka, estimates of the TAF direction are minimally affected if only paleodirectional data versus the full paleofield vector are used. We use resampling techniques to investigate biases incurred by the uneven temporal distribution. Key to the latter issue is temporal information on a site-by-site basis. At Hawaii, resampling of the paleodirectional data onto a uniform temporal distribution, assuming no error in the site ages, reduces the magnitude of the inclination anomaly for the Brunhes, Gauss and Matuyama epochs. However inclusion of age errors in the sampling procedure leads to TAF estimates that are close to those reported for the original data sets. We discuss the implications of our results for global field models.
Improving wave forecasting by integrating ensemble modelling and machine learning
NASA Astrophysics Data System (ADS)
O'Donncha, F.; Zhang, Y.; James, S. C.
2017-12-01
Modern smart-grid networks use technologies to instantly relay information on supply and demand to support effective decision making. Integration of renewable-energy resources with these systems demands accurate forecasting of energy production (and demand) capacities. For wave-energy converters, this requires wave-condition forecasting to enable estimates of energy production. Current operational wave forecasting systems exhibit substantial errors with wave-height RMSEs of 40 to 60 cm being typical, which limits the reliability of energy-generation predictions thereby impeding integration with the distribution grid. In this study, we integrate physics-based models with statistical learning aggregation techniques that combine forecasts from multiple, independent models into a single "best-estimate" prediction of the true state. The Simulating Waves Nearshore physics-based model is used to compute wind- and currents-augmented waves in the Monterey Bay area. Ensembles are developed based on multiple simulations perturbing input data (wave characteristics supplied at the model boundaries and winds) to the model. A learning-aggregation technique uses past observations and past model forecasts to calculate a weight for each model. The aggregated forecasts are compared to observation data to quantify the performance of the model ensemble and aggregation techniques. The appropriately weighted ensemble model outperforms an individual ensemble member with regard to forecasting wave conditions.
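The learning-aggregation step described above, weighting ensemble members by their past performance, can be sketched with a standard exponentially weighted average forecaster. The weighting rule and learning rate here are generic textbook choices, not necessarily those used in the study:

```python
import numpy as np

def aggregate_forecasts(past_forecasts, past_obs, new_forecasts, eta=1.0):
    """Weight each ensemble member by exp(-eta * cumulative squared error)
    on past data, then return the weights and the weighted new forecast.
    `past_forecasts` has shape (n_models, n_times)."""
    F = np.asarray(past_forecasts, dtype=float)
    y = np.asarray(past_obs, dtype=float)
    losses = ((F - y) ** 2).sum(axis=1)
    w = np.exp(-eta * (losses - losses.min()))   # shift for numerical stability
    w /= w.sum()
    return w, w @ np.asarray(new_forecasts, dtype=float)
```

A member that tracked past wave heights closely dominates the weights, so the aggregate inherits the skill of the historically best models while remaining robust if that ranking changes.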
Global Vertical Rates from VLBI
NASA Technical Reports Server (NTRS)
Ma, Chopo; MacMillan, D.; Petrov, L.
2003-01-01
The analysis of global VLBI observations provides vertical rates for 50 sites with formal errors less than 2 mm/yr and median formal error of 0.4 mm/yr. These sites are largely in Europe and North America with a few others in east Asia, Australia, South America and South Africa. The time interval of observations is up to 20 years. The error of the velocity reference frame is less than 0.5 mm/yr, but results from several sites with observations from more than one antenna suggest that the estimated vertical rates may have temporal variations or non-geophysical components. Comparisons with GPS rates and corresponding site position time series will be discussed.
Dynamics of protein aggregation and oligomer formation governed by secondary nucleation
NASA Astrophysics Data System (ADS)
Michaels, Thomas C. T.; Lazell, Hamish W.; Arosio, Paolo; Knowles, Tuomas P. J.
2015-08-01
The formation of aggregates in many protein systems can be significantly accelerated by secondary nucleation, a process where existing assemblies catalyse the nucleation of new species. In particular, secondary nucleation has emerged as a central process controlling the proliferation of many filamentous protein structures, including molecular species related to diseases such as sickle cell anemia and a range of neurodegenerative conditions. Increasing evidence suggests that the physical size of protein filaments plays a key role in determining their potential for deleterious interactions with living cells, with smaller aggregates of misfolded proteins, oligomers, being particularly toxic. It is thus crucial to progress towards an understanding of the factors that control the sizes of protein aggregates. However, the influence of secondary nucleation on the time evolution of aggregate size distributions has been challenging to quantify. This difficulty originates in large part from the fact that secondary nucleation couples the dynamics of species distant in size space. Here, we approach this problem by presenting an analytical treatment of the master equation describing the growth kinetics of linear protein structures proliferating through secondary nucleation and provide closed-form expressions for the temporal evolution of the resulting aggregate size distribution. We show how the availability of analytical solutions for the full filament distribution allows us to identify the key physical parameters that control the sizes of growing protein filaments. Furthermore, we use these results to probe the dynamics of the populations of small oligomeric species as they are formed through secondary nucleation and discuss the implications of our work for understanding the factors that promote or curtail the production of these species with a potentially high deleterious biological activity.
Characterizing the mechanical behavior of the zebrafish germ layers
NASA Astrophysics Data System (ADS)
Kealhofer, David; Serwane, Friedhelm; Mongera, Alessandro; Rowghanian, Payam; Lucio, Adam; Campàs, Otger
Organ morphogenesis and the development of the animal body plan involve complex spatial and temporal control of tissue- and cell-level mechanics. A prime example is the generation of stresses by individual cells to reorganize the tissue. These processes have remained poorly understood due to a lack of techniques to characterize the local constitutive law of the material, which relates local cellular forces to the resulting tissue flows. We have developed a method for quantitative, local in vivo study of material properties in living tissue using magnetic droplet probes. We use this technique to study the material properties of the different zebrafish germ layers using aggregates of zebrafish mesendodermal and ectodermal cells as a model system. These aggregates are ideal for controlled studies of the mechanics of individual germ layers because of the homogeneity of the cell type and the simple spherical geometry. Furthermore, the numerous molecular tools and transgenic lines already developed for this model organism can be applied to these aggregates, allowing us to characterize the contributions of cell cortex tension and cell adhesion to the mechanical properties of the zebrafish germ layers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bambha, Ray P.; Michelsen, Hope A.
We have used a Single-Particle Soot Photometer (SP2) to measure time-resolved laser-induced incandescence (LII) and laser scatter from combustion-generated mature soot with a fractal dimension of 1.88 extracted from a burner. We have also made measurements on restructured mature-soot particles with a fractal dimension of 2.3–2.4. We reproduced the LII and laser-scatter temporal profiles with an energy- and mass-balance model, which accounted for heating of particles passed through a CW-laser beam over laser–particle interaction times of ~10 μs. Furthermore, the results demonstrate a strong influence of aggregate size and morphology on LII and scattering signals. Conductive cooling competes with absorptive heating on these time scales; the effects are reduced with increasing aggregate size and fractal dimension. These effects can lead to a significant delay in the onset of the LII signal and may explain an apparent low bias in the SP2 measurements for small particle sizes, particularly for fresh, mature soot. The results also reveal significant perturbations to the measured scattering signal from LII interference and suggest rapid expansion of the aggregates during sublimation.
Deshmukh, Ruchi; Mehra, Anurag
2017-01-01
Aggregation and self-assembly are influenced by molecular interactions. With precise control of molecular interactions, in this study, a wide range of nanostructures ranging from zero-dimensional nanospheres to hierarchical nanoplates and spindles have been successfully synthesized at ambient temperature in aqueous solution. The nanostructures reported here are formed by aggregation of spherical seed particles (monomers) in the presence of quaternary ammonium salts. Hydroxide ions and a magnetic moment of the monomers are essential to induce shape anisotropy in the nanostructures. The cobalt nanoplates are studied in detail, and a growth mechanism based on collision, aggregation, and crystal consolidation is proposed on the basis of electron microscopy studies. The growth mechanism is generalized for rods, spindles, and nearly spherical nanostructures, obtained by varying the cation group in the quaternary ammonium hydroxides. Electron diffraction shows different predominant lattice planes on the edge and on the surface of a nanoplate. The study explains the hitherto unaddressed temporal evolution of complex magnetic nanostructures. These ferromagnetic nanostructures represent an interesting combination of shape anisotropy and magnetic characteristics. PMID:28326240
NASA Astrophysics Data System (ADS)
Albers, D. J.; Hripcsak, George
2012-03-01
This paper addresses how to calculate and interpret the time-delayed mutual information (TDMI) for a complex, diversely and sparsely measured, possibly non-stationary population of time series of unknown composition and origin. The primary vehicle used for this analysis is a comparison between the time-delayed mutual information averaged over the population and the time-delayed mutual information of an aggregated population (here, aggregation implies the population is conjoined before any statistical estimates are implemented). Through the use of information-theoretic tools, a sequence of practically implementable calculations is detailed that allows the average and aggregate time-delayed mutual information to be interpreted. Moreover, these calculations can also be used to understand the degree of homogeneity or heterogeneity present in the population. To demonstrate that the proposed methods can be used in nearly any situation, the methods are applied and demonstrated on time series of glucose measurements from two different subpopulations of individuals from the Columbia University Medical Center electronic health record repository, revealing a picture of the composition of the population as well as physiological features.
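The average-versus-aggregate comparison can be sketched with a simple histogram estimator of TDMI. This is a toy illustration on synthetic AR(1) series, not the paper's estimator or data; the bin count, delay, and series parameters are all assumptions.

```python
import numpy as np

def tdmi(x, delay, bins=16):
    """Plug-in histogram estimate of mutual information I(x_t; x_{t+delay})."""
    a, b = x[:-delay], x[delay:]
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()                      # joint distribution estimate
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / np.outer(px, py)[mask])).sum())

rng = np.random.default_rng(0)

def ar1(phi, n=2000):
    """AR(1) series; phi controls how much memory (hence TDMI) it carries."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

# A heterogeneous population: members with different temporal structure.
population = [ar1(phi) for phi in (0.3, 0.6, 0.9)]

avg_tdmi = np.mean([tdmi(x, delay=5) for x in population])  # average over members
agg_tdmi = tdmi(np.concatenate(population), delay=5)        # conjoin, then estimate
# A gap between the two quantities is the kind of signal the paper uses
# to diagnose heterogeneity in the population.
print(avg_tdmi, agg_tdmi)
```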
Extending Value of Information Methods to Include the Co-Net Benefits of Earth Observations
NASA Astrophysics Data System (ADS)
Macauley, M.
2015-12-01
The widening relevance of Earth observations across the spectrum of natural and environmental resources markedly enhances the value of these observations. An example is observations of forest extent, species composition, health, and change; this information can help in assessing carbon sequestration, biodiversity and habitat, watershed management, fuelwood potential, and other ecosystem services, as well as inform the opportunity cost of forest removal for alternative land uses such as agriculture, pasture, or development. These "stacked" indicators or co-net benefits add significant value to Earth observations. In part because of reliance on case studies, much previous research about the value of information from Earth observations has assessed individual applications rather than aggregating across applications, thus tending to undervalue the observations. Aggregating across applications is difficult, however, because it requires common units of measurement; controlling for spatial, spectral, and temporal attributes of the observations; and consistent application of value-of-information techniques. This paper will discuss general principles of co-net benefit aggregation and illustrate its application to attributing value to Earth observations.
Spatio-temporal networks: reachability, centrality and robustness.
Williams, Matthew J; Musolesi, Mirco
2016-06-01
Recent advances in spatial and temporal networks have enabled researchers to more accurately describe many real-world systems such as urban transport networks. In this paper, we study the response of real-world spatio-temporal networks to random error and systematic attack, taking a unified view of their spatial and temporal performance. We propose a model of spatio-temporal paths in time-varying spatially embedded networks which captures the property that, as in many real-world systems, interaction between nodes is non-instantaneous and governed by the space in which they are embedded. Through numerical experiments on three real-world urban transport systems, we study the effect of node failure on a network's topological, temporal and spatial structure. We also demonstrate the broader applicability of this framework to three other classes of network. To identify weaknesses specific to the behaviour of a spatio-temporal system, we introduce centrality measures that evaluate the importance of a node as a structural bridge and its role in supporting spatio-temporally efficient flows through the network. This exposes the complex nature of fragility in a spatio-temporal system, showing that there is a variety of failure modes when a network is subject to systematic attacks.
Zhao, C.Y.; Zhang, Q.; Ding, X.-L.; Lu, Z.; Yang, C.S.; Qi, X.M.
2009-01-01
The City of Xian, China, has been experiencing significant land subsidence and ground fissure activity since the 1960s, which has caused various severe geohazards, including damage to buildings, bridges and other facilities. Monitoring of land subsidence and ground fissure activity can provide useful information for assessing the extent of, and mitigating, such geohazards. In order to achieve robust Synthetic Aperture Radar Interferometry (InSAR) results, six interferometric pairs of Envisat ASAR data covering 2005–2006 are first collected to analyze the InSAR processing errors, such as temporal and spatial decorrelation error, external DEM error, atmospheric error and unwrapping error. Then the annual subsidence rate during 2005–2006 is calculated by weighted averaging of two pairs of D-InSAR results with similar time spans. Lastly, GPS measurements are applied to calibrate the InSAR results, and centimeter precision is achieved. As for ground fissure monitoring, five InSAR cross-sections are designed to demonstrate the relative subsidence difference across ground fissures. In conclusion, the final InSAR subsidence map for 2005–2006 shows four large subsidence zones in the hi-tech zones of the western, eastern and southern suburbs of Xian City, among which two subsidence cones are newly detected and two ground fissures are deduced to have extended westward in the Yuhuazhai subsidence cone. This study shows that the land subsidence and ground fissures are highly correlated spatially and temporally, and both are correlated with hi-tech zone construction in Xian during 2005–2006.
NASA Astrophysics Data System (ADS)
Xu, Yadong; Serre, Marc L.; Reyes, Jeanette M.; Vizuete, William
2017-10-01
We have developed a Bayesian Maximum Entropy (BME) framework that integrates observations from a surface monitoring network and predictions from a Chemical Transport Model (CTM) to create improved exposure estimates that can be resolved into any spatial and temporal resolution. The flexibility of the framework allows for input of data at any choice of time scale and CTM predictions of any spatial resolution, with varying associated degrees of estimation error and cost in terms of implementation and computation. This study quantifies the impact of these choices on exposure estimation error by first comparing estimation errors when BME relied on ozone concentration data as either an hourly average, the daily maximum 8-h average (DM8A), or the daily 24-h average (D24A). Our analysis found that the use of DM8A and D24A data, although less computationally intensive, reduced estimation error more than the use of hourly data. This was primarily due to the poorer CTM model performance for hourly average predicted ozone. Our second analysis compared spatial variability and estimation errors when BME relied on CTM predictions with a grid cell resolution of 12 × 12 km2 versus a coarser resolution of 36 × 36 km2. Our analysis found that integrating the finer-resolution CTM predictions not only reduced estimation error but also increased the spatial variability in daily ozone estimates fivefold. This improvement was due to the improved spatial gradients and model performance found in the finer-resolved CTM simulation. The integration of observations and model predictions that is permitted in a BME framework continues to be a powerful approach for improving exposure estimates of ambient air pollution. The results of this analysis demonstrate the importance of also understanding model performance variability and its implications for exposure error.
NASA Astrophysics Data System (ADS)
Calzolari, C.; Ungaro, F.; Salvador, P.; Torri, D.
2009-04-01
Results of a long-term trial (2002-2007) on the effect of different organic amendments on topsoil structural properties at the end of the 6th year are presented. Two soils located in two experimental farms of the Emilia-Romagna region (Northern Italy), namely a silty clay loam Haplic Calcisol under sorghum (Sorghum bicolor, L.) continuous cropping, and a silty Calcaric Cambisol under peach (Persica vulgaris, Mill.), were treated with different amounts of organic amendments. Four treatments were tested plus a control: manure (10 Mg ha-1 y-1), low-input compost (5 and 10 Mg ha-1 y-1), high-input compost (10 and 40 Mg ha-1 y-1), and no-tillage. In all plots, soil samples were collected three times each year: at the beginning of the growing season, at full crop coverage, and after harvest. At each time, samples were collected in three replicates, and soil bulk density and aggregate stability were measured. At the end of the 6-year trial, 930 bulk density and 405 aggregate stability measurements were available. The influence of organic amendments on soil physical properties differs according to the soil property considered and to the different soils. Soil bulk density (BD) shows clear and statistically significant differences among the tested treatments, all with marked seasonality and distinct temporal trends. The overall trends observed in the two soils are coherent with the amount of organic matter distributed in the different treatments and with the field operations (mainly tillage), but with a short-term effect. More importantly, over the period of observation and within each year, the treatments exhibit cyclical variations due to climate seasonality. Among the treatments, the manure treatment exhibits the weakest seasonal variations and a substantially stable general trend, with BD values slightly lower than those observed for the control.
Different effects are also observed on soil aggregate stability, but in this case too a temporal trend is not clearly detectable, suggesting that the amendments have no cumulative effect, at least during the 6 years of observation, and the responses differ in the two trials: slightly positive for the low compost supply in the silty clay loam Haplic Calcisol and negative for both low and high compost supply in the silty Calcaric Cambisol. The dominant issue is the seasonal variability of aggregate resistance, which is well shown at the site where more data are available. The data also hint at an ambiguous behavior of the compost: increasing the amount of applied compost leads to a slight increase in aggregate stability which is then followed by a decrease, as if the aggregation capability of the compost is counteracted by a dispersion effect.
NASA Astrophysics Data System (ADS)
Xu, Baodong; Li, Jing; Liu, Qinhuo; Zeng, Yelu; Yin, Gaofei
2014-11-01
Leaf Area Index (LAI) is a key vegetation biophysical variable. To effectively use remote sensing LAI products in various disciplines, it is critical to understand their accuracy. The common method for validating LAI products is to first establish an empirical relationship between field data and high-resolution imagery to derive LAI maps, and then aggregate the high-resolution LAI maps to match the moderate-resolution LAI products. This method is suited only to small regions, and its measurement frequency is limited. Therefore, continuous LAI observations from ground station networks are important for validating multi-temporal LAI products. However, due to the scale mismatch between the point observations at a ground station and the pixel observations of a product, direct comparison introduces scale error. It is therefore necessary to evaluate how well a ground station measurement represents the product's pixel scale for a reasonable validation. In this paper, a case study using Chinese Ecosystem Research Network (CERN) in situ data introduces a methodology for estimating the representativeness of LAI station observations for validating LAI products. We first analyzed indicators to evaluate observation representativeness, and then graded the station measurement data. Finally, the LAI measurement data that can represent the pixel scale were used to validate the MODIS, GLASS and GEOV1 LAI products. The results show that the best agreement is reached between GLASS and GEOV1, while the lowest uncertainty is achieved by GEOV1, followed by GLASS and MODIS. We conclude that ground station measurement data can objectively validate multi-temporal LAI products when graded by the evaluation indicators of station observation representativeness, which can also improve the reliability of remote sensing product validation.
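The aggregation step this validation method relies on, block-averaging a high-resolution LAI map up to the product's pixel size, can be sketched as follows. The array shapes, the aggregation factor, and the simple-mean rule are illustrative assumptions, not the paper's processing chain.

```python
import numpy as np

def aggregate_lai(hi_res, factor):
    """Block-average a high-resolution LAI map to a coarser product grid.

    hi_res : 2-D array whose dimensions are exact multiples of `factor`.
    factor : number of fine pixels per coarse pixel along each axis.
    """
    h, w = hi_res.shape
    assert h % factor == 0 and w % factor == 0, "map must tile evenly"
    # Reshape into (coarse_rows, factor, coarse_cols, factor) blocks,
    # then average within each block.
    blocks = hi_res.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# Toy example: a 4x4 "high-resolution" map aggregated by a factor of 2,
# so each coarse pixel is the mean of a 2x2 block of fine pixels.
fine = np.arange(16.0).reshape(4, 4)
coarse = aggregate_lai(fine, 2)
print(coarse)  # 2x2 map of block means
```

In practice a real product pixel rarely aligns exactly with an integer block of fine pixels, which is part of the scale-mismatch problem the abstract describes.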
Mapping the Dynamics of Surface Water Extent 1999-2015 with Landsat 5, 7, and 8 Archives
NASA Astrophysics Data System (ADS)
Pickens, A. H.; Hansen, M.; Hancher, M.; Potapov, P.
2016-12-01
Surface water extent fluctuates through both seasons and years due to changes in climatic conditions and human extraction and impoundments. This study maps the presence of surface water every month since January 1999, evaluates the detection reliability, visualizes the trends, and explores future applications. The Global Land Analysis and Discovery group at the University of Maryland developed a 30-m mask of persistent water during the growing seasons of 2000-2012 in conjunction with the Global Forest Change product published by Hansen et al. in 2013. A total of 654,178 Landsat 7 scenes were used for the study. Persistent water was defined as all pixels classified as water in more than 50% of observations over the study period. We validated this mask by stratifying and comparing against a random sample of 135 single-date RapidEye images at 5-m resolution. It was found to have estimated user's and producer's accuracies of 94% and 88%, respectively. This estimated error is due primarily to temporal differences, such as dam construction, and to mixed water-land pixels along water body edges and narrow rivers. In order to investigate temporal extent dynamics, we expanded our analysis of surface water to classify every Landsat 5, 7, and 8 scene since 1999, augmented with elevation data from SRTM and ASTER, via a series of decision trees applied using Google Earth Engine. The water and land observations are aggregated for each month of each year. We developed a model to visualize the dynamic trend in surface water presence since 1999, either per month or annually. This model can be used directly to assess seasonal and inter-annual trends globally or regionally, or the raw monthly counts can be used for more intensive hydrological analysis and as inputs for other related studies such as wetland mapping.
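The persistent-water rule described above, flagging a pixel as water when it is classified as water in more than 50% of its valid observations, can be sketched as a toy per-pixel reduction over a time stack. The encoding (1 = water, 0 = land, NaN = no data) and the tiny example stack are illustrative assumptions.

```python
import numpy as np

def persistent_water(obs):
    """Per-pixel persistent-water mask from a classified time stack.

    obs : 3-D array (time, y, x); 1.0 = water, 0.0 = land, NaN = no observation.
    Returns a boolean mask: True where water was seen in > 50% of valid obs.
    """
    water_frac = np.nanmean(obs, axis=0)  # fraction of valid obs that are water
    return water_frac > 0.5

# Three monthly classifications of a 2x2 scene; one pixel has a missing obs.
stack = np.array([
    [[1.0, 0.0], [np.nan, 1.0]],
    [[1.0, 0.0], [0.0,   1.0]],
    [[0.0, 1.0], [0.0,   0.0]],
])
mask = persistent_water(stack)
print(mask)  # True only where water dominates the valid observations
```

Using `nanmean` means a pixel's fraction is computed over its valid observations only, mirroring how cloud- or gap-affected Landsat pixels must be handled.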
Improving z-tracking accuracy in the two-photon single-particle tracking microscope.
Liu, C; Liu, Y-L; Perillo, E P; Jiang, N; Dunn, A K; Yeh, H-C
2015-10-12
Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of the TSUNAMI microscope 1.7-fold. In addition, the MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.
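The MLE idea, choosing the position that maximizes the likelihood of the observed photon counts, can be illustrated in one dimension with Poisson count statistics and a grid search. The detector response, the focal-plane layout, and every parameter below are invented for illustration and are not the TSUNAMI instrument model.

```python
import math
import random

def expected_counts(z, centers, width=1.0, amp=100.0):
    """Hypothetical response: mean photon count in each multiplexed time gate
    falls off with the particle's distance from that gate's focal plane,
    plus a constant background of 1 count."""
    return [amp * math.exp(-((z - c) / width) ** 2) + 1.0 for c in centers]

def mle_z(observed, centers, grid):
    """Grid-search MLE: maximize the Poisson log-likelihood over candidate z."""
    def loglik(z):
        lam = expected_counts(z, centers)
        # Poisson log-likelihood up to a constant: sum(n*log(lam) - lam)
        return sum(n * math.log(l) - l for n, l in zip(observed, lam))
    return max(grid, key=loglik)

random.seed(0)
centers = [-1.0, 0.0, 1.0]   # focal planes of three hypothetical gates
true_z = 0.3
# Simulated noisy counts (Gaussian approximation to Poisson noise).
counts = [random.gauss(l, math.sqrt(l)) for l in expected_counts(true_z, centers)]
grid = [i / 100 for i in range(-200, 201)]
z_hat = mle_z(counts, centers, grid)
print(z_hat)  # estimate close to true_z
```

The same structure (response model, likelihood, maximization) carries over to the 3D time-correlated case, where the maximization is done over all three coordinates.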
Understanding The Neural Mechanisms Involved In Sensory Control Of Voice Production
Parkinson, Amy L.; Flagmeier, Sabina G.; Manes, Jordan L.; Larson, Charles R.; Rogers, Bill; Robin, Donald A.
2012-01-01
Auditory feedback is important for the control of voice fundamental frequency (F0). In the present study we used neuroimaging to identify regions of the brain responsible for sensory control of the voice. We used a pitch-shift paradigm where subjects respond to an alteration, or shift, of voice pitch auditory feedback with a reflexive change in F0. To determine the neural substrates involved in these audio-vocal responses, subjects underwent fMRI scanning while vocalizing with or without pitch-shifted feedback. The comparison of shifted and unshifted vocalization revealed activation bilaterally in the superior temporal gyrus (STG) in response to the pitch shifted feedback. We hypothesize that the STG activity is related to error detection by auditory error cells located in the superior temporal cortex and efference copy mechanisms whereby this region is responsible for the coding of a mismatch between actual and predicted voice F0. PMID:22406500
NASA Technical Reports Server (NTRS)
Chelton, D. B.
1986-01-01
Two tasks were performed: (1) determination of the accuracy of Seasat scatterometer, altimeter, and scanning multichannel microwave radiometer measurements of wind speed; and (2) application of Seasat altimeter measurements of sea level to study the spatial and temporal variability of geostrophic flow in the Antarctic Circumpolar Current. The results of the first task identified systematic errors in wind speeds estimated by all three satellite sensors. However, in all cases the errors are correctable, and corrected wind speeds agree between the three sensors to better than 1 m/s in 96-day, 2-deg latitude by 6-deg longitude averages. The second task resulted in development of a new technique for using altimeter sea level measurements to study the temporal variability of large-scale sea level variations. Application of the technique to the Antarctic Circumpolar Current yielded new information about the ocean circulation in this region of the ocean, which is poorly sampled by conventional ship-based measurements.
Complex phase error and motion estimation in synthetic aperture radar imaging
NASA Astrophysics Data System (ADS)
Soumekh, M.; Yang, H.
1991-06-01
Attention is given to a SAR wave equation-based system model that accurately represents the interaction of the impinging radar signal with the target to be imaged. The model is used to estimate the complex phase error across the synthesized aperture from the measured corrupted SAR data by combining the two wave equation models governing the collected SAR data at two temporal frequencies of the radar signal. The SAR system model shows that the motion of an object in a static scene results in coupled Doppler shifts in both the temporal frequency domain and the spatial frequency domain of the synthetic aperture. The velocity of the moving object is estimated through these two Doppler shifts. It is shown that once the dynamic target's velocity is known, its reconstruction can be formulated via a squint-mode SAR geometry with parameters that depend upon the dynamic target's velocity.