Proposed hardware architectures of particle filter for object tracking
NASA Astrophysics Data System (ADS)
Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED
2012-12-01
In this article, efficient hardware architectures for the particle filter (PF) are presented. We propose three different architectures for Sequential Importance Resampling Filter (SIRF) implementation. The first architecture is a two-step sequential PF machine, where particle sampling, weight calculation, and output calculation are carried out in parallel during the first step, followed by sequential resampling in the second step. For the weight computation step, a piecewise linear function is used instead of the classical exponential function. This decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than a serial, architecture. This second architecture targets a balance between hardware resources and the speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and a central unit. All the proposed architectures are captured in VHDL, synthesized using the Xilinx environment, and verified using the ModelSim simulator. Synthesis results confirmed the resource reduction and speed-up advantages of our architectures.
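A minimal software sketch of one SIRF step as described above, with parallelizable sampling/weighting/output followed by resampling. The piecewise-linear breakpoints and the choice of systematic resampling below are illustrative assumptions, not details taken from the article:

    import numpy as np

    def piecewise_linear_weight(residual, breakpoints=(0.0, 1.0, 2.0, 3.0),
                                values=(1.0, 0.6, 0.2, 0.0)):
        # Hardware-friendly stand-in for exp(-r^2/2): linear interpolation
        # between illustrative breakpoints (not the article's actual table).
        return np.interp(np.abs(residual), breakpoints, values)

    def sirf_step(particles, observation, transition, rng):
        # Step 1: sampling, weighting and output calculation (parallelizable).
        particles = transition(particles, rng)
        weights = piecewise_linear_weight(observation - particles) + 1e-12
        weights /= weights.sum()
        estimate = np.sum(weights * particles)          # filter output
        # Step 2: sequential (here: systematic) resampling.
        cdf = np.cumsum(weights)
        cdf[-1] = 1.0
        positions = (rng.random() + np.arange(len(particles))) / len(particles)
        return particles[np.searchsorted(cdf, positions)], estimate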
Propagating probability distributions of stand variables using sequential Monte Carlo methods
Jeffrey H. Gove
2009-01-01
A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor - corrector'...
A Sequential Ensemble Prediction System at Convection Permitting Scales
NASA Astrophysics Data System (ADS)
Milan, M.; Simmer, C.
2012-04-01
A Sequential Assimilation Method (SAM) following some aspects of particle filtering with resampling, also called SIR (Sequential Importance Resampling), is introduced and applied in the framework of an Ensemble Prediction System (EPS) for weather forecasting on convection-permitting scales, with a focus on precipitation forecasting. At this scale and beyond, the atmosphere increasingly exhibits chaotic behaviour and nonlinear state-space evolution due to convectively driven processes. One way to take full account of nonlinear state developments is provided by particle filter methods, whose basic idea is the representation of the model probability density function by a number of ensemble members weighted by their likelihood with respect to the observations. In particular, the particle filter with resampling abandons ensemble members (particles) with low weights and restores the original number of particles by adding multiple copies of the members with high weights. In our SIR-like implementation we replace the likelihood-based definition of the weights with a metric that quantifies the "distance" between the observed atmospheric state and the states simulated by the ensemble members. We also introduce a methodology to counteract filter degeneracy, i.e. the collapse of the simulated state space. To this end we propose a combination of resampling that takes account of simulated state-space clustering, and nudging. By keeping cluster representatives during resampling and filtering, the method maintains the potential for nonlinear system state development. We assume that a particle cluster with initially low likelihood may evolve into a state space with higher likelihood at a subsequent filter time, thus mimicking nonlinear system state developments (e.g. sudden convection initiation) and remedying timing errors for convection due to model errors and/or imperfect initial conditions. We apply a simplified version of the resampling (sketched below): the particles with the highest weights in each cluster are duplicated; during the model evolution, one particle of each pair evolves using the forward model, while the second particle is nudged towards the radar and satellite observations during its evolution based on the forward model.
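A hedged sketch of the cluster-preserving duplication idea described above, using k-means as a stand-in for whatever clustering the authors employ; the nudging of the second copy towards observations is only indicated by a flag here:

    import numpy as np
    from scipy.cluster.vq import kmeans2

    def cluster_preserving_resample(states, weights, n_clusters):
        # states: (n_particles, state_dim) array; weights: normalized weights.
        # The highest-weight particle of every cluster is duplicated so that
        # each cluster survives resampling; one copy of each pair is meant to
        # run freely, the other to be nudged towards radar/satellite data.
        _, labels = kmeans2(states, n_clusters, minit='++')
        survivors, nudge_flags = [], []
        for c in range(n_clusters):
            members = np.flatnonzero(labels == c)
            if members.size == 0:
                continue
            best = members[np.argmax(weights[members])]
            survivors += [best, best]
            nudge_flags += [False, True]        # second copy: nudged evolution
        return states[np.array(survivors)], np.array(nudge_flags)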
NASA Astrophysics Data System (ADS)
Yuan, Shenfang; Chen, Jian; Yang, Weibo; Qiu, Lei
2017-08-01
Fatigue crack growth prognosis is important for prolonging service time, improving safety, and reducing maintenance cost in many safety-critical systems, such as aircraft, wind turbines, bridges, and nuclear plants. Combining fatigue crack growth models with the particle filter (PF) method has proved promising for dealing with the uncertainties during fatigue crack growth and reaching a more accurate prognosis. However, research on prognosis methods that integrate on-line crack monitoring with the PF method, as well as experimental verification, is still lacking. Besides, the PF methods adopted so far are almost all sequential importance resampling-based PFs, which usually encounter sample impoverishment problems and hence perform poorly. To solve these problems, in this paper the piezoelectric transducer (PZT)-based active Lamb wave method is adopted for on-line crack monitoring. The deterministic resampling PF (DRPF), which can overcome the sample impoverishment problem, is proposed for fatigue crack growth prognosis. The proposed method is verified through fatigue tests of attachment lugs, which are an important type of joint component in aerospace systems.
NASA Astrophysics Data System (ADS)
Ruggeri, Paolo; Irving, James; Holliger, Klaus
2015-08-01
We critically examine the performance of sequential geostatistical resampling (SGR) as a model proposal mechanism for Bayesian Markov-chain-Monte-Carlo (MCMC) solutions to near-surface geophysical inverse problems. Focusing on a series of simple yet realistic synthetic crosshole georadar tomographic examples characterized by different numbers of data, levels of data error and degrees of model parameter spatial correlation, we investigate the efficiency of three different resampling strategies with regard to their ability to generate statistically independent realizations from the Bayesian posterior distribution. Quite importantly, our results show that, no matter which resampling strategy is employed, many of the examined test cases require an unreasonably high number of forward model runs to produce independent posterior samples, meaning that the SGR approach as currently implemented will not be computationally feasible for a wide range of problems. Although use of a novel gradual-deformation-based proposal method can help to alleviate these issues, it does not offer a full solution. Further, the nature of the SGR proposals strongly influences MCMC performance; however, no clear rule exists as to what set of inversion parameters and/or overall proposal acceptance rate will allow for the most efficient implementation. We conclude that although the SGR methodology is highly attractive, as it allows for the consideration of complex geostatistical priors as well as conditioning to hard and soft data, further developments are necessary in the context of novel or hybrid MCMC approaches for it to be considered generally suitable for near-surface geophysical inversions.
Burkness, Eric C; Hutchison, W D
2009-10-01
Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998-1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for upper and lower decision boundaries, sequential error rates (alpha, beta), and tally threshold were modified to determine parameter influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions: action threshold of 0.1 proportion of plants infested, tally threshold of 1, alpha = beta = 0.1, upper boundary of 0.15, lower boundary of 0.05, and resampling with replacement. We found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions. Moreover, management of T. ni by using binomial sequential sampling should provide a good balance between cost and reliability by minimizing sample size and maintaining a high level of correct decisions (>95%) to treat or not treat.
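The stop lines of Wald's sequential probability ratio test for a binomial (presence/absence) sampling plan follow directly from the boundary proportions and error rates quoted above; a minimal sketch using those values (p0 = 0.05, p1 = 0.15, alpha = beta = 0.1):

    import numpy as np

    def sprt_stop_lines(p0=0.05, p1=0.15, alpha=0.1, beta=0.1):
        # Wald's SPRT boundaries for binomial (presence/absence) counts, using
        # the lower/upper decision boundaries and error rates quoted above.
        # Returns the common slope and the two intercepts of the stop lines.
        denom = np.log((p1 * (1.0 - p0)) / (p0 * (1.0 - p1)))
        slope = np.log((1.0 - p0) / (1.0 - p1)) / denom
        h_upper = np.log((1.0 - beta) / alpha) / denom    # cross above: treat
        h_lower = np.log(beta / (1.0 - alpha)) / denom    # cross below: do not treat
        return slope, h_lower, h_upper

    slope, h0, h1 = sprt_stop_lines()
    n = np.arange(1, 51)                 # sample units examined so far
    lower_line = h0 + slope * n          # cumulative infested plants: stop, no treatment
    upper_line = h1 + slope * n          # cumulative infested plants: stop, treat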
Dunham, Kylee; Grand, James B.
2016-01-01
We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
Shahbi, M; Rajabpour, A
2017-08-01
Phthorimaea operculella Zeller is an important pest of potato in Iran. Spatial distribution and fixed-precision sequential sampling for population estimation of the pest on two potato cultivars, Arinda® and Sante®, were studied in two separate potato fields during two growing seasons (2013-2014 and 2014-2015). Spatial distribution was investigated by Taylor's power law and Iwao's patchiness. Results showed that the spatial distribution of eggs and larvae was random. In contrast to Iwao's patchiness, Taylor's power law provided a highly significant relationship between variance and mean density. Therefore, a fixed-precision sequential sampling plan was developed with Green's model at the two precision levels of 0.25 and 0.1. The optimum sample size on the Arinda® and Sante® cultivars at the 0.25 precision level ranged from 151 to 813 and from 149 to 802 leaves, respectively. At the 0.1 precision level, the sample sizes varied from 1054 to 5083 and from 1050 to 5100 leaves for the Arinda® and Sante® cultivars, respectively. Therefore, the optimum sample sizes for the cultivars, which have different resistance levels, were not significantly different. According to the calculated stop lines, sampling must be continued until the cumulative number of eggs + larvae reaches 15-16 or 96-101 individuals at the precision levels of 0.25 or 0.1, respectively. The performance of the sampling plan was validated by resampling analysis using Resampling for Validation of Sampling Plans (RVSP) software. The sampling plan provided in this study can be used to obtain a rapid estimate of the pest density with minimal effort.
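Green's fixed-precision stop line and the corresponding optimum sample size follow directly from Taylor's power law coefficients; a short sketch (the a and b values below are placeholders, not the coefficients estimated in this study):

    import numpy as np

    def greens_stop_line(n, a, b, D):
        # Green's fixed-precision stop line: cumulative count at which sampling
        # can stop after n sample units, given Taylor's power law coefficients
        # a, b (s^2 = a * m^b) and the desired precision D = SE/mean.
        return (D ** 2 / a) ** (1.0 / (b - 2.0)) * n ** ((b - 1.0) / (b - 2.0))

    def optimum_sample_size(mean_density, a, b, D):
        # Sample units needed to estimate a given mean density at precision D.
        return a * mean_density ** (b - 2.0) / D ** 2

    # Illustrative values only: a and b below are placeholders, not the
    # coefficients estimated for P. operculella in this study.
    n = np.arange(5, 101)
    stop_counts = greens_stop_line(n, a=2.0, b=1.4, D=0.25)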
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies have shown that data assimilation can reduce the parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings are recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. the Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single-time-step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization, with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques, with estimation of parameter sets evolving from one time step to another. The aims are (i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and (ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real-world application, the experiment is conducted in a lysimeter environment.
NASA Astrophysics Data System (ADS)
Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.
2017-09-01
The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as particle filters (PF) using two different resampling approaches, multinomial resampling and systematic resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering all of Australia. To evaluate the filters' performances and analyze their impact on model simulations, their estimates are validated against independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, i.e. the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which reduce the model's groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF along with systematic resampling successfully decreases the model estimation error by 23%. The two resampling schemes are sketched below.
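A minimal sketch of the two PF resampling schemes compared in the study, written generically for a vector of normalized particle weights:

    import numpy as np

    def multinomial_resample(weights, rng):
        # Draw particle indices i.i.d. with probability proportional to weight.
        return rng.choice(len(weights), size=len(weights), p=weights)

    def systematic_resample(weights, rng):
        # One random offset, then stratified picks along the weight CDF; lower
        # resampling variance than multinomial resampling for the same weights.
        n = len(weights)
        cdf = np.cumsum(weights)
        cdf /= cdf[-1]
        positions = (rng.random() + np.arange(n)) / n
        return np.searchsorted(cdf, positions)

    rng = np.random.default_rng(0)
    w = rng.random(1000)
    w /= w.sum()
    idx_multinomial = multinomial_resample(w, rng)
    idx_systematic = systematic_resample(w, rng)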
NASA Astrophysics Data System (ADS)
Guo, Jun; Lu, Siliang; Zhai, Chao; He, Qingbo
2018-02-01
An automatic bearing fault diagnosis method is proposed for permanent magnet synchronous generators (PMSGs), which are widely installed in wind turbines subject to low rotating speeds, speed fluctuations, and electrical device noise interference. The mechanical rotating angle curve is first extracted from the phase current of a PMSG by sequentially applying a series of algorithms. The synchronously sampled vibration signal of the faulty bearing is then resampled in the angular domain according to the obtained rotating phase information. Considering that the resampled vibration signal is still overwhelmed by heavy background noise, an adaptive stochastic resonance filter is applied to the resampled signal to enhance the fault indicator and facilitate bearing fault identification. Experiments on two types of faulty bearings with different fault sizes in a PMSG test rig verify the effectiveness of the proposed method. The proposed method is fully automated and thus shows potential for convenient, highly efficient and in situ bearing fault diagnosis for wind turbines subjected to harsh environments.
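The angular-domain resampling step can be sketched as generic computed-order tracking: the vibration signal is interpolated at instants corresponding to uniform increments of the extracted rotating phase. This is a sketch under that assumption; the paper's phase-extraction chain from the PMSG current is not reproduced here:

    import numpy as np

    def angular_resample(time, vibration, phase, samples_per_rev=256):
        # Interpolate the vibration signal at the time instants of uniform
        # shaft-angle increments derived from the (monotonically increasing)
        # rotating phase given in radians.
        total_revs = (phase[-1] - phase[0]) / (2.0 * np.pi)
        n_out = int(total_revs * samples_per_rev)
        uniform_angle = np.linspace(phase[0], phase[-1], n_out)
        resample_times = np.interp(uniform_angle, phase, time)
        return np.interp(resample_times, time, vibration)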
Communication Optimizations for a Wireless Distributed Prognostic Framework
NASA Technical Reports Server (NTRS)
Saha, Sankalita; Saha, Bhaskar; Goebel, Kai
2009-01-01
Distributed architecture for prognostics is an essential step in prognostic research in order to enable feasible real-time system health management. Communication overhead is an important design problem for such systems. In this paper we focus on communication issues faced in the distributed implementation of an important class of algorithms for prognostics - particle filters. In spite of being computation and memory intensive, particle filters lend themselves well to distributed implementation except for one significant step - resampling. We propose a new resampling scheme, called parameterized resampling, that attempts to reduce communication between collaborating nodes in a distributed wireless sensor network. Analysis and comparison with relevant resampling schemes are also presented. A battery health management system is used as a target application. A new resampling scheme for distributed implementation of particle filters has been discussed in this paper. Analysis and comparison of this new scheme with existing resampling schemes in the context of minimizing communication overhead have also been discussed. Our proposed new resampling scheme performs significantly better than other schemes by reducing both the communication message length and the total number of communication messages exchanged, while not compromising prediction accuracy and precision. Future work will explore the effects of the new resampling scheme on the overall computational performance of the whole system as well as full implementation of the new schemes on the Sun SPOT devices. Exploring different network architectures for efficient communication is an important future research direction as well.
Maximum a posteriori resampling of noisy, spatially correlated data
NASA Astrophysics Data System (ADS)
Goff, John A.; Jenkins, Chris; Calder, Brian
2006-08-01
In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application. We present here an alternative to filtering: a newly developed method for correcting noise in data by finding the "best" value given available information. The motivating rationale is that data points that are close to each other in space cannot differ by "too much," where "too much" is governed by the field covariance. Data with large uncertainties will frequently violate this condition and therefore ought to be corrected, or "resampled." Our solution for resampling is determined by the maximum of the a posteriori density function defined by the intersection of (1) the data error probability density function (pdf) and (2) the conditional pdf, determined by the geostatistical kriging algorithm applied to proximal data values. A maximum a posteriori solution can be computed sequentially going through all the data, but the solution depends on the order in which the data are examined. We approximate the global a posteriori solution by randomizing this order and taking the average. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum a posteriori resampling algorithm. The method is also applied to three marine geology/geophysics data examples, demonstrating the viability of the method for diverse applications: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) side-scan backscatter data from the Martha's Vineyard Coastal Observatory which are, as is typical for such data, affected by speckle noise. Compared to filtering, maximum a posteriori resampling provides an objective and optimal method for reducing noise, and better preservation of the statistical properties of the sampled field. The primary disadvantage is that maximum a posteriori resampling is a computationally expensive procedure.
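The core update described above, intersecting the data-error pdf with the kriging conditional pdf and taking the posterior maximum, reduces for Gaussian pdfs to a precision-weighted mean. A minimal one-point sketch of that step (the full algorithm visits all data sequentially in randomized order and averages the solutions over many orderings):

    import numpy as np

    def map_resample_value(d_obs, sigma_obs, d_krig, sigma_krig):
        # Maximum of the posterior formed by intersecting a Gaussian data-error
        # pdf N(d_obs, sigma_obs^2) with the kriging conditional pdf
        # N(d_krig, sigma_krig^2): the precision-weighted mean.
        w_obs = 1.0 / sigma_obs ** 2
        w_krig = 1.0 / sigma_krig ** 2
        return (w_obs * d_obs + w_krig * d_krig) / (w_obs + w_krig)

    # Example: a noisy sounding (10.0 +/- 2.0 m) whose neighbours predict
    # 8.0 +/- 0.5 m by kriging is pulled strongly towards the kriged value.
    print(map_resample_value(10.0, 2.0, 8.0, 0.5))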
NASA Astrophysics Data System (ADS)
Laloy, Eric; Linde, Niklas; Jacques, Diederik; Mariethoz, Grégoire
2016-04-01
The sequential geostatistical resampling (SGR) algorithm is a Markov chain Monte Carlo (MCMC) scheme for sampling from possibly non-Gaussian, complex spatially-distributed prior models such as geologic facies or categorical fields. In this work, we highlight the limits of standard SGR for posterior inference of high-dimensional categorical fields with realistically complex likelihood landscapes and benchmark a parallel tempering implementation (PT-SGR). Our proposed PT-SGR approach is demonstrated using synthetic (error corrupted) data from steady-state flow and transport experiments in categorical 7575- and 10,000-dimensional 2D conductivity fields. In both case studies, every SGR trial gets trapped in a local optimum while PT-SGR maintains a higher diversity in the sampled model states. The advantage of PT-SGR is most apparent in an inverse transport problem where the posterior distribution is made bimodal by construction. PT-SGR then converges towards the appropriate data misfit much faster than SGR and partly recovers the two modes. In contrast, for the same computational resources, SGR does not fit the data to the appropriate error level and hardly produces a locally optimal solution that looks visually similar to one of the two reference modes. Although PT-SGR clearly surpasses SGR in performance, our results also indicate that using a small number (16-24) of temperatures (and thus parallel cores) may not permit complete sampling of the posterior distribution by PT-SGR within a reasonable computational time (less than 1-2 weeks).
Kafeshani, Farzaneh Alizadeh; Rajabpour, Ali; Aghajanzadeh, Sirous; Gholamian, Esmaeil; Farkhari, Mohammad
2018-04-02
Aphis spiraecola Patch, Aphis gossypii Glover, and Toxoptera aurantii Boyer de Fonscolombe are three important aphid pests of citrus orchards. In this study, the spatial distributions of the aphids on two orange species, Satsuma mandarin and Thomson navel, were evaluated using Taylor's power law and Iwao's patchiness. In addition, a fixed-precision sequential sampling plan was developed for each species on each host plant with Green's model at precision levels of 0.25 and 0.1. The results revealed that spatial distribution parameters, and therefore the sampling plan, differed significantly according to aphid and host plant species. Taylor's power law provided a better fit for the data than Iwao's patchiness regression. Except for T. aurantii on Thomson navel orange, the spatial distribution patterns of the aphids were aggregative on both citrus species. T. aurantii had a regular dispersion pattern on Thomson navel orange. The optimum sample size of the aphids varied from 30 to 2061 and from 1 to 1622 shoots on Satsuma mandarin and Thomson navel orange, respectively, depending on aphid species and desired precision level. Calculated stop lines of the aphid species on Satsuma mandarin and Thomson navel orange ranged from 0.48 to 19 and from 0.19 to 80.4 aphids per 24 shoots, according to aphid species and desired precision level. The performance of the sampling plan was validated by resampling analysis using Resampling for Validation of Sampling Plans (RVSP) software. This sampling program is useful for IPM programs targeting these aphids in citrus orchards.
NASA Astrophysics Data System (ADS)
Granade, Christopher; Wiebe, Nathan
2017-08-01
A major challenge facing existing sequential Monte Carlo methods for parameter estimation in physics stems from their inability to robustly deal with experiments that have different mechanisms that yield the results with equivalent probability. We address this problem here by proposing a form of particle filtering that clusters the particles that comprise the sequential Monte Carlo approximation to the posterior before applying a resampler. Through a new graphical approach to thinking about such models, we are able to devise an artificial-intelligence-based strategy that automatically learns the shape and number of the clusters in the support of the posterior. We demonstrate the power of our approach by applying it to randomized gap estimation and a form of low-circuit-depth phase estimation where existing methods from the physics literature either exhibit much worse performance or even fail completely.
Wavelet analysis in ecology and epidemiology: impact of statistical tests
Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario
2014-01-01
Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the ‘beta-surrogate’ method. PMID:24284892
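For reference, the conventional "red noise" null model discussed above can be generated as AR(1) surrogates matched to the series' lag-1 autocorrelation; the sketch below shows that baseline approach only (the paper argues it is often inadequate and recommends data-driven surrogates instead):

    import numpy as np

    def ar1_surrogates(x, n_surrogates, rng):
        # AR(1) ('red noise') surrogates matching the lag-1 autocorrelation and
        # variance of the original series -- the kind of resampling the paper
        # finds inadequate compared with data-driven methods such as the
        # hidden-Markov-model or beta-surrogate approaches.
        x = np.asarray(x, dtype=float)
        xc = x - x.mean()
        phi = np.corrcoef(xc[:-1], xc[1:])[0, 1]          # lag-1 autocorrelation
        sigma = np.sqrt(np.var(xc) * (1.0 - phi ** 2))    # innovation std
        surr = np.empty((n_surrogates, x.size))
        surr[:, 0] = rng.normal(0.0, np.std(xc), n_surrogates)
        for t in range(1, x.size):
            surr[:, t] = phi * surr[:, t - 1] + rng.normal(0.0, sigma, n_surrogates)
        return surr + x.mean()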
Accelerated spike resampling for accurate multiple testing controls.
Harrison, Matthew T
2013-02-01
Controlling for multiple hypothesis tests using standard spike resampling techniques often requires prohibitive amounts of computation. Importance sampling techniques can be used to accelerate the computation. The general theory is presented, along with specific examples for testing differences across conditions using permutation tests and for testing pairwise synchrony and precise lagged-correlation between many simultaneously recorded spike trains using interval jitter.
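A plain Monte Carlo version of the interval-jitter null for pairwise synchrony is sketched below; the paper's contribution, accelerating this computation with importance sampling, is not reproduced here, and the bin widths and tolerances are illustrative:

    import numpy as np

    def interval_jitter(spikes, delta, rng):
        # Resample a spike train under the interval-jitter null: each spike is
        # redrawn uniformly within its own bin of width delta.
        bins = np.floor(np.asarray(spikes) / delta)
        return (bins + rng.random(len(spikes))) * delta

    def synchrony_count(a, b, tol):
        # Number of spikes in train a with a spike in train b within +/- tol.
        b = np.sort(b)
        idx = np.searchsorted(b, a)
        left = np.abs(a - b[np.clip(idx - 1, 0, len(b) - 1)])
        right = np.abs(a - b[np.clip(idx, 0, len(b) - 1)])
        return int(np.sum(np.minimum(left, right) <= tol))

    rng = np.random.default_rng(1)
    train_a = np.sort(rng.uniform(0.0, 10.0, 200))   # spike times in seconds
    train_b = np.sort(rng.uniform(0.0, 10.0, 200))
    observed = synchrony_count(train_a, train_b, tol=0.005)
    null = [synchrony_count(interval_jitter(train_a, 0.02, rng), train_b, 0.005)
            for _ in range(1000)]
    p_value = (1 + sum(n >= observed for n in null)) / (1 + len(null))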
Serra, Gerardo V.; Porta, Norma C. La; Avalos, Susana; Mazzuferi, Vilma
2013-01-01
The alfalfa caterpillar, Colias lesbia (Fabricius) (Lepidoptera: Pieridae), is a major pest of alfalfa, Medicago sativa L. (Fabales: Fabaceae), crops in Argentina. Its management is based mainly on chemical control of larvae whenever the larvae exceed the action threshold. To develop and validate fixed-precision sequential sampling plans, an intensive sampling programme for C. lesbia eggs was carried out in two alfalfa plots located in the Province of Córdoba, Argentina, from 1999 to 2002. Using Resampling for Validation of Sampling Plans software, 12 additional independent data sets were used to validate the sequential sampling plan with precision levels of 0.10 and 0.25 (SE/mean), respectively. For a range of mean densities of 0.10 to 8.35 eggs/sample, an average sample size of only 27 and 26 sample units was required to achieve a desired precision level of 0.25 for the sampling plans of Green and Kuno, respectively. As the precision level was increased to 0.10, average sample size increased to 161 and 157 sample units for the sampling plans of Green and Kuno, respectively. We recommend using Green's sequential sampling plan because it is less sensitive to changes in egg density. These sampling plans are a valuable tool for researchers to study population dynamics and to evaluate integrated pest management strategies. PMID:23909840
Generation of the global cloud free data set of MODIS
NASA Astrophysics Data System (ADS)
Oguro, Y.; Tsuchiya, K.
To extract temporal change of the land cover from remotely sensed data from space, the generation of a reliable cloud-free data set is the first-priority item. With the objectives of generating accurate global basic data and of finding the effects of spectral and spatial resolution differences and observation time, an attempt is made to generate a reliable global cloud-free data set of Terra and Aqua MODIS utilizing personal computers. Out of 36 bands, seven bands with spectral features similar to those of Landsat TM (i.e., Bands 1 through 7) are selected. These bands cover the most important spectra to derive land-cover features. The procedure of the data set generation is as follows: (1) download the global Terra and Aqua MODIS daytime data (MOD02 Level-1B Calibrated Geolocation Data Set) at 250 m (Bands 1 and 2) and 500 m (Bands 3 through 7) resolution from the NASA web site; (2) separate the data into several BSQ (Band SeQuential) image files and several text files of pixel geolocation information; (3) since the geolocation information is given for pixels at intervals of several kilometres, resample the data, based on this information, at 1/2 and 1/4 degree intervals of latitude and longitude, so that the resampled pixels are distributed in the latitude-longitude plane at 1/4 degree (high resolution) and 1/2 degree (low resolution) intervals; (4) compose a global data set for one day; (5) compute NDVI for each pixel; (6) compare the NDVI values of successive days and keep the larger NDVI, and at the same time keep the values of each band of the day with the larger NDVI.
Forensic identification of resampling operators: A semi non-intrusive approach.
Cao, Gang; Zhao, Yao; Ni, Rongrong
2012-03-10
Recently, several new resampling operators have been proposed that successfully invalidate the existing resampling detectors. However, the reliability of such anti-forensic techniques is unknown and needs to be investigated. In this paper, we focus on the forensic identification of digital image resampling operators, including the traditional type and the anti-forensic type which hides the trace of traditional resampling. Various resampling algorithms, including geometric distortion (GD)-based, dual-path-based and postprocessing-based algorithms, are investigated. The identification is achieved in a semi non-intrusive manner, supposing that the resampling software can be accessed. Given an input pattern of a monotone signal, the polarity aberration of the first derivative of the GD-based resampled signal is analyzed theoretically and measured by an effective feature metric. Dual-path-based and postprocessing-based resampling can also be identified by feeding proper test patterns. Experimental results on various parameter settings demonstrate the effectiveness of the proposed approach.
NASA Technical Reports Server (NTRS)
Nearing, Grey S.; Crow, Wade T.; Thorp, Kelly R.; Moran, Mary S.; Reichle, Rolf H.; Gupta, Hoshin V.
2012-01-01
Observing system simulation experiments were used to investigate ensemble Bayesian state updating data assimilation of observations of leaf area index (LAI) and soil moisture (theta) for the purpose of improving single-season wheat yield estimates with the Decision Support System for Agrotechnology Transfer (DSSAT) CropSim-Ceres model. Assimilation was conducted in an energy-limited environment and a water-limited environment. Modeling uncertainty was prescribed to weather inputs, soil parameters and initial conditions, and cultivar parameters and through perturbations to model state transition equations. The ensemble Kalman filter and the sequential importance resampling filter were tested for the ability to attenuate effects of these types of uncertainty on yield estimates. LAI and theta observations were synthesized according to characteristics of existing remote sensing data, and effects of observation error were tested. Results indicate that the potential for assimilation to improve end-of-season yield estimates is low. Limitations are due to a lack of root zone soil moisture information, error in LAI observations, and a lack of correlation between leaf and grain growth.
Thompson, Steven K
2006-12-01
A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and of the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.
NASA Astrophysics Data System (ADS)
Plaza Guingla, D. A.; Pauwels, V. R.; De Lannoy, G. J.; Matgen, P.; Giustarini, L.; De Keyser, R.
2012-12-01
The objective of this work is to analyze the improvement in the performance of the particle filter by including a resample-move step or by using a modified Gaussian particle filter. Specifically, the standard particle filter structure is altered by the inclusion of the Markov chain Monte Carlo move step. The second choice adopted in this study uses the moments of an ensemble Kalman filter analysis to define the importance density function within the Gaussian particle filter structure. Both variants of the standard particle filter are used in the assimilation of densely sampled discharge records into a conceptual rainfall-runoff model. In order to quantify the obtained improvement, discharge root mean square errors are compared for different particle filters, as well as for the ensemble Kalman filter. First, a synthetic experiment is carried out. The results indicate that the performance of the standard particle filter can be improved by the inclusion of the resample-move step, but its effectiveness is limited to situations with limited particle impoverishment. The results also show that the modified Gaussian particle filter outperforms the rest of the filters. Second, a real experiment is carried out in order to validate the findings from the synthetic experiment. The addition of the resample-move step does not show a considerable improvement due to performance limitations in the standard particle filter with real data. On the other hand, when an optimal importance density function is used in the Gaussian particle filter, the results show a considerably improved performance of the particle filter.
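A minimal sketch of the resample-move idea described above: systematic resampling followed by one Metropolis move per particle. For brevity the acceptance ratio uses the likelihood only, which is a simplification of the full Markov chain Monte Carlo move kernel:

    import numpy as np

    def resample_move_step(particles, weights, log_likelihood, rng, step=0.1):
        # Systematic resampling, then one Metropolis random-walk move per
        # particle to counter impoverishment after the resampling step.
        n = len(particles)
        cdf = np.cumsum(weights)
        cdf /= cdf[-1]
        positions = (rng.random() + np.arange(n)) / n
        particles = particles[np.searchsorted(cdf, positions)].copy()
        proposal = particles + step * rng.standard_normal(n)
        accept = np.log(rng.random(n)) < log_likelihood(proposal) - log_likelihood(particles)
        particles[accept] = proposal[accept]
        return particles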
NASA Astrophysics Data System (ADS)
Fan, Y. R.; Huang, G. H.; Baetz, B. W.; Li, Y. P.; Huang, K.
2017-06-01
In this study, a copula-based particle filter (CopPF) approach was developed for sequential hydrological data assimilation by considering parameter correlation structures. In CopPF, multivariate copulas are proposed to reflect parameter interdependence before the resampling procedure with new particles then being sampled from the obtained copulas. Such a process can overcome both particle degeneration and sample impoverishment. The applicability of CopPF is illustrated with three case studies using a two-parameter simplified model and two conceptual hydrologic models. The results for the simplified model indicate that model parameters are highly correlated in the data assimilation process, suggesting a demand for full description of their dependence structure. Synthetic experiments on hydrologic data assimilation indicate that CopPF can rejuvenate particle evolution in large spaces and thus achieve good performances with low sample size scenarios. The applicability of CopPF is further illustrated through two real-case studies. It is shown that, compared with traditional particle filter (PF) and particle Markov chain Monte Carlo (PMCMC) approaches, the proposed method can provide more accurate results for both deterministic and probabilistic prediction with a sample size of 100. Furthermore, the sample size would not significantly influence the performance of CopPF. Also, the copula resampling approach dominates parameter evolution in CopPF, with more than 50% of particles sampled by copulas in most sample size scenarios.
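A hedged sketch of the copula resampling idea: fit a copula to the weighted parameter particles and draw fresh particles from it. Only the Gaussian copula case is shown (CopPF considers multivariate copulas more generally), and the marginals are handled here by weighted empirical quantiles:

    import numpy as np
    from scipy import stats

    def gaussian_copula_resample(params, weights, n_new, rng):
        # params: (n_particles, n_params) array with n_params >= 2;
        # weights: particle weights (need not be normalized).
        n, d = params.shape
        u = np.column_stack([stats.rankdata(params[:, j]) / (n + 1.0)
                             for j in range(d)])
        z = stats.norm.ppf(u)                       # normal scores
        corr = np.corrcoef(z, rowvar=False)         # copula correlation matrix
        z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_new)
        u_new = np.clip(stats.norm.cdf(z_new), 1e-12, 1.0 - 1e-12)
        new = np.empty((n_new, d))
        for j in range(d):                          # weighted marginal quantiles
            order = np.argsort(params[:, j])
            cdf = np.cumsum(weights[order])
            cdf /= cdf[-1]
            new[:, j] = params[order[np.searchsorted(cdf, u_new[:, j])], j]
        return new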
Lessio, Federico; Alma, Alberto
2006-04-01
The spatial distribution of the nymphs of Scaphoideus titanus Ball (Homoptera: Cicadellidae), the vector of grapevine flavescence dorée (Candidatus Phytoplasma vitis, 16Sr-V), was studied by applying Taylor's power law. Studies were conducted from 2002 to 2005, in organic and conventional vineyards of Piedmont, northern Italy. Minimum sample size and fixed-precision-level stop lines were calculated to develop appropriate sampling plans. Model validation was performed, using independent field data, by means of Resampling Validation of Sample Plans (RVSP) resampling software. The nymphal distribution, analyzed via Taylor's power law, was aggregated, with b = 1.49. A sample of 32 plants was adequate at low pest densities with a precision level of D0 = 0.30; but for a more accurate estimate (D0 = 0.10), the required sample size needs to be 292 plants. Green's fixed-precision-level stop lines seem to be more suitable for field sampling: RVSP simulations of this sampling plan showed precision levels very close to the desired levels. However, at a prefixed precision level of 0.10, sampling would become too time-consuming, whereas a precision level of 0.25 is easily achievable. How these results could influence the correct application of the compulsory control of S. titanus and flavescence dorée in Italy is discussed.
NASA Technical Reports Server (NTRS)
Benner, R.; Young, W.
1977-01-01
The results of an experimental study conducted to determine the geometric and radiometric effects of double resampling (bi-resampling) performed on image data in the process of performing map projection transformations are reported.
Dunham, Kylee; Grand, James B.
2016-10-11
The Alaskan breeding population of Steller’s eiders (Polysticta stelleri) was listed as threatened under the Endangered Species Act in 1997 in response to perceived declines in abundance throughout their breeding and nesting range. Aerial surveys suggest the breeding population is small and highly variable in number, with zero birds counted in 5 of the last 25 years. Research was conducted to evaluate competing population process models of Alaskan-breeding Steller’s eiders through comparison of model projections to aerial survey data. To evaluate model efficacy and estimate demographic parameters, a Bayesian state-space modeling framework was used and each model was fit to counts from the annual aerial surveys, using sequential importance sampling and resampling. The results strongly support that the Alaskan breeding population experiences population level nonbreeding events and is open to exchange with the larger Russian-Pacific breeding population. Current recovery criteria for the Alaskan breeding population rely heavily on the ability to estimate population viability. The results of this investigation provide an informative model of the population process that can be used to examine future population states and assess the population in terms of the current recovery and reclassification criteria.
Estimation of stochastic volatility with long memory for index prices of FTSE Bursa Malaysia KLCI
NASA Astrophysics Data System (ADS)
Chen, Kho Chia; Bahar, Arifah; Kane, Ibrahim Lawal; Ting, Chee-Ming; Rahman, Haliza Abd
2015-02-01
In recent years, modeling of long memory properties or fractionally integrated processes in stochastic volatility has been applied to financial time series. A time series with structural breaks can generate a strong persistence in the autocorrelation function, which is an observed behaviour of a long memory process. This paper considers the structural breaks in the data in order to determine true long memory time series data. Unlike usual short memory models for log volatility, the fractional Ornstein-Uhlenbeck process is neither a Markovian process nor can it be easily transformed into a Markovian process. This makes the likelihood evaluation and parameter estimation for the long memory stochastic volatility (LMSV) model challenging tasks. The drift and volatility parameters of the fractional Ornstein-Uhlenbeck model are estimated separately using the least squares estimator (lse) and the quadratic generalized variations (qgv) method, respectively. Finally, the empirical distribution of the unobserved volatility is estimated using particle filtering with the sequential importance sampling-resampling (SIR) method. The mean square error (MSE) between the estimated and empirical volatility indicates that the performance of the model for the index prices of FTSE Bursa Malaysia KLCI is fairly good.
Population Annealing Monte Carlo for Frustrated Systems
NASA Astrophysics Data System (ADS)
Amey, Christopher; Machta, Jonathan
Population annealing is a sequential Monte Carlo algorithm that efficiently simulates equilibrium systems with rough free energy landscapes such as spin glasses and glassy fluids. A large population of configurations is initially thermalized at high temperature and then cooled to low temperature according to an annealing schedule. The population is kept in thermal equilibrium at every annealing step via resampling configurations according to their Boltzmann weights. Population annealing is comparable to parallel tempering in terms of efficiency, but has several distinct and useful features. In this talk I will give an introduction to population annealing and present recent progress in understanding its equilibration properties and optimizing it for spin glasses. Results from large-scale population annealing simulations for the Ising spin glass in 3D and 4D will be presented. NSF Grant DMR-1507506.
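A generic sketch of the resampling step described above, copying population members in proportion to their relative Boltzmann weight when the temperature is lowered (not the authors' optimized spin-glass code):

    import numpy as np

    def population_annealing_step(configs, energies, beta_old, beta_new, rng):
        # configs: array of shape (n_members, ...); energies: array of length
        # n_members.  Members are duplicated or culled with expected copy
        # numbers proportional to exp(-(beta_new - beta_old) * E), keeping the
        # expected population size fixed.
        w = np.exp(-(beta_new - beta_old) * (energies - energies.min()))
        w *= len(configs) / w.sum()                  # expected copy numbers
        copies = np.floor(w).astype(int) + (rng.random(len(w)) < (w - np.floor(w)))
        return np.repeat(configs, copies, axis=0)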
A multistate dynamic site occupancy model for spatially aggregated sessile communities
Fukaya, Keiichi; Royle, J. Andrew; Okuda, Takehiro; Nakaoka, Masahiro; Noda, Takashi
2017-01-01
Estimation of transition probabilities of sessile communities seems easy in principle but may still be difficult in practice because resampling error (i.e. a failure to resample exactly the same location at fixed points) may cause significant estimation bias. Previous studies have developed novel analytical methods to correct for this estimation bias. However, they did not consider the local structure of community composition induced by the aggregated distribution of organisms that is typically observed in sessile assemblages and is very likely to affect observations. We developed a multistate dynamic site occupancy model to estimate transition probabilities that accounts for resampling errors associated with local community structure. The model applies a nonparametric multivariate kernel smoothing methodology to the latent occupancy component to estimate the local state composition near each observation point, which is assumed to determine the probability distribution of data conditional on the occurrence of resampling error. By using computer simulations, we confirmed that an observation process that depends on local community structure may bias inferences about transition probabilities. By applying the proposed model to a real data set of intertidal sessile communities, we also showed that estimates of transition probabilities and of the properties of community dynamics may differ considerably when spatial dependence is taken into account. Results suggest the importance of accounting for resampling error and local community structure for developing management plans that are based on Markovian models. Our approach provides a solution to this problem that is applicable to broad sessile communities. It can even accommodate an anisotropic spatial correlation of species composition, and may also serve as a basis for inferring complex nonlinear ecological dynamics.
NASA Astrophysics Data System (ADS)
Baisden, W. T.; Prior, C.; Lambie, S.; Tate, K.; Bruhn, F.; Parfitt, R.; Schipper, L.; Wilde, R. H.; Ross, C.
2006-12-01
Soil organic matter contains more C than terrestrial biomass and atmospheric CO2 combined, and reacts to climate and land-use change on timescales requiring long-term experiments or monitoring. The direction and uncertainty of soil C stock changes has been difficult to predict and incorporate in decision support tools for climate change policies. Moreover, standardization of approaches has been difficult because historic methods of soil sampling have varied regionally, nationally and temporally. The most common and uniform type of historic sampling is soil profiles, which have commonly been collected, described and archived in the course of both soil survey studies and research. Resampling soil profiles has considerable utility in carbon monitoring and in parameterizing models to understand the ecosystem responses to global change. Recent work spanning seven soil orders in New Zealand's grazed pastures has shown that, averaged over approximately 20 years, 31 soil profiles lost 106 g C m-2 y-1 (p=0.01) and 9.1 g N m-2 y-1 (p=0.002). These losses are unexpected and appear to extend well below the upper 30 cm of soil. Following on these recent results, additional advantages of resampling soil profiles can be emphasized. One of the most powerful applications afforded by resampling archived soils is the use of the pulse label of radiocarbon injected into the atmosphere by thermonuclear weapons testing circa 1963 as a tracer of soil carbon dynamics. This approach allows estimation of the proportion of soil C that is `passive' or `inert' and therefore unlikely to respond to global change. Evaluation of resampled soil horizons in a New Zealand soil chronosequence confirms that the approach yields consistent values for the proportion of `passive' soil C, reaching 25% of surface horizon soil C over 12,000 years. Across whole profiles, radiocarbon data suggest that the proportion of `passive' C in New Zealand grassland soil can be less than 40% of total soil C. Below 30 cm, 1 kg C m-2 or more may be reactive on decadal timescales, supporting evidence of soil C losses from throughout the soil profiles. Information from resampled soil profiles can be combined with additional contemporary measurements to test hypotheses about mechanisms for soil C changes. For example, Δ14C in excess of 200‰ in water extractable dissolved organic C (DOC) from surface soil horizons supports the hypothesis that decadal movement of DOC represents an important translocation of soil C. These preliminary results demonstrate that resampling whole soil profiles can support substantial progress in C cycle science, ranging from updating operational C accounting systems to the frontiers of research. Resampling can be complementary or superior to fixed-depth interval sampling of surface soil layers. Resampling must however be undertaken with relative urgency to maximize the potential interpretive power of bomb-derived radiocarbon.
McGraw, Benjamin A; Koppenhöfer, Albrecht M
2009-06-01
Binomial sequential sampling plans were developed to forecast larval damage to golf course turfgrass by the weevil Listronotus maculicollis Kirby (Coleoptera: Curculionidae) and to aid in the development of integrated pest management programs for the weevil. Populations of emerging overwintered adults were sampled over a 2-yr period to determine the relationship between adult counts, larval density, and turfgrass damage. Larval density and the composition of preferred host plants (Poa annua L.) significantly affected the expression of turfgrass damage. Multiple regression indicates that damage may occur in moderately mixed P. annua stands with as few as 10 larvae per 0.09 m2. However, >150 larvae were required before damage became apparent in pure Agrostis stolonifera L. plots. Adult counts during peaks in emergence as well as cumulative counts across the emergence period were significantly correlated with future densities of larvae. Eight binomial sequential sampling plans based on two tally thresholds for classifying infestation (T = 1 and 2 adults) and four adult density thresholds (0.5, 0.85, 1.15, and 1.35 per 3.34 m2) were developed to forecast the likelihood of turfgrass damage using adult counts during peak emergence. Resampling for Validation of Sample Plans software was used to validate the sampling plans with field-collected data sets. All sampling plans were found to deliver accurate classifications (correct decisions were made between 84.4 and 96.8%) in a practical timeframe (average sampling cost < 22.7 min).
NASA Astrophysics Data System (ADS)
Wicaksono, Pramaditya; Salivian Wisnu Kumara, Ignatius; Kamal, Muhammad; Afif Fauzan, Muhammad; Zhafarina, Zhafirah; Agus Nurswantoro, Dwi; Noviaris Yogyantoro, Rifka
2017-12-01
Although spectrally different, seagrass species may not be mappable from multispectral remote sensing images due to the limited spectral resolution of such images. Therefore, it is important to quantitatively assess the possibility of mapping seagrass species using multispectral images by resampling seagrass species spectra to multispectral bands. Seagrass species spectra were measured on harvested seagrass leaves. The spectral resolution of the multispectral images used in this research was adopted from WorldView-2, Quickbird, Sentinel-2A, ASTER VNIR, and Landsat 8 OLI. These images are widely available and can serve as a good representative set and baseline for previous or future remote sensing images. The seagrass species considered in this research are Enhalus acoroides (Ea), Thalassodendron ciliatum (Tc), Thalassia hemprichii (Th), Cymodocea rotundata (Cr), Cymodocea serrulata (Cs), Halodule uninervis (Hu), Halodule pinifolia (Hp), Syringodum isoetifolium (Si), Halophila ovalis (Ho), and Halophila minor (Hm). The multispectral resampling analysis indicates that the resampled spectra exhibit a shape and pattern similar to the original spectra but are less precise, and they lose the unique absorption features of the seagrass species. Relying on spectral bands alone, multispectral images are not effective in mapping these seagrass species individually, which is shown by the poor and inconsistent results of the Spectral Angle Mapper (SAM) classification technique in classifying seagrass species using seagrass species spectra as pure endmembers. Only Sentinel-2A produced an acceptable classification result.
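A minimal sketch of the two steps discussed above: resampling a measured leaf spectrum to broad bands, and comparing band spectra with the SAM angle. A boxcar band response and the band edges below are assumptions; real sensors such as WorldView-2 or Sentinel-2A have non-rectangular relative spectral responses:

    import numpy as np

    def resample_to_bands(wavelengths, reflectance, band_edges):
        # Resample a hyperspectral reflectance curve to broad bands by simple
        # band averaging over each [lo, hi) wavelength interval.
        return np.array([reflectance[(wavelengths >= lo) & (wavelengths < hi)].mean()
                         for lo, hi in band_edges])

    def spectral_angle(a, b):
        # Spectral Angle Mapper distance (radians) between two band spectra.
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    # Hypothetical visible/NIR band edges in nm, for illustration only.
    edges = [(450, 510), (510, 580), (630, 690), (770, 895)]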
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manoli, Gabriele; Rossi, Matteo
The modeling of unsaturated groundwater flow is affected by a high degree of uncertainty related to both measurement and model errors. Geophysical methods such as Electrical Resistivity Tomography (ERT) can provide useful indirect information on the hydrological processes occurring in the vadose zone. In this paper, we propose and test an iterated particle filter method to solve the coupled hydrogeophysical inverse problem. We focus on an infiltration test monitored by time-lapse ERT and modeled using Richards equation. The goal is to identify hydrological model parameters from ERT electrical potential measurements. Traditional uncoupled inversion relies on the solution of two sequential inverse problems, the first one applied to the ERT measurements, the second one to Richards equation. This approach does not ensure an accurate quantitative description of the physical state, typically violating mass balance. To avoid one of these two inversions and incorporate in the process more physical simulation constraints, we cast the problem within the framework of a SIR (Sequential Importance Resampling) data assimilation approach that uses a Richards equation solver to model the hydrological dynamics and a forward ERT simulator combined with Archie's law to serve as measurement model. ERT observations are then used to update the state of the system as well as to estimate the model parameters and their posterior distribution. The limitations of the traditional sequential Bayesian approach are investigated and an innovative iterative approach is proposed to estimate the model parameters with high accuracy. The numerical properties of the developed algorithm are verified on both homogeneous and heterogeneous synthetic test cases based on a real-world field experiment.
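A simplified sketch of one SIR weighting-and-resampling step in such a coupled scheme. For brevity it compares Archie-derived resistivities directly, whereas the paper runs a full forward ERT simulation to electrical potentials; the petrophysical constants are illustrative:

    import numpy as np

    def archie_resistivity(saturation, porosity, rho_w, a=1.0, m=2.0, n=2.0):
        # Archie's law: bulk resistivity from porosity and water saturation.
        # a, m, n are illustrative petrophysical constants, not site values.
        return a * rho_w * porosity ** (-m) * saturation ** (-n)

    def sir_weight_update(saturations, porosity, rho_w, rho_obs, sigma_obs, rng):
        # saturations: (n_particles, n_cells) simulated states; rho_obs:
        # ERT-derived resistivities per cell; sigma_obs: observation error std.
        n_p = saturations.shape[0]
        rho_sim = archie_resistivity(saturations, porosity, rho_w)
        logw = -0.5 * np.sum((rho_sim - rho_obs) ** 2, axis=1) / sigma_obs ** 2
        w = np.exp(logw - logw.max())
        cdf = np.cumsum(w)
        cdf /= cdf[-1]
        positions = (rng.random() + np.arange(n_p)) / n_p
        return saturations[np.searchsorted(cdf, positions)]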
Image re-sampling detection through a novel interpolation kernel.
Hilal, Alaa
2018-06-01
Image re-sampling involved in re-size and rotation transformations is an essential building block of typical digital image alterations. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the most frequent interpolation kernels used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. This process involves minimizing an error function using a gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.
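The paper's kernel itself is not reproduced here, but the fitting idea can be sketched in Python: the snippet below assumes a hypothetical five-parameter kernel (a windowed, phase-shifted cosine, purely an illustrative form) and recovers its parameters by gradient descent on a squared-error function, mirroring the kind of minimization described above.

    import numpy as np

    def model_kernel(x, a, w, phi, sigma, d):
        # Hypothetical five-parameter kernel: amplitude a, angular frequency w,
        # phase phi, standard deviation sigma, and duration d.
        window = (np.abs(x) <= d).astype(float)
        return a * np.cos(w * x + phi) * np.exp(-x**2 / (2 * sigma**2)) * window

    def fit_kernel(x, target, theta0, lr=1e-3, n_iter=5000, eps=1e-5):
        # Fit the parameters by gradient descent on the squared error,
        # using simple finite-difference gradients.
        theta = np.array(theta0, dtype=float)
        for _ in range(n_iter):
            err0 = np.sum((model_kernel(x, *theta) - target) ** 2)
            grad = np.zeros_like(theta)
            for i in range(len(theta)):
                t = theta.copy()
                t[i] += eps
                grad[i] = (np.sum((model_kernel(x, *t) - target) ** 2) - err0) / eps
            theta -= lr * grad
        return theta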
System for monitoring non-coincident, nonstationary process signals
Gross, Kenneth C.; Wegerich, Stephan W.
2005-01-04
An improved system for monitoring non-coincident, non-stationary process signals. The mean, variance, and length of a reference signal are defined by an automated system, followed by identification of the leading and falling edges of a monitored signal and the length of the monitored signal. The monitored signal is compared to the reference signal, and the monitored signal is resampled in accordance with the reference signal. The reference signal is then correlated with the resampled monitored signal such that the reference signal and the resampled monitored signal are coincident in time with each other. The resampled monitored signal is then compared to the reference signal to determine whether the resampled monitored signal is within a set of predesignated operating conditions.
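A rough Python sketch of the resample-then-correlate idea follows; it is not the patented system's algorithm, the edge-detection and operating-condition checks are omitted, and the circular shift used for alignment is a simplification.

    import numpy as np

    def align_to_reference(reference, monitored):
        # Resample the monitored signal to the reference length, then shift it
        # so that it is coincident in time with the reference (peak cross-correlation).
        n = len(reference)
        resampled = np.interp(np.linspace(0, len(monitored) - 1, n),
                              np.arange(len(monitored)), monitored)
        xcorr = np.correlate(resampled - resampled.mean(),
                             reference - reference.mean(), mode="full")
        lag = xcorr.argmax() - (n - 1)            # delay of monitored vs. reference
        aligned = np.roll(resampled, -lag)        # circular shift for simplicity
        residual = aligned - reference            # compared against predesignated limits
        return aligned, lag, residual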
Resampling: A Marriage of Computers and Statistics. ERIC/TM Digest.
ERIC Educational Resources Information Center
Rudner, Lawrence M.; Shafer, Mary Morello
Advances in computer technology are making it possible for educational researchers to use simpler statistical methods to address a wide range of questions with smaller data sets and fewer, and less restrictive, assumptions. This digest introduces computationally intensive statistics, collectively called resampling techniques. Resampling is a…
An add-in implementation of the RESAMPLING syntax under Microsoft EXCEL.
Meineke, I
2000-10-01
The RESAMPLING syntax defines a set of powerful commands, which allow the programming of probabilistic statistical models with few, easily memorized statements. This paper presents an implementation of the RESAMPLING syntax using Microsoft EXCEL with Microsoft WINDOWS(R) as a platform. Two examples are given to demonstrate typical applications of RESAMPLING in biomedicine. Details of the implementation with special emphasis on the programming environment are discussed at length. The add-in is available electronically to interested readers upon request. The use of the add-in facilitates numerical statistical analyses of data from within EXCEL in a comfortable way.
A Sequential Monte Carlo Approach for Streamflow Forecasting
NASA Astrophysics Data System (ADS)
Hsu, K.; Sorooshian, S.
2008-12-01
As alternatives to traditional physically-based models, Artificial Neural Network (ANN) models offer some advantages in that they do not require a precise quantitative description of the process mechanism and can be trained directly from data. In this study, an ANN model was used to generate one-day-ahead streamflow forecasts from precipitation input over a catchment. The ANN model parameters were trained using a Sequential Monte Carlo (SMC) approach, namely the Regularized Particle Filter (RPF). SMC approaches are known for their ability to track the states and parameters of a nonlinear dynamic process based on Bayes' rule and effective sampling and resampling strategies. In this study, five years of daily rainfall and streamflow measurements were used for model training. Variable RPF sample sizes, from 200 to 2000, were tested. The results show that, after 1000 RPF samples, the simulation statistics, in terms of correlation coefficient, root mean square error, and bias, stabilized. It is also shown that the forecasted daily flows fit the observations very well, with correlation coefficients higher than 0.95. The results of the RPF simulations were also compared with those from the popular back-propagation ANN training approach. The pros and cons of the SMC approach and the traditional back-propagation approach will be discussed.
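As a hedged illustration of this parameter-tracking idea (not the authors' RPF code), the sketch below treats the ANN weights as particles; ann_forecast is a placeholder for the network's one-day-ahead prediction, and the Gaussian jitter after resampling stands in for the regularization kernel.

    import numpy as np

    def rpf_step(theta, weights, ann_forecast, x_t, q_obs, obs_std, rng, jitter=0.02):
        # One regularized-particle-filter step over ANN parameter vectors theta (N x d).
        q_sim = np.array([ann_forecast(p, x_t) for p in theta])     # one-day-ahead flows
        log_w = -0.5 * ((q_sim - q_obs) / obs_std) ** 2
        weights = weights * np.exp(log_w - log_w.max())
        weights = weights / weights.sum()
        idx = rng.choice(len(theta), size=len(theta), p=weights)    # resample
        theta = theta[idx]
        # regularization: perturb resampled particles with a smoothing kernel
        theta = theta + jitter * theta.std(axis=0) * rng.standard_normal(theta.shape)
        return theta, np.full(len(theta), 1.0 / len(theta))

The jitter keeps resampled particles from collapsing onto identical parameter vectors, which is the practical motivation for the regularized variant.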
ERIC Educational Resources Information Center
Hand, Michael L.
1990-01-01
Use of the bootstrap resampling technique (BRT) is assessed in its application to resampling analysis associated with measurement of payment allocation errors by federally funded Family Assistance Programs. The BRT is applied to a food stamp quality control database in Oregon. This analysis highlights the outlier-sensitivity of the…
Application of a New Resampling Method to SEM: A Comparison of S-SMART with the Bootstrap
ERIC Educational Resources Information Center
Bai, Haiyan; Sivo, Stephen A.; Pan, Wei; Fan, Xitao
2016-01-01
Among the commonly used resampling methods of dealing with small-sample problems, the bootstrap enjoys the widest applications because it often outperforms its counterparts. However, the bootstrap still has limitations when its operations are contemplated. Therefore, the purpose of this study is to examine an alternative, new resampling method…
Assessment of Person Fit Using Resampling-Based Approaches
ERIC Educational Resources Information Center
Sinharay, Sandip
2016-01-01
De la Torre and Deng suggested a resampling-based approach for person-fit assessment (PFA). The approach involves the use of the [math equation unavailable] statistic, a corrected expected a posteriori estimate of the examinee ability, and the Monte Carlo (MC) resampling method. The Type I error rate of the approach was closer to the nominal level…
Incorporating advanced language models into the P300 speller using particle filtering
NASA Astrophysics Data System (ADS)
Speier, W.; Arnold, C. W.; Deshpande, A.; Knall, J.; Pouratian, N.
2015-08-01
Objective. The P300 speller is a common brain-computer interface (BCI) application designed to communicate language by detecting event-related potentials in a subject's electroencephalogram signal. Information about the structure of natural language can be valuable for BCI communication, but attempts to use this information have thus far been limited to rudimentary n-gram models. While more sophisticated language models are prevalent in the natural language processing literature, current BCI analysis methods based on dynamic programming cannot handle their complexity. Approach. Sampling methods can overcome this complexity by estimating the posterior distribution without searching the entire state space of the model. In this study, we implement sequential importance resampling, a commonly used particle filtering (PF) algorithm, to integrate a probabilistic automaton language model. Main result. This method was first evaluated offline on a dataset of 15 healthy subjects, showing significant increases in speed and accuracy when compared to standard classification methods as well as a recently published approach using a hidden Markov model (HMM). An online pilot study verified these results, as the average speed and accuracy achieved using the PF method were significantly higher than those achieved using the HMM method. Significance. These findings strongly support the integration of domain-specific knowledge into BCI classification to improve system performance.
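A bare-bones sketch of how a language-model prior can enter the importance weights is given below; it is not the published classifier. lm_next_probs and eeg_likelihood are hypothetical stand-ins for the probabilistic automaton and the P300 classifier evidence, and the particles here are simply candidate character strings.

    import numpy as np

    def pf_speller_step(particles, weights, lm_next_probs, eeg_likelihood, rng):
        # Extend each particle (a partial character string) by one character drawn
        # from the language model, then reweight by the EEG classifier evidence.
        n = len(particles)
        new_particles = []
        for p in particles:
            probs = lm_next_probs(p)                  # dict: char -> P(char | prefix)
            chars, ps = zip(*probs.items())
            c = rng.choice(chars, p=np.array(ps) / np.sum(ps))
            new_particles.append(p + c)
        w = weights * np.array([eeg_likelihood(p[-1]) for p in new_particles])
        w = w / w.sum()
        idx = rng.choice(n, size=n, p=w)              # resample
        return [new_particles[i] for i in idx], np.full(n, 1.0 / n)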
Assessment of resampling methods for causality testing: A note on the US inflation behavior.
Papana, Angeliki; Kyrtsou, Catherine; Kugiumtzis, Dimitris; Diks, Cees
2017-01-01
Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As appropriate test statistic for this setting, the partial transfer entropy (PTE), an information and model-free measure, is used. Two resampling techniques, time-shifted surrogates and the stationary bootstrap, are combined with three independence settings (giving a total of six resampling methods), all approximating the null hypothesis of no Granger causality. In these three settings, the level of dependence is changed, while the conditioning variables remain intact. The empirical null distribution of the PTE, as the surrogate and bootstrapped time series become more independent, is examined along with the size and power of the respective tests. Additionally, we consider a seventh resampling method by contemporaneously resampling the driving and the response time series using the stationary bootstrap. Although this case does not comply with the no causality hypothesis, one can obtain an accurate sampling distribution for the mean of the test statistic since its value is zero under H0. Results indicate that as the resampling setting gets more independent, the test becomes more conservative. Finally, we conclude with a real application. More specifically, we investigate the causal links among the growth rates for the US CPI, money supply and crude oil. Based on the PTE and the seven resampling methods, we consistently find that changes in crude oil cause inflation conditioning on money supply in the post-1986 period. However this relationship cannot be explained on the basis of traditional cost-push mechanisms.
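The two resampling techniques named above can be illustrated with a short Python sketch (not the authors' code; the minimum shift and mean block length are arbitrary illustrative choices):

    import numpy as np

    def time_shifted_surrogate(x, rng, min_shift=20):
        # Destroy the driver-response coupling by a random circular time shift
        # (assumes len(x) > 2 * min_shift).
        shift = rng.integers(min_shift, len(x) - min_shift)
        return np.roll(x, shift)

    def stationary_bootstrap(x, rng, mean_block=20):
        # Politis-Romano stationary bootstrap: resample blocks of geometric length.
        n, out, i = len(x), np.empty(len(x)), 0
        pos = rng.integers(n)
        while i < n:
            out[i] = x[pos % n]
            i += 1
            # start a new block with probability 1/mean_block
            pos = rng.integers(n) if rng.random() < 1.0 / mean_block else pos + 1
        return out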
Testing particle filters on convective scale dynamics
NASA Astrophysics Data System (ADS)
Haslehner, Mylene; Craig, George. C.; Janjic, Tijana
2014-05-01
Particle filters have been developed in recent years to deal with the highly nonlinear dynamics and non-Gaussian error statistics that also characterize data assimilation on convective scales. In this work we explore the use of the efficient particle filter (van Leeuwen, 2011) for convective-scale data assimilation. The method is tested in an idealized setting on two stochastic models. The models were designed to reproduce some of the properties of convection, for example the rapid development and decay of convective clouds. The first model is a simple one-dimensional, discrete-state birth-death model of clouds (Craig and Würsch, 2012). For this model, the efficient particle filter that includes nudging of the variables shows significant improvement compared to the Ensemble Kalman Filter and the Sequential Importance Resampling (SIR) particle filter. The success of the combination of nudging and resampling, measured as RMS error with respect to the 'true state', is proportional to the nudging intensity. Significantly, even a very weak nudging intensity brings notable improvement over SIR. The second model is a modified version of a stochastic shallow water model (Würsch and Craig, 2013), which contains more realistic dynamical characteristics of convective-scale phenomena. Using the efficient particle filter and different combinations of observations of the three field variables (wind, water 'height' and rain) allows the particle filter to be evaluated in comparison to a regime where only nudging is used. Sensitivity to the properties of the model error covariance is also considered. Finally, criteria are identified under which the efficient particle filter outperforms nudging alone. References: Craig, G. C. and M. Würsch, 2012: The impact of localization and observation averaging for convective-scale data assimilation in a simple stochastic model. Q. J. R. Meteorol. Soc., 139, 515-523. Van Leeuwen, P. J., 2011: Efficient non-linear data assimilation in geophysical fluid dynamics. Computers and Fluids, doi:10.1016/j.compfluid.2010.11.011. Würsch, M. and G. C. Craig, 2013: A simple dynamical model of cumulus convection for data assimilation research, submitted to Met. Zeitschrift.
Katayama, R; Sakai, S; Sakaguchi, T; Maeda, T; Takada, K; Hayabuchi, N; Morishita, J
2008-07-20
PURPOSE/AIM OF THE EXHIBIT: The purpose of this exhibit is: 1. To explain "resampling", an image data processing step performed by digital radiographic systems based on flat panel detectors (FPD). 2. To show the influence of "resampling" on the basic imaging properties. 3. To present accurate measurement methods for the basic imaging properties of the FPD system. 1. The relationship between the matrix sizes of the output image and the image data acquired on the FPD, which changes automatically depending on the selected image size (FOV). 2. The explanation of the image data processing of "resampling". 3. The evaluation results of the basic imaging properties of the FPD system using two types of DICOM images to which "resampling" was applied: characteristic curves, presampled MTFs, noise power spectra, and detective quantum efficiencies. CONCLUSION/SUMMARY: The major points of the exhibit are as follows: 1. The influence of "resampling" should not be disregarded in the evaluation of the basic imaging properties of the flat panel detector system. 2. The basic imaging properties need to be measured using DICOM images to which no "resampling" has been applied.
Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.
de Nijs, Robin
2015-07-21
In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and confirmed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was not affected by this, while Gaussian redrawing was less affected than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or lower-count) images from full-count images. It correctly simulates the statistical properties, also in the case of rounding off of the images.
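For readers who want to reproduce the comparison in spirit, the sketch below contrasts binomial thinning of each pixel (one common way to realize Poisson resampling of counts) with direct Poisson and Gaussian redrawing at half the mean. It is illustrative only, not the published Matlab simulation, and the flood-source geometry is replaced by a flat synthetic image.

    import numpy as np

    rng = np.random.default_rng(0)
    full = rng.poisson(lam=50.0, size=(256, 256))          # synthetic full-count image

    half_thinned = rng.binomial(full, 0.5)                 # Poisson resampling (binomial thinning)
    half_poisson = rng.poisson(full * 0.5)                 # Poisson redrawing
    half_gauss = np.clip(rng.normal(full * 0.5, np.sqrt(full * 0.5)), 0, None)  # Gaussian redrawing

    for name, img in [("thinned", half_thinned), ("poisson", half_poisson), ("gauss", half_gauss)]:
        # half-count/full-count ratios of mean and variance (ideally 0.5 for both)
        print(name, round(img.mean() / full.mean(), 3), round(img.var() / full.var(), 3))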
NASA Astrophysics Data System (ADS)
Hsu, Hsiao-Ping; Nadler, Walder; Grassberger, Peter
2005-07-01
The scaling behavior of randomly branched polymers in a good solvent is studied in two to nine dimensions, modeled by lattice animals on simple hypercubic lattices. For the simulations, we use a biased sequential sampling algorithm with re-sampling, similar to the pruned-enriched Rosenbluth method (PERM) used extensively for linear polymers. We obtain high statistics of animals with up to several thousand sites in all dimensions 2 ⩽ d ⩽ 9. The partition sum (number of different animals) and gyration radii are estimated. In all dimensions we verify the Parisi-Sourlas prediction, and we verify all exactly known critical exponents in dimensions 2, 3, 4, and ⩾ 8. In addition, we present the hitherto most precise estimates for growth constants in d ⩾ 3. For clusters with one site attached to an attractive surface, we verify the superuniversality of the cross-over exponent at the adsorption transition predicted by Janssen and Lyssy.
Computationally Efficient Resampling of Nonuniform Oversampled SAR Data
2010-05-01
noncoherently. The resampled data are calculated using both a simple average and a weighted average of the demodulated data. The average nonuniform...trials with randomly varying accelerations. The results are shown in Fig. 5 for the noncoherent power difference and Fig. 6 for the coherent power...simple average.
Generating Virtual Patients by Multivariate and Discrete Re-Sampling Techniques.
Teutonico, D; Musuamba, F; Maas, H J; Facius, A; Yang, S; Danhof, M; Della Pasqua, O
2015-10-01
Clinical Trial Simulations (CTS) are a valuable tool for decision-making during drug development. However, to obtain realistic simulation scenarios, the patients included in the CTS must be representative of the target population. This is particularly important when covariate effects exist that may affect the outcome of a trial. The objective of our investigation was to evaluate and compare CTS results obtained using re-sampling from a population pool and using multivariate distributions to simulate patient covariates. COPD was selected as the paradigm disease for the purposes of our analysis, FEV1 was used as the response measure, and the effects of a hypothetical intervention were evaluated in different populations in order to assess the predictive performance of the two methods. Our results show that the multivariate distribution method produces realistic covariate correlations, comparable to those in the real population. Moreover, it allows simulation of patient characteristics beyond the limits of the inclusion and exclusion criteria of historical protocols. Both methods, discrete resampling and multivariate distribution sampling, generate realistic pools of virtual patients. However, the use of a multivariate distribution enables more flexible simulation scenarios, since it is not bound to the existing covariate combinations in the available clinical data sets.
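The two covariate-generation strategies compared above can be sketched as follows (a simplification that assumes roughly Gaussian covariates; transformations for skewed variables and the trial-simulation machinery are omitted):

    import numpy as np

    def discrete_resample(covariates, n, rng):
        # Sample virtual patients by drawing whole covariate rows with replacement.
        idx = rng.integers(0, len(covariates), size=n)
        return covariates[idx]

    def multivariate_resample(covariates, n, rng):
        # Sample virtual patients from a multivariate normal fitted to the pool,
        # which preserves correlations but can extrapolate beyond observed ranges.
        mu = covariates.mean(axis=0)
        cov = np.cov(covariates, rowvar=False)
        return rng.multivariate_normal(mu, cov, size=n)

Row-wise resampling can never produce a covariate combination outside the observed pool, whereas the fitted multivariate distribution can, which is the flexibility noted above.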
Resampling methods in Microsoft Excel® for estimating reference intervals.
Theodorsson, Elvar
2015-01-01
Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes native functions which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5 and 97.5 percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general and to the use of Microsoft Excel® 2010 for estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and when the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples.
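Although the paper works entirely inside Excel, the underlying bootstrap is easy to mirror in Python; the sketch below is a plain percentile bootstrap of the reference limits and does not reproduce the specific interpolation rule recommended in the paper.

    import numpy as np

    def bootstrap_reference_interval(values, n_boot=1000, rng=None):
        # Bootstrap estimate of the 2.5th and 97.5th percentile reference limits.
        rng = rng or np.random.default_rng()
        values = np.asarray(values)
        lows, highs = [], []
        for _ in range(n_boot):
            sample = rng.choice(values, size=len(values), replace=True)
            lows.append(np.percentile(sample, 2.5))
            highs.append(np.percentile(sample, 97.5))
        # report the bootstrap means of each limit (confidence limits could be added)
        return float(np.mean(lows)), float(np.mean(highs))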
Testing for Granger Causality in the Frequency Domain: A Phase Resampling Method.
Liu, Siwei; Molenaar, Peter
2016-01-01
This article introduces phase resampling, an existing but rarely used surrogate data method for making statistical inferences of Granger causality in frequency domain time series analysis. Granger causality testing is essential for establishing causal relations among variables in multivariate dynamic processes. However, testing for Granger causality in the frequency domain is challenging due to the nonlinear relation between frequency domain measures (e.g., partial directed coherence, generalized partial directed coherence) and time domain data. Through a simulation study, we demonstrate that phase resampling is a general and robust method for making statistical inferences even with short time series. With Gaussian data, phase resampling yields satisfactory type I and type II error rates in all but one condition we examine: when a small effect size is combined with an insufficient number of data points. Violations of normality lead to slightly higher error rates but are mostly within acceptable ranges. We illustrate the utility of phase resampling with two empirical examples involving multivariate electroencephalography (EEG) and skin conductance data.
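For a univariate series, a phase-randomized surrogate can be generated as sketched below; the article itself works with multivariate series and frequency-domain causality measures, which are not reproduced here.

    import numpy as np

    def phase_surrogate(x, rng):
        # Phase-resampled surrogate: keep the amplitude spectrum, randomize phases.
        n = len(x)
        spec = np.fft.rfft(x)
        phases = rng.uniform(0, 2 * np.pi, size=len(spec))
        phases[0] = 0.0                      # keep the mean (zero-frequency) term real
        if n % 2 == 0:
            phases[-1] = 0.0                 # keep the Nyquist term real
        return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=n)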
Jollymore, Ashlee; Johnson, Mark S; Hawthorne, Iain
2012-01-01
Organic material, including total and dissolved organic carbon (DOC), is ubiquitous within aquatic ecosystems, playing a variety of important and diverse biogeochemical and ecological roles. Determining how land-use changes affect DOC concentrations and bioavailability within aquatic ecosystems is an important means of evaluating the effects on ecological productivity and biogeochemical cycling. This paper presents a methodology case study looking at the deployment of a submersible UV-Vis absorbance spectrophotometer (UV-Vis spectro::lyzer model, s::can, Vienna, Austria) to determine stream organic carbon dynamics within a headwater catchment located near Campbell River (British Columbia, Canada). Field-based absorbance measurements of DOC were made before and after forest harvest, highlighting the advantages of high temporal resolution compared to traditional grab sampling and laboratory measurements. Details of remote deployment are described. High-frequency DOC data is explored by resampling the 30 min time series with a range of resampling time intervals (from daily to weekly time steps). DOC export was calculated for three months from the post-harvest data and resampled time series, showing that sampling frequency has a profound effect on total DOC export. DOC exports derived from weekly measurements were found to underestimate export by as much as 30% compared to DOC export calculated from high-frequency data. Additionally, the importance of the ability to remotely monitor the system through a recently deployed wireless connection is emphasized by examining causes of prior data losses, and how such losses may be prevented through the ability to react when environmental or power disturbances cause system interruption and data loss.
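The effect of sampling frequency on a cumulative export estimate can be illustrated with a small pandas sketch; the numbers below are synthetic and are not the catchment data, so the printed ratios only demonstrate the calculation, not the reported 30% underestimate.

    import numpy as np
    import pandas as pd

    # synthetic 30-minute DOC concentration (mg/L) and discharge (L/s) series, ~90 days
    idx = pd.date_range("2010-01-01", periods=3 * 30 * 48, freq="30min")
    rng = np.random.default_rng(1)
    doc = pd.Series(5 + np.abs(rng.normal(0, 2, len(idx))), index=idx)
    q = pd.Series(20 + np.abs(rng.normal(0, 10, len(idx))), index=idx)

    def export_mg(conc, flow, dt_seconds):
        # export = sum of concentration * discharge * time step represented by each sample
        return float((conc * flow * dt_seconds).sum())

    full = export_mg(doc, q, 1800)
    for rule, dt in [("1D", 86400), ("7D", 7 * 86400)]:
        sub = export_mg(doc.resample(rule).first(), q.resample(rule).first(), dt)
        print(rule, round(sub / full, 2))        # ratio of coarse-sampled to full export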
Han, Guanghui; Liu, Xiabi; Zheng, Guangyuan; Wang, Murong; Huang, Shan
2018-06-06
Ground-glass opacity (GGO) is a common imaging sign on high-resolution CT, and it means the lesion is more likely to be malignant than a common solid lung nodule. The automatic recognition of GGO CT imaging signs is of great importance for early diagnosis and possible cure of lung cancers. Present GGO recognition methods employ traditional low-level features, and system performance improves slowly. Considering the high performance of CNN models in the computer vision field, we propose in this paper an automatic recognition method for 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuned CNN models. Our hybrid resampling is performed over multiple views and multiple receptive fields, which reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs at multiple scales simultaneously. The layer-wise fine-tuning strategy has the ability to obtain the optimal fine-tuned model. The multi-CNN-model fusion strategy obtains better performance than any single trained model. We evaluated our method on the GGO nodule samples in the publicly available LIDC-IDRI dataset of chest CT scans. The experimental results show that our method yields excellent results with 96.64% sensitivity, 71.43% specificity, and a 0.83 F1 score. Our method is a promising approach for applying deep learning to computer-aided analysis of specific CT imaging signs with insufficient labeled images. Graphical abstract: We propose an automatic recognition method for 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuned CNN models. Our hybrid resampling reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs at multiple scales simultaneously. The layer-wise fine-tuning strategy has the ability to obtain the optimal fine-tuned model. Our method is a promising approach for applying deep learning to computer-aided analysis of specific CT imaging signs with insufficient labeled images.
Zhou, Shuntai; Jones, Corbin; Mieczkowski, Piotr
2015-01-01
Validating the sampling depth and reducing sequencing errors are critical for studies of viral populations using next-generation sequencing (NGS). We previously described the use of Primer ID to tag each viral RNA template with a block of degenerate nucleotides in the cDNA primer. We now show that low-abundance Primer IDs (offspring Primer IDs) are generated due to PCR/sequencing errors. These artifactual Primer IDs can be removed using a cutoff model for the number of reads required to make a template consensus sequence. We have modeled the fraction of sequences lost due to Primer ID resampling. For a typical sequencing run, less than 10% of the raw reads are lost to offspring Primer ID filtering and resampling. The remaining raw reads are used to correct for PCR resampling and sequencing errors. We also demonstrate that Primer ID reveals bias intrinsic to PCR, especially at low template input or utilization. cDNA synthesis and PCR convert ca. 20% of RNA templates into recoverable sequences, and 30-fold sequence coverage recovers most of these template sequences. We have directly measured the residual error rate to be around 1 in 10,000 nucleotides. We use this error rate and the Poisson distribution to define the cutoff to identify preexisting drug resistance mutations at low abundance in an HIV-infected subject. Collectively, these studies show that >90% of the raw sequence reads can be used to validate template sampling depth and to dramatically reduce the error rate in assessing a genetically diverse viral population using NGS. IMPORTANCE: Although next-generation sequencing (NGS) has revolutionized sequencing strategies, it suffers from serious limitations in defining sequence heterogeneity in a genetically diverse population, such as HIV-1, due to PCR resampling and PCR/sequencing errors. The Primer ID approach reveals the true sampling depth and greatly reduces errors. Knowing the sampling depth allows the construction of a model of how to maximize the recovery of sequences from input templates and to reduce resampling of the Primer ID so that appropriate multiplexing can be included in the experimental design. With the defined sampling depth and measured error rate, we are able to assign cutoffs for the accurate detection of minority variants in viral populations. This approach allows the power of NGS to be realized without having to guess about sampling depth or to ignore the problem of PCR resampling, while also being able to correct most of the errors in the data set. PMID:26041299
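The Poisson cutoff logic can be illustrated with a few lines of Python; the error rate and false-positive level below are illustrative assumptions, not the study's exact thresholds.

    from scipy.stats import poisson

    def min_variant_count(coverage, error_rate=1e-4, alpha=1e-3):
        # Smallest mutation count k at a position such that observing >= k reads
        # purely from residual errors (Poisson, mean = coverage * error_rate)
        # has probability below alpha.
        lam = coverage * error_rate
        k = 1
        while poisson.sf(k - 1, lam) >= alpha:   # sf(k-1) = P(X >= k)
            k += 1
        return k

    print(min_variant_count(10000))   # e.g. cutoff for 10,000 template consensus sequences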
NASA Astrophysics Data System (ADS)
Adjorlolo, Clement; Mutanga, Onisimo; Cho, Moses A.; Ismail, Riyad
2013-04-01
In this paper, a user-defined inter-band correlation filter function was used to resample hyperspectral data and thereby mitigate the problem of multicollinearity in classification analysis. The proposed resampling technique convolves the spectral dependence information between a chosen band-centre and its shorter- and longer-wavelength neighbours. A weighting threshold of inter-band correlation (WTC, Pearson's r) was calculated, with r = 1 at the band-centre. Various WTC values (r = 0.99, r = 0.95 and r = 0.90) were assessed, and bands with coefficients beyond a chosen threshold were assigned r = 0. The resultant data were used in a random forest analysis to classify in situ C3 and C4 grass canopy reflectance. The respective WTC datasets yielded improved classification accuracies (kappa = 0.82, 0.79 and 0.76) with less correlated wavebands when compared to resampled Hyperion bands (kappa = 0.76). Overall, the results obtained from this study suggest that resampling of hyperspectral data should account for the spectral dependence information to improve overall classification accuracy and reduce the problem of multicollinearity.
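One rough interpretation of such a correlation-based filter function is sketched below; the published weighting scheme may differ, and the threshold value and final weighted combination are assumptions.

    import numpy as np

    def correlation_filter(spectra, center_idx, r_threshold=0.95):
        # Resample hyperspectral spectra (samples x bands) by weighting each band
        # with its Pearson correlation to the chosen band-centre; bands whose
        # correlation falls below the threshold get zero weight.
        r = np.array([np.corrcoef(spectra[:, center_idx], spectra[:, j])[0, 1]
                      for j in range(spectra.shape[1])])
        w = np.where(np.abs(r) >= r_threshold, r, 0.0)
        w = w / w.sum()
        # convolve the spectral dependence information into one resampled band
        return spectra @ w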
NASA Astrophysics Data System (ADS)
Clark, Elizabeth; Wood, Andy; Nijssen, Bart; Mendoza, Pablo; Newman, Andy; Nowak, Kenneth; Arnold, Jeffrey
2017-04-01
In an automated forecast system, hydrologic data assimilation (DA) performs the valuable function of correcting raw simulated watershed model states to better represent external observations, including measurements of streamflow, snow, soil moisture, and the like. Yet the incorporation of automated DA into operational forecasting systems has been a long-standing challenge due to the complexities of the hydrologic system, which include numerous lags between state and output variations. To help demonstrate that such methods can succeed in operational automated implementations, we present results from the real-time application of an ensemble particle filter (PF) for short-range (7-day lead) ensemble flow forecasts in western US river basins. We use the System for Hydromet Applications, Research and Prediction (SHARP), developed by the National Center for Atmospheric Research (NCAR) in collaboration with the University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation. SHARP is a fully automated platform for short-term to seasonal hydrologic forecasting applications, incorporating uncertainty in initial hydrologic conditions (IHCs) and in hydrometeorological predictions through ensemble methods. In this implementation, IHC uncertainty is estimated by propagating an ensemble of 100 temperature and precipitation time series through conceptual and physically oriented models. The resulting ensemble of derived IHCs exhibits a broad range of possible soil moisture and snow water equivalent (SWE) states. The PF selects and/or weights and resamples the IHCs that are most consistent with external streamflow observations, and uses the particles to initialize a streamflow forecast ensemble driven by ensemble precipitation and temperature forecasts downscaled from the Global Ensemble Forecast System (GEFS). We apply this method in real time for several basins in the western US that are important for water resources management, and perform a hindcast experiment to evaluate the utility of PF-based data assimilation for streamflow forecast skill. This presentation describes the findings, including a comparison of sequential and non-sequential particle weighting methods.
Methods of Soil Resampling to Monitor Changes in the Chemical Concentrations of Forest Soils.
Lawrence, Gregory B; Fernandez, Ivan J; Hazlett, Paul W; Bailey, Scott W; Ross, Donald S; Villars, Thomas R; Quintana, Angelica; Ouimet, Rock; McHale, Michael R; Johnson, Chris E; Briggs, Russell D; Colter, Robert A; Siemion, Jason; Bartlett, Olivia L; Vargas, Olga; Antidormi, Michael R; Koppers, Mary M
2016-11-25
Recent soils research has shown that important chemical soil characteristics can change in less than a decade, often the result of broad environmental changes. Repeated sampling to monitor these changes in forest soils is a relatively new practice that is not well documented in the literature and has only recently been broadly embraced by the scientific community. The objective of this protocol is therefore to synthesize the latest information on methods of soil resampling in a format that can be used to design and implement a soil monitoring program. Successful monitoring of forest soils requires that a study unit be defined within an area of forested land that can be characterized with replicate sampling locations. A resampling interval of 5 years is recommended, but if monitoring is done to evaluate a specific environmental driver, the rate of change expected in that driver should be taken into consideration. Here, we show that the sampling of the profile can be done by horizon where boundaries can be clearly identified and horizons are sufficiently thick to remove soil without contamination from horizons above or below. Otherwise, sampling can be done by depth interval. Archiving of samples for future reanalysis is a key step in avoiding analytical bias and providing the opportunity for additional analyses as new questions arise.
Functional magnetic resonance imaging in a low-field intraoperative scanner.
Schulder, Michael; Azmi, Hooman; Biswal, Bharat
2003-01-01
Functional magnetic resonance imaging (fMRI) has been used for preoperative planning and intraoperative surgical navigation. However, most experience to date has been with preoperative images acquired on high-field echoplanar MRI units. We explored the feasibility of acquiring fMRI of the motor cortex with a dedicated low-field intraoperative MRI (iMRI). Five healthy volunteers were scanned with the 0.12-tesla PoleStar N-10 iMRI (Odin Medical Technologies, Israel). A finger-tapping motor paradigm was performed with sequential scans, acquired alternately at rest and during activity. In addition, scans were obtained during breath holding alternating with normal breathing. The same paradigms were repeated using a 3-tesla MRI (Siemens Corp., Allandale, N.J., USA). Statistical analysis was performed offline using cross-correlation and cluster techniques. Data were resampled using the 'jackknife' process. The location, number of activated voxels and degrees of statistical significance between the two scanners were compared. With both the 0.12- and 3-tesla imagers, motor cortex activation was seen in all subjects to a significance of p < 0.02 or greater. No clustered pixels were seen outside the sensorimotor cortex. The resampled correlation coefficients were normally distributed, with a mean of 0.56 for both the 0.12- and 3-tesla scanners (standard deviations 0.11 and 0.08, respectively). The breath holding paradigm confirmed that the expected diffuse activation was seen on 0.12- and 3-tesla scans. Accurate fMRI with a low-field iMRI is feasible. Such data could be acquired immediately before or even during surgery. This would increase the utility of iMRI and allow for updated intraoperative functional imaging, free of the limitations of brain shift. Copyright 2003 S. Karger AG, Basel
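The jackknife step mentioned above, applied to a correlation coefficient, can be sketched in a few lines of Python; this only illustrates the resampling of the statistic, not the study's full cross-correlation and cluster analysis.

    import numpy as np

    def jackknife_corr(x, y):
        # Leave-one-out jackknife of the Pearson correlation between two series
        # (e.g. a voxel time course and a task reference function).
        n = len(x)
        reps = np.array([np.corrcoef(np.delete(x, i), np.delete(y, i))[0, 1]
                         for i in range(n)])
        se = reps.std(ddof=1) * (n - 1) / np.sqrt(n)   # jackknife standard error
        return reps.mean(), se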
NASA Astrophysics Data System (ADS)
Laloy, Eric; Hérault, Romain; Lee, John; Jacques, Diederik; Linde, Niklas
2017-12-01
Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, substantial compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior compared to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
NASA Astrophysics Data System (ADS)
Lorentzen, Rolf J.; Stordal, Andreas S.; Hewitt, Neal
2017-05-01
Flowrate allocation in production wells is a complicated task, especially for multiphase flow combined with several reservoir zones and/or branches. The result depends heavily on the available production data and their accuracy. In the application we show here, downhole pressure and temperature data are available, in addition to the total flowrates at the wellhead. The developed methodology inverts these observations to the fluid flowrates (oil, water and gas) that enter two production branches in a real full-scale producer. A major challenge is accurate estimation of flowrates during rapid variations in the well, e.g. due to choke adjustments. The Auxiliary Sequential Importance Resampling (ASIR) filter was developed to handle such challenges by introducing an auxiliary step, where the particle weights are recomputed (second weighting step) based on how well the particles reproduce the observations. However, the ASIR filter suffers from large computational time when the number of unknown parameters increases. The Gaussian Mixture (GM) filter combines a linear update with the particle filter's ability to capture non-Gaussian behavior. This makes it possible to achieve good performance with fewer model evaluations. In this work we present a new filter which combines the ASIR filter and the Gaussian Mixture filter (denoted ASGM), and demonstrate improved estimation (compared to the ASIR and GM filters) in cases with rapid parameter variations, while maintaining reasonable computational cost.
Comparison of parametric and bootstrap method in bioequivalence test.
Ahn, Byung-Jin; Yim, Dong-Seok
2009-10-01
The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from the nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of a normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption.
Image restoration techniques as applied to Landsat MSS and TM data
Meyer, David
1987-01-01
Two factors are primarily responsible for the loss of image sharpness in processing digital Landsat images. The first factor is inherent in the data because the sensor's optics and electronics, along with other sensor elements, blur and smear the data. Digital image restoration can be used to reduce this degradation. The second factor, which further degrades the image by blurring or aliasing, is the resampling performed during geometric correction. An image restoration procedure, when used in place of typical resampling techniques, reduces sensor degradation without introducing the artifacts associated with resampling. The EROS Data Center (EDC) has implemented the restoration procedure for Landsat multispectral scanner (MSS) and thematic mapper (TM) data. This capability, developed at the University of Arizona by Dr. Robert Schowengerdt and Lynette Wood, combines restoration and resampling in a single step to produce geometrically corrected MSS and TM imagery. As with resampling, restoration demands a tradeoff be made between aliasing, which occurs when attempting to extract maximum sharpness from an image, and blurring, which reduces the aliasing problem but sacrifices image sharpness. The restoration procedure used at EDC minimizes these artifacts by being adaptive, tailoring the tradeoff to be optimal for individual images.
Liu, Zhihua; Yang, Jian; He, Hong S.
2013-01-01
The relative importance of fuel, topography, and weather on fire spread varies at different spatial scales, but how the relative importance of these controls responds to changing spatial scales is poorly understood. We designed a "moving window" resampling technique that allowed us to quantify the relative importance of controls on fire spread at continuous spatial scales using boosted regression tree methods. This quantification allowed us to identify the threshold value for fire size at which the dominant control switches from fuel at small sizes to weather at large sizes. Topography had a fluctuating effect on fire spread across the spatial scales, explaining 20–30% of relative importance. With increasing fire size, the dominant control switched from bottom-up controls (fuel and topography) to top-down controls (weather). Our analysis suggested that there is a threshold for fire size, above which fires are driven primarily by weather and are more likely to grow larger. We suggest that this threshold, which may be ecosystem-specific, can be identified using our "moving window" resampling technique. Although the threshold derived from this analytical method may rely heavily on the sampling technique, our study introduced an easily implemented approach to identify scale thresholds in wildfire regimes. PMID:23383247
Cui, Ming; Xu, Lili; Wang, Huimin; Ju, Shaoqing; Xu, Shuizhu; Jing, Rongrong
2017-12-01
Measurement uncertainty (MU) is a metrological concept that can be used for objectively estimating the quality of test results in medical laboratories. The Nordtest guide recommends an approach that uses both internal quality control (IQC) and external quality assessment (EQA) data to evaluate the MU. Bootstrap resampling can be used to simulate the unknown distribution of a statistic from an existing small sample of data, effectively transforming the small sample into a large one. However, there have been no reports of the utilization of this method in medical laboratories. Thus, this study applied the Nordtest guide approach based on bootstrap resampling for estimating the MU. We estimated the MU for the white blood cell (WBC) count, red blood cell (RBC) count, hemoglobin (Hb), and platelets (Plt). First, we used 6 months of IQC data and 12 months of EQA data to calculate the MU according to the Nordtest method. Second, we combined the Nordtest method and bootstrap resampling with the quality control data and calculated the MU using MATLAB software. We then compared the MU results obtained using the two approaches. The expanded uncertainty results determined for WBC, RBC, Hb, and Plt using the bootstrap resampling method were 4.39%, 2.43%, 3.04%, and 5.92%, respectively, and 4.38%, 2.42%, 3.02%, and 6.00% with the existing quality control data (U [k=2]). For WBC, RBC, Hb, and Plt, the differences between the results obtained using the two methods were lower than 1.33%. The expanded uncertainty values were all less than the target uncertainties. The bootstrap resampling method allows the statistical analysis of the MU. Combining the Nordtest method and bootstrap resampling is considered a suitable alternative method for estimating the MU. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D
2013-01-01
Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the fold change criteria of the Significance Analysis of Microarrays method are problematic and can critically alter the conclusion of a study, as a result of compositional changes of the control data set in the analysis. We propose a novel approach, combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but is also impervious to the fold change threshold since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rate control between each approach are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offer higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large, for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next-generation sequencing RNA-seq data analysis.
Lawrence, Gregory B.; Fernandez, Ivan J.; Richter, Daniel D.; Ross, Donald S.; Hazlett, Paul W.; Bailey, Scott W.; Ouimet, Rock; Warby, Richard A.F.; Johnson, Arthur H.; Lin, Henry; Kaste, James M.; Lapenis, Andrew G.; Sullivan, Timothy J.
2013-01-01
Environmental change is monitored in North America through repeated measurements of weather, stream and river flow, air and water quality, and most recently, soil properties. Some skepticism remains, however, about whether repeated soil sampling can effectively distinguish between temporal and spatial variability, and efforts to document soil change in forest ecosystems through repeated measurements are largely nascent and uncoordinated. In eastern North America, repeated soil sampling has begun to provide valuable information on environmental problems such as air pollution. This review synthesizes the current state of the science to further the development and use of soil resampling as an integral method for recording and understanding environmental change in forested settings. The origins of soil resampling reach back to the 19th century in England and Russia. The concepts and methodologies involved in forest soil resampling are reviewed and evaluated through a discussion of how temporal and spatial variability can be addressed with a variety of sampling approaches. Key resampling studies demonstrate the type of results that can be obtained through differing approaches. Ongoing, large-scale issues such as recovery from acidification, long-term N deposition, C sequestration, effects of climate change, impacts from invasive species, and the increasing intensification of soil management all warrant the use of soil resampling as an essential tool for environmental monitoring and assessment. Furthermore, with better awareness of the value of soil resampling, studies can be designed with a long-term perspective so that information can be efficiently obtained well into the future to address problems that have not yet surfaced.
Zhang, Yeqing; Wang, Meiling; Li, Yafeng
2018-02-24
For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90-94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7-5.6% per millisecond, with most satellites acquired successfully.
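The acquisition metric described here, the ratio of the highest to the second-highest correlation peak over the search space, and the effect of decimating ("resampling") the received signal before correlation can both be illustrated with a toy baseband example. The sketch below is a deliberately simplified assumption-laden stand-in (a short random spreading code, no Doppler search, a single frequency bin) rather than the authors' L2C processing chain; it uses FFT-based circular correlation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy spreading code and received signal (shifted code + noise), oversampled 8x.
code_chips = rng.choice([-1.0, 1.0], size=1023)
oversample = 8
code = np.repeat(code_chips, oversample)
true_shift = 2500
received = np.roll(code, true_shift) + 0.8 * rng.standard_normal(code.size)

def acquisition_ratio(rx, local_code, exclude=3):
    """Circular correlation via FFT; return (best shift, peak-to-second-peak ratio)."""
    corr = np.abs(np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(local_code))))
    k = int(np.argmax(corr))
    mask = np.ones_like(corr, dtype=bool)
    # Exclude the immediate neighbourhood of the main peak (circularly).
    mask[(np.arange(corr.size) - k) % corr.size < exclude] = False
    mask[(k - np.arange(corr.size)) % corr.size < exclude] = False
    return k, corr[k] / corr[mask].max()

# Full-rate acquisition.
shift_full, ratio_full = acquisition_ratio(received, code)

# "Resampling" strategy: decimate both signals by 4 before correlating,
# shrinking the FFT length and hence the multiplication count.
decim = 4
shift_dec, ratio_dec = acquisition_ratio(received[::decim], code[::decim])

print(f"full rate : shift={shift_full}, peak ratio={ratio_full:.2f}")
print(f"decimated : shift={shift_dec * decim}, peak ratio={ratio_dec:.2f}")
```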
Paula-Moraes, S; Burkness, E C; Hunt, T E; Wright, R J; Hein, G L; Hutchison, W D
2011-12-01
Striacosta albicosta (Smith) (Lepidoptera: Noctuidae) is a native pest of dry beans (Phaseolus vulgaris L.) and corn (Zea mays L.). As a result of larval feeding damage on corn ears, S. albicosta has a narrow treatment window; thus, early detection of the pest in the field is essential, and egg mass sampling has become a popular monitoring tool. Three action thresholds for field and sweet corn currently are used by crop consultants, including 4% of plants infested with egg masses on sweet corn in the silking-tasseling stage, 8% of plants infested with egg masses on field corn with approximately 95% tasseled, and 20% of plants infested with egg masses on field corn during mid-milk-stage corn. The current monitoring recommendation is to sample 20 plants at each of five locations per field (100 plants total). In an effort to develop a more cost-effective sampling plan for S. albicosta egg masses, several alternative binomial sampling plans were developed using Wald's sequential probability ratio test, and validated using Resampling for Validation of Sampling Plans (RVSP) software. The benefit-cost ratio also was calculated and used to determine the final selection of sampling plans. Based on final sampling plans selected for each action threshold, the average sample number required to reach a treat or no-treat decision ranged from 38 to 41 plants per field. This represents a significant savings in sampling cost over the current recommendation of 100 plants.
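Wald's sequential probability ratio test for binomial (infested/not infested) sampling reduces to two parallel stop lines in the (plants inspected, infested plants) plane; sampling continues while the running count stays between them. The snippet below is a generic sketch with illustrative parameters (hypothetical lower and upper hypotheses bracketing an 8% threshold and nominal error rates), not the thresholds or validated plans from the study.

```python
import math
import numpy as np

def sprt_lines(p0, p1, alpha=0.1, beta=0.1):
    """Slope and intercepts of Wald's SPRT stop lines for binomial counts."""
    denom = math.log(p1 / p0) + math.log((1 - p0) / (1 - p1))
    s = math.log((1 - p0) / (1 - p1)) / denom
    h_upper = math.log((1 - beta) / alpha) / denom
    h_lower = math.log(beta / (1 - alpha)) / denom   # negative intercept
    return s, h_lower, h_upper

def classify_field(infested_sequence, p0, p1, alpha=0.1, beta=0.1):
    """Inspect plants one at a time; return the decision and the sample number."""
    s, h_lower, h_upper = sprt_lines(p0, p1, alpha, beta)
    d = 0
    for n, infested in enumerate(infested_sequence, start=1):
        d += int(infested)
        if d >= s * n + h_upper:
            return "treat", n
        if d <= s * n + h_lower:
            return "no treat", n
    return "undecided", len(infested_sequence)

# Hypothetical plan around an 8% egg-mass infestation threshold.
p0, p1 = 0.05, 0.12
rng = np.random.default_rng(0)
field = rng.random(200) < 0.15      # a field infested above the threshold
print(classify_field(field, p0, p1))
```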
The conditional resampling model STARS: weaknesses of the modeling concept and development
NASA Astrophysics Data System (ADS)
Menz, Christoph
2016-04-01
The Statistical Analogue Resampling Scheme (STARS) is based on a modeling concept of Werner and Gerstengarbe (1997). The model uses a conditional resampling technique to create a simulation time series from daily observations. Unlike other time series generators (such as stochastic weather generators), STARS only needs a linear regression specification of a single variable as the target condition for the resampling. Since its first implementation, the algorithm was further extended in order to allow for a spatially distributed trend signal and to preserve the seasonal cycle and the autocorrelation of the observation time series (Orlovsky, 2007; Orlovsky et al., 2008). This evolved version was successfully used in several climate impact studies. However, a detailed evaluation of the simulations revealed two fundamental weaknesses of the utilized resampling technique. 1. The restriction of the resampling condition to a single variable can lead to a misinterpretation of the change signal of other variables when the model is applied to a multivariate time series (F. Wechsung and M. Wechsung, 2014). As one example, the short-term correlations between precipitation and temperature (cooling of the near-surface air layer after a rainfall event) can be misinterpreted as a climatic change signal in the simulation series. 2. The model restricts the linear regression specification to the annual mean time series, precluding the specification of seasonally varying trends. To overcome these fundamental weaknesses, a redevelopment of the whole algorithm was done. The poster discusses the main weaknesses of the earlier model implementation and the methods applied to overcome these in the new version. Based on the new model, idealized simulations were conducted to illustrate the enhancement.
Resampling probability values for weighted kappa with multiple raters.
Mielke, Paul W; Berry, Kenneth J; Johnston, Janis E
2008-04-01
A new procedure to compute weighted kappa with multiple raters is described. A resampling procedure to compute approximate probability values for weighted kappa with multiple raters is presented. Applications of weighted kappa are illustrated with an example analysis of classifications by three independent raters.
Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S
2017-08-01
Joint modeling and within-cluster resampling are two approaches that are used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performances and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to the misspecification of cluster size models in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate the validity of it in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
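Within-cluster resampling draws one observation at random from every cluster, fits an ordinary model to the resulting independent data set, and repeats many times; the point estimate is the average of the per-resample estimates. The sketch below is a generic binary-outcome illustration on simulated data with informative cluster sizes, not the developmental toxicity analysis itself; it reports only the averaged coefficients (the variance estimator used in practice additionally corrects for between-resample variability), and all names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Simulate clusters whose size depends on the cluster-level covariate
# (informative cluster size) and include a random cluster effect.
clusters = []
for _ in range(200):
    x = rng.normal()                                  # cluster-specific covariate
    size = rng.poisson(lam=np.exp(1.0 - 0.5 * x)) + 1
    b = rng.normal(scale=0.5)                         # random cluster effect
    p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x + b)))
    y = rng.random(size) < p
    clusters.append((x, y))

def wcr_logit(clusters, n_resamples=500, rng=rng):
    """Within-cluster resampling: one observation per cluster per replicate."""
    coefs = []
    x = np.array([c[0] for c in clusters])
    X = sm.add_constant(x)
    for _ in range(n_resamples):
        y = np.array([rng.choice(c[1]) for c in clusters]).astype(float)
        coefs.append(sm.Logit(y, X).fit(disp=0).params)
    return np.mean(coefs, axis=0)

print("WCR estimates (intercept, slope):", wcr_logit(clusters).round(3))
```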
Yang, Yang; DeGruttola, Victor
2012-06-22
Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.
NASA Astrophysics Data System (ADS)
Zunz, Violette; Goosse, Hugues; Dubinkina, Svetlana
2014-05-01
In this study, we assess systematically the impact of different initialisation procedures on the predictability of the sea ice in the Southern Ocean. These initialisation strategies are based on three data assimilation methods: the nudging, the particle filter with sequential resampling and the nudging proposal particle filter. An Earth-system model of intermediate complexity has been used to perform hindcast simulations in a perfect model approach. The predictability of the Southern Ocean sea ice is estimated through two aspects: the spread of the hindcast ensemble, indicating the uncertainty on the ensemble, and the correlation between the ensemble mean and the pseudo-observations, used to assess the accuracy of the prediction. Our results show that, at decadal timescales, more sophisticated data assimilation methods as well as denser pseudo-observations used to initialise the hindcasts decrease the spread of the ensemble but improve only slightly the accuracy of the prediction of the sea ice in the Southern Ocean. Overall, the predictability at interannual timescales is limited, at most, to three years ahead. At multi-decadal timescales, there is a clear improvement of the correlation of the trend in sea ice extent between the hindcasts and the pseudo-observations if the initialisation takes into account the pseudo-observations. The correlation reaches values larger than 0.5 and is due to the inertia of the ocean, showing the importance of the quality of the initialisation below the sea ice.
NASA Astrophysics Data System (ADS)
Zunz, Violette; Goosse, Hugues; Dubinkina, Svetlana
2015-04-01
In this study, we assess systematically the impact of different initialisation procedures on the predictability of the sea ice in the Southern Ocean. These initialisation strategies are based on three data assimilation methods: the nudging, the particle filter with sequential importance resampling and the nudging proposal particle filter. An Earth system model of intermediate complexity is used to perform hindcast simulations in a perfect model approach. The predictability of the Antarctic sea ice at interannual to multi-decadal timescales is estimated through two aspects: the spread of the hindcast ensemble, indicating the uncertainty of the ensemble, and the correlation between the ensemble mean and the pseudo-observations, used to assess the accuracy of the prediction. Our results show that at decadal timescales more sophisticated data assimilation methods as well as denser pseudo-observations used to initialise the hindcasts decrease the spread of the ensemble. However, our experiments did not clearly demonstrate that one of the initialisation methods systematically provides a more accurate prediction of the sea ice in the Southern Ocean than the others. Overall, the predictability at interannual timescales is limited to 3 years ahead at most. At multi-decadal timescales, the trends in sea ice extent computed over the time period just after the initialisation are clearly better correlated between the hindcasts and the pseudo-observations if the initialisation takes into account the pseudo-observations. The correlation reaches values larger than 0.5 in winter. This high correlation likely has its origin in the slow evolution of the ocean ensured by its strong thermal inertia, showing the importance of the quality of the initialisation below the sea ice.
Cañas, Fernando; Pérez-Solá, Víctor; Díaz, Silvia; Rejas, Javier
2007-01-01
This study aimed to assess the cost effectiveness of ziprasidone versus haloperidol in sequential intramuscular (IM)/oral treatment of patients with exacerbation of schizophrenia in Spain. A cost-effectiveness analysis from the hospital perspective was performed. Length of stay, study medication and use of concomitant drugs were calculated using data from the ZIMO trial. The effectiveness of treatment was determined by the percentage of responders (reduction in baseline Brief Psychiatric Rating Scale [BPRS] negative symptoms subscale ≥30%). Economic assessment included estimation of mean (95% CI) total costs, cost per responder and the incremental cost-effectiveness ratio (ICER) per additional responder. The economic uncertainty level was controlled by resampling and calculation of cost-effectiveness acceptability curves. A total of 325 patients (ziprasidone n = 255, haloperidol n = 70) were included in this economic subanalysis. Ziprasidone showed a significantly higher responder rate compared with haloperidol (71% vs 56%, respectively; p = 0.023). Mean total costs were €3582 (95% CI 3226, 3937) for ziprasidone and €2953 (95% CI 2471, 3436) for haloperidol (p = 0.039), mainly due to a higher ziprasidone acquisition cost. However, costs per responder were lower with ziprasidone (€5045 [95% CI 4211, 6020]) than with haloperidol (€5302 [95% CI 3666, 7791]), with a cost per additional responder (ICER) for ziprasidone of €4095 (95% CI -130, 22 231). The acceptability curve showed an ICER cut-off value of €13 891 at the 95% cost-effectiveness probability level for ≥30% reduction in BPRS negative symptoms. Compared with haloperidol, ziprasidone was significantly better at controlling psychotic negative symptoms in acute psychoses. The extra cost of ziprasidone was offset by a higher effectiveness rate, yielding a lower cost per responder. In light of the social benefit (less family burden and greater restoration of productivity), the incremental cost per additional responder with sequential IM/oral ziprasidone should be considered cost effective in patients with exacerbation of schizophrenia in Spain.
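Economic uncertainty around an ICER is commonly handled exactly as described: resample patients with replacement within each arm, recompute incremental cost and incremental effect, and summarise the results as a cost-effectiveness acceptability curve. The sketch below is a generic, self-contained illustration with simulated costs and responder indicators; none of the numbers correspond to the ZIMO trial data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated per-patient total costs (euro) and responder indicators for two arms.
cost_a = rng.gamma(shape=4.0, scale=900.0, size=255)   # "ziprasidone-like" arm
resp_a = rng.random(255) < 0.71
cost_b = rng.gamma(shape=4.0, scale=740.0, size=70)    # "haloperidol-like" arm
resp_b = rng.random(70) < 0.56

B = 5000
d_cost = np.empty(B)
d_eff = np.empty(B)
for b in range(B):
    ia = rng.integers(0, cost_a.size, cost_a.size)     # resample arm A with replacement
    ib = rng.integers(0, cost_b.size, cost_b.size)     # resample arm B with replacement
    d_cost[b] = cost_a[ia].mean() - cost_b[ib].mean()
    d_eff[b] = resp_a[ia].mean() - resp_b[ib].mean()

icer = (cost_a.mean() - cost_b.mean()) / (resp_a.mean() - resp_b.mean())
print(f"point ICER: {icer:,.0f} euro per additional responder")

# Acceptability curve: probability that the net monetary benefit is positive
# for a range of willingness-to-pay (WTP) values per additional responder.
for wtp in (2000, 5000, 10000, 20000):
    p_ce = np.mean(wtp * d_eff - d_cost > 0)
    print(f"P(cost-effective | WTP={wtp:>6}): {p_ce:.2f}")
```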
Introduction to Permutation and Resampling-Based Hypothesis Tests
ERIC Educational Resources Information Center
LaFleur, Bonnie J.; Greevy, Robert A.
2009-01-01
A resampling-based method of inference--permutation tests--is often used when distributional assumptions are questionable or unmet. Not only are these methods useful for obvious departures from parametric assumptions (e.g., normality) and small sample sizes, but they are also more robust than their parametric counterparts in the presence of…
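For readers new to the idea, a two-sample permutation test can be written in a few lines: pool the observations, repeatedly shuffle the group labels, and compare the observed difference in means to the resulting reference distribution. The sketch below is a minimal generic example with made-up numbers, not code from the cited article.

```python
import numpy as np

rng = np.random.default_rng(0)
group_a = np.array([12.1, 14.3, 11.8, 15.2, 13.9, 12.7])
group_b = np.array([10.4, 11.1, 12.0, 9.8, 10.9])

observed = group_a.mean() - group_b.mean()
pooled = np.concatenate([group_a, group_b])

n_perm = 20_000
count = 0
for _ in range(n_perm):
    rng.shuffle(pooled)                      # random reassignment of labels
    diff = pooled[:group_a.size].mean() - pooled[group_a.size:].mean()
    if abs(diff) >= abs(observed):
        count += 1

# Two-sided permutation p-value (with the usual +1 correction).
p_value = (count + 1) / (n_perm + 1)
print(f"observed difference = {observed:.2f}, permutation p = {p_value:.4f}")
```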
Air-to-Air Missile Vector Scoring
2012-03-22
Acronyms from the report: SIR (sampling-importance resampling), EPF (extended particle filter), UPF (unscented particle filter). Text excerpt: "... particle filter (EPF) or an unscented particle filter (UPF) [20]. The basic concept is to apply a bank of N EKF or UKF filters to move particles from ... Merwe, Doucet, Freitas and Wan provide a comprehensive discussion on the EPF and UPF, including algorithms for implementation [20]."
ERIC Educational Resources Information Center
Fan, Xitao
This paper empirically and systematically assessed the performance of the bootstrap resampling procedure as it was applied to a regression model. Parameter estimates from Monte Carlo experiments (repeated sampling from the population) and bootstrap experiments (repeated resampling from one original bootstrap sample) were generated and compared. Sample…
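The comparison described here (Monte Carlo sampling from a known population versus bootstrap resampling of cases from a single observed sample) can be reproduced in outline for a simple linear regression. The following is a generic sketch under assumed population parameters, not the paper's simulation design; it contrasts the Monte Carlo spread of the slope with the bootstrap spread obtained from one sample.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50
beta0, beta1, sigma = 1.0, 2.0, 1.5   # assumed population parameters

def fit_slope(x, y):
    """OLS slope via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Monte Carlo: repeated sampling from the known population.
mc_slopes = []
for _ in range(2000):
    x = rng.normal(size=n)
    y = beta0 + beta1 * x + rng.normal(scale=sigma, size=n)
    mc_slopes.append(fit_slope(x, y))

# Bootstrap: repeated case resampling from ONE observed sample.
x0 = rng.normal(size=n)
y0 = beta0 + beta1 * x0 + rng.normal(scale=sigma, size=n)
boot_slopes = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot_slopes.append(fit_slope(x0[idx], y0[idx]))

print(f"Monte Carlo SE of slope: {np.std(mc_slopes, ddof=1):.3f}")
print(f"Bootstrap  SE of slope: {np.std(boot_slopes, ddof=1):.3f}")
```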
Anomalous change detection in imagery
Theiler, James P [Los Alamos, NM; Perkins, Simon J [Santa Fe, NM
2011-05-31
A distribution-based anomaly detection platform is described that identifies a non-flat background that is specified in terms of the distribution of the data. A resampling approach is also disclosed employing scrambled resampling of the original data with one class specified by the data and the other by the explicit distribution, and solving using binary classification.
De-Dopplerization of Acoustic Measurements
2017-08-10
Excerpt: "Band energy obtained from fractional octave band digital filters generates a de-Dopplerized spectrum without complex resampling algorithms. An equation ... fractional octave representation and smearing that occurs within the spectrum [11], digital filtering techniques were not considered by these earlier ..."
Thematic mapper design parameter investigation
NASA Technical Reports Server (NTRS)
Colby, C. P., Jr.; Wheeler, S. G.
1978-01-01
This study simulated the multispectral data sets to be expected from three different Thematic Mapper configurations, and the ground processing of these data sets by three different resampling techniques. The simulated data sets were then evaluated by processing them for multispectral classification, and the Thematic Mapper configuration and resampling technique that provided the best classification accuracy were identified.
permGPU: Using graphics processing units in RNA microarray association studies.
Shterev, Ivo D; Jung, Sin-Ho; George, Stephen L; Owzar, Kouros
2010-06-16
Many analyses of microarray association studies involve permutation, bootstrap resampling and cross-validation, which are ideally formulated as embarrassingly parallel computing problems. Given that these analyses are computationally intensive, scalable approaches that can take advantage of multi-core processor systems need to be developed. We have developed a CUDA-based implementation, permGPU, that employs graphics processing units in microarray association studies. We illustrate the performance and applicability of permGPU within the context of permutation resampling for a number of test statistics. An extensive simulation study demonstrates a dramatic increase in performance when using permGPU on an NVIDIA GTX 280 card compared to an optimized C/C++ solution running on a conventional Linux server. permGPU is available as an open-source stand-alone application and as an extension package for the R statistical environment. It provides a dramatic increase in performance for permutation resampling analysis in the context of microarray association studies. The current version offers six test statistics for carrying out permutation resampling analyses for binary, quantitative and censored time-to-event traits.
Classifier performance prediction for computer-aided diagnosis using a limited dataset.
Sahiner, Berkman; Chan, Heang-Ping; Hadjiiski, Lubomir
2008-04-01
In a practical classifier design problem, the true population is generally unknown and the available sample is finite-sized. A common approach is to use a resampling technique to estimate the performance of the classifier that will be trained with the available sample. We conducted a Monte Carlo simulation study to compare the ability of the different resampling techniques in training the classifier and predicting its performance under the constraint of a finite-sized sample. The true population for the two classes was assumed to be multivariate normal distributions with known covariance matrices. Finite sets of sample vectors were drawn from the population. The true performance of the classifier is defined as the area under the receiver operating characteristic curve (AUC) when the classifier designed with the specific sample is applied to the true population. We investigated methods based on the Fukunaga-Hayes and the leave-one-out techniques, as well as three different types of bootstrap methods, namely, the ordinary, 0.632, and 0.632+ bootstrap. The Fisher's linear discriminant analysis was used as the classifier. The dimensionality of the feature space was varied from 3 to 15. The sample size n2 from the positive class was varied between 25 and 60, while the number of cases from the negative class was either equal to n2 or 3n2. Each experiment was performed with an independent dataset randomly drawn from the true population. Using a total of 1000 experiments for each simulation condition, we compared the bias, the variance, and the root-mean-squared error (RMSE) of the AUC estimated using the different resampling techniques relative to the true AUC (obtained from training on a finite dataset and testing on the population). Our results indicated that, under the study conditions, there can be a large difference in the RMSE obtained using different resampling methods, especially when the feature space dimensionality is relatively large and the sample size is small. Under this type of conditions, the 0.632 and 0.632+ bootstrap methods have the lowest RMSE, indicating that the difference between the estimated and the true performances obtained using the 0.632 and 0.632+ bootstrap will be statistically smaller than those obtained using the other three resampling methods. Of the three bootstrap methods, the 0.632+ bootstrap provides the lowest bias. Although this investigation is performed under some specific conditions, it reveals important trends for the problem of classifier performance prediction under the constraint of a limited dataset.
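The 0.632 estimator blends the (optimistic) resubstitution performance with the (pessimistic) out-of-bag bootstrap performance. The sketch below is a simplified generic implementation for a linear discriminant classifier and AUC on simulated multivariate-normal classes; it shows only the plain 0.632 combination (the 0.632+ correction for overfitting is omitted) and is not the authors' simulation code.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(11)

# Finite training sample drawn from two multivariate normal classes.
d, n_pos, n_neg = 9, 40, 40
X = np.vstack([rng.normal(0.0, 1.0, (n_neg, d)),
               rng.normal(0.5, 1.0, (n_pos, d))])
y = np.r_[np.zeros(n_neg), np.ones(n_pos)]

# Resubstitution AUC: train and test on the same finite sample.
clf = LinearDiscriminantAnalysis().fit(X, y)
auc_resub = roc_auc_score(y, clf.decision_function(X))

# Out-of-bag bootstrap AUC: train on a bootstrap sample, test on left-out cases.
B, oob_aucs = 200, []
for _ in range(B):
    idx = rng.integers(0, len(y), len(y))
    oob = np.setdiff1d(np.arange(len(y)), idx)
    if len(np.unique(y[idx])) < 2 or len(np.unique(y[oob])) < 2:
        continue  # skip degenerate resamples
    model = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
    oob_aucs.append(roc_auc_score(y[oob], model.decision_function(X[oob])))

auc_oob = float(np.mean(oob_aucs))
auc_632 = 0.368 * auc_resub + 0.632 * auc_oob
print(f"resubstitution AUC  = {auc_resub:.3f}")
print(f"out-of-bag AUC      = {auc_oob:.3f}")
print(f"0.632 bootstrap AUC = {auc_632:.3f}")
```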
Porto, Paolo; Walling, Des E; Alewell, Christine; Callegari, Giovanni; Mabit, Lionel; Mallimo, Nicola; Meusburger, Katrin; Zehringer, Markus
2014-12-01
Soil erosion and both its on-site and off-site impacts are increasingly seen as a serious environmental problem across the world. The need for an improved evidence base on soil loss and soil redistribution rates has directed attention to the use of fallout radionuclides, and particularly (137)Cs, for documenting soil redistribution rates. This approach possesses important advantages over more traditional means of documenting soil erosion and soil redistribution. However, one key limitation of the approach is the time-averaged or lumped nature of the estimated erosion rates. In nearly all cases, these will relate to the period extending from the main period of bomb fallout to the time of sampling. Increasing concern for the impact of global change, particularly that related to changing land use and climate change, has frequently directed attention to the need to document changes in soil redistribution rates within this period. Re-sampling techniques, which should be distinguished from repeat-sampling techniques, have the potential to meet this requirement. As an example, the use of a re-sampling technique to derive estimates of the mean annual net soil loss from a small (1.38 ha) forested catchment in southern Italy is reported. The catchment was originally sampled in 1998 and samples were collected from points very close to the original sampling points again in 2013. This made it possible to compare the estimate of mean annual erosion for the period 1954-1998 with that for the period 1999-2013. The availability of measurements of sediment yield from the catchment for parts of the overall period made it possible to compare the results provided by the (137)Cs re-sampling study with the estimates of sediment yield for the same periods. In order to compare the estimates of soil loss and sediment yield for the two different periods, it was necessary to establish the uncertainty associated with the individual estimates. In the absence of a generally accepted procedure for such calculations, key factors influencing the uncertainty of the estimates were identified and a procedure developed. The results of the study demonstrated that there had been no significant change in mean annual soil loss in recent years and this was consistent with the information provided by the estimates of sediment yield from the catchment for the same periods. The study demonstrates the potential for using a re-sampling technique to document recent changes in soil redistribution rates.
System health monitoring using multiple-model adaptive estimation techniques
NASA Astrophysics Data System (ADS)
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
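The LHS step described for GRAPE (placing parameter samples so that each dimension is stratified, so that the model count need not grow with the number of parameters) is easy to sketch. The code below is a generic Latin hypercube sampler over a hypothetical parameter box, not GRAPE itself; each column is a random permutation of strata with a uniform jitter inside each stratum.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Return n_samples points stratified in each dimension of `bounds`.

    bounds: sequence of (low, high) pairs, one per parameter dimension.
    """
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)
    dim = bounds.shape[0]
    # One stratum per sample in every dimension, permuted independently per dimension.
    strata = rng.permuted(np.tile(np.arange(n_samples), (dim, 1)), axis=1).T
    u = (strata + rng.random((n_samples, dim))) / n_samples
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

# Hypothetical parameter ranges for a bank of filter models
# (e.g. a drag coefficient and a process-noise scale).
samples = latin_hypercube(8, [(0.1, 2.0), (1e-4, 1e-2)], rng=0)
print(samples)
```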
Methods of soil resampling to monitor changes in the chemical concentrations of forest soils
Gregory B. Lawrence; Ivan J. Fernandez; Paul W. Hazlett; Scott W. Bailey; Donald S. Ross; Thomas R. Villars; Angelica Quintana; Rock Ouimet; Michael R. McHale; Chris E. Johnson; Russell D. Briggs; Robert A. Colter; Jason Siemion; Olivia L. Bartlett; Olga Vargas; Michael R. Antidormi; Mary M. Koppers
2016-01-01
Recent soils research has shown that important chemical soil characteristics can change in less than a decade, often the result of broad environmental changes. Repeated sampling to monitor these changes in forest soils is a relatively new practice that is not well documented in the literature and has only recently been broadly embraced by the scientific community. The...
Ter Braak, Cajo J F; Peres-Neto, Pedro; Dray, Stéphane
2017-01-01
Statistical testing of trait-environment association from data is a challenge as there is no common unit of observation: the trait is observed on species, the environment on sites and the mediating abundance on species-site combinations. A number of correlation-based methods, such as the community weighted trait means method (CWM), the fourth-corner correlation method and the multivariate method RLQ, have been proposed to estimate such trait-environment associations. In these methods, valid statistical testing proceeds by performing two separate resampling tests, one site-based and the other species-based, and by assessing significance by the larger of the two p-values (the pmax test). Recently, regression-based methods using generalized linear models (GLM) have been proposed as a promising alternative with statistical inference via site-based resampling. We investigated the performance of this new approach along with approaches that mimicked the pmax test using GLM instead of the fourth-corner. By simulation using models with additional random variation in the species response to the environment, the site-based resampling tests using GLM are shown to have severely inflated type I error, of up to 90%, when the nominal level is set as 5%. In addition, predictive modelling of such data using site-based cross-validation very often identified trait-environment interactions that had no predictive value. The problem that we identify is not an "omitted variable bias" problem as it occurs even when the additional random variation is independent of the observed trait and environment data. Instead, it is a problem of ignoring a random effect. In the same simulations, the GLM-based pmax test controlled the type I error in all models proposed so far in this context, but still gave slightly inflated error in more complex models that included both missing (but important) traits and missing (but important) environmental variables. For screening the importance of single trait-environment combinations, the fourth-corner test is shown to give almost the same results as the GLM-based tests in far less computing time.
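A minimal version of the GLM-based max test can be written directly: regress abundance on environment, trait and their interaction, obtain one permutation p-value by shuffling the environment across sites and another by shuffling the trait across species, and report the larger of the two. The sketch below is a generic Poisson-GLM illustration on simulated data with hypothetical variable names; it is not the authors' simulation code and omits the refinements they examine.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_sites, n_species = 30, 40

env = rng.normal(size=n_sites)            # one environmental variable per site
trait = rng.normal(size=n_species)        # one trait value per species

# Simulated abundances with a true trait-environment interaction.
eta = -1.0 + 0.8 * np.outer(env, trait)
y = rng.poisson(np.exp(eta))              # shape (n_sites, n_species)

def interaction_z(y, env, trait):
    """|z| statistic of the env:trait interaction in a Poisson GLM."""
    e = np.repeat(env, len(trait))        # matches row-major flattening of y
    t = np.tile(trait, len(env))
    X = sm.add_constant(np.column_stack([e, t, e * t]))
    fit = sm.GLM(y.ravel(), X, family=sm.families.Poisson()).fit()
    return abs(fit.tvalues[-1])

obs = interaction_z(y, env, trait)

n_perm = 199
site_null = [interaction_z(y, rng.permutation(env), trait) for _ in range(n_perm)]
spec_null = [interaction_z(y, env, rng.permutation(trait)) for _ in range(n_perm)]

p_site = (1 + sum(s >= obs for s in site_null)) / (n_perm + 1)
p_spec = (1 + sum(s >= obs for s in spec_null)) / (n_perm + 1)
print(f"site-based p = {p_site:.3f}, species-based p = {p_spec:.3f}, "
      f"p_max = {max(p_site, p_spec):.3f}")
```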
Population annealing simulations of a binary hard-sphere mixture
NASA Astrophysics Data System (ADS)
Callaham, Jared; Machta, Jonathan
2017-06-01
Population annealing is a sequential Monte Carlo scheme well suited to simulating equilibrium states of systems with rough free energy landscapes. Here we use population annealing to study a binary mixture of hard spheres. Population annealing is a parallel version of simulated annealing with an extra resampling step that ensures that a population of replicas of the system represents the equilibrium ensemble at every packing fraction in an annealing schedule. The algorithm and its equilibration properties are described, and results are presented for a glass-forming fluid composed of a 50/50 mixture of hard spheres with diameter ratio of 1.4:1. For this system, we obtain precise results for the equation of state in the glassy regime up to packing fractions φ ≈ 0.60 and study deviations from the Boublik-Mansoori-Carnahan-Starling-Leland equation of state. For higher packing fractions, the algorithm falls out of equilibrium and a free volume fit predicts jamming at packing fraction φ ≈ 0.667. We conclude that population annealing is an effective tool for studying equilibrium glassy fluids and the jamming transition.
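The resampling step that distinguishes population annealing from plain simulated annealing is compact: when the inverse temperature is raised from β to β', each replica i is copied a number of times with expectation proportional to exp(-(β'-β)E_i), which keeps the population an equilibrium sample at β'. The sketch below shows this generic Boltzmann reweighting on a toy energy function; hard-sphere systems anneal in packing fraction rather than temperature, so this is the textbook scheme rather than the exact procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

def resample_population(configs, energies, d_beta, rng):
    """Population-annealing resampling for an inverse-temperature step d_beta."""
    w = np.exp(-d_beta * (energies - energies.min()))   # shift for numerical stability
    w /= w.sum()
    # Multinomial resampling keeps the population size exactly fixed.
    counts = rng.multinomial(len(configs), w)
    idx = np.repeat(np.arange(len(configs)), counts)
    return configs[idx], energies[idx]

# Toy population: replicas of a 1-D "configuration" with energy x^2.
R = 1000
configs = rng.normal(scale=2.0, size=R)
energies = configs ** 2

configs, energies = resample_population(configs, energies, d_beta=0.5, rng=rng)
print(f"population size after resampling: {configs.size}")
print(f"mean energy after resampling    : {energies.mean():.3f}")
```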
The Beginner's Guide to the Bootstrap Method of Resampling.
ERIC Educational Resources Information Center
Lane, Ginny G.
The bootstrap method of resampling can be useful in estimating the replicability of study results. The bootstrap procedure creates a mock population from a given sample of data from which multiple samples are then drawn. The method extends the usefulness of the jackknife procedure as it allows for computation of a given statistic across a maximal…
ERIC Educational Resources Information Center
Nevitt, Jonathan; Hancock, Gregory R.
2001-01-01
Evaluated the bootstrap method under varying conditions of nonnormality, sample size, model specification, and number of bootstrap samples drawn from the resampling space. Results for the bootstrap suggest the resampling-based method may be conservative in its control over model rejections, thus having an impact on the statistical power associated…
Resampling and Distribution of the Product Methods for Testing Indirect Effects in Complex Models
ERIC Educational Resources Information Center
Williams, Jason; MacKinnon, David P.
2008-01-01
Recent advances in testing mediation have found that certain resampling methods and tests based on the mathematical distribution of 2 normal random variables substantially outperform the traditional "z" test. However, these studies have primarily focused only on models with a single mediator and 2 component paths. To address this limitation, a…
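For the single-mediator case that these articles use as a baseline, the resampling test of the indirect effect a·b is short: resample cases, re-estimate the two regressions, and form a percentile interval for the product of coefficients. The sketch below is a generic illustration on simulated data and does not cover the more complex multiple-mediator models discussed in the article.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200

# Simulated mediation: X -> M -> Y with a direct effect of X on Y.
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)            # path a = 0.5
y = 0.4 * m + 0.2 * x + rng.normal(size=n)  # path b = 0.4

def ab_estimate(x, m, y):
    """Indirect effect a*b from two OLS fits: M ~ X and Y ~ X + M."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)              # resample cases with replacement
    boot.append(ab_estimate(x[idx], m[idx], y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {ab_estimate(x, m, y):.3f}, "
      f"95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```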
Gaussian Process Interpolation for Uncertainty Estimation in Image Registration
Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William
2014-01-01
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods.
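The central quantity in this approach, the posterior variance of a Gaussian process at resampled locations, can be obtained directly from a standard GP regression library. The sketch below fits a 1-D GP to intensities on a coarse base grid and reports the predictive standard deviation at off-grid points; it is a toy stand-in for the image setting, with an assumed RBF kernel and noise level rather than the paper's model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)

# Base-grid samples of a 1-D "image" intensity profile.
x_grid = np.arange(0.0, 10.0, 1.0).reshape(-1, 1)
intensity = np.sin(x_grid).ravel() + 0.05 * rng.standard_normal(x_grid.shape[0])

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05**2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x_grid, intensity)

# Resampled (off-grid) locations: uncertainty grows midway between grid points.
x_new = np.array([[2.0], [2.25], [2.5], [2.75], [3.0]])
mean, std = gp.predict(x_new, return_std=True)
for xi, mi, si in zip(x_new.ravel(), mean, std):
    print(f"x = {xi:4.2f}  interpolated = {mi:6.3f}  posterior std = {si:.3f}")
```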
NASA Astrophysics Data System (ADS)
Collins, Jarrod A.; Heiselman, Jon S.; Weis, Jared A.; Clements, Logan W.; Simpson, Amber L.; Jarnagin, William R.; Miga, Michael I.
2017-03-01
In image-guided liver surgery (IGLS), sparse representations of the anterior organ surface may be collected intraoperatively to drive image-to-physical space registration. Soft tissue deformation represents a significant source of error for IGLS techniques. This work investigates the impact of surface data quality on current surface based IGLS registration methods. In this work, we characterize the robustness of our IGLS registration methods to noise in organ surface digitization. We study this within a novel human-to-phantom data framework that allows a rapid evaluation of clinically realistic data and noise patterns on a fully characterized hepatic deformation phantom. Additionally, we implement a surface data resampling strategy that is designed to decrease the impact of differences in surface acquisition. For this analysis, n=5 cases of clinical intraoperative data consisting of organ surface and salient feature digitizations from open liver resection were collected and analyzed within our human-to-phantom validation framework. As expected, results indicate that increasing levels of noise in surface acquisition cause registration fidelity to deteriorate. With respect to rigid registration using the raw and resampled data at clinically realistic levels of noise (i.e. a magnitude of 1.5 mm), resampling improved TRE by 21%. In terms of nonrigid registration, registrations using resampled data outperformed the raw data result by 14% at clinically realistic levels and were less susceptible to noise across the range of noise investigated. These results demonstrate the types of analyses our novel human-to-phantom validation framework can provide and indicate the considerable benefits of resampling strategies.
An optical systems analysis approach to image resampling
NASA Technical Reports Server (NTRS)
Lyon, Richard G.
1997-01-01
All types of image registration require some type of resampling, either during the registration or as a final step in the registration process. Thus the image(s) must be regridded into a spatially uniform, or angularly uniform, coordinate system with some pre-defined resolution. Frequently the ending resolution is not the resolution at which the data were observed. The registration algorithm designer and end product user are presented with a multitude of possible resampling methods, each of which modifies the spatial frequency content of the data in some way. The purpose of this paper is threefold: (1) to show how an imaging system modifies the scene from an end to end optical systems analysis approach, (2) to develop a generalized resampling model, and (3) to empirically apply the model to simulated radiometric scene data and tabulate the results. A Hanning windowed sinc interpolator method will be developed based upon the optical characterization of the system. It will be discussed in terms of the effects and limitations of sampling, aliasing, spectral leakage, and computational complexity. Simulated radiometric scene data will be used to demonstrate each of the algorithms. A high resolution scene will be "grown" using a fractal growth algorithm based on mid-point recursion techniques. The resulting scene data will be convolved with a point spread function representing the optical response. The resultant scene will be convolved with the detection system's response and subsampled to the desired resolution. The resultant data product will be subsequently resampled to the correct grid using the Hanning windowed sinc interpolator and the results and errors tabulated and discussed.
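A Hanning-windowed sinc interpolator of the kind outlined here can be written in a few lines: each output sample is a weighted sum of input samples, with weights given by sinc(x) multiplied by a Hann window of finite half-width. The sketch below is a simple 1-D illustration (uniform input grid, assumed half-width of 8 samples), not the radiometric simulation described in the paper.

```python
import numpy as np

def hann_sinc_resample(signal, t_out, half_width=8):
    """Resample a uniformly sampled 1-D signal at (possibly non-integer) times t_out."""
    signal = np.asarray(signal, dtype=float)
    out = np.zeros(len(t_out))
    for i, t in enumerate(t_out):
        k0 = int(np.floor(t)) - half_width + 1
        k = np.arange(k0, k0 + 2 * half_width)          # contributing input samples
        valid = (k >= 0) & (k < signal.size)
        x = t - k[valid]
        # Hann window tapers the sinc kernel to zero at |x| = half_width.
        w = np.sinc(x) * 0.5 * (1.0 + np.cos(np.pi * x / half_width))
        out[i] = np.sum(signal[k[valid]] * w)
    return out

# Band-limited test signal sampled at unit spacing, resampled on a finer grid.
n = 64
t_in = np.arange(n)
sig = np.sin(2 * np.pi * 0.05 * t_in) + 0.5 * np.cos(2 * np.pi * 0.11 * t_in)
t_out = np.arange(10.0, 50.0, 0.25)
resampled = hann_sinc_resample(sig, t_out)
truth = np.sin(2 * np.pi * 0.05 * t_out) + 0.5 * np.cos(2 * np.pi * 0.11 * t_out)
print(f"max interpolation error: {np.max(np.abs(resampled - truth)):.4f}")
```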
Surface Fitting for Quasi Scattered Data from Coordinate Measuring Systems.
Mao, Qing; Liu, Shugui; Wang, Sen; Ma, Xinhui
2018-01-13
Non-uniform rational B-spline (NURBS) surface fitting from data points is widely used in the fields of computer aided design (CAD), medical imaging, cultural relic representation and object-shape detection. Usually, the measured data acquired from coordinate measuring systems is neither gridded nor completely scattered. The distribution of this kind of data is scattered in physical space, but the data points are stored in a way consistent with the order of measurement, so it is named quasi scattered data in this paper. Therefore, the data points can be organized into rows easily, but the number of points in each row is random. In order to overcome the difficulty of surface fitting from this kind of data, a new method based on resampling is proposed. It consists of three major steps: (1) NURBS curve fitting for each row, (2) resampling on the fitted curve and (3) surface fitting from the resampled data. An iterative projection optimization scheme is applied in the first and third steps to yield advisable parameterization and reduce the time cost of projection. A resampling approach based on parameters, local peaks and contour curvature is proposed to overcome the problems of node redundancy and high time consumption in the fitting of this kind of scattered data. Numerical experiments are conducted with both simulation and practical data, and the results show that the proposed method is fast, effective and robust. What's more, by analyzing the fitting results acquired from data with different degrees of scatter, it can be demonstrated that the error introduced by resampling is negligible and therefore the method is feasible.
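The "fit each row, then resample on the fitted curve" idea can be illustrated with standard spline tools in place of NURBS: fit a parametric B-spline to one noisy, unevenly spaced measurement row and re-evaluate it at uniformly spaced parameter values, so that every row ends up with the same number of points. The sketch below is a generic stand-in (SciPy B-splines rather than NURBS, no projection optimization) for a single row of quasi scattered data.

```python
import numpy as np
from scipy.interpolate import splev, splprep

rng = np.random.default_rng(9)

# One "row" of quasi scattered measurements: ordered along the scan path,
# unevenly spaced, with a random number of points and a little noise.
n_pts = int(rng.integers(40, 80))
u_true = np.sort(rng.random(n_pts))
x = np.cos(np.pi * u_true) + 0.01 * rng.standard_normal(n_pts)
y = np.sin(np.pi * u_true) + 0.01 * rng.standard_normal(n_pts)

# Step 1: fit a smoothing parametric B-spline to the row.
tck, _ = splprep([x, y], s=n_pts * 0.01**2)

# Step 2: resample the fitted curve at a fixed number of uniform parameters,
# so every row contributes the same grid of points to the surface fit.
u_new = np.linspace(0.0, 1.0, 50)
x_res, y_res = splev(u_new, tck)

print(f"original points: {n_pts}, resampled points: {len(u_new)}")
print("first resampled point:", (round(float(x_res[0]), 3), round(float(y_res[0]), 3)))
```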
Exact and Monte carlo resampling procedures for the Wilcoxon-Mann-Whitney and Kruskal-Wallis tests.
Berry, K J; Mielke, P W
2000-12-01
Exact and Monte Carlo resampling FORTRAN programs are described for the Wilcoxon-Mann-Whitney rank sum test and the Kruskal-Wallis one-way analysis of variance for ranks test. The program algorithms compensate for tied values and do not depend on asymptotic approximations for probability values, unlike most algorithms contained in PC-based statistical software packages.
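A Monte Carlo resampling version of the Wilcoxon-Mann-Whitney test follows the same logic as the programs described: compute the rank-sum statistic on mid-ranks (so ties are handled automatically), then repeatedly permute the group labels and count how often a statistic at least as extreme occurs. The sketch below is a small generic Python illustration, not a translation of the published FORTRAN programs.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(12)

group_a = np.array([3.1, 4.2, 4.2, 5.0, 6.3, 7.1])   # note the tied values
group_b = np.array([2.0, 3.1, 3.5, 4.2, 4.8])

data = np.concatenate([group_a, group_b])
n_a = group_a.size

ranks = rankdata(data)                       # mid-ranks compensate for ties
observed = ranks[:n_a].sum()
expected = n_a * (data.size + 1) / 2.0       # mean of the statistic under H0

n_resamples = 50_000
count = 0
idx = np.arange(data.size)
for _ in range(n_resamples):
    rng.shuffle(idx)                         # random relabelling of the pooled sample
    stat = ranks[idx[:n_a]].sum()
    if abs(stat - expected) >= abs(observed - expected):
        count += 1

p_value = (count + 1) / (n_resamples + 1)
print(f"rank sum = {observed}, two-sided Monte Carlo p = {p_value:.4f}")
```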
Cellular neural network-based hybrid approach toward automatic image registration
NASA Astrophysics Data System (ADS)
Arun, Pattathal VijayaKumar; Katiyar, Sunil Kumar
2013-01-01
Image registration is a key component of various image processing operations that involve the analysis of different image data sets. Automatic image registration domains have witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape as well as contextual information has limited the attainable accuracy. A framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as vector machines, cellular neural network (CNN), scale invariant feature transform (SIFT), coreset, and cellular automata is proposed. CNN has been found to be effective in improving the feature matching and resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are cellular neural network approach-based SIFT feature point optimization, adaptive resampling, and intelligent object modelling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. This system dynamically uses spectral and spatial information to represent contextual knowledge using a CNN-Prolog approach. This methodology is also illustrated to be effective in providing intelligent interpretation and adaptive resampling.
Goldstein, Darlene R
2006-10-01
Studies of gene expression using high-density short oligonucleotide arrays have become a standard in a variety of biological contexts. Of the expression measures that have been proposed to quantify expression in these arrays, multi-chip-based measures have been shown to perform well. As gene expression studies increase in size, however, utilizing multi-chip expression measures is more challenging in terms of computing memory requirements and time. A strategic alternative to exact multi-chip quantification on a full large chip set is to approximate expression values based on subsets of chips. This paper introduces an extrapolation method, Extrapolation Averaging (EA), and a resampling method, Partition Resampling (PR), to approximate expression in large studies. An examination of properties indicates that subset-based methods can perform well compared with exact expression quantification. The focus is on short oligonucleotide chips, but the same ideas apply equally well to any array type for which expression is quantified using an entire set of arrays, rather than for only a single array at a time. Software implementing Partition Resampling and Extrapolation Averaging is under development as an R package for the BioConductor project.
Pesticides in Wyoming Groundwater, 2008-10
Eddy-Miller, Cheryl A.; Bartos, Timothy T.; Taylor, Michelle L.
2013-01-01
Groundwater samples were collected from 296 wells during 1995-2006 as part of a baseline study of pesticides in Wyoming groundwater. In 2009, a previous report summarized the results of the baseline sampling and the statistical evaluation of the occurrence of pesticides in relation to selected natural and anthropogenic (human-related) characteristics. During 2008-10, the U.S. Geological Survey, in cooperation with the Wyoming Department of Agriculture, resampled a subset (52) of the 296 wells sampled during 1995-2006 baseline study in order to compare detected compounds and respective concentrations between the two sampling periods and to evaluate the detections of new compounds. The 52 wells were distributed similarly to sites used in the 1995-2006 baseline study with respect to geographic area and land use within the geographic area of interest. Because of the use of different types of reporting levels and variability in reporting-level values during both the 1995-2006 baseline study and the 2008-10 resampling study, analytical results received from the laboratory were recensored. Two levels of recensoring were used to compare pesticides—a compound-specific assessment level (CSAL) that differed by compound and a common assessment level (CAL) of 0.07 microgram per liter. The recensoring techniques and values used for both studies, with the exception of the pesticide 2,4-D methyl ester, were the same. Twenty-eight different pesticides were detected in samples from the 52 wells during the 2008-10 resampling study. Pesticide concentrations were compared with several U.S. Environmental Protection Agency drinking-water standards or health advisories for finished (treated) water established under the Safe Drinking Water Act. All detected pesticides were measured at concentrations smaller than U.S. Environmental Protection Agency drinking-water standards or health advisories where applicable (many pesticides did not have standards or advisories). One or more pesticides were detected at concentrations greater than the CAL in water from 16 of 52 wells sampled (about 31 percent) during the resampling study. Detected pesticides were classified into one of six types: herbicides, herbicide degradates, insecticides, insecticide degradates, fungicides, or fungicide degradates. At least 95 percent of detected pesticides were classified as herbicides or herbicide degradates. The number of different pesticides detected in samples from the 52 wells was similar between the 1995-2006 baseline study (30 different pesticides) and 2008-2010 resampling study (28 different pesticides). Thirteen pesticides were detected during both studies. The change in the number of pesticides detected (without regard to which pesticide was detected) in groundwater samples from each of the 52 wells was evaluated and the number of pesticides detected in groundwater did not change for most of the wells (32). Of those that did have a difference between the two studies, 17 wells had more pesticide detections in groundwater during the 1995-2006 baseline study, whereas only 3 wells had more detections during the 2008-2010 resampling study. The difference in pesticide concentrations in groundwater samples from each of the 52 wells was determined. Few changes in concentration between the 1995-2006 baseline study and the 2008-2010 resampling study were seen for most detected pesticides. Seven pesticides had a greater concentration detected in the groundwater from the same well during the baseline sampling compared to the resampling study. 
Concentrations of prometon, which was detected in 17 wells, were greater in the baseline study sample compared to the resampling study sample from the same well 100 percent of the time. The change in the number of pesticides detected (without regard to which pesticide was detected) in groundwater samples from each of the 52 wells with respect to land use and geographic area was calculated. All wells with land use classified as agricultural had the same or a smaller number of pesticides detected in the resampling study compared to the baseline study. All wells in the Bighorn Basin geographic area also had the same or a smaller number of pesticides detected in the resampling study compared to the baseline study.
Experimental study of digital image processing techniques for LANDSAT data
NASA Technical Reports Server (NTRS)
Rifman, S. S. (Principal Investigator); Allendoerfer, W. B.; Caron, R. H.; Pemberton, L. J.; Mckinnon, D. M.; Polanski, G.; Simon, K. W.
1976-01-01
The author has identified the following significant results. Results are reported for: (1) subscene registration, (2) full scene rectification and registration, (3) resampling techniques, and (4) ground control point (GCP) extraction. Subscenes (354 pixels x 234 lines) were registered to approximately 1/4 pixel accuracy and evaluated by change detection imagery for three cases: (1) bulk data registration, (2) precision correction of a reference subscene using GCP data, and (3) independently precision processed subscenes. Full scene rectification and registration results were evaluated by using a correlation technique to measure registration errors of 0.3 pixel rms throughout the full scene. Resampling evaluations of nearest neighbor and TRW cubic convolution processed data included change detection imagery and feature classification. Resampled data were also evaluated for an MSS scene containing specular solar reflections.
Benefits of an ultra large and multiresolution ensemble for estimating available wind power
NASA Astrophysics Data System (ADS)
Berndt, Jonas; Hoppe, Charlotte; Elbern, Hendrik
2016-04-01
In this study we investigate the benefits of an ultra large ensemble with up to 1000 members, including multiple nesting with a target horizontal resolution of 1 km. The ensemble shall be used as a basis to detect events of extreme errors in wind power forecasting. The forecast value is the wind vector at wind turbine hub height (~100 m) in the short range (1 to 24 hours). Current wind power forecast systems already rest on NWP ensemble models. However, only calibrated ensembles from meteorological institutions serve as input so far, with limited spatial resolution (~10-80 km) and member number (~50). Perturbations related to the specific merits of wind power production are still missing. Thus, single extreme error events which are not detected by such ensemble power forecasts occur infrequently. The numerical forecast model used in this study is the Weather Research and Forecasting Model (WRF). Model uncertainties are represented by stochastic parametrization of sub-grid processes via stochastically perturbed parametrization tendencies and, in conjunction, via the complementary stochastic kinetic-energy backscatter scheme already provided by WRF. We perform continuous ensemble updates by comparing each ensemble member with available observations using a sequential importance resampling filter to improve the model accuracy while maintaining ensemble spread. Additionally, we use different ensemble systems from global models (ECMWF and GFS) as input and boundary conditions to capture different synoptic conditions. Critical weather situations which are connected to extreme error events are located and corresponding perturbation techniques are applied. The demanding computational effort is overcome by utilising the supercomputer JUQUEEN at the Forschungszentrum Juelich.
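The continuous ensemble update sketched here, comparing each member with observations and resampling with a sequential importance resampling filter, has a compact generic core: compute a Gaussian likelihood weight for every member, then draw a new ensemble with systematic resampling so that well-fitting members are duplicated and poor ones dropped. The code below is a minimal, model-agnostic illustration of that step for a hub-height wind vector with a hypothetical observation error; it is not the WRF-based system itself.

```python
import numpy as np

rng = np.random.default_rng(13)

def sir_update(ensemble, obs, obs_std, rng):
    """One SIR step: weight members by observation likelihood, then resample."""
    # Gaussian log-likelihood of the observation given each member (row-wise).
    log_w = -0.5 * np.sum(((ensemble - obs) / obs_std) ** 2, axis=1)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Systematic resampling: low variance, keeps the ensemble size fixed.
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(w), positions)
    idx = np.minimum(idx, n - 1)              # guard against round-off at the upper edge
    return ensemble[idx]

# Ensemble of hub-height wind vectors (u, v) in m/s and one observation.
ensemble = rng.normal(loc=[8.0, 2.0], scale=3.0, size=(1000, 2))
obs = np.array([10.5, 1.0])
updated = sir_update(ensemble, obs, obs_std=1.0, rng=rng)

print("prior mean  :", ensemble.mean(axis=0).round(2))
print("updated mean:", updated.mean(axis=0).round(2))
```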
Anisotropic scene geometry resampling with occlusion filling for 3DTV applications
NASA Astrophysics Data System (ADS)
Kim, Jangheon; Sikora, Thomas
2006-02-01
Image- and video-based rendering technologies are receiving growing attention due to their photo-realistic free-viewpoint rendering capability. However, two major limitations are ghosting and blurring caused by their sampling-based mechanism. Scene geometry, which supports the selection of accurate sampling positions, can be obtained with a global method (i.e., an approximate depth plane) or a local method (i.e., disparity estimation). This paper focuses on the local method since it can yield more accurate rendering quality without a large number of cameras. The local scene geometry has two difficulties: the geometrical density and the uncovered areas that include hidden information. These are serious drawbacks for reconstructing an arbitrary viewpoint without aliasing artifacts. To solve these problems, we propose an anisotropic diffusive resampling method based on tensor theory. Isotropic low-pass filtering accomplishes anti-aliasing in the scene geometry, while anisotropic diffusion prevents the filtering from blurring the visual structures. Apertures in coarse samples are estimated following diffusion on the pre-filtered space, and the nonlinear weighting of gradient directions suppresses the amount of diffusion. Aliasing artifacts from low density are efficiently removed by isotropic filtering, and the edge blurring can be solved by the anisotropic method in one process. Due to the different sizes of sampling gaps, the resampling condition is defined considering the causality between filter scale and edges. Using a partial differential equation (PDE) employing Gaussian scale-space, we iteratively achieve coarse-to-fine resampling. At a large scale, apertures and uncovered holes can be overcome because only strong and meaningful boundaries are selected at that resolution. The coarse-level resampling at a large scale is then iteratively refined to recover detailed scene structure. Simulation results show marked improvements in rendering quality.
NASA Astrophysics Data System (ADS)
Tweedie, C. E.; Ebert-May, D.; Hollister, R. D.; Johnson, D. R.; Lara, M. J.; Villarreal, S.; Spasojevic, M.; Webber, P.
2010-12-01
The International Polar Year-Back to the Future (IPY-BTF) is an endorsed International Polar Year project (IPY project #214). The overarching goal of this program is to determine how key structural and functional characteristics of high latitude/altitude terrestrial ecosystems have changed over the past 25 or more years and assess if such trajectories of change are likely to continue in the future. By rescuing data, revisiting and re-sampling historic research sites, and assessing environmental change over time, we aim to provide greater understanding of how tundra is changing and what the possible drivers of these changes are. Resampling of sites established by Patrick J. Webber between 1964 and 1975 in northern Baffin Island, Northern Alaska and in the Rocky Mountains forms a key contribution to the BTF project. Here we report on resampling efforts at each of these locations and initial results of a synthesis effort that finds similarities and differences in change between sites. Results suggest that although shifts in plant community composition are detectable at each location, the magnitude and direction of change differ among locations. Vegetation shifts along soil moisture gradients are apparent at most of the sites resampled. Interestingly, however, wet communities seem to have changed more than dry communities in the Arctic locations, while plant communities at the alpine site appear to be becoming more distinct regardless of soil moisture status. Ecosystem function studies performed in conjunction with plant community change suggest that there has been an increase in plant productivity at most sites resampled, especially in wet and mesic land cover types.
Resampling procedures to identify important SNPs using a consensus approach.
Pardy, Christopher; Motyer, Allan; Wilson, Susan
2011-11-29
Our goal is to identify common single-nucleotide polymorphisms (SNPs) (minor allele frequency > 1%) that add predictive accuracy above that gained by knowledge of easily measured clinical variables. We take an algorithmic approach to predict each phenotypic variable using a combination of phenotypic and genotypic predictors. We perform our procedure on the first simulated replicate and then validate against the others. Our procedure performs well when predicting Q1 but is less successful for the other outcomes. We use resampling procedures where possible to guard against false positives and to improve generalizability. The approach is based on finding a consensus regarding important SNPs by applying random forests and the least absolute shrinkage and selection operator (LASSO) on multiple subsamples. Random forests are used first to discard unimportant predictors, narrowing our focus to roughly 100 important SNPs. A cross-validation LASSO is then used to further select variables. We combine these procedures to guarantee that cross-validation can be used to choose a shrinkage parameter for the LASSO. If the clinical variables were unavailable, this prefiltering step would be essential. We perform the SNP-based analyses simultaneously rather than one at a time to estimate SNP effects in the presence of other causal variants. We analyzed the first simulated replicate of Genetic Analysis Workshop 17 without knowledge of the true model. Post-conference knowledge of the simulation parameters allowed us to investigate the limitations of our approach. We found that many of the false positives we identified were substantially correlated with genuine causal SNPs.
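A hedged sketch of such a consensus procedure using scikit-learn; the subsample fraction, the cutoff of roughly 100 SNPs, and the majority-vote rule are illustrative choices rather than the authors' exact settings:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV

def consensus_snps(X, y, n_subsamples=20, n_keep=100, rng=None):
    """Illustrative consensus selection: random-forest importance narrows
    the SNP set on each subsample, a cross-validated LASSO then selects
    variables, and SNPs chosen in most subsamples form the consensus."""
    rng = np.random.default_rng() if rng is None else rng
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        rows = rng.choice(n, size=n // 2, replace=False)
        rf = RandomForestRegressor(n_estimators=200, random_state=0)
        rf.fit(X[rows], y[rows])
        keep = np.argsort(rf.feature_importances_)[-n_keep:]   # top ~100 SNPs
        lasso = LassoCV(cv=5).fit(X[np.ix_(rows, keep)], y[rows])
        counts[keep[lasso.coef_ != 0]] += 1
    return np.where(counts >= n_subsamples / 2)[0]              # consensus SNPs

Requiring a SNP to be selected in at least half of the subsamples is one simple way to turn repeated LASSO fits into a consensus list.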
From climate-change spaghetti to climate-change distributions for 21st Century California
Dettinger, M.D.
2005-01-01
The uncertainties associated with climate-change projections for California are unlikely to disappear any time soon, and yet important long-term decisions will be needed to accommodate those potential changes. Projection uncertainties have typically been addressed by analysis of a few scenarios, chosen based on availability or to capture the extreme cases among available projections. However, by focusing on more common projections rather than the most extreme projections (using a new resampling method), new insights into current projections emerge: (1) uncertainties associated with future greenhouse-gas emissions are comparable with the differences among climate models, so that neither source of uncertainties should be neglected or underrepresented; (2) twenty-first century temperature projections spread more, overall, than do precipitation scenarios; (3) projections of extremely wet futures for California are true outliers among current projections; and (4) current projections that are warmest tend, overall, to yield a moderately drier California, while the cooler projections yield a somewhat wetter future. The resampling approach applied in this paper also provides a natural opportunity to objectively incorporate measures of model skill and the likelihoods of various emission scenarios into future assessments.
Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping.
Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca
2015-08-12
Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled-images (RS-images) created from real UAV-images (UAV-images; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based-image-analysis (OBIA) implemented for the early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high-spatial-resolution UAV-images captured at an altitude of 30 m, and that the RS-image data at altitudes of 60 and 100 m were able to provide weed cover and herbicide application maps as accurate as those obtained from UAV-images of real flights.
Shen, Chung-Wei; Chen, Yi-Hau
2018-03-13
We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.
Galvan, T L; Burkness, E C; Hutchison, W D
2007-06-01
To develop a practical integrated pest management (IPM) system for the multicolored Asian lady beetle, Harmonia axyridis (Pallas) (Coleoptera: Coccinellidae), in wine grapes, we assessed the spatial distribution of H. axyridis and developed eight sampling plans to estimate adult density or infestation level in grape clusters. We used 49 data sets collected from commercial vineyards in 2004 and 2005, in Minnesota and Wisconsin. Enumerative plans were developed using two precision levels (0.10 and 0.25); the six binomial plans reflected six unique action thresholds (3, 7, 12, 18, 22, and 31% of cluster samples infested with at least one H. axyridis). The spatial distribution of H. axyridis in wine grapes was aggregated, independent of cultivar and year, but it was more randomly distributed as mean density declined. The average sample number (ASN) for each sampling plan was determined using resampling software. For research purposes, an enumerative plan with a precision level of 0.10 (SE/X) resulted in a mean ASN of 546 clusters. For IPM applications, the enumerative plan with a precision level of 0.25 resulted in a mean ASN of 180 clusters. In contrast, the binomial plans resulted in much lower ASNs and provided high probabilities of arriving at correct "treat or no-treat" decisions, making these plans more efficient for IPM applications. For a tally threshold of one adult per cluster, the operating characteristic curves for the six action thresholds provided binomial sequential sampling plans with mean ASNs of only 19-26 clusters, and probabilities of making correct decisions between 83 and 96%. The benefits of the binomial sampling plans are discussed within the context of improving IPM programs for wine grapes.
0-2 Ma Paleomagnetic Field Behavior from Lava Flow Data Sets
NASA Astrophysics Data System (ADS)
Johnson, C. L.; Constable, C.; Tauxe, L.; Cromwell, G.
2010-12-01
The global time-averaged (TAF) structure of the paleomagnetic field and paleosecular variation (PSV) provide important constraints for numerical geodynamo simulations. Studies of the TAF have sought to characterize the nature of non-geocentric-axial dipole contributions to the field, in particular any such contributions that may be diagnostic of the influence of core-mantle boundary conditions on field generation. Similarly, geographical variations in PSV are of interest, in particular the long-standing debate concerning anomalously low VGP (virtual geomagnetic pole) dispersion at Hawaii. Here, we analyze updated global directional data sets from lava flows. We present global models for the time-averaged field for the Brunhes and Matuyama epochs. New TAF models based on lava flow directional data for the Brunhes show longitudinal structure. In particular, high latitude flux lobes are observed, constrained by improved data sets from N. and S. America, Japan, and New Zealand. Anomalous TAF structure is also observed in the region around Hawaii. At Hawaii, previous inferences of the anomalous TAF (large inclination anomaly) and PSV (low VGP dispersion) have been argued to be the result of temporal sampling bias toward young flows. We use resampling techniques to examine possible biases in the TAF and PSV incurred by uneven temporal sampling. Resampling of the paleodirectional data onto a uniform temporal distribution, incorporating site ages and age errors, leads to a TAF estimate for the Brunhes that is close to that reported for the actual data set, but an estimate of VGP dispersion that is increased relative to that obtained from the unevenly sampled data. Future investigations will incorporate the temporal resampling procedures into TAF modeling efforts, as well as recent progress in modeling the 0-2 Ma paleomagnetic dipole moment.
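A simplified sketch of temporal resampling that incorporates site ages and age errors; the normal age perturbation and the equal-weight time bins are assumptions made for illustration, not the study's exact procedure:

import numpy as np

def resample_uniform_in_time(ages, age_errors, values, n_draws=1000, n_bins=20, rng=None):
    """Illustrative temporal resampling: perturb site ages by their errors,
    then draw sites so that each time bin contributes equally, reducing the
    bias from uneven (e.g. young-flow-dominated) temporal sampling."""
    rng = np.random.default_rng() if rng is None else rng
    resampled = []
    for _ in range(n_draws):
        perturbed = rng.normal(ages, age_errors)          # one age realisation
        bins = np.linspace(perturbed.min(), perturbed.max(), n_bins + 1)
        which = np.digitize(perturbed, bins[1:-1])        # bin index per site
        bin_id = rng.integers(n_bins)                     # pick a bin uniformly
        members = np.flatnonzero(which == bin_id)
        if members.size:
            resampled.append(values[rng.choice(members)])
    return np.asarray(resampled)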
Brandon M. Collins; Richard G. Everett; Scott L. Stephens
2011-01-01
We re-sampled areas included in an unbiased 1911 timber inventory conducted by the U.S. Forest Service over a 4000 ha study area. Over half of the re-sampled area burned in relatively recent management- and lightning-ignited fires. This allowed for comparisons of both areas that have experienced recent fire and areas with no recent fire, to the same areas historically...
Quasi-Epipolar Resampling of High Resolution Satellite Stereo Imagery for Semi Global Matching
NASA Astrophysics Data System (ADS)
Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.
2015-12-01
Semi-global matching is a well-known stereo matching algorithm in the photogrammetry and computer vision communities. Epipolar images are assumed as input to this algorithm. The epipolar geometry of linear array scanners is not a straight line, as it is in the case of frame cameras. Traditional epipolar resampling algorithms demand rational polynomial coefficients (RPCs), a physical sensor model, or ground control points. In this paper we propose a new epipolar resampling method which works without the need for this information. In the proposed method, automatic feature extraction algorithms are employed to generate corresponding features for registering stereo pairs. The original images are also divided into small tiles. In this way, by omitting the need for extra information, the speed of the matching algorithm is increased and the memory requirements are decreased. Our experiments on a GeoEye-1 stereo pair captured over Qom city in Iran demonstrate that the epipolar images are generated with sub-pixel accuracy.
Janssen, Steve M J; Chessa, Antonio G; Murre, Jaap M J
2007-10-01
The reminiscence bump is the effect that people recall more personal events from early adulthood than from childhood or later adulthood. The bump has been examined extensively. However, the question of whether the bump is caused by differential encoding or re-sampling is still unanswered. To examine this issue, participants were asked to name their three favourite books, movies, and records. Furthermore, they were asked when they first encountered them. We compared the temporal distributions and found that they all showed recency effects and reminiscence bumps. The distribution of favourite books had the largest recency effect and the distribution of favourite records had the largest reminiscence bump. We can explain these results by the difference in rehearsal. Books are read two or three times, movies are watched more frequently, whereas records are listened to numerous times. The results suggest that differential encoding initially causes the reminiscence bump and that re-sampling increases the bump further.
Arctic Acoustic Workshop Proceedings, 14-15 February 1989.
1989-06-01
measurements. The measurements reported by Levine et al. (1987) were taken from current and temperature sensors moored in two triangular grids. The internal... requires a resampling of the data series on a uniform depth-time grid. Statistics calculated from the resampled series will be used to test numerical... from an isolated keel. Figure 2: 2-D Modeling Geometry - The model is based on a 2-D Cartesian grid with an axis of symmetry on the left. A pulsed
Assessing Uncertainties in Surface Water Security: A Probabilistic Multi-model Resampling approach
NASA Astrophysics Data System (ADS)
Rodrigues, D. B. B.
2015-12-01
Various uncertainties are involved in the representation of processes that characterize interactions between societal needs, ecosystem functioning, and hydrological conditions. Here, we develop an empirical uncertainty assessment of water security indicators that characterize scarcity and vulnerability, based on a multi-model and resampling framework. We consider several uncertainty sources including those related to: i) observed streamflow data; ii) hydrological model structure; iii) residual analysis; iv) the definition of Environmental Flow Requirement method; v) the definition of critical conditions for water provision; and vi) the critical demand imposed by human activities. We estimate the overall uncertainty coming from the hydrological model by means of a residual bootstrap resampling approach, and by uncertainty propagation through different methodological arrangements applied to a 291 km² agricultural basin within the Cantareira water supply system in Brazil. Together, the two-component hydrograph residual analysis and the block bootstrap resampling approach result in a more accurate and precise estimate of the uncertainty (95% confidence intervals) in the simulated time series. We then compare the uncertainty estimates associated with water security indicators using a multi-model framework and provided by each model uncertainty estimation approach. The method is general and can be easily extended forming the basis for meaningful support to end-users facing water resource challenges by enabling them to incorporate a viable uncertainty analysis into a robust decision making process.
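As an illustration of the residual bootstrap component, a moving-block bootstrap of model residuals can be written as follows; the block length and confidence level here are arbitrary placeholders, not the values used in the study:

import numpy as np

def block_bootstrap_residuals(residuals, block_len=30, n_boot=1000, rng=None):
    """Illustrative moving-block bootstrap: resample the model residual
    series in blocks (preserving short-term autocorrelation) to build
    confidence intervals around a simulated streamflow series."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(residuals)
    starts = np.arange(n - block_len + 1)
    samples = np.empty((n_boot, n))
    for b in range(n_boot):
        blocks = [residuals[s:s + block_len]
                  for s in rng.choice(starts, size=int(np.ceil(n / block_len)))]
        samples[b] = np.concatenate(blocks)[:n]
    return np.percentile(samples, [2.5, 97.5], axis=0)    # 95% residual bounds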
NASA Astrophysics Data System (ADS)
Clark, E.; Wood, A.; Nijssen, B.; Clark, M. P.
2017-12-01
Short- to medium-range (1- to 7-day) streamflow forecasts are important for flood control operations and in issuing potentially life-saving flood warnings. In the U.S., the National Weather Service River Forecast Centers (RFCs) issue such forecasts in real time, depending heavily on a manual data assimilation (DA) approach. Forecasters adjust model inputs, states, parameters and outputs based on experience and consideration of a range of supporting real-time information. Achieving high-quality forecasts from new automated, centralized forecast systems will depend critically on the adequacy of automated DA approaches to make analogous corrections to the forecasting system. Such approaches would further enable systematic evaluation of real-time flood forecasting methods and strategies. Toward this goal, we have implemented a real-time Sequential Importance Resampling particle filter (SIR-PF) approach to assimilate observed streamflow into simulated initial hydrologic conditions (states) for initializing ensemble flood forecasts. Assimilating streamflow alone in SIR-PF improves simulated streamflow and soil moisture during the model spin-up period prior to a forecast, with consequent benefits for forecasts. Nevertheless, it only consistently limits error in simulated snow water equivalent during the snowmelt season and in basins where precipitation falls primarily as snow. We examine how the simulated initial conditions with and without SIR-PF propagate into 1- to 7-day ensemble streamflow forecasts. Forecasts are evaluated in terms of reliability and skill over a 10-year period from 2005 to 2015. The focus of this analysis is on how interactions between hydroclimate and SIR-PF performance impact forecast skill. To this end, we examine forecasts for 5 hydroclimatically diverse basins in the western U.S. Some of these basins receive most of their precipitation as snow, others as rain. Some freeze throughout the mid-winter while others experience significant mid-winter melt events. We describe the methodology and present seasonal and inter-basin variations in DA-enhanced forecast skill.
NASA Astrophysics Data System (ADS)
Beckers, J.; Weerts, A.; Tijdeman, E.; Welles, E.; McManamon, A.
2013-12-01
To provide reliable and accurate seasonal streamflow forecasts for water resources management, several operational hydrologic agencies and hydropower companies around the world use the Extended Streamflow Prediction (ESP) procedure. The ESP in its original implementation does not accommodate any additional information that the forecaster may have about expected deviations from climatology in the near future. Several attempts have been conducted to improve the skill of the ESP forecast, especially for areas which are affected by teleconnections (e.g., ENSO, PDO), via selection (Hamlet and Lettenmaier, 1999) or weighting schemes (Werner et al., 2004; Wood and Lettenmaier, 2006; Najafi et al., 2012). A disadvantage of such schemes is that they lead to a reduction of the signal to noise ratio of the probabilistic forecast. To overcome this, we propose a resampling method conditional on climate indices to generate meteorological time series to be used in the ESP. The method can be used to generate a large number of meteorological ensemble members in order to improve the statistical properties of the ensemble. The effectiveness of the method was demonstrated in a real-time operational hydrologic seasonal forecast system for the Columbia River basin operated by the Bonneville Power Administration. The forecast skill of the k-nn resampler was tested against the original ESP for three basins at the long-range seasonal time scale. The BSS and CRPSS were used to compare the results to those of the original ESP method. Positive forecast skill scores were found for the resampler method conditioned on different indices for the prediction of spring peak flows in the Dworshak and Hungry Horse basins. For the Libby Dam basin, however, no improvement of skill was found. The proposed resampling method is a promising practical approach that can add skill to ESP forecasts at the seasonal time scale. Further improvement is possible by fine tuning the method and selecting the most informative climate indices for the region of interest.
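A minimal sketch of a k-nn resampler conditioned on a single climate index, using the common 1/rank weighting kernel; the index, k, and ensemble size are placeholders rather than the operational Columbia River settings:

import numpy as np

def knn_resample_years(index_now, historical_index, k=10, n_members=50, rng=None):
    """Illustrative k-nn resampler: pick the historical years whose climate
    index (e.g. an ENSO index) is closest to the current value and sample
    them with 1/rank weights, yielding year labels whose meteorology can
    then drive the ESP ensemble."""
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort(np.abs(historical_index - index_now))[:k]  # k nearest years
    ranks = np.arange(1, k + 1)
    weights = (1.0 / ranks) / np.sum(1.0 / ranks)                 # 1/rank weights
    return rng.choice(order, size=n_members, replace=True, p=weights)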
Cost-benefit analysis of sequential warning lights in nighttime work zone tapers.
DOT National Transportation Integrated Search
2011-06-01
Improving safety at nighttime work zones is important because of the extra visibility concerns. The deployment of sequential lights is an innovative method for improving driver recognition of lane closures and work zone tapers. Sequential lights are ...
Correcting Evaluation Bias of Relational Classifiers with Network Cross Validation
2010-01-01
classification algorithms: simple random resampling (RRS), equal-instance random resampling (ERS), and network cross-validation (NCV). The first two... NCV procedure that eliminates overlap between test sets altogether. The procedure samples k disjoint test sets that will be used for evaluation... propLabeled ∗ S) nodes from trainPool; inferenceSet = network − trainSet; F = F ∪ <trainSet, testSet, inferenceSet>; end for; output: F. NCV addresses
Modified Polar-Format Software for Processing SAR Data
NASA Technical Reports Server (NTRS)
Chen, Curtis
2003-01-01
HMPF is a computer program that implements a modified polar-format algorithm for processing data from spaceborne synthetic-aperture radar (SAR) systems. Unlike prior polar-format processing algorithms, this algorithm is based on the assumption that the radar signal wavefronts are spherical rather than planar. The algorithm provides for resampling of SAR pulse data from slant range to radial distance from the center of a reference sphere that is nominally the local Earth surface. Then, invoking the projection-slice theorem, the resampled pulse data are Fourier-transformed over radial distance, arranged in the wavenumber domain according to the acquisition geometry, resampled to a Cartesian grid, and inverse-Fourier-transformed. The result of this process is the focused SAR image. HMPF, and perhaps other programs that implement variants of the algorithm, may give better accuracy than do prior algorithms for processing strip-map SAR data from high altitudes and may give better phase preservation relative to prior polar-format algorithms for processing spotlight-mode SAR data.
NASA Astrophysics Data System (ADS)
Meadors, Grant David; Krishnan, Badri; Papa, Maria Alessandra; Whelan, John T.; Zhang, Yuanhao
2018-02-01
Continuous-wave (CW) gravitational waves (GWs) call for computationally-intensive methods. Low signal-to-noise ratio signals need templated searches with long coherent integration times and thus fine parameter-space resolution. Longer integration increases sensitivity. Low-mass x-ray binaries (LMXBs) such as Scorpius X-1 (Sco X-1) may emit accretion-driven CWs at strains reachable by current ground-based observatories. Binary orbital parameters induce phase modulation. This paper describes how resampling corrects binary and detector motion, yielding source-frame time series used for cross-correlation. Compared to the previous, detector-frame, templated cross-correlation method, used for Sco X-1 on data from the first Advanced LIGO observing run (O1), resampling is about 20 × faster in the costliest, most-sensitive frequency bands. Speed-up factors depend on integration time and search setup. The speed could be reinvested into longer integration with a forecast sensitivity gain, 20 to 125 Hz median, of approximately 51%, or from 20 to 250 Hz, 11%, given the same per-band cost and setup. This paper's timing model enables future setup optimization. Resampling scales well with longer integration, and at 10 × unoptimized cost could reach respectively 2.83 × and 2.75 × median sensitivities, limited by spin-wandering. Then an O1 search could yield a marginalized-polarization upper limit reaching torque-balance at 100 Hz. Frequencies from 40 to 140 Hz might be probed in equal observing time with 2 × improved detectors.
Recommended GIS Analysis Methods for Global Gridded Population Data
NASA Astrophysics Data System (ADS)
Frye, C. E.; Sorichetta, A.; Rose, A.
2017-12-01
When using geographic information systems (GIS) to analyze gridded, i.e., raster, population data, analysts need a detailed understanding of several factors that affect raster data processing, and thus, the accuracy of the results. Global raster data is most often provided in an unprojected state, usually in the WGS 1984 geographic coordinate system. Most GIS functions and tools evaluate data based on overlay relationships (area) or proximity (distance). Area and distance for global raster data can be calculated either directly using the various earth ellipsoids or after transforming the data to equal-area/equidistant projected coordinate systems to analyze all locations equally. However, unlike when projecting vector data, not all projected coordinate systems can support such analyses equally, and the process of transforming raster data from one coordinate space to another often results in unmanaged loss of data through a process called resampling. Resampling determines which values to use in the result dataset given an imperfect locational match in the input dataset(s). Cell size or resolution, registration, resampling method, statistical type, and whether the raster represents continuous or discrete information potentially influence the quality of the result. Gridded population data represent estimates of population in each raster cell, and this presentation will provide guidelines for accurately transforming population rasters for analysis in GIS. Resampling impacts the display of high resolution global gridded population data, and we will discuss how to properly handle pyramid creation using the Aggregate tool with the sum option to create overviews for mosaic datasets.
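A small illustration of sum-preserving aggregation of a population raster in plain NumPy (the same idea as the Aggregate tool with the sum option mentioned above, assuming an integer coarsening factor):

import numpy as np

def aggregate_population(grid, factor):
    """Illustrative sum-aggregation: coarsen a population raster by an
    integer factor while conserving total population. Totals are preserved
    because cell values are summed rather than averaged or interpolated."""
    rows, cols = grid.shape
    rows -= rows % factor          # trim edges so the grid tiles evenly
    cols -= cols % factor
    g = grid[:rows, :cols]
    return g.reshape(rows // factor, factor, cols // factor, factor).sum(axis=(1, 3))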
A comparison of resampling schemes for estimating model observer performance with small ensembles
NASA Astrophysics Data System (ADS)
Elshahaby, Fatma E. A.; Jha, Abhinav K.; Ghaly, Michael; Frey, Eric C.
2017-09-01
In objective assessment of image quality, an ensemble of images is used to compute the 1st and 2nd order statistics of the data. Often, only a finite number of images is available, leading to the issue of statistical variability in numerical observer performance. Resampling-based strategies can help overcome this issue. In this paper, we compared different combinations of resampling schemes (the leave-one-out (LOO) and the half-train/half-test (HT/HT)) and model observers (the conventional channelized Hotelling observer (CHO), channelized linear discriminant (CLD) and channelized quadratic discriminant). Observer performance was quantified by the area under the ROC curve (AUC). For a binary classification task and for each observer, the AUC value for an ensemble size of 2000 samples per class served as a gold standard for that observer. Results indicated that each observer yielded a different performance depending on the ensemble size and the resampling scheme. For a small ensemble size, the combination [CHO, HT/HT] had more accurate rankings than the combination [CHO, LOO]. Using the LOO scheme, the CLD and CHO had similar performance for large ensembles. However, the CLD outperformed the CHO and gave more accurate rankings for smaller ensembles. As the ensemble size decreased, the performance of the [CHO, LOO] combination seriously deteriorated as opposed to the [CLD, LOO] combination. Thus, it might be desirable to use the CLD with the LOO scheme when smaller ensemble size is available.
Continuity of the sequential product of sequential quantum effect algebras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Qiang, E-mail: leiqiang@hit.edu.cn; Su, Xiaochao, E-mail: hitswh@163.com; Wu, Junde, E-mail: wjd@zju.edu.cn
In order to study quantum measurement theory, the sequential product defined by A∘B = A^{1/2}BA^{1/2} for any two quantum effects A, B has been introduced. Physically motivated conditions require the sequential product to be continuous with respect to the strong operator topology. In this paper, we study the continuity of the sequential product A∘B = A^{1/2}BA^{1/2} with respect to other important topologies, such as the norm topology, weak operator topology, order topology, and interval topology.
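For concreteness, the sequential product can be computed numerically as a small sketch using SciPy's matrix square root; the example matrices below are arbitrary commuting effects chosen only for the sanity check:

import numpy as np
from scipy.linalg import sqrtm

def sequential_product(A, B):
    """Sequential product A∘B = A^{1/2} B A^{1/2} of two quantum effects
    (positive semidefinite operators with spectrum in [0, 1])."""
    rootA = sqrtm(A)
    return rootA @ B @ rootA

# sanity check with commuting effects, where A∘B should equal AB
A = np.diag([0.2, 0.7])
B = np.diag([0.5, 0.1])
assert np.allclose(sequential_product(A, B), A @ B)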
Dotsinsky, Ivan
2005-01-01
Background: Public access defibrillators (PADs) are now available for more efficient and rapid treatment of out-of-hospital sudden cardiac arrest. PADs are used normally by untrained people on the streets and in sports centers, airports, and other public areas. Therefore, automated detection of ventricular fibrillation, or its exclusion, is of high importance. A special case exists at railway stations, where electric power-line frequency interference is significant. Many countries, especially in Europe, use 16.7 Hz AC power, which introduces high level frequency-varying interference that may compromise fibrillation detection. Method: Moving signal averaging is often used for 50/60 Hz interference suppression if its effect on the ECG spectrum has little importance (no morphological analysis is performed). This approach may be also applied to the railway situation, if the interference frequency is continuously detected so as to synchronize the analog-to-digital conversion (ADC) for introducing variable inter-sample intervals. A better solution consists of rated ADC, software frequency measuring, internal irregular re-sampling according to the interference frequency, and a moving average over a constant sample number, followed by regular back re-sampling. Results: The proposed method leads to a total railway interference cancellation, together with suppression of inherent noise, while the peak amplitudes of some sharp complexes are reduced. This reduction has negligible effect on accurate fibrillation detection. Conclusion: The method is developed in the MATLAB environment and represents a useful tool for real time railway interference suppression.
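A simplified sketch of the re-sampling and moving-average chain described above, assuming the interference frequency has already been measured; the exact ADC synchronisation and frequency tracking of the original method are not reproduced:

import numpy as np

def suppress_railway_interference(ecg, fs, interference_freq):
    """Resample the ECG so that each interference period contains an integer
    number of samples, apply a one-period moving average (which cancels the
    interference), then resample back to the original uniform rate."""
    t = np.arange(len(ecg)) / fs
    n_per_period = int(round(fs / interference_freq))
    # irregular re-sampling: exactly n_per_period samples per interference period
    t_irregular = np.arange(len(ecg)) / (n_per_period * interference_freq)
    t_irregular = t_irregular[t_irregular <= t[-1]]
    x = np.interp(t_irregular, t, ecg)
    # moving average over one interference period
    kernel = np.ones(n_per_period) / n_per_period
    x_filtered = np.convolve(x, kernel, mode="same")
    # regular back re-sampling onto the original time base
    return np.interp(t, t_irregular, x_filtered)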
Reconstruction of dynamical systems from resampled point processes produced by neuron models
NASA Astrophysics Data System (ADS)
Pavlova, Olga N.; Pavlov, Alexey N.
2018-04-01
Characterization of dynamical features of chaotic oscillations from point processes is based on embedding theorems for non-uniformly sampled signals such as the sequences of interspike intervals (ISIs). This theoretical background confirms the ability of attractor reconstruction from ISIs generated by chaotically driven neuron models. The quality of such reconstruction depends on the available length of the analyzed dataset. We discuss how data resampling improves the reconstruction for short amounts of data and show that this effect is observed for different types of spike-generation mechanisms.
Building test data from real outbreaks for evaluating detection algorithms.
Texier, Gaetan; Jackson, Michael L; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve
2017-01-01
Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hastings Random Walk, Metropolis-Hastings Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals.
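As one concrete example among the listed resampling processes, an inverse-transform-sampling (ITSM) simulation after the homothetic stretch of a historical outbreak curve might look like the sketch below; the day grid and case totals are illustrative:

import numpy as np

def itsm_resample_outbreak(daily_cases, target_days, target_total, rng=None):
    """Illustrative ITSM simulation: stretch a historical outbreak curve to a
    target duration (the homothetic step), then draw the target number of
    cases from the resulting daily distribution to build a synthetic signal."""
    rng = np.random.default_rng() if rng is None else rng
    # homothetic transformation of the historical shape to the new duration
    old_days = np.linspace(0, 1, len(daily_cases))
    new_days = np.linspace(0, 1, target_days)
    shape = np.interp(new_days, old_days, daily_cases)
    p = shape / shape.sum()
    # inverse transform sampling of case days from the cumulative distribution
    cdf = np.cumsum(p)
    days = np.searchsorted(cdf, rng.random(target_total))
    days = np.minimum(days, target_days - 1)   # guard against float round-off
    return np.bincount(days, minlength=target_days)   # synthetic daily counts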
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, S; Vedantham, S; Karellas, A
Purpose: Detectors with hexagonal pixels require resampling to square pixels for distortion-free display of acquired images. In this work, the presampling modulation transfer function (MTF) of a hexagonal pixel array photon-counting CdTe detector for region-of-interest fluoroscopy was measured and the optimal square pixel size for resampling was determined. Methods: A 0.65 mm thick CdTe Schottky sensor capable of concurrently acquiring up to 3 energy-windowed images was operated in a single energy-window mode to include ≥10 keV photons. The detector had hexagonal pixels with apothem of 30 microns resulting in pixel spacing of 60 and 51.96 microns along the two orthogonal directions. Images of a tungsten edge test device acquired under IEC RQA5 conditions were double Hough transformed to identify the edge and numerically differentiated. The presampling MTF was determined from the finely sampled line spread function that accounted for the hexagonal sampling. The optimal square pixel size was determined in two ways; the square pixel size for which the aperture function evaluated at the Nyquist frequencies along the two orthogonal directions matched that from the hexagonal pixel aperture functions, and the square pixel size for which the mean absolute difference between the square and hexagonal aperture functions was minimized over all frequencies up to the Nyquist limit. Results: Evaluation of the aperture functions over the entire frequency range resulted in square pixel size of 53 microns with less than 2% difference from the hexagonal pixel. Evaluation of the aperture functions at Nyquist frequencies alone resulted in 54 microns square pixels. For the photon-counting CdTe detector and after resampling to 53 microns square pixels using quadratic interpolation, the presampling MTF at Nyquist frequency of 9.434 cycles/mm along the two directions were 0.501 and 0.507. Conclusion: Hexagonal pixel array photon-counting CdTe detector after resampling to square pixels provides high-resolution imaging suitable for fluoroscopy.
NASA Astrophysics Data System (ADS)
Adjorlolo, Clement; Cho, Moses A.; Mutanga, Onisimo; Ismail, Riyad
2012-01-01
Hyperspectral remote-sensing approaches are suitable for detecting differences in the phenology and composition of three-carbon (C3) and four-carbon (C4) grass species. However, the application of hyperspectral sensors to vegetation has been hampered by high dimensionality, spectral redundancy, and multicollinearity problems. In this experiment, we propose resampling hyperspectral data to wider wavelength intervals around a few band-centers that are sensitive to the biophysical and biochemical properties of C3 or C4 grass species. The approach accounts for an inherent property of vegetation spectral response: the asymmetrical nature of the inter-band correlations between a waveband and its shorter- and longer-wavelength neighbors. It involves constructing a curve of the weighting threshold of correlation (Pearson's r) between a chosen band-center and its neighbors, as a function of wavelength. In addition, the data were resampled to the band configurations of several multispectral sensors (ASTER, GeoEye-1, IKONOS, QuickBird, RapidEye, SPOT 5, and WorldView-2 satellites) for comparison with the proposed method. The resulting datasets were analyzed using the random forest algorithm. The proposed resampling method achieved improved classification accuracy (κ=0.82) compared to the resampled multispectral datasets (κ=0.78, 0.65, 0.62, 0.59, 0.65, 0.62, and 0.76, respectively). Overall, the results of this study demonstrate that spectral resolutions for C3 and C4 grasses can be optimized and controlled for high dimensionality and multicollinearity problems while still yielding high classification accuracies. The findings also provide a sound basis for programming wavebands for future sensors.
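A hedged sketch of correlation-guided band aggregation around a chosen band-center; growing the interval until Pearson's r falls below a fixed threshold is a simplification of the wavelength-dependent weighting threshold curve proposed in the paper:

import numpy as np

def resample_around_band_center(spectra, wavelengths, center_idx, r_threshold=0.9):
    """Average a band-center with the contiguous neighbours whose Pearson
    correlation with it exceeds a threshold, producing one wider synthetic
    band; the interval can grow asymmetrically on either side."""
    r = np.array([np.corrcoef(spectra[:, center_idx], spectra[:, j])[0, 1]
                  for j in range(spectra.shape[1])])
    lo = hi = center_idx
    while lo > 0 and r[lo - 1] >= r_threshold:
        lo -= 1
    while hi < len(r) - 1 and r[hi + 1] >= r_threshold:
        hi += 1
    band = spectra[:, lo:hi + 1].mean(axis=1)
    return band, (wavelengths[lo], wavelengths[hi])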
Adaptive topographic mass correction for satellite gravity and gravity gradient data
NASA Astrophysics Data System (ADS)
Holzrichter, Nils; Szwillus, Wolfgang; Götze, Hans-Jürgen
2014-05-01
Subsurface modelling with gravity data requires a reliable topographic mass correction. This mandatory step has been standard practice for decades. However, the original methods were developed for local terrestrial surveys. Therefore, these methods often include defaults such as a limited correction area of 167 km around an observation point, resampling of topography depending on the distance to the station, or disregard of the curvature of the Earth. New satellite gravity data (e.g. GOCE) can be used for large-scale lithospheric modelling with gravity data. The investigation areas can span thousands of kilometres. In addition, measurements are located at the flight height of the satellite (e.g. ~250 km for GOCE). The standard definitions of the correction area and of the grid spacing around an observation point were not developed for stations located at these heights or for areas of these dimensions. This calls for a re-evaluation of the defaults used for topographic correction. We developed an algorithm which resamples the topography using an adaptive approach. Instead of resampling topography depending on the distance to the station, the grids are resampled depending on their influence at the station. Therefore, the only value the user has to define is the desired accuracy of the topographic correction. It is not necessary to define the grid spacing or a limited correction area. Furthermore, the algorithm calculates the topographic mass response with spherically shaped polyhedral bodies. We show examples for local and global gravity datasets and compare the results of the topographic mass correction to existing approaches. We provide suggestions for how satellite gravity and gradient data should be corrected.
Hinault, Thomas; Lemaire, Patrick; Phillips, Natalie
2016-01-01
This study investigated age-related differences in electrophysiological signatures of sequential modulations of poorer strategy effects. Sequential modulations of poorer strategy effects refer to decreased poorer strategy effects (i.e., poorer performance when the cued strategy is not the best) on the current problem following poorer strategy problems compared to after better strategy problems. Analyses of electrophysiological (EEG) data revealed important age-related changes in the time, frequency, and coherence of brain activities underlying sequential modulations of poorer strategy effects. More specifically, sequential modulations of poorer strategy effects were associated with earlier and later time windows (i.e., between 200 and 550 ms and between 850 and 1250 ms). Event-related potentials (ERPs) also revealed an earlier onset in older adults, together with more anterior and less lateralized activations. Furthermore, sequential modulations of poorer strategy effects were associated with theta and alpha frequencies in young adults, while these modulations were found in the delta frequency and in theta inter-hemispheric coherence in older adults, consistent with qualitatively distinct patterns of brain activity. These findings have important implications for furthering our understanding of age-related differences and similarities in sequential modulations of cognitive control processes during arithmetic strategy execution. Copyright © 2015 Elsevier B.V. All rights reserved.
A CNN based Hybrid approach towards automatic image registration
NASA Astrophysics Data System (ADS)
Arun, Pattathal V.; Katiyar, Sunil K.
2013-06-01
Image registration is a key component of various image processing operations which involve the analysis of different image data sets. Automatic image registration domains have witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape and contextual information has limited the attainable accuracy. In this paper, we propose a framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as Vector Machines, Cellular Neural Networks (CNN), SIFT, coresets, and Cellular Automata. CNN has been found to be effective in improving the feature matching and resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are CNN-based SIFT feature point optimisation, adaptive resampling, and intelligent object modelling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. The system dynamically uses spectral and spatial information to represent contextual knowledge via a CNN-prolog approach. The methodology also proved effective in providing intelligent interpretation and adaptive resampling.
Approximate median regression for complex survey data with skewed response.
Fraser, Raphael André; Lipsitz, Stuart R; Sinha, Debajyoti; Fitzmaurice, Garrett M; Pan, Yi
2016-12-01
The ready availability of public-use data from various large national complex surveys has immense potential for the assessment of population characteristics using regression models. Complex surveys can be used to identify risk factors for important diseases such as cancer. Existing statistical methods based on estimating equations and/or utilizing resampling methods are often not valid with survey data due to complex survey design features, that is, stratification, multistage sampling, and weighting. In this article, we accommodate these design features in the analysis of highly skewed response variables arising from large complex surveys. Specifically, we propose a double-transform-both-sides (DTBS)-based estimating equations approach to estimate the median regression parameters of the highly skewed response; the DTBS approach applies the same Box-Cox type transformation twice to both the outcome and the regression function. The usual sandwich variance estimate can be used in our approach, whereas a resampling approach would be needed for a pseudo-likelihood based on minimizing absolute deviations (MAD). Furthermore, the approach is relatively robust to the true underlying distribution and has a much smaller mean square error than a MAD approach. The method is motivated by an analysis of laboratory data on urinary iodine (UI) concentration from the National Health and Nutrition Examination Survey. © 2016, The International Biometric Society.
Preprocessing the Nintendo Wii Board Signal to Derive More Accurate Descriptors of Statokinesigrams.
Audiffren, Julien; Contal, Emile
2016-08-01
During the past few years, the Nintendo Wii Balance Board (WBB) has been used in postural control research as an affordable but less reliable replacement for laboratory grade force platforms. However, the WBB suffers some limitations, such as a lower accuracy and an inconsistent sampling rate. In this study, we focus on the latter, namely the non-uniform acquisition frequency. We show that this problem, combined with the poor signal to noise ratio of the WBB, can drastically decrease the quality of the obtained information if not handled properly. We propose a new resampling method, Sliding Window Average with Relevance Interval Interpolation (SWARII), specifically designed with the WBB in mind, for which we provide an open source implementation. We compare it with several existing methods commonly used in postural control, both on synthetic and experimental data. The results show that some methods, such as linear and piecewise constant interpolations, should definitely be avoided, particularly when the resulting signal is differentiated, which is necessary to estimate speed, an important feature in postural control. Other methods, such as averaging on sliding windows or SWARII, perform significantly better on synthetic datasets, and produce results more similar to the laboratory-grade AMTI force plate (AFP) during experiments. Those methods should be preferred when resampling data collected from a WBB.
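A simplified sliding-window average resampler (not the full SWARII algorithm, whose relevance-interval interpolation is described in the paper and its open source implementation) illustrates the basic idea of averaging irregular WBB samples onto a uniform time base:

import numpy as np

def sliding_window_resample(t, x, fs_out, half_window):
    """For each uniform output time, average all irregularly spaced samples
    that fall inside a window centred on that time; falls back to the
    nearest sample if the window happens to be empty."""
    t_out = np.arange(t[0], t[-1], 1.0 / fs_out)
    x_out = np.empty_like(t_out)
    for i, tc in enumerate(t_out):
        inside = np.abs(t - tc) <= half_window
        x_out[i] = x[inside].mean() if inside.any() else x[np.argmin(np.abs(t - tc))]
    return t_out, x_out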
NASA Astrophysics Data System (ADS)
Pezzani, Carlos M.; Bossio, José M.; Castellino, Ariel M.; Bossio, Guillermo R.; De Angelo, Cristian H.
2017-02-01
Condition monitoring in permanent magnet synchronous machines has gained interest due to their increasing use in applications such as electric traction and power generation. Particularly in wind power generation, non-invasive condition monitoring techniques are of great importance. Usually, in such applications the access to the generator is complex and costly, while unexpected breakdowns result in high repair costs. This paper presents a technique that allows vibration analysis to be used for bearing fault detection in permanent magnet synchronous generators used in wind turbines. Given that in wind power applications the generator rotational speed may vary during normal operation, it is necessary to use special sampling techniques to apply spectral analysis of mechanical vibrations. In this work, a resampling technique based on order tracking without measuring the rotor position is proposed. To synchronize sampling with rotor position, the rotor position is estimated from the angle of the voltage vector, which is obtained from a phase-locked loop synchronized with the generator voltages. The proposed strategy is validated by laboratory experimental results obtained from a permanent magnet synchronous generator. Results with single point defects in the outer race of a bearing under variable speed and load conditions are presented.
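The angular resampling at the core of order tracking can be sketched as follows, assuming an unwrapped rotor-angle estimate (here it would come from the PLL-synchronised voltage-vector angle) is available at every vibration sample:

import numpy as np

def angular_resample(vibration, rotor_angle, samples_per_rev=256):
    """Interpolate the vibration signal onto uniform angle increments so that
    spectral lines appear at fixed orders despite speed variations;
    rotor_angle must be an unwrapped, monotonically increasing angle in rad."""
    total_revs = (rotor_angle[-1] - rotor_angle[0]) / (2 * np.pi)
    n_out = int(total_revs * samples_per_rev)
    uniform_angle = rotor_angle[0] + np.arange(n_out) * 2 * np.pi / samples_per_rev
    return np.interp(uniform_angle, rotor_angle, vibration)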
Simulating ensembles of source water quality using a K-nearest neighbor resampling approach.
Towler, Erin; Rajagopalan, Balaji; Seidel, Chad; Summers, R Scott
2009-03-01
Climatological, geological, and water management factors can cause significant variability in surface water quality. As drinking water quality standards become more stringent, the ability to quantify the variability of source water quality becomes more important for decision-making and planning in water treatment for regulatory compliance. However, the paucity of long-term water quality data makes it challenging to apply traditional simulation techniques. To overcome this limitation, we have developed and applied a robust nonparametric K-nearest neighbor (K-nn) bootstrap approach utilizing the United States Environmental Protection Agency's Information Collection Rule (ICR) data. In this technique, an appropriate "feature vector" is first formed from the best available explanatory variables. The nearest neighbors to the feature vector are identified from the ICR data and are resampled using a weight function. Repeating this process yields water quality ensembles and, consequently, the distribution and quantification of the variability. The main strengths of the approach are its flexibility, simplicity, and the ability to use a large amount of spatial data with limited temporal extent to provide water quality ensembles for any given location. We demonstrate this approach by applying it to simulate monthly ensembles of total organic carbon for two utilities in the U.S. with very different watersheds and to alkalinity and bromide at two other U.S. utilities.
Inferring microevolution from museum collections and resampling: lessons learned from Cepaea.
Ożgo, Małgorzata; Liew, Thor-Seng; Webster, Nicole B; Schilthuizen, Menno
2017-01-01
Natural history collections are an important and largely untapped source of long-term data on evolutionary changes in wild populations. Here, we utilize three large geo-referenced sets of samples of the common European land-snail Cepaea nemoralis stored in the collection of Naturalis Biodiversity Center in Leiden, the Netherlands. Resampling of these populations allowed us to gain insight into changes occurring over 95, 69, and 50 years. Cepaea nemoralis is polymorphic for the colour and banding of the shell; the mode of inheritance of these patterns is known, and the polymorphism is under both thermal and predatory selection. At two sites the general direction of changes was towards lighter shells (yellow and less heavily banded), which is consistent with predictions based on on-going climatic change. At one site no directional changes were detected. At all sites there were significant shifts in morph frequencies between years, and our study contributes to the recognition that short-term changes in the states of populations often exceed long-term trends. Our interpretation was limited by the few time points available in the studied collections. We therefore stress the need for natural history collections to routinely collect large samples of common species, to allow much more reliable hind-casting of evolutionary responses to environmental change.
Preprocessing the Nintendo Wii Board Signal to Derive More Accurate Descriptors of Statokinesigrams
Audiffren, Julien; Contal, Emile
2016-01-01
During the past few years, the Nintendo Wii Balance Board (WBB) has been used in postural control research as an affordable but less reliable replacement for laboratory-grade force platforms. However, the WBB suffers from some limitations, such as lower accuracy and an inconsistent sampling rate. In this study, we focus on the latter, namely the non-uniform acquisition frequency. We show that this problem, combined with the poor signal-to-noise ratio of the WBB, can drastically decrease the quality of the obtained information if not handled properly. We propose a new resampling method, Sliding Window Average with Relevance Interval Interpolation (SWARII), specifically designed with the WBB in mind, for which we provide an open-source implementation. We compare it with several existing methods commonly used in postural control, both on synthetic and experimental data. The results show that some methods, such as linear and piecewise constant interpolations, should definitely be avoided, particularly when the resulting signal is differentiated, which is necessary to estimate speed, an important feature in postural control. Other methods, such as averaging on sliding windows or SWARII, perform significantly better on the synthetic dataset and produce results more similar to those of the laboratory-grade AMTI force plate (AFP) during experiments. Those methods should be preferred when resampling data collected from a WBB. PMID:27490545
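The sketch below illustrates the general sliding-window-average idea for resampling a non-uniformly sampled signal onto a uniform grid. It is a simplified stand-in and does not reproduce SWARII's relevance-interval interpolation; window length, output rate, and the toy WBB-like trace are assumptions, and the authors' open-source implementation should be used for the actual method.

```python
import numpy as np

def sliding_window_resample(t, x, fs_out=100.0, window=0.05):
    """Resample a non-uniformly sampled signal onto a uniform grid by averaging
    all samples falling inside a window centred on each output instant."""
    t_out = np.arange(t[0], t[-1], 1.0 / fs_out)
    x_out = np.empty_like(t_out)
    for i, tc in enumerate(t_out):
        mask = np.abs(t - tc) <= window / 2.0
        # fall back to the nearest sample if the window happens to be empty
        x_out[i] = x[mask].mean() if mask.any() else x[np.argmin(np.abs(t - tc))]
    return t_out, x_out

# Hypothetical centre-of-pressure trace with jittered sampling times (~100 Hz)
t = np.cumsum(np.random.uniform(0.008, 0.012, 2000))
x = np.sin(2 * np.pi * 0.3 * t) + 0.05 * np.random.randn(t.size)
t_u, x_u = sliding_window_resample(t, x)
```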
ERIC Educational Resources Information Center
Horry, Ruth; Palmer, Matthew A.; Brewer, Neil
2012-01-01
Although the sequential lineup has been proposed as a means of protecting innocent suspects from mistaken identification, little is known about the importance of various aspects of the procedure. One potentially important detail is that witnesses should not know how many people are in the lineup. This is sometimes achieved by…
Shannon, Casey P; Chen, Virginia; Takhar, Mandeep; Hollander, Zsuzsanna; Balshaw, Robert; McManus, Bruce M; Tebbutt, Scott J; Sin, Don D; Ng, Raymond T
2016-11-14
Gene network inference (GNI) algorithms can be used to identify sets of coordinately expressed genes, termed network modules, from whole-transcriptome gene expression data. The identification of such modules has become a popular approach to systems biology, with important applications in translational research. Although diverse computational and statistical approaches have been devised to identify such modules, their performance behavior is still not fully understood, particularly in complex human tissues. Given human heterogeneity, one important question is how sensitive the outputs of these computational methods are to the input sample set, that is, their stability. A related question is how this sensitivity depends on the size of the sample set. We describe here the SABRE (Similarity Across Bootstrap RE-sampling) procedure for assessing the stability of gene network modules using a re-sampling strategy, introduce a novel criterion for identifying stable modules, and demonstrate the utility of this approach in a clinically relevant cohort, using two different gene network module discovery algorithms. The stability of modules increased as sample size increased, and stable modules were more likely to be replicated in larger sets of samples. Random modules derived from permuted gene expression data were consistently unstable, as assessed by SABRE, and provide a useful baseline value for our proposed stability criterion. Gene module sets identified by different algorithms varied with respect to their stability, as assessed by SABRE. Finally, stable modules were more readily annotated in various curated gene set databases. The SABRE procedure and proposed stability criterion may provide guidance when designing systems biology studies in complex human disease and tissues.
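The following is a minimal sketch of the general idea of assessing module stability under bootstrap re-sampling of samples. The seed-gene correlation "module detector" and the Jaccard similarity score are placeholders chosen for brevity; they are not the SABRE procedure or any specific GNI algorithm.

```python
import numpy as np

def detect_module(expr, seed_gene, top_n=50):
    """Placeholder module detection: the top_n genes most correlated with a seed gene.
    A real analysis would plug in a full GNI/module-discovery algorithm here."""
    r = np.corrcoef(expr.T)[seed_gene]
    return set(np.argsort(-np.abs(r))[:top_n])

def bootstrap_stability(expr, seed_gene, n_boot=100, seed=0):
    """Average Jaccard similarity between the module found on the full data set
    and the modules found on bootstrap re-samples of the samples (rows)."""
    rng = np.random.default_rng(seed)
    ref = detect_module(expr, seed_gene)
    sims = []
    for _ in range(n_boot):
        idx = rng.integers(0, expr.shape[0], expr.shape[0])   # resample samples with replacement
        mod = detect_module(expr[idx], seed_gene)
        sims.append(len(ref & mod) / len(ref | mod))
    return np.mean(sims)

expr = np.random.randn(120, 1000)      # hypothetical samples x genes expression matrix
print(bootstrap_stability(expr, seed_gene=0))
```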
Gleeson, Helena K; Wiley, Veronica; Wilcken, Bridget; Elliott, Elizabeth; Cowell, Christopher; Thonsett, Michael; Byrne, Geoffrey; Ambler, Geoffrey
2008-10-01
To assess the benefits and practicalities of setting up a newborn screening (NBS) program in Australia for congenital adrenal hyperplasia (CAH) through a 2-year pilot screening in ACT/NSW, comparing with case surveillance in other states. The pilot newborn screening occurred between 1/10/95 and 30/9/97 in NSW/ACT. Concurrently, case reporting for all new CAH cases occurred through the Australian Paediatric Surveillance Unit (APSU) across Australia. Details of clinical presentation, re-sampling and laboratory performance were assessed. In total, 185,854 newborn infants were screened for CAH in NSW/ACT. Concurrently, 30 cases of CAH were reported to the APSU, twelve of which were from NSW/ACT. CAH incidence was 1 in 15,488 births (screened population) vs 1 in 18,034 births (unscreened); the difference was not significant. Median age of initial notification was day 8, with confirmed diagnosis at 13 (5-23) days in the screened population vs 16 (7-37) days in the unscreened population (not significant). Of the 5 clinically unsuspected males in the screened population, one had mild salt-wasting by the time of notification, compared with salt-wasting crisis in all 6 males from the unscreened population. 96% of results were reported by day 10. Resampling was requested in 637 cases (0.4%), and the median re-sampling delay was 11 (0-28) days, with higher resample rates in males (p < 0.0001). The within-laboratory cost per clinically unsuspected case was A$42,717. There seems to be good justification for NBS for CAH based on clear prevention of salt-wasting crises and their potential long-term consequences. Also, prospects exist for enhancing screening performance.
NASA Astrophysics Data System (ADS)
Sargent, Steven D.; Greenman, Mark E.; Hansen, Scott M.
1998-11-01
The Spatial Infrared Imaging Telescope (SPIRIT III) is the primary sensor aboard the Midcourse Space Experiment (MSX), which was launched 24 April 1996. SPIRIT III included a Fourier transform spectrometer that collected terrestrial and celestial background phenomenology data for the Ballistic Missile Defense Organization (BMDO). This spectrometer used a helium-neon reference laser to measure the optical path difference (OPD) in the spectrometer and to command the analog-to-digital conversion of the infrared detector signals, thereby ensuring the data were sampled at precise increments of OPD. Spectrometer data must be sampled at accurate increments of OPD to optimize the spectral resolution and spectral position of the transformed spectra. Unfortunately, a failure in the power supply preregulator at the MSX spacecraft/SPIRIT III interface early in the mission forced the spectrometer to be operated without the reference laser until a failure investigation was completed. During this time data were collected in a backup mode that used an electronic clock to sample the data. These data were sampled evenly in time, and because the scan velocity varied, at nonuniform increments of OPD. The scan velocity profile depended on scan direction and scan length, and varied over time, greatly degrading the spectral resolution and spectral and radiometric accuracy of the measurements. The Convert software used to process the SPIRIT III data was modified to resample the clock-sampled data at even increments of OPD, using scan velocity profiles determined from ground and on-orbit data, greatly improving the quality of the clock-sampled data. This paper presents the resampling algorithm, the characterization of the scan velocity profiles, and the results of applying the resampling algorithm to on-orbit data.
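The essential step described above, mapping clock-sampled interferogram data onto even increments of OPD using a characterized scan velocity profile, can be sketched as below. This is only an illustration of the idea, assuming a known velocity profile evaluated at the sample times; it is not the Convert software's characterization or its interpolation scheme.

```python
import numpy as np

def resample_to_even_opd(signal, t, velocity, d_opd):
    """Map a time-sampled interferogram onto a uniform OPD grid.
    `velocity` is the (assumed known) scan velocity at the sample times."""
    # OPD is the time integral of scan velocity (trapezoidal cumulative sum)
    opd = np.concatenate(([0.0], np.cumsum(0.5 * (velocity[1:] + velocity[:-1]) * np.diff(t))))
    opd_grid = np.arange(opd[0], opd[-1], d_opd)
    # linear interpolation; a higher-order interpolator could be substituted
    return opd_grid, np.interp(opd_grid, opd, signal)

t = np.linspace(0.0, 1.0, 4096)
v = 1.0 + 0.1 * np.sin(2 * np.pi * 2 * t)            # hypothetical varying scan velocity
sig = np.cos(2 * np.pi * 40 * np.cumsum(v) * (t[1] - t[0]))   # toy interferogram fringe
opd_grid, sig_even = resample_to_even_opd(sig, t, v, d_opd=2.5e-4)
```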
Table-driven image transformation engine algorithm
NASA Astrophysics Data System (ADS)
Shichman, Marc
1993-04-01
A high-speed image transformation engine (ITE) was designed and a prototype built for use in a generic electronic light table and image perspective transformation application code. The ITE takes any linear transformation, breaks the transformation into two passes, and resamples the image appropriately for each pass. The system performance is achieved by driving the engine with a set of lookup tables computed at start-up time for the calculation of pixel output contributions. Anti-aliasing is done automatically in the image resampling process. Operations such as multiplications and trigonometric functions are minimized. This algorithm can be used for texture mapping, image perspective transformation, electronic light tables, and virtual reality.
On removing interpolation and resampling artifacts in rigid image registration.
Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R; Fischl, Bruce
2013-02-01
We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration.
Mattfeldt, Torsten
2011-04-01
Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
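As an illustration of the resampling-with-replacement methods reviewed above, the sketch below computes a percentile bootstrap confidence interval for a scalar summary statistic. The toy point-pattern summary (nearest-neighbor distances) is an assumption for demonstration only.

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a scalar statistic."""
    rng = np.random.default_rng(seed)
    boot = np.array([stat(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    return np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Hypothetical summary statistic from a planar point pattern (e.g. nearest-neighbor distances)
nn_distances = np.random.exponential(scale=1.0, size=200)
print(bootstrap_ci(nn_distances, stat=np.median))
```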
Beachler, Jason A; Krueger, Chad A; Johnson, Anthony E
This process improvement study sought to evaluate compliance with sequential compression devices among orthopaedic patients and to monitor any improvement in compliance following an educational intervention. All non-intensive care unit orthopaedic primary patients were evaluated at random times, and their compliance with sequential compression devices was monitored and recorded. Following a 2-week period of data collection, an educational flyer was displayed in every patient's room, and nursing staff held an in-service training event focusing on the importance of sequential compression device use in the surgical patient. Patients were then monitored, again at random times, and compliance was recorded. With the addition of a simple flyer and a single in-service on the importance of mechanical compression in the surgical patient, a significant improvement in compliance was documented at the authors' institution, from 28% to 59% (p < .0001).
Kent, Robert; Belitz, Kenneth; Fram, Miranda S.
2014-01-01
The Priority Basin Project (PBP) of the Groundwater Ambient Monitoring and Assessment (GAMA) Program was developed in response to the Groundwater Quality Monitoring Act of 2001 and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The GAMA-PBP began sampling, primarily of public-supply wells, in May 2004. By the end of February 2006, seven (of what would eventually be 35) study units had been sampled over a wide area of the State. Selected wells in these first seven study units were resampled for water quality from August 2007 to November 2008 as part of an assessment of temporal trends in water quality by the GAMA-PBP. The initial sampling was designed to provide a spatially unbiased assessment of the quality of raw groundwater used for public water supplies within the seven study units. In the seven study units, 462 wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study area. Wells selected this way are referred to as grid wells or status wells. Approximately 3 years after the initial sampling, 55 of these previously sampled status wells (approximately 10 percent in each study unit) were randomly selected for resampling. The seven resampled study units, the total number of status wells sampled for each study unit, and the number of these wells resampled for trends are as follows, in chronological order of sampling: San Diego Drainages (53 status wells, 7 trend wells), North San Francisco Bay (84, 10), Northern San Joaquin Basin (51, 5), Southern Sacramento Valley (67, 7), San Fernando–San Gabriel (35, 6), Monterey Bay and Salinas Valley Basins (91, 11), and Southeast San Joaquin Valley (83, 9). The groundwater samples were analyzed for a large number of synthetic organic constituents (volatile organic compounds [VOCs], pesticides, and pesticide degradates), constituents of special interest (perchlorate, N-nitrosodimethylamine [NDMA], and 1,2,3-trichloropropane [1,2,3-TCP]), and naturally occurring inorganic constituents (nutrients, major and minor ions, and trace elements). Naturally occurring isotopes (tritium, carbon-14, and stable isotopes of hydrogen and oxygen in water) also were measured to help identify processes affecting groundwater quality and the sources and ages of the sampled groundwater. Nearly 300 constituents and water-quality indicators were investigated. Quality-control samples (blanks, replicates, and samples for matrix spikes) were collected at 24 percent of the 55 status wells resampled for trends, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination was not a noticeable source of bias in the data for the groundwater samples. Differences between replicate samples were mostly within acceptable ranges, indicating acceptably low variability in analytical results. Matrix-spike recoveries were within the acceptable range (70 to 130 percent) for 75 percent of the compounds for which matrix spikes were collected. This study did not attempt to evaluate the quality of water delivered to consumers. After withdrawal, groundwater typically is treated, disinfected, and blended with other waters to maintain acceptable water quality. The benchmarks used in this report apply to treated water that is served to the consumer, not to untreated groundwater.
To provide some context for the results, however, concentrations of constituents measured in these groundwater samples were compared with benchmarks established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH). Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. Most constituents that were detected in groundwater samples from the trend wells were found at concentrations less than drinking-water benchmarks. Four VOCs—trichloroethene, tetrachloroethene, 1,2-dibromo-3-chloropropane, and methyl tert-butyl ether—were detected in one or more wells at concentrations greater than their health-based benchmarks, and six VOCs were detected in at least 10 percent of the samples during initial sampling or resampling of the trend wells. No pesticides were detected at concentrations near or greater than their health-based benchmarks. Three pesticide constituents—atrazine, deethylatrazine, and simazine—were detected in more than 10 percent of the trend-well samples during both sampling periods. Perchlorate, a constituent of special interest, was detected more frequently, and at greater concentrations during resampling than during initial sampling, but this may be due to a change in analytical method between the sampling periods, rather than to a change in groundwater quality. Another constituent of special interest, 1,2,3-TCP, was also detected more frequently during resampling than during initial sampling, but this pattern also may not reflect a change in groundwater quality. Samples from several of the wells where 1,2,3-TCP was detected by low-concentration-level analysis during resampling were not analyzed for 1,2,3-TCP using a low-level method during initial sampling. Most detections of nutrients and trace elements in samples from trend wells were less than health-based benchmarks during both sampling periods. Exceptions include nitrate, arsenic, boron, and vanadium, all detected at concentrations greater than their health-based benchmarks in at least one well during both sampling periods, and molybdenum, detected at concentrations greater than its health-based benchmark during resampling only. The isotopic ratios of oxygen and hydrogen in water and tritium and carbon-14 activities generally changed little between sampling periods, suggesting that the predominant sources and ages of groundwater in most trend wells were consistent between the sampling periods.
2017-01-01
Objective Anticipation of opponent actions, through the use of advanced (i.e., pre-event) kinematic information, can be trained using video-based temporal occlusion. Typically, this involves isolated opponent skills/shots presented as trials in a random order. However, two different areas of research, concerning representative task design and contextual (non-kinematic) information, suggest that this structure of practice restricts expert performance. The aim of this study was to examine the effect of a sequential structure of practice during video-based training of anticipatory behavior in tennis, as well as the transfer of these skills to the performance environment. Methods In a pre-practice-retention-transfer design, participants viewed life-sized video of tennis rallies across practice in either a sequential order (sequential group), in which participants were exposed to opponent skills/shots in the order they occur in the sport, or a non-sequential, random order (non-sequential group). Results In the video-based retention test, the sequential group was significantly more accurate in its anticipatory judgments than the non-sequential group when the retention condition replicated the sequential structure. In the non-sequential retention condition, the non-sequential group was more accurate than the sequential group. In the field-based transfer test, overall decision time was significantly faster in the sequential group than in the non-sequential group. Conclusion The findings highlight the benefits of a sequential structure of practice for the transfer of anticipatory behavior in tennis. We discuss the role of contextual information, and the importance of representative task design, for the testing and training of perceptual-cognitive skills in sport. PMID:28355263
Active surface model improvement by energy function optimization for 3D segmentation.
Azimifar, Zohreh; Mohaddesi, Mahsa
2015-04-01
This paper proposes an optimized and efficient active surface model obtained by improving the energy functions, the search method, the neighborhood definition, and the resampling criterion. Extracting an accurate surface of a desired object from a number of 3D images using active surface and deformable models plays an important role in computer vision, especially medical image processing. Different powerful segmentation algorithms have been suggested to address the limitations associated with model initialization, poor convergence to surface concavities, and slow convergence rate. This paper proposes a method to improve one of the strongest and most recent segmentation algorithms, namely the Decoupled Active Surface (DAS) method. We consider the gradient of a wavelet-edge-extracted image and local phase coherence as external energy to extract more information from the images, and we use a curvature integral as internal energy to focus on extraction of high-curvature regions. Similarly, we use resampling of points and a line search for point selection to improve the accuracy of the algorithm. We further employ an estimate of the desired object as an initialization for the active surface model. A number of tests and experiments have been done, and the results show improvements in extracted surface accuracy and computational time of the presented algorithm compared with the best and most recent active surface models. Copyright © 2015 Elsevier Ltd. All rights reserved.
Application of microarray analysis on computer cluster and cloud platforms.
Bernau, C; Boulesteix, A-L; Knaus, J
2013-01-01
Analysis of recent high-dimensional biological data tends to be computationally intensive, as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has led many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations, we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provides various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.
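The reason resampling analyses parallelize so readily is that each replicate is computationally independent. The sketch below illustrates this with Python's multiprocessing as a stand-in for the cluster or cloud back ends compared in the article; the toy "analysis" inside each replicate is an assumption, not the authors' microarray procedures.

```python
import numpy as np
from multiprocessing import Pool

def one_replicate(seed):
    """A single, independent resampling iteration (here: a bootstrap mean on toy data)."""
    rng = np.random.default_rng(seed)
    data = rng.normal(size=1000)                       # stands in for one basic statistical analysis
    boot = rng.choice(data, size=data.size, replace=True)
    return boot.mean()

if __name__ == "__main__":
    # Replicates share no state, so the same map() scales across local cores,
    # a computer cluster, or rented cloud instances alike.
    with Pool(processes=4) as pool:
        estimates = pool.map(one_replicate, range(200))
    print(np.std(estimates))
```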
Resampling to Address the Winner's Curse in Genetic Association Analysis of Time to Event
Poirier, Julia G.; Faye, Laura L.; Dimitromanolakis, Apostolos; Paterson, Andrew D.; Sun, Lei
2015-01-01
The “winner's curse” is a subtle and difficult problem in interpretation of genetic association, in which association estimates from large-scale gene detection studies are larger in magnitude than those from subsequent replication studies. This is practically important because use of a biased estimate from the original study will yield an underestimate of sample size requirements for replication, leaving the investigators with an underpowered study. Motivated by investigation of the genetics of type 1 diabetes complications in a longitudinal cohort of participants in the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) Genetics Study, we apply a bootstrap resampling method in analysis of time to nephropathy under a Cox proportional hazards model, examining 1,213 single-nucleotide polymorphisms (SNPs) in 201 candidate genes custom genotyped in 1,361 white probands. Among 15 top-ranked SNPs, bias reduction in log hazard ratio estimates ranges from 43.1% to 80.5%. In simulation studies based on the observed DCCT/EDIC genotype data, genome-wide bootstrap estimates for false-positive SNPs and for true-positive SNPs with low-to-moderate power are closer to the true values than uncorrected naïve estimates, but tend to overcorrect SNPs with high power. This bias-reduction technique is generally applicable for complex trait studies including quantitative, binary, and time-to-event traits. PMID:26411674
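The basic mechanics of a bootstrap correction for selection ("winner's curse") bias can be sketched as follows: in each bootstrap replicate the winner is re-selected and the inflation of its estimate relative to the original data is recorded. This is a simplified generic illustration using univariable linear regression, not the Cox-model procedure applied in the DCCT/EDIC analysis; all data and function names are hypothetical.

```python
import numpy as np

def fit_effects(y, X):
    """Per-SNP effect estimates from simple univariable linear regression.
    (A Cox model would be used for time-to-event data; this keeps the sketch self-contained.)"""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    return Xc.T @ yc / (Xc ** 2).sum(axis=0)

def winner_bias_correction(y, X, n_boot=500, seed=0):
    """Rough bootstrap estimate of the selection bias of the top-ranked SNP."""
    rng = np.random.default_rng(seed)
    beta = fit_effects(y, X)
    top = np.argmax(np.abs(beta))
    bias = []
    for _ in range(n_boot):
        idx = rng.integers(0, y.size, y.size)
        beta_b = fit_effects(y[idx], X[idx])
        top_b = np.argmax(np.abs(beta_b))            # re-select the "winner" in each replicate
        bias.append(beta_b[top_b] - beta[top_b])     # selection-induced inflation
    return beta[top], beta[top] - np.mean(bias)      # naive and bias-reduced estimates

y = np.random.randn(400)
X = np.random.binomial(2, 0.3, size=(400, 50)).astype(float)   # hypothetical genotypes
naive, corrected = winner_bias_correction(y, X)
print(naive, corrected)
```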
The GOLM database standard: a framework for time-series data management based on free software
NASA Astrophysics Data System (ADS)
Eichler, M.; Francke, T.; Kneis, D.; Reusser, D.
2009-04-01
Monitoring and modelling projects usually involve time series data originating from different sources. File formats, temporal resolution and meta-data documentation rarely adhere to a common standard. As a result, much effort is spent on converting, harmonizing, merging, checking, resampling and reformatting these data. Moreover, in work groups or over the course of time, these tasks tend to be carried out redundantly and repeatedly, especially when new data become available. The resulting duplication of data in various formats consumes additional resources. We propose a database structure and complementary scripts for facilitating these tasks. The GOLM (General Observation and Location Management) framework allows for import and storage of time series data of different types while assisting in meta-data documentation, plausibility checking and harmonization. The imported data can be visually inspected and their coverage among locations and variables may be visualized. Supplementary scripts provide options for data export for selected stations and variables and for resampling of the data to the desired temporal resolution. These tools can, for example, be used for generating model input files or reports. Since GOLM fully supports network access, the system can be used efficiently by distributed working groups accessing the same data over the internet. GOLM's database structure and the complementary scripts can easily be customized to specific needs. All involved software, such as MySQL, R, PHP and OpenOffice, as well as the scripts for building and using the database, including documentation, are free for download. GOLM was developed out of the practical requirements of the OPAQUE project. It has been tested and further refined in the ERANET-CRUE and SESAM projects, all of which used GOLM to manage meteorological, hydrological and/or water quality data.
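GOLM itself is built around MySQL, R and PHP, but the export-and-resample-to-target-resolution step it automates can be illustrated with a short, hypothetical pandas sketch (pandas is not part of the GOLM toolchain; the station data below are invented).

```python
import numpy as np
import pandas as pd

# Hypothetical 10-minute precipitation series from one station
idx = pd.date_range("2008-06-01", periods=6 * 24 * 30, freq="10min")
ts = pd.Series(np.random.gamma(0.2, 1.0, idx.size), index=idx, name="precip_mm")

hourly = ts.resample("1h").sum()       # aggregate to hourly totals, e.g. for model input files
daily_mean = ts.resample("1D").mean()  # or daily means for reports
print(hourly.head())
```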
Zenker, Sven
2010-08-01
Combining mechanistic mathematical models of physiology with quantitative observations using probabilistic inference may offer advantages over established approaches to computerized decision support in acute care medicine. Particle filters (PF) can perform such inference successively as data become available. The potential of PFs for real-time state estimation (SE) with a model of cardiovascular physiology is explored using parallel computers, and the ability to achieve joint state and parameter estimation (JSPE) given minimal prior knowledge is tested. A parallelized sequential importance sampling/resampling algorithm was implemented, and its scalability for the pure SE problem for a non-linear five-dimensional ODE model of the cardiovascular system was evaluated on a Cray XT3 using up to 1,024 cores. JSPE was implemented using a state augmentation approach with artificial stochastic evolution of the parameters. Its performance when simultaneously estimating the 5 states and 18 unknown parameters, given observations only of arterial pressure, central venous pressure, heart rate, and, optionally, cardiac output, was evaluated in a simulated bleeding/resuscitation scenario. SE was successful and scaled up to 1,024 cores with appropriate algorithm parametrization, with real-time-equivalent performance for up to 10 million particles. JSPE in the described underdetermined scenario achieved excellent reproduction of the observables and qualitative tracking of end-diastolic ventricular volumes and sympathetic nervous activity. However, only a subset of the posterior distributions of the parameters concentrated around the true values for parts of the estimated trajectories. The performance of parallelized PFs makes their application to complex mathematical models of physiology for the purpose of clinical data interpretation, prediction, and therapy optimization appear promising. JSPE in the described, extremely underdetermined scenario nevertheless extracted information of potential clinical relevance from the data in this simulation setting. However, fully satisfactory resolution of this problem when minimal prior knowledge about parameter values is available will require further methodological improvements, which are discussed.
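A minimal sequential importance sampling/resampling update for a generic state-space model is sketched below; the cardiovascular ODE model, parallelization across cores, and the state-augmentation for joint parameter estimation used in the study are all omitted, and the toy random-walk example is an assumption.

```python
import numpy as np

def sir_step(particles, weights, propagate, likelihood, y, rng):
    """One sequential importance sampling/resampling update: propagate particles,
    re-weight against the new observation y, then resample systematically."""
    particles = propagate(particles, rng)              # sample from the transition prior
    weights = weights * likelihood(y, particles)       # importance weighting
    weights /= weights.sum()
    positions = (rng.random() + np.arange(weights.size)) / weights.size
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), weights.size - 1)
    return particles[idx], np.full(weights.size, 1.0 / weights.size)

# Toy scalar example: random-walk state observed in Gaussian noise
rng = np.random.default_rng(1)
N = 2000
particles = rng.normal(0.0, 1.0, N)
weights = np.full(N, 1.0 / N)
propagate = lambda p, r: p + r.normal(0.0, 0.1, p.size)
likelihood = lambda y, p: np.exp(-0.5 * ((y - p) / 0.2) ** 2)
for y in np.cumsum(rng.normal(0.0, 0.1, 50)):          # synthetic observations
    particles, weights = sir_step(particles, weights, propagate, likelihood, y, rng)
print(particles.mean())
```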
Spatial versus sequential correlations for random access coding
NASA Astrophysics Data System (ADS)
Tavakoli, Armin; Marques, Breno; Pawłowski, Marcin; Bourennane, Mohamed
2016-03-01
Random access codes are important for a wide range of applications in quantum information. However, their implementation with quantum theory can be made in two very different ways: (i) by distributing data with strong spatial correlations violating a Bell inequality or (ii) using quantum communication channels to create stronger-than-classical sequential correlations between state preparation and measurement outcome. Here we study this duality of the quantum realization. We present a family of Bell inequalities tailored to the task at hand and study their quantum violations. Remarkably, we show that the use of spatial and sequential quantum correlations imposes different limitations on the performance of quantum random access codes: Sequential correlations can outperform spatial correlations. We discuss the physics behind the observed discrepancy between spatial and sequential quantum correlations.
Kent, Robert; Landon, Matthew K.
2016-01-01
From 2004 to 2011, the U.S. Geological Survey collected samples from 1686 wells across the State of California as part of the California State Water Resources Control Board's Groundwater Ambient Monitoring and Assessment (GAMA) Priority Basin Project (PBP). From 2007 to 2013, 224 of these wells were resampled to assess temporal trends in water quality. The samples were analyzed for 216 water-quality constituents, including inorganic and organic compounds as well as isotopic tracers. The resampled wells were grouped into five hydrogeologic zones. A nonparametric hypothesis test was used to test the differences between initial sampling and resampling results to evaluate possible step trends in water quality, statewide and within each hydrogeologic zone. The hypothesis tests were performed on the 79 constituents that were detected in more than 5% of the samples collected during either sampling period in at least one hydrogeologic zone. Step trends were detected for 17 constituents. Increasing trends were detected for alkalinity, aluminum, beryllium, boron, lithium, orthophosphate, perchlorate, sodium, and specific conductance. Decreasing trends were detected for atrazine, cobalt, dissolved oxygen, lead, nickel, pH, simazine, and tritium. Tritium was expected to decrease due to decreasing values in precipitation, and the detection of decreases indicates that the method is capable of resolving temporal trends.
Correlated sequential tunneling through a double barrier for interacting one-dimensional electrons
NASA Astrophysics Data System (ADS)
Thorwart, M.; Egger, R.; Grifoni, M.
2005-07-01
The problem of resonant tunneling through a quantum dot weakly coupled to spinless Tomonaga-Luttinger liquids has been studied. We compute the linear conductance due to sequential tunneling processes upon employing a master equation approach. Besides the previously used lowest-order golden rule rates describing uncorrelated sequential tunneling processes, we systematically include higher-order correlated sequential tunneling (CST) diagrams within the standard Weisskopf-Wigner approximation. We provide estimates for the parameter regions where CST effects can be important. Focusing mainly on the temperature dependence of the peak conductance, we discuss the relation of these findings to previous theoretical and experimental results.
Maximum likelihood resampling of noisy, spatially correlated data
NASA Astrophysics Data System (ADS)
Goff, J.; Jenkins, C.
2005-12-01
In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application, which runs the risk of erasing high-variability components of the field in addition to the noise components. We present here an alternative to filtering: a newly developed methodology for correcting noise in data by finding the "best" value given the data value, its uncertainty, and the data values and uncertainties at proximal locations. The motivating rationale is that data points that are close to each other in space cannot differ by "too much", where how much is "too much" is governed by the field correlation properties. Data with large uncertainties will frequently violate this condition, and in such cases need to be corrected, or "resampled." The best solution for resampling is determined by the maximum of the likelihood function defined by the intersection of two probability density functions (pdf): (1) the data pdf, with mean and variance determined by the data value and squared uncertainty, respectively, and (2) the geostatistical pdf, whose mean and variance are determined by the kriging algorithm applied to proximal data values. A Monte Carlo sampling of the data probability space eliminates non-uniqueness and weights the solution toward data values with lower uncertainties. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum likelihood resampling algorithm. The method is also applied to three marine geology/geophysics data examples: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) sidescan backscatter data from the Martha's Vineyard Coastal Observatory which are, as is typical for such data, affected by speckly noise.
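For Gaussian pdfs, the maximum of the product of the data pdf and the kriging-based geostatistical pdf has a closed form (a precision-weighted mean), which is the core of the single-point correction described above. The sketch below shows only that step, assuming Gaussian forms; the Monte Carlo sampling of the data probability space and the kriging itself are not reproduced.

```python
import numpy as np

def ml_resample_value(x_data, var_data, mu_krig, var_krig):
    """Most likely corrected value at a point: the mode of the product of
    N(x_data, var_data) and N(mu_krig, var_krig), i.e. a precision-weighted mean."""
    w_d, w_k = 1.0 / var_data, 1.0 / var_krig
    return (w_d * x_data + w_k * mu_krig) / (w_d + w_k)

# A noisy sounding of -42.0 m (large uncertainty) whose neighbors krige to -45.5 m
print(ml_resample_value(-42.0, var_data=4.0, mu_krig=-45.5, var_krig=0.5))
# pulled strongly toward the geostatistical estimate because its variance is smaller
```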
Ex Post Facto Monte Carlo Variance Reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booth, Thomas E.
The variance in Monte Carlo particle transport calculations is often dominated by a few particles whose importance increases manyfold on a single transport step. This paper describes a novel variance reduction method that uses a large importance change as a trigger to resample the offending transport step. That is, the method is employed only after (ex post facto) a random walk attempts a transport step that would otherwise introduce a large variance in the calculation. Improvements in two Monte Carlo transport calculations are demonstrated empirically using an ex post facto method. First, the method is shown to reduce the variance in a penetration problem with a cross-section window. Second, the method empirically appears to modify a point detector estimator from an infinite variance estimator to a finite variance estimator.
Significance of the impact of motion compensation on the variability of PET image features
NASA Astrophysics Data System (ADS)
Carles, M.; Bach, T.; Torres-Espallardo, I.; Baltas, D.; Nestle, U.; Martí-Bonmatí, L.
2018-03-01
In lung cancer, quantification by positron emission tomography/computed tomography (PET/CT) imaging presents challenges due to respiratory movement. Our primary aim was to study the impact of motion compensation implied by retrospectively gated (4D)-PET/CT on the variability of PET quantitative parameters. Its significance was evaluated by comparison with the variability due to (i) the voxel size in image reconstruction and (ii) the voxel size in image post-resampling. The method employed for feature extraction was chosen based on the analysis of (i) the effect of discretization of the standardized uptake value (SUV) on complementarity between texture features (TF) and conventional indices, (ii) the impact of the segmentation method on the variability of image features, and (iii) the variability of image features across the time-frame of 4D-PET. Thirty-one PET features were involved. Three SUV discretization methods were applied: a constant width (SUV resolution) of the resampling bin (method RW), a constant number of bins (method RN) and RN on the image obtained after histogram equalization (method EqRN). The segmentation approaches evaluated were 40% of SUVmax and the contrast oriented algorithm (COA). Parameters derived from 4D-PET images were compared with values derived from the PET image obtained for (i) the static protocol used in our clinical routine (3D) and (ii) the 3D image post-resampled to the voxel size of the 4D image and PET image derived after modifying the reconstruction of the 3D image to comprise the voxel size of the 4D image. Results showed that TF complementarity with conventional indices was sensitive to the SUV discretization method. In the comparison of COA and 40% contours, despite the values not being interchangeable, all image features showed strong linear correlations (r > 0.91, p ≪ 0.001). Across the time-frames of 4D-PET, all image features followed a normal distribution in most patients. For our patient cohort, the compensation of tumor motion did not have a significant impact on the quantitative PET parameters. The variability of PET parameters due to voxel size in image reconstruction was more significant than variability due to voxel size in image post-resampling. In conclusion, most of the parameters (apart from the contrast of neighborhood matrix) were robust to the motion compensation implied by 4D-PET/CT. The impact on parameter variability due to the voxel size in image reconstruction and in image post-resampling could not be assumed to be equivalent.
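The two basic SUV discretization schemes compared above, a constant bin width (RW) versus a constant number of bins over the lesion's SUV range (RN), can be written compactly as below. The bin width, bin count, and the random "tumor" volume are assumptions; the histogram-equalized variant (EqRN) and the texture-feature computation itself are omitted.

```python
import numpy as np

def discretize_rw(suv, bin_width=0.5):
    """Method RW: constant bin width, i.e. a fixed SUV resolution."""
    return np.floor(suv / bin_width).astype(int) + 1

def discretize_rn(suv, n_bins=64):
    """Method RN: constant number of bins over the lesion's SUV range."""
    lo, hi = suv.min(), suv.max()
    return np.clip(np.ceil(n_bins * (suv - lo) / (hi - lo)), 1, n_bins).astype(int)

suv_roi = np.random.gamma(3.0, 1.2, size=(32, 32, 16))   # hypothetical tumor SUV voxels
print(discretize_rw(suv_roi).max(), discretize_rn(suv_roi).max())
```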
Porto, Paolo; Walling, Desmond E; Cogliandro, Vanessa; Callegari, Giovanni
2016-11-01
In recent years, the fallout radionuclides caesium-137 (137Cs) and unsupported lead-210 (210Pbex) have been successfully used to document rates of soil erosion in many areas of the world, as an alternative to conventional measurements. By virtue of their different half-lives, these two radionuclides are capable of providing information related to different time windows. 137Cs measurements are commonly used to generate information on mean annual erosion rates over the past ca. 50-60 years, whereas 210Pbex measurements are able to provide information relating to a longer period of up to ca. 100 years. However, the time-integrated nature of the estimates of soil redistribution provided by 137Cs and 210Pbex measurements can be seen as a limitation, particularly when viewed in the context of global change and interest in the response of soil redistribution rates to contemporary climate change and land use change. Re-sampling techniques used with these two fallout radionuclides potentially provide a basis for providing information on recent changes in soil redistribution rates. By virtue of the effectively continuous fallout input of 210Pb, the response of the 210Pbex inventory of a soil profile to changing soil redistribution rates, and thus its potential for use with the re-sampling approach, differs from that of 137Cs. Its greater sensitivity to recent changes in soil redistribution rates suggests that 210Pbex may have advantages over 137Cs for use in the re-sampling approach. The potential for using 210Pbex measurements in re-sampling studies is explored further in this contribution. Attention focuses on a small (1.38 ha) forested catchment in southern Italy. The catchment was originally sampled for 210Pbex measurements in 2001, and equivalent samples were collected from points very close to the original sampling points again in 2013. This made it possible to compare the estimates of mean annual erosion related to two different time windows. This comparison suggests that mean annual rates of net soil loss had increased during the period between the two sampling campaigns and that this increase was associated with a shift to an increased sediment delivery ratio. This change was consistent with independent information on likely changes in the sediment response of the study catchment provided by the available records of annual sediment yield and changes in the annual rainfall documented for the local area. Copyright © 2016 Elsevier Ltd. All rights reserved.
Adaptive Resampling Particle Filters for GPS Carrier-Phase Navigation and Collision Avoidance System
NASA Astrophysics Data System (ADS)
Hwang, Soon Sik
This dissertation addresses three problems: 1) an adaptive resampling technique (ART) for particle filters, 2) precise relative positioning using Global Positioning System (GPS) Carrier-Phase (CP) measurements, applied to the nonlinear integer resolution problem for GPS CP navigation using particle filters, and 3) a collision detection system based on GPS CP broadcasts. First, Monte Carlo filters, called Particle Filters (PF), are widely used where the system is non-linear and non-Gaussian. In real-time applications, their estimation accuracies and efficiencies are significantly affected by the number of particles and by the scheduling of relocating weights and samples, the so-called resampling step. In this dissertation, the appropriate number of particles is estimated adaptively such that the errors of the sample mean and variance stay within bounds. These bounds are given by the confidence interval of a normal probability distribution for a multi-variate state. Two required sample numbers, one maintaining the mean error and one the variance error within the bounds, are derived. The time of resampling is determined when the required sample number for the variance error crosses the required sample number for the mean error. Second, the PF using GPS CP measurements with adaptive resampling is applied to precise relative navigation between two GPS antennas. In order to make use of CP measurements for navigation, the unknown number of cycles between GPS antennas, the so-called integer ambiguity, must be resolved. The PF is applied to this integer ambiguity resolution problem, where estimation of the relative navigation states involves nonlinear observations and a nonlinear dynamics equation. Using the PF, the probability density function of the states is estimated by sampling from the position and velocity space, and the integer ambiguities are resolved without the usual hypothesis tests used to search for the integer ambiguity. The ART manages the number of position samples and the frequency of the resampling step for real-time kinematic GPS navigation. The experimental results demonstrate the performance of the ART and the insensitivity of the proposed approach to GPS CP cycle slips. Third, GPS has great potential for the development of new collision avoidance systems and is being considered for the next-generation Traffic alert and Collision Avoidance System (TCAS). The current TCAS equipment is capable of broadcasting GPS code information to nearby airplanes, and collision avoidance systems using navigation information based on GPS code have been studied by researchers. In this dissertation, an aircraft collision detection system using GPS CP information is addressed. The PF with position samples is employed for the CP-based relative position estimation problem, and the same algorithm can be used to determine the vehicle attitude if multiple GPS antennas are used. For a reliable and enhanced collision avoidance system, three-dimensional trajectories are projected using the estimates of the relative position, velocity, and attitude. It is shown that the performance of the GPS CP-based collision detection algorithm meets the accuracy requirements for a precision approach for auto-landing, with significantly fewer unnecessary collision false alarms and no missed alarms.
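The idea of adapting when (and with how many particles) to resample can be illustrated with the widely used effective-sample-size rule shown below. Note that this is a swapped-in, standard criterion for illustration only; it is not the confidence-interval-based ART criterion developed in the dissertation.

```python
import numpy as np

def effective_sample_size(weights):
    """Standard ESS criterion: 1 / sum(w_i^2) for normalized weights."""
    return 1.0 / np.sum(weights ** 2)

def maybe_resample(particles, weights, rng, threshold=0.5):
    """Resample only when the ESS drops below a fraction of the particle count,
    so the resampling schedule adapts to how degenerate the weights have become."""
    n = weights.size
    if effective_sample_size(weights) < threshold * n:
        idx = rng.choice(n, size=n, replace=True, p=weights)
        return particles[idx], np.full(n, 1.0 / n)
    return particles, weights

rng = np.random.default_rng(0)
particles = rng.normal(size=5000)
weights = rng.random(5000)
weights /= weights.sum()
particles, weights = maybe_resample(particles, weights, rng)
```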
The Bootstrap, the Jackknife, and the Randomization Test: A Sampling Taxonomy.
Rodgers, J L
1999-10-01
A simple sampling taxonomy is defined that shows the differences between and relationships among the bootstrap, the jackknife, and the randomization test. Each method has as its goal the creation of an empirical sampling distribution that can be used to test statistical hypotheses, estimate standard errors, and/or create confidence intervals. Distinctions between the methods can be made based on the sampling approach (with replacement versus without replacement) and the sample size (replacing the whole original sample versus replacing a subset of the original sample). The taxonomy is useful for teaching the goals and purposes of resampling schemes. An extension of the taxonomy implies other possible resampling approaches that have not previously been considered. Univariate and multivariate examples are presented.
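The taxonomy above distinguishes the three schemes by whether sampling is with replacement and whether the whole sample or a subset is replaced. A compact illustration of all three on a toy two-group mean difference follows; the data and statistic are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=30)
y = rng.normal(size=30)
obs = x.mean() - y.mean()                          # statistic of interest

# Bootstrap: with replacement, the whole sample is replaced
boot = [rng.choice(x, x.size, replace=True).mean()
        - rng.choice(y, y.size, replace=True).mean() for _ in range(2000)]

# Jackknife: without replacement, one observation is left out at a time
jack = [np.delete(x, i).mean() - y.mean() for i in range(x.size)]

# Randomization test: without replacement, group labels are permuted
pooled = np.concatenate([x, y])
perm = []
for _ in range(2000):
    p = rng.permutation(pooled)
    perm.append(p[:x.size].mean() - p[x.size:].mean())
p_value = np.mean(np.abs(perm) >= abs(obs))

print(obs, np.std(boot), p_value)                  # estimate, bootstrap SE, randomization p-value
```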
ERIC Educational Resources Information Center
Shaughnessy, M.; And Others
Numerous cognitive psychologists have validated the hypothesis, originally advanced by the Russian physician, A. Luria, that different individuals process information in two distinctly different manners: simultaneously and sequentially. The importance of recognizing the existence of these two distinct styles of processing information and selecting…
Atmospheric Science Data Center
2016-08-22
MISBR MISR Browse Data: Color browse image of the Ellipsoid product for each camera resampled to 2.2 km resolution. ...
Sequential biases in accumulating evidence
Huggins, Richard; Dogo, Samson Henry
2015-01-01
Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase and to subsequently design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed ‘sequential decision bias’ and ‘sequential design bias’, are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed‐effect and the random‐effects models of meta‐analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence‐based approaches to the development of science. © 2015 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd. PMID:26626562
NASA Astrophysics Data System (ADS)
Jannati, Mojtaba; Valadan Zoej, Mohammad Javad; Mokhtarzade, Mehdi
2018-03-01
This paper presents a novel approach to epipolar resampling of cross-track linear pushbroom imagery using the orbital parameters model (OPM). The backbone of the proposed method is modification of the attitude parameters of linear-array stereo imagery in such a way as to parallelize the approximate conjugate epipolar lines (ACELs) with the instantaneous base line (IBL) of the conjugate image points (CIPs). Afterward, a complementary rotation is applied in order to parallelize all the ACELs throughout the stereo imagery. The newly estimated attitude parameters are evaluated based on the direction of the IBL and the ACELs. Due to the spatial and temporal variability of the IBL (respectively, changes in the column and row numbers of the CIPs) and the nonparallel nature of the epipolar lines in stereo linear images, polynomials in both the column and row numbers of the CIPs are used to model the new attitude parameters. As the instantaneous position of the sensors remains fixed, a digital elevation model (DEM) of the area of interest is not required in the resampling process. According to the experimental results obtained from two pairs of SPOT and RapidEye stereo imagery over terrain with high elevation relief, the average absolute values of the remaining vertical parallaxes of the CIPs in the normalized images were 0.19 and 0.28 pixels, respectively, which confirms the high accuracy and applicability of the proposed method.
Backward Registration Based Aspect Ratio Similarity (ARS) for Image Retargeting Quality Assessment.
Zhang, Yabin; Fang, Yuming; Lin, Weisi; Zhang, Xinfeng; Li, Leida
2016-06-28
During the past few years, various kinds of content-aware image retargeting operators have been proposed for image resizing. However, the lack of effective objective retargeting quality assessment metrics limits the further development of image retargeting techniques. Different from traditional Image Quality Assessment (IQA) metrics, the quality degradation during image retargeting is caused by artificial retargeting modifications, and the difficulty for Image Retargeting Quality Assessment (IRQA) lies in the alteration of the image resolution and content, which makes it impossible to directly evaluate the quality degradation as in traditional IQA. In this paper, we interpret image retargeting in a unified framework of resampling grid generation and forward resampling. We show that geometric change estimation is an efficient way to clarify the relationship between the images. We formulate the geometric change estimation as a backward registration problem with a Markov Random Field (MRF) and provide an effective solution. The estimated geometric change provides evidence of how the original image is resized into the target image. Under the guidance of the geometric change, we develop a novel Aspect Ratio Similarity (ARS) metric to evaluate the visual quality of retargeted images by exploiting local block changes with a visual importance pooling strategy. Experimental results on the publicly available MIT RetargetMe and CUHK datasets demonstrate that the proposed ARS can predict the visual quality of retargeted images more accurately than state-of-the-art IRQA metrics.
A Sequential Analysis of Parent-Child Interactions in Anxious and Nonanxious Families
ERIC Educational Resources Information Center
Williams, Sarah R.; Kertz, Sarah J.; Schrock, Matthew D.; Woodruff-Borden, Janet
2012-01-01
Although theoretical work has suggested that reciprocal behavior patterns between parent and child may be important in the development of childhood anxiety, most empirical work has failed to consider the bidirectional nature of interactions. The current study sought to address this limitation by utilizing a sequential approach to exploring…
USDA-ARS?s Scientific Manuscript database
Effective Salmonella control in broilers is important from the standpoint of both consumer protection and industry viability. We investigated associations between Salmonella recovery from different sample types collected at sequential stages of one grow-out from the broiler flock and production env...
Investigating Stage-Sequential Growth Mixture Models with Multiphase Longitudinal Data
ERIC Educational Resources Information Center
Kim, Su-Young; Kim, Jee-Seon
2012-01-01
This article investigates three types of stage-sequential growth mixture models in the structural equation modeling framework for the analysis of multiple-phase longitudinal data. These models can be important tools for situations in which a single-phase growth mixture model produces distorted results and can allow researchers to better understand…
Assessment of reliability and safety of a manufacturing system with sequential failures is an important issue in industry, since the reliability and safety of the system depend not only on all failed states of system components, but also on the sequence of occurrences of those...
Alternatives to the sequential lineup: the importance of controlling the pictures.
Lindsay, R C; Bellinger, K
1999-06-01
Because sequential lineups reduce false-positive choices, their use has been recommended (R. C. L. Lindsay, 1999; R. C. L. Lindsay & G. L. Wells, 1985). Blind testing is included in the recommended procedures. Police, concerned about blind testing, devised alternative procedures, including self-administered sequential lineups, to reduce use of relative judgments (G. L. Wells, 1984) while permitting the investigating officer to conduct the procedure. Identification data from undergraduates exposed to a staged crime (N = 165) demonstrated that 4 alternative identification procedures tested were less effective than the original sequential lineup. Allowing witnesses to control the photographs resulted in higher rates of false-positive identification. Self-reports of using relative judgments were shown to be postdictive of decision accuracy.
GPU and APU computations of Finite Time Lyapunov Exponent fields
NASA Astrophysics Data System (ADS)
Conti, Christian; Rossinelli, Diego; Koumoutsakos, Petros
2012-03-01
We present GPU and APU accelerated computations of Finite-Time Lyapunov Exponent (FTLE) fields. The calculation of FTLEs is a computationally intensive process, as in order to obtain the sharp ridges associated with the Lagrangian Coherent Structures an extensive resampling of the flow field is required. The computational performance of this resampling is limited by the memory bandwidth of the underlying computer architecture. The present technique harnesses data-parallel execution of many-core architectures and relies on fast and accurate evaluations of moment conserving functions for the mesh to particle interpolations. We demonstrate how the computation of FTLEs can be efficiently performed on a GPU and on an APU through OpenCL and we report over one order of magnitude improvements over multi-threaded executions in FTLE computations of bluff body flows.
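A minimal sketch of the standard FTLE computation from a flow map is shown below (plain NumPy on the CPU, not the GPU/APU implementation described above); the flow-map arrays and grid spacings are assumed to come from a separate particle advection step.

```python
import numpy as np

def ftle_field(flow_x, flow_y, dx, dy, T):
    """FTLE from the flow map of a particle grid advected for time T.

    flow_x, flow_y : final x/y positions of particles seeded on a regular
                     grid with spacings dx, dy (2-D arrays, indexed [y, x]).
    """
    # Jacobian of the flow map by finite differences on the seeding grid
    dphix_dy, dphix_dx = np.gradient(flow_x, dy, dx)
    dphiy_dy, dphiy_dx = np.gradient(flow_y, dy, dx)
    ftle = np.zeros_like(flow_x)
    for i in range(flow_x.shape[0]):
        for j in range(flow_x.shape[1]):
            J = np.array([[dphix_dx[i, j], dphix_dy[i, j]],
                          [dphiy_dx[i, j], dphiy_dy[i, j]]])
            C = J.T @ J                      # Cauchy-Green deformation tensor
            lam_max = np.linalg.eigvalsh(C)[-1]
            ftle[i, j] = np.log(np.sqrt(lam_max)) / abs(T)
    return ftle

# toy usage: a simple shear flow advected for T = 1
y, x = np.mgrid[0:1:50j, 0:1:50j]
flow_x, flow_y = x + 0.5 * y, y            # x' = x + 0.5*y*T with T = 1
print(ftle_field(flow_x, flow_y, dx=x[0, 1] - x[0, 0], dy=y[1, 0] - y[0, 0], T=1.0).max())
```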
NASA Technical Reports Server (NTRS)
Lawton, Pat
2004-01-01
The objective of this work was to support the design of improved IUE NEWSIPS high-dispersion extraction algorithms. Specifically, the work evaluated the use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, evaluated various extraction methods, and designed algorithms for the evaluation of IUE high-dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
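For illustration, a Voigt profile (Gaussian core, Lorentzian wings) can be fitted to a one-dimensional cross-dispersion cut with standard SciPy tools; this is only a generic sketch, not the NEWSIPS extraction code, and the amplitude/centre values are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def voigt(x, amp, center, sigma, gamma):
    # Gaussian core (sigma) plus Lorentzian wings (gamma).
    return amp * voigt_profile(x - center, sigma, gamma)

# synthetic cross-dispersion profile: a noisy Voigt line
x = np.linspace(-10, 10, 201)
y = voigt(x, 50.0, 0.3, 1.2, 0.8) + np.random.normal(0, 0.2, x.size)

popt, pcov = curve_fit(voigt, x, y, p0=[40.0, 0.0, 1.0, 1.0])
amp, center, sigma, gamma = popt
print(f"fitted sigma={sigma:.2f}, gamma={gamma:.2f}")
```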
Fast Computation of the Two-Point Correlation Function in the Age of Big Data
NASA Astrophysics Data System (ADS)
Pellegrino, Andrew; Timlin, John
2018-01-01
We present a new code which quickly computes the two-point correlation function for large sets of astronomical data. This code combines the ease of use of Python with the speed of parallel shared libraries written in C. We include the capability to compute the auto- and cross-correlation statistics, and allow the user to calculate the three-dimensional and angular correlation functions. Additionally, the code automatically divides the user-provided sky masks into contiguous subsamples of similar size, using the HEALPix pixelization scheme, for the purpose of resampling. Errors are computed using jackknife and bootstrap resampling in a way that adds negligible extra runtime, even with many subsamples. We demonstrate comparable speed with other clustering codes, and code accuracy compared to known and analytic results.
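A minimal sketch of the delete-one jackknife error estimate over spatial subsamples is shown below; it assumes per-subregion estimates of the statistic (for example, a correlation-function value in one angular bin) are already available, and it is not the code described in the abstract.

```python
import numpy as np

def jackknife_error(values):
    """Delete-one jackknife error of the mean of per-subsample estimates.

    values : per-subregion estimates of the statistic (e.g. w(theta) in one
             angular bin, one value per HEALPix-based subsample).
    """
    values = np.asarray(values, dtype=float)
    n = values.size
    # leave-one-out means
    loo = (values.sum() - values) / (n - 1)
    jack_mean = loo.mean()
    var = (n - 1) / n * np.sum((loo - jack_mean) ** 2)
    return jack_mean, np.sqrt(var)

# toy usage: 20 subsamples of a correlation estimate in a single bin
rng = np.random.default_rng(0)
est, err = jackknife_error(0.05 + 0.01 * rng.standard_normal(20))
print(est, err)
```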
Spatial resampling of IDR frames for low bitrate video coding with HEVC
NASA Astrophysics Data System (ADS)
Hosking, Brett; Agrafiotis, Dimitris; Bull, David; Easton, Nick
2015-03-01
As the demand for higher quality and higher resolution video increases, many applications fail to meet this demand due to low bandwidth restrictions. One factor contributing to this problem is the high bitrate requirement of the intra-coded Instantaneous Decoding Refresh (IDR) frames featuring in all video coding standards. Frequent coding of IDR frames is essential for error resilience in order to prevent the occurrence of error propagation. However, as each one consumes a huge portion of the available bitrate, the quality of future coded frames is hindered by high levels of compression. This work presents a new technique, known as Spatial Resampling of IDR Frames (SRIF), and shows how it can increase the rate distortion performance by providing a higher and more consistent level of video quality at low bitrates.
More About the Phase-Synchronized Enhancement Method
NASA Technical Reports Server (NTRS)
Jong, Jen-Yi
2004-01-01
A report presents further details regarding the subject matter of "Phase-Synchronized Enhancement Method for Engine Diagnostics" (MFS-26435), NASA Tech Briefs, Vol. 22, No. 1 (January 1998), page 54. To recapitulate: The phase-synchronized enhancement method (PSEM) involves the digital resampling of a quasi-periodic signal in synchronism with the instantaneous phase of one of its spectral components. This resampling transforms the quasi-periodic signal into a periodic one more amenable to analysis. It is particularly useful for diagnosis of a rotating machine through analysis of vibration spectra that include components at the fundamental and harmonics of a slightly fluctuating rotation frequency. The report discusses the machinery-signal-analysis problem, outlines the PSEM algorithms, presents the mathematical basis of the PSEM, and presents examples of application of the PSEM in some computational simulations.
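The core resampling idea can be sketched as follows, assuming the tracked spectral component can be isolated with a band-pass filter and its instantaneous phase taken from the analytic (Hilbert) signal; this is a simplified stand-in for the PSEM algorithms, with illustrative parameter choices.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def phase_synchronous_resample(x, fs, f0, bw=2.0, samples_per_rev=64):
    """Resample x uniformly in the phase of the component near f0 (Hz)."""
    # isolate the tracked component with a narrow band-pass filter
    sos = butter(2, [f0 - bw, f0 + bw], btype="band", fs=fs, output="sos")
    ref = sosfiltfilt(sos, x)
    # instantaneous phase of the reference component (unwrapped)
    phase = np.unwrap(np.angle(hilbert(ref)))
    # uniform phase grid: samples_per_rev points per revolution (2*pi)
    phi_new = np.arange(phase[0], phase[-1], 2 * np.pi / samples_per_rev)
    return np.interp(phi_new, phase, x)

# toy usage: a tone whose frequency drifts slightly around 50 Hz
fs = 2000.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * (50 * t + 0.5 * np.sin(2 * np.pi * 0.5 * t)))
y = phase_synchronous_resample(x, fs, f0=50.0)
```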
Method for Pre-Conditioning a Measured Surface Height Map for Model Validation
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2012-01-01
This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also while eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density model, such re-sampling does not introduce the aliasing and interpolation errors produced by the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method used for re-sampling a surface map is two-dimensional interpolation. The main problem of this method is that the same pixel can take different values when the method of interpolation is changed among the different methods such as the "nearest," "linear," "cubic," and "spline" fitting in Matlab. The conventional, FFT-based spatial filtering method used to eliminate the surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low spatial-frequency characteristic of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains mid- and high-spatial-frequency characteristics of the given surface map by the use of a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter any aliasing effects and does not cause any discontinuity in the resultant surface map.
Adriaens, Els; Barroso, João; Eskes, Chantra; Hoffmann, Sebastian; McNamee, Pauline; Alépée, Nathalie; Bessou-Touya, Sandrine; De Smedt, Ann; De Wever, Bart; Pfannenbecker, Uwe; Tailhardat, Magalie; Zuang, Valérie
2014-03-01
For more than two decades, scientists have been trying to replace the regulatory in vivo Draize eye test by in vitro methods, but so far only partial replacement has been achieved. In order to better understand the reasons for this, historical in vivo rabbit data were analysed in detail and resampled with the purpose of (1) revealing which of the in vivo endpoints are most important in driving United Nations Globally Harmonized System/European Union Regulation on Classification, Labelling and Packaging (UN GHS/EU CLP) classification for serious eye damage/eye irritation and (2) evaluating the method's within-test variability for proposing acceptable and justifiable target values of sensitivity and specificity for alternative methods and their combinations in testing strategies. Among the Cat 1 chemicals evaluated, 36-65 % (depending on the database) were classified based only on persistence of effects, with the remaining being classified mostly based on severe corneal effects. Iritis was found to rarely drive the classification (<4 % of both Cat 1 and Cat 2 chemicals). The two most important endpoints driving Cat 2 classification are conjunctiva redness (75-81 %) and corneal opacity (54-75 %). The resampling analyses demonstrated an overall probability of at least 11 % that chemicals classified as Cat 1 by the Draize eye test could be equally identified as Cat 2 and of about 12 % for Cat 2 chemicals to be equally identified as No Cat. On the other hand, the over-classification error for No Cat and Cat 2 was negligible (<1 %), which strongly suggests a high over-predictive power of the Draize eye test. Moreover, our analyses of the classification drivers suggest a critical revision of the UN GHS/EU CLP decision criteria for the classification of chemicals based on Draize eye test data, in particular Cat 1 based only on persistence of conjunctiva effects or corneal opacity scores of 4. In order to successfully replace the regulatory in vivo Draize eye test, it will be important to recognise these uncertainties and to have in vitro tools to address the most important in vivo endpoints identified in this paper.
Hybrid modeling of spatial continuity for application to numerical inverse problems
Friedel, Michael J.; Iwashita, Fabio
2013-01-01
A novel two-step modeling approach is presented to obtain optimal starting values and geostatistical constraints for numerical inverse problems otherwise characterized by spatially-limited field data. First, a type of unsupervised neural network, called the self-organizing map (SOM), is trained to recognize nonlinear relations among environmental variables (covariates) occurring at various scales. The values of these variables are then estimated at random locations across the model domain by iterative minimization of SOM topographic error vectors. Cross-validation is used to ensure unbiasedness and compute prediction uncertainty for select subsets of the data. Second, analytical functions are fit to experimental variograms derived from original plus resampled SOM estimates producing model variograms. Sequential Gaussian simulation is used to evaluate spatial uncertainty associated with the analytical functions and probable range for constraining variables. The hybrid modeling of spatial continuity is demonstrated using spatially-limited hydrologic measurements at different scales in Brazil: (1) physical soil properties (sand, silt, clay, hydraulic conductivity) in the 42 km2 Vargem de Caldas basin; (2) well yield and electrical conductivity of groundwater in the 132 km2 fractured crystalline aquifer; and (3) specific capacity, hydraulic head, and major ions in a 100,000 km2 transboundary fractured-basalt aquifer. These results illustrate the benefits of exploiting nonlinear relations among sparse and disparate data sets for modeling spatial continuity, but the actual application of these spatial data to improve numerical inverse modeling requires testing.
Addressing Challenges in Web Accessibility for the Blind and Visually Impaired
ERIC Educational Resources Information Center
Guercio, Angela; Stirbens, Kathleen A.; Williams, Joseph; Haiber, Charles
2011-01-01
Searching for relevant information on the web is an important aspect of distance learning. This activity is a challenge for visually impaired distance learners. While sighted people have the ability to filter information in a fast and non sequential way, blind persons rely on tools that process the information in a sequential way. Learning is…
USDA-ARS?s Scientific Manuscript database
We developed a sequential Monte Carlo filter to estimate the states and the parameters in a stochastic model of Japanese Encephalitis (JE) spread in the Philippines. This method is particularly important for its adaptability to the availability of new incidence data. This method can also capture the...
ERIC Educational Resources Information Center
Wu, Sheng-Yi; Hou, Huei-Tse
2015-01-01
Cognitive styles play an important role in influencing the learning process, but to date no relevant study has been conducted using lag sequential analysis to assess knowledge construction learning patterns based on different cognitive styles in computer-supported collaborative learning activities in online collaborative discussions. This study…
Mining of high utility-probability sequential patterns from uncertain databases
Zhang, Binbin; Fournier-Viger, Philippe; Li, Ting
2017-01-01
High-utility sequential pattern mining (HUSPM) has become an important issue in the field of data mining. Several HUSPM algorithms have been designed to mine high-utility sequential patterns (HUPSPs). They have been applied in several real-life situations such as for consumer behavior analysis and event detection in sensor networks. Nonetheless, most studies on HUSPM have focused on mining HUPSPs in precise data. But in real life, uncertainty is an important factor as data is collected using various types of sensors that are more or less accurate. Hence, data collected in a real-life database can be annotated with existing probabilities. This paper presents a novel pattern mining framework called high utility-probability sequential pattern mining (HUPSPM) for mining high utility-probability sequential patterns (HUPSPs) in uncertain sequence databases. A baseline algorithm with three optional pruning strategies is presented to mine HUPSPs. Moreover, to speed up the mining process, a projection mechanism is designed to create a database projection for each processed sequence, which is smaller than the original database. Thus, the number of unpromising candidates can be greatly reduced, as well as the execution time for mining HUPSPs. Substantial experiments both on real-life and synthetic datasets show that the designed algorithm performs well in terms of runtime, number of candidates, memory usage, and scalability for different minimum utility and minimum probability thresholds. PMID:28742847
Modeling eye gaze patterns in clinician-patient interaction with lag sequential analysis.
Montague, Enid; Xu, Jie; Chen, Ping-Yu; Asan, Onur; Barrett, Bruce P; Chewning, Betty
2011-10-01
The aim of this study was to examine whether lag sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multiuser health care settings in which trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Nonverbal communication patterns are important aspects of clinician-patient interactions and may affect patient outcomes. The eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag sequential method to identify significant behavior sequences. Lag sequential analysis included both event-based lag and time-based lag. Results from event-based lag analysis showed that the patient's gaze followed that of the clinician, whereas the clinician's gaze did not follow the patient's. Time-based sequential analysis showed that responses from the patient usually occurred within 2 s after the initial behavior of the clinician. Our data suggest that the clinician's gaze significantly affects the medical encounter but that the converse is not true. Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs.
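A minimal, generic sketch of event-based lag-1 sequential analysis (transition counts plus adjusted residuals) is given below; the behavior codes are hypothetical and this is not the software used in the study.

```python
import numpy as np

def lag1_sequential(codes, states):
    """Event-based lag-1 transition counts and adjusted residuals (z-scores)."""
    idx = {s: i for i, s in enumerate(states)}
    k = len(states)
    counts = np.zeros((k, k))
    for a, b in zip(codes[:-1], codes[1:]):
        counts[idx[a], idx[b]] += 1
    n = counts.sum()
    row = counts.sum(axis=1, keepdims=True)
    col = counts.sum(axis=0, keepdims=True)
    expected = row @ col / n
    # adjusted residuals: a large positive z means the sequence occurs above chance
    z = (counts - expected) / np.sqrt(expected * (1 - row / n) * (1 - col / n))
    return counts, z

# toy usage with hypothetical gaze codes (C_* = clinician target, P_* = patient target)
codes = ["C_pat", "P_cli", "C_pat", "P_cli", "C_chart", "P_cli", "C_pat", "P_cli"]
counts, z = lag1_sequential(codes, ["C_pat", "C_chart", "P_cli"])
print(z.round(2))
```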
Modeling Eye Gaze Patterns in Clinician-Patient Interaction with Lag Sequential Analysis
Montague, E; Xu, J; Asan, O; Chen, P; Chewning, B; Barrett, B
2011-01-01
Objective The aim of this study was to examine whether lag-sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multi-user health care settings where trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Background Nonverbal communication patterns are important aspects of clinician-patient interactions and may impact patient outcomes. Method Eye gaze behaviors of clinicians and patients in 110-videotaped medical encounters were analyzed using the lag-sequential method to identify significant behavior sequences. Lag-sequential analysis included both event-based lag and time-based lag. Results Results from event-based lag analysis showed that the patients’ gaze followed that of clinicians, while clinicians did not follow patients. Time-based sequential analysis showed that responses from the patient usually occurred within two seconds after the initial behavior of the clinician. Conclusion Our data suggest that the clinician’s gaze significantly affects the medical encounter but not the converse. Application Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs. PMID:22046723
Using timed event sequential data in nursing research.
Pecanac, Kristen E; Doherty-King, Barbara; Yoon, Ju Young; Brown, Roger; Schiefelbein, Tony
2015-01-01
Measuring behavior is important in nursing research, and innovative technologies are needed to capture the "real-life" complexity of behaviors and events. The purpose of this article is to describe the use of timed event sequential data in nursing research and to demonstrate the use of this data in a research study. Timed event sequencing allows the researcher to capture the frequency, duration, and sequence of behaviors as they occur in an observation period and to link the behaviors to contextual details. Timed event sequential data can easily be collected with handheld computers, loaded with a software program designed for capturing observations in real time. Timed event sequential data add considerable strength to analysis of any nursing behavior of interest, which can enhance understanding and lead to improvement in nursing practice.
NASA Astrophysics Data System (ADS)
Lee, Han Sang; Kim, Hyeun A.; Kim, Hyeonjin; Hong, Helen; Yoon, Young Cheol; Kim, Junmo
2016-03-01
In spite of its clinical importance in the diagnosis of osteoarthritis, segmentation of cartilage in knee MRI remains a challenging task due to its shape variability and low contrast with surrounding soft tissues and synovial fluid. In this paper, we propose a multi-atlas segmentation of cartilage in knee MRI with sequential atlas registrations and locally-weighted voting (LWV). First, bone is segmented by sequential volume- and object-based registrations and LWV. Second, to overcome the shape variability of cartilage, cartilage is segmented by bone-mask-based registration and LWV. In experiments, the proposed method improved the bone segmentation by reducing misclassified bone regions, and enhanced the cartilage segmentation by preventing cartilage leakage into surrounding regions of similar intensity, with the help of sequential registrations and LWV.
Final Data Usability Summary and Resampling Proposal for Fort Sheridan
1996-03-22
performed. The basic approach discussed here was determined in discussions between Fort Sheridan, the EPA, Illinois EPA, the Army Environmental Center, and its RI consultant, Environmental Science and Engineering, Inc.
Immersive volume rendering of blood vessels
NASA Astrophysics Data System (ADS)
Long, Gregory; Kim, Han Suk; Marsden, Alison; Bazilevs, Yuri; Schulze, Jürgen P.
2012-03-01
In this paper, we present a novel method of visualizing flow in blood vessels. Our approach reads unstructured tetrahedral data, resamples it, and uses slice based 3D texture volume rendering. Due to the sparse structure of blood vessels, we utilize an octree to efficiently store the resampled data by discarding empty regions of the volume. We use animation to convey time series data, wireframe surface to give structure, and utilize the StarCAVE, a 3D virtual reality environment, to add a fully immersive element to the visualization. Our tool has great value in interdisciplinary work, helping scientists collaborate with clinicians, by improving the understanding of blood flow simulations. Full immersion in the flow field allows for a more intuitive understanding of the flow phenomena, and can be a great help to medical experts for treatment planning.
Spatial Resolution Characterization for QuickBird Image Products 2003-2004 Season
NASA Technical Reports Server (NTRS)
Blonski, Slawomir
2006-01-01
This presentation focuses on spatial resolution characterization for QuickBird panchromatic images in 2003-2004 and presents data measurements and analysis of SSC edge target deployment and edge response extraction and modeling. The results of the characterization are shown as values of the Modulation Transfer Function (MTF) at the Nyquist spatial frequency and as the Relative Edge Response (RER) components. The results show that RER is much less sensitive to the accuracy of the curve fitting than the value of MTF at Nyquist frequency. Therefore, the RER/edge response slope is a more robust estimator of the digital image spatial resolution than the MTF. For the QuickBird panchromatic images, the RER is consistently equal to 0.5 for images processed with the Cubic Convolution resampling and to 0.8 for the MTF resampling.
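Since the RER is essentially the rise of the normalized edge response across plus/minus half a pixel, it can be computed directly from a measured edge spread function, as in the short sketch below; the synthetic error-function edge is an assumed stand-in for measured data.

```python
import numpy as np
from scipy.special import erf

def relative_edge_response(distance_px, esf):
    """RER: normalized edge response difference across +/- 0.5 pixel.

    distance_px : sub-pixel distances from the edge for each ESF sample.
    esf         : edge spread function values, normalized to [0, 1].
    """
    return float(np.interp(0.5, distance_px, esf) - np.interp(-0.5, distance_px, esf))

# toy usage: an error-function edge whose slope sets the RER
d = np.linspace(-5, 5, 201)
esf = 0.5 * (1 + erf(d / 1.0))
print(relative_edge_response(d, esf))
```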
Restoration and reconstruction from overlapping images
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Kaiser, Daniel J.; Hanson, Andrew L.; Li, Jing
1997-01-01
This paper describes a technique for restoring and reconstructing a scene from overlapping images. In situations where there are multiple, overlapping images of the same scene, it may be desirable to create a single image that most closely approximates the scene, based on all of the data in the available images. For example, successive swaths acquired by NASA's planned Moderate Imaging Spectrometer (MODIS) will overlap, particularly at wide scan angles, creating a severe visual artifact in the output image. Resampling the overlapping swaths to produce a more accurate image on a uniform grid requires restoration and reconstruction. The one-pass restoration and reconstruction technique developed in this paper yields mean-square-optimal resampling, based on a comprehensive end-to-end system model that accounts for image overlap, and subject to user-defined and data-availability constraints on the spatial support of the filter.
Voice Conversion Using Pitch Shifting Algorithm by Time Stretching with PSOLA and Re-Sampling
NASA Astrophysics Data System (ADS)
Mousa, Allam
2010-01-01
Voice changing has many applications in industrial and commercial fields. This paper addresses voice conversion using a pitch shifting method that detects the pitch (fundamental frequency) of the signal using Simplified Inverse Filter Tracking (SIFT), changes it according to the target pitch period using time stretching with the Pitch Synchronous Overlap-Add (PSOLA) algorithm, and then resamples the signal in order to retain the same play rate. The same study was performed to examine the effect of voice conversion on Arabic speech signals. Treatment of certain Arabic voiced vowels and the conversion between male and female speech showed some expansion or compression in the resulting speech. A comparison in terms of pitch shifting is presented here. Analysis was performed for a single frame and for a full segmentation of speech.
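The final stage described above can be sketched as follows, assuming the PSOLA time-stretched signal is already available (the PSOLA step itself is not implemented here): resampling by the stretch factor restores the original duration and shifts the pitch.

```python
import numpy as np
from scipy.signal import resample

def shift_pitch_after_stretch(stretched, stretch_factor):
    """Resample a time-stretched signal back to the original length.

    stretched      : signal already time-stretched by `stretch_factor`
                     (e.g. via PSOLA, not implemented here).
    stretch_factor : >1 lengthens the signal; resampling back to the original
                     duration then raises the pitch by the same factor.
    """
    original_len = int(round(len(stretched) / stretch_factor))
    return resample(stretched, original_len)

# toy usage: pretend `y` was stretched by 1.25, so the pitch rises by 25%
y = np.sin(2 * np.pi * 220 * np.arange(0, 1.25, 1 / 16000))
out = shift_pitch_after_stretch(y, 1.25)
print(len(y), len(out))
```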
Warton, David I; Thibaut, Loïc; Wang, Yi Alice
2017-01-01
Bootstrap methods are widely used in statistics, and bootstrapping of residuals can be especially useful in the regression context. However, difficulties are encountered extending residual resampling to regression settings where residuals are not identically distributed (thus not amenable to bootstrapping)-common examples including logistic or Poisson regression and generalizations to handle clustered or multivariate data, such as generalised estimating equations. We propose a bootstrap method based on probability integral transform (PIT-) residuals, which we call the PIT-trap, which assumes data come from some marginal distribution F of known parametric form. This method can be understood as a type of "model-free bootstrap", adapted to the problem of discrete and highly multivariate data. PIT-residuals have the key property that they are (asymptotically) pivotal. The PIT-trap thus inherits the key property, not afforded by any other residual resampling approach, that the marginal distribution of data can be preserved under PIT-trapping. This in turn enables the derivation of some standard bootstrap properties, including second-order correctness of pivotal PIT-trap test statistics. In multivariate data, bootstrapping rows of PIT-residuals affords the property that it preserves correlation in data without the need for it to be modelled, a key point of difference as compared to a parametric bootstrap. The proposed method is illustrated on an example involving multivariate abundance data in ecology, and demonstrated via simulation to have improved properties as compared to competing resampling methods.
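A single-response sketch of the idea for Poisson regression is given below, using statsmodels for the GLM fit (an assumption for illustration, not the authors' implementation): randomized PIT residuals are formed from the fitted means, resampled, and mapped back through the inverse CDF before refitting.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import poisson

rng = np.random.default_rng(1)

# toy Poisson regression data
n = 200
X = sm.add_constant(rng.normal(size=(n, 1)))
y = rng.poisson(np.exp(X @ np.array([0.5, 0.8])))

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu = fit.fittedvalues

# randomized PIT residuals: uniform on [F(y-1; mu), F(y; mu)]
lo = poisson.cdf(y - 1, mu)
hi = poisson.cdf(y, mu)
u = rng.uniform(lo, hi)

# PIT-trap style resampling: bootstrap the PIT residuals, map back to counts, refit
boot_slopes = []
for _ in range(200):
    u_star = rng.choice(u, size=n, replace=True)
    y_star = np.maximum(poisson.ppf(u_star, mu), 0).astype(int)
    refit = sm.GLM(y_star, X, family=sm.families.Poisson()).fit()
    boot_slopes.append(refit.params[1])

print("bootstrap SE of slope:", np.std(boot_slopes, ddof=1))
```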
Mir, Taskia; Dirks, Peter; Mason, Warren P; Bernstein, Mark
2014-10-01
This is a qualitative study designed to examine patient acceptability of re-sampling surgery for glioblastoma multiforme (GBM) electively post-therapy or at asymptomatic relapse. Thirty patients were selected using the convenience sampling method and interviewed. Patients were presented with hypothetical scenarios including a scenario in which the surgery was offered to them routinely and a scenario in which the surgery was in a clinical trial. The results of the study suggest that about two thirds of the patients offered the surgery on a routine basis would be interested, and half of the patients would agree to the surgery as part of a clinical trial. Several overarching themes emerged, some of which include: patients expressed ethical concerns about offering financial incentives or compensation to the patients or surgeons involved in the study; patients were concerned about appropriate communication and full disclosure about the procedures involved, the legalities of tumor ownership and the use of the tumor post-surgery; patients may feel alone or vulnerable when they are approached about the surgery; patients and their families expressed immense trust in their surgeon and indicated that this trust is a major determinant of their agreeing to surgery. The overall positive response to re-sampling surgery suggests that this procedure, if designed with all the ethical concerns attended to, would be welcomed by most patients. This approach of asking patients beforehand if a treatment innovation is acceptable would appear to be more practical and ethically desirable than previous practice.
NASA Astrophysics Data System (ADS)
Wang, Jinliang; Wu, Xuejiao
2010-11-01
Geometric correction of imagery is a basic application of remote sensing technology, and its precision directly affects the accuracy and reliability of subsequent applications. The accuracy of geometric correction depends on many factors, including the correction model used, the accuracy of the reference map, the number of ground control points (GCPs) and their spatial distribution, and the resampling method. An ETM+ image of the Kunming Dianchi Lake Basin and 1:50000 geographical maps were used to compare different correction methods. The results showed that: (1) The correction errors exceeded one pixel, and in some cases several pixels, when the polynomial model was used; the correction accuracy was not stable when the Delaunay model was used; and the correction errors were less than one pixel when the collinearity equation was used. (2) When 6, 9, 25 and 35 GCPs were selected randomly for geometric correction with the polynomial model, the best result was obtained with 25 GCPs. (3) Among the resampling methods, nearest-neighbor resampling gave the best image contrast and the fastest resampling, but the continuity of pixel gray values was not very good; cubic convolution gave the worst contrast and the longest computation time. Overall, bilinear resampling gave the best result.
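A minimal sketch of a first-order (affine) polynomial correction from GCPs followed by bilinear resampling is shown below; the GCP coordinates are purely illustrative, and SciPy's map_coordinates with order=1 stands in for the bilinear resampler.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def fit_affine(src_rc, ref_rc):
    """Least-squares first-order polynomial (affine) mapping ref -> src.

    src_rc, ref_rc : (N, 2) arrays of (row, col) GCP coordinates in the
                     uncorrected image and the reference map, respectively.
    """
    ref = np.asarray(ref_rc, dtype=float)
    A = np.column_stack([np.ones(len(ref)), ref[:, 0], ref[:, 1]])
    coef, *_ = np.linalg.lstsq(A, np.asarray(src_rc, dtype=float), rcond=None)
    return coef  # shape (3, 2): constant, row and col terms for src row/col

def warp_bilinear(image, coef, out_shape):
    rows, cols = np.mgrid[0:out_shape[0], 0:out_shape[1]]
    A = np.stack([np.ones_like(rows), rows, cols], axis=-1).astype(float)
    src = A @ coef                      # source (row, col) for every output pixel
    return map_coordinates(image, [src[..., 0], src[..., 1]], order=1, mode="nearest")

# toy usage: 6 hypothetical GCPs and a random "image"
rng = np.random.default_rng(0)
img = rng.random((100, 100))
ref_pts = [(10, 10), (10, 90), (90, 10), (90, 90), (50, 50), (30, 70)]
src_pts = [(12, 11), (11, 93), (93, 12), (92, 94), (52, 52), (32, 73)]
coef = fit_affine(src_pts, ref_pts)
corrected = warp_bilinear(img, coef, out_shape=(100, 100))
```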
Phu, Jack; Bui, Bang V; Kalloniatis, Michael; Khuu, Sieu K
2018-03-01
The number of subjects needed to establish the normative limits for visual field (VF) testing is not known. Using bootstrap resampling, we determined whether the ground truth mean, distribution limits, and standard deviation (SD) could be approximated using different set size (x) levels, in order to provide guidance for the number of healthy subjects required to obtain robust VF normative data. We analyzed the 500 Humphrey Field Analyzer (HFA) SITA-Standard results of 116 healthy subjects and 100 HFA full threshold results of 100 psychophysically experienced healthy subjects. These VFs were resampled (bootstrapped) to determine mean sensitivity, distribution limits (5th and 95th percentiles), and SD for different x and numbers of resamples. We also used the VF results of 122 glaucoma patients to determine the performance of ground truth and bootstrapped results in identifying and quantifying VF defects. An x of 150 (for SITA-Standard) and 60 (for full threshold) produced bootstrapped descriptive statistics that were no longer different to the original distribution limits and SD. Removing outliers produced similar results. Differences between original and bootstrapped limits in detecting glaucomatous defects were minimized at x = 250. Ground truth statistics of VF sensitivities could be approximated using set sizes that are significantly smaller than the original cohort. Outlier removal facilitates the use of Gaussian statistics and does not significantly affect the distribution limits. We provide guidance for choosing the cohort size for different levels of error when performing normative comparisons with glaucoma patients.
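The resampling scheme can be sketched as follows: draw set-size x samples with replacement from the pooled normative values and track the resulting 5th/95th percentiles and SD; the toy Gaussian data stand in for measured sensitivities.

```python
import numpy as np

def bootstrap_limits(sensitivities, set_size, n_boot=1000, seed=0):
    """Bootstrap the 5th/95th percentiles and SD for a given set size.

    sensitivities : pooled normal sensitivities (dB) at one VF location.
    set_size      : number of subjects drawn per resample (the 'x' above).
    """
    rng = np.random.default_rng(seed)
    p5, p95, sd = [], [], []
    for _ in range(n_boot):
        sample = rng.choice(sensitivities, size=set_size, replace=True)
        p5.append(np.percentile(sample, 5))
        p95.append(np.percentile(sample, 95))
        sd.append(sample.std(ddof=1))
    return np.mean(p5), np.mean(p95), np.mean(sd)

# toy usage: compare a small and a large set size against the full cohort
full = np.random.default_rng(1).normal(30, 2.5, 500)
print(bootstrap_limits(full, set_size=60))
print(bootstrap_limits(full, set_size=150))
print(np.percentile(full, 5), np.percentile(full, 95), full.std(ddof=1))
```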
Assessing uncertainties in surface water security: An empirical multimodel approach
NASA Astrophysics Data System (ADS)
Rodrigues, Dulce B. B.; Gupta, Hoshin V.; Mendiondo, Eduardo M.; Oliveira, Paulo Tarso S.
2015-11-01
Various uncertainties are involved in the representation of processes that characterize interactions among societal needs, ecosystem functioning, and hydrological conditions. Here we develop an empirical uncertainty assessment of water security indicators that characterize scarcity and vulnerability, based on a multimodel and resampling framework. We consider several uncertainty sources including those related to (i) observed streamflow data; (ii) hydrological model structure; (iii) residual analysis; (iv) the method for defining Environmental Flow Requirement; (v) the definition of critical conditions for water provision; and (vi) the critical demand imposed by human activities. We estimate the overall hydrological model uncertainty by means of a residual bootstrap resampling approach, and by uncertainty propagation through different methodological arrangements applied to a 291 km2 agricultural basin within the Cantareira water supply system in Brazil. Together, the two-component hydrograph residual analysis and the block bootstrap resampling approach result in a more accurate and precise estimate of the uncertainty (95% confidence intervals) in the simulated time series. We then compare the uncertainty estimates associated with water security indicators using a multimodel framework and the uncertainty estimates provided by each model uncertainty estimation approach. The range of values obtained for the water security indicators suggests that the models/methods are robust and perform well in a range of plausible situations. The method is general and can be easily extended, thereby forming the basis for meaningful support to end-users facing water resource challenges by enabling them to incorporate a viable uncertainty analysis into a robust decision-making process.
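A minimal sketch of the block-bootstrap step (resampling residual blocks to preserve autocorrelation before adding them back to the simulated series) is given below; the block length and the toy residual series are illustrative, not the study's settings.

```python
import numpy as np

def block_bootstrap(residuals, block_len, n_boot=1000, seed=0):
    """Moving-block bootstrap of a residual time series.

    Resampling whole blocks preserves short-range autocorrelation that an
    ordinary (i.i.d.) residual bootstrap would destroy.
    """
    rng = np.random.default_rng(seed)
    r = np.asarray(residuals, dtype=float)
    n = r.size
    starts_max = n - block_len + 1
    n_blocks = int(np.ceil(n / block_len))
    out = np.empty((n_boot, n))
    for b in range(n_boot):
        starts = rng.integers(0, starts_max, size=n_blocks)
        out[b] = np.concatenate([r[s:s + block_len] for s in starts])[:n]
    return out

# toy usage: 95% interval of the simulated flow = model output + resampled residuals
rng = np.random.default_rng(2)
simulated = 10 + np.sin(np.linspace(0, 20, 365))
resid = rng.normal(0, 0.5, 365)
ensemble = simulated + block_bootstrap(resid, block_len=30)
lower, upper = np.percentile(ensemble, [2.5, 97.5], axis=0)
```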
A novel fruit shape classification method based on multi-scale analysis
NASA Astrophysics Data System (ADS)
Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin
2005-11-01
Shape is one of the major concerns, and it remains a difficult problem, in the automated inspection and sorting of fruit. In this research, we propose the multi-scale energy distribution (MSED) for object shape description; the relationship between an object's shape and its boundary energy distribution across scales is explored for shape extraction. MSED captures not only the main energy, which represents primary shape information at lower scales, but also the subordinate energy, which represents local shape information at higher scales. Thus, it provides a natural tool for multi-resolution representation and can be used as a feature for shape classification. We address the three main processing steps in MSED-based shape classification: 1) image preprocessing and citrus shape extraction, 2) shape resampling and shape feature normalization, and 3) energy decomposition by wavelet transform and classification by a BP neural network. Here, shape resampling extracts 256 boundary pixels from a cubic-spline approximation of the original boundary in order to obtain uniform raw data. A probability function was defined and an effective method to select a start point was given through maximal expectation, which overcomes the inconvenience of traditional methods and yields rotation invariance. The method classifies normal citrus and seriously abnormal fruit relatively well, with a classification rate above 91.2%. The global correct classification rate is 89.77%, and our method is more effective than traditional methods. The overall result can meet the requirements of fruit grading.
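The boundary resampling step can be sketched with SciPy's periodic parametric spline, as below; the noisy ellipse stands in for an extracted citrus contour.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def resample_boundary(boundary_xy, n_points=256):
    """Resample a closed fruit boundary to n_points via a cubic spline."""
    xy = np.asarray(boundary_xy, dtype=float)
    # close the contour so the periodic spline wraps around cleanly
    if not np.allclose(xy[0], xy[-1]):
        xy = np.vstack([xy, xy[0]])
    tck, _ = splprep([xy[:, 0], xy[:, 1]], s=0, per=True)
    u_new = np.linspace(0, 1, n_points, endpoint=False)
    x_new, y_new = splev(u_new, tck)
    return np.column_stack([x_new, y_new])

# toy usage: a noisy ellipse standing in for an extracted citrus boundary
t = np.linspace(0, 2 * np.pi, 97, endpoint=False)
contour = np.column_stack([60 * np.cos(t), 40 * np.sin(t)]) + np.random.normal(0, 0.3, (97, 2))
pts256 = resample_boundary(contour, 256)
print(pts256.shape)   # (256, 2)
```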
Stereo reconstruction from multiperspective panoramas.
Li, Yin; Shum, Heung-Yeung; Tang, Chi-Keung; Szeliski, Richard
2004-01-01
A new approach to computing a panoramic (360 degrees) depth map is presented in this paper. Our approach uses a large collection of images taken by a camera whose motion has been constrained to planar concentric circles. We resample regular perspective images to produce a set of multiperspective panoramas and then compute depth maps directly from these resampled panoramas. Our panoramas sample uniformly in three dimensions: rotation angle, inverse radial distance, and vertical elevation. The use of multiperspective panoramas eliminates the limited overlap present in the original input images and, thus, problems as in conventional multibaseline stereo can be avoided. Our approach differs from stereo matching of single-perspective panoramic images taken from different locations, where the epipolar constraints are sine curves. For our multiperspective panoramas, the epipolar geometry, to the first order approximation, consists of horizontal lines. Therefore, any traditional stereo algorithm can be applied to multiperspective panoramas with little modification. In this paper, we describe two reconstruction algorithms. The first is a cylinder sweep algorithm that uses a small number of resampled multiperspective panoramas to obtain dense 3D reconstruction. The second algorithm, in contrast, uses a large number of multiperspective panoramas and takes advantage of the approximate horizontal epipolar geometry inherent in multiperspective panoramas. It comprises a novel and efficient 1D multibaseline matching technique, followed by tensor voting to extract the depth surface. Experiments show that our algorithms are capable of producing comparable high quality depth maps which can be used for applications such as view interpolation.
Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters.
Chung, SungWon; Lu, Ying; Henry, Roland G
2006-11-01
Bootstrap is an empirical non-parametric statistical technique based on data resampling that has been used to quantify uncertainties of diffusion tensor MRI (DTI) parameters, useful in tractography and in assessing DTI methods. The current bootstrap method (repetition bootstrap) used for DTI analysis performs resampling within the data sharing common diffusion gradients, requiring multiple acquisitions for each diffusion gradient. Recently, wild bootstrap was proposed that can be applied without multiple acquisitions. In this paper, two new approaches are introduced called residual bootstrap and repetition bootknife. We show that repetition bootknife corrects for the large bias present in the repetition bootstrap method and, therefore, better estimates the standard errors. Like wild bootstrap, residual bootstrap is applicable to single acquisition scheme, and both are based on regression residuals (called model-based resampling). Residual bootstrap is based on the assumption that non-constant variance of measured diffusion-attenuated signals can be modeled, which is actually the assumption behind the widely used weighted least squares solution of diffusion tensor. The performances of these bootstrap approaches were compared in terms of bias, variance, and overall error of bootstrap-estimated standard error by Monte Carlo simulation. We demonstrate that residual bootstrap has smaller biases and overall errors, which enables estimation of uncertainties with higher accuracy. Understanding the properties of these bootstrap procedures will help us to choose the optimal approach for estimating uncertainties that can benefit hypothesis testing based on DTI parameters, probabilistic fiber tracking, and optimizing DTI methods.
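A minimal sketch of residual bootstrapping for an ordinary least-squares fit is given below; in the DTI setting the design matrix would come from the log-linearized tensor model, which is not reproduced here.

```python
import numpy as np

def residual_bootstrap_se(X, y, n_boot=1000, seed=0):
    """Residual-bootstrap standard errors for ordinary least squares.

    Fit once, resample the (centred) residuals with replacement, rebuild
    pseudo-data y* = X b + e*, and refit; the spread of the refitted
    coefficients estimates their standard errors.
    """
    rng = np.random.default_rng(seed)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    resid -= resid.mean()
    boots = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        y_star = X @ beta + rng.choice(resid, size=len(y), replace=True)
        boots[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)
    return boots.std(axis=0, ddof=1)

# toy usage on a simple linear model
rng = np.random.default_rng(3)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, 100)
print(residual_bootstrap_se(X, y))
```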
Bias Corrections for Regional Estimates of the Time-averaged Geomagnetic Field
NASA Astrophysics Data System (ADS)
Constable, C.; Johnson, C. L.
2009-05-01
We assess two sources of bias in the time-averaged geomagnetic field (TAF) and paleosecular variation (PSV): inadequate temporal sampling, and the use of unit vectors in deriving temporal averages of the regional geomagnetic field. For the first question (temporal sampling), we use statistical resampling of existing data sets to minimize and correct for bias arising from uneven temporal sampling in studies of the TAF and its PSV. The techniques are illustrated using data derived from Hawaiian lava flows for 0-5 Ma: directional observations are an updated version of a previously published compilation of paleomagnetic directional data centered on ±20° latitude by Lawrence et al. (2006); intensity data are drawn from Tauxe & Yamazaki (2007). We conclude that poor temporal sampling can produce biased estimates of TAF and PSV, and that resampling to an appropriate statistical distribution of ages reduces this bias. We suggest that similar resampling should be attempted as a bias correction for all regional paleomagnetic data to be used in TAF and PSV modeling. The second potential source of bias is the use of directional data in place of full vector data to estimate the average field. This is investigated for the full vector subset of the updated Hawaiian data set. Lawrence, K.P., C.G. Constable, and C.L. Johnson, 2006, Geochem. Geophys. Geosyst., 7, Q07007, DOI 10.1029/2005GC001181. Tauxe, L., & Yamazaki, 2007, Treatise on Geophysics, 5, Geomagnetism, Elsevier, Amsterdam, Chapter 13, p. 509.
The sequential structure of brain activation predicts skill.
Anderson, John R; Bothell, Daniel; Fincham, Jon M; Moon, Jungaa
2016-01-29
In an fMRI study, participants were trained to play a complex video game. They were scanned early and then again after substantial practice. While better players showed greater activation in one region (right dorsal striatum), their relative skill was better diagnosed by considering the sequential structure of whole brain activation. Using a cognitive model that played this game, we extracted a characterization of the mental states that are involved in playing a game and the statistical structure of the transitions among these states. There was a strong correspondence between this measure of sequential structure and the skill of different players. Using multi-voxel pattern analysis, it was possible to recognize, with relatively high accuracy, the cognitive states participants were in during particular scans. We used the sequential structure of these activation-recognized states to predict the skill of individual players. These findings indicate that important features about information-processing strategies can be identified from a model-based analysis of the sequential structure of brain activation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Classical and sequential limit analysis revisited
NASA Astrophysics Data System (ADS)
Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi
2018-04-01
Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity - in the absence of hardening and within a linearized geometrical framework -, sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity - although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.
The impact of eyewitness identifications from simultaneous and sequential lineups.
Wright, Daniel B
2007-10-01
Recent guidelines in the US allow either simultaneous or sequential lineups to be used for eyewitness identification. This paper investigates how potential jurors weight the probative value of the different outcomes from both of these types of lineups. Participants (n=340) were given a description of a case that included some exonerating and some incriminating evidence. There was either a simultaneous or a sequential lineup. Depending on the condition, an eyewitness chose the suspect, chose a filler, or made no identification. The participant had to judge the guilt of the suspect and decide whether to render a guilty verdict. For both simultaneous and sequential lineups an identification had a large effect, increasing the probability of a guilty verdict. There were no reliable effects detected between making no identification and identifying a filler. The effect sizes were similar for simultaneous and sequential lineups. These findings are important for judges and other legal professionals to know for trials involving lineup identifications.
Osteomyelitis of the head and neck: sequential radionuclide scanning in diagnosis and therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strauss, M.; Kaufman, R.A.; Baum, S.
1985-01-01
Sequential technetium and gallium scans of the head and neck were used to confirm the diagnosis of osteomyelitis and as an important therapeutic aid to delineate the transformation of active osteomyelitis to inactive osteomyelitis in 11 cases involving sites in the head and neck. Illustrative cases are presented of frontal sinus and cervical spine osteomyelitis and laryngeal osteochondritis.
van Heijnsbergen, E.; van Deursen, A.; Bouwknegt, M.; Bruin, J. P.; Schalk, J. A. C.
2016-01-01
ABSTRACT Garden soils were investigated as reservoirs and potential sources of pathogenic Legionella bacteria. Legionella bacteria were detected in 22 of 177 garden soil samples (12%) by amoebal coculture. Of these 22 Legionella-positive soil samples, seven contained Legionella pneumophila. Several other species were found, including the pathogenic Legionella longbeachae (4 gardens) and Legionella sainthelensi (9 gardens). The L. pneumophila isolates comprised 15 different sequence types (STs), and eight of these STs were previously isolated from patients according to the European Working Group for Legionella Infections (EWGLI) database. Six gardens that were found to be positive for L. pneumophila were resampled after several months, and in three gardens, L. pneumophila was again isolated. One of these gardens was resampled four times throughout the year and was found to be positive for L. pneumophila on all occasions. IMPORTANCE Tracking the source of infection for sporadic cases of Legionnaires' disease (LD) has proven to be hard. L. pneumophila ST47, the sequence type that is most frequently isolated from LD patients in the Netherlands, is rarely found in potential environmental sources. As L. pneumophila ST47 was previously isolated from a garden soil sample during an outbreak investigation, garden soils were investigated as reservoirs and potential sources of pathogenic Legionella bacteria. The detection of viable, clinically relevant Legionella strains indicates that garden soil is a potential source of Legionella bacteria, and future research should assess the public health implication of the presence of L. pneumophila in garden soil. PMID:27316958
Comparisons of Reflectivities from the TRMM Precipitation Radar and Ground-Based Radars
NASA Technical Reports Server (NTRS)
Wang, Jianxin; Wolff, David B.
2008-01-01
Given the decade-long and highly successful Tropical Rainfall Measuring Mission (TRMM), it is now possible to provide quantitative comparisons between ground-based radars (GRs) and the space-borne TRMM precipitation radar (PR) with greater certainty over longer time scales in various tropical climatological regions. This study develops an automated methodology to match and compare simultaneous TRMM PR and GR reflectivities at four primary TRMM Ground Validation (GV) sites: Houston, Texas (HSTN); Melbourne, Florida (MELB); Kwajalein, Republic of the Marshall Islands (KWAJ); and Darwin, Australia (DARW). Data from each instrument are resampled into a three-dimensional Cartesian coordinate system. The horizontal displacement during the PR data resampling is corrected. Comparisons suggest that the PR suffers significant attenuation at lower levels, especially in convective rain. The attenuation correction performs quite well for convective rain but appears to slightly over-correct in stratiform rain. The PR and GR observations at HSTN, MELB and KWAJ agree to about 1 dB on average with a few exceptions, while the GR at DARW requires +1 to -5 dB calibration corrections. One of the important findings of this study is that the GR calibration offset is dependent on the reflectivity magnitude. Hence, we propose that the calibration should be carried out using a regression correction, rather than simply adding an offset value to all GR reflectivities. This methodology is developed in support of TRMM GV efforts to improve the accuracy of tropical rain estimates, and can also be applied to the proposed Global Precipitation Measurement and other related activities over the globe.
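The proposed regression correction can be sketched as fitting the GR-PR bias as a linear function of reflectivity and subtracting the fitted bias, as below; the matched toy reflectivities are synthetic and the linear form is an assumption for illustration.

```python
import numpy as np

def regression_calibration(gr_dbz, pr_dbz):
    """Fit the GR bias as a linear function of reflectivity and correct it.

    gr_dbz, pr_dbz : matched, resampled reflectivities (dBZ) from the
                     ground radar and the TRMM PR for the same volumes.
    Returns the corrected GR reflectivities.
    """
    gr = np.asarray(gr_dbz, dtype=float)
    pr = np.asarray(pr_dbz, dtype=float)
    bias = gr - pr
    slope, intercept = np.polyfit(gr, bias, deg=1)   # bias as a function of GR dBZ
    return gr - (slope * gr + intercept)

# toy usage: a GR that reads increasingly low at high reflectivities
rng = np.random.default_rng(4)
pr = rng.uniform(20, 50, 500)
gr = pr - 0.05 * (pr - 20) + rng.normal(0, 1.0, 500)
gr_corr = regression_calibration(gr, pr)
print(np.mean(gr - pr), np.mean(gr_corr - pr))
```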
Datamining approaches for modeling tumor control probability.
Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D
2010-11-01
Tumor control probability (TCP) to radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed that include dose-volume metrics, equivalent uniform dose, mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with an rs=0.68 on leave-one-out testing compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and cell kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
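A minimal sketch of resampling-based validation (leave-one-out predictions scored with a Spearman rank correlation) is given below, using scikit-learn's logistic regression as a stand-in classifier; the two covariates are hypothetical, not the GTV/V75 variables of the study.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

def loo_spearman(X, y):
    """Leave-one-out predictions scored with a Spearman rank correlation."""
    preds = np.empty(len(y), dtype=float)
    for train, test in LeaveOneOut().split(X):
        model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        preds[test] = model.predict_proba(X[test])[:, 1]
    rs, _ = spearmanr(preds, y)
    return rs

# toy usage: two dose-volume-like covariates and a binary "tumor controlled" label
rng = np.random.default_rng(5)
X = rng.normal(size=(56, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 56) > 0).astype(int)
print(loo_spearman(X, y))
```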
NASA Astrophysics Data System (ADS)
Singh, A. K.; Toshniwal, D.
2017-12-01
The MODIS Joint Atmosphere products MODATML2 and MYDATML2 (L2/3), provided by LAADS DAAC (Level-1 and Atmosphere Archive & Distribution System Distributed Active Archive Center) and re-sampled from medium-resolution MODIS Terra/Aqua satellite data at 5 km scale, contain Cloud Reflectance, Cloud Top Temperature, Water Vapor, Aerosol Optical Depth/Thickness, and Humidity data. These re-sampled data, when used for deriving the climatic effects of aerosols (particularly the cooling effect), still expose limitations in the presence of uncertainty in atmospheric artifacts such as aerosol, cloud, and cirrus cloud. The uncertainty in these artifacts poses an important challenge for the estimation of aerosol effects and ultimately affects precise regional weather modeling and prediction: forecasting and recommendation applications largely depend on these short-term local conditions (e.g., city/locality-based recommendations to citizens or farmers built on local weather models). Our approach incorporates artificial intelligence techniques for representing heterogeneous data (satellite data along with air quality data from local weather stations, i.e., in situ data) to learn, correct, and predict aerosol effects in the presence of cloud and other atmospheric artifacts, fusing spatio-temporal correlations and regressions. The Big Data processing pipeline, consisting of correlation and regression techniques developed on the Apache Spark platform, can easily scale to large data sets that include many tiles (scenes) and wide time scales. Keywords: Climatic Effects of Aerosols, Situation-Aware, Big Data, Apache Spark, MODIS Terra/Aqua, Time Series
Frequency Analysis Using Bootstrap Method and SIR Algorithm for Prevention of Natural Disasters
NASA Astrophysics Data System (ADS)
Kim, T.; Kim, Y. S.
2017-12-01
Frequency analysis of hydrometeorological data is one of the most important factors in responding to natural disaster damage and in setting design standards for disaster prevention facilities. Frequency analysis of hydrometeorological data assumes that the observations are statistically stationary, and a parametric method based on the parameters of a probability distribution is usually applied. A parametric method requires a sufficient amount of reliable data; in Korea, however, snowfall records are insufficient because the number of snowfall observation days and the mean maximum daily snowfall depth are decreasing due to climate change. In this study, we conducted a frequency analysis of snowfall using the Bootstrap method and the SIR algorithm, resampling methods that can overcome the problem of insufficient data. For 58 meteorological stations distributed evenly across Korea, probabilistic snowfall depths were estimated by non-parametric frequency analysis using maximum daily snowfall depth data. The results show that the probabilistic daily snowfall depth decreases at most stations, and the rates of change at most stations are consistent between the parametric and non-parametric frequency analyses. This study shows that resampling methods enable frequency analysis of snowfall depth when observed samples are insufficient, and the approach can also be applied to other natural disasters with seasonal characteristics, such as summer typhoons. Acknowledgment: This research was supported by a grant (MPSS-NH-2015-79) from the Disaster Prediction and Mitigation Technology Development Program funded by the Korean Ministry of Public Safety and Security (MPSS).
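A minimal sketch of a non-parametric bootstrap of a return level (the empirical quantile for a chosen return period) is shown below; the synthetic annual maxima and the 90% interval are illustrative, and the SIR step is not included.

```python
import numpy as np

def bootstrap_return_level(annual_max, return_period, n_boot=2000, seed=0):
    """Non-parametric bootstrap of a T-year return level.

    annual_max    : annual maximum daily snowfall depths (one value per year).
    return_period : e.g. 100 for the 100-year event.
    Returns the point estimate and a 90% bootstrap confidence interval.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(annual_max, dtype=float)
    p = 1.0 - 1.0 / return_period          # non-exceedance probability
    estimate = np.quantile(x, p)
    boot = [np.quantile(rng.choice(x, size=x.size, replace=True), p)
            for _ in range(n_boot)]
    return estimate, np.percentile(boot, [5, 95])

# toy usage: 40 years of synthetic annual maxima
rng = np.random.default_rng(6)
annual_max = rng.gamma(shape=2.0, scale=8.0, size=40)
print(bootstrap_return_level(annual_max, return_period=100))
```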
NASA Astrophysics Data System (ADS)
Dostálová, Alena; Naeimi, Vahid; Wagner, Wolfgang; Elefante, Stefano; Cao, Senmao; Persson, Henrik
2016-10-01
One of the major advantages of Sentinel-1 data is its capability to provide very high spatio-temporal coverage, allowing the mapping of large areas as well as the creation of dense time series of Sentinel-1 acquisitions. The SGRT software developed at TU Wien aims at automated processing of Sentinel-1 data for global and regional products. The first step of the processing consists of geocoding the Sentinel-1 data with the help of the S1TBX software and resampling them to a common grid. These resampled images serve as input for the product derivation. Thus, it is very important to select the most reliable processing settings and to assess the geocoding uncertainty for both backscatter and projected local incidence angle images. Within this study, a selection of Sentinel-1 acquisitions over 3 test areas in Europe was processed manually in the S1TBX software, testing multiple software versions, processing settings and digital elevation models (DEMs), and the accuracy of the resulting geocoded images was assessed. Secondly, all available Sentinel-1 data over the areas were processed using the selected settings and a detailed quality check was performed. Overall, a strong influence of the DEM used on the geocoding quality was confirmed, with differences of up to 80 meters in areas with higher terrain variations. In flat areas, the geocoding accuracy of the backscatter images was overall good, with observed shifts between 0 and 30 m. Larger systematic shifts were identified for the projected local incidence angle images. These results encourage the automated processing of large volumes of Sentinel-1 data.
Niu, Shanzhou; Zhang, Shanli; Huang, Jing; Bian, Zhaoying; Chen, Wufan; Yu, Gaohang; Liang, Zhengrong; Ma, Jianhua
2016-01-01
Cerebral perfusion x-ray computed tomography (PCT) is an important functional imaging modality for evaluating cerebrovascular diseases and has been widely used in clinics over the past decades. However, because the PCT imaging protocol involves repeated dynamic sequential scans, the associated radiation dose unavoidably increases compared with conventional CT examinations. Minimizing the radiation exposure in PCT examinations is a major task in the CT field. In this paper, considering the rich similarity redundancy among enhanced sequential PCT images, we propose a low-dose PCT image restoration model that exploits the low-rank and sparse matrix characteristics of sequential PCT images. Specifically, the sequential PCT images are first stacked into a matrix (i.e., a low-rank matrix), and a non-convex spectral norm regularization and a spatio-temporal total variation regularization are then built on this matrix to describe the low rank and the sparsity of the sequential PCT images, respectively. Subsequently, an improved split Bregman method is adopted to minimize the associated objective function with a reasonable convergence rate. Both qualitative and quantitative studies were conducted using a digital phantom and clinical cerebral PCT datasets to evaluate the present method. Experimental results show that the presented method achieves images with several noticeable advantages over existing methods in terms of noise reduction and universal quality index. More importantly, the present method produces more accurate kinetic enhanced details and diagnostic hemodynamic parameter maps. PMID:27440948
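The following sketch illustrates only the low-rank ingredient of such a model, not the paper's full split Bregman algorithm: sequential frames are stacked into a matrix and singular-value soft-thresholding (the proximal operator of a nuclear-norm penalty) is applied; the array sizes and the threshold are arbitrary assumptions.

```python
# Minimal sketch of the low-rank ingredient only (not the paper's full
# split Bregman solver): stack sequential frames as matrix columns and apply
# singular-value soft-thresholding, the proximal step of a nuclear-norm prior.
import numpy as np

def svt(matrix, tau):
    """Singular value soft-thresholding: prox of tau * nuclear norm."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return u @ np.diag(s_shrunk) @ vt

# Hypothetical noisy sequential frames: 20 time points, each 64x64 pixels.
rng = np.random.default_rng(0)
frames = rng.normal(size=(20, 64, 64))
stacked = frames.reshape(20, -1).T        # pixels x time (low-rank matrix)

denoised = svt(stacked, tau=5.0)
frames_denoised = denoised.T.reshape(20, 64, 64)
```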
Methods to achieve accurate projection of regional and global raster databases
Usery, E.L.; Seong, J.C.; Steinwand, D.R.; Finn, M.P.
2002-01-01
This research aims at building a decision support system (DSS) for selecting an optimum projection considering various factors, such as pixel size, areal extent, number of categories, spatial pattern of categories, resampling methods, and error correction methods. Specifically, this research will investigate three goals theoretically and empirically and, using the already developed empirical base of knowledge with these results, develop an expert system for map projection of raster data for regional and global database modeling. The three theoretical goals are as follows: (1) The development of a dynamic projection that adjusts projection formulas for latitude on the basis of raster cell size to maintain equal-sized cells. (2) The investigation of the relationships between the raster representation and the distortion of features, number of categories, and spatial pattern. (3) The development of an error correction and resampling procedure that is based on error analysis of raster projection.
Speckle reduction in digital holography with resampling ring masks
NASA Astrophysics Data System (ADS)
Zhang, Wenhui; Cao, Liangcai; Jin, Guofan
2018-01-01
One-shot digital holographic imaging has the advantages of high stability and low temporal cost. However, the reconstruction is affected by speckle noise. A resampling ring-mask method in the spectrum domain is proposed for speckle reduction. The useful spectrum of one hologram is divided into several sub-spectra by ring masks. In the reconstruction, the angular spectrum transform, which involves no approximation, is applied to guarantee calculation accuracy. N reconstructed amplitude images are calculated from the corresponding sub-spectra. Thanks to the random distribution of speckle, superimposing these N uncorrelated amplitude images yields a final reconstructed image with lower speckle noise. Normalized relative standard deviation values of the reconstructed image are used to evaluate the speckle reduction. The effect of the method on the spatial resolution of the reconstructed image is also quantitatively evaluated. Experimental and simulation results prove the feasibility and effectiveness of the proposed method.
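A minimal sketch of the ring-mask splitting idea follows, assuming a reconstructed complex field is already available and using a plain FFT in place of the full angular-spectrum reconstruction chain; ring boundaries are simply spaced uniformly in radius.

```python
# Sketch: split the spectrum into N ring masks, reconstruct one amplitude image
# per ring, and average the uncorrelated amplitudes to suppress speckle.
import numpy as np

def ring_mask_average(field, n_rings=4):
    spectrum = np.fft.fftshift(np.fft.fft2(field))
    ny, nx = field.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(y - ny / 2, x - nx / 2)
    r_max = r.max()
    amplitudes = []
    for k in range(n_rings):
        mask = (r >= k * r_max / n_rings) & (r < (k + 1) * r_max / n_rings)
        sub = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
        amplitudes.append(np.abs(sub))
    return np.mean(amplitudes, axis=0)   # superposition of N uncorrelated images
```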
Statistical wiring of thalamic receptive fields optimizes spatial sampling of the retinal image
Wang, Xin; Sommer, Friedrich T.; Hirsch, Judith A.
2014-01-01
It is widely assumed that mosaics of retinal ganglion cells establish the optimal representation of visual space. However, relay cells in the visual thalamus often receive convergent input from several retinal afferents and, in cat, outnumber ganglion cells. To explore how the thalamus transforms the retinal image, we built a model of the retinothalamic circuit using experimental data and simple wiring rules. The model shows how the thalamus might form a resampled map of visual space with the potential to facilitate detection of stimulus position in the presence of sensor noise. Bayesian decoding conducted with the model provides support for this scenario. Despite its benefits, however, resampling introduces image blur, thus impairing edge perception. Whole-cell recordings obtained in vivo suggest that this problem is mitigated by arrangements of excitation and inhibition within the receptive field that effectively boost contrast borders, much like strategies used in digital image processing. PMID:24559681
An Acoustic OFDM System with Symbol-by-Symbol Doppler Compensation for Underwater Communication
MinhHai, Tran; Rie, Saotome; Suzuki, Taisaku; Wada, Tomohisa
2016-01-01
We propose an acoustic OFDM system for underwater communication, specifically for vertical link communications such as between a robot on the sea bottom and a mother ship at the surface. The main contributions are (1) estimation of the time-varying Doppler shift using continual pilots in conjunction with monitoring the drift of the power delay profile and (2) symbol-by-symbol Doppler compensation in the frequency domain by an ICI matrix representing nonuniform Doppler. In addition, we compare our proposal against a resampling method. Simulation and experimental results confirm that our system outperforms the resampling method when the velocity changes roughly over OFDM symbols. Overall, experimental results taken in Shizuoka, Japan, show that our system, using 16QAM and 64QAM, achieved a data throughput of 7.5 kbit/s with a transmitter moving at a maximum of 2 m/s, in a complicated trajectory, over 30 m vertically. PMID:27057558
Forecasting drought risks for a water supply storage system using bootstrap position analysis
Tasker, Gary; Dunne, Paul
1997-01-01
Forecasting the likelihood of drought conditions is an integral part of managing a water supply storage and delivery system. Position analysis uses a large number of possible flow sequences as inputs to a simulation of a water supply storage and delivery system. For a given set of operating rules and water use requirements, water managers can use such a model to forecast the likelihood of specified outcomes such as reservoir levels falling below a specified level or streamflows falling below statutory passing flows a few months ahead conditioned on the current reservoir levels and streamflows. The large number of possible flow sequences are generated using a stochastic streamflow model with a random resampling of innovations. The advantages of this resampling scheme, called bootstrap position analysis, are that it does not rely on the unverifiable assumption of normality and it allows incorporation of long-range weather forecasts into the analysis.
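The residual-resampling idea behind bootstrap position analysis can be sketched as follows; the AR(1) model of log flows and all parameter choices are illustrative assumptions, not the authors' operational model.

```python
# Illustrative sketch: generate many 12-month flow traces by resampling
# innovations (residuals) from a fitted lag-1 autoregressive model of log
# monthly flows, conditioned on the most recent observed flow.
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_traces(log_flows, n_traces=1000, horizon=12):
    """log_flows: historical log-transformed monthly flows (1-D array)."""
    x = log_flows - log_flows.mean()
    phi = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)   # AR(1) coefficient
    residuals = x[1:] - phi * x[:-1]                     # innovations to resample
    traces = np.empty((n_traces, horizon))
    for i in range(n_traces):
        state = x[-1]                                    # condition on current flow
        for t in range(horizon):
            state = phi * state + rng.choice(residuals)
            traces[i, t] = state + log_flows.mean()
    return np.exp(traces)                                # back-transform to flows

flows = rng.gamma(3.0, 50.0, size=360)       # hypothetical 30 years of monthly flows
traces = bootstrap_traces(np.log(flows))
print(traces.shape)                          # (1000, 12)
```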
Measures of precision for dissimilarity-based multivariate analysis of ecological communities
Anderson, Marti J; Santana-Garcon, Julia
2015-01-01
Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. PMID:25438826
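A minimal sketch of a MultSE-style quantity for a single group follows, assuming the pseudo variance is computed PERMANOVA-style from pairwise dissimilarities (V = SS/(n-1) with SS the sum of squared dissimilarities divided by n) and MultSE = sqrt(V/n); the paper's double-resampling procedure for uncertainty with increasing sample size is omitted.

```python
# Sketch only; see the paper and the accompanying R functions for the exact
# definitions and the double-resampling uncertainty quantification.
import numpy as np
from scipy.spatial.distance import pdist

def mult_se(data, metric="braycurtis"):
    n = data.shape[0]
    d = pdist(data, metric=metric)             # pairwise dissimilarities
    ss = np.sum(d ** 2) / n                     # assumed PERMANOVA-style SS
    v = ss / (n - 1)                            # pseudo multivariate variance
    return np.sqrt(v / n)

rng = np.random.default_rng(2)
community = rng.poisson(5.0, size=(30, 12))     # hypothetical 30 samples x 12 taxa
print(mult_se(community))
```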
NASA Astrophysics Data System (ADS)
Lu, Siliang; Wang, Xiaoxian; He, Qingbo; Liu, Fang; Liu, Yongbin
2016-12-01
Transient signal analysis (TSA) has been proven an effective tool for motor bearing fault diagnosis, but has yet to be applied in processing bearing fault signals with variable rotating speed. In this study, a new TSA-based angular resampling (TSAAR) method is proposed for fault diagnosis under speed fluctuation condition via sound signal analysis. By applying the TSAAR method, the frequency smearing phenomenon is eliminated and the fault characteristic frequency is exposed in the envelope spectrum for bearing fault recognition. The TSAAR method can accurately estimate the phase information of the fault-induced impulses using neither complicated time-frequency analysis techniques nor external speed sensors, and hence it provides a simple, flexible, and data-driven approach that realizes variable-speed motor bearing fault diagnosis. The effectiveness and efficiency of the proposed TSAAR method are verified through a series of simulated and experimental case studies.
Simulation and statistics: Like rhythm and song
NASA Astrophysics Data System (ADS)
Othman, Abdul Rahman
2013-04-01
Simulation has been introduced to solve problems posed in the form of systems. By using this technique the following two types of problems can be overcome. First, a problem that has an analytical solution, but for which the cost of running an experiment is high in terms of money and lives. Second, a problem that has no analytical solution. In the field of statistical inference the second problem is often encountered. With the advent of high-speed computing devices, a statistician can now use resampling techniques such as the bootstrap and permutation tests to form pseudo sampling distributions that lead to the solution of problems that cannot be solved analytically. This paper discusses how Monte Carlo simulation was, and still is, being used to verify analytical solutions in inference. This paper also discusses resampling techniques as simulation techniques. The misunderstandings about these two techniques are examined. The successful usages of both techniques are also explained.
Bishara, Anthony J; Hittner, James B
2012-09-01
It is well known that when data are nonnormally distributed, a test of the significance of Pearson's r may inflate Type I error rates and reduce power. Statistics textbooks and the simulation literature provide several alternatives to Pearson's correlation. However, the relative performance of these alternatives has been unclear. Two simulation studies were conducted to compare 12 methods, including Pearson, Spearman's rank-order, transformation, and resampling approaches. With most sample sizes (n ≥ 20), Type I and Type II error rates were minimized by transforming the data to a normal shape prior to assessing the Pearson correlation. Among transformation approaches, a general purpose rank-based inverse normal transformation (i.e., transformation to rankit scores) was most beneficial. However, when samples were both small (n ≤ 10) and extremely nonnormal, the permutation test often outperformed other alternatives, including various bootstrap tests.
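A short sketch of the rank-based inverse normal (rankit) transformation mentioned above, applied before computing Pearson's r; the data here are synthetic and the comparison is purely illustrative.

```python
# Rankit scores: transform ranks to normal quantiles via (rank - 0.5) / n,
# then compute Pearson's correlation on the transformed values.
import numpy as np
from scipy import stats

def rankit(x):
    ranks = stats.rankdata(x)                  # average ranks for ties
    return stats.norm.ppf((ranks - 0.5) / len(x))

rng = np.random.default_rng(3)
x = rng.exponential(size=50)                   # skewed (nonnormal) data
y = x + rng.exponential(size=50)

r_raw, p_raw = stats.pearsonr(x, y)
r_rin, p_rin = stats.pearsonr(rankit(x), rankit(y))
print(r_raw, r_rin)
```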
A scale-invariant change detection method for land use/cover change research
NASA Astrophysics Data System (ADS)
Xing, Jin; Sieber, Renee; Caelli, Terrence
2018-07-01
Land Use/Cover Change (LUCC) detection relies increasingly on comparing remote sensing images with different spatial and spectral scales. Based on scale-invariant image analysis algorithms in computer vision, we propose a scale-invariant LUCC detection method to identify changes from scale-heterogeneous images. This method is composed of an entropy-based spatial decomposition, two scale-invariant feature extraction methods, the Maximally Stable Extremal Region (MSER) and Scale-Invariant Feature Transform (SIFT) algorithms, a spatial regression voting method to integrate the MSER and SIFT results, a Markov Random Field-based smoothing method, and a support vector machine classification method to assign LUCC labels. We test the scale invariance of our new method with a LUCC case study in Montreal, Canada, 2005-2012. We found that the scale-invariant LUCC detection method provides accuracy similar to the resampling-based approach while avoiding the LUCC distortion incurred by resampling.
Bootstrap position analysis for forecasting low flow frequency
Tasker, Gary D.; Dunne, P.
1997-01-01
A method of random resampling of residuals from stochastic models is used to generate a large number of 12-month-long traces of natural monthly runoff to be used in a position analysis model for a water-supply storage and delivery system. Position analysis uses the traces to forecast the likelihood of specified outcomes, such as reservoir levels falling below a specified level or streamflows falling below statutory passing flows, conditioned on the current reservoir levels and streamflows. The advantages of this resampling scheme, called bootstrap position analysis, are that it does not rely on the unverifiable assumption of normality, fewer parameters need to be estimated directly from the data, and accounting for parameter uncertainty is easily done. For a given set of operating rules and water-use requirements for a system, water managers can use such a model as a decision-making tool to evaluate different operating rules. © ASCE.
NASA Astrophysics Data System (ADS)
Chuan, Zun Liang; Ismail, Noriszura; Shinyie, Wendy Ling; Lit Ken, Tan; Fam, Soo-Fen; Senawi, Azlyna; Yusoff, Wan Nur Syahidah Wan
2018-04-01
Due to the limited availability of historical precipitation records, agglomerative hierarchical clustering algorithms are widely used to extrapolate information from gauged to ungauged precipitation catchments in order to yield a more reliable projection of extreme hydro-meteorological events such as extreme precipitation events. However, identifying the optimum number of homogeneous precipitation catchments accurately from the dendrogram produced by agglomerative hierarchical algorithms is very subjective. The main objective of this study is to propose an efficient regionalized algorithm to identify homogeneous precipitation catchments for non-stationary precipitation time series. The homogeneous precipitation catchments are identified using the average linkage hierarchical clustering algorithm combined with multi-scale bootstrap resampling, with the uncentered correlation coefficient as the similarity measure. The regionalized homogeneous precipitation catchments are consolidated using the K-sample Anderson-Darling non-parametric test. The analysis shows that the proposed regionalized algorithm performs better than the agglomerative hierarchical clustering algorithms proposed in previous studies.
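The clustering step can be sketched as follows (the multi-scale bootstrap and the Anderson-Darling consolidation are omitted); note that the uncentered correlation of two raw series equals their cosine similarity, so 1 minus SciPy's cosine distance serves as the similarity measure. The station data are synthetic.

```python
# Average-linkage hierarchical clustering of station series with an
# uncentered-correlation (cosine) dissimilarity.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(4)
precip = rng.gamma(2.0, 10.0, size=(15, 120))    # hypothetical: 15 stations x 120 months

dissim = pdist(precip, metric="cosine")           # 1 - uncentered correlation
tree = linkage(dissim, method="average")          # average linkage (UPGMA)
labels = fcluster(tree, t=4, criterion="maxclust")  # e.g. 4 candidate regions
print(labels)
```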
Efficient high-quality volume rendering of SPH data.
Fraedrich, Roland; Auer, Stefan; Westermann, Rüdiger
2010-01-01
High quality volume rendering of SPH data requires a complex order-dependent resampling of particle quantities along the view rays. In this paper we present an efficient approach to perform this task using a novel view-space discretization of the simulation domain. Our method draws upon recent work on GPU-based particle voxelization for the efficient resampling of particles into uniform grids. We propose a new technique that leverages a perspective grid to adaptively discretize the view-volume, giving rise to a continuous level-of-detail sampling structure and reducing memory requirements compared to a uniform grid. In combination with a level-of-detail representation of the particle set, the perspective grid allows effectively reducing the amount of primitives to be processed at run-time. We demonstrate the quality and performance of our method for the rendering of fluid and gas dynamics SPH simulations consisting of many millions of particles.
Efficient geometric rectification techniques for spectral analysis algorithm
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Pang, S. S.; Curlander, J. C.
1992-01-01
The spectral analysis algorithm is a viable technique for processing synthetic aperture radar (SAR) data at near-real-time throughput rates by trading off image resolution. One major challenge of the spectral analysis algorithm is that the output image, often referred to as the range-Doppler image, is represented on iso-range and iso-Doppler lines, a curved grid format. This phenomenon is known as the fan-shape effect. Therefore, resampling is required to convert the range-Doppler image into a rectangular grid format before the individual images can be overlaid to form seamless multi-look strip imagery. An efficient algorithm for geometric rectification of the range-Doppler image is presented. The proposed algorithm, realized in two one-dimensional resampling steps, takes into consideration the fan-shape phenomenon of the range-Doppler image as well as the high squint angle and updates of the cross-track and along-track Doppler parameters. No ground reference points are required.
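A generic sketch of a two-pass, one-dimensional resampling onto a regular grid is given below; it assumes an approximately separable mapping and monotonically increasing coordinates along each line and column, and it is not the authors' exact rectification algorithm.

```python
# Two-pass 1-D resampling: resample each range line, then each azimuth column,
# onto a regular output grid using linear interpolation.
import numpy as np

def resample_two_pass(image, range_coords, azimuth_coords, out_range, out_azimuth):
    """image[i, j] is sampled at (azimuth_coords[i, j], range_coords[i, j]);
    coordinates are assumed monotonic along each row/column."""
    n_rows, _ = image.shape
    # Pass 1: resample every row onto the regular range grid.
    pass1 = np.vstack([np.interp(out_range, range_coords[i], image[i])
                       for i in range(n_rows)])
    az1 = np.vstack([np.interp(out_range, range_coords[i], azimuth_coords[i])
                     for i in range(n_rows)])
    # Pass 2: resample every column onto the regular azimuth grid.
    out = np.vstack([np.interp(out_azimuth, az1[:, j], pass1[:, j])
                     for j in range(pass1.shape[1])]).T
    return out                      # rows: out_azimuth, columns: out_range
```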
Scanner imaging systems, aircraft
NASA Technical Reports Server (NTRS)
Ungar, S. G.
1982-01-01
The causes and effects of distortion in aircraft scanner data are reviewed, and an approach to reduce distortions by modelling the effect of aircraft motion on the scanner scene is discussed. With the advent of advanced satellite-borne scanner systems, the geometric and radiometric correction of aircraft scanner data has become increasingly important. Corrections are needed to reliably simulate observations obtained by such systems for purposes of evaluation. It is found that, if sufficient navigational information is available, aircraft scanner coordinates may be related very precisely to planimetric ground coordinates. However, the potential for a multivalued remapping transformation (i.e., scan lines crossing each other) adds an inherent uncertainty to any radiometric resampling scheme, which depends on the precise geometry of the scan and ground pattern.
A Bayesian sequential processor approach to spectroscopic portal system decisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sale, K; Candy, J; Breitfeller, E
The development of faster, more reliable techniques to detect radioactive contraband in a portal type scenario is an extremely important problem, especially in this era of constant terrorist threats. Towards this goal the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, physics and signal processing models and decision functions are discussed along with the first results of our research.
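The sequential idea can be sketched, under strong simplifying assumptions (exponential inter-arrival times, a single background rate and a single candidate source rate, none of which come from the report), as an event-by-event posterior update with an early stopping rule.

```python
# Illustrative sketch only (not the report's physics model): update, event by
# event, the posterior that a source is present given inter-arrival times of
# detector counts, and stop as soon as the posterior crosses a threshold.
import numpy as np

def sequential_detector(arrival_gaps, rate_bkg, rate_src, prior=0.5, threshold=0.99):
    """arrival_gaps: times between successive counts; exponential likelihoods
    with rate 'rate_bkg' (background only) vs 'rate_bkg + rate_src' (source)."""
    log_odds = np.log(prior / (1.0 - prior))
    lam0, lam1 = rate_bkg, rate_bkg + rate_src
    posterior = prior
    for n, dt in enumerate(arrival_gaps, start=1):
        # log likelihood ratio of one exponential inter-arrival time
        log_odds += np.log(lam1 / lam0) - (lam1 - lam0) * dt
        posterior = 1.0 / (1.0 + np.exp(-log_odds))
        if posterior >= threshold:
            return n, posterior          # detection as soon as justified
    return None, posterior               # no decision within the data

rng = np.random.default_rng(5)
gaps = rng.exponential(1.0 / 12.0, size=200)   # hypothetical counts at 12 cps
print(sequential_detector(gaps, rate_bkg=5.0, rate_src=7.0))
```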
The distribution of individual cabinet positions in coalition governments: A sequential approach
Meyer, Thomas M.; Müller, Wolfgang C.
2015-01-01
Multiparty government in parliamentary democracies entails bargaining over the payoffs of government participation, in particular the allocation of cabinet positions. While most of the literature deals with the numerical distribution of cabinet seats among government parties, this article explores the distribution of individual portfolios. It argues that coalition negotiations are sequential choice processes that begin with the allocation of those portfolios most important to the bargaining parties. This induces conditionality in the bargaining process as choices of individual cabinet positions are not independent of each other. Linking this sequential logic with party preferences for individual cabinet positions, the authors of the article study the allocation of individual portfolios for 146 coalition governments in Western and Central Eastern Europe. The results suggest that a sequential logic in the bargaining process results in better predictions than assuming mutual independence in the distribution of individual portfolios. PMID:27546952
NASA Astrophysics Data System (ADS)
Kim, Young-Rok; Park, Eunseo; Choi, Eun-Jung; Park, Sang-Young; Park, Chandeok; Lim, Hyung-Chul
2014-09-01
In this study, a genetic resampling (GRS) approach is utilized for precise orbit determination (POD) using a batch filter based on particle filtering (PF). Two genetic operations, arithmetic crossover and residual mutation, are used for GRS in the batch filter based on PF (PF batch filter). For POD, the Laser-ranging Precise Orbit Determination System (LPODS) and satellite laser ranging (SLR) observations of the CHAMP satellite are used. Monte Carlo trials for POD are performed one hundred times. The characteristics of the POD results from the PF batch filter with GRS are compared with those of a PF batch filter with minimum residual resampling (MRRS). The post-fit residual, the 3D error from external orbit comparison, and the POD repeatability are analyzed for orbit quality assessment. The POD results are externally checked against NASA JPL’s orbits obtained using entirely different software, measurements, and techniques. For post-fit residuals and 3D errors, both MRRS and GRS give accurate estimation results, with mean root mean square (RMS) values at a level of 5 cm and 10-13 cm, respectively. The mean radial orbit errors of both methods are at a level of 5 cm. For POD repeatability, represented as the standard deviations of post-fit residuals and 3D errors over repeated PODs, GRS yields 25% and 13% more robust estimation results than MRRS for the post-fit residual and the 3D error, respectively. This study shows that the PF batch filter with the GRS approach using genetic operations is superior to the PF batch filter with MRRS in terms of robustness in POD with SLR observations.
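A hedged sketch of a genetic resampling step for a particle filter is shown below: standard multinomial resampling followed by arithmetic crossover of random particle pairs and a small residual mutation; the crossover fraction and mutation scale are illustrative, not the study's settings.

```python
# Genetic-resampling sketch: resample, blend random pairs (arithmetic
# crossover), then perturb with noise scaled to the particle spread
# (residual mutation).
import numpy as np

rng = np.random.default_rng(6)

def genetic_resample(particles, weights, crossover_frac=0.5, mutation_scale=1e-3):
    """particles: (n, state_dim) array; weights: (n,) array of importance weights."""
    n = particles.shape[0]
    w = weights / weights.sum()
    idx = rng.choice(n, size=n, p=w)                 # multinomial resampling
    new = particles[idx].copy()
    # Arithmetic crossover: blend random pairs of resampled particles.
    n_cross = int(crossover_frac * n) // 2 * 2
    pairs = rng.permutation(n)[:n_cross].reshape(-1, 2)
    alpha = rng.uniform(size=(pairs.shape[0], 1))
    a, b = new[pairs[:, 0]], new[pairs[:, 1]]
    new[pairs[:, 0]] = alpha * a + (1.0 - alpha) * b
    new[pairs[:, 1]] = alpha * b + (1.0 - alpha) * a
    # Residual mutation: small perturbation scaled to the particle spread.
    new += mutation_scale * new.std(axis=0) * rng.normal(size=new.shape)
    return new, np.full(n, 1.0 / n)
```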
Jodice, Patrick G.R.; Garman, S.L.; Collopy, Michael W.
2001-01-01
Marbled Murrelets (Brachyramphus marmoratus) are threatened seabirds that nest in coastal old-growth coniferous forests throughout much of their breeding range. Currently, observer-based audio-visual surveys are conducted at inland forest sites during the breeding season primarily to determine nesting distribution and breeding status and are being used to estimate temporal or spatial trends in murrelet detections. Our goal was to assess the feasibility of using audio-visual survey data for such monitoring. We used an intensive field-based survey effort to record daily murrelet detections at seven survey stations in the Oregon Coast Range. We then used computer-aided resampling techniques to assess the effectiveness of twelve survey strategies with varying scheduling and a sampling intensity of 4-14 surveys per breeding season to estimate known means and SDs of murrelet detections. Most survey strategies we tested failed to provide estimates of detection means and SDs that were within ±20% of actual means and SDs. Estimates of daily detections were, however, frequently estimated to within ±50% of field data with sampling efforts of 14 days/breeding season. Additional resampling analyses with statistically generated detection data indicated that the temporal variability in detection data had a great effect on the reliability of the mean and SD estimates calculated from the twelve survey strategies, while the value of the mean had little effect. Effectiveness at estimating multi-year trends in detection data was similarly poor, indicating that audio-visual surveys might be reliably used to estimate annual declines in murrelet detections of the order of 50% per year.
Estimation of Rainfall Sampling Uncertainty: A Comparison of Two Diverse Approaches
NASA Technical Reports Server (NTRS)
Steiner, Matthias; Zhang, Yu; Baeck, Mary Lynn; Wood, Eric F.; Smith, James A.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)
2002-01-01
The spatial and temporal intermittence of rainfall causes the averages of satellite observations of rain rate to differ from the "true" average rain rate over any given area and time period, even if the satellite observations are perfectly accurate. The difference between satellite averages based on occasional observation by satellite systems and the continuous-time average of rain rate is referred to as sampling error. In this study, rms sampling error estimates are obtained for average rain rates over boxes 100 km, 200 km, and 500 km on a side, for averaging periods of 1 day, 5 days, and 30 days. The study uses a multi-year, merged radar data product provided by Weather Services International Corp. at a resolution of 2 km in space and 15 min in time, over an area of the central U.S. extending from 35N to 45N in latitude and 100W to 80W in longitude. The intervals between satellite observations are assumed to be equal, and similar in size to what present and future satellite systems are able to provide (from 1 h to 12 h). The sampling error estimates are obtained using a resampling method called "resampling by shifts," and are compared to sampling error estimates proposed by Bell based on earlier work by Laughlin. The resampling estimates are found to scale with areal size and time period as the theory predicts. The dependence on average rain rate and time interval between observations is also similar to what the simple theory suggests.
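The "resampling by shifts" idea can be sketched as follows for a single rain-rate series (box averaging in space is omitted); the synthetic record and the revisit interval are illustrative.

```python
# Subsample a continuous rain-rate record at a fixed revisit interval, once for
# every possible phase shift, and take the rms difference from the full average
# as the sampling-error estimate.
import numpy as np

def sampling_error_by_shifts(rain_rate, step):
    """rain_rate: rain-rate series at the native resolution (e.g. 15 min);
    step: revisit interval in native time steps (e.g. 3 h -> 12)."""
    true_mean = rain_rate.mean()
    shifted_means = np.array([rain_rate[s::step].mean() for s in range(step)])
    return np.sqrt(np.mean((shifted_means - true_mean) ** 2))

rng = np.random.default_rng(7)
rain = rng.gamma(0.2, 5.0, size=30 * 96)          # hypothetical 30 days at 15-min steps
print(sampling_error_by_shifts(rain, step=12))    # 3-hourly sampling
```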
One-shot estimate of MRMC variance: AUC.
Gallas, Brandon D
2006-03-01
One popular study design for estimating the area under the receiver operating characteristic curve (AUC) is the one in which a set of readers reads a set of cases: a fully crossed design in which every reader reads every case. The variability of the subsequent reader-averaged AUC has two sources: the multiple readers and the multiple cases (MRMC). In this article, we present a nonparametric estimate for the variance of the reader-averaged AUC that is unbiased and does not use resampling tools. The one-shot estimate is based on the MRMC variance derived by the mechanistic approach of Barrett et al. (2005), as well as the nonparametric variance of a single-reader AUC derived in the literature on U statistics. We investigate the bias and variance properties of the one-shot estimate through a set of Monte Carlo simulations with simulated model observers and images. The different simulation configurations vary numbers of readers and cases, amounts of image noise and internal noise, as well as how the readers are constructed. We compare the one-shot estimate to a method that uses the jackknife resampling technique with an analysis of variance model at its foundation (Dorfman et al. 1992). The name one-shot highlights that resampling is not used. The one-shot and jackknife estimators behave similarly, with the one-shot being marginally more efficient when the number of cases is small. We have derived a one-shot estimate of the MRMC variance of AUC that is based on a probabilistic foundation with limited assumptions, is unbiased, and compares favorably to an established estimate.
Maffei, D F; Sant'Ana, A S; Monteiro, G; Schaffner, D W; Franco, B D G M
2016-06-01
This study evaluated the impact of sodium dichloroisocyanurate (5, 10, 20, 30, 40, 50 and 250 mg l⁻¹) in wash water on transfer of Salmonella Typhimurium from contaminated lettuce to wash water and then to other noncontaminated lettuces washed sequentially in the same water. Experiments were designed mimicking the conditions commonly seen in minimally processed vegetable (MPV) processing plants in Brazil. The scenarios were as follows: (1) Washing one inoculated lettuce portion in nonchlorinated water, followed by washing 10 noninoculated portions sequentially. (2) Washing one inoculated lettuce portion in chlorinated water followed by washing five noninoculated portions sequentially. (3) Washing five inoculated lettuce portions in chlorinated water sequentially, followed by washing five noninoculated portions sequentially. (4) Washing five noninoculated lettuce portions in chlorinated water sequentially, followed by washing five inoculated portions sequentially and then by washing five noninoculated portions sequentially in the same water. Salm. Typhimurium transfer from inoculated lettuce to wash water and further dissemination to noninoculated lettuces occurred when nonchlorinated water was used (scenario 1). When chlorinated water was used (scenarios 2, 3 and 4), no measurable Salm. Typhimurium transfer occurred if the sanitizer was ≥10 mg l⁻¹. Use of sanitizers in correct concentrations is important to minimize the risk of microbial transfer during MPV washing. In this study, the impact of sodium dichloroisocyanurate in the wash water on transfer of Salmonella Typhimurium from inoculated lettuce to wash water and then to other noninoculated lettuces washed sequentially in the same water was evaluated. The use of chlorinated water, at concentrations above 10 mg l⁻¹, effectively prevented Salm. Typhimurium transfer under several different washing scenarios. Conversely, when nonchlorinated water was used, Salm. Typhimurium transfer occurred in up to at least 10 noninoculated batches of lettuce washed sequentially in the same water. © 2016 The Society for Applied Microbiology.
NASA Astrophysics Data System (ADS)
Huang, Huan; Baddour, Natalie; Liang, Ming
2018-02-01
In normal operation, bearings often run under time-varying rotational speed conditions. Under such circumstances, the bearing vibrational signal is non-stationary, which renders ineffective the techniques used for bearing fault diagnosis under constant running conditions. One of the conventional methods of bearing fault diagnosis under time-varying speed conditions is resampling the non-stationary signal to a stationary signal via order tracking with the measured variable speed. With the resampled signal, the methods available for constant condition cases are thus applicable. However, the accuracy of the order tracking is often inadequate and the time-varying speed is sometimes not measurable. Thus, resampling-free methods are of interest for bearing fault diagnosis under time-varying rotational speed for use without tachometers. With the development of time-frequency analysis, the time-varying fault character manifests as curves in the time-frequency domain. By extracting the Instantaneous Fault Characteristic Frequency (IFCF) from the Time-Frequency Representation (TFR) and converting the IFCF, its harmonics, and the Instantaneous Shaft Rotational Frequency (ISRF) into straight lines, the bearing fault can be detected and diagnosed without resampling. However, so far, the extraction of the IFCF for bearing fault diagnosis has mostly been based on the assumption that at each moment the IFCF has the highest amplitude in the TFR, which is not always true. Hence, a more reliable T-F curve extraction approach should be investigated. Moreover, if the T-F curves, including the IFCF, its harmonics, and the ISRF, can all be extracted from the TFR directly, no extra processing is needed for fault diagnosis. Therefore, this paper proposes an algorithm for multiple T-F curve extraction from the TFR based on fast path optimization, which is more reliable for T-F curve extraction. Then, a new procedure for bearing fault diagnosis under unknown time-varying speed conditions is developed based on the proposed algorithm and a new fault diagnosis strategy. The average curve-to-curve ratios are utilized to describe the relationship of the extracted curves, and fault diagnosis can then be achieved by comparing the ratios to the fault characteristic coefficients. The effectiveness of the proposed method is validated by simulated and experimental signals.
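A generic illustration of ridge extraction by path optimization over a time-frequency representation is sketched below using simple dynamic programming; it conveys the idea of trading amplitude against frequency jumps, but it is not the paper's fast multi-curve algorithm.

```python
# Extract one T-F curve by maximizing accumulated TFR amplitude while
# penalizing frequency jumps, solved with dynamic programming.
import numpy as np

def extract_ridge(tfr, jump_penalty=0.5):
    """tfr: 2-D array (n_freq x n_time) of TFR amplitudes."""
    n_freq, n_time = tfr.shape
    cost = np.full((n_freq, n_time), -np.inf)
    back = np.zeros((n_freq, n_time), dtype=int)
    cost[:, 0] = tfr[:, 0]
    freqs = np.arange(n_freq)
    for t in range(1, n_time):
        for f in range(n_freq):
            trans = cost[:, t - 1] - jump_penalty * np.abs(freqs - f)
            back[f, t] = np.argmax(trans)
            cost[f, t] = tfr[f, t] + trans[back[f, t]]
    # Backtrack the best path.
    ridge = np.empty(n_time, dtype=int)
    ridge[-1] = int(np.argmax(cost[:, -1]))
    for t in range(n_time - 1, 0, -1):
        ridge[t - 1] = back[ridge[t], t]
    return ridge          # frequency-bin index of the extracted curve per time step
```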
NASA Astrophysics Data System (ADS)
Zhang, Yu-Ying; Reiprich, Thomas H.; Schneider, Peter; Clerc, Nicolas; Merloni, Andrea; Schwope, Axel; Borm, Katharina; Andernach, Heinz; Caretta, César A.; Wu, Xiang-Ping
2017-03-01
We present the relation of X-ray luminosity versus dynamical mass for 63 nearby clusters of galaxies in a flux-limited sample, the HIghest X-ray FLUx Galaxy Cluster Sample (HIFLUGCS, consisting of 64 clusters). The luminosity measurements are obtained based on 1.3 Ms of clean XMM-Newton data and ROSAT pointed observations. The masses are estimated using optical spectroscopic redshifts of 13647 cluster galaxies in total. We classify clusters into disturbed and undisturbed based on a combination of the X-ray luminosity concentration and the offset between the brightest cluster galaxy and the X-ray flux-weighted center. Given sufficient numbers (i.e., ≥45) of member galaxies when the dynamical masses are computed, the luminosity versus mass relations agree between the disturbed and undisturbed clusters. The cool-core clusters still dominate the scatter in the luminosity versus mass relation even when a core-corrected X-ray luminosity is used, which indicates that the scatter of this scaling relation mainly reflects the structure formation history of the clusters. As shown by the clusters with only few spectroscopically confirmed members, the dynamical masses can be underestimated and thus lead to a biased scaling relation. To investigate the potential of spectroscopic surveys to follow up high-redshift galaxy clusters or groups observed in X-ray surveys for identification and mass calibration, we carried out Monte Carlo resampling of the cluster galaxy redshifts and calibrated the uncertainties of the redshift and dynamical mass estimates when only reduced numbers of galaxy redshifts per cluster are available. The resampling considers the SPIDERS and 4MOST configurations, designed for the follow-up of the eROSITA clusters, and was carried out for each cluster in the sample at the actual cluster redshift as well as at the assigned input cluster redshifts of 0.2, 0.4, 0.6, and 0.8. To follow up very distant clusters or groups, we also carried out the mass calibration based on the resampling with only ten redshifts per cluster, and the redshift calibration based on the resampling with only five and ten redshifts per cluster, respectively. Our results demonstrate the power of combining upcoming X-ray and optical spectroscopic surveys for mass calibration of clusters. The scatter in the dynamical mass estimates for the clusters with at least ten members is within 50%.
NASA Astrophysics Data System (ADS)
Brown, James D.; Wu, Limin; He, Minxue; Regonda, Satish; Lee, Haksu; Seo, Dong-Jun
2014-11-01
Retrospective forecasts of precipitation, temperature, and streamflow were generated with the Hydrologic Ensemble Forecast Service (HEFS) of the U.S. National Weather Service (NWS) for a 20-year period between 1979 and 1999. The hindcasts were produced for two basins in each of four River Forecast Centers (RFCs), namely the Arkansas-Red Basin RFC, the Colorado Basin RFC, the California-Nevada RFC, and the Middle Atlantic RFC. Precipitation and temperature forecasts were produced with the HEFS Meteorological Ensemble Forecast Processor (MEFP). Inputs to the MEFP comprised 'raw' precipitation and temperature forecasts from the frozen (circa 1997) version of the NWS Global Forecast System (GFS) and a climatological ensemble, which involved resampling historical observations in a moving window around the forecast valid date ('resampled climatology'). In both cases, the forecast horizon was 1-14 days. This paper outlines the hindcasting and verification strategy, and then focuses on the quality of the temperature and precipitation forecasts from the MEFP. A companion paper focuses on the quality of the streamflow forecasts from the HEFS. In general, the precipitation forecasts are more skillful than resampled climatology during the first week, but comprise little or no skill during the second week. In contrast, the temperature forecasts improve upon resampled climatology at all forecast lead times. However, there are notable differences among RFCs and for different seasons, aggregation periods and magnitudes of the observed and forecast variables, both for precipitation and temperature. For example, the MEFP-GFS precipitation forecasts show the highest correlations and greatest skill in the California Nevada RFC, particularly during the wet season (November-April). While generally reliable, the MEFP forecasts typically underestimate the largest observed precipitation amounts (a Type-II conditional bias). As a statistical technique, the MEFP cannot detect, and thus appropriately correct for, conditions that are undetected by the GFS. The calibration of the MEFP to provide reliable and skillful forecasts of a range of precipitation amounts (not only large amounts) is a secondary factor responsible for these Type-II conditional biases. Interpretation of the verification results leads to guidance on the expected performance and limitations of the MEFP, together with recommendations on future enhancements.
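The 'resampled climatology' baseline can be sketched as follows: for a given forecast valid date, historical observations within a moving day-of-year window across all years are pooled and resampled into an ensemble; the window width, member count, and data here are illustrative.

```python
# Build a climatological ensemble by resampling historical observations in a
# moving window around the forecast valid date.
import numpy as np
import pandas as pd

def resampled_climatology(obs, valid_date, half_window=7, n_members=50, seed=0):
    """obs: pandas Series of daily observations indexed by date."""
    rng = np.random.default_rng(seed)
    doy = obs.index.dayofyear
    target = pd.Timestamp(valid_date).dayofyear
    # circular distance in day-of-year to handle the year boundary
    dist = np.minimum(np.abs(doy - target), 365 - np.abs(doy - target))
    pool = obs[dist <= half_window].to_numpy()
    return rng.choice(pool, size=n_members, replace=True)

dates = pd.date_range("1979-01-01", "1998-12-31", freq="D")
precip = pd.Series(np.random.default_rng(8).gamma(0.3, 8.0, len(dates)), index=dates)
print(resampled_climatology(precip, "1999-06-15")[:5])
```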
NASA Astrophysics Data System (ADS)
Müller, H.; Haberlandt, U.
2018-01-01
Rainfall time series of high temporal resolution and spatial density are crucial for urban hydrology. The multiplicative random cascade model can be used for temporal rainfall disaggregation of daily data to generate such time series. Here, the uniform splitting approach with a branching number of 3 in the first disaggregation step is applied. To achieve a final resolution of 5 min, subsequent steps after disaggregation are necessary. Three modifications at different disaggregation levels are tested in this investigation (uniform splitting at Δt = 15 min, linear interpolation at Δt = 7.5 min and Δt = 3.75 min). Results are compared both with observations and with an often used approach based on the assumption that time steps of Δt = 5.625 min, which result if a branching number of 2 is applied throughout, can be treated as Δt = 5 min (called the 1280 min approach). Spatial consistence is implemented in the disaggregated time series using a resampling algorithm. In total, 24 recording stations in Lower Saxony, Northern Germany, with a 5 min resolution have been used for the validation of the disaggregation procedure. The urban-hydrological suitability is tested with an artificial combined sewer system of about 170 hectares. The results show that all three variations outperform the 1280 min approach regarding reproduction of wet spell duration, average intensity, fraction of dry intervals and lag-1 autocorrelation. Extreme values with durations of 5 min are also better represented. For durations of 1 h, all approaches show only slight deviations from the observed extremes. The applied resampling algorithm is capable of achieving sufficient spatial consistence. The effects on the urban hydrological simulations are significant. Without spatial consistence, flood volumes of manholes and combined sewer overflow are strongly underestimated. After resampling, results using disaggregated time series as input are in the range of those using observed time series. The best overall performance regarding rainfall statistics is obtained by the method in which the disaggregation process ends at time steps of 7.5 min duration, deriving the 5 min time steps by linear interpolation. With subsequent resampling, this method leads to a good representation of manhole flooding and combined sewer overflow volume in hydrological simulations and outperforms the 1280 min approach.
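A simplified sketch of uniform splitting with a branching number of 3 is given below; the Dirichlet weight generator stands in for the calibrated cascade generator of the study and all parameters are illustrative.

```python
# Multiplicative cascade with branching number 3: each amount is split into
# 3 sub-interval amounts with random weights that sum to 1.
import numpy as np

rng = np.random.default_rng(9)

def disaggregate(total, levels):
    """Split one rainfall total into 3**levels sub-interval amounts."""
    amounts = np.array([total])
    for _ in range(levels):
        weights = rng.dirichlet(alpha=[0.7, 0.7, 0.7], size=amounts.size)
        amounts = (amounts[:, None] * weights).ravel()   # uniform 3-way splitting
    return amounts

daily_total = 24.0                           # mm on one day
fine = disaggregate(daily_total, levels=4)   # 81 intervals of ~17.8 min each
print(fine.sum(), fine.size)                 # mass is conserved
```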
Liu, Rong
2017-01-01
Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. Then we applied a decision-making model, the sequential probability ratio test (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this proposed classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than those with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, as compared with 82.3% accuracy for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
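The SPRT accumulation described above can be sketched as follows, with generic Gaussian class-conditional evidence standing in for the power-projective features of the study; the thresholds follow Wald's approximations for the chosen error rates.

```python
# Accumulate the log-likelihood ratio sample by sample until it crosses one of
# two thresholds; undecided if the evidence runs out first.
import numpy as np
from scipy import stats

def sprt(samples, mean0, mean1, sigma, alpha=0.05, beta=0.05):
    upper = np.log((1.0 - beta) / alpha)      # accept H1 when crossed
    lower = np.log(beta / (1.0 - alpha))      # accept H0 when crossed
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += stats.norm.logpdf(x, mean1, sigma) - stats.norm.logpdf(x, mean0, sigma)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

rng = np.random.default_rng(10)
trial = rng.normal(0.4, 1.0, size=100)        # hypothetical evidence stream
print(sprt(trial, mean0=0.0, mean1=0.5, sigma=1.0))
```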
Perceptual Grouping Affects Pitch Judgments Across Time and Frequency
Borchert, Elizabeth M. O.; Micheyl, Christophe; Oxenham, Andrew J.
2010-01-01
Pitch, the perceptual correlate of fundamental frequency (F0), plays an important role in speech, music and animal vocalizations. Changes in F0 over time help define musical melodies and speech prosody, while comparisons of simultaneous F0 are important for musical harmony, and for segregating competing sound sources. This study compared listeners’ ability to detect differences in F0 between pairs of sequential or simultaneous tones that were filtered into separate, non-overlapping spectral regions. The timbre differences induced by filtering led to poor F0 discrimination in the sequential, but not the simultaneous, conditions. Temporal overlap of the two tones was not sufficient to produce good performance; instead performance appeared to depend on the two tones being integrated into the same perceptual object. The results confirm the difficulty of comparing the pitches of sequential sounds with different timbres and suggest that, for simultaneous sounds, pitch differences may be detected through a decrease in perceptual fusion rather than an explicit coding and comparison of the underlying F0s. PMID:21077719
Mainela-Arnold, Elina; Evans, Julia L.
2014-01-01
This study tested the predictions of the procedural deficit hypothesis by investigating the relationship between sequential statistical learning and two aspects of lexical ability, lexical-phonological and lexical-semantic, in children with and without specific language impairment (SLI). Participants included 40 children (ages 8;5–12;3), 20 children with SLI and 20 with typical development. Children completed Saffran’s statistical word segmentation task, a lexical-phonological access task (gating task), and a word definition task. Poor statistical learners were also poor at managing lexical-phonological competition during the gating task. However, statistical learning was not a significant predictor of semantic richness in word definitions. The ability to track statistical sequential regularities may be important for learning the inherently sequential structure of lexical-phonology, but not as important for learning lexical-semantic knowledge. Consistent with the procedural/declarative memory distinction, the brain networks associated with the two types of lexical learning are likely to have different learning properties. PMID:23425593
Silva, Ivair R
2018-01-15
Type I error probability spending functions are commonly used for designing sequential analysis of binomial data in clinical trials, and they are also quickly emerging for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that, for clinical trials, when the null hypothesis is not rejected, it is still important to minimize the sample size. In post-market drug and vaccine safety surveillance, by contrast, that is not important. In post-market safety surveillance, especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to be minimized is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is more suitable for post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis. Copyright © 2017 John Wiley & Sons, Ltd.
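The contrast between convex and concave spending shapes can be illustrated with the power family alpha(t) = alpha * t^rho (rho > 1 convex, rho < 1 concave); this family is only one common choice and is used here purely for illustration, not as the paper's recommended function.

```python
# Cumulative alpha spent and per-look increments for a power-family spending
# function evaluated at equally spaced information fractions.
import numpy as np

def power_spending(alpha, rho, n_looks):
    t = np.arange(1, n_looks + 1) / n_looks          # information fractions
    cumulative = alpha * t ** rho                    # cumulative alpha spent
    increments = np.diff(np.concatenate(([0.0], cumulative)))
    return cumulative, increments

alpha = 0.05
convex_cum, convex_inc = power_spending(alpha, rho=3.0, n_looks=10)    # spends alpha late
concave_cum, concave_inc = power_spending(alpha, rho=0.5, n_looks=10)  # spends alpha early
print(np.round(convex_inc, 4))
print(np.round(concave_inc, 4))
```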
EEG Classification with a Sequential Decision-Making Method in Motor Imagery BCI.
Liu, Rong; Wang, Yongxuan; Newman, Geoffrey I; Thakor, Nitish V; Ying, Sarah
2017-12-01
To develop a subject-specific classifier to recognize mental states fast and reliably is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this paper, a sequential decision-making strategy is explored in conjunction with an optimal wavelet analysis for EEG classification. The subject-specific wavelet parameters based on a grid-search method were first developed to determine the evidence accumulation curve for the sequential classifier. Then we proposed a new method to set the two constrained thresholds in the sequential probability ratio test (SPRT) based on the cumulative curve and a desired expected stopping time. As a result, it balances the decision time of each class, and we term it balanced threshold SPRT (BTSPRT). The properties of the method were illustrated on 14 subjects' recordings from offline and online tests. Results showed the average maximum accuracy of the proposed method to be 83.4% and the average decision time of 2.77 s, when compared with 79.2% accuracy and a decision time of 3.01 s for the sequential Bayesian (SB) method. The BTSPRT method not only improves the classification accuracy and decision speed compared with the other nonsequential or SB methods, but also provides an explicit relationship between stopping time, thresholds and error, which is important for balancing the speed-accuracy tradeoff. These results suggest that BTSPRT would be useful in explicitly adjusting the tradeoff between rapid decision-making and error-free device control.
Veiga, Helena Perrut; Bianchini, Esther Mandelbaum Gonçalves
2012-01-01
To perform an integrative review of studies on sequential swallowing of liquids, characterizing the methodology of the studies and the most important findings in young and elderly adults. Review of the literature written in English and Portuguese in the PubMed, LILACS, SciELO and MEDLINE databases within the past twenty years, available in full, using the following terms: sequential swallowing, swallowing, dysphagia, cup, straw, in various combinations. Research articles with a methodological approach to the characterization of sequential liquid swallowing by young and/or elderly adults, regardless of health condition, were included, excluding studies involving only the esophageal phase. The following research indicators were applied: objectives, number and gender of participants; age group; amount of liquid offered; intake instruction; utensil used; methods; and main findings. Eighteen studies met the established criteria. The articles were categorized according to the sample characterization and the methodology regarding intake volume, utensil used, and types of exams. Most studies investigated only healthy individuals with no swallowing complaints. Subjects were given different instructions as to the intake of the full volume: in the usual manner, continually, or as rapidly as possible. The findings on the characterization of sequential swallowing were varied and are described in accordance with the objectives of each study. The review found great variability in the methodology employed to characterize sequential swallowing. Some findings are not comparable, and sequential swallowing is not assessed in most swallowing protocols, with no consensus on the influence of the utensil.
Horry, Ruth; Palmer, Matthew A; Brewer, Neil
2012-12-01
Although the sequential lineup has been proposed as a means of protecting innocent suspects from mistaken identification, little is known about the importance of various aspects of the procedure. One potentially important detail is that witnesses should not know how many people are in the lineup. This is sometimes achieved by backloading the lineup so that witnesses believe that the lineup includes more photographs than it actually does. This study aimed to investigate the effect of backloading on witness decision making. A large sample (N = 833) of community-dwelling adults viewed a live "culprit" and then saw a target-present or target-absent sequential lineup. All lineups included 6 individuals, but the participants were told that the lineup included 6 photographs (nonbackloaded condition) or that the lineup included 12 or 30 photographs (backloaded conditions). The suspect either appeared early (Position 2) or late (Position 6) in the lineup. Innocent suspects placed in Position 6 were chosen more frequently by participants in the nonbackloaded condition than in either backloaded condition. Additionally, when the lineup was not backloaded, foil identification rates increased from Positions 3 to 5, suggesting a gradually shifting response criterion. The results suggest that backloading encourages participants to adopt a more conservative response criterion, and it reduces or eliminates the tendency for the criterion to become more lenient over the course of the lineup. The results underscore the absolute importance of ensuring that witnesses who view sequential lineups are unaware of the number of individuals to be seen.
NASA Astrophysics Data System (ADS)
Wamser, Kyle
Hyperspectral imagery, and the corresponding ability to conduct analysis below the pixel level, has tremendous potential to aid in landcover monitoring. During large ecosystem restoration projects, monitoring specific aspects of the recovery over large and often inaccessible areas under constrained finances is a major challenge. The Civil Air Patrol's Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) system can provide hyperspectral data in most parts of the United States at relatively low cost. Although designed specifically for locating downed aircraft, the imagery holds the potential to identify specific aspects of landcover at far greater fidelity than traditional multispectral means. The goals of this research were to improve the use of ARCHER hyperspectral imagery to classify sub-canopy and open-area vegetation in coniferous forests located in the Southern Rockies and to determine how much fidelity might be lost when a baseline of 1 meter spatial resolution is resampled to 2 and 5 meter pixel sizes to simulate higher-altitude collection. Based on analysis comparing linear spectral unmixing with a traditional supervised classification, linear spectral unmixing proved to be statistically superior. More importantly, linear spectral unmixing provided additional sub-pixel information that was unavailable using other techniques. The second goal, determining fidelity loss as a function of spatial resolution, was more difficult to achieve because of how the data are represented. Furthermore, the 2 and 5 meter imagery were obtained by resampling the 1 meter imagery and therefore may not be representative of the quality of actual 2 or 5 meter imagery. Ultimately, the information derived from this research may be useful in better utilizing hyperspectral imagery to conduct forest monitoring and assessment.
NASA Astrophysics Data System (ADS)
Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo
2017-08-01
The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven an efficient tool for enhancing periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period by updating the iterative period after every iteration. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or the choice of the order of shift, IMCKD is more efficient and has higher robustness. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency spectrum analysis and envelope spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
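The envelope-autocorrelation period estimation described above can be sketched as follows; the simulated signal and the peak-selection rule are illustrative, not the paper's exact procedure.

```python
# Estimate the impulse repetition period from the autocorrelation of the
# Hilbert envelope of a vibration signal.
import numpy as np
from scipy.signal import hilbert, find_peaks

def estimate_period(signal, fs, min_period=0.002):
    envelope = np.abs(hilbert(signal))
    envelope -= envelope.mean()
    acf = np.correlate(envelope, envelope, mode="full")[envelope.size - 1:]
    acf /= acf[0]
    start = int(min_period * fs)                 # skip the zero-lag region
    peaks, _ = find_peaks(acf[start:])
    if peaks.size == 0:
        return None
    lag = start + peaks[np.argmax(acf[start:][peaks])]
    return lag / fs                              # period in seconds

fs = 10000
t = np.arange(0, 0.5, 1.0 / fs)
bursts = np.sin(2 * np.pi * 3000 * t) * (np.sin(2 * np.pi * 100 * t) > 0.99)
sig = bursts + 0.1 * np.random.default_rng(11).normal(size=t.size)
print(estimate_period(sig, fs))                  # expect roughly 0.01 s
```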
A non-parametric peak calling algorithm for DamID-Seq.
Li, Renhua; Hempel, Leonie U; Jiang, Tingbo
2015-01-01
Protein-DNA interactions play a significant role in gene regulation and expression. In order to identify transcription factor binding sites (TFBS) of doublesex (DSX), an important transcription factor in sex determination, we applied the DNA adenine methylation identification (DamID) technology to the fat body tissue of Drosophila, followed by deep sequencing (DamID-Seq). One feature of DamID-Seq data is that induced adenine methylation signals are not assured to be symmetrically distributed at TFBS, which renders the existing peak calling algorithms for ChIP-Seq, including SPP and MACS, inappropriate for DamID-Seq data. This challenged us to develop a new algorithm for peak calling. A challenge in peak calling based on sequence data is estimating the averaged behavior of background signals. We applied a bootstrap resampling method to short sequence reads in the control (Dam only). After data quality checks and mapping reads to a reference genome, the peak-calling procedure comprises the following steps: 1) read resampling; 2) read scaling (normalization) and computing signal-to-noise fold changes; 3) filtering; 4) calling peaks based on a statistically significant threshold. This is a non-parametric method for peak calling (NPPC). We also used irreproducible discovery rate (IDR) analysis, as well as ChIP-Seq data, to compare the peaks called by the NPPC. We identified approximately 6,000 peaks for DSX, which point to 1,225 genes related to the fat body tissue difference between female and male Drosophila. Statistical evidence from IDR analysis indicated that these peaks are reproducible across biological replicates. In addition, these peaks are comparable to those identified by use of ChIP-Seq on S2 cells, in terms of peak number, location, and peak width.
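Steps 1-3 of the procedure can be sketched, in simplified form, as a bootstrap of control-bin counts to estimate the averaged background followed by a fold-change filter; the bin counts, cutoffs, and synthetic data are illustrative, not those of the study.

```python
# Bootstrap the control (Dam-only) bin counts to estimate the averaged
# background, scale to library depth, and keep bins whose fold change over
# background exceeds a cutoff.
import numpy as np

rng = np.random.default_rng(12)

def candidate_bins(treatment, control, n_boot=1000, fold_cutoff=4.0):
    # scale control to the sequencing depth of the treatment library
    scale = treatment.sum() / control.sum()
    boot_means = np.array([rng.choice(control, size=control.size, replace=True).mean()
                           for _ in range(n_boot)])
    background = scale * boot_means.mean() + 1e-9        # averaged background level
    fold = treatment / background
    return np.where(fold >= fold_cutoff)[0]              # bins passing the filter

treat = rng.poisson(3.0, size=10000)
treat[5000:5010] += 40                                    # a hypothetical enriched region
ctrl = rng.poisson(3.0, size=10000)
print(candidate_bins(treat, ctrl))
```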
Ali, Sajid; Soubeyrand, Samuel; Gladieux, Pierre; Giraud, Tatiana; Leconte, Marc; Gautier, Angélique; Mboup, Mamadou; Chen, Wanquan; de Vallavieille-Pope, Claude; Enjalbert, Jérôme
2016-07-01
Inferring reproductive and demographic parameters of populations is crucial to our understanding of species ecology and evolutionary potential but can be challenging, especially in partially clonal organisms. Here, we describe a new and accurate method, cloncase, for estimating both the rate of sexual vs. asexual reproduction and the effective population size, based on the frequency of clonemate resampling across generations. Simulations showed that our method provides reliable estimates of sex frequency and effective population size for a wide range of parameters. The cloncase method was applied to Puccinia striiformis f.sp. tritici, a fungal pathogen causing stripe/yellow rust, an important wheat disease. This fungus is highly clonal in Europe but has been suggested to recombine in Asia. Using two temporally spaced samples of P. striiformis f.sp. tritici in China, the estimated sex frequency was 75% (i.e. three-quarter of individuals being sexually derived during the yearly sexual cycle), indicating strong contribution of sexual reproduction to the life cycle of the pathogen in this area. The inferred effective population size of this partially clonal organism (Nc = 998) was in good agreement with estimates obtained using methods based on temporal variations in allelic frequencies. The cloncase estimator presented herein is the first method allowing accurate inference of both sex frequency and effective population size from population data without knowledge of recombination or mutation rates. cloncase can be applied to population genetic data from any organism with cyclical parthenogenesis and should in particular be very useful for improving our understanding of pest and microbial population biology. © 2016 John Wiley & Sons Ltd.
The influence of sampling interval on the accuracy of trail impact assessment
Leung, Y.-F.; Marion, J.L.
1999-01-01
Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.
LANDSAT-D investigations in snow hydrology
NASA Technical Reports Server (NTRS)
Dozier, J. (Principal Investigator)
1984-01-01
Thematic mapper radiometric characteristics, snow/cloud reflectance, and atmospheric correction are discussed with application to determining the spectral albedo of snow. The geometric characteristics of TM and digital elevation data are examined. The geometric transformations and resampling required to coregister these data are discussed.
Effects of neostriatal 6-OHDA lesion on performance in a rat sequential reaction time task.
Domenger, D; Schwarting, R K W
2008-10-31
Work in humans and monkeys has provided evidence that the basal ganglia, and the neurotransmitter dopamine therein, play an important role in sequential learning and performance. Compared to primates, experimental work in rodents is rather sparse, largely due to the fact that tasks comparable to the human ones, especially serial reaction time tasks (SRTT), had been lacking until recently. We have developed a rat model of the SRTT, which allows us to study neural correlates of sequential performance and motor sequence execution. Here, we report the effects of dopaminergic neostriatal lesions, performed using bilateral 6-hydroxydopamine injections, on the performance of well-trained rats tested in our SRTT. Sequential behavior was measured in two ways: first, the effects of small violations of otherwise well-trained sequences were examined as a measure of attention and automation; second, sequential versus random performance was compared as a measure of sequential learning. Neurochemically, the lesions led to sub-total dopamine depletions in the neostriatum, which ranged around 60% in the lateral and around 40% in the medial neostriatum. These lesions led to a general instrumental impairment in terms of reduced speed (response latencies) and response rate, and these deficits were correlated with the degree of striatal dopamine loss. Furthermore, the violation test indicated that the lesion group produced less automated responses. The comparison of random versus sequential responding showed that the lesion group did not retain its superior sequential performance in terms of speed, whereas it did in terms of accuracy. Also, rats with lesions did not improve further in overall performance compared with pre-lesion values, whereas controls did. These results support previous findings that neostriatal dopamine is involved in instrumental behaviour in general. They also show that these lesions are not sufficient to completely abolish sequential performance, at least when it is acquired before the lesion, as tested here.
Lörincz, András; Póczos, Barnabás
2003-06-01
In optimization, the dimensionality of the problem may severely, sometimes exponentially, increase optimization time. Parametric function approximators (FAPPs) have been suggested to overcome this problem. Here, a novel FAPP, cost component analysis (CCA), is described. In CCA, the search space is resampled according to the Boltzmann distribution generated by the energy landscape. That is, CCA converts the optimization problem into density estimation. The structure of the induced density is then searched by independent component analysis (ICA). The advantage of CCA is that each independent ICA component can be optimized separately. In turn, (i) CCA intends to partition the original problem into subproblems and (ii) separating (partitioning) the original optimization problem into subproblems may serve interpretation. Most importantly, (iii) CCA may give rise to high gains in optimization time. Numerical simulations illustrate the working of the algorithm.
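A schematic of the resample-then-ICA idea described above, not the published CCA implementation: the energy landscape, the temperature `T`, and the sample sizes are placeholders, and scikit-learn's FastICA stands in for whatever ICA routine the authors used.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)

def energy(x):
    # placeholder multimodal landscape (Rastrigin-like); a real problem supplies its own cost
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x), axis=1)

# 1) sample the search space
X = rng.uniform(-5.12, 5.12, size=(20000, 4))
E = energy(X)

# 2) resample according to the Boltzmann distribution exp(-E/T)
T = 5.0
w = np.exp(-(E - E.min()) / T)
idx = rng.choice(len(X), size=5000, replace=True, p=w / w.sum())
X_boltz = X[idx]

# 3) search the structure of the induced density with ICA; each independent
#    component could then be optimised separately, which is the point of CCA
ica = FastICA(n_components=4, random_state=0)
S = ica.fit_transform(X_boltz)
print("mixing matrix shape:", ica.mixing_.shape)
print("component-wise sample means:", S.mean(axis=0).round(3))
```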
Determination of Time Dependent Virus Inactivation Rates
NASA Astrophysics Data System (ADS)
Chrysikopoulos, C. V.; Vogler, E. T.
2003-12-01
A methodology is developed for estimating temporally variable virus inactivation rate coefficients from experimental virus inactivation data. The methodology consists of a technique for slope estimation of normalized virus inactivation data in conjunction with a resampling parameter estimation procedure. The slope estimation technique is based on a relatively flexible geostatistical method known as universal kriging. Drift coefficients are obtained by nonlinear fitting of bootstrap samples and the corresponding confidence intervals are obtained by bootstrap percentiles. The proposed methodology yields more accurate time dependent virus inactivation rate coefficients than those estimated by fitting virus inactivation data to a first-order inactivation model. The methodology is successfully applied to a set of poliovirus batch inactivation data. Furthermore, the importance of accurate inactivation rate coefficient determination on virus transport in water saturated porous media is demonstrated with model simulations.
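The universal-kriging slope estimation is not reproduced here, but the bootstrap-percentile step can be sketched generically: resample the (time, concentration) pairs, refit an assumed time-dependent inactivation model, and take percentiles of the fitted coefficients. The two-parameter model and all numbers below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# synthetic normalised virus concentrations C/C0 over time (days)
t = np.linspace(0, 30, 16)
y = np.exp(-(0.25 * t - 0.003 * t**2)) * np.exp(rng.normal(0, 0.05, t.size))

def model(t, a, b):
    # assumed time-dependent inactivation: ln(C/C0) = -(a*t - b*t^2)
    return np.exp(-(a * t - b * t**2))

def fit(tt, yy):
    popt, _ = curve_fit(model, tt, yy, p0=(0.2, 0.0), maxfev=10000)
    return popt

n_boot = 500
boot = np.empty((n_boot, 2))
for k in range(n_boot):
    idx = rng.integers(0, t.size, t.size)   # resample (t_i, y_i) pairs with replacement
    boot[k] = fit(t[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)  # bootstrap percentile intervals
print("a: %.3f, 95%% CI (%.3f, %.3f)" % (fit(t, y)[0], lo[0], hi[0]))
print("b: %.4f, 95%% CI (%.4f, %.4f)" % (fit(t, y)[1], lo[1], hi[1]))
```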
Ion channel gene expression predicts survival in glioma patients
Wang, Rong; Gurguis, Christopher I.; Gu, Wanjun; Ko, Eun A; Lim, Inja; Bang, Hyoweon; Zhou, Tong; Ko, Jae-Hong
2015-01-01
Ion channels are important regulators in cell proliferation, migration, and apoptosis. The malfunction and/or aberrant expression of ion channels may disrupt these important biological processes and influence cancer progression. In this study, we investigate the expression pattern of ion channel genes in glioma. We designate 18 ion channel genes that are differentially expressed in high-grade glioma as a prognostic molecular signature. This ion channel gene expression-based signature predicts glioma outcome in three independent validation cohorts. Interestingly, 16 of these 18 genes were down-regulated in high-grade glioma. The signature is independent of traditional clinical, molecular, and histological factors. Resampling tests indicate that the prognostic power of the signature outperforms random gene sets selected from the human genome in all validation cohorts. More importantly, the signature performs better than random gene signatures selected from glioma-associated genes in two out of three validation datasets. This study implicates ion channels in brain cancer, thus expanding on knowledge of their roles in other cancers. Individualized profiling of ion channel gene expression serves as a superior and independent prognostic tool for glioma patients. PMID:26235283
NASA Astrophysics Data System (ADS)
Han, Tao; Chen, Lingyun; Lai, Chao-Jen; Liu, Xinming; Shen, Youtao; Zhong, Yuncheng; Ge, Shuaiping; Yi, Ying; Wang, Tianpeng; Shaw, Chris C.
2009-02-01
Images of mastectomy breast specimens have been acquired with a bench-top experimental cone beam CT (CBCT) system. The resulting images have been segmented to model an uncompressed breast for simulation of various CBCT techniques. To further simulate conventional or tomosynthesis mammographic imaging for comparison with the CBCT technique, a deformation technique was developed to convert the CT data for an uncompressed breast into those for a compressed breast without altering the breast volume or regional breast density. With this technique, the 3D breast deformation is separated into two 2D deformations in the coronal and axial views. To preserve the total breast volume and regional tissue composition, each 2D deformation step was achieved by altering the square pixels into rectangular ones with the pixel areas unchanged and then resampling with the original square pixels using bilinear interpolation. The compression was modeled by first stretching the breast in the superior-inferior direction in the coronal view. The image data were first deformed by distorting the voxels with a uniform distortion ratio. These deformed data were then deformed again using distortion ratios varying with the breast thickness and re-sampled. The deformation procedures were then applied in the axial view to stretch the breast in the chest wall-to-nipple direction while shrinking it in the mediolateral direction; the data were then re-sampled and converted into data for uniform cubic voxels. Threshold segmentation was applied to the final deformed image data to obtain the 3D compressed breast model. Our results show that the original segmented CBCT image data were successfully converted into those for a compressed breast with the same volume and regional density preserved. Using this compressed breast model, conventional and tomosynthesis mammograms were simulated for comparison with CBCT.
Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A
2017-06-30
Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has small-sample limitations. We used a pooled resampling method in the nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling against the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means than the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability for all conditions except the Cauchy and extreme-variability lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than the other alternatives. The nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling for comparing paired or unpaired means and for validating one-way analysis of variance results for non-normal data in small-sample studies. Copyright © 2017 John Wiley & Sons, Ltd.
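A minimal sketch of a pooled-resampling bootstrap test for two means in the spirit described above (not the authors' code): both groups are resampled from the pooled data so that the null hypothesis holds by construction, and the Welch t statistic is used as the test statistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def pooled_bootstrap_t_test(x, y, n_boot=10000):
    """Two-sided bootstrap test of H0: mean(x) == mean(y), resampling
    both groups from the pooled data so the null is true by construction."""
    t_obs = stats.ttest_ind(x, y, equal_var=False).statistic
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_boot):
        xb = rng.choice(pooled, size=len(x), replace=True)
        yb = rng.choice(pooled, size=len(y), replace=True)
        tb = stats.ttest_ind(xb, yb, equal_var=False).statistic
        count += abs(tb) >= abs(t_obs)
    return t_obs, (count + 1) / (n_boot + 1)

# small-sample, skewed example
x = rng.lognormal(0.0, 1.0, size=8)
y = rng.lognormal(0.5, 1.0, size=7)
print("Welch t = %.3f, pooled-bootstrap p = %.4f" % pooled_bootstrap_t_test(x, y))
```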
Motion vector field phase-to-amplitude resampling for 4D motion-compensated cone-beam CT
NASA Astrophysics Data System (ADS)
Sauppe, Sebastian; Kuhm, Julian; Brehm, Marcus; Paysan, Pascal; Seghers, Dieter; Kachelrieß, Marc
2018-02-01
We propose a phase-to-amplitude resampling (PTAR) method to reduce motion blurring in motion-compensated (MoCo) 4D cone-beam CT (CBCT) image reconstruction, without increasing the computational complexity of the motion vector field (MVF) estimation approach. PTAR is able to improve the image quality in reconstructed 4D volumes, including both regular and irregular respiration patterns. The PTAR approach starts with a robust phase-gating procedure for the initial MVF estimation and then switches to a phase-adapted amplitude gating method. The switch implies an MVF-resampling, which makes them amplitude-specific. PTAR ensures that the MVFs, which have been estimated on phase-gated reconstructions, are still valid for all amplitude-gated reconstructions. To validate the method, we use an artificially deformed clinical CT scan with a realistic breathing pattern and several patient data sets acquired with a TrueBeamTM integrated imaging system (Varian Medical Systems, Palo Alto, CA, USA). Motion blurring, which still occurs around the area of the diaphragm or at small vessels above the diaphragm in artifact-specific cyclic motion compensation (acMoCo) images based on phase-gating, is significantly reduced by PTAR. Also, small lung structures appear sharper in the images. This is demonstrated both for simulated and real patient data. A quantification of the sharpness of the diaphragm confirms these findings. PTAR improves the image quality of 4D MoCo reconstructions compared to conventional phase-gated MoCo images, in particular for irregular breathing patterns. Thus, PTAR increases the robustness of MoCo reconstructions for CBCT. Because PTAR does not require any additional steps for the MVF estimation, it is computationally efficient. Our method is not restricted to CBCT but could rather be applied to other image modalities.
Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.
2014-01-01
Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
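The proposed empirical Bayes procedures are too involved to reproduce here, but the Benjamini and Hochberg linear step-up procedure that the study benchmarks against is short enough to sketch for reference:

```python
import numpy as np

def benjamini_hochberg(pvalues, q=0.05):
    """Linear step-up procedure: find the largest k with p_(k) <= k/m * q and
    reject the k smallest p-values, controlling the FDR at level q."""
    p = np.asarray(pvalues)
    m = p.size
    order = np.argsort(p)
    thresholds = (np.arange(1, m + 1) / m) * q
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(pvals, q=0.05))
```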
NASA Astrophysics Data System (ADS)
Bou-Fakhreddine, Bassam; Mougharbel, Imad; Faye, Alain; Abou Chakra, Sara; Pollet, Yann
2018-03-01
Accurate daily river flow forecasting is essential in many applications of water resources such as hydropower operation, agricultural planning and flood control. This paper presents a forecasting approach to deal with a newly addressed situation where hydrological data exist for a period longer than that of meteorological data (measurement asymmetry). One potential solution to the measurement-asymmetry issue is data re-sampling: either considering only the hydrological data or only the balanced part of the hydro-meteorological data set during the forecasting process. However, the main disadvantage is that we may lose potentially relevant information from the left-out data. In this research, the key output is a Two-Phase Constructive Fuzzy inference hybrid model that is implemented over the non-re-sampled data. The introduced modeling approach must be capable of exploiting the available data efficiently, with higher prediction efficiency relative to a Constructive Fuzzy model trained on a re-sampled data set. The study was applied to the Litani River in the Bekaa Valley, Lebanon, using 4 years of rainfall and 24 years of daily river flow measurements. A Constructive Fuzzy System Model (C-FSM) and a Two-Phase Constructive Fuzzy System Model (TPC-FSM) are trained. Upon validation, the second model showed competitive performance and accuracy, with the ability to preserve higher day-to-day variability for 1, 3 and 6 days ahead. For the longest lead period, the C-FSM and TPC-FSM were able to explain 84.6% and 86.5%, respectively, of the actual river flow variation. Overall, the results indicate that the TPC-FSM provides a better tool for capturing extreme flows in streamflow prediction.
Study on the Classification of GAOFEN-3 Polarimetric SAR Images Using Deep Neural Network
NASA Astrophysics Data System (ADS)
Zhang, J.; Zhang, J.; Zhao, Z.
2018-04-01
The imaging principle of polarimetric synthetic aperture radar (POLSAR) means that image quality is affected by speckle noise, so this interference reduces the recognition accuracy of traditional image classification methods. Since their introduction, deep convolutional neural networks have reshaped traditional image processing and brought the field of computer vision to a new stage, owing to their strong ability to learn deep features and to fit large datasets. Based on the basic characteristics of polarimetric SAR images, this paper studies surface cover types using deep learning. We fused fully polarimetric SAR features at different scales into RGB images fed to the convolutional-neural-network-based GoogLeNet model for iterative training, and then used the trained model on a validation dataset to test classification. First, referring to optical imagery, we labeled the surface cover types of a GF-3 POLSAR image with 8 m resolution, and then collected samples for the different categories. To meet the GoogLeNet requirement of 256 × 256 pixel inputs, and taking into account the limited SAR resolution, the original image was pre-processed by resampling. POLSAR image slice samples of different scales, with sampling intervals of 2 m and 1 m, were trained separately and validated on the verification dataset. The training accuracy of the GoogLeNet model trained on the 2 m resampled polarimetric SAR images was 94.89%, and that of the model trained on the 1 m resampled images was 92.65%.
Testing non-inferiority of a new treatment in three-arm clinical trials with binary endpoints.
Tang, Nian-Sheng; Yu, Bin; Tang, Man-Lai
2014-12-18
A two-arm non-inferiority trial without a placebo is usually adopted to demonstrate that an experimental treatment is not worse than a reference treatment by a small pre-specified non-inferiority margin due to ethical concerns. Selection of the non-inferiority margin and establishment of assay sensitivity are two major issues in the design, analysis and interpretation for two-arm non-inferiority trials. Alternatively, a three-arm non-inferiority clinical trial including a placebo is usually conducted to assess the assay sensitivity and internal validity of a trial. Recently, some large-sample approaches have been developed to assess the non-inferiority of a new treatment based on the three-arm trial design. However, these methods behave badly with small sample sizes in the three arms. This manuscript aims to develop some reliable small-sample methods to test three-arm non-inferiority. Saddlepoint approximation, exact and approximate unconditional, and bootstrap-resampling methods are developed to calculate p-values of the Wald-type, score and likelihood ratio tests. Simulation studies are conducted to evaluate their performance in terms of type I error rate and power. Our empirical results show that the saddlepoint approximation method generally behaves better than the asymptotic method based on the Wald-type test statistic. For small sample sizes, approximate unconditional and bootstrap-resampling methods based on the score test statistic perform better in the sense that their corresponding type I error rates are generally closer to the prespecified nominal level than those of other test procedures. Both approximate unconditional and bootstrap-resampling test procedures based on the score test statistic are generally recommended for three-arm non-inferiority trials with binary outcomes.
Ozçift, Akin
2011-05-01
Supervised classification algorithms are commonly used in the design of computer-aided diagnosis systems. In this study, we present a resampling-strategy-based Random Forests (RF) ensemble classifier to improve the diagnosis of cardiac arrhythmia. Random Forests is an ensemble classifier that consists of many decision trees and outputs the class that is the mode of the classes output by the individual trees. In this way, an RF ensemble classifier performs better than a single tree from a classification-performance point of view. In general, multiclass datasets having an unbalanced distribution of sample sizes are difficult to analyze in terms of class discrimination. The cardiac arrhythmia dataset has multiple classes with small sample sizes and is therefore well suited to testing our resampling-based training strategy. The dataset contains 452 samples in fourteen types of arrhythmias, and eleven of these classes have sample sizes of less than 15. Our diagnosis strategy consists of two parts: (i) a correlation-based feature selection algorithm is used to select relevant features from the cardiac arrhythmia dataset; (ii) the RF machine learning algorithm is used to evaluate the performance of the selected features with and without simple random sampling, to assess the efficiency of the proposed training strategy. The resultant accuracy of the classifier is 90.0%, which is quite high diagnostic performance for cardiac arrhythmia. Furthermore, three case studies, i.e., thyroid, cardiotocography and audiology, are used to benchmark the effectiveness of the proposed method. The results of the experiments demonstrated the efficiency of the random sampling strategy in training the RF ensemble classification algorithm. Copyright © 2011 Elsevier Ltd. All rights reserved.
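A sketch of the resample-then-train idea with scikit-learn; the correlation-based feature selection and the actual arrhythmia data are not reproduced, so synthetic imbalanced data and simple random over-sampling stand in for them.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

# imbalanced multiclass data standing in for the arrhythmia set
X, y = make_classification(n_samples=450, n_features=30, n_informative=10,
                           n_classes=4, weights=[0.7, 0.2, 0.07, 0.03],
                           n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# simple random over-sampling: resample every minority class up to the majority size
classes, counts = np.unique(y_tr, return_counts=True)
n_max = counts.max()
parts = [resample(X_tr[y_tr == c], y_tr[y_tr == c], replace=True,
                  n_samples=n_max, random_state=0) for c in classes]
X_bal = np.vstack([p[0] for p in parts])
y_bal = np.concatenate([p[1] for p in parts])

rf = RandomForestClassifier(n_estimators=300, random_state=0)
print("plain RF  :", rf.fit(X_tr, y_tr).score(X_te, y_te))
print("resampled :", rf.fit(X_bal, y_bal).score(X_te, y_te))
```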
Plasmon-driven sequential chemical reactions in an aqueous environment.
Zhang, Xin; Wang, Peijie; Zhang, Zhenglong; Fang, Yurui; Sun, Mengtao
2014-06-24
Plasmon-driven sequential chemical reactions were successfully realized in an aqueous environment. In an electrochemical environment, sequential chemical reactions were driven by an applied potential and laser irradiation. Furthermore, the rate of the chemical reaction was controlled via pH, which provides indirect evidence that the hot electrons generated from plasmon decay play an important role in plasmon-driven chemical reactions. In acidic conditions, the hot electrons were captured by the abundant H(+) in the aqueous environment, which prevented the chemical reaction. The developed plasmon-driven chemical reactions in an aqueous environment will significantly expand the applications of plasmon chemistry and may provide a promising avenue for green chemistry using plasmon catalysis in aqueous environments under irradiation by sunlight.
NASA Astrophysics Data System (ADS)
Chen, Xinjia; Lacy, Fred; Carriere, Patrick
2015-05-01
Sequential test algorithms are playing increasingly important roles in quickly detecting network intrusions such as port scanners. Since such algorithms are usually analyzed by intuitive approximation or asymptotic analysis, we develop an exact computational method for their performance analysis. Our method can be used to calculate the probability of false alarm and the average detection time to arbitrary pre-specified accuracy.
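The paper's exact computational method is not given in the abstract; as background, the kind of sequential test such detectors build on can be sketched as a sequential probability ratio test on connection outcomes, with made-up success probabilities and error rates.

```python
import math

def sprt_portscan(outcomes, p_benign=0.8, p_scanner=0.2, alpha=0.01, beta=0.01):
    """Sequential probability ratio test on a stream of connection outcomes
    (1 = connection succeeded, 0 = failed). Scanners fail far more often,
    so failures push the log-likelihood ratio toward the 'scanner' boundary."""
    upper = math.log((1 - beta) / alpha)      # accept H1: scanner
    lower = math.log(beta / (1 - alpha))      # accept H0: benign
    llr = 0.0
    for n, ok in enumerate(outcomes, start=1):
        p1 = p_scanner if ok else 1 - p_scanner
        p0 = p_benign if ok else 1 - p_benign
        llr += math.log(p1 / p0)
        if llr >= upper:
            return "scanner", n
        if llr <= lower:
            return "benign", n
    return "undecided", len(outcomes)

print(sprt_portscan([0, 0, 1, 0, 0, 0]))   # mostly failed probes
print(sprt_portscan([1, 1, 1, 1, 1, 1]))   # mostly successful connections
```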
The timing of language learning shapes brain structure associated with articulation.
Berken, Jonathan A; Gracco, Vincent L; Chen, Jen-Kai; Klein, Denise
2016-09-01
We compared the brain structure of highly proficient simultaneous (two languages from birth) and sequential (second language after age 5) bilinguals, who differed only in their degree of native-like accent, to determine how the brain develops when a skill is acquired from birth versus later in life. For the simultaneous bilinguals, gray matter density was increased in the left putamen, as well as in the left posterior insula, right dorsolateral prefrontal cortex, and left and right occipital cortex. For the sequential bilinguals, gray matter density was increased in the bilateral premotor cortex. Sequential bilinguals with better accents also showed greater gray matter density in the left putamen, and in several additional brain regions important for sensorimotor integration and speech-motor control. Our findings suggest that second language learning results in enhanced brain structure of specific brain areas, which depends on whether two languages are learned simultaneously or sequentially, and on the extent to which native-like proficiency is acquired.
Middlebrooks, Catherine D; Castel, Alan D
2018-05-01
Learners make a number of decisions when attempting to study efficiently: they must choose which information to study, for how long to study it, and whether to restudy it later. The current experiments examine whether documented impairments to self-regulated learning when studying information sequentially, as opposed to simultaneously, extend to the learning of and memory for valuable information. In Experiment 1, participants studied lists of words ranging in value from 1-10 points sequentially or simultaneously at a preset presentation rate; in Experiment 2, study was self-paced and participants could choose to restudy. Although participants prioritized high-value over low-value information, irrespective of presentation, those who studied the items simultaneously demonstrated superior value-based prioritization with respect to recall, study selections, and self-pacing. The results of the present experiments support the theory that devising, maintaining, and executing efficient study agendas is inherently different under sequential formatting than simultaneous. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Friston, Karl J.; Dolan, Raymond J.
2017-01-01
Normative models of human cognition often appeal to Bayesian filtering, which provides optimal online estimates of unknown or hidden states of the world, based on previous observations. However, in many cases it is necessary to optimise beliefs about sequences of states rather than just the current state. Importantly, Bayesian filtering and sequential inference strategies make different predictions about beliefs and subsequent choices, rendering them behaviourally dissociable. Taking data from a probabilistic reversal task we show that subjects’ choices provide strong evidence that they are representing short sequences of states. Between-subject measures of this implicit sequential inference strategy had a neurobiological underpinning and correlated with grey matter density in prefrontal and parietal cortex, as well as the hippocampus. Our findings provide, to our knowledge, the first evidence for sequential inference in human cognition, and by exploiting between-subject variation in this measure we provide pointers to its neuronal substrates. PMID:28486504
SAMPLE SIZE FOR SEASONAL MEAN CONCENTRATION, DEPOSITION VELOCITY AND DEPOSITION: A RESAMPLING STUDY
Methodologies are described to assign confidence statements to seasonal means of concentration (C), deposition velocity (Vd), and deposition categorized by species/parameters, sites, and seasons in the presence of missing data. Estimators of seasonal means with missing weekly dat...
MISR Level 1 Near Real Time Products
Atmospheric Science Data Center
2016-10-31
The MISR Near Real Time Level 1 data products ... km MISR swath and projected onto a Space-Oblique Mercator (SOM) map grid. The Ellipsoid-projected and Terrain-projected top-of-atmosphere (TOA) radiance products provide measurements respectively resampled onto the ...
Using and Evaluating Resampling Simulations in SPSS and Excel.
ERIC Educational Resources Information Center
Smith, Brad
2003-01-01
Describes and evaluates three computer-assisted simulations used with Statistical Package for the Social Sciences (SPSS) and Microsoft Excel. Designed the simulations to reinforce and enhance student understanding of sampling distributions, confidence intervals, and significance tests. Reports evaluations revealed improved student comprehension of…
Bradu, Adrian; Kapinchev, Konstantin; Barnes, Frederick; Podoleanu, Adrian
2015-07-01
In a previous report, we demonstrated master-slave optical coherence tomography (MS-OCT), an OCT method that does not need resampling of data and can be used to deliver en face images from several depths simultaneously. In a separate report, we have also demonstrated MS-OCT's capability of producing cross-sectional images of a quality similar to those provided by the traditional Fourier domain (FD) OCT technique, but at a much slower rate. Here, we demonstrate that by taking advantage of the parallel processing capabilities offered by the MS-OCT method, cross-sectional OCT images of the human retina can be produced in real time. We analyze the conditions that ensure a true real-time B-scan imaging operation and demonstrate in vivo real-time images from human fovea and the optic nerve, with resolution and sensitivity comparable to those produced using the traditional FD-based method, however, without the need of data resampling.
Resampling approach for anomalous change detection
NASA Astrophysics Data System (ADS)
Theiler, James; Perkins, Simon
2007-04-01
We investigate the problem of identifying pixels in pairs of co-registered images that correspond to real changes on the ground. Changes that are due to environmental differences (illumination, atmospheric distortion, etc.) or sensor differences (focus, contrast, etc.) will be widespread throughout the image, and the aim is to avoid these changes in favor of changes that occur in only one or a few pixels. Formal outlier detection schemes (such as the one-class support vector machine) can identify rare occurrences, but will be confounded by pixels that are "equally rare" in both images: they may be anomalous, but they are not changes. We describe a resampling scheme we have developed that formally addresses both of these issues, and reduces the problem to a binary classification, a problem for which a large variety of machine learning tools have been developed. In principle, the effects of misregistration will manifest themselves as pervasive changes, and our method will be robust against them - but in practice, misregistration remains a serious issue.
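A schematic of the resampling-to-classification idea (not the authors' implementation): co-registered pixel pairs form one class, pairs with one image's pixels randomly permuted form the other, and a classifier's score flags pixels whose actual pairing looks more "independent" than "paired". The synthetic data and the logistic-regression choice are assumptions for the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# two co-registered "images", flattened to per-pixel feature vectors
n_pix = 5000
x = rng.normal(size=(n_pix, 3))                    # bands of image A
y = 0.9 * x + rng.normal(0, 0.3, size=(n_pix, 3))  # image B mostly follows A
y[:10] = rng.normal(0, 3, size=(10, 3))            # a few genuinely changed pixels

# class 0: actual pairs (joint distribution); class 1: resampled pairs (product of marginals)
y_scrambled = y[rng.permutation(n_pix)]
X_joint = np.hstack([x, y])
X_indep = np.hstack([x, y_scrambled])
X_train = np.vstack([X_joint, X_indep])
labels = np.concatenate([np.zeros(n_pix), np.ones(n_pix)])

clf = LogisticRegression(max_iter=1000).fit(X_train, labels)
# anomalousness: probability that an actual pair looks like an independent one
score = clf.predict_proba(X_joint)[:, 1]
print("top-scoring pixels:", np.argsort(score)[-10:])
```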
Clausen, J L; Georgian, T; Gardner, K H; Douglas, T A
2018-01-01
Research shows grab sampling is inadequate for evaluating military ranges contaminated with energetics because of their highly heterogeneous distribution. Similar studies assessing the heterogeneous distribution of metals at small-arms ranges (SAR) are lacking. To address this, we evaluated whether grab sampling provides appropriate data for performing risk analysis at metal-contaminated SARs characterized with 30-48 grab samples. We evaluated the extractable metal content of Cu, Pb, Sb, and Zn in the field data using a Monte Carlo random resampling with replacement (bootstrapping) simulation approach. Results indicate the 95% confidence interval of the mean for Pb (432 mg/kg) at one site was 200-700 mg/kg, with a data range of 5-4500 mg/kg. Considering the U.S. Environmental Protection Agency screening level for lead is 400 mg/kg, the necessity of cleanup at this site is unclear. Resampling based on populations of 7 and 15 samples, sample sizes more realistic for the area, yielded high false-negative rates.
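The field data are not available here, but the bootstrap step is easy to sketch on synthetic, right-skewed concentrations: percentile intervals for the mean, plus the fraction of small resampled populations whose mean falls below the 400 mg/kg screening level.

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic, highly skewed Pb concentrations (mg/kg) standing in for grab-sample data
pb = rng.lognormal(mean=5.3, sigma=1.2, size=40)

def boot_ci_mean(data, n_boot=10000, ci=95):
    means = [rng.choice(data, size=len(data), replace=True).mean() for _ in range(n_boot)]
    lo, hi = np.percentile(means, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return lo, hi

print("mean = %.0f mg/kg, 95%% CI = (%.0f, %.0f)" % (pb.mean(), *boot_ci_mean(pb)))

# how often a small resampled population misses the 400 mg/kg screening level
for n in (7, 15):
    miss = np.mean([rng.choice(pb, size=n, replace=True).mean() < 400
                    for _ in range(10000)])
    print(f"n = {n:2d}: fraction of resampled means below 400 mg/kg = {miss:.2f}")
```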
Measures of precision for dissimilarity-based multivariate analysis of ecological communities.
Anderson, Marti J; Santana-Garcon, Julia
2015-01-01
Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. © 2014 The Authors. Ecology Letters published by John Wiley & Sons Ltd and CNRS.
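A minimal sketch of the MultSE quantity, assuming the usual PERMANOVA identity SS = (1/n) * sum of squared inter-point dissimilarities; a single layer of subsampling is shown for brevity (the paper's double resampling adds a bootstrap within each subset), and the Bray-Curtis metric and synthetic counts are assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(6)

def mult_se(data, metric="braycurtis"):
    """Pseudo multivariate standard error, sqrt(V/n), where the pseudo
    variance V = SS/(n-1) and SS = (1/n) * sum of squared inter-point
    dissimilarities."""
    n = data.shape[0]
    d2 = pdist(data, metric=metric) ** 2
    ss = d2.sum() / n
    v = ss / (n - 1)
    return np.sqrt(v / n)

# resampling over increasing sample sizes on synthetic abundance data
community = rng.poisson(3.0, size=(60, 20))           # 60 samples x 20 species
for n in (5, 10, 20, 40, 60):
    reps = [mult_se(community[rng.choice(60, size=n, replace=False)])
            for _ in range(200)]
    print(f"n = {n:2d}: MultSE = {np.mean(reps):.3f} +/- {np.std(reps):.3f}")
```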
Uncertainties in the cluster-cluster correlation function
NASA Astrophysics Data System (ADS)
Ling, E. N.; Frenk, C. S.; Barrow, J. D.
1986-12-01
The bootstrap resampling technique is applied to estimate sampling errors and significance levels of the two-point correlation functions determined for a subset of the CfA redshift survey of galaxies and a redshift sample of 104 Abell clusters. The angular correlation function for a sample of 1664 Abell clusters is also calculated. The standard errors in xi(r) for the Abell data are found to be considerably larger than quoted 'Poisson errors'. The best estimate for the ratio of the correlation length of Abell clusters (richness class R greater than or equal to 1, distance class D less than or equal to 4) to that of CfA galaxies is 4.2 (+1.4/-1.0; 68th percentile error). The enhancement of cluster clustering over galaxy clustering is statistically significant in the presence of resampling errors. The uncertainties found do not include the effects of possible systematic biases in the galaxy and cluster catalogs and could be regarded as lower bounds on the true uncertainty range.
Survival estimation and the effects of dependency among animals
Schmutz, Joel A.; Ward, David H.; Sedinger, James S.; Rexstad, Eric A.
1995-01-01
Survival models assume that fates of individuals are independent, yet the robustness of this assumption has been poorly quantified. We examine how empirically derived estimates of the variance of survival rates are affected by dependency in survival probability among individuals. We used Monte Carlo simulations to generate known amounts of dependency among pairs of individuals and analyzed these data with Kaplan-Meier and Cormack-Jolly-Seber models. Dependency significantly increased these empirical variances as compared to theoretically derived estimates of variance from the same populations. Using resighting data from 168 pairs of black brant, we used a resampling procedure and program RELEASE to estimate empirical and mean theoretical variances. We estimated that the relationship between paired individuals caused the empirical variance of the survival rate to be 155% larger than the empirical variance for unpaired individuals. Monte Carlo simulations and use of this resampling strategy can provide investigators with information on how robust their data are to this common assumption of independent survival probabilities.
Confidence limit calculation for antidotal potency ratio derived from lethal dose 50
Manage, Ananda; Petrikovics, Ilona
2013-01-01
AIM: To describe confidence interval calculation for antidotal potency ratios using the bootstrap method. METHODS: The nonparametric bootstrap method invented by Efron can easily be adapted to construct confidence intervals in situations like this. The bootstrap is a resampling method in which bootstrap samples are obtained by resampling from the original sample. RESULTS: The described confidence interval calculation using the bootstrap method does not require the sampling distribution of the antidotal potency ratio. This can be a substantial help for toxicologists, who are directed to employ the Dixon up-and-down method with lower numbers of animals to determine lethal dose 50 values for characterizing the investigated toxic molecules and, eventually, the antidotal protection provided by the test antidotal systems. CONCLUSION: The described method can serve as a useful tool in various other applications. The simplicity of the method makes it easy to do the calculation in most programming software packages. PMID:25237618
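A schematic of the percentile-bootstrap interval for a ratio of two LD50 estimates; the quantal data, the regularised logistic LD50 estimator, and the dose grid below are simplified stand-ins for the Dixon up-and-down data the abstract refers to.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

def make_group(ld50, n_per_dose=5, doses=(1, 2, 4, 8, 16, 32)):
    """Synthetic quantal dose-response data: 1 = death, 0 = survival."""
    logd = np.repeat(np.log(doses), n_per_dose)
    p = 1.0 / (1.0 + np.exp(-3.0 * (logd - np.log(ld50))))
    return logd, (rng.uniform(size=logd.size) < p).astype(int)

def ld50_estimate(logd, dead):
    """Crude LD50 from a (regularised) logistic fit: dose where P(death) = 0.5."""
    m = LogisticRegression(C=10.0).fit(logd.reshape(-1, 1), dead)
    return float(np.exp(-m.intercept_[0] / m.coef_[0, 0]))

logd_c, dead_c = make_group(ld50=4.0)      # toxin alone
logd_t, dead_t = make_group(ld50=12.0)     # toxin + antidote

def boot_ratio(n_boot=1000):
    ratios = []
    for _ in range(n_boot):
        i = rng.integers(0, logd_c.size, logd_c.size)   # resample animals with replacement
        j = rng.integers(0, logd_t.size, logd_t.size)
        ratios.append(ld50_estimate(logd_t[j], dead_t[j]) /
                      ld50_estimate(logd_c[i], dead_c[i]))
    return np.percentile(ratios, [2.5, 97.5])

apr = ld50_estimate(logd_t, dead_t) / ld50_estimate(logd_c, dead_c)
print("APR = %.2f, bootstrap 95%% CI = (%.2f, %.2f)" % (apr, *boot_ratio()))
```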
A program for handling map projections of small-scale geospatial raster data
Finn, Michael P.; Steinwand, Daniel R.; Trent, Jason R.; Buehler, Robert A.; Mattli, David M.; Yamamoto, Kristina H.
2012-01-01
Scientists routinely accomplish small-scale geospatial modeling using raster datasets of global extent. Such use often requires the projection of global raster datasets onto a map or the reprojection from a given map projection associated with a dataset. The distortion characteristics of these projection transformations can have significant effects on modeling results. Distortions associated with the reprojection of global data are generally greater than distortions associated with reprojections of larger-scale, localized areas. The accuracy of areas in projected raster datasets of global extent is dependent on spatial resolution. To address these problems of projection and the associated resampling that accompanies it, methods for framing the transformation space, direct point-to-point transformations rather than gridded transformation spaces, a solution to the wrap-around problem, and an approach to alternative resampling methods are presented. The implementations of these methods are provided in an open-source software package called MapImage (or mapIMG, for short), which is designed to function on a variety of computer architectures.
Novel Designs of Quantum Reversible Counters
NASA Astrophysics Data System (ADS)
Qi, Xuemei; Zhu, Haihong; Chen, Fulong; Zhu, Junru; Zhang, Ziyang
2016-11-01
Reversible logic, as an interesting and important issue, has been widely used in designing combinational and sequential circuits for low-power and high-speed computation. Though a significant amount of work has been done on reversible combinational logic, the realization of reversible sequential circuits is still at a premature stage. The reversible counter is not only an important part of sequential circuits but also an essential part of a quantum circuit system. In this paper, we designed two kinds of novel reversible counters. In order to construct the counters, an innovative reversible T Flip-flop Gate (TFG), a T flip-flop block (T_FF) and a JK flip-flop block (JK_FF) are proposed. Based on these blocks and some existing reversible gates, a 4-bit binary-coded decimal (BCD) counter and a controlled Up/Down synchronous counter are designed. With the help of the Verilog hardware description language (Verilog HDL), these counters have been modeled and verified. According to the simulation results, our circuits' logic structures are validated. Compared with existing designs in terms of quantum cost (QC), delay (DL) and garbage outputs (GBO), our designs perform better than the others. They can therefore be used as important storage components in future low-power computing systems.
Yin, Weiwei; Garimalla, Swetha; Moreno, Alberto; Galinski, Mary R; Styczynski, Mark P
2015-08-28
There are increasing efforts to bring high-throughput systems biology techniques to bear on complex animal model systems, often with a goal of learning about underlying regulatory network structures (e.g., gene regulatory networks). However, complex animal model systems typically have significant limitations on cohort sizes, number of samples, and the ability to perform follow-up and validation experiments. These constraints are particularly problematic for many current network learning approaches, which require large numbers of samples and may predict many more regulatory relationships than actually exist. Here, we test the idea that by leveraging the accuracy and efficiency of classifiers, we can construct high-quality networks that capture important interactions between variables in datasets with few samples. We start from a previously-developed tree-like Bayesian classifier and generalize its network learning approach to allow for arbitrary depth and complexity of tree-like networks. Using four diverse sample networks, we demonstrate that this approach performs consistently better at low sample sizes than the Sparse Candidate Algorithm, a representative approach for comparison because it is known to generate Bayesian networks with high positive predictive value. We develop and demonstrate a resampling-based approach to enable the identification of a viable root for the learned tree-like network, important for cases where the root of a network is not known a priori. We also develop and demonstrate an integrated resampling-based approach to the reduction of variable space for the learning of the network. Finally, we demonstrate the utility of this approach via the analysis of a transcriptional dataset of a malaria challenge in a non-human primate model system, Macaca mulatta, suggesting the potential to capture indicators of the earliest stages of cellular differentiation during leukopoiesis. We demonstrate that by starting from effective and efficient approaches for creating classifiers, we can identify interesting tree-like network structures with significant ability to capture the relationships in the training data. This approach represents a promising strategy for inferring networks with high positive predictive value under the constraint of small numbers of samples, meeting a need that will only continue to grow as more high-throughput studies are applied to complex model systems.
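Not the authors' algorithm, but a sketch of the resampling idea for root identification: across bootstrap resamples of a small cohort, score each variable as a candidate root (here simply by mutual information with the class label) and keep the most frequently top-ranked one.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(8)

# small-sample dataset standing in for a transcriptional profile
X, y = make_classification(n_samples=40, n_features=12, n_informative=3,
                           random_state=0)

def root_votes(X, y, n_boot=200):
    """Count how often each feature is the top-scoring root candidate
    across bootstrap resamples of the (few) samples."""
    votes = np.zeros(X.shape[1], dtype=int)
    for _ in range(n_boot):
        idx = rng.integers(0, X.shape[0], X.shape[0])
        mi = mutual_info_classif(X[idx], y[idx], random_state=0)
        votes[np.argmax(mi)] += 1
    return votes

votes = root_votes(X, y)
print("votes per feature:", votes)
print("chosen root feature:", int(np.argmax(votes)))
```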
Actively learning human gaze shifting paths for semantics-aware photo cropping.
Zhang, Luming; Gao, Yue; Ji, Rongrong; Xia, Yingjie; Dai, Qionghai; Li, Xuelong
2014-05-01
Photo cropping is a widely used tool in printing industry, photography, and cinematography. Conventional cropping models suffer from the following three challenges. First, the deemphasized role of semantic contents that are many times more important than low-level features in photo aesthetics. Second, the absence of a sequential ordering in the existing models. In contrast, humans look at semantically important regions sequentially when viewing a photo. Third, the difficulty of leveraging inputs from multiple users. Experience from multiple users is particularly critical in cropping as photo assessment is quite a subjective task. To address these challenges, this paper proposes semantics-aware photo cropping, which crops a photo by simulating the process of humans sequentially perceiving semantically important regions of a photo. We first project the local features (graphlets in this paper) onto the semantic space, which is constructed based on the category information of the training photos. An efficient learning algorithm is then derived to sequentially select semantically representative graphlets of a photo, and the selecting process can be interpreted by a path, which simulates humans actively perceiving semantics in a photo. Furthermore, we learn a prior distribution of such active graphlet paths from training photos that are marked as aesthetically pleasing by multiple users. The learned priors enforce the corresponding active graphlet path of a test photo to be maximally similar to those from the training photos. Experimental results show that: 1) the active graphlet path accurately predicts human gaze shifting, and thus is more indicative for photo aesthetics than conventional saliency maps and 2) the cropped photos produced by our approach outperform its competitors in both qualitative and quantitative comparisons.
Building Intuitions about Statistical Inference Based on Resampling
ERIC Educational Resources Information Center
Watson, Jane; Chance, Beth
2012-01-01
Formal inference, which makes theoretical assumptions about distributions and applies hypothesis testing procedures with null and alternative hypotheses, is notoriously difficult for tertiary students to master. The debate about whether this content should appear in Years 11 and 12 of the "Australian Curriculum: Mathematics" has gone on…
ERIC Educational Resources Information Center
Peterson, Ivars
1991-01-01
A method that enables people to obtain the benefits of statistics and probability theory without the shortcomings of conventional methods because it is free of mathematical formulas and is easy to understand and use is described. A resampling technique called the "bootstrap" is discussed in terms of application and development. (KR)
Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models
ERIC Educational Resources Information Center
Raykov, Tenko
2005-01-01
A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between the average of resampled conventional noncentrality parameter estimates and their sample counterpart. The…
Testing variance components by two jackknife methods
USDA-ARS?s Scientific Manuscript database
The jackknife method, a resampling technique, has been widely used for statistical tests for years. The pseudo-value based jackknife method (defined as the pseudo jackknife method) is commonly used to reduce the bias of an estimate; however, sometimes it could result in large variation for an estimate a...
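A minimal sketch of the pseudo-value jackknife referred to above, applied to the (biased) plug-in variance as the statistic:

```python
import numpy as np

def jackknife_pseudovalues(data, statistic):
    """Pseudo-value jackknife: p_i = n*theta_hat - (n-1)*theta_hat_(-i).
    Their mean is the bias-reduced estimate; their spread gives a variance
    estimate for an approximate test on the statistic."""
    data = np.asarray(data)
    n = data.size
    theta_full = statistic(data)
    theta_loo = np.array([statistic(np.delete(data, i)) for i in range(n)])
    pseudo = n * theta_full - (n - 1) * theta_loo
    est = pseudo.mean()
    se = pseudo.std(ddof=1) / np.sqrt(n)
    return est, se

rng = np.random.default_rng(9)
x = rng.exponential(2.0, size=25)
est, se = jackknife_pseudovalues(x, np.var)     # plug-in variance is a biased statistic
print(f"jackknife estimate = {est:.3f} +/- {se:.3f} (plug-in = {np.var(x):.3f})")
```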
Sequential voluntary cough and aspiration or aspiration risk in Parkinson's disease.
Hegland, Karen Wheeler; Okun, Michael S; Troche, Michelle S
2014-08-01
Disordered swallowing, or dysphagia, is almost always present to some degree in people with Parkinson's disease (PD), either causing aspiration or greatly increasing the risk for aspiration during swallowing. This likely contributes to aspiration pneumonia, a leading cause of death in this patient population. Effective airway protection is dependent upon multiple behaviors, including cough and swallowing. Single voluntary cough function is disordered in people with PD and dysphagia. However, the appropriate response to aspirate material is more than one cough, or sequential cough. The goal of this study was to examine voluntary sequential coughing in people with PD, with and without dysphagia. Forty adults diagnosed with idiopathic PD produced two trials of sequential voluntary cough. The cough airflows were obtained using pneumotachograph and facemask and subsequently digitized and recorded. All participants received a modified barium swallow study as part of their clinical care, and the worst penetration-aspiration score observed was used to determine whether the patient had dysphagia. There were significant differences in the compression phase duration, peak expiratory flow rates, and amount of air expired of the sequential cough produced by participants with and without dysphagia. The presence of dysphagia in people with PD is associated with disordered cough function. Sequential cough, which is important in removing aspirate material from large- and smaller-diameter airways, is also impaired in people with PD and dysphagia compared with those without dysphagia. There may be common neuroanatomical substrates for cough and swallowing impairment in PD leading to the co-occurrence of these dysfunctions.
Publications - GMC 366 | Alaska Division of Geological & Geophysical
DGGS GMC 366 Publication Details. Title: Makushin Geothermal Project ST-1R Core 2009 re-sampling and analysis: Analytical results for anomalous precious and base metals associated with geothermal systems
Tracking the Gender Pay Gap: A Case Study
ERIC Educational Resources Information Center
Travis, Cheryl B.; Gross, Louis J.; Johnson, Bruce A.
2009-01-01
This article provides a short introduction to standard considerations in the formal study of wages and illustrates the use of multiple regression and resampling simulation approaches in a case study of faculty salaries at one university. Multiple regression is especially beneficial where it provides information on strength of association, specific…
Testing the Difference of Correlated Agreement Coefficients for Statistical Significance
ERIC Educational Resources Information Center
Gwet, Kilem L.
2016-01-01
This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…
Moving forward: response to "Studying eyewitness investigations in the field".
Ross, Stephen J; Malpass, Roy S
2008-02-01
Field studies of eyewitness identification are richly confounded. Determining which confounds undermine interpretation is important. The blind administration confound in the Illinois study is said to undermine its value for understanding the relative utility of simultaneous and sequential lineups. Most criticisms of the Illinois study focus on filler identifications, and related inferences about the importance of the blind confound. We find no convincing evidence supporting this line of attack and wonder at filler identifications as the major line of criticism. More debilitating problems impede using the Illinois study to address the simultaneous versus sequential lineup controversy: the inability to estimate guilt independent of identification evidence, the lack of protocol compliance monitoring, and the assessment of lineup quality. Moving forward requires removing these limitations.
Using informative priors in facies inversion: The case of C-ISR method
NASA Astrophysics Data System (ADS)
Valakas, G.; Modis, K.
2016-08-01
Inverse problems involving the characterization of hydraulic properties of groundwater flow systems by conditioning on observations of the state variables are mathematically ill-posed because they have multiple solutions and are sensitive to small changes in the data. In the framework of McMC methods for nonlinear optimization and under an iterative spatial resampling transition kernel, we present an algorithm for narrowing the prior and thus producing improved proposal realizations. To achieve this goal, we cosimulate the facies distribution conditionally to facies observations and normal scores transformed hydrologic response measurements, assuming a linear coregionalization model. The approach works by creating an importance sampling effect that steers the process to selected areas of the prior. The effectiveness of our approach is demonstrated by an example application on a synthetic underdetermined inverse problem in aquifer characterization.
Sweep excitation with order tracking: A new tactic for beam crack analysis
NASA Astrophysics Data System (ADS)
Wei, Dongdong; Wang, KeSheng; Zhang, Mian; Zuo, Ming J.
2018-04-01
Crack detection in beams and beam-like structures is an important issue in industry and has attracted numerous investigations. A local crack leads to global changes in system dynamics and produces non-linear vibration responses. Many researchers have studied these non-linearities for beam crack diagnosis. However, most reported methods are based on impact excitation and constant-frequency excitation. Few studies have focused on crack detection through external sweep excitation, which unleashes abundant dynamic characteristics of the system. Together with a signal resampling technique inspired by Computed Order Tracking, this paper utilizes vibration responses under sweep excitation to diagnose the crack status of beams. A data-driven method for crack depth evaluation is proposed and window-based harmonic extraction approaches are studied. The effectiveness of sweep excitation and the proposed method is experimentally validated.
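A sketch of the resampling step behind computed order tracking as used here: interpolate the sweep response from uniform time samples to uniform excitation-phase samples, so that each harmonic order lands at a fixed spectral position regardless of sweep rate. The signal parameters and the crack-induced second harmonic are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(10)

fs = 5000.0                       # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)      # 10 s linear sweep
f0, f1 = 5.0, 50.0                # excitation frequency swept from f0 to f1
finst = f0 + (f1 - f0) * t / t[-1]
phase = 2 * np.pi * np.cumsum(finst) / fs        # instantaneous excitation phase (rad)

# response with a 1st and a weak (crack-induced) 2nd harmonic plus noise
x = np.sin(phase) + 0.1 * np.sin(2 * phase) + 0.01 * rng.standard_normal(t.size)

# resample from uniform time to uniform phase (constant samples per cycle)
samples_per_cycle = 64
cycles = phase[-1] / (2 * np.pi)
phase_uniform = np.linspace(0, phase[-1], int(cycles) * samples_per_cycle)
x_order = np.interp(phase_uniform, phase, x)

# in the order domain, harmonics sit at fixed "orders" regardless of sweep rate
spec = np.abs(np.fft.rfft(x_order * np.hanning(x_order.size)))
orders = np.fft.rfftfreq(x_order.size, d=1.0 / samples_per_cycle)
print("dominant orders:", orders[np.argsort(spec)[-4:]])
```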
Immediately sequential bilateral cataract surgery: advantages and disadvantages.
Singh, Ranjodh; Dohlman, Thomas H; Sun, Grace
2017-01-01
The number of cataract surgeries performed globally will continue to rise to meet the needs of an aging population. This increased demand will require healthcare systems and providers to find new surgical efficiencies while maintaining excellent surgical outcomes. Immediately sequential bilateral cataract surgery (ISBCS) has been proposed as a solution and is increasingly being performed worldwide. The purpose of this review is to discuss the advantages and disadvantages of ISBCS. When appropriate patient selection occurs and guidelines are followed, ISBCS is comparable with delayed sequential bilateral cataract surgery in long-term patient satisfaction, visual acuity and complication rates. In addition, the risk of bilateral postoperative endophthalmitis and concerns of poorer refractive outcomes have not been supported by the literature. ISBCS is cost-effective for the patient, healthcare payors and society, but current reimbursement models in many countries create significant financial barriers for facilities and surgeons. As demand for cataract surgery rises worldwide, ISBCS will become increasingly important as an alternative to delayed sequential bilateral cataract surgery. Advantages include potentially decreased wait times for surgery, patient convenience and cost savings for healthcare payors. Although they are comparable in visual acuity and complication rates, hurdles that prevent wide adoption include liability concerns as ISBCS is not an established standard of care, economic constraints for facilities and surgeons and inability to fine-tune intraocular lens selection in the second eye. Given these considerations, an open discussion regarding the advantages and disadvantages of ISBCS is important for appropriate patient selection.
Florencio, Camila; Cunha, Fernanda M; Badino, Alberto C; Farinas, Cristiane S; Ximenes, Eduardo; Ladisch, Michael R
2016-08-01
Cellulases and hemicellulases from Trichoderma reesei and Aspergillus niger have been shown to be powerful enzymes for biomass conversion to sugars, but the production costs are still relatively high for commercial application. The choice of an effective microbial cultivation process employed for enzyme production is important, since it may affect titers and the profile of protein secretion. We used proteomic analysis to characterize the secretome of T. reesei and A. niger cultivated in submerged and sequential fermentation processes. The information gained was key to understanding differences in hydrolysis of steam-exploded sugarcane bagasse for enzyme cocktails obtained from the two different cultivation processes. The sequential process for cultivating A. niger gave xylanase and β-glucosidase activities 3- and 8-fold higher, respectively, than the corresponding activities from the submerged process. A greater protein diversity of critical cellulolytic and hemicellulolytic enzymes was also observed through the secretome analyses. These results helped to explain the 3-fold higher yield for hydrolysis of non-washed pretreated bagasse when combined T. reesei and A. niger enzyme extracts from sequential fermentation were used in place of enzymes obtained from submerged fermentation. An enzyme loading of 0.7 FPU cellulase activity/g glucan was surprisingly effective when compared with the 5- to 15-fold higher enzyme loadings commonly reported for other cellulose hydrolysis studies. Analyses showed that more than 80% of the secreted protein consisted of proteins other than cellulases, whose role is important to the hydrolysis of a lignocellulose substrate. Our work combined proteomic analyses and enzymology studies to show that sequential and submerged cultivation methods differently influence both titers and the secretion profile of key enzymes required for the hydrolysis of sugarcane bagasse. The higher diversity of feruloyl esterases, xylanases and other auxiliary hemicellulolytic enzymes observed in the enzyme mixtures from the sequential fermentation could be one major reason for the more efficient enzyme hydrolysis that results when using the combined secretomes from A. niger and T. reesei. Copyright © 2016 Elsevier Inc. All rights reserved.
Simulations of lattice animals and trees
NASA Astrophysics Data System (ADS)
Hsu, Hsiao-Ping; Nadler, Walter; Grassberger, Peter
2005-01-01
The scaling behaviour of randomly branched polymers in a good solvent is studied in two to nine dimensions, using as microscopic models lattice animals and lattice trees on simple hypercubic lattices. As a stochastic sampling method we use a biased sequential sampling algorithm with re-sampling, similar to the pruned-enriched Rosenbluth method (PERM) used extensively for linear polymers. Essentially we start simulating percolation clusters (either site or bond), reweight them according to the animal (tree) ensemble, and prune or branch the further growth according to a heuristic fitness function. In contrast to previous applications of PERM, this fitness function is not the weight with which the actual configuration would contribute to the partition sum, but is closely related to it. We obtain high statistics of animals with up to several thousand sites in all dimensions 2 <= d <= 9. In addition to the partition sum (number of different animals) we estimate gyration radii and numbers of perimeter sites. In all dimensions we verify the Parisi-Sourlas prediction, and we verify all exactly known critical exponents in dimensions 2, 3, 4 and >=8. In addition, we present the hitherto most precise estimates for growth constants in d >= 3. For clusters with one site attached to an attractive surface, we verify for d >= 3 the superuniversality of the cross-over exponent φ at the adsorption transition predicted by Janssen and Lyssy, but not for d = 2. There, we find φ = 0.480(4) instead of the conjectured φ = 1/2. Finally, we discuss the collapse of animals and trees, arguing that our present version of the algorithm is also efficient for some of the models studied in this context, but showing that it is not very efficient for the 'classical' model for collapsing animals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cella, Laura, E-mail: laura.cella@cnr.it; Department of Advanced Biomedical Sciences, Federico II University School of Medicine, Naples; Liuzzi, Raffaele
Purpose: To establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced asymptomatic heart valvular defects (RVD). Methods and Materials: Fifty-six patients treated with sequential chemoradiation therapy for Hodgkin lymphoma (HL) were retrospectively reviewed for RVD events. Clinical information along with whole heart, cardiac chambers, and lung dose distribution parameters was collected, and the correlations to RVD were analyzed by means of Spearman's rank correlation coefficient (Rs). For the selection of the model order and parameters for NTCP modeling, a multivariate logistic regression method using resampling techniques (bootstrapping) was applied. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). Results: When we analyzed the whole heart, a 3-variable NTCP model including the maximum dose, whole heart volume, and lung volume was shown to be the optimal predictive model for RVD (Rs = 0.573, P<.001, AUC = 0.83). When we analyzed the cardiac chambers individually, for the left atrium and for the left ventricle, an NTCP model based on 3 variables including the percentage volume exceeding 30 Gy (V30), cardiac chamber volume, and lung volume was selected as the most predictive model (Rs = 0.539, P<.001, AUC = 0.83; and Rs = 0.557, P<.001, AUC = 0.82, respectively). The NTCP values increase as heart maximum dose or cardiac chambers V30 increase. They also increase with larger volumes of the heart or cardiac chambers and decrease when lung volume is larger. Conclusions: We propose logistic NTCP models for RVD considering not only heart irradiation dose but also the combined effects of lung and heart volumes. Our study establishes the statistical evidence of the indirect effect of lung size on radio-induced heart toxicity.
PC_Eyewitness and the sequential superiority effect: computer-based lineup administration.
MacLin, Otto H; Zimmerman, Laura A; Malpass, Roy S
2005-06-01
Computer technology has become an increasingly important tool for conducting eyewitness identifications. In the area of lineup identifications, computerized administration offers several advantages for researchers and law enforcement. PC_Eyewitness is designed specifically to administer lineups. To assess this new lineup technology, two studies were conducted in order to replicate the results of previous studies comparing simultaneous and sequential lineups. One hundred twenty university students participated in each experiment. Experiment 1 used traditional paper-and-pencil lineup administration methods to compare simultaneous to sequential lineups. Experiment 2 used PC_Eyewitness to administer simultaneous and sequential lineups. The results of these studies were compared to the meta-analytic results reported by N. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001). No differences were found between paper-and-pencil and PC_Eyewitness lineup administration methods. The core findings of the N. Steblay et al. (2001) meta-analysis were replicated by both administration procedures. These results show that computerized lineup administration using PC_Eyewitness is an effective means for gathering eyewitness identification data.
Ye, Jiawen; Yeung, Dannii Y; Liu, Elaine S C; Rochelle, Tina L
2018-04-03
Past research has often focused on the effects of emotional intelligence and received social support on subjective well-being yet paid limited attention to the effects of provided social support. This study adopted a longitudinal design to examine the sequential mediating effects of provided and received social support on the relationship between trait emotional intelligence and subjective happiness. A total of 214 Hong Kong Chinese undergraduates were asked to complete two assessments with a 6-month interval in between. The results of the sequential mediation analysis indicated that the trait emotional intelligence measured in Time 1 indirectly influenced the level of subjective happiness in Time 2 through a sequential pathway of social support provided for others in Time 1 and social support received from others in Time 2. These findings highlight the importance of trait emotional intelligence and the reciprocal exchanges of social support in the subjective well-being of university students. © 2018 International Union of Psychological Science.
Rawat, Varun; Kumar, B Senthil; Sudalai, Arumugam
2013-06-14
A new sequential organocatalytic method for the synthesis of chiral 3-substituted (X = OH, NH2) tetrahydroquinoline derivatives (THQs) [ee up to 99%, yield up to 87%] based on α-aminooxylation or -amination followed by reductive cyclization of o-nitrohydrocinnamaldehydes has been described. This methodology has been efficiently demonstrated in the synthesis of two important bioactive molecules namely (-)-sumanirole (96% ee) and 1-[(S)-3-(dimethylamino)-3,4-dihydro-6,7-dimethoxy-quinolin-1(2H)-yl]propanone (92% ee).
Sequential Voluntary Cough and Aspiration or Aspiration Risk in Parkinson’s Disease
Hegland, Karen Wheeler; Okun, Michael S.; Troche, Michelle S.
2015-01-01
Background Disordered swallowing, or dysphagia, is almost always present to some degree in people with Parkinson’s disease (PD), either causing aspiration or greatly increasing the risk for aspiration during swallowing. This likely contributes to aspiration pneumonia, a leading cause of death in this patient population. Effective airway protection is dependent upon multiple behaviors, including cough and swallowing. Single voluntary cough function is disordered in people with PD and dysphagia. However, the appropriate response to aspirate material is more than one cough, or sequential cough. The goal of this study was to examine voluntary sequential coughing in people with PD, with and without dysphagia. Methods Forty adults diagnosed with idiopathic PD produced two trials of sequential voluntary cough. The cough airflows were obtained using pneumotachograph and facemask and subsequently digitized and recorded. All participants received a modified barium swallow study as part of their clinical care, and the worst penetration–aspiration score observed was used to determine whether the patient had dysphagia. Results There were significant differences in the compression phase duration, peak expiratory flow rates, and amount of air expired of the sequential cough produced by participants with and without dysphagia. Conclusions The presence of dysphagia in people with PD is associated with disordered cough function. Sequential cough, which is important in removing aspirate material from large- and smaller-diameter airways, is also impaired in people with PD and dysphagia compared with those without dysphagia. There may be common neuroanatomical substrates for cough and swallowing impairment in PD leading to the co-occurrence of these dysfunctions. PMID:24792231
Research on parallel algorithm for sequential pattern mining
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao
2008-03-01
Sequential pattern mining is the mining of frequent sequences, related to time or other orders, from a sequence database. Its original motivation was to discover regularities in customer purchasing over time by finding frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field is no longer confined to business databases, extending to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data used in sequential pattern mining are characterized by massive volume and distributed storage, and most existing sequential pattern mining algorithms do not consider these characteristics jointly. Taking these traits into account and drawing on parallel computing theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm follows the principle of pattern reduction and uses a divide-and-conquer strategy for parallelization. The first parallel task constructs frequent item sets by applying frequency concepts and search-space partitioning theory; the second task builds frequent sequences using depth-first search at each processor. The algorithm needs to access the database only twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Using a random data generation procedure and several designed data structures, the SPP algorithm was simulated in a concrete parallel environment and compared against an implementation of the AprioriAll algorithm. The experiments demonstrate that, compared with AprioriAll, SPP achieves an excellent speedup factor and efficiency.
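For readers unfamiliar with the pattern-growth idea that SPP parallelizes, the following is a minimal single-processor sketch in Python. It is not the SPP algorithm itself: partitioning the search space by frequent items and growing patterns depth-first on projected (suffix) databases mirrors the divide-and-conquer strategy described above, but the distributed aspects, the two-pass database access, and SPP's data structures are omitted; the toy database and `min_support` value are illustrative.

```python
from collections import Counter

def mine_sequences(db, min_support):
    """Depth-first sequential pattern mining over a list of item sequences.

    Each frequent single item defines a partition of the search space; the
    partition is then grown depth-first on its projected (suffix) database.
    """
    patterns = {}

    def frequent_items(projected):
        counts = Counter()
        for seq in projected:
            counts.update(set(seq))          # count each item once per sequence
        return {i: c for i, c in counts.items() if c >= min_support}

    def project(projected, item):
        # keep the suffix after the first occurrence of `item`
        return [seq[seq.index(item) + 1:] for seq in projected if item in seq]

    def grow(prefix, projected):
        for item, count in frequent_items(projected).items():
            pattern = prefix + (item,)
            patterns[pattern] = count
            grow(pattern, project(projected, item))

    grow((), db)
    return patterns

db = [list("abcd"), list("acd"), list("abd"), list("bcd")]
print(mine_sequences(db, min_support=3))
```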
On the estimation of spread rate for a biological population
Jim Clark; Lajos Horváth; Mark Lewis
2001-01-01
We propose a nonparametric estimator for the rate of spread of an introduced population. We prove that the limit distribution of the estimator is normal or stable, depending on the behavior of the moment generating function. We show that resampling methods can also be used to approximate the distribution of the estimators.
ERIC Educational Resources Information Center
Du, Yunfei
This paper discusses the impact of sampling error on the construction of confidence intervals around effect sizes. Sampling error affects the location and precision of confidence intervals. Meta-analytic resampling demonstrates that confidence intervals can haphazardly bounce around the true population parameter. Special software with graphical…
Applying Bootstrap Resampling to Compute Confidence Intervals for Various Statistics with R
ERIC Educational Resources Information Center
Dogan, C. Deha
2017-01-01
Background: Most of the studies in academic journals use p values to represent statistical significance. However, this is not a good indicator of practical significance. Although confidence intervals provide information about the precision of point estimation, they are, unfortunately, rarely used. The infrequent use of confidence intervals might…
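As a concrete illustration of the percentile-bootstrap interval the article advocates (the article works in R; this sketch uses Python/NumPy, and the sample data, statistic, and 95% level are arbitrary):

```python
import numpy as np

def bootstrap_ci(x, stat=np.mean, n_boot=10_000, alpha=0.05, rng=None):
    """Percentile bootstrap confidence interval for an arbitrary statistic."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x)
    boot = np.array([stat(rng.choice(x, size=x.size, replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return stat(x), (lo, hi)

sample = np.random.default_rng(1).normal(loc=0.5, scale=1.0, size=40)
print(bootstrap_ci(sample))   # point estimate and 95% percentile interval
```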
Exploring the Replicability of a Study's Results: Bootstrap Statistics for the Multivariate Case.
ERIC Educational Resources Information Center
Thompson, Bruce
Conventional statistical significance tests do not inform the researcher regarding the likelihood that results will replicate. One strategy for evaluating result replication is to use a "bootstrap" resampling of a study's data so that the stability of results across numerous configurations of the subjects can be explored. This paper…
Statistical process control for residential treated wood
Patricia K. Lebow; Timothy M. Young; Stan Lebow
2017-01-01
This paper is the first stage of a study that attempts to improve the process of manufacturing treated lumber through the use of statistical process control (SPC). Analysis of industrial and auditing agency data sets revealed there are differences between the industry and agency probability density functions (pdf) for normalized retention data. Resampling of batches of...
Explanation of Two Anomalous Results in Statistical Mediation Analysis
ERIC Educational Resources Information Center
Fritz, Matthew S.; Taylor, Aaron B.; MacKinnon, David P.
2012-01-01
Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special…
NASA Technical Reports Server (NTRS)
Park, Steve
1990-01-01
A large and diverse number of computational techniques are routinely used to process and analyze remotely sensed data. These techniques include: univariate statistics; multivariate statistics; principal component analysis; pattern recognition and classification; other multivariate techniques; geometric correction; registration and resampling; radiometric correction; enhancement; restoration; Fourier analysis; and filtering. Each of these techniques will be considered, in order.
DOT National Transportation Integrated Search
2004-02-01
Researchers and practitioners are commonly faced with the problem of limited data in the evaluation of ITS systems. Due to high data collection costs and limited resources, they are often forced to make decisions about the efficacy of a system or tec...
USDA-ARS?s Scientific Manuscript database
Satellite-based passive microwave remote sensing typically involves a scanning antenna that makes measurements at irregularly spaced locations. These locations can change on a day to day basis. Soil moisture products derived from satellite-based passive microwave remote sensing are usually resampled...
Resampling-Based Gap Analysis for Detecting Nodes with High Centrality on Large Social Network
2015-05-22
The second network analyzed is extracted from a Japanese word-of-mouth communication site for cosmetics, "@cosme", consisting of 45,024 nodes.
Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling
ERIC Educational Resources Information Center
Banjanovic, Erin S.; Osborne, Jason W.
2016-01-01
Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistical testing. Although confidence intervals have been recommended by scholars for many years,…
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid serious variance under-estimation by conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of cluster event times.
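A minimal sketch of the cluster-bootstrap idea (resample whole clusters with replacement, refit, and take the standard deviation of the replicate estimates as the SE). The `fit_fn` argument is a hypothetical user-supplied routine that fits the survival model of interest, e.g. a Cox model, and returns its coefficient estimates; the two-step variant would additionally resample individuals within each drawn cluster. `data` is assumed to support integer-array row indexing (e.g. a NumPy array).

```python
import numpy as np

def cluster_bootstrap_se(data, cluster_ids, fit_fn, n_boot=500, rng=None):
    """Cluster-bootstrap standard errors for a clustered regression model.

    Whole clusters are resampled with replacement and the model is refitted
    on each replicate; the SE is the standard deviation of the replicate
    coefficient estimates.
    """
    rng = np.random.default_rng(rng)
    cluster_ids = np.asarray(cluster_ids)
    clusters = np.unique(cluster_ids)
    estimates = []
    for _ in range(n_boot):
        drawn = rng.choice(clusters, size=clusters.size, replace=True)
        idx = np.concatenate([np.flatnonzero(cluster_ids == c) for c in drawn])
        estimates.append(fit_fn(data[idx]))    # refit on the bootstrap replicate
    return np.std(np.asarray(estimates), axis=0, ddof=1)
```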
Bersanelli, Matteo; Mosca, Ettore; Remondini, Daniel; Castellani, Gastone; Milanesi, Luciano
2016-01-01
A relation exists between network proximity of molecular entities in interaction networks, functional similarity and association with diseases. The identification of network regions associated with biological functions and pathologies is a major goal in systems biology. We describe a network diffusion-based pipeline for the interpretation of different types of omics in the context of molecular interaction networks. We introduce the network smoothing index, a network-based quantity that allows joint quantification of the amount of omics information in genes and in their network neighbourhood, using network diffusion to define network proximity. The approach is applicable to both descriptive and inferential statistics calculated on omics data. We also show that network resampling, applied to gene lists ranked by quantities derived from the network smoothing index, indicates the presence of significantly connected genes. As a proof of principle, we identified gene modules enriched in somatic mutations and transcriptional variations observed in samples of prostate adenocarcinoma (PRAD). In line with the local hypothesis, network smoothing index and network resampling underlined the existence of a connected component of genes harbouring molecular alterations in PRAD. PMID:27731320
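A sketch of one common form of network diffusion (random walk with restart) that could underlie a smoothing index of this kind; the column normalization, the restart parameter `alpha`, and the final index formula are assumptions for illustration and may differ from the published pipeline.

```python
import numpy as np

def diffuse(adjacency, x0, alpha=0.7):
    """Network diffusion of per-gene scores x0 over an undirected network.

    Solves x = alpha * W x + (1 - alpha) * x0, where W is the
    column-normalized adjacency matrix (assumes no isolated nodes).
    """
    W = adjacency / adjacency.sum(axis=0, keepdims=True)
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * W, (1 - alpha) * x0)

# Toy example: a 4-node path graph with omics signal on the first node only.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x0 = np.array([1.0, 0.0, 0.0, 0.0])
x = diffuse(A, x0)
# One illustrative way to contrast smoothed and raw scores per gene:
smoothing_index = x / (x + x0 + 1e-12)
print(x, smoothing_index)
```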
NASA Astrophysics Data System (ADS)
Wechsung, Frank; Wechsung, Maximilian
2016-11-01
The STatistical Analogue Resampling Scheme (STARS) statistical approach was recently used to project changes in climate variables in Germany corresponding to an assumed degree of warming. We show by theoretical and empirical analysis that STARS simply transforms interannual gradients between warmer and cooler seasons into climate trends. According to STARS projections, summers in Germany will inevitably become drier and winters wetter under global warming. Due to the dominance of negative interannual correlations between precipitation and temperature during the year, STARS has a tendency to generate a net annual decrease in precipitation under mean German conditions. Furthermore, according to STARS, the annual level of global radiation would increase in Germany. STARS can still be used, e.g., for generating scenarios in vulnerability and uncertainty studies. However, it is not suitable as a climate downscaling tool for assessing risks arising from climate change at spatial scales finer than those of a general circulation model (GCM).
McRoy, Susan; Jones, Sean; Kurmally, Adam
2016-09-01
This article examines methods for automated question classification applied to cancer-related questions that people have asked on the web. This work is part of a broader effort to provide automated question answering for health education. We created a new corpus of consumer-health questions related to cancer and a new taxonomy for those questions. We then compared the effectiveness of different statistical methods for developing classifiers, including weighted classification and resampling. Basic methods for building classifiers were limited by the high variability in the natural distribution of questions, and typical refinement approaches of feature selection and merging categories achieved only small improvements to classifier accuracy. Best performance was achieved using weighted classification and resampling methods, the latter yielding an accuracy of F1 = 0.963. Thus, it would appear that statistical classifiers can be trained on natural data, but only if natural distributions of classes are smoothed. Such classifiers would be useful for automated question answering, for enriching web-based content, or for assisting clinical professionals in answering questions. © The Author(s) 2015.
Vehicle Fault Diagnose Based on Smart Sensor
NASA Astrophysics Data System (ADS)
Zhining, Li; Peng, Wang; Jianmin, Mei; Jianwei, Li; Fei, Teng
A traditional vehicle fault diagnosis system usually consists of a computer with an A/D card and many sensors connected to it. The disadvantages of this arrangement are that the sensors can hardly be shared with the control system and other systems, there are too many connecting wires, and electromagnetic compatibility (EMC) suffers. In this paper, a smart speed sensor, smart acoustic pressure sensor, smart oil pressure sensor, smart acceleration sensor and smart order-tracking sensor are designed to solve this problem. Via the CAN bus, these smart sensors, the fault diagnosis computer and other computers can be connected to form a network that monitors and controls the vehicle's diesel engine and other systems without any duplicate sensors. The hardware and software of the smart sensor system are introduced. The oil pressure, vibration and acoustic signals are resampled at constant angle increments to eliminate the influence of rotational speed; after resampling, the signal in every working cycle can be averaged in the angle domain and subjected to further analyses such as order spectra.
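The constant-angle-increment resampling mentioned above can be sketched as follows; `tacho_times` (once-per-revolution tachometer pulse times) and `samples_per_rev` are assumed inputs, and a real order-tracking front end would add anti-alias filtering and finer angle estimation between pulses. After this step, averaging over consecutive cycles and taking an FFT yields an order spectrum aligned with shaft angle rather than time.

```python
import numpy as np

def angular_resample(t, x, tacho_times, samples_per_rev=256):
    """Resample a time signal x(t) at constant shaft-angle increments.

    The shaft angle is assumed to advance by one revolution between
    consecutive tachometer pulses; the signal is interpolated onto a
    uniform angle grid so speed fluctuations no longer smear the orders.
    `t` and `tacho_times` must be increasing.
    """
    revs = np.arange(len(tacho_times))                  # cumulative revolutions at each pulse
    angle_grid = np.arange(0, revs[-1], 1.0 / samples_per_rev)
    t_grid = np.interp(angle_grid, revs, tacho_times)   # angle -> time
    return np.interp(t_grid, t, x)                      # time -> signal value
```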
Liu, Fang; Shen, Changqing; He, Qingbo; Zhang, Ao; Liu, Yongbin; Kong, Fanrang
2014-01-01
A fault diagnosis strategy based on the wayside acoustic monitoring technique is investigated for locomotive bearing fault diagnosis. Inspired by the transient modeling analysis method based on correlation filtering analysis, a so-called Parametric-Mother-Doppler-Wavelet (PMDW) is constructed with six parameters, including a center characteristic frequency and five kinematic model parameters. A Doppler effect eliminator containing a PMDW generator, a correlation filtering analysis module, and a signal resampler is invented to eliminate the Doppler effect embedded in the acoustic signal of the recorded bearing. Through the Doppler effect eliminator, the five kinematic model parameters can be identified based on the signal itself. Then, the signal resampler is applied to eliminate the Doppler effect using the identified parameters. With the ability to detect early bearing faults, the transient model analysis method is employed to detect localized bearing faults after the embedded Doppler effect is eliminated. The effectiveness of the proposed fault diagnosis strategy is verified via simulation studies and applications to diagnose locomotive roller bearing defects. PMID:24803197
Comparison of Sequential Drug Release in Vitro and in Vivo
Sundararaj, Sharath C.; Al-Sabbagh, Mohanad; Rabek, Cheryl L.; Dziubla, Thomas D.; Thomas, Mark V.; Puleo, David A.
2015-01-01
Development of drug delivery devices typically involves characterizing in vitro release performance with the inherent assumption that this will closely approximate in vivo performance. Yet, as delivery devices become more complex, for instance with a sequential drug release pattern, it is important to confirm that in vivo properties correlate with the expected “programming” achieved in vitro. In this work, a systematic comparison between in vitro and in vivo biomaterial erosion and sequential release was performed for a multilayered association polymer system comprising cellulose acetate phthalate and Pluronic F-127. After assessing the materials during incubation in phosphate-buffered saline, devices were implanted supracalvarially in rats. Devices with two different doses and with different erosion rates were harvested at increasing times post-implantation, and the in vivo thickness loss, mass loss, and the drug release profiles were compared with their in vitro counterparts. The sequential release of four different drugs observed in vitro was successfully translated to in vivo conditions. Results suggest, however, that the total erosion time of the devices was longer and release rates of the four drugs were different, with drugs initially released more quickly and then more slowly in vivo. Whereas many comparative studies of in vitro and in vivo drug release from biodegradable polymers involved a single drug, the present research demonstrated that sequential release of four drugs can be maintained following implantation. PMID:26111338
Carretti, Barbara; Lanfranchi, Silvia; Mammarella, Irene C
2013-01-01
Earlier research showed that visuospatial working memory (VSWM) is better preserved in Down syndrome (DS) than verbal WM. Some differences emerged, however, when VSWM performance was broken down into its various components, and more recent studies revealed that the spatial-simultaneous component of VSWM is more impaired than the spatial-sequential one. The difficulty of managing more than one item at a time is also evident when the information to be recalled is structured. To further analyze this issue, we investigated the advantage of material being structured in spatial-simultaneous and spatial-sequential tasks by comparing the performance of a group of individuals with DS and a group of typically-developing children matched for mental age. Both groups were presented with VSWM tasks in which both the presentation format (simultaneous vs. sequential) and the type of configuration (pattern vs. random) were manipulated. Findings indicated that individuals with DS took less advantage of the pattern configuration in the spatial-simultaneous task than TD children; in contrast, the two groups' performance did not differ in the pattern configuration of the spatial-sequential task. Taken together, these results confirmed difficulties relating to the spatial-simultaneous component of VSWM in individuals with DS, supporting the importance of distinguishing between different components within this system. The findings are discussed in terms of factors influencing this specific deficit. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Moreno, Claudia E.; Guevara, Roger; Sánchez-Rojas, Gerardo; Téllez, Dianeis; Verdú, José R.
2008-01-01
Environmental assessment at the community level in highly diverse ecosystems is limited by taxonomic constraints and statistical methods requiring true replicates. Our objective was to show how diverse systems can be studied at the community level using higher taxa as biodiversity surrogates, and re-sampling methods to allow comparisons. To illustrate this we compared the abundance, richness, evenness and diversity of the litter fauna in a pine-oak forest in central Mexico among seasons, sites and collecting methods. We also assessed changes in the abundance of trophic guilds and evaluated the relationships between community parameters and litter attributes. With the direct search method we observed differences in the rate of taxa accumulation between sites. Bootstrap analysis showed that abundance varied significantly between seasons and sampling methods, but not between sites. In contrast, diversity and evenness were significantly higher at the managed than at the non-managed site. Tree regression models show that abundance varied mainly between seasons, whereas taxa richness was affected by litter attributes (composition and moisture content). The abundance of trophic guilds varied among methods and seasons, but overall we found that parasitoids, predators and detritivores decreased under management. Therefore, although our results suggest that management has positive effects on the richness and diversity of litter fauna, the analysis of trophic guilds revealed a contrasting story. Our results indicate that functional groups and re-sampling methods may be used as tools for describing community patterns in highly diverse systems. Also, the higher taxa surrogacy could be seen as a preliminary approach when it is not possible to identify the specimens at a low taxonomic level in a reasonable period of time and in a context of limited financial resources, but further studies are needed to test whether the results are specific to a system or whether they are general with regards to land management.
Epistemic uncertainty in the location and magnitude of earthquakes in Italy from Macroseismic data
Bakun, W.H.; Gomez, Capera A.; Stucchi, M.
2011-01-01
Three independent techniques (Bakun and Wentworth, 1997; Boxer from Gasperini et al., 1999; and Macroseismic Estimation of Earthquake Parameters [MEEP; see Data and Resources section, deliverable D3] from R.M.W. Musson and M.J. Jimenez) have been proposed for estimating an earthquake location and magnitude from intensity data alone. The locations and magnitudes obtained for a given set of intensity data are almost always different, and no one technique is consistently best at matching instrumental locations and magnitudes of recent well-recorded earthquakes in Italy. Rather than attempting to select one of the three solutions as best, we use all three techniques to estimate the location and the magnitude and the epistemic uncertainties among them. The estimates are calculated using bootstrap resampled data sets with Monte Carlo sampling of a decision tree. The decision-tree branch weights are based on goodness-of-fit measures of location and magnitude for recent earthquakes. The location estimates are based on the spatial distribution of locations calculated from the bootstrap resampled data. The preferred source location is the locus of the maximum bootstrap location spatial density. The location uncertainty is obtained from contours of the bootstrap spatial density: 68% of the bootstrap locations are within the 68% confidence region, and so on. For large earthquakes, our preferred location is not associated with the epicenter but with a location on the extended rupture surface. For small earthquakes, the epicenters are generally consistent with the location uncertainties inferred from the intensity data if an epicenter inaccuracy of 2-3 km is allowed. The preferred magnitude is the median of the distribution of bootstrap magnitudes. As with location uncertainties, the uncertainties in magnitude are obtained from the distribution of bootstrap magnitudes: the bounds of the 68% uncertainty range enclose 68% of the bootstrap magnitudes, and so on. The instrumental magnitudes for large and small earthquakes are generally consistent with the confidence intervals inferred from the distribution of bootstrap resampled magnitudes.
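The bootstrap-plus-decision-tree scheme can be pictured with the following sketch; the three estimator functions are hypothetical stand-ins for the Bakun-Wentworth, Boxer and MEEP techniques, and the branch weights, replicate count and percentile levels are illustrative only.

```python
import numpy as np

def bootstrap_macroseismic(intensities, estimators, weights, n_boot=1000, rng=None):
    """Bootstrap intensity data and Monte Carlo-sample a decision tree.

    `intensities` is an array of intensity observations (one row per site);
    `estimators` is a list of functions, each returning (lat, lon, magnitude)
    from a set of observations; `weights` are the decision-tree branch
    weights. Returns all bootstrap solutions plus the median magnitude and
    its 68% bounds; location density contours can be built from the
    (lat, lon) columns of the solutions.
    """
    rng = np.random.default_rng(rng)
    intensities = np.asarray(intensities)
    n = len(intensities)
    solutions = []
    for _ in range(n_boot):
        sample = intensities[rng.integers(0, n, size=n)]        # bootstrap resample
        estimator = estimators[rng.choice(len(estimators), p=weights)]
        solutions.append(estimator(sample))
    sols = np.asarray(solutions)
    mag_med = np.median(sols[:, 2])
    mag_68 = np.quantile(sols[:, 2], [0.16, 0.84])
    return sols, mag_med, mag_68
```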
NASA Astrophysics Data System (ADS)
Royer, A.; Larue, F.; De Sève, D.; Roy, A.; Vionnet, V.; Picard, G.; Cosme, E.
2017-12-01
Over northern snow-dominated basins, the snow water equivalent (SWE) is of primary interest for spring streamflow forecasting. SWE retrievals from satellite data are still not well resolved, in particular from microwave (MW) measurements, the only type of data sensitive to snow mass. Also, the use of snowpack models is challenging due to the large uncertainties in meteorological input forcings. This project aims to improve SWE prediction by assimilation of satellite brightness temperature (TB), without any ground-based observations. The proposed approach is the coupling of a detailed multilayer snowpack model (Crocus) with a MW snow emission model (DMRT-ML). The assimilation scheme is a Sequential Importance Resampling Particle filter, through ensembles of perturbed meteorological forcings according to their respective uncertainties. Crocus simulations driven by operational meteorological forecasts from the Canadian Global Environmental Multiscale model at 10 km spatial resolution were compared to continuous daily SWE measurements over Québec, North-Eastern Canada (56° - 45°N). The results show that the maximum SWE is overestimated by 16% on average, with variations up to +32%. This large variability could have dramatic consequences for spring flood forecasts. Results of Crocus-DMRT-ML coupling compared to surface-based TB measurements (at 11, 19 and 37 GHz) show that the Crocus snowpack microstructure described by sticky hard spheres within DMRT has to be scaled by a snow stickiness of 0.18, significantly reducing the overall RMSE of simulated TBs. The ability of assimilation of daily TBs to correct the simulated SWE is first presented through twin experiments with synthetic data, and then with AMSR-2 satellite time series of TBs along the winter, taking into account atmospheric and forest canopy interferences (absorption and emission). The differences between TBs at 19-37 GHz and at 11-19 GHz, in vertical polarization, were assimilated. The assimilation test with synthetic data reduces the SWE RMSE by a factor of 2. Assimilation of AMSR-2 TBs shows improvement in SWE retrievals compared to continuous in-situ SWE measurements. The accuracy is discussed as a function of boreal forest density and LAI (MODIS-based data), both of which have significant effects.
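For illustration, a bare-bones Sequential Importance Resampling step of the kind used here might look as follows; `propagate` and `simulate_tb` are hypothetical stand-ins for a Crocus run under perturbed forcing and a DMRT-ML brightness-temperature (difference) simulation, and a scalar Gaussian observation error is assumed.

```python
import numpy as np

def sir_step(particles, propagate, simulate_tb, tb_obs, obs_sigma, rng=None):
    """One Sequential Importance Resampling step for TB assimilation.

    Each particle is a snowpack state; `propagate(state, rng)` advances it
    under its own perturbed forcing and `simulate_tb(state)` returns the
    simulated brightness-temperature quantity compared with `tb_obs`.
    """
    rng = np.random.default_rng(rng)
    particles = [propagate(p, rng) for p in particles]
    tb_sim = np.array([simulate_tb(p) for p in particles])
    # Gaussian likelihood weights (assumes they do not all underflow to zero)
    w = np.exp(-0.5 * ((tb_sim - tb_obs) / obs_sigma) ** 2)
    w /= w.sum()
    # systematic resampling of the ensemble according to the weights
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), n - 1)
    return [particles[i] for i in idx]
```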
Sleep to the beat: A nap favours consolidation of timing.
Verweij, Ilse M; Onuki, Yoshiyuki; Van Someren, Eus J W; Van der Werf, Ysbrand D
2016-06-01
Growing evidence suggests that sleep is important for procedural learning, but few studies have investigated the effect of sleep on the temporal aspects of motor skill learning. We assessed the effect of a 90-min day-time nap on learning a motor timing task, using 2 adaptations of a serial interception sequence learning (SISL) task. Forty-two right-handed participants performed the task before and after a 90-min period of sleep or wake. Electroencephalography (EEG) was recorded throughout. The motor task consisted of a sequential spatial pattern and was performed according to 2 different timing conditions, that is, either following a sequential or a random temporal pattern. The increase in accuracy was compared between groups using a mixed linear regression model. Within the sleep group, performance improvement was modeled based on sleep characteristics, including spindle- and slow-wave density. The sleep group, but not the wake group, showed improvement in the random temporal, but especially and significantly more strongly in the sequential temporal condition. None of the sleep characteristics predicted improvement in general performance or in either of the timing conditions. In conclusion, a daytime nap improves performance on a timing task. We show that performance on the task with a sequential timing sequence benefits more from sleep than motor timing. More importantly, the temporal sequence did not benefit initial learning, because differences arose only after an offline period and specifically when this period contained sleep. Sleep appears to aid in the extraction of regularities for optimal subsequent performance. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Temporal texture of associative encoding modulates recall processes.
Tibon, Roni; Levy, Daniel A
2014-02-01
Binding aspects of an experience that are distributed over time is an important element of episodic memory. In the current study, we examined how the temporal complexity of an experience may govern the processes required for its retrieval. We recorded event-related potentials during episodic cued recall following pair associate learning of concurrently and sequentially presented object-picture pairs. Cued recall success effects over anterior and posterior areas were apparent in several time windows. In anterior locations, these recall success effects were similar for concurrently and sequentially encoded pairs. However, in posterior sites clustered over parietal scalp the effect was larger for the retrieval of sequentially encoded pairs. We suggest that anterior aspects of the mid-latency recall success effects may reflect working-with-memory operations or direct access recall processes, while more posterior aspects reflect recollective processes which are required for retrieval of episodes of greater temporal complexity. Copyright © 2013 Elsevier Inc. All rights reserved.
PARTICLE FILTERING WITH SEQUENTIAL PARAMETER LEARNING FOR NONLINEAR BOLD fMRI SIGNALS.
Xia, Jing; Wang, Michelle Yongmei
Analyzing the blood oxygenation level dependent (BOLD) effect in the functional magnetic resonance imaging (fMRI) is typically based on recent ground-breaking time series analysis techniques. This work represents a significant improvement over existing approaches to system identification using nonlinear hemodynamic models. It is important for three reasons. First, instead of using linearized approximations of the dynamics, we present a nonlinear filtering based on the sequential Monte Carlo method to capture the inherent nonlinearities in the physiological system. Second, we simultaneously estimate the hidden physiological states and the system parameters through particle filtering with sequential parameter learning to fully take advantage of the dynamic information of the BOLD signals. Third, during the unknown static parameter learning, we employ the low-dimensional sufficient statistics for efficiency and avoiding potential degeneration of the parameters. The performance of the proposed method is validated using both the simulated data and real BOLD fMRI data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Padbury, Richard P.; Jur, Jesse S., E-mail: jsjur@ncsu.edu
Previous research exploring inorganic materials nucleation behavior on polymers via atomic layer deposition indicates the formation of hybrid organic–inorganic materials that form within the subsurface of the polymer. This has inspired adaptations to the process, such as sequential vapor infiltration, which enhances the diffusion of organometallic precursors into the subsurface of the polymer to promote the formation of a hybrid organic–inorganic coating. This work highlights the fundamental difference in mass uptake behavior between atomic layer deposition and sequential vapor infiltration using in-situ methods. In particular, in-situ quartz crystal microgravimetry is used to compare the mass uptake behavior of trimethyl aluminum in poly(butylene terephthalate) and polyamide-6 polymer thin films. The importance of trimethyl aluminum diffusion into the polymer subsurface and the subsequent chemical reactions with polymer functional groups are discussed.
Alant, Erna; Kolatsis, Anna; Lilienfeld, Margi
2010-03-01
An important aspect in AAC concerns the user's ability to locate an aided visual symbol on a communication display in order to facilitate meaningful interaction with partners. Recent studies have suggested that the use of different colored symbols may be influential in the visual search process, and that this, in turn will influence the speed and accuracy of symbol location. This study examined the role of color on rate and accuracy of identifying symbols on an 8-location overlay through the use of 3 color conditions (same, mixed and unique). Sixty typically developing preschool children were exposed to two different sequential exposures (Set 1 and Set 2). Participants searched for a target stimulus (either meaningful symbols or arbitrary forms) in a stimuli array. Findings indicated that the sequential exposures (orderings) impacted both time and accuracy for both types of symbols within specific instances.
Mining sequential patterns for protein fold recognition.
Exarchos, Themis P; Papaloukas, Costas; Lampros, Christos; Fotiadis, Dimitrios I
2008-02-01
Protein data contain discriminative patterns that can be used in many beneficial applications if they are defined correctly. In this work sequential pattern mining (SPM) is utilized for sequence-based fold recognition. Protein classification in terms of fold recognition plays an important role in computational protein analysis, since it can contribute to the determination of the function of a protein whose structure is unknown. Specifically, one of the most efficient SPM algorithms, cSPADE, is employed for the analysis of protein sequence. A classifier uses the extracted sequential patterns to classify proteins in the appropriate fold category. For training and evaluating the proposed method we used the protein sequences from the Protein Data Bank and the annotation of the SCOP database. The method exhibited an overall accuracy of 25% in a classification problem with 36 candidate categories. The classification performance reaches up to 56% when the five most probable protein folds are considered.
Partnership for Edge Physics (EPSI), University of Texas Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moser, Robert; Carey, Varis; Michoski, Craig
Simulations of tokamak plasmas require a number of inputs whose values are uncertain. The effects of these input uncertainties on the reliability of model predictions are of great importance when validating predictions by comparison to experimental observations, and when using the predictions for design and operation of devices. However, high fidelity simulations of tokamak plasmas, particularly those aimed at characterizing the edge plasma physics, are computationally expensive, so lower cost surrogates are required to enable practical uncertainty estimates. Two surrogate modeling techniques have been explored in the context of tokamak plasma simulations using the XGC family of plasma simulation codes. The first is a response surface surrogate, and the second is an augmented surrogate relying on scenario extrapolation. In addition, to reduce the costs of the XGC simulations, a particle resampling algorithm was developed, which allows marker particle distributions to be adjusted to maintain optimal importance sampling. This means that the total number of particles in a simulation, and therefore its cost, can be reduced while maintaining the same accuracy.
Schofield, Pamela J.; Slack, W. Todd; Peterson, Mark S.; Gregoire, Denise R.
2007-01-01
We provide information about the effects of Hurricane Katrina on populations of an invasive fish, the Nile tilapia (Oreochromis niloticus) in southern Mississippi. By resampling areas surveyed before the storm, we attempted to determine whether the species expanded its range by moving with storm-related floods. Additionally, we used rotenone to eradicate individuals of this species at a hurricane-damaged aquaculture facility on the Mississippi coast. Although our survey was limited geographically, we did not find the species to occur beyond the aquaculture facility, other than in an adjacent bayou. Our rotenone treatment of the facility appeared effective with only a single O. niloticus being collected six weeks after the treatment. To reduce the spread of O. niloticus in the southeastern U.S., it is important to continue to control feral populations, work to eliminate vectors for dispersal, and continue monitoring their distribution.
A Powerful Test for Comparing Multiple Regression Functions.
Maity, Arnab
2012-09-01
In this article, we address the important problem of comparison of two or more population regression functions. Recently, Pardo-Fernández, Van Keilegom and González-Manteiga (2007) developed test statistics for simple nonparametric regression models, Y_ij = θ_j(Z_ij) + σ_j(Z_ij) ε_ij, based on empirical distributions of the errors in each population j = 1, … , J. In this paper, we propose a test for equality of the θ_j(·) based on the concept of generalized likelihood ratio type statistics. We also generalize our test for other nonparametric regression setups, e.g., nonparametric logistic regression, where the loglikelihood for population j is any general smooth function [Formula: see text]. We describe a resampling procedure to obtain the critical values of the test. In addition, we present a simulation study to evaluate the performance of the proposed test and compare our results to those in Pardo-Fernández et al. (2007).
A Stochastic Climate Generator for Agriculture in Southeast Asian Domains
NASA Astrophysics Data System (ADS)
Greene, A. M.; Allis, E. C.
2014-12-01
We extend a previously-described method for generating future climate scenarios, suitable for driving agricultural models, to selected domains in Lao PDR, Bangladesh and Indonesia. There are notable differences in climatology among the study regions, most importantly the inverse seasonal relationship of southeast Asian and Australian monsoons. These differences necessitate a partially-differentiated modeling approach, utilizing common features for better estimation while allowing independent modeling of divergent attributes. The method attempts to constrain uncertainty due to both anthropogenic and natural influences, providing a measure of how these effects may combine during specified future decades. Seasonal climate fields are downscaled to the daily time step by resampling the AgMERRA dataset, providing a full suite of agriculturally relevant variables and enabling the propagation of climate uncertainty to agricultural outputs. The role of this research in a broader project, conducted under the auspices of the International Fund for Agricultural Development (IFAD), is discussed.
Spatial Point Pattern Analysis of Neurons Using Ripley's K-Function in 3D
Jafari-Mamaghani, Mehrdad; Andersson, Mikael; Krieger, Patrik
2010-01-01
The aim of this paper is to apply a non-parametric statistical tool, Ripley's K-function, to analyze the 3-dimensional distribution of pyramidal neurons. Ripley's K-function is a widely used tool in spatial point pattern analysis. There are several approaches in 2D domains in which this function is executed and analyzed. Drawing consistent inferences on the underlying 3D point pattern distributions in various applications is of great importance, as the acquisition of 3D biological data now poses less of a challenge due to technological progress. As of now, most of the applications of Ripley's K-function in 3D domains do not focus on the phenomenon of edge correction, which is discussed thoroughly in this paper. The main goal is to extend the theoretical and practical utilization of Ripley's K-function and corresponding tests based on bootstrap resampling from 2D to 3D domains. PMID:20577588
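A naive estimator of Ripley's K in 3D, without the edge correction that the paper focuses on, can be written in a few lines of NumPy/SciPy; the uniform point pattern and radii below are only for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist

def ripley_k_3d(points, radii, volume):
    """Naive 3D Ripley's K estimate, ignoring edge correction.

    K(r) = V / (n (n - 1)) * sum over ordered pairs of 1{d_ij <= r};
    estimates are biased downward near the domain boundary because no
    edge correction is applied here.
    """
    points = np.asarray(points)
    n = len(points)
    d = pdist(points)                       # unordered pair distances
    counts = np.array([(d <= r).sum() for r in radii])
    return volume * 2.0 * counts / (n * (n - 1))

rng = np.random.default_rng(0)
pts = rng.random((500, 3))                  # unit cube, complete spatial randomness
r = np.linspace(0.02, 0.2, 10)
print(ripley_k_3d(pts, r, volume=1.0))      # roughly 4/3*pi*r^3 away from edges
```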
Dembo, Mana; Radovčić, Davorka; Garvin, Heather M; Laird, Myra F; Schroeder, Lauren; Scott, Jill E; Brophy, Juliet; Ackermann, Rebecca R; Musiba, Chares M; de Ruiter, Darryl J; Mooers, Arne Ø; Collard, Mark
2016-08-01
Homo naledi is a recently discovered species of fossil hominin from South Africa. A considerable amount is already known about H. naledi but some important questions remain unanswered. Here we report a study that addressed two of them: "Where does H. naledi fit in the hominin evolutionary tree?" and "How old is it?" We used a large supermatrix of craniodental characters for both early and late hominin species and Bayesian phylogenetic techniques to carry out three analyses. First, we performed a dated Bayesian analysis to generate estimates of the evolutionary relationships of fossil hominins including H. naledi. Then we employed Bayes factor tests to compare the strength of support for hypotheses about the relationships of H. naledi suggested by the best-estimate trees. Lastly, we carried out a resampling analysis to assess the accuracy of the age estimate for H. naledi yielded by the dated Bayesian analysis. The analyses strongly supported the hypothesis that H. naledi forms a clade with the other Homo species and Australopithecus sediba. The analyses were more ambiguous regarding the position of H. naledi within the (Homo, Au. sediba) clade. A number of hypotheses were rejected, but several others were not. Based on the available craniodental data, Homo antecessor, Asian Homo erectus, Homo habilis, Homo floresiensis, Homo sapiens, and Au. sediba could all be the sister taxon of H. naledi. According to the dated Bayesian analysis, the most likely age for H. naledi is 912 ka. This age estimate was supported by the resampling analysis. Our findings have a number of implications. Most notably, they support the assignment of the new specimens to Homo, cast doubt on the claim that H. naledi is simply a variant of H. erectus, and suggest H. naledi is younger than has been previously proposed. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kross, Angela; McNairn, Heather; Lapen, David; Sunohara, Mark; Champagne, Catherine
2015-02-01
Leaf area index (LAI) and biomass are important indicators of crop development, and the availability of this information during the growing season can support farmer decision making processes. This study demonstrates the applicability of RapidEye multi-spectral data for estimation of LAI and biomass of two crop types (corn and soybean) with different canopy structure, leaf structure and photosynthetic pathways. The advantages of RapidEye in terms of increased temporal resolution (∼daily), high spatial resolution (∼5 m) and enhanced spectral information (includes red-edge band) are explored as an individual sensor and as part of a multi-sensor constellation. Seven vegetation indices based on combinations of reflectance in green, red, red-edge and near infrared bands were derived from RapidEye imagery between 2011 and 2013. LAI and biomass data were collected during the same period for calibration and validation of the relationships between vegetation indices and LAI and dry above-ground biomass. Most indices showed sensitivity to LAI from emergence to 8 m²/m². The normalized difference vegetation index (NDVI), the red-edge NDVI and the green NDVI were insensitive to crop type and had coefficients of variation (CV) ranging between 19 and 27% and coefficients of determination ranging between 86 and 88%. The NDVI performed best for the estimation of dry leaf biomass (CV = 27% and r² = 0.90) and was also insensitive to crop type. The red-edge indices did not show any significant improvement in LAI and biomass estimation over traditional multispectral indices. Cumulative vegetation indices showed strong performance for estimation of total dry above-ground biomass, especially for corn (CV ≤ 20%). This study demonstrated that continuous crop LAI monitoring over time and space at the field level can be achieved using a combination of RapidEye, Landsat and SPOT data and sensor-dependent best-fit functions. This approach eliminates/reduces the need for reflectance resampling, VI inter-calibration and spatial resampling.
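The three normalized-difference indices found to be insensitive to crop type are simple band ratios; a minimal sketch follows (the band arguments and the epsilon guard against division by zero are assumptions, with reflectances taken as NumPy arrays).

```python
import numpy as np

def vegetation_indices(green, red, red_edge, nir):
    """Normalized-difference indices from green, red, red-edge and NIR reflectances."""
    eps = 1e-9
    ndvi = (nir - red) / (nir + red + eps)                # standard NDVI
    ndvi_re = (nir - red_edge) / (nir + red_edge + eps)   # red-edge NDVI
    ndvi_g = (nir - green) / (nir + green + eps)          # green NDVI
    return ndvi, ndvi_re, ndvi_g
```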
A new method to obtain ground control points based on SRTM data
NASA Astrophysics Data System (ADS)
Wang, Pu; An, Wei; Deng, Xin-pu; Zhang, Xi
2013-09-01
GCPs are widely used in remote sensing image registration and geometric correction. Normally, DRG and DOM products are the major data sources from which GCPs are extracted, but high-accuracy DRG and DOM products are usually costly to obtain, and the free products come without any accuracy guarantee. To balance cost and accuracy, this paper proposes a method for extracting GCPs from SRTM data. The method consists of manual assistance, binarization, data resampling, and reshaping. Manual assistance is used to identify which parts of the SRTM data can serve as GCPs, such as islands or sharp coastlines. A binarization algorithm then retains the shape information of the region while discarding all other information. The binary data are resampled to the resolution required by the specific application. Finally, the data are reshaped according to the satellite imaging geometry to obtain usable GCPs. The proposed method has three advantages. First, it is easy to implement: unlike DRG or DOM data, which are expensive, SRTM data are freely accessible without restrictions. Second, SRTM data have a producer-stated accuracy of about 90 m, so the GCPs derived from them are also of high quality. Third, because SRTM data cover nearly all land surfaces between latitudes -60° and +60°, the GCPs produced by the method cover most important regions of the world. The method is suitable for meteorological satellite imagery and similar applications with relatively low accuracy requirements. Extensive simulation tests show the method to be convenient and effective.
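A minimal sketch of the binarization and resampling steps, assuming the SRTM tile is already loaded as a 2-D elevation array; the sea-level threshold and the nearest-neighbour index mapping stand in for whatever thresholding and resampling kernel a real implementation would use, and the manual selection and reshaping steps are omitted.

```python
import numpy as np

def srtm_to_gcp_template(srtm_tile, target_shape, sea_level=0):
    """Binarize an SRTM tile into a land/sea mask and resample it.

    Elevations above `sea_level` become 1 (land), the rest 0, so only the
    shape of islands or coastlines is kept; the mask is then resampled to
    `target_shape` by nearest-neighbour index mapping.
    """
    mask = (np.asarray(srtm_tile) > sea_level).astype(np.uint8)
    rows = np.linspace(0, mask.shape[0] - 1, target_shape[0]).round().astype(int)
    cols = np.linspace(0, mask.shape[1] - 1, target_shape[1]).round().astype(int)
    return mask[np.ix_(rows, cols)]
```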
Fast parallel image registration on CPU and GPU for diagnostic classification of Alzheimer's disease
Shamonin, Denis P.; Bron, Esther E.; Lelieveldt, Boudewijn P. F.; Smits, Marion; Klein, Stefan; Staring, Marius
2013-01-01
Nonrigid image registration is an important, but time-consuming task in medical image analysis. In typical neuroimaging studies, multiple image registrations are performed, i.e., for atlas-based segmentation or template construction. Faster image registration routines would therefore be beneficial. In this paper we explore acceleration of the image registration package elastix by a combination of several techniques: (i) parallelization on the CPU, to speed up the cost function derivative calculation; (ii) parallelization on the GPU building on and extending the OpenCL framework from ITKv4, to speed up the Gaussian pyramid computation and the image resampling step; (iii) exploitation of certain properties of the B-spline transformation model; (iv) further software optimizations. The accelerated registration tool is employed in a study on diagnostic classification of Alzheimer's disease and cognitively normal controls based on T1-weighted MRI. We selected 299 participants from the publicly available Alzheimer's Disease Neuroimaging Initiative database. Classification is performed with a support vector machine based on gray matter volumes as a marker for atrophy. We evaluated two types of strategies (voxel-wise and region-wise) that heavily rely on nonrigid image registration. Parallelization and optimization resulted in an acceleration factor of 4–5x on an 8-core machine. Using OpenCL a speedup factor of 2 was realized for computation of the Gaussian pyramids, and 15–60 for the resampling step, for larger images. The voxel-wise and the region-wise classification methods had an area under the receiver operating characteristic curve of 88% and 90%, respectively, both for standard and accelerated registration. We conclude that the image registration package elastix was substantially accelerated, with nearly identical results to the non-optimized version. The new functionality will become available in the next release of elastix as open source under the BSD license. PMID:24474917
Hoomans, Ties; Severens, Johan L; Evers, Silvia M A A; Ament, Andre J H A
2009-01-01
Decisions about clinical practice change, that is, which guidelines to adopt and how to implement them, can be made sequentially or simultaneously. Decision makers adopting a sequential approach first compare the costs and effects of alternative guidelines to select the best set of guideline recommendations for patient management and subsequently examine the implementation costs and effects to choose the best strategy to implement the selected guideline. In an integral approach, decision makers simultaneously decide about the guideline and the implementation strategy on the basis of the overall value for money in changing clinical practice. This article demonstrates that the decision to use a sequential v. an integral approach affects the need for detailed information and the complexity of the decision analytic process. More importantly, it may lead to different choices of guidelines and implementation strategies for clinical practice change. The differences in decision making and decision analysis between the alternative approaches are comprehensively illustrated using 2 hypothetical examples. We argue that, in most cases, an integral approach to deciding about change in clinical practice is preferred, as this provides more efficient use of scarce health-care resources.
PC_Eyewitness: evaluating the New Jersey method.
MacLin, Otto H; Phelan, Colin M
2007-05-01
One important variable in eyewitness identification research is lineup administration procedure. Lineups administered sequentially (one at a time) have been shown to reduce the number of false identifications in comparison with those administered simultaneously (all at once). As a result, some policymakers have adopted sequential administration. However, they have made slight changes to the method used in psychology laboratories. Eyewitnesses in the field are allowed to take multiple passes through a lineup, whereas participants in the laboratory are allowed only one pass. PC_Eyewitness (PCE) is a computerized system used to construct and administer simultaneous or sequential lineups in both the laboratory and the field. It is currently being used in laboratories investigating eyewitness identification in the United States, Canada, and abroad. A modified version of PCE is also being developed for a local police department. We developed a new module for PCE, the New Jersey module, to examine the effects of a second pass. We found that the sequential advantage was eliminated when the participants were allowed to view the lineup a second time. The New Jersey module, and steps we are taking to improve on the module, are presented here and are being made available to the research and law enforcement communities.
A Data Augmentation Approach to Short Text Classification
ERIC Educational Resources Information Center
Rosario, Ryan Robert
2017-01-01
Text classification typically performs best with large training sets, but short texts are very common on the World Wide Web. Can we use resampling and data augmentation to construct larger texts using similar terms? Several current methods exist for working with short text that rely on using external data and contexts, or workarounds. Our focus is…
Mist net effort required to inventory a forest bat species assemblage.
Theodore J. Weller; Danny C. Lee
2007-01-01
Little quantitative information exists about the survey effort necessary to inventory temperate bat species assemblages. We used a bootstrap resampling algorithm to estimate the number of mist net surveys required to capture individuals from 9 species at both study area and site levels using data collected in a forested watershed in northwestern California, USA, during...
Long-Term Soil Chemistry Changes in Aggrading Forest Ecosystems
Jennifer D. Knoepp; Wayne T. Swank
1994-01-01
Assessing potential long-term forest productivity requires identification of the processes regulating chemical changes in forest soils. We resampled the litter layer and upper two mineral soil horizons, A and AB/BA, in two aggrading southern Appalachian watersheds 20 yr after an earlier sampling. Soils from a mixed-hardwood watershed exhibited a small but significant...
ERIC Educational Resources Information Center
Nevitt, Johnathan; Hancock, Gregory R.
Though common structural equation modeling (SEM) methods are predicated upon the assumption of multivariate normality, applied researchers often find themselves with data clearly violating this assumption and without sufficient sample size to use distribution-free estimation methods. Fortunately, promising alternatives are being integrated into…
Jeffrey T. Walton
2008-01-01
Three machine learning subpixel estimation methods (Cubist, Random Forests, and support vector regression) were applied to estimate urban cover. Urban forest canopy cover and impervious surface cover were estimated from Landsat-7 ETM+ imagery using a higher resolution cover map resampled to 30 m as training and reference data. Three different band combinations (...
Fourier Descriptor Analysis and Unification of Voice Range Profile Contours: Method and Applications
ERIC Educational Resources Information Center
Pabon, Peter; Ternstrom, Sten; Lamarche, Anick
2011-01-01
Purpose: To describe a method for unified description, statistical modeling, and comparison of voice range profile (VRP) contours, even from diverse sources. Method: A morphologic modeling technique, which is based on Fourier descriptors (FDs), is applied to the VRP contour. The technique, which essentially involves resampling of the curve of the…
The Relationship of Cohabitation and Mental Health: A Study of a Young Adult Cohort.
ERIC Educational Resources Information Center
Horwitz, Allan V.; White, Helene Raskin
1998-01-01
Uses a cohort of unmarried young adults who were sampled when they were 18, 21, or 24 years old and resampled seven years later. Results indicate no differences between cohabitators and married couples in levels of depression. Cohabitating men report more alcohol problems than married and single men; cohabitating women reported more alcohol…
Techniques for Down-Sampling a Measured Surface Height Map for Model Validation
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2012-01-01
This software allows one to down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also while eliminating the existing measurement noise and measurement errors. The software tool implementing these two new techniques can be used in all optical model validation processes involving large space optical surfaces.
Rangeland exclosures of northeastern Oregon: stories they tell (1936–2004).
Charles Grier Johnson
2007-01-01
Rangeland exclosures installed primarily in the 1960s, but with some from the 1940s, were resampled for changes in plant community structure and composition periodically from 1977 to 2004 on the Malheur, Umatilla, and Wallowa-Whitman National Forests in northeastern Oregon. They allow one to compare vegetation with all-ungulate exclusion (known historically as game...
Collateral Information for Equating in Small Samples: A Preliminary Investigation
ERIC Educational Resources Information Center
Kim, Sooyeon; Livingston, Samuel A.; Lewis, Charles
2011-01-01
This article describes a preliminary investigation of an empirical Bayes (EB) procedure for using collateral information to improve equating of scores on test forms taken by small numbers of examinees. Resampling studies were done on two different forms of the same test. In each study, EB and non-EB versions of two equating methods--chained linear…
Sequential Versus Concurrent Trastuzumab in Adjuvant Chemotherapy for Breast Cancer
Perez, Edith A.; Suman, Vera J.; Davidson, Nancy E.; Gralow, Julie R.; Kaufman, Peter A.; Visscher, Daniel W.; Chen, Beiyun; Ingle, James N.; Dakhil, Shaker R.; Zujewski, JoAnne; Moreno-Aspitia, Alvaro; Pisansky, Thomas M.; Jenkins, Robert B.
2011-01-01
Purpose NCCTG (North Central Cancer Treatment Group) N9831 is the only randomized phase III trial evaluating trastuzumab added sequentially or used concurrently with chemotherapy in resected stages I to III invasive human epidermal growth factor receptor 2–positive breast cancer. Patients and Methods Patients received doxorubicin and cyclophosphamide every 3 weeks for four cycles, followed by paclitaxel weekly for 12 weeks (arm A), paclitaxel plus sequential trastuzumab weekly for 52 weeks (arm B), or paclitaxel plus concurrent trastuzumab for 12 weeks followed by trastuzumab for 40 weeks (arm C). The primary end point was disease-free survival (DFS). Results Comparison of arm A (n = 1,087) and arm B (n = 1,097), with 6-year median follow-up and 390 events, revealed 5-year DFS rates of 71.8% and 80.1%, respectively. DFS was significantly increased with trastuzumab added sequentially to paclitaxel (log-rank P < .001; arm B/arm A hazard ratio [HR], 0.69; 95% CI, 0.57 to 0.85). Comparison of arm B (n = 954) and arm C (n = 949), with 6-year median follow-up and 313 events, revealed 5-year DFS rates of 80.1% and 84.4%, respectively. There was an increase in DFS with concurrent trastuzumab and paclitaxel relative to sequential administration (arm C/arm B HR, 0.77; 99.9% CI, 0.53 to 1.11), but the P value (.02) did not cross the prespecified O'Brien-Fleming boundary (.00116) for the interim analysis. Conclusion DFS was significantly improved with 52 weeks of trastuzumab added to adjuvant chemotherapy. On the basis of a positive risk-benefit ratio, we recommend that trastuzumab be incorporated into a concurrent regimen with taxane chemotherapy as an important standard-of-care treatment alternative to a sequential regimen. PMID:22042958
Gruber, Ranit; Levitt, Michael; Horovitz, Amnon
2017-01-01
Knowing the mechanism of allosteric switching is important for understanding how molecular machines work. The CCT/TRiC chaperonin nanomachine undergoes ATP-driven conformational changes that are crucial for its folding function. Here, we demonstrate that insight into its allosteric mechanism of ATP hydrolysis can be achieved by Arrhenius analysis. Our results show that ATP hydrolysis triggers sequential "conformational waves." They also suggest that these waves start from subunits CCT6 and CCT8 (or CCT3 and CCT6) and proceed clockwise and counterclockwise, respectively. PMID:28461478
The sequential megafaunal collapse hypothesis: Testing with existing data
NASA Astrophysics Data System (ADS)
DeMaster, Douglas P.; Trites, Andrew W.; Clapham, Phillip; Mizroch, Sally; Wade, Paul; Small, Robert J.; Hoef, Jay Ver
2006-02-01
Springer et al. [Springer, A.M., Estes, J.A., van Vliet, G.B., Williams, T.M., Doak, D.F., Danner, E.M., Forney, K.A., Pfister, B., 2003. Sequential megafaunal collapse in the North Pacific Ocean: an ongoing legacy of industrial whaling? Proceedings of the National Academy of Sciences 100 (21), 12,223-12,228] hypothesized that great whales were an important prey resource for killer whales, and that the removal of fin and sperm whales by commercial whaling in the region of the Bering Sea/Aleutian Islands (BSAI) in the late 1960s and 1970s led to cascading trophic interactions that caused the sequential decline of populations of harbor seal, northern fur seal, Steller sea lion and northern sea otter. This hypothesis, referred to as the Sequential Megafaunal Collapse (SMC), has stirred considerable interest because of its implication for ecosystem-based management. The SMC has the following assumptions: (1) fin whales and sperm whales were important as prey species in the Bering Sea; (2) the biomass of all large whale species (i.e., North Pacific right, fin, humpback, gray, sperm, minke and bowhead whales) was in decline in the Bering Sea in the 1960s and early 1970s; and (3) pinniped declines in the 1970s and 1980s were sequential. We concluded that the available data are not consistent with the first two assumptions of the SMC. Statistical tests of the timing of the declines do not support the assumption that pinniped declines were sequential. We propose two alternative hypotheses for the declines that are more consistent with the available data. While it is plausible, from energetic arguments, for predation by killer whales to have been an important factor in the declines of one or more of the three populations of pinnipeds and the sea otter population in the BSAI region over the last 30 years, we hypothesize that the declines in pinniped populations in the BSAI can best be understood by invoking a multiple factor hypothesis that includes both bottom-up forcing (as indicated by evidence of nutritional stress in the western Steller sea lion population) and top-down forcing (e.g., predation by killer whales, mortality incidental to commercial fishing, directed harvests). Our second hypothesis is a modification of the top-down forcing mechanism (i.e., killer whale predation on one or more of the pinniped populations and the sea otter population is mediated via the recovery of the eastern North Pacific population of the gray whale). We remain skeptical about the proposed link between commercial whaling on fin and sperm whales, which ended in the mid-1960s, and the observed decline of populations of northern fur seal, harbor seal, and Steller sea lion some 15 years later.
Involving young people in decision making about sequential cochlear implantation.
Ion, Rebecca; Cropper, Jenny; Walters, Hazel
2013-11-01
The National Institute for Health and Clinical Excellence guidelines recommended young people who currently have one cochlear implant be offered assessment for a second, sequential implant, due to the reported improvements in sound localization and speech perception in noise. The possibility and benefits of group information and counselling assessments were considered. Previous research has shown advantages of group sessions involving young people and their families and such groups which also allow young people opportunity to discuss their concerns separately to their parents/guardians are found to be 'hugely important'. Such research highlights the importance of involving children in decision-making processes. Families considering a sequential cochlear implant were invited to a group information/counselling session, which included time for parents and children to meet separately. Fourteen groups were held with approximately four to five families in each session, totalling 62 patients. The sessions were facilitated by the multi-disciplinary team, with a particular psychological focus in the young people's session. Feedback from families has demonstrated positive support for this format. Questionnaire feedback, to which nine families responded, indicated that seven preferred the group session to an individual session and all approved of separate groups for the child and parents/guardians. Overall the group format and psychological focus were well received in this typically surgical setting and emphasized the importance of involving the young person in the decision-making process. This positive feedback also opens up the opportunity to use a group format in other assessment processes.
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar
2016-01-01
This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all-at-once. Instead, only one variable is first identified, and then, Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all 4 variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and then applied to the NASA Langley Uncertainty Quantification Challenge problem.
Kropp, Laura E.; Garg, Manish; Binder, Robert J.
2010-01-01
Cellular peptides generated by proteasomal degradation of proteins in the cytosol and destined for presentation by MHC I are associated with several chaperones. Hsp70, hsp90 and the TCP1-ring complex have been implicated as important cytosolic players for chaperoning these peptides. In this study we report that gp96 and calreticulin are essential for chaperoning peptides in the endoplasmic reticulum. Importantly we demonstrate that cellular peptides are transferred sequentially from gp96 to calreticulin and then to MHC I forming a relay line. Disruption of this relay line by removal of gp96 or calreticulin prevents the binding of peptides by MHC I and hence presentation of the MHC I-peptide complex on the cell surface. Our results are important for understanding how peptides are processed and trafficked within the endoplasmic reticulum before exiting in association with MHC I heavy chains and β2-microglobulin as a trimolecular complex. PMID:20410492
NASA Astrophysics Data System (ADS)
Richman, Michael B.; Gong, Xiaofeng
1999-06-01
When applying eigenanalysis, one decision analysts make is the determination of what magnitude an eigenvector coefficient (e.g., principal component (PC) loading) must achieve to be considered as physically important. Such coefficients can be displayed on maps or in a time series or tables to gain a fuller understanding of a large array of multivariate data. Previously, such a decision on what value of loading designates a useful signal (hereafter called the loading 'cutoff') for each eigenvector has been purely subjective. The importance of selecting such a cutoff is apparent since those loading elements in the range of zero to the cutoff are ignored in the interpretation and naming of PCs since only the absolute values of loadings greater than the cutoff are physically analyzed. This research sets out to objectify the problem of best identifying the cutoff by application of matching between known correlation/covariance structures and their corresponding eigenpatterns, as this cutoff point (known as the hyperplane width) is varied. A Monte Carlo framework is used to resample at five sample sizes. Fourteen different hyperplane cutoff widths are tested, bootstrap resampled 50 times to obtain stable results. The key findings are that the location of an optimal hyperplane cutoff width (one which maximized the information content match between the eigenvector and the parent dispersion matrix from which it was derived) is a well-behaved unimodal function. On an individual eigenvector, this enables the unique determination of a hyperplane cutoff value to be used to separate those loadings that best reflect the relationships from those that do not. The effects of sample size on the matching accuracy are dramatic as the values for all solutions (i.e., unrotated, rotated) rose steadily from 25 through 250 observations and then weakly thereafter. The specific matching coefficients are useful to assess the penalties incurred when one analyzes eigenvector coefficients of a lower absolute value than the cutoff (termed coefficient in the hyperplane) or, alternatively, chooses not to analyze coefficients that contain useful physical signal outside of the hyperplane. Therefore, this study enables the analyst to make the best use of the information available in their PCs to shed light on complicated data structures.
Esser, Sarah; Haider, Hilde
2017-01-01
The Serial Reaction Time Task (SRTT) is an important paradigm to study the properties of unconscious learning processes. One specifically interesting and still controversially discussed topic concerns the conditions under which unconsciously acquired knowledge becomes conscious knowledge. The different assumptions about the underlying mechanisms can contrastively be separated into two accounts: single system views, in which the strengthening of associative weights throughout training gradually turns implicit knowledge into explicit knowledge, and dual system views, in which implicit knowledge itself does not become conscious. Rather, it requires a second process which detects changes in performance and is able to acquire conscious knowledge. In a series of three experiments, we manipulated the arrangement of sequential and deviant trials. In an SRTT training, participants either received mini-blocks of sequential trials followed by mini-blocks of deviant trials (22 trials each) or they received sequential and deviant trials mixed randomly. Importantly, the number of correct and deviant transitions was the same for both conditions. Experiment 1 showed that both conditions acquired a comparable amount of implicit knowledge, expressed in different test tasks. Experiment 2 further demonstrated that both conditions differed in their subjectively experienced fluency of the task, with more fluency experienced when trained with mini-blocks. Lastly, Experiment 3 revealed that the participants trained with longer mini-blocks of sequential and deviant material developed more explicit knowledge. Results are discussed regarding their compatibility with different assumptions about the emergence of explicit knowledge in an implicit learning situation, especially with respect to the role of metacognitive judgements and more specifically the Unexpected-Event Hypothesis.
Xu, Shuozhi; Xiong, Liming; Chen, Youping; ...
2016-01-29
Sequential slip transfer across grain boundaries (GB) has an important role in size-dependent propagation of plastic deformation in polycrystalline metals. For example, the Hall–Petch effect, which states that a smaller average grain size results in a higher yield stress, can be rationalised in terms of dislocation pile-ups against GBs. In spite of extensive studies in modelling individual phases and grains using atomistic simulations, well-accepted criteria of slip transfer across GBs are still lacking, as well as models of predicting irreversible GB structure evolution. Slip transfer is inherently multiscale since both the atomic structure of the boundary and the long-range fields of the dislocation pile-up come into play. In this work, concurrent atomistic-continuum simulations are performed to study sequential slip transfer of a series of curved dislocations from a given pile-up on Σ3 coherent twin boundary (CTB) in Cu and Al, with dominant leading screw character at the site of interaction. A Frank-Read source is employed to nucleate dislocations continuously. It is found that subject to a shear stress of 1.2 GPa, screw dislocations transfer into the twinned grain in Cu, but glide on the twin boundary plane in Al. Moreover, four dislocation/CTB interaction modes are identified in Al, which are affected by (1) applied shear stress, (2) dislocation line length, and (3) dislocation line curvature. Our results elucidate the discrepancies between atomistic simulations and experimental observations of dislocation-GB reactions and highlight the importance of directly modeling sequential dislocation slip transfer reactions using fully 3D models.
Anokye, Nana Kwame; Pokhrel, Subhash; Buxton, Martin; Fox-Rushby, Julia
2013-06-01
Little is known about the correlates of meeting recommended levels of participation in physical activity (PA) and how this understanding informs public health policies on behaviour change. To analyse who meets the recommended level of participation in PA in males and females separately by applying 'process' modelling frameworks (single vs. sequential 2-step process). Using the Health Survey for England 2006, (n = 14 142; ≥ 16 years), gender-specific regression models were estimated using bivariate probit with selectivity correction and single probit models. A 'sequential, 2-step process' modelled participation and meeting the recommended level separately, whereas the 'single process' considered both participation and level together. In females, meeting the recommended level was associated with degree holders [Marginal effect (ME) = 0.013] and age (ME = -0.001), whereas in males, age was a significant correlate (ME = -0.003 to -0.004). The order of importance of correlates was similar across genders, with ethnicity being the most important correlate in both males (ME = -0.060) and females (ME = -0.133). In females, the 'sequential, 2-step process' performed better (ρ = -0.364, P < 0.001) than that in males (ρ = 0.154). The degree to which people undertake the recommended level of PA through vigorous activity varies between males and females, and the process that best predicts such decisions, i.e. whether it is a sequential, 2-step process or a single-step choice, is also different for males and females. Understanding this should help to identify subgroups that are less likely to meet the recommended level of PA (and hence more likely to benefit from any PA promotion intervention).
Vedantham, Srinivasan; Shrestha, Suman; Karellas, Andrew; Shi, Linxi; Gounis, Matthew J.; Bellazzini, Ronaldo; Spandre, Gloria; Brez, Alessandro; Minuti, Massimo
2016-01-01
Purpose: High-resolution, photon-counting, energy-resolved detector with fast-framing capability can facilitate simultaneous acquisition of precontrast and postcontrast images for subtraction angiography without pixel registration artifacts and can facilitate high-resolution real-time imaging during image-guided interventions. Hence, this study was conducted to determine the spatial resolution characteristics of a hexagonal pixel array photon-counting cadmium telluride (CdTe) detector. Methods: A 650 μm thick CdTe Schottky photon-counting detector capable of concurrently acquiring up to two energy-windowed images was operated in a single energy-window mode to include photons of 10 keV or higher. The detector had hexagonal pixels with apothem of 30 μm resulting in pixel pitch of 60 and 51.96 μm along the two orthogonal directions. The detector was characterized at IEC-RQA5 spectral conditions. Linear response of the detector was determined over the air kerma rate relevant to image-guided interventional procedures ranging from 1.3 nGy/frame to 91.4 μGy/frame. Presampled modulation transfer was determined using a tungsten edge test device. The edge-spread function and the finely sampled line spread function accounted for hexagonal sampling, from which the presampled modulation transfer function (MTF) was determined. Since detectors with hexagonal pixels require resampling to square pixels for distortion-free display, the optimal square pixel size was determined by minimizing the root-mean-squared-error of the aperture functions for the square and hexagonal pixels up to the Nyquist limit. Results: At Nyquist frequencies of 8.33 and 9.62 cycles/mm along the apothem and orthogonal to the apothem directions, the modulation factors were 0.397 and 0.228, respectively. For the corresponding axis, the limiting resolution defined as 10% MTF occurred at 13.3 and 12 cycles/mm, respectively. Evaluation of the aperture functions yielded an optimal square pixel size of 54 μm. After resampling to 54 μm square pixels using trilinear interpolation, the presampled MTF at Nyquist frequency of 9.26 cycles/mm was 0.29 and 0.24 along the orthogonal directions and the limiting resolution (10% MTF) occurred at approximately 12 cycles/mm. Visual analysis of a bar pattern image showed the ability to resolve close to 12 line-pairs/mm and qualitative evaluation of a neurovascular nitinol-stent showed the ability to visualize its struts at clinically relevant conditions. Conclusions: Hexagonal pixel array photon-counting CdTe detector provides high spatial resolution in single-photon counting mode. After resampling to optimal square pixel size for distortion-free display, the spatial resolution is preserved. The dual-energy capabilities of the detector could allow for artifact-free subtraction angiography and basis material decomposition. The proposed high-resolution photon-counting detector with energy-resolving capability can be of importance for several image-guided interventional procedures as well as for pediatric applications. PMID:27147324
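The generic edge-method chain summarized above (edge-spread function, differentiation to a line-spread function, Fourier transform to the presampled MTF) can be illustrated with a short sketch. This is not the study's analysis code: the synthetic logistic edge, its width, and the 54 μm pitch below are assumptions for illustration, and the hexagonal-sampling corrections described in the abstract are omitted.

```python
# Illustrative sketch of the generic ESF -> LSF -> presampled MTF chain.
# Synthetic edge profile and 54 um square-pixel pitch are assumptions.
import numpy as np

dx = 0.054                                  # sampling pitch in mm (54 um square pixels)
x = np.arange(-2, 2 + dx, dx)               # even number of samples across the edge
esf = 1.0 / (1.0 + np.exp(-x / 0.03))       # synthetic edge-spread function

lsf = np.gradient(esf, dx)                  # line-spread function = derivative of ESF
lsf /= lsf.sum() * dx                       # normalize to unit area so MTF(0) = 1

mtf = np.abs(np.fft.rfft(lsf)) * dx         # presampled MTF magnitude
freq = np.fft.rfftfreq(lsf.size, d=dx)      # spatial frequency in cycles/mm

nyquist = 1.0 / (2.0 * dx)                  # ~9.26 cycles/mm for 54 um pixels
print(f"MTF near Nyquist ({nyquist:.2f} cy/mm): {np.interp(nyquist, freq, mtf):.2f}")
```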
Contrasting natural regeneration and tree planting in fourteen North American cities
David J. Nowak
2012-01-01
Field data from randomly located plots in 12 cities in the United States and Canada were used to estimate the proportion of the existing tree population that was planted or occurred via natural regeneration. In addition, two cities (Baltimore and Syracuse) were recently re-sampled to estimate the proportion of newly established trees that were planted. Results for the...
Grain Size and Parameter Recovery with TIMSS and the General Diagnostic Model
ERIC Educational Resources Information Center
Skaggs, Gary; Wilkins, Jesse L. M.; Hein, Serge F.
2016-01-01
The purpose of this study was to explore the degree of grain size of the attributes and the sample sizes that can support accurate parameter recovery with the General Diagnostic Model (GDM) for a large-scale international assessment. In this resampling study, bootstrap samples were obtained from the 2003 Grade 8 TIMSS in Mathematics at varying…
NASA Astrophysics Data System (ADS)
Coupon, Jean; Leauthaud, Alexie; Kilbinger, Martin; Medezinski, Elinor
2017-07-01
SWOT (Super W Of Theta) computes two-point statistics for very large data sets, based on “divide and conquer” algorithms, mainly, but not limited to, data storage in binary trees, approximation at large scale, parallelization (open MPI), and bootstrap and jackknife resampling methods “on the fly”. It currently supports projected and 3D galaxy auto and cross correlations, galaxy-galaxy lensing, and weighted histograms.
Confidence Limits for the Indirect Effect: Distribution of the Product and Resampling Methods
ERIC Educational Resources Information Center
MacKinnon, David P.; Lockwood, Chondra M.; Williams, Jason
2004-01-01
The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal…
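As a hedged illustration of the contrast the abstract draws, the sketch below estimates an indirect effect a*b from two least-squares fits and builds a percentile-bootstrap confidence interval for it. It is not the authors' procedure; the simulated variables x (predictor), m (mediator), and y (outcome) are hypothetical.

```python
# Minimal sketch (not the authors' code): indirect effect a*b with a
# percentile bootstrap confidence interval. Simulated data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # mediator
y = 0.4 * m + 0.1 * x + rng.normal(size=n)   # outcome

def ab_paths(x, m, y):
    """Return a (x -> m) and b (m -> y, controlling for x) via least squares."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a, b

a, b = ab_paths(x, m, y)

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)              # resample cases with replacement
    boot.append(np.prod(ab_paths(x[idx], m[idx], y[idx])))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {a*b:.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```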
Synchronizing data from irregularly sampled sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uluyol, Onder
A system and method include receiving a set of sampled measurements for each of multiple sensors, wherein the sampled measurements are at irregular intervals or different rates, re-sampling the sampled measurements of each of the multiple sensors at a higher rate than one of the sensor's set of sampled measurements, and synchronizing the sampled measurements of each of the multiple sensors.
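A minimal sketch of the synchronization idea described above, assuming linear interpolation onto a common higher-rate grid; the function and sensor names are illustrative and this is not the patented system.

```python
# Minimal sketch: re-sample irregularly timed sensor measurements onto one
# common, higher-rate time grid so that the sensors are synchronized.
import numpy as np

def synchronize(sensors, rate_hz):
    """sensors: dict name -> (timestamps, values); returns common grid + resampled values."""
    t_start = max(t[0] for t, _ in sensors.values())
    t_end = min(t[-1] for t, _ in sensors.values())
    grid = np.arange(t_start, t_end, 1.0 / rate_hz)
    # Linear interpolation as the simplest re-sampling rule
    return grid, {name: np.interp(grid, t, v) for name, (t, v) in sensors.items()}

rng = np.random.default_rng(1)
sensors = {
    "temp":  (np.sort(rng.uniform(0, 10, 40)), rng.normal(20, 1, 40)),    # irregular, ~4 Hz
    "press": (np.sort(rng.uniform(0, 10, 15)), rng.normal(101, 2, 15)),   # irregular, ~1.5 Hz
}
grid, synced = synchronize(sensors, rate_hz=10.0)   # higher than either native rate
```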
J. Travis Swaim; Daniel C. Dey; Michael R. Saunders; Dale R. Weigel; Christopher D. Thornton; John M. Kabrick; Michael A. Jenkins
2016-01-01
We resampled plots from a repeated measures study implemented on the Hoosier National Forest (HNF) in southern Indiana in 1988 to investigate the influence of site and seedling physical attributes on height growth and establishment success of oak species (Quercus spp.) reproduction in stands regenerated by the clearcut method. Before harvest, an...
USDA-ARS?s Scientific Manuscript database
Better understanding agriculture’s effect on shallow groundwater quality is needed on the southern Idaho, Twin Falls irrigation tract. In 1999 and 2002-2007 we resampled 10 of the 15 tunnel drains monitored in a late-1960s study to determine the influence of time on NO3-N, dissolved reactive P (DRP)...
ERIC Educational Resources Information Center
Longford, Nicholas T.
Large scale surveys usually employ a complex sampling design and as a consequence, no standard methods for estimation of the standard errors associated with the estimates of population means are available. Resampling methods, such as jackknife or bootstrap, are often used, with reference to their properties of robustness and reduction of bias. A…
Evaluation of burst-mode LDA spectra with implications
NASA Astrophysics Data System (ADS)
Velte, Clara; George, William
2009-11-01
Burst-mode LDA spectra, as described in [1], are compared to spectra obtained from corresponding HWA measurements using the FFT in a round jet and cylinder wake experiment. The phrase "burst-mode LDA" refers to an LDA which operates with at most one particle present in the measuring volume at a time. Due to the random sampling and velocity bias of the LDA signal, the Direct Fourier Transform with accompanying weighting by the measured residence times was applied to obtain a correct interpretation of the spectral estimate. Further, the self-noise was removed as described in [2]. In addition, resulting spectra from common interpolation and uniform resampling techniques are compared to the above mentioned estimates. The burst-mode LDA spectra are seen to concur well with the HWA spectra up to the emergence of the noise floor, caused mainly by the intermittency of the LDA signal. The interpolated and resampled counterparts yield unphysical spectra, which are buried in frequency dependent noise and step noise, except at very high LDA data rates where they perform well up to a limited frequency. [1] Buchhave, P. PhD Thesis, SUNY/Buffalo, 1979. [2] Velte, C.M. PhD Thesis, DTU/Copenhagen, 2009.
A resampling procedure for generating conditioned daily weather sequences
Clark, Martyn P.; Gangopadhyay, Subhrendu; Brandon, David; Werner, Kevin; Hay, Lauren E.; Rajagopalan, Balaji; Yates, David
2004-01-01
A method is introduced to generate conditioned daily precipitation and temperature time series at multiple stations. The method resamples data from the historical record “nens” times for the period of interest (nens = number of ensemble members) and reorders the ensemble members to reconstruct the observed spatial (intersite) and temporal correlation statistics. The weather generator model is applied to 2307 stations in the contiguous United States and is shown to reproduce the observed spatial correlation between neighboring stations, the observed correlation between variables (e.g., between precipitation and temperature), and the observed temporal correlation between subsequent days in the generated weather sequence. The weather generator model is extended to produce sequences of weather that are conditioned on climate indices (in this case the Niño 3.4 index). Example illustrations of conditioned weather sequences are provided for a station in Arizona (Petrified Forest, 34.8°N, 109.9°W), where El Niño and La Niña conditions have a strong effect on winter precipitation. The conditioned weather sequences generated using the methods described in this paper are appropriate for use as input to hydrologic models to produce multiseason forecasts of streamflow.
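A single-station sketch in the spirit of the resample-and-reorder procedure described above (not the authors' code): candidate values are resampled from the historical record for each day, then reordered so that member ranks follow a rank template drawn from observed years, which restores realistic day-to-day correlation. The synthetic precipitation record and the Schaake-shuffle-style reordering are assumptions of this illustration.

```python
# Sketch of resample-then-reorder for one station; synthetic data assumed.
import numpy as np

rng = np.random.default_rng(2)
nyears, ndays, nens = 50, 30, 20
historical = rng.gamma(shape=0.6, scale=5.0, size=(nyears, ndays))   # past daily precipitation

# Step 1: resample nens candidate values for each day independently
ensemble = np.stack([rng.choice(historical[:, d], size=nens, replace=True)
                     for d in range(ndays)], axis=1)                 # shape (nens, ndays)

# Step 2: build a rank template from nens randomly chosen historical years and
# reorder each day's ensemble so member ranks follow the template ranks,
# recovering temporal (day-to-day) correlation in each member's sequence.
template = historical[rng.choice(nyears, size=nens, replace=False), :]
ranks = template.argsort(axis=0).argsort(axis=0)                     # per-day ranks of template
reordered = np.take_along_axis(np.sort(ensemble, axis=0), ranks, axis=0)
```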
Feder, Paul I; Ma, Zhenxu J; Bull, Richard J; Teuschler, Linda K; Rice, Glenn
2009-01-01
In chemical mixtures risk assessment, the use of dose-response data developed for one mixture to estimate risk posed by a second mixture depends on whether the two mixtures are sufficiently similar. While evaluations of similarity may be made using qualitative judgments, this article uses nonparametric statistical methods based on the "bootstrap" resampling technique to address the question of similarity among mixtures of chemical disinfectant by-products (DBP) in drinking water. The bootstrap resampling technique is a general-purpose, computer-intensive approach to statistical inference that substitutes empirical sampling for theoretically based parametric mathematical modeling. Nonparametric, bootstrap-based inference involves fewer assumptions than parametric normal theory based inference. The bootstrap procedure is appropriate, at least in an asymptotic sense, whether or not the parametric, distributional assumptions hold, even approximately. The statistical analysis procedures in this article are initially illustrated with data from 5 water treatment plants (Schenck et al., 2009), and then extended using data developed from a study of 35 drinking-water utilities (U.S. EPA/AMWA, 1989), which permits inclusion of a greater number of water constituents and increased structure in the statistical models.
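As a hedged illustration of the general bootstrap logic (much simpler than the article's similarity analysis), the sketch below builds a percentile-bootstrap interval for the difference in a summary statistic between two hypothetical DBP mixtures; the synthetic concentration data are assumptions.

```python
# Generic nonparametric bootstrap sketch (not the article's exact procedure).
import numpy as np

rng = np.random.default_rng(3)
mixture_a = rng.lognormal(mean=1.0, sigma=0.4, size=60)   # e.g., plant A concentrations
mixture_b = rng.lognormal(mean=1.1, sigma=0.5, size=45)   # e.g., plant B concentrations

def boot_diff(a, b, stat=np.median, n_boot=5000):
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        diffs[i] = stat(rng.choice(a, a.size)) - stat(rng.choice(b, b.size))
    return np.percentile(diffs, [2.5, 97.5])

lo, hi = boot_diff(mixture_a, mixture_b)
print(f"95% bootstrap CI for the median difference: [{lo:.2f}, {hi:.2f}]")
# An interval comfortably covering 0 would not contradict "similarity" on this
# summary; the article's similarity criteria are richer than this single check.
```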
NASA Astrophysics Data System (ADS)
Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.
2017-06-01
We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
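The key operation, conditioning a model on known values of a subset of parameters, reduces for a single Gaussian component to the standard conditional-Gaussian formulas. The sketch below shows only that math with made-up numbers; it is not the XDGMM or EmpiriciSN API, and a full mixture would additionally reweight its components by the likelihood of the known values.

```python
# Sketch of the underlying math only (not the XDGMM/EmpiriciSN API).
import numpy as np

def condition_gaussian(mu, cov, known_idx, known_vals):
    """Return mean/cov of the unknown block given the known block."""
    unknown_idx = [i for i in range(len(mu)) if i not in known_idx]
    mu_k, mu_u = mu[known_idx], mu[unknown_idx]
    S_kk = cov[np.ix_(known_idx, known_idx)]
    S_uk = cov[np.ix_(unknown_idx, known_idx)]
    S_uu = cov[np.ix_(unknown_idx, unknown_idx)]
    gain = S_uk @ np.linalg.inv(S_kk)
    mu_cond = mu_u + gain @ (known_vals - mu_k)
    cov_cond = S_uu - gain @ S_uk.T
    return mu_cond, cov_cond

# Example: predict two "supernova" parameters from one observed "host" property.
mu = np.array([0.0, 1.0, -0.5])
cov = np.array([[1.0, 0.6, 0.3],
                [0.6, 1.5, 0.4],
                [0.3, 0.4, 0.8]])
m, C = condition_gaussian(mu, cov, known_idx=[0], known_vals=np.array([0.8]))
sample = np.random.default_rng(4).multivariate_normal(m, C)   # one conditioned draw
```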
Lopez, Michael J; Schuckers, Michael
2017-05-01
Roughly 14% of regular season National Hockey League games since the 2005-06 season have been decided by a shoot-out, and the resulting allocation of points has impacted play-off races each season. But despite interest from fans, players and league officials, there is little in the way of published research on team or individual shoot-out performance. This manuscript attempts to fill that void. We present both generalised linear mixed model and Bayesian hierarchical model frameworks to model shoot-out outcomes, with results suggesting that there are (i) small but statistically significant talent gaps between shooters, (ii) marginal differences in performance among netminders and (iii) few, if any, predictors of player success after accounting for individual talent. We also provide a resampling strategy to highlight a selection bias with respect to shooter assignment, in which coaches choose their most skilled offensive players early in shoot-out rounds and are less likely to select players with poor past performances. Finally, given that per-shot data for shoot-outs do not currently exist in a single location for public use, we provide both our data and source code for other researchers interested in studying shoot-out outcomes.
NEAT: an efficient network enrichment analysis test.
Signorelli, Mirko; Vinciotti, Veronica; Wit, Ernst C
2016-09-05
Network enrichment analysis is a powerful method, which allows to integrate gene enrichment analysis with the information on relationships between genes that is provided by gene networks. Existing tests for network enrichment analysis deal only with undirected networks, they can be computationally slow and are based on normality assumptions. We propose NEAT, a test for network enrichment analysis. The test is based on the hypergeometric distribution, which naturally arises as the null distribution in this context. NEAT can be applied not only to undirected, but to directed and partially directed networks as well. Our simulations indicate that NEAT is considerably faster than alternative resampling-based methods, and that its capacity to detect enrichments is at least as good as the one of alternative tests. We discuss applications of NEAT to network analyses in yeast by testing for enrichment of the Environmental Stress Response target gene set with GO Slim and KEGG functional gene sets, and also by inspecting associations between functional sets themselves. NEAT is a flexible and efficient test for network enrichment analysis that aims to overcome some limitations of existing resampling-based tests. The method is implemented in the R package neat, which can be freely downloaded from CRAN ( https://cran.r-project.org/package=neat ).
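For orientation, a plain hypergeometric over-representation p-value can be computed as below. NEAT's actual statistic counts network links between gene sets rather than simple overlaps, so this only illustrates the null distribution mentioned in the abstract, with made-up counts.

```python
# Generic hypergeometric enrichment p-value (illustrative counts only).
from scipy.stats import hypergeom

N = 6000   # genes in the "universe"
K = 300    # genes annotated to the functional set
n = 120    # genes in the query set (e.g., stress-response targets)
k = 18     # observed overlap between the two sets

# P(overlap >= k) under random sampling without replacement
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value = {p_value:.3g}")
```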
Babamoradi, Hamid; van den Berg, Frans; Rinnan, Åsmund
2016-02-18
In Multivariate Statistical Process Control, when a fault is expected or detected in the process, contribution plots are essential for operators and optimization engineers in identifying those process variables that were affected by or might be the cause of the fault. The traditional way of interpreting a contribution plot is to examine the largest contributing process variables as the most probable faulty ones. This might result in false readings purely due to the differences in natural variation, measurement uncertainties, etc. It is more reasonable to compare variable contributions for new process runs with historical results achieved under Normal Operating Conditions, where confidence limits for contribution plots estimated from training data are used to judge new production runs. Asymptotic methods cannot provide confidence limits for contribution plots, leaving re-sampling methods as the only option. We suggest bootstrap re-sampling to build confidence limits for all contribution plots in online PCA-based MSPC. The new strategy to estimate CLs is compared to the previously reported CLs for contribution plots. An industrial batch process dataset was used to illustrate the concepts. Copyright © 2016 Elsevier B.V. All rights reserved.
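A minimal sketch of the strategy described above, under simplifying assumptions (synthetic Normal Operating Conditions data and a plain squared-residual contribution rather than the paper's contribution statistics): bootstrap the training data, refit the PCA model on each replicate, recompute per-variable contributions, and take an upper percentile as the confidence limit.

```python
# Minimal sketch of bootstrap confidence limits for PCA contribution values.
# Synthetic NOC data, number of components, and percentile are assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
noc = rng.normal(size=(200, 8))            # training data under normal operation
n_boot, n_comp = 500, 3

boot_contrib = np.empty((n_boot, noc.shape[1]))
for b in range(n_boot):
    sample = noc[rng.integers(0, noc.shape[0], noc.shape[0])]   # bootstrap replicate
    pca = PCA(n_components=n_comp).fit(sample)
    resid = sample - pca.inverse_transform(pca.transform(sample))
    boot_contrib[b] = np.mean(resid ** 2, axis=0)    # mean squared-residual contribution per variable

limits = np.percentile(boot_contrib, 95, axis=0)     # per-variable confidence limits
print("95% contribution limits:", np.round(limits, 3))
```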
Estimator banks: a new tool for direction-of-arrival estimation
NASA Astrophysics Data System (ADS)
Gershman, Alex B.; Boehme, Johann F.
1997-10-01
A new powerful tool for improving the threshold performance of direction-of-arrival (DOA) estimation is considered. The essence of our approach is to reduce the number of outliers in the threshold domain using the so-called estimator bank containing multiple 'parallel' underlying DOA estimators which are based on pseudorandom resampling of the MUSIC spatial spectrum for given data batch or sample covariance matrix. To improve the threshold performance relative to conventional MUSIC, evolutionary principles are used, i.e., only 'successful' underlying estimators (having no failure in the preliminary estimated source localization sectors) are exploited in the final estimate. An efficient beamspace root implementation of the estimator bank approach is developed, combined with the array interpolation technique which enables the application to arbitrary arrays. A higher-order extension of our approach is also presented, where the cumulant-based MUSIC estimator is exploited as a basic technique for spatial spectrum resampling. Simulations and experimental data processing show that our algorithm performs well below the MUSIC threshold, namely, has the threshold performance similar to that of the stochastic ML method. At the same time, the computational cost of our algorithm is much lower than that of stochastic ML because no multidimensional optimization is involved.
Memory and Trend of Precipitation in China during 1966-2013
NASA Astrophysics Data System (ADS)
Du, M.; Sun, F.; Liu, W.
2017-12-01
As climate change has had a significant impact on water cycle, the characteristic and variation of precipitation under climate change turned into a hotspot in hydrology. This study aims to analyze the trend and memory (both short-term and long-term) of precipitation in China. To do that, we apply statistical tests (including Mann-Kendall test, Ljung-Box test and Hurst exponent) to annual precipitation (P), frequency of rainy day (λ) and mean daily rainfall in days when precipitation occurs (α) in China (1966-2013). We also use a resampling approach to determine the field significance. From there, we evaluate the spatial distribution and percentages of stations with significant memory or trend. We find that the percentages of significant downtrends for λ and significant uptrends for α are significantly larger than the critical values at 95% field significance level, probably caused by the global warming. From these results, we conclude that extra care is necessary when significant results are obtained using statistical tests. This is because the null hypothesis could be rejected by chance and this situation is more likely to occur if spatial correlation is ignored according to the results of the resampling approach.
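A hedged sketch of a resampling check of field significance, not the study's exact procedure: year labels are permuted jointly across stations (preserving spatial correlation), the count of stations with a locally significant trend is recomputed on each permutation, and the observed count is compared with that null distribution. The synthetic data, the choice of Kendall's tau as the trend test, and the thresholds are assumptions.

```python
# Illustrative field-significance check by joint permutation of years.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(6)
nyears, nstations = 48, 60
years = np.arange(nyears)
precip = rng.gamma(2.0, 300.0, size=(nyears, nstations))     # annual totals, no built-in trend

def count_significant(data, alpha=0.05):
    return sum(kendalltau(years, data[:, s]).pvalue < alpha for s in range(data.shape[1]))

observed = count_significant(precip)
# Permuting rows (years) jointly keeps cross-station correlation but destroys trends
null_counts = [count_significant(precip[rng.permutation(nyears)]) for _ in range(100)]
field_p = np.mean([c >= observed for c in null_counts])
print(f"stations locally significant: {observed}, field-significance p ~ {field_p:.2f}")
```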
Pham-The, Hai; Casañola-Martin, Gerardo; Garrigues, Teresa; Bermejo, Marival; González-Álvarez, Isabel; Nguyen-Hai, Nam; Cabrera-Pérez, Miguel Ángel; Le-Thi-Thu, Huong
2016-02-01
In many absorption, distribution, metabolism, and excretion (ADME) modeling problems, imbalanced data could negatively affect classification performance of machine learning algorithms. Solutions for handling imbalanced dataset have been proposed, but their application for ADME modeling tasks is underexplored. In this paper, various strategies including cost-sensitive learning and resampling methods were studied to tackle the moderate imbalance problem of a large Caco-2 cell permeability database. Simple physicochemical molecular descriptors were utilized for data modeling. Support vector machine classifiers were constructed and compared using multiple comparison tests. Results showed that the models developed on the basis of resampling strategies displayed better performance than the cost-sensitive classification models, especially in the case of oversampling data where misclassification rates for minority class have values of 0.11 and 0.14 for training and test set, respectively. A consensus model with enhanced applicability domain was subsequently constructed and showed improved performance. This model was used to predict a set of randomly selected high-permeability reference drugs according to the biopharmaceutics classification system. Overall, this study provides a comparison of numerous rebalancing strategies and displays the effectiveness of oversampling methods to deal with imbalanced permeability data problems.
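The two strategies compared in the abstract can be sketched as follows with synthetic data: a cost-sensitive support vector machine via class weights versus random oversampling of the minority class followed by an unweighted SVM. The descriptors, class ratio, and kernel choice are assumptions, not the authors' setup.

```python
# Sketch: cost-sensitive SVM vs. random oversampling on imbalanced data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(600, 5))                       # e.g., simple physicochemical descriptors
y = (rng.random(600) < 0.25).astype(int)            # ~25% minority ("low permeability") class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Strategy 1: cost-sensitive learning via class weights
svm_cost = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)

# Strategy 2: random oversampling of the minority class, then an unweighted SVM
minority = np.where(y_tr == 1)[0]
extra = rng.choice(minority, size=(y_tr == 0).sum() - minority.size, replace=True)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])
svm_over = SVC(kernel="rbf").fit(X_bal, y_bal)
```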
NASA Astrophysics Data System (ADS)
Angrisano, Antonio; Maratea, Antonio; Gaglione, Salvatore
2018-01-01
In the absence of obstacles, a GPS device is generally able to provide continuous and accurate estimates of position, while in urban scenarios buildings can generate multipath and echo-only phenomena that severely affect the continuity and the accuracy of the provided estimates. Receiver autonomous integrity monitoring (RAIM) techniques are able to reduce the negative consequences of large blunders in urban scenarios, but require both good redundancy and low contamination to be effective. In this paper a resampling strategy based on the bootstrap is proposed as an alternative to RAIM, in order to estimate position accurately in cases of low redundancy and multiple blunders: starting from the pseudorange measurement model, at each epoch the available measurements are bootstrapped, that is, randomly sampled with replacement, and the generated a posteriori empirical distribution is exploited to derive the final position. Compared with the standard bootstrap, in this paper the sampling probabilities are not uniform but vary according to an indicator of the measurement quality. The proposed method has been compared with two different RAIM techniques on a data set collected in critical conditions, resulting in a clear improvement on all considered figures of merit.
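A minimal sketch of this weighted-bootstrap idea is given below: pseudoranges are resampled with replacement using quality-dependent probabilities, a position is solved for each replicate by iterative least squares, and the replicate cloud provides the final estimate. The satellite geometry, noise levels, quality indicator, and the use of the median as the final estimator are all assumptions for illustration, not the authors' implementation.

```python
"""Toy weighted bootstrap of pseudorange measurements with synthetic geometry."""
import numpy as np

rng = np.random.default_rng(1)

def solve_position(sat_pos, pr, x0=np.zeros(4), iters=8):
    """Iterative least squares for (x, y, z, clock bias) from pseudoranges."""
    x = x0.astype(float)
    for _ in range(iters):
        rho = np.linalg.norm(sat_pos - x[:3], axis=1)
        pred = rho + x[3]
        H = np.hstack([-(sat_pos - x[:3]) / rho[:, None], np.ones((len(pr), 1))])
        dx, *_ = np.linalg.lstsq(H, pr - pred, rcond=None)
        x += dx
    return x

# Synthetic scenario: 7 satellites, true receiver at the origin with 10 m clock bias.
sat_pos = rng.normal(0, 2.0e7, size=(7, 3)) + np.array([0, 0, 2.0e7])
truth = np.array([0.0, 0.0, 0.0, 10.0])
pr = np.linalg.norm(sat_pos - truth[:3], axis=1) + truth[3] + rng.normal(0, 3.0, 7)
pr[0] += 80.0                                   # one large blunder (e.g. multipath)
quality = np.array([0.2, 1, 1, 1, 1, 1, 1])     # lower quality for the suspect measurement
prob = quality / quality.sum()

estimates = []
for _ in range(500):
    idx = rng.choice(len(pr), size=len(pr), replace=True, p=prob)
    if len(np.unique(idx)) < 4:                 # need at least 4 distinct measurements
        continue
    estimates.append(solve_position(sat_pos[idx], pr[idx]))

final = np.median(np.array(estimates), axis=0)
print("bootstrap position estimate:", final[:3], "clock bias:", final[3])
```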
How Do Substitute Teachers Substitute? An Empirical Study of Substitute-Teacher Labor Supply
ERIC Educational Resources Information Center
Gershenson, Seth
2012-01-01
This paper examines the daily labor supply of a potentially important, but often overlooked, source of instruction in U.S. public schools: substitute teachers. I estimate a sequential binary-choice model of substitute teachers' job-offer acceptance decisions using data on job offers made by a randomized automated calling system. Importantly, this…
NASA Astrophysics Data System (ADS)
Yamamoto, K.; Murata, K.; Kimura, E.; Honda, R.
2006-12-01
In the Solar-Terrestrial Physics (STP) field, the amount of satellite observation data has been increasing every year. Three problems must be solved to achieve large-scale statistical analyses of such data. (i) More CPU power and larger memory and disk sizes are required; the total power of personal computers is not sufficient to analyze this amount of data, and although super-computers provide high-performance CPUs and rich memory, they are usually separated from the Internet or connected only for programming or data file transfer. (ii) Most of the observation data files are managed at distributed data sites over the Internet, so users have to know where the data files are located. (iii) Since no common data format is available in the STP field, users have to prepare a reading program for each data set themselves. To overcome problems (i) and (ii), we constructed a parallel and distributed data analysis environment based on the Gfarm reference implementation of the Grid Datafarm architecture. Gfarm shares computational resources and performs parallel distributed processing. In addition, Gfarm provides the Gfarm filesystem, which acts as a virtual directory tree among nodes. The Gfarm environment is composed of three parts: a metadata server that manages information on the distributed files, filesystem nodes that provide computational resources, and a client that submits jobs to the metadata server and manages data processing schedules. In the present study, both data files and data processes are parallelized on a Gfarm system with 6 filesystem nodes, each with a Pentium V 1 GHz CPU, 256 MB of memory, and a 40 GB disk. To evaluate the performance of the present Gfarm system, we scanned a large number of data files, each about 300 MB in size, using three processing methods: sequential processing on one node, sequential processing by each node, and parallel processing by each node. Comparing elapsed time against the number of files, parallel and distributed processing shortened the elapsed time to about one fifth of that of sequential processing. On the other hand, sequential processing was faster in another experiment in which the file size was smaller than 100 KB; in that case the elapsed time to scan one file is within one second, which implies that disk swapping took place during parallel processing on each node. We also note that the operation became unstable when the number of files exceeded 1000. To overcome problem (iii), we developed an original data class. This class supports reading data files in various formats by converting them into a common internal format: it defines schemata for every type of data and encapsulates the structure of the data files. In addition, since the class provides a time re-sampling function, users can easily convert multiple arrays with different time resolutions onto a common time resolution. Using Gfarm, we thus achieved a high-performance environment for large-scale statistical data analyses. It should be noted that the present method is effective only when individual data files are large enough. At present, we are restructuring a new Gfarm environment with 8 nodes, each with an Athlon 64 X2 Dual Core 2 GHz CPU, 2 GB of memory, and 1.2 TB of disk (using RAID0). Our original class is to be implemented on the new Gfarm environment.
In this talk, we show the latest results of applying the present system to data analyses involving a huge number of satellite observation data files.
Short-term memory for spatial, sequential and duration information.
Manohar, Sanjay G; Pertzov, Yoni; Husain, Masud
2017-10-01
Space and time appear to play key roles in the way that information is organized in short-term memory (STM). Some argue that they are crucial contexts within which other stored features are embedded, allowing binding of information that belongs together within STM. Here we review recent behavioral, neurophysiological and imaging studies that have sought to investigate the nature of spatial, sequential and duration representations in STM, and how these might break down in disease. Findings from these studies point to an important role of the hippocampus and other medial temporal lobe structures in aspects of STM, challenging conventional accounts of involvement of these regions in only long-term memory.
NASA Technical Reports Server (NTRS)
Mier Muth, A. M.; Willsky, A. S.
1978-01-01
In this paper we describe a method for approximating a waveform by a spline. The method is quite efficient, as the data are processed sequentially. The basis of the approach is to view the approximation problem as a question of estimation of a polynomial in noise, with the possibility of abrupt changes in the highest derivative. This allows us to bring several powerful statistical signal processing tools into play. We also present some initial results on the application of our technique to the processing of electrocardiograms, where the knot locations themselves may be some of the most important pieces of diagnostic information.
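The general idea, tracking a local polynomial in noise and placing a knot where an abrupt change in a high derivative is detected, can be sketched with a simple Kalman filter whose innovation is monitored. This is an illustrative toy, not the authors' algorithm; the signal, noise levels, and detection threshold are made up.

```python
"""Sketch: track a local cubic (constant-jerk state) with a Kalman filter and
declare a knot when the measurement innovation is anomalously large."""
import numpy as np

rng = np.random.default_rng(2)
dt = 0.01
t = np.arange(0, 2, dt)
signal = np.where(t < 1.0, t**3, (t - 2.0)**3 + 2.0)   # two cubic pieces, knot at t = 1
y = signal + rng.normal(0, 0.02, t.size)

# State [p, p', p'', p''']; only p is measured.
F = np.array([[1, dt, dt**2/2, dt**3/6],
              [0, 1, dt, dt**2/2],
              [0, 0, 1, dt],
              [0, 0, 0, 1]])
H = np.array([[1.0, 0.0, 0.0, 0.0]])
Q = 1e-8 * np.eye(4)          # small process noise: polynomial assumed locally exact
R = 0.02**2

x = np.zeros(4)
P = np.eye(4)
knots = []
for k, z in enumerate(y):
    x = F @ x
    P = F @ P @ F.T + Q
    innov = z - H @ x
    S = H @ P @ H.T + R
    if innov**2 / S > 25.0:                    # ~5-sigma innovation: abrupt derivative change
        knots.append(t[k])
        P += np.diag([0, 0.1, 10.0, 1000.0])   # re-open uncertainty in the higher derivatives
    K = P @ H.T / S
    x = x + (K * innov).ravel()
    P = (np.eye(4) - K @ H) @ P

print("detected knot locations:", knots)
```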
Wang, Lu; Liao, Shengjin; Ruan, Yong-Ling
2013-01-01
Seed development depends on coordination among embryo, endosperm and seed coat. Endosperm undergoes nuclear division soon after fertilization, whereas embryo remains quiescent for a while. Such a developmental sequence is of great importance for proper seed development. However, the underlying mechanism remains unclear. Recent results on the cellular domain- and stage-specific expression of invertase genes in cotton and Arabidopsis revealed that cell wall invertase may positively and specifically regulate nuclear division of endosperm after fertilization, thereby playing a role in determining the sequential development of endosperm and embryo, probably through glucose signaling.
Garey, Lorra; Cheema, Mina K; Otal, Tanveer K; Schmidt, Norman B; Neighbors, Clayton; Zvolensky, Michael J
2016-10-01
Smoking rates are markedly higher among trauma-exposed individuals relative to non-trauma-exposed individuals. Extant work suggests that both perceived stress and negative affect reduction smoking expectancies are independent mechanisms that link trauma-related symptoms and smoking. Yet, no work has examined perceived stress and negative affect reduction smoking expectancies as potential explanatory variables for the relation between trauma-related symptom severity and smoking in a sequential pathway model. Methods: The present study utilized a sample of treatment-seeking, trauma-exposed smokers (n = 363; 49.0% female) to examine perceived stress and negative affect reduction expectancies for smoking as potential sequential explanatory variables linking trauma-related symptom severity and nicotine dependence, perceived barriers to smoking cessation, and severity of withdrawal-related problems and symptoms during past quit attempts. As hypothesized, perceived stress and negative affect reduction expectancies carried a significant sequential indirect effect of trauma-related symptom severity on the criterion variables. Findings further elucidate the complex pathways through which trauma-related symptoms contribute to smoking behavior and cognitions, and highlight the importance of addressing perceived stress and negative affect reduction expectancies in smoking cessation programs among trauma-exposed individuals. (Am J Addict 2016;25:565-572). © 2016 American Academy of Addiction Psychiatry.
Fan, Tingbo; Liu, Zhenbo; Zhang, Dong; Tang, Mengxing
2013-03-01
Lesion formation and temperature distribution induced by high-intensity focused ultrasound (HIFU) were investigated both numerically and experimentally via two energy-delivering strategies, i.e., sequential discrete and continuous scanning modes. Simulations were presented based on the combination of Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation and bioheat equation. Measurements were performed on tissue-mimicking phantoms sonicated by a 1.12-MHz single-element focused transducer working at an acoustic power of 75 W. Both the simulated and experimental results show that, in the sequential discrete mode, obvious saw-tooth-like contours could be observed for the peak temperature distribution and the lesion boundaries, with the increasing interval space between two adjacent exposure points. In the continuous scanning mode, more uniform peak temperature distributions and lesion boundaries would be produced, and the peak temperature values would decrease significantly with the increasing scanning speed. In addition, compared to the sequential discrete mode, the continuous scanning mode could achieve higher treatment efficiency (lesion area generated per second) with a lower peak temperature. The present studies suggest that the peak temperature and tissue lesion resulting from the HIFU exposure could be controlled by adjusting the transducer scanning speed, which is important for improving the HIFU treatment efficiency.
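For reference, the two model equations combined in such simulations are commonly written in the following standard textbook forms; the paper's exact notation and coefficient values may differ. Here p is the acoustic pressure, τ the retarded time, δ the sound diffusivity, β the nonlinearity coefficient, and in the bioheat equation w_b, C_b, and T_a are the blood perfusion rate, blood heat capacity, and arterial temperature, with Q the heat deposition from acoustic absorption.

```latex
% Standard forms assumed for illustration; the paper's notation may differ.
\frac{\partial^2 p}{\partial z\,\partial\tau}
  = \frac{c_0}{2}\,\nabla_\perp^2 p
  + \frac{\delta}{2c_0^3}\,\frac{\partial^3 p}{\partial\tau^3}
  + \frac{\beta}{2\rho_0 c_0^3}\,\frac{\partial^2 p^2}{\partial\tau^2}
  \qquad \text{(KZK: diffraction, absorption, nonlinearity)}

\rho C \,\frac{\partial T}{\partial t}
  = k\,\nabla^2 T - w_b C_b\,(T - T_a) + Q
  \qquad \text{(Pennes bioheat equation)}
```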
Hinault, Thomas; Lemaire, Patrick; Touron, Dayna
2017-02-01
In this study, we asked young adults and older adults to encode pairs of words. For each item, they were told which strategy to use, interactive imagery or rote repetition. Data revealed poorer-strategy effects in both young adults and older adults: Participants obtained better performance when executing better strategies (i.e., interactive-imagery strategy to encode pairs of concrete words; rote-repetition strategy on pairs of abstract words) than with poorer strategies (i.e., interactive-imagery strategy on pairs of abstract words; rote-repetition strategy on pairs of concrete words). Crucially, we showed that sequential modulations of poorer-strategy effects (i.e., poorer-strategy effects being larger when previous items were encoded with better relative to poorer strategies), previously demonstrated in arithmetic, generalise to memory strategies. We also found reduced sequential modulations of poorer-strategy effects in older adults relative to young adults. Finally, sequential modulations of poorer-strategy effects correlated with measures of cognitive control processes, suggesting that these processes underlie efficient trial-to-trial modulations during strategy execution. Differences in correlations with cognitive control processes were also found between older adults and young adults. These findings have important implications regarding mechanisms underlying memory strategy execution and age differences in memory performance.
Time scale of random sequential adsorption.
Erban, Radek; Chapman, S Jonathan
2007-04-01
A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) The kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per one RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
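The RSA ingredient (ii) can be illustrated in a few lines of code: disks are dropped at random positions, one attempt per time step, and accepted only if they do not overlap previously adsorbed disks. The disk radius and number of attempts are arbitrary, and the paper's main contribution, coupling RSA steps to physical time through bulk diffusion, is not reproduced here.

```python
"""Minimal random sequential adsorption (RSA) of hard disks on a unit square."""
import numpy as np

rng = np.random.default_rng(3)
radius = 0.03
n_attempts = 5000
centers = []

for step in range(n_attempts):
    candidate = rng.uniform(radius, 1 - radius, size=2)
    # Accept only if the new disk does not overlap any adsorbed disk.
    if all(np.linalg.norm(candidate - c) >= 2 * radius for c in centers):
        centers.append(candidate)

coverage = len(centers) * np.pi * radius**2
print(f"adsorbed {len(centers)} disks, surface coverage ≈ {coverage:.3f}")
# As attempts accumulate, coverage approaches the RSA jamming limit of about 0.547.
```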
Hsieh, Tsung-Yu; Huang, Chi-Kai; Su, Tzu-Sen; Hong, Cheng-You; Wei, Tzu-Chien
2017-03-15
Crystal morphology and structure are important for improving the properties of organic-inorganic lead halide perovskite semiconductors in optoelectronic, electronic, and photovoltaic devices. In particular, crystal growth and dissolution are the two major phenomena determining the morphology of methylammonium lead iodide perovskite in the sequential deposition method for fabricating a perovskite solar cell. In this report, the effect of the immersion time in the second step, i.e., the methylammonium iodide immersion, on the morphological, structural, optical, and photovoltaic evolution is extensively investigated. Supported by experimental evidence, a five-staged, time-dependent evolution of the morphology of methylammonium lead iodide perovskite crystals is established and is well connected to the photovoltaic performance. This result is beneficial for engineering the optimal methylammonium iodide immersion time and converging the solar cell performance in the sequential deposition route. Meanwhile, our result suggests that large, well-faceted methylammonium lead iodide perovskite single crystals may be incubated by a solution process. This offers a low-cost route for synthesizing perovskite single crystals.
Identifying protein complexes in PPI network using non-cooperative sequential game.
Maulik, Ujjwal; Basu, Srinka; Ray, Sumanta
2017-08-21
Identifying protein complexes from protein-protein interaction (PPI) networks is an important and challenging task in computational biology, as it helps in better understanding cellular mechanisms in various organisms. In this paper we propose a non-cooperative sequential game based model for protein complex detection from PPI networks. The key hypothesis is that protein complex formation is driven by a mechanism that eventually optimizes the number of interactions within the complex, leading to a dense subgraph. The hypothesis is drawn from the observed small-world property of such networks. The proposed multi-player game model translates the hypothesis into game strategies. The Nash equilibrium of the game corresponds to a network partition in which each protein either belongs to a complex or forms a singleton cluster. We further propose an algorithm to find the Nash equilibrium of the sequential game. Exhaustive experiments on synthetic benchmarks and real-life yeast networks evaluate the structural as well as the biological significance of the network partitions.
Kirchenbaum, Greg A.; Carter, Donald M.
2015-01-01
Broadly reactive antibodies targeting the conserved hemagglutinin (HA) stalk region are elicited following sequential infection or vaccination with influenza viruses belonging to divergent subtypes and/or expressing antigenically distinct HA globular head domains. Here, we demonstrate, through the use of novel chimeric HA proteins and competitive binding assays, that sequential infection of ferrets with antigenically distinct seasonal H1N1 (sH1N1) influenza virus isolates induced an HA stalk-specific antibody response. Additionally, stalk-specific antibody titers were boosted following sequential infection with antigenically distinct sH1N1 isolates in spite of preexisting, cross-reactive, HA-specific antibody titers. Despite a decline in stalk-specific serum antibody titers, sequential sH1N1 influenza virus-infected ferrets were protected from challenge with a novel H1N1 influenza virus (A/California/07/2009), and these ferrets poorly transmitted the virus to naive contacts. Collectively, these findings indicate that HA stalk-specific antibodies are commonly elicited in ferrets following sequential infection with antigenically distinct sH1N1 influenza virus isolates lacking HA receptor-binding site cross-reactivity and can protect ferrets against a pathogenic novel H1N1 virus. IMPORTANCE: The influenza virus hemagglutinin (HA) is a major target of the humoral immune response following infection and/or seasonal vaccination. While antibodies targeting the receptor-binding pocket of HA possess strong neutralization capacities, these antibodies are largely strain specific and do not confer protection against antigenic drift variant or novel HA subtype-expressing viruses. In contrast, antibodies targeting the conserved stalk region of HA exhibit broader reactivity among viruses within and among influenza virus subtypes. Here, we show that sequential infection of ferrets with antigenically distinct seasonal H1N1 influenza viruses boosts the antibody responses directed at the HA stalk region. Moreover, ferrets possessing HA stalk-specific antibody were protected against novel H1N1 virus infection and did not transmit the virus to naive contacts. PMID:26559834
Colligan, Lacey; Anderson, Janet E; Potts, Henry W W; Berman, Jonathan
2010-01-07
Many quality and safety improvement methods in healthcare rely on a complete and accurate map of the process. Process mapping in healthcare is often achieved using a sequential flow diagram, but there is little guidance available in the literature about the most effective type of process map to use. Moreover there is evidence that the organisation of information in an external representation affects reasoning and decision making. This exploratory study examined whether the type of process map - sequential or hierarchical - affects healthcare practitioners' judgments. A sequential and a hierarchical process map of a community-based anticoagulation clinic were produced based on data obtained from interviews, talk-throughs, attendance at a training session and examination of protocols and policies. Clinic practitioners were asked to specify the parts of the process that they judged to contain quality and safety concerns. The process maps were then shown to them in counter-balanced order and they were asked to circle on the diagrams the parts of the process where they had the greatest quality and safety concerns. A structured interview was then conducted, in which they were asked about various aspects of the diagrams. Quality and safety concerns cited by practitioners differed depending on whether they were or were not looking at a process map, and whether they were looking at a sequential diagram or a hierarchical diagram. More concerns were identified using the hierarchical diagram compared with the sequential diagram and more concerns were identified in relation to clinical work than administrative work. Participants' preference for the sequential or hierarchical diagram depended on the context in which they would be using it. The difficulties of determining the boundaries for the analysis and the granularity required were highlighted. The results indicated that the layout of a process map does influence perceptions of quality and safety problems in a process. In quality improvement work it is important to carefully consider the type of process map to be used and to consider using more than one map to ensure that different aspects of the process are captured.
When good is stickier than bad: Understanding gain/loss asymmetries in sequential framing effects.
Sparks, Jehan; Ledgerwood, Alison
2017-08-01
Considerable research has demonstrated the power of the current positive or negative frame to shape people's current judgments. But humans must often learn about positive and negative information as they encounter that information sequentially over time. It is therefore crucial to consider the potential importance of sequencing when developing an understanding of how humans think about valenced information. Indeed, recent work looking at sequentially encountered frames suggests that some frames can linger outside the context in which they are first encountered, sticking in the mind so that subsequent frames have a muted effect. The present research builds a comprehensive account of sequential framing effects in both the loss and the gain domains. After seeing information about a potential gain or loss framed in positive terms or negative terms, participants saw the same issue reframed in the opposing way. Across 5 studies and 1566 participants, we find accumulating evidence for the notion that in the gain domain, positive frames are stickier than negative frames for novel but not familiar scenarios, whereas in the loss domain, negative frames are always stickier than positive frames. Integrating regulatory focus theory with the literatures on negativity dominance and positivity offset, we develop a new and comprehensive account of sequential framing effects that emphasizes the adaptive value of positivity and negativity biases in specific contexts. Our findings highlight the fact that research conducted solely in the loss domain risks painting an incomplete and oversimplified picture of human bias and suggest new directions for future research. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
ERIC Educational Resources Information Center
Hsieh, Chueh-an; Xu, Xueli; von Davier, Matthias
2010-01-01
This paper presents an application of a jackknifing approach to variance estimation of ability inferences for groups of students, using a multidimensional discrete model for item response data. The data utilized to demonstrate the approach come from the National Assessment of Educational Progress (NAEP). In contrast to the operational approach…
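As a reminder of the resampling machinery involved, the sketch below shows a generic delete-one jackknife variance estimate for a simple group statistic. The operational NAEP procedure instead jackknifes over sampled school/PSU pairs and combines the result with plausible-value (imputation) variance; the data here are simulated.

```python
"""Generic delete-one jackknife variance estimate for a group mean."""
import numpy as np

rng = np.random.default_rng(4)
scores = rng.normal(250, 35, size=120)          # hypothetical student proficiencies

theta_hat = scores.mean()
n = scores.size
# Leave-one-out replicates of the statistic.
replicates = np.array([np.delete(scores, i).mean() for i in range(n)])
jackknife_var = (n - 1) / n * np.sum((replicates - replicates.mean()) ** 2)

print(f"estimate = {theta_hat:.2f}, jackknife SE = {np.sqrt(jackknife_var):.3f}")
```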
Fire chronology and windstorm effects on persistence of a disjunct oak-shortleaf pine community
Michael D. Jones; Marlin L. Bowles
2012-01-01
We investigated effects of a human-altered fire regime and wind storms on persistence of disjunct oak-shortleaf pine vegetation occurring along 5.5 km of xeric habitat on the east bluffs of the Mississippi River in Union County, IL. In 2009, we resampled vegetation transects established in seven stands in 1954 and obtained 26 cross sections containing fire scars from...
Radar/Sonar and Time Series Analysis
1991-06-27
[Fragmentary proceedings listing; recoverable items include the talks "Fourier and Likelihood Analysis in NMR Spectroscopy" (David Brillinger and Reinhold Kaiser), "Resampling Techniques for ..." (title truncated), and "The parabolic Fock theory for a convex dielectric scatterer" (Gunter Meyer, Georgia Tech), together with a participant schedule; the remainder of the record is table-of-contents and schedule residue.]
Robust High Data Rate MIMO Underwater Acoustic Communications
2011-09-30
[Fragmentary report excerpt: the extended CAN algorithm, referred to as periodic CAN (PeCAN), is solved by exploiting FFTs; unlike most existing sequence construction methods, which are algebraic and deterministic in nature, PeCAN iterations start from random phase initializations; PeCAN sequences are intended for covert underwater acoustic communication applications and further in-water experiments. A section headed "Temporal Resampling" is truncated in the source record.]
Effects of forest management on soil carbon: results of some long-term resampling studies
D.W. Johnson; Jennifer D. Knoepp; Wayne T. Swank; J. Shan; L.A. Morris; David H. D.H. van Lear; P.R. Kapeluck
2002-01-01
The effects of harvest intensity (sawlog, SAW; whole tree, WTH; and complete tree, CTH) on biomass and soil carbon (C) were studied in four forested sites in the Southeastern United States: (mixed deciduous forests at Oak Ridge, TN and Coweeta, NC; Pinus taeda at Clemson, SC; and P. eliottii at Bradford, FL). In general, harvesting had no lasting...
Raymond L. Czaplewski
2005-01-01
Forest Service Research and Development (R&D) and State and Private Forestry Deputy Areas, in partnership with the National Forest System Remote Sensing Applications Center (RSAC), built a 250-m resolution (6.25-ha pixel) dataset for the entire USA. It assembles multi-seasonal hyperspectral MODIS data and derivatives, Landsat derivatives (i.e., summary statistics...
Kokaly, Raymond F.; Couvillion, Brady; Holloway, JoAnn M.; Roberts, Dar A.; Ustin, Susan L.; Peterson, Seth H.; Khanna, Shruti; Piazza, Sarai C.
2013-01-01
We applied a spectroscopic analysis to Airborne Visible/InfraRed Imaging Spectrometer (AVIRIS) data collected from low and medium altitudes during and after the Deepwater Horizon oil spill to delineate the distribution of oil-damaged canopies in the marshes of Barataria Bay, Louisiana. Spectral feature analysis compared the AVIRIS data to reference spectra of oiled marsh by using absorption features centered near 1.7 and 2.3 μm, which arise from CH bonds in oil. AVIRIS-derived maps of oiled shorelines from the individual dates of July 31, September 14, and October 4, 2010, were 89.3%, 89.8%, and 90.6% accurate, respectively. A composite map at 3.5 m grid spacing, accumulated from the three dates, was 93.4% accurate in detecting oiled shorelines. The composite map had 100% accuracy for detecting damaged plant canopy in oiled areas that extended more than 1.2 m into the marsh. Spatial resampling of the AVIRIS data to 30 m reduced the accuracy to 73.6% overall. However, detection accuracy remained high for oiled canopies that extended more than 4 m into the marsh (23 of 28 field reference points with oil were detected). Spectral resampling of the 3.5 m AVIRIS data to Landsat Enhanced Thematic Mapper (ETM) spectral response greatly reduced the detection of oil spectral signatures. With spatial resampling of simulated Landsat ETM data to 30 m, oil signatures were not detected. Overall, ~ 40 km of coastline, marsh comprised mainly of Spartina alterniflora and Juncus roemerianus, were found to be oiled in narrow zones at the shorelines. Zones of oiled canopies reached on average 11 m into the marsh, with a maximum reach of 21 m. The field and airborne data showed that, in many areas, weathered oil persisted in the marsh from the first field survey, July 10, to the latest airborne survey, October 4, 2010. The results demonstrate the applicability of high spatial resolution imaging spectrometer data to identifying contaminants in the environment for use in evaluating ecosystem disturbance and response.
Révész, Kinga M; Landwehr, Jurate M
2002-01-01
A new method was developed to analyze the stable carbon and oxygen isotope ratios of small samples (400 +/- 20 micro g) of calcium carbonate. This new method streamlines the classical phosphoric acid/calcium carbonate (H(3)PO(4)/CaCO(3)) reaction method by making use of a recently available Thermoquest-Finnigan GasBench II preparation device and a Delta Plus XL continuous flow isotope ratio mass spectrometer. Conditions for which the H(3)PO(4)/CaCO(3) reaction produced reproducible and accurate results with minimal error had to be determined. When the acid/carbonate reaction temperature was kept at 26 degrees C and the reaction time was between 24 and 54 h, the precision of the carbon and oxygen isotope ratios for pooled samples from three reference standard materials was ≤0.1 and ≤0.2 per mill (‰), respectively, although later analysis showed that materials from one specific standard required reaction times between 34 and 54 h for delta(18)O to achieve this level of precision. Aliquot screening methods were shown to further minimize the total error. The accuracy and precision of the new method were analyzed and confirmed by statistical analysis. The utility of the method was verified by analyzing calcite from Devils Hole, Nevada, for which isotope-ratio values had previously been obtained by the classical method. Devils Hole core DH-11 recently had been re-cut and re-sampled, and isotope-ratio values were obtained using the new method. The results were comparable with those obtained by the classical method, with correlation = +0.96 for both isotope ratios. The consistency of the isotopic results is such that an alignment offset could be identified in the re-sampled core material, and two cutting errors that occurred during re-sampling then were confirmed independently. This result indicates that the new method is a viable alternative to the classical reaction method. In particular, the new method requires less sample material, permitting finer resolution, and allows automation of some processes, resulting in considerable time savings.
Bromberg, J.E.; Kumar, S.; Brown, C.S.; Stohlgren, T.J.
2011-01-01
Downy brome (Bromus tectorum L.), an invasive winter annual grass, may be increasing in extent and abundance at high elevations in the western United States. This would pose a great threat to high-elevation plant communities and resources. However, data to track this species in high-elevation environments are limited. To address changes in the distribution and abundance of downy brome and the factors most associated with its occurrence, we used field sampling and statistical methods, and niche modeling. In 2007, we resampled plots from two vegetation surveys in Rocky Mountain National Park for presence and cover of downy brome. One survey was established in 1993 and had been resampled in 1999. The other survey was established in 1996 and had not been resampled until our study. Although not all comparisons between years demonstrated significant changes in downy brome abundance, its mean cover increased nearly fivefold from 1993 (0.7%) to 2007 (3.6%) in one of the two vegetation surveys (P = 0.06). Although the average cover of downy brome within the second survey appeared to be increasing from 1996 to 2007, this slight change from 0.5% to 1.2% was not statistically significant (P = 0.24). Downy brome was present in 50% more plots in 1999 than in 1993 (P = 0.02) in the first survey. In the second survey, downy brome was present in 30% more plots in 2007 than in 1996 (P = 0.08). Maxent, a species-environmental matching model, was generally able to predict occurrences of downy brome, as new locations were in the ranges predicted by earlier generated models. The model found that distance to roads, elevation, and vegetation community influenced the predictions most. The strong response of downy brome to interannual environmental variability makes detecting change challenging, especially with small sample sizes. However, our results suggest that the area in which downy brome occurs is likely increasing in Rocky Mountain National Park through increased frequency and cover. Field surveys along with predictive modeling will be vital in directing efforts to manage this highly invasive species. © Weed Science Society of America 2011.
Dinavahi, Saketh S; Noory, Mohammad A; Gowda, Raghavendra; Drabick, Joseph J; Berg, Arthur; Neves, Rogerio I; Robertson, Gavin P
2018-03-01
Drug combinations acting synergistically to kill cancer cells have become increasingly important in melanoma as an approach to manage the recurrent resistant disease. Protein kinase B (AKT) is a major target in this disease but its inhibitors are not effective clinically, which is a major concern. Targeting AKT in combination with WEE1 (mitotic inhibitor kinase) seems to have potential to make AKT-based therapeutics effective clinically. Since agents targeting AKT and WEE1 have been tested individually in the clinic, the quickest way to move the drug combination to patients would be to combine these agents sequentially, enabling the use of existing phase I clinical trial toxicity data. Therefore, a rapid preclinical approach is needed to evaluate whether simultaneous or sequential drug treatment has maximal therapeutic efficacy, which is based on a mechanistic rationale. To develop this approach, melanoma cell lines were treated with AKT inhibitor AZD5363 [4-amino-N-[(1S)-1-(4-chlorophenyl)-3-hydroxypropyl]-1-(7H-pyrrolo[2,3-d]pyrimidin-4-yl)piperidine-4-carboxamide] and WEE1 inhibitor AZD1775 [2-allyl-1-(6-(2-hydroxypropan-2-yl)pyridin-2-yl)-6-((4-(4-methylpiperazin-1-yl)phenyl)amino)-1H-pyrazolo[3,4-d]pyrimidin-3(2H)-one] using simultaneous and sequential dosing schedules. Simultaneous treatment synergistically reduced melanoma cell survival and tumor growth. In contrast, sequential treatment was antagonistic and had a minimal tumor inhibitory effect compared with individual agents. Mechanistically, simultaneous targeting of AKT and WEE1 enhanced deregulation of the cell cycle and DNA damage repair pathways by modulating transcription factors p53 and forkhead box M1, which was not observed with sequential treatment. Thus, this study identifies a rapid approach to assess the drug combinations with a mechanistic basis for selection, which suggests that combining AKT and WEE1 inhibitors is needed for maximal efficacy. Copyright © 2018 by The American Society for Pharmacology and Experimental Therapeutics.
Tacholess order-tracking approach for wind turbine gearbox fault detection
NASA Astrophysics Data System (ADS)
Wang, Yi; Xie, Yong; Xu, Guanghua; Zhang, Sicong; Hou, Chenggang
2017-09-01
Monitoring of wind turbines under variable-speed operating conditions has become an important issue in recent years. The gearbox of a wind turbine is the most important transmission unit; it generally exhibits complex vibration signatures due to random variations in operating conditions. Spectral analysis is one of the main approaches in vibration signal processing. However, spectral analysis is based on a stationary assumption and thus inapplicable to the fault diagnosis of wind turbines under variable-speed operating conditions. This constraint limits the application of spectral analysis to wind turbine diagnosis in industrial applications. Although order-tracking methods have been proposed for wind turbine fault detection in recent years, current methods are only applicable to cases in which the instantaneous shaft phase is available. For wind turbines with limited structural spaces, collecting phase signals with tachometers or encoders is difficult. In this study, a tacholess order-tracking method for wind turbines is proposed to overcome the limitations of traditional techniques. The proposed method extracts the instantaneous phase from the vibration signal, resamples the signal at equiangular increments, and calculates the order spectrum for wind turbine fault identification. The effectiveness of the proposed method is experimentally validated with the vibration signals of wind turbines.
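The processing chain described, extracting an instantaneous phase from the vibration signal itself, resampling at equal angle increments, and computing an order spectrum, can be sketched as follows. The synthetic signal, filter band, and samples-per-revolution value are illustrative assumptions, not the paper's test data or exact algorithm.

```python
"""Simplified tacholess order tracking on a synthetic variable-speed signal."""
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(5)
fs = 5000.0
t = np.arange(0, 10, 1 / fs)
f_shaft = 10 + 1.5 * t                         # slowly varying shaft frequency (Hz)
phase_true = 2 * np.pi * np.cumsum(f_shaft) / fs
x = np.sin(phase_true) + 0.5 * np.sin(6 * phase_true) + rng.normal(0, 0.1, t.size)

# 1) Isolate the fundamental shaft harmonic and extract its instantaneous phase.
b, a = butter(4, [5 / (fs / 2), 40 / (fs / 2)], btype="band")
analytic = hilbert(filtfilt(b, a, x))
phase = np.unwrap(np.angle(analytic))          # instantaneous shaft phase estimate (rad)

# 2) Resample the raw signal at equal angular increments (angle-domain resampling).
samples_per_rev = 64
angles = np.arange(phase[0], phase[-1], 2 * np.pi / samples_per_rev)
x_angle = np.interp(angles, phase, x)

# 3) Order spectrum: the frequency axis is now in orders (multiples of shaft speed).
spectrum = np.abs(np.fft.rfft(x_angle * np.hanning(x_angle.size)))
orders = np.fft.rfftfreq(x_angle.size, d=1.0 / samples_per_rev)
print("dominant orders:", orders[np.argsort(spectrum)[-3:]])
```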
Wang, Yu-Jie; Dou, Kai; Tang, Zhi-Wen
2017-01-01
Organizational citizenship behavior (OCB) is important to the development of an organization. Research into the factors that foster OCB and the underlying processes is therefore crucial. The current study aimed to test the association between trait self-control and OCB and the mediating role of consideration of future consequences. Four hundred and ninety-four Chinese employees (275 men, 219 women) took part in the study. Participants completed a battery of self-report measures online that assessed trait self-control, tendencies of consideration of future consequences, and organizational citizenship behavior. Path analysis was conducted, and a bootstrapping technique (N = 5000), a resampling method that is asymptotically more accurate than standard intervals based on the sample variance and assumptions of normality, was used to judge the significance of the mediation. Results of the path analysis showed that trait self-control was positively related to OCB. More importantly, the "trait self-control-OCB" link was mediated by consideration of future consequences-future, but not by consideration of future consequences-immediate. Employees with high trait self-control engage in more organizational citizenship behavior, and this link can be partly explained by consideration of future consequences-future.
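The bootstrap test of an indirect effect can be sketched for a single mediator as below; the same resampling logic extends to sequential or multiple mediators. The simulated variables, path model, and effect sizes are placeholders, not the study's data.

```python
"""Bootstrap confidence interval for a single-mediator indirect effect (a*b)."""
import numpy as np

rng = np.random.default_rng(6)
n = 494
self_control = rng.normal(size=n)
cfc_future = 0.4 * self_control + rng.normal(size=n)            # mediator
ocb = 0.3 * cfc_future + 0.2 * self_control + rng.normal(size=n)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                                   # x -> m path
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]                  # m -> y path, controlling for x
    return a * b

boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(self_control[idx], cfc_future[idx], ocb[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(self_control, cfc_future, ocb):.3f}, "
      f"95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```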
SEQUENTIAL REDUCTIVE DEHALOGATION OF CHLOROANILINES BY MICROORGANISMS FROM A METHANOGENIC AQUIFER
Chloroaniline-based compounds are widely used chem- icals and important contaminants of aquatic and terrestrial environments. We have found that chloroanilines can be biologically dehalogenated in polluted aquifers when methanogenic, but not sulfate-reducing conditions prevail. T...
NASA Technical Reports Server (NTRS)
Cowie, L. L.; Rybicki, G. B.
1982-01-01
Waves of star formation in a uniform, differentially rotating disk galaxy are treated analytically as a propagating detonation wave front. It is shown that if single solitary waves could be excited, they would evolve asymptotically to one of two stable spiral forms, each of which rotates with a fixed pattern speed. Simple numerical solutions confirm these results. However, the pattern of waves that develop naturally from an initially localized disturbance is more complex and dies out within a few rotation periods. These results suggest a conclusive observational test for deciding whether sequential star formation is an important determinant of spiral structure in some class of galaxies.
Overcoming catastrophic forgetting in neural networks
Kirkpatrick, James; Pascanu, Razvan; Rabinowitz, Neil; Veness, Joel; Desjardins, Guillaume; Rusu, Andrei A.; Milan, Kieran; Quan, John; Ramalho, Tiago; Grabska-Barwinska, Agnieszka; Hassabis, Demis; Clopath, Claudia; Kumaran, Dharshan; Hadsell, Raia
2017-01-01
The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially. PMID:28292907
Re-Examining Group Development in Adventure Therapy Groups.
ERIC Educational Resources Information Center
DeGraaf, Don; Ashby, Jeff
1998-01-01
Small-group development is an important aspect of adventure therapy. Supplementing knowledge of sequential stages of group development with knowledge concerning within-stage nonsequential development yields a richer understanding of groups. Integrating elements of the individual counseling relationship (working alliance, transference, and real…
Donovan, Rory M.; Tapia, Jose-Juan; Sullivan, Devin P.; Faeder, James R.; Murphy, Robert F.; Dittrich, Markus; Zuckerman, Daniel M.
2016-01-01
The long-term goal of connecting scales in biological simulation can be facilitated by scale-agnostic methods. We demonstrate that the weighted ensemble (WE) strategy, initially developed for molecular simulations, applies effectively to spatially resolved cell-scale simulations. The WE approach runs an ensemble of parallel trajectories with assigned weights and uses a statistical resampling strategy of replicating and pruning trajectories to focus computational effort on difficult-to-sample regions. The method can also generate unbiased estimates of non-equilibrium and equilibrium observables, sometimes with significantly less aggregate computing time than would be possible using standard parallelization. Here, we use WE to orchestrate particle-based kinetic Monte Carlo simulations, which include spatial geometry (e.g., of organelles, plasma membrane) and biochemical interactions among mobile molecular species. We study a series of models exhibiting spatial, temporal and biochemical complexity and show that although WE has important limitations, it can achieve performance significantly exceeding standard parallel simulation—by orders of magnitude for some observables. PMID:26845334
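The replicate-and-prune bookkeeping at the core of WE can be illustrated for a single bin as below. Trajectory states, bin definitions, and the dynamics run between resampling steps are omitted, and the weights are arbitrary, so this is only a toy of the weight-conserving split/merge step.

```python
"""Toy weighted-ensemble resampling step for one bin: split and merge walkers
so the bin holds a target count while the total statistical weight is conserved."""
import numpy as np

rng = np.random.default_rng(7)

def resample_bin(weights, target):
    """Return new walker weights after merging/splitting to `target` walkers."""
    weights = list(weights)
    # Merge: combine the two lightest walkers into one carrying their summed weight
    # (in full WE the surviving trajectory is chosen randomly in proportion to weight).
    while len(weights) > target:
        weights.sort()
        w1, w2 = weights.pop(0), weights.pop(0)
        weights.append(w1 + w2)
    # Split: divide the heaviest walker into two equal-weight copies.
    while len(weights) < target:
        weights.sort()
        w = weights.pop(-1)
        weights.extend([w / 2, w / 2])
    return np.array(weights)

walkers = rng.uniform(0.001, 0.1, size=7)
new = resample_bin(walkers, target=4)
print("before:", np.round(walkers, 4), "sum =", round(walkers.sum(), 6))
print("after :", np.round(new, 4), "sum =", round(new.sum(), 6))
```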
Irwin, Kara C; Konnert, Candace; Wong, May; O'Neill, Thomas A
2014-04-01
Symptoms of posttraumatic stress disorder (PTSD) and pain are often comorbid among veterans. The purpose of this study was to investigate to what extent symptoms of anxiety, depression, and alcohol use mediated the relationship between PTSD symptoms and pain among 113 treated male Canadian veterans. Measures of PTSD, pain, anxiety symptoms, depression symptoms, and alcohol use were collected as part of the initial assessment. The bootstrapped resampling analyses were consistent with the hypothesis of mediation for anxiety and depression, but not alcohol use. The confidence intervals did not include zero and the indirect effect of PTSD on pain through anxiety was .04, CI [.03, .07]. The indirect effect of PTSD on pain through depression was .04, CI [.02, .07]. These findings suggest that PTSD and pain symptoms among veterans may be related through the underlying symptoms of anxiety and depression, thus emphasizing the importance of targeting anxiety and depression symptoms when treating comorbid PTSD and pain patients. © 2014 International Society for Traumatic Stress Studies.
A Downloadable Three-Dimensional Virtual Model of the Visible Ear
Wang, Haobing; Merchant, Saumil N.; Sorensen, Mads S.
2008-01-01
Purpose To develop a three-dimensional (3-D) virtual model of a human temporal bone and surrounding structures. Methods A fresh-frozen human temporal bone was serially sectioned and digital images of the surface of the tissue block were recorded (the ‘Visible Ear’). The image stack was resampled at a final resolution of 50 × 50 × 50/100 µm/voxel, registered in custom software and segmented in PhotoShop® 7.0. The segmented image layers were imported into Amira® 3.1 to generate smooth polygonal surface models. Results The 3-D virtual model presents the structures of the middle, inner and outer ears in their surgically relevant surroundings. It is packaged within a cross-platform freeware, which allows for full rotation, visibility and transparency control, as well as the ability to slice the 3-D model open at any section. The appropriate raw image can be superimposed on the cleavage plane. The model can be downloaded at https://research.meei.harvard.edu/Otopathology/3dmodels/ PMID:17124433
LaBeaud, A.D.; Cross, P.C.; Getz, W.M.; Glinka, A.; King, C.H.
2011-01-01
Rift Valley fever virus (RVFV) is an emerging biodefense pathogen that poses significant threats to human and livestock health. To date, the interepidemic reservoirs of RVFV are not well defined. In a longitudinal survey of infectious diseases among African buffalo during 2000-2006, 550 buffalo were tested for antibodies against RVFV in 820 capture events in 302 georeferenced locations in Kruger National Park, South Africa. Overall, 115 buffalo (21%) were seropositive. Seroprevalence of RVFV was highest (32%) in the first study year, and decreased progressively in subsequent years, but had no detectable impact on survival. Nine (7%) of 126 resampled, initially seronegative animals seroconverted during periods outside any reported regional RVFV outbreaks. Seroconversions for RVFV were detected in significant temporal clusters during 2001-2003 and in 2004. These findings highlight the potential importance of wildlife as reservoirs for RVFV and interepidemic RVFV transmission in perpetuating regional RVFV transmission risk. Copyright © 2011 by The American Society of Tropical Medicine and Hygiene.
Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil
2014-08-01
We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.
An Efficient Audio Watermarking Algorithm in Frequency Domain for Copyright Protection
NASA Astrophysics Data System (ADS)
Dhar, Pranab Kumar; Khan, Mohammad Ibrahim; Kim, Cheol-Hong; Kim, Jong-Myon
Digital watermarking plays an important role in the copyright protection of multimedia data. This paper proposes a new watermarking system in the frequency domain for copyright protection of digital audio. In our proposed watermarking system, the original audio is segmented into non-overlapping frames. Watermarks are then embedded into the selected prominent peaks in the magnitude spectrum of each frame. Watermarks are extracted by performing the inverse operation of the watermark embedding process. Simulation results indicate that the proposed watermarking system is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, MP3 compression, and low-pass filtering. Our proposed watermarking system outperforms Cox's method in terms of imperceptibility, while keeping robustness comparable to Cox's method. Our proposed system achieves SNR (signal-to-noise ratio) values ranging from 20 dB to 28 dB, in contrast to Cox's method, which achieves SNR values ranging from only 14 dB to 23 dB.
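The embedding side of such a scheme can be sketched in a few lines: each non-overlapping frame is transformed with an FFT, its most prominent magnitude peaks are located, and each peak is scaled up or down to encode one watermark bit. The frame length, number of peaks per frame, embedding strength, and host signal below are illustrative assumptions, not the authors' parameters.

```python
"""Simplified peak-based frequency-domain audio watermark embedding."""
import numpy as np

rng = np.random.default_rng(8)
fs, frame_len, n_peaks, alpha = 16000, 1024, 4, 0.05
audio = rng.normal(0, 0.1, fs * 2)                      # placeholder host signal
bits = rng.integers(0, 2, size=(audio.size // frame_len) * n_peaks)

watermarked = audio.copy()
k = 0
for start in range(0, audio.size - frame_len + 1, frame_len):
    frame = watermarked[start:start + frame_len]
    spec = np.fft.rfft(frame)
    peaks = np.argsort(np.abs(spec))[-n_peaks:]         # most prominent spectral peaks
    for p in peaks:
        sign = 1.0 if bits[k] else -1.0                 # bit 1 boosts, bit 0 attenuates
        spec[p] *= (1.0 + sign * alpha)
        k += 1
    watermarked[start:start + frame_len] = np.fft.irfft(spec, n=frame_len)

snr = 10 * np.log10(np.sum(audio**2) / np.sum((audio - watermarked)**2))
print(f"embedded {k} bits, SNR of watermarked signal ≈ {snr:.1f} dB")
```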
The role of interpersonal sensitivity, social support, and quality of life in rural older adults.
Wedgeworth, Monika; LaRocca, Michael A; Chaplin, William F; Scogin, Forrest
The mental health of elderly individuals in rural areas is increasingly relevant as populations age and social structures change. While social support satisfaction is a well-established predictor of quality of life, interpersonal sensitivity symptoms may diminish this relation. The current study extends the findings of Scogin et al by investigating the relationship among interpersonal sensitivity, social support satisfaction, and quality of life among rural older adults and exploring the mediating role of social support in the relation between interpersonal sensitivity and quality of life (N = 128). Hierarchical regression revealed that interpersonal sensitivity and social support satisfaction predicted quality of life. In addition, bootstrapping resampling supported the role of social support satisfaction as a mediator between interpersonal sensitivity symptoms and quality of life. These results underscore the importance of nurses and allied health providers in assessing and attending to negative self-perceptions of clients, as well as the perceived quality of their social networks. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Basieva, Irina; Khrennikov, Andrei
2015-10-01
In this paper we study whether quantum observables can be used to describe a combination of the order effect with sequential reproducibility for quantum measurements. By the order effect we mean a dependence of the probability distributions (of measurement results) on the order of measurements. We consider two types of sequential reproducibility: adjacent reproducibility (A-A) (the standard perfect repeatability) and separated reproducibility (A-B-A). The first is reproducibility, with probability 1, of the result of an observable A measured twice, one A measurement immediately after the other. The second, A-B-A, is reproducibility, with probability 1, of the result of an A measurement when another quantum observable B is measured between the two A measurements. Heuristically, it is clear that the second type of reproducibility is complementary to the order effect. We show that, surprisingly, this need not be the case. The order effect can coexist with separated reproducibility as well as adjacent reproducibility for both observables A and B. However, the additional constraint in the form of separated reproducibility of the B-A-B type makes this coexistence impossible. The problem under consideration was motivated by attempts to apply the quantum formalism outside of physics, especially in cognitive psychology and psychophysics. However, it is also important for the foundations of quantum physics as a part of the problem of the structure of sequential quantum measurements.
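For projective measurements these notions can be written compactly as below; this is an illustrative formalization (the paper works with more general quantum observables), with E^A_a and E^B_b denoting the spectral projectors of A and B and ρ the state.

```latex
% Illustrative formalization for projective measurements.
p_{AB}(a,b) \;=\; \mathrm{Tr}\!\left[E^B_b E^A_a \,\rho\, E^A_a E^B_b\right],
\qquad
\text{order effect:}\;\; p_{AB}(a,b) \neq p_{BA}(a,b)\ \text{for some } a,b .

\text{A--B--A reproducibility:}\quad
\sum_b \mathrm{Tr}\!\left[E^A_a E^B_b E^A_a \,\rho\, E^A_a E^B_b E^A_a\right]
\;=\; \mathrm{Tr}\!\left[E^A_a \rho\right]
\quad\text{for every outcome } a .
```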
Effectiveness of sequential automatic-manual home respiratory polygraphy scoring.
Masa, Juan F; Corral, Jaime; Pereira, Ricardo; Duran-Cantolla, Joaquin; Cabello, Marta; Hernández-Blasco, Luis; Monasterio, Carmen; Alonso-Fernandez, Alberto; Chiner, Eusebi; Vázquez-Polo, Francisco-José; Montserrat, Jose M
2013-04-01
Automatic home respiratory polygraphy (HRP) scoring functions can potentially confirm the diagnosis of sleep apnoea-hypopnoea syndrome (SAHS) (obviating technician scoring) in a substantial number of patients. The result would have important management and cost implications. The aim of this study was to determine the diagnostic cost-effectiveness of a sequential HRP scoring protocol (automatic and then manual for residual cases) compared with manual HRP scoring and with in-hospital polysomnography. We included suspected SAHS patients in a multicentre study and assigned them to home and hospital protocols at random. We constructed receiver operating characteristic (ROC) curves for manual and automatic scoring. Diagnostic agreement for several cut-off points was explored and costs for two equally effective alternatives were calculated. Of 366 randomised patients, 348 completed the protocol. Manual scoring produced better ROC curves than automatic scoring. There was no sufficiently sensitive automatic or subsequent manual HRP apnoea-hypopnoea index (AHI) cut-off point. The specific cut-off points for automatic and subsequent manual HRP scoring (AHI >25 and >20, respectively) had a specificity of 93% for automatic and 94% for manual scoring. The cost of the manual protocol was 9% higher than that of the sequential HRP protocol; these were 69% and 64%, respectively, of the cost of polysomnography. A sequential HRP scoring protocol is a cost-effective alternative to polysomnography, although with limited cost savings compared with HRP manual scoring.
ERIC Educational Resources Information Center
Zhang, Dongbo; Koda, Keiko
2012-01-01
Within the Structural Equation Modeling framework, this study tested the direct and indirect effects of morphological awareness and lexical inferencing ability on L2 vocabulary knowledge and reading comprehension among advanced Chinese EFL readers in a university in China. Using both regular z-test and the bootstrapping (data-based resampling)…
The Potential for Restoration to Break the Grass/Fire Cycle in Dryland Ecosystems in Hawaii
2016-11-01
[Fragmentary report excerpt: an acronym list (PSAG, Pleistocene Substrate Age Gradient; PSW, Pacific Southwest Station; PTA, Pohakuloa Training Area; PV, Photosynthetic Vegetation) followed by text snippets noting that imagery was resampled to 1.5 m resolution over a study area covering 23 individual photographs, that solar illumination varied among images, and that in these dryland ecosystems many plant species benefit from similar conditions during regeneration, such as increased water availability and decreased solar radiation.]
Radar/Sonar and Time Series Analysis
1991-04-08
Fourier and Likelihood Analysis in NMR Spectroscopy .......... David Brillinger and Reinhold Kaiser Resampling Techniques for Stationary Time-series... Meyer The parabolic Fock theory foi a convex dielectric Georgia Tech. scatterer Abstract. This talk deals with a high frequency as) mptotic m~thod for...Malesky Inst. of Physics, Moscow Jun 11 - Jun 15 Victor P. Maslov MIEIM, USSR May 29 - Jun 15 Robert P. Meyer University of Wisconsin Jun 11 - Jun 15
D.W. Johnson; C.C. Trettin; D.E. Todd
2016-01-01
Vegetation, forest floor, and soils were resampled at a mixed oak site in eastern Tennessee that had been subjected to stem only (SOH), whole-tree harvest (WTH), and no harvest (REF) 33 years previously. Although differences between harvest treatments were not statistically significant (P < 0.05), average diameter, height, basal...
Tran, Viet-Thi; Porcher, Raphael; Falissard, Bruno; Ravaud, Philippe
2016-12-01
We describe methods to determine sample sizes in surveys using open-ended questions and assess how resampling methods can be used to determine data saturation in these surveys. We searched the literature for surveys with open-ended questions and assessed the methods used to determine sample size in 100 studies selected at random. Then, we used Monte Carlo simulations on data from a previous study on the burden of treatment to assess the probability of identifying new themes as a function of the number of patients recruited. In the literature, 85% of researchers used a convenience sample, with a median size of 167 participants (interquartile range [IQR] = 69-406). In our simulation study, the probability of identifying at least one new theme for the next included subject was 32%, 24%, and 12% after the inclusion of 30, 50, and 100 subjects, respectively. The inclusion of 150 participants at random resulted in the identification of 92% (IQR = 91-93%) of the themes identified in the original study. In our study, data saturation was most certainly reached for samples of more than 150 participants. Our method may be used to determine when to continue a study to find new themes or stop because of futility. Copyright © 2016 Elsevier Inc. All rights reserved.
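The simulation logic can be sketched as follows: given a subject-by-theme incidence matrix, subjects are repeatedly drawn at random and the probability that the next subject contributes at least one new theme is estimated for several sample sizes. The matrix below is simulated with unequal theme prevalences; the real analysis would use the coded interview data.

```python
"""Monte Carlo estimate of the probability that the next subject adds a new theme."""
import numpy as np

rng = np.random.default_rng(9)
n_subjects, n_themes = 150, 40
# Themes have unequal prevalence, so rare themes appear in only a few subjects.
prevalence = rng.beta(0.3, 2.0, size=n_themes)
mentions = rng.random((n_subjects, n_themes)) < prevalence     # subject x theme matrix

def prob_new_theme(sample_size, n_rep=2000):
    hits = 0
    for _ in range(n_rep):
        order = rng.permutation(n_subjects)
        seen = mentions[order[:sample_size]].any(axis=0)       # themes found so far
        nxt = mentions[order[sample_size]]                     # next subject's themes
        if np.any(nxt & ~seen):
            hits += 1
    return hits / n_rep

for n in (30, 50, 100):
    print(f"P(new theme from subject {n + 1}) ≈ {prob_new_theme(n):.2f}")
```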
Visualizing the anatomical-functional correlation of the human brain
NASA Astrophysics Data System (ADS)
Chang, YuKuang; Rockwood, Alyn P.; Reiman, Eric M.
1995-04-01
Three-dimensional tomographic images obtained from different modalities or from the same modality at different times provide complementary information. For example, while PET shows brain function, images from MRI identify anatomical structures. In this paper, we investigate the problem of displaying available information about structures and function together. Several steps are described to achieve our goal. These include segmentation of the data, registration, resampling, and display. Segmentation is used to identify brain tissue from surrounding tissues, especially in the MRI data. Registration aligns the different modalities as closely as possible. Resampling arises from the registration since two data sets do not usually correspond and the rendering method is most easily achieved if the data correspond to the same grid used in display. We combine several techniques to display the data. MRI data is reconstructed from 2D slices into 3D structures from which isosurfaces are extracted and represented by approximating polygonalizations. These are then displayed using standard graphics pipelines including shaded and transparent images. PET data measures the qualitative rates of cerebral glucose utilization or oxygen consumption. PET image is best displayed as a volume of luminous particles. The combination of both display methods allows the viewer to compare the functional information contained in the PET data with the anatomically more precise MRI data.
Uncertainty Quantification in High Throughput Screening ...
Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of biochemical and cellular processes, including endocrine disruption, cytotoxicity, and zebrafish development. Over 2.6 million concentration response curves are fit to models to extract parameters related to potency and efficacy. Models built on ToxCast results are being used to rank and prioritize the toxicological risk of tested chemicals and to predict the toxicity of tens of thousands of chemicals not yet tested in vivo. However, the data size also presents challenges. When fitting the data, the choice of models, model selection strategy, and hit call criteria must reflect the need for computational efficiency and robustness, requiring hard and somewhat arbitrary cutoffs. When coupled with unavoidable noise in the experimental concentration response data, these hard cutoffs cause uncertainty in model parameters and the hit call itself. The uncertainty will then propagate through all of the models built on the data. Left unquantified, this uncertainty makes it difficult to fully interpret the data for risk assessment. We used bootstrap resampling methods to quantify the uncertainty in fitting models to the concentration response data. Bootstrap resampling determines confidence intervals for
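A hedged sketch of bootstrap confidence intervals for a concentration-response fit; the Hill function, data, and parameter names are illustrative placeholders, not the ToxCast pipeline's actual models or cutoffs:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, top, ac50, n):
    return top / (1.0 + (ac50 / np.maximum(c, 1e-12)) ** n)

def bootstrap_hill(conc, resp, n_boot=1000, seed=0):
    """Case-resampling bootstrap of Hill-model parameters (top, AC50, slope)."""
    rng = np.random.default_rng(seed)
    est, idx = [], np.arange(len(conc))
    for _ in range(n_boot):
        s = rng.choice(idx, size=len(idx), replace=True)
        try:
            p, _ = curve_fit(hill, conc[s], resp[s],
                             p0=[resp.max(), np.median(conc), 1.0], maxfev=5000)
            est.append(p)
        except RuntimeError:            # fit failed on this resample
            continue
    est = np.array(est)
    return np.percentile(est, [2.5, 97.5], axis=0)   # 95% CIs per parameter

# Hypothetical concentration-response data with noise.
conc = np.logspace(-3, 2, 10)
resp = hill(conc, 100.0, 1.0, 1.2) + np.random.default_rng(1).normal(0, 8, 10)
print(bootstrap_hill(conc, resp))
```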
Automated coregistration of MTI spectral bands
NASA Astrophysics Data System (ADS)
Theiler, James P.; Galbraith, Amy E.; Pope, Paul A.; Ramsey, Keri A.; Szymanski, John J.
2002-08-01
In the focal plane of a pushbroom imager, a linear array of pixels is scanned across the scene, building up the image one row at a time. For the Multispectral Thermal Imager (MTI), each of fifteen different spectral bands has its own linear array. These arrays are pushed across the scene together, but since each band's array is at a different position on the focal plane, a separate image is produced for each band. The standard MTI data products (LEVEL1B_R_COREG and LEVEL1B_R_GEO) resample these separate images to a common grid and produce coregistered multispectral image cubes. The coregistration software employs a direct "dead reckoning" approach. Every pixel in the calibrated image is mapped to an absolute position on the surface of the earth, and these are resampled to produce an undistorted coregistered image of the scene. To do this requires extensive information regarding the satellite position and pointing as a function of time, the precise configuration of the focal plane, and the distortion due to the optics. These must be combined with knowledge about the position and altitude of the target on the rotating ellipsoidal earth. We will discuss the direct approach to MTI coregistration, as well as more recent attempts to tweak the precision of the band-to-band registration using correlations in the imagery itself.
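A simplified illustration of the final resampling step under these assumptions: each pixel already carries dead-reckoned ground coordinates (hypothetical easting/northing arrays), and scattered-data interpolation places one band onto a common regular grid. This is not the MTI production code:

```python
import numpy as np
from scipy.interpolate import griddata

def coregister_band(easting, northing, values, grid_e, grid_n):
    """Resample one band's pixels, each tagged with its ground coordinates
    (dead-reckoned easting/northing), onto a common regular output grid."""
    pts = np.column_stack([easting.ravel(), northing.ravel()])
    ge, gn = np.meshgrid(grid_e, grid_n)
    return griddata(pts, values.ravel(), (ge, gn), method="linear")

# Hypothetical band with slightly distorted ground coordinates.
rows, cols = 50, 40
e0, n0 = np.meshgrid(np.arange(cols) * 5.0, np.arange(rows) * 5.0)
easting = e0 + 0.3 * np.sin(n0 / 20.0)        # distortion from pointing/optics
northing = n0
band = np.random.rand(rows, cols)
common = coregister_band(easting, northing, band,
                         np.arange(0, 200, 5.0), np.arange(0, 250, 5.0))
```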
NASA Astrophysics Data System (ADS)
Schonlau, William J.
2006-05-01
An immersive viewing engine providing basic telepresence functionality for a variety of application types is presented. Augmented reality, teleoperation and virtual reality applications all benefit from the use of head mounted display devices that present imagery appropriate to the user's head orientation at full frame rates. Our primary application is the viewing of remote environments, as with a camera equipped teleoperated vehicle. The conventional approach where imagery from a narrow field camera onboard the vehicle is presented to the user on a small rectangular screen is contrasted with an immersive viewing system where a cylindrical or spherical format image is received from a panoramic camera on the vehicle, resampled in response to sensed user head orientation and presented via wide field eyewear display, approaching 180 degrees of horizontal field. Of primary interest is the user's enhanced ability to perceive and understand image content, even when image resolution parameters are poor, due to the innate visual integration and 3-D model generation capabilities of the human visual system. A mathematical model for tracking user head position and resampling the panoramic image to attain distortion free viewing of the region appropriate to the user's current head pose is presented and consideration is given to providing the user with stereo viewing generated from depth map information derived using stereo from motion algorithms.
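A rough sketch of the head-pose-driven resampling, assuming an equirectangular panorama and a simple pinhole output view; the rotation convention and field of view are illustrative choices, not the paper's exact model:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def view_from_pano(pano, yaw, pitch, fov_deg=90.0, out_hw=(480, 640)):
    """Resample an equirectangular panorama into a distortion-free pinhole
    view for a given head orientation (yaw, pitch in radians)."""
    H, W = pano.shape
    h, w = out_hw
    f = 0.5 * w / np.tan(np.radians(fov_deg) / 2.0)
    ys, xs = np.meshgrid(np.arange(h) - h / 2.0, np.arange(w) - w / 2.0, indexing="ij")
    d = np.stack([xs, ys, np.full_like(xs, f)])          # camera-frame rays
    d = d / np.linalg.norm(d, axis=0)
    cy, sy, cp, sp = np.cos(yaw), np.sin(yaw), np.cos(pitch), np.sin(pitch)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    d = np.tensordot(Ry @ Rx, d, axes=1)                 # rotate rays by head pose
    lon = np.arctan2(d[0], d[2])                         # [-pi, pi)
    lat = np.arcsin(np.clip(d[1], -1, 1))                # [-pi/2, pi/2]
    col = (lon + np.pi) / (2 * np.pi) * (W - 1)
    row = (lat + np.pi / 2) / np.pi * (H - 1)
    return map_coordinates(pano, [row, col], order=1, mode="nearest")

# Hypothetical grayscale panorama.
pano = np.random.rand(512, 1024)
view = view_from_pano(pano, yaw=0.4, pitch=-0.1)
```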
Sachan, Prachee; Kumar, Nidhi; Sharma, Jagdish Prasad
2014-01-01
Background: Density of the drugs injected intrathecally is an important factor that influences spread in the cerebrospinal fluid. Mixing adjuvants with local anesthetics (LA) alters their density and hence their spread compared to when given sequentially in separate syringes. Aims: To evaluate the efficacy of intrathecal administration of hyperbaric bupivacaine (HB) and clonidine as a mixture and sequentially in terms of block characteristics, hemodynamics, neonatal outcome, and postoperative pain. Setting and Design: Prospective randomized single-blind study at a tertiary center from 2010 to 2012. Materials and Methods: Ninety full-term parturients scheduled for elective cesarean sections were divided into three groups on the basis of the technique of intrathecal drug administration. Group M received a mixture of 75 μg clonidine and 10 mg HB 0.5%. Group A received 75 μg clonidine after administration of 10 mg HB 0.5% through a separate syringe. Group B received 75 μg clonidine before HB 0.5% (10 mg) through a separate syringe. Statistical analysis used: Observational descriptive statistics, analysis of variance with Bonferroni multiple comparison post hoc test, and Chi-square test. Results: Time to achieve complete sensory and motor block was less in groups A and B, in which drugs were given sequentially. Duration of analgesia lasted longer in group B (474.3 ± 20.79 min) and group A (472.50 ± 22.11 min) than in group M (337 ± 18.22 min) with clinically insignificant influence on hemodynamic parameters and sedation. Conclusion: The sequential technique reduces time to achieve complete sensory and motor block, delays block regression, and significantly prolongs the duration of analgesia. However, it did not matter much whether clonidine was administered before or after HB. PMID:25886098
Nakashima, Ryoichi; Komori, Yuya; Maeda, Eriko; Yoshikawa, Takeharu; Yokosawa, Kazuhiko
2016-01-01
Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about this task. Particularly, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Thus, target detection is aided when observers' attention is captured by the onset signal of a suddenly appearing target among continuously moving distractors (i.e., a passive viewing strategy). This can be applied to stack viewing tasks, because lesions often show up as transient signals in medical images that are presented sequentially, simulating a dynamic, smoothly transforming image progression of organs. However, it is unclear whether observers can detect a target when the target appears at the beginning of a sequential presentation, where the global apparent motion onset signal (i.e., the signal of the initiation of apparent motion by sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performances of radiologists and novices. Results show that the overall performance of radiologists is better than that of novices. Furthermore, the temporal location of lesions in CT image sequences, i.e., when a lesion appears in an image sequence, does not affect the performance of radiologists, whereas it does affect the performance of novices. Results indicate that novices have greater difficulty in detecting a lesion appearing early than late in the image sequence. We suggest that radiologists have other mechanisms, which novices do not have, to detect lesions in medical images with little attention. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as stack viewing tasks. PMID:27774080
Sequentially reweighted TV minimization for CT metal artifact reduction.
Zhang, Xiaomeng; Xing, Lei
2013-07-01
Metal artifact reduction has long been an important topic in x-ray CT image reconstruction. In this work, the authors propose an iterative method that sequentially minimizes a reweighted total variation (TV) of the image and produces substantially artifact-reduced reconstructions. A sequentially reweighted TV minimization algorithm is proposed to fully exploit the sparseness of image gradients (IG). The authors first formulate a constrained optimization model that minimizes a weighted TV of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available projection measurements, with image non-negativity enforced. The authors then solve a sequence of weighted TV minimization problems where weights used for the next iteration are computed from the current solution. Using the complete projection data, the algorithm first reconstructs an image from which a binary metal image can be extracted. Forward projection of the binary image identifies metal traces in the projection space. The metal-free background image is then reconstructed from the metal-trace-excluded projection data by employing a different set of weights. Each minimization problem is solved using a gradient method that alternates projection-onto-convex-sets and steepest descent. A series of simulation and experimental studies are performed to evaluate the proposed approach. Our study shows that the sequentially reweighted scheme, by altering a single parameter in the weighting function, flexibly controls the sparsity of the IG and reconstructs artifact-free images in a two-stage process. It successfully produces images with significantly reduced streak artifacts, suppressed noise and well-preserved contrast and edge properties. The sequentially reweighted TV minimization provides a systematic approach for suppressing CT metal artifacts. The technique can also be generalized to other "missing data" problems in CT image reconstruction.
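A toy illustration of the sequential reweighting idea applied to 2-D denoising rather than CT reconstruction (no projection data or POCS step); the weighting function and parameters are illustrative, not the authors' algorithm:

```python
import numpy as np

def grad(u):
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    return gx, gy

def weighted_tv_denoise(y, w, lam=0.3, step=0.01, n_iter=500, eps=0.1):
    """Gradient descent on 0.5*||u - y||^2 + lam*sum(w*sqrt(|grad u|^2 + eps^2)).
    Parameters are chosen for illustration only."""
    u = y.copy()
    for _ in range(n_iter):
        gx, gy = grad(u)
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        px, py = w * gx / mag, w * gy / mag
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        u -= step * ((u - y) - lam * div)
    return u

def sequentially_reweighted_tv(y, n_outer=4):
    """Outer loop: weights for the next pass come from the current solution,
    w ~ 1/(|grad u| + eps), which promotes sparser image gradients."""
    u, w = y.copy(), np.ones_like(y)
    for _ in range(n_outer):
        u = weighted_tv_denoise(y, w)
        gx, gy = grad(u)
        w = 1.0 / (np.sqrt(gx ** 2 + gy ** 2) + 0.1)
        w /= w.mean()                      # keep the overall weight scale fixed
    return u

truth = np.zeros((64, 64)); truth[20:44, 20:44] = 1.0
noisy = truth + np.random.default_rng(2).normal(0, 0.2, truth.shape)
restored = sequentially_reweighted_tv(noisy)
```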
Asao, Tetsuhiko; Fujiwara, Yutaka; Itahashi, Kota; Kitahara, Shinsuke; Goto, Yasushi; Horinouchi, Hidehito; Kanda, Shintaro; Nokihara, Hiroshi; Yamamoto, Noboru; Takahashi, Kazuhisa; Ohe, Yuichiro
2017-07-01
Second-generation anaplastic lymphoma kinase (ALK) inhibitors, such as alectinib and ceritinib, have recently been approved for treatment of ALK-rearranged non-small-cell lung cancer (NSCLC). An optimal strategy for using 2 or more ALK inhibitors has not been established. We sought to investigate the clinical impact of sequential use of ALK inhibitors on these tumors in clinical practice. Patients with ALK-rearranged NSCLC treated from May 2010 to January 2016 at the National Cancer Center Hospital were identified, and their outcomes were evaluated retrospectively. Fifty-nine patients with ALK-rearranged NSCLC had been treated and 37 cases were assessable. Twenty-six received crizotinib, 21 received alectinib, and 13 (35.1%) received crizotinib followed by alectinib. Response rates and median progression-free survival (PFS) on crizotinib and alectinib (after crizotinib failure) were 53.8% (95% confidence interval [CI], 26.7%-80.9%) and 38.4% (95% CI, 12.0%-64.9%), and 10.7 (95% CI, 5.3-14.7) months and 16.6 (95% CI, 2.9-not calculable), respectively. The median PFS of patients on sequential therapy was 35.2 months (95% CI, 12.7 months-not calculable). The 5-year survival rate of ALK-rearranged patients who received 2 sequential ALK inhibitors from diagnosis was 77.8% (95% CI, 36.5%-94.0%). The combined PFS and 5-year survival rates in patients who received sequential ALK inhibitors were encouraging. Making full use of multiple ALK inhibitors might be important to prolonging survival in patients with ALK-rearranged NSCLC. Copyright © 2016 Elsevier Inc. All rights reserved.
Collapsing lattice animals and lattice trees in two dimensions
NASA Astrophysics Data System (ADS)
Hsu, Hsiao-Ping; Grassberger, Peter
2005-06-01
We present high statistics simulations of weighted lattice bond animals and lattice trees on the square lattice, with fugacities for each non-bonded contact and for each bond between two neighbouring monomers. The simulations are performed using a newly developed sequential sampling method with resampling, very similar to the pruned-enriched Rosenbluth method (PERM) used for linear chain polymers. We determine with high precision the line of second-order transitions from an extended to a collapsed phase in the resulting two-dimensional phase diagram. This line includes critical bond percolation as a multicritical point, and we verify that this point divides the line into different universality classes. One of them corresponds to the collapse driven by contacts and includes the collapse of (weakly embeddable) trees. There is some evidence that the other is subdivided again into two parts with different universality classes. One of these (at the far side from collapsing trees) is bond driven and is represented by the Derrida-Herrmann model of animals having bonds only (no contacts). Between the critical percolation point and this bond-driven collapse seems to be an intermediate regime, whose other end point is a multicritical point P* where a transition line between two collapsed phases (one bond driven and the other contact driven) sparks off. This point P* seems to be attractive (in the renormalization group sense) from the side of the intermediate regime, so there are four universality classes on the transition line (collapsing trees, critical percolation, intermediate regime, and Derrida-Herrmann). We obtain very precise estimates for all critical exponents for collapsing trees. It is already harder to estimate the critical exponents for the intermediate regime. Finally, it is very difficult to obtain with our method good estimates of the critical parameters of the Derrida-Herrmann universality class. As regards the bond-driven to contact-driven transition in the collapsed phase, we have some evidence for its existence and rough location, but no precise estimates of critical exponents.
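For orientation, a compact sketch of plain PERM for linear self-avoiding walks on the square lattice, which the authors' sequential sampling method generalizes to animals and trees; the prune/enrich thresholds here use the known connective constant as a stand-in for the running partition-function estimate that real PERM maintains:

```python
import numpy as np

rng = np.random.default_rng(3)
MU = 2.64          # square-lattice SAW connective constant (illustrative stand-in)
samples = []       # (Rosenbluth weight, length) of completed chains

def perm_grow(pos, occupied, weight, n_max):
    """Grow a self-avoiding walk monomer by monomer with Rosenbluth weighting;
    prune low-weight chains and enrich (clone) high-weight ones (PERM)."""
    n = len(occupied) - 1                       # current number of bonds
    if n == n_max:
        samples.append((weight, n))
        return
    x, y = pos
    free = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (x + dx, y + dy) not in occupied]
    if not free:                                # chain is trapped and dies
        return
    weight *= len(free)                         # Rosenbluth factor
    nxt = free[rng.integers(len(free))]
    w_hi, w_lo = 10.0 * MU ** (n + 1), 0.1 * MU ** (n + 1)
    if weight > w_hi:                           # enrichment: two copies, half weight
        for _ in range(2):
            perm_grow(nxt, occupied | {nxt}, weight / 2.0, n_max)
    elif weight < w_lo:                         # pruning: kill half, double survivors
        if rng.random() < 0.5:
            perm_grow(nxt, occupied | {nxt}, 2.0 * weight, n_max)
    else:
        perm_grow(nxt, occupied | {nxt}, weight, n_max)

for _ in range(500):                            # independent tours
    perm_grow((0, 0), {(0, 0)}, 1.0, n_max=30)
print(len(samples), "chains of length 30 sampled")
```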
Cella, Laura; Liuzzi, Raffaele; Conson, Manuel; D'Avino, Vittoria; Salvatore, Marco; Pacelli, Roberto
2012-12-27
Hypothyroidism is a frequent late side effect of radiation therapy of the cervical region. The purpose of this work is to develop multivariate normal tissue complication probability (NTCP) models for radiation-induced hypothyroidism (RHT) and to compare them with already existing NTCP models for RHT. Fifty-three patients treated with sequential chemo-radiotherapy for Hodgkin's lymphoma (HL) were retrospectively reviewed for RHT events. Clinical information along with thyroid gland dose distribution parameters were collected and their correlation to RHT was analyzed by Spearman's rank correlation coefficient (Rs). A multivariate logistic regression method using resampling (bootstrapping) was applied to select model order and parameters for NTCP modeling. Model performance was evaluated through the area under the receiver operating characteristic curve (AUC). Models were tested against external published data on RHT and compared with other published NTCP models. If we express the thyroid volume exceeding X Gy as a percentage (Vx(%)), a two-variable NTCP model including V30(%) and gender proved to be the optimal predictive model for RHT (Rs = 0.615, p < 0.001; AUC = 0.87). Conversely, if the absolute thyroid volume exceeding X Gy (Vx(cc)) was analyzed, an NTCP model based on 3 variables including V30(cc), thyroid gland volume and gender was selected as the most predictive model (Rs = 0.630, p < 0.001; AUC = 0.85). The three-variable model performs better when tested on an external cohort characterized by large inter-individual variation in thyroid volumes (AUC = 0.914, 95% CI 0.760-0.984). A comparable performance was found between our model and that proposed in the literature based on thyroid gland mean dose and volume (p = 0.264). The absolute volume of thyroid gland exceeding 30 Gy in combination with thyroid gland volume and gender provides an NTCP model for RHT with improved prediction capability not only within our patient population but also in an external cohort.
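A hedged stand-in for the resampling step: bootstrap refitting of an L1-penalized logistic model to count how often each candidate variable is retained. This is a generic selection-frequency sketch on simulated data, not the authors' exact model-order selection procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

def bootstrap_selection(X, y, names, n_boot=500, C=0.5, seed=0):
    """Refit an L1-penalized logistic model on bootstrap resamples and count
    how often each candidate predictor is retained (non-zero coefficient)."""
    rng = np.random.RandomState(seed)
    counts = np.zeros(X.shape[1])
    for _ in range(n_boot):
        Xb, yb = resample(X, y, random_state=rng)
        m = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(Xb, yb)
        counts += (np.abs(m.coef_[0]) > 1e-8)
    return dict(zip(names, counts / n_boot))

# Hypothetical cohort: V30, thyroid volume, gender, plus an irrelevant covariate.
rng = np.random.default_rng(4)
n = 53
X = np.column_stack([rng.normal(12, 5, n),      # V30 (cc)
                     rng.normal(11, 3, n),      # thyroid volume (cc)
                     rng.integers(0, 2, n),     # gender
                     rng.normal(0, 1, n)])      # noise variable
logit = 0.25 * X[:, 0] - 0.3 * X[:, 1] + 1.0 * X[:, 2] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
print(bootstrap_selection(X, y, ["V30cc", "thyroid_vol", "gender", "noise"]))
```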
NASA Astrophysics Data System (ADS)
Saidaliyeva, Zarina; Davenport, Ian; Nobakht, Mohamad; White, Kevin; Shahgedanova, Maria
2017-04-01
Kazakhstan is a major producer of grain. Large scale grain production dominates in the north, making Kazakhstan one of the largest exporters of grain in the world. Agricultural production accounts for 9% of the national GDP, providing 25% of national employment. The south relies on grain production from household farms for subsistence, and has low resilience, so is vulnerable to reductions in output. Yields in the south depend on snowmelt and glacier runoff. The major limit to production is water supply, which is affected by glacier retreat and frequent droughts. Climate change is likely to impact all climate drivers negatively, leading to a decrease in crop yield, which will impact Kazakhstan and countries dependent on importing its produce. This work makes initial steps in modelling the impact of climate change on crop yield, by identifying the links between snowfall, soil moisture and agricultural productivity. Several remotely-sensed data sources are being used. The availability of snowmelt water over the period 2010-2014 is estimated by extracting the annual maximum snow water equivalent (SWE) from the Globsnow dataset, which assimilates satellite microwave observations with field observations to produce a spatial map. Soil moisture over the period 2010-2016 is provided by the ESA Soil Moisture and Ocean Salinity (SMOS) mission. Vegetation density is approximated by the Normalised Difference Vegetation Index (NDVI) produced from NASA's MODIS instruments. Statistical information on crop yields is provided by the Ministry of National Economy of the Republic of Kazakhstan Committee on Statistics. Demonstrating the link between snowmelt yield and agricultural productivity depends on showing the impact of snow mass during winter on remotely-sensed soil moisture, the link between soil moisture and vegetation density, and finally the link between vegetation density and crop yield. Soil moisture maps were extracted from SMOS observations, resampled onto a 40 km x 40 km grid, and analysed to produce monthly averages. The monthly maximum snow water equivalent estimates for March were resampled onto the same grid, to approximate the total snow contributing to snowmelt. The MODIS MOD13A2 1 km 16-day NDVI product was resampled onto the same 40 km grid, and aggregated into 32-day averages. Annual crop yield is available in terms of kg of yield per hectare for each region in Kazakhstan between 2004 and 2015. To show the connection between the snowmelt and soil moisture, the cells within the snow and soil moisture grids were compared to calculate correlation. Data were aggregated per region. Regions in northern Kazakhstan showed the strongest correlations, because more of the soil water supply is derived from snowmelt than rain, and the southern regions showed poor correlation because of the greater influence of rainfall and irrigation. Analyses of the correlations between soil moisture, vegetation density, and crop yield are ongoing, and results will be presented. It is envisaged that this research will assist the Kazakh farming community, providing real-time soil moisture data from SMOS.
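A minimal sketch of the gridding and correlation analysis, with synthetic fields standing in for Globsnow SWE and SMOS soil moisture; block sizes and shapes are illustrative:

```python
import numpy as np

def block_mean(arr, block):
    """Aggregate a fine-resolution 2-D field onto a coarser grid by averaging
    non-overlapping blocks (e.g. ~1 km cells onto a 40 km x 40 km grid)."""
    h, w = arr.shape
    h2, w2 = h // block * block, w // block * block
    a = arr[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    return np.nanmean(a, axis=(1, 3))

def per_cell_correlation(stack_a, stack_b):
    """Pearson correlation, per grid cell, between two (years, rows, cols)
    stacks, e.g. March maximum SWE vs. early-summer soil moisture."""
    a = stack_a - stack_a.mean(axis=0)
    b = stack_b - stack_b.mean(axis=0)
    num = (a * b).sum(axis=0)
    den = np.sqrt((a ** 2).sum(axis=0) * (b ** 2).sum(axis=0))
    return num / np.where(den == 0, np.nan, den)

# Hypothetical fine-resolution fields for five years.
rng = np.random.default_rng(5)
swe = np.stack([block_mean(rng.random((400, 400)), 40) for _ in range(5)])
sm = 0.6 * swe + 0.4 * rng.random(swe.shape)
corr_map = per_cell_correlation(swe, sm)
```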
Carvalho, Sofia D; Folta, Kevin M
2014-01-01
Different light wavelengths have specific effects on plant growth and development. Narrow-bandwidth light-emitting diode (LED) lighting may be used to directionally manipulate size, color and metabolites in high-value fruits and vegetables. In this report, Red Russian kale (Brassica napus) seedlings were grown under specific light conditions and analyzed for photomorphogenic responses, pigment accumulation and nutraceutical content. The results showed that this genotype responds predictably to darkness, blue and red light, with suppression of hypocotyl elongation, development of pigments and changes in specific metabolites. However, these seedlings were relatively hypersensitive to far-red light, leading to uncharacteristically short hypocotyls and high pigment accumulation, even after growth under very low fluence rates (<1 μmol m−2 s−1). General antioxidant levels and aliphatic glucosinolates are elevated by far-red light treatments. Sequential treatments of darkness, blue light, red light and far-red light were applied throughout sprout development to alter final product quality. These results indicate that sequential treatment with narrow-bandwidth light may be used to affect key economically important traits in high-value crops. PMID:26504531
Sequential associative memory with nonuniformity of the layer sizes.
Teramae, Jun-Nosuke; Fukai, Tomoki
2007-01-01
Sequence retrieval has a fundamental importance in information processing by the brain, and has been studied extensively in neural network models. Most previous sequential associative memory models embed sequences of memory patterns of nearly equal sizes. It was recently shown that local cortical networks display many diverse yet repeatable precise temporal sequences of neuronal activities, termed "neuronal avalanches." Interestingly, these avalanches displayed size and lifetime distributions that obey power laws. Inspired by these experimental findings, here we consider an associative memory model of binary neurons that stores sequences of memory patterns with highly variable sizes. Our analysis includes the case where the statistics of these size variations obey the above-mentioned power laws. We study the retrieval dynamics of such memory systems by analytically deriving the equations that govern the time evolution of macroscopic order parameters. We calculate the critical sequence length beyond which the network cannot retrieve memory sequences correctly. As an application of the analysis, we show how the present variability in sequential memory patterns degrades the power-law lifetime distribution of retrieved neural activities.
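For the textbook case of equally sized patterns (not the variable-size model analyzed here), sequence retrieval can be illustrated with an asymmetric Hebbian rule that maps each pattern onto its successor:

```python
import numpy as np

rng = np.random.default_rng(6)
N, P = 400, 10                                  # neurons, sequence length
xi = rng.choice([-1, 1], size=(P, N))           # random +/-1 memory patterns

# Asymmetric Hebbian coupling stores pattern-to-next-pattern transitions.
W = sum(np.outer(xi[m + 1], xi[m]) for m in range(P - 1)) / N

s = xi[0].copy()
overlaps = []
for t in range(P - 1):
    s = np.sign(W @ s)                          # synchronous update
    s[s == 0] = 1
    overlaps.append([float(xi[m] @ s) / N for m in range(P)])

# Row t should overlap strongly (close to 1) with pattern t+1.
print(np.round(overlaps, 2))
```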
NASA Astrophysics Data System (ADS)
Shyu, Mei-Ling; Huang, Zifang; Luo, Hongli
In recent years, pervasive computing infrastructures have greatly improved the interaction between humans and systems. As we put more reliance on these computing infrastructures, we also face threats of network intrusion and/or new forms of undesirable IT-based activities. Hence, network security has become an extremely important issue, which is closely connected with homeland security, business transactions, and people's daily life. Accurate and efficient intrusion detection technologies are required to safeguard network systems and the critical information transmitted in them. In this chapter, a novel network intrusion detection framework for mining and detecting sequential intrusion patterns is proposed. The proposed framework consists of a Collateral Representative Subspace Projection Modeling (C-RSPM) component for supervised classification, and an inter-transactional association rule mining method based on Layer Divided Modeling (LDM) for temporal pattern analysis. Experiments on the KDD99 data set and the traffic data set generated by a private LAN testbed show promising results with high detection rates, low processing time, and low false alarm rates in mining and detecting sequential intrusion patterns.
NASA Astrophysics Data System (ADS)
Hoffmann, Mathias; Jurisch, Nicole; Garcia Alba, Juana; Albiac Borraz, Elisa; Schmidt, Marten; Huth, Vytas; Rogasik, Helmut; Rieckh, Helene; Verch, Gernot; Sommer, Michael; Augustin, Jürgen
2017-03-01
Carbon (C) sequestration in soils plays a key role in the global C cycle. It is therefore crucial to adequately monitor dynamics in soil organic carbon (ΔSOC) stocks when aiming to reveal underlying processes and potential drivers. However, small-scale spatial (10-30 m) and temporal changes in SOC stocks, particularly pronounced in arable lands, are hard to assess. The main reasons for this are limitations of the well-established methods. On the one hand, repeated soil inventories, often used in long-term field trials, reveal spatial patterns and trends in ΔSOC but require a longer observation period and a sufficient number of repetitions. On the other hand, eddy covariance measurements of C fluxes towards a complete C budget of the soil-plant-atmosphere system may help to obtain temporal ΔSOC patterns but lack small-scale spatial resolution. To overcome these limitations, this study presents a reliable method to detect both short-term temporal dynamics as well as small-scale spatial differences of ΔSOC using measurements of the net ecosystem carbon balance (NECB) as a proxy. To estimate the NECB, a combination of automatic chamber (AC) measurements of CO2 exchange and empirically modeled aboveground biomass development (NPPshoot) were used. To verify our method, results were compared with ΔSOC observed by soil resampling. Soil resampling and AC measurements were performed from 2010 to 2014 at a colluvial depression located in the hummocky ground moraine landscape of northeastern Germany. The measurement site is characterized by a variable groundwater level (GWL) and pronounced small-scale spatial heterogeneity regarding SOC and nitrogen (Nt) stocks. Tendencies and magnitude of ΔSOC values derived by AC measurements and repeated soil inventories corresponded well. The period of maximum plant growth was identified as being most important for the development of spatial differences in annual ΔSOC. Hence, we were able to confirm that AC-based C budgets are able to reveal small-scale spatial differences and short-term temporal dynamics of ΔSOC.
Kuzawa, Christopher W; Eisenberg, Dan T A
2012-01-01
Birth weight (BW) predicts many health outcomes, but the relative contributions of genes and environmental factors to BW remain uncertain. Some studies report stronger mother-offspring than father-offspring BW correlations, with attenuated father-offspring BW correlations when the mother is stunted. These findings have been interpreted as evidence that maternal genetic or environmental factors play an important role in determining birth size, with small maternal size constraining paternal genetic contributions to offspring BW. Here we evaluate mother-offspring and father-offspring birth weight (BW) associations and evaluate whether maternal stunting constrains genetic contributions to offspring birth size. Data include BW of offspring (n = 1,101) born to female members (n = 382) and spouses of male members (n = 275) of a birth cohort (born 1983-84) in Metropolitan Cebu, Philippines. Regression was used to relate parental and offspring BW adjusting for confounders. Resampling testing was used to evaluate whether false paternity could explain any evidence for excess matrilineal inheritance. In a pooled model adjusting for maternal height and confounders, parental BW was a borderline-significantly stronger predictor of offspring BW in mothers compared to fathers (sex of parent interaction p = 0.068). In separate multivariate models, each kg in mother's and father's BW predicted a 271±53 g (p<0.00001) and 132±55 g (p = 0.017) increase in offspring BW, respectively. Resampling statistics suggested that false paternity rates of >25% and likely 50% would be needed to explain these differences. There was no interaction between maternal stature and maternal BW (interaction p = 0.520) or paternal BW (p = 0.545). Each kg change in mother's BW predicted twice the change in offspring BW as predicted by a change in father's BW, consistent with an intergenerational maternal effect on offspring BW. Evidence for excess matrilineal BW heritability at all levels of maternal stature points to indirect genetic, mitochondrial, or epigenetic maternal contributions to offspring fetal growth.
Ensemble-based prediction of RNA secondary structures.
Aghaeepour, Nima; Hoos, Holger H
2013-04-24
Accurate structure prediction methods play an important role for the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that has been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach. In addition, AveRNA allows an intuitive and effective control of the trade-off between false negative and false positive base pair predictions. Finally, AveRNA can make use of arbitrary sets of secondary structure prediction procedures and can therefore be used to leverage improvements in prediction accuracy offered by algorithms and energy models developed in the future. Our data, MATLAB software and a web-based version of AveRNA are publicly available at http://www.cs.ubc.ca/labs/beta/Software/AveRNA.
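A simple majority-vote ensemble over predicted base pairs illustrates the general idea; AveRNA's actual combination scheme is more sophisticated, and the dot-bracket predictions below are hypothetical:

```python
from collections import Counter

def dotbracket_pairs(s):
    """Base pairs (i, j) from a pseudoknot-free dot-bracket string."""
    stack, pairs = [], set()
    for i, c in enumerate(s):
        if c == "(":
            stack.append(i)
        elif c == ")":
            pairs.add((stack.pop(), i))
    return pairs

def vote_ensemble(predictions, threshold=0.5):
    """Keep base pairs predicted by at least `threshold` of the component
    methods; lowering the threshold trades false negatives for false positives."""
    counts = Counter(p for pred in predictions for p in dotbracket_pairs(pred))
    keep = {bp for bp, c in counts.items() if c / len(predictions) >= threshold}
    return sorted(keep)

preds = ["((((....))))",      # hypothetical outputs of three predictors
         "(((......)))",
         ".(((....)))."]
print(vote_ensemble(preds, threshold=2 / 3))
```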
Kim, Hyoungrae; Jang, Cheongyun; Yadav, Dharmendra K; Kim, Mi-Hyun
2017-03-23
The accuracy of 3D-QSAR, pharmacophore and 3D-similarity based chemometric target fishing models is highly dependent on a reasonable sample of active conformations. Although a number of diverse conformational sampling algorithms exist that exhaustively generate enough conformers, model-building methods rely on an explicit number of common conformers. In this work, we have attempted to devise clustering algorithms that can automatically find a reasonable number of representative conformer ensembles from an asymmetric dissimilarity matrix generated with the OpenEye toolkit. RMSD was the key descriptor (variable): each column of the N × N matrix was treated as one of N variables describing the relationship (network) between the conformer in a row and the other N conformers. This approach was used to evaluate the performance of well-known clustering algorithms by comparing them in terms of generating representative conformer ensembles, and to test them over different matrix transformation functions with respect to stability. In the network, the representative conformer group could be resampled by four kinds of algorithms with implicit parameters. The directed dissimilarity matrix is the only input to the clustering algorithms. The Dunn index, Davies-Bouldin index, eta-squared values and omega-squared values were used to evaluate the clustering algorithms with respect to compactness and explanatory power. The evaluation also includes the reduction (abstraction) rate of the data, the correlation between the sizes of the population and the samples, the computational complexity, and the memory usage. Every algorithm could find representative conformers automatically without any user intervention, and they reduced the data to 14-19% of the original values within at most 1.13 s per sample. The clustering methods are simple and practical as they are fast and do not require any explicit parameters. RCDTC presented the maximum Dunn and omega-squared values of the four algorithms, in addition to a consistent reduction rate between the population size and the sample size. The performance of the clustering algorithms was consistent over different transformation functions. Moreover, the clustering method can also be applied to molecular dynamics sampling simulation results.
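A small sketch of one evaluation step: computing the Dunn index for a given clustering directly from a (symmetrized) dissimilarity matrix; the RMSD-like values and labels are synthetic:

```python
import numpy as np

def dunn_index(D, labels):
    """Dunn index from a (possibly asymmetric) dissimilarity matrix D and
    cluster labels: min inter-cluster distance / max intra-cluster diameter."""
    D = 0.5 * (D + D.T)                 # symmetrize an asymmetric RMSD matrix
    labs = np.unique(labels)
    diam = max(D[np.ix_(labels == k, labels == k)].max() for k in labs)
    sep = min(D[np.ix_(labels == a, labels == b)].min()
              for i, a in enumerate(labs) for b in labs[i + 1:])
    return sep / diam

# Hypothetical conformer dissimilarities: two tight groups plus cross terms.
rng = np.random.default_rng(7)
n = 20
labels = np.array([0] * 10 + [1] * 10)
D = rng.uniform(3.0, 5.0, (n, n))               # inter-cluster RMSD-like values
for k in (0, 1):
    idx = np.where(labels == k)[0]
    D[np.ix_(idx, idx)] = rng.uniform(0.2, 1.0, (10, 10))
np.fill_diagonal(D, 0.0)
print(dunn_index(D, labels))
```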
Dong, Angang; Ye, Xingchen; Chen, Jun; Kang, Yijin; Gordon, Thomas; Kikkawa, James M; Murray, Christopher B
2011-02-02
The ability to engineer surface properties of nanocrystals (NCs) is important for various applications, as many of the physical and chemical properties of nanoscale materials are strongly affected by the surface chemistry. Here, we report a facile ligand-exchange approach, which enables sequential surface functionalization and phase transfer of colloidal NCs while preserving the NC size and shape. Nitrosonium tetrafluoroborate (NOBF4) is used to replace the original organic ligands attached to the NC surface, stabilizing the NCs in various polar, hydrophilic media such as N,N-dimethylformamide for years, with no observed aggregation or precipitation. This approach is applicable to various NCs (metal oxides, metals, semiconductors, and dielectrics) of different sizes and shapes. The hydrophilic NCs obtained can subsequently be further functionalized using a variety of capping molecules, imparting different surface functionalization to NCs depending on the molecules employed. Our work provides a versatile ligand-exchange strategy for NC surface functionalization and represents an important step toward controllably engineering the surface properties of NCs.
NASA Astrophysics Data System (ADS)
Khoshkbar Sadigh, Arash
Part I: Dynamic Voltage Restorer. In present power grids, voltage sags are recognized as a serious threat and a frequently occurring power-quality problem, with costly consequences such as tripping of sensitive loads and production loss. Consequently, the demand for high power quality and voltage stability has become a pressing issue. The dynamic voltage restorer (DVR), as a custom power device, is a more effective and direct solution for "restoring" the quality of voltage at its load-side terminals when the quality of voltage at its source-side terminals is disturbed. In the first part of this thesis, a DVR configuration with no need for a bulky dc-link capacitor or energy storage is proposed. This reduces the size of the DVR and increases the reliability of the circuit. In addition, the proposed DVR topology is based on a high-frequency isolation transformer, reducing the transformer size. The proposed DVR circuit, which is suitable for both low- and medium-voltage applications, is based on dc-ac converters connected in series to split the main dc link between the inputs of the dc-ac converters. This feature makes it possible to use modular dc-ac converters and low-voltage components in these converters whenever the DVR is required in medium-voltage applications. The proposed configuration is tested under different conditions of load power factor and grid voltage harmonics. It has been shown that the proposed DVR can compensate voltage sags effectively and protect sensitive loads. Following the proposed DVR topology, a fundamental voltage amplitude detection method applicable to both single- and three-phase systems for DVR applications is proposed. The advantages of the proposed method include applicability in a distorted power grid with no need for a low-pass filter, precise and reliable detection, and simple computation and implementation without a phase-locked loop or lookup table. The proposed method has been verified by simulation and experimental tests under various conditions covering all possible cases, such as different voltage sag depths (VSD), different points-on-wave (POW) at which the voltage sag occurs, harmonic distortion, line frequency variation, and phase jump (PJ). Furthermore, the ripple in the fundamental voltage amplitude calculated by the proposed method, and its error, are analyzed considering line frequency variation together with harmonic distortion. The best and worst detection times of the proposed method were measured as 1 ms and 8.8 ms, respectively. Finally, the proposed method has been compared with other voltage sag detection methods available in the literature. Part 2: Power System Modeling for Renewable Energy Integration. As power distribution systems evolve into more complex networks, electrical engineers have to rely on software tools to perform circuit analysis. There are dozens of powerful software tools available on the market to perform power system studies. Although their main functions are similar, there are differences in features and formatting structures to suit specific applications. This creates challenges for transferring power system circuit model data (PSCMD) between different software packages and rebuilding the same circuit in a second software environment. The objective of this part of the thesis is to develop a Unified Platform (UP) to facilitate transferring PSCMD among different software packages and to relieve the challenges of the circuit model conversion process.
The UP uses a commonly available spreadsheet file with a defined format, for any source software to write data to and any destination software to read data from, via a script-based application called the PSCMD transfer application. The main considerations in developing the UP are to minimize manual intervention and to import a one-line diagram into the destination software, or export it from the source software, with all details needed to allow load flow, short circuit, and other analyses. In this study, ETAP, OpenDSS, and GridLab-D are considered, and PSCMD transfer applications written in MATLAB have been developed for each of these to read the circuit model data provided in the UP spreadsheet. In order to test the developed PSCMD transfer applications, circuit model data of a test circuit and of a power distribution circuit from Southern California Edison (SCE) - a utility company - both built in CYME, were exported into the spreadsheet file according to the UP format. Thereafter, circuit model data were imported successfully from the spreadsheet files into the above-mentioned software using the PSCMD transfer applications developed for each package. After the studied SCE circuit was transferred into OpenDSS using the proposed UP scheme and the developed application, it was studied to investigate the impacts of large-scale solar energy penetration. The main challenge of solar energy integration into the power grid is its intermittency (i.e., discontinuity of output power) due to cloud shading of photovoltaic panels, which depends on weather conditions. To conduct this study, the OpenDSS time-series simulation feature, which is required because of the intermittency of solar energy, is utilized. The impacts of intermittent solar energy penetration, especially at high-variability points, on voltage fluctuation and on the operation of capacitor banks and voltage regulators are presented. In addition, the necessity to interpolate and resample unequally spaced time-series measurement data into equally spaced time-series data, as well as the effect of the resampling time interval on the amount of error, is discussed. Two applications are developed in MATLAB to perform the interpolation and resampling and to calculate the error for different resampling time intervals, in order to determine a suitable resampling time interval. Furthermore, an approach based on cumulative distributions of line/cable lengths by type and of load power ratings is presented to prioritize the loads, lines, and cables at which meters should be installed to have the most effect on model validation.
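The thesis applications were written in MATLAB; a generic Python sketch of the same idea (interpolating unequally spaced measurements onto an equally spaced grid and checking the reconstruction error for candidate intervals) looks like this, with synthetic data:

```python
import numpy as np

def resample_uniform(t, x, dt):
    """Linearly interpolate unequally spaced samples (t, x) onto an equally
    spaced time grid with interval dt."""
    tg = np.arange(t[0], t[-1], dt)
    return tg, np.interp(tg, t, x)

def resampling_error(t, x, dt):
    """Reconstruct the original (irregular) samples from the resampled series
    and report the RMS error, to compare candidate resampling intervals."""
    tg, xg = resample_uniform(t, x, dt)
    x_back = np.interp(t, tg, xg)
    return np.sqrt(np.mean((x_back - x) ** 2))

# Hypothetical irregularly sampled solar-output-like signal.
rng = np.random.default_rng(8)
t = np.sort(rng.uniform(0, 3600, 500))                   # seconds
x = np.sin(2 * np.pi * t / 900) + 0.1 * rng.standard_normal(t.size)
for dt in (1, 5, 15, 60, 300):
    print(dt, round(resampling_error(t, x, dt), 4))
```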
Decadal changes in north-American tundra plant communities
NASA Astrophysics Data System (ADS)
Villarreal, S.; Johnson, D. R.; Webber, P.; Ebert-May, D.; Hollister, R. D.; Tweedie, C. E.
2013-12-01
Improving our understanding of how tundra vegetation responds to environmental change over decadal time scales is important. Tundra plants and ecosystems are well-recognized for their susceptibility to be impacted by climate warming; changes in land-atmosphere carbon, water, and energy balance in tundra landscapes have the potential to impact regional to global-scale climate, and relatively few studies examining change in tundra landscapes have spanned decadal time scales. The majority of our understanding of tundra vegetation responses to environmental change has been derived from studies along environmental gradients, experimental manipulations, and modeling. This study synthesizes the rescue and resampling of historic vegetation study sites established during the 1960's and 1970's at three arctic tundra locations (Baffin Island, Canada, Barrow, Alaska, and Atqasuk, Alaska), and one alpine tundra location (Niwot Ridge, Colorado). We conducted a meta-analysis to examine decadal changes in plant community composition, species richness, species evenness, and species diversity at all locations and for three broad soil moisture classes (dry, moist, wet). For all sites, except Baffin Island, change over the last decade was compared with long term change to determine if rates of change have altered over time. Change in plant community composition was most dramatic at Barrow and Baffin Island (P < 0.05), while less change was detected at Niwot Ridge (P < 0.10), and Atqasuk. Plant communities also changed for all soil moisture classes. The rate of change at Barrow and in moist soil classes appears to have quickened over the last decade. Rates of early plant successional change at Baffin Island appear to have quickened relative to rates documented in the mid 1960's. There were no changes in species richness at any of the locations, but there appears to be acceleration in the loss of species richness for dry and moist tundra. Species evenness increased at Atqasuk and in dry and wet tundra but decreased at Niwot Ridge in moist tundra. A loss in species diversity was detected in moist tundra in the decadal study, while diversity increased for dry and wet tundra. Baffin Island was the only location to show evidence of an increase in species diversity. This study appears to be among the first to document an acceleration of vegetation change in tundra landscapes using decadal time scale observational data, and highlights the importance of both sustained monitoring, and the rescue and resampling of historic sites, which have proven to be effective in advancing our knowledge of vegetation change dynamics. This project was a contribution to the International Polar Year Back to the Future project (IPY#512).
Designing User-Computer Dialogues: Basic Principles and Guidelines.
ERIC Educational Resources Information Center
Harrell, Thomas H.
This discussion of the design of computerized psychological assessment or testing instruments stresses the importance of the well-designed computer-user interface. The principles underlying the three main functional elements of computer-user dialogue--data entry, data display, and sequential control--are discussed, and basic guidelines derived…
A heuristic for landscape management
Martín Alfonso B. Mendoza; Jesús S. Zepeta; Juan José A. Fajardo
2006-01-01
The development of landscape ecology has stressed the importance of spatial and sequential relationships as explanations of forest stand dynamics and of other natural settings. This presentation offers a specific design that introduces spatial considerations into forest planning with the idea of regulating fragmentation and connectivity in commercial forest...
Parent-Implemented Communication Intervention: Sequential Analysis of Triadic Relationships
ERIC Educational Resources Information Center
Brown, Jennifer A.; Woods, Juliann J.
2016-01-01
Collaboration with parents and caregivers to support young children's communication development is an important component to early intervention services. Coaching parents to implement communication support strategies is increasingly common in parent-implemented interventions, but few studies examine the process as well as the outcomes. We explored…
1986-11-01
We can formulate the general weighted resampling formulas by giving an interpolation formula and a sampling formula. Specifically... tessellation grids. 4.1. One-dimensional Adaptive Pyramid. We suggest an interest operator based on the local "busyness" of the data. It has been observed... that in human perception a line with higher "busyness" seems longer than a straight line segment [6], as in Figure 7. Here, we will use a smoothed
ERIC Educational Resources Information Center
Livingston, Samuel A.; Kim, Sooyeon
2010-01-01
A series of resampling studies investigated the accuracy of equating by four different methods in a random groups equating design with samples of 400, 200, 100, and 50 test takers taking each form. Six pairs of forms were constructed. Each pair was constructed by assigning items from an existing test taken by 9,000 or more test takers. The…
Allison Bidlack; Sarah Bisbing; Brian Buma; David D’Amore; Paul Hennon; Thomas Heutte; John Krapek; Robin Mulvey; Lauren Oakes
2017-01-01
In their analysis of resampled and remeasured plot data from the USDA Forest Service Forest Inventory and Analysis (FIA) program, Barrett and Pattison (2017, Can. J. For. Res. 47(1): 97–105, doi:10.1139/cjfr-2016-0335) suggest that there is neither evidence of a recent regional decrease in yellow-cedar (Callitropsis nootkatensis...
Coherent Change Detection: Theoretical Description and Experimental Results
2006-08-01
The nature of the image recovered by the PFA may be ascertained by considering a scene consisting of an elementary point scatterer... For a registered image pair, estimate any dominant relative linear phase term between the primary image and the resampled repeat-pass image and remove this
Dark sequential Z' portal: Collider and direct detection experiments
NASA Astrophysics Data System (ADS)
Arcadi, Giorgio; Campos, Miguel D.; Lindner, Manfred; Masiero, Antonio; Queiroz, Farinaldo S.
2018-02-01
We revisit the status of a Majorana fermion as a dark matter candidate when a sequential Z' gauge boson dictates the dark matter phenomenology. Direct dark matter detection signatures arise from dark matter-nucleus scatterings at bubble chamber and liquid xenon detectors, and from the flux of neutrinos from the Sun measured by the IceCube experiment, which is governed by the spin-dependent dark matter-nucleus scattering. On the collider side, LHC searches for dilepton and monojet + missing energy signals play an important role. The relic density and perturbativity requirements are also addressed. By exploiting the dark matter complementarity we outline the region of parameter space where one can successfully have a Majorana dark matter particle in light of current and planned experimental sensitivities.
Modeling of a Sequential Two-Stage Combustor
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Liu, N.-S.; Gallagher, J. R.; Ryder, R. C.; Brankovic, A.; Hendricks, J. A.
2005-01-01
A sequential two-stage, natural gas fueled power generation combustion system is modeled to examine the fundamental aerodynamic and combustion characteristics of the system. The modeling methodology includes CAD-based geometry definition, and combustion computational fluid dynamics analysis. Graphical analysis is used to examine the complex vortical patterns in each component, identifying sources of pressure loss. The simulations demonstrate the importance of including the rotating high-pressure turbine blades in the computation, as this results in direct computation of combustion within the first turbine stage, and accurate simulation of the flow in the second combustion stage. The direct computation of hot-streaks through the rotating high-pressure turbine stage leads to improved understanding of the aerodynamic relationships between the primary and secondary combustors and the turbomachinery.
Sequential Syndrome Decoding of Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
The algebraic structure of convolutional codes is reviewed, and sequential syndrome decoding is applied to these codes. These concepts are then used to realize, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum weight error sequence.
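A compact sketch of stack-algorithm sequential decoding for a rate-1/2, constraint-length-3 code over a binary symmetric channel, using the standard bit-wise Fano metric; the generators, channel model, and message are illustrative, and the syndrome formulation of the paper is not reproduced here:

```python
import heapq
import numpy as np

def encode_bit(b, state):
    """Rate-1/2, constraint-length-3 encoder with generators (7, 5) octal."""
    s1, s2 = state
    return (b ^ s1 ^ s2, b ^ s2), (b, s1)

def stack_decode(received, p=0.05):
    """Stack-algorithm sequential decoding with the Fano metric over a BSC."""
    R = 0.5
    m_hit = np.log2(2 * (1 - p)) - R          # bit metric on agreement
    m_miss = np.log2(2 * p) - R               # bit metric on disagreement
    L = len(received) // 2
    # Max-heap via negated metric; entries: (-metric, depth, state, decoded bits)
    stack = [(-0.0, 0, (0, 0), ())]
    while stack:
        neg_m, depth, state, bits = heapq.heappop(stack)
        if depth == L:
            return list(bits)
        r = received[2 * depth:2 * depth + 2]
        for b in (0, 1):
            out, nstate = encode_bit(b, state)
            dm = sum(m_hit if o == ri else m_miss for o, ri in zip(out, r))
            heapq.heappush(stack, (neg_m - dm, depth + 1, nstate, bits + (b,)))

# Hypothetical transmission: encode a short message, then flip two coded bits.
rng = np.random.default_rng(9)
msg = [int(b) for b in rng.integers(0, 2, 12)]
state, coded = (0, 0), []
for b in msg:
    out, state = encode_bit(b, state)
    coded.extend(out)
coded[3] ^= 1; coded[10] ^= 1                  # well-separated channel errors
print("sent:   ", msg)
print("decoded:", stack_decode(coded))         # should typically match
```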
Robustness of the sequential lineup advantage.
Gronlund, Scott D; Carlson, Curt A; Dailey, Sarah B; Goodsell, Charles A
2009-06-01
A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup advantages and 3 significant simultaneous advantages. Both sequential advantages occurred when the good photograph of the guilty suspect or either innocent suspect was in the fifth position in the sequential lineup; all 3 simultaneous advantages occurred when the poorer quality photograph of the guilty suspect or either innocent suspect was in the second position. Adjusting the statistical criterion to control for the multiple tests (.05/24) revealed no significant sequential advantages. Moreover, despite finding more conservative overall choosing for the sequential lineup, no support was found for the proposal that a sequential advantage was due to that conservative criterion shift. Unless lineups with particular characteristics predominate in the real world, there appears to be no strong preference for conducting lineups in either a sequential or a simultaneous manner. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
Delay test generation for synchronous sequential circuits
NASA Astrophysics Data System (ADS)
Devadas, Srinivas
1989-05-01
We address the problem of generating tests for delay faults in non-scan synchronous sequential circuits. Delay test generation for sequential circuits is a considerably more difficult problem than delay testing of combinational circuits and has received much less attention. In this paper, we present a method for generating test sequences to detect delay faults in sequential circuits using the stuck-at fault sequential test generator STALLION. The method is complete in that it will generate a delay test sequence for a targeted fault given sufficient CPU time, if such a sequence exists. We term faults for which no delay test sequence exists, under our test methodology, sequentially delay redundant. We describe means of eliminating sequential delay redundancies in logic circuits. We present a partial-scan methodology for enhancing the testability of difficult-to-test or untestable sequential circuits, wherein a small number of flip-flops are selected and made controllable/observable. The selection process guarantees the elimination of all sequential delay redundancies. We show that an intimate relationship exists between state assignment and delay testability of a sequential machine. We describe a state assignment algorithm for the synthesis of sequential machines with maximal delay fault testability. Preliminary experimental results using the test generation, partial-scan, and synthesis algorithms are presented.
Ganapathy, Kavina; Sowmithra, Sowmithra; Bhonde, Ramesh; Datta, Indrani
2016-07-16
The neuron-glia ratio is of prime importance for maintaining the physiological homeostasis of neuronal and glial cells, and especially crucial for dopaminergic neurons because a reduction in glial density has been reported in postmortem reports of brains affected by Parkinson's disease. We thus aimed at developing an in vitro midbrain culture which would replicate a similar neuron-glia ratio to that in in vivo adult midbrain while containing a similar number of dopaminergic neurons. A sequential culture technique was adopted to achieve this. Neural progenitors (NPs) were generated by the hanging-drop method and propagated as 3D neurospheres followed by the derivation of outgrowth from these neurospheres on a chosen extracellular matrix. The highest proliferation was observed in neurospheres from day in vitro (DIV) 5 through MTT and FACS analysis of Ki67 expression. FACS analysis using annexin/propidium iodide showed an increase in the apoptotic population from DIV 8. DIV 5 neurospheres were therefore selected for deriving the differentiated outgrowth of midbrain on a poly-L-lysine-coated surface. Quantitative RT-PCR showed comparable gene expressions of the mature neuronal marker β-tubulin III, glial marker GFAP and dopaminergic marker tyrosine hydroxylase (TH) as compared to in vivo adult rat midbrain. The FACS analysis showed a similar neuron-glia ratio obtained by the sequential culture in comparison to adult rat midbrain. The yield of β-tubulin III and TH was distinctly higher in the sequential culture in comparison to 2D culture, which showed a higher yield of GFAP immunopositive cells. Functional characterization indicated that both the constitutive and inducible (KCl and ATP) release of dopamine was distinctly higher in the sequential culture than the 2D culture. Thus, the sequential culture technique succeeded in the initial enrichment of NPs in 3D neurospheres, which in turn resulted in an optimal attainment of the neuron-glia ratio on outgrowth culture from these neurospheres. © 2016 S. Karger AG, Basel.
Sequential Superresolution Imaging of Multiple Targets Using a Single Fluorophore
Lidke, Diane S.; Lidke, Keith A.
2015-01-01
Fluorescence superresolution (SR) microscopy, or fluorescence nanoscopy, provides nanometer scale detail of cellular structures and allows for imaging of biological processes at the molecular level. Specific SR imaging methods, such as localization-based imaging, rely on stochastic transitions between on (fluorescent) and off (dark) states of fluorophores. Imaging multiple cellular structures using multi-color imaging is complicated and limited by the differing properties of various organic dyes, including their fluorescent state duty cycle, photons per switching event, number of fluorescent cycles before irreversible photobleaching, and overall sensitivity to buffer conditions. In addition, multiple color imaging requires consideration of multiple optical paths or chromatic aberration that can lead to differential aberrations that are important at the nanometer scale. Here, we report a method for sequential labeling and imaging that allows for SR imaging of multiple targets using a single fluorophore with negligible cross-talk between images. Using brightfield image correlation to register and overlay multiple image acquisitions with ~10 nm overlay precision in the x-y imaging plane, we have exploited the optimal properties of AlexaFluor647 for dSTORM to image four distinct cellular proteins. We also visualize the changes in co-localization of the epidermal growth factor (EGF) receptor and clathrin upon EGF addition that are consistent with clathrin-mediated endocytosis. These results are the first to demonstrate sequential SR (s-SR) imaging using direct stochastic optical reconstruction microscopy (dSTORM), and this method for sequential imaging can be applied to any superresolution technique. PMID:25860558
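The registration step above relies on brightfield image correlation. As a rough sketch of that idea (not the authors' pipeline), the code below estimates the x-y translation between two brightfield frames by phase correlation with NumPy; it recovers only integer-pixel shifts, whereas the reported ~10 nm overlay precision would additionally require subpixel peak interpolation and drift correction. The image size and the toy shift are arbitrary.

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Return the (row, col) shift to apply to `moving` to align it with `ref`,
    estimated by phase correlation; integer-pixel precision only."""
    F_ref = np.fft.fft2(ref)
    F_mov = np.fft.fft2(moving)
    cross_power = F_ref * np.conj(F_mov)
    cross_power /= np.abs(cross_power) + 1e-12      # whiten to sharpen the peak
    corr = np.fft.ifft2(cross_power).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    # shifts beyond half the image size wrap around to negative offsets
    dims = np.array(corr.shape)
    peak[peak > dims / 2] -= dims[peak > dims / 2]
    return peak

# Toy usage: shift a random "brightfield" frame and recover the offset.
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
moving = np.roll(ref, shift=(-3, 5), axis=(0, 1))
print(phase_correlation_shift(ref, moving))          # -> [ 3. -5.]
```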
Grant, Clare F J; Carr, B Veronica; Kotecha, Abhay; van den Born, Erwin; Stuart, David I; Hammond, John A; Charleston, Bryan
2017-05-01
Foot-and-mouth disease virus (FMDV) causes a highly contagious viral disease. Antibodies are pivotal in providing protection against FMDV infection. Serological protection against one FMDV serotype does not confer interserotype protection. However, some historical data have shown that interserotype protection can be induced following sequential FMDV challenge with multiple FMDV serotypes. In this study, we have investigated the kinetics of the FMDV-specific antibody-secreting cell (ASC) response following homologous and heterologous inactivated FMDV vaccination regimes. We have demonstrated that the kinetics of the B cell response are similar for all four FMDV serotypes tested following a homologous FMDV vaccination regime. When a heterologous vaccination regime was used, with the sequential inoculation of three different inactivated FMDV serotypes (serotypes O, A, and Asia1), a B cell response to FMDV SAT1 and serotype C was induced. The studies also revealed that the local lymphoid tissue had detectable FMDV-specific ASCs in the absence of circulating FMDV-specific ASCs, indicating the presence of short-lived ASCs, a hallmark of a T-independent 2 (TI-2) antigenic response to inactivated FMDV capsid. IMPORTANCE We have demonstrated the development of an intraserotype response following a sequential vaccination regime of four different FMDV serotypes. We have found indications of short-lived ASCs in the local lymphoid tissue, further evidence of a TI-2 response to FMDV. Copyright © 2017 American Society for Microbiology.
ProperCAD: A portable object-oriented parallel environment for VLSI CAD
NASA Technical Reports Server (NTRS)
Ramkumar, Balkrishna; Banerjee, Prithviraj
1993-01-01
Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on the machines they were designed for. As a result, algorithms designed to date are dependent on the architecture for which they are developed and do not port easily to other parallel architectures. A new project under way to address this problem is described. A portable object-oriented parallel environment for CAD algorithms (ProperCAD) is being developed. The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (CAD algorithms are being developed on a general-purpose platform for portable parallel programming called CARM, along with a truly object-oriented C++ environment specialized for CAD applications); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, an NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data are also provided for the other applications developed: test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.
Yang, Sejung; Park, Junhee; Lee, Hanuel; Kim, Soohyun; Lee, Byung-Uk; Chung, Kee-Yang; Oh, Byungho
2016-01-01
Photographs of skin wounds carry the most important information during secondary intention healing (SIH); however, there is no standard method for handling and analyzing those images efficiently and conveniently. Our aim was to investigate the sequential changes of SIH depending on the body site using a color patch method. We performed a retrospective review of 30 patients (11 facial and 19 non-facial areas) who underwent SIH for the restoration of skin defects and captured sequential photographs with a color patch specially designed for automatically calculating defect and scar sizes. Using this novel image analysis method with a color patch, skin defects were calculated more accurately (range of error rate: -3.39% to +3.05%). All patients had a smaller scar than the original defect after SIH treatment (rates of decrease: 18.8% to 86.1%), and the facial area showed a significantly higher decrease rate than non-facial areas such as the scalp and extremities (67.05 ± 12.48 vs. 53.29 ± 18.11, P < 0.05). Estimating the time point corresponding to half of the final decrease, all facial wounds improved within two weeks (8.45 ± 3.91 days), whereas non-facial wounds needed 14.33 ± 9.78 days. Based on these sequential changes of skin defects, SIH can be recommended as an alternative restoration method, with more careful dressing during the initial two weeks.
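The automatic size calculation above rests on a color patch of known physical dimensions placed in each photograph. The sketch below (hypothetical pixel counts and patch size, not the authors' software) shows the underlying scale-reference arithmetic: convert segmented pixel counts to mm² via the patch, then compute the decrease rate from defect to scar.

```python
def area_mm2(region_pixels, patch_pixels, patch_area_mm2):
    """Convert a segmented pixel count to mm^2 using a color patch of known size."""
    mm2_per_pixel = patch_area_mm2 / patch_pixels
    return region_pixels * mm2_per_pixel

def decrease_rate(initial_defect_mm2, final_scar_mm2):
    """Percentage reduction from the original defect to the final scar."""
    return 100.0 * (initial_defect_mm2 - final_scar_mm2) / initial_defect_mm2

# Hypothetical example: a 2 x 2 cm patch covering 40,000 px, a defect of 55,000 px,
# and a final scar of 18,000 px in a later, identically scaled photograph.
defect = area_mm2(55_000, 40_000, 400.0)   # 550.0 mm^2
scar = area_mm2(18_000, 40_000, 400.0)     # 180.0 mm^2
print(decrease_rate(defect, scar))         # about 67.3
```

In practice each photograph carries its own patch, so the mm²-per-pixel factor is recomputed per image rather than assumed constant as in this toy example.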
Accumulation of evidence during sequential decision making: the importance of top-down factors.
de Lange, Floris P; Jensen, Ole; Dehaene, Stanislas
2010-01-13
In the last decade, great progress has been made in characterizing the accumulation of neural information during simple unitary perceptual decisions. However, much less is known about how sequentially presented evidence is integrated over time for successful decision making. The aim of this study was to investigate the mechanisms of sequential decision making in humans. In a magnetoencephalography (MEG) study, we showed healthy volunteers sequences of centrally presented arrows. Sequence length varied between one and five arrows, and the accumulated directions of the arrows informed the subject about which hand to use for a button press at the end of the sequence (e.g., LRLRR should result in a right-hand press). Mathematical modeling suggested that nonlinear accumulation was the rational strategy for performing this task in the presence of no or little noise, whereas quasilinear accumulation was optimal in the presence of substantial noise. MEG recordings showed a correlate of evidence integration over parietal and central cortex that was inversely related to the amount of accumulated evidence (i.e., when more evidence was accumulated, neural activity for new stimuli was attenuated). This modulation of activity likely reflects a top-down influence on sensory processing, effectively constraining the influence of sensory information on the decision variable over time. The results indicate that, when making decisions on the basis of sequential information, the human nervous system integrates evidence in a nonlinear manner, using the amount of previously accumulated information to constrain the accumulation of additional evidence.
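As a toy illustration only (not the authors' model or their MEG analysis), the sketch below contrasts linear summation with a simple nonlinear rule in which each new sample is attenuated by the evidence already accumulated, echoing the attenuation described above; the noise level and the `gain` parameter are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(7)

def accumulate(arrows, noise_sd=0.5, gain=0.0):
    """Accumulate noisy evidence from a sequence of arrows (+1 = right, -1 = left).

    gain = 0 gives purely linear summation; gain > 0 attenuates each new sample
    by the magnitude of the evidence already accumulated (a crude nonlinearity)."""
    x = 0.0
    for a in arrows:
        sample = a + rng.normal(0.0, noise_sd)
        x += sample / (1.0 + gain * abs(x))
    return x

def decide(arrows, **kwargs):
    """Return the hand to respond with: 'right' if net evidence > 0, else 'left'."""
    return "right" if accumulate(arrows, **kwargs) > 0 else "left"

# Hypothetical sequence L R L R R: the majority of arrows points right.
sequence = [-1, +1, -1, +1, +1]
print(decide(sequence, noise_sd=0.2, gain=0.0))   # linear accumulation
print(decide(sequence, noise_sd=0.2, gain=1.5))   # nonlinear (attenuated) accumulation
```

With gain = 0 the rule reduces to linear summation; increasing the gain makes later arrows count for less once the running total is large, which is one crude way to realize nonlinear accumulation.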
Scalable Kernel Methods and Algorithms for General Sequence Analysis
ERIC Educational Resources Information Center
Kuksa, Pavel
2011-01-01
Analysis of large-scale sequential data has become an important task in machine learning and pattern recognition, inspired in part by numerous scientific and technological applications such as document and text classification or the analysis of biological sequences. However, current computational methods for sequence comparison still lack…
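The abstract is truncated here, but a standard baseline for kernel-based sequence comparison in this literature is the k-mer spectrum kernel. The sketch below is a plain, unoptimized version for illustration; it does not reflect the scalable or inexact-matching algorithms the dissertation itself develops.

```python
from collections import Counter

def spectrum(seq, k=3):
    """Count all length-k substrings (k-mers) of a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s1, s2, k=3):
    """Inner product of k-mer count vectors: a simple similarity between sequences."""
    f1, f2 = spectrum(s1, k), spectrum(s2, k)
    return sum(count * f2[kmer] for kmer, count in f1.items() if kmer in f2)

print(spectrum_kernel("ACGTACGT", "ACGTTACG", k=3))   # 7 (shared 3-mer weight)
```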
ERIC Educational Resources Information Center
Wang, Chun; Chang, Hua-Hua
2011-01-01
Over the past thirty years, obtaining diagnostic information from examinees' item responses has become an increasingly important feature of educational and psychological testing. The objective can be achieved by sequentially selecting multidimensional items to fit the class of latent traits being assessed, and therefore Multidimensional…
Charting Early Trajectories of Executive Control with the Shape School
ERIC Educational Resources Information Center
Clark, Caron A. C.; Sheffield, Tiffany D.; Chevalier, Nicolas; Nelson, Jennifer Mize; Wiebe, Sandra A.; Espy, Kimberly Andrews
2013-01-01
Despite acknowledgement of the importance of executive control for learning and behavior, there is a dearth of research charting its developmental trajectory as it unfolds against the background of children's sociofamilial milieus. Using a prospective, cohort-sequential design, this study describes growth trajectories for inhibitory control…
The 23-acre Refuse Hideaway Landfill was designed as a "natural attenuation" landfill, and no provision was made to collect and treat contaminated water. Natural biological degradation through sequential reductive dechlorination had been an important mechanism for natural atten...
Exact Tests for the Rasch Model via Sequential Importance Sampling
ERIC Educational Resources Information Center
Chen, Yuguo; Small, Dylan
2005-01-01
Rasch proposed an exact conditional inference approach to testing his model but never implemented it because it involves the calculation of a complicated probability. This paper furthers Rasch's approach by (1) providing an efficient Monte Carlo methodology for accurately approximating the required probability and (2) illustrating the usefulness…
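As a rough sketch of the general idea (not the efficient proposal the paper develops), the code below uses sequential importance sampling to estimate the conditional p-value of a test statistic over 0-1 item-response matrices with the observed row and column sums, the reference set underlying Rasch's exact conditional test. The cell-by-cell proposal, the `statistic` function and the toy response matrix are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

def sis_sample(row_sums, col_sums):
    """Draw one 0-1 matrix cell by cell; return (matrix, log proposal probability),
    or (None, None) when the crude proposal reaches an infeasible configuration."""
    m, n = len(row_sums), len(col_sums)
    A = np.zeros((m, n), dtype=int)
    col_left = np.array(col_sums, dtype=int)
    logq = 0.0
    for i in range(m):
        r = row_sums[i]                    # ones still needed in this row
        for j in range(n):
            cells_left = n - j
            if r == 0:
                p1 = 0.0
            elif r == cells_left:
                p1 = 1.0
            elif col_left[j] <= 0:
                p1 = 0.0
            else:
                p1 = r / cells_left        # crude proposal; the paper's is smarter
            a = int(rng.random() < p1)
            A[i, j] = a
            logq += np.log(p1 if a else 1.0 - p1)
            r -= a
            col_left[j] -= a
        if r != 0:
            return None, None
    if np.any(col_left != 0):
        return None, None
    return A, logq

def sis_pvalue(observed, statistic, n_samples=2000):
    """Self-normalized importance-sampling estimate of
    P(statistic(A) >= statistic(observed)) under the uniform distribution on
    0-1 matrices sharing the observed row and column sums."""
    row_sums, col_sums = observed.sum(axis=1), observed.sum(axis=0)
    t_obs = statistic(observed)
    weights, hits = [], []
    for _ in range(n_samples):
        A, logq = sis_sample(row_sums, col_sums)
        if A is None:
            continue                       # infeasible draw: weight zero
        weights.append(np.exp(-logq))      # target density is uniform (constant)
        hits.append(statistic(A) >= t_obs)
    weights, hits = np.array(weights), np.array(hits)
    if weights.size == 0:
        raise RuntimeError("no feasible samples drawn")
    return float((weights * hits).sum() / weights.sum())

# Placeholder statistic and toy 4-person x 3-item response matrix.
def statistic(A):
    return float((A[:, 0] * A[:, 1]).sum())   # co-occurrence of the first two items

observed = np.array([[1, 1, 0],
                     [1, 0, 1],
                     [0, 1, 1],
                     [1, 1, 1]])
print(sis_pvalue(observed, statistic))
```

Draws that violate the column margins are simply discarded (weight zero), which is wasteful; the paper's contribution is precisely a proposal that avoids such waste and approximates the required conditional probability efficiently.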
Database Creation for Information Processing Methods, Metrics, and Models (DCIPM3)
2009-05-01
who is in Student Government Association (SGA), attends a meeting that addresses the lineup of events to have at the pep rally, with other...documented events had some level of importance to the development of similar sequential events of a notional subject, many dead ends and random events
Student Teachers' Team Teaching during Field Experiences: An Evaluation by Their Mentors
ERIC Educational Resources Information Center
Simons, Mathea; Baeten, Marlies
2016-01-01
Since collaboration within schools gains importance, teacher educators are looking for alternative models of field experience inspired by collaborative learning. Team teaching is such a model. This study explores two team teaching models (parallel and sequential teaching) by investigating the mentors' perspective. Semi-structured interviews were…