Sample records for resampling-based significance testing

  1. Introduction to Permutation and Resampling-Based Hypothesis Tests

    ERIC Educational Resources Information Center

    LaFleur, Bonnie J.; Greevy, Robert A.

    2009-01-01

    A resampling-based method of inference--permutation tests--is often used when distributional assumptions are questionable or unmet. Not only are these methods useful for obvious departures from parametric assumptions (e.g., normality) and small sample sizes, but they are also more robust than their parametric counterparts in the presence of…
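
    As a concrete illustration of the permutation logic summarized above (not part of the record), a minimal Python sketch of a two-sample permutation test for a difference in means might look as follows; the variable names, the difference-in-means statistic, and the simulated data are illustrative assumptions.

      import numpy as np

      def permutation_test(x, y, n_perm=10000, rng=None):
          """Two-sided permutation test for a difference in group means."""
          rng = np.random.default_rng(rng)
          x, y = np.asarray(x, float), np.asarray(y, float)
          observed = x.mean() - y.mean()
          pooled = np.concatenate([x, y])
          count = 0
          for _ in range(n_perm):
              perm = rng.permutation(pooled)          # relabel observations at random
              stat = perm[:x.size].mean() - perm[x.size:].mean()
              if abs(stat) >= abs(observed):
                  count += 1
          return (count + 1) / (n_perm + 1)           # add-one correction keeps the p-value valid

      # Example: skewed data where a normality assumption is doubtful
      rng = np.random.default_rng(0)
      a = rng.exponential(1.0, size=12)
      b = rng.exponential(1.5, size=15)
      print(permutation_test(a, b, rng=1))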

  2. Resampling-based Methods in Single and Multiple Testing for Equality of Covariance/Correlation Matrices

    PubMed Central

    Yang, Yang; DeGruttola, Victor

    2016-01-01

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients. PMID:22740584
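
    A rough Python sketch of the general resampling idea described above (center by group means, standardize by group covariances, and resample the pooled standardized residuals to approximate the null distribution of a Bartlett/Box-type statistic). This is a simplified, non-robust illustration with invented data, not the authors' implementation.

      import numpy as np

      def box_m(groups):
          """Box's M (Bartlett-type) statistic for equality of covariance matrices."""
          k = len(groups)
          n = np.array([g.shape[0] for g in groups])
          covs = [np.cov(g, rowvar=False) for g in groups]
          s_pooled = sum((ni - 1) * c for ni, c in zip(n, covs)) / (n.sum() - k)
          m = (n.sum() - k) * np.log(np.linalg.det(s_pooled))
          m -= sum((ni - 1) * np.log(np.linalg.det(c)) for ni, c in zip(n, covs))
          return m

      def resampling_p_value(groups, n_boot=2000, seed=0):
          """Approximate the null distribution by resampling standardized residuals."""
          rng = np.random.default_rng(seed)
          obs = box_m(groups)
          std_resid = []
          for g in groups:
              centered = g - g.mean(axis=0)
              l = np.linalg.cholesky(np.cov(g, rowvar=False))
              std_resid.append(np.linalg.solve(l, centered.T).T)   # whiten within each group
          pooled = np.vstack(std_resid)
          sizes = [g.shape[0] for g in groups]
          count = 0
          for _ in range(n_boot):
              boot = [pooled[rng.integers(0, len(pooled), ni)] for ni in sizes]
              if box_m(boot) >= obs:
                  count += 1
          return (count + 1) / (n_boot + 1)

      rng = np.random.default_rng(1)
      g1 = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=40)
      g2 = rng.multivariate_normal([0, 0], [[1.5, 0.0], [0.0, 0.7]], size=35)
      print(resampling_p_value([g1, g2]))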

  3. Resampling-based methods in single and multiple testing for equality of covariance/correlation matrices.

    PubMed

    Yang, Yang; DeGruttola, Victor

    2012-06-22

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.

  4. Assessing differential expression in two-color microarrays: a resampling-based empirical Bayes approach.

    PubMed

    Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D

    2013-01-01

    Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the Significance Analysis of Microarrays method's fold change criteria are problematic, and can critically alter the conclusion of a study, as a result of compositional changes of the control data set in the analysis. We propose a novel approach, combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but it is also impervious to the fold change threshold, since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rate control between the approaches are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offer higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially

  5. Testing the significance of a correlation with nonnormal data: comparison of Pearson, Spearman, transformation, and resampling approaches.

    PubMed

    Bishara, Anthony J; Hittner, James B

    2012-09-01

    It is well known that when data are nonnormally distributed, a test of the significance of Pearson's r may inflate Type I error rates and reduce power. Statistics textbooks and the simulation literature provide several alternatives to Pearson's correlation. However, the relative performance of these alternatives has been unclear. Two simulation studies were conducted to compare 12 methods, including Pearson, Spearman's rank-order, transformation, and resampling approaches. With most sample sizes (n ≥ 20), Type I and Type II error rates were minimized by transforming the data to a normal shape prior to assessing the Pearson correlation. Among transformation approaches, a general purpose rank-based inverse normal transformation (i.e., transformation to rankit scores) was most beneficial. However, when samples were both small (n ≤ 10) and extremely nonnormal, the permutation test often outperformed other alternatives, including various bootstrap tests.
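
    A small Python sketch of the rank-based inverse normal (rankit) transformation favored above, followed by an ordinary Pearson test. The (rank - 0.5)/n rankit convention and the simulated lognormal data are assumptions made for illustration.

      import numpy as np
      from scipy import stats

      def rankit(x):
          """Rank-based inverse normal transformation (rankit scores)."""
          ranks = stats.rankdata(x)                 # average ranks handle ties
          return stats.norm.ppf((ranks - 0.5) / len(x))

      rng = np.random.default_rng(42)
      x = rng.lognormal(size=30)                    # skewed, non-normal data
      y = 0.4 * x + rng.lognormal(size=30)

      r_raw, p_raw = stats.pearsonr(x, y)
      r_rin, p_rin = stats.pearsonr(rankit(x), rankit(y))
      print(f"Pearson on raw data:      r={r_raw:.3f}, p={p_raw:.4f}")
      print(f"Pearson on rankit scores: r={r_rin:.3f}, p={p_rin:.4f}")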

  6. Assessment of resampling methods for causality testing: A note on the US inflation behavior

    PubMed Central

    Kyrtsou, Catherine; Kugiumtzis, Dimitris; Diks, Cees

    2017-01-01

    Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As appropriate test statistic for this setting, the partial transfer entropy (PTE), an information and model-free measure, is used. Two resampling techniques, time-shifted surrogates and the stationary bootstrap, are combined with three independence settings (giving a total of six resampling methods), all approximating the null hypothesis of no Granger causality. In these three settings, the level of dependence is changed, while the conditioning variables remain intact. The empirical null distribution of the PTE, as the surrogate and bootstrapped time series become more independent, is examined along with the size and power of the respective tests. Additionally, we consider a seventh resampling method by contemporaneously resampling the driving and the response time series using the stationary bootstrap. Although this case does not comply with the no causality hypothesis, one can obtain an accurate sampling distribution for the mean of the test statistic since its value is zero under H0. Results indicate that as the resampling setting gets more independent, the test becomes more conservative. Finally, we conclude with a real application. More specifically, we investigate the causal links among the growth rates for the US CPI, money supply and crude oil. Based on the PTE and the seven resampling methods, we consistently find that changes in crude oil cause inflation conditioning on money supply in the post-1986 period. However this relationship cannot be explained on the basis of traditional cost-push mechanisms. PMID:28708870
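
    The partial transfer entropy itself is not reproduced here; the following Python sketch only illustrates the time-shifted surrogate scheme, with a simple lagged cross-correlation standing in for the causality statistic. The stand-in statistic, shift range, and simulated series are assumptions.

      import numpy as np

      def lagged_corr(driver, response, lag=1):
          """Stand-in coupling statistic: lag-1 cross-correlation (not the PTE)."""
          return np.corrcoef(driver[:-lag], response[lag:])[0, 1]

      def time_shift_surrogate_test(driver, response, n_surr=1000, lag=1, seed=0):
          """Time-shifted surrogates: circularly shift the driver to break the coupling."""
          rng = np.random.default_rng(seed)
          obs = lagged_corr(driver, response, lag)
          n = len(driver)
          surr = np.empty(n_surr)
          for i in range(n_surr):
              shift = rng.integers(lag + 1, n - lag)     # avoid trivially small shifts
              surr[i] = lagged_corr(np.roll(driver, shift), response, lag)
          return (np.sum(np.abs(surr) >= abs(obs)) + 1) / (n_surr + 1)

      rng = np.random.default_rng(1)
      x = rng.standard_normal(500)
      y = 0.5 * np.roll(x, 1) + 0.5 * rng.standard_normal(500)   # x drives y at lag 1
      print(time_shift_surrogate_test(x, y))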

  7. Assessment of resampling methods for causality testing: A note on the US inflation behavior.

    PubMed

    Papana, Angeliki; Kyrtsou, Catherine; Kugiumtzis, Dimitris; Diks, Cees

    2017-01-01

    Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As appropriate test statistic for this setting, the partial transfer entropy (PTE), an information and model-free measure, is used. Two resampling techniques, time-shifted surrogates and the stationary bootstrap, are combined with three independence settings (giving a total of six resampling methods), all approximating the null hypothesis of no Granger causality. In these three settings, the level of dependence is changed, while the conditioning variables remain intact. The empirical null distribution of the PTE, as the surrogate and bootstrapped time series become more independent, is examined along with the size and power of the respective tests. Additionally, we consider a seventh resampling method by contemporaneously resampling the driving and the response time series using the stationary bootstrap. Although this case does not comply with the no causality hypothesis, one can obtain an accurate sampling distribution for the mean of the test statistic since its value is zero under H0. Results indicate that as the resampling setting gets more independent, the test becomes more conservative. Finally, we conclude with a real application. More specifically, we investigate the causal links among the growth rates for the US CPI, money supply and crude oil. Based on the PTE and the seven resampling methods, we consistently find that changes in crude oil cause inflation conditioning on money supply in the post-1986 period. However this relationship cannot be explained on the basis of traditional cost-push mechanisms.

  8. Accelerated spike resampling for accurate multiple testing controls.

    PubMed

    Harrison, Matthew T

    2013-02-01

    Controlling for multiple hypothesis tests using standard spike resampling techniques often requires prohibitive amounts of computation. Importance sampling techniques can be used to accelerate the computation. The general theory is presented, along with specific examples for testing differences across conditions using permutation tests and for testing pairwise synchrony and precise lagged-correlation between many simultaneously recorded spike trains using interval jitter.
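
    A minimal Python sketch of the interval-jitter idea mentioned above: each spike is re-placed uniformly within its jitter window and a synchrony count is recomputed to form a null distribution. The window length, coincidence tolerance, and synthetic spike trains are illustrative assumptions, and the importance-sampling acceleration from the paper is not shown.

      import numpy as np

      def interval_jitter(spikes, delta, rng):
          """Re-place each spike uniformly within its own window of length delta."""
          bins = np.floor(spikes / delta)
          return np.sort(bins * delta + rng.uniform(0.0, delta, size=spikes.size))

      def sync_count(train_a, train_b, tol=0.002):
          """Spikes in train_a with a partner in sorted train_b within +/- tol seconds."""
          idx = np.clip(np.searchsorted(train_b, train_a), 1, len(train_b) - 1)
          nearest = np.minimum(np.abs(train_a - train_b[idx - 1]),
                               np.abs(train_a - train_b[idx]))
          return int(np.sum(nearest <= tol))

      rng = np.random.default_rng(3)
      a = np.sort(rng.uniform(0, 10, 200))                     # 200 spikes over 10 s
      b = np.sort(np.concatenate([a[:50] + 0.001,              # partially synchronous train
                                  rng.uniform(0, 10, 150)]))
      obs = sync_count(a, b)
      null = [sync_count(interval_jitter(a, 0.02, rng), b) for _ in range(1000)]
      p = (np.sum(np.array(null) >= obs) + 1) / (1000 + 1)
      print(obs, p)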

  9. Forensic identification of resampling operators: A semi non-intrusive approach.

    PubMed

    Cao, Gang; Zhao, Yao; Ni, Rongrong

    2012-03-10

    Recently, several new resampling operators have been proposed that successfully invalidate existing resampling detectors. However, the reliability of such anti-forensic techniques is unknown and needs to be investigated. In this paper, we focus on the forensic identification of digital image resampling operators, including the traditional type and the anti-forensic type, which hides the trace of traditional resampling. Various resampling algorithms, including geometric distortion (GD)-based, dual-path-based and postprocessing-based approaches, are investigated. The identification is achieved in a semi non-intrusive manner, assuming the resampling software can be accessed. Given a monotone input signal pattern, the polarity aberration of the first derivative of the GD-based resampled signal is analyzed theoretically and measured by an effective feature metric. Dual-path-based and postprocessing-based resampling can also be identified by feeding proper test patterns. Experimental results on various parameter settings demonstrate the effectiveness of the proposed approach. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  10. Assessment of Person Fit Using Resampling-Based Approaches

    ERIC Educational Resources Information Center

    Sinharay, Sandip

    2016-01-01

    De la Torre and Deng suggested a resampling-based approach for person-fit assessment (PFA). The approach involves the use of the [math equation unavailable] statistic, a corrected expected a posteriori estimate of the examinee ability, and the Monte Carlo (MC) resampling method. The Type I error rate of the approach was closer to the nominal level…

  11. Testing for Granger Causality in the Frequency Domain: A Phase Resampling Method.

    PubMed

    Liu, Siwei; Molenaar, Peter

    2016-01-01

    This article introduces phase resampling, an existing but rarely used surrogate data method for making statistical inferences of Granger causality in frequency domain time series analysis. Granger causality testing is essential for establishing causal relations among variables in multivariate dynamic processes. However, testing for Granger causality in the frequency domain is challenging due to the nonlinear relation between frequency domain measures (e.g., partial directed coherence, generalized partial directed coherence) and time domain data. Through a simulation study, we demonstrate that phase resampling is a general and robust method for making statistical inferences even with short time series. With Gaussian data, phase resampling yields satisfactory type I and type II error rates in all but one condition we examine: when a small effect size is combined with an insufficient number of data points. Violations of normality lead to slightly higher error rates but are mostly within acceptable ranges. We illustrate the utility of phase resampling with two empirical examples involving multivariate electroencephalography (EEG) and skin conductance data.
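
    A short Python sketch of the surrogate-generation step behind phase resampling: a surrogate series with the same amplitude spectrum as the original but randomized phases. The frequency-domain causality measures themselves (e.g., partial directed coherence) are not implemented here, and the toy series is an assumption.

      import numpy as np

      def phase_surrogate(x, rng):
          """Surrogate with the same amplitude spectrum as x but randomized phases."""
          n = len(x)
          spec = np.fft.rfft(x)
          phases = rng.uniform(0, 2 * np.pi, size=spec.size)
          phases[0] = 0.0                       # keep the DC component real
          if n % 2 == 0:
              phases[-1] = 0.0                  # keep the Nyquist component real
          return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=n)

      rng = np.random.default_rng(7)
      x = np.cumsum(rng.standard_normal(512))   # autocorrelated toy series
      s = phase_surrogate(x, rng)
      # The surrogate preserves the power spectrum (and hence the autocorrelation)
      print(np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(s))))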

  12. Resampling and Distribution of the Product Methods for Testing Indirect Effects in Complex Models

    ERIC Educational Resources Information Center

    Williams, Jason; MacKinnon, David P.

    2008-01-01

    Recent advances in testing mediation have found that certain resampling methods and tests based on the mathematical distribution of the product of 2 normal random variables substantially outperform the traditional "z" test. However, these studies have primarily focused only on models with a single mediator and 2 component paths. To address this limitation, a…
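
    As a small illustration of the resampling approach to indirect effects (not taken from the article), the following Python sketch bootstraps the product-of-coefficients estimate a*b in a single-mediator model; the data-generating setup and sample size are assumed.

      import numpy as np

      def indirect_effect(x, m, y):
          """Product of coefficients: a from m ~ x, b from y ~ m + x."""
          a = np.polyfit(x, m, 1)[0]
          X = np.column_stack([np.ones_like(x), m, x])
          b = np.linalg.lstsq(X, y, rcond=None)[0][1]
          return a * b

      rng = np.random.default_rng(0)
      n = 100
      x = rng.standard_normal(n)
      m = 0.5 * x + rng.standard_normal(n)              # a path
      y = 0.4 * m + 0.2 * x + rng.standard_normal(n)    # b path plus direct effect

      est = indirect_effect(x, m, y)
      boot = np.empty(5000)
      for i in range(5000):
          idx = rng.integers(0, n, n)                   # resample cases with replacement
          boot[i] = indirect_effect(x[idx], m[idx], y[idx])
      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"indirect effect = {est:.3f}, 95% percentile bootstrap CI = ({lo:.3f}, {hi:.3f})")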

  13. Exact and Monte carlo resampling procedures for the Wilcoxon-Mann-Whitney and Kruskal-Wallis tests.

    PubMed

    Berry, K J; Mielke, P W

    2000-12-01

    Exact and Monte Carlo resampling FORTRAN programs are described for the Wilcoxon-Mann-Whitney rank sum test and the Kruskal-Wallis one-way analysis of variance for ranks test. The program algorithms compensate for tied values and do not depend on asymptotic approximations for probability values, unlike most algorithms contained in PC-based statistical software packages.
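
    The FORTRAN programs themselves are not reproduced; the following Python sketch shows the same Monte Carlo resampling idea for the rank-sum statistic, with midranks compensating for ties. The example data and the two-sided decision rule are assumptions.

      import numpy as np
      from scipy.stats import rankdata

      def mc_rank_sum_test(x, y, n_resamples=20000, seed=0):
          """Monte Carlo resampling version of the Wilcoxon-Mann-Whitney test (ties allowed)."""
          rng = np.random.default_rng(seed)
          data = np.concatenate([x, y])
          ranks = rankdata(data)                    # midranks for tied values
          obs = ranks[:len(x)].sum()
          expected = len(x) * (len(data) + 1) / 2.0
          count = 0
          for _ in range(n_resamples):
              perm = rng.permutation(ranks)
              if abs(perm[:len(x)].sum() - expected) >= abs(obs - expected):
                  count += 1
          return (count + 1) / (n_resamples + 1)

      x = np.array([1, 1, 2, 3, 3, 3, 5], float)    # heavily tied data
      y = np.array([2, 2, 3, 4, 4, 5, 6, 6], float)
      print(mc_rank_sum_test(x, y))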

  14. Testing the Difference of Correlated Agreement Coefficients for Statistical Significance

    ERIC Educational Resources Information Center

    Gwet, Kilem L.

    2016-01-01

    This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…

  15. Bi-resampled data study

    NASA Technical Reports Server (NTRS)

    Benner, R.; Young, W.

    1977-01-01

    The results of an experimental study conducted to determine the geometric and radiometric effects of double resampling (bi-resampling) performed on image data in the process of performing map projection transformations are reported.

  16. Resampling-Based Empirical Bayes Multiple Testing Procedures for Controlling Generalized Tail Probability and Expected Value Error Rates: Focus on the False Discovery Rate and Simulation Study

    PubMed Central

    Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.

    2014-01-01

    This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
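
    The resampling-based empirical Bayes procedures themselves are not sketched here; for orientation, a minimal Python implementation of the classical Benjamini and Hochberg (1995) linear step-up procedure used above as a comparator is shown below (the example p-values are invented).

      import numpy as np

      def benjamini_hochberg(p_values, q=0.05):
          """Classical linear step-up procedure controlling the FDR at level q."""
          p = np.asarray(p_values, float)
          m = p.size
          order = np.argsort(p)
          thresholds = q * np.arange(1, m + 1) / m        # (i/m) * q for the sorted p-values
          below = p[order] <= thresholds
          rejected = np.zeros(m, dtype=bool)
          if below.any():
              k = np.max(np.nonzero(below)[0])            # largest i with p_(i) <= (i/m) * q
              rejected[order[:k + 1]] = True
          return rejected

      pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
      print(benjamini_hochberg(pvals, q=0.05))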

  17. On-line crack prognosis in attachment lug using Lamb wave-deterministic resampling particle filter-based method

    NASA Astrophysics Data System (ADS)

    Yuan, Shenfang; Chen, Jian; Yang, Weibo; Qiu, Lei

    2017-08-01

    Fatigue crack growth prognosis is important for prolonging service time, improving safety, and reducing maintenance cost in many safety-critical systems, such as in aircraft, wind turbines, bridges, and nuclear plants. Combining fatigue crack growth models with the particle filter (PF) method has proved promising for dealing with the uncertainties during fatigue crack growth and reaching a more accurate prognosis. However, research on prognosis methods integrating on-line crack monitoring with the PF method is still lacking, as are experimental verifications. Besides, the PF methods adopted so far are almost all sequential importance resampling-based PFs, which usually encounter sample impoverishment problems and hence perform poorly. To solve these problems, in this paper, the piezoelectric transducers (PZTs)-based active Lamb wave method is adopted for on-line crack monitoring. The deterministic resampling PF (DRPF) is proposed to be used in fatigue crack growth prognosis, which can overcome the sample impoverishment problem. The proposed method is verified through fatigue tests of attachment lugs, which are an important type of joint component in aerospace systems.

  18. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    PubMed

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists have questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists have found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has a small sample size limitation. We used a pooled method in the nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling method to the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability under all conditions except the Cauchy and extremely variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than other alternatives. The nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling method for comparing paired or unpaired means and for validating one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
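
    A compact Python sketch of one common pooled-resampling bootstrap test for two unpaired means, in the spirit of the method described above; the exact pooling and studentization details of the published algorithm may differ, and the small example data are invented.

      import numpy as np

      def t_stat(x, y):
          """Welch-type t statistic for two independent samples."""
          return (x.mean() - y.mean()) / np.sqrt(x.var(ddof=1) / x.size + y.var(ddof=1) / y.size)

      def pooled_bootstrap_test(x, y, n_boot=10000, seed=0):
          """Bootstrap from the pooled sample to approximate the null distribution."""
          rng = np.random.default_rng(seed)
          x, y = np.asarray(x, float), np.asarray(y, float)
          obs = t_stat(x, y)
          pooled = np.concatenate([x, y])
          count = 0
          for _ in range(n_boot):
              bx = rng.choice(pooled, size=x.size, replace=True)
              by = rng.choice(pooled, size=y.size, replace=True)
              if abs(t_stat(bx, by)) >= abs(obs):
                  count += 1
          return (count + 1) / (n_boot + 1)

      a = [4.2, 5.1, 3.9, 6.0, 4.8, 5.5]             # small-sample example
      b = [6.3, 7.1, 5.9, 6.8, 7.4]
      print(pooled_bootstrap_test(a, b))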

  19. Maximum a posteriori resampling of noisy, spatially correlated data

    NASA Astrophysics Data System (ADS)

    Goff, John A.; Jenkins, Chris; Calder, Brian

    2006-08-01

    In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application. We present here an alternative to filtering: a newly developed method for correcting noise in data by finding the "best" value given available information. The motivating rationale is that data points that are close to each other in space cannot differ by "too much," where "too much" is governed by the field covariance. Data with large uncertainties will frequently violate this condition and therefore ought to be corrected, or "resampled." Our solution for resampling is determined by the maximum of the a posteriori density function defined by the intersection of (1) the data error probability density function (pdf) and (2) the conditional pdf, determined by the geostatistical kriging algorithm applied to proximal data values. A maximum a posteriori solution can be computed sequentially going through all the data, but the solution depends on the order in which the data are examined. We approximate the global a posteriori solution by randomizing this order and taking the average. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum a posteriori resampling algorithm. The method is also applied to three marine geology/geophysics data examples, demonstrating the viability of the method for diverse applications: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) side-scan backscatter data from the Martha's Vineyard Coastal Observatory which are, as
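
    When both densities are Gaussian, the maximum of their product is the precision-weighted mean of the data value and the kriging prediction; the short Python sketch below illustrates that combination with made-up values. The sequential, order-randomized aspect of the authors' algorithm is not shown.

      import numpy as np

      def map_resample(data_value, data_sigma, krig_mean, krig_sigma):
          """MAP of the product of two Gaussian pdfs: the precision-weighted mean."""
          w_data = 1.0 / data_sigma**2
          w_krig = 1.0 / krig_sigma**2
          value = (w_data * data_value + w_krig * krig_mean) / (w_data + w_krig)
          sigma = np.sqrt(1.0 / (w_data + w_krig))    # spread of the combined pdf
          return value, sigma

      # A noisy sounding (large uncertainty) is pulled toward the kriging prediction
      # obtained from proximal, better-constrained data (values are illustrative).
      print(map_resample(data_value=-42.0, data_sigma=3.0, krig_mean=-47.5, krig_sigma=1.0))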

  20. Adaptive Resampling Particle Filters for GPS Carrier-Phase Navigation and Collision Avoidance System

    NASA Astrophysics Data System (ADS)

    Hwang, Soon Sik

    This dissertation addresses three problems: 1) an adaptive resampling technique (ART) for Particle Filters, 2) precise relative positioning using Global Positioning System (GPS) Carrier-Phase (CP) measurements applied to the nonlinear integer resolution problem for GPS CP navigation using Particle Filters, and 3) a collision detection system based on GPS CP broadcasts. First, Monte Carlo filters, called Particle Filters (PF), are widely used where the system is non-linear and non-Gaussian. In real-time applications, their estimation accuracies and efficiencies are significantly affected by the number of particles and the scheduling of relocating weights and samples, the so-called resampling step. In this dissertation, the appropriate number of particles is estimated adaptively such that the errors of the sample mean and variance stay within bounds. These bounds are given by the confidence interval of a normal probability distribution for a multi-variate state. The two required sample sizes that maintain the mean and variance errors within the bounds are derived. The time of resampling is determined when the required sample number for the variance error crosses the required sample number for the mean error. Second, the PF using GPS CP measurements with adaptive resampling is applied to precise relative navigation between two GPS antennas. In order to make use of CP measurements for navigation, the unknown number of cycles between GPS antennas, the so-called integer ambiguity, should be resolved. The PF is applied to this integer ambiguity resolution problem, where the relative navigation state estimation involves nonlinear observations and a nonlinear dynamics equation. Using the PF, the probability density function of the states is estimated by sampling from the position and velocity space, and the integer ambiguities are resolved without using the usual hypothesis tests to search for the integer ambiguity. The ART manages the number of position samples and the frequency of the

  1. An optical systems analysis approach to image resampling

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G.

    1997-01-01

    All types of image registration require some type of resampling, either during the registration or as a final step in the registration process. Thus the image(s) must be regridded into a spatially uniform, or angularly uniform, coordinate system with some pre-defined resolution. Frequently the final resolution is not the resolution at which the data were observed. The registration algorithm designer and end product user are presented with a multitude of possible resampling methods, each of which modifies the spatial frequency content of the data in some way. The purpose of this paper is threefold: (1) to show how an imaging system modifies the scene from an end to end optical systems analysis approach, (2) to develop a generalized resampling model, and (3) to empirically apply the model to simulated radiometric scene data and tabulate the results. A Hanning windowed sinc interpolator method will be developed based upon the optical characterization of the system. It will be discussed in terms of the effects and limitations of sampling, aliasing, spectral leakage, and computational complexity. Simulated radiometric scene data will be used to demonstrate each of the algorithms. A high resolution scene will be "grown" using a fractal growth algorithm based on mid-point recursion techniques. The resulting scene data will be convolved with a point spread function representing the optical response. The resultant scene will be convolved with the detection system's response and subsampled to the desired resolution. The resultant data product will be subsequently resampled to the correct grid using the Hanning windowed sinc interpolator, and the results and errors tabulated and discussed.
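
    A small Python sketch of a Hanning-windowed sinc interpolator of the kind discussed above, applied to a 1-D signal for simplicity; the kernel half-width, the test sinusoid, and the shifted target grid are assumptions, and this is not the paper's implementation.

      import numpy as np

      def hanning_sinc_resample(samples, new_positions, half_width=8):
          """Resample a uniformly sampled 1-D signal with a Hanning-windowed sinc kernel."""
          samples = np.asarray(samples, float)
          out = np.empty(len(new_positions))
          for k, t in enumerate(new_positions):
              lo = max(int(np.floor(t)) - half_width + 1, 0)
              hi = min(int(np.floor(t)) + half_width + 1, len(samples))
              n = np.arange(lo, hi)
              x = t - n                                              # offsets from the target position
              window = 0.5 * (1.0 + np.cos(np.pi * x / half_width))  # Hanning taper, zero at the edges
              kernel = np.sinc(x) * window
              out[k] = np.dot(kernel, samples[n])
          return out

      t = np.arange(64)
      signal = np.sin(2 * np.pi * t / 16.0)
      shifted = hanning_sinc_resample(signal, t[8:-8] + 0.25)        # resample on a shifted grid
      print(np.max(np.abs(shifted - np.sin(2 * np.pi * (t[8:-8] + 0.25) / 16.0))))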

  2. Combining Nordtest method and bootstrap resampling for measurement uncertainty estimation of hematology analytes in a medical laboratory.

    PubMed

    Cui, Ming; Xu, Lili; Wang, Huimin; Ju, Shaoqing; Xu, Shuizhu; Jing, Rongrong

    2017-12-01

    Measurement uncertainty (MU) is a metrological concept, which can be used for objectively estimating the quality of test results in medical laboratories. The Nordtest guide recommends an approach that uses both internal quality control (IQC) and external quality assessment (EQA) data to evaluate the MU. Bootstrap resampling is employed to simulate the unknown distribution from an existing small sample of data using mathematical statistics, with the aim of transforming the small sample into a large sample. However, there have been no reports of the utilization of this method in medical laboratories. Thus, this study applied the Nordtest guide approach based on bootstrap resampling for estimating the MU. We estimated the MU for the white blood cell (WBC) count, red blood cell (RBC) count, hemoglobin (Hb), and platelets (Plt). First, we used 6 months of IQC data and 12 months of EQA data to calculate the MU according to the Nordtest method. Second, we combined the Nordtest method and bootstrap resampling with the quality control data and calculated the MU using MATLAB software. We then compared the MU results obtained using the two approaches. The expanded uncertainty results determined for WBC, RBC, Hb, and Plt using the bootstrap resampling method were 4.39%, 2.43%, 3.04%, and 5.92%, respectively, and 4.38%, 2.42%, 3.02%, and 6.00% with the existing quality control data (U [k=2]). For WBC, RBC, Hb, and Plt, the differences between the results obtained using the two methods were lower than 1.33%. The expanded uncertainty values were all less than the target uncertainties. The bootstrap resampling method allows the statistical analysis of the MU. Combining the Nordtest method and bootstrap resampling is considered a suitable alternative method for estimating the MU. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
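
    A simplified Python sketch of the Nordtest-style combination (within-laboratory reproducibility from IQC combined with a bias component from EQA), with the IQC component obtained by bootstrap resampling. The data, the k = 2 coverage factor, and the component formulas as written here are illustrative assumptions, not the authors' MATLAB code.

      import numpy as np

      rng = np.random.default_rng(0)

      iqc = rng.normal(6.2, 0.12, size=180)                      # daily IQC results (e.g., WBC, 10^9/L)
      eqa_bias_pct = np.array([1.1, -0.6, 0.9, -1.4, 0.5, 1.0])  # % deviations from EQA targets
      u_cref_pct = 0.5                                           # stated uncertainty of EQA targets (%)

      # Within-laboratory reproducibility (relative, %) via bootstrap of the IQC results
      boot_cv = np.empty(2000)
      for i in range(2000):
          sample = rng.choice(iqc, size=iqc.size, replace=True)
          boot_cv[i] = 100.0 * sample.std(ddof=1) / sample.mean()
      u_rw = boot_cv.mean()

      # Bias component from EQA: RMS of the biases combined with the target uncertainty
      u_bias = np.sqrt(np.mean(eqa_bias_pct**2) + u_cref_pct**2)

      u_combined = np.sqrt(u_rw**2 + u_bias**2)
      print(f"expanded uncertainty U (k=2): {2 * u_combined:.2f}%")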

  3. Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping.

    PubMed

    Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca

    2015-08-12

    Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled-images (RS-images) created from real UAV-images (UAV-images; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based-image-analysis (OBIA) implemented for the early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high spatial resolution UAV-images at an altitude of 30 m, and that the RS-image data at altitudes of 60 and 100 m were able to provide accurate weed cover and herbicide application maps compared with UAV-images from real flights.

  4. Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping

    PubMed Central

    Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca

    2015-01-01

    Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled-images (RS-images) created from real UAV-images (UAV-images; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based-image-analysis (OBIA) implemented for the early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high spatial resolution UAV-images at an altitude of 30 m, and that the RS-image data at altitudes of 60 and 100 m were able to provide accurate weed cover and herbicide application maps compared with UAV-images from real flights. PMID:26274960

  5. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.

    PubMed

    de Nijs, Robin

    2015-07-21

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. Redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving of the half count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was not affected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice, when simulating half-count (or less) images from full-count images. It simulates correctly the statistical properties, also in the case of rounding off of the images.
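
    A minimal Python sketch of the counting idea behind Poisson resampling of already acquired images: binomially thinning each pixel of a Poisson-distributed full-count image with p = 0.5 yields an exactly Poisson half-count image. The image here is simulated, and whether this matches White and Lawson's exact implementation is not claimed.

      import numpy as np

      rng = np.random.default_rng(0)

      lam = 40.0                                    # mean counts per pixel in the full image
      full = rng.poisson(lam, size=(128, 128))

      # Each recorded count is kept independently with probability 1/2, so a
      # Poisson(lam) pixel becomes Poisson(lam/2) exactly (binomial thinning).
      half = rng.binomial(full, 0.5)

      print(full.mean(), half.mean())               # approximately lam and lam/2
      print(full.var(), half.var())                 # Poisson: the variance tracks the mean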

  6. Image re-sampling detection through a novel interpolation kernel.

    PubMed

    Hilal, Alaa

    2018-06-01

    Image re-sampling involved in re-size and rotation transformations is an essential building block in a typical digital image alteration. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. Then, we demonstrate its capacity to imitate the behavior of the interpolation kernels most frequently used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process involves minimizing an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. The conditional resampling model STARS: weaknesses of the modeling concept and development

    NASA Astrophysics Data System (ADS)

    Menz, Christoph

    2016-04-01

    The Statistical Analogue Resampling Scheme (STARS) is based on a modeling concept of Werner and Gerstengarbe (1997). The model uses a conditional resampling technique to create a simulation time series from daily observations. Unlike other time series generators (such as stochastic weather generators), STARS only needs a linear regression specification of a single variable as the target condition for the resampling. Since its first implementation, the algorithm has been further extended in order to allow for a spatially distributed trend signal and to preserve the seasonal cycle and the autocorrelation of the observation time series (Orlovsky, 2007; Orlovsky et al., 2008). This evolved version was successfully used in several climate impact studies. However, a detailed evaluation of the simulations revealed two fundamental weaknesses of the utilized resampling technique. 1. The restriction of the resampling condition to a single individual variable can lead to a misinterpretation of the change signal of other variables when the model is applied to a multivariate time series (F. Wechsung and M. Wechsung, 2014). As one example, the short-term correlations between precipitation and temperature (cooling of the near-surface air layer after a rainfall event) can be misinterpreted as a climatic change signal in the simulation series. 2. The model restricts the linear regression specification to the annual mean time series, precluding the specification of seasonally varying trends. To overcome these fundamental weaknesses, the whole algorithm was redeveloped. The poster discusses the main weaknesses of the earlier model implementation and the methods applied to overcome these in the new version. Based on the new model, idealized simulations were conducted to illustrate the enhancements.

  8. Maximum likelihood resampling of noisy, spatially correlated data

    NASA Astrophysics Data System (ADS)

    Goff, J.; Jenkins, C.

    2005-12-01

    In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application, which runs the risk of erasing high variability components of the field in addition to the noise components. We present here an alternative to filtering: a newly developed methodology for correcting noise in data by finding the "best" value given the data value, its uncertainty, and the data values and uncertainties at proximal locations. The motivating rationale is that data points that are close to each other in space cannot differ by "too much", where how much is "too much" is governed by the field correlation properties. Data with large uncertainties will frequently violate this condition, and in such cases need to be corrected, or "resampled." The best solution for resampling is determined by the maximum of the likelihood function defined by the intersection of two probability density functions (pdf): (1) the data pdf, with mean and variance determined by the data value and square uncertainty, respectively, and (2) the geostatistical pdf, whose mean and variance are determined by the kriging algorithm applied to proximal data values. A Monte Carlo sampling of the data probability space eliminates non-uniqueness, and weights the solution toward data values with lower uncertainties. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum likelihood resampling algorithm. The method is also applied to three marine geology/geophysics data examples: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is combination of both analytic (low uncertainty

  9. Anisotropic scene geometry resampling with occlusion filling for 3DTV applications

    NASA Astrophysics Data System (ADS)

    Kim, Jangheon; Sikora, Thomas

    2006-02-01

    Image and video-based rendering technologies are receiving growing attention due to their photo-realistic rendering capability at free viewpoints. However, two major limitations are ghosting and blurring due to their sampling-based mechanism. The scene geometry, which supports the selection of accurate sampling positions, is obtained using a global method (i.e., an approximate depth plane) or a local method (i.e., disparity estimation). This paper focuses on the local method since it can yield more accurate rendering quality without a large number of cameras. The local scene geometry has two difficulties, namely the geometrical density and the uncovered areas including hidden information. These are serious drawbacks when reconstructing an arbitrary viewpoint without aliasing artifacts. To solve these problems, we propose an anisotropic diffusive resampling method based on tensor theory. Isotropic low-pass filtering accomplishes anti-aliasing in the scene geometry, while anisotropic diffusion prevents the filtering from blurring the visual structures. Apertures in coarse samples are estimated following diffusion on the pre-filtered space, and the nonlinear weighting of gradient directions suppresses the amount of diffusion. Aliasing artifacts from low sample density are efficiently removed by isotropic filtering, and edge blurring is handled by the anisotropic method in the same process. Due to the differing sizes of the sampling gaps, the resampling condition is defined considering the causality between filter scale and edges. Using a partial differential equation (PDE) employing Gaussian scale-space, we iteratively achieve coarse-to-fine resampling. At a large scale, apertures and uncovered holes can be overcome because only strong and meaningful boundaries are selected at that resolution. The coarse-level resampling at a large scale is then iteratively refined to recover detailed scene structure. Simulation results show marked improvements in rendering quality.

  10. Computationally Efficient Resampling of Nonuniform Oversampled SAR Data

    DTIC Science & Technology

    2010-05-01

    The resampled data are calculated using both a simple average and a weighted average of the demodulated data, over trials with randomly varying accelerations. Results compare SAR imagery generated with uniform sampling against imagery generated from nonuniform sampling that was resampled, showing the noncoherent power difference (Fig. 5) and the coherent power difference (Fig. 6).

  11. Spectral resampling based on user-defined inter-band correlation filter: C3 and C4 grass species classification

    NASA Astrophysics Data System (ADS)

    Adjorlolo, Clement; Mutanga, Onisimo; Cho, Moses A.; Ismail, Riyad

    2013-04-01

    In this paper, a user-defined inter-band correlation filter function was used to resample hyperspectral data and thereby mitigate the problem of multicollinearity in classification analysis. The proposed resampling technique convolves the spectral dependence information between a chosen band-centre and its shorter and longer wavelength neighbours. Weighting threshold of inter-band correlation (WTC, Pearson's r) was calculated, whereby r = 1 at the band-centre. Various WTC (r = 0.99, r = 0.95 and r = 0.90) were assessed, and bands with coefficients beyond a chosen threshold were assigned r = 0. The resultant data were used in the random forest analysis to classify in situ C3 and C4 grass canopy reflectance. The respective WTC datasets yielded improved classification accuracies (kappa = 0.82, 0.79 and 0.76) with less correlated wavebands when compared to resampled Hyperion bands (kappa = 0.76). Overall, the results obtained from this study suggested that resampling of hyperspectral data should account for the spectral dependence information to improve overall classification accuracy as well as reducing the problem of multicollinearity.

  12. A comparison of resampling schemes for estimating model observer performance with small ensembles

    NASA Astrophysics Data System (ADS)

    Elshahaby, Fatma E. A.; Jha, Abhinav K.; Ghaly, Michael; Frey, Eric C.

    2017-09-01

    In objective assessment of image quality, an ensemble of images is used to compute the 1st and 2nd order statistics of the data. Often, only a finite number of images is available, leading to the issue of statistical variability in numerical observer performance. Resampling-based strategies can help overcome this issue. In this paper, we compared different combinations of resampling schemes (the leave-one-out (LOO) and the half-train/half-test (HT/HT)) and model observers (the conventional channelized Hotelling observer (CHO), channelized linear discriminant (CLD) and channelized quadratic discriminant). Observer performance was quantified by the area under the ROC curve (AUC). For a binary classification task and for each observer, the AUC value for an ensemble size of 2000 samples per class served as a gold standard for that observer. Results indicated that each observer yielded a different performance depending on the ensemble size and the resampling scheme. For a small ensemble size, the combination [CHO, HT/HT] had more accurate rankings than the combination [CHO, LOO]. Using the LOO scheme, the CLD and CHO had similar performance for large ensembles. However, the CLD outperformed the CHO and gave more accurate rankings for smaller ensembles. As the ensemble size decreased, the performance of the [CHO, LOO] combination seriously deteriorated as opposed to the [CLD, LOO] combination. Thus, it might be desirable to use the CLD with the LOO scheme when smaller ensemble size is available.

  13. Resampling: A Marriage of Computers and Statistics. ERIC/TM Digest.

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.; Shafer, Mary Morello

    Advances in computer technology are making it possible for educational researchers to use simpler statistical methods to address a wide range of questions with smaller data sets and fewer, and less restrictive, assumptions. This digest introduces computationally intensive statistics, collectively called resampling techniques. Resampling is a…

  14. An empirical study using permutation-based resampling in meta-regression

    PubMed Central

    2012-01-01

    Background: In meta-regression, as the number of trials in the analyses decreases, the risk of false positives or false negatives increases. This is partly due to the assumption of normality that may not hold in small samples. Creation of a distribution from the observed trials using permutation methods to calculate P values may allow for less spurious findings. Permutation has not been empirically tested in meta-regression. The objective of this study was to perform an empirical investigation to explore the differences in results for meta-analyses on a small number of trials using standard large sample approaches versus permutation-based methods for meta-regression. Methods: We isolated a sample of randomized controlled clinical trials (RCTs) for interventions that have a small number of trials (herbal medicine trials). Trials were then grouped by herbal species and condition and assessed for methodological quality using the Jadad scale, and data were extracted for each outcome. Finally, we performed meta-analyses on the primary outcome of each group of trials and meta-regression for methodological quality subgroups within each meta-analysis. We used large sample methods and permutation methods in our meta-regression modeling. We then compared final models and final P values between methods. Results: We collected 110 trials across 5 intervention/outcome pairings and 5 to 10 trials per covariate. When applying large sample methods and permutation-based methods in our backwards stepwise regression, the covariates in the final models were identical in all cases. The P values for the covariates in the final model were larger in 78% (7/9) of the cases for permutation and identical for 22% (2/9) of the cases. Conclusions: We present empirical evidence that permutation-based resampling may not change final models when using backwards stepwise regression, but may increase P values in meta-regression of multiple covariates for a relatively small number of trials. PMID:22587815
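
    A small Python sketch of the permutation idea for a single meta-regression covariate: the covariate (here a hypothetical quality score) is permuted across trials and an inverse-variance weighted regression is refit each time. The data, weights, and single-covariate setup are invented for illustration and do not reproduce the study's stepwise modeling.

      import numpy as np

      def wls_slope(x, y, w):
          """Slope from inverse-variance weighted least squares of y on x."""
          X = np.column_stack([np.ones_like(x), x])
          W = np.diag(w)
          beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
          return beta[1]

      rng = np.random.default_rng(0)
      n_trials = 8
      quality = rng.integers(1, 6, n_trials).astype(float)             # hypothetical Jadad-like scores
      effect = 0.2 - 0.05 * quality + rng.normal(0, 0.1, n_trials)     # trial effect sizes
      var = rng.uniform(0.01, 0.05, n_trials)                          # within-trial variances
      w = 1.0 / var

      obs = wls_slope(quality, effect, w)
      perm = np.array([wls_slope(rng.permutation(quality), effect, w) for _ in range(5000)])
      p_perm = (np.sum(np.abs(perm) >= abs(obs)) + 1) / (5000 + 1)
      print(f"slope = {obs:.3f}, permutation p = {p_perm:.4f}")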

  15. Resampling methods in Microsoft Excel® for estimating reference intervals

    PubMed Central

    Theodorsson, Elvar

    2015-01-01

    Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes natural functions, which lend themselves well to this purpose, including recommended interpolation procedures for estimating 2.5 and 97.5 percentiles.
The purpose of this paper is to introduce the reader to resampling estimation techniques in general and in using Microsoft Excel® 2010 for the purpose of estimating reference intervals in particular.
    Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples. PMID:26527366
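
    The Excel® workbook functions are not reproduced here; the equivalent resampling logic in Python (percentile bootstrap of the 2.5th and 97.5th percentiles from a small reference sample) might look like the following sketch, with simulated reference values and 1000 resamples as assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      reference = rng.lognormal(mean=1.0, sigma=0.4, size=40)   # 40 reference individuals

      n_boot = 1000                                             # at least 500-1000 resamples
      lower = np.empty(n_boot)
      upper = np.empty(n_boot)
      for i in range(n_boot):
          resample = rng.choice(reference, size=reference.size, replace=True)
          lower[i], upper[i] = np.percentile(resample, [2.5, 97.5])

      print(f"reference interval: {lower.mean():.2f} - {upper.mean():.2f}")
      print(f"90% CI of lower limit: {np.percentile(lower, [5, 95])}")
      print(f"90% CI of upper limit: {np.percentile(upper, [5, 95])}")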

  16. Resampling methods in Microsoft Excel® for estimating reference intervals.

    PubMed

    Theodorsson, Elvar

    2015-01-01

    Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes natural functions, which lend themselves well to this purpose including recommended interpolation procedures for estimating 2.5 and 97.5 percentiles. 
The purpose of this paper is to introduce the reader to resampling estimation techniques in general and in using Microsoft Excel® 2010 for the purpose of estimating reference intervals in particular.
    Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples.

  17. An add-in implementation of the RESAMPLING syntax under Microsoft EXCEL.

    PubMed

    Meineke, I

    2000-10-01

    The RESAMPLING syntax defines a set of powerful commands, which allow the programming of probabilistic statistical models with few, easily memorized statements. This paper presents an implementation of the RESAMPLING syntax using Microsoft EXCEL with Microsoft WINDOWS(R) as a platform. Two examples are given to demonstrate typical applications of RESAMPLING in biomedicine. Details of the implementation with special emphasis on the programming environment are discussed at length. The add-in is available electronically to interested readers upon request. The use of the add-in facilitates numerical statistical analyses of data from within EXCEL in a comfortable way.

  18. Precise orbit determination using the batch filter based on particle filtering with genetic resampling approach

    NASA Astrophysics Data System (ADS)

    Kim, Young-Rok; Park, Eunseo; Choi, Eun-Jung; Park, Sang-Young; Park, Chandeok; Lim, Hyung-Chul

    2014-09-01

    In this study, a genetic resampling (GRS) approach is utilized for precise orbit determination (POD) using the batch filter based on particle filtering (PF). Two genetic operations, arithmetic crossover and residual mutation, are used for GRS of the batch filter based on PF (PF batch filter). For POD, the Laser-ranging Precise Orbit Determination System (LPODS) and satellite laser ranging (SLR) observations of the CHAMP satellite are used. Monte Carlo trials for POD are performed one hundred times. The characteristics of the POD results obtained by the PF batch filter with GRS are compared with those of a PF batch filter with minimum residual resampling (MRRS). The post-fit residual, the 3D error by external orbit comparison, and the POD repeatability are analyzed for orbit quality assessments. The POD results are externally checked against NASA JPL's orbits obtained using totally different software, measurements, and techniques. For post-fit residuals and 3D errors, both MRRS and GRS give accurate estimation results whose mean root mean square (RMS) values are at a level of 5 cm and 10-13 cm, respectively. The mean radial orbit errors of both methods are at a level of 5 cm. For POD repeatability, represented as the standard deviations of post-fit residuals and 3D errors over repeated PODs, however, GRS yields 25% and 13% more robust estimation results than MRRS for the post-fit residual and 3D error, respectively. This study shows that the PF batch filter with the GRS approach using genetic operations is superior to the PF batch filter with MRRS in terms of robustness in POD with SLR observations.

  19. Modeling of correlated data with informative cluster sizes: An evaluation of joint modeling and within-cluster resampling approaches.

    PubMed

    Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S

    2017-08-01

    Joint modeling and within-cluster resampling are two approaches that are used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performances and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to the misspecification of cluster size models in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate the validity of it in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
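
    A brief Python sketch of within-cluster resampling for a cluster-specific covariate: one observation is drawn per cluster, a simple model is refit, and the resampled estimates are averaged. The linear outcome model, the simulated clustered data, and the summary reported here are assumptions; the variance machinery of the full procedure is not shown.

      import numpy as np

      rng = np.random.default_rng(0)

      # Simulated clustered data with informative cluster sizes (litter-like clusters).
      n_clusters = 60
      cluster_sizes = rng.integers(2, 12, n_clusters)
      rows = []
      for c, size in enumerate(cluster_sizes):
          dose = rng.uniform(0, 1)                          # cluster-specific covariate
          u = rng.normal(0, 0.5)                            # cluster random effect
          for _ in range(size):
              rows.append((c, dose, 1.0 + 0.8 * dose + u + rng.normal(0, 1.0)))
      cluster_id, dose, y = map(np.array, zip(*rows))

      def ols_slope(x, yy):
          X = np.column_stack([np.ones_like(x), x])
          return np.linalg.lstsq(X, yy, rcond=None)[0][1]

      # Within-cluster resampling: repeatedly pick one observation per cluster.
      estimates = []
      for _ in range(2000):
          idx = [rng.choice(np.flatnonzero(cluster_id == c)) for c in range(n_clusters)]
          estimates.append(ols_slope(dose[idx], y[idx]))
      estimates = np.array(estimates)
      print(f"WCR slope estimate: {estimates.mean():.3f} (resampling SD {estimates.std(ddof=1):.3f})")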

  20. Conditional Monthly Weather Resampling Procedure for Operational Seasonal Water Resources Forecasting

    NASA Astrophysics Data System (ADS)

    Beckers, J.; Weerts, A.; Tijdeman, E.; Welles, E.; McManamon, A.

    2013-12-01

    To provide reliable and accurate seasonal streamflow forecasts for water resources management, several operational hydrologic agencies and hydropower companies around the world use the Extended Streamflow Prediction (ESP) procedure. The ESP in its original implementation does not accommodate any additional information that the forecaster may have about expected deviations from climatology in the near future. Several attempts have been made to improve the skill of the ESP forecast, especially for areas affected by teleconnections (e.g., ENSO, PDO), via selection (Hamlet and Lettenmaier, 1999) or weighting schemes (Werner et al., 2004; Wood and Lettenmaier, 2006; Najafi et al., 2012). A disadvantage of such schemes is that they lead to a reduction of the signal to noise ratio of the probabilistic forecast. To overcome this, we propose a resampling method conditional on climate indices to generate meteorological time series to be used in the ESP. The method can be used to generate a large number of meteorological ensemble members in order to improve the statistical properties of the ensemble. The effectiveness of the method was demonstrated in a real-time operational hydrologic seasonal forecast system for the Columbia River basin operated by the Bonneville Power Administration. The forecast skill of the k-nn resampler was tested against the original ESP for three basins at the long-range seasonal time scale. The BSS and CRPSS were used to compare the results to those of the original ESP method. Positive forecast skill scores were found for the resampler method conditioned on different indices for the prediction of spring peak flows in the Dworshak and Hungry Horse basins. For the Libby Dam basin, however, no improvement in skill was found. The proposed resampling method is a promising practical approach that can add skill to ESP forecasts at the seasonal time scale. Further improvement is possible by fine tuning the method and selecting the most

  1. Resampling soil profiles can constrain large-scale changes in the C cycle: obtaining robust information from radiocarbon measurements

    NASA Astrophysics Data System (ADS)

    Baisden, W. T.; Prior, C.; Lambie, S.; Tate, K.; Bruhn, F.; Parfitt, R.; Schipper, L.; Wilde, R. H.; Ross, C.

    2006-12-01

    Soil organic matter contains more C than terrestrial biomass and atmospheric CO2 combined, and reacts to climate and land-use change on timescales requiring long-term experiments or monitoring. The direction and uncertainty of soil C stock changes have been difficult to predict and incorporate in decision support tools for climate change policies. Moreover, standardization of approaches has been difficult because historic methods of soil sampling have varied regionally, nationally and temporally. The most common and uniform type of historic sampling is soil profiles, which have commonly been collected, described and archived in the course of both soil survey studies and research. Resampling soil profiles has considerable utility in carbon monitoring and in parameterizing models to understand the ecosystem responses to global change. Recent work spanning seven soil orders in New Zealand's grazed pastures has shown that, averaged over approximately 20 years, 31 soil profiles lost 106 g C m-2 y-1 (p=0.01) and 9.1 g N m-2 y-1 (p=0.002). These losses are unexpected and appear to extend well below the upper 30 cm of soil. Following these recent results, additional advantages of resampling soil profiles can be emphasized. One of the most powerful applications afforded by resampling archived soils is the use of the pulse label of radiocarbon injected into the atmosphere by thermonuclear weapons testing circa 1963 as a tracer of soil carbon dynamics. This approach allows estimation of the proportion of soil C that is 'passive' or 'inert' and therefore unlikely to respond to global change. Evaluation of resampled soil horizons in a New Zealand soil chronosequence confirms that the approach yields consistent values for the proportion of 'passive' soil C, reaching 25% of surface horizon soil C over 12,000 years. Across whole profiles, radiocarbon data suggest that the proportion of 'passive' C in New Zealand grassland soil can be less than 40% of total soil C. Below 30 cm

  2. Assessing Uncertainties in Surface Water Security: A Probabilistic Multi-model Resampling approach

    NASA Astrophysics Data System (ADS)

    Rodrigues, D. B. B.

    2015-12-01

    Various uncertainties are involved in the representation of processes that characterize interactions between societal needs, ecosystem functioning, and hydrological conditions. Here, we develop an empirical uncertainty assessment of water security indicators that characterize scarcity and vulnerability, based on a multi-model and resampling framework. We consider several uncertainty sources including those related to: i) observed streamflow data; ii) hydrological model structure; iii) residual analysis; iv) the definition of the Environmental Flow Requirement method; v) the definition of critical conditions for water provision; and vi) the critical demand imposed by human activities. We estimate the overall uncertainty coming from the hydrological model by means of a residual bootstrap resampling approach, and by uncertainty propagation through different methodological arrangements applied to a 291 km² agricultural basin within the Cantareira water supply system in Brazil. Together, the two-component hydrograph residual analysis and the block bootstrap resampling approach result in a more accurate and precise estimate of the uncertainty (95% confidence intervals) in the simulated time series. We then compare the uncertainty estimates associated with the water security indicators across the multi-model framework and the individual uncertainty estimation approaches. The method is general and can be easily extended, forming the basis for meaningful support to end-users facing water resource challenges by enabling them to incorporate a viable uncertainty analysis into a robust decision-making process.
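
    The residual block bootstrap step can be sketched as follows; the synthetic "observed" and "simulated" series, the 30-day block length, and the single-component residual treatment are illustrative assumptions rather than the study's two-component hydrograph analysis.

    ```python
    # Minimal sketch of a residual block bootstrap around a simulated streamflow
    # series; data and block length are arbitrary choices for illustration.
    import numpy as np

    rng = np.random.default_rng(2)

    n = 730                                   # two years of daily flow
    observed = 10 + 5 * np.sin(np.arange(n) * 2 * np.pi / 365) + rng.normal(0, 1, n)
    simulated = observed + rng.normal(0.3, 0.8, n)   # stand-in for a hydrological model run
    residuals = observed - simulated

    def block_bootstrap(x, block_len, size, rng):
        """Resample a series by concatenating randomly chosen contiguous blocks."""
        n_blocks = int(np.ceil(size / block_len))
        starts = rng.integers(0, len(x) - block_len, size=n_blocks)
        return np.concatenate([x[s:s + block_len] for s in starts])[:size]

    B = 1000
    boot_flows = np.empty((B, n))
    for b in range(B):
        boot_flows[b] = simulated + block_bootstrap(residuals, block_len=30, size=n, rng=rng)

    lower, upper = np.percentile(boot_flows, [2.5, 97.5], axis=0)  # daily 95% uncertainty band
    print(lower[:3], upper[:3])
    ```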

  3. Partition resampling and extrapolation averaging: approximation methods for quantifying gene expression in large numbers of short oligonucleotide arrays.

    PubMed

    Goldstein, Darlene R

    2006-10-01

    Studies of gene expression using high-density short oligonucleotide arrays have become a standard in a variety of biological contexts. Of the expression measures that have been proposed to quantify expression in these arrays, multi-chip-based measures have been shown to perform well. As gene expression studies increase in size, however, utilizing multi-chip expression measures is more challenging in terms of computing memory requirements and time. A strategic alternative to exact multi-chip quantification on a full large chip set is to approximate expression values based on subsets of chips. This paper introduces an extrapolation method, Extrapolation Averaging (EA), and a resampling method, Partition Resampling (PR), to approximate expression in large studies. An examination of properties indicates that subset-based methods can perform well compared with exact expression quantification. The focus is on short oligonucleotide chips, but the same ideas apply equally well to any array type for which expression is quantified using an entire set of arrays, rather than for only a single array at a time. Software implementing Partition Resampling and Extrapolation Averaging is under development as an R package for the BioConductor project.

  4. Systematic evaluation of sequential geostatistical resampling within MCMC for posterior sampling of near-surface geophysical inverse problems

    NASA Astrophysics Data System (ADS)

    Ruggeri, Paolo; Irving, James; Holliger, Klaus

    2015-08-01

    We critically examine the performance of sequential geostatistical resampling (SGR) as a model proposal mechanism for Bayesian Markov-chain-Monte-Carlo (MCMC) solutions to near-surface geophysical inverse problems. Focusing on a series of simple yet realistic synthetic crosshole georadar tomographic examples characterized by different numbers of data, levels of data error and degrees of model parameter spatial correlation, we investigate the efficiency of three different resampling strategies with regard to their ability to generate statistically independent realizations from the Bayesian posterior distribution. Quite importantly, our results show that, no matter what resampling strategy is employed, many of the examined test cases require an unreasonably high number of forward model runs to produce independent posterior samples, meaning that the SGR approach as currently implemented will not be computationally feasible for a wide range of problems. Although use of a novel gradual-deformation-based proposal method can help to alleviate these issues, it does not offer a full solution. Further, we find that the nature of the SGR proposal strongly influences MCMC performance; however, no clear rule exists as to what set of inversion parameters and/or overall proposal acceptance rate will allow for the most efficient implementation. We conclude that although the SGR methodology is highly attractive as it allows for the consideration of complex geostatistical priors as well as conditioning to hard and soft data, further developments are necessary in the context of novel or hybrid MCMC approaches for it to be considered generally suitable for near-surface geophysical inversions.

  5. Random forests ensemble classifier trained with data resampling strategy to improve cardiac arrhythmia diagnosis.

    PubMed

    Ozçift, Akin

    2011-05-01

    Supervised classification algorithms are commonly used in the design of computer-aided diagnosis systems. In this study, we present a resampling-strategy-based Random Forests (RF) ensemble classifier to improve diagnosis of cardiac arrhythmia. Random Forests is an ensemble classifier that consists of many decision trees and outputs the class that is the mode of the classes output by the individual trees. In this way, an RF ensemble classifier performs better than a single tree from a classification performance point of view. In general, multiclass datasets having an unbalanced distribution of sample sizes are difficult to analyze in terms of class discrimination. The cardiac arrhythmia dataset is such a dataset, with multiple classes of small sample sizes, and it is therefore well suited to testing our resampling-based training strategy. The dataset contains 452 samples in fourteen types of arrhythmias, and eleven of these classes have sample sizes less than 15. Our diagnosis strategy consists of two parts: (i) a correlation-based feature selection algorithm is used to select relevant features from the cardiac arrhythmia dataset; (ii) the RF machine learning algorithm is used to evaluate the performance of the selected features with and without simple random sampling to evaluate the efficiency of the proposed training strategy. The resultant accuracy of the classifier is found to be 90.0%, which is quite high diagnostic performance for cardiac arrhythmia. Furthermore, three case studies, i.e., thyroid, cardiotocography and audiology, are used to benchmark the effectiveness of the proposed method. The results of the experiments demonstrated the efficiency of the random sampling strategy in training the RF ensemble classification algorithm. Copyright © 2011 Elsevier Ltd. All rights reserved.
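
    A hedged sketch of the two-part strategy on synthetic data, using scikit-learn: a univariate feature-selection step followed by a Random Forest trained on a randomly resampled, class-balanced training set. The dataset, the oversample-to-the-majority rule, and the ANOVA-based selector stand in for the arrhythmia data and the correlation-based selection used in the paper.

    ```python
    # Illustration only: feature selection + class-balancing resampling + RF.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import train_test_split
    from sklearn.utils import resample

    X, y = make_classification(n_samples=450, n_features=40, n_informative=10,
                               n_classes=5, n_clusters_per_class=1,
                               weights=[0.5, 0.25, 0.15, 0.06, 0.04], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # (i) keep the features most associated with the class label
    selector = SelectKBest(f_classif, k=15).fit(X_tr, y_tr)
    X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

    # (ii) simple random resampling: oversample every class up to the majority size
    counts = np.bincount(y_tr)
    parts_X, parts_y = [], []
    for cls in np.unique(y_tr):
        Xc, yc = X_tr_sel[y_tr == cls], y_tr[y_tr == cls]
        Xc, yc = resample(Xc, yc, replace=True, n_samples=counts.max(), random_state=0)
        parts_X.append(Xc); parts_y.append(yc)
    X_bal, y_bal = np.vstack(parts_X), np.concatenate(parts_y)

    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_bal, y_bal)
    print("test accuracy:", clf.score(X_te_sel, y_te))
    ```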

  6. Snowball: resampling combined with distance-based regression to discover transcriptional consequences of a driver mutation

    PubMed Central

    Xu, Yaomin; Guo, Xingyi; Sun, Jiayang; Zhao, Zhongming

    2015-01-01

    Motivation: Large-scale cancer genomic studies, such as The Cancer Genome Atlas (TCGA), have profiled multidimensional genomic data, including mutation and expression profiles on a variety of cancer cell types, to uncover the molecular mechanisms of carcinogenesis. More than a hundred driver mutations have been characterized that confer a cell growth advantage. However, how driver mutations regulate the transcriptome to affect cellular functions remains largely unexplored. Differential analysis of gene expression relative to a driver mutation on patient samples could provide new insights into driver mutation dysregulation in the tumor genome and into developing personalized treatment strategies. Results: Here, we introduce the Snowball approach as a highly sensitive statistical analysis method to identify transcriptional signatures that are affected by a recurrent driver mutation. Snowball utilizes a resampling-based approach combined with a distance-based regression framework to assign a robust ranking index to genes based on their aggregated association with the presence of the mutation, and further selects the top significant genes for downstream data analyses or experiments. In our application of the Snowball approach to both synthesized and TCGA data, we demonstrated that it outperforms the standard methods and provides more accurate inferences about the functional effects and transcriptional dysregulation of driver mutations. Availability and implementation: R package and source code are available from CRAN at http://cran.r-project.org/web/packages/DESnowball, and also available at http://bioinfo.mc.vanderbilt.edu/DESnowball/. Contact: zhongming.zhao@vanderbilt.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25192743

  7. Comparison of parametric and bootstrap method in bioequivalence test.

    PubMed

    Ahn, Byung-Jin; Yim, Dong-Seok

    2009-10-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods, we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of a normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80-125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption.

  8. Comparison of Parametric and Bootstrap Method in Bioequivalence Test

    PubMed Central

    Ahn, Byung-Jin

    2009-01-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods, we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of a normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80-125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption. PMID:19915699
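
    The bootstrap comparison described in the two records above can be illustrated, in a much simplified form, by a percentile-bootstrap 90% CI for the AUC geometric mean ratio under a parallel-group assumption; the synthetic data and the parallel (rather than crossover) design are assumptions made only to keep the sketch short.

    ```python
    # Simplified illustration: percentile-bootstrap 90% CI for the AUC geometric
    # mean ratio in a parallel design, checked against the 80-125% rule.
    import numpy as np

    rng = np.random.default_rng(3)

    log_auc_test = rng.normal(4.00, 0.25, size=24)   # log(AUC), test formulation
    log_auc_ref = rng.normal(4.05, 0.25, size=24)    # log(AUC), reference formulation

    B = 2000
    ratios = np.empty(B)
    for b in range(B):
        t = rng.choice(log_auc_test, size=log_auc_test.size, replace=True)
        r = rng.choice(log_auc_ref, size=log_auc_ref.size, replace=True)
        ratios[b] = np.exp(t.mean() - r.mean())      # geometric mean ratio

    lo, hi = np.percentile(ratios, [5, 95])          # nonparametric 90% CI
    print(f"90% bootstrap CI for AUC ratio: {lo:.3f} - {hi:.3f}")
    print("within the 80-125% acceptance range:", 0.80 <= lo and hi <= 1.25)
    ```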

  9. Resampling to accelerate cross-correlation searches for continuous gravitational waves from binary systems

    NASA Astrophysics Data System (ADS)

    Meadors, Grant David; Krishnan, Badri; Papa, Maria Alessandra; Whelan, John T.; Zhang, Yuanhao

    2018-02-01

    Continuous-wave (CW) gravitational waves (GWs) call for computationally intensive methods. Low signal-to-noise ratio signals need templated searches with long coherent integration times and thus fine parameter-space resolution. Longer integration increases sensitivity. Low-mass x-ray binaries (LMXBs) such as Scorpius X-1 (Sco X-1) may emit accretion-driven CWs at strains reachable by current ground-based observatories. Binary orbital parameters induce phase modulation. This paper describes how resampling corrects binary and detector motion, yielding source-frame time series used for cross-correlation. Compared to the previous, detector-frame, templated cross-correlation method, used for Sco X-1 on data from the first Advanced LIGO observing run (O1), resampling is about 20× faster in the costliest, most-sensitive frequency bands. Speed-up factors depend on integration time and search setup. The speed could be reinvested into longer integration, with a forecast median sensitivity gain of approximately 51% over 20 to 125 Hz, or 11% over 20 to 250 Hz, given the same per-band cost and setup. This paper's timing model enables future setup optimization. Resampling scales well with longer integration, and at 10× unoptimized cost could reach 2.83× and 2.75× median sensitivities, respectively, limited by spin-wandering. Then an O1 search could yield a marginalized-polarization upper limit reaching torque-balance at 100 Hz. Frequencies from 40 to 140 Hz might be probed in equal observing time with 2× improved detectors.

  10. Resampling probability values for weighted kappa with multiple raters.

    PubMed

    Mielke, Paul W; Berry, Kenneth J; Johnston, Janis E

    2008-04-01

    A new procedure to compute weighted kappa with multiple raters is described. A resampling procedure to compute approximate probability values for weighted kappa with multiple raters is presented. Applications of weighted kappa are illustrated with an example analysis of classifications by three independent raters.
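
    The resampling idea can be illustrated in the simpler two-rater case with linearly weighted kappa; the multi-rater statistic of the paper is not reproduced here, and the ratings below are synthetic.

    ```python
    # Permutation (resampling) p-value for linearly weighted Cohen's kappa,
    # two raters only, as a stand-in for the multi-rater procedure.
    import numpy as np

    rng = np.random.default_rng(4)

    def weighted_kappa(a, b, n_cat):
        """Linearly weighted Cohen's kappa for integer ratings 0..n_cat-1."""
        obs = np.zeros((n_cat, n_cat))
        for i, j in zip(a, b):
            obs[i, j] += 1
        obs /= obs.sum()
        exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
        w = 1 - np.abs(np.subtract.outer(np.arange(n_cat), np.arange(n_cat))) / (n_cat - 1)
        return ((w * obs).sum() - (w * exp).sum()) / (1 - (w * exp).sum())

    # Synthetic ordinal ratings from two raters on 60 subjects, 4 categories.
    truth = rng.integers(0, 4, size=60)
    rater1 = np.clip(truth + rng.integers(-1, 2, size=60), 0, 3)
    rater2 = np.clip(truth + rng.integers(-1, 2, size=60), 0, 3)

    observed = weighted_kappa(rater1, rater2, n_cat=4)
    B = 5000
    null = np.array([weighted_kappa(rng.permutation(rater1), rater2, 4) for _ in range(B)])
    p_value = (1 + np.sum(null >= observed)) / (B + 1)
    print(f"weighted kappa = {observed:.3f}, resampling p-value = {p_value:.4f}")
    ```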

  11. A Resampling Analysis of Federal Family Assistance Program Quality Control Data: An Application of the Bootstrap.

    ERIC Educational Resources Information Center

    Hand, Michael L.

    1990-01-01

    Use of the bootstrap resampling technique (BRT) is assessed in its application to resampling analysis associated with measurement of payment allocation errors by federally funded Family Assistance Programs. The BRT is applied to a food stamp quality control database in Oregon. This analysis highlights the outlier-sensitivity of the…

  12. Application of a New Resampling Method to SEM: A Comparison of S-SMART with the Bootstrap

    ERIC Educational Resources Information Center

    Bai, Haiyan; Sivo, Stephen A.; Pan, Wei; Fan, Xitao

    2016-01-01

    Among the commonly used resampling methods of dealing with small-sample problems, the bootstrap enjoys the widest applications because it often outperforms its counterparts. However, the bootstrap still has limitations when its operations are contemplated. Therefore, the purpose of this study is to examine an alternative, new resampling method…

  13. Reconstruction of dynamical systems from resampled point processes produced by neuron models

    NASA Astrophysics Data System (ADS)

    Pavlova, Olga N.; Pavlov, Alexey N.

    2018-04-01

    Characterization of dynamical features of chaotic oscillations from point processes is based on embedding theorems for non-uniformly sampled signals such as the sequences of interspike intervals (ISIs). This theoretical background confirms the ability of attractor reconstruction from ISIs generated by chaotically driven neuron models. The quality of such reconstruction depends on the available length of the analyzed dataset. We discuss how data resampling improves the reconstruction when only a short data record is available and show that this effect is observed for different types of spike generation mechanisms.

  14. Using resampling to assess reliability of audio-visual survey strategies for marbled murrelets at inland forest sites

    USGS Publications Warehouse

    Jodice, Patrick G.R.; Garman, S.L.; Collopy, Michael W.

    2001-01-01

    Marbled Murrelets (Brachyramphus marmoratus) are threatened seabirds that nest in coastal old-growth coniferous forests throughout much of their breeding range. Currently, observer-based audio-visual surveys are conducted at inland forest sites during the breeding season primarily to determine nesting distribution and breeding status and are being used to estimate temporal or spatial trends in murrelet detections. Our goal was to assess the feasibility of using audio-visual survey data for such monitoring. We used an intensive field-based survey effort to record daily murrelet detections at seven survey stations in the Oregon Coast Range. We then used computer-aided resampling techniques to assess the effectiveness of twelve survey strategies with varying scheduling and a sampling intensity of 4-14 surveys per breeding season to estimate known means and SDs of murrelet detections. Most survey strategies we tested failed to provide estimates of detection means and SDs that were within ±20% of actual means and SDs. Daily detections were, however, frequently estimated to within ±50% of field data with sampling efforts of 14 days/breeding season. Additional resampling analyses with statistically generated detection data indicated that the temporal variability in detection data had a great effect on the reliability of the mean and SD estimates calculated from the twelve survey strategies, while the value of the mean had little effect. Effectiveness at estimating multi-year trends in detection data was similarly poor, indicating that audio-visual surveys might be reliably used to estimate annual declines in murrelet detections of the order of 50% per year.

  15. The Beginner's Guide to the Bootstrap Method of Resampling.

    ERIC Educational Resources Information Center

    Lane, Ginny G.

    The bootstrap method of resampling can be useful in estimating the replicability of study results. The bootstrap procedure creates a mock population from a given sample of data from which multiple samples are then drawn. The method extends the usefulness of the jackknife procedure as it allows for computation of a given statistic across a maximal…
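
    A minimal sketch of the basic procedure described above, assuming nothing beyond NumPy: the observed sample plays the role of the mock population, resamples are drawn with replacement, and the spread of the recomputed statistic gives a standard error and a percentile interval.

    ```python
    # Basic bootstrap of the sample median: resample with replacement, recompute
    # the statistic, and summarize its empirical sampling distribution.
    import numpy as np

    rng = np.random.default_rng(5)
    sample = rng.exponential(scale=10.0, size=50)      # any observed data

    B = 5000
    boot_medians = np.array([np.median(rng.choice(sample, size=sample.size, replace=True))
                             for _ in range(B)])

    se = boot_medians.std(ddof=1)                      # bootstrap standard error
    ci = np.percentile(boot_medians, [2.5, 97.5])      # percentile 95% CI
    print(f"median = {np.median(sample):.2f}, bootstrap SE = {se:.2f}, 95% CI = {ci}")
    ```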

  16. Wavelet analysis in ecology and epidemiology: impact of statistical tests

    PubMed Central

    Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario

    2014-01-01

    Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the ‘beta-surrogate’ method. PMID:24284892

  17. Wavelet analysis in ecology and epidemiology: impact of statistical tests.

    PubMed

    Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario

    2014-02-06

    Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the 'beta-surrogate' method.
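
    For context, the sketch below shows how a red-noise (AR(1)) surrogate matched to a series' lag-1 autocorrelation and variance is typically generated; the two records above argue that such surrogates are often inadequate, so this is only the baseline against which data-driven resampling methods would be compared. The toy series and parameters are invented.

    ```python
    # Generate AR(1) ("red noise") surrogate series matched to the lag-1
    # autocorrelation and variance of an observed series.
    import numpy as np

    rng = np.random.default_rng(6)

    t = np.arange(512)
    series = np.sin(2 * np.pi * t / 36) + rng.normal(0, 0.7, t.size)  # toy periodic series

    x = series - series.mean()
    phi = np.corrcoef(x[:-1], x[1:])[0, 1]             # lag-1 autocorrelation
    sigma = x.std(ddof=1) * np.sqrt(1 - phi**2)        # innovation SD matching the variance

    def ar1_surrogate(n, phi, sigma, rng):
        out = np.empty(n)
        out[0] = rng.normal(0, sigma / np.sqrt(1 - phi**2))   # stationary start
        for i in range(1, n):
            out[i] = phi * out[i - 1] + rng.normal(0, sigma)
        return out

    surrogates = np.array([ar1_surrogate(series.size, phi, sigma, rng) for _ in range(200)])
    # `surrogates` would feed the null distribution of whatever wavelet quantity is tested.
    print(surrogates.shape)
    ```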

  18. Significance testing of rules in rule-based models of human problem solving

    NASA Technical Reports Server (NTRS)

    Lewis, C. M.; Hammer, J. M.

    1986-01-01

    Rule-based models of human problem solving have typically not been tested for statistical significance. Three methods of testing rules - analysis of variance, randomization, and contingency tables - are presented. Advantages and disadvantages of the methods are also described.

  19. A critical issue in model-based inference for studying trait-based community assembly and a solution.

    PubMed

    Ter Braak, Cajo J F; Peres-Neto, Pedro; Dray, Stéphane

    2017-01-01

    Statistical testing of trait-environment association from data is a challenge as there is no common unit of observation: the trait is observed on species, the environment on sites and the mediating abundance on species-site combinations. A number of correlation-based methods, such as the community weighted trait means method (CWM), the fourth-corner correlation method and the multivariate method RLQ, have been proposed to estimate such trait-environment associations. In these methods, valid statistical testing proceeds by performing two separate resampling tests, one site-based and the other species-based, and by assessing significance by the larger of the two p-values (the p_max test). Recently, regression-based methods using generalized linear models (GLM) have been proposed as a promising alternative with statistical inference via site-based resampling. We investigated the performance of this new approach along with approaches that mimicked the p_max test using GLM instead of fourth-corner. By simulation using models with additional random variation in the species response to the environment, the site-based resampling tests using GLM are shown to have severely inflated type I error, of up to 90%, when the nominal level is set at 5%. In addition, predictive modelling of such data using site-based cross-validation very often identified trait-environment interactions that had no predictive value. The problem that we identify is not an "omitted variable bias" problem as it occurs even when the additional random variation is independent of the observed trait and environment data. Instead, it is a problem of ignoring a random effect. In the same simulations, the GLM-based p_max test controlled the type I error in all models proposed so far in this context, but still gave slightly inflated error in more complex models that included both missing (but important) traits and missing (but important) environmental variables. For screening the importance of single trait
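
    A hedged sketch of the p_max idea on synthetic data: one permutation test over sites, one over species, and the reported p-value is the larger of the two. A simple abundance-weighted trait-environment cross-product stands in for the fourth-corner statistic, so this illustrates the testing scheme rather than any of the cited methods.

    ```python
    # Dual permutation test (sites and species) with a p_max decision rule.
    import numpy as np

    rng = np.random.default_rng(7)

    n_sites, n_species = 40, 25
    env = rng.normal(size=n_sites)
    trait = rng.normal(size=n_species)
    # abundances weakly structured by a trait-environment interaction
    L = rng.poisson(np.exp(0.3 * np.outer(env, trait)))

    def stat(L, env, trait):
        w = L / L.sum()
        e = env - (w.sum(axis=1) * env).sum()      # abundance-weighted centering
        t = trait - (w.sum(axis=0) * trait).sum()
        return np.abs((w * np.outer(e, t)).sum())

    obs = stat(L, env, trait)
    B = 999
    p_site = (1 + sum(stat(L, rng.permutation(env), trait) >= obs for _ in range(B))) / (B + 1)
    p_species = (1 + sum(stat(L, env, rng.permutation(trait)) >= obs for _ in range(B))) / (B + 1)
    p_max = max(p_site, p_species)
    print(f"site-based p = {p_site:.3f}, species-based p = {p_species:.3f}, p_max = {p_max:.3f}")
    ```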

  20. WE-G-204-03: Photon-Counting Hexagonal Pixel Array CdTe Detector: Optimal Resampling to Square Pixels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, S; Vedantham, S; Karellas, A

    Purpose: Detectors with hexagonal pixels require resampling to square pixels for distortion-free display of acquired images. In this work, the presampling modulation transfer function (MTF) of a hexagonal pixel array photon-counting CdTe detector for region-of-interest fluoroscopy was measured and the optimal square pixel size for resampling was determined. Methods: A 0.65 mm thick CdTe Schottky sensor capable of concurrently acquiring up to 3 energy-windowed images was operated in a single energy-window mode to include ≥10 keV photons. The detector had hexagonal pixels with an apothem of 30 microns, resulting in pixel spacing of 60 and 51.96 microns along the two orthogonal directions. Images of a tungsten edge test device acquired under IEC RQA5 conditions were double Hough transformed to identify the edge and numerically differentiated. The presampling MTF was determined from the finely sampled line spread function that accounted for the hexagonal sampling. The optimal square pixel size was determined in two ways: the square pixel size for which the aperture function evaluated at the Nyquist frequencies along the two orthogonal directions matched that from the hexagonal pixel aperture functions, and the square pixel size for which the mean absolute difference between the square and hexagonal aperture functions was minimized over all frequencies up to the Nyquist limit. Results: Evaluation of the aperture functions over the entire frequency range resulted in a square pixel size of 53 microns with less than 2% difference from the hexagonal pixel. Evaluation of the aperture functions at the Nyquist frequencies alone resulted in 54 micron square pixels. For the photon-counting CdTe detector and after resampling to 53 micron square pixels using quadratic interpolation, the presampling MTF values at the Nyquist frequency of 9.434 cycles/mm along the two directions were 0.501 and 0.507. Conclusion: Hexagonal pixel array photon-counting CdTe detector after resampling to square

  1. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    PubMed Central

    Zhang, Yeqing; Wang, Meiling; Li, Yafeng

    2018-01-01

    With the objective of substantially decreasing the computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and a variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operational flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, the computational complexity of signal acquisition is formulated in terms of the numbers of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with only a slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition increases by about 2.7–5.6% per millisecond, with most satellites acquired successfully. PMID:29495301

  2. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time.

    PubMed

    Zhang, Yeqing; Wang, Meiling; Li, Yafeng

    2018-02-24

    With the objective of substantially decreasing the computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and a variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operational flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, the computational complexity of signal acquisition is formulated in terms of the numbers of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90-94% with only a slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition increases by about 2.7-5.6% per millisecond, with most satellites acquired successfully.
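
    A toy baseband illustration of the two ingredients described in the records above: decimating the received samples before acquisition, and declaring acquisition from the ratio of the highest to the second-highest circular-correlation peak. The stand-in PRN code, noise level, and the omission of the Doppler search are simplifying assumptions.

    ```python
    # Decimation followed by FFT-based circular correlation over code phases.
    import numpy as np

    rng = np.random.default_rng(8)

    code_len = 1023
    code = rng.choice([-1.0, 1.0], size=code_len)       # stand-in PRN code (+/-1 chips)
    oversample = 8
    true_delay = 357

    # received signal: circularly delayed, oversampled code plus noise
    rx = np.roll(np.repeat(code, oversample), true_delay * oversample)
    rx = rx + rng.normal(0, 2.0, rx.size)

    # (i) resampling strategy: reduce to one sample per chip by block averaging
    rx_lo = rx.reshape(code_len, oversample).mean(axis=1)

    # (ii) circular correlation over all code phases via FFT
    corr = np.abs(np.fft.ifft(np.fft.fft(rx_lo) * np.conj(np.fft.fft(code))))
    peak = int(np.argmax(corr))
    second = np.max(np.delete(corr, peak))
    ratio = corr[peak] / second                          # acquisition metric

    print(f"estimated code phase = {peak} chips (true {true_delay}), peak ratio = {ratio:.2f}")
    ```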

  3. On removing interpolation and resampling artifacts in rigid image registration.

    PubMed

    Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R; Fischl, Bruce

    2013-02-01

    We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration.

  4. On Removing Interpolation and Resampling Artifacts in Rigid Image Registration

    PubMed Central

    Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R.; Fischl, Bruce

    2013-01-01

    We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration. PMID:23076044

  5. Generating Virtual Patients by Multivariate and Discrete Re-Sampling Techniques.

    PubMed

    Teutonico, D; Musuamba, F; Maas, H J; Facius, A; Yang, S; Danhof, M; Della Pasqua, O

    2015-10-01

    Clinical Trial Simulations (CTS) are a valuable tool for decision-making during drug development. However, to obtain realistic simulation scenarios, the patients included in the CTS must be representative of the target population. This is particularly important when covariate effects exist that may affect the outcome of a trial. The objective of our investigation was to evaluate and compare CTS results using re-sampling from a population pool and multivariate distributions to simulate patient covariates. COPD was selected as the paradigm disease for the purposes of our analysis, FEV1 was used as the response measure, and the effects of a hypothetical intervention were evaluated in different populations in order to assess the predictive performance of the two methods. Our results show that the multivariate distribution method produces realistic covariate correlations, comparable to the real population. Moreover, it allows simulation of patient characteristics beyond the limits of inclusion and exclusion criteria in historical protocols. Both methods, discrete resampling and the multivariate distribution, generate realistic pools of virtual patients. However, the use of a multivariate distribution enables more flexible simulation scenarios, since it is not necessarily bound to the existing covariate combinations in the available clinical data sets.
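
    The two covariate-simulation strategies can be sketched as follows; the covariates, their ranges, and the use of an untransformed multivariate normal are invented for illustration (real analyses typically work on transformed scales and handle categorical covariates separately).

    ```python
    # Discrete re-sampling of observed patient rows vs. sampling from a fitted
    # multivariate normal distribution of the covariates.
    import numpy as np

    rng = np.random.default_rng(9)

    # observed COPD-like covariates: age (years), weight (kg), baseline FEV1 (L)
    n_obs = 300
    age = rng.normal(65, 8, n_obs)
    weight = rng.normal(75, 12, n_obs) - 0.3 * (age - 65)
    fev1 = np.clip(2.8 - 0.03 * (age - 65) + rng.normal(0, 0.4, n_obs), 0.5, None)
    observed = np.column_stack([age, weight, fev1])

    n_virtual = 1000

    # 1) discrete re-sampling: draw whole rows with replacement
    virtual_discrete = observed[rng.integers(0, n_obs, size=n_virtual)]

    # 2) multivariate distribution: fit mean/covariance and sample new combinations
    mu = observed.mean(axis=0)
    cov = np.cov(observed, rowvar=False)
    virtual_mvn = rng.multivariate_normal(mu, cov, size=n_virtual)

    print("correlation (observed):\n", np.corrcoef(observed, rowvar=False).round(2))
    print("correlation (MVN virtual):\n", np.corrcoef(virtual_mvn, rowvar=False).round(2))
    ```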

  6. Motion vector field phase-to-amplitude resampling for 4D motion-compensated cone-beam CT

    NASA Astrophysics Data System (ADS)

    Sauppe, Sebastian; Kuhm, Julian; Brehm, Marcus; Paysan, Pascal; Seghers, Dieter; Kachelrieß, Marc

    2018-02-01

    We propose a phase-to-amplitude resampling (PTAR) method to reduce motion blurring in motion-compensated (MoCo) 4D cone-beam CT (CBCT) image reconstruction, without increasing the computational complexity of the motion vector field (MVF) estimation approach. PTAR is able to improve the image quality in reconstructed 4D volumes, including both regular and irregular respiration patterns. The PTAR approach starts with a robust phase-gating procedure for the initial MVF estimation and then switches to a phase-adapted amplitude gating method. The switch implies an MVF-resampling, which makes them amplitude-specific. PTAR ensures that the MVFs, which have been estimated on phase-gated reconstructions, are still valid for all amplitude-gated reconstructions. To validate the method, we use an artificially deformed clinical CT scan with a realistic breathing pattern and several patient data sets acquired with a TrueBeamTM integrated imaging system (Varian Medical Systems, Palo Alto, CA, USA). Motion blurring, which still occurs around the area of the diaphragm or at small vessels above the diaphragm in artifact-specific cyclic motion compensation (acMoCo) images based on phase-gating, is significantly reduced by PTAR. Also, small lung structures appear sharper in the images. This is demonstrated both for simulated and real patient data. A quantification of the sharpness of the diaphragm confirms these findings. PTAR improves the image quality of 4D MoCo reconstructions compared to conventional phase-gated MoCo images, in particular for irregular breathing patterns. Thus, PTAR increases the robustness of MoCo reconstructions for CBCT. Because PTAR does not require any additional steps for the MVF estimation, it is computationally efficient. Our method is not restricted to CBCT but could rather be applied to other image modalities.

  7. Illustrating, Quantifying, and Correcting for Bias in Post-hoc Analysis of Gene-Based Rare Variant Tests of Association

    PubMed Central

    Grinde, Kelsey E.; Arbet, Jaron; Green, Alden; O'Connell, Michael; Valcarcel, Alessandra; Westra, Jason; Tintle, Nathan

    2017-01-01

    To date, gene-based rare variant testing approaches have focused on aggregating information across sets of variants to maximize statistical power in identifying genes showing significant association with diseases. Beyond identifying genes that are associated with diseases, the identification of causal variant(s) in those genes and estimation of their effect is crucial for planning replication studies and characterizing the genetic architecture of the locus. However, we illustrate that straightforward single-marker association statistics can suffer from substantial bias introduced by conditioning on gene-based test significance, due to the phenomenon often referred to as “winner's curse.” We illustrate the ramifications of this bias on variant effect size estimation and variant prioritization/ranking approaches, outline parameters of genetic architecture that affect this bias, and propose a bootstrap resampling method to correct for this bias. We find that our correction method significantly reduces the bias due to winner's curse (average two-fold decrease in bias, p < 2.2 × 10−6) and, consequently, substantially improves mean squared error and variant prioritization/ranking. The method is particularly helpful in adjustment for winner's curse effects when the initial gene-based test has low power and for relatively more common, non-causal variants. Adjustment for winner's curse is recommended for all post-hoc estimation and ranking of variants after a gene-based test. Further work is necessary to continue seeking ways to reduce bias and improve inference in post-hoc analysis of gene-based tests under a wide variety of genetic architectures. PMID:28959274

  8. A novel approach for epipolar resampling of cross-track linear pushbroom imagery using orbital parameters model

    NASA Astrophysics Data System (ADS)

    Jannati, Mojtaba; Valadan Zoej, Mohammad Javad; Mokhtarzade, Mehdi

    2018-03-01

    This paper presents a novel approach to epipolar resampling of cross-track linear pushbroom imagery using the orbital parameters model (OPM). The backbone of the proposed method relies on modification of the attitude parameters of linear array stereo imagery in such a way as to parallelize the approximate conjugate epipolar lines (ACELs) with the instantaneous base line (IBL) of the conjugate image points (CIPs). Afterward, a complementary rotation is applied in order to parallelize all the ACELs throughout the stereo imagery. The newly estimated attitude parameters are evaluated based on the direction of the IBL and the ACELs. Due to the spatial and temporal variability of the IBL (respectively changes in column and row numbers of the CIPs) and the nonparallel nature of the epipolar lines in the stereo linear images, polynomials in both the column and row numbers of the CIPs are used to model the new attitude parameters. As the instantaneous position of the sensors remains fixed, the digital elevation model (DEM) of the area of interest is not required in the resampling process. According to the experimental results obtained from two pairs of SPOT and RapidEye stereo imagery with a high elevation relief, the average absolute values of the remaining vertical parallaxes of the CIPs in the normalized images were 0.19 and 0.28 pixels, respectively, which confirms the high accuracy and applicability of the proposed method.

  9. NEAT: an efficient network enrichment analysis test.

    PubMed

    Signorelli, Mirko; Vinciotti, Veronica; Wit, Ernst C

    2016-09-05

    Network enrichment analysis is a powerful method, which allows gene enrichment analysis to be integrated with the information on relationships between genes that is provided by gene networks. Existing tests for network enrichment analysis deal only with undirected networks; they can be computationally slow and are based on normality assumptions. We propose NEAT, a test for network enrichment analysis. The test is based on the hypergeometric distribution, which naturally arises as the null distribution in this context. NEAT can be applied not only to undirected, but to directed and partially directed networks as well. Our simulations indicate that NEAT is considerably faster than alternative resampling-based methods, and that its capacity to detect enrichments is at least as good as that of alternative tests. We discuss applications of NEAT to network analyses in yeast by testing for enrichment of the Environmental Stress Response target gene set with GO Slim and KEGG functional gene sets, and also by inspecting associations between functional sets themselves. NEAT is a flexible and efficient test for network enrichment analysis that aims to overcome some limitations of existing resampling-based tests. The method is implemented in the R package neat, which can be freely downloaded from CRAN ( https://cran.r-project.org/package=neat ).
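
    As a simplified illustration of the hypergeometric logic (not the exact NEAT parameterization, which works with node degrees), the sketch below counts how many of gene set A's network neighbours fall in gene set B and compares the overlap with a hypergeometric null via scipy.stats; the network and gene sets are random.

    ```python
    # Hypergeometric enrichment of set B among the network neighbours of set A.
    import numpy as np
    from scipy.stats import hypergeom

    rng = np.random.default_rng(10)

    n_genes = 2000
    genes = np.arange(n_genes)
    set_a = set(rng.choice(genes, size=50, replace=False).tolist())
    set_b = set(rng.choice(genes, size=120, replace=False).tolist())

    # random undirected network stored as a set of edges
    edges = {tuple(sorted(map(int, e)))
             for e in rng.integers(0, n_genes, size=(8000, 2)) if e[0] != e[1]}

    neighbours_of_a = set()
    for u, v in edges:
        if u in set_a:
            neighbours_of_a.add(v)
        if v in set_a:
            neighbours_of_a.add(u)
    neighbours_of_a -= set_a

    pop = n_genes                       # population size
    succ = len(set_b)                   # "successes" in the population
    draws = len(neighbours_of_a)        # genes reachable from set A
    x = len(neighbours_of_a & set_b)    # observed overlap

    p_value = hypergeom.sf(x - 1, pop, succ, draws)   # P(overlap >= x) under the null
    print(f"overlap = {x} of {draws} neighbours, enrichment p-value = {p_value:.3g}")
    ```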

  10. Inferring microevolution from museum collections and resampling: lessons learned from Cepaea.

    PubMed

    Ożgo, Małgorzata; Liew, Thor-Seng; Webster, Nicole B; Schilthuizen, Menno

    2017-01-01

    Natural history collections are an important and largely untapped source of long-term data on evolutionary changes in wild populations. Here, we utilize three large geo-referenced sets of samples of the common European land-snail Cepaea nemoralis stored in the collection of Naturalis Biodiversity Center in Leiden, the Netherlands. Resampling of these populations allowed us to gain insight into changes occurring over 95, 69, and 50 years. Cepaea nemoralis is polymorphic for the colour and banding of the shell; the mode of inheritance of these patterns is known, and the polymorphism is under both thermal and predatory selection. At two sites the general direction of changes was towards lighter shells (yellow and less heavily banded), which is consistent with predictions based on on-going climatic change. At one site no directional changes were detected. At all sites there were significant shifts in morph frequencies between years, and our study contributes to the recognition that short-term changes in the states of populations often exceed long-term trends. Our interpretation was limited by the few time points available in the studied collections. We therefore stress the need for natural history collections to routinely collect large samples of common species, to allow much more reliable hind-casting of evolutionary responses to environmental change.

  11. Quasi-Epipolar Resampling of High Resolution Satellite Stereo Imagery for Semi Global Matching

    NASA Astrophysics Data System (ADS)

    Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.

    2015-12-01

    Semi-global matching is a well-known stereo matching algorithm in the photogrammetry and computer vision communities. Epipolar images are expected as the input to this algorithm. The epipolar geometry of linear array scanners is not a straight line, as it is in the case of frame cameras. Traditional epipolar resampling algorithms demand rational polynomial coefficients (RPCs), a physical sensor model or ground control points. In this paper we propose a new epipolar resampling method which works without the need for this information. In the proposed method, automatic feature extraction algorithms are employed to generate corresponding features for registering stereo pairs. In addition, the original images are divided into small tiles. In this way, by omitting the need for extra information, the speed of the matching algorithm is increased and the need for large temporary memory is reduced. Our experiments on a GeoEye-1 stereo pair captured over the city of Qom, Iran, demonstrate that the epipolar images are generated with sub-pixel accuracy.

  12. Temporal distribution of favourite books, movies, and records: differential encoding and re-sampling.

    PubMed

    Janssen, Steve M J; Chessa, Antonio G; Murre, Jaap M J

    2007-10-01

    The reminiscence bump is the effect that people recall more personal events from early adulthood than from childhood or later adulthood. The bump has been examined extensively. However, the question of whether the bump is caused by differential encoding or re-sampling is still unanswered. To examine this issue, participants were asked to name their three favourite books, movies, and records. Furthermore, they were asked when they first encountered them. We compared the temporal distributions and found that they all showed recency effects and reminiscence bumps. The distribution of favourite books had the largest recency effect and the distribution of favourite records had the largest reminiscence bump. We can explain these results by the difference in rehearsal. Books are read two or three times, movies are watched more frequently, whereas records are listened to numerous times. The results suggest that differential encoding initially causes the reminiscence bump and that re-sampling increases the bump further.

  13. A resampling strategy based on bootstrap to reduce the effect of large blunders in GPS absolute positioning

    NASA Astrophysics Data System (ADS)

    Angrisano, Antonio; Maratea, Antonio; Gaglione, Salvatore

    2018-01-01

    In the absence of obstacles, a GPS device is generally able to provide continuous and accurate estimates of position, while in urban scenarios buildings can generate multipath and echo-only phenomena that severely affect the continuity and the accuracy of the provided estimates. Receiver autonomous integrity monitoring (RAIM) techniques are able to reduce the negative consequences of large blunders in urban scenarios, but require both a good redundancy and a low contamination to be effective. In this paper a resampling strategy based on bootstrap is proposed as an alternative to RAIM, in order to accurately estimate position in the case of low redundancy and multiple blunders: starting with the pseudorange measurement model, at each epoch the available measurements are bootstrapped—that is, randomly sampled with replacement—and the generated a posteriori empirical distribution is exploited to derive the final position. Compared to standard bootstrap, in this paper the sampling probabilities are not uniform, but vary according to an indicator of the measurement quality. The proposed method has been compared with two different RAIM techniques on a data set collected in critical conditions, resulting in a clear improvement in all considered figures of merit.
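
    The resampling skeleton of the approach can be sketched on a toy linearized least-squares problem: measurements are bootstrapped with selection probabilities tied to a quality indicator, and the final estimate summarizes the empirical distribution of the per-resample solutions. The design matrix, quality values, and blunders below are invented and do not reproduce the paper's pseudorange model.

    ```python
    # Quality-weighted bootstrap of measurements in a linear least-squares problem.
    import numpy as np

    rng = np.random.default_rng(11)

    n_meas, n_par = 9, 3                    # few measurements, low redundancy
    H = rng.normal(size=(n_meas, n_par))    # linearized design matrix
    x_true = np.array([1.0, -2.0, 0.5])
    quality = rng.uniform(0.2, 1.0, n_meas)          # e.g., a C/N0-based indicator
    noise = rng.normal(0, 0.1, n_meas)
    noise[:2] += np.array([5.0, -4.0])               # two large blunders ...
    quality[:2] = 0.1                                # ... on low-quality channels
    z = H @ x_true + noise

    prob = quality / quality.sum()                   # non-uniform sampling probabilities
    B = 2000
    solutions = np.empty((B, n_par))
    for b in range(B):
        idx = rng.choice(n_meas, size=n_meas, replace=True, p=prob)
        sol, *_ = np.linalg.lstsq(H[idx], z[idx], rcond=None)
        solutions[b] = sol

    x_boot = np.median(solutions, axis=0)            # summary of the empirical distribution
    x_ls, *_ = np.linalg.lstsq(H, z, rcond=None)
    print("plain LS error    :", np.linalg.norm(x_ls - x_true).round(3))
    print("weighted bootstrap:", np.linalg.norm(x_boot - x_true).round(3))
    ```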

  14. Resampling algorithm for the Spatial Infrared Imaging Telescope (SPIRIT III) Fourier transform spectrometer

    NASA Astrophysics Data System (ADS)

    Sargent, Steven D.; Greenman, Mark E.; Hansen, Scott M.

    1998-11-01

    The Spatial Infrared Imaging Telescope (SPIRIT III) is the primary sensor aboard the Midcourse Space Experiment (MSX), which was launched 24 April 1996. SPIRIT III included a Fourier transform spectrometer that collected terrestrial and celestial background phenomenology data for the Ballistic Missile Defense Organization (BMDO). This spectrometer used a helium-neon reference laser to measure the optical path difference (OPD) in the spectrometer and to command the analog-to-digital conversion of the infrared detector signals, thereby ensuring the data were sampled at precise increments of OPD. Spectrometer data must be sampled at accurate increments of OPD to optimize the spectral resolution and spectral position of the transformed spectra. Unfortunately, a failure in the power supply preregulator at the MSX spacecraft/SPIRIT III interface early in the mission forced the spectrometer to be operated without the reference laser until a failure investigation was completed. During this time data were collected in a backup mode that used an electronic clock to sample the data. These data were sampled evenly in time, and because the scan velocity varied, at nonuniform increments of OPD. The scan velocity profile depended on scan direction and scan length, and varied over time, greatly degrading the spectral resolution and spectral and radiometric accuracy of the measurements. The Convert software used to process the SPIRIT III data was modified to resample the clock-sampled data at even increments of OPD, using scan velocity profiles determined from ground and on-orbit data, greatly improving the quality of the clock-sampled data. This paper presents the resampling algorithm, the characterization of the scan velocity profiles, and the results of applying the resampling algorithm to on-orbit data.

  15. The efficiency of average linkage hierarchical clustering algorithm associated multi-scale bootstrap resampling in identifying homogeneous precipitation catchments

    NASA Astrophysics Data System (ADS)

    Chuan, Zun Liang; Ismail, Noriszura; Shinyie, Wendy Ling; Lit Ken, Tan; Fam, Soo-Fen; Senawi, Azlyna; Yusoff, Wan Nur Syahidah Wan

    2018-04-01

    Due to the limited availability of historical precipitation records, agglomerative hierarchical clustering algorithms are widely used to extrapolate information from gauged to ungauged precipitation catchments, yielding a more reliable projection of extreme hydro-meteorological events such as extreme precipitation events. However, accurately identifying the optimum number of homogeneous precipitation catchments from the dendrogram produced by agglomerative hierarchical algorithms is very subjective. The main objective of this study is to propose an efficient regionalized algorithm to identify homogeneous precipitation catchments for non-stationary precipitation time series. The homogeneous precipitation catchments are identified using the average linkage hierarchical clustering algorithm associated with multi-scale bootstrap resampling, with the uncentered correlation coefficient as the similarity measure. The regionalized homogeneous precipitation catchments are consolidated using the k-sample Anderson-Darling non-parametric test. The analysis shows that the proposed regionalized algorithm performs better than the agglomerative hierarchical clustering algorithms proposed in previous studies.
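
    A simplified stand-in for the procedure above: average-linkage clustering of station series under an uncentered-correlation (cosine) dissimilarity, with an ordinary (not multiscale) bootstrap over time points used to gauge the stability of the resulting groups; the data, the number of clusters, and the Rand-index stability summary are illustrative choices.

    ```python
    # Average-linkage clustering with an ordinary bootstrap stability check.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist
    from sklearn.metrics import adjusted_rand_score

    rng = np.random.default_rng(12)

    n_stations, n_months = 20, 240
    regions = np.repeat([0, 1, 2], [7, 7, 6])
    signal = rng.normal(size=(3, n_months))
    precip = signal[regions] + rng.normal(0, 0.8, size=(n_stations, n_months))

    def cluster_labels(data, k):
        d = pdist(data, metric="cosine")          # cosine distance = 1 - uncentered correlation
        Z = linkage(d, method="average")
        return fcluster(Z, t=k, criterion="maxclust")

    k = 3
    base = cluster_labels(precip, k)

    B = 200
    ari = np.empty(B)
    for b in range(B):
        cols = rng.integers(0, n_months, size=n_months)       # bootstrap the time points
        ari[b] = adjusted_rand_score(base, cluster_labels(precip[:, cols], k))

    print(f"mean bootstrap ARI against the full-data partition: {ari.mean():.2f}")
    ```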

  16. Testing variance components by two jackknife methods

    USDA-ARS?s Scientific Manuscript database

    The jackknife method, a resampling technique, has been widely used for statistical tests for years. The pseudo-value based jackknife method (defined as the pseudo jackknife method) is commonly used to reduce the bias of an estimate; however, sometimes it could result in large variation for an estimate a...
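
    A minimal sketch of the pseudo-value jackknife mentioned above, applied to a deliberately biased variance estimator: leave-one-out estimates are converted to pseudo-values, whose mean gives a bias-reduced estimate and whose spread gives a standard error usable in an approximate t-test.

    ```python
    # Pseudo-value jackknife: bias-reduced estimate and standard error.
    import numpy as np

    rng = np.random.default_rng(13)
    x = rng.normal(loc=0.0, scale=2.0, size=30)

    def theta(sample):
        return sample.var(ddof=0)        # deliberately biased variance estimator

    n = x.size
    theta_full = theta(x)
    theta_loo = np.array([theta(np.delete(x, i)) for i in range(n)])
    pseudo = n * theta_full - (n - 1) * theta_loo      # pseudo-values

    jack_estimate = pseudo.mean()                      # bias-reduced estimate
    jack_se = pseudo.std(ddof=1) / np.sqrt(n)          # SE for an approximate t-test
    print(f"plain estimate {theta_full:.3f}, jackknife estimate {jack_estimate:.3f}, SE {jack_se:.3f}")
    ```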

  17. Resampling approach for anomalous change detection

    NASA Astrophysics Data System (ADS)

    Theiler, James; Perkins, Simon

    2007-04-01

    We investigate the problem of identifying pixels in pairs of co-registered images that correspond to real changes on the ground. Changes that are due to environmental differences (illumination, atmospheric distortion, etc.) or sensor differences (focus, contrast, etc.) will be widespread throughout the image, and the aim is to avoid these changes in favor of changes that occur in only one or a few pixels. Formal outlier detection schemes (such as the one-class support vector machine) can identify rare occurrences, but will be confounded by pixels that are "equally rare" in both images: they may be anomalous, but they are not changes. We describe a resampling scheme we have developed that formally addresses both of these issues, and reduces the problem to a binary classification, a problem for which a large variety of machine learning tools have been developed. In principle, the effects of misregistration will manifest themselves as pervasive changes, and our method will be robust against them - but in practice, misregistration remains a serious issue.

  18. Testing non-inferiority of a new treatment in three-arm clinical trials with binary endpoints.

    PubMed

    Tang, Nian-Sheng; Yu, Bin; Tang, Man-Lai

    2014-12-18

    A two-arm non-inferiority trial without a placebo is usually adopted to demonstrate that an experimental treatment is not worse than a reference treatment by a small pre-specified non-inferiority margin due to ethical concerns. Selection of the non-inferiority margin and establishment of assay sensitivity are two major issues in the design, analysis and interpretation for two-arm non-inferiority trials. Alternatively, a three-arm non-inferiority clinical trial including a placebo is usually conducted to assess the assay sensitivity and internal validity of a trial. Recently, some large-sample approaches have been developed to assess the non-inferiority of a new treatment based on the three-arm trial design. However, these methods behave badly with small sample sizes in the three arms. This manuscript aims to develop some reliable small-sample methods to test three-arm non-inferiority. Saddlepoint approximation, exact and approximate unconditional, and bootstrap-resampling methods are developed to calculate p-values of the Wald-type, score and likelihood ratio tests. Simulation studies are conducted to evaluate their performance in terms of type I error rate and power. Our empirical results show that the saddlepoint approximation method generally behaves better than the asymptotic method based on the Wald-type test statistic. For small sample sizes, approximate unconditional and bootstrap-resampling methods based on the score test statistic perform better in the sense that their corresponding type I error rates are generally closer to the prespecified nominal level than those of other test procedures. Both approximate unconditional and bootstrap-resampling test procedures based on the score test statistic are generally recommended for three-arm non-inferiority trials with binary outcomes.

  19. The Bootstrap, the Jackknife, and the Randomization Test: A Sampling Taxonomy.

    PubMed

    Rodgers, J L

    1999-10-01

    A simple sampling taxonomy is defined that shows the differences between and relationships among the bootstrap, the jackknife, and the randomization test. Each method has as its goal the creation of an empirical sampling distribution that can be used to test statistical hypotheses, estimate standard errors, and/or create confidence intervals. Distinctions between the methods can be made based on the sampling approach (with replacement versus without replacement) and the sample size (replacing the whole original sample versus replacing a subset of the original sample). The taxonomy is useful for teaching the goals and purposes of resampling schemes. An extension of the taxonomy implies other possible resampling approaches that have not previously been considered. Univariate and multivariate examples are presented.
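
    The taxonomy's two axes (with versus without replacement; whole sample versus subset) can be seen side by side in a short sketch applied to a difference in group means; this is a generic illustration, not material from the article.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    a = rng.normal(0.0, 1.0, size=25)
    b = rng.normal(0.5, 1.0, size=25)
    observed = a.mean() - b.mean()

    # Bootstrap: sample WITH replacement, replacing the whole original sample
    boot = np.array([rng.choice(a, a.size).mean() - rng.choice(b, b.size).mean()
                     for _ in range(5000)])
    se_boot = boot.std(ddof=1)

    # Jackknife: WITHOUT replacement, keeping all but one observation at a time
    loo = np.array([np.delete(a, i).mean() - b.mean() for i in range(a.size)] +
                   [a.mean() - np.delete(b, j).mean() for j in range(b.size)])
    n = loo.size
    se_jack = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

    # Randomization test: permute group labels (without replacement, whole sample)
    pooled = np.concatenate([a, b])
    perm = np.empty(5000)
    for k in range(perm.size):
        shuffled = rng.permutation(pooled)
        perm[k] = shuffled[:a.size].mean() - shuffled[a.size:].mean()
    p_value = np.mean(np.abs(perm) >= abs(observed))

    print(f"diff {observed:.3f}  bootstrap SE {se_boot:.3f}  "
          f"jackknife SE {se_jack:.3f}  permutation p {p_value:.4f}")
    ```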

  20. Significance levels for studies with correlated test statistics.

    PubMed

    Shi, Jianxin; Levinson, Douglas F; Whittemore, Alice S

    2008-07-01

    When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.
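
    The permutation procedure the authors start from, before introducing their conditioning on the spread of the histogram, is the familiar max-statistic permutation test; a minimal sketch (illustrative only, using two-sample t statistics) is:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n_per_group, n_features = 20, 1000
    x = rng.normal(size=(n_per_group, n_features))   # group 1
    y = rng.normal(size=(n_per_group, n_features))   # group 2
    data = np.vstack([x, y])
    labels = np.r_[np.zeros(n_per_group, dtype=bool), np.ones(n_per_group, dtype=bool)]

    def max_abs_t(d, lab):
        t, _ = stats.ttest_ind(d[~lab], d[lab], axis=0)
        return np.max(np.abs(t))

    observed = max_abs_t(data, labels)

    # Permute sample units to simulate the null distribution of the largest statistic
    n_perm = 2000
    null_max = np.empty(n_perm)
    for k in range(n_perm):
        null_max[k] = max_abs_t(data, rng.permutation(labels))

    # Overall (family-wise) significance level of the test statistic of largest magnitude
    p_global = (1 + np.sum(null_max >= observed)) / (n_perm + 1)
    print(f"max |t| = {observed:.2f}, permutation-based global p = {p_global:.3f}")
    ```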

  1. GEE-based SNP set association test for continuous and discrete traits in family-based association studies.

    PubMed

    Wang, Xuefeng; Lee, Seunggeun; Zhu, Xiaofeng; Redline, Susan; Lin, Xihong

    2013-12-01

    Family-based genetic association studies of related individuals provide opportunities to detect genetic variants that complement studies of unrelated individuals. Most statistical methods for family association studies for common variants are single marker based, which test one SNP at a time. In this paper, we consider testing the effect of an SNP set, e.g., SNPs in a gene, in family studies, for both continuous and discrete traits. Specifically, we propose a generalized estimating equations (GEE)-based kernel association test, a variance component based testing method, to test for the association between a phenotype and multiple variants in an SNP set jointly using family samples. The proposed approach allows for both continuous and discrete traits, where the correlation among family members is taken into account through the use of an empirical covariance estimator. We derive the theoretical distribution of the proposed statistic under the null and develop analytical methods to calculate the P-values. We also propose an efficient resampling method for correcting for small sample size bias in family studies. The proposed method allows for easily incorporating covariates and SNP-SNP interactions. Simulation studies show that the proposed method properly controls for type I error rates under both random and ascertained sampling schemes in family studies. We demonstrate through simulation studies that our approach has superior performance for association mapping compared to the single marker based minimum P-value GEE test for an SNP-set effect over a range of scenarios. We illustrate the application of the proposed method using data from the Cleveland Family GWAS Study.

  2. Groundwater-quality data in seven GAMA study units: results from initial sampling, 2004-2005, and resampling, 2007-2008, of wells: California GAMA Program Priority Basin Project

    USGS Publications Warehouse

    Kent, Robert; Belitz, Kenneth; Fram, Miranda S.

    2014-01-01

    The Priority Basin Project (PBP) of the Groundwater Ambient Monitoring and Assessment (GAMA) Program was developed in response to the Groundwater Quality Monitoring Act of 2001 and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The GAMA-PBP began sampling, primarily of public-supply wells, in May 2004. By the end of February 2006, seven (of what would eventually be 35) study units had been sampled over a wide area of the State. Selected wells in these first seven study units were resampled for water quality from August 2007 to November 2008 as part of an assessment of temporal trends in water quality by the GAMA-PBP. The initial sampling was designed to provide a spatially unbiased assessment of the quality of raw groundwater used for public water supplies within the seven study units. In the seven study units, 462 wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study area. Wells selected this way are referred to as grid wells or status wells. Approximately 3 years after the initial sampling, 55 of these previously sampled status wells (approximately 10 percent in each study unit) were randomly selected for resampling. The seven resampled study units, the total number of status wells sampled for each study unit, and the number of these wells resampled for trends are as follows, in chronological order of sampling: San Diego Drainages (53 status wells, 7 trend wells), North San Francisco Bay (84, 10), Northern San Joaquin Basin (51, 5), Southern Sacramento Valley (67, 7), San Fernando–San Gabriel (35, 6), Monterey Bay and Salinas Valley Basins (91, 11), and Southeast San Joaquin Valley (83, 9). The groundwater samples were analyzed for a large number of synthetic organic constituents (volatile organic compounds [VOCs], pesticides, and pesticide degradates), constituents of special interest (perchlorate, N

  3. Significance of the impact of motion compensation on the variability of PET image features

    NASA Astrophysics Data System (ADS)

    Carles, M.; Bach, T.; Torres-Espallardo, I.; Baltas, D.; Nestle, U.; Martí-Bonmatí, L.

    2018-03-01

    In lung cancer, quantification by positron emission tomography/computed tomography (PET/CT) imaging presents challenges due to respiratory movement. Our primary aim was to study the impact of motion compensation implied by retrospectively gated (4D)-PET/CT on the variability of PET quantitative parameters. Its significance was evaluated by comparison with the variability due to (i) the voxel size in image reconstruction and (ii) the voxel size in image post-resampling. The method employed for feature extraction was chosen based on the analysis of (i) the effect of discretization of the standardized uptake value (SUV) on complementarity between texture features (TF) and conventional indices, (ii) the impact of the segmentation method on the variability of image features, and (iii) the variability of image features across the time-frame of 4D-PET. Thirty-one PET-features were involved. Three SUV discretization methods were applied: a constant width (SUV resolution) of the resampling bin (method RW), a constant number of bins (method RN) and RN on the image obtained after histogram equalization (method EqRN). The segmentation approaches evaluated were 40% of SUVmax and the contrast oriented algorithm (COA). Parameters derived from 4D-PET images were compared with values derived from the PET image obtained for (i) the static protocol used in our clinical routine (3D) and (ii) the 3D image post-resampled to the voxel size of the 4D image and PET image derived after modifying the reconstruction of the 3D image to comprise the voxel size of the 4D image. Results showed that TF complementarity with conventional indices was sensitive to the SUV discretization method. In the comparison of COA and 40% contours, despite the values not being interchangeable, all image features showed strong linear correlations (r > 0.91, p ≪ 0.001). Across the time-frames of 4D-PET, all image features followed a normal distribution in most patients. For our patient cohort, the

  4. A PLL-based resampling technique for vibration analysis in variable-speed wind turbines with PMSG: A bearing fault case

    NASA Astrophysics Data System (ADS)

    Pezzani, Carlos M.; Bossio, José M.; Castellino, Ariel M.; Bossio, Guillermo R.; De Angelo, Cristian H.

    2017-02-01

    Condition monitoring in permanent magnet synchronous machines has gained interest due to their increasing use in applications such as electric traction and power generation. Particularly in wind power generation, non-invasive condition monitoring techniques are of great importance. Usually, in such applications the access to the generator is complex and costly, while unexpected breakdowns result in high repair costs. This paper presents a technique that allows vibration analysis to be used for bearing fault detection in permanent magnet synchronous generators used in wind turbines. Given that in wind power applications the generator rotational speed may vary during normal operation, it is necessary to use special sampling techniques to apply spectral analysis of mechanical vibrations. In this work, a resampling technique based on order tracking without measuring the rotor position is proposed. To synchronize sampling with rotor position, an estimation of the rotor position obtained from the angle of the voltage vector is proposed. This angle is obtained from a phase-locked loop synchronized with the generator voltages. The proposed strategy is validated by laboratory experimental results obtained from a permanent magnet synchronous generator. Results with single point defects in the outer race of a bearing under variable speed and load conditions are presented.
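
    The order-tracking step itself reduces to interpolating the vibration signal at uniform increments of shaft angle. In the sketch below a synthetic angle profile stands in for the PLL-based rotor-angle estimate described in the abstract; all signal parameters are illustrative.

    ```python
    import numpy as np

    fs = 10_000.0                              # sampling frequency [Hz]
    t = np.arange(0, 5.0, 1.0 / fs)            # 5 s of data

    # Synthetic variable speed: 20 Hz ramping to 30 Hz
    f_rot = 20.0 + 2.0 * t                     # rotational frequency [Hz]
    theta = 2 * np.pi * np.cumsum(f_rot) / fs  # rotor angle [rad], stand-in for a PLL estimate

    # Vibration locked to the 5th shaft order, plus noise
    x = np.sin(5 * theta) + 0.2 * np.random.default_rng(4).normal(size=t.size)

    # Angular resampling: interpolate x at uniform angle increments
    samples_per_rev = 256
    theta_uniform = np.arange(theta[0], theta[-1], 2 * np.pi / samples_per_rev)
    x_angular = np.interp(theta_uniform, theta, x)

    # In the order spectrum the 5th order now appears as a sharp line, whereas the
    # time-domain spectrum of x would be smeared by the speed ramp.
    spectrum = np.abs(np.fft.rfft(x_angular * np.hanning(x_angular.size)))
    orders = np.fft.rfftfreq(x_angular.size, d=1.0 / samples_per_rev)
    print("dominant order:", orders[np.argmax(spectrum[1:]) + 1])
    ```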

  5. Multispectral Resampling of Seagrass Species Spectra: WorldView-2, Quickbird, Sentinel-2A, ASTER VNIR, and Landsat 8 OLI

    NASA Astrophysics Data System (ADS)

    Wicaksono, Pramaditya; Salivian Wisnu Kumara, Ignatius; Kamal, Muhammad; Afif Fauzan, Muhammad; Zhafarina, Zhafirah; Agus Nurswantoro, Dwi; Noviaris Yogyantoro, Rifka

    2017-12-01

    Although spectrally different, seagrass species may not be able to be mapped from multispectral remote sensing images due to the limitation of their spectral resolution. Therefore, it is important to quantitatively assess the possibility of mapping seagrass species using multispectral images by resampling seagrass species spectra to multispectral bands. Seagrass species spectra were measured on harvested seagrass leaves. Spectral resolution of multispectral images used in this research was adopted from WorldView-2, Quickbird, Sentinel-2A, ASTER VNIR, and Landsat 8 OLI. These images are widely available and can be a good representative and baseline for previous or future remote sensing images. Seagrass species considered in this research are Enhalus acoroides (Ea), Thalassodendron ciliatum (Tc), Thalassia hemprichii (Th), Cymodocea rotundata (Cr), Cymodocea serrulata (Cs), Halodule uninervis (Hu), Halodule pinifolia (Hp), Syringodium isoetifolium (Si), Halophila ovalis (Ho), and Halophila minor (Hm). Multispectral resampling analysis indicates that the resampled spectra exhibit a similar shape and pattern to the original spectra but are less precise, and they lose the unique absorption features of seagrass species. Relying on spectral bands alone, multispectral images are not effective in mapping these seagrass species individually, which is shown by the poor and inconsistent results of the Spectral Angle Mapper (SAM) classification technique in classifying seagrass species using seagrass species spectra as pure endmembers. Only Sentinel-2A produced an acceptable classification result using SAM.
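
    Band resampling of a measured spectrum can be approximated by averaging reflectance over each band's wavelength range; real sensors use their published relative spectral response functions, which are not reproduced here, and the band limits below are illustrative placeholders rather than the specifications of the sensors named above.

    ```python
    import numpy as np

    # Hypothetical field spectrum: reflectance sampled every 1 nm from 400-900 nm
    wavelengths = np.arange(400, 901)
    rng = np.random.default_rng(5)
    reflectance = 0.05 + 0.02 * np.sin(wavelengths / 40.0) + 0.005 * rng.normal(size=wavelengths.size)

    # Illustrative visible/NIR band limits (nm) for a generic multispectral sensor
    bands = {
        "blue":  (450, 520),
        "green": (530, 590),
        "red":   (640, 670),
        "nir":   (850, 880),
    }

    def resample_to_bands(wl, refl, band_limits):
        """Boxcar resampling: mean reflectance within each band's wavelength range."""
        out = {}
        for name, (lo, hi) in band_limits.items():
            mask = (wl >= lo) & (wl <= hi)
            out[name] = refl[mask].mean()
        return out

    print(resample_to_bands(wavelengths, reflectance, bands))
    ```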

  6. Methods of Soil Resampling to Monitor Changes in the Chemical Concentrations of Forest Soils.

    PubMed

    Lawrence, Gregory B; Fernandez, Ivan J; Hazlett, Paul W; Bailey, Scott W; Ross, Donald S; Villars, Thomas R; Quintana, Angelica; Ouimet, Rock; McHale, Michael R; Johnson, Chris E; Briggs, Russell D; Colter, Robert A; Siemion, Jason; Bartlett, Olivia L; Vargas, Olga; Antidormi, Michael R; Koppers, Mary M

    2016-11-25

    Recent soils research has shown that important chemical soil characteristics can change in less than a decade, often the result of broad environmental changes. Repeated sampling to monitor these changes in forest soils is a relatively new practice that is not well documented in the literature and has only recently been broadly embraced by the scientific community. The objective of this protocol is therefore to synthesize the latest information on methods of soil resampling in a format that can be used to design and implement a soil monitoring program. Successful monitoring of forest soils requires that a study unit be defined within an area of forested land that can be characterized with replicate sampling locations. A resampling interval of 5 years is recommended, but if monitoring is done to evaluate a specific environmental driver, the rate of change expected in that driver should be taken into consideration. Here, we show that the sampling of the profile can be done by horizon where boundaries can be clearly identified and horizons are sufficiently thick to remove soil without contamination from horizons above or below. Otherwise, sampling can be done by depth interval. Archiving of sample for future reanalysis is a key step in avoiding analytical bias and providing the opportunity for additional analyses as new questions arise.

  7. Methods of soil resampling to monitor changes in the chemical concentrations of forest soils

    USGS Publications Warehouse

    Lawrence, Gregory B.; Fernandez, Ivan J.; Hazlett, Paul W.; Bailey, Scott W.; Ross, Donald S.; Villars, Thomas R.; Quintana, Angelica; Ouimet, Rock; McHale, Michael; Johnson, Chris E.; Briggs, Russell D.; Colter, Robert A.; Siemion, Jason; Bartlett, Olivia L.; Vargas, Olga; Antidormi, Michael; Koppers, Mary Margaret

    2016-01-01

    Recent soils research has shown that important chemical soil characteristics can change in less than a decade, often the result of broad environmental changes. Repeated sampling to monitor these changes in forest soils is a relatively new practice that is not well documented in the literature and has only recently been broadly embraced by the scientific community. The objective of this protocol is therefore to synthesize the latest information on methods of soil resampling in a format that can be used to design and implement a soil monitoring program. Successful monitoring of forest soils requires that a study unit be defined within an area of forested land that can be characterized with replicate sampling locations. A resampling interval of 5 years is recommended, but if monitoring is done to evaluate a specific environmental driver, the rate of change expected in that driver should be taken into consideration. Here, we show that the sampling of the profile can be done by horizon where boundaries can be clearly identified and horizons are sufficiently thick to remove soil without contamination from horizons above or below. Otherwise, sampling can be done by depth interval. Archiving of sample for future reanalysis is a key step in avoiding analytical bias and providing the opportunity for additional analyses as new questions arise.

  8. A shift from significance test to hypothesis test through power analysis in medical research.

    PubMed

    Singh, G

    2006-01-01

    Until recently, the medical research literature exhibited substantial dominance of Fisher's significance test approach to statistical inference, which concentrates on the probability of type I error, over the Neyman-Pearson hypothesis test, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomises results into significant or non-significant on the basis of a P value. The Neyman-Pearson approach speaks of acceptance or rejection of the null hypothesis. Built on the same theory, the two approaches address the same objective and reach conclusions in their own ways. Advances in computing techniques and the availability of statistical software have led to the increasing application of power calculations in medical research, and thereby to reporting the results of significance tests in the light of the power of the test as well. The significance test approach, when it incorporates power analysis, contains the essence of the hypothesis test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to the Neyman-Pearson hypothesis test procedure.
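
    The power calculations mentioned here are routine in current software; for example, with statsmodels the required sample size or the achieved power of a two-sample t-test can be obtained directly (the effect size, alpha and power below are arbitrary illustrative values).

    ```python
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Sample size per group to detect a standardized effect of 0.5
    # with a two-sided alpha of 0.05 and 80% power.
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                       alternative="two-sided")
    print(f"required n per group: {n_per_group:.1f}")

    # Conversely, the achieved power for a fixed sample size of 30 per group
    power = analysis.power(effect_size=0.5, nobs1=30, alpha=0.05, ratio=1.0)
    print(f"power with n=30 per group: {power:.2f}")
    ```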

  9. Use of a (137)Cs re-sampling technique to investigate temporal changes in soil erosion and sediment mobilisation for a small forested catchment in southern Italy.

    PubMed

    Porto, Paolo; Walling, Des E; Alewell, Christine; Callegari, Giovanni; Mabit, Lionel; Mallimo, Nicola; Meusburger, Katrin; Zehringer, Markus

    2014-12-01

    Soil erosion and both its on-site and off-site impacts are increasingly seen as a serious environmental problem across the world. The need for an improved evidence base on soil loss and soil redistribution rates has directed attention to the use of fallout radionuclides, and particularly (137)Cs, for documenting soil redistribution rates. This approach possesses important advantages over more traditional means of documenting soil erosion and soil redistribution. However, one key limitation of the approach is the time-averaged or lumped nature of the estimated erosion rates. In nearly all cases, these will relate to the period extending from the main period of bomb fallout to the time of sampling. Increasing concern for the impact of global change, particularly that related to changing land use and climate change, has frequently directed attention to the need to document changes in soil redistribution rates within this period. Re-sampling techniques, which should be distinguished from repeat-sampling techniques, have the potential to meet this requirement. As an example, the use of a re-sampling technique to derive estimates of the mean annual net soil loss from a small (1.38 ha) forested catchment in southern Italy is reported. The catchment was originally sampled in 1998 and samples were collected from points very close to the original sampling points again in 2013. This made it possible to compare the estimate of mean annual erosion for the period 1954-1998 with that for the period 1999-2013. The availability of measurements of sediment yield from the catchment for parts of the overall period made it possible to compare the results provided by the (137)Cs re-sampling study with the estimates of sediment yield for the same periods. In order to compare the estimates of soil loss and sediment yield for the two different periods, it was necessary to establish the uncertainty associated with the individual estimates. In the absence of a generally accepted procedure

  10. Community level patterns in diverse systems: A case study of litter fauna in a Mexican pine-oak forest using higher taxa surrogates and re-sampling methods

    NASA Astrophysics Data System (ADS)

    Moreno, Claudia E.; Guevara, Roger; Sánchez-Rojas, Gerardo; Téllez, Dianeis; Verdú, José R.

    2008-01-01

    Environmental assessment at the community level in highly diverse ecosystems is limited by taxonomic constraints and statistical methods requiring true replicates. Our objective was to show how diverse systems can be studied at the community level using higher taxa as biodiversity surrogates, and re-sampling methods to allow comparisons. To illustrate this we compared the abundance, richness, evenness and diversity of the litter fauna in a pine-oak forest in central Mexico among seasons, sites and collecting methods. We also assessed changes in the abundance of trophic guilds and evaluated the relationships between community parameters and litter attributes. With the direct search method we observed differences in the rate of taxa accumulation between sites. Bootstrap analysis showed that abundance varied significantly between seasons and sampling methods, but not between sites. In contrast, diversity and evenness were significantly higher at the managed than at the non-managed site. Tree regression models show that abundance varied mainly between seasons, whereas taxa richness was affected by litter attributes (composition and moisture content). The abundance of trophic guilds varied among methods and seasons, but overall we found that parasitoids, predators and detritivores decreased under management. Therefore, although our results suggest that management has positive effects on the richness and diversity of litter fauna, the analysis of trophic guilds revealed a contrasting story. Our results indicate that functional groups and re-sampling methods may be used as tools for describing community patterns in highly diverse systems. Also, the higher taxa surrogacy could be seen as a preliminary approach when it is not possible to identify the specimens at a low taxonomic level in a reasonable period of time and in a context of limited financial resources, but further studies are needed to test whether the results are specific to a system or whether they are general

  11. A SIGNIFICANCE TEST FOR THE LASSO

    PubMed Central

    Lockhart, Richard; Taylor, Jonathan; Tibshirani, Ryan J.; Tibshirani, Robert

    2014-01-01

    In the sparse linear regression setting, we consider testing the significance of the predictor variable that enters the current lasso model, in the sequence of models visited along the lasso solution path. We propose a simple test statistic based on lasso fitted values, called the covariance test statistic, and show that when the true model is linear, this statistic has an Exp(1) asymptotic distribution under the null hypothesis (the null being that all truly active variables are contained in the current lasso model). Our proof of this result for the special case of the first predictor to enter the model (i.e., testing for a single significant predictor variable against the global null) requires only weak assumptions on the predictor matrix X. On the other hand, our proof for a general step in the lasso path places further technical assumptions on X and the generative model, but still allows for the important high-dimensional case p > n, and does not necessarily require that the current lasso model achieves perfect recovery of the truly active variables. Of course, for testing the significance of an additional variable between two nested linear models, one typically uses the chi-squared test, comparing the drop in residual sum of squares (RSS) to a χ₁² distribution. But when this additional variable is not fixed, and has been chosen adaptively or greedily, this test is no longer appropriate: adaptivity makes the drop in RSS stochastically much larger than χ₁² under the null hypothesis. Our analysis explicitly accounts for adaptivity, as it must, since the lasso builds an adaptive sequence of linear models as the tuning parameter λ decreases. In this analysis, shrinkage plays a key role: though additional variables are chosen adaptively, the coefficients of lasso active variables are shrunken due to the ℓ1 penalty. Therefore, the test statistic (which is based on lasso fitted values) is in a sense balanced by these two opposing properties—adaptivity and

  12. Speckle reduction in digital holography with resampling ring masks

    NASA Astrophysics Data System (ADS)

    Zhang, Wenhui; Cao, Liangcai; Jin, Guofan

    2018-01-01

    One-shot digital holographic imaging has the advantages of high stability and low temporal cost. However, the reconstruction is affected by speckle noise. A resampling ring-mask method in the spectrum domain is proposed for speckle reduction. The useful spectrum of one hologram is divided into several sub-spectra by ring masks. In the reconstruction, the angular spectrum transform, which involves no approximation, is applied to guarantee calculation accuracy. N reconstructed amplitude images are calculated from the corresponding sub-spectra. Thanks to the speckle's random distribution, superimposing these N uncorrelated amplitude images would lead to a final reconstructed image with lower speckle noise. Normalized relative standard deviation values of the reconstructed image are used to evaluate the reduction of speckle. The effect of the method on the spatial resolution of the reconstructed image is also quantitatively evaluated. Experimental and simulation results prove the feasibility and effectiveness of the proposed method.
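
    The masking-and-averaging principle can be sketched as follows; note that the back-propagation step is reduced to a plain inverse FFT here rather than the angular spectrum propagation used in the paper, and the input array is placeholder data, so this illustrates only the splitting of the spectrum into ring-shaped sub-spectra and the incoherent averaging of the resulting amplitudes.

    ```python
    import numpy as np

    def ring_masks(shape, n_rings):
        """Concentric ring masks covering the spectrum of a hologram."""
        ny, nx = shape
        fy = np.fft.fftfreq(ny)[:, None]
        fx = np.fft.fftfreq(nx)[None, :]
        r = np.sqrt(fx**2 + fy**2)
        edges = np.linspace(0, r.max() + 1e-12, n_rings + 1)
        return [(r >= lo) & (r < hi) for lo, hi in zip(edges[:-1], edges[1:])]

    def ring_mask_reconstruction(hologram, n_rings=4):
        """Average amplitude reconstructions obtained from ring-masked sub-spectra."""
        spectrum = np.fft.fft2(hologram)
        amplitudes = []
        for mask in ring_masks(hologram.shape, n_rings):
            sub = np.fft.ifft2(spectrum * mask)  # stand-in for angular spectrum propagation
            amplitudes.append(np.abs(sub))
        # Superimposing the (largely uncorrelated) amplitude images lowers speckle contrast
        return np.mean(amplitudes, axis=0)

    rng = np.random.default_rng(6)
    hologram = rng.normal(size=(256, 256))       # placeholder data, not a real hologram
    img = ring_mask_reconstruction(hologram, n_rings=4)
    print(img.shape, img.mean())
    ```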

  13. Methods of Soil Resampling to Monitor Changes in the Chemical Concentrations of Forest Soils

    PubMed Central

    Lawrence, Gregory B.; Fernandez, Ivan J.; Hazlett, Paul W.; Bailey, Scott W.; Ross, Donald S.; Villars, Thomas R.; Quintana, Angelica; Ouimet, Rock; McHale, Michael R.; Johnson, Chris E.; Briggs, Russell D.; Colter, Robert A.; Siemion, Jason; Bartlett, Olivia L.; Vargas, Olga; Antidormi, Michael R.; Koppers, Mary M.

    2016-01-01

    Recent soils research has shown that important chemical soil characteristics can change in less than a decade, often the result of broad environmental changes. Repeated sampling to monitor these changes in forest soils is a relatively new practice that is not well documented in the literature and has only recently been broadly embraced by the scientific community. The objective of this protocol is therefore to synthesize the latest information on methods of soil resampling in a format that can be used to design and implement a soil monitoring program. Successful monitoring of forest soils requires that a study unit be defined within an area of forested land that can be characterized with replicate sampling locations. A resampling interval of 5 years is recommended, but if monitoring is done to evaluate a specific environmental driver, the rate of change expected in that driver should be taken into consideration. Here, we show that the sampling of the profile can be done by horizon where boundaries can be clearly identified and horizons are sufficiently thick to remove soil without contamination from horizons above or below. Otherwise, sampling can be done by depth interval. Archiving of sample for future reanalysis is a key step in avoiding analytical bias and providing the opportunity for additional analyses as new questions arise. PMID:27911419

  14. Resampling procedures to identify important SNPs using a consensus approach.

    PubMed

    Pardy, Christopher; Motyer, Allan; Wilson, Susan

    2011-11-29

    Our goal is to identify common single-nucleotide polymorphisms (SNPs) (minor allele frequency > 1%) that add predictive accuracy above that gained by knowledge of easily measured clinical variables. We take an algorithmic approach to predict each phenotypic variable using a combination of phenotypic and genotypic predictors. We perform our procedure on the first simulated replicate and then validate against the others. Our procedure performs well when predicting Q1 but is less successful for the other outcomes. We use resampling procedures where possible to guard against false positives and to improve generalizability. The approach is based on finding a consensus regarding important SNPs by applying random forests and the least absolute shrinkage and selection operator (LASSO) on multiple subsamples. Random forests are used first to discard unimportant predictors, narrowing our focus to roughly 100 important SNPs. A cross-validation LASSO is then used to further select variables. We combine these procedures to guarantee that cross-validation can be used to choose a shrinkage parameter for the LASSO. If the clinical variables were unavailable, this prefiltering step would be essential. We perform the SNP-based analyses simultaneously rather than one at a time to estimate SNP effects in the presence of other causal variants. We analyzed the first simulated replicate of Genetic Analysis Workshop 17 without knowledge of the true model. Post-conference knowledge of the simulation parameters allowed us to investigate the limitations of our approach. We found that many of the false positives we identified were substantially correlated with genuine causal SNPs.
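
    The two-stage consensus procedure (random forest importance filtering followed by a cross-validated LASSO, repeated over subsamples) can be sketched generically with scikit-learn; the data, thresholds and subsample counts below are illustrative and do not reproduce the Genetic Analysis Workshop 17 analysis.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(7)
    n, p = 300, 500
    X = rng.binomial(2, 0.3, size=(n, p)).astype(float)   # toy SNP genotypes (0/1/2)
    beta = np.zeros(p); beta[:5] = 0.8                     # five causal SNPs
    y = X @ beta + rng.normal(size=n)

    n_subsamples, keep_top = 20, 100
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=int(0.7 * n), replace=False)   # subsample individuals
        # Stage 1: random forest importance to narrow the focus to ~100 SNPs
        rf = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
        rf.fit(X[idx], y[idx])
        top = np.argsort(rf.feature_importances_)[-keep_top:]
        # Stage 2: cross-validated LASSO on the retained SNPs
        lasso = LassoCV(cv=5).fit(X[idx][:, top], y[idx])
        counts[top[lasso.coef_ != 0]] += 1

    # Consensus: SNPs selected in a majority of subsamples
    consensus = np.where(counts >= n_subsamples / 2)[0]
    print("consensus SNPs:", consensus)
    ```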

  15. Correction of the significance level when attempting multiple transformations of an explanatory variable in generalized linear models

    PubMed Central

    2013-01-01

    Background In statistical modeling, finding the most favorable coding for an explanatory quantitative variable involves many tests. This process involves multiple testing problems and requires the correction of the significance level. Methods For each coding, a test on the nullity of the coefficient associated with the new coded variable is computed. The selected coding corresponds to that associated with the largest test statistic (or equivalently the smallest p-value). In the context of the Generalized Linear Model, Liquet and Commenges (Stat Probability Lett, 71:33–38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, has been developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results The simulations we ran in this study showed good performances of the proposed methods. These methods were illustrated using the data from a study of the relationship between cholesterol and dementia. Conclusion The algorithms were implemented using R, and the associated CPMCGLM R package is available on the CRAN. PMID:23758852

  16. OPATs: Omnibus P-value association tests.

    PubMed

    Chen, Chia-Wei; Yang, Hsin-Chou

    2017-07-10

    Combining statistical significances (P-values) from a set of single-locus association tests in genome-wide association studies is a proof-of-principle method for identifying disease-associated genomic segments, functional genes and biological pathways. We review P-value combinations for genome-wide association studies and introduce an integrated analysis tool, Omnibus P-value Association Tests (OPATs), which provides popular analysis methods of P-value combinations. The software OPATs, programmed in R with an R graphical user interface, features a user-friendly interface. In addition to analysis modules for data quality control and single-locus association tests, OPATs provides three types of set-based association test: window-, gene- and biopathway-based association tests. P-value combinations with or without threshold and rank truncation are provided. The significance of a set-based association test is evaluated by using resampling procedures. Performance of the set-based association tests in OPATs has been evaluated by simulation studies and real data analyses. These set-based association tests help boost the statistical power, alleviate the multiple-testing problem, reduce the impact of genetic heterogeneity, increase the replication efficiency of association tests and facilitate the interpretation of association signals by streamlining the testing procedures and integrating the genetic effects of multiple variants in genomic regions of biological relevance. In summary, P-value combinations facilitate the identification of marker sets associated with disease susceptibility and uncover missing heritability in association studies, thereby establishing a foundation for the genetic dissection of complex diseases and traits. OPATs provides an easy-to-use and statistically powerful analysis tool for P-value combinations. OPATs, examples, and a user guide can be downloaded from http://www.stat.sinica.edu.tw/hsinchou/genetics/association/OPATs.htm.
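
    One of the classical combinations reviewed here, Fisher's method, together with a resampling-based assessment of the set-level significance, can be sketched as follows; this is a generic illustration, not the OPATs implementation.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    n, m = 500, 20                                          # subjects, SNPs in the set
    genotypes = rng.binomial(2, 0.3, size=(n, m)).astype(float)
    phenotype = genotypes[:, 0] * 0.3 + rng.normal(size=n)  # one truly associated SNP

    def fisher_statistic(geno, pheno):
        """Fisher's combination of single-locus association p-values."""
        pvals = np.array([stats.pearsonr(geno[:, j], pheno)[1] for j in range(geno.shape[1])])
        return -2.0 * np.sum(np.log(pvals))

    observed = fisher_statistic(genotypes, phenotype)

    # Resampling: permute the phenotype to break all genotype-phenotype association
    n_perm = 1000
    null_stats = np.array([fisher_statistic(genotypes, rng.permutation(phenotype))
                           for _ in range(n_perm)])
    p_set = (1 + np.sum(null_stats >= observed)) / (n_perm + 1)
    print(f"set-based permutation p-value: {p_set:.3f}")
    ```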

  17. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    NASA Astrophysics Data System (ADS)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-06-01

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
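
    The conditioning functionality described here is, per mixture component, standard Gaussian conditioning plus a reweighting of the components by how well they explain the observed values. A minimal numpy sketch of that operation (not the XDGMM or EmpiriciSN code) for an already-fitted mixture:

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    def condition_gmm(weights, means, covs, obs_idx, obs_vals):
        """Condition a Gaussian mixture on observed dimensions obs_idx = obs_vals.

        Returns weights, means and covariances of the conditional mixture
        over the remaining (unobserved) dimensions.
        """
        d = means.shape[1]
        free_idx = np.setdiff1d(np.arange(d), obs_idx)
        new_w, new_mu, new_cov = [], [], []
        for w, mu, cov in zip(weights, means, covs):
            mu_o, mu_f = mu[obs_idx], mu[free_idx]
            S_oo = cov[np.ix_(obs_idx, obs_idx)]
            S_fo = cov[np.ix_(free_idx, obs_idx)]
            S_ff = cov[np.ix_(free_idx, free_idx)]
            gain = S_fo @ np.linalg.inv(S_oo)
            new_mu.append(mu_f + gain @ (obs_vals - mu_o))
            new_cov.append(S_ff - gain @ S_fo.T)
            # Reweight by the marginal likelihood of the observed values
            new_w.append(w * multivariate_normal.pdf(obs_vals, mean=mu_o, cov=S_oo))
        new_w = np.array(new_w)
        return new_w / new_w.sum(), np.array(new_mu), np.array(new_cov)

    # Toy 2-component, 3-dimensional mixture; condition on the last dimension = 1.5
    weights = np.array([0.6, 0.4])
    means = np.array([[0.0, 0.0, 0.0], [2.0, 1.0, 2.0]])
    covs = np.array([np.eye(3), 0.5 * np.eye(3)])
    w, mu, cov = condition_gmm(weights, means, covs,
                               obs_idx=np.array([2]), obs_vals=np.array([1.5]))
    print(w, mu)
    ```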

  18. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holoien, Thomas W. -S.; Marshall, Philip J.; Wechsler, Risa H.

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.

  19. Fault diagnosis of motor bearing with speed fluctuation via angular resampling of transient sound signals

    NASA Astrophysics Data System (ADS)

    Lu, Siliang; Wang, Xiaoxian; He, Qingbo; Liu, Fang; Liu, Yongbin

    2016-12-01

    Transient signal analysis (TSA) has been proven an effective tool for motor bearing fault diagnosis, but has yet to be applied in processing bearing fault signals with variable rotating speed. In this study, a new TSA-based angular resampling (TSAAR) method is proposed for fault diagnosis under speed fluctuation condition via sound signal analysis. By applying the TSAAR method, the frequency smearing phenomenon is eliminated and the fault characteristic frequency is exposed in the envelope spectrum for bearing fault recognition. The TSAAR method can accurately estimate the phase information of the fault-induced impulses using neither complicated time-frequency analysis techniques nor external speed sensors, and hence it provides a simple, flexible, and data-driven approach that realizes variable-speed motor bearing fault diagnosis. The effectiveness and efficiency of the proposed TSAAR method are verified through a series of simulated and experimental case studies.

  20. Automatic recognition of 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNNs.

    PubMed

    Han, Guanghui; Liu, Xiabi; Zheng, Guangyuan; Wang, Murong; Huang, Shan

    2018-06-06

    Ground-glass opacity (GGO) is a common CT imaging sign on high-resolution CT, which means the lesion is more likely to be malignant compared to common solid lung nodules. The automatic recognition of GGO CT imaging signs is of great importance for early diagnosis and possible cure of lung cancers. The present GGO recognition methods employ traditional low-level features and system performance improves slowly. Considering the high performance of CNN models in the computer vision field, we proposed an automatic recognition method of 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNN models in this paper. Our hybrid resampling is performed on multi-views and multi-receptive fields, which reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs with multiple scales simultaneously. The layer-wise fine-tuning strategy has the ability to obtain the optimal fine-tuning model. The multi-CNN model fusion strategy obtains better performance than any single trained model. We evaluated our method on the GGO nodule samples in the publicly available LIDC-IDRI dataset of chest CT scans. The experimental results show that our method yields excellent results with 96.64% sensitivity, 71.43% specificity, and 0.83 F1 score. Our method is a promising approach to apply deep learning methods to computer-aided analysis of specific CT imaging signs with insufficient labeled images. Graphical abstract We proposed an automatic recognition method of 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNN models in this paper. Our hybrid resampling reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs with multiple scales simultaneously. The layer-wise fine-tuning strategy has the ability to obtain the optimal fine-tuning model. Our method is a promising approach to apply deep learning methods to computer-aided analysis

  1. Significance testing testate amoeba water table reconstructions

    NASA Astrophysics Data System (ADS)

    Payne, Richard J.; Babeshko, Kirill V.; van Bellen, Simon; Blackford, Jeffrey J.; Booth, Robert K.; Charman, Dan J.; Ellershaw, Megan R.; Gilbert, Daniel; Hughes, Paul D. M.; Jassey, Vincent E. J.; Lamentowicz, Łukasz; Lamentowicz, Mariusz; Malysheva, Elena A.; Mauquoy, Dmitri; Mazei, Yuri; Mitchell, Edward A. D.; Swindles, Graeme T.; Tsyganov, Andrey N.; Turner, T. Edward; Telford, Richard J.

    2016-04-01

    Transfer functions are valuable tools in palaeoecology, but their output may not always be meaningful. A recently-developed statistical test ('randomTF') offers the potential to distinguish among reconstructions which are more likely to be useful, and those less so. We applied this test to a large number of reconstructions of peatland water table depth based on testate amoebae. Contrary to our expectations, a substantial majority (25 of 30) of these reconstructions gave non-significant results (P > 0.05). The underlying reasons for this outcome are unclear. We found no significant correlation between randomTF P-value and transfer function performance, the properties of the training set and reconstruction, or measures of transfer function fit. These results give cause for concern but we believe it would be extremely premature to discount the results of non-significant reconstructions. We stress the need for more critical assessment of transfer function output, replication of results and ecologically-informed interpretation of palaeoecological data.

  2. Manipulating the Alpha Level Cannot Cure Significance Testing.

    PubMed

    Trafimow, David; Amrhein, Valentin; Areshenkoff, Corson N; Barrera-Causil, Carlos J; Beh, Eric J; Bilgiç, Yusuf K; Bono, Roser; Bradley, Michael T; Briggs, William M; Cepeda-Freyre, Héctor A; Chaigneau, Sergio E; Ciocca, Daniel R; Correa, Juan C; Cousineau, Denis; de Boer, Michiel R; Dhar, Subhra S; Dolgov, Igor; Gómez-Benito, Juana; Grendar, Marian; Grice, James W; Guerrero-Gimenez, Martin E; Gutiérrez, Andrés; Huedo-Medina, Tania B; Jaffe, Klaus; Janyan, Armina; Karimnezhad, Ali; Korner-Nievergelt, Fränzi; Kosugi, Koji; Lachmair, Martin; Ledesma, Rubén D; Limongi, Roberto; Liuzza, Marco T; Lombardo, Rosaria; Marks, Michael J; Meinlschmidt, Gunther; Nalborczyk, Ladislas; Nguyen, Hung T; Ospina, Raydonal; Perezgonzalez, Jose D; Pfister, Roland; Rahona, Juan J; Rodríguez-Medina, David A; Romão, Xavier; Ruiz-Fernández, Susana; Suarez, Isabel; Tegethoff, Marion; Tejo, Mauricio; van de Schoot, Rens; Vankov, Ivan I; Velasco-Forero, Santiago; Wang, Tonghui; Yamada, Yuki; Zoppino, Felipe C M; Marmolejo-Ramos, Fernando

    2018-01-01

    We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = 0.05 to p = 0.005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable alpha levels both are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does; but none of the statistical tools should be taken as the new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of 0.05, 0.01, 0.005, or anything else, is not acceptable.

  3. Testing for significance of phase synchronisation dynamics in the EEG.

    PubMed

    Daly, Ian; Sweeney-Reed, Catherine M; Nasuto, Slawomir J

    2013-06-01

    A number of tests exist to check for statistical significance of phase synchronisation within the Electroencephalogram (EEG); however, the majority suffer from a lack of generality and applicability. They may also fail to account for temporal dynamics in the phase synchronisation, regarding synchronisation as a constant state instead of a dynamical process. Therefore, a novel test is developed for identifying the statistical significance of phase synchronisation based upon a combination of work characterising temporal dynamics of multivariate time-series and Markov modelling. We show how this method is better able to assess the significance of phase synchronisation than a range of commonly used significance tests. We also show how the method may be applied to identify and classify significantly different phase synchronisation dynamics in both univariate and multivariate datasets.
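
    A common baseline against which such tests are compared is the phase-locking value with surrogate data: Hilbert phases are extracted from the two signals and the observed locking is referred to a null distribution built by time-shifting one of them. The sketch below shows that baseline, not the Markov-model test proposed in the paper; all signal parameters are arbitrary.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    rng = np.random.default_rng(9)
    fs, dur = 250.0, 10.0
    t = np.arange(0, dur, 1 / fs)

    # Two toy "EEG" channels sharing a 10 Hz component with a drifting phase
    phase_drift = np.cumsum(rng.normal(scale=0.05, size=t.size))
    common = np.sin(2 * np.pi * 10 * t + phase_drift)
    x = common + 0.3 * rng.normal(size=t.size)
    y = common + 0.3 * rng.normal(size=t.size)

    def plv(a, b):
        """Phase-locking value from instantaneous Hilbert phases."""
        phase_diff = np.angle(hilbert(a)) - np.angle(hilbert(b))
        return np.abs(np.mean(np.exp(1j * phase_diff)))

    observed = plv(x, y)

    # Surrogates: circularly shift one channel to destroy the phase coupling
    n_surr = 500
    shifts = rng.integers(int(fs), t.size - int(fs), size=n_surr)
    null = np.array([plv(x, np.roll(y, s)) for s in shifts])
    p_value = (1 + np.sum(null >= observed)) / (n_surr + 1)
    print(f"PLV = {observed:.3f}, surrogate p = {p_value:.3f}")
    ```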

  4. Contribution of Morphological Awareness and Lexical Inferencing Ability to L2 Vocabulary Knowledge and Reading Comprehension among Advanced EFL Learners: Testing Direct and Indirect Effects

    ERIC Educational Resources Information Center

    Zhang, Dongbo; Koda, Keiko

    2012-01-01

    Within the Structural Equation Modeling framework, this study tested the direct and indirect effects of morphological awareness and lexical inferencing ability on L2 vocabulary knowledge and reading comprehension among advanced Chinese EFL readers in a university in China. Using both regular z-test and the bootstrapping (data-based resampling)…

  5. Improving particle filters in rainfall-runoff models: application of the resample-move step and development of the ensemble Gaussian particle filter

    NASA Astrophysics Data System (ADS)

    Plaza Guingla, D. A.; Pauwels, V. R.; De Lannoy, G. J.; Matgen, P.; Giustarini, L.; De Keyser, R.

    2012-12-01

    The objective of this work is to analyze the improvement in the performance of the particle filter by including a resample-move step or by using a modified Gaussian particle filter. Specifically, the standard particle filter structure is altered by the inclusion of the Markov chain Monte Carlo move step. The second choice adopted in this study uses the moments of an ensemble Kalman filter analysis to define the importance density function within the Gaussian particle filter structure. Both variants of the standard particle filter are used in the assimilation of densely sampled discharge records into a conceptual rainfall-runoff model. In order to quantify the obtained improvement, discharge root mean square errors are compared for different particle filters, as well as for the ensemble Kalman filter. First, a synthetic experiment is carried out. The results indicate that the performance of the standard particle filter can be improved by the inclusion of the resample-move step, but its effectiveness is limited to situations with limited particle impoverishment. The results also show that the modified Gaussian particle filter outperforms the rest of the filters. Second, a real experiment is carried out in order to validate the findings from the synthetic experiment. The addition of the resample-move step does not show a considerable improvement due to performance limitations in the standard particle filter with real data. On the other hand, when an optimal importance density function is used in the Gaussian particle filter, the results show a considerably improved performance of the particle filter.
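
    The resampling step that these variants modify sits inside an otherwise simple filter. The sketch below runs a generic bootstrap particle filter with systematic resampling on a toy scalar state-space model (a stand-in for the rainfall-runoff model) and marks where a resample-move filter would insert its Markov chain Monte Carlo move.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    def systematic_resample(weights):
        """Systematic resampling: returns indices of the particles to keep."""
        n = weights.size
        positions = (rng.random() + np.arange(n)) / n
        cumsum = np.cumsum(weights)
        cumsum[-1] = 1.0                       # guard against floating-point round-off
        return np.searchsorted(cumsum, positions)

    # Toy linear-Gaussian state-space model (stand-in for a rainfall-runoff model)
    T, n_particles = 100, 500
    true_x = np.zeros(T)
    for t in range(1, T):
        true_x[t] = 0.9 * true_x[t - 1] + rng.normal(scale=0.5)
    obs = true_x + rng.normal(scale=0.3, size=T)

    particles = rng.normal(scale=1.0, size=n_particles)
    estimates = np.zeros(T)
    for t in range(T):
        # Propagate particles through the state model
        particles = 0.9 * particles + rng.normal(scale=0.5, size=n_particles)
        # Weight by the observation likelihood
        w = np.exp(-0.5 * ((obs[t] - particles) / 0.3) ** 2)
        w /= w.sum()
        estimates[t] = np.sum(w * particles)
        # Resample (systematic); a resample-move filter would follow this with an
        # MCMC move targeting the posterior to restore particle diversity.
        particles = particles[systematic_resample(w)]

    print("RMSE:", np.sqrt(np.mean((estimates - true_x) ** 2)))
    ```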

  6. Manipulating the Alpha Level Cannot Cure Significance Testing

    PubMed Central

    Trafimow, David; Amrhein, Valentin; Areshenkoff, Corson N.; Barrera-Causil, Carlos J.; Beh, Eric J.; Bilgiç, Yusuf K.; Bono, Roser; Bradley, Michael T.; Briggs, William M.; Cepeda-Freyre, Héctor A.; Chaigneau, Sergio E.; Ciocca, Daniel R.; Correa, Juan C.; Cousineau, Denis; de Boer, Michiel R.; Dhar, Subhra S.; Dolgov, Igor; Gómez-Benito, Juana; Grendar, Marian; Grice, James W.; Guerrero-Gimenez, Martin E.; Gutiérrez, Andrés; Huedo-Medina, Tania B.; Jaffe, Klaus; Janyan, Armina; Karimnezhad, Ali; Korner-Nievergelt, Fränzi; Kosugi, Koji; Lachmair, Martin; Ledesma, Rubén D.; Limongi, Roberto; Liuzza, Marco T.; Lombardo, Rosaria; Marks, Michael J.; Meinlschmidt, Gunther; Nalborczyk, Ladislas; Nguyen, Hung T.; Ospina, Raydonal; Perezgonzalez, Jose D.; Pfister, Roland; Rahona, Juan J.; Rodríguez-Medina, David A.; Romão, Xavier; Ruiz-Fernández, Susana; Suarez, Isabel; Tegethoff, Marion; Tejo, Mauricio; van de Schoot, Rens; Vankov, Ivan I.; Velasco-Forero, Santiago; Wang, Tonghui; Yamada, Yuki; Zoppino, Felipe C. M.; Marmolejo-Ramos, Fernando

    2018-01-01

    We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = 0.05 to p = 0.005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable alpha levels both are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does; but none of the statistical tools should be taken as the new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of 0.05, 0.01, 0.005, or anything else, is not acceptable. PMID:29867666

  7. Spatial resampling of IDR frames for low bitrate video coding with HEVC

    NASA Astrophysics Data System (ADS)

    Hosking, Brett; Agrafiotis, Dimitris; Bull, David; Easton, Nick

    2015-03-01

    As the demand for higher quality and higher resolution video increases, many applications fail to meet this demand due to low bandwidth restrictions. One factor contributing to this problem is the high bitrate requirement of the intra-coded Instantaneous Decoding Refresh (IDR) frames featuring in all video coding standards. Frequent coding of IDR frames is essential for error resilience in order to prevent the occurrence of error propagation. However, as each one consumes a huge portion of the available bitrate, the quality of future coded frames is hindered by high levels of compression. This work presents a new technique, known as Spatial Resampling of IDR Frames (SRIF), and shows how it can increase the rate distortion performance by providing a higher and more consistent level of video quality at low bitrates.

  8. A resampling procedure for generating conditioned daily weather sequences

    USGS Publications Warehouse

    Clark, Martyn P.; Gangopadhyay, Subhrendu; Brandon, David; Werner, Kevin; Hay, Lauren E.; Rajagopalan, Balaji; Yates, David

    2004-01-01

    A method is introduced to generate conditioned daily precipitation and temperature time series at multiple stations. The method resamples data from the historical record “nens” times for the period of interest (nens = number of ensemble members) and reorders the ensemble members to reconstruct the observed spatial (intersite) and temporal correlation statistics. The weather generator model is applied to 2307 stations in the contiguous United States and is shown to reproduce the observed spatial correlation between neighboring stations, the observed correlation between variables (e.g., between precipitation and temperature), and the observed temporal correlation between subsequent days in the generated weather sequence. The weather generator model is extended to produce sequences of weather that are conditioned on climate indices (in this case the Niño 3.4 index). Example illustrations of conditioned weather sequences are provided for a station in Arizona (Petrified Forest, 34.8°N, 109.9°W), where El Niño and La Niña conditions have a strong effect on winter precipitation. The conditioned weather sequences generated using the methods described in this paper are appropriate for use as input to hydrologic models to produce multiseason forecasts of streamflow.

  9. Identification of significant features by the Global Mean Rank test.

    PubMed

    Klammer, Martin; Dybowski, J Nikolaj; Hoffmann, Daniel; Schaab, Christoph

    2014-01-01

    With the introduction of omics-technologies such as transcriptomics and proteomics, numerous methods for the reliable identification of significantly regulated features (genes, proteins, etc.) have been developed. Experimental practice requires these tests to successfully deal with conditions such as small numbers of replicates, missing values, non-normally distributed expression levels, and non-identical distributions of features. With the MeanRank test we aimed at developing a test that performs robustly under these conditions, while favorably scaling with the number of replicates. The test proposed here is a global one-sample location test, which is based on the mean ranks across replicates, and internally estimates and controls the false discovery rate. Furthermore, missing data is accounted for without the need of imputation. In extensive simulations comparing MeanRank to other frequently used methods, we found that it performs well with small and large numbers of replicates, feature dependent variance between replicates, and variable regulation across features on simulation data and a recent two-color microarray spike-in dataset. The tests were then used to identify significant changes in the phosphoproteomes of cancer cells induced by the kinase inhibitors erlotinib and 3-MB-PP1 in two independently published mass spectrometry-based studies. MeanRank outperformed the other global rank-based methods applied in this study. Compared to the popular Significance Analysis of Microarrays and Linear Models for Microarray methods, MeanRank performed similar or better. Furthermore, MeanRank exhibits more consistent behavior regarding the degree of regulation and is robust against the choice of preprocessing methods. MeanRank does not require any imputation of missing values, is easy to understand, and yields results that are easy to interpret. The software implementing the algorithm is freely available for academic and commercial use.
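
    The core of a mean-rank style test can be sketched briefly: rank features within each replicate, average the ranks, calibrate against a null of random ranking, and control the false discovery rate. The sketch below captures that idea generically and is not the authors' MeanRank implementation, which additionally handles missing values and estimates the FDR internally.

    ```python
    import numpy as np
    from scipy.stats import rankdata
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(11)
    n_features, n_reps = 2000, 4

    # Toy log-ratios: most features unregulated, the first 50 up-regulated
    data = rng.normal(size=(n_features, n_reps))
    data[:50] += 1.5

    # Rank features within each replicate (high value -> high rank), then average
    ranks = np.apply_along_axis(rankdata, 0, data)
    mean_rank = ranks.mean(axis=1)

    # Null distribution of the mean rank under random ranking
    n_sim = 5000
    null = rng.integers(1, n_features + 1, size=(n_sim, n_reps)).mean(axis=1)
    # One-sided p-value for up-regulation (large mean rank)
    pvals = np.array([(1 + np.sum(null >= m)) / (n_sim + 1) for m in mean_rank])

    # Control the false discovery rate (Benjamini-Hochberg)
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    print("features called significant:", np.sum(reject))
    ```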

  10. Performance of Bootstrapping Approaches To Model Test Statistics and Parameter Standard Error Estimation in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Nevitt, Jonathan; Hancock, Gregory R.

    2001-01-01

    Evaluated the bootstrap method under varying conditions of nonnormality, sample size, model specification, and number of bootstrap samples drawn from the resampling space. Results for the bootstrap suggest the resampling-based method may be conservative in its control over model rejections, thus having an impact on the statistical power associated…

  11. Mixed Effects Models for Resampled Network Statistics Improves Statistical Power to Find Differences in Multi-Subject Functional Connectivity

    PubMed Central

    Narayan, Manjari; Allen, Genevera I.

    2016-01-01

    Many complex brain disorders, such as autism spectrum disorders, exhibit a wide range of symptoms and disability. To understand how brain communication is impaired in such conditions, functional connectivity studies seek to understand individual differences in brain network structure in terms of covariates that measure symptom severity. In practice, however, functional connectivity is not observed but estimated from complex and noisy neural activity measurements. Imperfect subject network estimates can compromise subsequent efforts to detect covariate effects on network structure. We address this problem in the case of Gaussian graphical models of functional connectivity, by proposing novel two-level models that treat both subject level networks and population level covariate effects as unknown parameters. To account for imperfectly estimated subject level networks when fitting these models, we propose two related approaches—R2 based on resampling and random effects test statistics, and R3 that additionally employs random adaptive penalization. Simulation studies using realistic graph structures reveal that R2 and R3 have superior statistical power to detect covariate effects compared to existing approaches, particularly when the number of within subject observations is comparable to the size of subject networks. Using our novel models and methods to study parts of the ABIDE dataset, we find evidence of hypoconnectivity associated with symptom severity in autism spectrum disorders, in frontoparietal and limbic systems as well as in anterior and posterior cingulate cortices. PMID:27147940

  12. Predicting coin flips: using resampling and hierarchical models to help untangle the NHL's shoot-out.

    PubMed

    Lopez, Michael J; Schuckers, Michael

    2017-05-01

    Roughly 14% of regular season National Hockey League games since the 2005-06 season have been decided by a shoot-out, and the resulting allocation of points has impacted play-off races each season. But despite interest from fans, players and league officials, there is little in the way of published research on team or individual shoot-out performance. This manuscript attempts to fill that void. We present both generalised linear mixed model and Bayesian hierarchical model frameworks to model shoot-out outcomes, with results suggesting that there are (i) small but statistically significant talent gaps between shooters, (ii) marginal differences in performance among netminders and (iii) few, if any, predictors of player success after accounting for individual talent. We also provide a resampling strategy to highlight a selection bias with respect to shooter assignment, in which coaches choose their most skilled offensive players early in shoot-out rounds and are less likely to select players with poor past performances. Finally, given that per-shot data for shoot-outs do not currently exist in a single location for public use, we provide both our data and source code for other researchers interested in studying shoot-out outcomes.

  13. Simulating ensembles of source water quality using a K-nearest neighbor resampling approach.

    PubMed

    Towler, Erin; Rajagopalan, Balaji; Seidel, Chad; Summers, R Scott

    2009-03-01

    Climatological, geological, and water management factors can cause significant variability in surface water quality. As drinking water quality standards become more stringent, the ability to quantify the variability of source water quality becomes more important for decision-making and planning in water treatment for regulatory compliance. However, paucity of long-term water quality data makes it challenging to apply traditional simulation techniques. To overcome this limitation, we have developed and applied a robust nonparametric K-nearest neighbor (K-nn) bootstrap approach utilizing the United States Environmental Protection Agency's Information Collection Rule (ICR) data. In this technique, first an appropriate "feature vector" is formed from the best available explanatory variables. The nearest neighbors to the feature vector are identified from the ICR data and are resampled using a weight function. Repetition of this results in water quality ensembles, and consequently the distribution and the quantification of the variability. The main strengths of the approach are its flexibility, simplicity, and the ability to use a large amount of spatial data with limited temporal extent to provide water quality ensembles for any given location. We demonstrate this approach by applying it to simulate monthly ensembles of total organic carbon for two utilities in the U.S. with very different watersheds and to alkalinity and bromide at two other U.S. utilities.
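
    As an illustration of the K-nearest-neighbour bootstrap step summarised above (a sketch only, not code from the study), the following Python fragment resamples responses from the nearest neighbours of a query feature vector using a simple decreasing weight function; the names knn_bootstrap_ensemble, X, y and x_query are hypothetical stand-ins for ICR-derived data.

        import numpy as np

        def knn_bootstrap_ensemble(X, y, x_query, k=10, n_draws=1000, rng=None):
            """Resample water quality values from the k nearest neighbours of x_query."""
            rng = np.random.default_rng(rng)
            # distance of every record's feature vector to the query feature vector
            d = np.linalg.norm(X - x_query, axis=1)
            nn = np.argsort(d)[:k]              # indices of the k nearest neighbours
            w = 1.0 / np.arange(1, k + 1)       # weight decreasing with neighbour rank
            w /= w.sum()                        # normalise into a probability weight function
            return rng.choice(y[nn], size=n_draws, replace=True, p=w)

        # Hypothetical usage: ensemble of monthly total organic carbon values
        # ensemble = knn_bootstrap_ensemble(X, y, x_query=X[0], k=15)
        # np.percentile(ensemble, [5, 50, 95])   # summarise the simulated variability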

  14. Building Intuitions about Statistical Inference Based on Resampling

    ERIC Educational Resources Information Center

    Watson, Jane; Chance, Beth

    2012-01-01

    Formal inference, which makes theoretical assumptions about distributions and applies hypothesis testing procedures with null and alternative hypotheses, is notoriously difficult for tertiary students to master. The debate about whether this content should appear in Years 11 and 12 of the "Australian Curriculum: Mathematics" has gone on…

  15. Are patients open to elective re-sampling of their glioblastoma? A new way of assessing treatment innovations.

    PubMed

    Mir, Taskia; Dirks, Peter; Mason, Warren P; Bernstein, Mark

    2014-10-01

    This is a qualitative study designed to examine patient acceptability of re-sampling surgery for glioblastoma multiforme (GBM) electively post-therapy or at asymptomatic relapse. Thirty patients were selected using the convenience sampling method and interviewed. Patients were presented with hypothetical scenarios including a scenario in which the surgery was offered to them routinely and a scenario in which the surgery was in a clinical trial. The results of the study suggest that about two thirds of the patients offered the surgery on a routine basis would be interested, and half of the patients would agree to the surgery as part of a clinical trial. Several overarching themes emerged, some of which include: patients expressed ethical concerns about offering financial incentives or compensation to the patients or surgeons involved in the study; patients were concerned about appropriate communication and full disclosure about the procedures involved, the legalities of tumor ownership and the use of the tumor post-surgery; patients may feel alone or vulnerable when they are approached about the surgery; patients and their families expressed immense trust in their surgeon and indicated that this trust is a major determinant of their agreeing to surgery. The overall positive response to re-sampling surgery suggests that this procedure, if designed with all the ethical concerns attended to, would be welcomed by most patients. This approach of asking patients beforehand if a treatment innovation is acceptable would appear to be more practical and ethically desirable than previous practice.

  16. Field significance of performance measures in the context of regional climate model evaluation. Part 2: precipitation

    NASA Astrophysics Data System (ADS)

    Ivanov, Martin; Warrach-Sagi, Kirsten; Wulfmeyer, Volker

    2018-04-01

    A new approach for rigorous spatial analysis of the downscaling performance of regional climate model (RCM) simulations is introduced. It is based on a multiple comparison of the local tests at the grid cells and is also known as 'field' or 'global' significance. The block length for the local resampling tests is precisely determined to adequately account for the time series structure. New performance measures for estimating the added value of downscaled data relative to the large-scale forcing fields are developed. The methodology is applied, as an example, to a standard EURO-CORDEX hindcast simulation with the Weather Research and Forecasting (WRF) model coupled with the land surface model NOAH at 0.11° grid resolution. Daily precipitation climatology for the 1990-2009 period is analysed for Germany for winter and summer in comparison with high-resolution gridded observations from the German Weather Service. The field significance test controls the proportion of falsely rejected local tests in a meaningful way and is robust to spatial dependence. Hence, the spatial patterns of the statistically significant local tests are also meaningful. We interpret them from a process-oriented perspective. While the downscaled precipitation distributions are statistically indistinguishable from the observed ones in most regions in summer, the biases of some distribution characteristics are significant over large areas in winter. WRF-NOAH generates appropriate stationary fine-scale climate features in the daily precipitation field over regions of complex topography in both seasons and appropriate transient fine-scale features almost everywhere in summer. As the added value of global climate model (GCM)-driven simulations cannot be smaller than this perfect-boundary estimate, this work demonstrates in a rigorous manner the clear additional value of dynamical downscaling over global climate simulations. The evaluation methodology has a broad spectrum of applicability as it is
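
    As a simple illustration of controlling the proportion of falsely rejected local tests (a sketch only; the paper's field significance procedure, built on block-resampling local tests, is more elaborate), the following Python fragment applies a Benjamini-Hochberg step-up to hypothetical grid-cell p-values; field_significant and p_values_per_cell are assumed names.

        import numpy as np

        def field_significant(p_local, q=0.05):
            """Benjamini-Hochberg step-up over grid-cell p-values: returns a boolean
            mask of locally significant cells and whether any cell survives, i.e. a
            simple notion of 'field' significance under false discovery rate control."""
            p = np.asarray(p_local, dtype=float)
            m = p.size
            order = np.argsort(p)
            thresh = q * np.arange(1, m + 1) / m      # BH thresholds for the sorted p-values
            passed = p[order] <= thresh
            mask = np.zeros(m, dtype=bool)
            if passed.any():
                k = np.max(np.where(passed)[0])       # largest rank meeting its threshold
                mask[order[:k + 1]] = True
            return mask, mask.any()

        # Hypothetical usage with p-values from local block-bootstrap tests at each grid cell:
        # mask, field_sig = field_significant(p_values_per_cell, q=0.05)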

  17. A Powerful Test for Comparing Multiple Regression Functions.

    PubMed

    Maity, Arnab

    2012-09-01

    In this article, we address the important problem of comparison of two or more population regression functions. Recently, Pardo-Fernández, Van Keilegom and González-Manteiga (2007) developed test statistics for simple nonparametric regression models of the form Y_{ij} = θ_j(Z_{ij}) + σ_j(Z_{ij})ε_{ij}, based on empirical distributions of the errors in each population j = 1, …, J. In this paper, we propose a test for equality of the θ_j(·) based on the concept of generalized likelihood ratio type statistics. We also generalize our test to other nonparametric regression setups, e.g., nonparametric logistic regression, where the loglikelihood for population j is any general smooth function [Formula: see text]. We describe a resampling procedure to obtain the critical values of the test. In addition, we present a simulation study to evaluate the performance of the proposed test and compare our results to those in Pardo-Fernández et al. (2007).

  18. Voice Conversion Using Pitch Shifting Algorithm by Time Stretching with PSOLA and Re-Sampling

    NASA Astrophysics Data System (ADS)

    Mousa, Allam

    2010-01-01

    Voice changing has many applications in industrial and commercial fields. This paper emphasizes voice conversion using a pitch-shifting method that depends on detecting the pitch of the signal (fundamental frequency) using Simplified Inverse Filter Tracking (SIFT) and changing it according to the target pitch period using time stretching with the Pitch Synchronous Overlap-Add (PSOLA) algorithm, then resampling the signal in order to keep the same play rate. The same study was performed to see the effect of voice conversion when Arabic speech signals are considered. Treatment of certain Arabic voiced vowels and the conversion between male and female speech showed some expansion or compression in the resulting speech. A comparison in terms of pitch shifting is presented here. Analysis was performed for a single frame and for a full segmentation of speech.
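
    To illustrate only the final resampling stage described above (a sketch, not the paper's implementation; SIFT pitch detection and the PSOLA time stretch are omitted), the following Python fragment resamples a time-stretched frame back to its original length, so the play rate is preserved and the pitch is scaled by the stretch factor. The names shift_pitch, stretched and stretch_factor are hypothetical.

        from scipy.signal import resample

        def shift_pitch(stretched, stretch_factor):
            """Resample a time-stretched frame back to its original length so the
            duration is restored and only the pitch changes (sketch of the final
            resampling stage; the PSOLA time stretch itself is not shown)."""
            new_len = int(round(len(stretched) / stretch_factor))
            return resample(stretched, new_len)

        # Hypothetical usage: a frame stretched by a factor of 1.25 and resampled back
        # to its original length comes out with its pitch raised by a factor of 1.25.
        # shifted = shift_pitch(stretched_frame, 1.25)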

  19. The comparison of automated clustering algorithms for resampling representative conformer ensembles with RMSD matrix.

    PubMed

    Kim, Hyoungrae; Jang, Cheongyun; Yadav, Dharmendra K; Kim, Mi-Hyun

    2017-03-23

    The accuracy of 3D-QSAR, pharmacophore, and 3D-similarity-based chemometric target-fishing models is highly dependent on a reasonable sample of active conformations. Although a number of diverse conformational sampling algorithms exist that exhaustively generate enough conformers, model-building methods rely on an explicit number of common conformers. In this work, we attempted to devise clustering algorithms that automatically find a reasonable number of representative conformer ensembles from an asymmetric dissimilarity matrix generated with the OpenEye toolkit. RMSD was the key descriptor (variable): each column of the N × N matrix was treated as one of N variables describing the relationship (network) between the conformer in a row and the other N conformers. This approach was used to evaluate the performance of well-known clustering algorithms by comparing them in terms of generating representative conformer ensembles, and to test them over different matrix transformation functions with respect to stability. In the network, the representative conformer group could be resampled for four kinds of algorithms with implicit parameters. The directed dissimilarity matrix becomes the only input to the clustering algorithms. The Dunn index, Davies-Bouldin index, eta-squared values and omega-squared values were used to evaluate the clustering algorithms with respect to compactness and explanatory power. The evaluation also includes the reduction (abstraction) rate of the data, the correlation between the sizes of the population and the samples, the computational complexity and the memory usage. Every algorithm could find representative conformers automatically without any user intervention, and they reduced the data to 14-19% of the original values within 1.13 s per sample at the most. The clustering methods are simple and practical as they are fast and do not ask for any explicit parameters. RCDTC presented the maximum Dunn and omega-squared values of the

  20. Significance testing as perverse probabilistic reasoning

    PubMed Central

    2011-01-01

    Truth claims in the medical literature rely heavily on statistical significance testing. Unfortunately, most physicians misunderstand the underlying probabilistic logic of significance tests and consequently often misinterpret their results. This near-universal misunderstanding is highlighted by means of a simple quiz which we administered to 246 physicians at two major academic hospitals, on which the proportion of incorrect responses exceeded 90%. A solid understanding of the fundamental concepts of probability theory is becoming essential to the rational interpretation of medical information. This essay provides a technically sound review of these concepts that is accessible to a medical audience. We also briefly review the debate in the cognitive sciences regarding physicians' aptitude for probabilistic inference. PMID:21356064

  1. Resampling to Address the Winner's Curse in Genetic Association Analysis of Time to Event

    PubMed Central

    Poirier, Julia G.; Faye, Laura L.; Dimitromanolakis, Apostolos; Paterson, Andrew D.; Sun, Lei

    2015-01-01

    The “winner's curse” is a subtle and difficult problem in interpretation of genetic association, in which association estimates from large-scale gene detection studies are larger in magnitude than those from subsequent replication studies. This is practically important because use of a biased estimate from the original study will yield an underestimate of sample size requirements for replication, leaving the investigators with an underpowered study. Motivated by investigation of the genetics of type 1 diabetes complications in a longitudinal cohort of participants in the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) Genetics Study, we apply a bootstrap resampling method in analysis of time to nephropathy under a Cox proportional hazards model, examining 1,213 single-nucleotide polymorphisms (SNPs) in 201 candidate genes custom genotyped in 1,361 white probands. Among 15 top-ranked SNPs, bias reduction in log hazard ratio estimates ranges from 43.1% to 80.5%. In simulation studies based on the observed DCCT/EDIC genotype data, genome-wide bootstrap estimates for false-positive SNPs and for true-positive SNPs with low-to-moderate power are closer to the true values than uncorrected naïve estimates, but tend to overcorrect SNPs with high power. This bias-reduction technique is generally applicable for complex trait studies including quantitative, binary, and time-to-event traits. PMID:26411674
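
    The following Python fragment is a generic bootstrap bias-correction sketch, not the authors' DCCT/EDIC procedure (which conditions on the significance ranking of SNPs); bootstrap_bias_corrected is an assumed name and fit_beta is a hypothetical user-supplied estimator, e.g. returning a log hazard ratio from a Cox model.

        import numpy as np

        def bootstrap_bias_corrected(data, fit_beta, n_boot=500, rng=None):
            """Generic bootstrap bias correction of an effect estimate (sketch only).
            `data` is an array of records supporting fancy indexing; `fit_beta`
            returns a scalar estimate, e.g. a log hazard ratio."""
            rng = np.random.default_rng(rng)
            n = len(data)
            beta_hat = fit_beta(data)
            boot = np.array([fit_beta(data[rng.integers(0, n, n)]) for _ in range(n_boot)])
            bias = boot.mean() - beta_hat       # estimated bias of the naive estimate
            return beta_hat - bias              # bias-corrected estimate

        # Hypothetical usage: beta_corrected = bootstrap_bias_corrected(records, fit_beta)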

  2. Permutation tests for goodness-of-fit testing of mathematical models to experimental data.

    PubMed

    Fişek, M Hamit; Barlas, Zeynep

    2013-03-01

    This paper presents statistical procedures for improving the goodness-of-fit testing of theoretical models to data obtained from laboratory experiments. We use an experimental study in the expectation states research tradition which has been carried out in the "standardized experimental situation" associated with the program to illustrate the application of our procedures. We briefly review the expectation states research program and the fundamentals of resampling statistics as we develop our procedures in the resampling context. The first procedure we develop is a modification of the chi-square test which has been the primary statistical tool for assessing goodness of fit in the EST research program, but has problems associated with its use. We discuss these problems and suggest a procedure to overcome them. The second procedure we present, the "Average Absolute Deviation" test, is a new test and is proposed as an alternative to the chi-square test, as being simpler and more informative. The third and fourth procedures are permutation versions of Jonckheere's test for ordered alternatives, and Kendall's tau-b, a rank order correlation coefficient. The fifth procedure is a new rank order goodness-of-fit test, which we call the "Deviation from Ideal Ranking" index, which we believe may be more useful than other rank order tests for assessing goodness-of-fit of models to experimental data. The application of these procedures to the sample data is illustrated in detail. We then present another laboratory study from an experimental paradigm different from the expectation states paradigm - the "network exchange" paradigm, and describe how our procedures may be applied to this data set. Copyright © 2012 Elsevier Inc. All rights reserved.
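
    As a rough illustration of a resampling-based goodness-of-fit test in the spirit of the "Average Absolute Deviation" test (a sketch only, not the authors' exact construction), the following Python fragment compares an observed average absolute deviation with deviations simulated under the theoretical model; aad_resampling_test, observed_counts and model_probs are assumed names.

        import numpy as np

        def aad_resampling_test(observed_counts, model_probs, n_sim=10000, rng=None):
            """Goodness-of-fit by resampling: compare the observed average absolute
            deviation (AAD) between observed proportions and model-predicted
            probabilities with AADs from data simulated under the model."""
            rng = np.random.default_rng(rng)
            observed_counts = np.asarray(observed_counts)
            model_probs = np.asarray(model_probs, dtype=float)
            n = observed_counts.sum()
            aad_obs = np.abs(observed_counts / n - model_probs).mean()
            sims = rng.multinomial(n, model_probs, size=n_sim) / n
            aad_sim = np.abs(sims - model_probs).mean(axis=1)
            return aad_obs, (aad_sim >= aad_obs).mean()   # statistic and Monte Carlo p-value

        # Hypothetical usage:
        # aad, p = aad_resampling_test([40, 35, 25], [0.45, 0.35, 0.20])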

  3. A multiparametric magnetic resonance imaging-based risk model to determine the risk of significant prostate cancer prior to biopsy.

    PubMed

    van Leeuwen, Pim J; Hayen, Andrew; Thompson, James E; Moses, Daniel; Shnier, Ron; Böhm, Maret; Abuodha, Magdaline; Haynes, Anne-Maree; Ting, Francis; Barentsz, Jelle; Roobol, Monique; Vass, Justin; Rasiah, Krishan; Delprado, Warick; Stricker, Phillip D

    2017-12-01

    To develop and externally validate a predictive model for detection of significant prostate cancer. Development of the model was based on a prospective cohort including 393 men who underwent multiparametric magnetic resonance imaging (mpMRI) before biopsy. External validity of the model was then examined retrospectively in 198 men from a separate institution who underwent mpMRI followed by biopsy for abnormal prostate-specific antigen (PSA) level or digital rectal examination (DRE). A model was developed with age, PSA level, DRE, prostate volume, previous biopsy, and Prostate Imaging Reporting and Data System (PIRADS) score, as predictors for significant prostate cancer (Gleason 7 with >5% grade 4, ≥20% cores positive or ≥7 mm of cancer in any core). Probability was studied via logistic regression. Discriminatory performance was quantified by concordance statistics and internally validated with bootstrap resampling. In all, 393 men had complete data and 149 (37.9%) had significant prostate cancer. While the variable model had good accuracy in predicting significant prostate cancer, with an area under the curve (AUC) of 0.80, the advanced model (incorporating mpMRI) had a significantly higher AUC of 0.88 (P < 0.001). The model was well calibrated in internal and external validation. Decision analysis showed that use of the advanced model in practice would improve biopsy outcome predictions. Clinical application of the model would reduce biopsies by 28%, whilst missing 2.6% of significant prostate cancers. Individualised risk assessment of significant prostate cancer using a predictive model that incorporates mpMRI PIRADS score and clinical data allows a considerable reduction in unnecessary biopsies and reduction of the risk of over-detection of insignificant prostate cancer at the cost of a very small increase in the number of significant cancers missed. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.

  4. HEALTH SIGNIFICANCE OF PULMONARY FUNCTION TESTS

    EPA Science Inventory

    As the sensitivity and precision of functional tests improves, we become increasingly able to measure responses to pollutant exposures with little, if any, demonstrable health significance. Proper interpretation of such functional responses generally requires an ability to evalua...

  5. [Words before actions - the significance of counselling in the Praena-Test era].

    PubMed

    Tschudin, Sibil

    2014-04-23

    Due to new offerings in prenatal diagnostics, pregnant women are forced to make choices. In Switzerland, physicians are obliged to provide information prior to prenatal tests and to obtain informed consent. Considering the complexity of this information and the consequences of a positive result, counselling is challenging, especially in an intercultural context. A questionnaire-based study compared information processing, test interpretation and emotional response of pregnant women from Switzerland and adjacent countries with those of Turkish women. Knowledge of the latter was significantly lower and they found counselling more unsettling, but their acceptance of prenatal tests was significantly higher. An empathetic approach and the right words are decisive, and counselling will gain even more importance considering the increasing number of options patients are confronted with.

  6. Testing Strategies for Model-Based Development

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.

  7. Resampling method for applying density-dependent habitat selection theory to wildlife surveys.

    PubMed

    Tardy, Olivia; Massé, Ariane; Pelletier, Fanie; Fortin, Daniel

    2015-01-01

    Isodar theory can be used to evaluate fitness consequences of density-dependent habitat selection by animals. A typical habitat isodar is a regression curve plotting competitor densities in two adjacent habitats when individual fitness is equal. Despite the increasing use of habitat isodars, their application remains largely limited to areas composed of pairs of adjacent habitats that are defined a priori. We developed a resampling method that uses data from wildlife surveys to build isodars in heterogeneous landscapes without having to predefine habitat types. The method consists in randomly placing blocks over the survey area and dividing those blocks in two adjacent sub-blocks of the same size. Animal abundance is then estimated within the two sub-blocks. This process is done 100 times. Different functional forms of isodars can be investigated by relating animal abundance and differences in habitat features between sub-blocks. We applied this method to abundance data of raccoons and striped skunks, two of the main hosts of rabies virus in North America. Habitat selection by raccoons and striped skunks depended on both conspecific abundance and the difference in landscape composition and structure between sub-blocks. When conspecific abundance was low, raccoons and striped skunks favored areas with relatively high proportions of forests and anthropogenic features, respectively. Under high conspecific abundance, however, both species preferred areas with rather large corn-forest edge densities and corn field proportions. Based on random sampling techniques, we provide a robust method that is applicable to a broad range of species, including medium- to large-sized mammals with high mobility. The method is sufficiently flexible to incorporate multiple environmental covariates that can reflect key requirements of the focal species. We thus illustrate how isodar theory can be used with wildlife surveys to assess density-dependent habitat selection over large
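
    A minimal Python sketch of the block-resampling step described above (illustration only, not the authors' code): blocks are placed at random over a rectangular survey area and split into two adjacent sub-blocks whose animal counts are recorded; xy, extent, block_size and resample_subblock_pairs are hypothetical names, and habitat covariates per sub-block would be extracted in the same way.

        import numpy as np

        def resample_subblock_pairs(xy, extent, block_size, n_blocks=100, rng=None):
            """Randomly place square blocks over the survey area, split each into two
            adjacent sub-blocks, and count animals in each half.
            `xy` is an (n, 2) array of animal locations; `extent` is (xmax, ymax)."""
            rng = np.random.default_rng(rng)
            counts = []
            for _ in range(n_blocks):
                x0 = rng.uniform(0, extent[0] - block_size)
                y0 = rng.uniform(0, extent[1] - block_size)
                in_block = ((xy[:, 0] >= x0) & (xy[:, 0] < x0 + block_size) &
                            (xy[:, 1] >= y0) & (xy[:, 1] < y0 + block_size))
                left = in_block & (xy[:, 0] < x0 + block_size / 2)   # first sub-block
                counts.append((left.sum(), (in_block & ~left).sum()))
            return np.array(counts)

        # Hypothetical usage: pairs = resample_subblock_pairs(raccoon_xy, (5000.0, 5000.0), 500.0)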

  8. Tests of Statistical Significance Made Sound

    ERIC Educational Resources Information Center

    Haig, Brian D.

    2017-01-01

    This article considers the nature and place of tests of statistical significance (ToSS) in science, with particular reference to psychology. Despite the enormous amount of attention given to this topic, psychology's understanding of ToSS remains deficient. The major problem stems from a widespread and uncritical acceptance of null hypothesis…

  9. [Investigation of the accurate measurement of the basic imaging properties for the digital radiographic system based on flat panel detector].

    PubMed

    Katayama, R; Sakai, S; Sakaguchi, T; Maeda, T; Takada, K; Hayabuchi, N; Morishita, J

    2008-07-20

    PURPOSE/AIM OF THE EXHIBIT: The purpose of this exhibit is: 1. To explain "resampling", an image data processing, performed by the digital radiographic system based on flat panel detector (FPD). 2. To show the influence of "resampling" on the basic imaging properties. 3. To present accurate measurement methods of the basic imaging properties of the FPD system. 1. The relationship between the matrix sizes of the output image and the image data acquired on FPD that automatically changes depending on a selected image size (FOV). 2. The explanation of the image data processing of "resampling". 3. The evaluation results of the basic imaging properties of the FPD system using two types of DICOM image to which "resampling" was performed: characteristic curves, presampled MTFs, noise power spectra, detective quantum efficiencies. CONCLUSION/SUMMARY: The major points of the exhibit are as follows: 1. The influence of "resampling" should not be disregarded in the evaluation of the basic imaging properties of the flat panel detector system. 2. It is necessary for the basic imaging properties to be measured by using DICOM image to which no "resampling" is performed.

  10. Significance Testing in Confirmatory Factor Analytic Models.

    ERIC Educational Resources Information Center

    Khattab, Ali-Maher; Hocevar, Dennis

    Traditionally, confirmatory factor analytic models are tested against a null model of total independence. Using randomly generated factors in a matrix of 46 aptitude tests, this approach is shown to be unlikely to reject even random factors. An alternative null model, based on a single general factor, is suggested. In addition, an index of model…

  11. A nonparametric significance test for sampled networks.

    PubMed

    Elliott, Andrew; Leicht, Elizabeth; Whitmore, Alan; Reinert, Gesine; Reed-Tsochas, Felix

    2018-01-01

    Our work is motivated by an interest in constructing a protein-protein interaction network that captures key features associated with Parkinson's disease. While there is an abundance of subnetwork construction methods available, it is often far from obvious which subnetwork is the most suitable starting point for further investigation. We provide a method to assess whether a subnetwork constructed from a seed list (a list of nodes known to be important in the area of interest) differs significantly from a randomly generated subnetwork. The proposed method uses a Monte Carlo approach. As different seed lists can give rise to the same subnetwork, we control for redundancy by constructing a minimal seed list as the starting point for the significance test. The null model is based on random seed lists of the same length as a minimum seed list that generates the subnetwork; in this random seed list the nodes have (approximately) the same degree distribution as the nodes in the minimum seed list. We use this null model to select subnetworks which deviate significantly from random on an appropriate set of statistics and might capture useful information for a real world protein-protein interaction network. The software used in this paper is available for download at https://sites.google.com/site/elliottande/. The software is written in Python and uses the NetworkX library. Contact: ande.elliott@gmail.com or felix.reed-tsochas@sbs.ox.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.

  12. A nonparametric significance test for sampled networks

    PubMed Central

    Leicht, Elizabeth; Whitmore, Alan; Reinert, Gesine; Reed-Tsochas, Felix

    2018-01-01

    Motivation: Our work is motivated by an interest in constructing a protein–protein interaction network that captures key features associated with Parkinson’s disease. While there is an abundance of subnetwork construction methods available, it is often far from obvious which subnetwork is the most suitable starting point for further investigation. Results: We provide a method to assess whether a subnetwork constructed from a seed list (a list of nodes known to be important in the area of interest) differs significantly from a randomly generated subnetwork. The proposed method uses a Monte Carlo approach. As different seed lists can give rise to the same subnetwork, we control for redundancy by constructing a minimal seed list as the starting point for the significance test. The null model is based on random seed lists of the same length as a minimum seed list that generates the subnetwork; in this random seed list the nodes have (approximately) the same degree distribution as the nodes in the minimum seed list. We use this null model to select subnetworks which deviate significantly from random on an appropriate set of statistics and might capture useful information for a real world protein–protein interaction network. Availability and implementation: The software used in this paper is available for download at https://sites.google.com/site/elliottande/. The software is written in Python and uses the NetworkX library. Contact: ande.elliott@gmail.com or felix.reed-tsochas@sbs.ox.ac.uk. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29036452

  13. Use of power analysis to develop detectable significance criteria for sea urchin toxicity tests

    USGS Publications Warehouse

    Carr, R.S.; Biedenbach, J.M.

    1999-01-01

    When sufficient data are available, the statistical power of a test can be determined using power analysis procedures. The term “detectable significance” has been coined to refer to this criterion based on power analysis and past performance of a test. This power analysis procedure has been performed with sea urchin (Arbacia punctulata) fertilization and embryological development data from sediment porewater toxicity tests. Data from 3100 and 2295 tests for the fertilization and embryological development tests, respectively, were used to calculate the criteria and regression equations describing the power curves. Using Dunnett's test, a minimum significant difference (MSD) (β = 0.05) of 15.5% and 19% for the fertilization test, and 16.4% and 20.6% for the embryological development test, for α ≤ 0.05 and α ≤ 0.01, respectively, were determined. The use of this second criterion reduces type I (false positive) errors and helps to establish a critical level of difference based on the past performance of the test.

  14. A rule-based software test data generator

    NASA Technical Reports Server (NTRS)

    Deason, William H.; Brown, David B.; Chang, Kai-Hsiung; Cross, James H., II

    1991-01-01

    Rule-based software test data generation is proposed as an alternative to either path/predicate analysis or random data generation. A prototype rule-based test data generator for Ada programs is constructed and compared to a random test data generator. Four Ada procedures are used in the comparison. Approximately 2000 rule-based test cases and 100,000 randomly generated test cases are automatically generated and executed. The success of the two methods is compared using standard coverage metrics. Simple statistical tests showing that even the primitive rule-based test data generation prototype is significantly better than random data generation are performed. This result demonstrates that rule-based test data generation is feasible and shows great promise in assisting test engineers, especially when the rule base is developed further.

  15. Retrospective analysis of the Draize test for serious eye damage/eye irritation: importance of understanding the in vivo endpoints under UN GHS/EU CLP for the development and evaluation of in vitro test methods.

    PubMed

    Adriaens, Els; Barroso, João; Eskes, Chantra; Hoffmann, Sebastian; McNamee, Pauline; Alépée, Nathalie; Bessou-Touya, Sandrine; De Smedt, Ann; De Wever, Bart; Pfannenbecker, Uwe; Tailhardat, Magalie; Zuang, Valérie

    2014-03-01

    For more than two decades, scientists have been trying to replace the regulatory in vivo Draize eye test by in vitro methods, but so far only partial replacement has been achieved. In order to better understand the reasons for this, historical in vivo rabbit data were analysed in detail and resampled with the purpose of (1) revealing which of the in vivo endpoints are most important in driving United Nations Globally Harmonized System/European Union Regulation on Classification, Labelling and Packaging (UN GHS/EU CLP) classification for serious eye damage/eye irritation and (2) evaluating the method's within-test variability for proposing acceptable and justifiable target values of sensitivity and specificity for alternative methods and their combinations in testing strategies. Among the Cat 1 chemicals evaluated, 36-65 % (depending on the database) were classified based only on persistence of effects, with the remaining being classified mostly based on severe corneal effects. Iritis was found to rarely drive the classification (<4 % of both Cat 1 and Cat 2 chemicals). The two most important endpoints driving Cat 2 classification are conjunctiva redness (75-81 %) and corneal opacity (54-75 %). The resampling analyses demonstrated an overall probability of at least 11 % that chemicals classified as Cat 1 by the Draize eye test could be equally identified as Cat 2 and of about 12 % for Cat 2 chemicals to be equally identified as No Cat. On the other hand, the over-classification error for No Cat and Cat 2 was negligible (<1 %), which strongly suggests a high over-predictive power of the Draize eye test. Moreover, our analyses of the classification drivers suggest a critical revision of the UN GHS/EU CLP decision criteria for the classification of chemicals based on Draize eye test data, in particular Cat 1 based only on persistence of conjunctiva effects or corneal opacity scores of 4. In order to successfully replace the regulatory in vivo Draize eye test, it will

  16. Cellular neural network-based hybrid approach toward automatic image registration

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal VijayaKumar; Katiyar, Sunil Kumar

    2013-01-01

    Image registration is a key component of various image processing operations that involve the analysis of different image data sets. Automatic image registration domains have witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape as well as contextual information has limited the attainable accuracy. A framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as vector machines, cellular neural network (CNN), scale invariant feature transform (SIFT), coreset, and cellular automata is proposed. CNN has been found to be effective in improving the feature matching as well as the resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are cellular neural network approach-based SIFT feature point optimization, adaptive resampling, and intelligent object modelling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. This system has dynamically used spectral and spatial information for representing contextual knowledge using a CNN-Prolog approach. This methodology is also illustrated to be effective in providing intelligent interpretation and adaptive resampling.

  17. Pearson's chi-square test and rank correlation inferences for clustered data.

    PubMed

    Shih, Joanna H; Fay, Michael P

    2017-09-01

    Pearson's chi-square test has been widely used in testing for association between two categorical responses. Spearman rank correlation and Kendall's tau are often used for measuring and testing association between two continuous or ordered categorical responses. However, the established statistical properties of these tests are only valid when each pair of responses are independent, where each sampling unit has only one pair of responses. When each sampling unit consists of a cluster of paired responses, the assumption of independent pairs is violated. In this article, we apply the within-cluster resampling technique to U-statistics to form new tests and rank-based correlation estimators for possibly tied clustered data. We develop large sample properties of the new proposed tests and estimators and evaluate their performance by simulations. The proposed methods are applied to a data set collected from a PET/CT imaging study for illustration. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
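
    A minimal Python sketch of within-cluster resampling for a rank correlation (illustration only; the paper applies the technique to U-statistics and derives large-sample properties): one pair of responses is drawn per cluster, Spearman's rho is computed on each resampled data set, and the estimates are averaged. The names wcr_spearman and clusters are hypothetical.

        import numpy as np
        from scipy.stats import spearmanr

        def wcr_spearman(clusters, n_resample=1000, rng=None):
            """Within-cluster resampling sketch: `clusters` is a list where each element
            is an (n_i, 2) array of paired responses for one sampling unit. One pair is
            drawn per cluster, Spearman's rho is computed, and the estimates are averaged."""
            rng = np.random.default_rng(rng)
            rhos = []
            for _ in range(n_resample):
                pairs = np.array([c[rng.integers(len(c))] for c in clusters])
                rhos.append(spearmanr(pairs[:, 0], pairs[:, 1]).correlation)
            return float(np.mean(rhos))

        # Hypothetical usage with three clusters of paired measurements:
        # rho = wcr_spearman([np.random.randn(4, 2), np.random.randn(6, 2), np.random.randn(5, 2)])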

  18. The Harm Done to Reproducibility by the Culture of Null Hypothesis Significance Testing.

    PubMed

    Lash, Timothy L

    2017-09-15

    In the last few years, stakeholders in the scientific community have raised alarms about a perceived lack of reproducibility of scientific results. In reaction, guidelines for journals have been promulgated and grant applicants have been asked to address the rigor and reproducibility of their proposed projects. Neither solution addresses a primary culprit, which is the culture of null hypothesis significance testing that dominates statistical analysis and inference. In an innovative research enterprise, selection of results for further evaluation based on null hypothesis significance testing is doomed to yield a low proportion of reproducible results and a high proportion of effects that are initially overestimated. In addition, the culture of null hypothesis significance testing discourages quantitative adjustments to account for systematic errors and quantitative incorporation of prior information. These strategies would otherwise improve reproducibility and have not been previously proposed in the widely cited literature on this topic. Without discarding the culture of null hypothesis significance testing and implementing these alternative methods for statistical analysis and inference, all other strategies for improving reproducibility will yield marginal gains at best. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. A Particle Smoother with Sequential Importance Resampling for soil hydraulic parameter estimation: A lysimeter experiment

    NASA Astrophysics Data System (ADS)

    Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry

    2013-04-01

    An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies showed that data assimilation could reduce the parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state update with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequentially smoothing of particle weights for state and parameter resampling within a time window as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real world application, the experiment is conducted in a lysimeter environment.

  20. cloncase: Estimation of sex frequency and effective population size by clonemate resampling in partially clonal organisms.

    PubMed

    Ali, Sajid; Soubeyrand, Samuel; Gladieux, Pierre; Giraud, Tatiana; Leconte, Marc; Gautier, Angélique; Mboup, Mamadou; Chen, Wanquan; de Vallavieille-Pope, Claude; Enjalbert, Jérôme

    2016-07-01

    Inferring reproductive and demographic parameters of populations is crucial to our understanding of species ecology and evolutionary potential but can be challenging, especially in partially clonal organisms. Here, we describe a new and accurate method, cloncase, for estimating both the rate of sexual vs. asexual reproduction and the effective population size, based on the frequency of clonemate resampling across generations. Simulations showed that our method provides reliable estimates of sex frequency and effective population size for a wide range of parameters. The cloncase method was applied to Puccinia striiformis f.sp. tritici, a fungal pathogen causing stripe/yellow rust, an important wheat disease. This fungus is highly clonal in Europe but has been suggested to recombine in Asia. Using two temporally spaced samples of P. striiformis f.sp. tritici in China, the estimated sex frequency was 75% (i.e. three-quarters of individuals being sexually derived during the yearly sexual cycle), indicating a strong contribution of sexual reproduction to the life cycle of the pathogen in this area. The inferred effective population size of this partially clonal organism (Nc = 998) was in good agreement with estimates obtained using methods based on temporal variations in allelic frequencies. The cloncase estimator presented herein is the first method allowing accurate inference of both sex frequency and effective population size from population data without knowledge of recombination or mutation rates. cloncase can be applied to population genetic data from any organism with cyclical parthenogenesis and should in particular be very useful for improving our understanding of pest and microbial population biology. © 2016 John Wiley & Sons Ltd.

  1. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
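
    As a worked illustration of the bootstrap idea mentioned above (a sketch under the usual i.i.d. assumption, not tied to the spatial or stereological settings of the paper), the following Python fragment computes a percentile bootstrap confidence interval for a scalar statistic; percentile_bootstrap_ci is an assumed name.

        import numpy as np

        def percentile_bootstrap_ci(x, stat=np.mean, n_boot=10000, alpha=0.05, rng=None):
            """Percentile bootstrap confidence interval for a scalar statistic
            (minimal sketch of resampling with replacement)."""
            rng = np.random.default_rng(rng)
            x = np.asarray(x)
            boot = np.array([stat(rng.choice(x, size=x.size, replace=True))
                             for _ in range(n_boot)])
            return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

        # Example: 95% CI for the mean of a small sample
        # lo, hi = percentile_bootstrap_ci(np.array([2.3, 1.9, 3.1, 2.8, 2.2, 2.6]))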

  2. Statistical significance test for transition matrices of atmospheric Markov chains

    NASA Technical Reports Server (NTRS)

    Vautard, Robert; Mo, Kingtse C.; Ghil, Michael

    1990-01-01

    Low-frequency variability of large-scale atmospheric dynamics can be represented schematically by a Markov chain of multiple flow regimes. This Markov chain contains useful information for the long-range forecaster, provided that the statistical significance of the associated transition matrix can be reliably tested. Monte Carlo simulation yields a very reliable significance test for the elements of this matrix. The results of this test agree with previously used empirical formulae when each cluster of maps identified as a distinct flow regime is sufficiently large and when they all contain a comparable number of maps. Monte Carlo simulation provides a more reliable way to test the statistical significance of transitions to and from small clusters. It can determine the most likely transitions, as well as the most unlikely ones, with a prescribed level of statistical significance.
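
    A minimal Python sketch of a Monte Carlo significance test for transition counts (illustration only; the null used here, random permutation of the regime sequence, is one simple choice and not necessarily the paper's): observed transition counts are compared with counts from permuted sequences. The names transition_counts, mc_transition_pvalues and regime_sequence are hypothetical.

        import numpy as np

        def transition_counts(seq, n_states):
            """Count regime-to-regime transitions in an integer-coded sequence."""
            counts = np.zeros((n_states, n_states), dtype=int)
            for a, b in zip(seq[:-1], seq[1:]):
                counts[a, b] += 1
            return counts

        def mc_transition_pvalues(seq, n_states, n_sim=5000, rng=None):
            """Monte Carlo p-values for unusually frequent transitions, using random
            permutations of the regime sequence as a simple independence null."""
            rng = np.random.default_rng(rng)
            seq = np.asarray(seq)
            obs = transition_counts(seq, n_states)
            exceed = np.zeros_like(obs)
            for _ in range(n_sim):
                exceed += transition_counts(rng.permutation(seq), n_states) >= obs
            return (exceed + 1) / (n_sim + 1)   # small p-value: more frequent than chance

        # Hypothetical usage: p = mc_transition_pvalues(regime_sequence, n_states=4)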

  3. Fine-mapping additive and dominant SNP effects using group-LASSO and Fractional Resample Model Averaging

    PubMed Central

    Sabourin, Jeremy; Nobel, Andrew B.; Valdar, William

    2014-01-01

    Genomewide association studies sometimes identify loci at which both the number and identities of the underlying causal variants are ambiguous. In such cases, statistical methods that model effects of multiple SNPs simultaneously can help disentangle the observed patterns of association and provide information about how those SNPs could be prioritized for follow-up studies. Current multi-SNP methods, however, tend to assume that SNP effects are well captured by additive genetics; yet when genetic dominance is present, this assumption translates to reduced power and faulty prioritizations. We describe a statistical procedure for prioritizing SNPs at GWAS loci that efficiently models both additive and dominance effects. Our method, LLARRMA-dawg, combines a group LASSO procedure for sparse modeling of multiple SNP effects with a resampling procedure based on fractional observation weights; it estimates for each SNP the robustness of association with the phenotype both to sampling variation and to competing explanations from other SNPs. In producing a SNP prioritization that best identifies underlying true signals, we show that: our method easily outperforms a single marker analysis; when additive-only signals are present, our joint model for additive and dominance is equivalent to or only slightly less powerful than modeling additive-only effects; and, when dominance signals are present, even in combination with substantial additive effects, our joint model is unequivocally more powerful than a model assuming additivity. We also describe how performance can be improved through calibrated randomized penalization, and discuss how dominance in ungenotyped SNPs can be incorporated through either heterozygote dosage or multiple imputation. PMID:25417853

  4. Exploring the potential for using 210Pbex measurements within a re-sampling approach to document recent changes in soil redistribution rates within a small catchment in southern Italy.

    PubMed

    Porto, Paolo; Walling, Desmond E; Cogliandro, Vanessa; Callegari, Giovanni

    2016-11-01

    In recent years, the fallout radionuclides caesium-137 (137Cs) and unsupported lead-210 (210Pbex) have been successfully used to document rates of soil erosion in many areas of the world, as an alternative to conventional measurements. By virtue of their different half-lives, these two radionuclides are capable of providing information related to different time windows. 137Cs measurements are commonly used to generate information on mean annual erosion rates over the past ca. 50-60 years, whereas 210Pbex measurements are able to provide information relating to a longer period of up to ca. 100 years. However, the time-integrated nature of the estimates of soil redistribution provided by 137Cs and 210Pbex measurements can be seen as a limitation, particularly when viewed in the context of global change and interest in the response of soil redistribution rates to contemporary climate change and land use change. Re-sampling techniques used with these two fallout radionuclides potentially provide a basis for providing information on recent changes in soil redistribution rates. By virtue of the effectively continuous fallout input of 210Pb, the response of the 210Pbex inventory of a soil profile to changing soil redistribution rates, and thus its potential for use with the re-sampling approach, differs from that of 137Cs. Its greater sensitivity to recent changes in soil redistribution rates suggests that 210Pbex may have advantages over 137Cs for use in the re-sampling approach. The potential for using 210Pbex measurements in re-sampling studies is explored further in this contribution. Attention focuses on a small (1.38 ha) forested catchment in southern Italy. The catchment was originally sampled for 210Pbex measurements in 2001 and equivalent samples were collected from points very close to the original sampling points again in 2013. This made it possible to compare the estimates of mean annual erosion related to two different time windows. This

  5. Network diffusion-based analysis of high-throughput data for the detection of differentially enriched modules

    PubMed Central

    Bersanelli, Matteo; Mosca, Ettore; Remondini, Daniel; Castellani, Gastone; Milanesi, Luciano

    2016-01-01

    A relation exists between network proximity of molecular entities in interaction networks, functional similarity and association with diseases. The identification of network regions associated with biological functions and pathologies is a major goal in systems biology. We describe a network diffusion-based pipeline for the interpretation of different types of omics in the context of molecular interaction networks. We introduce the network smoothing index, a network-based quantity that allows to jointly quantify the amount of omics information in genes and in their network neighbourhood, using network diffusion to define network proximity. The approach is applicable to both descriptive and inferential statistics calculated on omics data. We also show that network resampling, applied to gene lists ranked by quantities derived from the network smoothing index, indicates the presence of significantly connected genes. As a proof of principle, we identified gene modules enriched in somatic mutations and transcriptional variations observed in samples of prostate adenocarcinoma (PRAD). In line with the local hypothesis, network smoothing index and network resampling underlined the existence of a connected component of genes harbouring molecular alterations in PRAD. PMID:27731320

  6. A novel measure and significance testing in data analysis of cell image segmentation.

    PubMed

    Wu, Jin Chu; Halter, Michael; Kacker, Raghu N; Elliott, John T; Plant, Anne L

    2017-03-14

    Cell image segmentation (CIS) is an essential part of quantitative imaging of biological cells. Designing a performance measure and conducting significance testing are critical for evaluating and comparing the CIS algorithms for image-based cell assays in cytometry. Many measures and methods have been proposed and implemented to evaluate segmentation methods. However, computing the standard errors (SE) of the measures and their correlation coefficient is not described, and thus the statistical significance of performance differences between CIS algorithms cannot be assessed. We propose the total error rate (TER), a novel performance measure for segmenting all cells in the supervised evaluation. The TER statistically aggregates all misclassification error rates (MER) by taking cell sizes as weights. The MERs are for segmenting each single cell in the population. The TER is fully supported by the pairwise comparisons of MERs using 106 manually segmented ground-truth cells with different sizes and seven CIS algorithms taken from ImageJ. Further, the SE and 95% confidence interval (CI) of TER are computed based on the SE of MER that is calculated using the bootstrap method. An algorithm for computing the correlation coefficient of TERs between two CIS algorithms is also provided. Hence, the 95% CI error bars can be used to classify CIS algorithms. The SEs of TERs and their correlation coefficient can be employed to conduct the hypothesis testing, while the CIs overlap, to determine the statistical significance of the performance differences between CIS algorithms. A novel measure TER of CIS is proposed. The TER's SEs and correlation coefficient are computed. Thereafter, CIS algorithms can be evaluated and compared statistically by conducting the significance testing.
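
    A minimal Python sketch of the size-weighted aggregation and bootstrap standard error described above (illustration only; the paper's exact TER definition and SE computation may differ in detail), assuming hypothetical arrays mer (per-cell misclassification error rates) and sizes (cell sizes).

        import numpy as np

        def total_error_rate(mer, sizes):
            """Aggregate per-cell misclassification error rates (MER) into a total
            error rate (TER) using cell sizes as weights (sketch of the weighting idea)."""
            mer, sizes = np.asarray(mer, dtype=float), np.asarray(sizes, dtype=float)
            return np.sum(sizes * mer) / np.sum(sizes)

        def ter_bootstrap_se(mer, sizes, n_boot=2000, rng=None):
            """Bootstrap standard error of TER by resampling cells with replacement."""
            rng = np.random.default_rng(rng)
            mer, sizes = np.asarray(mer, dtype=float), np.asarray(sizes, dtype=float)
            n = len(mer)
            boot = [total_error_rate(mer[idx], sizes[idx])
                    for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
            return float(np.std(boot, ddof=1))

        # Hypothetical usage for 106 ground-truth cells:
        # ter = total_error_rate(mer, sizes); se = ter_bootstrap_se(mer, sizes)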

  7. Cell-Based Genotoxicity Testing

    NASA Astrophysics Data System (ADS)

    Reifferscheid, Georg; Buchinger, Sebastian

    Genotoxicity test systems that are based on bacteria play an important role in the detection and assessment of DNA damaging chemicals. They belong to the basic line of test systems due to their easy realization, rapidness, broad applicability, high sensitivity and good reproducibility. Since the development of the Salmonella microsomal mutagenicity assay by Ames and coworkers in the early 1970s, significant development in bacterial genotoxicity assays was achieved and is still a subject of research. The basic principle of the mutagenicity assay is a reversion of a growth-inhibited bacterial strain, e.g., due to auxotrophy, back to a fast-growing phenotype (regain of prototrophy). Deeper knowledge of the mutation events allows a mechanistic understanding of the induced DNA damage by the utilization of base-specific tester strains. Collections of such specific tester strains were extended by genetic engineering. Besides the reversion assays, test systems utilizing the bacterial SOS-response were invented. These methods are based on the fusion of various SOS-responsive promoters with a broad variety of reporter genes facilitating numerous methods of signal detection. A very important aspect of genotoxicity testing is the bioactivation of xenobiotics to DNA-damaging compounds. Most widely used is the extracellular metabolic activation by making use of rodent liver homogenates. Again, genetic engineering allows the construction of highly sophisticated bacterial tester strains with significantly enhanced sensitivity due to overexpression of enzymes that are involved in the metabolism of xenobiotics. This provides mechanistic insights into the toxification and detoxification pathways of xenobiotics and helps explain the chemical nature of hazardous substances in unknown mixtures. In summary, beginning with "natural" tester strains, the rational design of bacteria led to highly specific and sensitive tools for a rapid, reliable and cost effective

  8. Kolmogorov-Smirnov test for spatially correlated data

    USGS Publications Warehouse

    Olea, R.A.; Pawlowsky-Glahn, V.

    2009-01-01

    The Kolmogorov-Smirnov test is a convenient method for investigating whether two underlying univariate probability distributions can be regarded as indistinguishable from each other or whether an underlying probability distribution differs from a hypothesized distribution. Application of the test requires that the sample be unbiased and the outcomes be independent and identically distributed, conditions that are violated to various degrees by spatially continuous attributes, such as topographical elevation. A generalized form of the bootstrap method is used here for the purpose of modeling the distribution of the statistic D of the Kolmogorov-Smirnov test. The innovation is in the resampling, which in the traditional formulation of bootstrap is done by drawing from the empirical sample with replacement, presuming independence. The generalization consists of preparing resamplings with the same spatial correlation as the empirical sample. This is accomplished by reading the value of unconditional stochastic realizations at the sampling locations, realizations that are generated by simulated annealing. The new approach was tested by two empirical samples taken from an exhaustive sample closely following a lognormal distribution. One sample was a regular, unbiased sample while the other one was a clustered, preferential sample that had to be preprocessed. Our results show that the p-value for the spatially correlated case is always larger than the p-value of the statistic in the absence of spatial correlation, which is in agreement with the fact that the information content of an uncorrelated sample is larger than the one for a spatially correlated sample of the same size. © Springer-Verlag 2008.

  9. MACE prediction of acute coronary syndrome via boosted resampling classification using electronic medical records.

    PubMed

    Huang, Zhengxing; Chan, Tak-Ming; Dong, Wei

    2017-02-01

    Major adverse cardiac events (MACE) of acute coronary syndrome (ACS) often occur suddenly resulting in high mortality and morbidity. Recently, the rapid development of electronic medical records (EMR) provides the opportunity to utilize the potential of EMR to improve the performance of MACE prediction. In this study, we present a novel data-mining based approach specialized for MACE prediction from a large volume of EMR data. The proposed approach presents a new classification algorithm by applying both over-sampling and under-sampling on minority-class and majority-class samples, respectively, and integrating the resampling strategy into a boosting framework so that it can effectively handle imbalance of MACE of ACS patients analogous to domain practice. The method learns a new and stronger MACE prediction model each iteration from a more difficult subset of EMR data with wrongly predicted MACEs of ACS patients by a previous weak model. We verify the effectiveness of the proposed approach on a clinical dataset containing 2930 ACS patient samples with 268 feature types. While the imbalanced ratio does not seem extreme (25.7%), MACE prediction targets pose great challenge to traditional methods. As these methods degenerate dramatically with increasing imbalanced ratios, the performance of our approach for predicting MACE remains robust and reaches 0.672 in terms of AUC. On average, the proposed approach improves the performance of MACE prediction by 4.8%, 4.5%, 8.6% and 4.8% over the standard SVM, Adaboost, SMOTE, and the conventional GRACE risk scoring system for MACE prediction, respectively. We consider that the proposed iterative boosting approach has demonstrated great potential to meet the challenge of MACE prediction for ACS patients using a large volume of EMR. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Best (but oft-forgotten) practices: the multiple problems of multiplicity-whether and how to correct for many statistical tests.

    PubMed

    Streiner, David L

    2015-10-01

    Testing many null hypotheses in a single study results in an increased probability of detecting a significant finding just by chance (the problem of multiplicity). Debates have raged over many years with regard to whether to correct for multiplicity and, if so, how it should be done. This article first discusses how multiple tests lead to an inflation of the α level, then explores the following different contexts in which multiplicity arises: testing for baseline differences in various types of studies, having >1 outcome variable, conducting statistical tests that produce >1 P value, taking multiple "peeks" at the data, and unplanned, post hoc analyses (i.e., "data dredging," "fishing expeditions," or "P-hacking"). It then discusses some of the methods that have been proposed for correcting for multiplicity, including single-step procedures (e.g., Bonferroni); multistep procedures, such as those of Holm, Hochberg, and Šidák; false discovery rate control; and resampling approaches. Note that these various approaches describe different aspects and are not necessarily mutually exclusive. For example, resampling methods could be used to control the false discovery rate or the family-wise error rate (as defined later in this article). However, the use of one of these approaches presupposes that we should correct for multiplicity, which is not universally accepted, and the article presents the arguments for and against such "correction." The final section brings together these threads and presents suggestions with regard to when it makes sense to apply the corrections and how to do so. © 2015 American Society for Nutrition.
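
    The single-step, step-wise, and false-discovery-rate corrections named above can be applied with statsmodels; a brief illustration with made-up p-values follows.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.array([0.001, 0.008, 0.020, 0.041, 0.120, 0.430])  # made-up p-values

for method in ("bonferroni", "sidak", "holm", "fdr_bh"):
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(f"{method:10s} adjusted: {np.round(p_adj, 3)}  reject: {reject}")
```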

  11. Test Design Considerations for Students with Significant Cognitive Disabilities

    ERIC Educational Resources Information Center

    Anderson, Daniel; Farley, Dan; Tindal, Gerald

    2015-01-01

    Students with significant cognitive disabilities present an assessment dilemma that centers on access and validity in large-scale testing programs. Typically, access is improved by eliminating construct-irrelevant barriers, while validity is improved, in part, through test standardization. In this article, one state's alternate assessment data…

  12. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    USGS Publications Warehouse

    Dunham, Kylee; Grand, James B.

    2016-01-01

    We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
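
    A minimal sequential importance sampling/resampling sketch for a toy univariate state-space model; the paper's Bayesian population models, kernel smoothing for parameters, and prior choices are not reproduced, and the random-walk model and Poisson observation process below are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy state-space model: latent log-abundance follows a random walk with drift,
# counts are observed with Poisson noise (a hypothetical stand-in for the paper's
# structured population models).
T, n_particles = 25, 5000
true_x = np.cumsum(rng.normal(0.05, 0.1, T)) + np.log(100)
obs = rng.poisson(np.exp(true_x))

particles = rng.normal(np.log(100), 0.5, n_particles)    # initial state particles
estimates = []
for t in range(T):
    # Propagate: sample from the state transition (importance sampling step).
    particles = particles + rng.normal(0.05, 0.1, n_particles)
    # Weight by the Poisson observation likelihood (up to a constant).
    lam = np.exp(particles)
    logw = obs[t] * np.log(lam) - lam
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates.append(np.sum(w * np.exp(particles)))
    # Systematic resampling to combat weight degeneracy (particle depletion).
    cw = np.cumsum(w)
    cw[-1] = 1.0
    positions = (rng.random() + np.arange(n_particles)) / n_particles
    particles = particles[np.searchsorted(cw, positions)]

print("final population estimate:", round(estimates[-1], 1),
      "truth:", round(float(np.exp(true_x[-1])), 1))
```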

  13. Your Chi-Square Test Is Statistically Significant: Now What?

    ERIC Educational Resources Information Center

    Sharpe, Donald

    2015-01-01

    Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data…
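
    One of the follow-up approaches named above, calculating residuals, can be sketched as follows; the contingency table is made up, and cells with standardized residuals beyond roughly ±2 point to the source of a significant overall chi-square result.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Made-up 2x3 contingency table.
table = np.array([[30, 45, 25],
                  [20, 25, 55]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# Pearson (standardized) residuals per cell.
residuals = (table - expected) / np.sqrt(expected)
print(np.round(residuals, 2))
```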

  14. Physical Test Prototypes Based on Microcontroller

    NASA Astrophysics Data System (ADS)

    Paramitha, S. T.

    2017-03-01

    The purpose of this study was to produce a prototype of a microcontroller-based physical test. The research method follows the research and development approach of Borg and Gall. The procedure starts from the preliminary study: research and information collecting, planning, developing a preliminary form of the product, preliminary field testing, main product revision, main field testing, operational product revision, operational field testing, final product revision, and dissemination and implementation. Validation of the product was obtained through expert evaluation; small-scale and large-scale product testing; an effectiveness test; and evaluation by respondents. The results address the eligibility assessment of the microcontroller-based physical test prototype. Ratings from seven experts showed that 87% fell into the “very good” category and 13% into the “good” category. The effectiveness test results showed that 1) the experimental group's sit-up test results increased by 40% and the control group's by 15%; 2) the experimental group's push-up test results increased by 30% and the control group's by 10%; 3) the experimental group's back-up test results increased by 25% and the control group's by 10%. With a significance value of 0.002, less than 0.05, the microcontroller-based physical test prototype proved effective in improving the results of physical tests. Conclusions and recommendations: the microcontroller-based physical test product can be used to measure the physical tests of push-ups, sit-ups, and back-ups.

  15. Properties of permutation-based gene tests and controlling type 1 error using a summary statistic based gene test

    PubMed Central

    2013-01-01

    Background The advent of genome-wide association studies has led to many novel disease-SNP associations, opening the door to focused study on their biological underpinnings. Because of the importance of analyzing these associations, numerous statistical methods have been devoted to them. However, fewer methods have attempted to associate entire genes or genomic regions with outcomes, which is potentially more useful knowledge from a biological perspective and those methods currently implemented are often permutation-based. Results One property of some permutation-based tests is that their power varies as a function of whether significant markers are in regions of linkage disequilibrium (LD) or not, which we show from a theoretical perspective. We therefore develop two methods for quantifying the degree of association between a genomic region and outcome, both of whose power does not vary as a function of LD structure. One method uses dimension reduction to “filter” redundant information when significant LD exists in the region, while the other, called the summary-statistic test, controls for LD by scaling marker Z-statistics using knowledge of the correlation matrix of markers. An advantage of this latter test is that it does not require the original data, but only their Z-statistics from univariate regressions and an estimate of the correlation structure of markers, and we show how to modify the test to protect the type 1 error rate when the correlation structure of markers is misspecified. We apply these methods to sequence data of oral cleft and compare our results to previously proposed gene tests, in particular permutation-based ones. We evaluate the versatility of the modification of the summary-statistic test since the specification of correlation structure between markers can be inaccurate. Conclusion We find a significant association in the sequence data between the 8q24 region and oral cleft using our dimension reduction approach and a borderline

  16. Properties of permutation-based gene tests and controlling type 1 error using a summary statistic based gene test.

    PubMed

    Swanson, David M; Blacker, Deborah; Alchawa, Taofik; Ludwig, Kerstin U; Mangold, Elisabeth; Lange, Christoph

    2013-11-07

    The advent of genome-wide association studies has led to many novel disease-SNP associations, opening the door to focused study on their biological underpinnings. Because of the importance of analyzing these associations, numerous statistical methods have been devoted to them. However, fewer methods have attempted to associate entire genes or genomic regions with outcomes, which is potentially more useful knowledge from a biological perspective and those methods currently implemented are often permutation-based. One property of some permutation-based tests is that their power varies as a function of whether significant markers are in regions of linkage disequilibrium (LD) or not, which we show from a theoretical perspective. We therefore develop two methods for quantifying the degree of association between a genomic region and outcome, both of whose power does not vary as a function of LD structure. One method uses dimension reduction to "filter" redundant information when significant LD exists in the region, while the other, called the summary-statistic test, controls for LD by scaling marker Z-statistics using knowledge of the correlation matrix of markers. An advantage of this latter test is that it does not require the original data, but only their Z-statistics from univariate regressions and an estimate of the correlation structure of markers, and we show how to modify the test to protect the type 1 error rate when the correlation structure of markers is misspecified. We apply these methods to sequence data of oral cleft and compare our results to previously proposed gene tests, in particular permutation-based ones. We evaluate the versatility of the modification of the summary-statistic test since the specification of correlation structure between markers can be inaccurate. We find a significant association in the sequence data between the 8q24 region and oral cleft using our dimension reduction approach and a borderline significant association using the
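
    A generic quadratic-form version of a summary-statistic region test, built only from marker Z-statistics and an estimated marker correlation (LD) matrix; the authors' exact statistic and their safeguard against a misspecified correlation structure are not reproduced, and the ridge term and the example numbers are assumptions.

```python
import numpy as np
from scipy import stats

def summary_stat_gene_test(z, R, ridge=1e-3):
    """Region-level test from marker Z-statistics `z` and an LD estimate `R`.
    Under the global null z ~ N(0, R), so the quadratic form z' R^{-1} z is
    approximately chi-squared with p degrees of freedom.  A small ridge term
    guards against a near-singular correlation estimate."""
    z = np.asarray(z, dtype=float)
    p = z.size
    R_reg = np.asarray(R, dtype=float) + ridge * np.eye(p)
    stat = z @ np.linalg.solve(R_reg, z)
    return stat, stats.chi2.sf(stat, df=p)

# Example with made-up numbers: 4 markers in moderate LD.
R = np.array([[1.0, 0.6, 0.3, 0.1],
              [0.6, 1.0, 0.5, 0.2],
              [0.3, 0.5, 1.0, 0.4],
              [0.1, 0.2, 0.4, 1.0]])
z = np.array([2.1, 1.8, 0.4, -0.3])
stat, pval = summary_stat_gene_test(z, R)
print(f"statistic = {stat:.2f}, p-value = {pval:.4f}")
```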

  17. Residual uncertainty estimation using instance-based learning with applications to hydrologic forecasting

    NASA Astrophysics Data System (ADS)

    Wani, Omar; Beckers, Joost V. L.; Weerts, Albrecht H.; Solomatine, Dimitri P.

    2017-08-01

    A non-parametric method is applied to quantify residual uncertainty in hydrologic streamflow forecasting. This method acts as a post-processor on deterministic model forecasts and generates a residual uncertainty distribution. Based on instance-based learning, it uses a k nearest-neighbour search for similar historical hydrometeorological conditions to determine uncertainty intervals from a set of historical errors, i.e. discrepancies between past forecast and observation. The performance of this method is assessed using test cases of hydrologic forecasting in two UK rivers: the Severn and Brue. Forecasts in retrospect were made and their uncertainties were estimated using kNN resampling and two alternative uncertainty estimators: quantile regression (QR) and uncertainty estimation based on local errors and clustering (UNEEC). Results show that kNN uncertainty estimation produces accurate and narrow uncertainty intervals with good probability coverage. Analysis also shows that the performance of this technique depends on the choice of search space. Nevertheless, the accuracy and reliability of uncertainty intervals generated using kNN resampling are at least comparable to those produced by QR and UNEEC. It is concluded that kNN uncertainty estimation is an interesting alternative to other post-processors, like QR and UNEEC, for estimating forecast uncertainty. Apart from its concept being simple and well understood, an advantage of this method is that it is relatively easy to implement.
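
    A minimal sketch of the kNN resampling idea: look up the k most similar historical forecast situations and use their errors to form an uncertainty interval around the current deterministic forecast. The predictors, their scaling, and the helper `knn_interval` are illustrative assumptions, not the paper's configuration for the Severn and Brue.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)

# Historical archive (hypothetical): predictors describing the hydrometeorological
# situation and the corresponding forecast errors (forecast minus observation).
X_hist = rng.normal(size=(1000, 2))
errors_hist = 0.5 * X_hist[:, 0] + rng.normal(scale=0.3, size=1000)

k = 50
nn = NearestNeighbors(n_neighbors=k).fit(X_hist)

def knn_interval(x_now, forecast_now, level=0.9):
    """Residual-uncertainty interval from the errors of the k nearest historical cases."""
    _, idx = nn.kneighbors(np.atleast_2d(x_now))
    errs = errors_hist[idx[0]]
    lo, hi = np.quantile(errs, [(1 - level) / 2, 1 - (1 - level) / 2])
    return forecast_now + lo, forecast_now + hi

print(knn_interval(x_now=[1.0, -0.5], forecast_now=120.0))
```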

  18. Short Duration Base Heating Test Improvements

    NASA Technical Reports Server (NTRS)

    Bender, Robert L.; Dagostino, Mark G.; Engel, Bradley A.; Engel, Carl D.

    1999-01-01

    Significant improvements have been made to a short duration space launch vehicle base heating test technique. This technique was first developed during the 1960's to investigate launch vehicle plume induced convective environments. Recent improvements include the use of coiled nitrogen buffer gas lines upstream of the hydrogen / oxygen propellant charge tubes, fast acting solenoid valves, stand alone gas delivery and data acquisition systems, and an integrated model design code. Technique improvements were successfully demonstrated during a 2.25% scale X-33 base heating test conducted in the NASA/MSFC Nozzle Test Facility in early 1999. Cost savings of approximately an order of magnitude over previous tests were realized due in large part to these improvements.

  19. An Empirical Comparison of Methods for Equating with Randomly Equivalent Groups of 50 to 400 Test Takers. Research Report. ETS RR-10-05

    ERIC Educational Resources Information Center

    Livingston, Samuel A.; Kim, Sooyeon

    2010-01-01

    A series of resampling studies investigated the accuracy of equating by four different methods in a random groups equating design with samples of 400, 200, 100, and 50 test takers taking each form. Six pairs of forms were constructed. Each pair was constructed by assigning items from an existing test taken by 9,000 or more test takers. The…

  20. permGPU: Using graphics processing units in RNA microarray association studies.

    PubMed

    Shterev, Ivo D; Jung, Sin-Ho; George, Stephen L; Owzar, Kouros

    2010-06-16

    Many analyses of microarray association studies involve permutation, bootstrap resampling and cross-validation, which are ideally formulated as embarrassingly parallel computing problems. Given that these analyses are computationally intensive, scalable approaches that can take advantage of multi-core processor systems need to be developed. We have developed a CUDA based implementation, permGPU, that employs graphics processing units in microarray association studies. We illustrate the performance and applicability of permGPU within the context of permutation resampling for a number of test statistics. An extensive simulation study demonstrates a dramatic increase in performance when using permGPU on an NVIDIA GTX 280 card compared to an optimized C/C++ solution running on a conventional Linux server. permGPU is available as an open-source stand-alone application and as an extension package for the R statistical environment. It provides a dramatic increase in performance for permutation resampling analysis in the context of microarray association studies. The current version offers six test statistics for carrying out permutation resampling analyses for binary, quantitative and censored time-to-event traits.
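
    permGPU itself is a CUDA/R package; the sketch below only illustrates, in plain NumPy, the kind of permutation-resampling logic such tools parallelize, here a two-group t-statistic with a max-T family-wise adjustment on a made-up expression matrix.

```python
import numpy as np

rng = np.random.default_rng(4)

n_genes, n_samples = 500, 40
expr = rng.normal(size=(n_genes, n_samples))       # made-up expression matrix
group = np.repeat([0, 1], n_samples // 2)          # binary trait

def t_stats(expr, group):
    a, b = expr[:, group == 0], expr[:, group == 1]
    num = a.mean(1) - b.mean(1)
    den = np.sqrt(a.var(1, ddof=1) / a.shape[1] + b.var(1, ddof=1) / b.shape[1])
    return num / den

t_obs = t_stats(expr, group)

# Permutation resampling: permute the trait labels, keep the maximum |t| each time.
B = 2000
max_t = np.empty(B)
for b in range(B):
    max_t[b] = np.abs(t_stats(expr, rng.permutation(group))).max()

# Family-wise-error adjusted p-values (single-step max-T style).
p_adj = (1 + (max_t[None, :] >= np.abs(t_obs)[:, None]).sum(1)) / (B + 1)
print("smallest adjusted p-value:", p_adj.min())
```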

  1. Do School-Based Tutoring Programs Significantly Improve Student Performance on Standardized Tests?

    ERIC Educational Resources Information Center

    Rothman, Terri; Henderson, Mary

    2011-01-01

    This study used a pre-post, nonequivalent control group design to examine the impact of an in-district, after-school tutoring program on eighth grade students' standardized test scores in language arts and mathematics. Students who had scored in the near-passing range on either the language arts or mathematics aspect of a standardized test at the…

  2. Fast and Accurate Approximation to Significance Tests in Genome-Wide Association Studies

    PubMed Central

    Zhang, Yu; Liu, Jun S.

    2011-01-01

    Genome-wide association studies commonly involve simultaneous tests of millions of single nucleotide polymorphisms (SNP) for disease association. The SNPs in nearby genomic regions, however, are often highly correlated due to linkage disequilibrium (LD, a genetic term for correlation). Simple Bonferroni correction for multiple comparisons is therefore too conservative. Permutation tests, which are often employed in practice, are both computationally expensive for genome-wide studies and limited in their scopes. We present an accurate and computationally efficient method, based on Poisson de-clumping heuristics, for approximating genome-wide significance of SNP associations. Compared with permutation tests and other multiple comparison adjustment approaches, our method computes the most accurate and robust p-value adjustments for millions of correlated comparisons within seconds. We demonstrate analytically that the accuracy and the efficiency of our method are nearly independent of the sample size, the number of SNPs, and the scale of p-values to be adjusted. In addition, our method can be easily adopted to estimate false discovery rate. When applied to genome-wide SNP datasets, we observed highly variable p-value adjustment results evaluated from different genomic regions. The variation in adjustments along the genome, however, is well conserved between the European and the African populations. The p-value adjustments are significantly correlated with LD among SNPs, recombination rates, and SNP densities. Given the large variability of sequence features in the genome, we further discuss a novel approach of using SNP-specific (local) thresholds to detect genome-wide significant associations. This article has supplementary material online. PMID:22140288

  3. The Use of Meta-Analytic Statistical Significance Testing

    ERIC Educational Resources Information Center

    Polanin, Joshua R.; Pigott, Terri D.

    2015-01-01

    Meta-analysis multiplicity, the concept of conducting multiple tests of statistical significance within one review, is an underdeveloped literature. We address this issue by considering how Type I errors can impact meta-analytic results, suggest how statistical power may be affected through the use of multiplicity corrections, and propose how…

  4. Significance testing - are we ready yet to abandon its use?

    PubMed

    The, Bertram

    2011-11-01

    Understanding of the damaging effects of significance testing has steadily grown. Reporting p values without dichotomizing the result as significant or not is not the solution. Confidence intervals are better, but are troubled by a non-intuitive interpretation, and are often misused just to see whether the null value lies within the interval. Bayesian statistics provide an alternative which solves most of these problems. Although criticized for relying on subjective models, the interpretation of a Bayesian posterior probability is more intuitive than the interpretation of a p value, and seems to be closest to intuitive patterns of human decision making. Another alternative could be using confidence interval functions (or p value functions) to display a continuum of intervals at different levels of confidence around a point estimate. Thus, better alternatives to significance testing exist. The reluctance to abandon this practice might reflect both a preference for clinging to old habits and unfamiliarity with better methods. Authors might question whether using less commonly exercised, though superior, techniques will be well received by editors, reviewers and the readership. A joint effort will be needed to abandon significance testing in clinical research in the future.

  5. Using and Evaluating Resampling Simulations in SPSS and Excel.

    ERIC Educational Resources Information Center

    Smith, Brad

    2003-01-01

    Describes and evaluates three computer-assisted simulations used with Statistical Package for the Social Sciences (SPSS) and Microsoft Excel. Designed the simulations to reinforce and enhance student understanding of sampling distributions, confidence intervals, and significance tests. Reports evaluations revealed improved student comprehension of…

  6. Entropy Based Genetic Association Tests and Gene-Gene Interaction Tests

    PubMed Central

    de Andrade, Mariza; Wang, Xin

    2011-01-01

    In the past few years, several entropy-based tests have been proposed for testing either single SNP association or gene-gene interaction. These tests are mainly based on Shannon entropy and have higher statistical power when compared to standard χ2 tests. In this paper, we extend some of these tests using a more generalized entropy definition, Rényi entropy, where Shannon entropy is a special case of order 1. The order λ (>0) of Rényi entropy weights the events (genotype/haplotype) according to their probabilities (frequencies). Higher λ places more emphasis on higher probability events while smaller λ (close to 0) tends to assign weights more equally. Thus, by properly choosing the λ, one can potentially increase the power of the tests or the p-value level of significance. We conducted simulation as well as real data analyses to assess the impact of the order λ and the performance of these generalized tests. The results showed that for dominant model the order 2 test was more powerful and for multiplicative model the order 1 or 2 had similar power. The analyses indicate that the choice of λ depends on the underlying genetic model and Shannon entropy is not necessarily the most powerful entropy measure for constructing genetic association or interaction tests. PMID:23089811
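
    Rényi entropy of order λ reduces to Shannon entropy as λ approaches 1; the small sketch below shows only the entropy computation underlying such tests, on made-up genotype frequencies, and does not reproduce the test statistics themselves.

```python
import numpy as np

def renyi_entropy(p, lam):
    """Rényi entropy of order lam for a probability vector p:
    H_lam = log(sum_i p_i**lam) / (1 - lam); the limit lam -> 1 is Shannon entropy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(lam, 1.0):
        return -np.sum(p * np.log(p))          # Shannon limit
    return np.log(np.sum(p ** lam)) / (1.0 - lam)

# Genotype frequencies (made up): larger lam weights common genotypes more heavily.
freqs = [0.64, 0.32, 0.04]
for lam in (0.5, 1.0, 2.0):
    print(f"lambda = {lam}: H = {renyi_entropy(freqs, lam):.3f}")
```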

  7. Arctic Acoustic Workshop Proceedings, 14-15 February 1989.

    DTIC Science & Technology

    1989-06-01

    measurements. The measurements reported by Levine et al. (1987) were taken from current and temperature sensors moored in two triangular grids. The internal ... requires a resampling of the data series on a uniform depth-time grid. Statistics calculated from the resampled series will be used to test numerical ... from an isolated keel. Figure 2: 2-D Modeling Geometry - The model is based on a 2-D Cartesian grid with an axis of symmetry on the left. A pulsed

  8. Testing particle filters on convective scale dynamics

    NASA Astrophysics Data System (ADS)

    Haslehner, Mylene; Craig, George. C.; Janjic, Tijana

    2014-05-01

    Particle filters have been developed in recent years to deal with the highly nonlinear dynamics and non-Gaussian error statistics that also characterize data assimilation on convective scales. In this work we explore the use of the efficient particle filter (P. v. Leeuwen, 2011) for convective-scale data assimilation applications. The method is tested in an idealized setting, on two stochastic models. The models were designed to reproduce some of the properties of convection, for example the rapid development and decay of convective clouds. The first model is a simple one-dimensional, discrete-state birth-death model of clouds (Craig and Würsch, 2012). For this model, the efficient particle filter that includes nudging the variables shows significant improvement compared to the Ensemble Kalman Filter and the Sequential Importance Resampling (SIR) particle filter. The success of the combination of nudging and resampling, measured as RMS error with respect to the 'true state', is proportional to the nudging intensity. Significantly, even a very weak nudging intensity brings notable improvement over SIR. The second model is a modified version of a stochastic shallow water model (Würsch and Craig 2013), which contains more realistic dynamical characteristics of convective-scale phenomena. Using the efficient particle filter and different combinations of observations of the three field variables (wind, water 'height' and rain) allows the particle filter to be evaluated in comparison to a regime where only nudging is used. Sensitivity to the properties of the model error covariance is also considered. Finally, criteria are identified under which the efficient particle filter outperforms nudging alone. References: Craig, G. C. and M. Würsch, 2012: The impact of localization and observation averaging for convective-scale data assimilation in a simple stochastic model. Q. J. R. Meteorol. Soc., 139, 515-523. Van Leeuwen, P. J., 2011: Efficient non-linear data assimilation in geophysical

  9. Small sample mediation testing: misplaced confidence in bootstrapped confidence intervals.

    PubMed

    Koopman, Joel; Howe, Michael; Hollenbeck, John R; Sin, Hock-Peng

    2015-01-01

    Bootstrapping is an analytical tool commonly used in psychology to test the statistical significance of the indirect effect in mediation models. Bootstrapping proponents have particularly advocated for its use for samples of 20-80 cases. This advocacy has been heeded, especially in the Journal of Applied Psychology, as researchers are increasingly utilizing bootstrapping to test mediation with samples in this range. We discuss reasons to be concerned with this escalation, and in a simulation study focused specifically on this range of sample sizes, we demonstrate not only that bootstrapping has insufficient statistical power to provide a rigorous hypothesis test in most conditions but also that bootstrapping has a tendency to exhibit an inflated Type I error rate. We then extend our simulations to investigate an alternative empirical resampling method as well as a Bayesian approach and demonstrate that they exhibit comparable statistical power to bootstrapping in small samples without the associated inflated Type I error. Implications for researchers testing mediation hypotheses in small samples are presented. For researchers wishing to use these methods in their own research, we have provided R syntax in the online supplemental materials. (c) 2015 APA, all rights reserved.
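
    A percentile-bootstrap sketch of the indirect effect a*b for a simple X -> M -> Y mediation model, i.e. the procedure whose small-sample behaviour the authors criticize; their simulation design and the alternative estimators they study are not reproduced, and the simulated data and effect sizes below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated mediation data (hypothetical): X -> M -> Y with modest effects, n in
# the 20-80 range discussed by the authors.
n = 60
X = rng.normal(size=n)
M = 0.4 * X + rng.normal(size=n)
Y = 0.4 * M + rng.normal(size=n)

def indirect_effect(X, M, Y):
    a = np.polyfit(X, M, 1)[0]                   # slope of M on X
    design = np.column_stack([np.ones_like(X), M, X])
    b = np.linalg.lstsq(design, Y, rcond=None)[0][1]   # slope of Y on M, controlling for X
    return a * b

boot = np.empty(5000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)                  # resample cases with replacement
    boot[i] = indirect_effect(X[idx], M[idx], Y[idx])

ci = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(X, M, Y):.3f}, "
      f"95% percentile CI = {np.round(ci, 3)}")
```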

  10. Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, M D; Cole, S; Frenk, C S

    2011-02-14

    We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires ~8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.
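
    A toy 2-D illustration of swapping the large-scale Fourier modes of a periodic field for fresh Gaussian draws. Unlike the published algorithm, this naive version ignores the nonlinear coupling between large and small scales that the authors' method is designed to preserve; the field, the wavenumber cut, and the power estimate are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy periodic field standing in for a simulated density field.
N = 128
field = rng.normal(size=(N, N))
fk = np.fft.rfft2(field)

# Wavenumber magnitude on the rfft2 grid.
kx = np.fft.fftfreq(N)[:, None]
ky = np.fft.rfftfreq(N)[None, :]
kmag = np.sqrt(kx**2 + ky**2)

# Resample the large-scale (small |k|) modes: draw new complex Gaussian modes
# whose expected power matches the measured power of the original modes.
# Hermitian-symmetry bookkeeping of the ky = 0 column is ignored in this toy
# example; irfft2 still returns a real field.
cut = 0.05
mask = (kmag > 0) & (kmag < cut)
g = (rng.normal(size=mask.sum()) + 1j * rng.normal(size=mask.sum())) / np.sqrt(2)
fk[mask] = np.abs(fk[mask]) * g

new_field = np.fft.irfft2(fk, s=(N, N))
print("variance before/after:", field.var().round(3), new_field.var().round(3))
```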

  11. A Quantitative Analysis of Evidence-Based Testing Practices in Nursing Education

    ERIC Educational Resources Information Center

    Moore, Wendy

    2017-01-01

    The focus of this dissertation is evidence-based testing practices in nursing education. Specifically, this research study explored the implementation of evidence-based testing practices between nursing faculty of various experience levels. While the significance of evidence-based testing in nursing education is well documented, little is known…

  12. Memory and Trend of Precipitation in China during 1966-2013

    NASA Astrophysics Data System (ADS)

    Du, M.; Sun, F.; Liu, W.

    2017-12-01

    As climate change has had a significant impact on the water cycle, the characteristics and variation of precipitation under climate change have become a topic of intense interest in hydrology. This study aims to analyze the trend and memory (both short-term and long-term) of precipitation in China. To do that, we apply statistical tests (including the Mann-Kendall test, the Ljung-Box test and the Hurst exponent) to annual precipitation (P), frequency of rainy days (λ) and mean daily rainfall on days when precipitation occurs (α) in China (1966-2013). We also use a resampling approach to determine field significance. From there, we evaluate the spatial distribution and percentages of stations with significant memory or trend. We find that the percentages of significant downtrends for λ and significant uptrends for α are significantly larger than the critical values at the 95% field significance level, probably caused by global warming. From these results, we conclude that extra care is necessary when significant results are obtained using statistical tests, because the null hypothesis can be rejected by chance, and, according to the results of the resampling approach, this is more likely to occur when spatial correlation is ignored.
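
    A compact Mann-Kendall trend test plus a resampling check of field significance, counting how many stations look significant when the year order is shuffled identically at all stations (which preserves spatial correlation while destroying trends). The station data are simulated and the tie correction is omitted; this is a sketch of the general idea, not the study's exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def mann_kendall_z(x):
    """Mann-Kendall trend statistic with normal approximation (no tie correction)."""
    n = len(x)
    s = np.triu(np.sign(x[None, :] - x[:, None]), k=1).sum()
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    return 0.0 if s == 0 else (s - np.sign(s)) / np.sqrt(var_s)

# Simulated annual series for many stations, 48 years (1966-2013).
n_stations, n_years = 200, 48
data = rng.normal(size=(n_stations, n_years))

p_local = np.array([2 * stats.norm.sf(abs(mann_kendall_z(x))) for x in data])
n_sig_obs = np.sum(p_local < 0.05)

# Field significance: distribution of the count of "significant" stations under
# resampled (shuffled) year orderings applied to all stations at once.
B = 200
n_sig_null = np.empty(B)
for b in range(B):
    shuffled = data[:, rng.permutation(n_years)]
    pb = np.array([2 * stats.norm.sf(abs(mann_kendall_z(x))) for x in shuffled])
    n_sig_null[b] = np.sum(pb < 0.05)

print("observed significant stations:", n_sig_obs,
      "| 95% field-significance threshold:", np.quantile(n_sig_null, 0.95))
```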

  13. Measures of precision for dissimilarity-based multivariate analysis of ecological communities

    PubMed Central

    Anderson, Marti J; Santana-Garcon, Julia

    2015-01-01

    Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. PMID:25438826
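
    A rough sketch of computing a pseudo multivariate standard error from a dissimilarity matrix and resampling it over increasing sample sizes. The formulation of MultSE used here (sqrt of the pseudo variance divided by n) and the simple bootstrap are assumptions; the authors' published R functions implement a double-resampling scheme with bias correction that this does not reproduce, and the community matrix is simulated.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(8)

# Simulated community matrix: 60 sampling units x 30 species (abundances).
comm = rng.poisson(2.0, size=(60, 30))
D = squareform(pdist(comm, metric="braycurtis"))     # dissimilarity matrix

def mult_se(D):
    """Pseudo multivariate SE from a dissimilarity matrix (one common formulation,
    stated here as an assumption): SS = sum of squared dissimilarities / n,
    V = SS / (n - 1), MultSE = sqrt(V / n)."""
    n = D.shape[0]
    ss = np.sum(np.triu(D, k=1) ** 2) / n
    return np.sqrt(ss / (n - 1) / n)

# Resample MultSE for increasing sample sizes to judge sample-size adequacy.
for n_sub in (10, 20, 40, 60):
    vals = []
    for _ in range(500):
        idx = rng.choice(D.shape[0], size=n_sub, replace=True)
        vals.append(mult_se(D[np.ix_(idx, idx)]))
    print(f"n = {n_sub:2d}: mean MultSE = {np.mean(vals):.3f}")
```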

  14. Computer-Based Testing: Test Site Security.

    ERIC Educational Resources Information Center

    Rosen, Gerald A.

    Computer-based testing places great burdens on all involved parties to ensure test security. A task analysis of test site security might identify the areas of protecting the test, protecting the data, and protecting the environment as essential issues in test security. Protecting the test involves transmission of the examinations, identifying the…

  15. 40 CFR Appendix IV to Part 265 - Tests for Significance

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... changes in the concentration or value of an indicator parameter in periodic ground-water samples when... then be compared to the value of the t-statistic found in a table for t-test of significance at the specified level of significance. A calculated value of t which exceeds the value of t found in the table...

  16. 40 CFR Appendix IV to Part 265 - Tests for Significance

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... changes in the concentration or value of an indicator parameter in periodic ground-water samples when... then be compared to the value of the t-statistic found in a table for t-test of significance at the specified level of significance. A calculated value of t which exceeds the value of t found in the table...

  17. 40 CFR Appendix IV to Part 265 - Tests for Significance

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... changes in the concentration or value of an indicator parameter in periodic ground-water samples when... then be compared to the value of the t-statistic found in a table for t-test of significance at the specified level of significance. A calculated value of t which exceeds the value of t found in the table...

  18. 40 CFR Appendix IV to Part 265 - Tests for Significance

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... changes in the concentration or value of an indicator parameter in periodic ground-water samples when... then be compared to the value of the t-statistic found in a table for t-test of significance at the specified level of significance. A calculated value of t which exceeds the value of t found in the table...
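
    The comparison described in the Appendix IV excerpts above, a calculated t compared against a tabulated critical value at the specified level of significance, can be reproduced with scipy in place of a printed table. The numbers are made up, and this generic Welch-type sketch with a conservative degrees-of-freedom choice is an assumption, not necessarily the exact procedure prescribed by the regulation.

```python
import numpy as np
from scipy import stats

background = np.array([4.1, 3.8, 4.4, 4.0])     # made-up background indicator values
monitoring = np.array([5.2, 4.9, 5.6, 5.1])     # made-up downgradient values

t_calc, _ = stats.ttest_ind(monitoring, background, equal_var=False)

alpha = 0.01                                     # specified level of significance
df = min(len(background), len(monitoring)) - 1   # conservative df choice (assumption)
t_table = stats.t.ppf(1 - alpha, df)             # "value of t found in the table"

print(f"t = {t_calc:.2f}, critical value = {t_table:.2f}, "
      f"significant increase: {t_calc > t_table}")
```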

  19. Security Considerations and Recommendations in Computer-Based Testing

    PubMed Central

    Al-Saleem, Saleh M.

    2014-01-01

    Many organizations and institutions around the globe are moving or planning to move their paper-and-pencil based testing to computer-based testing (CBT). However, this conversion will not be the best option for all kinds of exams and it will require significant resources. These resources may include the preparation of item banks, methods for test delivery, procedures for test administration, and last but not least test security. Security aspects may include but are not limited to the identification and authentication of examinee, the risks that are associated with cheating on the exam, and the procedures related to test delivery to the examinee. This paper will mainly investigate the security considerations associated with CBT and will provide some recommendations for the security of these kinds of tests. We will also propose a palm-based biometric authentication system incorporated with basic authentication system (username/password) in order to check the identity and authenticity of the examinee. PMID:25254250

  20. Security considerations and recommendations in computer-based testing.

    PubMed

    Al-Saleem, Saleh M; Ullah, Hanif

    2014-01-01

    Many organizations and institutions around the globe are moving or planning to move their paper-and-pencil based testing to computer-based testing (CBT). However, this conversion will not be the best option for all kinds of exams and it will require significant resources. These resources may include the preparation of item banks, methods for test delivery, procedures for test administration, and last but not least test security. Security aspects may include but are not limited to the identification and authentication of examinee, the risks that are associated with cheating on the exam, and the procedures related to test delivery to the examinee. This paper will mainly investigate the security considerations associated with CBT and will provide some recommendations for the security of these kinds of tests. We will also propose a palm-based biometric authentication system incorporated with basic authentication system (username/password) in order to check the identity and authenticity of the examinee.

  1. Coagulation tests show significant differences in patients with breast cancer.

    PubMed

    Tas, Faruk; Kilic, Leyla; Duranyildiz, Derya

    2014-06-01

    Activated coagulation and fibrinolytic systems in cancer patients are associated with tumor stroma formation and metastasis in different cancer types. The aim of this study is to explore the correlation of blood coagulation assays with various clinicopathologic factors in breast cancer patients. A total of 123 female breast cancer patients were enrolled into the study. All the patients were treatment naïve. Pretreatment blood coagulation tests including PT, APTT, PTA, INR, D-dimer, fibrinogen levels, and platelet counts were evaluated. Median age at diagnosis was 51 years (range 26-82). Twenty-two percent of the group consisted of metastatic breast cancer patients. The plasma levels for all coagulation tests revealed statistically significant differences between the patient and control groups except for PT (p<0.001 for all variables except for PT; p=0.08). Elderly age (>50 years) was associated with higher D-dimer levels (p=0.003). Metastatic patients exhibited significantly higher D-dimer values when compared with early breast cancer patients (p=0.049). Advanced tumor stage (T3 and T4) was associated with higher INR (p=0.05) and lower PTA (p=0.025). In conclusion, coagulation tests show significant differences in patients with breast cancer.

  2. Comparability of a Paper-Based Language Test and a Computer-Based Language Test.

    ERIC Educational Resources Information Center

    Choi, Inn-Chull; Kim, Kyoung Sung; Boo, Jaeyool

    2003-01-01

    Utilizing the Test of English Proficiency, developed by Seoul National University (TEPS), examined comparability between the paper-based language test and the computer-based language test based on content and construct validation employing content analyses based on corpus linguistic techniques in addition to such statistical analyses as…

  3. Patterns of Statewide Test Participation for Students with Significant Cognitive Disabilities

    ERIC Educational Resources Information Center

    Saven, Jessica L.; Anderson, Daniel; Nese, Joseph F. T.; Farley, Dan; Tindal, Gerald

    2016-01-01

    Students with significant cognitive disabilities are eligible to participate in two statewide testing options for accountability: alternate assessments or general assessments with appropriate accommodations. Participation guidelines are generally quite vague, leading to students "switching" test participation between years. In this…

  4. Safety Testing of Ammonium Nitrate Based Mixtures

    NASA Astrophysics Data System (ADS)

    Phillips, Jason; Lappo, Karmen; Phelan, James; Peterson, Nathan; Gilbert, Don

    2013-06-01

    Ammonium nitrate (AN)/ammonium nitrate based explosives have a lengthy documented history of use by adversaries in acts of terror. While historical research has been conducted on AN-based explosive mixtures, it has primarily focused on detonation performance while varying the oxygen balance between the oxidizer and fuel components. Similarly, historical safety data on these materials is often lacking in pertinent details such as specific fuel type, particle size parameters, oxidizer form, etc. A variety of AN-based fuel-oxidizer mixtures were tested for small-scale sensitivity in preparation for large-scale testing. Current efforts focus on maintaining a zero oxygen-balance (a stoichiometric ratio for active chemical participants) while varying factors such as charge geometry, oxidizer form, particle size, and inert diluent ratios. Small-scale safety testing was conducted on various mixtures and fuels. It was found that ESD sensitivity is significantly affected by particle size, while this is less so for impact and friction. Thermal testing is in progress to evaluate hazards that may be experienced during large-scale testing.

  5. Evaluating sufficient similarity for drinking-water disinfection by-product (DBP) mixtures with bootstrap hypothesis test procedures.

    PubMed

    Feder, Paul I; Ma, Zhenxu J; Bull, Richard J; Teuschler, Linda K; Rice, Glenn

    2009-01-01

    In chemical mixtures risk assessment, the use of dose-response data developed for one mixture to estimate risk posed by a second mixture depends on whether the two mixtures are sufficiently similar. While evaluations of similarity may be made using qualitative judgments, this article uses nonparametric statistical methods based on the "bootstrap" resampling technique to address the question of similarity among mixtures of chemical disinfectant by-products (DBP) in drinking water. The bootstrap resampling technique is a general-purpose, computer-intensive approach to statistical inference that substitutes empirical sampling for theoretically based parametric mathematical modeling. Nonparametric, bootstrap-based inference involves fewer assumptions than parametric normal theory based inference. The bootstrap procedure is appropriate, at least in an asymptotic sense, whether or not the parametric, distributional assumptions hold, even approximately. The statistical analysis procedures in this article are initially illustrated with data from 5 water treatment plants (Schenck et al., 2009), and then extended using data developed from a study of 35 drinking-water utilities (U.S. EPA/AMWA, 1989), which permits inclusion of a greater number of water constituents and increased structure in the statistical models.
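
    A generic nonparametric bootstrap hypothesis test of whether two mixture samples share a common mean, following the usual recipe of centering each sample at the pooled mean before resampling. The measurements are made up, and the article's actual similarity measures for DBP mixtures are more elaborate than this single-constituent comparison.

```python
import numpy as np

rng = np.random.default_rng(9)

# Made-up measurements of one DBP constituent's proportion in two mixtures.
mix_a = rng.normal(0.30, 0.05, 25)
mix_b = rng.normal(0.33, 0.05, 25)

def boot_mean_diff_test(x, y, B=5000):
    """Bootstrap test of H0: equal means, resampling from null-centered samples."""
    obs = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y]).mean()
    x0, y0 = x - x.mean() + pooled, y - y.mean() + pooled   # impose the null
    count = 0
    for _ in range(B):
        xb = rng.choice(x0, size=x0.size, replace=True)
        yb = rng.choice(y0, size=y0.size, replace=True)
        count += abs(xb.mean() - yb.mean()) >= obs
    return (count + 1) / (B + 1)

print("bootstrap p-value:", boot_mean_diff_test(mix_a, mix_b))
```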

  6. Flight tests of external modifications used to reduce blunt base drag

    NASA Technical Reports Server (NTRS)

    Powers, Sheryll Goecke

    1988-01-01

    The effectiveness of a trailing disk (the trapped vortex concept) in reducing the blunt base drag of an 8-in diameter body of revolution was studied from measurements made both in flight and in full-scale wind-tunnel tests. The experiment demonstrated the significant base drag reduction capability of the trailing disk to Mach 0.93. The maximum base drag reduction obtained from a cavity tested on the flight body of revolution was not significant. The effectiveness of a splitter plate and a vented-wall cavity in reducing the base drag of a quasi-two-dimensional fuselage closure was studied from base pressure measurements made in flight. The fuselage closure was between the two engines of the F-111 airplane; therefore, the base pressures were in the presence of jet engine exhaust. For Mach numbers from 1.10 to 1.51, significant base drag reduction was provided by the vented-wall cavity configuration. The splitter plate was not considered effective in reducing base drag at any Mach number tested.

  7. Shaping Up the Practice of Null Hypothesis Significance Testing.

    ERIC Educational Resources Information Center

    Wainer, Howard; Robinson, Daniel H.

    2003-01-01

    Discusses criticisms of null hypothesis significance testing (NHST), suggesting that historical use of NHST was reasonable, and current users should read Sir Ronald Fisher's applied work. Notes that modifications to NHST and interpretations of its outcomes might better suit the needs of modern science. Concludes that NHST is most often useful as…

  8. [Physicians with access to point-of-care tests significantly reduce the antibiotic prescription for common cold].

    PubMed

    Llor, Carles; Hernández, Silvia; Cots, Josep María; Bjerrum, Lars; González, Beatriz; García, Guillermo; Alcántara, Juan de Dios; Guerra, Gloria; Cid, Marina; Gómez, Manuel; Ortega, Jesús; Pérez, Carolina; Arranz, Javier; Monedero, María José; Paredes, José; Pineda, Vicenta

    2013-03-01

    This study was aimed at evaluating the effect of two levels of intervention on antibiotic prescribing in patients with common cold. A before-and-after audit-based study was carried out in primary healthcare centres in Spain. General practitioners registered all episodes of common cold during 15 working days in January and February 2008 (preintervention). Two types of intervention were considered: a full intervention, consisting of individual feedback based on results from the first registry, courses in rational antibiotic prescribing, guidelines, patient information leaflets, workshops on rapid tests (rapid antigen detection and C-reactive protein tests) and provision of these tests in the surgeries; and a partial intervention, consisting of all of the above except for the workshops, with no access to rapid tests. The same registry was repeated in 2009 (postintervention). In addition, new physicians filled out only the 2009 registry (control group). A total of 210 physicians underwent the full intervention, 71 the partial intervention, and 59 were assigned to the control group. The 340 doctors prescribed antibiotics in 274 episodes of a total of 12,373 cases registered (2.2%). The greatest percentage of antibiotic prescription was found in the control group (4.6%). The partial intervention increased the antibiotic prescription percentage from 1.1% to 2.7%, while only doctors who underwent the complete intervention achieved a significant reduction in antibiotics prescribed, from 2.9% before to 0.7% after the intervention (p<0.001). Only physicians with access to rapid tests significantly reduced antibiotic prescription in patients with common cold.

  9. Significance of acceleration period in a dynamic strength testing study.

    PubMed

    Chen, W L; Su, F C; Chou, Y L

    1994-06-01

    The acceleration period that occurs during isokinetic tests may provide valuable information regarding neuromuscular readiness to produce maximal contraction. The purpose of this study was to collect the normative data of acceleration time during isokinetic knee testing, to calculate the acceleration work (Wacc), and to determine the errors (ERexp, ERwork, ERpower) due to ignoring Wacc during explosiveness, total work, and average power measurements. Seven male and 13 female subjects attended the test by using the Cybex 325 system and electronic stroboscope machine for 10 testing speeds (30-300 degrees/sec). A three-way ANOVA was used to assess gender, direction, and speed factors on acceleration time, Wacc, and errors. The results indicated that acceleration time was significantly affected by speed and direction; Wacc and ERexp by speed, direction, and gender; and ERwork and ERpower by speed and gender. The errors appeared to increase when testing the female subjects, during the knee flexion test, or when speed increased. To increase validity in clinical testing, it is important to consider the acceleration phase effect, especially in higher velocity isokinetic testing or for weaker muscle groups.

  10. Advances in Testing the Statistical Significance of Mediation Effects

    ERIC Educational Resources Information Center

    Mallinckrodt, Brent; Abraham, W. Todd; Wei, Meifen; Russell, Daniel W.

    2006-01-01

    P. A. Frazier, A. P. Tix, and K. E. Barron (2004) highlighted a normal theory method popularized by R. M. Baron and D. A. Kenny (1986) for testing the statistical significance of indirect effects (i.e., mediator variables) in multiple regression contexts. However, simulation studies suggest that this method lacks statistical power relative to some…

  11. Point of data saturation was assessed using resampling methods in a survey with open-ended questions.

    PubMed

    Tran, Viet-Thi; Porcher, Raphael; Falissard, Bruno; Ravaud, Philippe

    2016-12-01

    To describe methods to determine sample sizes in surveys using open-ended questions and to assess how resampling methods can be used to determine data saturation in these surveys. We searched the literature for surveys with open-ended questions and assessed the methods used to determine sample size in 100 studies selected at random. Then, we used Monte Carlo simulations on data from a previous study on the burden of treatment to assess the probability of identifying new themes as a function of the number of patients recruited. In the literature, 85% of researchers used a convenience sample, with a median size of 167 participants (interquartile range [IQR] = 69-406). In our simulation study, the probability of identifying at least one new theme for the next included subject was 32%, 24%, and 12% after the inclusion of 30, 50, and 100 subjects, respectively. The inclusion of 150 participants at random resulted in the identification of 92% themes (IQR = 91-93%) identified in the original study. In our study, data saturation was most certainly reached for samples >150 participants. Our method may be used to determine when to continue the study to find new themes or stop because of futility. Copyright © 2016 Elsevier Inc. All rights reserved.
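
    A small Monte Carlo sketch of the resampling idea: given which themes each respondent mentioned, estimate the probability that the next included subject contributes a theme not yet seen at a given sample size. The subject-by-theme incidence matrix here is simulated, not the burden-of-treatment data used by the authors.

```python
import numpy as np

rng = np.random.default_rng(10)

# Simulated subject-by-theme incidence matrix (hypothetical): some themes are rare.
n_subjects, n_themes = 150, 40
prevalence = rng.uniform(0.02, 0.5, n_themes)
incidence = rng.random((n_subjects, n_themes)) < prevalence

def prob_new_theme(incidence, n, B=2000):
    """P(the (n+1)-th subject mentions at least one theme unseen among n subjects)."""
    hits = 0
    for _ in range(B):
        order = rng.permutation(incidence.shape[0])
        seen = incidence[order[:n]].any(axis=0)
        hits += (incidence[order[n]] & ~seen).any()
    return hits / B

for n in (30, 50, 100):
    print(f"n = {n:3d}: P(new theme) = {prob_new_theme(incidence, n):.2f}")
```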

  12. Significance tests for functional data with complex dependence structure.

    PubMed

    Staicu, Ana-Maria; Lahiri, Soumen N; Carroll, Raymond J

    2015-01-01

    We propose an L 2 -norm based global testing procedure for the null hypothesis that multiple group mean functions are equal, for functional data with complex dependence structure. Specifically, we consider the setting of functional data with a multilevel structure of the form groups-clusters or subjects-units, where the unit-level profiles are spatially correlated within the cluster, and the cluster-level data are independent. Orthogonal series expansions are used to approximate the group mean functions and the test statistic is estimated using the basis coefficients. The asymptotic null distribution of the test statistic is developed, under mild regularity conditions. To our knowledge this is the first work that studies hypothesis testing, when data have such complex multilevel functional and spatial structure. Two small-sample alternatives, including a novel block bootstrap for functional data, are proposed, and their performance is examined in simulation studies. The paper concludes with an illustration of a motivating experiment.

  13. Group mindfulness-based therapy significantly improves sexual desire in women.

    PubMed

    Brotto, Lori A; Basson, Rosemary

    2014-06-01

    At least a third of women across reproductive ages experience low sexual desire and impaired arousal. There is increasing evidence that mindfulness, defined as non-judgmental present moment awareness, may improve women's sexual functioning. The goal of this study was to test the effectiveness of mindfulness-based therapy, either immediately or after a 3-month waiting period, in women seeking treatment for low sexual desire and arousal. Women participated in four 90-min group sessions that included mindfulness meditation, cognitive therapy, and education. A total of 117 women were assigned to either the immediate treatment (n = 68, mean age 40.8 yrs) or delayed treatment (n = 49, mean age 42.2 yrs) group, in which women had two pre-treatment baseline assessments followed by treatment. A total of 95 women completed assessments through to the 6-month follow-up period. Compared to the delayed treatment control group, treatment significantly improved sexual desire, sexual arousal, lubrication, sexual satisfaction, and overall sexual functioning. Sex-related distress significantly decreased in both conditions, regardless of treatment, as did orgasmic difficulties and depressive symptoms. Increases in mindfulness and a reduction in depressive symptoms predicted improvements in sexual desire. Mindfulness-based group therapy significantly improved sexual desire and other indices of sexual response, and should be considered in the treatment of women's sexual dysfunction. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Development of an antigen-based rapid diagnostic test for the identification of blowfly (Calliphoridae) species of forensic significance.

    PubMed

    McDonagh, Laura; Thornton, Chris; Wallman, James F; Stevens, Jamie R

    2009-06-01

    In this study we examine the limitations of currently used sequence-based approaches to blowfly (Calliphoridae) identification and evaluate the utility of an immunological approach to discriminate between blowfly species of forensic importance. By investigating antigenic similarity and dissimilarity between the first instar larval stages of four forensically important blowfly species, we have been able to identify immunoreactive proteins of potential use in the development of species-specific immuno-diagnostic tests. Here we outline our protein-based approach to species determination, and describe how it may be adapted to develop rapid diagnostic assays for the 'on-site' identification of blowfly species.

  15. Statistical Significance Testing in Second Language Research: Basic Problems and Suggestions for Reform

    ERIC Educational Resources Information Center

    Norris, John M.

    2015-01-01

    Traditions of statistical significance testing in second language (L2) quantitative research are strongly entrenched in how researchers design studies, select analyses, and interpret results. However, statistical significance tests using "p" values are commonly misinterpreted by researchers, reviewers, readers, and others, leading to…

  16. Testing Mediation in Structural Equation Modeling: The Effectiveness of the Test of Joint Significance

    ERIC Educational Resources Information Center

    Leth-Steensen, Craig; Gallitto, Elena

    2016-01-01

    A large number of approaches have been proposed for estimating and testing the significance of indirect effects in mediation models. In this study, four sets of Monte Carlo simulations involving full latent variable structural equation models were run in order to contrast the effectiveness of the currently popular bias-corrected bootstrapping…

  17. Dose-rate plays a significant role in synchrotron radiation X-ray-induced damage of rodent testes.

    PubMed

    Chen, Heyu; Wang, Ban; Wang, Caixia; Cao, Wei; Zhang, Jie; Ma, Yingxin; Hong, Yunyi; Fu, Shen; Wu, Fan; Ying, Weihai

    2016-01-01

    Synchrotron radiation (SR) X-ray has significant potential for applications in medical imaging and cancer treatment. However, the mechanisms underlying SR X-ray-induced tissue damage remain unclear. Previous studies on regular X-ray-induced tissue damage have suggested that dose-rate could affect radiation damage. Because SR X-ray has exceedingly high dose-rate compared to regular X-ray, it remains to be determined if dose-rate may affect SR X-ray-induced tissue damage. We used rodent testes as a model to investigate the role of dose-rate in SR X-ray-induced tissue damage. One day after SR X-ray irradiation, we determined the effects of the irradiation of the same dosage at two different dose-rates, 0.11 Gy/s and 1.1 Gy/s, on TUNEL signals, caspase-3 activation and DNA double-strand breaks (DSBs) of the testes. Compared to those produced by the irradiation at 0.11 Gy/s, irradiation at 1.1 Gy/s produced higher levels of DSBs, TUNEL signals, and caspase-3 activation in the testes. Our study has provided the first evidence suggesting that dose-rate could be a significant factor in SR X-ray-induced tissue damage, which may establish a valuable base for utilizing this factor to manipulate the tissue damage in SR X-ray-based medical applications.

  18. Dose-rate plays a significant role in synchrotron radiation X-ray-induced damage of rodent testes

    PubMed Central

    Chen, Heyu; Wang, Ban; Wang, Caixia; Cao, Wei; Zhang, Jie; Ma, Yingxin; Hong, Yunyi; Fu, Shen; Wu, Fan; Ying, Weihai

    2016-01-01

    Synchrotron radiation (SR) X-ray has significant potential for applications in medical imaging and cancer treatment. However, the mechanisms underlying SR X-ray-induced tissue damage remain unclear. Previous studies on regular X-ray-induced tissue damage have suggested that dose-rate could affect radiation damage. Because SR X-ray has exceedingly high dose-rate compared to regular X-ray, it remains to be determined if dose-rate may affect SR X-ray-induced tissue damage. We used rodent testes as a model to investigate the role of dose-rate in SR X-ray-induced tissue damage. One day after SR X-ray irradiation, we determined the effects of the irradiation of the same dosage at two different dose-rates, 0.11 Gy/s and 1.1 Gy/s, on TUNEL signals, caspase-3 activation and DNA double-strand breaks (DSBs) of the testes. Compared to those produced by the irradiation at 0.11 Gy/s, irradiation at 1.1 Gy/s produced higher levels of DSBs, TUNEL signals, and caspase-3 activation in the testes. Our study has provided the first evidence suggesting that dose-rate could be a significant factor in SR X-ray-induced tissue damage, which may establish a valuable base for utilizing this factor to manipulate the tissue damage in SR X-ray-based medical applications. PMID:28078052

  19. Measures of precision for dissimilarity-based multivariate analysis of ecological communities.

    PubMed

    Anderson, Marti J; Santana-Garcon, Julia

    2015-01-01

    Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a PERMANOVA model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. © 2014 The Authors. Ecology Letters published by John Wiley & Sons Ltd and CNRS.
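
    A minimal sketch of the centroid-variability idea behind MultSE, assuming the formula MultSE = sqrt(V/n) with V = SS/(n-1) and SS = (1/n) times the sum of squared between-unit dissimilarities; the crude bootstrap shown only approximates the paper's double-resampling uncertainty assessment, and the community matrix is simulated.

```python
# Hedged sketch of MultSE for a single group of n sampling units with a square
# dissimilarity matrix D: SS = (1/n)*sum_{i<j} d_ij^2, V = SS/(n-1), MultSE = sqrt(V/n).
# The bootstrap over sampling units below is only a crude stand-in for the
# paper's double-resampling procedure. Data are illustrative.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def mult_se(D):
    """Pseudo multivariate standard error from a square dissimilarity matrix D."""
    n = D.shape[0]
    ss = (D[np.triu_indices(n, k=1)] ** 2).sum() / n
    v = ss / (n - 1)
    return np.sqrt(v / n)

rng = np.random.default_rng(1)
community = rng.poisson(3.0, size=(30, 20))            # 30 samples x 20 species
D = squareform(pdist(community, metric="braycurtis"))
print("MultSE:", mult_se(D))

# Crude bootstrap over sampling units to gauge uncertainty in MultSE.
boot = []
for _ in range(999):
    idx = rng.integers(0, 30, 30)
    Db = squareform(pdist(community[idx], metric="braycurtis"))
    boot.append(mult_se(Db))
print("bootstrap 2.5/97.5 percentiles:", np.percentile(boot, [2.5, 97.5]))
```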

  20. Adult age differences in perceptually based, but not conceptually based implicit tests of memory.

    PubMed

    Small, B J; Hultsch, D F; Masson, M E

    1995-05-01

    Implicit tests of memory assess the influence of recent experience without requiring awareness of remembering. Evidence concerning age differences on implicit tests of memory suggests small age differences in favor of younger adults. However, the majority of research examining this issue has relied upon perceptually based implicit tests. Recently, a second type of implicit test, one that relies upon conceptually based processes, has been identified. The pattern of age differences on this second type of implicit test is less clear. In the present study, we examined the pattern of age differences on one conceptually based (fact completion) and one perceptually based (stem completion) implicit test of memory, as well as two explicit tests of memory (fact and word recall). Tasks were administered to 403 adults from three age groups (19-34 years, 58-73 years, 74-89 years). Significant age differences in favor of the young were found on stem completion but not fact completion. Age differences were present for both word and fact recall. Correlational analyses examining the relationship of memory performance to other cognitive variables indicated that the implicit tests were supported by different components than the explicit tests, as well as being different from each other.

  1. DENBRAN: A basic program for a significance test for multivariate normality of clusters from branching patterns in dendrograms

    NASA Astrophysics Data System (ADS)

    Sneath, P. H. A.

    A BASIC program is presented for significance tests to determine whether a dendrogram is derived from clustering of points that belong to a single multivariate normal distribution. The significance tests are based on statistics of the Kolmogorov-Smirnov type, obtained by comparing the observed cumulative graph of branch levels with a graph for the hypothesis of multivariate normality. The program also permits testing whether the dendrogram could be from a cluster of lower dimensionality due to character correlations. The program makes provision for three similarity coefficients, (1) Euclidean distances, (2) squared Euclidean distances, and (3) Simple Matching Coefficients, and for five cluster methods: (1) WPGMA, (2) UPGMA, (3) Single Linkage (or Minimum Spanning Trees), (4) Complete Linkage, and (5) Ward's Increase in Sums of Squares. The program is entitled DENBRAN.
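
    A hedged Python sketch of the underlying idea (not the BASIC program itself): compare the branch levels of an observed UPGMA dendrogram with those expected when points are drawn from a single multivariate normal distribution, using a Monte Carlo two-sample Kolmogorov-Smirnov comparison. The reported p-value is only approximate because the null parameters are estimated and branch levels are not independent.

```python
# Hedged sketch of a DENBRAN-like check: Kolmogorov-Smirnov comparison of observed
# dendrogram branch (merge) levels with levels pooled from dendrograms of data
# simulated from a fitted multivariate normal. Data and settings are illustrative,
# and the p-value is approximate.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist
from scipy.stats import ks_2samp

def branch_levels(X):
    """Merge heights of a UPGMA ('average' linkage) dendrogram on Euclidean distances."""
    return linkage(pdist(X, metric="euclidean"), method="average")[:, 2]

rng = np.random.default_rng(2)
observed = rng.normal(size=(40, 5)) @ np.diag([3, 1, 1, 1, 1])   # toy data

obs_levels = branch_levels(observed)
mean, cov = observed.mean(axis=0), np.cov(observed, rowvar=False)

# Pool branch levels from dendrograms of simulated multivariate-normal data sets.
null_levels = np.concatenate([
    branch_levels(rng.multivariate_normal(mean, cov, size=observed.shape[0]))
    for _ in range(200)
])
stat, p = ks_2samp(obs_levels, null_levels)
print(f"KS statistic {stat:.3f}, approximate p = {p:.3f}")
```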

  2. Pregnant patients' risk perception of prenatal test results with uncertain fetal clinical significance: ultrasound versus advanced genetic testing.

    PubMed

    Richards, Elliott G; Sangi-Haghpeykar, Haleh; McGuire, Amy L; Van den Veyver, Ignatia B; Fruhman, Gary

    2015-12-01

    A common concern of utilizing prenatal advanced genetic testing is that a result of uncertain clinical significance will increase patient anxiety. However, prenatal ultrasound may also yield findings of uncertain significance, such as 'soft markers' for fetal aneuploidy, or findings with variable prognosis, such as mild ventriculomegaly. In this study we compared risk perception following uncertain test results from each modality. A single survey with repeated measures design was administered to 133 pregnant women. It included 'intolerance of uncertainty' questions, two hypothetical scenarios involving prenatal ultrasound or advanced genetic testing, and response questions. The primary outcome was risk perception score. Risk perception did not vary significantly between ultrasound and genetic scenarios (p = 0.17). The genetic scenario scored a higher accuracy (p = 0.04) but lower sense of empowerment (p = 0.01). Furthermore, patients were more likely to seek additional testing after an ultrasound than after genetic testing (p = 0.05). There were no differences in other secondary outcomes including perception of life-altering consequences and hypothetical worry, anxiety, confusion, or medical care decisions. Our data suggest that uncertain findings on prenatal genetic testing do not elicit a higher perception of risk or anxiety when compared to ultrasound findings of comparable uncertainty. © 2015 John Wiley & Sons, Ltd.

  3. Adaptive Set-Based Methods for Association Testing

    PubMed Central

    Su, Yu-Chen; Gauderman, W. James; Berhane, Kiros; Lewinger, Juan Pablo

    2017-01-01

    With a typical sample size of a few thousand subjects, a single genomewide association study (GWAS) using traditional one-SNP-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. While self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly ‘adapt’ to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a LASSO based test. PMID:26707371
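
    A hedged sketch of the ARTP idea described above, assuming a precomputed (B+1) x m matrix of per-SNP p-values whose first row is observed and whose remaining rows come from B phenotype permutations; the data below are simulated stand-ins.

```python
# Hedged ARTP sketch: adaptively choose the truncation point with the strongest
# combined evidence (log-product of the K smallest p-values) and assess the
# resulting minimum stage-1 p-value against the same permutations.
import numpy as np

def artp(pmat, truncation_points=(1, 2, 5, 10)):
    B1, m = pmat.shape                       # B1 = B permutation rows + 1 observed row
    sorted_p = np.sort(pmat, axis=1)
    # log-product of the K smallest p-values for every row and every truncation K
    stats_ = np.column_stack([np.log(sorted_p[:, :k]).sum(axis=1)
                              for k in truncation_points])
    # Stage 1: for each K, the p-value of a row is its rank among all rows
    # (smaller log-product means stronger combined evidence).
    ranks = (stats_[None, :, :] <= stats_[:, None, :]).sum(axis=1)
    stage1 = ranks / B1
    # Stage 2: the ARTP statistic is the minimum stage-1 p-value over K;
    # its significance is the rank of the observed minimum among all rows.
    min_p = stage1.min(axis=1)
    return (min_p <= min_p[0]).mean()

rng = np.random.default_rng(3)
pmat = rng.uniform(size=(1000, 50))          # stand-in for permutation p-values
pmat[0, :3] = [1e-4, 5e-4, 2e-3]             # make a few observed SNPs look associated
print("ARTP p-value:", artp(pmat))
```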

  4. A CNN based Hybrid approach towards automatic image registration

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal V.; Katiyar, Sunil K.

    2013-06-01

    Image registration is a key component of various image processing operations which involve the analysis of different image data sets. Automatic image registration domains have witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape as well as contextual information has limited the attainable accuracy. In this paper, we propose a framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as Vector Machines, Cellular Neural Network (CNN), SIFT, coreset, and Cellular Automata. CNN has been found to be effective in improving the feature matching and resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are cellular neural network based SIFT feature point optimisation, adaptive resampling and intelligent object modelling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. The system dynamically uses spectral and spatial information to represent contextual knowledge using a CNN-Prolog approach. The methodology is also shown to be effective in providing intelligent interpretation and adaptive resampling. Image registration is a key component of various image processing operations. In recent years, artificial intelligence methods have been applied to automatic image registration; their greatest drawback, which lowers the accuracy of the obtained results, is the inability to model shape and contextual information well. This paper proposes principles for accurate shape modelling and adaptive resampling using advanced techniques such as Vector Machines (VM), cellular neural networks (CNN), SIFT, Coreset and …

  5. Tests of English Language as Significant Thresholds for College-Bound Chinese and the Washback of Test-Preparation

    ERIC Educational Resources Information Center

    Matoush, Marylou M.; Fu, Danling

    2012-01-01

    Tests of English language mark significantly high thresholds for all college-bound students in the People's Republic of China. Many Chinese students hope to seek their fortunes at universities in the United States, or other English speaking countries. These students spend long hours, year after year, in test-preparation centres in order to develop…

  6. Model-Based GUI Testing Using Uppaal at Novo Nordisk

    NASA Astrophysics Data System (ADS)

    Hjort, Ulrik H.; Illum, Jacob; Larsen, Kim G.; Petersen, Michael A.; Skou, Arne

    This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML state machine model and generates a test suite satisfying some testing criterion, such as edge or state coverage, and converts the individual test cases into a scripting language that can be automatically executed against the target. The tool has significantly reduced the time required for test construction and generation, and reduced the number of test scripts while increasing the coverage.
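
    The sketch below illustrates the edge-coverage criterion mentioned above on a made-up state machine; it is not the Uppaal/UML-based tool itself, and the states, events and coverage strategy are all illustrative assumptions.

```python
# Hedged sketch of edge-coverage test generation for a small, made-up state machine:
# repeatedly take the shortest path that ends by exercising a not-yet-covered
# transition until every transition has been covered at least once.
from collections import deque

transitions = {                      # state -> list of (event, next_state); illustrative
    "Off":    [("power_on", "Idle")],
    "Idle":   [("start", "Dosing"), ("power_off", "Off")],
    "Dosing": [("done", "Idle"), ("error", "Alarm")],
    "Alarm":  [("reset", "Idle")],
}

def shortest_path_to_uncovered(start, transitions, uncovered):
    """BFS for the shortest hop sequence from `start` ending with an uncovered edge."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        for event, nxt in transitions.get(state, []):
            if (state, event) in uncovered:
                return path + [(state, event, nxt)]
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(state, event, nxt)]))
    return None

def edge_coverage_suite(transitions, initial="Off"):
    uncovered = {(s, e) for s, outs in transitions.items() for e, _ in outs}
    suite = []
    while uncovered:
        state, events = initial, []
        while True:
            hop = shortest_path_to_uncovered(state, transitions, uncovered)
            if hop is None:
                break
            for s, e, nxt in hop:
                uncovered.discard((s, e))
                events.append(e)
                state = nxt
        if not events:               # remaining transitions unreachable from the initial state
            break
        suite.append(events)
    return suite

for i, case in enumerate(edge_coverage_suite(transitions), 1):
    print("test case", i, ":", " -> ".join(case))
```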

  7. Medical students’ attitudes and perspectives regarding novel computer-based practical spot tests compared to traditional practical spot tests

    PubMed Central

    Wijerathne, Buddhika; Rathnayake, Geetha

    2013-01-01

    Background Most universities currently practice traditional practical spot tests to evaluate students. However, traditional methods have several disadvantages. Computer-based examination techniques are becoming more popular among medical educators worldwide. Therefore incorporating the computer interface in practical spot testing is a novel concept that may minimize the shortcomings of traditional methods. Assessing students’ attitudes and perspectives is vital in understanding how students perceive the novel method. Methods One hundred and sixty medical students were randomly allocated to either a computer-based spot test (n=80) or a traditional spot test (n=80). The students rated their attitudes and perspectives regarding the spot test method soon after the test. The results were described comparatively. Results Students had higher positive attitudes towards the computer-based practical spot test compared to the traditional spot test. Their recommendations to introduce the novel practical spot test method for future exams and to other universities were statistically significantly higher. Conclusions The computer-based practical spot test is viewed as more acceptable to students than the traditional spot test. PMID:26451213

  8. Adaptive Set-Based Methods for Association Testing.

    PubMed

    Su, Yu-Chen; Gauderman, William James; Berhane, Kiros; Lewinger, Juan Pablo

    2016-02-01

    With a typical sample size of a few thousand subjects, a single genome-wide association study (GWAS) using traditional one single nucleotide polymorphism (SNP)-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. Although self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly "adapt" to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a least absolute shrinkage and selection operator (LASSO)-based test. © 2015 WILEY PERIODICALS, INC.

  9. Many tests of significance: new methods for controlling type I errors.

    PubMed

    Keselman, H J; Miller, Charles W; Holland, Burt

    2011-12-01

    There have been many discussions of how Type I errors should be controlled when many hypotheses are tested (e.g., all possible comparisons of means, correlations, proportions, the coefficients in hierarchical models, etc.). By and large, researchers have adopted familywise (FWER) control, though this practice certainly is not universal. Familywise control is intended to deal with the multiplicity issue of computing many tests of significance, yet such control is conservative--that is, less powerful--compared to per test/hypothesis control. The purpose of our article is to introduce the readership, particularly those readers familiar with issues related to controlling Type I errors when many tests of significance are computed, to newer methods that provide protection from the effects of multiple testing, yet are more powerful than familywise controlling methods. Specifically, we introduce a number of procedures that control the k-FWER. These methods--say, 2-FWER instead of 1-FWER (i.e., FWER)--are equivalent to specifying that the probability of 2 or more false rejections is controlled at .05, whereas FWER controls the probability of any (i.e., 1 or more) false rejections at .05. 2-FWER implicitly tolerates 1 false rejection and makes no explicit attempt to control the probability of its occurrence, unlike FWER, which tolerates no false rejections at all. More generally, k-FWER tolerates k - 1 false rejections, but controls the probability of k or more false rejections at α =.05. We demonstrate with two published data sets how more hypotheses can be rejected with k-FWER methods compared to FWER control.
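
    A minimal sketch of one simple k-FWER procedure, the generalized Bonferroni rule that rejects H_i when p_i <= k*alpha/m, contrasted with ordinary (1-FWER) Bonferroni; this is not necessarily one of the specific procedures the article reviews, and the p-values are simulated.

```python
# Hedged sketch: generalized Bonferroni control of the k-FWER, i.e. the probability
# of k or more false rejections, versus ordinary Bonferroni (k = 1). P-values are simulated.
import numpy as np

def k_fwer_bonferroni(pvals, k=2, alpha=0.05):
    """Reject all hypotheses with p <= k*alpha/m; controls P(at least k false rejections) at alpha."""
    m = len(pvals)
    return pvals <= k * alpha / m

rng = np.random.default_rng(4)
pvals = np.concatenate([rng.uniform(0, 0.002, 5),   # a few genuine effects
                        rng.uniform(size=95)])       # null hypotheses
print("1-FWER (Bonferroni) rejections:", int(k_fwer_bonferroni(pvals, k=1).sum()))
print("2-FWER rejections:             ", int(k_fwer_bonferroni(pvals, k=2).sum()))
```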

  10. Model selection for semiparametric marginal mean regression accounting for within-cluster subsampling variability and informative cluster size.

    PubMed

    Shen, Chung-Wei; Chen, Yi-Hau

    2018-03-13

    We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.

  11. Communication Optimizations for a Wireless Distributed Prognostic Framework

    NASA Technical Reports Server (NTRS)

    Saha, Sankalita; Saha, Bhaskar; Goebel, Kai

    2009-01-01

    Distributed architecture for prognostics is an essential step in prognostic research in order to enable feasible real-time system health management. Communication overhead is an important design problem for such systems. In this paper we focus on communication issues faced in the distributed implementation of an important class of algorithms for prognostics - particle filters. In spite of being computation and memory intensive, particle filters lend themselves well to distributed implementation except for one significant step - resampling. We propose a new resampling scheme called parameterized resampling that attempts to reduce communication between collaborating nodes in a distributed wireless sensor network. Analysis and comparison with relevant resampling schemes are also presented. A battery health management system is used as the target application. Our proposed resampling scheme performs significantly better than existing schemes by attempting to reduce both the communication message length and the total number of communication messages exchanged, while not compromising prediction accuracy and precision. Future work will explore the effects of the new resampling scheme on the overall computational performance of the whole system, as well as a full implementation of the new schemes on Sun SPOT devices. Exploring different network architectures for efficient communication is an important future research direction as well.
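
    For background, the sketch below shows the standard systematic resampling step of a particle filter, the operation whose communication cost motivates the paper; it is not the proposed parameterized resampling scheme.

```python
# Hedged sketch of standard systematic resampling in a particle filter
# (baseline only; NOT the paper's parameterized resampling scheme).
import numpy as np

def systematic_resample(weights, rng):
    """Return particle indices drawn by systematic resampling."""
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n
    cumulative = np.cumsum(weights / weights.sum())
    cumulative[-1] = 1.0                      # guard against floating-point round-off
    return np.searchsorted(cumulative, positions)

rng = np.random.default_rng(5)
w = rng.uniform(size=8)                       # unnormalised particle weights
print(systematic_resample(w, rng))
```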

  12. Ensemble-based prediction of RNA secondary structures.

    PubMed

    Aghaeepour, Nima; Hoos, Holger H

    2013-04-24

    Accurate structure prediction methods play an important role in the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that has been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach. In addition, AveRNA allows an intuitive and effective control of the trade-off between

  13. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    ERIC Educational Resources Information Center

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…
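
    A small sketch of the kind of "what if" analysis described above: hold an observed effect size fixed and recompute the two-sample t-test p-value for hypothetical per-group sample sizes (the effect size of 0.30 is illustrative).

```python
# Hedged sketch of a "what if" analysis: with a fixed standardized effect size d,
# the two-sample t statistic is roughly d*sqrt(n/2) with 2n-2 degrees of freedom,
# so statistical significance is driven by n as much as by the effect itself.
import numpy as np
from scipy import stats

def p_for_n(cohens_d, n_per_group):
    t = cohens_d * np.sqrt(n_per_group / 2.0)
    df = 2 * n_per_group - 2
    return 2 * stats.t.sf(abs(t), df)

for n in (10, 30, 100, 300):
    print(f"d = 0.30, n per group = {n:4d}, p = {p_for_n(0.30, n):.4f}")
```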

  14. Compare diagnostic tests using transformation-invariant smoothed ROC curves⋆

    PubMed Central

    Tang, Liansheng; Du, Pang; Wu, Chengqing

    2012-01-01

    The receiver operating characteristic (ROC) curve, which plots true positive rates against false positive rates as the threshold varies, is an important tool for evaluating biomarkers in diagnostic medicine studies. By definition, the ROC curve is monotone increasing from 0 to 1 and is invariant to any monotone transformation of test results, and it is often a curve with a certain level of smoothness when test results from the diseased and non-diseased subjects follow continuous distributions. Most existing ROC curve estimation methods do not guarantee all of these properties. One of the exceptions is Du and Tang (2009), which applies a certain monotone spline regression procedure to empirical ROC estimates. However, their method does not consider the inherent correlations between empirical ROC estimates. This makes the derivation of the asymptotic properties very difficult. In this paper we propose a penalized weighted least squares estimation method, which incorporates the covariance between empirical ROC estimates as a weight matrix. The resulting estimator satisfies all the aforementioned properties, and we show that it is also consistent. A resampling approach is then used to extend our method for comparisons of two or more diagnostic tests. Our simulations show a significantly improved performance over the existing method, especially for steep ROC curves. We then apply the proposed method to a cancer diagnostic study that compares several newly developed diagnostic biomarkers to a traditional one. PMID:22639484
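
    A hedged sketch of the resampling comparison step only: a paired bootstrap of the difference between two empirical AUCs measured on the same subjects. It does not reproduce the paper's penalized weighted least-squares smoothed ROC estimator, and the markers are simulated.

```python
# Hedged sketch: paired bootstrap comparison of two empirical AUCs on the same subjects.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n = 300
disease = rng.integers(0, 2, n)
marker_new = disease * 1.2 + rng.normal(size=n)    # stronger (hypothetical) biomarker
marker_old = disease * 0.6 + rng.normal(size=n)    # weaker (hypothetical) biomarker

obs_diff = roc_auc_score(disease, marker_new) - roc_auc_score(disease, marker_old)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    if disease[idx].min() == disease[idx].max():   # resample must contain both classes
        continue
    boot.append(roc_auc_score(disease[idx], marker_new[idx])
                - roc_auc_score(disease[idx], marker_old[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC difference {obs_diff:.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```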

  15. A sup-score test for the cure fraction in mixture models for long-term survivors.

    PubMed

    Hsu, Wei-Wen; Todem, David; Kim, KyungMann

    2016-12-01

    The evaluation of cure fractions in oncology research under the well known cure rate model has attracted considerable attention in the literature, but most of the existing testing procedures have relied on restrictive assumptions. A common assumption has been to restrict the cure fraction to a constant under alternatives to homogeneity, thereby neglecting any information from covariates. This article extends the literature by developing a score-based statistic that incorporates covariate information to detect cure fractions, with the existing testing procedure serving as a special case. A complication of this extension, however, is that the implied hypotheses are not typical and standard regularity conditions to conduct the test may not even hold. Using empirical processes arguments, we construct a sup-score test statistic for cure fractions and establish its limiting null distribution as a functional of mixtures of chi-square processes. In practice, we suggest a simple resampling procedure to approximate this limiting distribution. Our simulation results show that the proposed test can greatly improve efficiency over tests that neglect the heterogeneity of the cure fraction under the alternative. The practical utility of the methodology is illustrated using ovarian cancer survival data with long-term follow-up from the surveillance, epidemiology, and end results registry. © 2016, The International Biometric Society.

  16. Significance of Landsat-7 Spacecraft Level Thermal Balance and Thermal Test for ETM+Instrument

    NASA Technical Reports Server (NTRS)

    Choi, Michael K.

    1999-01-01

    The thermal design and the instrument thermal vacuum (T/V) test of the Landsat-7 Enhanced Thematic Mapper Plus (ETM+) instrument were based on the Landsat-4, 5 and 6 heritage. The ETM+ scanner thermal model was also inherited from Landsat-4, 5 and 6. The temperature predictions of many scanner components in the original thermal model had poor agreement with the spacecraft and instrument integrated sun-pointing safehold (SPSH) thermal balance (T/B) test results. The spacecraft and instrument integrated T/B test led to a change of the Full Aperture Calibrator (FAC) motor stack "solar shield" coating from MIL-C-5541 to multi-layer insulation (MLI) thermal blanket. The temperature predictions of the Auxiliary Electronics Module (AEM) in the thermal model also had poor agreement with the T/B test results. Modifications to the scanner and AEM thermal models were performed to give good agreement between the temperature predictions and the test results. The correlated ETM+ thermal model was used to obtain flight temperature predictions. The flight temperature predictions in the nominal 15-orbit mission profile, plus margins, were used as the yellow limits for most of the ETM+ components. The spacecraft and instrument integrated T/B and T/V test also revealed that the standby heater capacity on the Scan Mirror Assembly (SMA) was insufficient when the Earth Background Simulator (EBS) was 1 50C or colder, and the baffle heater possibly caused the coherent noise in the narrow band data when it was on. Also, the cooler cool-down was significantly faster than that in the instrument T/V test, and the coldest Cold Focal Plane Array (CFPA) temperature achieved was colder than in the instrument T/V test.

  17. Risk-Based Object Oriented Testing

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda H.; Stapko, Ruth; Gallo, Albert

    2000-01-01

    Software testing is a well-defined phase of the software development life cycle. Functional ("black box") testing and structural ("white box") testing are two methods of test case design commonly used by software developers. A lesser known testing method is risk-based testing, which takes into account the probability of failure of a portion of code as determined by its complexity. For object oriented programs, a methodology is proposed for identification of risk-prone classes. Risk-based testing is a highly effective testing technique that can be used to find and fix the most important problems as quickly as possible.

  18. Conducting tests for statistically significant differences using forest inventory data

    Treesearch

    James A. Westfall; Scott A. Pugh; John W. Coulston

    2013-01-01

    Many forest inventory and monitoring programs are based on a sample of ground plots from which estimates of forest resources are derived. In addition to evaluating metrics such as number of trees or amount of cubic wood volume, it is often desirable to make comparisons between resource attributes. To properly conduct statistical tests for differences, it is imperative...

  19. The relationship between academic self-concept, intrinsic motivation, test anxiety, and academic achievement among nursing students: mediating and moderating effects.

    PubMed

    Khalaila, Rabia

    2015-03-01

    The impact of cognitive factors on academic achievement is well documented. However, little is known about the mediating and moderating effects of non-cognitive, motivational and situational factors on academic achievement among nursing students. The aim of this study is to explore the direct and/or indirect effects of academic self-concept on academic achievement, and to examine whether intrinsic motivation moderates the negative effect of test anxiety on academic achievement. This descriptive-correlational study was carried out on a convenience sample of 170 undergraduate nursing students in an academic college in northern Israel. Academic motivation, academic self-concept and test anxiety scales were used as measuring instruments. Bootstrapping with resampling strategies was used to test the multiple-mediator model and to examine the moderator effect. A higher self-concept was found to be directly related to greater academic achievement. Test anxiety and intrinsic motivation were found to be significant mediators in the relationship between self-concept and academic achievement. In addition, intrinsic motivation significantly moderated the negative effect of test anxiety on academic achievement. The results suggest that institutions should pay more attention to the enhancement of motivational factors (e.g., self-concept and motivation) and alleviate the negative impact of situational factors (e.g., test anxiety) when offering psycho-educational interventions designed to improve nursing students' academic achievements. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Quantification and statistical significance analysis of group separation in NMR-based metabonomics studies

    PubMed Central

    Goodpaster, Aaron M.; Kennedy, Michael A.

    2015-01-01

    Currently, no standard metrics are used to quantify cluster separation in PCA or PLS-DA scores plots for metabonomics studies or to determine if cluster separation is statistically significant. Lack of such measures makes it virtually impossible to compare independent or inter-laboratory studies and can lead to confusion in the metabonomics literature when authors putatively identify metabolites distinguishing classes of samples based on visual and qualitative inspection of scores plots that exhibit marginal separation. While previous papers have addressed quantification of cluster separation in PCA scores plots, none have advocated routine use of a quantitative measure of separation that is supported by a standard and rigorous assessment of whether or not the cluster separation is statistically significant. Here quantification and statistical significance of separation of group centroids in PCA and PLS-DA scores plots are considered. The Mahalanobis distance is used to quantify the distance between group centroids, and the two-sample Hotelling's T2 test is computed for the data, related to an F-statistic, and then an F-test is applied to determine if the cluster separation is statistically significant. We demonstrate the value of this approach using four datasets containing various degrees of separation, ranging from groups that had no apparent visual cluster separation to groups that had no visual cluster overlap. Widespread adoption of such concrete metrics to quantify and evaluate the statistical significance of PCA and PLS-DA cluster separation would help standardize reporting of metabonomics data. PMID:26246647
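
    A minimal sketch of the quantification described above: the Mahalanobis distance between two group centroids, converted to Hotelling's two-sample T2 and an F-test. The two-dimensional "scores" are simulated stand-ins for PCA or PLS-DA scores.

```python
# Sketch: Mahalanobis distance between two group centroids, Hotelling's two-sample
# T^2, and the corresponding F-test for whether the separation is statistically significant.
import numpy as np
from scipy import stats

def hotelling_two_sample(X1, X2):
    n1, p = X1.shape
    n2 = X2.shape[0]
    diff = X1.mean(axis=0) - X2.mean(axis=0)
    S_pooled = ((n1 - 1) * np.cov(X1, rowvar=False)
                + (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
    d2 = diff @ np.linalg.solve(S_pooled, diff)          # squared Mahalanobis distance
    t2 = n1 * n2 / (n1 + n2) * d2
    f = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    p_value = stats.f.sf(f, p, n1 + n2 - p - 1)
    return np.sqrt(d2), p_value

rng = np.random.default_rng(7)
group1 = rng.normal(0.0, 1.0, size=(25, 2))              # e.g. first two score components
group2 = rng.normal(0.8, 1.0, size=(25, 2))
dist, p = hotelling_two_sample(group1, group2)
print(f"Mahalanobis distance {dist:.2f}, F-test p = {p:.4f}")
```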

  1. Methodology to identify risk-significant components for inservice inspection and testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, M.T.; Hartley, R.S.; Jones, J.L. Jr.

    1992-08-01

    Periodic inspection and testing of vital system components should be performed to ensure the safe and reliable operation of Department of Energy (DOE) nuclear processing facilities. Probabilistic techniques may be used to help identify and rank components by their relative risk. A risk-based ranking would allow varied DOE sites to implement inspection and testing programs in an effective and cost-efficient manner. This report describes a methodology that can be used to rank components, while addressing multiple risk issues.

  2. Thou Shalt Not Bear False Witness against Null Hypothesis Significance Testing

    ERIC Educational Resources Information Center

    García-Pérez, Miguel A.

    2017-01-01

    Null hypothesis significance testing (NHST) has been the subject of debate for decades and alternative approaches to data analysis have been proposed. This article addresses this debate from the perspective of scientific inquiry and inference. Inference is an inverse problem and application of statistical methods cannot reveal whether effects…

  3. Automation and Evaluation of the SOWH Test with SOWHAT.

    PubMed

    Church, Samuel H; Ryan, Joseph F; Dunn, Casey W

    2015-11-01

    The Swofford-Olsen-Waddell-Hillis (SOWH) test evaluates statistical support for incongruent phylogenetic topologies. It is commonly applied to determine if the maximum likelihood tree in a phylogenetic analysis is significantly different than an alternative hypothesis. The SOWH test compares the observed difference in log-likelihood between two topologies to a null distribution of differences in log-likelihood generated by parametric resampling. The test is a well-established phylogenetic method for topology testing, but it is sensitive to model misspecification, it is computationally burdensome to perform, and its implementation requires the investigator to make several decisions that each have the potential to affect the outcome of the test. We analyzed the effects of multiple factors using seven data sets to which the SOWH test was previously applied. These factors include the number of sample replicates, likelihood software, the introduction of gaps to simulated data, the use of distinct models of evolution for data simulation and likelihood inference, and a suggested test correction wherein an unresolved "zero-constrained" tree is used to simulate sequence data. To facilitate these analyses and future applications of the SOWH test, we wrote SOWHAT, a program that automates the SOWH test. We find that inadequate bootstrap sampling can change the outcome of the SOWH test. The results also show that using a zero-constrained tree for data simulation can result in a wider null distribution and higher p-values, but does not change the outcome of the SOWH test for most of the data sets tested here. These results will help others implement and evaluate the SOWH test and allow us to provide recommendations for future applications of the SOWH test. SOWHAT is available for download from https://github.com/josephryan/SOWHAT. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
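
    A hedged sketch of the final step of an SOWH-style test: compare the observed log-likelihood difference between the ML and constrained topologies with the same difference recomputed on data simulated under the constrained topology. The likelihood values below are placeholders for output that, in practice, comes from phylogenetics software such as that orchestrated by SOWHAT.

```python
# Hedged sketch of the SOWH-style comparison step; the expensive simulation and
# maximum-likelihood searches are assumed to have been done by external tools,
# and the numbers here are placeholders.
import numpy as np

def sowh_p_value(obs_delta, null_deltas):
    null_deltas = np.asarray(null_deltas)
    # add-one correction so the estimated p-value is never exactly zero
    return (1 + (null_deltas >= obs_delta).sum()) / (1 + len(null_deltas))

obs_delta = 14.2                                    # lnL(ML tree) - lnL(alternative), placeholder
null_deltas = np.random.default_rng(8).gamma(2.0, 2.0, size=100)  # stand-in null values
print("SOWH-style p-value:", sowh_p_value(obs_delta, null_deltas))
```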

  4. Wayside Bearing Fault Diagnosis Based on a Data-Driven Doppler Effect Eliminator and Transient Model Analysis

    PubMed Central

    Liu, Fang; Shen, Changqing; He, Qingbo; Zhang, Ao; Liu, Yongbin; Kong, Fanrang

    2014-01-01

    A fault diagnosis strategy based on the wayside acoustic monitoring technique is investigated for locomotive bearing fault diagnosis. Inspired by the transient modeling analysis method based on correlation filtering analysis, a so-called Parametric-Mother-Doppler-Wavelet (PMDW) is constructed with six parameters, including a center characteristic frequency and five kinematic model parameters. A Doppler effect eliminator containing a PMDW generator, a correlation filtering analysis module, and a signal resampler is invented to eliminate the Doppler effect embedded in the acoustic signal of the recorded bearing. Through the Doppler effect eliminator, the five kinematic model parameters can be identified based on the signal itself. Then, the signal resampler is applied to eliminate the Doppler effect using the identified parameters. With the ability to detect early bearing faults, the transient model analysis method is employed to detect localized bearing faults after the embedded Doppler effect is eliminated. The effectiveness of the proposed fault diagnosis strategy is verified via simulation studies and applications to diagnose locomotive roller bearing defects. PMID:24803197

  5. Exploring the Replicability of a Study's Results: Bootstrap Statistics for the Multivariate Case.

    ERIC Educational Resources Information Center

    Thompson, Bruce

    Conventional statistical significance tests do not inform the researcher regarding the likelihood that results will replicate. One strategy for evaluating result replication is to use a "bootstrap" resampling of a study's data so that the stability of results across numerous configurations of the subjects can be explored. This paper…

  6. Agonist anti-GITR antibody significantly enhances the therapeutic efficacy of Listeria monocytogenes-based immunotherapy.

    PubMed

    Shrimali, Rajeev; Ahmad, Shamim; Berrong, Zuzana; Okoev, Grigori; Matevosyan, Adelaida; Razavi, Ghazaleh Shoja E; Petit, Robert; Gupta, Seema; Mkrtichyan, Mikayel; Khleif, Samir N

    2017-08-15

    We previously demonstrated that in addition to generating an antigen-specific immune response, Listeria monocytogenes (Lm)-based immunotherapy significantly reduces the ratio of regulatory T cells (Tregs)/CD4+ and myeloid-derived suppressor cells (MDSCs) in the tumor microenvironment. Since Lm-based immunotherapy is able to inhibit the immune suppressive environment, we hypothesized that combining this treatment with agonist antibody to a co-stimulatory receptor that would further boost the effector arm of immunity will result in significant improvement of anti-tumor efficacy of treatment. Here we tested the immune and therapeutic efficacy of Listeria-based immunotherapy combination with agonist antibody to glucocorticoid-induced tumor necrosis factor receptor-related protein (GITR) in TC-1 mouse tumor model. We evaluated the potency of combination on tumor growth and survival of treated animals and profiled tumor microenvironment for effector and suppressor cell populations. We demonstrate that combination of Listeria-based immunotherapy with agonist antibody to GITR synergizes to improve immune and therapeutic efficacy of treatment in a mouse tumor model. We show that this combinational treatment leads to significant inhibition of tumor-growth, prolongs survival and leads to complete regression of established tumors in 60% of treated animals. We determined that this therapeutic benefit of combinational treatment is due to a significant increase in tumor infiltrating effector CD4+ and CD8+ T cells along with a decrease of inhibitory cells. To our knowledge, this is the first study that exploits Lm-based immunotherapy combined with agonist anti-GITR antibody as a potent treatment strategy that simultaneously targets both the effector and suppressor arms of the immune system, leading to significantly improved anti-tumor efficacy. We believe that our findings depicted in this manuscript provide a promising and translatable strategy that can enhance the overall

  7. The Need for Nuance in the Null Hypothesis Significance Testing Debate

    ERIC Educational Resources Information Center

    Häggström, Olle

    2017-01-01

    Null hypothesis significance testing (NHST) provides an important statistical toolbox, but there are a number of ways in which it is often abused and misinterpreted, with bad consequences for the reliability and progress of science. Parts of the contemporary NHST debate, especially in the psychological sciences, are reviewed, and a suggestion is made…

  8. Multivariate bias adjustment of high-dimensional climate simulations: the Rank Resampling for Distributions and Dependences (R2D2) bias correction

    NASA Astrophysics Data System (ADS)

    Vrac, Mathieu

    2018-06-01

    Climate simulations often suffer from statistical biases with respect to observations or reanalyses. It is therefore common to correct (or adjust) those simulations before using them as inputs into impact models. However, most bias correction (BC) methods are univariate and so do not account for the statistical dependences linking the different locations and/or physical variables of interest. In addition, they are often deterministic, and stochasticity is frequently needed to investigate climate uncertainty and to add constrained randomness to climate simulations that do not possess a realistic variability. This study presents a multivariate method of rank resampling for distributions and dependences (R2D2) bias correction allowing one to adjust not only the univariate distributions but also their inter-variable and inter-site dependence structures. Moreover, the proposed R2D2 method provides some stochasticity since it can generate as many multivariate corrected outputs as the number of statistical dimensions (i.e., number of grid cells × number of climate variables) of the simulations to be corrected. It is based on an assumption of stability in time of the dependence structure - making it possible to deal with a high number of statistical dimensions - that lets the climate model drive the temporal properties and their changes in time. R2D2 is applied on temperature and precipitation reanalysis time series with respect to high-resolution reference data over the southeast of France (1506 grid cells). Bivariate, 1506-dimensional and 3012-dimensional versions of R2D2 are tested over a historical period and compared to a univariate BC. How the different BC methods behave in a climate change context is also illustrated with an application to regional climate simulations over the 2071-2100 period. The results indicate that the 1d-BC basically reproduces the climate model multivariate properties, 2d-R2D2 is only satisfying in the inter-variable context, 1506d-R2D2
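
    For context, the sketch below shows a plain empirical quantile mapping, the kind of univariate bias correction that R2D2 complements; the rank-resampling step that restores multivariate dependence is not reproduced, and the series are synthetic.

```python
# Hedged sketch of empirical quantile mapping (a generic univariate BC component;
# NOT the R2D2 rank-resampling method itself). Series are synthetic.
import numpy as np

def quantile_map(model, reference):
    """Map each model value onto the reference distribution by matching empirical quantiles."""
    ranks = np.searchsorted(np.sort(model), model, side="right") / len(model)
    return np.quantile(reference, np.clip(ranks, 0.0, 1.0))

rng = np.random.default_rng(9)
reference = rng.gamma(2.0, 3.0, 5000)        # "observed" precipitation-like series
model = rng.gamma(1.5, 4.5, 5000)            # biased simulation of the same variable
corrected = quantile_map(model, reference)
print("means (reference, model, corrected):",
      reference.mean(), model.mean(), corrected.mean())
```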

  9. Effects of computer-based immediate feedback on foreign language listening comprehension and test-associated anxiety.

    PubMed

    Lee, Shu-Ping; Su, Hui-Kai; Lee, Shin-Da

    2012-06-01

    This study investigated the effects of immediate feedback on computer-based foreign language listening comprehension tests and on intrapersonal test-associated anxiety in 72 English-major college students at a Taiwanese university. Computer-based foreign language listening comprehension tests designed in MOODLE, a dynamic e-learning environment, with or without immediate feedback, together with the State-Trait Anxiety Inventory (STAI), were administered and repeated after one week. The analysis indicated that immediate feedback during testing caused significantly higher anxiety and resulted in significantly higher listening scores than in the control group, which had no feedback. However, repeated feedback did not affect test anxiety or listening scores. Computer-based immediate feedback did not lower the debilitating effects of anxiety but enhanced students' intrapersonal eustress-like anxiety and probably improved their attention during listening tests. Computer-based tests with immediate feedback might help foreign language learners to increase attention in foreign language listening comprehension.

  10. Cloud-based solution to identify statistically significant MS peaks differentiating sample categories.

    PubMed

    Ji, Jun; Ling, Jeffrey; Jiang, Helen; Wen, Qiaojun; Whitin, John C; Tian, Lu; Cohen, Harvey J; Ling, Xuefeng B

    2013-03-23

    Mass spectrometry (MS) has evolved to become the primary high-throughput tool for proteomics-based biomarker discovery. Multiple challenges in protein MS data analysis remain: management of large-scale and complex data sets; MS peak identification and indexing; and high-dimensional differential analysis of peaks with control of the false discovery rate (FDR) across concurrent statistical tests. "Turnkey" solutions are needed for biomarker investigations to rapidly process MS data sets and identify statistically significant peaks for subsequent validation. Here we present an efficient and effective solution, which provides experimental biologists easy access to "cloud" computing capabilities to analyze MS data. The web portal can be accessed at http://transmed.stanford.edu/ssa/. The presented web application supports online uploading and analysis of large-scale MS data with a simple user interface. This bioinformatic tool will facilitate the discovery of potential protein biomarkers using MS.
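
    A hedged sketch of the kind of peak-wise differential testing with FDR control described above: a Mann-Whitney test per peak across two sample categories followed by Benjamini-Hochberg adjustment. This is a generic illustration, not the portal's exact pipeline, and the intensities are synthetic.

```python
# Hedged sketch: per-peak Mann-Whitney tests between two sample categories,
# then Benjamini-Hochberg FDR adjustment. Intensities are synthetic.
import numpy as np
from scipy import stats

def benjamini_hochberg(pvals):
    """Return BH-adjusted p-values (q-values) in the original order."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    adjusted = p[order] * m / np.arange(1, m + 1)
    adjusted = np.minimum.accumulate(adjusted[::-1])[::-1]   # enforce monotonicity
    out = np.empty(m)
    out[order] = np.minimum(adjusted, 1.0)
    return out

rng = np.random.default_rng(10)
n_peaks, n_per_group = 500, 20
group_a = rng.lognormal(0.0, 1.0, size=(n_peaks, n_per_group))
group_b = rng.lognormal(0.0, 1.0, size=(n_peaks, n_per_group))
group_b[:25] *= 3.0                                   # 25 truly differential peaks

pvals = [stats.mannwhitneyu(a, b, alternative="two-sided").pvalue
         for a, b in zip(group_a, group_b)]
qvals = benjamini_hochberg(pvals)
print("peaks with q < 0.05:", int((qvals < 0.05).sum()))
```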

  11. [Significance of test results in drug hypersensitivity].

    PubMed

    Wozniak, K D

    1977-12-15

    For the diagnosis of allergic drug reactions, skin tests and in vitro tests were carried out in 2,246 patients. Analgesics/antipyretics, antibiotics, sulfonamides, local anaesthetics, oral contraceptives, circulatory drugs, psychopharmaceuticals and many others were established as causes of the drug rashes. A result concordant with the clinical course of the disease was achieved in 81.5% of patients by skin testing, in 42.9% by the lymphocyte transformation test and in 35.9% by the migration inhibition test. Of practical relevance, the results imply that when sensitization is proven, the drug and immunochemically related substances must be avoided, and that in principle every anamnesis should include questions about drug tolerance. The possibility that drug side effects will develop when these facts are not taken into consideration is emphasized with examples.

  12. When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment.

    PubMed

    Szucs, Denes; Ioannidis, John P A

    2017-01-01

    Null hypothesis significance testing (NHST) has several shortcomings that are likely contributing factors behind the widely debated replication crisis of (cognitive) neuroscience, psychology, and biomedical science in general. We review these shortcomings and suggest that, after sustained negative experience, NHST should no longer be the default, dominant statistical practice of all biomedical and psychological research. If theoretical predictions are weak we should not rely on all or nothing hypothesis tests. Different inferential methods may be most suitable for different types of research questions. Whenever researchers use NHST they should justify its use, and publish pre-study power calculations and effect sizes, including negative findings. Hypothesis-testing studies should be pre-registered and optimally raw data published. The current statistics lite educational approach for students that has sustained the widespread, spurious use of NHST should be phased out.

  13. When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment

    PubMed Central

    Szucs, Denes; Ioannidis, John P. A.

    2017-01-01

    Null hypothesis significance testing (NHST) has several shortcomings that are likely contributing factors behind the widely debated replication crisis of (cognitive) neuroscience, psychology, and biomedical science in general. We review these shortcomings and suggest that, after sustained negative experience, NHST should no longer be the default, dominant statistical practice of all biomedical and psychological research. If theoretical predictions are weak we should not rely on all or nothing hypothesis tests. Different inferential methods may be most suitable for different types of research questions. Whenever researchers use NHST they should justify its use, and publish pre-study power calculations and effect sizes, including negative findings. Hypothesis-testing studies should be pre-registered and optimally raw data published. The current statistics lite educational approach for students that has sustained the widespread, spurious use of NHST should be phased out. PMID:28824397

  14. Merging parallel tempering with sequential geostatistical resampling for improved posterior exploration of high-dimensional subsurface categorical fields

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Linde, Niklas; Jacques, Diederik; Mariethoz, Grégoire

    2016-04-01

    The sequential geostatistical resampling (SGR) algorithm is a Markov chain Monte Carlo (MCMC) scheme for sampling from possibly non-Gaussian, complex spatially-distributed prior models such as geologic facies or categorical fields. In this work, we highlight the limits of standard SGR for posterior inference of high-dimensional categorical fields with realistically complex likelihood landscapes and benchmark a parallel tempering implementation (PT-SGR). Our proposed PT-SGR approach is demonstrated using synthetic (error corrupted) data from steady-state flow and transport experiments in categorical 7575- and 10,000-dimensional 2D conductivity fields. In both case studies, every SGR trial gets trapped in a local optimum while PT-SGR maintains a higher diversity in the sampled model states. The advantage of PT-SGR is most apparent in an inverse transport problem where the posterior distribution is made bimodal by construction. PT-SGR then converges towards the appropriate data misfit much faster than SGR and partly recovers the two modes. In contrast, for the same computational resources SGR does not fit the data to the appropriate error level and hardly produces a locally optimal solution that looks visually similar to one of the two reference modes. Although PT-SGR clearly surpasses SGR in performance, our results also indicate that using a small number (16-24) of temperatures (and thus parallel cores) may not permit complete sampling of the posterior distribution by PT-SGR within a reasonable computational time (less than 1-2 weeks).
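
    A minimal sketch of the parallel-tempering swap step only, assuming E[k] is the misfit (negative log-likelihood) of the model currently held at temperature level k and beta[k] its inverse temperature; the within-chain sequential geostatistical resampling proposals of (PT-)SGR are not shown.

```python
# Hedged sketch of the parallel-tempering swap step: adjacent temperature levels
# exchange states with the usual Metropolis acceptance probability
# min(1, exp((beta_k - beta_{k+1}) * (E_i - E_j))).
import numpy as np

def try_swaps(E, beta, rng):
    """Attempt swaps between adjacent levels; return the resulting state assignment."""
    order = np.arange(len(E))                 # order[k] = index of the state at level k
    for k in range(len(E) - 1):
        i, j = order[k], order[k + 1]
        log_alpha = (beta[k] - beta[k + 1]) * (E[i] - E[j])
        if np.log(rng.uniform()) < min(0.0, log_alpha):
            order[k], order[k + 1] = j, i
    return order

rng = np.random.default_rng(11)
beta = np.linspace(1.0, 0.2, 8)               # inverse temperatures, cold to hot
E = rng.uniform(50, 150, size=8)              # current misfits of the 8 chains (illustrative)
print("state assignment after swap attempts:", try_swaps(E, beta, rng))
```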

  15. EHR based Genetic Testing Knowledge Base (iGTKB) Development

    PubMed Central

    2015-01-01

    Background The gap between a large and growing number of genetic tests and a suboptimal clinical workflow for incorporating these tests into regular clinical practice poses barriers to effective reliance on advanced genetic technologies to improve the quality of healthcare. A promising solution to fill this gap is to develop an intelligent genetic test recommendation system that not only can provide a comprehensive view of genetic tests as education resources, but also can recommend the most appropriate genetic tests to patients based on clinical evidence. In this study, we developed an EHR based Genetic Testing Knowledge Base for Individualized Medicine (iGTKB). Methods We extracted genetic testing information and patient medical records from EHR systems at Mayo Clinic. Clinical features were semi-automatically annotated from the clinical notes by applying a Natural Language Processing (NLP) tool, the MedTagger suite. To prioritize clinical features for each genetic test, we compared odds ratios across four population groups. Genetic tests, genetic disorders and clinical features with their odds ratios were used to establish iGTKB, which is to be integrated into the Genetic Testing Ontology (GTO). Results Overall, five genetic tests were performed with a sample size greater than 100 in 2013 at Mayo Clinic. A total of 1,450 patients who were tested by one of the five genetic tests were selected. We assembled 243 clinical features from the Human Phenotype Ontology (HPO) for these five genetic tests. There are 60 clinical features with at least one mention in the clinical notes of patients taking the test. Twenty-eight clinical features with high odds ratios (greater than 1) were selected as dominant features and deposited into iGTKB with their associated information about genetic tests and genetic disorders. Conclusions In this study, we developed an EHR based genetic testing knowledge base, iGTKB. iGTKB will be integrated into the GTO by providing relevant

  16. EHR based Genetic Testing Knowledge Base (iGTKB) Development.

    PubMed

    Zhu, Qian; Liu, Hongfang; Chute, Christopher G; Ferber, Matthew

    2015-01-01

    The gap between a large and growing number of genetic tests and a suboptimal clinical workflow for incorporating these tests into regular clinical practice poses barriers to effective reliance on advanced genetic technologies to improve the quality of healthcare. A promising solution to fill this gap is to develop an intelligent genetic test recommendation system that not only can provide a comprehensive view of genetic tests as education resources, but also can recommend the most appropriate genetic tests to patients based on clinical evidence. In this study, we developed an EHR based Genetic Testing Knowledge Base for Individualized Medicine (iGTKB). We extracted genetic testing information and patient medical records from EHR systems at Mayo Clinic. Clinical features were semi-automatically annotated from the clinical notes by applying a Natural Language Processing (NLP) tool, the MedTagger suite. To prioritize clinical features for each genetic test, we compared odds ratios across four population groups. Genetic tests, genetic disorders and clinical features with their odds ratios were used to establish iGTKB, which is to be integrated into the Genetic Testing Ontology (GTO). Overall, five genetic tests were performed with a sample size greater than 100 in 2013 at Mayo Clinic. A total of 1,450 patients who were tested by one of the five genetic tests were selected. We assembled 243 clinical features from the Human Phenotype Ontology (HPO) for these five genetic tests. There are 60 clinical features with at least one mention in the clinical notes of patients taking the test. Twenty-eight clinical features with high odds ratios (greater than 1) were selected as dominant features and deposited into iGTKB with their associated information about genetic tests and genetic disorders. In this study, we developed an EHR based genetic testing knowledge base, iGTKB. iGTKB will be integrated into the GTO by providing relevant clinical evidence, and ultimately to
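
    A tiny illustration of the feature-prioritisation idea described above, with made-up counts: the odds ratio of a clinical feature being mentioned for patients who took a given genetic test versus a comparison group.

```python
# Hedged illustration of odds-ratio-based feature prioritisation; all counts are made up.
a, b = 30, 70     # tested patients: feature mentioned / not mentioned
c, d = 10, 190    # comparison group: feature mentioned / not mentioned
odds_ratio = (a * d) / (b * c)
print(f"odds ratio: {odds_ratio:.2f}")   # an odds ratio > 1 would mark the feature as dominant
```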

  17. Comparing Science Virtual and Paper-Based Test to Measure Students’ Critical Thinking based on VAK Learning Style Model

    NASA Astrophysics Data System (ADS)

    Rosyidah, T. H.; Firman, H.; Rusyati, L.

    2017-02-01

    This research compared virtual and paper-based tests for measuring students' critical thinking based on the VAK (Visual-Auditory-Kinesthetic) learning style model. A quasi-experimental method with a one-group post-test-only design was applied in order to analyze the data. The sample consisted of 40 eighth-grade students at a public junior high school in Bandung. The quantitative data were obtained through 26 questions about living things and environmental sustainability, constructed around the eight elements of critical thinking and provided in both virtual and paper-based form. The analysis showed that, for visual, auditory, and kinesthetic learners alike, scores did not differ significantly between the virtual and paper-based tests. In addition, the results were supported by a questionnaire on students' response to the virtual test, which scored 3.47 on a scale of 4, meaning that students responded positively on all aspects measured, namely interest, impression, and expectation.

  18. A generalised significance test for individual communities in networks.

    PubMed

    Kojaku, Sadamori; Masuda, Naoki

    2018-05-09

    Many empirical networks have community structure, in which nodes are densely interconnected within each community (i.e., a group of nodes) and sparsely across different communities. Like other local and meso-scale structures of networks, communities are generally heterogeneous in various aspects such as size, density of edges, connectivity to other communities and significance. In the present study, we propose a method to statistically test the significance of individual communities in a given network. Compared to previous methods, the present algorithm is unique in that it accepts different community-detection algorithms and the corresponding quality function for single communities. The present method requires that the quality of each community can be quantified and that community detection is performed as optimisation of such a quality function summed over the communities. Various community detection algorithms, including modularity maximisation and graph partitioning, meet this criterion. Our method estimates the distribution of the quality function for randomised networks to calculate a likelihood of each community in the given network. We illustrate our algorithm with synthetic and empirical networks.
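
    A hedged sketch of the general idea: score a single candidate community by its internal edge count and compare that score with its distribution over degree-preserving randomisations of the network. This is a generic illustration, not the specific quality functions or sampling scheme proposed in the paper.

```python
# Hedged sketch: significance of one community's internal edge count against a
# degree-preserving (double-edge-swap) null model. The candidate community is illustrative.
import networkx as nx

def internal_edges(G, community):
    return G.subgraph(community).number_of_edges()

G = nx.karate_club_graph()
community = [0, 1, 2, 3, 7, 13]          # candidate community, chosen for illustration

observed = internal_edges(G, community)
null = []
for seed in range(500):
    R = G.copy()
    nx.double_edge_swap(R, nswap=5 * R.number_of_edges(), max_tries=10**5, seed=seed)
    null.append(internal_edges(R, community))
p = (1 + sum(n >= observed for n in null)) / (1 + len(null))
print(f"observed internal edges: {observed}, randomisation p = {p:.3f}")
```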

  19. A comprehensive laboratory-based program for classification of variants of uncertain significance in hereditary cancer genes.

    PubMed

    Eggington, J M; Bowles, K R; Moyes, K; Manley, S; Esterling, L; Sizemore, S; Rosenthal, E; Theisen, A; Saam, J; Arnell, C; Pruss, D; Bennett, J; Burbidge, L A; Roa, B; Wenstrup, R J

    2014-09-01

    Genetic testing has the potential to guide the prevention and treatment of disease in a variety of settings, and recent technical advances have greatly increased our ability to acquire large amounts of genetic data. The interpretation of this data remains challenging, as the clinical significance of genetic variation detected in the laboratory is not always clear. Although regulatory agencies and professional societies provide some guidance regarding the classification, reporting, and long-term follow-up of variants, few protocols for the implementation of these guidelines have been described. Because the primary aim of clinical testing is to provide results to inform medical management, a variant classification program that offers timely, accurate, confident and cost-effective interpretation of variants should be an integral component of the laboratory process. Here we describe the components of our laboratory's current variant classification program (VCP), based on 20 years of experience and over one million samples tested, using the BRCA1/2 genes as a model. Our VCP has lowered the percentage of tests in which one or more BRCA1/2 variants of uncertain significance (VUSs) are detected to 2.1% in the absence of a pathogenic mutation, demonstrating how the coordinated application of resources toward classification and reclassification significantly impacts the clinical utility of testing. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. Correcting Evaluation Bias of Relational Classifiers with Network Cross Validation

    DTIC Science & Technology

    2010-01-01

    classification algorithms: simple random resampling (RRS), equal-instance random resampling (ERS), and network cross-validation (NCV). The first two... NCV procedure that eliminates overlap between test sets altogether. The procedure samples k disjoint test sets that will be used for evaluation... (propLabeled * S) nodes from trainPool; inferenceSet = network − trainSet; F = F ∪ <trainSet, testSet, inferenceSet>; end for; output: F. NCV addresses
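
    The fragment above is an excerpt of the NCV pseudocode. A hedged, runnable sketch of the idea it describes is given below; the function name, the 'prop_labeled' parameter and the fold construction are illustrative assumptions, not the report's exact procedure:

```python
# Minimal sketch of the NCV idea in the snippet: k disjoint test sets of
# nodes; a labeled training subset drawn from the remaining pool; everything
# outside the training set forms the inference set.
import random

def network_cross_validation(nodes, k=5, prop_labeled=0.5, seed=0):
    rng = random.Random(seed)
    nodes = list(nodes)
    rng.shuffle(nodes)
    fold_size = len(nodes) // k
    folds = []
    for i in range(k):
        test_set = set(nodes[i * fold_size:(i + 1) * fold_size])  # disjoint across folds
        train_pool = [n for n in nodes if n not in test_set]
        train_set = set(rng.sample(train_pool, int(prop_labeled * len(train_pool))))
        inference_set = set(nodes) - train_set   # inferenceSet = network - trainSet
        folds.append((train_set, test_set, inference_set))
    return folds

for train, test, infer in network_cross_validation(range(100), k=5):
    print(len(train), len(test), len(infer))
```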

  1. Significant issues in proof testing: A critical appraisal

    NASA Technical Reports Server (NTRS)

    Chell, G. G.; Mcclung, R. C.; Russell, D. A.; Chang, K. J.; Donnelly, B.

    1994-01-01

    Issues which impact on the interpretation and quantification of proof test benefits are reviewed. The importance of each issue in contributing to the extra quality assurance conferred by proof testing components is discussed, particularly with respect to the application of advanced fracture mechanics concepts to enhance the flaw screening capability of a proof test analysis. Items covered include the role in proof testing of elastic-plastic fracture mechanics, ductile instability analysis, deterministic versus probabilistic analysis, single versus multiple cycle proof testing, and non-destructive examination (NDE). The effects of proof testing on subsequent service life are reviewed, particularly with regard to stress redistribution and changes in fracture behavior resulting from the overload. The importance of proof test conditions is also addressed, covering aspects related to test temperature, simulation of service environments, test media and the application of real-time NDE. The role of each issue in a proof test methodology is assessed with respect to its ability to: promote proof test practice to a state-of-the-art; aid optimization of proof test design; and increase awareness and understanding of outstanding issues.

  2. Significance of hydrogen breath tests in children with suspected carbohydrate malabsorption

    PubMed Central

    2014-01-01

    Background Hydrogen breath tests are noninvasive procedures frequently applied in the diagnostic workup of functional gastrointestinal disorders. Here, we review hydrogen breath test results and the occurrence of lactose, fructose and sorbitol malabsorption in pediatric patients; and determine the significance of the findings and the outcome of patients with carbohydrate malabsorption. Methods We included 206 children (88 male, 118 female, median age 10.7 years, range 3–18 years) with a total of 449 hydrogen breath tests (lactose, n = 161; fructose, n = 142; sorbitol, n = 146) into a retrospective analysis. Apart from test results, we documented symptoms, the therapeutic consequences of the test, the outcome and the overall satisfaction of the patients and families. Results In total, 204 (46%) of all breath tests were positive. Long-term follow-up data could be collected from 118 patients. Of 79 patients (67%) who were put on a diet reduced in lactose, fructose and/or sorbitol, the majority (92%, n = 73) reported the diet to be strict and only 13% (n = 10) had no response to diet. Most families (96%, n = 113) were satisfied by the test and the therapy. There were only 21 tests (5%) with a borderline result because the criteria for a positive result were only partially met. Conclusions Hydrogen breath tests can be helpful in the evaluation of children with gastrointestinal symptoms including functional intestinal disorders. If applied for a variety of carbohydrates but only where indicated, around two-thirds of all children have positive results. The therapeutic consequences successfully relieve symptoms in the vast majority of patients. PMID:24575947

  3. Consomic mouse strain selection based on effect size measurement, statistical significance testing and integrated behavioral z-scoring: focus on anxiety-related behavior and locomotion.

    PubMed

    Labots, M; Laarakker, M C; Ohl, F; van Lith, H A

    2016-06-29

    Selecting chromosome substitution strains (CSSs, also called consomic strains/lines) used in the search for quantitative trait loci (QTLs) consistently requires the identification of the respective phenotypic trait of interest and is simply based on a significant difference between a consomic and host strain. However, statistical significance as represented by P values does not necessarily predicate practical importance. We therefore propose a method that pays attention to both the statistical significance and the actual size of the observed effect. The present paper extends this approach and describes in more detail the use of effect size measures (Cohen's d, partial eta squared, ηp²) together with the P value as statistical selection parameters for the chromosomal assignment of QTLs influencing anxiety-related behavior and locomotion in laboratory mice. The effect size measures were based on integrated behavioral z-scoring and were calculated in three experiments: (A) a complete consomic male mouse panel with A/J as the donor strain and C57BL/6J as the host strain. This panel, including host and donor strains, was analyzed in the modified Hole Board (mHB). The consomic line with chromosome 19 from A/J (CSS-19A) was selected since it showed increased anxiety-related behavior, but similar locomotion compared to its host. (B) Following experiment A, female CSS-19A mice were compared with their C57BL/6J counterparts; however, no significant differences were found and effect sizes were close to zero. (C) A different consomic mouse strain (CSS-19PWD), with chromosome 19 from PWD/PhJ transferred on the genetic background of C57BL/6J, was compared with its host strain. Here, in contrast with CSS-19A, there was decreased overall anxiety in CSS-19PWD compared to C57BL/6J males, but no difference in locomotion. This new method shows an improved way to identify CSSs for QTL analysis for anxiety-related behavior using a combination of statistical significance testing and effect
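
    For reference, the two effect size measures named above can be written down in a few lines. The sketch below is illustrative only (the behavioural z-scores are simulated, not the authors' data), showing Cohen's d with a pooled standard deviation and partial eta squared for a two-group design:

```python
# Minimal sketch (not the authors' pipeline): Cohen's d and partial eta
# squared for a consomic strain vs. its host strain on an integrated
# behavioural z-score. The data below are made up.
import numpy as np

def cohens_d(x, y):
    n1, n2 = len(x), len(y)
    pooled_var = ((n1 - 1) * np.var(x, ddof=1) + (n2 - 1) * np.var(y, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

def partial_eta_squared(x, y):
    # For a two-group one-way design: SS_effect / (SS_effect + SS_error)
    grand = np.mean(np.concatenate([x, y]))
    ss_effect = len(x) * (np.mean(x) - grand) ** 2 + len(y) * (np.mean(y) - grand) ** 2
    ss_error = np.sum((x - np.mean(x)) ** 2) + np.sum((y - np.mean(y)) ** 2)
    return ss_effect / (ss_effect + ss_error)

rng = np.random.default_rng(1)
css = rng.normal(0.8, 1.0, 15)   # hypothetical z-scores, consomic strain
host = rng.normal(0.0, 1.0, 15)  # hypothetical z-scores, host strain
print(cohens_d(css, host), partial_eta_squared(css, host))
```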

  4. A General Class of Signed Rank Tests for Clustered Data when the Cluster Size is Potentially Informative.

    PubMed

    Datta, Somnath; Nevalainen, Jaakko; Oja, Hannu

    2012-09-01

    Rank-based tests are alternatives to likelihood-based tests, popular for their relative robustness and elegant underlying mathematical theory. There has been a surge in research activity in this area in recent years, since a number of researchers are working to develop and extend rank-based procedures to clustered dependent data, which include situations with known correlation structures (e.g., as in mixed effects models) as well as more general forms of dependence. The purpose of this paper is to test the symmetry of a marginal distribution under clustered data. However, unlike most other papers in the area, we consider the possibility that the cluster size is a random variable whose distribution is dependent on the distribution of the variable of interest within a cluster. This situation typically arises when the clusters are defined in a natural way (e.g., not controlled by the experimenter or statistician) and in which the size of the cluster may carry information about the distribution of data values within a cluster. Under the scenario of an informative cluster size, attempts to use some form of variance-adjusted sign or signed rank tests would fail since they would not maintain the correct size under the distribution of marginal symmetry. To overcome this difficulty Datta and Satten (2008; Biometrics, 64, 501-507) proposed a Wilcoxon-type signed rank test based on the principle of within-cluster resampling. In this paper we study this problem in more generality by introducing a class of valid tests employing a general score function. The asymptotic null distribution of these tests is obtained. A simulation study shows that a more general choice of the score function can sometimes result in greater power than the Datta and Satten test; furthermore, this development offers the user a wider choice. We illustrate our tests using a real data example on spinal cord injury patients.
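
    The within-cluster resampling principle mentioned above is easy to sketch, though the code below is only a hedged illustration of the resampling step (one observation drawn per cluster, a signed-rank statistic computed, and the statistic averaged over draws); it is not the Datta-Satten test or the authors' general score-function version, and a valid test would additionally need the asymptotic variance of the averaged statistic:

```python
# Minimal sketch of within-cluster resampling for clustered symmetry testing.
# Draw one observation per cluster, compute a one-sample signed-rank
# statistic, and average over many such draws. Inference is not shown here.
import numpy as np
from scipy.stats import wilcoxon

def wcr_signed_rank(clusters, n_resamples=500, seed=0):
    """clusters: list of 1-D arrays, one per cluster (sizes may be informative)."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_resamples):
        one_per_cluster = np.array([rng.choice(c) for c in clusters])
        stats.append(wilcoxon(one_per_cluster).statistic)
    return np.mean(stats)

# Simulated clusters of varying size with a shift away from symmetry about 0
clusters = [np.random.default_rng(i).normal(0.3, 1.0, size=s)
            for i, s in enumerate([2, 5, 3, 8, 4, 6, 2, 7])]
print(wcr_signed_rank(clusters))
```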

  5. A scale-invariant change detection method for land use/cover change research

    NASA Astrophysics Data System (ADS)

    Xing, Jin; Sieber, Renee; Caelli, Terrence

    2018-07-01

    Land Use/Cover Change (LUCC) detection relies increasingly on comparing remote sensing images with different spatial and spectral scales. Based on scale-invariant image analysis algorithms in computer vision, we propose a scale-invariant LUCC detection method to identify changes from scale heterogeneous images. This method is composed of an entropy-based spatial decomposition, two scale-invariant feature extraction methods, Maximally Stable Extremal Region (MSER) and Scale-Invariant Feature Transformation (SIFT) algorithms, a spatial regression voting method to integrate MSER and SIFT results, a Markov Random Field-based smoothing method, and a support vector machine classification method to assign LUCC labels. We test the scale invariance of our new method with a LUCC case study in Montreal, Canada, 2005-2012. We found that the scale-invariant LUCC detection method provides similar accuracy compared with the resampling-based approach but this method avoids the LUCC distortion incurred by resampling.

  6. 77 FR 19861 - Certain Polybrominated Diphenylethers; Significant New Use Rule and Test Rule

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-02

    ...The Agency is proposing to amend the Toxic Substances Control Act (TSCA) section 5(a) Significant New Use Rule (SNUR), for certain polybrominated diphenylethers (PBDEs) by: Designating processing of six PBDEs, or any combination of these chemical substances resulting from a chemical reaction, as a significant new use; designating manufacturing, importing, and processing of a seventh PBDE, decabromodiphenyl ether (decaBDE) for any use which is not ongoing after December 31, 2013, as a significant new use; and making inapplicable the article exemption for SNURs for this action. A person who intends to import or process any of the seven PBDEs included in the proposed SNUR, as part of an article for a significant new use would be required to notify EPA at least 90 days in advance to ensure that the Agency has an opportunity to review and, if necessary, restrict or prohibit a new use before it begins. EPA is also proposing a test rule under TSCA that would require any person who manufactures or processes commercial pentabromodiphenyl ether (c-pentaBDE), commercial octabromodiphenyl ether (c-octaBDE), or commercial decaBDE (c-decaBDE), including in articles, for any use after December 31, 2013, to conduct testing on their effects on health and the environment. EPA is proposing to designate all discontinued uses of PBDEs as significant new uses. The test rule would be promulgated if EPA determines that there are persons who intend to manufacture, import, or process c-pentaBDE, c-octaBDE, or c-decaBDE, for any use, including in articles, after December 31, 2013.

  7. Towards Risk-Based Test Protocols: Estimating the Contribution of Intensive Testing to the UK Bovine Tuberculosis Problem

    PubMed Central

    van Dijk, Jan

    2013-01-01

    Eradicating disease from livestock populations involves the balancing act of removing sufficient numbers of diseased animals without removing too many healthy individuals in the process. As ever more tests for bovine tuberculosis (BTB) are carried out on the UK cattle herd, and each positive herd test triggers more testing, the question arises whether ‘false positive’ results contribute significantly to the measured BTB prevalence. Here, this question is explored using simple probabilistic models of test behaviour. When the screening test is applied to the average UK herd, the estimated proportion of test-associated false positive new outbreaks is highly sensitive to small fluctuations in screening test specificity. Estimations of this parameter should be updated as a priority. Once outbreaks have been confirmed in screening-test positive herds, the following rounds of intensive testing with more sensitive, albeit less specific, tests are highly likely to remove large numbers of false positive animals from herds. Despite this, it is unlikely that significantly more truly infected animals are removed. BTB test protocols should become based on quantified risk in order to prevent the needless slaughter of large numbers of healthy animals. PMID:23717517
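
    The sensitivity of the false-positive count to screening-test specificity can be made concrete with simple arithmetic. The sketch below uses assumed herd size, prevalence, sensitivity and specificity values, not the parameters estimated in the paper:

```python
# Minimal sketch (assumed parameter values): expected true- and
# false-positive reactors when a screening test is applied to a herd,
# illustrating how strongly the false-positive count depends on specificity.
def expected_reactors(herd_size, prevalence, sensitivity, specificity):
    infected = herd_size * prevalence
    healthy = herd_size - infected
    true_pos = infected * sensitivity
    false_pos = healthy * (1 - specificity)
    return true_pos, false_pos

for spec in (0.999, 0.997, 0.995):   # small changes in specificity
    tp, fp = expected_reactors(herd_size=150, prevalence=0.005,
                               sensitivity=0.8, specificity=spec)
    print(f"specificity={spec}: ~{tp:.2f} true positives, ~{fp:.2f} false positives")
```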

  8. Theme-Based Tests: Teaching in Context

    ERIC Educational Resources Information Center

    Anderson, Gretchen L.; Heck, Marsha L.

    2005-01-01

    Theme-based tests provide an assessment tool that instructs as well as provides a single general context for a broad set of biochemical concepts. A single story line connects the questions on the tests and models applications of scientific principles and biochemical knowledge in an extended scenario. Theme-based tests are based on a set of…

  9. The uriscreen test to detect significant asymptomatic bacteriuria during pregnancy.

    PubMed

    Teppa, Roberto J; Roberts, James M

    2005-01-01

    Asymptomatic bacteriuria (ASB) occurs in 2-11% of pregnancies and it is a clear predisposition to the development of acute pyelonephritis, which, in turn, poses risk to mother and fetus. Treatment of bacteriuria during pregnancy reduces the incidence of pyelonephritis. Therefore, it is recommended to screen for ASB at the first prenatal visit. The gold standard for detection of bacteriuria during pregnancy is urine culture, but this test is expensive, time-consuming, and labor-intensive. To determine the reliability of an enzymatic urine screening test (Uriscreen; Savyon Diagnostics, Ashdod, Israel) for detecting ASB in pregnancy. Catheterized urine samples were collected from 150 women who had routine prenatal screening for ASB. Patients with urinary symptoms, active vaginal bleeding, or who were previously on antibiotics therapy were excluded from the study. Sensitivity, specificity, and the positive and negative predictive values for the Uriscreen were estimated using urine culture as the criterion standard. Urine cultures were considered positive if they grew >10^5 colony-forming units of a single uropathogen. Twenty-eight women (18.7%) had urine culture results indicating significant bacteriuria, and 17 of these 28 specimens had positive enzyme activity. Of 122 samples with no growth, 109 had negative enzyme activity. Sensitivity, specificity, and positive and negative predictive values for the Uriscreen test were 60.7% (+/-18.1), 89.3% (+/-5.6), 56.6%, and 90.8%, respectively. The Uriscreen test had inadequate sensitivity for rapid screening of bacteriuria in pregnancy.
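
    The reported accuracy measures follow directly from the counts in the abstract and can be reproduced with a few lines of arithmetic (shown here only as a sanity check on the 2x2 table implied by the text):

```python
# Sanity-check sketch using the counts reported in the abstract:
# 28 culture-positive specimens (17 with positive enzyme activity) and
# 122 culture-negative specimens (109 with negative enzyme activity).
tp, fn = 17, 28 - 17
tn, fp = 109, 122 - 109

sensitivity = tp / (tp + fn)   # 17/28   ~ 0.607
specificity = tn / (tn + fp)   # 109/122 ~ 0.893
ppv = tp / (tp + fp)           # 17/30   ~ 0.566
npv = tn / (tn + fn)           # 109/120 ~ 0.908
print(sensitivity, specificity, ppv, npv)
```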

  10. Significant-Loophole-Free Test of Bell's Theorem with Entangled Photons.

    PubMed

    Giustina, Marissa; Versteegh, Marijn A M; Wengerowsky, Sören; Handsteiner, Johannes; Hochrainer, Armin; Phelan, Kevin; Steinlechner, Fabian; Kofler, Johannes; Larsson, Jan-Åke; Abellán, Carlos; Amaya, Waldimar; Pruneri, Valerio; Mitchell, Morgan W; Beyer, Jörn; Gerrits, Thomas; Lita, Adriana E; Shalm, Lynden K; Nam, Sae Woo; Scheidl, Thomas; Ursin, Rupert; Wittmann, Bernhard; Zeilinger, Anton

    2015-12-18

    Local realism is the worldview in which physical properties of objects exist independently of measurement and where physical influences cannot travel faster than the speed of light. Bell's theorem states that this worldview is incompatible with the predictions of quantum mechanics, as is expressed in Bell's inequalities. Previous experiments convincingly supported the quantum predictions. Yet, every experiment requires assumptions that provide loopholes for a local realist explanation. Here, we report a Bell test that closes the most significant of these loopholes simultaneously. Using a well-optimized source of entangled photons, rapid setting generation, and highly efficient superconducting detectors, we observe a violation of a Bell inequality with high statistical significance. The purely statistical probability of our results to occur under local realism does not exceed 3.74×10^{-31}, corresponding to an 11.5 standard deviation effect.

  11. Advances in Significance Testing for Cluster Detection

    NASA Astrophysics Data System (ADS)

    Coleman, Deidra Andrea

    surveillance data while controlling the Bayesian False Discovery Rate (BFDR). The procedure entails choosing an appropriate Bayesian model that captures the spatial dependency inherent in epidemiological data and considers all days of interest, selecting a test statistic based on a chosen measure that provides the magnitude of the maximal spatial cluster for each day, and identifying a cutoff value that controls the BFDR for rejecting the collective null hypothesis of no outbreak over a collection of days for a specified region. We use our procedure to analyze botulism-like syndrome data collected by the North Carolina Disease Event Tracking and Epidemiologic Collection Tool (NC DETECT).

  12. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.

    2017-09-01

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering all of Australia. To evaluate the filters' performances and analyze their impact on model simulations, their estimates are validated by independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, i.e. the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which improve the model's groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
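
    The two particle-filter resampling schemes named above differ only in how particle indices are drawn from the weights. The sketch below is a generic illustration of that difference (it is not tied to W3RA or GRACE data):

```python
# Minimal sketch of the two particle-filter resampling schemes named in the
# abstract: multinomial resampling draws particle indices i.i.d. from the
# weights, while systematic resampling uses one random offset and evenly
# spaced points, which typically lowers resampling noise.
import numpy as np

def multinomial_resample(weights, rng):
    n = len(weights)
    return rng.choice(n, size=n, p=weights)

def systematic_resample(weights, rng):
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

rng = np.random.default_rng(0)
w = rng.random(10)
w /= w.sum()
print(multinomial_resample(w, rng))
print(systematic_resample(w, rng))
```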

  13. On random field Completely Automated Public Turing Test to Tell Computers and Humans Apart generation.

    PubMed

    Kouritzin, Michael A; Newton, Fraser; Wu, Biao

    2013-04-01

    Herein, we propose generating CAPTCHAs through random field simulation and give a novel, effective and efficient algorithm to do so. Indeed, we demonstrate that sufficient information about word tests for easy human recognition is contained in the site marginal probabilities and the site-to-nearby-site covariances and that these quantities can be embedded directly into certain conditional probabilities, designed for effective simulation. The CAPTCHAs are then partial random realizations of the random CAPTCHA word. We start with an initial random field (e.g., randomly scattered letter pieces) and use Gibbs resampling to re-simulate portions of the field repeatedly using these conditional probabilities until the word becomes human-readable. The residual randomness from the initial random field together with the random implementation of the CAPTCHA word provide significant resistance to attack. This results in a CAPTCHA, which is unrecognizable to modern optical character recognition but is recognized about 95% of the time in a human readability study.

  14. Bootstrap-based methods for estimating standard errors in Cox's regression analyses of clustered event times.

    PubMed

    Xiao, Yongling; Abrahamowicz, Michal

    2010-03-30

    We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid the serious variance under-estimation by conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of cluster event times.
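
    The cluster-bootstrap step itself is simple: resample whole clusters with replacement and recompute the estimate. The sketch below is a hedged illustration using a sample mean on simulated clustered data rather than a Cox model, which would require a survival-analysis package:

```python
# Minimal sketch of the cluster bootstrap: resample whole clusters with
# replacement, recompute the statistic, and take the spread of the
# bootstrap estimates as the standard error.
import numpy as np

def cluster_bootstrap_se(clusters, statistic, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(clusters)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)             # resample clusters, not individuals
        resampled = np.concatenate([clusters[i] for i in idx])
        estimates.append(statistic(resampled))
    return np.std(estimates, ddof=1)

# Hypothetical clustered data with cluster-level random effects ignored here
clusters = [np.random.default_rng(i).normal(1.0, 1.0, size=s)
            for i, s in enumerate([4, 6, 3, 8, 5, 7, 4, 6, 5, 3])]
print(cluster_bootstrap_se(clusters, np.mean))
```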

  15. A Schema-Based Reading Test.

    ERIC Educational Resources Information Center

    Lewin, Beverly A.

    Schemata-based notions need not replace, but should be reflected in, product-centered reading tests. The contributions of schema theory to the psycholinguistic model of reading have been thoroughly reviewed. Schemata-based reading tests provide several advantages: (1) they engage the appropriate conceptual processes for the student, which frees the…

  16. Analysing Test-Takers' Views on a Computer-Based Speaking Test

    ERIC Educational Resources Information Center

    Amengual-Pizarro, Marian; García-Laborda, Jesús

    2017-01-01

    This study examines test-takers' views on a computer-delivered speaking test in order to investigate the aspects they consider most relevant in technology-based oral assessment, and to explore the main advantages and disadvantages computer-based tests may offer as compared to face-to-face speaking tests. A small-scale open questionnaire was…

  17. A procedure for the significance testing of unmodeled errors in GNSS observations

    NASA Astrophysics Data System (ADS)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors would inevitably remain even after corrected with empirical model and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most of the existing studies mainly focus on handling the systematic errors that can be properly modeled and then simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled especially when they are significant. Therefore, a very first question is how to statistically validate the significance of unmodeled errors. In this research, we will propose a procedure to examine the significance of these unmodeled errors by the combined use of the hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, stationary signal and white noise, are identified. The procedure is tested by using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further reassured by applying the time-domain Allan variance analysis and frequency-domain fast Fourier transform. In summary, the spatiotemporally correlated unmodeled errors are commonly existent in GNSS observations and mainly governed by the residual atmospheric biases and multipath. Their patterns may also be impacted by the receiver.

  18. Web-Based Testing: Exploring the Relationship between Hardware Usability and Test Performance

    ERIC Educational Resources Information Center

    Huff, Kyle; Cline, Melinda; Guynes, Carl S.

    2012-01-01

    Web-based testing has recently become common in both academic and professional settings. A web-based test is administered through a web browser. Individuals may complete a web-based test at nearly any time and at any place. In addition, almost any computer lab can become a testing center. It is important to understand the environmental issues that…

  19. Home-based HIV testing for men preferred over clinic-based testing by pregnant women and their male partners, a nested cross-sectional study.

    PubMed

    Osoti, Alfred Onyango; John-Stewart, Grace; Kiarie, James Njogu; Barbra, Richardson; Kinuthia, John; Krakowiak, Daisy; Farquhar, Carey

    2015-07-30

    Male partner HIV testing and counseling (HTC) is associated with enhanced uptake of prevention of mother-to-child HIV transmission (PMTCT), yet male HTC during pregnancy remains low. Identifying settings preferred by pregnant women and their male partners may improve male involvement in PMTCT. Participants in a randomized clinical trial (NCT01620073) to improve male partner HTC were interviewed to determine whether the preferred male partner HTC setting was the home, antenatal care (ANC) clinic or VCT center. In this nested cross sectional study, responses were evaluated at baseline and after 6 weeks. Differences between the two time points were compared using McNemar's test and correlates of preference were determined using logistic regression. Among 300 pregnant female participants, 54% preferred home over ANC clinic testing (34.0%) or VCT center (12.0%). Among 188 male partners, 68% preferred home-based HTC to antenatal clinic (19%) or VCT (13%). Men who desired more children and women who had less than secondary education or daily income < $2 USD were more likely to prefer home-based over other settings (p < 0.05 for all comparisons). At 6 weeks, the majority of male (81%) and female (65%) participants recommended home over alternative HTC venues. Adjusting for whether or not the partner was tested during follow-up did not significantly alter preferences. Pregnant women and their male partners preferred home-based compared to clinic or VCT-center based male partner HTC. Home-based HTC during pregnancy appears acceptable and may improve male testing and involvement in PMTCT.

  20. A novel fruit shape classification method based on multi-scale analysis

    NASA Astrophysics Data System (ADS)

    Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin

    2005-11-01

    Shape is one of the major concerns, and still a difficult problem, in the automated inspection and sorting of fruits. In this research, we propose the multi-scale energy distribution (MSED) for object shape description; the relationship between an object's shape and its boundary energy distribution across scales is explored for shape extraction. MSED captures not only the main energy, which represents primary shape information at lower scales, but also subordinate energy, which represents local shape information at higher differential scales. It thus provides a natural tool for multi-resolution representation and can be used as a feature for shape classification. We address the three main processing steps in MSED-based shape classification: 1) image preprocessing and citrus shape extraction, 2) shape resampling and shape feature normalization, and 3) energy decomposition by wavelet and classification by a BP neural network. Here, shape resampling draws 256 boundary pixels from a cubic-spline curve approximating the original boundary in order to obtain uniform raw data. A probability function was defined and an effective method to select a start point was given through maximal expectation, which overcomes the inconvenience of traditional methods and provides rotation invariance. The experimental results distinguish relatively normal citrus from serious abnormality with a classification rate above 91.2%. The global correct classification rate is 89.77%, and our method is more effective than the traditional method.

  1. Structural and Sequence Similarity Makes a Significant Impact on Machine-Learning-Based Scoring Functions for Protein-Ligand Interactions.

    PubMed

    Li, Yang; Yang, Jianyi

    2017-04-24

    The prediction of protein-ligand binding affinity has recently been improved remarkably by machine-learning-based scoring functions. For example, using a set of simple descriptors representing the atomic distance counts, the RF-Score improves the Pearson correlation coefficient to about 0.8 on the core set of the PDBbind 2007 database, which is significantly higher than the performance of any conventional scoring function on the same benchmark. A few studies have been made to discuss the performance of machine-learning-based methods, but the reason for this improvement remains unclear. In this study, by systemically controlling the structural and sequence similarity between the training and test proteins of the PDBbind benchmark, we demonstrate that protein structural and sequence similarity makes a significant impact on machine-learning-based methods. After removal of training proteins that are highly similar to the test proteins identified by structure alignment and sequence alignment, machine-learning-based methods trained on the new training sets do not outperform the conventional scoring functions any more. On the contrary, the performance of conventional functions like X-Score is relatively stable no matter what training data are used to fit the weights of its energy terms.

  2. Rehearsal significantly improves immediate and delayed recall on the Rey Auditory Verbal Learning Test.

    PubMed

    Hessen, Erik

    2011-10-01

    A repeated observation during memory assessment with the Rey Auditory Verbal Learning Test (RAVLT) is that patients who spontaneously employ a memory rehearsal strategy by repeating the word list more than once achieve better scores than patients who only repeat the word list once. This observation led to concern about the ability of the standard test procedure of RAVLT and similar tests in eliciting the best possible recall scores. The purpose of the present study was to test the hypothesis that a rehearsal recall strategy of repeating the word list more than once would result in improved scores of recall on the RAVLT. We report on differences in outcome after standard administration and after experimental administration on Immediate and Delayed Recall measures from the RAVLT of 50 patients. The experimental administration resulted in significantly improved scores for all the variables employed. Additionally, it was found that patients who failed effort screening showed significantly poorer improvement on Delayed Recall compared with those who passed the effort screening. The general clear improvement both in raw scores and T-scores demonstrates that recall performance can be significantly influenced by the strategy of the patient or by small variations in instructions by the examiner.

  3. Installation-Restoration Program. Phase 2. Confirmation/quantification. Stage 1. Problem confirmation study: Otis Air National Guard Base, Massachusetts, Air National Guard Support Center, Andrews Air Force Base, Maryland. Final technical report, November 1983-July 1985

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kraybill, R.L.; Smart, G.R.; Bopp, F.

    1985-09-04

    A Problem Confirmation Study was performed at seven sites on Otis Air National Guard Base: the Current and Former Training Areas, the Base Landfill, the Nondestructive Inspection Laboratory, the Fuel Test Dump Site, the Railyard Fuel Pumping Station, and the Petrol Fuel Storage Area. The field investigation was conducted in two stages, in November 1983 through January 1984, and in October through December 1984. Resampling was performed at selected locations in April and July 1985. A total of 11 monitor wells were installed and sampled and test-pit investigations were conducted at six sites. In addition, the contents of a sump tank and two header pipes for fuel-transmission lines were sampled. Analytes included TOC, TOX, cyanide, phenols, Safe Drinking Water metals, pesticides and herbicides, and in the second round, priority-pollutant volatile organic compounds and a GC fingerprint scan for fuel products. On the basis of the field-work findings, it is concluded that, to date, water-quality impacts on ground water from past activities have been minimal.

  4. Toward the identification of causal genes in complex diseases: a gene-centric joint test of significance combining genomic and transcriptomic data.

    PubMed

    Charlesworth, Jac C; Peralta, Juan M; Drigalenko, Eugene; Göring, Harald Hh; Almasy, Laura; Dyer, Thomas D; Blangero, John

    2009-12-15

    Gene identification using linkage, association, or genome-wide expression is often underpowered. We propose that formal combination of information from multiple gene-identification approaches may lead to the identification of novel loci that are missed when only one form of information is available. Firstly, we analyze the Genetic Analysis Workshop 16 Framingham Heart Study Problem 2 genome-wide association data for HDL-cholesterol using a "gene-centric" approach. Then we formally combine the association test results with genome-wide transcriptional profiling data for high-density lipoprotein cholesterol (HDL-C), from the San Antonio Family Heart Study, using a Z-transform test (Stouffer's method). We identified 39 genes by the joint test at a conservative 1% false-discovery rate, including 9 from the significant gene-based association test and 23 whose expression was significantly correlated with HDL-C. Seven genes identified as significant in the joint test were not independently identified by either the association or expression tests. This combined approach has increased power and leads to the direct nomination of novel candidate genes likely to be involved in the determination of HDL-C levels. Such information can then be used as justification for a more exhaustive search for functional sequence variation within the nominated genes. We anticipate that this type of analysis will improve our speed of identification of regulatory genes causally involved in disease risk.
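
    The Z-transform (Stouffer) combination used above has a compact form: each one-sided p-value is mapped to a Z-score and the (optionally weighted) Z-scores are summed and rescaled. The sketch below is generic; the p-values and the equal weights are illustrative assumptions, not values from the study:

```python
# Minimal sketch of Stouffer's Z-transform method for combining evidence,
# e.g. a gene-based association p-value with an expression-correlation
# p-value. The inputs below are made up.
import numpy as np
from scipy.stats import norm

def stouffer(p_values, weights=None):
    z = norm.isf(np.asarray(p_values))          # one-sided p -> Z
    w = np.ones_like(z) if weights is None else np.asarray(weights, dtype=float)
    z_comb = np.sum(w * z) / np.sqrt(np.sum(w ** 2))
    return z_comb, norm.sf(z_comb)              # combined Z and combined p-value

print(stouffer([0.04, 0.20]))   # hypothetical association and expression p-values
```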

  5. Continuing challenges for computer-based neuropsychological tests.

    PubMed

    Letz, Richard

    2003-08-01

    A number of issues critical to the development of computer-based neuropsychological testing systems that remain continuing challenges to their widespread use in occupational and environmental health are reviewed. Several computer-based neuropsychological testing systems have been developed over the last 20 years, and they have contributed substantially to the study of neurologic effects of a number of environmental exposures. However, many are no longer supported and do not run on contemporary personal computer operating systems. Issues that are continuing challenges for development of computer-based neuropsychological tests in environmental and occupational health are discussed: (1) some current technological trends that generally make test development more difficult; (2) lack of availability of usable speech recognition of the type required for computer-based testing systems; (3) implementing computer-based procedures and tasks that are improvements over, not just adaptations of, their manually-administered predecessors; (4) implementing tests of a wider range of memory functions than the limited range now available; (5) paying more attention to motivational influences that affect the reliability and validity of computer-based measurements; and (6) increasing the usability of and audience for computer-based systems. Partial solutions to some of these challenges are offered. The challenges posed by current technological trends are substantial and generally beyond the control of testing system developers. Widespread acceptance of the "tablet PC" and implementation of accurate small vocabulary, discrete, speaker-independent speech recognition would enable revolutionary improvements to computer-based testing systems, particularly for testing memory functions not covered in existing systems. Dynamic, adaptive procedures, particularly ones based on item-response theory (IRT) and computerized-adaptive testing (CAT) methods, will be implemented in new tests that will be

  6. BRCA1 and BRCA2 genetic testing-pitfalls and recommendations for managing variants of uncertain clinical significance.

    PubMed

    Eccles, D M; Mitchell, G; Monteiro, A N A; Schmutzler, R; Couch, F J; Spurdle, A B; Gómez-García, E B

    2015-10-01

    Increasing use of BRCA1/2 testing for tailoring cancer treatment and extension of testing to tumour tissue for somatic mutation is moving BRCA1/2 mutation screening from a primarily prevention arena delivered by specialist genetic services into mainstream oncology practice. A considerable number of gene tests will identify rare variants where clinical significance cannot be inferred from sequence information alone. The proportion of variants of uncertain clinical significance (VUS) is likely to grow with lower thresholds for testing and laboratory providers with less experience of BRCA. Most VUS will not be associated with a high risk of cancer but a misinterpreted VUS has the potential to lead to mismanagement of both the patient and their relatives. Members of the Clinical Working Group of ENIGMA (Evidence-based Network for the Interpretation of Germline Mutant Alleles) global consortium (www.enigmaconsortium.org) observed wide variation in practices in reporting, disclosure and clinical management of patients with a VUS. Examples from current clinical practice are presented and discussed to illustrate potential pitfalls, explore factors contributing to misinterpretation, and propose approaches to improving clarity. Clinicians, patients and their relatives would all benefit from an improved level of genetic literacy. Genetic laboratories working with clinical geneticists need to agree on a clinically clear and uniform format for reporting BRCA test results to non-geneticists. An international consortium of experts, collecting and integrating all available lines of evidence and classifying variants according to an internationally recognized system, will facilitate reclassification of variants for clinical use. © The Author 2015. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  7. A Comparison of Two Tests for the Significance of a Mean Vector.

    DTIC Science & Technology

    1978-01-01

    rejected as soon as a component test in the sequence shows significance. It is well known (Roy 1958; Roy, Gnanadesikan and Srivastava 1971 (p. ...)) ... "confidence bounds", Annals of Mathematical Statistics, 29, 491-503. [14] Roy, S.N., Gnanadesikan, R., and Srivastava, J.N. (1971). Analysis and

  8. Mutagenicity in drug development: interpretation and significance of test results.

    PubMed

    Clive, D

    1985-03-01

    The use of mutagenicity data has been proposed and widely accepted as a relatively fast and inexpensive means of predicting long-term risk to man (i.e., cancer in somatic cells, heritable mutations in germ cells). This view is based on the universal nature of the genetic material, the somatic mutation model of carcinogenesis, and a number of studies showing correlations between mutagenicity and carcinogenicity. An uncritical acceptance of this approach by some regulatory and industrial concerns is over-conservative, naive, and scientifically unjustifiable on a number of grounds: Human cancers are largely life-style related (e.g., cigarettes, diet, tanning). Mutagens (both natural and man-made) are far more prevalent in the environment than was originally assumed (e.g., the natural bases and nucleosides, protein pyrolysates, fluorescent lights, typewriter ribbon, red wine, diesel fuel exhausts, viruses, our own leukocytes). "False-positive" (relative to carcinogenicity) and "false-negative" mutagenicity results occur, often with rational explanations (e.g., high threshold, inappropriate metabolism, inadequate genetic endpoint), and thereby confound any straightforward interpretation of mutagenicity test results. Test battery composition affects both the proper identification of mutagens and, in many instances, the ability to make preliminary risk assessments. In vitro mutagenicity assays ignore whole animal protective mechanisms, may provide unphysiological metabolism, and may be either too sensitive (e.g., testing at orders-of-magnitude higher doses than can be ingested) or not sensitive enough (e.g., short-term treatments inadequately model chronic exposure in bioassay). Bacterial systems, particularly the Ames assay, cannot in principle detect chromosomal events which are involved in both carcinogenesis and germ line mutations in man. Some compounds induce only chromosomal events and little or no detectable single-gene events (e.g., acyclovir, caffeine

  9. INS/GNSS Tightly-Coupled Integration Using Quaternion-Based AUPF for USV.

    PubMed

    Xia, Guoqing; Wang, Guoqing

    2016-08-02

    This paper addresses the problem of integration of Inertial Navigation System (INS) and Global Navigation Satellite System (GNSS) for the purpose of developing a low-cost, robust and highly accurate navigation system for unmanned surface vehicles (USVs). A tightly-coupled integration approach is one of the most promising architectures to fuse the GNSS data with INS measurements. However, the resulting system and measurement models turn out to be nonlinear, and the stochastic sensor measurement errors are non-Gaussian distributed in a practical system. The particle filter (PF), one of the most theoretically attractive non-linear/non-Gaussian estimation methods, is becoming more and more attractive in navigation applications. However, its large computation burden limits its practical usage. For the purpose of reducing the computational burden without degrading the system estimation accuracy, a quaternion-based adaptive unscented particle filter (AUPF), which combines the adaptive unscented Kalman filter (AUKF) with the PF, has been proposed in this paper. The unscented Kalman filter (UKF) is used in the algorithm to improve the proposal distribution and generate posterior estimates, which specify the PF importance density function for generating particles more intelligently. In addition, the computational complexity of the filter is reduced with the avoidance of the re-sampling step. Furthermore, a residual-based covariance matching technique is used to adapt the measurement error covariance. A trajectory simulator based on a dynamic model of the USV is used to test the proposed algorithm. Results show that the quaternion-based AUPF can significantly improve the overall navigation accuracy and reliability.

  10. Testlet-Based Multidimensional Adaptive Testing.

    PubMed

    Frey, Andreas; Seitz, Nicki-Nils; Brandt, Steffen

    2016-01-01

    Multidimensional adaptive testing (MAT) is a highly efficient method for the simultaneous measurement of several latent traits. Currently, no psychometrically sound approach is available for the use of MAT in testlet-based tests. Testlets are sets of items sharing a common stimulus such as a graph or a text. They are frequently used in large operational testing programs like TOEFL, PISA, PIRLS, or NAEP. To make MAT accessible for such testing programs, we present a novel combination of MAT with a multidimensional generalization of the random effects testlet model (MAT-MTIRT). MAT-MTIRT compared to non-adaptive testing is examined for several combinations of testlet effect variances (0.0, 0.5, 1.0, and 1.5) and testlet sizes (3, 6, and 9 items) with a simulation study considering three ability dimensions with simple loading structure. MAT-MTIRT outperformed non-adaptive testing regarding the measurement precision of the ability estimates. Further, the measurement precision decreased when testlet effect variances and testlet sizes increased. The suggested combination of the MTIRT model therefore provides a solution to the substantial problems of testlet-based tests while keeping the length of the test within an acceptable range.

  11. Testlet-Based Multidimensional Adaptive Testing

    PubMed Central

    Frey, Andreas; Seitz, Nicki-Nils; Brandt, Steffen

    2016-01-01

    Multidimensional adaptive testing (MAT) is a highly efficient method for the simultaneous measurement of several latent traits. Currently, no psychometrically sound approach is available for the use of MAT in testlet-based tests. Testlets are sets of items sharing a common stimulus such as a graph or a text. They are frequently used in large operational testing programs like TOEFL, PISA, PIRLS, or NAEP. To make MAT accessible for such testing programs, we present a novel combination of MAT with a multidimensional generalization of the random effects testlet model (MAT-MTIRT). MAT-MTIRT compared to non-adaptive testing is examined for several combinations of testlet effect variances (0.0, 0.5, 1.0, and 1.5) and testlet sizes (3, 6, and 9 items) with a simulation study considering three ability dimensions with simple loading structure. MAT-MTIRT outperformed non-adaptive testing regarding the measurement precision of the ability estimates. Further, the measurement precision decreased when testlet effect variances and testlet sizes increased. The suggested combination of the MTIRT model therefore provides a solution to the substantial problems of testlet-based tests while keeping the length of the test within an acceptable range. PMID:27917132

  12. Measured dose to ovaries and testes from Hodgkin's fields and determination of genetically significant dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niroomand-Rad, A.; Cumberlin, R.

    The purpose of this study was to determine the genetically significant dose from therapeutic radiation exposure with Hodgkin's fields by estimating the doses to ovaries and testes. Phantom measurements were performed to verify estimated doses to ovaries and testes from Hodgkin's fields. Thermoluminescent LiF dosimeters (TLD-100) of 1 x 3 x 3 mm^3 dimensions were embedded in phantoms and exposed to standard mantle and paraaortic fields using Co-60, 4 MV, 6 MV, and 10 MV photon beams. The results show that measured doses to ovaries and testes are about two to five times higher than the corresponding graphically estimated doses for Co-60 and 4 MV photon beams as depicted in ICRP publication 44. In addition, the measured doses to ovaries and testes are about 30% to 65% lower for 10 MV photon beams than for their corresponding Co-60 photon beams. The genetically significant dose from Hodgkin's treatment (less than 0.01 mSv) adds about 4% to the genetically significant dose contribution from medical procedures and adds less than 1% to the genetically significant dose from all sources. Therefore, the consequence to society is considered to be very small. The consequences for the individual patient are, likewise, small. 28 refs., 3 figs., 5 tabs.

  13. Assessing group differences in biodiversity by simultaneously testing a user-defined selection of diversity indices.

    PubMed

    Pallmann, Philip; Schaarschmidt, Frank; Hothorn, Ludwig A; Fischer, Christiane; Nacke, Heiko; Priesnitz, Kai U; Schork, Nicholas J

    2012-11-01

    Comparing diversities between groups is a task biologists are frequently faced with, for example in ecological field trials or when dealing with metagenomics data. However, researchers often waver about which measure of diversity to choose as there is a multitude of approaches available. As Jost (2008, Molecular Ecology, 17, 4015) has pointed out, widely used measures such as the Shannon or Simpson index have undesirable properties which make them hard to compare and interpret. Many of the problems associated with the use of these 'raw' indices can be corrected by transforming them into 'true' diversity measures. We introduce a technique that allows the comparison of two or more groups of observations and simultaneously tests a user-defined selection of a number of 'true' diversity measures. This procedure yields multiplicity-adjusted P-values according to the method of Westfall and Young (1993, Resampling-Based Multiple Testing: Examples and Methods for p-Value Adjustment, 49, 941), which ensures that the rate of false positives (type I error) does not rise when the number of groups and/or diversity indices is extended. Software is available in the R package 'simboot'. © 2012 Blackwell Publishing Ltd.
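
    A single-step version of the Westfall-Young adjustment referred to above can be sketched in a few lines: permute the group labels, recompute all index-wise test statistics, and compare each observed statistic with the permutation distribution of the maximum. This is a simplified illustration under assumed data and a mean-difference statistic, not the 'simboot' implementation:

```python
# Minimal sketch of a single-step Westfall-Young-style (maxT) adjustment
# for several diversity indices tested at once between two groups.
import numpy as np

def maxT_adjusted_pvalues(X, groups, n_perm=2000, seed=0):
    """X: (n_samples, n_indices) matrix of diversity indices; groups: 0/1 labels."""
    rng = np.random.default_rng(seed)
    groups = np.asarray(groups)

    def stats(g):
        return np.abs(X[g == 0].mean(axis=0) - X[g == 1].mean(axis=0))

    observed = stats(groups)
    max_null = np.array([stats(rng.permutation(groups)).max() for _ in range(n_perm)])
    # each index is compared with the permutation distribution of the maximum
    return np.array([(1 + np.sum(max_null >= t)) / (n_perm + 1) for t in observed])

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
X[20:, 0] += 1.0   # three hypothetical indices, one true group difference
print(maxT_adjusted_pvalues(X, groups=np.repeat([0, 1], 20)))
```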

  14. Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters.

    PubMed

    Chung, SungWon; Lu, Ying; Henry, Roland G

    2006-11-01

    Bootstrap is an empirical non-parametric statistical technique based on data resampling that has been used to quantify uncertainties of diffusion tensor MRI (DTI) parameters, useful in tractography and in assessing DTI methods. The current bootstrap method (repetition bootstrap) used for DTI analysis performs resampling within the data sharing common diffusion gradients, requiring multiple acquisitions for each diffusion gradient. Recently, wild bootstrap was proposed that can be applied without multiple acquisitions. In this paper, two new approaches are introduced called residual bootstrap and repetition bootknife. We show that repetition bootknife corrects for the large bias present in the repetition bootstrap method and, therefore, better estimates the standard errors. Like wild bootstrap, residual bootstrap is applicable to single acquisition scheme, and both are based on regression residuals (called model-based resampling). Residual bootstrap is based on the assumption that non-constant variance of measured diffusion-attenuated signals can be modeled, which is actually the assumption behind the widely used weighted least squares solution of diffusion tensor. The performances of these bootstrap approaches were compared in terms of bias, variance, and overall error of bootstrap-estimated standard error by Monte Carlo simulation. We demonstrate that residual bootstrap has smaller biases and overall errors, which enables estimation of uncertainties with higher accuracy. Understanding the properties of these bootstrap procedures will help us to choose the optimal approach for estimating uncertainties that can benefit hypothesis testing based on DTI parameters, probabilistic fiber tracking, and optimizing DTI methods.
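
    The model-based (residual) resampling described above can be illustrated outside the DTI setting. The sketch below applies a residual bootstrap to an ordinary least-squares fit on simulated data; in the DTI application the refit would be the (weighted) tensor estimation rather than this toy regression:

```python
# Minimal sketch of the residual bootstrap: fit once, resample residuals
# with replacement, rebuild responses, refit, and use the spread of the
# refitted coefficients as the standard error estimate.
import numpy as np

def residual_bootstrap_se(X, y, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    boot_betas = []
    for _ in range(n_boot):
        y_star = X @ beta + rng.choice(residuals, size=len(y), replace=True)
        b_star, *_ = np.linalg.lstsq(X, y_star, rcond=None)
        boot_betas.append(b_star)
    return np.std(boot_betas, axis=0, ddof=1)

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(scale=0.5, size=50)
print(residual_bootstrap_se(X, y))
```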

  15. Adaptive graph-based multiple testing procedures

    PubMed Central

    Klinglmueller, Florian; Posch, Martin; Koenig, Franz

    2016-01-01

    Multiple testing procedures defined by directed, weighted graphs have recently been proposed as an intuitive visual tool for constructing multiple testing strategies that reflect the often complex contextual relations between hypotheses in clinical trials. Many well-known sequentially rejective tests, such as (parallel) gatekeeping tests or hierarchical testing procedures, are special cases of the graph-based tests. We generalize these graph-based multiple testing procedures to adaptive trial designs with an interim analysis. These designs permit mid-trial design modifications based on unblinded interim data as well as external information, while providing strong familywise error rate control. To maintain the familywise error rate, it is not required to prespecify the adaptation rule in detail. Because the adaptive test does not require knowledge of the multivariate distribution of test statistics, it is applicable in a wide range of scenarios including trials with multiple treatment comparisons, endpoints or subgroups, or combinations thereof. Examples of adaptations are dropping of treatment arms, selection of subpopulations, and sample size reassessment. If, in the interim analysis, it is decided to continue the trial as planned, the adaptive test reduces to the originally planned multiple testing procedure. Only if adaptations are actually implemented does an adjusted test need to be applied. The procedure is illustrated with a case study and its operating characteristics are investigated by simulations. PMID:25319733

  16. Demystifying the GMAT: Computer-Based Testing Terms

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.

    2012-01-01

    Computer-based testing can be a powerful means to make all aspects of test administration not only faster and more efficient, but also more accurate and more secure. While the Graduate Management Admission Test (GMAT) exam is a computer adaptive test, there are other approaches. This installment presents a primer of computer-based testing terms.

  17. The Significance of Temperature Based Approach Over the Energy Based Approaches in the Buildings Thermal Assessment

    NASA Astrophysics Data System (ADS)

    Albatayneh, Aiman; Alterman, Dariusz; Page, Adrian; Moghtaderi, Behdad

    2017-05-01

    The design of low energy buildings requires accurate thermal simulation software to assess the heating and cooling loads. Such designs should sustain thermal comfort for occupants and promote less energy usage over the lifetime of any building. One of the house energy rating tools used in Australia is AccuRate, a star rating tool that assesses and compares the thermal performance of various buildings, where the heating and cooling loads are calculated based on fixed operational temperatures between 20 °C and 25 °C to sustain thermal comfort for the occupants. However, these fixed settings for time and temperature considerably increase the heating and cooling loads. On the other hand, the adaptive thermal model applies to a broader range of weather conditions, interacts with the occupants and promotes low energy solutions to maintain thermal comfort. This can be achieved by natural ventilation (opening windows/doors), suitable clothes, shading and low energy heating/cooling solutions for the occupied spaces (rooms). These activities save a significant amount of operating energy, which can be taken into account when predicting a building's energy consumption. Most building thermal assessment tools depend on energy-based approaches to predict the thermal performance of a building, e.g. AccuRate in Australia. This approach encourages the use of energy to maintain thermal comfort. This paper describes the advantages of a temperature-based approach to assess a building's thermal performance (using an adaptive thermal comfort model) over the energy-based approach (the AccuRate software used in Australia). The temperature-based approach was validated and compared with the energy-based approach using four full-scale housing test modules located in Newcastle, Australia (Cavity Brick (CB), Insulated Cavity Brick (InsCB), Insulated Brick Veneer (InsBV) and Insulated Reverse Brick Veneer (InsRBV)) subjected to a range of seasonal conditions in a moderate climate. The time required for

  18. Incorporating external evidence in trial-based cost-effectiveness analyses: the use of resampling methods

    PubMed Central

    2014-01-01

    Background Cost-effectiveness analyses (CEAs) that use patient-specific data from a randomized controlled trial (RCT) are popular, yet such CEAs are criticized because they neglect to incorporate evidence external to the trial. A popular method for quantifying uncertainty in a RCT-based CEA is the bootstrap. The objective of the present study was to further expand the bootstrap method of RCT-based CEA for the incorporation of external evidence. Methods We utilize the Bayesian interpretation of the bootstrap and derive the distribution for the cost and effectiveness outcomes after observing the current RCT data and the external evidence. We propose simple modifications of the bootstrap for sampling from such posterior distributions. Results In a proof-of-concept case study, we use data from a clinical trial and incorporate external evidence on the effect size of treatments to illustrate the method in action. Compared to the parametric models of evidence synthesis, the proposed approach requires fewer distributional assumptions, does not require explicit modeling of the relation between external evidence and outcomes of interest, and is generally easier to implement. A drawback of this approach is potential computational inefficiency compared to the parametric Bayesian methods. Conclusions The bootstrap method of RCT-based CEA can be extended to incorporate external evidence, while preserving its appealing features such as no requirement for parametric modeling of cost and effectiveness outcomes. PMID:24888356

  19. Incorporating external evidence in trial-based cost-effectiveness analyses: the use of resampling methods.

    PubMed

    Sadatsafavi, Mohsen; Marra, Carlo; Aaron, Shawn; Bryan, Stirling

    2014-06-03

    Cost-effectiveness analyses (CEAs) that use patient-specific data from a randomized controlled trial (RCT) are popular, yet such CEAs are criticized because they neglect to incorporate evidence external to the trial. A popular method for quantifying uncertainty in a RCT-based CEA is the bootstrap. The objective of the present study was to further expand the bootstrap method of RCT-based CEA for the incorporation of external evidence. We utilize the Bayesian interpretation of the bootstrap and derive the distribution for the cost and effectiveness outcomes after observing the current RCT data and the external evidence. We propose simple modifications of the bootstrap for sampling from such posterior distributions. In a proof-of-concept case study, we use data from a clinical trial and incorporate external evidence on the effect size of treatments to illustrate the method in action. Compared to the parametric models of evidence synthesis, the proposed approach requires fewer distributional assumptions, does not require explicit modeling of the relation between external evidence and outcomes of interest, and is generally easier to implement. A drawback of this approach is potential computational inefficiency compared to the parametric Bayesian methods. The bootstrap method of RCT-based CEA can be extended to incorporate external evidence, while preserving its appealing features such as no requirement for parametric modeling of cost and effectiveness outcomes.
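
    The bootstrap variant the two records above build on admits a compact sketch. Under the Bayesian interpretation of the bootstrap, each draw re-weights patients with Dirichlet(1, ..., 1) weights; the hypothetical helper below applies this to one trial arm and summarises cost-effectiveness as a net monetary benefit. The function name, the net-benefit summary, and the omission of the external-evidence weighting step are assumptions for illustration only.

    ```python
    import numpy as np

    def bayesian_bootstrap_arm(costs, effects, wtp, n_draws=4000, rng=None):
        """Bayesian-bootstrap draws of mean cost and effect for one trial arm
        (sketch). Incorporating external evidence, as the paper proposes, would
        further modify how these draws are generated or weighted (not shown)."""
        rng = np.random.default_rng() if rng is None else rng
        costs, effects = np.asarray(costs, float), np.asarray(effects, float)
        w = rng.dirichlet(np.ones(len(costs)), size=n_draws)   # one weight vector per draw
        mean_cost, mean_effect = w @ costs, w @ effects
        net_benefit = wtp * mean_effect - mean_cost            # net monetary benefit per draw
        return mean_cost, mean_effect, float(np.mean(net_benefit > 0))
    ```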

  20. Development of a copula-based particle filter (CopPF) approach for hydrologic data assimilation under consideration of parameter interdependence

    NASA Astrophysics Data System (ADS)

    Fan, Y. R.; Huang, G. H.; Baetz, B. W.; Li, Y. P.; Huang, K.

    2017-06-01

    In this study, a copula-based particle filter (CopPF) approach was developed for sequential hydrological data assimilation by considering parameter correlation structures. In CopPF, multivariate copulas are proposed to reflect parameter interdependence before the resampling procedure with new particles then being sampled from the obtained copulas. Such a process can overcome both particle degeneration and sample impoverishment. The applicability of CopPF is illustrated with three case studies using a two-parameter simplified model and two conceptual hydrologic models. The results for the simplified model indicate that model parameters are highly correlated in the data assimilation process, suggesting a demand for full description of their dependence structure. Synthetic experiments on hydrologic data assimilation indicate that CopPF can rejuvenate particle evolution in large spaces and thus achieve good performances with low sample size scenarios. The applicability of CopPF is further illustrated through two real-case studies. It is shown that, compared with traditional particle filter (PF) and particle Markov chain Monte Carlo (PMCMC) approaches, the proposed method can provide more accurate results for both deterministic and probabilistic prediction with a sample size of 100. Furthermore, the sample size would not significantly influence the performance of CopPF. Also, the copula resampling approach dominates parameter evolution in CopPF, with more than 50% of particles sampled by copulas in most sample size scenarios.
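
    A minimal sketch of the copula-resampling idea, assuming a Gaussian copula (the paper considers multivariate copulas more generally): transform the weighted particles to normal scores, estimate their correlation, draw fresh dependent scores, and map them back through the weighted empirical quantiles. Function and variable names are illustrative, not the paper's implementation.

    ```python
    import numpy as np
    from scipy import stats

    def gaussian_copula_resample(particles, weights, n_new=None, rng=None):
        """Resample a weighted particle cloud via a fitted Gaussian copula (sketch)."""
        rng = np.random.default_rng() if rng is None else rng
        particles = np.asarray(particles, float)
        w = np.asarray(weights, float)
        w = w / w.sum()
        n, d = particles.shape
        n_new = n if n_new is None else n_new
        # 1) weighted empirical CDF value of each particle in each dimension
        u = np.empty_like(particles)
        for j in range(d):
            order = np.argsort(particles[:, j])
            cdf = np.cumsum(w[order])
            u[order, j] = np.clip(cdf - w[order] / 2.0, 1e-6, 1 - 1e-6)
        # 2) normal scores and copula correlation matrix
        z = stats.norm.ppf(u)
        corr = np.corrcoef(z, rowvar=False)
        # 3) fresh dependent scores drawn from the fitted copula
        u_new = stats.norm.cdf(rng.multivariate_normal(np.zeros(d), corr, size=n_new))
        # 4) back to parameter space via weighted empirical quantiles
        new = np.empty((n_new, d))
        for j in range(d):
            order = np.argsort(particles[:, j])
            new[:, j] = np.interp(u_new[:, j], np.cumsum(w[order]), particles[order, j])
        return new
    ```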

  1. Pattern Recognition of Momentary Mental Workload Based on Multi-Channel Electrophysiological Data and Ensemble Convolutional Neural Networks.

    PubMed

    Zhang, Jianhua; Li, Sunan; Wang, Rubin

    2017-01-01

    In this paper, we deal with the Mental Workload (MWL) classification problem based on measured physiological data. First, we discussed the optimal depth (i.e., the number of hidden layers) and parameter optimization algorithms for Convolutional Neural Networks (CNNs). The base CNNs designed were tested according to five classification performance indices, namely Accuracy, Precision, F-measure, G-mean, and required training time. Then we developed an Ensemble Convolutional Neural Network (ECNN) to enhance the accuracy and robustness of the individual CNN model. For the ECNN design, three model aggregation approaches (weighted averaging, majority voting, and stacking) were examined, and a resampling strategy was used to enhance the diversity of the individual CNN models. The MWL classification performance comparison indicated that the proposed ECNN framework can effectively improve MWL classification performance and, compared with traditional machine learning methods, features entirely automatic feature extraction and MWL classification.
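
    The aggregation step of such an ensemble is simple to sketch. The snippet below shows weighted averaging and majority voting over the class-probability outputs of several base models; stacking, the training of the CNNs, and the resampling used to diversify them are not shown, and all names are illustrative assumptions.

    ```python
    import numpy as np

    def aggregate_predictions(prob_list, method="weighted", weights=None):
        """Combine class-probability arrays of shape (n_samples, n_classes)
        from several base classifiers into hard labels (sketch)."""
        probs = np.stack(prob_list)                       # (n_models, n_samples, n_classes)
        if method == "weighted":
            w = np.ones(len(prob_list)) if weights is None else np.asarray(weights, float)
            w = w / w.sum()
            return np.tensordot(w, probs, axes=1).argmax(axis=1)
        if method == "voting":
            votes = probs.argmax(axis=2)                  # each model's hard label
            return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
        raise ValueError("method must be 'weighted' or 'voting'")
    ```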

  2. Temporal rainfall disaggregation using a multiplicative cascade model for spatial application in urban hydrology

    NASA Astrophysics Data System (ADS)

    Müller, H.; Haberlandt, U.

    2018-01-01

    Rainfall time series of high temporal resolution and spatial density are crucial for urban hydrology. The multiplicative random cascade model can be used for temporal rainfall disaggregation of daily data to generate such time series. Here, the uniform splitting approach with a branching number of 3 in the first disaggregation step is applied. To achieve a final resolution of 5 min, subsequent steps after disaggregation are necessary. Three modifications at different disaggregation levels are tested in this investigation (uniform splitting at Δt = 15 min, linear interpolation at Δt = 7.5 min and Δt = 3.75 min). Results are compared both with observations and with an often-used approach based on the assumption that time steps of Δt = 5.625 min, as result when a branching number of 2 is applied throughout, can be replaced by Δt = 5 min (the so-called 1280 min approach). Spatial consistence is implemented in the disaggregated time series using a resampling algorithm. In total, 24 recording stations in Lower Saxony, Northern Germany, with a 5 min resolution have been used for the validation of the disaggregation procedure. The urban-hydrological suitability is tested with an artificial combined sewer system of about 170 hectares. The results show that all three variations outperform the 1280 min approach regarding reproduction of wet spell duration, average intensity, fraction of dry intervals and lag-1 autocorrelation. Extreme values with durations of 5 min are also better represented. For durations of 1 h, all approaches show only slight deviations from the observed extremes. The applied resampling algorithm is capable of achieving sufficient spatial consistence. The effects on the urban hydrological simulations are significant. Without spatial consistence, flood volumes of manholes and combined sewer overflow are strongly underestimated. After resampling, results using disaggregated time series as input are in the range of those using observed time series. Best
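
    One disaggregation step of a multiplicative random cascade can be sketched as follows: each coarse interval is split into b finer intervals whose weights sum to one, with some sub-intervals allowed to stay dry so that mass is conserved. The Dirichlet weights and the fixed dry probability are illustrative assumptions; the study's model estimates its splitting probabilities from observed high-resolution rainfall.

    ```python
    import numpy as np

    def cascade_step(series, b=3, p_dry=0.3, rng=None):
        """Disaggregate each value of `series` into b sub-interval values that
        conserve mass (micro-canonical multiplicative cascade step, sketch)."""
        rng = np.random.default_rng() if rng is None else rng
        out = []
        for v in series:
            if v == 0.0:                      # dry intervals stay dry
                out.extend([0.0] * b)
                continue
            weights = rng.dirichlet(np.ones(b))
            dry = rng.random(b) < p_dry       # some sub-intervals receive no rain
            if dry.all():
                dry[rng.integers(b)] = False  # keep at least one wet sub-interval
            weights = np.where(dry, 0.0, weights)
            out.extend((v * weights / weights.sum()).tolist())
        return np.array(out)

    # e.g. one daily value of 12 mm split into three 8 h values, then split again
    print(cascade_step(cascade_step([12.0])))
    ```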

  3. Prognostic significance of performing universal HER2 testing in cases of advanced gastric cancer.

    PubMed

    Jiménez-Fonseca, Paula; Carmona-Bayonas, Alberto; Sánchez Lorenzo, Maria Luisa; Plazas, Javier Gallego; Custodio, Ana; Hernández, Raquel; Garrido, Marcelo; García, Teresa; Echavarría, Isabel; Cano, Juana María; Rodríguez Palomo, Alberto; Mangas, Monserrat; Macías Declara, Ismael; Ramchandani, Avinash; Visa, Laura; Viudez, Antonio; Buxó, Elvira; Díaz-Serrano, Asunción; López, Carlos; Azkarate, Aitor; Longo, Federico; Castañón, Eduardo; Sánchez Bayona, Rodrigo; Pimentel, Paola; Limón, Maria Luisa; Cerdá, Paula; Álvarez Llosa, Renata; Serrano, Raquel; Lobera, Maria Pilar Felices; Alsina, María; Hurtado Nuño, Alicia; Gómez-Martin, Carlos

    2017-05-01

    Trastuzumab significantly improves overall survival (OS) when added to cisplatin and fluoropyrimidine as a treatment for HER2-positive advanced gastric cancers (AGC). The aim of this study was to evaluate the impact of the gradual implementation of HER2 testing on patient prognosis in a national registry of AGC. This Spanish National Cancer Registry includes cases who were consecutively recruited at 28 centers from January 2008 to January 2016. The effect of missing HER2 status was assessed using stratified Cox proportional hazards (PH) regression. The rate of HER2 testing increased steadily over time, from 58.3 % in 2008 to 92.9 % in 2016. HER2 was positive in 194 tumors (21.3 %). In the stratified Cox PH regression, each 1 % increase in patients who were not tested for HER2 at the institutions was associated with an approximately 0.3 % increase in the risk of death: hazard ratio, 1.0035 (CI 95 %, 1.001-1.005), P = 0.0019. Median OS was significantly lower at institutions with the highest proportions of patients who were not tested for HER2. Patients treated at centers that took longer to implement HER2 testing exhibited worse clinical outcomes. The speed of implementation behaves as a quality-of-care indicator. Reviewed guidelines on HER2 testing should be used to achieve this goal in a timely manner.

  4. BEAT: A Web-Based Boolean Expression Fault-Based Test Case Generation Tool

    ERIC Educational Resources Information Center

    Chen, T. Y.; Grant, D. D.; Lau, M. F.; Ng, S. P.; Vasa, V. R.

    2006-01-01

    BEAT is a Web-based system that generates fault-based test cases from Boolean expressions. It is based on the integration of our several fault-based test case selection strategies. The generated test cases are considered to be fault-based, because they are aiming at the detection of particular faults. For example, when the Boolean expression is in…

  5. Comparisons of Reflectivities from the TRMM Precipitation Radar and Ground-Based Radars

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Wolff, David B.

    2008-01-01

    Given the decade long and highly successful Tropical Rainfall Measuring Mission (TRMM), it is now possible to provide quantitative comparisons between ground-based radars (GRs) with the space-borne TRMM precipitation radar (PR) with greater certainty over longer time scales in various tropical climatological regions. This study develops an automated methodology to match and compare simultaneous TRMM PR and GR reflectivities at four primary TRMM Ground Validation (GV) sites: Houston, Texas (HSTN); Melbourne, Florida (MELB); Kwajalein, Republic of the Marshall Islands (KWAJ); and Darwin, Australia (DARW). Data from each instrument are resampled into a three-dimensional Cartesian coordinate system. The horizontal displacement during the PR data resampling is corrected. Comparisons suggest that the PR suffers significant attenuation at lower levels especially in convective rain. The attenuation correction performs quite well for convective rain but appears to slightly over-correct in stratiform rain. The PR and GR observations at HSTN, MELB and KWAJ agree to about 1 dB on average with a few exceptions, while the GR at DARW requires +1 to -5 dB calibration corrections. One of the important findings of this study is that the GR calibration offset is dependent on the reflectivity magnitude. Hence, we propose that the calibration should be carried out using a regression correction, rather than simply adding an offset value to all GR reflectivities. This methodology is developed towards TRMM GV efforts to improve the accuracy of tropical rain estimates, and can also be applied to the proposed Global Precipitation Measurement and other related activities over the globe.
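
    The paper's recommendation of a regression correction rather than a constant offset is easy to illustrate: fit PR reflectivity as a linear function of GR reflectivity over the matched samples and apply that fit as the calibration. The numbers below are illustrative placeholders, not TRMM data.

    ```python
    import numpy as np

    # illustrative matched reflectivity pairs (dBZ); real use would take the
    # volume-matched PR/GR database described in the abstract
    gr_dbz = np.array([18.0, 22.5, 27.0, 31.5, 36.0, 40.5, 45.0])
    pr_dbz = np.array([19.2, 23.1, 27.2, 31.0, 34.9, 38.8, 42.7])

    slope, intercept = np.polyfit(gr_dbz, pr_dbz, 1)   # reflectivity-dependent correction
    gr_regression_cal = slope * gr_dbz + intercept

    gr_offset_cal = gr_dbz + (pr_dbz - gr_dbz).mean()  # the simpler constant-offset alternative
    print(slope, intercept)
    ```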

  6. The effects of calculator-based laboratories on standardized test scores

    NASA Astrophysics Data System (ADS)

    Stevens, Charlotte Bethany Rains

    Nationwide, the goal of providing a productive science and math education to our youth in today's educational institutions increasingly centers on the technology utilized in classrooms. In this age of digital technology, educational software and calculator-based laboratories (CBL) have become significant devices in the teaching of science and math in many states across the United States. The Texas Instruments graphing calculator and the Vernier LabPro interface are among the calculator-based laboratories becoming increasingly popular with middle and high school science and math teachers in many school districts across the country. In Tennessee, however, it is reported that this type of technology is not regularly utilized at the student level in most high school science classrooms, especially in the area of Physical Science (Vernier, 2006). This research explored the effect of calculator-based laboratory instruction on standardized test scores. The purpose of this study was to determine the effect of traditional teaching methods versus graphing calculator teaching methods on the state-mandated End-of-Course (EOC) Physical Science exam based on ability, gender, and ethnicity. The sample included 187 total tenth and eleventh grade physical science students, 101 of whom belonged to a control group and 87 of whom belonged to the experimental group. Physical Science End-of-Course scores obtained from the Tennessee Department of Education during the spring of 2005 and the spring of 2006 were used to examine the hypotheses. The findings of this research study suggested that the type of teaching method, traditional or calculator-based, did not have an effect on standardized test scores. However, the students' ability level, as demonstrated on the End-of-Course test, had a significant effect on End-of-Course test scores. This study focused on a limited population of high school physical science students in the middle Tennessee

  7. On the nature of data collection for soft-tissue image-to-physical organ registration: a noise characterization study

    NASA Astrophysics Data System (ADS)

    Collins, Jarrod A.; Heiselman, Jon S.; Weis, Jared A.; Clements, Logan W.; Simpson, Amber L.; Jarnagin, William R.; Miga, Michael I.

    2017-03-01

    In image-guided liver surgery (IGLS), sparse representations of the anterior organ surface may be collected intraoperatively to drive image-to-physical space registration. Soft tissue deformation represents a significant source of error for IGLS techniques. This work investigates the impact of surface data quality on current surface based IGLS registration methods. In this work, we characterize the robustness of our IGLS registration methods to noise in organ surface digitization. We study this within a novel human-to-phantom data framework that allows a rapid evaluation of clinically realistic data and noise patterns on a fully characterized hepatic deformation phantom. Additionally, we implement a surface data resampling strategy that is designed to decrease the impact of differences in surface acquisition. For this analysis, n=5 cases of clinical intraoperative data consisting of organ surface and salient feature digitizations from open liver resection were collected and analyzed within our human-to-phantom validation framework. As expected, results indicate that increasing levels of noise in surface acquisition cause registration fidelity to deteriorate. With respect to rigid registration using the raw and resampled data at clinically realistic levels of noise (i.e. a magnitude of 1.5 mm), resampling improved TRE by 21%. In terms of nonrigid registration, registrations using resampled data outperformed the raw data result by 14% at clinically realistic levels and were less susceptible to noise across the range of noise investigated. These results demonstrate the types of analyses our novel human-to-phantom validation framework can provide and indicate the considerable benefits of resampling strategies.

  8. Simulation-based Testing of Control Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozmen, Ozgur; Nutaro, James J.; Sanyal, Jibonananda

    It is impossible to adequately test complex software by examining its operation in a physical prototype of the system monitored. Adequate test coverage can require millions of test cases, and the cost of equipment prototypes combined with the real-time constraints of testing with them makes it infeasible to sample more than a small number of these tests. Model based testing seeks to avoid this problem by allowing for large numbers of relatively inexpensive virtual prototypes that operate in simulation time at a speed limited only by the available computing resources. In this report, we describe how a computer system emulator can be used as part of a model based testing environment; specifically, we show that a complete software stack - including operating system and application software - can be deployed within a simulated environment, and that these simulations can proceed as fast as possible. To illustrate this approach to model based testing, we describe how it is being used to test several building control systems that act to coordinate air conditioning loads for the purpose of reducing peak demand. These tests involve the use of ADEVS (A Discrete Event System Simulator) and QEMU (Quick Emulator) to host the operational software within the simulation, and a building model developed with the MODELICA programming language using Buildings Library and packaged as an FMU (Functional Mock-up Unit) that serves as the virtual test environment.

  9. Mass spectrometry-based protein identification with accurate statistical significance assignment.

    PubMed

    Alves, Gelio; Yu, Yi-Kuo

    2015-03-01

    Assigning statistical significance accurately has become increasingly important as metadata of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of metadata at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry-based proteomics, even though accurate statistics for peptide identification can now be achieved, accurate protein level statistics remain challenging. We have constructed a protein ID method that combines peptide evidences of a candidate protein based on a rigorous formula derived earlier; in this formula the database P-value of every peptide is weighted, prior to the final combination, according to the number of proteins it maps to. We have also shown that this protein ID method provides accurate protein level E-value, eliminating the need of using empirical post-processing methods for type-I error control. Using a known protein mixture, we find that this protein ID method, when combined with the Sorić formula, yields accurate values for the proportion of false discoveries. In terms of retrieval efficacy, the results from our method are comparable with other methods tested. The source code, implemented in C++ on a linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.

  10. Performance-based alternative assessments as a means of eliminating gender achievement differences on science tests

    NASA Astrophysics Data System (ADS)

    Brown, Norman Merrill

    1998-09-01

    Historically, researchers have reported an achievement difference between females and males on standardized science tests. These differences have been reported to be based upon science knowledge, abstract reasoning skills, mathematical abilities, and cultural and social phenomena. This research was designed to determine how mastery of specific science content from public school curricula might be evaluated with performance-based assessment models, without producing gender achievement differences. The assessment instruments used were Harcourt Brace Educational Measurement's GOALS: A Performance-Based Measure of Achievement and the performance-based portion of the Stanford Achievement Test, Ninth Edition. The identified independent variables were test, gender, ethnicity, and grade level. A 2 x 2 x 6 x 12 (test x gender x ethnicity x grade) factorial experimental design was used to organize the data. A stratified random sample (N = 2400) was selected from a national pool of norming data: N = 1200 from the GOALS group and N = 1200 from the SAT9 group. The ANOVA analysis yielded mixed results. The factors of test, gender, ethnicity by grade, gender by grade, and gender by grade by ethnicity failed to produce significant results (alpha = 0.05). The factors yielding significant results were ethnicity, grade, and ethnicity by grade. Therefore, no significant differences were found between female and male achievement on these performance-based assessments.

  11. [Artefacts of questionnaire-based psychological testing of drivers].

    PubMed

    Łuczak, Anna; Tarnowski, Adam

    2014-01-01

    The purpose of this article is to draw attention to the significant role of the social approval variable in the questionnaire-based diagnosis of drivers' psychological aptitude. Three questionnaires were used: the Formal Characteristics of Behavior - Temperament Inventory (FCB-TI), the Eysenck Personality Questionnaire (EPQ-R(S)) and the Impulsiveness Questionnaire (Impulsiveness, Venturesomeness, Empathy - IVE). Three groups of drivers were analyzed: professional "without crashes" (N = 46), nonprofessional "without crashes" (N = 75), and nonprofessional "with crashes" (N = 75). Nonprofessional drivers "without crashes" stood out significantly from the other drivers. Their personality profile, showing the highest perseveration, emotional reactivity, neuroticism and impulsiveness and the lowest endurance, did not fit the requirements to be met by drivers. The driver safety profile was characteristic of professional drivers (the lowest levels of perseveration, impulsiveness and neuroticism and the highest level of endurance). A similar profile occurred among the nonprofessional drivers who had caused road crashes. Compared to the nonprofessional "without crashes" group, professional drivers and offenders in road crashes were also characterized by significantly higher scores on the Lie scale, which measures the need for social approval. This is likely to result from the study procedure, in which the test result affected professional drivers' continued employment and, for nonprofessional drivers "with crashes", the possible recovery of their driving license. The social approval variable can thus be a significant artifact in psychological testing of drivers and can reduce the reliability of the results of questionnaire methods.

  12. Space Launch System Base Heating Test: Environments and Base Flow Physics

    NASA Technical Reports Server (NTRS)

    Mehta, Manish; Knox, Kyle S.; Seaford, C. Mark; Dufrene, Aaron T.

    2016-01-01

    The NASA Space Launch System (SLS) vehicle is composed of four RS-25 liquid oxygen-hydrogen rocket engines in the core-stage and two 5-segment solid rocket boosters and as a result six hot supersonic plumes interact within the aft section of the vehicle during flight. Due to the complex nature of rocket plume-induced flows within the launch vehicle base during ascent and a new vehicle configuration, sub-scale wind tunnel testing is required to reduce SLS base convective environment uncertainty and design risk levels. This hot-fire test program was conducted at the CUBRC Large Energy National Shock (LENS) II short-duration test facility to simulate flight from altitudes of 50 kft to 210 kft. The test program is a challenging and innovative effort that has not been attempted in 40+ years for a NASA vehicle. This presentation discusses the various trends of base convective heat flux and pressure as a function of altitude at various locations within the core-stage and booster base regions of the two-percent SLS wind tunnel model. In-depth understanding of the base flow physics is presented using the test data, infrared high-speed imaging and theory. The normalized test design environments are compared to various NASA semi-empirical numerical models to determine exceedance and conservatism of the flight scaled test-derived base design environments. Brief discussion of thermal impact to the launch vehicle base components is also presented.

  13. Space Launch System Base Heating Test: Environments and Base Flow Physics

    NASA Technical Reports Server (NTRS)

    Mehta, Manish; Knox, Kyle S.; Seaford, C. Mark; Dufrene, Aaron T.

    2016-01-01

    The NASA Space Launch System (SLS) vehicle is composed of four RS-25 liquid oxygen-hydrogen rocket engines in the core-stage and two 5-segment solid rocket boosters and as a result six hot supersonic plumes interact within the aft section of the vehicle during flight. Due to the complex nature of rocket plume-induced flows within the launch vehicle base during ascent and a new vehicle configuration, sub-scale wind tunnel testing is required to reduce SLS base convective environment uncertainty and design risk levels. This hot-fire test program was conducted at the CUBRC Large Energy National Shock (LENS) II short-duration test facility to simulate flight from altitudes of 50 kft to 210 kft. The test program is a challenging and innovative effort that has not been attempted in 40+ years for a NASA vehicle. This paper discusses the various trends of base convective heat flux and pressure as a function of altitude at various locations within the core-stage and booster base regions of the two-percent SLS wind tunnel model. In-depth understanding of the base flow physics is presented using the test data, infrared high-speed imaging and theory. The normalized test design environments are compared to various NASA semi-empirical numerical models to determine exceedance and conservatism of the flight scaled test-derived base design environments. Brief discussion of thermal impact to the launch vehicle base components is also presented.

  14. A General Class of Signed Rank Tests for Clustered Data when the Cluster Size is Potentially Informative

    PubMed Central

    Datta, Somnath; Nevalainen, Jaakko; Oja, Hannu

    2012-01-01

    SUMMARY Rank-based tests are alternatives to likelihood-based tests, popularized by their relative robustness and elegant underlying mathematical theory. There has been a surge in research activity in this area in recent years, as a number of researchers work to develop and extend rank-based procedures to clustered dependent data, which include situations with known correlation structures (e.g., as in mixed effects models) as well as more general forms of dependence. The purpose of this paper is to test the symmetry of a marginal distribution under clustered data. However, unlike most other papers in the area, we consider the possibility that the cluster size is a random variable whose distribution depends on the distribution of the variable of interest within a cluster. This situation typically arises when the clusters are defined in a natural way (e.g., not controlled by the experimenter or statistician) and the size of the cluster may carry information about the distribution of data values within the cluster. Under the scenario of an informative cluster size, attempts to use some form of variance-adjusted sign or signed rank test would fail, since they would not maintain the correct size under marginal symmetry. To overcome this difficulty, Datta and Satten (2008; Biometrics, 64, 501–507) proposed a Wilcoxon-type signed rank test based on the principle of within-cluster resampling. In this paper we study this problem in more generality by introducing a class of valid tests employing a general score function. The asymptotic null distribution of these tests is obtained. A simulation study shows that a more general choice of the score function can sometimes result in greater power than the Datta and Satten test; furthermore, this development offers the user a wider choice. We illustrate our tests using a real data example on spinal cord injury patients. PMID:23074359
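
    The within-cluster resampling principle underlying these tests can be sketched as follows: repeatedly draw one observation per cluster, compute an ordinary standardized signed-rank statistic on the resulting independent sample, and average over resamples. The naive averaging shown here ignores the proper variance of the averaged statistic, which the paper derives; the names and the number of resamples are illustrative assumptions.

    ```python
    import numpy as np

    def wcr_signed_rank(clusters, n_resamples=2000, rng=None):
        """Average standardized Wilcoxon signed-rank statistic over
        within-cluster resamples (sketch; ties are broken arbitrarily).
        `clusters` is a list of 1-D arrays, one per cluster."""
        rng = np.random.default_rng() if rng is None else rng
        stats = []
        for _ in range(n_resamples):
            sample = np.array([rng.choice(np.asarray(c, float)) for c in clusters])
            n = len(sample)
            ranks = np.argsort(np.argsort(np.abs(sample))) + 1   # ranks of |X|
            w_plus = ranks[sample > 0].sum()
            mean = n * (n + 1) / 4.0
            sd = np.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
            stats.append((w_plus - mean) / sd)
        return float(np.mean(stats))
    ```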

  15. An improved method to set significance thresholds for β diversity testing in microbial community comparisons.

    PubMed

    Gülay, Arda; Smets, Barth F

    2015-09-01

    Exploring the variation in microbial community diversity between locations (β diversity) is a central topic in microbial ecology. Currently, there is no consensus on how to set the significance threshold for β diversity. Here, we describe and quantify the technical components of β diversity, including those associated with the process of subsampling. These components exist for any proposed β diversity measurement procedure. Further, we introduce a strategy to set significance thresholds for β diversity of any group of microbial samples using rarefaction, invoking the notion of a meta-community. The proposed technique was applied to several in silico generated operational taxonomic unit (OTU) libraries and experimental 16S rRNA pyrosequencing libraries. The latter represented microbial communities from different biological rapid sand filters at a full-scale waterworks. We observe that β diversity, after subsampling, is inflated by intra-sample differences; this inflation is avoided in the proposed method. In addition, microbial community evenness (Gini > 0.08) strongly affects all β diversity estimations due to bias associated with rarefaction. Where published methods to test β significance often fail, the proposed meta-community-based estimator is more successful at rejecting insignificant β diversity values. Applying our approach, we reveal the heterogeneous microbial structure of biological rapid sand filters both within and across filters. © 2014 Society for Applied Microbiology and John Wiley & Sons Ltd.
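
    In the spirit of the meta-community idea, a significance threshold for β diversity can be sketched by pooling all samples, repeatedly rarefying pairs of pseudo-samples from the pool, and taking a high quantile of their dissimilarities as the level attributable to subsampling alone. The Bray-Curtis measure, the multinomial rarefaction, and all names are assumptions for illustration, not the paper's exact recipe.

    ```python
    import numpy as np

    def beta_null_threshold(count_matrix, depth, n_iter=999, q=0.95, rng=None):
        """Meta-community null threshold for pairwise beta diversity (sketch).
        count_matrix: (n_samples, n_otus) array of OTU counts."""
        rng = np.random.default_rng() if rng is None else rng
        pooled = np.asarray(count_matrix).sum(axis=0)        # meta-community counts
        p = pooled / pooled.sum()
        null = []
        for _ in range(n_iter):
            a = rng.multinomial(depth, p)                    # two rarefied pseudo-samples
            b = rng.multinomial(depth, p)
            null.append(np.abs(a - b).sum() / (a.sum() + b.sum()))   # Bray-Curtis
        return float(np.quantile(null, q))
    ```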

  16. Summary of Rocketdyne Engine A5 Rocket Based Combined Cycle Testing

    NASA Technical Reports Server (NTRS)

    Ketchum, A.; Emanuel, Mark; Cramer, John

    1998-01-01

    Rocketdyne Propulsion and Power (RPP) has completed a highly successful experimental test program of an advanced rocket based combined cycle (RBCC) propulsion system. The test program was conducted as part of the Advanced Reusable Technology program directed by NASA-MSFC to demonstrate technologies for low-cost access to space. Testing was conducted in the new GASL Flight Acceleration Simulation Test (FAST) facility at sea level (Mach 0), Mach 3.0 - 4.0, and vacuum flight conditions. Significant achievements obtained during the test program include 1) demonstration of engine operation in air-augmented rocket mode (AAR), ramjet mode and rocket mode and 2) smooth transition from AAR to ramjet mode operation. Testing in the fourth mode (scramjet) is scheduled for November 1998.

  17. Meal Microstructure Characterization from Sensor-Based Food Intake Detection.

    PubMed

    Doulah, Abul; Farooq, Muhammad; Yang, Xin; Parton, Jason; McCrory, Megan A; Higgins, Janine A; Sazonov, Edward

    2017-01-01

    To avoid the pitfalls of self-reported dietary intake, wearable sensors can be used. Many food ingestion sensors offer the ability to automatically detect food intake using time resolutions that range from 23 ms to 8 min. There is no defined standard time resolution to accurately measure ingestive behavior or a meal microstructure. This paper aims to estimate the time resolution needed to accurately represent the microstructure of meals such as duration of eating episode, the duration of actual ingestion, and number of eating events. Twelve participants wore the automatic ingestion monitor (AIM) and kept a standard diet diary to report their food intake in free-living conditions for 24 h. As a reference, participants were also asked to mark food intake with a push button sampled every 0.1 s. The duration of eating episodes, duration of ingestion, and number of eating events were computed from the food diary, AIM, and the push button resampled at different time resolutions (0.1-30 s). ANOVA and multiple comparison tests showed that the duration of eating episodes estimated from the diary differed significantly from that estimated by the AIM and the push button (p-value < 0.001). There were no significant differences in the number of eating events for push button resolutions of 0.1, 1, and 5 s, but there were significant differences at resolutions of 10-30 s (p-value < 0.05). The results suggest that the desired time resolution of sensor-based food intake detection should be ≤ 5 s to accurately detect meal microstructure. Furthermore, the AIM provides more accurate measurement of the eating episode duration than the diet diary.

  18. Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.

    PubMed

    Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi

    2012-11-08

    A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. Then, the simulated image was resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI) commonly used for routine clinical practice, and comparing the measured value with the true value (a known density of object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated, depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied with resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis, and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.
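
    A one-dimensional sketch of the simulation idea, assuming a Gaussian PSF: blur an ideal nodule profile with the PSF, resample it on the image grid with a chosen offset between nodule centre and voxel centre, and read out the mean in a small ROI. The 1-D simplification, the ROI rule, and all names are illustrative; the study works with full images and the measured PSF of the CT system.

    ```python
    import numpy as np

    def measured_density(diameter_mm, true_hu, psf, dx_mm, offset_mm=0.0, roi_mm=4.0):
        """Simulated ROI mean of a nodule profile after PSF blurring and
        resampling on a pixel grid (1-D sketch)."""
        fine = 0.01                                            # fine grid spacing (mm)
        x = np.arange(-40.0, 40.0, fine)
        obj = np.where(np.abs(x) <= diameter_mm / 2.0, true_hu, -1000.0)   # nodule in air
        blurred = np.convolve(obj, psf, mode="same") / psf.sum()
        grid = np.arange(-40.0, 40.0, dx_mm) + offset_mm       # voxel centres, shifted
        img = np.interp(grid, x, blurred)
        roi = np.abs(grid) <= roi_mm / 2.0                     # ROI centred on the nodule
        return img[roi].mean()

    sigma = 0.8 / 2.355                                        # Gaussian PSF, FWHM ~ 0.8 mm
    xp = np.arange(-3.0, 3.0, 0.01)
    psf = np.exp(-0.5 * (xp / sigma) ** 2)
    print(measured_density(6.0, 100.0, psf, dx_mm=0.6, offset_mm=0.3))
    ```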

  19. A computer-vision-based rotating speed estimation method for motor bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoxian; Guo, Jie; Lu, Siliang; Shen, Changqing; He, Qingbo

    2017-06-01

    Diagnosing motor bearing faults under variable speed remains a challenging problem. In this study, a new computer-vision-based order tracking method is proposed to address it. First, a video recorded by a high-speed camera is analyzed with the speeded-up robust feature extraction and matching algorithm to obtain the instantaneous rotating speed (IRS) of the motor. Subsequently, an audio signal recorded by a microphone is equi-angle resampled for order tracking in accordance with the IRS curve, through which the time-domain signal is transformed into an angular-domain one. The envelope order spectrum is then calculated to determine the fault characteristic order, and finally the bearing fault pattern is determined. The effectiveness and robustness of the proposed method are verified with two brushless direct-current motor test rigs, in which two defective bearings and a healthy bearing are tested separately. This study provides a new noninvasive measurement approach that both avoids the installation of a tachometer and overcomes the disadvantages of tacholess order tracking methods for motor bearing fault diagnosis under variable speed.
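
    The equi-angle resampling at the core of the order-tracking step can be sketched as follows: integrate the instantaneous rotating speed to a cumulative shaft angle, then interpolate the recorded signal at equally spaced angles. All names, and the fixed number of samples per revolution, are illustrative assumptions.

    ```python
    import numpy as np

    def equi_angle_resample(signal, fs, irs_time, irs_rpm, samples_per_rev=64):
        """Resample a time-domain signal at equal shaft-angle increments using an
        instantaneous-rotating-speed (IRS) curve (computed order tracking, sketch)."""
        signal = np.asarray(signal, float)
        t = np.arange(len(signal)) / fs
        rev_per_s = np.interp(t, irs_time, irs_rpm) / 60.0        # speed at each sample
        angle = np.concatenate(([0.0],
            np.cumsum(0.5 * (rev_per_s[1:] + rev_per_s[:-1]) * np.diff(t))))  # revolutions
        target = np.arange(0.0, angle[-1], 1.0 / samples_per_rev)  # equally spaced angles
        t_at_target = np.interp(target, angle, t)                  # times of those angles
        return np.interp(t_at_target, t, signal)                   # angular-domain signal
    ```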

  20. Cross-Mode Comparability of Computer-Based Testing (CBT) versus Paper-Pencil Based Testing (PPT): An Investigation of Testing Administration Mode among Iranian Intermediate EFL Learners

    ERIC Educational Resources Information Center

    Khoshsima, Hooshang; Hosseini, Monirosadat; Toroujeni, Seyyed Morteza Hashemi

    2017-01-01

    Advent of technology has caused growing interest in using computers to convert conventional paper and pencil-based testing (Henceforth PPT) into Computer-based testing (Henceforth CBT) in the field of education during last decades. This constant promulgation of computers to reshape the conventional tests into computerized format permeated the…

  1. Clinical significance of the glucose breath test in patients with inflammatory bowel disease.

    PubMed

    Lee, Ji Min; Lee, Kang-Moon; Chung, Yoon Yung; Lee, Yang Woon; Kim, Dae Bum; Sung, Hea Jung; Chung, Woo Chul; Paik, Chang-Nyol

    2015-06-01

    Small intestinal bacterial overgrowth, which has recently been diagnosed with the glucose breath test, is characterized by excessive colonic bacteria in the small bowel and results in gastrointestinal symptoms that mimic those of inflammatory bowel disease. This study aimed to estimate the positivity of the glucose breath test and investigate its clinical role in inflammatory bowel disease. Patients aged > 18 years with inflammatory bowel disease were enrolled. All patients completed symptom questionnaires. Fecal calprotectin level was measured to evaluate the disease activity. Thirty historical healthy controls were used to determine normal glucose breath test values. A total of 107 patients, 64 with ulcerative colitis and 43 with Crohn's disease, were included. Twenty-two patients (20.6%) were positive for the glucose breath test (30.2%, Crohn's disease; 14.1%, ulcerative colitis). The positive rate of the glucose breath test was significantly higher in patients with Crohn's disease than in healthy controls (30.2% vs 6.7%, P=0.014). Bloating, flatus, and satiety were higher in glucose breath test-positive patients than in glucose breath test-negative patients (P=0.021, 0.014, and 0.049, respectively). The positivity was not correlated with the fecal calprotectin level. The positive rate of the glucose breath test was higher in patients with inflammatory bowel disease, especially Crohn's disease, than in healthy controls; gastrointestinal symptoms of patients with inflammatory bowel disease were correlated with this positivity. The glucose breath test can be used to manage intestinal symptoms of patients with inflammatory bowel disease. © 2015 Journal of Gastroenterology and Hepatology Foundation and Wiley Publishing Asia Pty Ltd.

  2. Team-Based Testing Improves Individual Learning

    ERIC Educational Resources Information Center

    Vogler, Jane S.; Robinson, Daniel H.

    2016-01-01

    In two experiments, 90 undergraduates took six tests as part of an educational psychology course. Using a crossover design, students took three tests individually without feedback and then took the same test again, following the process of team-based testing (TBT), in teams in which the members reached consensus for each question and answered…

  3. Confidence Limits for the Indirect Effect: Distribution of the Product and Resampling Methods

    ERIC Educational Resources Information Center

    MacKinnon, David P.; Lockwood, Chondra M.; Williams, Jason

    2004-01-01

    The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal…
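
    One of the resampling alternatives discussed in this literature is the percentile bootstrap for the indirect effect a*b; a minimal sketch, assuming simple OLS mediation regressions and illustrative names, is given below.

    ```python
    import numpy as np

    def bootstrap_indirect_ci(x, m, y, n_boot=5000, alpha=0.05, rng=None):
        """Percentile bootstrap confidence limits for the indirect effect a*b,
        where a is the slope of M on X and b the slope of Y on M adjusting for X."""
        rng = np.random.default_rng() if rng is None else rng
        x, m, y = (np.asarray(v, float) for v in (x, m, y))
        n, est = len(x), []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)                 # resample cases with replacement
            xb, mb, yb = x[idx], m[idx], y[idx]
            a = np.polyfit(xb, mb, 1)[0]
            design = np.column_stack([np.ones(n), xb, mb])
            b = np.linalg.lstsq(design, yb, rcond=None)[0][2]
            est.append(a * b)
        lo, hi = np.quantile(est, [alpha / 2.0, 1.0 - alpha / 2.0])
        return lo, hi
    ```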

  4. Validity evidence based on test content.

    PubMed

    Sireci, Stephen; Faulkner-Bond, Molly

    2014-01-01

    Validity evidence based on test content is one of the five forms of validity evidence stipulated in the Standards for Educational and Psychological Testing developed by the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. In this paper, we describe the logic and theory underlying such evidence and describe traditional and modern methods for gathering and analyzing content validity data. A comprehensive review of the literature and of the aforementioned Standards is presented. For educational tests and other assessments targeting knowledge and skill possessed by examinees, validity evidence based on test content is necessary for building a validity argument to support the use of a test for a particular purpose. By following the methods described in this article, practitioners have a wide arsenal of tools available for determining how well the content of an assessment is congruent with and appropriate for the specific testing purposes.

  5. Test Review: Test of English as a Foreign Language[TM]--Internet-Based Test (TOEFL iBT[R])

    ERIC Educational Resources Information Center

    Alderson, J. Charles

    2009-01-01

    In this article, the author reviews the TOEFL iBT which is the latest version of the TOEFL, whose history stretches back to 1961. The TOEFL iBT was introduced in the USA, Canada, France, Germany and Italy in late 2005. Currently the TOEFL test is offered in two testing formats: (1) Internet-based testing (iBT); and (2) paper-based testing (PBT).…

  6. Uncertainties of flood frequency estimation approaches based on continuous simulation using data resampling

    NASA Astrophysics Data System (ADS)

    Arnaud, Patrick; Cantet, Philippe; Odry, Jean

    2017-11-01

    Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with

  7. Students Perception on the Use of Computer Based Test

    NASA Astrophysics Data System (ADS)

    Nugroho, R. A.; Kusumawati, N. S.; Ambarwati, O. C.

    2018-02-01

    Teaching nowadays often uses technology to disseminate science and knowledge. As part of teaching, the way study progress and results are evaluated has also benefited from this rapid progress in IT. The computer-based test (CBT) has been introduced to replace the more conventional paper and pencil test (PPT). CBT is considered more advantageous than PPT: it is more efficient and transparent and can minimise fraud in cognitive evaluation. Current studies reflect an ongoing debate over CBT versus PPT usage. Most current research compares the two methods without exploring students' perception of the tests. This study fills that gap in the literature by providing students' perceptions of the two test methods. A survey approach was used to obtain the data. The sample was collected in two identical classes with a similar subject at a public university in Indonesia. The Mann-Whitney U test was used to analyse the data. The results indicate a significant difference between the two groups of students regarding CBT usage: students tended to prefer the test method other than the one they were taking. Further discussion and research implications are presented in the paper.

  8. An Automated, Adaptive Framework for Optimizing Preprocessing Pipelines in Task-Based Functional MRI

    PubMed Central

    Churchill, Nathan W.; Spring, Robyn; Afshin-Pour, Babak; Dong, Fan; Strother, Stephen C.

    2015-01-01

    BOLD fMRI is sensitive to blood-oxygenation changes correlated with brain function; however, it is limited by relatively weak signal and significant noise confounds. Many preprocessing algorithms have been developed to control noise and improve signal detection in fMRI. Although the chosen set of preprocessing and analysis steps (the “pipeline”) significantly affects signal detection, pipelines are rarely quantitatively validated in the neuroimaging literature, due to complex preprocessing interactions. This paper outlines and validates an adaptive resampling framework for evaluating and optimizing preprocessing choices by optimizing data-driven metrics of task prediction and spatial reproducibility. Compared to standard “fixed” preprocessing pipelines, this optimization approach significantly improves independent validation measures of within-subject test-retest, and between-subject activation overlap, and behavioural prediction accuracy. We demonstrate that preprocessing choices function as implicit model regularizers, and that improvements due to pipeline optimization generalize across a range of simple to complex experimental tasks and analysis models. Results are shown for brief scanning sessions (<3 minutes each), demonstrating that with pipeline optimization, it is possible to obtain reliable results and brain-behaviour correlations in relatively small datasets. PMID:26161667

  9. Isotropic source terms of San Jacinto fault zone earthquakes based on waveform inversions with a generalized CAP method

    NASA Astrophysics Data System (ADS)

    Ross, Z. E.; Ben-Zion, Y.; Zhu, L.

    2015-02-01

    We analyse source tensor properties of seven Mw > 4.2 earthquakes in the complex trifurcation area of the San Jacinto Fault Zone, CA, with a focus on isotropic radiation that may be produced by rock damage in the source volumes. The earthquake mechanisms are derived with generalized 'Cut and Paste' (gCAP) inversions of three-component waveforms typically recorded by >70 stations at regional distances. The gCAP method includes parameters ζ and χ representing, respectively, the relative strength of the isotropic and CLVD source terms. The possible errors in the isotropic and CLVD components due to station variability are quantified with bootstrap resampling for each event. The results indicate statistically significant explosive isotropic components for at least six of the events, corresponding to ~0.4-8 per cent of the total potency/moment of the sources. In contrast, the CLVD components for most events are not found to be statistically significant. Trade-off and correlation between the isotropic and CLVD components are studied using synthetic tests with realistic station configurations. The associated uncertainties are found to be generally smaller than the observed isotropic components. Two different tests with velocity model perturbation are conducted to quantify the uncertainty due to inaccuracies in the Green's functions. Applications of the Mann-Whitney U test indicate statistically significant explosive isotropic terms for most events, consistent with brittle damage production at the source.
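
    The station-level bootstrap mentioned here can be sketched generically: resample the set of stations with replacement and recompute the quantity of interest for each replicate. The sketch below simply averages per-station estimates of the isotropic fraction; the actual study re-runs the gCAP inversion for each resampled station set, and all names are illustrative.

    ```python
    import numpy as np

    def station_bootstrap_ci(per_station_estimate, n_boot=1000, rng=None):
        """95% bootstrap interval for a statistic computed over stations (sketch)."""
        rng = np.random.default_rng() if rng is None else rng
        vals = np.asarray(per_station_estimate, float)
        n = len(vals)
        boot = np.array([vals[rng.integers(0, n, n)].mean() for _ in range(n_boot)])
        return np.quantile(boot, [0.025, 0.975])
    ```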

  10. A novel iterative mixed model to remap three complex orthopedic traits in dogs

    PubMed Central

    Huang, Meng; Hayward, Jessica J.; Corey, Elizabeth; Garrison, Susan J.; Wagner, Gabriela R.; Krotscheck, Ursula; Hayashi, Kei; Schweitzer, Peter A.; Lust, George; Boyko, Adam R.; Todhunter, Rory J.

    2017-01-01

    Hip dysplasia (HD), elbow dysplasia (ED), and rupture of the cranial (anterior) cruciate ligament (RCCL) are the most common complex orthopedic traits of dogs and all result in debilitating osteoarthritis. We reanalyzed previously reported data: the Norberg angle (a quantitative measure of HD) in 921 dogs, ED in 113 cases and 633 controls, and RCCL in 271 cases and 399 controls and their genotypes at ~185,000 single nucleotide polymorphisms. A novel fixed and random model with a circulating probability unification (FarmCPU) function, with marker-based principal components and a kinship matrix to correct for population stratification, was used. A Bonferroni correction at p < 0.01 resulted in a significance threshold of P < 6.96 × 10⁻⁸. Six loci were identified; three for HD and three for RCCL. An associated locus at CFA28:34,369,342 for HD was described previously in the same dogs using a conventional mixed model. No loci were identified for RCCL in the previous report but the two loci for ED in the previous report did not reach genome-wide significance using the FarmCPU model. These results were supported by simulation which demonstrated that the FarmCPU held no power advantage over the linear mixed model for the ED sample but provided additional power for the HD and RCCL samples. Candidate genes for HD and RCCL are discussed. When using FarmCPU software, we recommend a resampling test, that a positive control be used to determine the optimum pseudo quantitative trait nucleotide-based covariate structure of the model, and a negative control be used consisting of permutation testing and the identical resampling test as for the non-permuted phenotypes. PMID:28614352

  11. The clinical significance of 10-m walk test standardizations in Parkinson's disease.

    PubMed

    Lindholm, Beata; Nilsson, Maria H; Hansson, Oskar; Hagell, Peter

    2018-06-06

    The 10-m walk test (10MWT) is a widely used measure of gait speed in Parkinson's disease (PD). However, it is unclear if different standardizations of its conduct impact test results. We examined the clinical significance of two aspects of the standardization of the 10MWT in mild PD: static vs. dynamic start, and single vs. repeated trials. Implications for fall prediction were also explored. 151 people with PD (mean age and PD duration, 68 and 4 years, respectively) completed the 10MWT at comfortable gait speed with static and dynamic start (two trials each), and gait speed (m/s) was recorded. Participants then registered all prospective falls for 6 months. Absolute mean differences between outcomes from the various test conditions ranged between 0.016 and 0.040 m/s (effect sizes, 0.06-0.14) with high levels of agreement (intra-class correlation coefficients, 0.932-0.987) and small standard errors of measurement (0.032-0.076 m/s). Receiver operating characteristic curves showed similar discriminative ability for prediction of future falls across conditions (areas under curves, 0.70-0.73). Cut-off points were estimated at 1.1-1.2 m/s. Different 10MWT standardizations yield very similar results, suggesting that there is no practical need for an acceleration distance or repeated trials when conducting this test in mild PD.

  12. A multistate dynamic site occupancy model for spatially aggregated sessile communities

    USGS Publications Warehouse

    Fukaya, Keiichi; Royle, J. Andrew; Okuda, Takehiro; Nakaoka, Masahiro; Noda, Takashi

    2017-01-01

    Estimation of transition probabilities of sessile communities seems easy in principle but may still be difficult in practice because resampling error (i.e. a failure to resample exactly the same location at fixed points) may cause significant estimation bias. Previous studies have developed novel analytical methods to correct for this estimation bias. However, they did not consider the local structure of community composition induced by the aggregated distribution of organisms that is typically observed in sessile assemblages and is very likely to affect observations. We developed a multistate dynamic site occupancy model to estimate transition probabilities that accounts for resampling errors associated with local community structure. The model applies a nonparametric multivariate kernel smoothing methodology to the latent occupancy component to estimate the local state composition near each observation point, which is assumed to determine the probability distribution of data conditional on the occurrence of resampling error. By using computer simulations, we confirmed that an observation process that depends on local community structure may bias inferences about transition probabilities. By applying the proposed model to a real data set of intertidal sessile communities, we also showed that estimates of transition probabilities and of the properties of community dynamics may differ considerably when spatial dependence is taken into account. Results suggest the importance of accounting for resampling error and local community structure for developing management plans that are based on Markovian models. Our approach provides a solution to this problem that is applicable to broad sessile communities. It can even accommodate an anisotropic spatial correlation of species composition, and may also serve as a basis for inferring complex nonlinear ecological dynamics.

  13. Development and Validation of a Portable Hearing Self-Testing System Based on a Notebook Personal Computer.

    PubMed

    Liu, Yan; Yang, Dong; Xiong, Fen; Yu, Lan; Ji, Fei; Wang, Qiu-Ju

    2015-09-01

    Hearing loss affects more than 27 million people in mainland China. It would be helpful to develop a portable and self-testing audiometer for the timely detection of hearing loss so that the optimal clinical therapeutic schedule can be determined. The objective of this study was to develop a software-based hearing self-testing system. The software-based self-testing system consisted of a notebook computer, an external sound card, and a pair of 10-Ω insert earphones. The system could be used to test the hearing thresholds by individuals themselves in an interactive manner using software. The reliability and validity of the system at octave frequencies of 0.25 kHz to 8.0 kHz were analyzed in three series of experiments. Thirty-seven normal-hearing participants (74 ears) were enrolled in experiment 1. Forty individuals (80 ears) with sensorineural hearing loss (SNHL) participated in experiment 2. Thirteen normal-hearing participants (26 ears) and 37 participants (74 ears) with SNHL were enrolled in experiment 3. Each participant was enrolled in only one of the three experiments. In all experiments, pure-tone audiometry in a sound insulation room (standard test) was regarded as the gold standard. SPSS for Windows, version 17.0, was used for statistical analysis. The paired t-test was used to compare the hearing thresholds between the standard test and software-based self-testing (self-test) in experiments 1 and 2. In experiment 3 (main study), one-way analysis of variance and post hoc comparisons were used to compare the hearing thresholds among the standard test and two rounds of the self-test. Linear correlation analysis was carried out for the self-tests performed twice. The concordance was analyzed between the standard test and the self-test using the kappa method. p < 0.05 was considered statistically significant. Experiments 1 and 2: The hearing thresholds determined by the two methods were not significantly different at frequencies of 250, 500, or 8000 Hz (p > 0

  14. Significant lexical relationships

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pedersen, T.; Kayaalp, M.; Bruce, R.

    Statistical NLP inevitably deals with a large number of rare events. As a consequence, NLP data often violates the assumptions implicit in traditional statistical procedures such as significance testing. We describe a significance test, an exact conditional test, that is appropriate for NLP data and can be performed using freely available software. We apply this test to the study of lexical relationships and demonstrate that the results obtained using this test are both theoretically more reliable and different from the results obtained using previously applied tests.
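
    One familiar exact conditional test for a 2x2 co-occurrence table is Fisher's exact test; the abstract does not name the specific statistic the authors used, so the snippet below is only a generic illustration on invented bigram counts.

        # Generic exact conditional test on a hypothetical 2x2 bigram table:
        # does word A co-occur with word B more often than other words do?
        from scipy.stats import fisher_exact

        table = [[12,  88],    # word A followed by B / not followed by B
                 [30, 870]]    # other words followed by B / not followed by B
        odds_ratio, p = fisher_exact(table, alternative="two-sided")
        print(f"odds ratio = {odds_ratio:.2f}, exact p = {p:.4f}")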

  15. The Improved Locating Algorithm of Particle Filter Based on ROS Robot

    NASA Astrophysics Data System (ADS)

    Fang, Xun; Fu, Xiaoyang; Sun, Ming

    2018-03-01

    This paper analyzes the basic theory and primary algorithms of a real-time locating system and SLAM technology based on a ROS robot. It proposes an improved particle filter locating algorithm that effectively reduces the time needed to match laser radar data against the map; additional ultra-wideband technology directly improves the overall efficiency of the FastSLAM algorithm, which no longer needs to search the global map. Meanwhile, re-sampling is reduced by roughly 5/6, which directly eliminates the corresponding matching step in the robotics algorithm.
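
    The re-sampling the abstract refers to is the particle filter's resampling step. The snippet below shows a generic systematic resampling step on invented particles and weights; it is not the paper's improved algorithm, only the standard operation whose frequency the authors reduce.

        # Generic systematic resampling step of a particle filter: particles
        # with negligible weight are discarded, high-weight ones duplicated.
        import numpy as np

        def systematic_resample(particles, weights, rng):
            n = len(weights)
            positions = (rng.random() + np.arange(n)) / n
            idx = np.searchsorted(np.cumsum(weights), positions)
            idx = np.minimum(idx, n - 1)          # guard against float round-off
            return particles[idx], np.full(n, 1.0 / n)

        rng = np.random.default_rng(2)
        particles = rng.normal(size=(1000, 3))    # hypothetical (x, y, theta) poses
        weights = rng.random(1000); weights /= weights.sum()
        particles, weights = systematic_resample(particles, weights, rng)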

  16. Prognostic significance of electrophysiological tests for facial nerve outcome in vestibular schwannoma surgery.

    PubMed

    van Dinther, J J S; Van Rompaey, V; Somers, T; Zarowski, A; Offeciers, F E

    2011-01-01

    To assess the prognostic significance of pre-operative electrophysiological tests for facial nerve outcome in vestibular schwannoma surgery. Retrospective study design in a tertiary referral neurology unit. We studied a total of 123 patients with unilateral vestibular schwannoma who underwent microsurgical removal of the lesion. Nine patients were excluded because they had clinically abnormal pre-operative facial function. Pre-operative electrophysiological facial nerve function testing (EPhT) was performed. Short-term (1 month) and long-term (1 year) post-operative clinical facial nerve function were assessed. When pre-operative facial nerve function, evaluated by EPhT, was normal, the outcome from clinical follow-up at 1-month post-operatively was excellent in 78% (i.e. HB I-II) of patients, moderate in 11% (i.e. HB III-IV), and bad in 11% (i.e. HB V-VI). After 1 year, 86% had excellent outcomes, 13% had moderate outcomes, and 1% had bad outcomes. Of all patients with normal clinical facial nerve function, 22% had an abnormal EPhT result and 78% had a normal result. No statistically significant differences could be observed in short-term and long-term post-operative facial function between the groups. In this study, electrophysiological tests were not able to predict facial nerve outcome after vestibular schwannoma surgery. Tumour size remains the best pre-operative prognostic indicator of facial nerve function outcome, i.e. a better outcome in smaller lesions.

  17. Determinants of Teacher Implementation of Youth Fitness Tests in School-Based Physical Education Programs

    ERIC Educational Resources Information Center

    Keating, Xiaofen Deng; Silverman, Stephen

    2009-01-01

    Background: Millions of American children are participating in fitness testing in school-based physical education (PE) programs. However, practitioners and researchers in the field of PE have questioned the need for regular or mandatory youth fitness testing. This was partly because a significant improvement in youth fitness and physical activity…

  18. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
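
    The bootstrap test can be sketched as follows, assuming that individual histograms are exchangeable under the null hypothesis and using the Euclidean distance between normalised summary histograms as the test statistic; the histograms here are simulated rather than cloud-object data, and the paper's exact resampling scheme may differ in detail.

        # Sketch of the bootstrap idea: resample the pooled individual
        # histograms, rebuild two summary histograms, and compare their
        # Euclidean distance with the observed distance. Data are simulated.
        import numpy as np

        rng = np.random.default_rng(3)
        bins = 20
        group_a = rng.poisson(5.0, size=(300, bins))   # 300 individual histograms
        group_b = rng.poisson(5.3, size=(200, bins))   # 200 individual histograms

        def summary(h):                                # summed, then normalised histogram
            s = h.sum(axis=0).astype(float)
            return s / s.sum()

        def euclid(p, q):
            return np.sqrt(((p - q) ** 2).sum())

        observed = euclid(summary(group_a), summary(group_b))

        pooled = np.vstack([group_a, group_b])
        n_a, n_boot, count = len(group_a), 2000, 0
        for _ in range(n_boot):
            resampled = pooled[rng.integers(0, len(pooled), size=len(pooled))]
            d = euclid(summary(resampled[:n_a]), summary(resampled[n_a:]))
            count += d >= observed
        print(f"distance = {observed:.4f}, bootstrap p ~ {(count + 1) / (n_boot + 1):.3f}")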

  19. Design and Testing of a Flexible Inclinometer Probe for Model Tests of Landslide Deep Displacement Measurement.

    PubMed

    Zhang, Yongquan; Tang, Huiming; Li, Changdong; Lu, Guiying; Cai, Yi; Zhang, Junrong; Tan, Fulin

    2018-01-14

    The physical model test of landslides is important for studying landslide structural damage, and parameter measurement is key in this process. To meet the measurement requirements for deep displacement in landslide physical models, an automatic flexible inclinometer probe with good coupling and large deformation capacity was designed. The flexible inclinometer probe consists of several gravity-acceleration sensing units that are protected and positioned by silicon encapsulation; all of the units are connected to a 485 communication bus. By sensing the two-axis tilt angle, the direction and magnitude of the displacement for a measurement unit can be calculated, and the overall displacement is then accumulated over all units, integrated from bottom to top in turn. In the conversion from angle to displacement, two spline interpolation methods are introduced to correct and resample the data: one interpolates the displacement after conversion, and the other interpolates the angle before conversion. Compared with the result read from checkered paper, the latter proves to have the better effect, with the additional condition that the displacement curve be shifted up by half the length of a unit. The flexible inclinometer is verified with respect to its principle and arrangement by a laboratory physical model test, and the test results are highly consistent with the actual deformation of the landslide model.
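
    The "interpolate the angle before conversion" variant described above can be sketched with a cubic spline on simulated single-axis tilt readings: resample the unit angles along the probe, convert them to horizontal increments, and accumulate from bottom to top. The unit length and angle values below are invented for illustration.

        # Sketch of the angle-first approach on simulated single-axis tilt data.
        import numpy as np
        from scipy.interpolate import CubicSpline

        unit_len = 0.1                                    # m, length of one sensing unit
        depth = np.arange(10) * unit_len                  # unit centres along the probe
        tilt_deg = np.array([0.2, 0.4, 0.9, 2.0, 4.0, 6.5, 8.0, 8.6, 8.9, 9.0])  # example

        spline = CubicSpline(depth, np.radians(tilt_deg))
        fine_depth = np.linspace(depth[0], depth[-1], 91)  # resampled every ~1 cm
        seg_len = np.diff(fine_depth, prepend=fine_depth[0])
        displacement = np.cumsum(seg_len * np.sin(spline(fine_depth)))  # bottom -> top
        print(f"top displacement ~ {displacement[-1] * 1000:.1f} mm")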

  20. Design and Testing of a Flexible Inclinometer Probe for Model Tests of Landslide Deep Displacement Measurement

    PubMed Central

    Zhang, Yongquan; Tang, Huiming; Li, Changdong; Lu, Guiying; Cai, Yi; Zhang, Junrong; Tan, Fulin

    2018-01-01

    The physical model test of landslides is important for studying landslide structural damage, and parameter measurement is key in this process. To meet the measurement requirements for deep displacement in landslide physical models, an automatic flexible inclinometer probe with good coupling and large deformation capacity was designed. The flexible inclinometer probe consists of several gravity-acceleration sensing units that are protected and positioned by silicon encapsulation; all of the units are connected to a 485 communication bus. By sensing the two-axis tilt angle, the direction and magnitude of the displacement for a measurement unit can be calculated, and the overall displacement is then accumulated over all units, integrated from bottom to top in turn. In the conversion from angle to displacement, two spline interpolation methods are introduced to correct and resample the data: one interpolates the displacement after conversion, and the other interpolates the angle before conversion. Compared with the result read from checkered paper, the latter proves to have the better effect, with the additional condition that the displacement curve be shifted up by half the length of a unit. The flexible inclinometer is verified with respect to its principle and arrangement by a laboratory physical model test, and the test results are highly consistent with the actual deformation of the landslide model. PMID:29342902

  1. Ethernet-based test stand for a CAN network

    NASA Astrophysics Data System (ADS)

    Ziebinski, Adam; Cupek, Rafal; Drewniak, Marek

    2017-11-01

    This paper presents a test stand for the CAN-based systems that are used in automotive applications. The authors propose applying an Ethernet-based test system that supports the virtualisation of a CAN network. The proposed solution has several advantages compared with classical test beds based on dedicated CAN-PC interfaces: it avoids the physical constraints on the number of interfaces that can be simultaneously connected to the system under test, which shortens the time needed for parallel tests; the high speed of Ethernet transmission allows more frequent sampling of the messages transmitted on the CAN network (as the authors show in the experimental results section); and the cost of the proposed solution is much lower than that of traditional lab-based dedicated CAN interfaces for PCs.

  2. Visualization of the significance of Receiver Operating Characteristics based on confidence ellipses

    NASA Astrophysics Data System (ADS)

    Sarlis, Nicholas V.; Christopoulos, Stavros-Richard G.

    2014-03-01

    The Receiver Operating Characteristics (ROC) is used for the evaluation of prediction methods in various disciplines like meteorology, geophysics, complex system physics, medicine etc. The estimation of the significance of a binary prediction method, however, remains a cumbersome task and is usually done by repeating the calculations by Monte Carlo. The FORTRAN code provided here simplifies this problem by evaluating the significance of binary predictions for a family of ellipses which are based on confidence ellipses and cover the whole ROC space. Catalogue identifier: AERY_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERY_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 11511 No. of bytes in distributed program, including test data, etc.: 72906 Distribution format: tar.gz Programming language: FORTRAN. Computer: Any computer supporting a GNU FORTRAN compiler. Operating system: Linux, MacOS, Windows. RAM: 1Mbyte Classification: 4.13, 9, 14. Nature of problem: The Receiver Operating Characteristics (ROC) is used for the evaluation of prediction methods in various disciplines like meteorology, geophysics, complex system physics, medicine etc. The estimation of the significance of a binary prediction method, however, remains a cumbersome task and is usually done by repeating the calculations by Monte Carlo. The FORTRAN code provided here simplifies this problem by evaluating the significance of binary predictions for a family of ellipses which are based on confidence ellipses and cover the whole ROC space. Solution method: Using the statistics of random binary predictions for a given value of the predictor threshold ɛt, one can construct the corresponding confidence ellipses. The envelope of these corresponding confidence ellipses is estimated when
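
    The distributed FORTRAN code evaluates significance analytically through confidence ellipses; a brute-force alternative for a single ROC point is a Monte Carlo comparison against random binary predictions that make the same number of positive calls, summarised here by the Youden index TPR - FPR. The counts below are hypothetical, and this sketch is not the ellipse-based method of the program itself.

        # Monte Carlo significance of one (FPR, TPR) point against random
        # binary predictions with the same number of positive calls.
        import numpy as np

        rng = np.random.default_rng(4)
        P, N = 40, 160                       # actual positives / negatives (hypothetical)
        tp, fp = 28, 30                      # observed prediction counts (hypothetical)
        observed = tp / P - fp / N           # Youden index of the observed ROC point

        k = tp + fp                          # total number of positive calls
        labels = np.array([1] * P + [0] * N)
        stats = np.empty(20000)
        for i in range(stats.size):
            calls = rng.choice(labels.size, size=k, replace=False)
            tp_r = labels[calls].sum()
            stats[i] = tp_r / P - (k - tp_r) / N
        p_value = (np.sum(stats >= observed) + 1) / (stats.size + 1)
        print(f"observed TPR-FPR = {observed:.2f}, Monte Carlo p ~ {p_value:.4f}")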

  3. [Base excess. Parameter with exceptional clinical significance].

    PubMed

    Schaffartzik, W

    2007-05-01

    The base excess of blood (BE) plays an important role in the description of the acid-base status of a patient and is gaining in clinical interest. Apart from the Quick test, the age, the injury severity score and the Glasgow coma scale, the BE is becoming more and more important to identify, e.g., the risk of mortality for patients with multiple injuries. According to Zander, the BE is calculated using the pH, pCO2, haemoglobin concentration and the oxygen saturation of haemoglobin (sO2). The use of sO2 allows the blood gas analyser to determine only one value of BE, independent of the type of blood sample analyzed: arterial, mixed venous or venous. The BE and measurement of the lactate concentration (cLac) play an important role in diagnosing critically ill patients. In general, the change in BE corresponds to the change in cLac. If ΔBE is smaller than ΔcLac, the reason could be therapy with HCO3- but also infusion solutions containing lactate. Physicians are very familiar with the term BE; therefore, knowledge about an alkalizing or acidifying effect of an infusion solution would be very helpful in the treatment of patients, especially critically ill patients. Unfortunately, at present the description of infusion solutions with respect to BE has not yet been accepted by the manufacturers.

  4. Interactive Computer-Based Testing.

    ERIC Educational Resources Information Center

    Franklin, Stephen; Marasco, Joseph

    1977-01-01

    Discusses the use of Interactive Computer-based Testing (ICBT) in university-level science courses as an effective and economical educational tool. The authors discuss: (1) major objectives of ICBT; (2) advantages and pitfalls of student use of ICBT; and (3) future prospects of ICBT. (HM)

  5. Large-Scale Multiobjective Static Test Generation for Web-Based Testing with Integer Programming

    ERIC Educational Resources Information Center

    Nguyen, M. L.; Hui, Siu Cheung; Fong, A. C. M.

    2013-01-01

    Web-based testing has become a ubiquitous self-assessment method for online learning. One useful feature that is missing from today's web-based testing systems is the reliable capability to fulfill different assessment requirements of students based on a large-scale question data set. A promising approach for supporting large-scale web-based…

  6. Cost-effectiveness of population-based screening for colorectal cancer: a comparison of guaiac-based faecal occult blood testing, faecal immunochemical testing and flexible sigmoidoscopy

    PubMed Central

    Sharp, L; Tilson, L; Whyte, S; O'Ceilleachair, A; Walsh, C; Usher, C; Tappenden, P; Chilcott, J; Staines, A; Barry, M; Comber, H

    2012-01-01

    Background: Several colorectal cancer-screening tests are available, but it is uncertain which provides the best balance of risks and benefits within a screening programme. We evaluated cost-effectiveness of a population-based screening programme in Ireland based on (i) biennial guaiac-based faecal occult blood testing (gFOBT) at ages 55–74, with reflex faecal immunochemical testing (FIT); (ii) biennial FIT at ages 55–74; and (iii) once-only flexible sigmoidoscopy (FSIG) at age 60. Methods: A state-transition model was used to estimate costs and outcomes for each screening scenario vs no screening. A third party payer perspective was adopted. Probabilistic sensitivity analyses were undertaken. Results: All scenarios would be considered highly cost-effective compared with no screening. The lowest incremental cost-effectiveness ratio (ICER vs no screening €589 per quality-adjusted life-year (QALY) gained) was found for FSIG, followed by FIT (€1696) and gFOBT (€4428); gFOBT was dominated. Compared with FSIG, FIT was associated with greater gains in QALYs and reductions in lifetime cancer incidence and mortality, but was more costly, required considerably more colonoscopies and resulted in more complications. Results were robust to variations in parameter estimates. Conclusion: Population-based screening based on FIT is expected to result in greater health gains than a policy of gFOBT (with reflex FIT) or once-only FSIG, but would require significantly more colonoscopy resources and result in more individuals experiencing adverse effects. Weighing these advantages and disadvantages presents a considerable challenge to policy makers. PMID:22343624

  7. Two-year pilot study of newborn screening for congenital adrenal hyperplasia in New South Wales compared with nationwide case surveillance in Australia.

    PubMed

    Gleeson, Helena K; Wiley, Veronica; Wilcken, Bridget; Elliott, Elizabeth; Cowell, Christopher; Thonsett, Michael; Byrne, Geoffrey; Ambler, Geoffrey

    2008-10-01

    To assess the benefits and practicalities of setting up a newborn screening (NBS) program in Australia for congenital adrenal hyperplasia (CAH) through a 2-year pilot screening in ACT/NSW and comparison with case surveillance in other states. The pilot newborn screening occurred between 1/10/95 and 30/9/97 in NSW/ACT. Concurrently, case reporting for all new CAH cases occurred through the Australian Paediatric Surveillance Unit (APSU) across Australia. Details of clinical presentation, re-sampling and laboratory performance were assessed. 185,854 newborn infants were screened for CAH in NSW/ACT. Concurrently, 30 cases of CAH were reported to APSU, twelve of which were from NSW/ACT. CAH incidence was 1 in 15,488 (screened population) vs 1 in 18,034 births (unscreened) (difference not significant). Median age of initial notification was day 8, with confirmed diagnosis at 13 (5-23) days in the screened population vs 16 (7-37) days in the unscreened population (not significant). Of the 5 clinically unsuspected males in the screened population, one had mild salt-wasting by the time of notification, compared with salt-wasting crisis in all 6 males from the unscreened population. 96% of results were reported by day 10. Resampling was requested in 637 (0.4%) and the median re-sampling delay was 11 (0-28) days, with higher resample rates in males (p < 0.0001). The within-laboratory cost per clinically unsuspected case was A$42,717. There seems good justification for NBS for CAH based on clear prevention of salt-wasting crises and their potential long-term consequences. Also, prospects exist for enhancing screening performance.

  8. Test Scheduling for Core-Based SOCs Using Genetic Algorithm Based Heuristic Approach

    NASA Astrophysics Data System (ADS)

    Giri, Chandan; Sarkar, Soumojit; Chattopadhyay, Santanu

    This paper presents a genetic algorithm (GA) based solution to co-optimize test scheduling and wrapper design for core-based SOCs. Core testing solutions are generated as a set of wrapper configurations, represented as rectangles with width equal to the number of TAM (Test Access Mechanism) channels and height equal to the corresponding testing time. A locally optimal, best-fit heuristic bin-packing algorithm has been used to determine the placement of rectangles, minimizing the overall test time, whereas the GA has been utilized to generate the sequence of rectangles to be considered for placement. Experimental results on the ITC'02 benchmark SOCs show that the proposed method provides better solutions than recent works reported in the literature.
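
    A much-simplified version of the placement step can be sketched as follows: given an ordering of (TAM width, test time) rectangles (in the paper this ordering comes from the GA chromosome, and alternative wrapper configurations per core are also considered), each rectangle is placed on the contiguous group of channels that allows the earliest start. The rectangle list and channel count below are hypothetical.

        # Simplified best-fit placement for an ordered list of rectangles on W
        # TAM channels; the GA supplies the ordering in the actual method.
        def schedule(rects, W):
            free_at = [0.0] * W                       # time each TAM channel becomes free
            for width, time in rects:                 # order given by the GA chromosome
                best_start, best_chs = None, None
                for c in range(W - width + 1):        # contiguous channel windows
                    start = max(free_at[c:c + width])
                    if best_start is None or start < best_start:
                        best_start, best_chs = start, range(c, c + width)
                for c in best_chs:
                    free_at[c] = best_start + time
            return max(free_at)                       # overall test time (makespan)

        rects = [(3, 120.0), (2, 200.0), (4, 90.0), (1, 300.0), (2, 150.0)]  # hypothetical
        print(f"makespan = {schedule(rects, W=8):.0f} cycles")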

  9. UFC advisor: An AI-based system for the automatic test environment

    NASA Technical Reports Server (NTRS)

    Lincoln, David T.; Fink, Pamela K.

    1990-01-01

    The Air Logistics Command within the Air Force is responsible for maintaining a wide variety of aircraft fleets and weapon systems. To maintain these fleets and systems requires specialized test equipment that provides data concerning the behavior of a particular device. The test equipment is used to 'poke and prod' the device to determine its functionality. The data represent voltages, pressures, torques, temperatures, etc. and are called testpoints. These testpoints can be defined numerically as being in or out of limits/tolerance. Some test equipment is termed 'automatic' because it is computer-controlled. Due to the fact that effective maintenance in the test arena requires a significant amount of expertise, it is an ideal area for the application of knowledge-based system technology. Such a system would take testpoint data, identify values out-of-limits, and determine potential underlying problems based on what is out-of-limits and how far. This paper discusses the application of this technology to a device called the Unified Fuel Control (UFC) which is maintained in this manner.

  10. The Interrupted Time Series as Quasi-Experiment: Three Tests of Significance. A Fortran Program for the CDC 3400 Computer.

    ERIC Educational Resources Information Center

    Sween, Joyce; Campbell, Donald T.

    Computational formulae for the following three tests of significance, useful in the interrupted time series design, are given: (1) a "t" test (Mood, 1950) for the significance of the first post-change observation from a value predicted by a linear fit of the pre-change observations; (2) an "F" test (Walker and Lev, 1953) of the…

  11. Sample features associated with success rates in population-based EGFR mutation testing.

    PubMed

    Shiau, Carolyn J; Babwah, Jesse P; da Cunha Santos, Gilda; Sykes, Jenna R; Boerner, Scott L; Geddie, William R; Leighl, Natasha B; Wei, Cuihong; Kamel-Reid, Suzanne; Hwang, David M; Tsao, Ming-Sound

    2014-07-01

    Epidermal growth factor receptor (EGFR) mutation testing has become critical in the treatment of patients with advanced non-small-cell lung cancer. This study involves a large cohort and epidemiologically unselected series of EGFR mutation testing for patients with nonsquamous non-small-cell lung cancer in a North American population to determine sample-related factors that influence success in clinical EGFR testing. Data from consecutive cases of Canadian province-wide testing at a centralized diagnostic laboratory for a 24-month period were reviewed. Samples were tested for exon-19 deletion and exon-21 L858R mutations using a validated polymerase chain reaction method with 1% to 5% detection sensitivity. From 2651 samples submitted, 2404 samples were tested with 2293 samples eligible for analysis (1780 histology and 513 cytology specimens). The overall test-failure rate was 5.4% with overall mutation rate of 20.6%. No significant differences in the failure rate, mutation rate, or mutation type were found between histology and cytology samples. Although tumor cellularity was significantly associated with test-success or mutation rates in histology and cytology specimens, respectively, mutations could be detected in all specimen types. Significant rates of EGFR mutation were detected in cases with thyroid transcription factor (TTF)-1-negative immunohistochemistry (6.7%) and mucinous component (9.0%). EGFR mutation testing should be attempted in any specimen, whether histologic or cytologic. Samples should not be excluded from testing based on TTF-1 status or histologic features. Pathologists should report the amount of available tumor for testing. However, suboptimal samples with a negative EGFR mutation result should be considered for repeat testing with an alternate sample.

  12. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    PubMed Central

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
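
    The key point, that interpolation uncertainty depends on where a resampled point falls relative to the base grid, can be seen in a one-dimensional Gaussian-process sketch with an RBF kernel; this is a conceptual illustration with synthetic values, not the registration model itself.

        # Conceptual 1-D sketch: GP posterior variance at resampled points grows
        # with distance from the base grid, which is the interpolation
        # uncertainty the method propagates into registration.
        import numpy as np

        def rbf(a, b, length=1.0, sigma=1.0):
            return sigma**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

        grid = np.arange(0.0, 10.0, 1.0)                 # base image grid (1-D)
        intensity = np.sin(grid)                         # observed intensities
        query = np.array([2.0, 2.5, 2.9])                # resampled points

        noise = 1e-6
        K = rbf(grid, grid) + noise * np.eye(grid.size)
        K_inv = np.linalg.solve(K, np.eye(grid.size))
        Ks = rbf(query, grid)
        mean = Ks @ K_inv @ intensity
        cov = rbf(query, query) - Ks @ K_inv @ Ks.T      # posterior covariance
        print(np.round(np.diag(cov), 4))                 # variance peaks mid-voxel (2.5)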

  13. Security for Web-Based Tests.

    ERIC Educational Resources Information Center

    Shermis, Mark D.; Averitt, Jason

    The purpose of this paper is to enumerate a series of security steps that might be taken by those researchers or organizations that are contemplating Web-based tests and performance assessments. From a security viewpoint, much of what goes on with Web-based transactions is similar to other general computer activity, but the recommendations here…

  14. Evaluation of new automated syphilis test reagents 'IMMUNOTICLES AUTO3' series: performance, biochemical reactivity, and clinical significance.

    PubMed

    Yukimasa, Nobuyasu; Miura, Keisuke; Miyagawa, Yukiko; Fukuchi, Kunihiko

    2015-01-01

    Automated nontreponemal and treponemal test reagents based on the latex agglutination method (immunoticles auto3 RPR: ITA3RPR and immunoticles auto3 TP: ITA3TP) have been developed to improve the issues of conventional manual methods such as their subjectivity, a massive amount of assays, and so on. We evaluated these reagents in regards to their performance, reactivity to antibody isotype, and their clinical significance. ITA3RPR and ITA3TP were measured using a clinical chemistry analyzer. Reactivity to antibody isotype was examined by gel filtration analysis. ITA3RPR and ITA3TP showed reactivity to both IgM- and IgG-class antibodies and detected early infections. ITA3RPR was verified to show a higher reactivity to IgM-class antibodies than the conventional methods. ITA3RPR correlated with VDRL in the high titer range, and measurement values decreased with treatment. ITA3RPR showed a negative result earlier after treatment than conventional methods. ITA3TP showed high specificity and did not give any false-negative reaction. Significant differences in the measurement values of ITA3RPR between the infective and previous group were verified. The double test of ITA3RPR and ITA3TP enables efficient and objective judgment for syphilis diagnosis and treatments, achieving clinical availability. Copyright © 2014 Japanese Society of Chemotherapy and The Japanese Association for Infectious Diseases. Published by Elsevier Ltd. All rights reserved.

  15. Computer-aided assessment of breast density: comparison of supervised deep learning and feature-based statistical learning.

    PubMed

    Li, Songfeng; Wei, Jun; Chan, Heang-Ping; Helvie, Mark A; Roubidoux, Marilyn A; Lu, Yao; Zhou, Chuan; Hadjiiski, Lubomir M; Samala, Ravi K

    2018-01-09

    Breast density is one of the most significant factors that is associated with cancer risk. In this study, our purpose was to develop a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammograms (DMs). The input 'for processing' DMs was first log-transformed, enhanced by a multi-resolution preprocessing scheme, and subsampled to a pixel size of 800 µm × 800 µm from 100 µm × 100 µm. A deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD) by using a domain adaptation resampling method. The PD was estimated as the ratio of the dense area to the breast area based on the PMD. The DCNN approach was compared to a feature-based statistical learning approach. Gray level, texture and morphological features were extracted and a least absolute shrinkage and selection operator was used to combine the features into a feature-based PMD. With approval of the Institutional Review Board, we retrospectively collected a training set of 478 DMs and an independent test set of 183 DMs from patient files in our institution. Two experienced Mammography Quality Standards Act radiologists interactively segmented PD as the reference standard. Ten-fold cross-validation was used for model selection and evaluation with the training set. With cross-validation, DCNN obtained a Dice's coefficient (DC) of 0.79 ± 0.13 and Pearson's correlation (r) of 0.97, whereas feature-based learning obtained DC = 0.72 ± 0.18 and r = 0.85. For the independent test set, DCNN achieved DC = 0.76 ± 0.09 and r = 0.94, while feature-based learning achieved DC = 0.62 ± 0.21 and r = 0.75. Our DCNN approach was significantly better and more robust than the feature-based learning approach for automated PD estimation on DMs, demonstrating its potential use for automated density reporting as well as
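
    The two evaluation metrics reported above can be computed as follows on synthetic masks and PD values (not the study data): the Dice coefficient between binary dense-tissue masks and Pearson's correlation between reference and estimated PD.

        # Dice coefficient and Pearson correlation on invented data, mirroring
        # the evaluation metrics quoted in the abstract.
        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(5)
        reference = rng.random((256, 256)) > 0.7                  # reference dense-tissue mask
        predicted = reference ^ (rng.random((256, 256)) > 0.97)   # noisy predicted mask

        dice = 2 * (reference & predicted).sum() / (reference.sum() + predicted.sum())

        pd_ref = rng.uniform(5, 60, size=183)            # reference PD per mammogram
        pd_est = pd_ref + rng.normal(0, 5, size=183)     # estimated PD
        r, _ = pearsonr(pd_ref, pd_est)
        print(f"Dice = {dice:.2f}, Pearson r = {r:.2f}")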

  16. Computer-aided assessment of breast density: comparison of supervised deep learning and feature-based statistical learning

    NASA Astrophysics Data System (ADS)

    Li, Songfeng; Wei, Jun; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Lu, Yao; Zhou, Chuan; Hadjiiski, Lubomir M.; Samala, Ravi K.

    2018-01-01

    Breast density is one of the most significant factors that is associated with cancer risk. In this study, our purpose was to develop a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammograms (DMs). The input ‘for processing’ DMs was first log-transformed, enhanced by a multi-resolution preprocessing scheme, and subsampled to a pixel size of 800 µm × 800 µm from 100 µm × 100 µm. A deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD) by using a domain adaptation resampling method. The PD was estimated as the ratio of the dense area to the breast area based on the PMD. The DCNN approach was compared to a feature-based statistical learning approach. Gray level, texture and morphological features were extracted and a least absolute shrinkage and selection operator was used to combine the features into a feature-based PMD. With approval of the Institutional Review Board, we retrospectively collected a training set of 478 DMs and an independent test set of 183 DMs from patient files in our institution. Two experienced Mammography Quality Standards Act radiologists interactively segmented PD as the reference standard. Ten-fold cross-validation was used for model selection and evaluation with the training set. With cross-validation, DCNN obtained a Dice’s coefficient (DC) of 0.79 ± 0.13 and Pearson’s correlation (r) of 0.97, whereas feature-based learning obtained DC = 0.72 ± 0.18 and r = 0.85. For the independent test set, DCNN achieved DC = 0.76 ± 0.09 and r = 0.94, while feature-based learning achieved DC = 0.62 ± 0.21 and r = 0.75. Our DCNN approach was significantly better and more robust than the feature-based learning approach for automated PD estimation on DMs, demonstrating its potential use for automated density reporting as

  17. Technology-Based Classroom Assessments: Alternatives to Testing

    ERIC Educational Resources Information Center

    Salend, Spencer J.

    2009-01-01

    Although many teachers are using new technologies to differentiate instruction and administer tests, educators are also employing a range of technology-based resources and strategies to implement a variety of classroom assessments as alternatives to standardized and teacher-made testing. Technology-based classroom assessments focus on the use of…

  18. Accounting for Proof Test Data in a Reliability Based Design Optimization Framework

    NASA Technical Reports Server (NTRS)

    Ventor, Gerharad; Scotti, Stephen J.

    2012-01-01

    This paper investigates the use of proof (or acceptance) test data during the reliability based design optimization of structural components. It is assumed that every component will be proof tested and that the component will only enter into service if it passes the proof test. The goal is to reduce the component weight, while maintaining high reliability, by exploiting the proof test results during the design process. The proposed procedure results in the simultaneous design of the structural component and the proof test itself and provides the designer with direct control over the probability of failing the proof test. The procedure is illustrated using two analytical example problems and the results indicate that significant weight savings are possible when exploiting the proof test results during the design process.

  19. A Survey of UML Based Regression Testing

    NASA Astrophysics Data System (ADS)

    Fahad, Muhammad; Nadeem, Aamer

    Regression testing is the process of ensuring software quality by analyzing whether changed parts behave as intended and unchanged parts are not affected by the modifications. Since it is a costly process, many techniques have been proposed in the research literature that show testers how to build a regression test suite from an existing test suite at minimum cost. In this paper, we discuss the advantages and drawbacks of using UML diagrams for regression testing and show that UML models help in identifying changes for regression test selection effectively. We survey the existing UML-based regression testing techniques and provide an analysis matrix to give a quick insight into the prominent features of the literature. We also discuss open research issues that remain to be addressed for UML-based regression testing, such as managing and reducing the size of the regression test suite and prioritizing test cases under tight schedules and limited resources.

  20. The predictive value of the sacral base pressure test in detecting specific types of sacroiliac dysfunction

    PubMed Central

    Mitchell, Travis D.; Urli, Kristina E.; Breitenbach, Jacques; Yelverton, Chris

    2007-01-01

    Abstract Objective This study aimed to evaluate the validity of the sacral base pressure test in diagnosing sacroiliac joint dysfunction. It also determined the predictive powers of the test in determining which type of sacroiliac joint dysfunction was present. Methods This was a double-blind experimental study with 62 participants. The results from the sacral base pressure test were compared against a cluster of previously validated tests of sacroiliac joint dysfunction to determine its validity and predictive powers. The external rotation of the feet, occurring during the sacral base pressure test, was measured using a digital inclinometer. Results There was no statistically significant difference in the results of the sacral base pressure test between the types of sacroiliac joint dysfunction. In terms of the results of validity, the sacral base pressure test was useful in identifying positive values of sacroiliac joint dysfunction. It was fairly helpful in correctly diagnosing patients with negative test results; however, it had only a “slight” agreement with the diagnosis for κ interpretation. Conclusions In this study, the sacral base pressure test was not a valid test for determining the presence of sacroiliac joint dysfunction or the type of dysfunction present. Further research comparing the agreement of the sacral base pressure test or other sacroiliac joint dysfunction tests with a criterion standard of diagnosis is necessary. PMID:19674694

  1. Optical testing of progressive ophthalmic glasses based on galvo mirrors

    NASA Astrophysics Data System (ADS)

    Stuerwald, S.; Schmitt, R.

    2014-03-01

    In the production of ophthalmic freeform optics such as progressive eyeglasses, the specimens are tested according to a standardized method that is based on measuring the vertex power at usually fewer than 10 points. For better quality management, and thus to ensure more reliable and valid tests, a more comprehensive measurement approach is required. For Shack-Hartmann sensors (SHS), the dynamic range is defined by the number of micro-lenses and the resolution of the imaging sensor. Here, we present an approach for measuring wavefronts with increased dynamic range and lateral resolution by the use of a scanning procedure. The proposed setup is based on galvo mirrors and is capable of measuring the vertex power with a lateral resolution below one millimeter, which is sufficient for a functional test of progressive eyeglasses. Expressed more abstractly, the concept is based on selecting, and thereby encoding, single sub-apertures of the wavefront under test. This allows the wavefront slope to be measured consecutively in a scanning procedure. The use of high-precision galvo systems allows a lateral resolution below one millimeter as well as significantly faster scanning. The measurement concept and the performance of this method will be demonstrated for different spherical and freeform specimens such as progressive eyeglasses. Furthermore, approaches for calibrating the measurement system will be characterized and the optical design of the detector will be discussed.

  2. TEXSYS. [a knowledge based system for the Space Station Freedom thermal control system test-bed

    NASA Technical Reports Server (NTRS)

    Bull, John

    1990-01-01

    The Systems Autonomy Demonstration Project has recently completed a major test and evaluation of TEXSYS, a knowledge-based system (KBS) which demonstrates real-time control and FDIR for the Space Station Freedom thermal control system test-bed. TEXSYS is the largest KBS ever developed by NASA and offers a unique opportunity for the study of technical issues associated with the use of advanced KBS concepts including: model-based reasoning and diagnosis, quantitative and qualitative reasoning, integrated use of model-based and rule-based representations, temporal reasoning, and scale-up performance issues. TEXSYS represents a major achievement in advanced automation that has the potential to significantly influence Space Station Freedom's design for the thermal control system. An overview of the Systems Autonomy Demonstration Project, the thermal control system test-bed, the TEXSYS architecture, preliminary test results, and thermal domain expert feedback are presented.

  3. Wind turbine blade testing system using base excitation

    DOEpatents

    Cotrell, Jason; Thresher, Robert; Lambert, Scott; Hughes, Scott; Johnson, Jay

    2014-03-25

    An apparatus (500) for fatigue testing elongate test articles (404) including wind turbine blades through forced or resonant excitation of the base (406) of the test articles (404). The apparatus (500) includes a testing platform or foundation (402). A blade support (410) is provided for retaining or supporting a base (406) of an elongate test article (404), and the blade support (410) is pivotally mounted on the testing platform (402) with at least two degrees of freedom of motion relative to the testing platform (402). An excitation input assembly (540) is interconnected with the blade support (410) and includes first and second actuators (444, 446, 541) that act to concurrently apply forces or loads to the blade support (410). The actuator forces are cyclically applied in first and second transverse directions. The test article (404) responds to shaking of its base (406) by oscillating in two, transverse directions (505, 507).

  4. Normal Threshold Size of Stimuli in Children Using a Game-Based Visual Field Test.

    PubMed

    Wang, Yanfang; Ali, Zaria; Subramani, Siddharth; Biswas, Susmito; Fenerty, Cecilia; Henson, David B; Aslam, Tariq

    2017-06-01

    The aim of this study was to demonstrate and explore the ability of novel game-based perimetry to establish normal visual field thresholds in children. One hundred and eighteen children (aged 8.0 ± 2.8 years old) with no history of visual field loss or significant medical history were recruited. Each child had one eye tested using a game-based visual field test, 'Caspar's Castle', at four retinal locations 12.7° from fixation (N = 118). Thresholds were established repeatedly using up/down staircase algorithms with stimuli of varying diameter (luminance 20 cd/m², duration 200 ms, background luminance 10 cd/m²). Relationships between threshold and age were determined along with measures of intra- and intersubject variability. The game-based visual field test was able to establish threshold estimates in the full range of children tested. Threshold size reduced with increasing age in children. Intrasubject variability and intersubject variability were inversely related to age in children. Normal visual field thresholds were established for specific locations in children using a novel game-based visual field test. These could be used as a foundation for developing a game-based perimetry screening test for children.
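
    The threshold procedure can be illustrated with a generic 1-up/1-down staircase on stimulus diameter run against a simulated observer; the actual test's staircase rules, step sizes and stopping criteria may differ, and all numbers below are invented.

        # Generic staircase on stimulus diameter against a simulated observer:
        # the diameter shrinks after a "seen" response and grows after a "not
        # seen" response, converging on the size threshold.
        import numpy as np

        rng = np.random.default_rng(6)
        true_threshold = 0.4                 # deg, simulated observer's size threshold
        diameter, step = 1.0, 0.1
        reversals, last_direction = [], None

        while len(reversals) < 8:
            p_seen = 1 / (1 + np.exp(-(diameter - true_threshold) / 0.05))
            seen = rng.random() < p_seen
            direction = -1 if seen else +1
            if last_direction is not None and direction != last_direction:
                reversals.append(diameter)
                step = max(step / 2, 0.02)   # halve the step at each reversal
            diameter = max(diameter + direction * step, 0.05)
            last_direction = direction

        print(f"threshold estimate ~ {np.mean(reversals[-4:]):.2f} deg")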

  5. The Influence of Study-Level Inference Models and Study Set Size on Coordinate-Based fMRI Meta-Analyses

    PubMed Central

    Bossier, Han; Seurinck, Ruth; Kühn, Simone; Banaschewski, Tobias; Barker, Gareth J.; Bokde, Arun L. W.; Martinot, Jean-Luc; Lemaitre, Herve; Paus, Tomáš; Millenet, Sabina; Moerkerke, Beatrijs

    2018-01-01

    Given the increasing amount of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima with possibly the associated effect sizes to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome from a coordinate-based meta-analysis. More particularly, we consider the influence of the chosen group level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE) that only uses peak locations, fixed effects, and random effects meta-analysis that take into account both peak location and height] and the amount of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme on a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combine these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors. However, it requires more studies compared to other procedures in terms of activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results. PMID:29403344

  6. Automatic bearing fault diagnosis of permanent magnet synchronous generators in wind turbines subjected to noise interference

    NASA Astrophysics Data System (ADS)

    Guo, Jun; Lu, Siliang; Zhai, Chao; He, Qingbo

    2018-02-01

    An automatic bearing fault diagnosis method is proposed for permanent magnet synchronous generators (PMSGs), which are widely installed in wind turbines subjected to low rotating speeds, speed fluctuations, and electrical device noise interferences. The mechanical rotating angle curve is first extracted from the phase current of a PMSG by sequentially applying a series of algorithms. The synchronous sampled vibration signal of the fault bearing is then resampled in the angular domain according to the obtained rotating phase information. Considering that the resampled vibration signal is still overwhelmed by heavy background noise, an adaptive stochastic resonance filter is applied to the resampled signal to enhance the fault indicator and facilitate bearing fault identification. Two types of fault bearings with different fault sizes in a PMSG test rig are subjected to experiments to test the effectiveness of the proposed method. The proposed method is fully automated and thus shows potential for convenient, highly efficient and in situ bearing fault diagnosis for wind turbines subjected to harsh environments.
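
    Angular-domain resampling itself is straightforward once a rotating-angle curve is available (in the paper it is extracted from the PMSG phase current; here it is simulated): the vibration signal is simply interpolated at uniform increments of shaft angle.

        # Resample a time-sampled vibration signal at uniform shaft angles,
        # using a simulated fluctuating speed profile as the angle curve.
        import numpy as np

        fs, duration = 10_000, 2.0
        t = np.arange(0, duration, 1 / fs)
        speed_hz = 5 + 0.5 * np.sin(2 * np.pi * 0.3 * t)       # fluctuating shaft speed
        angle = 2 * np.pi * np.cumsum(speed_hz) / fs           # rotating angle curve (rad)
        vibration = np.sin(7 * angle) + 0.3 * np.random.default_rng(7).normal(size=t.size)

        samples_per_rev = 256
        uniform_angle = np.arange(0, angle[-1], 2 * np.pi / samples_per_rev)
        resampled = np.interp(uniform_angle, angle, vibration)  # order-domain signal
        print(resampled.shape)   # samples_per_rev points per revolution, uniformly in angle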

  7. 12 CFR 652.65 - Risk-based capital stress test.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Risk-based capital stress test. 652.65 Section... CORPORATION FUNDING AND FISCAL AFFAIRS Risk-Based Capital Requirements § 652.65 Risk-based capital stress test. You will perform the risk-based capital stress test as described in summary form below and as...

  8. 12 CFR 652.65 - Risk-based capital stress test.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Risk-based capital stress test. 652.65 Section... CORPORATION FUNDING AND FISCAL AFFAIRS Risk-Based Capital Requirements § 652.65 Risk-based capital stress test. You will perform the risk-based capital stress test as described in summary form below and as...

  9. Anomalous change detection in imagery

    DOEpatents

    Theiler, James P [Los Alamos, NM; Perkins, Simon J [Santa Fe, NM

    2011-05-31

    A distribution-based anomaly detection platform is described that identifies a non-flat background that is specified in terms of the distribution of the data. A resampling approach is also disclosed employing scrambled resampling of the original data with one class specified by the data and the other by the explicit distribution, and solving using binary classification.

  10. Benford's law first significant digit and distribution distances for testing the reliability of financial reports in developing countries

    NASA Astrophysics Data System (ADS)

    Shi, Jing; Ausloos, Marcel; Zhu, Tingting

    2018-02-01

    We discuss a common suspicion about reported financial data in 10 industrial sectors of the six so-called "main developing countries" over the time interval [2000-2014]. These data are examined through Benford's law first-significant-digit test and through distribution distance tests. It is shown that several visually anomalous data points have to be removed a priori. Thereafter, the distributions follow the first-significant-digit law much better, indicating the usefulness of a Benford's law test from the research starting line. The same holds true for the distance tests. A few outliers are pointed out.
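
    A first-significant-digit check of this kind can be sketched on synthetic data as follows: observed digit frequencies are compared with the Benford proportions log10(1 + 1/d) using a chi-square statistic and a simple distance (mean absolute deviation); the lognormal sample only stands in for reported financial figures.

        # First-significant-digit check against Benford's law on synthetic data.
        import numpy as np
        from scipy.stats import chisquare

        rng = np.random.default_rng(8)
        values = rng.lognormal(mean=10, sigma=2, size=5000)   # stand-in for reported figures

        first_digit = np.array([int(f"{v:.6e}"[0]) for v in values])
        observed = np.bincount(first_digit, minlength=10)[1:10]
        expected_prop = np.log10(1 + 1 / np.arange(1, 10))
        expected = expected_prop * observed.sum()

        chi2_stat, p = chisquare(observed, f_exp=expected)
        mad = np.mean(np.abs(observed / observed.sum() - expected_prop))
        print(f"chi2 = {chi2_stat:.1f}, p = {p:.3f}, MAD = {mad:.4f}")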

  11. Meal Microstructure Characterization from Sensor-Based Food Intake Detection

    PubMed Central

    Doulah, Abul; Farooq, Muhammad; Yang, Xin; Parton, Jason; McCrory, Megan A.; Higgins, Janine A.; Sazonov, Edward

    2017-01-01

    To avoid the pitfalls of self-reported dietary intake, wearable sensors can be used. Many food ingestion sensors offer the ability to automatically detect food intake using time resolutions that range from 23 ms to 8 min. There is no defined standard time resolution to accurately measure ingestive behavior or a meal microstructure. This paper aims to estimate the time resolution needed to accurately represent the microstructure of meals such as duration of eating episode, the duration of actual ingestion, and number of eating events. Twelve participants wore the automatic ingestion monitor (AIM) and kept a standard diet diary to report their food intake in free-living conditions for 24 h. As a reference, participants were also asked to mark food intake with a push button sampled every 0.1 s. The duration of eating episodes, duration of ingestion, and number of eating events were computed from the food diary, AIM, and the push button resampled at different time resolutions (0.1–30 s). ANOVA and multiple comparison tests showed that the duration of eating episodes estimated from the diary differed significantly from that estimated by the AIM and the push button (p-value < 0.001). There were no significant differences in the number of eating events for push button resolutions of 0.1, 1, and 5 s, but there were significant differences at resolutions of 10–30 s (p-value < 0.05). The results suggest that the desired time resolution of sensor-based food intake detection should be ≤5 s to accurately detect meal microstructure. Furthermore, the AIM provides more accurate measurement of the eating episode duration than the diet diary. PMID:28770206

  12. A Note on Comparing the Power of Test Statistics at Low Significance Levels.

    PubMed

    Morris, Nathan; Elston, Robert

    2011-01-01

    It is an obvious fact that the power of a test statistic is dependent upon the significance (alpha) level at which the test is performed. It is perhaps a less obvious fact that the relative performance of two statistics in terms of power is also a function of the alpha level. Through numerous personal discussions, we have noted that even some competent statisticians have the mistaken intuition that relative power comparisons at traditional levels such as α = 0.05 will be roughly similar to relative power comparisons at very low levels, such as the level α = 5 × 10⁻⁸, which is commonly used in genome-wide association studies. In this brief note, we demonstrate that this notion is in fact quite wrong, especially with respect to comparing tests with differing degrees of freedom. In fact, at very low alpha levels the cost of additional degrees of freedom is often comparatively low. Thus we recommend that statisticians exercise caution when interpreting the results of power comparison studies which use alpha levels that will not be used in practice.
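
    The note's point can be illustrated numerically: for an arbitrary target power of 80%, the noncentrality (roughly, effect size times sample size) required by a 2-df chi-square test relative to a 1-df test shrinks as alpha moves from 0.05 to 5 × 10⁻⁸. The target power and the 1-df vs. 2-df comparison are illustrative choices, not values from the note.

        # Noncentrality required for 80% power at each alpha and df; the ratio
        # nc(2 df)/nc(1 df) quantifies the relative cost of the extra df.
        from scipy.stats import chi2, ncx2
        from scipy.optimize import brentq

        def required_nc(df, alpha, power=0.80):
            crit = chi2.ppf(1 - alpha, df)
            return brentq(lambda nc: ncx2.sf(crit, df, nc) - power, 1e-3, 500)

        for alpha in (0.05, 5e-8):
            nc1, nc2 = required_nc(1, alpha), required_nc(2, alpha)
            print(f"alpha={alpha:g}: nc(1 df)={nc1:.1f}, nc(2 df)={nc2:.1f}, "
                  f"relative cost of the extra df = {nc2 / nc1:.2f}")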

  13. How Many Subjects are Needed for a Visual Field Normative Database? A Comparison of Ground Truth and Bootstrapped Statistics.

    PubMed

    Phu, Jack; Bui, Bang V; Kalloniatis, Michael; Khuu, Sieu K

    2018-03-01

    The number of subjects needed to establish the normative limits for visual field (VF) testing is not known. Using bootstrap resampling, we determined whether the ground truth mean, distribution limits, and standard deviation (SD) could be approximated using different set-size (x) levels, in order to provide guidance for the number of healthy subjects required to obtain robust VF normative data. We analyzed the 500 Humphrey Field Analyzer (HFA) SITA-Standard results of 116 healthy subjects and 100 HFA full threshold results of 100 psychophysically experienced healthy subjects. These VFs were resampled (bootstrapped) to determine the mean sensitivity, distribution limits (5th and 95th percentiles), and SD for different 'x' and numbers of resamples. We also used the VF results of 122 glaucoma patients to determine the performance of ground truth and bootstrapped results in identifying and quantifying VF defects. An x of 150 (for SITA-Standard) and 60 (for full threshold) produced bootstrapped descriptive statistics that were no longer different from the original distribution limits and SD. Removing outliers produced similar results. Differences between original and bootstrapped limits in detecting glaucomatous defects were minimized at x = 250. Ground truth statistics of VF sensitivities could be approximated using set sizes that are significantly smaller than the original cohort. Outlier removal facilitates the use of Gaussian statistics and does not significantly affect the distribution limits. We provide guidance for choosing the cohort size for different levels of error when performing normative comparisons with glaucoma patients.
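
    The bootstrap comparison described above can be sketched on synthetic sensitivities standing in for the HFA data: for each candidate set size x, resample x subjects from the full cohort many times and compare the mean 5th/95th percentile limits and SD with the full-cohort ("ground truth") values.

        # Bootstrap set-size comparison on synthetic sensitivities (dB).
        import numpy as np

        rng = np.random.default_rng(9)
        cohort = rng.normal(30, 2.5, size=500)            # full normative cohort
        truth = (np.percentile(cohort, [5, 95]), cohort.std())

        for x in (30, 60, 150, 250):
            lo, hi, sd = [], [], []
            for _ in range(1000):                         # 1000 bootstrap resamples
                s = rng.choice(cohort, size=x, replace=True)
                p = np.percentile(s, [5, 95])
                lo.append(p[0]); hi.append(p[1]); sd.append(s.std())
            print(f"x={x:3d}: 5th={np.mean(lo):.1f} (truth {truth[0][0]:.1f}), "
                  f"95th={np.mean(hi):.1f} (truth {truth[0][1]:.1f}), "
                  f"SD={np.mean(sd):.2f} (truth {truth[1]:.2f})")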

  14. MATTS- A Step Towards Model Based Testing

    NASA Astrophysics Data System (ADS)

    Herpel, H.-J.; Willich, G.; Li, J.; Xie, J.; Johansen, B.; Kvinnesland, K.; Krueger, S.; Barrios, P.

    2016-08-01

    In this paper we describe a model-based approach to testing on-board software and compare it with the traditional validation strategy currently applied to satellite software. The major problems that software engineering will face over at least the next two decades are increasing application complexity, driven by the need for autonomy, and serious demands on application robustness. In other words, how do we actually get to declare success when trying to build applications one or two orders of magnitude more complex than today's applications? To solve the problems addressed above, the software engineering process has to be improved in at least two respects: 1) software design and 2) software testing. The software design process has to evolve towards model-based approaches with extensive use of code generators. Today, testing is an essential, but time- and resource-consuming activity in the software development process. Generating a short but effective test suite usually requires a lot of manual work and expert knowledge. In a model-based process, among other subtasks, test construction and test execution can be partially automated. The basic idea behind the presented study was to start from a formal model (e.g. state machines) and generate abstract test cases, which are then converted to concrete executable test cases (input and expected output pairs). The generated concrete test cases were applied to on-board software. Results were collected and evaluated with respect to applicability, cost-efficiency, effectiveness at fault finding, and scalability.

  15. Illegal performance enhancing drugs and doping in sport: a picture-based brief implicit association test for measuring athletes' attitudes.

    PubMed

    Brand, Ralf; Heck, Philipp; Ziegler, Matthias

    2014-01-30

    Doping attitude is a key variable in predicting athletes' intention to use forbidden performance enhancing drugs. Indirect reaction-time based attitude tests, such as the implicit association test, conceal the ultimate goal of measurement from the participant better than questionnaires. Indirect tests are especially useful when socially sensitive constructs such as attitudes towards doping need to be described. The present study serves the development and validation of a novel picture-based brief implicit association test (BIAT) for testing athletes' attitudes towards doping in sport. It shall provide the basis for a transnationally compatible research instrument able to harmonize anti-doping research efforts. Following a known-group differences validation strategy, the doping attitudes of 43 athletes from bodybuilding (representative for a highly doping prone sport) and handball (as a contrast group) were compared using the picture-based doping-BIAT. The Performance Enhancement Attitude Scale (PEAS) was employed as a corresponding direct measure in order to additionally validate the results. As expected, in the group of bodybuilders, indirectly measured doping attitudes as tested with the picture-based doping-BIAT were significantly less negative (η2 = .11). The doping-BIAT and PEAS scores correlated significantly at r = .50 for bodybuilders, and not significantly at r = .36 for handball players. There was a low error rate (7%) and a satisfactory internal consistency (rtt = .66) for the picture-based doping-BIAT. The picture-based doping-BIAT constitutes a psychometrically tested method, ready to be adopted by the international research community. The test can be administered via the internet. All test material is available "open source". The test might be implemented, for example, as a new effect-measure in the evaluation of prevention programs.

  16. Illegal performance enhancing drugs and doping in sport: a picture-based brief implicit association test for measuring athletes’ attitudes

    PubMed Central

    2014-01-01

    Background Doping attitude is a key variable in predicting athletes’ intention to use forbidden performance enhancing drugs. Indirect reaction-time based attitude tests, such as the implicit association test, conceal the ultimate goal of measurement from the participant better than questionnaires. Indirect tests are especially useful when socially sensitive constructs such as attitudes towards doping need to be described. The present study serves the development and validation of a novel picture-based brief implicit association test (BIAT) for testing athletes’ attitudes towards doping in sport. It shall provide the basis for a transnationally compatible research instrument able to harmonize anti-doping research efforts. Method Following a known-group differences validation strategy, the doping attitudes of 43 athletes from bodybuilding (representative for a highly doping prone sport) and handball (as a contrast group) were compared using the picture-based doping-BIAT. The Performance Enhancement Attitude Scale (PEAS) was employed as a corresponding direct measure in order to additionally validate the results. Results As expected, in the group of bodybuilders, indirectly measured doping attitudes as tested with the picture-based doping-BIAT were significantly less negative (η2 = .11). The doping-BIAT and PEAS scores correlated significantly at r = .50 for bodybuilders, and not significantly at r = .36 for handball players. There was a low error rate (7%) and a satisfactory internal consistency (rtt = .66) for the picture-based doping-BIAT. Conclusions The picture-based doping-BIAT constitutes a psychometrically tested method, ready to be adopted by the international research community. The test can be administered via the internet. All test material is available “open source”. The test might be implemented, for example, as a new effect-measure in the evaluation of prevention programs. PMID:24479865

  17. Pre-marital HIV testing in couples from faith-based organisations: experience in Port Harcourt, Nigeria.

    PubMed

    Akani, C I; Erhabor, O; Babatunde, S

    2005-01-01

    This descriptive cross-sectional study was conducted among prospective couples referred from Faith-Based Organisations in Port Harcourt, Nigeria for pre-marital HIV screening. The study sought to establish the sero-prevalence of human immunodeficiency virus (HIV) in this peculiar study group. A total of 84 healthy heterosexual couples who required pre-marital HIV screening were tested between January 2000 and December 2003 using a Double ELISA confirmatory test of Immunocomb and Genscreen HIV I&II Kits. Amongst the 168 individuals tested, 35 (20.8%) were found positive. Seroprevalence was significantly higher among females, 23 (27.4%), than among males, 12 (14.3%). Infection rate was highest in the 25-29 years group (29.7%, n=22) and lowest in those aged 35-39 years (6.1%, n=2), though this difference was not statistically significant (p-value=0.058). Infection rate was significantly higher among females (p-value=0.036); among prospective couples from Orthodox churches (p-value=0.021); couples with prolonged courtship (>6 months) (p-value=0.0001); couples with history of premarital sex (p-value=0.0001); and couples with history of cohabitation (p-value=0.0001). Our findings prompt a wake-up call for faith-based organizations (FBOs) to urgently initiate or be more receptive to measures that emphasize behavioural and social changes amongst members. Government and non-governmental organizations should organise capacity-building training for religious-based organizations to enable them to cope with the challenges of HIV/AIDS. The outcomes of this study further underscore the value of voluntary counselling and confidential HIV testing, and especially pre- and post-test counselling, as the basis of pre-marital HIV testing.

  18. Vibration based condition monitoring of a multistage epicyclic gearbox in lifting cranes

    NASA Astrophysics Data System (ADS)

    Assaad, Bassel; Eltabach, Mario; Antoni, Jérôme

    2014-01-01

    This paper proposes a model-based technique for detecting wear in a multistage planetary gearbox used by lifting cranes. The proposed method establishes a vibration signal model which combines cyclostationary and autoregressive models. First-order cyclostationarity is addressed by the analysis of the time synchronous average (TSA) of the angular-resampled vibration signal. Then an autoregressive (AR) model is applied to the TSA part in order to extract a residual signal containing pertinent fault signatures. The paper also explores a number of methods commonly used in vibration monitoring of planetary gearboxes, in order to make comparisons. In the experimental part of this study, these techniques are applied to accelerated lifetime test bench data for the lifting winch. After processing raw signals recorded with an accelerometer mounted on the outside of the gearbox, a number of condition indicators (CIs) are derived from the TSA signal, the residual autoregressive signal and other signals derived using standard signal processing methods. The goal is to check the evolution of the CIs during the accelerated lifetime test (ALT). Clarity and fluctuation level of the historical trends are finally considered as criteria for comparing the extracted CIs.
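    As a rough illustration of the signal-processing chain described above, the following sketch computes a time synchronous average over angle-resampled revolutions and an autoregressive residual by least squares; the data layout (one row per revolution), the AR order and the synthetic signal are assumptions of this example, not details taken from the paper.

    ```python
    # Sketch (assumed data layout): TSA of an angle-resampled vibration signal and an
    # AR residual as a crude fault-feature extractor, in the spirit described above.
    import numpy as np

    def tsa(revolutions: np.ndarray) -> np.ndarray:
        """Time synchronous average: mean over revolutions (rows = revolutions)."""
        return revolutions.mean(axis=0)

    def ar_residual(signal: np.ndarray, order: int = 8) -> np.ndarray:
        """Fit an AR(order) model by least squares and return the residual signal."""
        # Lagged design matrix: row t holds [s[t-1], ..., s[t-order]]
        X = np.column_stack([signal[order - k - 1:len(signal) - k - 1] for k in range(order)])
        y = signal[order:]
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return y - X @ coeffs

    # Illustrative use on synthetic data: 200 revolutions, 512 samples per revolution
    rng = np.random.default_rng(1)
    revs = np.sin(np.linspace(0, 2 * np.pi, 512)) + 0.3 * rng.standard_normal((200, 512))
    avg = tsa(revs)
    residual = ar_residual(avg, order=8)
    print("RMS of AR residual:", np.sqrt(np.mean(residual ** 2)))
    ```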

  19. A Note on Testing Mediated Effects in Structural Equation Models: Reconciling Past and Current Research on the Performance of the Test of Joint Significance

    ERIC Educational Resources Information Center

    Valente, Matthew J.; Gonzalez, Oscar; Miocevic, Milica; MacKinnon, David P.

    2016-01-01

    Methods to assess the significance of mediated effects in education and the social sciences are well studied and fall into two categories: single sample methods and computer-intensive methods. A popular single sample method to detect the significance of the mediated effect is the test of joint significance, and a popular computer-intensive method…

  20. Implementing and testing a fiber-optic polarization-based intrusion detection system

    NASA Astrophysics Data System (ADS)

    Hajj, Rasha El; MacDonald, Gregory; Verma, Pramode; Huck, Robert

    2015-09-01

    We describe a layer-1-based intrusion detection system for fiber-optic-based networks. Layer-1-based intrusion detection represents a significant elevation in security as it prohibits an adversary from obtaining information in the first place (no cryptanalysis is possible). We describe the experimental setup of the intrusion detection system, which is based on monitoring the behavior of certain attributes of light both in unperturbed and perturbed optical fiber links. The system was tested with optical fiber links of various lengths and types, under different environmental conditions, and under changes in fiber geometry similar to what is experienced during tapping activity. Comparison of the results for perturbed and unperturbed links has shown that the state of polarization is more sensitive to intrusion activity than the degree of polarization or power of the received light. The testing was conducted in a simulated telecommunication network environment that included both underground and aerial links. The links were monitored for intrusion activity. Attempts to tap the link were easily detected with no apparent degradation in the visual quality of the real-time surveillance video.

  1. Incentives and Test-Based Accountability in Education

    ERIC Educational Resources Information Center

    Hout, Michael, Ed.; Elliott, Stuart W., Ed.

    2011-01-01

    In recent years there have been increasing efforts to use accountability systems based on large-scale tests of students as a mechanism for improving student achievement. The federal No Child Left Behind Act (NCLB) is a prominent example of such an effort, but it is only the continuation of a steady trend toward greater test-based accountability in…

  2. Monte Carlo Simulations Comparing Fisher Exact Test and Unequal Variances t Test for Analysis of Differences Between Groups in Brief Hospital Lengths of Stay.

    PubMed

    Dexter, Franklin; Bayman, Emine O; Dexter, Elisabeth U

    2017-12-01

    We examined type I and II error rates for analysis of (1) mean hospital length of stay (LOS) versus (2) percentage of hospital LOS that are overnight. These 2 end points are suitable for when LOS is treated as a secondary economic end point. We repeatedly resampled LOS for 5052 discharges of thoracoscopic wedge resections and lung lobectomy at 26 hospitals. Unequal variances t test (Welch method) and Fisher exact test both were conservative (ie, type I error rate less than nominal level). The Wilcoxon rank sum test was included as a comparator; the type I error rates did not differ from the nominal level of 0.05 or 0.01. Fisher exact test was more powerful than the unequal variances t test at detecting differences among hospitals; estimated odds ratio for obtaining P < .05 with Fisher exact test versus unequal variances t test = 1.94, with 95% confidence interval, 1.31-3.01. Fisher exact test and Wilcoxon-Mann-Whitney had comparable statistical power in terms of differentiating LOS between hospitals. For studies with LOS to be used as a secondary end point of economic interest, there is currently considerable interest in the planned analysis being for the percentage of patients suitable for ambulatory surgery (ie, hospital LOS equals 0 or 1 midnight). Our results show that there need not be a loss of statistical power when groups are compared using this binary end point, as compared with either Welch method or Wilcoxon rank sum test.
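    A minimal sketch of the two analyses being contrasted, run on simulated lengths of stay for two hypothetical hospitals (the counts and rates are invented, and the binary end point is taken as "LOS of 0 or 1 midnight"):

    ```python
    # Sketch with simulated data: comparing mean LOS via Welch's t test against the
    # binary "0 or 1 midnight" end point via Fisher's exact test, as discussed above.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    los_a = rng.poisson(2.0, 120)   # hypothetical hospital A lengths of stay (midnights)
    los_b = rng.poisson(2.6, 140)   # hypothetical hospital B

    # End point 1: mean LOS, unequal-variances (Welch) t test
    t_stat, p_welch = stats.ttest_ind(los_a, los_b, equal_var=False)

    # End point 2: percentage of stays that are 0 or 1 midnight, Fisher exact test
    table = [[np.sum(los_a <= 1), np.sum(los_a > 1)],
             [np.sum(los_b <= 1), np.sum(los_b > 1)]]
    odds_ratio, p_fisher = stats.fisher_exact(table)

    print(f"Welch t test p = {p_welch:.4f}; Fisher exact p = {p_fisher:.4f}, OR = {odds_ratio:.2f}")
    ```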

  3. Tests for informative cluster size using a novel balanced bootstrap scheme.

    PubMed

    Nevalainen, Jaakko; Oja, Hannu; Datta, Somnath

    2017-07-20

    Clustered data are often encountered in biomedical studies, and to date, a number of approaches have been proposed to analyze such data. However, the phenomenon of informative cluster size (ICS) is a challenging problem, and its presence has an impact on the choice of a correct analysis methodology. For example, Dutta and Datta (2015, Biometrics) presented a number of marginal distributions that could be tested. Depending on the nature and degree of informativeness of the cluster size, these marginal distributions may differ, as do the choices of the appropriate test. In particular, they applied their new test to a periodontal data set where the plausibility of the informativeness was mentioned, but no formal test for the same was conducted. We propose bootstrap tests for testing the presence of ICS. A balanced bootstrap method is developed to successfully estimate the null distribution by merging the re-sampled observations with closely matching counterparts. Relying on the assumption of exchangeability within clusters, the proposed procedure performs well in simulations even with a small number of clusters, at different distributions and against different alternative hypotheses, thus making it an omnibus test. We also explain how to extend the ICS test to a regression setting, thereby enhancing its practical utility. The methodologies are illustrated using the periodontal data set mentioned earlier. Copyright © 2017 John Wiley & Sons, Ltd.
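    The authors' balanced bootstrap is not reproduced here, but the following simplified sketch conveys the flavour of a resampling test for informative cluster size: it asks whether cluster size is associated with the cluster-specific mean, using random permutation of the cluster sizes as the null model. The function name and the simulated clusters are illustrative only.

    ```python
    # Simplified sketch (not the authors' balanced bootstrap): a resampling test of
    # informative cluster size based on the association between cluster size and the
    # cluster-specific mean, with permuted cluster sizes as the null model.
    import numpy as np

    def ics_test(clusters, n_resamples=5000, seed=0):
        """clusters: list of 1-D arrays, one per cluster. Returns (statistic, p-value)."""
        rng = np.random.default_rng(seed)
        sizes = np.array([len(c) for c in clusters], dtype=float)
        means = np.array([np.mean(c) for c in clusters])
        observed = abs(np.corrcoef(sizes, means)[0, 1])
        null = np.empty(n_resamples)
        for b in range(n_resamples):
            null[b] = abs(np.corrcoef(rng.permutation(sizes), means)[0, 1])
        p_value = (1 + np.sum(null >= observed)) / (1 + n_resamples)
        return observed, p_value

    # Illustrative use: larger clusters made to have systematically larger means
    rng = np.random.default_rng(3)
    clusters = [rng.normal(loc=0.1 * n, scale=1.0, size=n) for n in rng.integers(2, 12, 40)]
    stat, p = ics_test(clusters)
    print(f"|corr(size, cluster mean)| = {stat:.3f}, resampling p = {p:.4f}")
    ```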

  4. Visually directed vs. software-based targeted biopsy compared to transperineal template mapping biopsy in the detection of clinically significant prostate cancer.

    PubMed

    Valerio, Massimo; McCartan, Neil; Freeman, Alex; Punwani, Shonit; Emberton, Mark; Ahmed, Hashim U

    2015-10-01

    Targeted biopsy based on cognitive or software magnetic resonance imaging (MRI) to transrectal ultrasound registration seems to increase the detection rate of clinically significant prostate cancer as compared with standard biopsy. However, these strategies have not been directly compared against an accurate test yet. The aim of this study was to obtain pilot data on the diagnostic ability of visually directed targeted biopsy vs. software-based targeted biopsy, considering transperineal template mapping (TPM) biopsy as the reference test. Prospective paired cohort study included 50 consecutive men undergoing TPM with one or more visible targets detected on preoperative multiparametric MRI. Targets were contoured on the Biojet software. Patients initially underwent software-based targeted biopsies, then visually directed targeted biopsies, and finally systematic TPM. The detection rate of clinically significant disease (Gleason score ≥3+4 and/or maximum cancer core length ≥4mm) of one strategy against another was compared by 3×3 contingency tables. Secondary analyses were performed using a less stringent threshold of significance (Gleason score ≥4+3 and/or maximum cancer core length ≥6mm). Median age was 68 (interquartile range: 63-73); median prostate-specific antigen level was 7.9ng/mL (6.4-10.2). A total of 79 targets were detected with a mean of 1.6 targets per patient. Of these, 27 (34%), 28 (35%), and 24 (31%) were scored 3, 4, and 5, respectively. At a patient level, the detection rate was 32 (64%), 34 (68%), and 38 (76%) for visually directed targeted, software-based biopsy, and TPM, respectively. Combining the 2 targeted strategies would have led to detection rate of 39 (78%). At a patient level and at a target level, software-based targeted biopsy found more clinically significant diseases than did visually directed targeted biopsy, although this was not statistically significant (22% vs. 14%, P = 0.48; 51.9% vs. 44.3%, P = 0.24). Secondary

  5. Postoperative course and clinical significance of biochemical blood tests following hepatic resection.

    PubMed

    Reissfelder, C; Rahbari, N N; Koch, M; Kofler, B; Sutedja, N; Elbers, H; Büchler, M W; Weitz, J

    2011-06-01

    Hepatic resection continues to be associated with substantial morbidity. Although biochemical tests are important for the early diagnosis of complications, there is limited information on their postoperative changes in relation to outcome in patients with surgery-related morbidity. A total of 835 consecutive patients underwent hepatic resection between January 2002 and January 2008. Biochemical blood tests were assessed before, and 1, 3, 5 and 7 days after surgery. Analyses were stratified according to the extent of resection (3 or fewer versus more than 3 segments). A total of 451 patients (54·0 per cent) underwent resection of three or fewer anatomical segments; resection of more than three segments was performed in 384 (46·0 per cent). Surgery-related morbidity was documented in 258 patients (30·9 per cent) and occurred more frequently in patients who had a major resection (P = 0·001). Serum bilirubin and international normalized ratio as measures of serial hepatic function differed significantly depending on the extent of resection. Furthermore, they were significantly affected in patients with complications, irrespective of the extent of resection. The extent of resection had, however, little impact on renal function and haemoglobin levels. Surgery-related morbidity caused an increase in C-reactive protein levels only after a minor resection. Biochemical data may help to recognize surgery-related complications early during the postoperative course, and serve as the basis for the definition of complications after hepatic resection. Copyright © 2011 British Journal of Surgery Society Ltd. Published by John Wiley & Sons, Ltd.

  6. How Have State Level Standards-Based Tests Related to Norm-Referenced Tests in Alaska?

    ERIC Educational Resources Information Center

    Fenton, Ray

    This overview of the Alaska system for test development, scoring, and reporting explored differences and similarities between norm-referenced and standards-based tests. The current Alaska testing program is based on legislation passed in 1997 and 1998, and is designed to meet the requirements of the federal No Child Left Behind Legislation. In…

  7. Screening for cognitive impairment in older individuals. Validation study of a computer-based test.

    PubMed

    Green, R C; Green, J; Harrison, J M; Kutner, M H

    1994-08-01

    This study examined the validity of a computer-based cognitive test that was recently designed to screen the elderly for cognitive impairment. Criterion-related validity was examined by comparing test scores of impaired patients and normal control subjects. Construct-related validity was computed through correlations between computer-based subtests and related conventional neuropsychological subtests. University center for memory disorders. Fifty-two patients with mild cognitive impairment by strict clinical criteria and 50 unimpaired, age- and education-matched control subjects. Control subjects were rigorously screened by neurological, neuropsychological, imaging, and electrophysiological criteria to identify and exclude individuals with occult abnormalities. Using a cut-off total score of 126, this computer-based instrument had a sensitivity of 0.83 and a specificity of 0.96. Using a prevalence estimate of 10%, predictive values, positive and negative, were 0.70 and 0.96, respectively. Computer-based subtests correlated significantly with conventional neuropsychological tests measuring similar cognitive domains. Thirteen (17.8%) of 73 volunteers with normal medical histories were excluded from the control group, with unsuspected abnormalities on standard neuropsychological tests, electroencephalograms, or magnetic resonance imaging scans. Computer-based testing is a valid screening methodology for the detection of mild cognitive impairment in the elderly, although this particular test has important limitations. Broader applications of computer-based testing will require extensive population-based validation. Future studies should recognize that normal control subjects without a history of disease who are typically used in validation studies may have a high incidence of unsuspected abnormalities on neurodiagnostic studies.
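    The prevalence-adjusted predictive values quoted above follow from the standard screening formulas; the sketch below applies them to illustrative inputs (because of rounding, the output will not exactly match every figure reported in the abstract).

    ```python
    # Sketch of the standard prevalence-adjusted predictive-value calculation used in
    # screening studies like the one above (inputs are illustrative round numbers).
    def predictive_values(sensitivity: float, specificity: float, prevalence: float):
        ppv = (sensitivity * prevalence) / (
            sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
        npv = (specificity * (1 - prevalence)) / (
            specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
        return ppv, npv

    ppv, npv = predictive_values(sensitivity=0.83, specificity=0.96, prevalence=0.10)
    print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
    ```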

  8. Significance of hair-dye base-induced sensory irritation.

    PubMed

    Fujita, F; Azuma, T; Tajiri, M; Okamoto, H; Sano, M; Tominaga, M

    2010-06-01

    Oxidation hair-dyes, which are the principal hair-dyes, sometimes induce painful sensory irritation of the scalp caused by the combination of highly reactive substances, such as hydrogen peroxide and alkali agents. Although many cases of severe facial and scalp dermatitis have been reported following the use of hair-dyes, sensory irritation caused by contact of the hair-dye with the skin has not been reported clearly. In this study, we used a self-assessment questionnaire to measure the sensory irritation in various regions of the body caused by two model hair-dye bases that contained different amounts of alkali agents without dyes. Moreover, the occipital region was found as an alternative region of the scalp to test for sensory irritation of the hair-dye bases. We used this region to evaluate the relationship of sensitivity with skin properties, such as trans-epidermal water loss (TEWL), stratum corneum water content, sebum amount, surface temperature, current perception threshold (CPT), catalase activities in tape-stripped skin and sensory irritation score with the model hair-dye bases. The hair-dye sensitive group showed higher TEWL, a lower sebum amount, a lower surface temperature and higher catalase activity than the insensitive group, and was similar to that of damaged skin. These results suggest that sensory irritation caused by hair-dye could occur easily on the damaged dry scalp, as that caused by skin cosmetics reported previously.

  9. Space Launch System Base Heating Test: Environments and Base Flow Physics

    NASA Technical Reports Server (NTRS)

    Mehta, Manish; Knox, Kyle; Seaford, Mark; Dufrene, Aaron

    2016-01-01

    NASA MSFC and CUBRC designed and developed a 2% scale SLS propulsive wind tunnel test program to investigate base flow effects during flight from lift-off to MECO. This type of test program has not been conducted since the NASA Shuttle Program, more than 40 years ago. The paper by Dufrene et al. described the operation, instrumentation type and layout, facility and propulsion performance, test matrix and conditions, and some raw results. This paper will focus on the SLS base flow physics and the generation and results of the design environments being used to design the thermal protection system.

  10. [ALPHA-fitness test battery: health-related field-based fitness tests assessment in children and adolescents].

    PubMed

    Ruiz, J R; España Romero, V; Castro Piñero, J; Artero, E G; Ortega, F B; Cuenca García, M; Jiménez Pavón, D; Chillón, P; Girela Rejón, Ma J; Mora, J; Gutiérrez, A; Suni, J; Sjöstrom, M; Castillo, M J

    2011-01-01

    Here we summarize the work developed by the ALPHA (Assessing Levels of Physical Activity) Study and describe the tests included in the ALPHA health-related fitness test battery for children and adolescents. The evidence-based ALPHA-Fitness test battery includes the following tests: 1) the 20 m shuttle run test to assess cardiorespiratory fitness; 2) the handgrip strength test and 3) the standing broad jump to assess musculoskeletal fitness; and 4) body mass index, 5) waist circumference and 6) skinfold thickness (triceps and subscapular) to assess body composition. Furthermore, we include two versions: 1) the high priority ALPHA health-related fitness test battery, which comprises all the evidence-based fitness tests except the measurement of the skinfold thickness; and 2) the extended ALPHA health-related fitness test battery for children and adolescents, which includes all the evidence-based fitness tests plus the 4 x 10 m shuttle run test to assess motor fitness.

  11. Statistical Tests of System Linearity Based on the Method of Surrogate Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunter, N.; Paez, T.; Red-Horse, J.

    When dealing with measured data from dynamic systems we often make the tacit assumption that the data are generated by linear dynamics. While some systematic tests for linearity and determinism are available - for example the coherence function, the probability density function, and the bispectrum - further tests that quantify the existence and the degree of nonlinearity are clearly needed. In this paper we demonstrate a statistical test for the nonlinearity exhibited by a dynamic system excited by Gaussian random noise. We perform the usual division of the input and response time series data into blocks as required by the Welch method of spectrum estimation and search for significant relationships between a given input frequency and response at harmonics of the selected input frequency. We argue that systematic tests based on the recently developed statistical method of surrogate data readily detect significant nonlinear relationships. The paper elucidates the method of surrogate data. Typical results are illustrated for a linear single degree-of-freedom system and for a system with polynomial stiffness nonlinearity.
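    A compact sketch of the surrogate-data idea, using phase-randomized surrogates that preserve the amplitude spectrum and a simple time-reversal-asymmetry statistic; the statistic, the synthetic series and the number of surrogates are choices made for this example rather than details of the cited work.

    ```python
    # Sketch of a surrogate-data test for nonlinearity: phase-randomized surrogates
    # preserve the linear (spectral) structure, and a time-reversal-asymmetry
    # statistic is compared against the surrogate distribution.
    import numpy as np

    def phase_randomized_surrogate(x, rng):
        """Surrogate with the same amplitude spectrum but randomized Fourier phases."""
        spectrum = np.fft.rfft(x)
        phases = rng.uniform(0, 2 * np.pi, len(spectrum))
        phases[0] = 0.0    # keep the mean untouched
        phases[-1] = 0.0   # keep the Nyquist bin real for even-length series
        return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=len(x))

    def reversal_asymmetry(x, lag=1):
        """A statistic that is zero in expectation for linear Gaussian processes."""
        return np.mean((x[lag:] - x[:-lag]) ** 3)

    rng = np.random.default_rng(4)
    # Illustrative nonlinear series: a quadratically distorted AR(1) process
    e = rng.standard_normal(2048)
    x = np.zeros(2048)
    for t in range(1, 2048):
        x[t] = 0.7 * x[t - 1] + e[t]
    x = x + 0.4 * x ** 2

    observed = reversal_asymmetry(x)
    null = np.array([reversal_asymmetry(phase_randomized_surrogate(x, rng)) for _ in range(500)])
    p_value = (1 + np.sum(np.abs(null) >= abs(observed))) / (1 + len(null))
    print(f"observed statistic = {observed:.3f}, surrogate p = {p_value:.4f}")
    ```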

  12. Performance-Based Testing and Success in Naval Advanced Flight Training.

    DTIC Science & Technology

    1992-11-01

    dual-task ADHT reached significance as measured by the increase in R2. [4] At this point, the number of variables had been pared to 15. We subjected...these 15 variables to further analysis. First, we desired to construct a composite score based on the eight ADHT variables. One feasible composite...score was arrived at by utilizing only the dual-task ADHT test in the following manner: ADHTCS = .20 * ZADHT6 - .50 * ZADHT5 - .10 * ZADHT7 - .20

  13. Identifying significant gene‐environment interactions using a combination of screening testing and hierarchical false discovery rate control

    PubMed Central

    Shen, Li; Saykin, Andrew J.; Williams, Scott M.; Moore, Jason H.

    2016-01-01

    ABSTRACT Although gene‐environment (G×E) interactions play an important role in many biological systems, detecting these interactions within genome‐wide data can be challenging due to the loss in statistical power incurred by multiple hypothesis correction. To address the challenge of poor power and the limitations of existing multistage methods, we recently developed a screening‐testing approach for G×E interaction detection that combines elastic net penalized regression with joint estimation to support a single omnibus test for the presence of G×E interactions. In our original work on this technique, however, we did not assess type I error control or power and evaluated the method using just a single, small bladder cancer data set. In this paper, we extend the original method in two important directions and provide a more rigorous performance evaluation. First, we introduce a hierarchical false discovery rate approach to formally assess the significance of individual G×E interactions. Second, to support the analysis of truly genome‐wide data sets, we incorporate a score statistic‐based prescreening step to reduce the number of single nucleotide polymorphisms prior to fitting the first stage penalized regression model. To assess the statistical properties of our method, we compare the type I error rate and statistical power of our approach with competing techniques using both simple simulation designs as well as designs based on real disease architectures. Finally, we demonstrate the ability of our approach to identify biologically plausible SNP‐education interactions relative to Alzheimer's disease status using genome‐wide association study data from the Alzheimer's Disease Neuroimaging Initiative (ADNI). PMID:27578615
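    The full screening/elastic-net pipeline is beyond a short example, but the multiple-testing ingredient can be sketched on its own: a Benjamini-Hochberg false discovery rate step applied to a vector of per-interaction p-values (the p-values below are simulated, and this is not the authors' hierarchical procedure).

    ```python
    # Simplified sketch of the multiple-testing component only (not the authors' full
    # screening/elastic-net pipeline): Benjamini-Hochberg FDR control over a vector of
    # per-interaction p-values.
    import numpy as np

    def benjamini_hochberg(p_values, alpha=0.05):
        """Return a boolean mask of rejected hypotheses at FDR level alpha."""
        p = np.asarray(p_values)
        order = np.argsort(p)
        m = len(p)
        thresholds = alpha * (np.arange(1, m + 1) / m)
        below = p[order] <= thresholds
        rejected = np.zeros(m, dtype=bool)
        if below.any():
            k = np.max(np.nonzero(below)[0])   # largest index meeting its threshold
            rejected[order[: k + 1]] = True
        return rejected

    # Illustrative use: a few strong interaction signals buried among null tests
    rng = np.random.default_rng(5)
    p_vals = np.concatenate([rng.uniform(0, 1, 995), rng.uniform(0, 1e-4, 5)])
    print("interactions declared significant:", benjamini_hochberg(p_vals).sum())
    ```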

  14. Testing Game-Based Performance in Team-Handball.

    PubMed

    Wagner, Herbert; Orwat, Matthias; Hinz, Matthias; Pfusterschmied, Jürgen; Bacharach, David W; von Duvillard, Serge P; Müller, Erich

    2016-10-01

    Wagner, H, Orwat, M, Hinz, M, Pfusterschmied, J, Bacharach, DW, von Duvillard, SP, and Müller, E. Testing game-based performance in team-handball. J Strength Cond Res 30(10): 2794-2801, 2016-Team-handball is a fast paced game of defensive and offensive action that includes specific movements of jumping, passing, throwing, checking, and screening. To date and to the best of our knowledge, a game-based performance test (GBPT) for team-handball does not exist. Therefore, the aim of this study was to develop and validate such a test. Seventeen experienced team-handball players performed 2 GBPTs separated by 7 days between each test, an incremental treadmill running test, and a team-handball test game (TG) (2 × 20 minutes). Peak oxygen uptake (V̇O2peak), blood lactate concentration (BLC), heart rate (HR), sprinting time, time of offensive and defensive actions as well as running intensities, ball velocity, and jump height were measured in the game-based test. Reliability of the tests was calculated using an intraclass correlation coefficient (ICC). Additionally, we measured V̇O2peak in the incremental treadmill running test and BLC, HR, and running intensities in the team-handball TG to determine the validity of the GBPT. For the test-retest reliability, we found an ICC >0.70 for the peak BLC and HR, mean offense and defense time, as well as ball velocity that yielded an ICC >0.90 for the V̇O2peak in the GBPT. Percent walking and standing constituted 73% of total time. Moderate (18%) and high (9%) intensity running in the GBPT was similar to the team-handball TG. Our results indicated that the GBPT is a valid and reliable test to analyze team-handball performance (physiological and biomechanical variables) under conditions similar to competition.

  15. TREAT (TREe-based Association Test)

    Cancer.gov

    TREAT is an R package for detecting complex joint effects in case-control studies. The test statistic is derived from a tree-structure model by recursive partitioning the data. Ultra-fast algorithm is designed to evaluate the significance of association between candidate gene and disease outcome

  16. Test-retest reliability of computer-based video analysis of general movements in healthy term-born infants.

    PubMed

    Valle, Susanne Collier; Støen, Ragnhild; Sæther, Rannei; Jensenius, Alexander Refsum; Adde, Lars

    2015-10-01

    A computer-based video analysis has recently been presented for quantitative assessment of general movements (GMs). This method's test-retest reliability, however, has not yet been evaluated. The aim of the current study was to evaluate the test-retest reliability of computer-based video analysis of GMs, and to explore the association between computer-based video analysis and the temporal organization of fidgety movements (FMs). Test-retest reliability study. 75 healthy, term-born infants were recorded twice the same day during the FMs period using a standardized video set-up. The computer-based movement variables "quantity of motion mean" (Qmean), "quantity of motion standard deviation" (QSD) and "centroid of motion standard deviation" (CSD) were analyzed, reflecting the amount of motion and the variability of the spatial center of motion of the infant, respectively. In addition, the association between the variable CSD and the temporal organization of FMs was explored. Intraclass correlation coefficients (ICC 1.1 and ICC 3.1) were calculated to assess test-retest reliability. The ICC values for the variables CSD, Qmean and QSD were 0.80, 0.80 and 0.86 for ICC (1.1), respectively; and 0.80, 0.86 and 0.90 for ICC (3.1), respectively. There were significantly lower CSD values in the recordings with continual FMs compared to the recordings with intermittent FMs (p<0.05). This study showed high test-retest reliability of computer-based video analysis of GMs, and a significant association between our computer-based video analysis and the temporal organization of FMs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
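    For reference, the two intraclass correlation coefficients reported above can be computed from a subjects-by-sessions matrix via ANOVA mean squares, as in the following sketch with hypothetical recordings (the data and error magnitudes are invented).

    ```python
    # Sketch of ICC(1,1) and ICC(3,1) computed from a subjects x sessions matrix via
    # ANOVA mean squares (hypothetical test-retest data).
    import numpy as np

    def icc_1_1_and_3_1(data: np.ndarray):
        """data: rows = subjects, columns = repeated recordings."""
        n, k = data.shape
        grand = data.mean()
        row_means = data.mean(axis=1)
        col_means = data.mean(axis=0)
        ss_total = ((data - grand) ** 2).sum()
        ss_rows = k * ((row_means - grand) ** 2).sum()
        ss_cols = n * ((col_means - grand) ** 2).sum()
        bms = ss_rows / (n - 1)                       # between-subjects mean square
        wms = (ss_total - ss_rows) / (n * (k - 1))    # within-subjects mean square
        ems = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual mean square
        icc11 = (bms - wms) / (bms + (k - 1) * wms)
        icc31 = (bms - ems) / (bms + (k - 1) * ems)
        return icc11, icc31

    rng = np.random.default_rng(6)
    true_scores = rng.normal(50, 10, 75)              # 75 hypothetical infants
    recordings = np.column_stack([true_scores + rng.normal(0, 4, 75) for _ in range(2)])
    print("ICC(1,1) = %.2f, ICC(3,1) = %.2f" % icc_1_1_and_3_1(recordings))
    ```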

  17. Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling

    ERIC Educational Resources Information Center

    Banjanovic, Erin S.; Osborne, Jason W.

    2016-01-01

    Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…
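    A minimal sketch of the kind of resampling-based interval the entry advocates: a percentile bootstrap confidence interval for Cohen's d on synthetic two-group data (group sizes, effect size and the number of bootstrap draws are arbitrary choices for illustration).

    ```python
    # Sketch of a percentile bootstrap confidence interval for Cohen's d.
    import numpy as np

    def cohens_d(a, b):
        pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                            / (len(a) + len(b) - 2))
        return (a.mean() - b.mean()) / pooled_sd

    def bootstrap_ci_d(a, b, n_boot=10000, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        boots = np.array([cohens_d(rng.choice(a, len(a), replace=True),
                                   rng.choice(b, len(b), replace=True))
                          for _ in range(n_boot)])
        return np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])

    rng = np.random.default_rng(7)
    group1 = rng.normal(0.5, 1.0, 40)
    group2 = rng.normal(0.0, 1.0, 40)
    low, high = bootstrap_ci_d(group1, group2)
    print(f"d = {cohens_d(group1, group2):.2f}, 95% bootstrap CI [{low:.2f}, {high:.2f}]")
    ```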

  18. Realistic generation of natural phenomena based on video synthesis

    NASA Astrophysics Data System (ADS)

    Wang, Changbo; Quan, Hongyan; Li, Chenhui; Xiao, Zhao; Chen, Xiao; Li, Peng; Shen, Liuwei

    2009-10-01

    Research on the generation of natural phenomena has many applications, including movie special effects, battlefield simulation, and virtual reality. Based on video synthesis techniques, a new approach is proposed for the synthesis of natural phenomena, including flowing water and fire flames. From source fire and flow footage, seamless video of arbitrary length is generated. The interaction between wind and fire flame is then achieved through the flame skeleton. The flow is also synthesized by extending the video textures using an edge resampling method. Finally, the synthesized natural phenomena can be integrated into a virtual scene.

  19. Prognostic significance of blood coagulation tests in carcinoma of the lung and colon.

    PubMed

    Wojtukiewicz, M Z; Zacharski, L R; Moritz, T E; Hur, K; Edwards, R L; Rickles, F R

    1992-08-01

    Blood coagulation test results were collected prospectively in patients with previously untreated, advanced lung or colon cancer who entered into a clinical trial. In patients with colon cancer, reduced survival was associated (in univariate analysis) with higher values obtained at entry to the study for fibrinogen, fibrin(ogen) split products, antiplasmin, and fibrinopeptide A and accelerated euglobulin lysis times. In patients with non-small cell lung cancer, reduced survival was associated (in univariate analysis) with higher fibrinogen and fibrin(ogen) split products, platelet counts and activated partial thromboplastin times. In patients with small cell carcinoma of the lung, only higher activated partial thromboplastin times were associated (in univariate analysis) with reduced survival in patients with disseminated disease. In multivariate analysis, higher activated partial thromboplastin times were a significant independent predictor of survival for patients with non-small cell lung cancer limited to one hemithorax and with disseminated small cell carcinoma of the lung. Fibrin(ogen) split product levels were an independent predictor of survival for patients with disseminated non-small cell lung cancer as were both the fibrinogen and fibrinopeptide A levels for patients with disseminated colon cancer. These results suggest that certain tests of blood coagulation may be indicative of prognosis in lung and colon cancer. The heterogeneity of these results suggests that the mechanism(s), intensity, and pathophysiological significance of coagulation activation in cancer may differ between tumour types.

  20. Evaluation of a Secure Laptop-Based Testing Program in an Undergraduate Nursing Program: Students' Perspective.

    PubMed

    Tao, Jinyuan; Gunter, Glenda; Tsai, Ming-Hsiu; Lim, Dan

    2016-01-01

    Recently, the many robust learning management systems, and the availability of affordable laptops, have made secure laptop-based testing a reality on many campuses. The undergraduate nursing program at the authors' university began to implement a secure laptop-based testing program in 2009, which allowed students to use their newly purchased laptops to take quizzes and tests securely in classrooms. After nearly 5 years' secure laptop-based testing program implementation, a formative evaluation, using a mixed method that has both descriptive and correlational data elements, was conducted to seek constructive feedback from students to improve the program. Evaluation data show that, overall, students (n = 166) believed the secure laptop-based testing program helps them get hands-on experience of taking examinations on the computer and gets them prepared for their computerized NCLEX-RN. Students, however, had a lot of concerns about laptop glitches and campus wireless network glitches they experienced during testing. At the same time, NCLEX-RN first-time passing rate data were analyzed using the χ2 test, and revealed no significant association between the two testing methods (paper-and-pencil testing and the secure laptop-based testing) and students' first-time NCLEX-RN passing rate. Based on the odds ratio, however, the odds of students passing NCLEX-RN the first time was 1.37 times higher if they were taught with the secure laptop-based testing method than if taught with the traditional paper-and-pencil testing method in nursing school. It was recommended to the institution that better quality of laptops needs to be provided to future students, measures needed to be taken to further stabilize the campus wireless Internet network, and there was a need to reevaluate the Laptop Initiative Program.
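    The two analyses mentioned, a chi-square test of association and an odds ratio from the 2x2 table of first-time passing by testing method, can be sketched as follows; the counts are invented and are not the study's data.

    ```python
    # Sketch of the two analyses described above on a hypothetical 2x2 table of
    # first-time NCLEX-RN results by testing method (counts are invented).
    import numpy as np
    from scipy import stats

    #                 pass   fail
    table = np.array([[150,   16],    # secure laptop-based testing (hypothetical)
                      [140,   20]])   # paper-and-pencil testing (hypothetical)

    chi2, p, dof, expected = stats.chi2_contingency(table)
    odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
    print(f"chi-square p = {p:.3f}, odds ratio (laptop vs. paper) = {odds_ratio:.2f}")
    ```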

  1. A PC-based software test for measuring alcohol and drug effects in human subjects.

    PubMed

    Mills, K C; Parkman, K M; Spruill, S E

    1996-12-01

    A new software-based visual search and divided-attention test of cognitive performance was developed and evaluated in an alcohol dose-response study with 24 human subjects aged 21-62 years. The test used language-free, color, graphic displays to represent the visuospatial demands of driving. Cognitive demands were increased over previous hardware-based tests, and the motor skills required for the test involved minimal eye movements and eye-hand coordination. Repeated performance on the test was evaluated with a Latin-square design by using a placebo and two alcohol doses, low (0.48 g/kg/LBM) and moderate (0.72 g/kg/LBM). The data on 7 females and 17 males yielded significant falling and rising impairment effects coincident with moderate rising and falling breath alcohol levels (mean peak BrALs = 0.045 g/dl and 0.079 g/dl). None of the subjects reported eye-strain or psychomotor fatigue as compared with previous tests. The test's high sensitivity relative to its variance, in the context of use in basic and applied research and in worksite fitness-for-duty testing, was discussed. The most distinct advantage of a software-based test that operates on readily available PCs is that it can be widely distributed to researchers with a common reference to compare a variety of alcohol and drug effects.

  2. Test Platforms for Model-Based Flight Research

    NASA Astrophysics Data System (ADS)

    Dorobantu, Andrei

    Demonstrating the reliability of flight control algorithms is critical to integrating unmanned aircraft systems into the civilian airspace. For many potential applications, design and certification of these algorithms will rely heavily on mathematical models of the aircraft dynamics. Therefore, the aerospace community must develop flight test platforms to support the advancement of model-based techniques. The University of Minnesota has developed a test platform dedicated to model-based flight research for unmanned aircraft systems. This thesis provides an overview of the test platform and its research activities in the areas of system identification, model validation, and closed-loop control for small unmanned aircraft.

  3. Comparison of Earth Science Achievement between Animation-Based and Graphic-Based Testing Designs

    ERIC Educational Resources Information Center

    Wu, Huang-Ching; Chang, Chun-Yen; Chen, Chia-Li D.; Yeh, Ting-Kuang; Liu, Cheng-Chueh

    2010-01-01

    This study developed two testing devices, namely the animation-based test (ABT) and the graphic-based test (GBT) in the area of earth sciences covering four domains that ranged from astronomy, meteorology, oceanography to geology. Both the students' achievements of and their attitudes toward ABT compared to GBT were investigated. The purposes of…

  4. Cytogenotoxicity screening of source water, wastewater and treated water of drinking water treatment plants using two in vivo test systems: Allium cepa root based and Nile tilapia erythrocyte based tests.

    PubMed

    Hemachandra, Chamini K; Pathiratne, Asoka

    2017-01-01

    Biological effect directed in vivo tests with model organisms are useful in assessing potential health risks associated with chemical contaminations in surface waters. This study examined the applicability of two in vivo test systems viz. plant, Allium cepa root based tests and fish, Oreochromis niloticus erythrocyte based tests for screening cytogenotoxic potential of raw source water, water treatment waste (effluents) and treated water of drinking water treatment plants (DWTPs) using two DWTPs associated with a major river in Sri Lanka. Measured physico-chemical parameters of the raw water, effluents and treated water samples complied with the respective Sri Lankan standards. In the in vivo tests, raw water induced statistically significant root growth retardation, mitodepression and chromosomal abnormalities in the root meristem of the plant and micronuclei/nuclear buds evolution and genetic damage (as reflected by comet scores) in the erythrocytes of the fish compared to the aged tap water controls signifying greater genotoxicity of the source water especially in the dry period. The effluents provoked relatively high cytogenotoxic effects on both test systems but the toxicity in most cases was considerably reduced to the raw water level with the effluent dilution (1:8). In vivo tests indicated reduction of cytogenotoxic potential in the tested drinking water samples. The results support the potential applications of practically feasible in vivo biological test systems such as A. cepa root based tests and the fish erythrocyte based tests as complementary tools for screening cytogenotoxicity potential of the source water and water treatment waste reaching downstream of aquatic ecosystems and for evaluating cytogenotoxicity eliminating efficacy of the DWTPs in different seasons in view of human and ecological safety. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Forensic aspects of DNA-based human identity testing.

    PubMed

    Roper, Stephen M; Tatum, Owatha L

    2008-01-01

    The forensic applications of DNA-based human identity laboratory testing are often underappreciated. Molecular biology has seen an exponential improvement in the accuracy and statistical power provided by identity testing in the past decade. This technology, dependent upon an individual's unique DNA sequence, has cemented the use of DNA technology in the forensic laboratory. This paper will discuss the state of modern DNA-based identity testing, describe the technology used to perform this testing, and describe its use as it relates to forensic applications. We will also compare individual technologies, including polymerase chain reaction (PCR) and Southern Blotting, that are used to detect the molecular differences that make all individuals unique. An increasing reliance on DNA-based identity testing dictates that healthcare providers develop an understanding of the background, techniques, and guiding principles of this important forensic tool.

  6. A General Approach to Measuring Test-Taking Effort on Computer-Based Tests

    ERIC Educational Resources Information Center

    Wise, Steven L.; Gao, Lingyun

    2017-01-01

    There has been an increased interest in the impact of unmotivated test taking on test performance and score validity. This has led to the development of new ways of measuring test-taking effort based on item response time. In particular, Response Time Effort (RTE) has been shown to provide an assessment of effort down to the level of individual…

  7. Designing a VOIP Based Language Test

    ERIC Educational Resources Information Center

    Garcia Laborda, Jesus; Magal Royo, Teresa; Otero de Juan, Nuria; Gimenez Lopez, Jose L.

    2015-01-01

    Assessing speaking is one of the most difficult tasks in computer based language testing. Many countries all over the world face the need to implement standardized language tests where speaking tasks are commonly included. However, a number of problems make them rather impractical such as the costs, the personnel involved, the length of time for…

  8. A Study of the Effect of Proximally Autocorrelated Error on Tests of Significance for the Interrupted Time Series Quasi-Experimental Design.

    ERIC Educational Resources Information Center

    Sween, Joyce; Campbell, Donald T.

    The primary purpose of the present study was to investigate the appropriateness of several tests of significance for use with interrupted time series data. The second purpose was to determine what effect the violation of the assumption of uncorrelated error would have on the three tests of significance. The three tests were the Mood test,…

  9. Understanding text-based persuasion and support tactics of concerned significant others

    PubMed Central

    van Stolk-Cooke, Katherine; Hayes, Marie; Baumel, Amit

    2015-01-01

    The behavior of concerned significant others (CSOs) can have a measurable impact on the health and wellness of individuals attempting to meet behavioral and health goals, and research is needed to better understand the attributes of text-based CSO language when encouraging target significant others (TSOs) to achieve those goals. In an effort to inform the development of interventions for CSOs, this study examined the language content of brief text-based messages generated by CSOs to motivate TSOs to achieve a behavioral goal. CSOs generated brief text-based messages for TSOs for three scenarios: (1) to help TSOs achieve the goal, (2) in the event that the TSO is struggling to meet the goal, and (3) in the event that the TSO has given up on meeting the goal. Results indicate that there was a significant relationship between the tone and compassion of messages generated by CSOs, the CSOs’ perceptions of TSO motivation, and their expectation of a grateful or annoyed reaction by the TSO to their feedback or support. Results underscore the importance of attending to patterns in language when CSOs communicate with TSOs about goal achievement or failure, and how certain variables in the CSOs’ perceptions of their TSOs affect these characteristics. PMID:26312172

  10. Understanding text-based persuasion and support tactics of concerned significant others.

    PubMed

    van Stolk-Cooke, Katherine; Hayes, Marie; Baumel, Amit; Muench, Frederick

    2015-01-01

    The behavior of concerned significant others (CSOs) can have a measurable impact on the health and wellness of individuals attempting to meet behavioral and health goals, and research is needed to better understand the attributes of text-based CSO language when encouraging target significant others (TSOs) to achieve those goals. In an effort to inform the development of interventions for CSOs, this study examined the language content of brief text-based messages generated by CSOs to motivate TSOs to achieve a behavioral goal. CSOs generated brief text-based messages for TSOs for three scenarios: (1) to help TSOs achieve the goal, (2) in the event that the TSO is struggling to meet the goal, and (3) in the event that the TSO has given up on meeting the goal. Results indicate that there was a significant relationship between the tone and compassion of messages generated by CSOs, the CSOs' perceptions of TSO motivation, and their expectation of a grateful or annoyed reaction by the TSO to their feedback or support. Results underscore the importance of attending to patterns in language when CSOs communicate with TSOs about goal achievement or failure, and how certain variables in the CSOs' perceptions of their TSOs affect these characteristics.

  11. Space Launch System Base Heating Test: Experimental Operations & Results

    NASA Technical Reports Server (NTRS)

    Dufrene, Aaron; Mehta, Manish; MacLean, Matthew; Seaford, Mark; Holden, Michael

    2016-01-01

    NASA's Space Launch System (SLS) uses four clustered liquid rocket engines along with two solid rocket boosters. The interaction between all six rocket exhaust plumes will produce a complex and severe thermal environment in the base of the vehicle. This work focuses on a recent 2% scale, hot-fire SLS base heating test. These base heating tests are short-duration tests executed with chamber pressures near the full-scale values with gaseous hydrogen/oxygen engines and RSRMV analogous solid propellant motors. The LENS II shock tunnel/Ludwieg tube tunnel was used at or near flight duplicated conditions up to Mach 5. Model development was based on the Space Shuttle base heating tests with several improvements including doubling of the maximum chamber pressures and duplication of freestream conditions. Test methodology and conditions are presented, and base heating results from 76 runs are reported in non-dimensional form. Regions of high heating are identified and comparisons of various configuration and conditions are highlighted. Base pressure and radiometer results are also reported.

  12. Test Anxiety Analysis of Chinese College Students in Computer-Based Spoken English Test

    ERIC Educational Resources Information Center

    Yanxia, Yang

    2017-01-01

    Test anxiety was a commonly known or assumed factor that could greatly influence performance of test takers. With the employment of designed questionnaires and computer-based spoken English test, this paper explored test anxiety manifestation of Chinese college students from both macro and micro aspects, and found out that the major anxiety in…

  13. A simple bedside blood test (Fibrofast; FIB-5) is superior to FIB-4 index for the differentiation between non-significant and significant fibrosis in patients with chronic hepatitis C.

    PubMed

    Shiha, G; Seif, S; Eldesoky, A; Elbasiony, M; Soliman, R; Metwally, A; Zalata, K; Mikhail, N

    2017-05-01

    A simple non-invasive score (Fibrofast, FIB-5) was developed using five routine laboratory tests (ALT, AST, alkaline phosphatase, albumin and platelet count) for the detection of significant hepatic fibrosis in patients with chronic hepatitis C. The FIB-4 index is a non-invasive test for the assessment of liver fibrosis, and a score of ≤1.45 enables patients with non-significant fibrosis (F0-1) to be correctly distinguished from those with significant fibrosis (F2-4), potentially avoiding liver biopsy. The aim of this study was to compare the performance characteristics of FIB-5 and FIB-4 for differentiating between non-significant and significant fibrosis. A cross-sectional study included 604 chronic HCV patients. All liver biopsies were scored using the METAVIR system. Both FIB-5 and FIB-4 scores were measured and the performance characteristics were calculated using the ROC curve. For the differentiation between non-significant and significant fibrosis, FIB-5 at ≥7.5 achieved specificity 94.4% and PPV 85.7%, whereas FIB-4 at ≤1.45 achieved specificity 54.9% and PPV 55.7%. The FIB-5 score at the new cutoff is superior to the FIB-4 index for the differentiation between non-significant and significant fibrosis.
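    For orientation, the FIB-4 index has a standard published formula, age x AST / (platelet count x sqrt(ALT)), which the sketch below applies together with the <=1.45 rule-out threshold discussed above; the patient values are invented, and the FIB-5 coefficients are not reproduced here.

    ```python
    # Sketch of the standard FIB-4 index and the <=1.45 rule-out threshold
    # (illustrative patient values; FIB-5 coefficients are not reproduced here).
    import math

    def fib4(age_years: float, ast_u_l: float, alt_u_l: float, platelets_10e9_l: float) -> float:
        return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

    score = fib4(age_years=52, ast_u_l=48, alt_u_l=60, platelets_10e9_l=210)
    verdict = "non-significant fibrosis likely (F0-1)" if score <= 1.45 else "cannot rule out significant fibrosis"
    print(f"FIB-4 = {score:.2f} -> {verdict}")
    ```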

  14. Population-based Testing and Treatment Characteristics for Chronic Myelogenous Leukemia

    PubMed Central

    Styles, Timothy; Wu, Manxia; Wilson, Reda; Babcock, Frances; Butterworth, David; West, Dee W.; Richardson, Lisa C.

    2017-01-01

    Introduction National and International Hematology/Oncology Practice Guidelines recommend testing for the BCR-ABL mutation for definitive diagnosis of chronic myeloid leukemia (CML) to allow for appropriate treatment with a Tyrosine Kinase Inhibitor (TKI). The purpose of our study was to describe population-based testing and treatment practice characteristics for patients diagnosed with CML. Methods We analyzed cases of CML using 2011 data from 10 state registries which are part of the Centers for Disease Control and Prevention’s (CDC) National Program of Cancer Registries. We describe completeness of testing for the BCR-ABL gene and availability of outpatient treatment with TKIs and associated characteristics. Results A total of 685 cases of CML were identified; 55% (374) had a documented BCR-ABL gene test with 96% (360) of these being positive for the BCR-ABL gene and the remaining 4% (14) either testing negative or had a missing result. Registries were able to identify the use of TKIs in 54% (369) of patients, though only 43% (296) had a corresponding BCR-ABL gene test documented. One state registry reported a significantly lower percentage of patients being tested for the BCR-ABL gene (25%) and receiving TKI treatment (21%). Limiting analysis to CML case reports from the remaining nine CER registries, 78% (305) patients had a documented BCR-ABL gene test and 79% (308) had documented treatment with a TKI. Receipt of testing or treatment for these nine states did not vary by sex, race, ethnicity, census tract poverty level, census tract urbanization, or insurance status; BCR-ABL testing varied by state of residence and BCR-ABL testing and TKI therapy occurred less often with increasing age (OR: 0.97, 95%CI: 0.95–0.99; OR: 0.97, 95%CI: 0.96–0.99 respectively). Conclusions Collection of detailed CML data vary significantly by states. A majority of the case patients had appropriate testing for the BCR-ABL gene and treatment with tyrosine kinase inhibitors

  15. The comparison between science virtual and paper based test in measuring grade 7 students’ critical thinking

    NASA Astrophysics Data System (ADS)

    Dhitareka, P. H.; Firman, H.; Rusyati, L.

    2018-05-01

    This research compares a science virtual test and a paper-based test in measuring grade 7 students’ critical thinking, considering Multiple Intelligences and gender. A quasi-experimental method with a within-subjects design was used to obtain the data. The population was all seventh-grade students in ten classes of one public secondary school in Bandung; 71 students from two randomly selected classes formed the sample. The data were obtained through 28 questions on the topic of living things and environmental sustainability, constructed from the eight critical thinking elements proposed by Inch, with the questions administered in both science virtual and paper-based formats. The data were analysed using the paired-samples t test when the data were parametric and the Wilcoxon signed-rank test when they were non-parametric. In the general comparison, the p-value for the difference between science virtual and paper-based test scores was 0.506, indicating no significant difference between the two formats based on test scores. The results are further supported by the students’ attitude score of 3.15 on a scale from 1 to 4, indicating positive attitudes towards the science virtual test.
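    The analysis strategy described, a paired-samples t test when the paired differences look parametric and a Wilcoxon signed-rank test otherwise, can be sketched as follows on simulated stand-in scores (the normality screen via a Shapiro-Wilk test is an assumption of this example, not necessarily the authors' procedure).

    ```python
    # Sketch of the within-subjects comparison: paired-samples t test when the
    # differences look roughly normal, Wilcoxon signed-rank test otherwise.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    paper_scores = rng.normal(70, 10, 71)
    virtual_scores = paper_scores + rng.normal(0, 5, 71)   # same students, second format

    differences = virtual_scores - paper_scores
    _, p_normality = stats.shapiro(differences)
    if p_normality > 0.05:
        stat, p = stats.ttest_rel(virtual_scores, paper_scores)
        test_used = "paired-samples t test"
    else:
        stat, p = stats.wilcoxon(virtual_scores, paper_scores)
        test_used = "Wilcoxon signed-rank test"
    print(f"{test_used}: p = {p:.3f}")
    ```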

  16. The GOLM-database standard- a framework for time-series data management based on free software

    NASA Astrophysics Data System (ADS)

    Eichler, M.; Francke, T.; Kneis, D.; Reusser, D.

    2009-04-01

    Monitoring and modelling projects usually involve time series data originating from different sources. File formats, temporal resolution and meta-data documentation rarely adhere to a common standard. As a result, much effort is spent on converting, harmonizing, merging, checking, resampling and reformatting these data. Moreover, in work groups or during the course of time, these tasks tend to be carried out redundantly and repeatedly, especially when new data become available. The resulting duplication of data in various formats consumes additional resources. We propose a database structure and complementary scripts for facilitating these tasks. The GOLM (General Observation and Location Management) framework allows for import and storage of time series data of different types while assisting in meta-data documentation, plausibility checking and harmonization. The imported data can be visually inspected and their coverage among locations and variables may be visualized. Supplementing scripts provide options for data export for selected stations and variables and resampling of the data to the desired temporal resolution. These tools can, for example, be used for generating model input files or reports. Since GOLM fully supports network access, the system can be used efficiently by distributed working groups accessing the same data over the internet. GOLM's database structure and the complementary scripts can easily be customized to specific needs. Any involved software such as MySQL, R, PHP, OpenOffice as well as the scripts for building and using the database, including documentation, are free for download. GOLM was developed out of the practical requirements of the OPAQUE project. It has been tested and further refined in the ERANET-CRUE and SESAM projects, all of which used GOLM to manage meteorological, hydrological and/or water quality data.
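    The harmonization step described above, bringing irregular time series onto a common temporal resolution, is the kind of task shown in the following sketch; GOLM itself is built on MySQL/R/PHP, so the pandas-based example, its column name and its hourly frequency are purely illustrative.

    ```python
    # Sketch of resampling irregular sensor readings onto a common hourly resolution
    # with pandas (variable name and frequencies are illustrative).
    import pandas as pd

    # Irregularly spaced hypothetical water-level readings
    timestamps = pd.to_datetime(["2009-04-01 00:05", "2009-04-01 00:50",
                                 "2009-04-01 02:10", "2009-04-01 02:40",
                                 "2009-04-01 05:15"])
    readings = pd.Series([1.2, 1.3, 1.7, 1.6, 2.1], index=timestamps, name="water_level_m")

    hourly = readings.resample("1h").mean()        # aggregate to hourly means
    hourly_filled = hourly.interpolate(limit=3)    # bridge short gaps only
    print(hourly_filled)
    ```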

  17. Perceptions and performance using computer-based testing: One institution's experience.

    PubMed

    Bloom, Timothy J; Rich, Wesley D; Olson, Stephanie M; Adams, Michael L

    2018-02-01

    The purpose of this study was to evaluate student and faculty perceptions of the transition to a required computer-based testing format and to identify any impact of this transition on student exam performance. Separate questionnaires sent to students and faculty asked about perceptions of and problems with computer-based testing. Exam results from program-required courses for two years prior to and two years following the adoption of computer-based testing were compared to determine if this testing format impacted student performance. Responses to Likert-type questions about perceived ease of use showed no difference between students with one and three semesters experience with computer-based testing. Of 223 student-reported problems, 23% related to faculty training with the testing software. Students most commonly reported improved feedback (46% of responses) and ease of exam-taking (17% of responses) as benefits to computer-based testing. Faculty-reported difficulties were most commonly related to problems with student computers during an exam (38% of responses) while the most commonly identified benefit was collecting assessment data (32% of responses). Neither faculty nor students perceived an impact on exam performance due to computer-based testing. An analysis of exam grades confirmed there was no consistent performance difference between the paper and computer-based formats. Both faculty and students rapidly adapted to using computer-based testing. There was no evidence that switching to computer-based testing had any impact on student exam performance. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. The Comparison of Accuracy Scores on the Paper and Pencil Testing vs. Computer-Based Testing

    ERIC Educational Resources Information Center

    Retnawati, Heri

    2015-01-01

    This study aimed to compare the accuracy of the test scores as results of Test of English Proficiency (TOEP) based on paper and pencil test (PPT) versus computer-based test (CBT). Using the participants' responses to the PPT documented from 2008-2010 and data of CBT TOEP documented in 2013-2014 on the sets of 1A, 2A, and 3A for the Listening and…

  19. Precursor Analysis for Flight- and Ground-Based Anomaly Risk Significance Determination

    NASA Technical Reports Server (NTRS)

    Groen, Frank

    2010-01-01

    This slide presentation reviews the precursor analysis for flight and ground based anomaly risk significance. It includes information on accident precursor analysis, real models vs. models, and probabilistic analysis.

  20. Effect of hemoglobin- and Perflubron-based oxygen carriers on common clinical laboratory tests.

    PubMed

    Ma, Z; Monk, T G; Goodnough, L T; McClellan, A; Gawryl, M; Clark, T; Moreira, P; Keipert, P E; Scott, M G

    1997-09-01

    Polymerized hemoglobin solutions (Hb-based oxygen carriers; HBOCs) and a second-generation perfluorocarbon (PFC) emulsion (Perflubron) are in clinical trials as temporary oxygen carriers ("blood substitutes"). Plasma and serum samples from patients receiving HBOCs look markedly red, whereas those from patients receiving PFC appear to be lipemic. Because hemolysis and lipemia are well-known interferents in many assays, we examined the effects of these substances on clinical chemistry, immunoassay, therapeutic drug, and coagulation tests. HBOC concentrations up to 50 g/L caused essentially no interference for Na, K, Cl, urea, total CO2, P, uric acid, Mg, creatinine, and glucose values determined by the Hitachi 747 or Vitros 750 analyzers (or both) or for immunoassays of lidocaine, N-acetylprocainamide, procainamide, digoxin, phenytoin, quinidine, or theophylline performed on the Abbott AxSym or TDx. Gentamycin and vancomycin assays on the AxSym exhibited a significant positive and negative interference, respectively. Immunoassays for TSH on the Abbott IMx and for troponin I on the Dade Stratus were unaffected by HBOC at this concentration. Tests for total protein, albumin, LDH, AST, ALT, GGT, amylase, lipase, and cholesterol were significantly affected to various extents at different HBOC concentrations on the Hitachi 747 and Vitros 750. The CK-MB assay on the Stratus exhibited a negative interference at 5 g/L HBOC. HBOC interference in coagulation tests was method-dependent: fibrometer-based methods on the BBL Fibro System were free from interference, but optical-based methods on the MLA 1000C exhibited interferences at 20 g/L HBOC. A 1:20 dilution of the PFC-based oxygen carrier (600 g/L) caused no interference on any of these chemistry or immunoassay tests except for amylase and ammonia on the Vitros 750 and plasma iron on the Hitachi 747.

  1. Service Lifetime Estimation of EPDM Rubber Based on Accelerated Aging Tests

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Li, Xiangbo; Xu, Likun; He, Tao

    2017-04-01

    Service lifetime of ethylene propylene diene monomer (EPDM) rubber at room temperature (25 °C) was estimated based on accelerated aging tests. The study followed sealing stress loss on compressed cylinder samples by compression stress relaxation methods. The results showed that the cylinder samples of EPDM can quickly reach physical relaxation equilibrium by using the over-compression method. Non-Arrhenius behavior occurred at the lowest aging temperature. A significant linear relationship was observed between compression set values and normalized stress decay results, and the relationship was not related to the ambient temperature of aging. It was estimated that, for practical applications, sealing stress loss would occur after around 86.8 years at 25 °C. The estimations at 25 °C based on the non-Arrhenius behavior were in agreement with compression set data from storage aging tests in a natural environment.
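
    As a companion to the abstract above, the sketch below illustrates the standard Arrhenius-type time-temperature extrapolation that underlies this kind of accelerated-aging estimate. The temperatures, times-to-criterion and resulting activation energy are invented for illustration and are not the paper's measurements; as the authors note, non-Arrhenius behavior at the lowest aging temperature means a straight-line extrapolation of this kind is only a first approximation.

    ```python
    import numpy as np

    # Illustrative accelerated-aging data (assumed, not the paper's measurements):
    # time (days) to reach a chosen sealing-stress-loss criterion at each temperature.
    temps_c = np.array([120.0, 100.0, 85.0, 70.0])
    time_to_criterion_days = np.array([45.0, 135.0, 330.0, 900.0])

    # Arrhenius model: ln(t) = ln(A) + Ea / (R * T); fit a straight line in 1/T
    R = 8.314                                   # J / (mol K)
    inv_T = 1.0 / (temps_c + 273.15)
    slope, intercept = np.polyfit(inv_T, np.log(time_to_criterion_days), 1)
    print(f"apparent activation energy: {slope * R / 1000:.1f} kJ/mol")

    # Extrapolate to the service temperature (25 C); a first-pass estimate only,
    # since the study reports non-Arrhenius behavior at the lowest aging temperature.
    t_service = np.exp(intercept + slope / (25.0 + 273.15))
    print(f"extrapolated lifetime at 25 C: {t_service / 365.25:.0f} years")
    ```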

  2. Assessment of statistical significance and clinical relevance.

    PubMed

    Kieser, Meinhard; Friede, Tim; Gondan, Matthias

    2013-05-10

    In drug development, it is well accepted that a successful study will demonstrate not only a statistically significant result but also a clinically relevant effect size. Whereas standard hypothesis tests are used to demonstrate the former, it is less clear how the latter should be established. In the first part of this paper, we consider the responder analysis approach and study the performance of locally optimal rank tests when the outcome distribution is a mixture of responder and non-responder distributions. We find that these tests are quite sensitive to their planning assumptions and therefore have no real advantage over standard tests such as the t-test and the Wilcoxon-Mann-Whitney test, which perform well overall and can be recommended for applications. In the second part, we present a new approach to the assessment of clinical relevance based on the so-called relative effect (or probabilistic index) and derive appropriate sample size formulae for the design of studies aiming at demonstrating both a statistically significant and clinically relevant effect. Referring to recent studies in multiple sclerosis, we discuss potential issues in the application of this approach. Copyright © 2012 John Wiley & Sons, Ltd.
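
    The relative effect (probabilistic index) mentioned above has a simple nonparametric estimator. The sketch below is a minimal illustration with invented data, not the authors' sample size formulae: it estimates p = P(X < Y) + 0.5 P(X = Y) from two independent samples, where values near 0.5 mean neither group tends to exceed the other and a clinically relevant effect corresponds to a pre-specified departure from 0.5.

    ```python
    import numpy as np

    def relative_effect(x, y):
        """Estimate the relative effect (probabilistic index)
        p = P(X < Y) + 0.5 * P(X = Y) from two independent samples."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        # Pairwise comparisons: 1 if x_i < y_j, 0.5 if tied, 0 otherwise
        less = (x[:, None] < y[None, :]).mean()
        ties = (x[:, None] == y[None, :]).mean()
        return less + 0.5 * ties

    # Illustrative data (not from the paper): control vs. treatment outcomes
    rng = np.random.default_rng(1)
    control = rng.normal(0.0, 1.0, size=40)
    treatment = rng.normal(0.5, 1.0, size=40)
    print(f"estimated relative effect: {relative_effect(control, treatment):.3f}")
    ```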

  3. Testing Spatial Symmetry Using Contingency Tables Based on Nearest Neighbor Relations

    PubMed Central

    Ceyhan, Elvan

    2014-01-01

    We consider two types of spatial symmetry, namely, symmetry in the mixed or shared nearest neighbor (NN) structures. We use Pielou's and Dixon's symmetry tests, which are defined using contingency tables based on the NN relationships between the data points. We generalize these tests to multiple classes and demonstrate that both the asymptotic and exact versions of Pielou's first type of symmetry test are extremely conservative in rejecting symmetry in the mixed NN structure and hence should be avoided, or only the Monte Carlo randomized version should be used. Under RL, we derive the asymptotic distribution for Dixon's symmetry test and also observe that the usual independence test seems to be appropriate for Pielou's second type of test. Moreover, we apply variants of Fisher's exact test on the shared NN contingency table for Pielou's second test and determine the most appropriate version for our setting. We also consider pairwise and one-versus-rest type tests in post hoc analysis after a significant overall symmetry test. We investigate the asymptotic properties of the tests, prove their consistency under appropriate null hypotheses, and investigate their finite sample performance through extensive Monte Carlo simulations. The methods are illustrated on a real-life ecological data set. PMID:24605061

  4. Linkage to care following community-based mobile HIV testing compared with clinic-based testing in Umlazi Township, Durban, South Africa.

    PubMed

    Bassett, I V; Regan, S; Luthuli, P; Mbonambi, H; Bearnot, B; Pendleton, A; Robine, M; Mukuvisi, D; Thulare, H; Walensky, R P; Freedberg, K A; Losina, E; Mhlongo, B

    2014-07-01

    The aim of the study was to assess HIV prevalence, disease stage and linkage to HIV care following diagnosis at a mobile HIV testing unit, compared with results for clinic-based testing, in a Durban township. This was a prospective cohort study. We enrolled adults presenting for HIV testing at a community-based mobile testing unit (mobile testers) and at an HIV clinic (clinic testers) serving the same area. Testers diagnosed with HIV infection, regardless of testing site, were offered immediate CD4 testing and instructed to retrieve results at the clinic. We assessed rates of linkage to care, defined as CD4 result retrieval within 90 days of HIV diagnosis and/or completion of antiretroviral therapy (ART) literacy training, for mobile vs. clinic testers. From July to November 2011, 6957 subjects were HIV tested (4703 mobile and 2254 clinic); 55% were female. Mobile testers had a lower HIV prevalence than clinic testers (10% vs. 36%, respectively), were younger (median 23 vs. 27 years, respectively) and were more likely to live >5 km or >30 min from the clinic (64% vs. 40%, respectively; all P < 0.001). Mobile testers were less likely to undergo CD4 testing (33% vs. 83%, respectively) but more likely to have higher CD4 counts [median (interquartile range) 416 (287-587) cells/μL vs. 285 (136-482) cells/μL, respectively] than clinic testers (both P < 0.001). Of those who tested HIV positive, 10% of mobile testers linked to care, vs. 72% of clinic testers (P < 0.001). Mobile HIV testing reaches people who are younger, who are more geographically remote, and who have earlier disease compared with clinic-based testing. Fewer mobile testers underwent CD4 testing and linked to HIV care. Enhancing linkage efforts may improve the impact of mobile testing for those with early HIV disease. © 2013 British HIV Association.

  5. A comparative test of phylogenetic diversity indices.

    PubMed

    Schweiger, Oliver; Klotz, Stefan; Durka, Walter; Kühn, Ingolf

    2008-09-01

    Traditional measures of biodiversity, such as species richness, usually treat species as being equal. As this is obviously not the case, measuring diversity in terms of features accumulated over evolutionary history provides additional value to theoretical and applied ecology. Several phylogenetic diversity indices exist, but their behaviour has not yet been tested in a comparative framework. We provide a test of ten commonly used phylogenetic diversity indices based on 40 simulated phylogenies of varying topology. We restrict our analysis to a topologically fully resolved tree without information on branch lengths and species lists with presence-absence data. A total of 38,000 artificial communities varying in species richness covering 5-95% of the phylogenies were created by random resampling. The indices were evaluated based on their ability to meet a priori defined requirements. No index meets all requirements, but three indices turned out to be more suitable than others under particular conditions. Average taxonomic distinctness (AvTD) and intensive quadratic entropy (J) are calculated by averaging and are, therefore, unbiased by species richness while reflecting phylogeny per se well. However, averaging leads to the violation of set monotonicity, which requires that species extinction cannot increase the index. Total taxonomic distinctness (TTD) sums up distinctiveness values for particular species across the community. It is therefore strongly linked to species richness and reflects phylogeny per se weakly but satisfies set monotonicity. We suggest that AvTD and J are best applied to studies that compare spatially or temporally rather independent communities that potentially vary strongly in their phylogenetic composition, i.e. where set monotonicity is less of an issue but independence of species richness is desired. In contrast, we suggest that TTD be used in studies that compare rather interdependent communities where changes occur more gradually by

  6. Combination of blood tests for significant fibrosis and cirrhosis improves the assessment of liver-prognosis in chronic hepatitis C.

    PubMed

    Boursier, J; Brochard, C; Bertrais, S; Michalak, S; Gallois, Y; Fouchard-Hubert, I; Oberti, F; Rousselet, M-C; Calès, P

    2014-07-01

    Recent longitudinal studies have emphasised the prognostic value of noninvasive tests of liver fibrosis and cross-sectional studies have shown their combination significantly improves diagnostic accuracy. To compare the prognostic accuracy of six blood fibrosis tests and liver biopsy, and evaluate if test combination improves the liver-prognosis assessment in chronic hepatitis C (CHC). A total of 373 patients with compensated CHC, liver biopsy (Metavir F) and blood tests targeting fibrosis (APRI, FIB4, Fibrotest, Hepascore, FibroMeter) or cirrhosis (CirrhoMeter) were included. Significant liver-related events (SLRE) and liver-related deaths were recorded during follow-up (started the day of biopsy). During the median follow-up of 9.5 years (3508 person-years), 47 patients had a SLRE and 23 patients died from liver-related causes. For the prediction of first SLRE, most blood tests allowed higher prognostication than Metavir F [Harrell C-index: 0.811 (95% CI: 0.751-0.868)] with a significant increase for FIB4: 0.879 [0.832-0.919] (P = 0.002), FibroMeter: 0.870 [0.812-0.922] (P = 0.005) and APRI: 0.861 [0.813-0.902] (P = 0.039). Multivariate analysis identified FibroMeter, CirrhoMeter and sustained viral response as independent predictors of first SLRE. CirrhoMeter was the only independent predictor of liver-related death. The combination of FibroMeter and CirrhoMeter classifications into a new FM/CM classification improved the liver-prognosis assessment compared to Metavir F staging or single tests by identifying five subgroups of patients with significantly different prognoses. Some blood fibrosis tests are more accurate than liver biopsy for determining liver prognosis in CHC. A new combination of two complementary blood tests, one targeted for fibrosis and the other for cirrhosis, optimises assessment of liver-prognosis. © 2014 John Wiley & Sons Ltd.

  7. Intergenerational predictors of birth weight in the Philippines: correlations with mother's and father's birth weight and test of maternal constraint.

    PubMed

    Kuzawa, Christopher W; Eisenberg, Dan T A

    2012-01-01

    Birth weight (BW) predicts many health outcomes, but the relative contributions of genes and environmental factors to BW remain uncertain. Some studies report stronger mother-offspring than father-offspring BW correlations, with attenuated father-offspring BW correlations when the mother is stunted. These findings have been interpreted as evidence that maternal genetic or environmental factors play an important role in determining birth size, with small maternal size constraining paternal genetic contributions to offspring BW. Here we evaluate mother-offspring and father-offspring birth weight (BW) associations and evaluate whether maternal stunting constrains genetic contributions to offspring birth size. Data include BW of offspring (n = 1,101) born to female members (n = 382) and spouses of male members (n = 275) of a birth cohort (born 1983-84) in Metropolitan Cebu, Philippines. Regression was used to relate parental and offspring BW adjusting for confounders. Resampling testing was used to evaluate whether false paternity could explain any evidence for excess matrilineal inheritance. In a pooled model adjusting for maternal height and confounders, parental BW was a borderline-significantly stronger predictor of offspring BW in mothers compared to fathers (sex of parent interaction p = 0.068). In separate multivariate models, each kg in mother's and father's BW predicted a 271±53 g (p<0.00001) and 132±55 g (p = 0.017) increase in offspring BW, respectively. Resampling statistics suggested that false paternity rates of >25% and likely 50% would be needed to explain these differences. There was no interaction between maternal stature and maternal BW (interaction p = 0.520) or paternal BW (p = 0.545). Each kg change in mother's BW predicted twice the change in offspring BW as predicted by a change in father's BW, consistent with an intergenerational maternal effect on offspring BW. Evidence for excess matrilineal BW heritability at
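
    To make the resampling argument above concrete, the sketch below shows one way such a false-paternity check can be set up: replace a fraction of recorded fathers with unrelated men and see how much the estimated father-offspring slope attenuates. The data, slope and distributions are synthetic assumptions for illustration and are not the Cebu cohort values.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic illustration (not the Cebu data): assume a true father-offspring
    # slope beta_true, then ask how much random false paternity attenuates the
    # slope estimated from the recorded (possibly misattributed) fathers.
    n = 1000
    beta_true = 0.27          # hypothetical g offspring BW per g paternal BW
    father_bw = rng.normal(3.0, 0.45, n)
    offspring_bw = 3.0 + beta_true * (father_bw - 3.0) + rng.normal(0.0, 0.40, n)

    def mean_father_slope(false_paternity_rate, n_rep=500):
        """Average estimated slope when a fraction of recorded fathers are
        replaced by unrelated men drawn from the same BW distribution."""
        slopes = []
        for _ in range(n_rep):
            recorded = father_bw.copy()
            swap = rng.random(n) < false_paternity_rate
            recorded[swap] = rng.normal(3.0, 0.45, swap.sum())
            slopes.append(np.polyfit(recorded, offspring_bw, 1)[0])
        return float(np.mean(slopes))

    for rate in (0.0, 0.25, 0.50):
        print(f"false paternity {rate:.0%}: mean estimated slope "
              f"{mean_father_slope(rate):.3f}")
    # The slope shrinks roughly in proportion to (1 - rate), which is why very
    # high false-paternity rates would be needed to halve the father-offspring
    # association relative to the mother-offspring one.
    ```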

  8. The effect of hearing impairment in older people on the spouse: development and psychometric testing of the significant other scale for hearing disability (SOS-HEAR).

    PubMed

    Scarinci, Nerina; Worrall, Linda; Hickson, Louise

    2009-01-01

    The effects of hearing impairment on the person with the impairment and on their significant others are pervasive and affect the quality of life for all involved. The effect of hearing impairment on significant others is known as a third-party disability. This study aimed to develop and psychometrically test a scale to measure the third-party disability experienced by spouses of older people with hearing impairment. The Significant Other Scale for Hearing Disability (SOS-HEAR) was based on results of a previous qualitative study investigating the effect of hearing impairment on a spouse's everyday life. Psychometric testing with 100 spouses was conducted using item analysis, Cronbach's alpha, factor analysis, and test-retest reliability. Principal components analysis identified six key underlying factors. A combined set of 27 items was found to be reliable (alpha = 0.94), with weighted kappa for items ranging from fair to very good. The SOS-HEAR is a brief, easy to administer instrument that has evidence of reliability and validity. The SOS-HEAR could serve as a means of identifying spouses of older people with hearing impairment in need of intervention, directed towards either the couple or the spouse alone.

  9. Optimal Test Design with Rule-Based Item Generation

    ERIC Educational Resources Information Center

    Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W.

    2013-01-01

    Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…

  10. Exact test-based approach for equivalence test with parameter margin.

    PubMed

    Cassie Dong, Xiaoyu; Bian, Yuanyuan; Tsong, Yi; Wang, Tianhua

    2017-01-01

    The equivalence test has a wide range of applications in pharmaceutical statistics where we need to test for similarity between two groups. In recent years, the equivalence test has been used in assessing the analytical similarity between a proposed biosimilar product and a reference product. More specifically, the mean values of the two products for a given quality attribute are compared against an equivalence margin of the form ±f × σ_R, which is a function of the reference variability σ_R. In practice, this margin is unknown and is estimated from the sample as ±f × S_R. If we use this estimated margin with the classic t-test statistic for the equivalence test of the means, both the Type I and Type II error rates may be inflated. To resolve this issue, we develop an exact-based test method and compare this method with other proposed methods, such as the Wald test, the constrained Wald test, and the Generalized Pivotal Quantity (GPQ), in terms of Type I error rate and power. Application of these methods to data analysis is also provided in this paper. This work focuses on the development and discussion of the general statistical methodology and is not limited to the application of analytical similarity.
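
    The sketch below illustrates the naive version of the procedure discussed above: a TOST-style equivalence test of means against an estimated margin ±f × S_R. It is a minimal illustration with assumed values of f, alpha and the data, not the authors' exact-based method; plugging the estimated margin into the classic t statistics is exactly the step the paper shows can distort the error rates.

    ```python
    import numpy as np
    from scipy import stats

    def naive_equivalence_test(test, ref, f=1.5, alpha=0.05):
        """TOST-style equivalence test of the means against an *estimated*
        margin +/- f * S_R (the naive plug-in approach discussed above;
        f and alpha are illustrative choices)."""
        test, ref = np.asarray(test, float), np.asarray(ref, float)
        n_t, n_r = len(test), len(ref)
        margin = f * ref.std(ddof=1)                     # estimated margin +/- f * S_R
        diff = test.mean() - ref.mean()
        sp2 = ((n_t - 1) * test.var(ddof=1)
               + (n_r - 1) * ref.var(ddof=1)) / (n_t + n_r - 2)
        se = np.sqrt(sp2 * (1.0 / n_t + 1.0 / n_r))
        df = n_t + n_r - 2
        p_lower = stats.t.sf((diff + margin) / se, df)   # H0: diff <= -margin
        p_upper = stats.t.cdf((diff - margin) / se, df)  # H0: diff >= +margin
        return max(p_lower, p_upper) < alpha             # equivalence if both reject

    rng = np.random.default_rng(7)
    reference = rng.normal(100.0, 4.0, size=10)          # reference product lots
    proposed = rng.normal(101.0, 4.0, size=10)           # proposed biosimilar lots
    print("equivalence concluded:", naive_equivalence_test(proposed, reference))
    ```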

  11. Clinical Significance of an HPV DNA Chip Test with Emphasis on HPV-16 and/or HPV-18 Detection in Korean Gynecological Patients.

    PubMed

    Yeo, Min-Kyung; Lee, Ahwon; Hur, Soo Young; Park, Jong Sup

    2016-07-01

    Human papillomavirus (HPV) is a major risk factor for cervical cancer. We evaluated the clinical significance of the HPV DNA chip genotyping assay (MyHPV chip, Mygene Co.) compared with the Hybrid Capture 2 (HC2) chemiluminescent nucleic acid hybridization kit (Digene Corp.) in 867 patients. The concordance rate between the MyHPV chip and HC2 was 79.4% (kappa coefficient, κ = 0.55). The sensitivity and specificity of both HPV tests were very similar (approximately 85% and 50%, respectively). The addition of HPV result (either MyHPV chip or HC2) to cytology improved the sensitivity (95%, each) but reduced the specificity (approximately 30%, each) compared with the HPV test or cytology alone. Based on the MyHPV chip results, the odds ratio (OR) for ≥ high-grade squamous intraepithelial lesions (HSILs) was 9.9 in the HPV-16/18 (+) group and 3.7 in the non-16/18 high-risk (HR)-HPV (+) group. Based on the HC2 results, the OR for ≥ HSILs was 5.9 in the HR-HPV (+) group. When considering only patients with cytological diagnoses of "negative for intraepithelial lesion or malignancy" and "atypical squamous cell or atypical glandular cell," based on the MyHPV chip results, the ORs for ≥ HSILs were 6.8 and 11.7, respectively, in the HPV-16/18 (+) group. The sensitivity and specificity of the MyHPV chip test are similar to the HC2. Detecting HPV-16/18 with an HPV DNA chip test, which is commonly used in many Asian countries, is useful in assessing the risk of high-grade cervical lesions.

  12. Black-Box System Testing of Real-Time Embedded Systems Using Random and Search-Based Testing

    NASA Astrophysics Data System (ADS)

    Arcuri, Andrea; Iqbal, Muhammad Zohaib; Briand, Lionel

    Testing real-time embedded systems (RTES) is in many ways challenging. Thousands of test cases can be potentially executed on an industrial RTES. Given the magnitude of testing at the system level, only a fully automated approach can really scale up to test industrial RTES. In this paper we take a black-box approach and model the RTES environment using the UML/MARTE international standard. Our main motivation is to provide a more practical approach to the model-based testing of RTES by allowing system testers, who are often not familiar with the system design but know the application domain well enough, to model the environment to enable test automation. Environment models can support the automation of three tasks: the code generation of an environment simulator, the selection of test cases, and the evaluation of their expected results (oracles). In this paper, we focus on the second task (test case selection) and investigate three test automation strategies using inputs from UML/MARTE environment models: Random Testing (baseline), Adaptive Random Testing, and Search-Based Testing (using Genetic Algorithms). Based on one industrial case study and three artificial systems, we show how, in general, no technique is better than the others. Which test selection technique to use is determined by the failure rate (testing stage) and the execution time of test cases. Finally, we propose a practical process to combine the use of all three test strategies.

  13. Worldwide Research, Worldwide Participation: Web-Based Test Logger

    NASA Technical Reports Server (NTRS)

    Clark, David A.

    1998-01-01

    Thanks to the World Wide Web, a new paradigm has been born. ESCORT (steady state data system) facilities can now be configured to use a Web-based test logger, enabling worldwide participation in tests. NASA Lewis Research Center's new Web-based test logger for ESCORT automatically writes selected test and facility parameters to a browser and allows researchers to insert comments. All data can be viewed in real time via Internet connections, so anyone with a Web browser and the correct URL (universal resource locator, or Web address) can interactively participate. As the test proceeds and ESCORT data are taken, Web browsers connected to the logger are updated automatically. The use of this logger has demonstrated several benefits. First, researchers are free from manual data entry and are able to focus more on the tests. Second, research logs can be printed in report format immediately after (or during) a test. And finally, all test information is readily available to an international public.

  14. Methodology for testing and validating knowledge bases

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment has been implemented on a SUN workstation.

  15. Does the Test Work? Evaluating a Web-Based Language Placement Test

    ERIC Educational Resources Information Center

    Long, Avizia Y.; Shin, Sun-Young; Geeslin, Kimberly; Willis, Erik W.

    2018-01-01

    In response to the need for examples of test validation from which everyday language programs can benefit, this paper reports on a study that used Bachman's (2005) assessment use argument (AUA) framework to examine evidence to support claims made about the intended interpretations and uses of scores based on a new web-based Spanish language…

  16. [Investigation of color vision in acute unilateral optic neuritis using a web-based color vision test].

    PubMed

    Kuchenbecker, J; Blum, M; Paul, F

    2016-03-01

    In acute unilateral optic neuritis (ON) color vision defects combined with a decrease in visual acuity and contrast sensitivity frequently occur. This study investigated whether a web-based color vision test is a reliable detector of acquired color vision defects in ON and, if so, which charts are particularly suitable. In 12 patients with acute unilateral ON, a web-based color vision test ( www.farbsehtest.de ) with 25 color plates (16 Velhagen/Broschmann and 9 Ishihara color plates) was performed. For each patient the affected eye was tested first and then the unaffected eye. The mean best-corrected distance visual acuity (BCDVA) in the ON eye was 0.36 ± 0.20 and 1.0 ± 0.1 in the contralateral eye. The number of incorrectly read plates correlated with the visual acuity. For the ON eye a total of 134 plates were correctly identified and 166 plates were incorrectly identified, while for the disease-free fellow eye, 276 plates were correctly identified and 24 plates were incorrectly identified. Both of the blue/yellow plates were identified correctly 14 times and incorrectly 10 times using the ON eye and exclusively correctly (24 times) using the fellow eye. The Velhagen/Broschmann plates were incorrectly identified significantly more frequently in comparison with the Ishihara plates. In 4 out of 16 Velhagen/Broschmann plates and 5 out of 9 Ishihara plates, no statistically significant differences between the ON eye and the fellow eye could be detected. The number of incorrectly identified plates correlated with a decrease in visual acuity. Red/green and blue/yellow plates were incorrectly identified significantly more frequently with the ON eye, while the Velhagen/Broschmann color plates were incorrectly identified significantly more frequently than the Ishihara color plates. Thus, under defined test conditions the web-based color vision test can also be used to detect acquired color vision defects, such as those caused by ON. Optimization of the test by

  17. Evaluating Diagnostic Accuracy of Noninvasive Tests in Assessment of Significant Liver Fibrosis in Chronic Hepatitis C Egyptian Patients.

    PubMed

    Omran, Dalia; Zayed, Rania A; Nabeel, Mohammed M; Mobarak, Lamiaa; Zakaria, Zeinab; Farid, Azza; Hassany, Mohamed; Saif, Sameh; Mostafa, Muhammad; Saad, Omar Khalid; Yosry, Ayman

    2018-05-01

    Stage of liver fibrosis is critical for treatment decision and prediction of outcomes in chronic hepatitis C (CHC) patients. We evaluated the diagnostic accuracy of transient elastography (TE)-FibroScan and noninvasive serum markers tests in the assessment of liver fibrosis in CHC patients, in reference to liver biopsy. One-hundred treatment-naive CHC patients were subjected to liver biopsy, TE-FibroScan, and eight serum biomarkers tests; AST/ALT ratio (AAR), AST to platelet ratio index (APRI), age-platelet index (AP index), fibrosis quotient (FibroQ), fibrosis 4 index (FIB-4), cirrhosis discriminant score (CDS), King score, and Goteborg University Cirrhosis Index (GUCI). Receiver operating characteristic curves were constructed to compare the diagnostic accuracy of these noninvasive methods in predicting significant fibrosis in CHC patients. TE-FibroScan predicted significant fibrosis at cutoff value 8.5 kPa with area under the receiver operating characteristic (AUROC) 0.90, sensitivity 83%, specificity 91.5%, positive predictive value (PPV) 91.2%, and negative predictive value (NPV) 84.4%. Serum biomarkers tests showed that AP index and FibroQ had the highest diagnostic accuracy in predicting significant liver fibrosis at cutoff 4.5 and 2.7, AUROC was 0.8 and 0.8 with sensitivity 73.6% and 73.6%, specificity 70.2% and 68.1%, PPV 71.1% and 69.8%, and NPV 72.9% and 72.3%, respectively. Combined AP index and FibroQ had AUROC 0.83 with sensitivity 73.6%, specificity 80.9%, PPV 79.6%, and NPV 75.7% for predicting significant liver fibrosis. APRI, FIB-4, CDS, King score, and GUCI had intermediate accuracy in predicting significant liver fibrosis with AUROC 0.68, 0.78, 0.74, 0.74, and 0.67, respectively, while AAR had low accuracy in predicting significant liver fibrosis. TE-FibroScan is the most accurate noninvasive alternative to liver biopsy. AP index and FibroQ, either as individual tests or combined, have good accuracy in predicting significant liver fibrosis

  18. Test-Based Accountability: The Promise and the Perils

    ERIC Educational Resources Information Center

    Loveless, Tom

    2005-01-01

    In the early 1990s, states began establishing standards in academic subjects backed by test-based accountability systems to see that the standards were met. Incentives were implemented for schools and students based on pupil test scores. These early accountability systems paved the way for passage of landmark federal legislation, the No Child Left…

  19. Automated Source-Code-Based Testing of Object-Oriented Software

    NASA Astrophysics Data System (ADS)

    Gerlich, Ralf; Gerlich, Rainer; Dietrich, Carsten

    2014-08-01

    With the advent of languages such as C++ and Java in mission- and safety-critical space on-board software, new challenges for testing and specifically automated testing arise. In this paper we discuss some of these challenges, consequences and solutions based on an experiment in automated source- code-based testing for C++.

  20. Test-Retest Intervisit Variability of Functional and Structural Parameters in X-Linked Retinoschisis.

    PubMed

    Jeffrey, Brett G; Cukras, Catherine A; Vitale, Susan; Turriff, Amy; Bowles, Kristin; Sieving, Paul A

    2014-09-01

    To examine the variability of four outcome measures that could be used to address safety and efficacy in therapeutic trials with X-linked juvenile retinoschisis. Seven men with confirmed mutations in the RS1 gene were evaluated over four visits spanning 6 months. Assessments included visual acuity, full-field electroretinograms (ERG), microperimetric macular sensitivity, and retinal thickness measured by optical coherence tomography (OCT). Eyes were separated into Better or Worse Eye groups based on acuity at baseline. Repeatability coefficients were calculated for each parameter and jackknife resampling used to derive 95% confidence intervals (CIs). The threshold for statistically significant change in visual acuity ranged from three to eight letters. For ERG a-wave, an amplitude reduction greater than 56% would be considered significant. For other parameters, variabilities were lower in the Worse Eye group, likely a result of floor effects due to collapse of the schisis pockets and/or retinal atrophy. The criteria for significant change (Better/Worse Eye) for three important parameters were: ERG b/a-wave ratio (0.44/0.23), point wise sensitivity (10.4/7.0 dB), and central retinal thickness (31%/18%). The 95% CI range for visual acuity, ERG, retinal sensitivity, and central retinal thickness relative to baseline are described for this cohort of participants with X-linked juvenile retinoschisis (XLRS). A quantitative understanding of the variability of outcome measures is vital to establishing the safety and efficacy limits for therapeutic trials of XLRS patients.
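
    The jackknife step mentioned above can be sketched as follows for a repeatability coefficient computed from repeated visits. The repeatability definition (1.96 × sqrt(2) × within-subject SD), the data and the leave-one-subject-out CI construction are illustrative assumptions and may differ from the paper's exact formulae.

    ```python
    import numpy as np
    from scipy.stats import t

    def repeatability(visits):
        """Repeatability coefficient ~ 1.96 * sqrt(2) * within-subject SD,
        from a (subjects x visits) array (one common definition; the paper's
        exact formula may differ)."""
        within_var = visits.var(axis=1, ddof=1).mean()
        return 1.96 * np.sqrt(2.0 * within_var)

    def jackknife_ci(visits, level=0.95):
        """Leave-one-subject-out jackknife CI for the repeatability coefficient."""
        n = visits.shape[0]
        theta_hat = repeatability(visits)
        loo = np.array([repeatability(np.delete(visits, i, axis=0)) for i in range(n)])
        pseudo = n * theta_hat - (n - 1) * loo            # jackknife pseudo-values
        se = pseudo.std(ddof=1) / np.sqrt(n)
        half = t.ppf(0.5 + level / 2, df=n - 1) * se
        return theta_hat, (theta_hat - half, theta_hat + half)

    # Illustrative data: 7 participants, 4 visits of a letter-score acuity measure
    rng = np.random.default_rng(3)
    subject_means = rng.normal(55.0, 10.0, size=(7, 1))
    scores = subject_means + rng.normal(0.0, 3.0, size=(7, 4))
    estimate, ci = jackknife_ci(scores)
    print(f"repeatability: {estimate:.1f} letters, 95% CI ({ci[0]:.1f}, {ci[1]:.1f})")
    ```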

  1. Building test data from real outbreaks for evaluating detection algorithms.

    PubMed

    Texier, Gaetan; Jackson, Michael L; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve

    2017-01-01

    Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hasting Random Walk, Metropolis-Hasting Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals.
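
    As an illustration of one of the resampling steps listed above, the sketch below rescales a historical daily-count curve to a target duration (a homothetic transformation) and then draws the requested number of cases by inverse transform sampling (ITSM). The toy historical curve and the rescaling details are assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def simulate_outbreak(historical_counts, target_days, target_cases, seed=0):
        """Simulate a tailored outbreak signal by (1) homothetically rescaling a
        historical daily-count curve to a new duration, then (2) drawing the
        requested number of cases by inverse transform sampling (a sketch of the
        ITSM variant described above; the rescaling details are assumptions)."""
        rng = np.random.default_rng(seed)
        hist = np.asarray(historical_counts, dtype=float)
        # Homothetic stretch of the epidemic curve onto the target duration
        old_t = np.linspace(0.0, 1.0, len(hist))
        new_t = np.linspace(0.0, 1.0, target_days)
        shape = np.interp(new_t, old_t, hist)
        probs = shape / shape.sum()
        # Inverse transform sampling: draw each case's day from the empirical CDF
        cdf = np.cumsum(probs)
        days = np.searchsorted(cdf, rng.random(target_cases))
        return np.bincount(days, minlength=target_days)

    historical = [0, 1, 3, 8, 15, 22, 18, 10, 5, 2, 1, 0]   # toy historical outbreak
    print(simulate_outbreak(historical, target_days=20, target_cases=120))
    ```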

  2. Building test data from real outbreaks for evaluating detection algorithms

    PubMed Central

    Texier, Gaetan; Jackson, Michael L.; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve

    2017-01-01

    Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method—ITSM, Metropolis-Hasting Random Walk, Metropolis-Hasting Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals. PMID

  3. GPS Device Testing Based on User Performance Metrics

    DOT National Transportation Integrated Search

    2015-10-02

    1. Rationale for a Test Program Based on User Performance Metrics ; 2. Roberson and Associates Test Program ; 3. Status of, and Revisions to, the Roberson and Associates Test Program ; 4. Comparison of Roberson and DOT/Volpe Programs

  4. Significance and value of the Widal test in the diagnosis of typhoid fever in an endemic area.

    PubMed Central

    Pang, T; Puthucheary, S D

    1983-01-01

    The diagnostic value of the Widal test was assessed in an endemic area. The test was done on 300 normal individuals, 297 non-typhoidal fevers and 275 bacteriologically proven cases of typhoid. Of 300 normal individuals, 2% had an H agglutinin titre of 1/160 and 5% had an O agglutinin titre of 1/160. On the basis of these criteria a significant H and/or O agglutinin titre of 1/320 or more was observed in 93-97% of typhoid cases and in only 3% of patients with non-typhoidal fever. Of the sera from typhoid cases which gave a significant Widal reaction, the majority (79.9%) showed increases in both H and O agglutinins and 51 of 234 (21.8%) of these sera were collected in the first week of illness. The significance and implications of these findings are discussed. PMID:6833514

  5. Testing the performance of technical trading rules in the Chinese markets based on superior predictive test

    NASA Astrophysics Data System (ADS)

    Wang, Shan; Jiang, Zhi-Qiang; Li, Sai-Ping; Zhou, Wei-Xing

    2015-12-01

    Technical trading rules have a long history of being used by practitioners in financial markets. The profitability and efficiency of technical trading rules remain controversial. In this paper, we test the performance of more than seven thousand traditional technical trading rules on the Shanghai Securities Composite Index (SSCI) from May 21, 1992 through June 30, 2013 and China Securities Index 300 (CSI 300) from April 8, 2005 through June 30, 2013 to check whether an effective trading strategy could be found by using the performance measurements based on the return and Sharpe ratio. To correct for the influence of the data-snooping effect, we adopt the Superior Predictive Ability test to evaluate if there exists a trading rule that can significantly outperform the benchmark. The result shows that for SSCI, technical trading rules offer significant profitability, while for CSI 300, this ability is lost. We further partition the SSCI into two sub-series and find that the efficiency of technical trading in sub-series, which have exactly the same spanning period as that of CSI 300, is severely weakened. By testing the trading rules on both indexes with a five-year moving window, we find that during the financial bubble from 2005 to 2007, the effectiveness of technical trading rules is greatly improved. This is consistent with the predictive ability of technical trading rules which appears when the market is less efficient.
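
    The sketch below gives a simplified flavour of this kind of evaluation: one moving-average crossover rule is compared with buy-and-hold on synthetic prices, and a circular block bootstrap of the centred excess returns yields a one-sided p-value for the rule's mean excess return. This is a stand-in under stated assumptions, not the Superior Predictive Ability test itself, which additionally corrects for data snooping across thousands of rules.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic daily prices for illustration only (not SSCI or CSI 300 data)
    daily_ret = rng.normal(0.0003, 0.015, 2500)
    prices = 100 * np.exp(np.cumsum(daily_ret))

    def ma_rule_excess_returns(prices, short=5, long=50):
        """Daily excess log-return of a moving-average crossover rule over
        buy-and-hold: hold the index only when the short MA is above the long MA."""
        log_ret = np.diff(np.log(prices))
        ma_s = np.convolve(prices, np.ones(short) / short, mode="valid")
        ma_l = np.convolve(prices, np.ones(long) / long, mode="valid")
        signal = (ma_s[-len(ma_l):] > ma_l).astype(float)[:-1]   # yesterday's signal
        market = log_ret[-len(signal):]                          # today's return
        return signal * market - market                          # rule minus buy-and-hold

    def block_bootstrap_pvalue(excess, n_boot=2000, block=20, seed=1):
        """One-sided p-value for H0: mean excess return <= 0, via a circular block
        bootstrap of the centred series (a single-rule check, not the full SPA
        correction across thousands of rules)."""
        rng = np.random.default_rng(seed)
        excess = np.asarray(excess, dtype=float)
        obs = excess.mean()
        centred = excess - obs                                   # impose the null
        n = len(centred)
        boot_means = np.empty(n_boot)
        for b in range(n_boot):
            starts = rng.integers(0, n, size=int(np.ceil(n / block)))
            idx = (starts[:, None] + np.arange(block)[None, :]).ravel()[:n] % n
            boot_means[b] = centred[idx].mean()
        return float((boot_means >= obs).mean())

    excess = ma_rule_excess_returns(prices)
    print(f"mean daily excess return: {excess.mean():.5f}")
    print(f"bootstrap p-value: {block_bootstrap_pvalue(excess):.3f}")
    ```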

  6. Using the Coefficient of Determination R² to Test the Significance of Multiple Linear Regression

    ERIC Educational Resources Information Center

    Quinino, Roberto C.; Reis, Edna A.; Bessegato, Lupercio F.

    2013-01-01

    This article proposes the use of the coefficient of determination as a statistic for hypothesis testing in multiple linear regression based on distributions acquired by beta sampling. (Contains 3 figures.)
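
    A way to see why R² can serve as the test statistic: under the usual normal-error null hypothesis that all slope coefficients are zero, R² follows a Beta(k/2, (n - k - 1)/2) distribution for a model with k predictors and an intercept, which is equivalent to the standard overall F test. The sketch below (with invented data) computes the p-value both ways; the article's beta-sampling construction may differ in detail.

    ```python
    import numpy as np
    from scipy import stats

    def r2_pvalue(y, X):
        """P-value for H0: all slope coefficients are zero, computed directly
        from R^2 via its null Beta(k/2, (n-k-1)/2) distribution, and via the
        equivalent overall F test (k = number of predictors, intercept included)."""
        n, k = X.shape
        Xd = np.column_stack([np.ones(n), X])
        beta_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)
        resid = y - Xd @ beta_hat
        r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        p_beta = stats.beta.sf(r2, k / 2, (n - k - 1) / 2)
        f_stat = (r2 / k) / ((1 - r2) / (n - k - 1))
        p_f = stats.f.sf(f_stat, k, n - k - 1)
        return r2, p_beta, p_f

    rng = np.random.default_rng(5)
    X = rng.normal(size=(60, 3))
    y = 0.4 * X[:, 0] + rng.normal(size=60)     # only the first predictor matters
    print(r2_pvalue(y, X))                       # the two p-values agree
    ```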

  7. A Cost-Effectiveness Analysis of a Home-Based HIV Counselling and Testing Intervention versus the Standard (Facility Based) HIV Testing Strategy in Rural South Africa.

    PubMed

    Tabana, Hanani; Nkonki, Lungiswa; Hongoro, Charles; Doherty, Tanya; Ekström, Anna Mia; Naik, Reshma; Zembe-Mkabile, Wanga; Jackson, Debra; Thorson, Anna

    2015-01-01

    There is growing evidence concerning the acceptability and feasibility of home-based HIV testing. However, less is known about the cost-effectiveness of the approach yet it is a critical component to guide decisions about scaling up access to HIV testing. This study examined the cost-effectiveness of a home-based HIV testing intervention in rural South Africa. Two alternatives: clinic and home-based HIV counselling and testing were compared. Costs were analysed from a provider's perspective for the period of January to December 2010. The outcome, HIV counselling and testing (HCT) uptake was obtained from the Good Start home-based HIV counselling and testing (HBHCT) cluster randomised control trial undertaken in KwaZulu-Natal province. Cost-effectiveness was estimated for a target population of 22,099 versus 23,864 people for intervention and control communities respectively. Average costs were calculated as the cost per client tested, while cost-effectiveness was calculated as the cost per additional client tested through HBHCT. Based on effectiveness of 37% in the intervention (HBHCT) arm compared to 16% in control arm, home based testing costs US$29 compared to US$38 per person for clinic HCT. The incremental cost effectiveness per client tested using HBHCT was $19. HBHCT was less costly and more effective. Home-based HCT could present a cost-effective alternative for rural 'hard to reach' populations depending on affordability by the health system, and should be considered as part of community outreach programs.

  8. Gender Differences in Computer-Administered Versus Paper-Based Tests.

    ERIC Educational Resources Information Center

    Wallace, Patricia; Clariana, Roy B.

    2005-01-01

    For many reasons, paper-based tests of course content are shifting to computer-based administration. This investigation examined student performance on two separate tests delivered either by computer or paper with the first test near the start of the course and the second at the end of the course. Undergraduate-level freshman business majors…

  9. Significance of specificity of Tinetti B-POMA test and fall risk factor in third age of life.

    PubMed

    Avdić, Dijana; Pecar, Dzemal

    2006-02-01

    In the third age, psychophysical abilities gradually decline, as does the capacity to adapt to endogenous and exogenous burdens. In 1987, Harada et al. (1) reported that 9.5 million people in the USA had difficulty carrying out daily activities, and that 59% of them (5.6 million) were older than 65 years. The study encompassed 77 respondents of both sexes, with a mean age of 71.73 +/- 5.63 years (range 65-90 years), chosen by random sampling. Each patient was interviewed in his or her own home and was familiarized with the methodology and aims of the questionnaire. Women made up 64.94% of respondents (50 patients) and men 35.06% (27 patients). Risk factor scores obtained from the questionnaire and B-POMA test scores differed statistically significantly between men and women, as well as between patients who had fallen and those who never had. There were no statistically significant differences with respect to living arrangement (alone or in a community). Average B-POMA test results in this study were statistically significantly higher in men and in patients who did not report falling, with no statistically significant difference by living arrangement. With respect to the percentage of the maximum number of positive answers to particular questions, by gender, living arrangement and reported falls, there were no statistically significant differences between the B-POMA test value and the risk factor score (the questionnaire).

  10. Application of a Physics-Based Stabilization Criterion to Flight System Thermal Testing

    NASA Technical Reports Server (NTRS)

    Baker, Charles; Garrison, Matthew; Cottingham, Christine; Peabody, Sharon

    2010-01-01

    The theory shown here can provide thermal stability criteria based on physics and a goal steady-state error rather than on an arbitrary "X% Q/mC_P" method. The ability to accurately predict steady-state temperatures well before thermal balance is reached could be very useful during testing. This holds true for systems where components are changing temperature at different rates, although it works better for the components closest to the sink. However, the application to these test cases shows some significant limitations: this theory quickly falls apart if the thermal control system in question is tightly coupled to a large mass not accounted for in the calculations, so it is more useful in subsystem-level testing than in full orbiter tests. Tight couplings to a fluctuating sink cause noise in the steady-state temperature predictions.

  11. Testing a computer-based ostomy care training resource for staff nurses.

    PubMed

    Bales, Isabel

    2010-05-01

    Fragmented teaching and ostomy care provided by nonspecialized clinicians unfamiliar with state-of-the-art care and products have been identified as problems in teaching ostomy care to the new ostomate. After conducting a literature review of theories and concepts related to the impact of nurse behaviors and confidence on ostomy care, the author developed a computer-based learning resource and assessed its effect on staff nurse confidence. Of 189 staff nurses with a minimum of 1 year of acute-care experience employed in the acute care, emergency, and rehabilitation departments of an acute care facility in the Midwestern US, 103 agreed to participate and returned completed pre- and post-tests, each comprising the same eight statements about providing ostomy care. F and P values were computed for differences between pre- and post-test scores. Based on a scale where 1 = totally disagree and 5 = totally agree with the statement, baseline confidence and perceived mean knowledge scores averaged 3.8, and after viewing the resource program, post-test mean scores averaged 4.51, a statistically significant improvement (P = 0.000). The largest difference between pre- and post-test scores involved feeling confident in having the resources to learn ostomy skills independently. The availability of an electronic ostomy care resource was rated highly in both pre- and post-testing. Studies to assess the effects of increased confidence and knowledge on the quality and provision of care are warranted.

  12. Performance of a Cartridge-Based Assay for Detection of Clinically Significant Human Papillomavirus (HPV) Infection: Lessons from VALGENT (Validation of HPV Genotyping Tests)

    PubMed Central

    Geraets, Daan; Cuzick, Jack; Cadman, Louise; Moore, Catherine; Vanden Broeck, Davy; Padalko, Elisaveta; Quint, Wim; Arbyn, Marc

    2016-01-01

    The Validation of Human Papillomavirus (HPV) Genotyping Tests (VALGENT) studies offer an opportunity to clinically validate HPV assays for use in primary screening for cervical cancer and also provide a framework for the comparison of analytical and type-specific performance. Through VALGENT, we assessed the performance of the cartridge-based Xpert HPV assay (Xpert HPV), which detects 14 high-risk (HR) types and resolves HPV16 and HPV18/45. Samples from women attending the United Kingdom cervical screening program enriched with cytologically abnormal samples were collated. All had been previously tested by a clinically validated standard comparator test (SCT), the GP5+/6+ enzyme immunoassay (EIA). The clinical sensitivity and specificity of the Xpert HPV for the detection of cervical intraepithelial neoplasia grade 2 or higher (CIN2+) and CIN3+ relative to those of the SCT were assessed as were the inter- and intralaboratory reproducibilities according to international criteria for test validation. Type concordance for HPV16 and HPV18/45 between the Xpert HPV and the SCT was also analyzed. The Xpert HPV detected 94% of CIN2+ and 98% of CIN3+ lesions among all screened women and 90% of CIN2+ and 96% of CIN3+ lesions in women 30 years and older. The specificity for CIN1 or less (≤CIN1) was 83% (95% confidence interval [CI], 80 to 85%) in all women and 88% (95% CI, 86 to 91%) in women 30 years and older. Inter- and intralaboratory agreements for the Xpert HPV were 98% and 97%, respectively. The kappa agreements for HPV16 and HPV18/45 between the clinically validated reference test (GP5+/6+ LMNX) and the Xpert HPV were 0.92 and 0.91, respectively. The clinical performance and reproducibility of the Xpert HPV are comparable to those of well-established HPV assays and fulfill the criteria for use in primary cervical cancer screening. PMID:27385707

  13. The Impact of Settable Test Item Exposure Control Interface Format on Postsecondary Business Student Test Performance

    ERIC Educational Resources Information Center

    Truell, Allen D.; Zhao, Jensen J.; Alexander, Melody W.

    2005-01-01

    The purposes of this study were to determine if there is a significant difference in postsecondary business student scores and test completion time based on settable test item exposure control interface format, and to determine if there is a significant difference in student scores and test completion time based on settable test item exposure…

  14. The performance of non-invasive tests to rule-in and rule-out significant coronary artery stenosis in patients with stable angina: a meta-analysis focused on post-test disease probability.

    PubMed

    Knuuti, Juhani; Ballo, Haitham; Juarez-Orozco, Luis Eduardo; Saraste, Antti; Kolh, Philippe; Rutjes, Anne Wilhelmina Saskia; Jüni, Peter; Windecker, Stephan; Bax, Jeroen J; Wijns, William

    2018-05-29

    To determine the ranges of pre-test probability (PTP) of coronary artery disease (CAD) in which stress electrocardiogram (ECG), stress echocardiography, coronary computed tomography angiography (CCTA), single-photon emission computed tomography (SPECT), positron emission tomography (PET), and cardiac magnetic resonance (CMR) can reclassify patients into a post-test probability that defines (>85%) or excludes (<15%) anatomically (defined by visual evaluation of invasive coronary angiography [ICA]) and functionally (defined by a fractional flow reserve [FFR] ≤0.8) significant CAD. A broad search in electronic databases until August 2017 was performed. Studies on the aforementioned techniques in >100 patients with stable CAD that utilized either ICA or ICA with FFR measurement as reference were included. Study-level data were pooled using a hierarchical bivariate random-effects model, and likelihood ratios were obtained for each technique. The PTP ranges for each technique to rule-in or rule-out significant CAD were defined. A total of 28 664 patients from 132 studies that used ICA as reference and 4131 from 23 studies using FFR were analysed. Stress ECG can rule-in and rule-out anatomically significant CAD only when PTP is ≥80% (76-83) and ≤19% (15-25), respectively. Coronary computed tomography angiography is able to rule-in anatomic CAD at a PTP ≥58% (45-70) and rule-out at a PTP ≤80% (65-94). The corresponding PTP values for functionally significant CAD were ≥75% (67-83) and ≤57% (40-72) for CCTA, and ≥71% (59-81) and ≤27% (24-31) for ICA, demonstrating poorer performance of anatomic imaging against FFR. In contrast, functional imaging techniques (PET, stress CMR, and SPECT) are able to rule-in functionally significant CAD when PTP is ≥46-59% and rule-out when PTP is ≤34-57%. The various diagnostic modalities have different optimal performance ranges for the detection of anatomically and functionally significant CAD. Stress ECG appears to
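
    The reclassification logic described above rests on the odds form of Bayes' theorem: post-test odds = pre-test odds × likelihood ratio. The sketch below shows the conversion for a few pre-test probabilities; the likelihood ratios used are illustrative assumptions, not the pooled estimates from this meta-analysis.

    ```python
    def post_test_probability(pre_test_prob, likelihood_ratio):
        """Odds form of Bayes' theorem: post-test odds = pre-test odds x LR."""
        pre_odds = pre_test_prob / (1.0 - pre_test_prob)
        post_odds = pre_odds * likelihood_ratio
        return post_odds / (1.0 + post_odds)

    # Illustrative likelihood ratios (assumed, not the paper's pooled values)
    lr_positive, lr_negative = 4.0, 0.2
    for ptp in (0.15, 0.50, 0.85):
        rule_in = post_test_probability(ptp, lr_positive)
        rule_out = post_test_probability(ptp, lr_negative)
        print(f"PTP {ptp:.0%}: positive test -> {rule_in:.0%}, "
              f"negative test -> {rule_out:.0%}")
    # A modality "rules in" disease at a given PTP when the positive-test value
    # exceeds 85%, and "rules out" when the negative-test value falls below 15%.
    ```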

  15. NAD+ administration significantly attenuates synchrotron radiation X-ray-induced DNA damage and structural alterations of rodent testes

    PubMed Central

    Sheng, Caibin; Chen, Heyu; Wang, Ban; Liu, Tengyuan; Hong, Yunyi; Shao, Jiaxiang; He, Xin; Ma, Yingxin; Nie, Hui; Liu, Na; Xia, Weiliang; Ying, Weihai

    2012-01-01

    Synchrotron radiation (SR) X-ray has great potential for its applications in medical imaging and cancer treatment. In order to apply SR X-ray in clinical settings, it is necessary to elucidate the mechanisms underlying the damaging effects of SR X-ray on normal tissues, and to search for the strategies to reduce the detrimental effects of SR X-ray on normal tissues. However, so far there has been little information on these topics. In this study we used the testes of rats as a model to characterize SR X-ray-induced tissue damage, and to test our hypothesis that NAD+ administration can prevent SR X-ray-induced injury of the testes. We first determined the effects of SR X-ray at the doses of 0, 0.5, 1.3, 4 and 40 Gy on the biochemical and structural properties of the testes one day after SR X-ray exposures. We found that 40 Gy of SR X-ray induced a massive increase in double-strand DNA damage, as assessed by both immunostaining and Western blot of phosphorylated H2AX levels, which was significantly decreased by intraperitoneally (i.p.) administered NAD+ at doses of 125 and 625 mg/kg. Forty Gy of SR X-ray can also induce marked increases in abnormal cell nuclei as well as significant decreases in the cell layers of the seminiferous tubules one day after SR X-ray exposures, which were also ameliorated by the NAD+ administration. In summary, our study has shown that SR X-ray can produce both molecular and structural alterations of the testes, which can be significantly attenuated by NAD+ administration. These results have provided not only the first evidence that SR X-ray-induced tissue damage can be ameliorated by certain approaches, but also a valuable basis for elucidating the mechanisms underlying SR X-ray-induced tissue injury. PMID:22518270

  16. Confidence intervals for single-case effect size measures based on randomization test inversion.

    PubMed

    Michiels, Bart; Heyvaert, Mieke; Meulders, Ann; Onghena, Patrick

    2017-02-01

    In the current paper, we present a method to construct nonparametric confidence intervals (CIs) for single-case effect size measures in the context of various single-case designs. We use the relationship between a two-sided statistical hypothesis test at significance level α and a 100(1 - α)% two-sided CI to construct CIs for any effect size measure θ that contain all point null hypothesis θ values that cannot be rejected by the hypothesis test at significance level α. This method of hypothesis test inversion (HTI) can be employed using a randomization test as the statistical hypothesis test in order to construct a nonparametric CI for θ. We will refer to this procedure as randomization test inversion (RTI). We illustrate RTI in a situation in which θ is the unstandardized and the standardized difference in means between two treatments in a completely randomized single-case design. Additionally, we demonstrate how RTI can be extended to other types of single-case designs. Finally, we discuss a few challenges for RTI as well as possibilities when using the method with other effect size measures, such as rank-based nonoverlap indices. Supplementary to this paper, we provide easy-to-use R code, which allows the user to construct nonparametric CIs according to the proposed method.
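
    The paper's supplementary code is in R; the sketch below is an independent Python illustration of the same idea for the unstandardized mean difference in a completely randomized single-case design: shift the treatment observations by a candidate effect θ, run the randomization test, and keep every θ that is not rejected. The data and the grid of candidate values are invented for illustration.

    ```python
    import numpy as np
    from itertools import combinations

    def randomization_pvalue(a, b):
        """Two-sided randomization test p-value for the difference in means
        under a completely randomized assignment of measurement occasions."""
        pooled = np.concatenate([a, b])
        n, n_a = len(pooled), len(a)
        obs = abs(a.mean() - b.mean())
        count, total = 0, 0
        for idx in combinations(range(n), n_a):
            mask = np.zeros(n, dtype=bool)
            mask[list(idx)] = True
            diff = abs(pooled[mask].mean() - pooled[~mask].mean())
            count += diff >= obs - 1e-12
            total += 1
        return count / total

    def rti_confidence_interval(a, b, alpha=0.05, grid=np.linspace(-10, 10, 201)):
        """Randomization test inversion: the CI collects every candidate effect
        theta that the randomization test cannot reject after shifting B by -theta."""
        kept = [theta for theta in grid
                if randomization_pvalue(a, b - theta) > alpha]
        return min(kept), max(kept)

    # Illustrative single-case data: 6 occasions per condition (A = baseline, B = treatment)
    A = np.array([3.0, 4.0, 2.5, 3.5, 4.0, 3.0])
    B = np.array([6.0, 7.5, 6.5, 8.0, 7.0, 6.5])
    print(rti_confidence_interval(A, B))
    ```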

  17. Do sediment type and test durations affect results of laboratory-based, accelerated testing studies of permeable pavement clogging?

    PubMed

    Nichols, Peter W B; White, Richard; Lucke, Terry

    2015-04-01

    Previous studies have attempted to quantify the clogging processes of Permeable Interlocking Concrete Pavers (PICPs) using accelerated testing methods. However, the results have been variable. This study investigated the effects that three different sediment types (natural and silica), and different simulated rainfall intensities, and testing durations had on the observed clogging processes (and measured surface infiltration rates) of laboratory-based, accelerated PICP testing studies. Results showed that accelerated simulated laboratory testing results are highly dependent on the type, and size of sediment used in the experiments. For example, when using real stormwater sediment up to 1.18 mm in size, the results showed that neither testing duration, nor stormwater application rate had any significant effect on PICP clogging. However, the study clearly showed that shorter testing durations generally increased clogging and reduced the surface infiltration rates of the models when artificial silica sediment was used. Longer testing durations also generally increased clogging of the models when using fine sediment (<300 μm). Results from this study will help researchers and designers better anticipate when and why PICPs are susceptible to clogging, reduce maintenance and extend the useful life of these increasingly common stormwater best management practices. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Development, construct validity and test-retest reliability of a field-based wheelchair mobility performance test for wheelchair basketball.

    PubMed

    de Witte, Annemarie M H; Hoozemans, Marco J M; Berger, Monique A M; van der Slikke, Rienk M A; van der Woude, Lucas H V; Veeger, Dirkjan H E J

    2018-01-01

    The aim of this study was to develop and describe a wheelchair mobility performance test in wheelchair basketball and to assess its construct validity and reliability. To mimic mobility performance of wheelchair basketball matches in a standardised manner, a test was designed based on observation of wheelchair basketball matches and expert judgement. Forty-six players performed the test to determine its validity and 23 players performed the test twice for reliability. Independent-samples t-tests were used to assess whether the times needed to complete the test were different for classifications, playing standards and sex. Intraclass correlation coefficients (ICC) were calculated to quantify reliability of performance times. Males performed better than females (P < 0.001, effect size [ES] = -1.26) and international men performed better than national men (P < 0.001, ES = -1.62). Performance time of low (≤2.5) and high (≥3.0) classification players was borderline not significant with a moderate ES (P = 0.06, ES = 0.58). The reliability was excellent for overall performance time (ICC = 0.95). These results show that the test can be used as a standardised mobility performance test to validly and reliably assess the capacity in mobility performance of elite wheelchair basketball athletes. Furthermore, the described methodology of development is recommended for use in other sports to develop sport-specific tests.

  19. Repeated significance tests of linear combinations of sensitivity and specificity of a diagnostic biomarker

    PubMed Central

    Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi

    2016-01-01

    A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768

  20. A microcomputer-based testing station for dynamic and static testing of protective relay systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, W.J.; Li, R.J.; Gu, J.C.

    1995-12-31

    Dynamic and static relay performance testing before installation in the field is a subject of great interest to utility relay engineers. The common practice in utility testing of new relays is to put the new unit to be tested in parallel with an existing functioning relay in the system, wait until an actual transient occurs, and then observe and analyze the performance of the new relay. It is impossible to test the protective relay system thoroughly through this procedure. This paper describes a Microcomputer-Based Testing Station (or PC-Based Testing Station) that can perform both static and dynamic testing of the relay. The Power System Simulation Laboratory at the University of Texas at Arlington is a scaled-down, three-phase, physical power system that correlates well with the important components of a real power system and is an ideal facility for the dynamic and static testing of protective relay systems. A brief introduction to the configuration of this laboratory is presented. Test results of several protective functions obtained using this laboratory illustrate the usefulness of this test set-up.

  1. A Cost-Effectiveness Analysis of a Home-Based HIV Counselling and Testing Intervention versus the Standard (Facility Based) HIV Testing Strategy in Rural South Africa

    PubMed Central

    Tabana, Hanani; Nkonki, Lungiswa; Hongoro, Charles; Doherty, Tanya; Ekström, Anna Mia; Naik, Reshma; Zembe-Mkabile, Wanga; Jackson, Debra; Thorson, Anna

    2015-01-01

    Introduction There is growing evidence concerning the acceptability and feasibility of home-based HIV testing. However, less is known about the cost-effectiveness of the approach, which is a critical component in guiding decisions about scaling up access to HIV testing. This study examined the cost-effectiveness of a home-based HIV testing intervention in rural South Africa. Methods Two alternatives, clinic-based and home-based HIV counselling and testing, were compared. Costs were analysed from a provider’s perspective for the period of January to December 2010. The outcome, HIV counselling and testing (HCT) uptake, was obtained from the Good Start home-based HIV counselling and testing (HBHCT) cluster randomised controlled trial undertaken in KwaZulu-Natal province. Cost-effectiveness was estimated for a target population of 22,099 people in intervention communities versus 23,864 people in control communities. Average costs were calculated as the cost per client tested, while cost-effectiveness was calculated as the cost per additional client tested through HBHCT. Results Based on an effectiveness of 37% in the intervention (HBHCT) arm compared to 16% in the control arm, home-based testing cost US$29 per person tested compared to US$38 for clinic HCT. The incremental cost-effectiveness ratio per additional client tested through HBHCT was US$19. Conclusions HBHCT was less costly and more effective. Home-based HCT could present a cost-effective alternative for rural ‘hard to reach’ populations, depending on affordability by the health system, and should be considered as part of community outreach programs. PMID:26275059
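
    For orientation, the incremental cost-effectiveness figure quoted above follows the standard ICER formula: the cost difference between strategies divided by the difference in effect. The sketch below illustrates the arithmetic with hypothetical totals; it does not use the trial's raw cost or uptake data.

        # Generic incremental cost-effectiveness ratio (ICER) sketch:
        #   ICER = (cost_intervention - cost_control) / (effect_intervention - effect_control)
        # All numbers below are hypothetical placeholders, not the trial's data.
        def icer(cost_int, cost_ctrl, effect_int, effect_ctrl):
            return (cost_int - cost_ctrl) / (effect_int - effect_ctrl)

        # e.g. intervention: 240,000 spent, 8,200 people tested; control: 180,000 spent,
        # 3,800 people tested -> cost per additional person tested.
        print(round(icer(240_000, 180_000, 8_200, 3_800), 2))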

  2. The Relative Importance of Low Significance Level and High Power in Multiple Tests of Significance.

    ERIC Educational Resources Information Center

    Westermann, Rainer; Hager, Willi

    1983-01-01

    Two psychological experiments--Anderson and Shanteau (1970), Berkowitz and LePage (1967)--are reanalyzed to present the problem of the relative importance of low Type 1 error probability and high power when answering a research question by testing several statistical hypotheses. (Author/PN)

  3. Design Of Computer Based Test Using The Unified Modeling Language

    NASA Astrophysics Data System (ADS)

    Tedyyana, Agus; Danuri; Lidyawati

    2017-12-01

    Admission selection at Politeknik Negeri Bengkalis through the interest and talent search (PMDK), the joint admission test for state polytechnics (SB-UMPN) and the independent route (UM-Polbeng) has been conducted using paper-based testing (PBT). The paper-based test model has several weaknesses: it wastes paper, test questions can leak to the public, and test results can be manipulated. This research aimed to create a computer-based test (CBT) model using the Unified Modeling Language (UML), consisting of use case diagrams, activity diagrams and sequence diagrams. During the design of the application, attention was paid to protecting the test questions with a password before they are shown, through an encryption and decryption process based on the RSA cryptography algorithm. The questions drawn from the question bank are then randomised using the Fisher-Yates shuffle. The network architecture used in the computer-based test application is a client-server model over a Local Area Network (LAN). The result of the design is a computer-based test application for admission selection at Politeknik Negeri Bengkalis.
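
    The question randomisation named above relies on the Fisher-Yates shuffle. The following sketch shows the classic algorithm on a list of placeholder question IDs; it is an illustration of the technique, not the application's actual code.

        import random

        # Minimal sketch of the Fisher-Yates (Knuth) shuffle used to randomise the
        # order of question IDs drawn from a bank; question_ids are placeholders.
        def fisher_yates_shuffle(items):
            items = list(items)                    # shuffle a copy, leave the input intact
            for i in range(len(items) - 1, 0, -1):
                j = random.randint(0, i)           # pick a position in the unshuffled prefix
                items[i], items[j] = items[j], items[i]
            return items

        question_ids = [101, 102, 103, 104, 105]
        print(fisher_yates_shuffle(question_ids))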

  4. Model-Based Diagnosis in a Power Distribution Test-Bed

    NASA Technical Reports Server (NTRS)

    Scarl, E.; McCall, K.

    1998-01-01

    The Rodon model-based diagnosis shell was applied to a breadboard test-bed, modeling an automated power distribution system. The constraint-based modeling paradigm and diagnostic algorithm were found to adequately represent the selected set of test scenarios.

  5. Quality assurance in RT-PCR-based BCR/ABL diagnostics--results of an interlaboratory test and a standardization approach.

    PubMed

    Burmeister, T; Maurer, J; Aivado, M; Elmaagacli, A H; Grünebach, F; Held, K R; Hess, G; Hochhaus, A; Höppner, W; Lentes, K U; Lübbert, M; Schäfer, K L; Schafhausen, P; Schmidt, C A; Schüler, F; Seeger, K; Seelig, R; Thiede, C; Viehmann, S; Weber, C; Wilhelm, S; Christmann, A; Clement, J H; Ebener, U; Enczmann, J; Leo, R; Schleuning, M; Schoch, R; Thiel, E

    2000-10-01

    Here we describe the results of an interlaboratory test for RT-PCR-based BCR/ABL analysis. The test was organized in two parts. The numbers of participating laboratories in the first and second parts were 27 and 20, respectively. In the first part, samples containing various concentrations of plasmids carrying the e1a2, b2a2 or b3a2 BCR/ABL transcripts were analyzed by PCR. In the second part of the test, cell samples containing various concentrations of BCR/ABL-positive cells were analyzed by RT-PCR. Overall PCR sensitivity was sufficient in approximately 90% of the tests, but a significant number of false positive results were obtained. There were significant differences in sensitivity in the cell-based analysis between the various participants. The results are discussed, and proposals are made regarding the choice of primers, controls, and conditions for RNA extraction and reverse transcription.

  6. A Test That Isn't Torture: A Field-Tested Performance-Based Assessment

    ERIC Educational Resources Information Center

    Eastburn, Mark

    2006-01-01

    This article discusses the author's use of a performance-based evaluation in his fifth grade Spanish class in a K-5 public elementary school located in Princeton, New Jersey. The author realized the need to break the old testing paradigm and discover a new way of demonstrating student language acquisition since the traditional tests did not seem…

  7. Surface Fitting for Quasi Scattered Data from Coordinate Measuring Systems.

    PubMed

    Mao, Qing; Liu, Shugui; Wang, Sen; Ma, Xinhui

    2018-01-13

    Non-uniform rational B-spline (NURBS) surface fitting from data points is widely used in the fields of computer aided design (CAD), medical imaging, cultural relic representation and object-shape detection. Usually, the measured data acquired from coordinate measuring systems are neither gridded nor completely scattered. The distribution of this kind of data is scattered in physical space, but the data points are stored in a way consistent with the order of measurement, so they are named quasi scattered data in this paper. They can therefore be organized into rows easily, but the number of points in each row is random. To overcome the difficulty of surface fitting from this kind of data, a new method based on resampling is proposed. It consists of three major steps: (1) NURBS curve fitting for each row, (2) resampling on the fitted curve and (3) surface fitting from the resampled data. An iterative projection optimization scheme is applied in the first and third steps to yield an advisable parameterization and reduce the time cost of projection. A resampling approach based on parameters, local peaks and contour curvature is proposed to overcome the problems of node redundancy and high time consumption in the fitting of this kind of scattered data. Numerical experiments are conducted with both simulated and practical data, and the results show that the proposed method is fast, effective and robust. Moreover, by analyzing the fitting results acquired from data with different degrees of scatteredness, it can be demonstrated that the error introduced by resampling is negligible and that the method is therefore feasible.
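
    To illustrate the per-row step only, the sketch below fits a parametric spline to one synthetic row of quasi scattered points and resamples it at evenly spaced parameter values using SciPy. The paper uses NURBS with a curvature-aware resampling criterion; plain B-splines and uniform parameters are used here purely as a simplified stand-in.

        import numpy as np
        from scipy.interpolate import splprep, splev

        # Simplified stand-in for the row-fitting and resampling step: fit a
        # parametric B-spline to one synthetic row of points, then resample it at
        # uniform parameter values.
        t = np.sort(np.random.default_rng(0).uniform(0, np.pi, 40))
        x, y, z = np.cos(t), np.sin(t), 0.1 * t          # synthetic measured row

        tck, u = splprep([x, y, z], s=1e-4)              # parametric spline fit
        u_new = np.linspace(0, 1, 25)                    # uniform resampling
        resampled = np.array(splev(u_new, tck))          # shape (3, 25)
        print(resampled.shape)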

  8. Performance of a New Rapid Immunoassay Test Kit for Point-of-Care Diagnosis of Significant Bacteriuria

    PubMed Central

    Cox, Marsha E.; DiNello, Robert K.; Geisberg, Mark; Abbott, April; Roberts, Pacita L.; Hooton, Thomas M.

    2015-01-01

    Urinary tract infections (UTIs) are frequently encountered in clinical practice and most commonly caused by Escherichia coli and other Gram-negative uropathogens. We tested RapidBac, a rapid immunoassay for bacteriuria developed by Silver Lake Research Corporation (SLRC), compared with standard bacterial culture using 966 clean-catch urine specimens submitted to a clinical microbiology laboratory in an urban academic medical center. RapidBac was performed in accordance with instructions, providing a positive or negative result in 20 min. RapidBac identified as positive 245/285 (sensitivity 86%) samples with significant bacteriuria, defined as the presence of a Gram-negative uropathogen or Staphylococcus saprophyticus at ≥10³ CFU/ml. The sensitivities for Gram-negative bacteriuria at ≥10⁴ CFU/ml and ≥10⁵ CFU/ml were 96% and 99%, respectively. The specificity of the test, detecting the absence of significant bacteriuria, was 94%. The sensitivity and specificity of RapidBac were similar on samples from inpatient and outpatient settings, from male and female patients, and across age groups from 18 to 89 years old, although specificity was higher in men (100%) compared with that in women (92%). The RapidBac test for bacteriuria may be effective as an aid in the point-of-care diagnosis of UTIs especially in emergency and primary care settings. PMID:26063858
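
    The sensitivity and specificity quoted above come from a 2x2 comparison of the rapid test against the culture reference. A minimal sketch of that calculation follows; the 245/285 positive counts are taken from the abstract, while the negative-sample counts are hypothetical placeholders.

        # Sensitivity and specificity of a rapid test against the culture reference.
        # The 245/285 positives are from the abstract; the negative-sample counts
        # below are hypothetical placeholders.
        def sensitivity(tp, fn):
            return tp / (tp + fn)

        def specificity(tn, fp):
            return tn / (tn + fp)

        print(round(sensitivity(tp=245, fn=285 - 245), 2))   # ~0.86, as reported
        print(round(specificity(tn=640, fp=41), 2))          # hypothetical counts -> ~0.94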

  9. Papers Based Electrochemical Biosensors: From Test Strips to Paper-Based Microfluidics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Bingwen; Du, Dan; Hua, Xin

    2014-05-08

    Paper-based biosensors such as lateral flow test strips and paper-based microfluidic devices (or paperfluidics) are inexpensive, rapid, flexible, and easy-to-use analytical tools. An apparent trend is to move their readout from qualitative assessment to quantitative determination, and electrochemical detection plays an important role in this quantification. This review focuses on electrochemical (EC) detection-enabled biosensors. The first part provides detailed examples of paper test strips. The second part gives an overview of paperfluidics employing EC detection. The outlook and recommended future directions for EC-enabled biosensors are discussed at the end.

  10. Investigating a multigene prognostic assay based on significant pathways for Luminal A breast cancer through gene expression profile analysis.

    PubMed

    Gao, Haiyan; Yang, Mei; Zhang, Xiaolan

    2018-04-01

    The present study aimed to investigate potential recurrence-risk biomarkers based on significant pathways for Luminal A breast cancer through gene expression profile analysis. Initially, the gene expression profiles of Luminal A breast cancer patients were downloaded from The Cancer Genome Atlas database. The differentially expressed genes (DEGs) were identified using the Limma package and hierarchical clustering analysis was conducted for the DEGs. In addition, functional pathways were screened using Kyoto Encyclopedia of Genes and Genomes pathway enrichment analyses and rank ratio calculation. The multigene prognostic assay was constructed based on the statistically significant pathways, and its prognostic function was tested using a training set and verified using the gene expression data and survival data of Luminal A breast cancer patients downloaded from the Gene Expression Omnibus. A total of 300 DEGs were identified between good and poor outcome groups, including 176 upregulated genes and 124 downregulated genes. The DEGs may be used to effectively distinguish Luminal A samples with different prognoses, as verified by hierarchical clustering analysis. Nine pathways were screened as significant pathways, and a total of 18 DEGs involved in these 9 pathways were identified as prognostic biomarkers. According to the survival analysis and receiver operating characteristic curve, the obtained 18-gene prognostic assay exhibited good prognostic function with high sensitivity and specificity on both the training and test samples. In conclusion, the 18-gene prognostic assay, including the key genes transcription factor 7-like 2, anterior parietal cortex and lymphocyte enhancer factor-1, may provide a new method for predicting outcomes and may be conducive to the promotion of precision medicine for Luminal A breast cancer.

  11. [Algorithm for estimating chlorophyll-a concentration in case II water body based on bio-optical model].

    PubMed

    Yang, Wei; Chen, Jin; Mausushita, Bunki

    2009-01-01

    In the present study, a novel retrieval method for estimating chlorophyll-a concentration in case II waters based on a bio-optical model was proposed and tested with data measured in the laboratory. A series of reflectance spectra, for which the concentration of each sample constituent (for example, chlorophyll-a and NPSS) was known from controlled experiments, was used to calculate the absorption and backscattering coefficients of the constituents of case II waters. The non-negative least-squares method was then applied to calculate the concentrations of chlorophyll-a and non-phytoplankton suspended sediments (NPSS). Green algae were first collected from Lake Kasumigaura in Japan and then cultured in the laboratory. The reflectance spectra of waters with different amounts of phytoplankton and NPSS were measured in a dark room using a FieldSpec Pro VNIR (Analytical Spectral Devices Inc., Boulder, CO, USA). To validate whether this method can be applied to multispectral data (for example, Landsat TM), the spectra measured in the laboratory were resampled to Landsat TM bands 1, 2, 3 and 4. Different combinations of TM bands were compared to derive the most appropriate wavelengths for detecting chlorophyll-a in case II water for green algae. The results indicated that the combination of TM bands 2, 3 and 4 achieved much better accuracy than other combinations, and the estimated concentration of chlorophyll-a was significantly more accurate than that obtained with empirical methods. It is expected that this method can be directly applied to real remotely sensed imagery because it is based on a bio-optical model.
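
    The constituent-retrieval step above reduces to a non-negative least-squares problem. The sketch below shows that step with SciPy on a hypothetical linear mixing matrix whose columns stand in for the per-unit-concentration optical signatures of chlorophyll-a and NPSS; it is not the authors' calibration data.

        import numpy as np
        from scipy.optimize import nnls

        # Non-negative least-squares step: given a (hypothetical) matrix A whose
        # columns are per-unit-concentration optical signatures of chlorophyll-a
        # and NPSS at a few bands, and a measured signal b, solve A @ c ~= b
        # subject to c >= 0.
        A = np.array([[0.8, 0.1],
                      [0.5, 0.4],
                      [0.2, 0.7],
                      [0.1, 0.9]])          # placeholder signatures, 4 bands x 2 constituents
        true_c = np.array([3.0, 1.5])       # synthetic "true" concentrations
        b = A @ true_c + 0.01 * np.random.default_rng(0).standard_normal(4)

        c_hat, residual_norm = nnls(A, b)   # non-negative concentration estimates
        print(c_hat)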

  12. Effects of an Inquiry-Based Short Intervention on State Test Anxiety in Comparison to Alternative Coping Strategies

    PubMed Central

    Krispenz, Ann; Dickhäuser, Oliver

    2018-01-01

    Background and Objectives: Test anxiety can have undesirable consequences for learning and academic achievement. The control-value theory of achievement emotions assumes that test anxiety is experienced if a student appraises an achievement situation as important (value appraisal) but feels that the situation and its outcome are not fully under his or her control (control appraisal). Accordingly, modification of cognitive appraisals is assumed to reduce test anxiety. One method aiming at the modification of appraisals is inquiry-based stress reduction. In the present study (N = 162), we assessed the effects of an inquiry-based short intervention on test anxiety. Design: Short-term longitudinal, randomized control trial. Methods: Focusing on an individual worry thought, 53 university students received an inquiry-based short intervention. Control participants reflected on their worry thought (n = 55) or were distracted (n = 52). Thought-related test anxiety was assessed before, immediately after, and 2 days after the experimental treatment. Results: After the intervention as well as 2 days later, individuals who had received the inquiry-based intervention demonstrated significantly lower test anxiety than participants from the pooled control groups. Further analyses showed that the inquiry-based short intervention was more effective than reflecting on a worry thought but had no advantage over distraction. Conclusions: Our findings provide the first experimental evidence for the effectiveness of an inquiry-based short intervention in reducing students’ test anxiety. PMID:29515507

  13. System health monitoring using multiple-model adaptive estimation techniques

    NASA Astrophysics Data System (ADS)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods with new techniques for sampling the parameter space, based on the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems, as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful as the number of parameter dimensions grows: adding more parameters does not require the model count to increase. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples, and resamples are not required to use the same technique. Both techniques are demonstrated in both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE either to narrow the focus to converged values within a parameter range or to expand the range in the appropriate direction to track parameters outside the current parameter range boundary.
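
    As a generic illustration of the Latin Hypercube Sampling step mentioned above (not the GRAPE implementation itself), the sketch below draws stratified samples in a unit hypercube: one point per stratum in each dimension, with independently permuted strata.

        import numpy as np

        # Latin Hypercube Sampling over a d-dimensional unit cube: each dimension is
        # split into n equal strata, one point is drawn per stratum, and the strata
        # are permuted independently per dimension.
        def latin_hypercube(n_samples: int, n_dims: int, rng=None):
            rng = np.random.default_rng(rng)
            samples = np.empty((n_samples, n_dims))
            for d in range(n_dims):
                strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
                samples[:, d] = rng.permutation(strata)
            return samples

        # 10 parameter samples in 3 dimensions; rescale to physical ranges as needed.
        print(latin_hypercube(10, 3, rng=42))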

  14. Triennial changes in groundwater quality in aquifers used for public supply in California: Utility as indicators of temporal trends

    USGS Publications Warehouse

    Kent, Robert; Landon, Matthew K.

    2016-01-01

    From 2004 to 2011, the U.S. Geological Survey collected samples from 1686 wells across the State of California as part of the California State Water Resources Control Board’s Groundwater Ambient Monitoring and Assessment (GAMA) Priority Basin Project (PBP). From 2007 to 2013, 224 of these wells were resampled to assess temporal trends in water quality. The samples were analyzed for 216 water-quality constituents, including inorganic and organic compounds as well as isotopic tracers. The resampled wells were grouped into five hydrogeologic zones. A nonparametric hypothesis test was used to test the differences between initial sampling and resampling results to evaluate possible step trends in water quality, statewide and within each hydrogeologic zone. The hypothesis tests were performed on the 79 constituents that were detected in more than 5% of the samples collected during either sampling period in at least one hydrogeologic zone. Step trends were detected for 17 constituents. Increasing trends were detected for alkalinity, aluminum, beryllium, boron, lithium, orthophosphate, perchlorate, sodium, and specific conductance. Decreasing trends were detected for atrazine, cobalt, dissolved oxygen, lead, nickel, pH, simazine, and tritium. Tritium was expected to decrease due to decreasing values in precipitation, and the detection of these decreases indicates that the method is capable of resolving temporal trends.
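
    The abstract does not name the specific nonparametric test used for the step-trend comparison. Assuming a matched-pairs Wilcoxon signed-rank test as one plausible choice, the sketch below shows how paired initial and resampled concentrations for a single constituent could be compared; the data are synthetic.

        import numpy as np
        from scipy import stats

        # Hedged sketch of a step-trend check: a Wilcoxon signed-rank test on paired
        # initial vs. resampled concentrations for one constituent (the exact test
        # used in the study is not named here; values below are synthetic).
        rng = np.random.default_rng(1)
        initial = rng.lognormal(mean=1.0, sigma=0.4, size=30)
        resampled = initial * rng.lognormal(mean=0.1, sigma=0.2, size=30)  # mild increase

        stat, p_value = stats.wilcoxon(initial, resampled)
        print(f"W = {stat:.1f}, p = {p_value:.4f}")   # small p suggests a step trend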

  15. Variance change point detection for fractional Brownian motion based on the likelihood ratio test

    NASA Astrophysics Data System (ADS)

    Kucharczyk, Daniel; Wyłomańska, Agnieszka; Sikora, Grzegorz

    2018-01-01

    Fractional Brownian motion is one of the main stochastic processes used for describing the long-range dependence phenomenon in self-similar processes. For many real time series, the characteristics of the data change significantly over time. Such behaviour can be observed in many applications, including physical and biological experiments. In this paper, we present a new technique for critical change point detection for cases where the data under consideration are driven by fractional Brownian motion with a time-changed diffusion coefficient. The proposed methodology is based on the likelihood ratio approach and represents an extension of a similar methodology used for Brownian motion, a process with independent increments. We also propose a statistical test for the significance of the estimated change point. In addition, an extensive simulation study is provided to assess the performance of the proposed method.
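
    As a simplified stand-in for the method above, the sketch below runs a likelihood-ratio scan for a single variance change point in a zero-mean Gaussian (white-noise) sequence; the paper's statistic is formulated for increments of fractional Brownian motion and includes a significance test, both of which are omitted here.

        import numpy as np

        # Likelihood-ratio scan for a single variance change point in a zero-mean
        # Gaussian sequence (white-noise stand-in; data are synthetic).
        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(0, 1.0, 400), rng.normal(0, 2.0, 600)])
        n = x.size

        def lr_statistic(x, k):
            s_all = np.mean(x**2)
            s1, s2 = np.mean(x[:k]**2), np.mean(x[k:]**2)
            return n * np.log(s_all) - k * np.log(s1) - (n - k) * np.log(s2)

        candidates = np.arange(20, n - 20)              # keep both segments non-trivial
        lr_values = np.array([lr_statistic(x, k) for k in candidates])
        k_hat = candidates[np.argmax(lr_values)]
        print(f"estimated change point: {k_hat} (true: 400)")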

  16. Watermarking on 3D mesh based on spherical wavelet transform.

    PubMed

    Jin, Jian-Qiu; Dai, Min-Ya; Bao, Hu-Jun; Peng, Qun-Sheng

    2004-03-01

    In this paper we propose a robust watermarking algorithm for 3D meshes. The algorithm is based on the spherical wavelet transform. Our basic idea is to decompose the original mesh into a series of details at different scales using the spherical wavelet transform; the watermark is then embedded into the different levels of detail. The embedding process includes global sphere parameterization, spherical uniform sampling, the spherical wavelet forward transform, embedding of the watermark, the spherical wavelet inverse transform, and finally resampling of the watermarked mesh to recover the topological connectivity of the original model. Experiments showed that our algorithm can improve the capacity of the watermark and the robustness of the watermarking against attacks.

  17. Benchmarking of a T-wave alternans detection method based on empirical mode decomposition.

    PubMed

    Blanco-Velasco, Manuel; Goya-Esteban, Rebeca; Cruz-Roldán, Fernando; García-Alberola, Arcadi; Rojo-Álvarez, José Luis

    2017-07-01

    T-wave alternans (TWA) is a fluctuation of the ST-T complex occurring on an every-other-beat basis in the surface electrocardiogram (ECG). It has been shown to be an informative risk stratifier for sudden cardiac death, though the lack of a gold standard for benchmarking detection methods has promoted the use of synthetic signals. This work proposes a novel signal model to study the performance of TWA detection. Additionally, the methodological validation of a denoising technique based on empirical mode decomposition (EMD), which is used here along with the spectral method (SM), is also tackled. The proposed test bed system is based on the following guidelines: (1) use of open source databases to enable experimental replication; (2) use of real ECG signals and physiological noise; (3) inclusion of randomized TWA episodes. Both sensitivity (Se) and specificity (Sp) are analyzed separately. A nonparametric hypothesis test, based on Bootstrap resampling, is also used to determine whether the presence of the EMD block actually improves performance. The results show an outstanding specificity when the EMD block is used, even in very noisy conditions (0.96 compared to 0.72 for SNR = 8 dB), always superior to that of the conventional SM alone. Regarding sensitivity, the EMD method also outperforms in noisy conditions (0.57 compared to 0.46 for SNR = 8 dB), although it decreases in noiseless conditions. The proposed test setting guarantees that the actual physiological variability of the cardiac system is reproduced. The use of the EMD-based block in noisy environments enables the identification of most patients with fatal arrhythmias. Copyright © 2017 Elsevier B.V. All rights reserved.
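
    The decision of whether the EMD block helps is made with a Bootstrap-resampling hypothesis test. The sketch below shows a generic paired bootstrap test on the difference in mean per-record sensitivity with and without a denoising block; the scores are synthetic and the procedure is not the authors' exact one.

        import numpy as np

        # Generic paired-bootstrap sketch: test whether the mean per-record sensitivity
        # with the EMD block exceeds that without it. se_with / se_without are synthetic.
        rng = np.random.default_rng(0)
        se_without = rng.beta(5, 6, size=40)            # placeholder scores
        se_with = np.clip(se_without + 0.08 + 0.05 * rng.standard_normal(40), 0, 1)

        diffs = se_with - se_without
        observed = np.mean(diffs)
        boot = np.array([np.mean(rng.choice(diffs, size=diffs.size, replace=True))
                         for _ in range(5000)])
        # One-sided bootstrap p-value for H0: no improvement, using the
        # null-centred (shifted) distribution of the bootstrap means.
        p_value = np.mean(boot - observed >= observed)
        print(f"mean improvement = {observed:.3f}, bootstrap p = {p_value:.4f}")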

  18. Weighted Feature Significance: A Simple, Interpretable Model of Compound Toxicity Based on the Statistical Enrichment of Structural Features

    PubMed Central

    Huang, Ruili; Southall, Noel; Xia, Menghang; Cho, Ming-Hsuang; Jadhav, Ajit; Nguyen, Dac-Trung; Inglese, James; Tice, Raymond R.; Austin, Christopher P.

    2009-01-01

    In support of the U.S. Tox21 program, we have developed a simple and chemically intuitive model we call weighted feature significance (WFS) to predict the toxicological activity of compounds, based on the statistical enrichment of structural features in toxic compounds. We trained and tested the model on the following: (1) data from quantitative high–throughput screening cytotoxicity and caspase activation assays conducted at the National Institutes of Health Chemical Genomics Center, (2) data from Salmonella typhimurium reverse mutagenicity assays conducted by the U.S. National Toxicology Program, and (3) hepatotoxicity data published in the Registry of Toxic Effects of Chemical Substances. Enrichments of structural features in toxic compounds are evaluated for their statistical significance and compiled into a simple additive model of toxicity and then used to score new compounds for potential toxicity. The predictive power of the model for cytotoxicity was validated using an independent set of compounds from the U.S. Environmental Protection Agency tested also at the National Institutes of Health Chemical Genomics Center. We compared the performance of our WFS approach with classical classification methods such as Naive Bayesian clustering and support vector machines. In most test cases, WFS showed similar or slightly better predictive power, especially in the prediction of hepatotoxic compounds, where WFS appeared to have the best performance among the three methods. The new algorithm has the important advantages of simplicity, power, interpretability, and ease of implementation. PMID:19805409

  19. A methodological critique on using temperature-conditioned resampling for climate projections as in the paper of Gerstengarbe et al. (2013) winter storm- and summer thunderstorm-related loss events in Theoretical and Applied Climatology (TAC)

    NASA Astrophysics Data System (ADS)

    Wechsung, Frank; Wechsung, Maximilian

    2016-11-01

    The STatistical Analogue Resampling Scheme (STARS) statistical approach was recently used to project changes of climate variables in Germany corresponding to a supposed degree of warming. We show by theoretical and empirical analysis that STARS simply transforms interannual gradients between warmer and cooler seasons into climate trends. According to STARS projections, summers in Germany will inevitably become drier and winters wetter under global warming. Due to the dominance of negative interannual correlations between precipitation and temperature during the year, STARS has a tendency to generate a net annual decrease in precipitation under mean German conditions. Furthermore, according to STARS, the annual level of global radiation would increase in Germany. STARS can still be used, e.g., for generating scenarios in vulnerability and uncertainty studies. However, it is not suitable as a climate downscaling tool for assessing risks arising from a changing climate at spatial scales finer than those of a general circulation model (GCM).

  20. Moving beyond the Failure of Test-Based Accountability

    ERIC Educational Resources Information Center

    Koretz, Daniel

    2018-01-01

    In "The Testing Charade: Pretending to Make Schools Better", the author's new book from which this article is drawn, the failures of test-based accountability are documented and some of the most egregious misuses and outright abuses of testing are described, along with some of the most serious negative effects. Neither good intentions…

  1. Development, evaluation and application of performance-based brake testing technologies field test : executive summary

    DOT National Transportation Integrated Search

    1999-09-01

    This report presents the results of the field test portion of the Development, Evaluation, and Application of Performance-Based Brake Testing Technologies project sponsored by the Federal Highway Administration's (FHWA) Office of Motor Carriers.

  2. Experimental study of digital image processing techniques for LANDSAT data

    NASA Technical Reports Server (NTRS)

    Rifman, S. S. (Principal Investigator); Allendoerfer, W. B.; Caron, R. H.; Pemberton, L. J.; Mckinnon, D. M.; Polanski, G.; Simon, K. W.

    1976-01-01

    The author has identified the following significant results. Results are reported for: (1) subscene registration, (2) full scene rectification and registration, (3) resampling techniques, and (4) ground control point (GCP) extraction. Subscenes (354 pixels x 234 lines) were registered to approximately 1/4 pixel accuracy and evaluated by change detection imagery for three cases: (1) bulk data registration, (2) precision correction of a reference subscene using GCP data, and (3) independently precision processed subscenes. Full scene rectification and registration results were evaluated by using a correlation technique to measure registration errors of 0.3 pixel rms throughout the full scene. Resampling evaluations of nearest neighbor and TRW cubic convolution processed data included change detection imagery and feature classification. Resampled data were also evaluated for an MSS scene containing specular solar reflections.

  3. Effectiveness of a computerized alert system based on re-testing intervals for limiting the inappropriateness of laboratory test requests.

    PubMed

    Lippi, Giuseppe; Brambilla, Marco; Bonelli, Patrizia; Aloe, Rosalia; Balestrino, Antonio; Nardelli, Anna; Ceda, Gian Paolo; Fabi, Massimo

    2015-11-01

    There is consolidated evidence that the burden of inappropriate laboratory test requests is very high, up to 70%. We describe here the function of a computerized alert system linked to the order entry, designed to limit the number of potentially inappropriate laboratory test requests. A computerized alert system based on re-testing intervals, which generates pop-up alerts when preset criteria of appropriateness for 15 laboratory tests are violated, was implemented in two clinical wards of the University Hospital of Parma. The effectiveness of the system for limiting potentially inappropriate tests was monitored for 6 months. Overall, 765/3539 (22%) test requests violated the preset criteria of appropriateness and generated an electronic alert. After the alert appeared, 591 requests were annulled (17% of total tests requested and 77% of alerted tests, respectively). The total number of test requests violating the preset criteria of appropriateness decreased constantly over time (26% in the first three months of implementation versus 17% in the following period; p < 0.001). The total financial saving from withdrawn tests was 3387 Euros (12.8% of the total test cost) throughout the study period. The results of this study suggest that a computerized alert system may be effective in limiting the inappropriateness of laboratory test requests, generating significant economic savings and educating physicians in a more efficient use of laboratory resources. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
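
    The core of such an alert system is a check of the elapsed time since the last identical test against a preset minimum re-testing interval. The sketch below shows that check with illustrative intervals; the test names and intervals are placeholders, not the hospital's actual rules.

        from datetime import date, timedelta

        # Re-testing-interval check: flag a request if the same test was already
        # performed for the patient within the preset minimum interval.
        # Intervals below are illustrative placeholders.
        MIN_RETEST_INTERVAL = {
            "HbA1c": timedelta(days=90),
            "lipid_panel": timedelta(days=365),
        }

        def is_potentially_inappropriate(test: str, last_done: date, requested: date) -> bool:
            interval = MIN_RETEST_INTERVAL.get(test)
            return interval is not None and (requested - last_done) < interval

        # Example: HbA1c requested 40 days after the previous one -> alert.
        print(is_potentially_inappropriate("HbA1c", date(2015, 1, 10), date(2015, 2, 19)))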

  4. Finding differentially expressed genes in high dimensional data: Rank based test statistic via a distance measure.

    PubMed

    Mathur, Sunil; Sadana, Ajit

    2015-12-01

    We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and does not assume the distribution of the parent population. Simulation studies show that the proposed test is more powerful than some of the commonly used methods, such as the paired t-test, the Wilcoxon signed-rank test, and significance analysis of microarrays (SAM), under certain non-normal distributions. The asymptotic distribution of the test statistic and the p-value function are discussed. The application of the proposed method is shown using a real-life data set. © The Author(s) 2011.

  5. From integrated observation of pre-earthquake signals towards physical-based forecasting: A prospective test experiment

    NASA Astrophysics Data System (ADS)

    Ouzounov, D.; Pulinets, S. A.; Tramutoli, V.; Lee, L.; Liu, J. G.; Hattori, K.; Kafatos, M.

    2013-12-01

    We are conducting an integrated study involving multi-parameter observations over different seismo-tectonic regions in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several selected parameters, namely gas discharge, thermal infrared radiation, ionospheric electron concentration, and atmospheric temperature and humidity, which we suppose are associated with the earthquake preparation phase. We intended to test, in prospective mode, the set of geophysical measurements over different regions of active earthquakes and volcanoes. In 2012-13 we established a collaborative framework with the leading projects PRE-EARTHQUAKE (EU) and iSTEP3 (Taiwan) for coordinated measurements and prospective validation over seven test regions: Southern California (USA), Eastern Honshu (Japan), Italy, Turkey, Greece, Taiwan (ROC), Kamchatka and Sakhalin (Russia). The current experiment provided a 'stress test' opportunity to validate the physics-based approach in real time over regions of high seismicity. Our initial results are: (1) prospective tests have shown the real-time presence of anomalies in the atmosphere before most of the significant (M>5.5) earthquakes in all regions; (2) the false alarm rate differs by region, varying between 50% (Italy, Kamchatka and California) and 25% (Taiwan and Japan), with a significant reduction of false positives when at least two parameters are used concurrently; (3) one of the most complex problems, which is still open, is the systematic collection and real-time integration of pre-earthquake observations. Our findings suggest that physics-based short-term forecasting is feasible and that more tests are needed. We discuss the physical concept we used, the future integration of data observations and related developments.

  6. Tunable Absorption System based on magnetorheological elastomers and Halbach array: design and testing

    NASA Astrophysics Data System (ADS)

    Bocian, Mirosław; Kaleta, Jerzy; Lewandowski, Daniel; Przybylski, Michał

    2017-08-01

    In this paper, the systematic design, construction and testing of a Tunable Absorption System (TAS) incorporating a magnetorheological elastomer (MRE) are investigated. The TAS has been designed for energy absorption and the mitigation of vibratory motions from an impact excitation. The main advantage of the designed TAS is its ability to change and adapt to working conditions. Tunability is realised through a change in the magnetic field caused by changing the internal arrangement of permanent magnets within a double dipolar circular Halbach array. To show the capabilities of the tested system, experiments based on an impulse excitation have been performed. Significant changes in the TAS's natural frequency and damping characteristics were obtained. By incorporating magnetic tunability within the TAS, significant qualitative and quantitative changes in the device's mechanical properties and performance were obtained.

  7. Random forests-based differential analysis of gene sets for gene expression data.

    PubMed

    Hsueh, Huey-Miin; Zhou, Da-Wei; Tsai, Chen-An

    2013-04-10

    In DNA microarray studies, gene-set analysis (GSA) has become a focus of gene expression data analysis. GSA utilizes the gene expression profiles of functionally related gene sets in Gene Ontology (GO) categories or a priori defined biological classes to assess the significance of gene sets associated with clinical outcomes or phenotypes. Many statistical approaches have been proposed to determine whether such functionally related gene sets are differentially expressed (enrichment and/or deletion) across variations of phenotypes. However, little attention has been given to the discriminatory power of gene sets and the classification of patients. In this study, we propose a method of gene set analysis in which gene sets are used to develop classifications of patients based on the Random Forest (RF) algorithm. The corresponding empirical p-value of the observed out-of-bag (OOB) error rate of the classifier is introduced to identify differentially expressed gene sets using an adequate resampling method. In addition, we discuss the impacts and correlations of genes within each gene set based on the measures of variable importance in the RF algorithm. Significant classifications are reported and visualized together with the underlying gene sets and their contribution to the phenotypes of interest. Numerical studies using both synthesized data and a series of publicly available gene expression data sets are conducted to evaluate the performance of the proposed methods. Compared with other hypothesis testing approaches, our proposed methods are reliable and successful in identifying enriched gene sets and in discovering the contributions of genes within a gene set. The classification results of identified gene sets can provide a valuable alternative to gene set testing to reveal the unknown, biologically relevant classes of samples or patients. In summary, our proposed method allows one to simultaneously assess the discriminatory ability of gene sets and the importance of genes for
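
    A hedged sketch of the core idea follows: fit a random forest to one gene set's expression matrix, record the out-of-bag (OOB) error, and obtain an empirical p-value by refitting on permuted phenotype labels. The data are synthetic and scikit-learn is assumed; the resampling details of the paper may differ.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Classify with a random forest on one gene set's expression matrix and get an
        # empirical p-value for its OOB error by refitting on permuted labels.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((80, 25))            # 80 samples x 25 genes in the set
        y = (X[:, 0] + X[:, 1] + 0.5 * rng.standard_normal(80) > 0).astype(int)

        def oob_error(X, y, seed=0):
            rf = RandomForestClassifier(n_estimators=300, oob_score=True,
                                        random_state=seed, n_jobs=-1).fit(X, y)
            return 1.0 - rf.oob_score_

        observed = oob_error(X, y)
        B = 100
        null_errors = [oob_error(X, rng.permutation(y), seed=b) for b in range(B)]
        p_value = (1 + sum(e <= observed for e in null_errors)) / (B + 1)
        print(f"OOB error = {observed:.3f}, permutation p = {p_value:.3f}")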

  8. Towards Universal Voluntary HIV Testing and Counselling: A Systematic Review and Meta-Analysis of Community-Based Approaches

    PubMed Central

    Suthar, Amitabh B.; Ford, Nathan; Bachanas, Pamela J.; Wong, Vincent J.; Rajan, Jay S.; Saltzman, Alex K.; Ajose, Olawale; Fakoya, Ade O.; Granich, Reuben M.; Negussie, Eyerusalem K.; Baggaley, Rachel C.

    2013-01-01

    Background Effective national and global HIV responses require a significant expansion of HIV testing and counselling (HTC) to expand access to prevention and care. Facility-based HTC, while essential, is unlikely to meet national and global targets on its own. This article systematically reviews the evidence for community-based HTC. Methods and Findings PubMed was searched on 4 March 2013, clinical trial registries were searched on 3 September 2012, and Embase and the World Health Organization Global Index Medicus were searched on 10 April 2012 for studies including community-based HTC (i.e., HTC outside of health facilities). Randomised controlled trials, and observational studies were eligible if they included a community-based testing approach and reported one or more of the following outcomes: uptake, proportion receiving their first HIV test, CD4 value at diagnosis, linkage to care, HIV positivity rate, HTC coverage, HIV incidence, or cost per person tested (outcomes are defined fully in the text). The following community-based HTC approaches were reviewed: (1) door-to-door testing (systematically offering HTC to homes in a catchment area), (2) mobile testing for the general population (offering HTC via a mobile HTC service), (3) index testing (offering HTC to household members of people with HIV and persons who may have been exposed to HIV), (4) mobile testing for men who have sex with men, (5) mobile testing for people who inject drugs, (6) mobile testing for female sex workers, (7) mobile testing for adolescents, (8) self-testing, (9) workplace HTC, (10) church-based HTC, and (11) school-based HTC. The Newcastle-Ottawa Quality Assessment Scale and the Cochrane Collaboration's “risk of bias” tool were used to assess the risk of bias in studies with a comparator arm included in pooled estimates.  117 studies, including 864,651 participants completing HTC, met the inclusion criteria. The percentage of people offered community-based HTC who accepted HTC

  9. Theory-Based University Admissions Testing for a New Millennium

    ERIC Educational Resources Information Center

    Sternberg, Robert J.

    2004-01-01

    This article describes two projects based on Robert J. Sternberg's theory of successful intelligence and designed to provide theory-based testing for university admissions. The first, Rainbow Project, provided a supplementary test of analytical, practical, and creative skills to augment the SAT in predicting college performance. The Rainbow…

  10. Marking Strategies in Metacognition-Evaluated Computer-Based Testing

    ERIC Educational Resources Information Center

    Chen, Li-Ju; Ho, Rong-Guey; Yen, Yung-Chin

    2010-01-01

    This study aimed to explore the effects of marking and metacognition-evaluated feedback (MEF) in computer-based testing (CBT) on student performance and review behavior. Marking is a strategy, in which students place a question mark next to a test item to indicate an uncertain answer. The MEF provided students with feedback on test results…

  11. Design of a Microcomputer-Based Adaptive Testing System.

    ERIC Educational Resources Information Center

    Vale, C. David

    This paper explores the feasibility of developing a single-user microcomputer-based testing system. Testing literature was surveyed to discover types of test items that might be used in the system and to compile a list of strategies that such a system might use. Potential users were surveyed. Several were interviewed, and a questionnaire was…

  12. Detail of north side of Test Stand 'A' base, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Detail of north side of Test Stand 'A' base, showing tanks for distilled water (left), fuel (center), and gaseous nitrogen (right). Other tanks present for tests were removed before this image was taken. - Jet Propulsion Laboratory Edwards Facility, Test Stand A, Edwards Air Force Base, Boron, Kern County, CA

  13. Frequency of Testing for Dyslipidemia: An Evidence-Based Analysis

    PubMed Central

    2014-01-01

    Background Dyslipidemias include high levels of total cholesterol, low-density lipoprotein (LDL) cholesterol, and triglycerides and low levels of high-density lipoprotein (HDL) cholesterol. Dyslipidemia is a risk factor for cardiovascular disease, which is a major contributor to mortality in Canada. Approximately 23% of the 2009/11 Canadian Health Measures Survey (CHMS) participants had a high level of LDL cholesterol, with prevalence increasing with age, and approximately 15% had a total cholesterol to HDL ratio above the threshold. Objectives To evaluate the frequency of lipid testing in adults not diagnosed with dyslipidemia and in adults on treatment for dyslipidemia. Research Methods A systematic review of the literature set out to identify randomized controlled trials (RCTs), systematic reviews, health technology assessments (HTAs), and observational studies published between January 1, 2000, and November 29, 2012, that evaluated the frequency of testing for dyslipidemia in the 2 populations. Results Two observational studies assessed the frequency of lipid testing, 1 in individuals not on lipid-lowering medications and 1 in treated individuals. Both studies were based on previously collected data intended for a different objective and, therefore, no conclusions could be reached about the frequency of testing at intervals other than the ones used in the original studies. Given this limitation and generalizability issues, the quality of evidence was considered very low. No evidence for the frequency of lipid testing was identified in the 2 HTAs included. Canadian and international guidelines recommend testing for dyslipidemia in individuals at an increased risk for cardiovascular disease. The frequency of testing recommended is based on expert consensus. Conclusions Conclusions on the frequency of lipid testing could not be made based on the 2 observational studies. Current guidelines recommend lipid testing in adults with increased cardiovascular risk, with

  14. Tests of gravity with future space-based experiments

    NASA Astrophysics Data System (ADS)

    Sakstein, Jeremy

    2018-03-01

    Future space-based tests of relativistic gravitation—laser ranging to Phobos, accelerometers in orbit, and optical networks surrounding Earth—will constrain the theory of gravity with unprecedented precision by testing the inverse-square law, the strong and weak equivalence principles, and the deflection and time delay of light by massive bodies. In this paper, we estimate the bounds that could be obtained on alternative gravity theories that use screening mechanisms to suppress deviations from general relativity in the Solar System: chameleon, symmetron, and Galileon models. We find that space-based tests of the parametrized post-Newtonian parameter γ will constrain chameleon and symmetron theories to new levels, and that tests of the inverse-square law using laser ranging to Phobos will provide the most stringent constraints on Galileon theories to date. We end by discussing the potential for constraining these theories using upcoming tests of the weak equivalence principle, and conclude that further theoretical modeling is required in order to fully utilize the data.

  15. Model Based Analysis and Test Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  16. Reliability, Validity, and Sensitivity of a Novel Smartphone-Based Eccentric Hamstring Strength Test in Professional Football Players.

    PubMed

    Lee, Justin W Y; Cai, Ming-Jing; Yung, Patrick S H; Chan, Kai-Ming

    2018-05-01

    To evaluate the test-retest reliability, sensitivity, and concurrent validity of a smartphone-based method for assessing eccentric hamstring strength among male professional football players. A total of 25 healthy male professional football players performed the Chinese University of Hong Kong (CUHK) Nordic break-point test, a hamstring fatigue protocol, and an isokinetic hamstring strength test. The CUHK Nordic break-point test is based on a Nordic hamstring exercise. The Nordic break-point angle was defined as the maximum point at which the participant could no longer support the weight of his body against gravity. The criterion for the sensitivity test was the pre- to post-sprinting difference in the Nordic break-point angle with a hamstring fatigue protocol. The hamstring fatigue protocol consists of 12 repetitions of a 30-m sprint with 30-s recoveries between sprints. Hamstring peak torque from the isokinetic hamstring strength test was used as the criterion for validity. A high test-retest reliability (intraclass correlation coefficient = .94; 95% confidence interval, .82-.98) was found in the Nordic break-point angle measurements. The Nordic break-point angle significantly correlated with isokinetic hamstring peak torques at eccentric action of 30°/s (r = .88, r² = .77, P < .001). The minimal detectable difference was 8.03°. The measure was sensitive enough that a significant difference (effect size = 0.70, P < .001) was found between pre-sprinting and post-sprinting values. The CUHK Nordic break-point test is a simple, portable, quick smartphone-based method for providing reliable and accurate eccentric hamstring strength measures among male professional football players.
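
    The minimal detectable difference reported above is conventionally derived from the ICC and the between-subject standard deviation via the standard error of measurement. The sketch below shows that arithmetic with the reported ICC of .94 and a placeholder SD chosen only for illustration.

        import math

        # Conventional test-retest agreement formulas:
        #   SEM   = SD * sqrt(1 - ICC)
        #   MDC95 = 1.96 * sqrt(2) * SEM
        # ICC = 0.94 is reported in the abstract; the between-subject SD below is a
        # placeholder chosen only to illustrate the arithmetic.
        icc = 0.94
        sd_between_subjects = 12.0   # degrees, hypothetical

        sem = sd_between_subjects * math.sqrt(1 - icc)
        mdc95 = 1.96 * math.sqrt(2) * sem
        print(f"SEM = {sem:.2f} deg, MDC95 = {mdc95:.2f} deg")  # of the order of the reported 8.03 deg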

  17. Vehicle Fault Diagnose Based on Smart Sensor

    NASA Astrophysics Data System (ADS)

    Zhining, Li; Peng, Wang; Jianmin, Mei; Jianwei, Li; Fei, Teng

    In a vehicle's traditional fault diagnosis system, a computer with an A/D card is typically used, with many sensors connected to it. The disadvantages of this arrangement are that the sensors can hardly be shared with the control system and other systems, there are too many connection lines, and electromagnetic compatibility (EMC) is degraded. In this paper, a smart speed sensor, smart acoustic pressure sensor, smart oil pressure sensor, smart acceleration sensor and smart order-tracking sensor were designed to solve this problem. Via the CAN bus, these smart sensors, the fault diagnosis computer and other computers can be connected to establish a network system that monitors and controls the vehicle's diesel engine and other systems without any duplicate sensors. The hardware and software of the smart sensor system are introduced; the oil pressure, vibration and acoustic signals are resampled at constant angle increments to eliminate the influence of rotation speed. After resampling, the signal in every working cycle can be averaged in the angle domain and subjected to further analysis such as order spectrum analysis.
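
    The angle-domain resampling described above (order tracking) can be sketched as follows: integrate the measured shaft speed to obtain a phase angle for every time sample, then interpolate the signal onto a uniform angle grid. The signals and speed profile below are synthetic placeholders.

        import numpy as np

        # Angle-domain (order-tracking) resampling: integrate the measured shaft speed
        # to get a phase angle for every time sample, then interpolate the signal onto
        # a uniform angle grid. Signals are synthetic.
        fs = 10_000.0                               # sampling rate, Hz
        t = np.arange(0, 2.0, 1 / fs)
        rpm = 1500 + 300 * t                        # run-up: speed varies with time
        omega = rpm * 2 * np.pi / 60.0              # rad/s
        theta = np.cumsum(omega) / fs               # shaft angle at each time sample, rad

        signal = np.sin(4 * theta) + 0.1 * np.random.default_rng(0).standard_normal(t.size)

        samples_per_rev = 256                       # constant angle increment
        theta_uniform = np.arange(0, theta[-1], 2 * np.pi / samples_per_rev)
        signal_angle = np.interp(theta_uniform, theta, signal)  # angle-domain signal
        print(signal_angle.shape)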

  18. Computer-Based English Language Testing in China: Present and Future

    ERIC Educational Resources Information Center

    Yu, Guoxing; Zhang, Jing

    2017-01-01

    In this special issue on high-stakes English language testing in China, the two articles on computer-based testing (Jin & Yan; He & Min) highlight a number of consistent, ongoing challenges and concerns in the development and implementation of the nationwide IB-CET (Internet Based College English Test) and institutional computer-adaptive…

  19. Comparing perceived and test-based knowledge of cancer risk and prevention among Hispanic and African Americans: an example of community participatory research.

    PubMed

    Jones, Loretta; Bazargan, Mohsen; Lucas-Wright, Anna; Vadgama, Jaydutt V; Vargas, Roberto; Smith, James; Otoukesh, Salman; Maxwell, Annette E

    2013-01-01

    Most theoretical formulations acknowledge that knowledge and awareness of cancer screening and prevention recommendations significantly influence health behaviors. This study compares perceived knowledge of cancer prevention and screening with test-based knowledge in a community sample. We also examine demographic variables and self-reported cancer screening and prevention behaviors as correlates of both knowledge scores, and consider whether cancer-related knowledge can be accurately assessed using just a few simple questions in a short and easy-to-complete survey. We used a community-partnered participatory research approach to develop our study aims and a survey. The study sample was composed of 180 predominantly African American and Hispanic community members who participated in a full-day cancer prevention and screening promotion conference in South Los Angeles, California, in July 2011. Participants completed a self-administered survey in English or Spanish at the beginning of the conference. Our data indicate that perceived and test-based knowledge scores are only moderately correlated. The perceived knowledge score shows a stronger association with demographic characteristics and other cancer-related variables than the test-based score. Thirteen of the twenty variables examined in our study showed a statistically significant correlation with the perceived knowledge score, whereas only four variables demonstrated a statistically significant correlation with the test-based knowledge score. Perceived knowledge of cancer prevention and screening was assessed with fewer items than test-based knowledge; thus, using this assessment could potentially reduce respondent burden. However, our data demonstrate that perceived and test-based knowledge are separate constructs.

  20. Distance-based microfluidic quantitative detection methods for point-of-care testing.

    PubMed

    Tian, Tian; Li, Jiuxing; Song, Yanling; Zhou, Leiji; Zhu, Zhi; Yang, Chaoyong James

    2016-04-07

    Equipment-free devices with quantitative readout are of great significance to point-of-care testing (POCT), which provides real-time readout to users and is especially important in low-resource settings. Among various equipment-free approaches, distance-based visual quantitative detection methods rely on reading the visual signal length for corresponding target concentrations, thus eliminating the need for sophisticated instruments. The distance-based methods are low-cost, user-friendly and can be integrated into portable analytical devices. Moreover, such methods enable quantitative detection of various targets by the naked eye. In this review, we first introduce the concept and history of distance-based visual quantitative detection methods. Then, we summarize the main methods for translation of molecular signals to distance-based readout and discuss different microfluidic platforms (glass, PDMS, paper and thread) in terms of applications in biomedical diagnostics, food safety monitoring, and environmental analysis. Finally, the potential and future perspectives are discussed.

  1. Scopolamine provocation-based pharmacological MRI model for testing procognitive agents.

    PubMed

    Hegedűs, Nikolett; Laszy, Judit; Gyertyán, István; Kocsis, Pál; Gajári, Dávid; Dávid, Szabolcs; Deli, Levente; Pozsgay, Zsófia; Tihanyi, Károly

    2015-04-01

    There is a huge unmet need to understand and treat pathological cognitive impairment. The development of disease-modifying cognitive enhancers is hindered by the lack of a correct pathomechanism and of suitable animal models. Most animal models used to study cognition and pathology fulfil neither the predictive, face nor construct validity criteria, and their outcome measures differ greatly from those of human trials. Fortunately, some pharmacological agents such as scopolamine evoke similar effects on cognition and cerebral circulation in rodents and humans, and functional MRI enables us to compare cognitive agents directly across species. In this paper we report the validation of a scopolamine-based rodent pharmacological MRI provocation model. The effects of putative procognitive agents (donepezil, vinpocetine, piracetam, and the alpha-7-selective cholinergic compounds EVP-6124 and PNU-120596) on blood-oxygen-level-dependent responses were compared and also linked to rodent cognitive models. All of these drugs except piracetam showed a significant effect on the scopolamine-induced blood-oxygen-level-dependent change. In the water labyrinth test, only PNU-120596 did not show a significant effect. This provocation model is suitable for testing procognitive compounds. These functional MR imaging experiments can be paralleled with human studies, which may help reduce the number of false cognitive clinical trials. © The Author(s) 2015.

  2. Gene-Based Testing of Interactions in Association Studies of Quantitative Traits

    PubMed Central

    Ma, Li; Clark, Andrew G.; Keinan, Alon

    2013-01-01

    Various methods have been developed for identifying gene–gene interactions in genome-wide association studies (GWAS). However, most methods focus on individual markers as the testing unit, and the large number of such tests drastically erodes statistical power. In this study, we propose novel interaction tests of quantitative traits that are gene-based and that confer advantages in both statistical power and biological interpretation. The framework of gene-based gene–gene interaction (GGG) tests combines marker-based interaction tests between all pairs of markers in two genes to produce a gene-level test for interaction between the two. The tests are based on an analytical formula we derive for the correlation between marker-based interaction tests due to linkage disequilibrium. We propose four GGG tests that extend the following P value combining methods: minimum P value, extended Simes procedure, truncated tail strength, and truncated P value product. Extensive simulations indicate correct type I error rates for all tests and show that the two truncated tests are more powerful than the others when the markers involved in the underlying interaction are not directly genotyped and when multiple interactions underlie the trait. We applied our tests to pairs of genes that exhibit a protein–protein interaction to test for gene-level interactions underlying lipid levels, using genotype data from the Atherosclerosis Risk in Communities study. We identified five novel interactions that are not evident from marker-based interaction testing and successfully replicated one of these interactions, between SMAD3 and NEDD9, in an independent sample from the Multi-Ethnic Study of Atherosclerosis. We conclude that our GGG tests show improved power to identify gene-level interactions in existing, as well as emerging, association studies. PMID:23468652
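
    As a rough illustration of the P value combining step (not the authors' GGG procedure, which additionally corrects for the correlation among marker-based tests induced by linkage disequilibrium), a sketch of the Simes combination and a truncated P value product over hypothetical marker-pair interaction P values might look like this:

```python
# Sketch of gene-level combination of marker-pair interaction P values.
# NOTE: this toy version assumes independent tests; the GGG tests account for
# LD-induced correlation between marker-based interaction tests.
import numpy as np

def simes_combined_p(pvals):
    """Simes combination of a vector of P values (independence assumed here)."""
    p = np.sort(np.asarray(pvals))
    m = len(p)
    return np.min(m * p / np.arange(1, m + 1))

def truncated_product_stat(pvals, tau=0.05):
    """Truncated P value product: log-product of P values below the threshold tau.
    Its null distribution would be obtained by simulation or permutation."""
    p = np.asarray(pvals)
    below = p[p <= tau]
    return np.sum(np.log(below)) if below.size else 0.0

# Hypothetical marker-pair interaction P values between two genes
pair_pvals = [0.32, 0.01, 0.48, 0.002, 0.15, 0.07]
print(simes_combined_p(pair_pvals), truncated_product_stat(pair_pvals))
```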

  3. Bayesian models based on test statistics for multiple hypothesis testing problems.

    PubMed

    Ji, Yuan; Lu, Yiling; Mills, Gordon B

    2008-04-01

    We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both the null and alternative hypotheses. We substantially reduce the complexity of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check whether our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. Finally, we apply the proposed methodology to an siRNA screen and a gene expression experiment.
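
    A toy sketch of the general idea, assuming fixed mixture parameters rather than the estimation procedure of the paper: model the observed test statistics as a two-group mixture, compute each statistic's posterior probability of being null, and grow the rejection set while the running average of those posterior probabilities (a Bayesian FDR) stays below the target level.

```python
# Toy sketch: two-group mixture model on z-statistics with Bayesian FDR control.
# Mixture parameters are fixed for illustration; in practice they are estimated.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])  # simulated statistics

pi0, mu1, sd1 = 0.9, 3.0, 1.0          # assumed mixture parameters (illustrative)
f0 = norm.pdf(z, 0, 1)                 # null density of the test statistic
f1 = norm.pdf(z, mu1, sd1)             # alternative density
post_null = pi0 * f0 / (pi0 * f0 + (1 - pi0) * f1)   # posterior probability of the null

order = np.argsort(post_null)
cum_mean = np.cumsum(post_null[order]) / np.arange(1, len(z) + 1)
q = 0.05
n_reject = int(np.sum(cum_mean <= q))  # largest rejection set with Bayesian FDR <= q
print(n_reject)
```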

  4. Space Launch System Base Heating Test: Tunable Diode Laser Absorption Spectroscopy

    NASA Technical Reports Server (NTRS)

    Parker, Ron; Carr, Zak; MacLean, Matthew; Dufrene, Aaron; Mehta, Manish

    2016-01-01

    This paper describes the Tunable Diode Laser Absorption Spectroscopy (TDLAS) measurement of several water transitions that were interrogated during hot-fire testing of the Space Launch System (SLS) sub-scale vehicle installed in LENS II. The temperature of the recirculating gas flow over the base plate was found to increase with altitude, consistent with CFD results. It was also observed that the gas above the base plate has significant velocity along the optical path of the sensor at the higher altitudes. The line-by-line analysis of the H2O absorption features must therefore include the effects of the Doppler shift, particularly at high altitude. The TDLAS measurements and the analysis procedure, which incorporates the velocity-dependent flow, are described.
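
    The velocity correction amounts to a Doppler shift of each line centre. A back-of-the-envelope sketch, with an illustrative line position and line-of-sight velocity (not values from the SLS test):

```python
# Non-relativistic Doppler shift of an H2O absorption line centre for a bulk gas
# velocity component along the optical path. Numbers are illustrative only.
C = 299_792_458.0            # speed of light, m/s
nu0_cm = 7185.6              # assumed line centre in wavenumbers (cm^-1), illustrative
v_los = 250.0                # assumed line-of-sight velocity, m/s

delta_nu_cm = nu0_cm * v_los / C   # shift of the line centre in cm^-1
print(f"Doppler shift ~ {delta_nu_cm:.4e} cm^-1")
```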

  5. Smart device-based testing for medical students in Korea: satisfaction, convenience, and advantages.

    PubMed

    Lim, Eun Young; Yim, Mi Kyoung; Huh, Sun

    2017-01-01

    The aim of this study was to investigate respondents' satisfaction with smart device-based testing (SBT), as well as its convenience and advantages, in order to improve its implementation. The survey was conducted among 108 junior medical students at Kyungpook National University School of Medicine, Korea, who took a practice licensing examination using SBT in September 2015. The survey contained 28 items scored on a 5-point Likert scale. The items were divided into three categories: satisfaction with SBT administration, convenience of SBT features, and advantages of SBT compared to paper-and-pencil or computer-based testing. The reliability of the survey was 0.95. Of the three categories, the convenience of the SBT features received the highest mean (M) score (M = 3.75, standard deviation [SD] = 0.69), while satisfaction with SBT received the lowest (M = 3.13, SD = 1.07). No statistically significant differences across these categories were observed with respect to sex, age, or experience. These results indicate that SBT was practical and effective both to take and to administer.

  6. International Guidelines on Computer-Based and Internet-Delivered Testing

    ERIC Educational Resources Information Center

    International Journal of Testing, 2006

    2006-01-01

    Developed by the International Test Commission, the International Guidelines on Computer-Based and Internet-Delivered Testing are a set of guidelines specifically developed to highlight good practice issues in relation to computer/Internet tests and testing. These guidelines have been developed from an international perspective and are directed at…

  7. Statistical Significance for Hierarchical Clustering

    PubMed Central

    Kimes, Patrick K.; Liu, Yufeng; Hayes, D. Neil; Marron, J. S.

    2017-01-01

    Cluster analysis has proved to be an invaluable tool for the exploratory and unsupervised analysis of high-dimensional datasets. Among methods for clustering, hierarchical approaches have enjoyed substantial popularity in genomics and other fields for their ability to simultaneously uncover multiple layers of clustering structure. A critical and challenging question in cluster analysis is whether the identified clusters represent important underlying structure or are artifacts of natural sampling variation. Few approaches have been proposed for addressing this problem in the context of hierarchical clustering, for which the problem is further complicated by the natural tree structure of the partition and the multiplicity of tests required to parse the layers of nested clusters. In this paper, we propose a Monte Carlo-based approach for testing statistical significance in hierarchical clustering that addresses these issues. The approach is implemented as a sequential testing procedure guaranteeing control of the family-wise error rate. Theoretical justification is provided for our approach, and its power to detect true clustering structure is illustrated through several simulation studies and applications to two cancer gene expression datasets. PMID:28099990
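
    As a generic illustration of Monte Carlo significance testing for clustering (not the authors' sequential, FWER-controlling procedure), one can compare the tightness of the top split found by hierarchical clustering against splits obtained on data simulated from a single Gaussian fitted to the observations:

```python
# Generic Monte Carlo illustration: is the top split from hierarchical clustering
# tighter than expected under a single Gaussian null fitted to the data?
# A smaller cluster index indicates stronger clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_index(X, labels):
    total = np.sum((X - X.mean(0)) ** 2)
    within = sum(np.sum((X[labels == k] - X[labels == k].mean(0)) ** 2)
                 for k in np.unique(labels))
    return within / total

def top_split_index(X):
    labels = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")
    return cluster_index(X, labels)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(2, 1, (30, 5))])  # toy data

obs = top_split_index(X)
mean, cov = X.mean(0), np.cov(X, rowvar=False)
null = [top_split_index(rng.multivariate_normal(mean, cov, size=len(X)))
        for _ in range(200)]
p_value = (1 + sum(n <= obs for n in null)) / (1 + len(null))
print(obs, p_value)
```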

  8. Population-based biobank participants’ preferences for receiving genetic test results

    PubMed Central

    Yamamoto, Kayono; Hachiya, Tsuyoshi; Fukushima, Akimune; Nakaya, Naoki; Okayama, Akira; Tanno, Kozo; Aizawa, Fumie; Tokutomi, Tomoharu; Hozawa, Atsushi; Shimizu, Atsushi

    2017-01-01

    There are ongoing debates on issues relating to returning individual research results (IRRs) and incidental findings (IFs) generated by genetic research in population-based biobanks. To understand how to appropriately return genetic results from biobank studies, we surveyed preferences for returning IRRs and IFs among participants of the Tohoku Medical Megabank Project (TMM). We mailed a questionnaire to individuals enrolled in the TMM cohort study (Group 1; n=1031) and a group of Tohoku region residents (Group 2; n=2314). Respondents were required to be over 20 years of age. Nearly 90% of Group 1 participants and over 80% of Group 2 participants expressed a preference for receiving their genetic test results. Furthermore, over 60% of both groups preferred to receive their genetic results ‘from a genetic specialist.’ A logistic regression analysis revealed that engaging in ‘health-conscious behaviors’ (such as regular physical activity, having a healthy diet, and intentionally reducing alcohol intake and/or smoking) was significantly and positively associated with preferring to receive genetic test results (odds ratio = 2.397 in Group 1 and 1.897 in Group 2). Our findings provide useful information and predictors regarding the return of IRRs and IFs in a population-based biobank. PMID:28794501
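
    For readers unfamiliar with the reported analysis, the sketch below shows the type of logistic regression and odds ratio calculation described, using simulated data rather than the TMM survey responses.

```python
# Illustrative sketch: logistic regression of "prefers to receive genetic results" on a
# health-conscious-behaviour indicator, reporting the odds ratio. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
health_conscious = rng.integers(0, 2, n)                  # 0/1 indicator
logit_p = -0.5 + np.log(2.4) * health_conscious           # true odds ratio ~ 2.4
prefers_results = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(health_conscious)
fit = sm.Logit(prefers_results, X).fit(disp=0)
print(np.exp(fit.params[1]))   # estimated odds ratio for health-conscious behaviour
```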

  9. Evaluation of a low-cost liquid-based Pap test in rural El Salvador: a split-sample study.

    PubMed

    Guo, Jin; Cremer, Miriam; Maza, Mauricio; Alfaro, Karla; Felix, Juan C

    2014-04-01

    We sought to test the diagnostic efficacy of a low-cost, liquid-based cervical cytology method that could be implemented in low-resource settings. A prospective, split-sample Pap study was performed in 595 women attending a cervical cancer screening clinic in rural El Salvador. Collected cervical samples were used to make a conventional Pap (cell sample applied directly to a glass slide), whereas residual material was used to make the liquid-based sample using the ClearPrep method. Selected residual samples from the liquid-based collection were tested for the presence of high-risk human papillomaviruses. Of the 595 patients, 570 received the same diagnosis from both methods (95.8% agreement). There were comparable numbers of unsatisfactory cases; however, ClearPrep significantly increased the detection of low-grade squamous intraepithelial lesions and decreased the number of diagnoses of atypical squamous cells of undetermined significance. ClearPrep identified an equivalent number of high-grade squamous intraepithelial lesion cases as the conventional Pap. High-risk human papillomavirus was identified in the residual fluid of the ClearPrep vials in all cases of high-grade squamous intraepithelial lesion, adenocarcinoma in situ, and cancer, as well as in 78% of low-grade squamous intraepithelial lesions. The low-cost ClearPrep Pap test demonstrated detection of squamous intraepithelial lesions equivalent to the conventional Pap smear and showed potential for ancillary molecular testing. The test appears to be a viable option for implementation in low-resource settings.

  10. Exploring pharmacy and home-based sexually transmissible infection testing

    PubMed Central

    Habel, Melissa A.; Scheinmann, Roberta; Verdesoto, Elizabeth; Gaydos, Charlotte; Bertisch, Maggie; Chiasson, Mary Ann

    2015-01-01

    Background This study assessed the feasibility and acceptability of pharmacy and home-based sexually transmissible infection (STI) screening as alternate testing venues among emergency contraception (EC) users. Methods The study included two phases in February 2011–July 2012. In Phase I, customers purchasing EC from eight pharmacies in Manhattan received vouchers for free STI testing at onsite medical clinics. In Phase II, three Facebook ads targeted EC users to connect them with free home-based STI test kits ordered online. Participants completed a self-administered survey. Results Only 38 participants enrolled in Phase I: 90% female, ≤29 years (74%), 45% White non-Hispanic and 75% college graduates; 71% were not tested for STIs in the past year and 68% reported a new partner in the past 3 months. None tested positive for STIs. In Phase II, ads led to >45 000 click-throughs, 382 completed the survey and 290 requested kits; 28% were returned. Phase II participants were younger and less educated than Phase I participants; six tested positive for STIs. Challenges included recruitment, pharmacy staff participation, advertising with discretion and cost. Conclusions This study found low uptake of pharmacy and home-based testing among EC users; however, STI testing in these settings is feasible and the acceptability findings indicate an appeal among younger women for testing in non-traditional settings. Collaborating with and training pharmacy and medical staff are key elements of service provision. Future research should explore how different permutations of expanding screening in non-traditional settings could improve testing uptake and detect additional STI cases. PMID:26409484

  11. Exploring pharmacy and home-based sexually transmissible infection testing.

    PubMed

    Habel, Melissa A; Scheinmann, Roberta; Verdesoto, Elizabeth; Gaydos, Charlotte; Bertisch, Maggie; Chiasson, Mary Ann

    2015-11-01

    Background This study assessed the feasibility and acceptability of pharmacy and home-based sexually transmissible infection (STI) screening as alternate testing venues among emergency contraception (EC) users. The study included two phases in February 2011-July 2012. In Phase I, customers purchasing EC from eight pharmacies in Manhattan received vouchers for free STI testing at onsite medical clinics. In Phase II, three Facebook ads targeted EC users to connect them with free home-based STI test kits ordered online. Participants completed a self-administered survey. Only 38 participants enrolled in Phase I: 90% female, ≤29 years (74%), 45% White non-Hispanic and 75% college graduates; 71% were not tested for STIs in the past year and 68% reported a new partner in the past 3 months. None tested positive for STIs. In Phase II, ads led to >45000 click-throughs, 382 completed the survey and 290 requested kits; 28% were returned. Phase II participants were younger and less educated than Phase I participants; six tested positive for STIs. Challenges included recruitment, pharmacy staff participation, advertising with discretion and cost. This study found low uptake of pharmacy and home-based testing among EC users; however, STI testing in these settings is feasible and the acceptability findings indicate an appeal among younger women for testing in non-traditional settings. Collaborating with and training pharmacy and medical staff are key elements of service provision. Future research should explore how different permutations of expanding screening in non-traditional settings could improve testing uptake and detect additional STI cases.

  12. Empirical likelihood-based tests for stochastic ordering

    PubMed Central

    BARMI, HAMMOU EL; MCKEAGUE, IAN W.

    2013-01-01

    This paper develops an empirical likelihood approach to testing for the presence of stochastic ordering among univariate distributions based on independent random samples from each distribution. The proposed test statistic is formed by integrating a localized empirical likelihood statistic with respect to the empirical distribution of the pooled sample. The asymptotic null distribution of this test statistic is found to have a simple distribution-free representation in terms of standard Brownian bridge processes. The approach is used to compare the lengths of rule of Roman Emperors over various historical periods, including the “decline and fall” phase of the empire. In a simulation study, the power of the proposed test is found to improve substantially upon that of a competing test due to El Barmi and Mukerjee. PMID:23874142
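
    The following is not the empirical likelihood statistic of the paper; it is a simpler, permutation-based one-sided Kolmogorov-Smirnov-type check for stochastic ordering between two samples, included only to illustrate a resampling approach to the same question.

```python
# Permutation-based one-sided check for stochastic ordering between two samples.
# Toy data; not the Roman-emperor reign lengths analysed in the paper.
import numpy as np

def one_sided_ks(x, y, grid):
    Fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.max(Fx - Fy)          # large if x tends to take smaller values than y

rng = np.random.default_rng(3)
x = rng.exponential(1.0, 80)        # simulated sample 1
y = rng.exponential(1.5, 80)        # simulated sample 2, stochastically larger
grid = np.sort(np.concatenate([x, y]))

obs = one_sided_ks(x, y, grid)
pooled = np.concatenate([x, y])
perm = []
for _ in range(999):
    rng.shuffle(pooled)
    perm.append(one_sided_ks(pooled[:len(x)], pooled[len(x):], grid))
p_value = (1 + sum(s >= obs for s in perm)) / 1000
print(obs, p_value)
```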

  13. Statistical inference, the bootstrap, and neural-network modeling with application to foreign exchange rates.

    PubMed

    White, H; Racine, J

    2001-01-01

    We propose tests for individual and joint irrelevance of network inputs. Such tests can be used to determine whether an input or group of inputs "belong" in a particular model, thus permitting valid statistical inference based on estimated feedforward neural-network models. The approaches employ well-known statistical resampling techniques. We conduct a small Monte Carlo experiment showing that our tests have reasonable level and power behavior, and we apply our methods to examine whether there are predictable regularities in foreign exchange rates. We find that exchange rates do appear to contain information that is exploitable for enhanced point prediction, but the nature of the predictive relations evolves through time.
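
    In the same spirit (though not the authors' test statistic), one can gauge whether an input "belongs" in a fitted feedforward network by repeatedly permuting that input and summarizing the resulting increase in out-of-sample error; the sketch below uses scikit-learn's MLPRegressor on simulated data.

```python
# Resampling-style relevance check for network inputs: permute one input at a time in a
# held-out set and measure the increase in prediction error. Input 2 is irrelevant here.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
n = 600
X = rng.normal(size=(n, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
base_mse = np.mean((net.predict(X_te) - y_te) ** 2)

def permuted_mse_gain(col):
    Xp = X_te.copy()
    Xp[:, col] = Xp[rng.permutation(len(Xp)), col]   # break the link to this input
    return np.mean((net.predict(Xp) - y_te) ** 2) - base_mse

for col in range(3):
    gains = [permuted_mse_gain(col) for _ in range(200)]
    print(col, np.mean(gains), np.quantile(gains, [0.025, 0.975]))
```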

  14. OPTIS: a satellite-based test of special and general relativity

    NASA Astrophysics Data System (ADS)

    Lämmerzahl, Claus; Dittus, Hansjörg; Peters, Achim; Schiller, Stephan

    2001-07-01

    A new satellite-based test of special and general relativity is proposed. For the Michelson-Morley test we expect an improvement of at least three orders of magnitude, and for the Kennedy-Thorndike test an improvement of more than one order of magnitude. Furthermore, an improvement of two orders of magnitude is projected for the test of the universality of the gravitational redshift, obtained by comparing an atomic clock with an optical clock. The tests are based on ultrastable optical cavities, lasers, an atomic clock, and a frequency comb generator.

  15. Changes in self-reported disability after performance-based tests in obese and non-obese individuals diagnosed with osteoarthritis of the knee.

    PubMed

    Coriolano, Kamary; Aiken, Alice; Pukall, Caroline; Harrison, Mark

    2015-01-01

    The purposes of this study are threefold: (1) to examine whether the WOMAC questionnaire should be administered before or after performance-based tests; (2) to assess whether self-reported disability scores before and after performance-based tests differ between obese and non-obese individuals; and (3) to observe whether physical activity and BMI predict self-reported disability before and after performance-based tests. This longitudinal study included thirty-one participants diagnosed with knee osteoarthritis (OA) by an orthopedic surgeon using the Kellgren-Lawrence scale. All WOMAC scores were significantly higher after than before completion of the performance-based tests. This pattern of results suggests that the WOMAC questionnaire should be administered to individuals with OA after performance-based tests. The obese OA group differed significantly from the non-obese OA group on all WOMAC scores. Physical activity and BMI explained a significant proportion of the variance in self-reported disability. Obese individuals with knee OA may over-estimate their ability to perform physical activities and under-estimate their level of disability compared with non-obese individuals with knee OA. In addition, self-reported physical activity appears to be a strong indicator of disability in individuals with knee OA, particularly for individuals with a sedentary lifestyle. Implications for Rehabilitation Osteoarthritis is a progressive, disabling joint condition that restricts physical function and participation in daily activities, particularly in elderly individuals. Obesity is a comorbidity commonly associated with osteoarthritis, and it appears to increase self-reported disability in those diagnosed with osteoarthritis of the knee. In a relatively small sample, this study recommends that rehabilitation professionals obtain self-report questionnaires of disability after performance-based tests in obese individuals with osteoarthritis of the knee as they are more

  16. Superparamagnetic nanoparticle-based viscosity test

    NASA Astrophysics Data System (ADS)

    Wu, Kai; Liu, Jinming; Wang, Yi; Ye, Clark; Feng, Yinglong; Wang, Jian-Ping

    2015-08-01

    Hyperviscosity syndrome is triggered by high blood viscosity in the human body. This syndrome can result in retinopathy, vertigo, coma, and other unanticipated complications. Serum viscosity is one of the important factors affecting whole blood viscosity, which is regarded as an indicator of general health. In this letter, we propose and demonstrate a Brownian relaxation-based mixing-frequency method to test human serum viscosity. This method uses excitation and detection coils together with Brownian relaxation-dominated superparamagnetic nanoparticles, which are sensitive to properties of the liquid environment such as viscosity and temperature. We collect the harmonic signals produced by the magnetic nanoparticles and estimate the viscosity of unknown solutions by comparison with calibration curves. An in vitro human serum viscosity test is performed in less than 1.5 min.
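
    The calibration-curve step can be sketched as a simple interpolation from a measured harmonic signal ratio to a viscosity estimate; the numbers below are illustrative, not measurements from the letter.

```python
# Illustrative sketch of the calibration-curve comparison: map a measured harmonic
# ratio to a viscosity estimate by interpolating against solutions of known viscosity.
import numpy as np

# Hypothetical calibration: harmonic ratio measured in solutions of known viscosity (mPa*s)
calib_viscosity = np.array([1.0, 1.5, 2.0, 3.0, 4.0])
calib_ratio = np.array([0.30, 0.26, 0.22, 0.17, 0.13])   # ratio falls as viscosity rises

def estimate_viscosity(measured_ratio: float) -> float:
    # np.interp needs increasing x, so interpolate on the reversed calibration curve
    return float(np.interp(measured_ratio, calib_ratio[::-1], calib_viscosity[::-1]))

print(estimate_viscosity(0.20))   # e.g. a serum sample with measured ratio 0.20
```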

  17. Inquiry-Based Instruction and High Stakes Testing

    NASA Astrophysics Data System (ADS)

    Cothern, Rebecca L.

    Science education is a key to economic success for a country in terms of promoting advances in national industry and technology and maximizing competitive advantage in a global marketplace. The December 2010 Program for International Student Assessment (PISA) results ranked the United States 23rd of 65 countries in science. That dismal standing in science proficiency impedes the ability of American school graduates to compete in the global marketplace. Furthermore, the implementation of high-stakes testing in science, mandated under the No Child Left Behind (NCLB) Act beginning in 2007, has created an additional need for educators to find effective science pedagogy. Research has shown that inquiry-based science instruction is one of the predominant science instructional methods. Inquiry-based instruction is a multifaceted teaching method with its theoretical foundation in constructivism. A correlational survey research design was used to determine the relationship between levels of inquiry-based science instruction and student performance on a standardized state science test. A self-report survey, using a Likert-type scale, was completed by 26 fifth-grade teachers. Participants' responses were analyzed and grouped as high-, medium-, or low-level inquiry instruction. The unit of analysis for the achievement variable was the average student scale score on the state science test. Spearman's rho correlation showed a positive relationship between the level of inquiry-based instruction and student achievement on the state assessment. The findings can assist teachers and administrators by providing additional research on the benefits of the inquiry-based instructional method. Implications for positive social change include increases in student proficiency and in decision-making skills related to science policy issues, which can help make students more competitive in the global marketplace.

  18. Prognostic significance of electrical alternans versus signal averaged electrocardiography in predicting the outcome of electrophysiological testing and arrhythmia-free survival

    NASA Technical Reports Server (NTRS)

    Armoundas, A. A.; Rosenbaum, D. S.; Ruskin, J. N.; Garan, H.; Cohen, R. J.

    1998-01-01

    OBJECTIVE: To investigate the accuracy of signal averaged electrocardiography (SAECG) and measurement of microvolt level T wave alternans as predictors of susceptibility to ventricular arrhythmias. DESIGN: Analysis of new data from a previously published prospective investigation. SETTING: Electrophysiology laboratory of a major referral hospital. PATIENTS AND INTERVENTIONS: 43 patients, not on class I or class III antiarrhythmic drug treatment, undergoing invasive electrophysiological testing had SAECG and T wave alternans measurements. The SAECG was considered positive in the presence of one (SAECG-I) or two (SAECG-II) of three standard criteria. T wave alternans was considered positive if the alternans ratio exceeded 3.0. MAIN OUTCOME MEASURES: Inducibility of sustained ventricular tachycardia or fibrillation during electrophysiological testing, and 20 month arrhythmia-free survival. RESULTS: The accuracy of T wave alternans in predicting the outcome of electrophysiological testing was 84% (p < 0.0001). Neither SAECG-I (accuracy 60%; p < 0.29) nor SAECG-II (accuracy 71%; p < 0.10) was a statistically significant predictor of electrophysiological testing. SAECG, T wave alternans, electrophysiological testing, and follow up data were available in 36 patients while not on class I or III antiarrhythmic agents. The accuracy of T wave alternans in predicting the outcome of arrhythmia-free survival was 86% (p < 0.030). Neither SAECG-I (accuracy 65%; p < 0.21) nor SAECG-II (accuracy 71%; p < 0.48) was a statistically significant predictor of arrhythmia-free survival. CONCLUSIONS: T wave alternans was a highly significant predictor of the outcome of electrophysiological testing and arrhythmia-free survival, while SAECG was not a statistically significant predictor. Although these results need to be confirmed in prospective clinical studies, they suggest that T wave alternans may serve as a non-invasive probe for screening high risk populations for malignant ventricular

  19. A hybrid approach to fault diagnosis of roller bearings under variable speed conditions

    NASA Astrophysics Data System (ADS)

    Wang, Yanxue; Yang, Lin; Xiang, Jiawei; Yang, Jianwei; He, Shuilong

    2017-12-01

    Rolling element bearings are among the main elements in rotating machines, and their failure may lead to a fatal breakdown and significant economic losses. Conventional vibration-based diagnostic methods rely on a stationarity assumption and thus are not applicable to the diagnosis of bearings working under varying speeds, a constraint that significantly limits their industrial application. A hybrid approach to fault diagnosis of roller bearings under variable speed conditions is proposed in this work, based on computed order tracking (COT) and a variational mode decomposition (VMD)-based time-frequency representation (VTFR). COT is used to resample the non-stationary vibration signal in the angular domain, while VMD is used to decompose the resampled signal into a number of band-limited intrinsic mode functions (BLIMFs). A VTFR is then constructed from the estimated instantaneous frequency and instantaneous amplitude of each BLIMF. Moreover, the Gini index and time-frequency kurtosis are proposed to quantitatively measure the sparsity and concentration of the time-frequency representation, respectively. The effectiveness of the VTFR for extracting nonlinear components has been verified using a bat signal. Results of this numerical simulation also show that the sparsity and concentration of the VTFR are better than those of the short-time Fourier transform, continuous wavelet transform, Hilbert-Huang transform, and Wigner-Ville distribution. Several experimental results further demonstrate that the proposed method can reliably detect bearing faults under variable speed conditions.
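
    The core of the COT step is resampling the time-domain signal at uniform shaft-angle increments so that speed variation no longer smears the order spectrum. A minimal sketch with a synthetic speed profile (in practice the shaft phase would come from a tachometer or be estimated from the signal):

```python
# Minimal sketch of computed order tracking: resample a vibration signal onto a
# uniform shaft-angle grid so that orders appear as sharp spectral peaks despite
# varying speed. The speed profile and signal are synthetic.
import numpy as np

fs = 10_000.0
t = np.arange(0, 2.0, 1 / fs)
speed_hz = 20 + 5 * t                        # shaft accelerates from 20 Hz to 30 Hz
phase_rev = np.cumsum(speed_hz) / fs         # shaft angle in revolutions
signal = np.sin(2 * np.pi * 7 * phase_rev)   # a 7th-order component
signal += 0.2 * np.random.default_rng(5).normal(size=t.size)

samples_per_rev = 64
angle_grid = np.arange(0, phase_rev[-1], 1 / samples_per_rev)
resampled = np.interp(angle_grid, phase_rev, signal)   # signal on a uniform angular grid

# An FFT of the angularly resampled signal now peaks near order 7.
order_spectrum = np.abs(np.fft.rfft(resampled)) / len(resampled)
orders = np.fft.rfftfreq(len(resampled), d=1 / samples_per_rev)
print(orders[np.argmax(order_spectrum[1:]) + 1])   # ~7
```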

  20. "Taste Strips" - a rapid, lateralized, gustatory bedside identification test based on impregnated filter papers.

    PubMed

    Landis, Basile Nicolas; Welge-Luessen, Antje; Brämerson, Annika; Bende, Mats; Mueller, Christian Albert; Nordin, Steven; Hummel, Thomas

    2009-02-01

    To establish normative values for a clinical psychophysical taste test ("Taste Strips"). The "Taste Strips" are a psychophysical chemical taste test. So far, no definitive normative data have been published, and only a fairly small sample has been investigated. In light of this shortcoming of an otherwise easy, reliable, and quick taste-testing device, we attempted to provide normative values suitable for clinical use. Normative value acquisition study; multicenter study. The investigation involved 537 participants reporting a normal sense of smell and taste (318 female, 219 male, mean age 44 years, age range 18-87 years). The taste test was based on spoon-shaped filter-paper strips ("Taste Strips") impregnated with the four taste qualities (sweet, sour, salty, and bitter) in four different concentrations. The strips were placed on the left or right side of the anterior third of the extended tongue, resulting in a total of 32 trials. With their tongue still extended, participants had to identify the taste from a list of four descriptors, i.e., sweet, sour, salty, and bitter (multiple forced choice). To obtain an impression of overall gustatory function, the number of correctly identified tastes was summed to yield a "taste score". Taste function decreased significantly with age. Women exhibited significantly higher taste scores than men, and this was true for all age groups. The taste score at the 10th percentile was selected as a cut-off value to distinguish normogeusia from hypogeusia. Results from a small series of patients with ageusia confirmed the clinical usefulness of the proposed normative values. The present data provide normative values for the "Taste Strips" based on over 500 subjects tested.
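
    A tiny sketch of the scoring and cut-off logic described, using simulated scores rather than the study's normative data:

```python
# Sum correct identifications over 32 strip presentations into a taste score, then use
# the 10th percentile of normative scores as the hypogeusia cut-off. Scores are simulated.
import numpy as np

rng = np.random.default_rng(6)
normative_scores = rng.binomial(32, 0.8, size=537)   # simulated taste scores (0-32)
cutoff = np.percentile(normative_scores, 10)

def classify(taste_score: int) -> str:
    return "normogeusia" if taste_score >= cutoff else "hypogeusia"

print(cutoff, classify(20))
```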