Sample records for hypothesis testing procedures

  1. Knowledge dimensions in hypothesis test problems

    NASA Astrophysics Data System (ADS)

    Krishnan, Saras; Idris, Noraini

    2012-05-01

    The reform in statistics education over the past two decades has predominantly shifted the focus of statistical teaching and learning from procedural understanding to conceptual understanding. Procedural understanding emphasizes formulas and calculation procedures, whereas conceptual understanding emphasizes students knowing why they are using a particular formula or executing a specific procedure. In addition, the Revised Bloom's Taxonomy offers a two-dimensional framework for describing learning objectives, comprising the six revised cognition levels of the original Bloom's taxonomy and four knowledge dimensions. Depending on the level of complexity, the four knowledge dimensions essentially distinguish basic understanding from more connected understanding. This study identifies the factual, procedural and conceptual knowledge dimensions in hypothesis test problems. The hypothesis test, an important tool for making inferences about a population from sample information, is taught in many introductory statistics courses. However, researchers find that students in these courses still have difficulty understanding the underlying concepts of the hypothesis test. Past studies also show that even though students can perform the hypothesis testing procedure, they may not understand the rationale for executing these steps or know how to apply them in novel contexts. Besides knowing the procedural steps in conducting a hypothesis test, students must have fundamental statistical knowledge and a deep understanding of underlying inferential concepts such as the sampling distribution and the central limit theorem. By identifying the knowledge dimensions of hypothesis test problems, this study provides a basis for developing instructional and assessment strategies that enhance students' learning of the hypothesis test as a valuable inferential tool.

  2. Unscaled Bayes factors for multiple hypothesis testing in microarray experiments.

    PubMed

    Bertolino, Francesco; Cabras, Stefano; Castellanos, Maria Eugenia; Racugno, Walter

    2015-12-01

    Multiple hypothesis testing collects a series of techniques usually based on p-values as a summary of the available evidence from many statistical tests. In hypothesis testing under a Bayesian perspective, the evidence for a specified hypothesis against an alternative, conditionally on data, is given by the Bayes factor. In this study, we approach multiple hypothesis testing based on both Bayes factors and p-values, regarding multiple hypothesis testing as a multiple model selection problem. To obtain the Bayes factors we assume default priors that are typically improper. In this case, the Bayes factor is usually undetermined due to the ratio of prior pseudo-constants. We show that ignoring the prior pseudo-constants leads to unscaled Bayes factors, which do not invalidate the inferential procedure in multiple hypothesis testing because they are used within a comparative scheme. In fact, using partial information from the p-values, we are able to approximate the sampling null distribution of the unscaled Bayes factor and use it within Efron's multiple testing procedure. The simulation study suggests that under a normal sampling model, even with small sample sizes, our approach yields false positive and false negative proportions lower than those of other common multiple hypothesis testing approaches based only on p-values. The proposed procedure is illustrated in two simulation studies, and the advantages of its use are shown in the analysis of two microarray experiments. © The Author(s) 2011.

  3. Test of association: which one is the most appropriate for my study?

    PubMed

    Gonzalez-Chica, David Alejandro; Bastos, João Luiz; Duquia, Rodrigo Pereira; Bonamigo, Renan Rangel; Martínez-Mesa, Jeovany

    2015-01-01

    Hypothesis tests are statistical tools widely used for assessing whether or not there is an association between two or more variables. These tests provide the probability of a type 1 error (the p-value), which is used to reject or retain the null study hypothesis. We aim to provide a practical guide to help researchers carefully select the most appropriate procedure for answering their research question. We discuss the logic of hypothesis testing and present the prerequisites of each procedure based on practical examples.
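
    As a concrete illustration of one such procedure (a minimal sketch, not code from the record): the chi-squared test of independence compares the observed cell counts of a contingency table with the counts expected under no association. The Python example below uses an illustrative 2x2 table; for a 2x2 table the statistic has one degree of freedom, so the p-value can be obtained directly from the complementary error function.

```python
import math

def chi2_independence(table):
    """Pearson chi-squared test of independence for a 2x2 table.

    Returns the test statistic and its p-value (1 degree of freedom).
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    # For 1 df: P(chi2 > x) = erfc(sqrt(x / 2))
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

stat, p = chi2_independence([[30, 10], [20, 40]])
```

    Here the small p-value indicates strong evidence of an association. In practice a library routine such as scipy.stats.chi2_contingency handles tables of any size and applies a continuity correction where appropriate.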

  4. Using the Coefficient of Confidence to Make the Philosophical Switch from a Posteriori to a Priori Inferential Statistics

    ERIC Educational Resources Information Center

    Trafimow, David

    2017-01-01

    There has been much controversy over the null hypothesis significance testing procedure, with much of the criticism centered on the problem of inverse inference. Specifically, p gives the probability of the finding (or one more extreme) given the null hypothesis, whereas the null hypothesis significance testing procedure involves drawing a…

  5. A two-step hierarchical hypothesis set testing framework, with applications to gene expression data on ordered categories

    PubMed Central

    2014-01-01

    Background In complex large-scale experiments, in addition to simultaneously considering a large number of features, multiple hypotheses are often tested for each feature. This leads to a problem of multi-dimensional multiple testing. For example, in gene expression studies over ordered categories (such as time-course or dose-response experiments), interest is often in testing differential expression across several categories for each gene. In this paper, we consider a framework for testing multiple sets of hypotheses, which can be applied to a wide range of problems. Results We adopt the concept of the overall false discovery rate (OFDR) for controlling false discoveries on the hypothesis set level. Based on an existing procedure for identifying differentially expressed gene sets, we discuss a general two-step hierarchical hypothesis set testing procedure, which controls the overall false discovery rate under independence across hypothesis sets. In addition, we discuss the concept of the mixed-directional false discovery rate (mdFDR), and extend the general procedure to enable directional decisions for two-sided alternatives. We applied the framework to the case of microarray time-course/dose-response experiments, and proposed three procedures for testing differential expression and making multiple directional decisions for each gene. Simulation studies confirm the control of the OFDR and mdFDR by the proposed procedures under independence and positive correlations across genes. Simulation results also show that two of our new procedures achieve higher power than previous methods. Finally, the proposed methodology is applied to a microarray dose-response study, to identify 17β-estradiol-sensitive genes in breast cancer cells that are induced at low concentrations. Conclusions The framework we discuss provides a platform for multiple testing procedures covering situations involving two (or potentially more) sources of multiplicity.
The framework is easy to use and adaptable to various practical settings that frequently occur in large-scale experiments. Procedures generated from the framework are shown to maintain control of the OFDR and mdFDR, quantities that are especially relevant in the case of multiple hypothesis set testing. The procedures work well in both simulations and real datasets, and are shown to have better power than existing methods. PMID:24731138
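
    For context on the false discovery rate machinery this record builds on, the classic Benjamini-Hochberg step-up procedure controls the FDR for a single flat family of hypotheses. A minimal Python sketch of that classical procedure (not the hierarchical OFDR method of the record):

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a list of booleans (True = rejected) in the original order,
    controlling the false discovery rate at level alpha under independence.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0  # largest rank (1-based) whose p-value clears its threshold
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            k = rank
    reject = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= k:
            reject[idx] = True
    return reject

rejected = benjamini_hochberg([0.001, 0.2, 0.02, 0.04, 0.01])
```

    Note the step-up logic: every hypothesis up to the largest rank that clears its threshold is rejected, even if some intermediate p-value missed its own threshold.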

  6. A Critique of One-Tailed Hypothesis Test Procedures in Business and Economics Statistics Textbooks.

    ERIC Educational Resources Information Center

    Liu, Tung; Stone, Courtenay C.

    1999-01-01

    Surveys introductory business and economics statistics textbooks and finds that they differ over the best way to explain one-tailed hypothesis tests: the simple null-hypothesis approach or the composite null-hypothesis approach. Argues that the composite null-hypothesis approach contains methodological shortcomings that make it more difficult for…

  7. Hypothesis testing for band size detection of high-dimensional banded precision matrices.

    PubMed

    An, Baiguo; Guo, Jianhua; Liu, Yufeng

    2014-06-01

    Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, cross-validation is commonly used; however, we show that cross-validation not only is computationally intensive but can be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.

  8. Fisher, Neyman-Pearson or NHST? A tutorial for teaching data testing.

    PubMed

    Perezgonzalez, Jose D

    2015-01-01

    Despite frequent calls for the overhaul of null hypothesis significance testing (NHST), this controversial procedure remains ubiquitous in behavioral, social and biomedical teaching and research. Little change seems possible once the procedure becomes well ingrained in the minds and current practice of researchers; thus, the optimal opportunity for such change is at the time the procedure is taught, be this at undergraduate or at postgraduate levels. This paper presents a tutorial for the teaching of data testing procedures, often referred to as hypothesis testing theories. The first procedure introduced is Fisher's approach to data testing (tests of significance); the second is Neyman-Pearson's approach (tests of acceptance); the final procedure is the incongruent combination of the previous two theories into the current approach, NHST. For those researchers sticking with the latter, two compromise solutions on how to improve NHST conclude the tutorial.
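
    The contrast between the two framings can be made concrete with a one-sample z-test (an illustrative sketch, not taken from the tutorial): Fisher's approach reports the p-value as a graded measure of evidence, while Neyman-Pearson fixes alpha in advance and returns only a binary decision.

```python
import math

def z_test(sample_mean, mu0, sigma, n, alpha=0.05):
    """One-sample two-sided z-test, reported in both framings.

    Fisher: the p-value as a continuous measure of evidence against H0.
    Neyman-Pearson: a binary decision at a pre-specified level alpha.
    """
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    p_value = math.erfc(abs(z) / math.sqrt(2))  # P(|Z| >= |z|) under H0
    decision = "reject H0" if p_value <= alpha else "retain H0"
    return z, p_value, decision

# Illustrative numbers: observed mean 103 vs. hypothesized 100,
# known sigma 15, n = 100.
z, p, decision = z_test(sample_mean=103.0, mu0=100.0, sigma=15.0, n=100)
```

    A Fisherian report would state p ≈ 0.046 and leave the interpretation to the reader; a Neyman-Pearson report would state only "reject H0 at alpha = 0.05". NHST, as the tutorial notes, mixes the two.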

  9. An Exercise for Illustrating the Logic of Hypothesis Testing

    ERIC Educational Resources Information Center

    Lawton, Leigh

    2009-01-01

    Hypothesis testing is one of the more difficult concepts for students to master in a basic, undergraduate statistics course. Students often are puzzled as to why statisticians simply don't calculate the probability that a hypothesis is true. This article presents an exercise that forces students to lay out on their own a procedure for testing a…

  10. Improving the Crossing-SIBTEST Statistic for Detecting Non-uniform DIF.

    PubMed

    Chalmers, R Philip

    2018-06-01

    This paper demonstrates that, after applying a simple modification to Li and Stout's (Psychometrika 61(4):647-677, 1996) CSIBTEST statistic, an improved variant of the statistic could be realized. It is shown that this modified version of CSIBTEST has a more direct association with the SIBTEST statistic presented by Shealy and Stout (Psychometrika 58(2):159-194, 1993). In particular, the asymptotic sampling distributions and general interpretation of the effect size estimates are the same for SIBTEST and the new CSIBTEST. Given the more natural connection to SIBTEST, it is shown that Li and Stout's hypothesis testing approach is insufficient for CSIBTEST; thus, an improved hypothesis testing procedure is required. Based on the presented arguments, a new chi-squared-based hypothesis testing approach is proposed for the modified CSIBTEST statistic. Positive results from a modest Monte Carlo simulation study strongly suggest the original CSIBTEST procedure and randomization hypothesis testing approach should be replaced by the modified statistic and hypothesis testing method.

  11. Bayesian Methods for Determining the Importance of Effects

    USDA-ARS?s Scientific Manuscript database

    Criticisms have plagued the frequentist null-hypothesis significance testing (NHST) procedure since the day it was created from Fisher's Significance Test and the Hypothesis Test of Jerzy Neyman and Egon Pearson. Alternatives to NHST exist in frequentist statistics, but competing methods are also avai...

  12. Planned Hypothesis Tests Are Not Necessarily Exempt from Multiplicity Adjustment

    ERIC Educational Resources Information Center

    Frane, Andrew V.

    2015-01-01

    Scientific research often involves testing more than one hypothesis at a time, which can inflate the probability that a Type I error (false discovery) will occur. To prevent this Type I error inflation, adjustments can be made to the testing procedure that compensate for the number of tests. Yet many researchers believe that such adjustments are…

  13. Monitoring Items in Real Time to Enhance CAT Security

    ERIC Educational Resources Information Center

    Zhang, Jinming; Li, Jie

    2016-01-01

    An IRT-based sequential procedure is developed to monitor items for enhancing test security. The procedure uses a series of statistical hypothesis tests to examine whether the statistical characteristics of each item under inspection have changed significantly during CAT administration. This procedure is compared with a previously developed…

  14. Nonparametric relevance-shifted multiple testing procedures for the analysis of high-dimensional multivariate data with small sample sizes.

    PubMed

    Frömke, Cornelia; Hothorn, Ludwig A; Kropf, Siegfried

    2008-01-27

    In many research areas it is necessary to find differences between treatment groups with several variables. For example, studies of microarray data seek to find, for each variable, a significant difference of the location parameters from zero, or of their ratio from one. However, in some studies a significant deviation of the difference in locations from zero (or of the ratio from one) is biologically meaningless. A relevant difference or ratio is sought in such cases. This article addresses the use of relevance-shifted tests on ratios for a multivariate parallel two-sample group design. Two empirical procedures are proposed which embed the relevance-shifted test on ratios. As both procedures test a hypothesis for each variable, the resulting multiple testing problem has to be considered; hence, the procedures include a multiplicity correction. Both procedures are extensions of available procedures for point null hypotheses achieving exact control of the familywise error rate. Whereas the shift of the null hypothesis alone would give straightforward solutions, the problems motivating the empirical considerations discussed here arise from the fact that the shift is considered in both directions, and the whole parameter space between these two limits has to be accepted as the null hypothesis. The first procedure uses a permutation algorithm and is appropriate for designs with a moderately large number of observations. However, many experiments have limited sample sizes; then the second procedure might be more appropriate, where multiplicity is corrected according to a concept of data-driven order of hypotheses.

  15. Exploring the psychological underpinnings of the moral mandate effect: motivated reasoning, group differentiation, or anger?

    PubMed

    Mullen, Elizabeth; Skitka, Linda J

    2006-04-01

    When people have strong moral convictions about outcomes, their judgments of both outcome and procedural fairness become driven more by whether outcomes support or oppose their moral mandates than by whether procedures are proper or improper (the moral mandate effect). Two studies tested 3 explanations for the moral mandate effect. In particular, people with moral mandates may (a) have a greater motivation to seek out procedural flaws when outcomes fail to support their moral point of view (the motivated reasoning hypothesis), (b) be influenced by in-group distributive biases as a result of identifying with parties that share rather than oppose their moral point of view (the group differentiation hypothesis), or (c) react with anger when outcomes are inconsistent with their moral point of view, which, in turn, colors perceptions of both outcomes and procedures (the anger hypothesis). Results support the anger hypothesis.

  16. Behavioral Treatment of Pseudobulbar Affect: A Case Report.

    PubMed

    Perotti, Laurence P; Cummings, Latiba D; Mercado, Janyna

    2016-04-01

    To determine if it is possible to successfully treat pseudobulbar affect (PBA) using a behavioral approach. Two experiments were conducted, each a double reversal design with the same single subject in both. The first experiment tested the hypothesis that the rate of PBA could be controlled by manipulation of its consequences. The second experiment tested the hypothesis that use of a self-control procedure would control the rate of PBA. Rate of PBA could not be controlled by consequence manipulation, but rate of PBA could be controlled through use of a self-control procedure. Pending confirmatory research, behavioral interventions utilizing self-control procedures should be considered in patients with PBA. © 2016 Wiley Periodicals, Inc.

  17. Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates

    PubMed Central

    Bartroff, Jay; Song, Jinlin

    2014-01-01

    This paper addresses the following general scenario: A scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically-valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but also may be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and only requires arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm’s (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control relative to fixed sample, sequential Bonferroni, and other recently proposed sequential procedures in a simulation study. PMID:25092948
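
    For reference, the fixed-sample procedure that inspired the method, Holm's (1979) step-down test, is short enough to sketch in full (a minimal Python version; the sequential extension described in the record is considerably more involved):

```python
def holm(p_values, alpha=0.05):
    """Holm's (1979) step-down procedure.

    Returns a list of booleans (True = rejected) in the original order,
    controlling the familywise error rate at level alpha under arbitrary
    dependence among the tests.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for step, idx in enumerate(order):  # step = 0 .. m-1
        if p_values[idx] <= alpha / (m - step):
            reject[idx] = True
        else:
            break  # step-down: stop at the first non-rejection
    return reject

rejected = holm([0.01, 0.04, 0.03, 0.02])
```

    The smallest p-value is compared against alpha/m (the Bonferroni threshold), the next against alpha/(m-1), and so on, stopping at the first failure; this is uniformly more powerful than plain Bonferroni while keeping the same FWER guarantee.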

  18. Robust Approach to Verifying the Weak Form of the Efficient Market Hypothesis

    NASA Astrophysics Data System (ADS)

    Střelec, Luboš

    2011-09-01

    The weak form of the efficient markets hypothesis states that prices incorporate only past information about the asset. An implication of this form of the hypothesis is that one cannot detect mispriced assets and consistently outperform the market through technical analysis of past prices. One possible formulation of the efficient market hypothesis used for weak-form tests is that share prices follow a random walk, meaning that returns are realizations of an IID sequence of random variables. Consequently, to verify the weak form of the efficient market hypothesis, we can use distribution tests, among others some tests of normality and/or some graphical methods. Many procedures for testing the normality of univariate samples have been proposed in the literature [7]. Today the most popular omnibus test of normality for general use is the Shapiro-Wilk test. The Jarque-Bera test is the most widely adopted omnibus test of normality in econometrics and related fields. In particular, the Jarque-Bera test (a test based on the classical measures of skewness and kurtosis) is frequently used when one is more concerned about heavy-tailed alternatives. As these measures are based on moments of the data, this test has a zero breakdown value [2]; in other words, a single outlier can make the test worthless. The reason so many classical procedures are nonrobust to outliers is that the parameters of the model are expressed in terms of moments, and their classical estimators are expressed in terms of sample moments, which are very sensitive to outliers. Another approach to robustness is to concentrate on the parameters of interest suggested by the problem under study. 
    Consequently, novel robust procedures for testing normality are presented in this paper to overcome the shortcomings of classical normality tests on financial data, which typically contain remote data points and additional types of deviations from normality. This study also discusses results of simulation power studies of these tests against selected alternatives. Based on the outcome of the power simulation study, selected normality tests were then used to verify the weak form of efficiency in Central European stock markets.
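
    The moment-based construction that makes the Jarque-Bera test non-robust is visible in a direct implementation (a minimal sketch with an illustrative sample): the statistic is built entirely from the sample skewness and kurtosis, so a single extreme observation can dominate both moments.

```python
import math

def jarque_bera(data):
    """Jarque-Bera normality test from sample skewness and kurtosis.

    JB = n/6 * (S^2 + (K - 3)^2 / 4), asymptotically chi-squared with
    2 df under normality, so the p-value is exp(-JB / 2).
    """
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n  # sample central moments
    m3 = sum((x - mean) ** 3 for x in data) / n
    m4 = sum((x - mean) ** 4 for x in data) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    jb = n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)
    return jb, math.exp(-jb / 2)

jb, p = jarque_bera(list(range(100)))  # symmetric, flat (light-tailed) sample
```

    Even this perfectly symmetric sample is borderline at the 5% level, because its kurtosis is well below 3; conversely, replacing one observation with a large outlier would inflate the fourth moment, and hence the statistic, arbitrarily, which is the zero-breakdown behavior the record criticizes.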

  19. A shift from significance test to hypothesis test through power analysis in medical research.

    PubMed

    Singh, G

    2006-01-01

    Until recently, medical research literature exhibited a substantial dominance of Fisher's significance test approach to statistical inference, which concentrates on the probability of a type I error, over Neyman-Pearson's hypothesis test, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomizes results into significant or non-significant with a P value. The Neyman-Pearson approach speaks of acceptance or rejection of the null hypothesis. Based on the same theory, these two approaches address the same objective and conclude in their own ways. The advancement of computing techniques and the availability of statistical software have resulted in increasing application of power calculations in medical research, and thereby in reporting the results of significance tests in the light of the power of the test as well. The significance test approach, when it incorporates power analysis, contains the essence of the hypothesis test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to Neyman-Pearson's hypothesis test procedure.
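
    The power calculations the record refers to are straightforward for a two-sided one-sample z-test (a minimal sketch; the effect size and sample size below are illustrative inputs, not values from the record):

```python
from statistics import NormalDist

def power_one_sample_z(effect, n, alpha=0.05):
    """Power of a two-sided one-sample z-test.

    `effect` is the standardized mean difference (delta / sigma) assumed
    under the alternative; `n` is the sample size.
    """
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)   # two-sided critical value
    shift = effect * n ** 0.5            # noncentrality of the z statistic
    # Probability the statistic lands in either rejection region:
    return nd.cdf(shift - z_crit) + nd.cdf(-shift - z_crit)

power = power_one_sample_z(effect=0.5, n=30)
```

    With a medium standardized effect of 0.5 and n = 30 the power is about 0.78, just under the conventional 0.8 target; increasing n until the function returns at least 0.8 is the usual sample-size calculation run in reverse.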

  20. Bayesian hypothesis testing for human threat conditioning research: an introduction and the condir R package

    PubMed Central

    Krypotos, Angelos-Miltiadis; Klugkist, Irene; Engelhard, Iris M.

    2017-01-01

    Threat conditioning procedures have allowed the experimental investigation of the pathogenesis of Post-Traumatic Stress Disorder. The findings of these procedures have also provided stable foundations for the development of relevant intervention programs (e.g. exposure therapy). Statistical inference of threat conditioning procedures is commonly based on p-values and Null Hypothesis Significance Testing (NHST). Nowadays, however, there is a growing concern about this statistical approach, as many scientists point to the various limitations of p-values and NHST. As an alternative, the use of Bayes factors and Bayesian hypothesis testing has been suggested. In this article, we apply this statistical approach to threat conditioning data. In order to enable the easy computation of Bayes factors for threat conditioning data we present a new R package named condir, which can be used either via the R console or via a Shiny application. This article provides both a non-technical introduction to Bayesian analysis for researchers using the threat conditioning paradigm, and the necessary tools for computing Bayes factors easily. PMID:29038683

  1. Procedural justice, occupational identification, and organizational commitment.

    DOT National Transportation Integrated Search

    1992-06-01

    Extending Tyler's (1989) group-value model, the present study tested the hypothesis that procedural justice may be of differential salience in the development of organizational commitment among individuals who identify primarily with their employing ...

  2. An Empirical Comparison of Selected Two-Sample Hypothesis Testing Procedures Which Are Locally Most Powerful Under Certain Conditions.

    ERIC Educational Resources Information Center

    Hoover, H. D.; Plake, Barbara

    The relative power of the Mann-Whitney statistic, the t-statistic, the median test, a test based on exceedances (A,B), and two special cases of (A,B) the Tukey quick test and the revised Tukey quick test, was investigated via a Monte Carlo experiment. These procedures were compared across four population probability models: uniform, beta, normal,…

  3. Hypothesis testing of scientific Monte Carlo calculations.

    PubMed

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
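
    The idea can be sketched for a Monte Carlo routine with a known answer (an illustrative example, not code from the paper): estimate pi by rejection sampling, then z-test the estimate against the true value. A correct implementation should fail such a test only at the chosen significance level, so repeated failures across seeds signal a bug.

```python
import math
import random

def mc_pi_test(n=100_000, seed=42):
    """Estimate pi by Monte Carlo and z-test the estimate against the
    known value -- a statistical unit test for a stochastic routine."""
    rng = random.Random(seed)
    # Fraction of random points in the unit square falling inside
    # the quarter circle of radius 1.
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(n))
    p_hat = hits / n
    estimate = 4 * p_hat
    se = 4 * math.sqrt(p_hat * (1 - p_hat) / n)  # binomial standard error
    z = (estimate - math.pi) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    return estimate, p_value

estimate, p = mc_pi_test()
```

    In a test suite one would assert that the p-value exceeds a small threshold (say 0.001) and treat a failure as grounds for re-running with fresh seeds before declaring a defect.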

  4. Hypothesis testing of scientific Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.

  5. Classical Testing in Functional Linear Models.

    PubMed

    Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab

    2016-01-01

    We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications.

  6. Classical Testing in Functional Linear Models

    PubMed Central

    Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab

    2016-01-01

    We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications. PMID:28955155

  7. Finite-sample and asymptotic sign-based tests for parameters of non-linear quantile regression with Markov noise

    NASA Astrophysics Data System (ADS)

    Sirenko, M. A.; Tarasenko, P. F.; Pushkarev, M. I.

    2017-01-01

    One of the most noticeable features of sign-based statistical procedures is the opportunity to build an exact test for simple hypothesis testing of parameters in a regression model. In this article, we extend the sign-based approach to the nonlinear case with dependent noise. The examined model is a multi-quantile regression, which makes it possible to test hypotheses not only about regression parameters but also about noise parameters.

  8. Conservativeness in Rejection of the Null Hypothesis when Using the Continuity Correction in the MH Chi-Square Test in DIF Applications

    ERIC Educational Resources Information Center

    Paek, Insu

    2010-01-01

    Conservative bias in rejection of a null hypothesis from using the continuity correction in the Mantel-Haenszel (MH) procedure was examined through simulation in a differential item functioning (DIF) investigation context in which statistical testing uses a prespecified level [alpha] for the decision on an item with respect to DIF. The standard MH…

  9. Exchange ideology as a moderator of the procedural justice-satisfaction relationship.

    DOT National Transportation Integrated Search

    1991-07-01

    The present study of 92 civilian Federal Government employees in a 2-month, full-time training program tested the hypothesis that exchange ideology would moderate the relationship between procedural justice perceptions and satisfaction with the train...

  10. Developing a Hypothetical Learning Trajectory for the Sampling Distribution of the Sample Means

    NASA Astrophysics Data System (ADS)

    Syafriandi

    2018-04-01

    Sampling distributions are special types of probability distributions that are important in hypothesis testing. The concept of a sampling distribution may well be the key concept in understanding how inferential procedures work. In this paper, we design a hypothetical learning trajectory (HLT) for the sampling distribution of the sample mean, and we discuss how the sampling distribution is used in hypothesis testing.
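
    The key property such a learning trajectory targets can be demonstrated by simulation (an illustrative sketch, not from the record): even for a skewed population, the sample means cluster around the population mean with spread sigma/sqrt(n), which is what makes the z- and t-procedures work.

```python
import random
import statistics

def sampling_distribution_of_mean(n=30, reps=2000, seed=1):
    """Simulate the sampling distribution of the sample mean for a skewed
    exponential(1) population (mu = 1, sigma = 1). By the central limit
    theorem it is approximately normal with mean mu and standard
    deviation sigma / sqrt(n)."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.expovariate(1.0) for _ in range(n))
             for _ in range(reps)]
    return statistics.fmean(means), statistics.stdev(means)

center, spread = sampling_distribution_of_mean()
```

    With n = 30, the simulated spread comes out near 1/sqrt(30) ≈ 0.18, matching the theoretical standard error; plotting the 2000 means would show the familiar near-normal histogram despite the skewed parent population.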

  11. The origins of levels-of-processing effects in a conceptual test: evidence for automatic influences of memory from the process-dissociation procedure.

    PubMed

    Bergerbest, Dafna; Goshen-Gottstein, Yonatan

    2002-12-01

    In three experiments, we explored automatic influences of memory in a conceptual memory task, as affected by a levels-of-processing (LoP) manipulation. We also explored the origins of the LoP effect by examining whether the effect emerged only when participants in the shallow condition truncated the perceptual processing (the lexical-processing hypothesis) or even when the entire word was encoded in this condition (the conceptual-processing hypothesis). Using the process-dissociation procedure and an implicit association-generation task, we found that the deep encoding condition yielded higher estimates of automatic influences than the shallow condition. In support of the conceptual processing hypothesis, the LoP effect was found even when the shallow task did not lead to truncated processing of the lexical units. We suggest that encoding for meaning is a prerequisite for automatic processing on conceptual tests of memory.

  12. Debates—Hypothesis testing in hydrology: Introduction

    NASA Astrophysics Data System (ADS)

    Blöschl, Günter

    2017-03-01

    This paper introduces the papers in the "Debates—Hypothesis testing in hydrology" series. The four articles in the series discuss whether and how the process of testing hypotheses leads to progress in hydrology. Repeated experiments with controlled boundary conditions are rarely feasible in hydrology. Research is therefore not easily aligned with the classical scientific method of testing hypotheses. Hypotheses in hydrology are often enshrined in computer models which are tested against observed data. Testability may be limited due to model complexity and data uncertainty. All four articles suggest that hypothesis testing has contributed to progress in hydrology and is needed in the future. However, the procedure is usually not as systematic as the philosophy of science suggests. A greater emphasis on a creative reasoning process on the basis of clues and explorative analyses is therefore needed.

  13. Acquisition of Formal Operations: The Effects of Two Training Procedures.

    ERIC Educational Resources Information Center

    Rosenthal, Doreen A.

    1979-01-01

    A study of 11- and 12-year-old girls indicates that either of two training procedures, method training or dimension training, can aid in the transition from concrete operational to formal operational thought by promoting a hypothesis-testing attitude. (BH)

  14. Patients with Parkinson's Disease Learn to Control Complex Systems via Procedural as Well as Non-Procedural Learning

    ERIC Educational Resources Information Center

    Osman, Magda; Wilkinson, Leonora; Beigi, Mazda; Castaneda, Cristina Sanchez; Jahanshahi, Marjan

    2008-01-01

    The striatum is considered to mediate some forms of procedural learning. Complex dynamic control (CDC) tasks involve an individual having to make a series of sequential decisions to achieve a specific outcome (e.g. learning to operate and control a car), and they involve procedural learning. The aim of this study was to test the hypothesis that…

  15. Testing 40 Predictions from the Transtheoretical Model Again, with Confidence

    ERIC Educational Resources Information Center

    Velicer, Wayne F.; Brick, Leslie Ann D.; Fava, Joseph L.; Prochaska, James O.

    2013-01-01

    Testing Theory-based Quantitative Predictions (TTQP) represents an alternative to traditional Null Hypothesis Significance Testing (NHST) procedures and is more appropriate for theory testing. The theory generates explicit effect size predictions and these effect size estimates, with related confidence intervals, are used to test the predictions.…

  16. Separation of biological materials in microgravity

    NASA Technical Reports Server (NTRS)

    Brooks, D. E.; Boyce, J.; Bamberger, S. B.; Vanalstine, J. M.; Harris, J. M.

    1986-01-01

    Partition in aqueous two phase polymer systems is a potentially useful procedure in downstream processing of both molecular and particulate biomaterials. The potential efficiency of the process for particle and cell isolations is much higher than the useful levels already achieved. Space provides a unique environment in which to test the hypothesis that convection and settling phenomena degrade the performance of the partition process. The initial space experiment in a series of tests of this hypothesis is described.

  17. Model error in covariance structure models: Some implications for power and Type I error

    PubMed Central

    Coffman, Donna L.

    2010-01-01

    The present study investigated the degree to which violation of the parameter drift assumption affects the Type I error rate for the test of close fit and power analysis procedures proposed by MacCallum, Browne, and Sugawara (1996) for both the test of close fit and the test of exact fit. The parameter drift assumption states that as sample size increases both sampling error and model error (i.e. the degree to which the model is an approximation in the population) decrease. Model error was introduced using a procedure proposed by Cudeck and Browne (1992). The empirical power for both the test of close fit, in which the null hypothesis specifies that the Root Mean Square Error of Approximation (RMSEA) ≤ .05, and the test of exact fit, in which the null hypothesis specifies that RMSEA = 0, is compared with the theoretical power computed using the MacCallum et al. (1996) procedure. The empirical power and theoretical power for both the test of close fit and the test of exact fit are nearly identical under violations of the assumption. The results also indicated that the test of close fit maintains the nominal Type I error rate under violations of the assumption. PMID:21331302

  18. Covariance hypotheses for LANDSAT data

    NASA Technical Reports Server (NTRS)

    Decell, H. P.; Peters, C.

    1983-01-01

    Two covariance hypotheses are considered for LANDSAT data acquired by sampling fields, one an autoregressive covariance structure and the other the hypothesis of exchangeability. A minimum entropy approximation of the first structure by the second is derived and shown to have desirable properties for incorporation into a mixture density estimation procedure. Results of a rough test of the exchangeability hypothesis are presented.

  19. Memory and other properties of multiple test procedures generated by entangled graphs.

    PubMed

    Maurer, Willi; Bretz, Frank

    2013-05-10

    Methods for addressing multiplicity in clinical trials have attracted much attention during the past 20 years. They include the investigation of new classes of multiple test procedures, such as fixed sequence, fallback and gatekeeping procedures. More recently, sequentially rejective graphical test procedures have been introduced to construct and visualize complex multiple test strategies. These methods propagate the local significance level of a rejected null hypothesis to not-yet rejected hypotheses. In the graph defining the test procedure, hypotheses together with their local significance levels are represented by weighted vertices and the propagation rule by weighted directed edges. An algorithm provides the rules for updating the local significance levels and the transition weights after rejecting an individual hypothesis. These graphical procedures have no memory in the sense that the origin of the propagated significance level is ignored in subsequent iterations. However, in some clinical trial applications, memory is desirable to reflect the underlying dependence structure of the study objectives. In such cases, it would allow the further propagation of significance levels to be dependent on their origin and thus reflect the grouped parent-descendant structures of the hypotheses. We will give examples of such situations and show how to induce memory and other properties by convex combination of several individual graphs. The resulting entangled graphs provide an intuitive way to represent the underlying relative importance relationships between the hypotheses, are as easy to perform as the original individual graphs, remain sequentially rejective and control the familywise error rate in the strong sense. Copyright © 2012 John Wiley & Sons, Ltd.
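
    The propagation and rewiring rules described above can be sketched in a few lines. This is a hedged illustration of a sequentially rejective graphical procedure in the style of the memoryless graphs discussed here, not of the entangled-graph extension itself; the function name and data layout are my own.

```python
def graphical_test(pvalues, weights, G, alpha=0.025):
    """Sequentially rejective graphical multiple test procedure (sketch).

    pvalues: dict hypothesis -> p-value
    weights: dict hypothesis -> local weight (weights sum to <= 1)
    G:       dict (i, j) -> transition weight from hypothesis i to j
    """
    active = set(pvalues)
    w = dict(weights)
    g = {(i, j): G.get((i, j), 0.0) for i in active for j in active if i != j}
    rejected = set()
    while True:
        # Reject any active hypothesis whose p-value meets its local level.
        cand = [h for h in active if pvalues[h] <= w[h] * alpha]
        if not cand:
            return rejected
        j = cand[0]
        rejected.add(j)
        active.discard(j)
        # Propagate j's local level along its outgoing edges ...
        new_w = {l: w[l] + w[j] * g[(j, l)] for l in active}
        # ... and rewire the remaining edges around the removed vertex.
        new_g = {}
        for l in active:
            for k in active:
                if l == k:
                    continue
                denom = 1.0 - g[(l, j)] * g[(j, l)]
                new_g[(l, k)] = ((g[(l, k)] + g[(l, j)] * g[(j, k)]) / denom
                                 if denom > 0 else 0.0)
        w, g = new_w, new_g
```

    With two hypotheses, equal weights, and full mutual propagation, the procedure reduces to Holm's: `graphical_test({1: 0.01, 2: 0.04}, {1: 0.5, 2: 0.5}, {(1, 2): 1.0, (2, 1): 1.0}, alpha=0.05)` rejects both hypotheses.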

  20. Efficiency Analysis: Enhancing the Statistical and Evaluative Power of the Regression-Discontinuity Design.

    ERIC Educational Resources Information Center

    Madhere, Serge

    An analytic procedure, efficiency analysis, is proposed for improving the utility of quantitative program evaluation for decision making. The three features of the procedure are explained: (1) for statistical control, it adopts and extends the regression-discontinuity design; (2) for statistical inferences, it de-emphasizes hypothesis testing in…

  1. Confidence intervals for single-case effect size measures based on randomization test inversion.

    PubMed

    Michiels, Bart; Heyvaert, Mieke; Meulders, Ann; Onghena, Patrick

    2017-02-01

    In the current paper, we present a method to construct nonparametric confidence intervals (CIs) for single-case effect size measures in the context of various single-case designs. We use the relationship between a two-sided statistical hypothesis test at significance level α and a 100(1 - α)% two-sided CI to construct CIs for any effect size measure θ that contain all point null hypothesis θ values that cannot be rejected by the hypothesis test at significance level α. This method of hypothesis test inversion (HTI) can be employed using a randomization test as the statistical hypothesis test in order to construct a nonparametric CI for θ. We will refer to this procedure as randomization test inversion (RTI). We illustrate RTI in a situation in which θ is the unstandardized and the standardized difference in means between two treatments in a completely randomized single-case design. Additionally, we demonstrate how RTI can be extended to other types of single-case designs. Finally, we discuss a few challenges for RTI as well as possibilities when using the method with other effect size measures, such as rank-based nonoverlap indices. Supplementary to this paper, we provide easy-to-use R code, which allows the user to construct nonparametric CIs according to the proposed method.
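
    The test-inversion idea can be sketched by brute force for a tiny completely randomized design: shift one group by a candidate effect θ, run an exact randomization test on the shifted data, and keep every θ that is not rejected. The grid, tolerance, and function names below are illustrative assumptions (the paper supplies R code, not this Python), and full enumeration is only feasible for very small samples.

```python
from itertools import combinations

def randomization_pvalue(a, b):
    """Exact two-sided randomization p-value for the difference in
    means, enumerating all assignments of the pooled data."""
    data = a + b
    n, k = len(data), len(a)
    observed = abs(sum(a) / k - sum(b) / (n - k))
    count = total = 0
    for idx in combinations(range(n), k):
        ga = [data[i] for i in idx]
        gb = [data[i] for i in range(n) if i not in idx]
        stat = abs(sum(ga) / k - sum(gb) / (n - k))
        count += stat >= observed - 1e-12  # tolerance for float ties
        total += 1
    return count / total

def rti_confidence_interval(a, b, alpha=0.05, grid=None):
    """Randomization test inversion: the CI collects every shift theta
    that the randomization test cannot reject at level alpha."""
    if grid is None:  # a crude default grid of candidate effects
        lo = min(a + b) - max(a + b)
        grid = [lo + i * (-2 * lo) / 200 for i in range(201)]
    kept = [t for t in grid
            if randomization_pvalue([x - t for x in a], b) > alpha]
    return (min(kept), max(kept)) if kept else None
```

    For example, `rti_confidence_interval([8, 9, 10, 11], [1, 2, 3, 4])` returns an interval that contains the observed mean difference of 7 and excludes 0.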

  2. A default Bayesian hypothesis test for mediation.

    PubMed

    Nuijten, Michèle B; Wetzels, Ruud; Matzke, Dora; Dolan, Conor V; Wagenmakers, Eric-Jan

    2015-03-01

    In order to quantify the relationship between multiple variables, researchers often carry out a mediation analysis. In such an analysis, a mediator (e.g., knowledge of a healthy diet) transmits the effect from an independent variable (e.g., classroom instruction on a healthy diet) to a dependent variable (e.g., consumption of fruits and vegetables). Almost all mediation analyses in psychology use frequentist estimation and hypothesis-testing techniques. A recent exception is Yuan and MacKinnon (Psychological Methods, 14, 301-322, 2009), who outlined a Bayesian parameter estimation procedure for mediation analysis. Here we complete the Bayesian alternative to frequentist mediation analysis by specifying a default Bayesian hypothesis test based on the Jeffreys-Zellner-Siow approach. We further extend this default Bayesian test by allowing a comparison to directional or one-sided alternatives, using Markov chain Monte Carlo techniques implemented in JAGS. All Bayesian tests are implemented in the R package BayesMed (Nuijten, Wetzels, Matzke, Dolan, & Wagenmakers, 2014).

  3. Zero- vs. one-dimensional, parametric vs. non-parametric, and confidence interval vs. hypothesis testing procedures in one-dimensional biomechanical trajectory analysis.

    PubMed

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2015-05-01

    Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. The Gumbel hypothesis test for left censored observations using regional earthquake records as an example

    NASA Astrophysics Data System (ADS)

    Thompson, E. M.; Hewlett, J. B.; Baise, L. G.; Vogel, R. M.

    2011-01-01

    Annual maximum (AM) time series are incomplete (i.e., censored) when no events are included above the assumed censoring threshold (i.e., magnitude of completeness). We introduce a distributional hypothesis test for left-censored Gumbel observations based on the probability plot correlation coefficient (PPCC). Critical values of the PPCC hypothesis test statistic are computed from Monte Carlo simulations and are a function of sample size, censoring level, and significance level. When applied to a global catalog of earthquake observations, the left-censored Gumbel PPCC tests are unable to reject the Gumbel hypothesis for 45 of 46 seismic regions. We apply four different field significance tests for combining individual tests into a collective hypothesis test. None of the field significance tests are able to reject the global hypothesis that AM earthquake magnitudes arise from a Gumbel distribution. Because the field significance levels are not conclusive, we also compute the likelihood that these field significance tests are unable to reject the Gumbel model when the samples arise from a more complex distributional alternative. A power study documents that the censored Gumbel PPCC test is unable to reject some important and viable Generalized Extreme Value (GEV) alternatives. Thus, we cannot rule out the possibility that the global AM earthquake time series could arise from a GEV distribution with a finite upper bound, also known as a reverse Weibull distribution. Our power study also indicates that the binomial and uniform field significance tests are substantially more powerful than the more commonly used Bonferroni and false discovery rate multiple comparison procedures.
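
    For the uncensored case, the PPCC statistic and its Monte Carlo critical values can be sketched as follows. Gringorten plotting positions are assumed, and the paper's left-censoring adjustment is omitted for brevity; this is an illustration of the general PPCC mechanism, not the authors' exact implementation.

```python
import math
import random

def gumbel_ppcc(sample):
    """Probability plot correlation coefficient between the ordered
    sample and Gumbel quantiles at Gringorten plotting positions."""
    x = sorted(sample)
    n = len(x)
    p = [(i + 1 - 0.44) / (n + 0.12) for i in range(n)]
    q = [-math.log(-math.log(pi)) for pi in p]  # Gumbel quantile function
    mx, mq = sum(x) / n, sum(q) / n
    sxq = sum((a - mx) * (b - mq) for a, b in zip(x, q))
    sxx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sqq = math.sqrt(sum((b - mq) ** 2 for b in q))
    return sxq / (sxx * sqq)

def gumbel_ppcc_critical(n, alpha=0.05, reps=2000, seed=1):
    """Monte Carlo critical value: the alpha-quantile of the PPCC over
    repeated standard-Gumbel samples; reject Gumbel if PPCC < value."""
    rng = random.Random(seed)
    stats = sorted(
        gumbel_ppcc([-math.log(-math.log(rng.random())) for _ in range(n)])
        for _ in range(reps)
    )
    return stats[int(alpha * reps)]
```

    A sample that follows the Gumbel quantiles exactly yields a PPCC of 1, and simulated critical values for moderate sample sizes sit just below 1, so even mild departures from linearity on the probability plot can trigger rejection.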

  5. ATS-PD: An Adaptive Testing System for Psychological Disorders

    ERIC Educational Resources Information Center

    Donadello, Ivan; Spoto, Andrea; Sambo, Francesco; Badaloni, Silvana; Granziol, Umberto; Vidotto, Giulio

    2017-01-01

    The clinical assessment of mental disorders can be a time-consuming and error-prone procedure, consisting of a sequence of diagnostic hypothesis formulation and testing aimed at restricting the set of plausible diagnoses for the patient. In this article, we propose a novel computerized system for the adaptive testing of psychological disorders.…

  6. Statistical Power in Evaluations That Investigate Effects on Multiple Outcomes: A Guide for Researchers

    ERIC Educational Resources Information Center

    Porter, Kristin E.

    2018-01-01

    Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…

  7. Statistical analysis of particle trajectories in living cells

    NASA Astrophysics Data System (ADS)

    Briane, Vincent; Kervrann, Charles; Vimond, Myriam

    2018-06-01

    Recent advances in molecular biology and fluorescence microscopy imaging have made possible the inference of the dynamics of molecules in living cells. Such inference allows us to understand and determine the organization and function of the cell. The trajectories of particles (e.g., biomolecules) in living cells, computed with the help of object tracking methods, can be modeled with diffusion processes. Three types of diffusion are considered: (i) free diffusion, (ii) subdiffusion, and (iii) superdiffusion. The mean-square displacement (MSD) is generally used to discriminate the three types of particle dynamics. We propose here a nonparametric three-decision test as an alternative to the MSD method. The rejection of the null hypothesis, i.e., free diffusion, is accompanied by claims of the direction of the alternative (subdiffusion or superdiffusion). We study the asymptotic behavior of the test statistic under the null hypothesis and under parametric alternatives which are currently considered in the biophysics literature. In addition, we adapt the multiple-testing procedure of Benjamini and Hochberg to fit with the three-decision-test setting, in order to apply the test procedure to a collection of independent trajectories. The performance of our procedure is much better than the MSD method as confirmed by Monte Carlo experiments. The method is demonstrated on real data sets corresponding to protein dynamics observed in fluorescence microscopy.

  8. Acquiring, Representing, and Evaluating a Competence Model of Diagnostic Strategy.

    ERIC Educational Resources Information Center

    Clancey, William J.

    This paper describes NEOMYCIN, a computer program that models one physician's diagnostic reasoning within a limited area of medicine. NEOMYCIN's knowledge base and reasoning procedure constitute a model of how human knowledge is organized and how it is used in diagnosis. The hypothesis is tested that such a procedure can be used to simulate both…

  9. A test of multiple hypotheses for the function of call sharing in female budgerigars, Melopsittacus undulatus

    PubMed Central

    Young, Anna M.; Cordier, Breanne; Mundry, Roger; Wright, Timothy F.

    2014-01-01

    In many social species, group members share acoustically similar calls. Functional hypotheses have been proposed for call sharing, but previous studies have been limited by an inability to distinguish among these hypotheses. We examined the function of vocal sharing in female budgerigars with a two-part experimental design that allowed us to distinguish between two functional hypotheses. The social association hypothesis proposes that shared calls help animals mediate affiliative and aggressive interactions, while the password hypothesis proposes that shared calls allow animals to distinguish group identity and exclude nonmembers. We also tested the labeling hypothesis, a mechanistic explanation which proposes that shared calls are used to address specific individuals within the sender–receiver relationship. We tested the social association hypothesis by creating four-member flocks of unfamiliar female budgerigars (Melopsittacus undulatus) and then monitoring the birds' calls, social behaviors, and stress levels via fecal glucocorticoid metabolites. We tested the password hypothesis by moving immigrants into established social groups. To test the labeling hypothesis, we conducted additional recording sessions in which individuals were paired with different group members. The social association hypothesis was supported by the development of multiple shared call types in each cage and a correlation between the number of shared call types and the number of aggressive interactions between pairs of birds. We also found support for calls serving as a labeling mechanism using discriminant function analysis with a permutation procedure. Our results did not support the password hypothesis, as there was no difference in stress or directed behaviors between immigrant and control birds. PMID:24860236

  10. Interaction Analysis in MANOVA.

    ERIC Educational Resources Information Center

    Betz, M. Austin

    Simultaneous test procedures (STPs for short) in the context of the unrestricted full rank general linear multivariate model for population cell means are introduced and utilized to analyze interactions in factorial designs. By appropriate choice of an implying hypothesis, it is shown how to test overall main effects, interactions, simple main,…

  11. Students' Understanding of Conditional Probability on Entering University

    ERIC Educational Resources Information Center

    Reaburn, Robyn

    2013-01-01

    An understanding of conditional probability is essential for students of inferential statistics as it is used in Null Hypothesis Tests. Conditional probability is also used in Bayes' theorem, in the interpretation of medical screening tests and in quality control procedures. This study examines the understanding of conditional probability of…

  12. RANDOMIZATION PROCEDURES FOR THE ANALYSIS OF EDUCATIONAL EXPERIMENTS.

    ERIC Educational Resources Information Center

    COLLIER, RAYMOND O.

    Certain specific aspects of hypothesis tests used for analysis of results in randomized experiments were studied: (1) the development of the theoretical factor, that of providing information on statistical tests for certain experimental designs, and (2) the development of the applied element, that of supplying the experimenter with machinery for…

  13. Correcting power and p-value calculations for bias in diffusion tensor imaging.

    PubMed

    Lauzon, Carolyn B; Landman, Bennett A

    2013-07-01

    Diffusion tensor imaging (DTI) provides quantitative parametric maps sensitive to tissue microarchitecture (e.g., fractional anisotropy, FA). These maps are estimated through computational processes and subject to random distortions including variance and bias. Traditional statistical procedures commonly used for study planning (including power analyses and p-value/alpha-rate thresholds) specifically model variability, but neglect potential impacts of bias. Herein, we quantitatively investigate the impacts of bias in DTI on hypothesis test properties (power and alpha-rate) using a two-sided hypothesis testing framework. We present theoretical evaluation of bias on hypothesis test properties, evaluate the bias estimation technique SIMEX for DTI hypothesis testing using simulated data, and evaluate the impacts of bias on spatially varying power and alpha rates in an empirical study of 21 subjects. Bias is shown to inflate alpha rates, distort the power curve, and cause significant power loss even in empirical settings where the expected difference in bias between groups is zero. These adverse effects can be attenuated by properly accounting for bias in the calculation of power and p-values. Copyright © 2013 Elsevier Inc. All rights reserved.
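
    The basic mechanism, alpha-rate inflation from an uncorrected bias, can be illustrated for a two-sided z-test. The sketch below assumes the test statistic is shifted by a bias expressed in standard-error units; it is a didactic simplification of the DTI setting, not the paper's method.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_quantile(upper_tail):
    """Upper-tail standard normal quantile by bisection."""
    lo, hi = 0.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if 1.0 - normal_cdf(mid) > upper_tail:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def actual_alpha(shift, nominal_alpha=0.05):
    """Actual two-sided rejection rate when the test statistic carries
    an uncorrected bias of `shift` standard errors under the null."""
    z = z_quantile(nominal_alpha / 2.0)
    return normal_cdf(-z - shift) + (1.0 - normal_cdf(z - shift))

# With no bias the nominal level is recovered; a bias of one standard
# error more than triples the type I error rate.
print(round(actual_alpha(0.0), 3))
print(round(actual_alpha(1.0), 3))
```

    The same shift argument explains the power distortion: the rejection region is no longer symmetric around the true effect, so power is gained on one side and lost, often more heavily, on the other.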

  14. Hypothesis testing for the validation of the kinetic spectrophotometric methods for the determination of lansoprazole in bulk and drug formulations via Fe(III) and Zn(II) chelates.

    PubMed

    Rahman, Nafisur; Kashif, Mohammad

    2010-03-01

    Point and interval hypothesis tests performed to validate two simple and economical kinetic spectrophotometric methods for the assay of lansoprazole are described. The methods are based on the formation of a chelate complex of the drug with Fe(III) and Zn(II). The reaction is followed spectrophotometrically by measuring the rate of change of absorbance of the coloured chelates of the drug with Fe(III) and Zn(II) at 445 and 510 nm, respectively. The stoichiometric ratios of lansoprazole to Fe(III) and to Zn(II) were found to be 1:1 and 2:1, respectively. The initial-rate and fixed-time methods are adopted for determination of drug concentrations. The calibration graphs are linear in the ranges 50-200 µg ml⁻¹ (initial-rate method) and 20-180 µg ml⁻¹ (fixed-time method) for the lansoprazole-Fe(III) complex, and 120-300 µg ml⁻¹ (initial-rate method) and 90-210 µg ml⁻¹ (fixed-time method) for the lansoprazole-Zn(II) complex. The inter-day and intra-day precision data showed good accuracy and precision of the proposed procedure for analysis of lansoprazole. The point and interval hypothesis tests indicate that the proposed procedures are not biased. Copyright © 2010 John Wiley & Sons, Ltd.

  15. Provision of specific dental procedures by general dentists in the National Dental Practice-Based Research Network: questionnaire findings.

    PubMed

    Gilbert, Gregg H; Gordan, Valeria V; Korelitz, James J; Fellows, Jeffrey L; Meyerowitz, Cyril; Oates, Thomas W; Rindal, D Brad; Gregory, Randall J

    2015-01-22

    Objectives were to: (1) determine whether and how often general dentists (GDs) provide specific dental procedures; and (2) test the hypothesis that provision is associated with key dentist, practice, and patient characteristics. GDs (n = 2,367) in the United States National Dental Practice-Based Research Network completed an Enrollment Questionnaire that included: (1) dentist; (2) practice; and (3) patient characteristics, and how commonly they provide each of 10 dental procedures. We determined how commonly procedures were provided and tested the hypothesis that provision was substantively related to the three sets of characteristics. Two procedure categories were classified as "uncommon" (orthodontics, periodontal surgery), three were "common" (molar endodontics; implants; non-surgical periodontics), and five were "very common" (restorative; esthetic procedures; extractions; removable prosthetics; non-molar endodontics). Dentist, practice, and patient characteristics were substantively related to procedure provision; several characteristics seemed to have pervasive effects, such as dentist gender, training after dental school, full-time/part-time status, private practice vs. institutional practice, presence of a specialist in the same practice, and insurance status of patients. As a group, GDs provide a comprehensive range of procedures. However, provision by individual dentists is substantively related to certain dentist, practice, and patient characteristics. A large number and broad range of factors seem to influence which procedures GDs provide. This may have implications for how GDs respond to the ever-changing landscape of dental care utilization, patient population demography, scope of practice, delivery models and GDs' evolving role in primary care.

  16. Memory Inhibition as a Critical Factor Preventing Creative Problem Solving

    ERIC Educational Resources Information Center

    Gómez-Ariza, Carlos J.; del Prete, Francesco; Prieto del Val, Laura; Valle, Tania; Bajo, M. Teresa; Fernandez, Angel

    2017-01-01

    The hypothesis that reduced accessibility to relevant information can negatively affect problem solving in a remote associate test (RAT) was tested by using, immediately before the RAT, a retrieval practice procedure to hinder access to target solutions. The results of 2 experiments clearly showed that, relative to baseline, target words that had…

  17. Conditional Covariance-Based Subtest Selection for DIMTEST

    ERIC Educational Resources Information Center

    Froelich, Amy G.; Habing, Brian

    2008-01-01

    DIMTEST is a nonparametric hypothesis-testing procedure designed to test the assumptions of a unidimensional and locally independent item response theory model. Several previous Monte Carlo studies have found that using linear factor analysis to select the assessment subtest for DIMTEST results in a moderate to severe loss of power when the exam…

  18. Testing jumps via false discovery rate control.

    PubMed

    Yen, Yu-Min

    2013-01-01

    Many recently developed nonparametric jump tests can be viewed as multiple hypothesis testing problems. For such multiple hypothesis tests, it is well known that controlling type I error often makes a large proportion of erroneous rejections, and such situation becomes even worse when the jump occurrence is a rare event. To obtain more reliable results, we aim to control the false discovery rate (FDR), an efficient compound error measure for erroneous rejections in multiple testing problems. We perform the test via the Barndorff-Nielsen and Shephard (BNS) test statistic, and control the FDR with the Benjamini and Hochberg (BH) procedure. We provide asymptotic results for the FDR control. From simulations, we examine relevant theoretical results and demonstrate the advantages of controlling the FDR. The hybrid approach is then applied to empirical analysis on two benchmark stock indices with high frequency data.
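
    The BH step-up procedure used here is straightforward to implement. A minimal sketch, assuming the standard setting of independent (or positively dependent) p-values under which the BH guarantee holds:

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Benjamini-Hochberg step-up procedure: find the largest k with
    p_(k) <= k * q / m and reject the k smallest p-values; controls
    the false discovery rate at level q."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * q / m:
            k = rank
    return sorted(order[:k])  # indices of rejected hypotheses
```

    For example, `benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.27, 0.6, 0.74, 0.9])` rejects only the first two hypotheses, whereas naive per-test thresholding at 0.05 would also flag the next two.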

  19. Statistical decision from k test series with particular focus on population genetics tools: a DIY notice.

    PubMed

    De Meeûs, Thierry

    2014-03-01

    In population genetics data analysis, researchers are often faced with the problem of decision making from a series of tests of the same null hypothesis. This is the case, for instance, when one wants to test differentiation between pathogens found on different host species sampled from different locations (one test per location). Many procedures are available to date, but not all apply to all situations. Determining which individual tests are significant, or whether the series as a whole is significant, requires different procedures depending on whether the tests are independent. In this note I describe several procedures, among the simplest and easiest to undertake, that should allow decision making in most (if not all) situations population geneticists (or biologists) are likely to meet, in particular in host-parasite systems. Copyright © 2014 Elsevier B.V. All rights reserved.
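
    Two of the simplest procedures for a series of k independent tests of the same null are Fisher's combination and a binomial count of significant tests. The sketch below is a generic illustration of these two classic procedures, not the author's exact recipe; the chi-square survival function uses the closed form available for even degrees of freedom.

```python
import math

def fisher_combined_pvalue(pvalues):
    """Fisher's method for k independent tests of the same null:
    -2 * sum(ln p_i) is chi-square with 2k df under H0; its survival
    function has a closed form for even df."""
    k = len(pvalues)
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    term = math.exp(-stat / 2.0)
    total = term
    for i in range(1, k):
        term *= (stat / 2.0) / i
        total += term
    return total

def binomial_count_pvalue(k, s, alpha=0.05):
    """P(at least s of k independent tests significant at level alpha)
    under the global null: a simple 'how many hits' check."""
    return sum(math.comb(k, i) * alpha ** i * (1.0 - alpha) ** (k - i)
               for i in range(s, k + 1))
```

    `fisher_combined_pvalue([0.04, 0.06, 0.03])` treats three individually marginal tests as one joint test, while `binomial_count_pvalue(10, 3)` asks how surprising 3 significant results out of 10 would be under the global null.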

  20. Nondeclarative learning in children with specific language impairment: predicting regularities in the visuomotor, phonological, and cognitive domains.

    PubMed

    Mayor-Dubois, C; Zesiger, P; Van der Linden, M; Roulet-Perez, E

    2014-01-01

    Ullman (2004) suggested that Specific Language Impairment (SLI) results from a general procedural learning deficit. In order to test this hypothesis, we investigated children with SLI via procedural learning tasks exploring the verbal, motor, and cognitive domains. Results showed that, compared with a control group, the children with SLI (a) were unable to learn a phonotactic learning task, (b) were able to learn a motor learning task, though less efficiently, and (c) succeeded in a cognitive learning task. Regarding the motor learning task (Serial Reaction Time Task), reaction times were longer and learning slower than in controls. The learning effect was not significant in children with an associated Developmental Coordination Disorder (DCD), and future studies should consider comorbid motor impairment in order to clarify whether impairments are related to the motor rather than the language disorder. Our results indicate that a phonotactic, but not a cognitive, procedural learning deficit underlies SLI, thus challenging Ullman's general procedural deficit hypothesis, like a few other recent studies.

  1. Hypothesis tests for stratified mark-specific proportional hazards models with missing covariates, with application to HIV vaccine efficacy trials.

    PubMed

    Sun, Yanqing; Qi, Li; Yang, Guangren; Gilbert, Peter B

    2018-05-01

    This article develops hypothesis testing procedures for the stratified mark-specific proportional hazards model with missing covariates where the baseline functions may vary with strata. The mark-specific proportional hazards model has been studied to evaluate mark-specific relative risks where the mark is the genetic distance of an infecting HIV sequence to an HIV sequence represented inside the vaccine. This research is motivated by analyzing the RV144 phase 3 HIV vaccine efficacy trial, to understand associations of immune response biomarkers on the mark-specific hazard of HIV infection, where the biomarkers are sampled via a two-phase sampling nested case-control design. We test whether the mark-specific relative risks are unity and how they change with the mark. The developed procedures enable assessment of whether risk of HIV infection with HIV variants close or far from the vaccine sequence are modified by immune responses induced by the HIV vaccine; this question is interesting because vaccine protection occurs through immune responses directed at specific HIV sequences. The test statistics are constructed based on augmented inverse probability weighted complete-case estimators. The asymptotic properties and finite-sample performances of the testing procedures are investigated, demonstrating double-robustness and effectiveness of the predictive auxiliaries to recover efficiency. The finite-sample performance of the proposed tests is examined through a comprehensive simulation study. The methods are applied to the RV144 trial. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Homogeneity tests of clustered diagnostic markers with applications to the BioCycle Study

    PubMed Central

    Tang, Liansheng Larry; Liu, Aiyi; Schisterman, Enrique F.; Zhou, Xiao-Hua; Liu, Catherine Chun-ling

    2014-01-01

    Diagnostic trials often require the use of a homogeneity test among several markers. Such a test may be necessary to determine the power both during the design phase and in the initial analysis stage. However, no formal method is available for the power and sample size calculation when the number of markers is greater than two and marker measurements are clustered in subjects. This article presents two procedures for testing the accuracy among clustered diagnostic markers. The first procedure is a test of homogeneity among continuous markers based on a global null hypothesis of the same accuracy. The result under the alternative provides the explicit distribution for the power and sample size calculation. The second procedure is a simultaneous pairwise comparison test based on weighted areas under the receiver operating characteristic curves. This test is particularly useful if a global difference among markers is found by the homogeneity test. We apply our procedures to the BioCycle Study designed to assess and compare the accuracy of hormone and oxidative stress markers in distinguishing women with ovulatory menstrual cycles from those without. PMID:22733707

  3. A simple test of association for contingency tables with multiple column responses.

    PubMed

    Decady, Y J; Thomas, D R

    2000-09-01

    Loughin and Scherer (1998, Biometrics 54, 630-637) investigated tests of association in two-way tables when one of the categorical variables allows for multiple-category responses from individual respondents. Standard chi-squared tests are invalid in this case, and they developed a bootstrap test procedure that provides good control of test levels under the null hypothesis. This procedure and some others that have been proposed are computationally involved and are based on techniques that are relatively unfamiliar to many practitioners. In this paper, the methods introduced by Rao and Scott (1981, Journal of the American Statistical Association 76, 221-230) for analyzing complex survey data are used to develop a simple test based on a corrected chi-squared statistic.
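
    A hedged sketch of the testing problem: the corrected chi-squared statistic of the paper needs survey-style design-effect estimates, so the illustration below instead uses a permutation analogue of the resampling idea. Respondents' group labels are shuffled, which keeps each respondent's multiple-category response set intact under the null; the (otherwise invalid) Pearson chi-squared on the marginal group-by-item table serves only as the test statistic. All data and names are hypothetical.

```python
import random

def marginal_chisq(groups, picks, n_items):
    """Naive Pearson chi-squared on the group-by-item marginal count table.
    `picks[i]` is the set of item indices respondent i selected."""
    labels = sorted(set(groups))
    counts = {g: [0] * n_items for g in labels}
    for g, chosen in zip(groups, picks):
        for j in chosen:
            counts[g][j] += 1
    row = {g: sum(counts[g]) for g in labels}
    col = [sum(counts[g][j] for g in labels) for j in range(n_items)]
    total = sum(row.values())
    stat = 0.0
    for g in labels:
        for j in range(n_items):
            expected = row[g] * col[j] / total
            if expected > 0:
                stat += (counts[g][j] - expected) ** 2 / expected
    return stat

def permutation_pvalue(groups, picks, n_items, n_perm=2000, seed=1):
    """Permute whole respondents' group labels to build a valid null
    distribution for the statistic above."""
    rng = random.Random(seed)
    observed = marginal_chisq(groups, picks, n_items)
    shuffled = list(groups)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if marginal_chisq(shuffled, picks, n_items) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

    Because whole respondents are permuted, the within-respondent dependence that invalidates the standard chi-squared reference distribution is preserved in every resample.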

  4. Suggestions for presenting the results of data analyses

    USGS Publications Warehouse

    Anderson, David R.; Link, William A.; Johnson, Douglas H.; Burnham, Kenneth P.

    2001-01-01

    We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentist methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management.

  5. The use of analysis of variance procedures in biological studies

    USGS Publications Warehouse

    Williams, B.K.

    1987-01-01

    The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of the hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some relationships among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development, unbalanced designs are assumed, and attention is given to problems that arise with missing cells.

  6. Do Children Understand Fraction Addition?

    ERIC Educational Resources Information Center

    Braithwaite, David W.; Tian, Jing; Siegler, Robert S.

    2017-01-01

    Many children fail to master fraction arithmetic even after years of instruction. A recent theory of fraction arithmetic (Braithwaite, Pyke, & Siegler, in press) hypothesized that this poor learning of fraction arithmetic procedures reflects poor conceptual understanding of them. To test this hypothesis, we performed three experiments…

  7. Biographical Study and Hypothesis Testing. Instructional Technology.

    ERIC Educational Resources Information Center

    Little, Timothy H.

    1995-01-01

    Asserts that the story of Amelia Earhart holds an ongoing fascination for students. Presents an instructional unit using a spreadsheet to create a database about Earhart's final flight. Includes student objectives, step-by-step instructional procedures, and eight graphics of student information or teacher examples. (CFR)

  8. Discontinuous categories affect information-integration but not rule-based category learning.

    PubMed

    Maddox, W Todd; Filoteo, J Vincent; Lauritzen, J Scott; Connally, Emily; Hejl, Kelli D

    2005-07-01

    Three experiments were conducted that provide a direct examination of within-category discontinuity manipulations on the implicit, procedural-based learning and the explicit, hypothesis-testing systems proposed in F. G. Ashby, L. A. Alfonso-Reese, A. U. Turken, and E. M. Waldron's (1998) competition between verbal and implicit systems model. Discontinuous categories adversely affected information-integration but not rule-based category learning. Increasing the magnitude of the discontinuity did not lead to a significant decline in performance. The distance to the bound provides a reasonable description of the generalization profile associated with the hypothesis-testing system, whereas the distance to the bound plus the distance to the trained response region provides a reasonable description of the generalization profile associated with the procedural-based learning system. These results suggest that within-category discontinuity affects information-integration but not rule-based category learning, and they provide information regarding the detailed processing characteristics of each category learning system. ((c) 2005 APA, all rights reserved).

  9. Statistical hypothesis testing and common misinterpretations: Should we abandon p-value in forensic science applications?

    PubMed

    Taroni, F; Biedermann, A; Bozza, S

    2016-02-01

    Many people regard the concept of hypothesis testing as fundamental to inferential statistics. Various schools of thought, in particular frequentist and Bayesian, have promoted radically different solutions for taking a decision about the plausibility of competing hypotheses. Comprehensive philosophical comparisons of their advantages and drawbacks are widely available and continue to fuel extensive debate in the literature. More recently, controversial discussion was initiated by an editorial decision of a scientific journal [1] to refuse any paper submitted for publication containing null hypothesis testing procedures. Since the large majority of papers published in forensic journals propose the evaluation of statistical evidence based on so-called p-values, it is of interest to bring the discussion of this journal's decision to the forensic science community. This paper aims to provide forensic science researchers with a primer on the main concepts and their implications for making informed methodological choices. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. Supporting shared hypothesis testing in the biomedical domain.

    PubMed

    Agibetov, Asan; Jiménez-Ruiz, Ernesto; Ondrésik, Marta; Solimando, Alessandro; Banerjee, Imon; Guerrini, Giovanna; Catalano, Chiara E; Oliveira, Joaquim M; Patanè, Giuseppe; Reis, Rui L; Spagnuolo, Michela

    2018-02-08

    Pathogenesis of inflammatory diseases can be tracked by studying the causality relationships among the factors contributing to their development. We could, for instance, hypothesize about connections between the pathogenesis outcomes and the observed conditions. To prove such causal hypotheses, we would need a full understanding of the causal relationships, and we would have to provide all the necessary evidence to support our claims. In practice, however, we might not possess all the background knowledge on the causality relationships, and we might be unable to collect all the evidence needed to prove our hypotheses. In this work we propose a methodology for translating biological knowledge on causality relationships of biological processes, and their effects on conditions, into a computational framework for hypothesis testing. The methodology consists of two main parts: hypothesis graph construction from the formalization of the background knowledge on causality relationships, and confidence measurement in a causality hypothesis as a normalized weighted path computation in the hypothesis graph. In this framework, we can simulate evidence collection and assess confidence in a causality hypothesis by measuring it in proportion to the amount of available knowledge and collected evidence. We evaluate our methodology on a hypothesis graph that represents both contributing factors which may cause cartilage degradation and factors which might be caused by cartilage degradation during osteoarthritis. Hypothesis graph construction has proven robust to the addition of potentially contradictory information on simultaneously positive and negative effects. The obtained confidence measures for specific causality hypotheses have been validated by our domain experts and correspond closely to their subjective assessments of confidence in the investigated hypotheses. Overall, our methodology for a shared hypothesis testing framework exhibits properties that researchers will find useful in literature review for their experimental studies, in planning and prioritizing evidence collection procedures, and in testing their hypotheses with different depths of knowledge on the causal dependencies of biological processes and their effects on the observed conditions.
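
    As a rough sketch of the confidence computation (the paper's exact weighting and normalization scheme is not reproduced here; the node names and scores below are illustrative), confidence in a causality hypothesis can be scored as the best product of edge confidences over any directed path in the hypothesis graph:

```python
def path_confidence(graph, source, target):
    """Confidence in the causal hypothesis source -> target, scored as the
    best product of edge confidences over any directed path.
    `graph` maps node -> {successor: edge confidence in [0, 1]}."""
    best = {source: 1.0}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for nxt, w in graph.get(node, {}).items():
            score = best[node] * w
            # Only propagate strict improvements, so cycles terminate.
            if score > best.get(nxt, 0.0):
                best[nxt] = score
                frontier.append(nxt)
    return best.get(target, 0.0)
```

    With multiplicative scores in [0, 1], longer chains of weaker evidence naturally yield lower confidence, matching the intuition that confidence should grow with the amount of supporting knowledge along the path.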

  11. Conceptual Knowledge of Fraction Arithmetic

    ERIC Educational Resources Information Center

    Siegler, Robert S.; Lortie-Forgues, Hugues

    2015-01-01

    Understanding an arithmetic operation implies, at minimum, knowing the direction of effects that the operation produces. However, many children and adults, even those who execute arithmetic procedures correctly, may lack this knowledge on some operations and types of numbers. To test this hypothesis, we presented preservice teachers (Study 1),…

  12. Performing Inferential Statistics Prior to Data Collection

    ERIC Educational Resources Information Center

    Trafimow, David; MacDonald, Justin A.

    2017-01-01

    Typically, in education and psychology research, the investigator collects data and subsequently performs descriptive and inferential statistics. For example, a researcher might compute group means and use the null hypothesis significance testing procedure to draw conclusions about the populations from which the groups were drawn. We propose an…

  13. Building Intuitions about Statistical Inference Based on Resampling

    ERIC Educational Resources Information Center

    Watson, Jane; Chance, Beth

    2012-01-01

    Formal inference, which makes theoretical assumptions about distributions and applies hypothesis testing procedures with null and alternative hypotheses, is notoriously difficult for tertiary students to master. The debate about whether this content should appear in Years 11 and 12 of the "Australian Curriculum: Mathematics" has gone on…

  14. Cognitive Fatigue Facilitates Procedural Sequence Learning.

    PubMed

    Borragán, Guillermo; Slama, Hichem; Destrebecqz, Arnaud; Peigneux, Philippe

    2016-01-01

    Enhanced procedural learning has been evidenced in conditions where cognitive control is diminished, including hypnosis, disruption of prefrontal activity, and non-optimal times of day. Another condition depleting the availability of controlled resources is cognitive fatigue (CF). We tested the hypothesis that CF, by diminishing cognitive control, facilitates procedural sequence learning. In a two-day experiment, 23 young healthy adults were administered a serial reaction time task (SRTT) following the induction of high or low levels of CF, in counterbalanced order. CF was induced using the Time load Dual-back (TloadDback) paradigm, a dual working memory task that allows tailoring cognitive load levels to the individual's optimal performance capacity. In line with our hypothesis, reaction times (RTs) in the SRTT were faster in the high- than in the low-fatigue condition, and performance improvement was greater for the sequential than for the motor component. Altogether, our results suggest a paradoxical, facilitating impact of CF on procedural motor sequence learning. We propose that facilitated learning in the high-fatigue condition stems from a reduction in the cognitive resources devoted to cognitive control processes that normally oppose automatic procedural acquisition mechanisms.

  15. Significance tests for functional data with complex dependence structure.

    PubMed

    Staicu, Ana-Maria; Lahiri, Soumen N; Carroll, Raymond J

    2015-01-01

    We propose an L2-norm based global testing procedure for the null hypothesis that multiple group mean functions are equal, for functional data with complex dependence structure. Specifically, we consider the setting of functional data with a multilevel structure of the form groups-clusters or subjects-units, where the unit-level profiles are spatially correlated within the cluster and the cluster-level data are independent. Orthogonal series expansions are used to approximate the group mean functions, and the test statistic is estimated using the basis coefficients. The asymptotic null distribution of the test statistic is developed under mild regularity conditions. To our knowledge, this is the first work to study hypothesis testing when data have such complex multilevel functional and spatial structure. Two small-sample alternatives, including a novel block bootstrap for functional data, are proposed, and their performance is examined in simulation studies. The paper concludes with an illustration of a motivating experiment.

  16. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset, such as a process and/or apparatus, via training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution correlative to normal asset operation, and then utilize the fitted probability density function in a dynamic statistical hypothesis test for improved asset surveillance.
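
    The patented method fits a probability density function to residuals and then applies a dynamic sequential test. A classical building block for this kind of surveillance is Wald's sequential probability ratio test (SPRT), sketched below for a Gaussian mean shift in the residual stream; this is a textbook illustration under assumed parameters, not the patented algorithm.

```python
import math

def sprt(residuals, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test on a residual stream.
    H0: mean mu0 (normal operation) vs H1: mean mu1 (fault).
    Returns ("H0" | "H1" | "continue", number of samples consumed)."""
    upper = math.log((1 - beta) / alpha)   # decide H1 above this
    lower = math.log(beta / (1 - alpha))   # decide H0 below this
    llr = 0.0
    for n, x in enumerate(residuals, start=1):
        # Log-likelihood ratio increment for one Gaussian observation.
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "continue", len(residuals)
```

    The thresholds follow Wald's approximations, which roughly bound the false-alarm rate by alpha and the missed-detection rate by beta while letting the sample size adapt to the evidence.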

  17. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset, such as a process and/or apparatus, via training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution correlative to normal asset operation, and then utilize the fitted probability density function in a dynamic statistical hypothesis test for improved asset surveillance.

  18. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset, such as a process and/or apparatus, via training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution correlative to normal asset operation, and then utilize the fitted probability density function in a dynamic statistical hypothesis test for improved asset surveillance.

  19. Mechanisms of accelerated proteolysis in rat soleus muscle atrophy induced by unweighting or denervation

    NASA Technical Reports Server (NTRS)

    Tischler, Marc E.; Kirby, Christopher; Rosenberg, Sara; Tome, Margaret; Chase, Peter

    1991-01-01

    A hypothesis proposed by Tischler and coworkers (Henriksen et al., 1986; Tischler et al., 1990) concerning the mechanisms of atrophy induced by unweighting or denervation was tested using rat soleus muscle from animals subjected to hindlimb suspension and to muscle denervation. The procedure included (1) measuring protein degradation in isolated muscles and testing the effects of lysosome inhibitors, (2) analyzing lysosome permeability and autophagocytosis, (3) testing the effects of altering calcium-dependent proteolysis, and (4) evaluating in vivo the effects of various agents to determine the physiological significance of the hypothesis. The results obtained suggest that there are major differences between the mechanisms of the atrophies caused by unweighting and denervation, though slower protein synthesis is an important feature common to both.

  20. On the insignificance of Herschel's sunspot correlation

    NASA Astrophysics Data System (ADS)

    Love, Jeffrey J.

    2013-08-01

    We examine William Herschel's hypothesis that solar-cycle variation of the Sun's irradiance has a modulating effect on the Earth's climate and that this is, specifically, manifested as an anticorrelation between sunspot number and the market price of wheat. Since Herschel first proposed his hypothesis in 1801, it has been regarded with both interest and skepticism. Recently, reports have been published that either support Herschel's hypothesis or rely on its validity. As a test of Herschel's hypothesis, we seek to reject a null hypothesis of a statistically random correlation between historical sunspot numbers, wheat prices in London and the United States, and wheat farm yields in the United States. We employ binary-correlation, Pearson-correlation, and frequency-domain methods. We test our methods using a historical geomagnetic activity index, well known to be causally correlated with sunspot number. As expected, the measured correlation between sunspot number and geomagnetic activity would be an unlikely realization of random data; the correlation is "statistically significant." On the other hand, measured correlations between sunspot number and wheat price and wheat yield data would be very likely realizations of random data; these correlations are "insignificant." Therefore, Herschel's hypothesis must be regarded with skepticism. We compare and contrast our results with those of other researchers. We discuss procedures for evaluating hypotheses that are formulated from historical data.
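
    One of the listed methods can be sketched directly: a Pearson-correlation test against a null of randomly paired series. The sketch assumes exchangeable observations; the serial correlation present in real sunspot and price records is one reason the authors also employ frequency-domain methods. The data in the example are hypothetical.

```python
import random

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def correlation_pvalue(x, y, n_perm=5000, seed=7):
    """Two-sided permutation test of H0: the pairing of x with y is random.
    Shuffles y relative to x and counts equally extreme correlations."""
    rng = random.Random(seed)
    observed = abs(pearson_r(x, y))
    ys = list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(ys)
        if abs(pearson_r(x, ys)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

    A large p-value here corresponds to the paper's finding for sunspots versus wheat prices: the observed correlation would be a likely realization of randomly paired data.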

  1. Explorations in Statistics: Permutation Methods

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2012-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This eighth installment of "Explorations in Statistics" explores permutation methods, empiric procedures we can use to assess an experimental result--to test a null hypothesis--when we are reluctant to trust statistical…
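
    The permutation idea the installment explores can be sketched in a few lines: pool the observations, reshuffle the group labels, and ask how often a reshuffled result is at least as extreme as the one observed. A minimal sketch for a difference in group means (data and settings hypothetical):

```python
import random

def mean_diff(a, b):
    return sum(a) / len(a) - sum(b) / len(b)

def permutation_test(a, b, n_perm=10000, seed=42):
    """Two-sided permutation test of H0: groups a and b share one distribution.
    Pools the data, reshuffles group labels, and counts label-shuffled
    differences in means at least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = abs(mean_diff(a, b))
    pooled = list(a) + list(b)
    n_a = len(a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(mean_diff(pooled[:n_a], pooled[n_a:])) >= observed:
            hits += 1
    # Add-one correction keeps the Monte Carlo p-value strictly positive.
    return (hits + 1) / (n_perm + 1)
```

    No distributional assumption is needed beyond exchangeability under the null, which is exactly why such empiric procedures appeal when one is reluctant to trust a parametric test.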

  2. Application of Transformations in Parametric Inference

    ERIC Educational Resources Information Center

    Brownstein, Naomi; Pensky, Marianna

    2008-01-01

    The objective of the present paper is to provide a simple approach to statistical inference using the method of transformations of variables. We demonstrate performance of this powerful tool on examples of constructions of various estimation procedures, hypothesis testing, Bayes analysis and statistical inference for the stress-strength systems.…

  3. A close examination of double filtering with fold change and t test in microarray analysis

    PubMed Central

    2009-01-01

    Background: Many researchers use the double filtering procedure with fold change and t test to identify differentially expressed genes, in the hope that the double filtering will provide extra confidence in the results. Due to its simplicity, the double filtering procedure has been popular with applied researchers despite the development of more sophisticated methods. Results: This paper, for the first time to our knowledge, provides theoretical insight into the drawback of the double filtering procedure. We show that fold change assumes all genes have a common variance, while the t statistic assumes gene-specific variances. The two statistics are therefore based on contradictory assumptions. Under the assumption that gene variances arise from a mixture of a common variance and gene-specific variances, we develop the theoretically most powerful likelihood ratio test statistic. We further demonstrate that the posterior inference based on a Bayesian mixture model and the widely used significance analysis of microarrays (SAM) statistic are better approximations to the likelihood ratio test than the double filtering procedure. Conclusion: We demonstrate through hypothesis testing theory, simulation studies and real data examples that well constructed shrinkage testing methods, which can be united under the mixture gene variance assumption, can considerably outperform the double filtering procedure. PMID:19995439
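
    The critiqued rule is easy to state in code. The sketch below (hypothetical cutoffs; expression values assumed to be on a log2 scale, so the difference in group means is the log2 fold change) implements the double-filtering procedure itself, not the recommended shrinkage tests:

```python
import math

def gene_stats(group1, group2):
    """Per-gene log2 fold change and equal-variance two-sample t statistic."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)  # pooled variance
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return m1 - m2, t  # on log2 data, m1 - m2 is the log2 fold change

def double_filter(expr1, expr2, fc_cut=1.0, t_cut=2.0):
    """Double filtering: call a gene only if BOTH the absolute log2 fold
    change and the absolute t statistic clear their thresholds."""
    selected = []
    for g, (x1, x2) in enumerate(zip(expr1, expr2)):
        fc, t = gene_stats(x1, x2)
        if abs(fc) >= fc_cut and abs(t) >= t_cut:
            selected.append(g)
    return selected
```

    The paper's point is visible in the two filters: the fold-change cutoff ranks genes as if all shared one variance, while the t cutoff rescales each gene by its own variance estimate, so the intersection mixes contradictory assumptions.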

  4. Data-driven inference for the spatial scan statistic.

    PubMed

    Almeida, Alexandre C L; Duarte, Anderson R; Duczmal, Luiz H; Oliveira, Fernando L P; Takahashi, Ricardo H C

    2011-08-02

    Kulldorff's spatial scan statistic for aggregated area maps searches for clusters of cases without specifying their size (number of areas) or geographic location in advance. The statistical significance of candidate clusters is tested while adjusting for the multiple testing inherent in such a procedure. However, as shown in this work, this adjustment is not done evenly for all possible cluster sizes. A modification to the usual inference test of the spatial scan statistic is proposed, incorporating additional information about the size of the most likely cluster found. A new interpretation of the results of the spatial scan statistic is given, posing a modified inference question: what is the probability that the null hypothesis is rejected for the original observed case map with a most likely cluster of size k, taking into account only those most likely clusters of size k found under the null hypothesis for comparison? This question is especially important when the p-value computed by the usual inference process is near the alpha significance level, as it bears on the correctness of the decision based on this inference. A practical procedure is provided to make more accurate inferences about the most likely cluster found by the spatial scan statistic.
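
    A simplified sketch of the size-conditioned inference, using a 1-D row of areas as a stand-in for an aggregated area map and Kulldorff's Poisson likelihood ratio as the scan statistic. The real method scans geographic zones, but the conditioning step is the same: compare the observed cluster only against null replicates whose most likely cluster has the same size k.

```python
import math
import random

def poisson_llr(c, n, C, N):
    """Kulldorff-style Poisson log-likelihood ratio for a candidate window
    with c cases and population n, out of C total cases and population N."""
    e = C * n / N  # expected cases in the window under H0
    if c <= e:
        return 0.0
    out, rest = C - c, C - e
    return c * math.log(c / e) + (out * math.log(out / rest) if out > 0 else 0.0)

def most_likely_cluster(cases, pops):
    """Scan all contiguous windows; return (best llr, window size in areas)."""
    C, N = sum(cases), sum(pops)
    best, best_k = 0.0, 0
    for i in range(len(cases)):
        c = n = 0
        for j in range(i, len(cases)):
            c, n = c + cases[j], n + pops[j]
            llr = poisson_llr(c, n, C, N)
            if llr > best:
                best, best_k = llr, j - i + 1
    return best, best_k

def size_conditioned_pvalue(cases, pops, n_sim=999, seed=3):
    """Monte Carlo p-value conditioned on cluster size: only null replicates
    whose most likely cluster has the observed size k are used for comparison."""
    rng = random.Random(seed)
    obs_llr, obs_k = most_likely_cluster(cases, pops)
    C, N = sum(cases), sum(pops)
    probs = [p / N for p in pops]
    hits = same_k = 0
    for _ in range(n_sim):
        sim = [0] * len(pops)
        for _ in range(C):  # redistribute all cases under H0 (multinomial)
            r, acc = rng.random(), 0.0
            for idx, pr in enumerate(probs):
                acc += pr
                if r <= acc:
                    sim[idx] += 1
                    break
            else:
                sim[-1] += 1
        llr, k = most_likely_cluster(sim, pops)
        if k == obs_k:
            same_k += 1
            if llr >= obs_llr:
                hits += 1
    return (hits + 1) / (same_k + 1)
```

    The unconditional scan p-value would compare the observed statistic to all replicates; conditioning on k restricts the reference set, which is the essence of the modification proposed in the paper.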

  5. POWER-ENHANCED MULTIPLE DECISION FUNCTIONS CONTROLLING FAMILY-WISE ERROR AND FALSE DISCOVERY RATES.

    PubMed

    Peña, Edsel A; Habiger, Joshua D; Wu, Wensong

    2011-02-01

    Improved procedures, in terms of smaller missed discovery rates (MDR), for performing multiple hypotheses testing with weak and strong control of the family-wise error rate (FWER) or the false discovery rate (FDR) are developed and studied. The improvement over existing procedures such as the Šidák procedure for FWER control and the Benjamini-Hochberg (BH) procedure for FDR control is achieved by exploiting possible differences in the powers of the individual tests. Results signal the need to take into account the powers of the individual tests and to have multiple hypotheses decision functions which are not limited to simply using the individual p-values, as is the case, for example, with the Šidák, Bonferroni, or BH procedures. They also enhance understanding of the role of the powers of individual tests, or more precisely the receiver operating characteristic (ROC) functions of decision processes, in the search for better multiple hypotheses testing procedures. A decision-theoretic framework is utilized, and through auxiliary randomizers the procedures can be used with discrete or mixed-type data or with rank-based nonparametric tests. This is in contrast to existing p-value based procedures whose theoretical validity is contingent on each of these p-value statistics being stochastically equal to or greater than a standard uniform variable under the null hypothesis. The proposed procedures are relevant in the analysis of high-dimensional "large M, small n" data sets arising in the natural, physical, medical, economic and social sciences, whose creation is accelerated by advances in high-throughput technology, notably, but not limited to, microarray technology.
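
    The BH baseline that the proposed procedures improve on can be sketched in a few lines (a textbook step-up implementation, not the power-weighted decision functions of the paper):

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Benjamini-Hochberg step-up procedure controlling the FDR at level q.
    Returns the sorted indices of rejected hypotheses."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    cutoff = 0  # largest rank k with p_(k) <= k * q / m
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * q / m:
            cutoff = rank
    # Reject the cutoff smallest p-values (step-up: all below the last pass).
    return sorted(order[:cutoff])
```

    Note the step-up character: five p-values all equal to 0.04 are all rejected at q = 0.05, because the largest order statistic clears its threshold 5(0.05)/5 = 0.05, even though none would survive a Bonferroni correction at 0.05/5 = 0.01.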

  6. Changing Power Actors in a Midwestern Community.

    ERIC Educational Resources Information Center

    Tait, John L.; And Others

    A longitudinal study was made of Prairie City, Iowa wherein the personal and social characteristics of the 1962 power actor pool were compared with characteristics of the 1973 power actor pool to test the hypothesis that: the personal and social characteristics of power actors will not change significantly over time. Procedures for identifying…

  7. Effects of Instructional Design with Mental Model Analysis on Learning.

    ERIC Educational Resources Information Center

    Hong, Eunsook

    This paper presents a model for systematic instructional design that includes mental model analysis together with the procedures used in developing computer-based instructional materials in the area of statistical hypothesis testing. The instructional design model is based on the premise that the objective for learning is to achieve expert-like…

  8. Conceptual Similarity Promotes Generalization of Higher Order Fear Learning

    ERIC Educational Resources Information Center

    Dunsmoor, Joseph E.; White, Allison J.; LaBar, Kevin S.

    2011-01-01

    We tested the hypothesis that conceptual similarity promotes generalization of conditioned fear. Using a sensory preconditioning procedure, three groups of subjects learned an association between two cues that were conceptually similar, unrelated, or mismatched. Next, one of the cues was paired with a shock. The other cue was then reintroduced to…

  9. Decision Support Systems: Applications in Statistics and Hypothesis Testing.

    ERIC Educational Resources Information Center

    Olsen, Christopher R.; Bozeman, William C.

    1988-01-01

    Discussion of the selection of appropriate statistical procedures by educators highlights a study conducted to investigate the effectiveness of decision aids in facilitating the use of appropriate statistics. Experimental groups and a control group using a printed flow chart, a computer-based decision aid, and a standard text are described. (11…

  10. Children's Memory for Words Under Self-Reported and Induced Imagery Strategies.

    ERIC Educational Resources Information Center

    Filan, Gary L.; Sullivan, Howard J.

    The effectiveness of the use of self-reported imagery strategies on children's subsequent memory performance was studied, and the coding redundancy hypothesis that memory is facilitated by using an encoding procedure in both words and images was tested. The two levels of reported memory strategy (imagize, verbalize) were crossed with "think…

  11. Extensive Training Is Insufficient to Produce the Work-Ethic Effect in Pigeons

    ERIC Educational Resources Information Center

    Vasconcelos, Marco; Urcuioli, Peter J.

    2009-01-01

    Zentall and Singer (2007a) hypothesized that our failure to replicate the work-ethic effect in pigeons (Vasconcelos, Urcuioli, & Lionello-DeNolf, 2007) was due to insufficient overtraining following acquisition of the high- and low-effort discriminations. We tested this hypothesis using the original work-ethic procedure (Experiment 1) and one…

  12. How Often Is p[subscript rep] Close to the True Replication Probability?

    ERIC Educational Resources Information Center

    Trafimow, David; MacDonald, Justin A.; Rice, Stephen; Clason, Dennis L.

    2010-01-01

    Largely due to dissatisfaction with the standard null hypothesis significance testing procedure, researchers have begun to consider alternatives. For example, Killeen (2005a) has argued that researchers should calculate p[subscript rep] that is purported to indicate the probability that, if the experiment in question were replicated, the obtained…

  13. Sentence Repetition Accuracy in Adults with Developmental Language Impairment: Interactions of Participant Capacities and Sentence Structures

    ERIC Educational Resources Information Center

    Poll, Gerard H.; Miller, Carol A.; van Hell, Janet G.

    2016-01-01

    Purpose: We asked whether sentence repetition accuracy could be explained by interactions of participant processing limitations with the structures of the sentences. We also tested a prediction of the procedural deficit hypothesis (Ullman & Pierpont, 2005) that adjuncts are more difficult than arguments for individuals with developmental…

  14. The Experimental State of Mind in Elicitation: Illustrations from Tonal Fieldwork

    ERIC Educational Resources Information Center

    Yu, Kristine M.

    2014-01-01

    This paper illustrates how an "experimental state of mind", i.e. principles of experimental design, can inform hypothesis generation and testing in structured fieldwork elicitation. The application of these principles is demonstrated with case studies in toneme discovery. Pike's classic toneme discovery procedure is shown to be a special…

  15. Spatial autocorrelation in growth of undisturbed natural pine stands across Georgia

    Treesearch

    Raymond L. Czaplewski; Robin M. Reich; William A. Bechtold

    1994-01-01

    Moran's I statistic measures the spatial autocorrelation in a random variable measured at discrete locations in space. Permutation procedures test the null hypothesis that the observed Moran's I value is no greater than that expected by chance. The spatial autocorrelation of gross basal area increment is analyzed for undisturbed, naturally regenerated stands...
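
    The statistic and its permutation test can be sketched compactly. This is a generic implementation for any symmetric spatial weight matrix; the forestry application would supply stand locations and basal area increments (the example data are hypothetical).

```python
import random

def morans_i(values, weights):
    """Moran's I for values at discrete locations; weights[i][j] is the
    spatial weight between locations i and j (0 on the diagonal)."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)

def morans_i_pvalue(values, weights, n_perm=999, seed=11):
    """Permutation test of H0: no spatial autocorrelation. Shuffle the
    values over the locations and count how often the permuted I is at
    least the observed value (one-sided, positive autocorrelation)."""
    rng = random.Random(seed)
    observed = morans_i(values, weights)
    vals = list(values)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(vals)
        if morans_i(vals, weights) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```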

  16. Sex Role Learning: A Test of the Selective Attention Hypothesis.

    ERIC Educational Resources Information Center

    Bryan, Janice Westlund; Luria, Zella

    This paper reports three studies designed to determine whether children show selective attention and/or differential memory to slide pictures of same-sex vs. opposite-sex models and activities. Attention was measured using a feedback EEG procedure, which measured the presence or absence of alpha rhythms in the subjects' brains during presentation…

  17. Students' Reasoning about p-Values

    ERIC Educational Resources Information Center

    Aquilonius, Birgit C.; Brenner, Mary E.

    2015-01-01

    Results from a study of 16 community college students are presented. The research question concerned how students reasoned about p-values. Students' approach to p-values in hypothesis testing was procedural. Students viewed p-values as something that one compares to alpha values in order to arrive at an answer and did not attach much meaning to…

  18. Out with the old? The role of selective attention in retaining targets in partial report.

    PubMed

    Lindsey, Dakota R B; Bundesen, Claus; Kyllingsbæk, Søren; Petersen, Anders; Logan, Gordon D

    2017-01-01

    In the partial-report task, subjects are asked to report only a portion of the items presented. Selective attention chooses which objects to represent in short-term memory (STM) on the basis of their relevance. Because STM is limited in capacity, one must sometimes choose which objects are removed from memory in light of new relevant information. We tested the hypothesis that the choices among newly presented information and old information in STM involve the same process: that both are acts of selective attention. We tested this hypothesis using a two-display partial-report procedure. In this procedure, subjects had to select and retain relevant letters (targets) from two sequentially presented displays. If selection in perception and retention in STM are the same process, then irrelevant letters (distractors) in the second display, which demanded attention because of their similarity to the targets, should have decreased target report from the first display. This effect was not obtained in any of four experiments. Thus, choosing objects to keep in STM is not the same process as choosing new objects to bring into STM.

  19. Inference for High-dimensional Differential Correlation Matrices.

    PubMed

    Cai, T Tony; Zhang, Anru

    2016-01-01

    Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. Minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.

  20. A versatile test for equality of two survival functions based on weighted differences of Kaplan-Meier curves.

    PubMed

    Uno, Hajime; Tian, Lu; Claggett, Brian; Wei, L J

    2015-12-10

    With censored event time observations, the logrank test is the most popular tool for testing the equality of two underlying survival distributions. Although this test is asymptotically distribution free, it may not be powerful when the proportional hazards assumption is violated. Various other novel testing procedures have been proposed, which generally are derived by assuming a class of specific alternative hypotheses with respect to the hazard functions. The test considered by Pepe and Fleming (1989) is based on a linear combination of weighted differences of the two Kaplan-Meier curves over time and is a natural tool to assess the difference of two survival functions directly. In this article, we take a similar approach but choose weights that are proportional to the observed standardized difference of the estimated survival curves at each time point. The new proposal automatically makes weighting adjustments empirically. The new test statistic is aimed at a one-sided general alternative hypothesis and is distributed with a short right tail under the null hypothesis but with a heavy tail under the alternative. The results from extensive numerical studies demonstrate that the new procedure performs well under various general alternatives, with the caveat that the type I error rate may be slightly inflated when the sample size or the number of observed events is small. The survival data from a recent cancer comparative study are used to illustrate the implementation of the procedure. Copyright © 2015 John Wiley & Sons, Ltd.
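
    The building blocks of such a test, Kaplan-Meier curves for two groups and an integrated difference between them, can be sketched as below. The data are hypothetical and the integral is unweighted, a Pepe-Fleming-style simplification rather than the adaptively weighted statistic proposed in the article:

```python
import numpy as np

def kaplan_meier(time, event):
    # product-limit estimator evaluated at each distinct observed event time
    time = np.asarray(time, float)
    event = np.asarray(event, bool)
    s, times, surv = 1.0, [], []
    for t in np.unique(time[event]):
        n_at_risk = int((time >= t).sum())
        d = int(((time == t) & event).sum())
        s *= 1.0 - d / n_at_risk
        times.append(t)
        surv.append(s)
    return np.array(times), np.array(surv)

def km_at(grid, times, surv):
    # evaluate the survival step function S(t) on an arbitrary grid
    out = np.ones(len(grid))
    for j, t in enumerate(grid):
        idx = np.searchsorted(times, t, side="right") - 1
        if idx >= 0:
            out[j] = surv[idx]
    return out

# hypothetical censored samples (event = 1 observed, 0 censored)
t1, e1 = [2, 3, 5, 7, 8, 11], [1, 1, 0, 1, 1, 0]
t2, e2 = [1, 2, 4, 5, 6, 9], [1, 1, 1, 0, 1, 1]
ti1, s1 = kaplan_meier(t1, e1)
ti2, s2 = kaplan_meier(t2, e2)

grid = np.linspace(0.0, 8.0, 81)
diff = km_at(grid, ti1, s1) - km_at(grid, ti2, s2)
stat = diff.sum() * (grid[1] - grid[0])  # unweighted integrated difference
```

    A positive stat means group 1's survival curve sits mostly above group 2's over the window; the article's proposal replaces the constant weight with the observed standardized difference at each time point.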

  1. On the insignificance of Herschel's sunspot correlation

    USGS Publications Warehouse

    Love, Jeffrey J.

    2013-01-01

    We examine William Herschel's hypothesis that solar-cycle variation of the Sun's irradiance has a modulating effect on the Earth's climate and that this is, specifically, manifested as an anticorrelation between sunspot number and the market price of wheat. Since Herschel first proposed his hypothesis in 1801, it has been regarded with both interest and skepticism. Recently, reports have been published that either support Herschel's hypothesis or rely on its validity. As a test of Herschel's hypothesis, we seek to reject a null hypothesis of a statistically random correlation between historical sunspot numbers, wheat prices in London and the United States, and wheat farm yields in the United States. We employ binary-correlation, Pearson-correlation, and frequency-domain methods. We test our methods using a historical geomagnetic activity index, well known to be causally correlated with sunspot number. As expected, the measured correlation between sunspot number and geomagnetic activity would be an unlikely realization of random data; the correlation is “statistically significant.” On the other hand, measured correlations between sunspot number and wheat price and wheat yield data would be very likely realizations of random data; these correlations are “insignificant.” Therefore, Herschel's hypothesis must be regarded with skepticism. We compare and contrast our results with those of other researchers. We discuss procedures for evaluating hypotheses that are formulated from historical data.

  2. Advances in Significance Testing for Cluster Detection

    NASA Astrophysics Data System (ADS)

    Coleman, Deidra Andrea

    Over the past two decades, much attention has been given to data-driven project goals such as the Human Genome Project and the development of syndromic surveillance systems. A major component of these types of projects is analyzing the abundance of data. Detecting clusters within the data can be beneficial as it can lead to the identification of specified sequences of DNA nucleotides that are related to important biological functions or the locations of epidemics such as disease outbreaks or bioterrorism attacks. Cluster detection techniques require efficient and accurate hypothesis testing procedures. In this dissertation, we improve upon the hypothesis testing procedures for cluster detection by enhancing distributional theory and providing an alternative method for spatial cluster detection using syndromic surveillance data. In Chapter 2, we provide an efficient method to compute the exact distribution of the number and coverage of h-clumps of a collection of words. This method involves defining a Markov chain using a minimal deterministic automaton to reduce the number of states needed for computation. We allow words of the collection to contain other words of the collection, making the method more general. We use our method to compute the distributions of the number and coverage of h-clumps in the Chi motif of H. influenzae. In Chapter 3, we provide an efficient algorithm to compute the exact distribution of multiple window discrete scan statistics for higher-order, multi-state Markovian sequences. This algorithm involves defining a Markov chain to efficiently keep track of probabilities needed to compute p-values of the statistic. We use our algorithm to identify cases where the available approximation does not perform well. We also use our algorithm to detect unusual clusters of made free throw shots by National Basketball Association players during the 2009-2010 regular season.
    In Chapter 4, we give a procedure to detect outbreaks using syndromic surveillance data while controlling the Bayesian False Discovery Rate (BFDR). The procedure entails choosing an appropriate Bayesian model that captures the spatial dependency inherent in epidemiological data and considers all days of interest, selecting a test statistic based on a chosen measure that provides the magnitude of the maximal spatial cluster for each day, and identifying a cutoff value that controls the BFDR for rejecting the collective null hypothesis of no outbreak over a collection of days for a specified region. We use our procedure to analyze botulism-like syndrome data collected by the North Carolina Disease Event Tracking and Epidemiologic Collection Tool (NC DETECT).

  3. Changes in Occupational Radiation Exposures after Incorporation of a Real-time Dosimetry System in the Interventional Radiology Suite.

    PubMed

    Poudel, Sashi; Weir, Lori; Dowling, Dawn; Medich, David C

    2016-08-01

    A statistical pilot study was retrospectively performed to analyze potential changes in occupational radiation exposures to Interventional Radiology (IR) staff at Lawrence General Hospital after implementation of the i2 Active Radiation Dosimetry System (Unfors RaySafe Inc, 6045 Cochran Road Cleveland, OH 44139-3302). In this study, the monthly OSL dosimetry records obtained during the eight-month period prior to i2 implementation were normalized to the number of procedures performed during each month and statistically compared to the normalized dosimetry records obtained for the 8-mo period after i2 implementation. The resulting statistics included calculation of the mean and standard deviation of the dose equivalents per procedure and included appropriate hypothesis tests to assess for statistically significant differences between the pre- and post-i2 study periods. Hypothesis testing was performed on three groups of staff present during an IR procedure: the first group included all members of the IR staff, the second group consisted of the IR radiologists, and the third group consisted of the IR technician staff. After implementing the i2 active dosimetry system, participating members of the Lawrence General IR staff had a reduction in the average dose equivalent per procedure of 43.1% ± 16.7% (p = 0.04). Similarly, Lawrence General IR radiologists had a 65.8% ± 33.6% (p = 0.01) reduction while the technologists had a 45.0% ± 14.4% (p = 0.03) reduction.

  4. Interest Inventory Items as Attitude Eliciting Stimuli in Classical Conditioning: A Test of the A-R-D Theory. Language, Personality, and Cross-Cultural Study and Measurement of the Human A-R-D (Motivational) System.

    ERIC Educational Resources Information Center

    Gross, Michael C.; Staats, Arthur W.

    An experiment was conducted to test the hypothesis that interest inventory items elicit classically conditionable attitudinal responses. A higher-order conditioning procedure was used in which items from the Strong Vocational Interest Blank were employed as unconditioned stimuli and nonsense syllables as conditioned stimuli. Items for which the…

  5. Delay discounting moderates the effect of food reinforcement on energy intake among non-obese women

    PubMed Central

    Rollins, Brandi Y.; Dearing, Kelly K.; Epstein, Leonard H.

    2011-01-01

    Recent theoretical approaches to food intake hypothesize that eating represents a balance between reward-driven motivation to eat and inhibitory executive function processes; however, this hypothesis remains to be tested. The objective of the current study was to test the hypothesis that the motivation to eat, operationalized by the relative reinforcing value (RRV) of food, and inhibitory processes, assessed by delay discounting (DD), interact to influence energy intake in an ad libitum eating task. Female subjects (n = 24) completed a DD of money procedure, RRV task, and an ad libitum eating task in counterbalanced sessions. RRV of food predicted total energy intake; however, the effect of the RRV of food on energy intake was moderated by DD. Women higher in DD and RRV of food consumed greater total energy, whereas women higher in RRV of food but lower in DD consumed less total energy. Our findings support the hypothesis that reinforcing value and executive function mediated processes interactively influence food consumption. PMID:20678532

  6. Estimating times of surgeries with two component procedures: comparison of the lognormal and normal models.

    PubMed

    Strum, David P; May, Jerrold H; Sampson, Allan R; Vargas, Luis G; Spangler, William E

    2003-01-01

    Variability inherent in the duration of surgical procedures complicates surgical scheduling. Modeling the duration and variability of surgeries might improve time estimates. Accurate time estimates are important operationally to improve utilization, reduce costs, and identify surgeries that might be considered outliers. Surgeries with multiple procedures are difficult to model because they are difficult to segment into homogeneous groups and because they are performed less frequently than single-procedure surgeries. The authors studied, retrospectively, 10,740 surgeries each with exactly two CPTs and 46,322 surgical cases with only one CPT from a large teaching hospital to determine whether the distribution of dual-procedure surgery times fits a lognormal or a normal model more closely. The authors tested model goodness of fit to their data using Shapiro-Wilk tests, studied factors affecting the variability of time estimates, and examined the impact of coding permutations (ordered combinations) on modeling. The Shapiro-Wilk tests indicated that the lognormal model is statistically superior to the normal model for modeling dual-procedure surgeries. Permutations of component codes did not appear to differ significantly with respect to total procedure time and surgical time. To improve individual models for infrequent dual-procedure surgeries, permutations may be reduced and estimates may be based on the longest component procedure and type of anesthesia. The authors recommend use of the lognormal model for estimating surgical times for surgeries with two component procedures. Their results help legitimize the use of log transforms to normalize surgical procedure times prior to hypothesis testing using linear statistical models. Multiple-procedure surgeries may be modeled using the longest (statistically most important) component procedure and type of anesthesia.
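
    The core comparison, Shapiro-Wilk tests on raw versus log-transformed durations, can be sketched on synthetic data. This sketch assumes SciPy is available; the durations below are simulated to mimic right-skewed surgery times, not the hospital data:

```python
import numpy as np
from scipy import stats  # SciPy assumed available

rng = np.random.default_rng(1)
# synthetic right-skewed "surgical durations" in minutes (illustrative only)
durations = rng.lognormal(mean=4.5, sigma=0.4, size=200)

# Shapiro-Wilk goodness of fit: a normal model on the raw times versus
# a lognormal model, i.e. a normal model on the log-transformed times
w_raw, p_raw = stats.shapiro(durations)
w_log, p_log = stats.shapiro(np.log(durations))
better_model = "lognormal" if p_log > p_raw else "normal"
```

    For right-skewed data like these, the log transform yields a far better fit to normality, which is the pattern the authors report for dual-procedure surgeries.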

  7. Type I error probabilities based on design-stage strategies with applications to noninferiority trials.

    PubMed

    Rothmann, Mark

    2005-01-01

    When testing the equality of means from two different populations, a t-test or a large-sample normal test is typically performed. For these tests, when the sample size or design for the second sample is dependent on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We will examine the impact on the type I error probabilities for two confidence interval procedures and procedures using test statistics when the design for the second sample or experiment is dependent on the results from the first sample or experiment (or series of experiments). Ways of controlling a desired maximum type I error probability or a desired type I error rate will be discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials where the use of a placebo is unethical.

  8. Asymmetrically dominated choice problems, the isolation hypothesis and random incentive mechanisms.

    PubMed

    Cox, James C; Sadiraj, Vjollca; Schmidt, Ulrich

    2014-01-01

    This paper presents an experimental study of the random incentive mechanisms which are a standard procedure in economic and psychological experiments. Random incentive mechanisms have several advantages but are incentive-compatible only if responses to the single tasks are independent. This is true if either the independence axiom of expected utility theory or the isolation hypothesis of prospect theory holds. We present a simple test of this in the context of choice under risk. In the baseline (one task) treatment we observe risk behavior in a given choice problem. We show that by integrating a second, asymmetrically dominated choice problem in a random incentive mechanism risk behavior can be manipulated systematically. This implies that the isolation hypothesis is violated and the random incentive mechanism does not elicit true preferences in our example.

  9. Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions Based on a Bank of Norm-Inequality-Constrained Epoch-State Filters

    NASA Technical Reports Server (NTRS)

    Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.

    2011-01-01

    Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming highly-elliptical orbit formation flying mission.
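
    Wald's sequential probability ratio test, the decision framework referenced above, can be sketched for a simple Gaussian mean hypothesis. This is a generic illustration of how the false alarm and missed detection risks set the decision thresholds, not the filter-bank formulation of the paper:

```python
import math
import random

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald's SPRT for H0: mean = mu0 vs H1: mean = mu1, known sigma."""
    upper = math.log((1 - beta) / alpha)   # cross upward -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross downward -> accept H0
    llr = 0.0
    n = 0
    for n, x in enumerate(samples, 1):
        # log-likelihood ratio increment for a Gaussian observation
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n

random.seed(2)
data = [random.gauss(1.0, 1.0) for _ in range(200)]  # truth matches H1
decision, n_used = sprt(data, mu0=0.0, mu1=1.0, sigma=1.0)
```

    The thresholds depend only on the chosen error risks alpha and beta, and the test typically decides after far fewer samples than a fixed-sample test of the same size.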

  10. Do Statistical Segmentation Abilities Predict Lexical-Phonological and Lexical-Semantic Abilities in Children with and without SLI?

    ERIC Educational Resources Information Center

    Mainela-Arnold, Elina; Evans, Julia L.

    2014-01-01

    This study tested the predictions of the procedural deficit hypothesis by investigating the relationship between sequential statistical learning and two aspects of lexical ability, lexical-phonological and lexical-semantic, in children with and without specific language impairment (SLI). Participants included forty children (ages 8;5-12;3), twenty…

  11. Direct and Indirect Effects of Birth Order on Personality and Identity: Support for the Null Hypothesis

    ERIC Educational Resources Information Center

    Dunkel, Curtis S.; Harbke, Colin R.; Papini, Dennis R.

    2009-01-01

    The authors proposed that birth order affects psychosocial outcomes through differential investment from parent to child and differences in the degree of identification from child to parent. The authors conducted this study to test these 2 models. Despite the use of statistical and methodological procedures to increase sensitivity and reduce…

  12. Learning from Number Board Games: You Learn What You Encode

    ERIC Educational Resources Information Center

    Laski, Elida V.; Siegler, Robert S.

    2014-01-01

    We tested the hypothesis that encoding the numerical-spatial relations in a number board game is a key process in promoting learning from playing such games. Experiment 1 used a microgenetic design to examine the effects on learning of the type of counting procedure that children use. As predicted, having kindergartners count-on from their current…

  13. Applying a Qualitative Modeling Shell to Process Diagnosis: The Caster System. ONR Technical Report #16.

    ERIC Educational Resources Information Center

    Thompson, Timothy F.; Clancey, William J.

    This report describes the application of a shell expert system from the medical diagnostic system, Neomycin, to Caster, a diagnostic system for malfunctions in industrial sandcasting. This system was developed to test the hypothesis that starting with a well-developed classification procedure and a relational language for stating the…

  14. Morphological Decomposition in the Recognition of Prefixed and Suffixed Words: Evidence from Korean

    ERIC Educational Resources Information Center

    Kim, Say Young; Wang, Min; Taft, Marcus

    2015-01-01

    Korean has visually salient syllable units that are often mapped onto either prefixes or suffixes in derived words. In addition, prefixed and suffixed words may be processed differently given a left-to-right parsing procedure and the need to resolve morphemic ambiguity in prefixes in Korean. To test this hypothesis, four experiments using the…

  15. Estimation of the Invariance of Factor Structures Across Sex and Race with Implications for Hypothesis Testing

    ERIC Educational Resources Information Center

    Katzenmeyer, W. G.; Stenner, A. Jackson

    1977-01-01

    The problem of demonstrating invariance of factor structures across criterion groups is addressed. Procedures are outlined which combine the replication of factor structures across sex-race groups with use of the coefficient of invariance to demonstrate the level of invariance associated with factors identified in a self concept measure.…

  16. Response Latency as a Function of Hypothesis-Testing Strategies in Concept Identification

    ERIC Educational Resources Information Center

    Fink, Richard T.

    1972-01-01

    The ability of M. Levine's subset-sampling assumptions to account for the decrease in response latency following the trial of the last error was investigated by employing a distributed stimulus set composed of four binary dimensions and a procedure which required Ss to make an overt response in order to sample each dimension. (Author)

  17. Does RAIM with Correct Exclusion Produce Unbiased Positions?

    PubMed Central

    Teunissen, Peter J. G.; Imparato, Davide; Tiberius, Christian C. J. M.

    2017-01-01

    As the navigation solution of exclusion-based RAIM follows from a combination of least-squares estimation and a statistically based exclusion-process, the computation of the integrity of the navigation solution has to take the propagated uncertainty of the combined estimation-testing procedure into account. In this contribution, we analyse, theoretically as well as empirically, the effect that this combination has on the first statistical moment, i.e., the mean, of the computed navigation solution. It will be shown that, although statistical testing is intended to remove biases from the data, biases will always remain under the alternative hypothesis, even when the correct alternative hypothesis is properly identified. The a posteriori exclusion of a biased satellite range from the position solution will therefore never remove the bias in the position solution completely. PMID:28672862

  18. Simulation-based hypothesis testing of high dimensional means under covariance heterogeneity.

    PubMed

    Chang, Jinyuan; Zheng, Chao; Zhou, Wen-Xin; Zhou, Wen

    2017-12-01

    In this article, we study the problem of testing the mean vectors of high dimensional data in both one-sample and two-sample cases. The proposed testing procedures employ maximum-type statistics and parametric bootstrap techniques to compute the critical values. Different from the existing tests that heavily rely on structural conditions on the unknown covariance matrices, the proposed tests allow general covariance structures of the data and therefore enjoy a wide scope of applicability in practice. To enhance the power of the tests against sparse alternatives, we further propose two-step procedures with a preliminary feature screening step. Theoretical properties of the proposed tests are investigated. Through extensive numerical experiments on synthetic data sets and a human acute lymphoblastic leukemia gene expression data set, we illustrate the performance of the new tests and how they may assist in detecting disease-associated gene sets. The proposed methods have been implemented in an R package HDtest and are available on CRAN. © 2017, The International Biometric Society.
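
    The flavor of a maximum-type statistic with bootstrap critical values can be sketched as follows. This uses a Gaussian multiplier bootstrap on synthetic null data as a stand-in; the procedures in the article differ in detail:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 100
X = rng.normal(size=(n, p))  # synthetic null data: all coordinate means are zero

# max-type statistic: largest absolute studentized coordinate mean
sd = X.std(0, ddof=1)
t_obs = np.max(np.abs(np.sqrt(n) * X.mean(0) / sd))

# Gaussian multiplier bootstrap: recenter the rows, reweight with i.i.d. normals
B = 500
Xc = X - X.mean(0)
t_boot = np.empty(B)
for b in range(B):
    g = rng.normal(size=n)
    m = (g[:, None] * Xc).mean(0)
    t_boot[b] = np.max(np.abs(np.sqrt(n) * m / sd))

crit = float(np.quantile(t_boot, 0.95))  # bootstrap 5%-level critical value
reject = bool(t_obs > crit)
```

    Because the critical value is simulated from the data's own covariance structure, no sparsity or factor assumptions on the covariance matrix are needed, which is the point the abstract emphasizes.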

  19. The optimal power puzzle: scrutiny of the monotone likelihood ratio assumption in multiple testing.

    PubMed

    Cao, Hongyuan; Sun, Wenguang; Kosorok, Michael R

    2013-01-01

    In single hypothesis testing, power is a non-decreasing function of type I error rate; hence it is desirable to test at the nominal level exactly to achieve optimal power. The puzzle lies in the fact that for multiple testing, under the false discovery rate paradigm, such a monotonic relationship may not hold. In particular, exact false discovery rate control may lead to a less powerful testing procedure if a test statistic fails to fulfil the monotone likelihood ratio condition. In this article, we identify different scenarios wherein the condition fails and give caveats for conducting multiple testing in practical settings.
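
    For context, the false discovery rate paradigm referenced here is usually operationalized with the Benjamini-Hochberg step-up procedure, sketched below with hypothetical p-values. The article's analysis concerns when such procedures lose power under failures of the monotone likelihood ratio condition, not this implementation:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Classic step-up FDR procedure: reject the k smallest p-values,
    where k is the largest i with p_(i) <= i * q / m."""
    p = np.asarray(pvals, float)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# hypothetical p-values from 10 tests
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.5, 0.9]
rej = benjamini_hochberg(pvals, q=0.05)
```

    With these inputs only the two smallest p-values fall below their step-up thresholds, so the procedure rejects exactly those two hypotheses.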

  20. Several Modified Goodness-Of-Fit Tests for the Cauchy Distribution with Unknown Scale and Location Parameters

    DTIC Science & Technology

    1994-03-01

    levels of α, which are called significance levels. The hypothesis tests are done based on the α levels. The maximum probabilities of making a type II error...critical values at specific α levels. This procedure is done for each of the 50,000 samples. The number of samples passing each test at those specific...α levels is counted. The ratio of the number of accepted samples to 50,000 gives the percentage point. Then, subtracting that value from one would

  1. Free Recall Learning of Hierarchically Organised Lists by Adults with Asperger's Syndrome: Additional Evidence for Diminished Relational Processing

    ERIC Educational Resources Information Center

    Bowler, Dermot M.; Gaigg, Sebastian B.; Gardiner, John M.

    2009-01-01

    The "Task Support Hypothesis" (TSH, Bowler et al. Neuropsychologia 35:65-70 1997) states that individuals with autism spectrum disorder (ASD) show better memory when test procedures provide support for retrieval. The present study aimed to see whether this principle also applied at encoding. Twenty participants with high-functioning ASD and 20…

  2. Tests of the Aversive Summation Hypothesis in Rats: Effects of Restraint Stress on Consummatory Successive Negative Contrast and Extinction in the Barnes Maze

    ERIC Educational Resources Information Center

    Ortega, Leonardo A.; Prado-Rivera, Mayerli A.; Cardenas-Poveda, D. Carolina; McLinden, Kristina A.; Glueck, Amanda C.; Gutierrez, German; Lamprea, Marisol R.; Papini, Mauricio R.

    2013-01-01

    The present research explored the effects of restraint stress on two situations involving incentive downshift: consummatory successive negative contrast (cSNC) and extinction of escape behavior in the Barnes maze. First, Experiment 1 confirmed that the restraint stress procedure used in these experiments increased levels of circulating…

  3. Control over the Scheduling of Simulated Office Work Reduces the Impact of Workload on Mental Fatigue and Task Performance

    ERIC Educational Resources Information Center

    Hockey, G. Robert J.; Earle, Fiona

    2006-01-01

    Two experiments tested the hypothesis that task-induced mental fatigue is moderated by control over work scheduling. Participants worked for 2 hr on simulated office work, with control manipulated by a yoking procedure. Matched participants were assigned to conditions of either high control (HC) or low control (LC). HC participants decided their…

  4. Delayed Feedback Disrupts the Procedural-Learning System but Not the Hypothesis-Testing System in Perceptual Category Learning

    ERIC Educational Resources Information Center

    Maddox, W. Todd; Ing, A. David

    2005-01-01

    W. T. Maddox, F. G. Ashby, and C. J. Bohil (2003) found that delayed feedback adversely affects information-integration but not rule-based category learning in support of a multiple-systems approach to category learning. However, differences in the number of stimulus dimensions relevant to solving the task and perceptual similarity failed to rule…

  5. Profile local linear estimation of generalized semiparametric regression model for longitudinal data.

    PubMed

    Sun, Yanqing; Sun, Liuquan; Zhou, Jie

    2013-07-01

    This paper studies the generalized semiparametric regression model for longitudinal data where the covariate effects are constant for some and time-varying for others. Different link functions can be used to allow more flexible modelling of longitudinal data. The nonparametric components of the model are estimated using a local linear estimating equation and the parametric components are estimated through a profile estimating function. The method automatically adjusts for heterogeneity of sampling times, allowing the sampling strategy to depend on the past sampling history as well as possibly time-dependent covariates without specifically modelling such dependence. A [Formula: see text]-fold cross-validation bandwidth selection is proposed as a working tool for locating an appropriate bandwidth. A criterion for selecting the link function is proposed to provide a better fit to the data. Large sample properties of the proposed estimators are investigated. Large sample pointwise and simultaneous confidence intervals for the regression coefficients are constructed. Formal hypothesis testing procedures are proposed to check for the covariate effects and whether the effects are time-varying. A simulation study is conducted to examine the finite sample performances of the proposed estimation and hypothesis testing procedures. The methods are illustrated with a data example.

  6. Sequential parallel comparison design with binary and time-to-event outcomes.

    PubMed

    Silverman, Rachel Kloss; Ivanova, Anastasia; Fine, Jason

    2018-04-30

    Sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials, especially trials with a possibly high placebo effect. Sequential parallel comparison design is conducted in 2 stages. Participants are randomized between active therapy and placebo in stage 1. Then, stage 1 placebo nonresponders are rerandomized between active therapy and placebo. Data from the 2 stages are pooled to yield a single P value. We consider SPCD with binary and with time-to-event outcomes. For time-to-event outcomes, response is defined as a favorable event prior to the end of follow-up for a given stage of SPCD. We show that for these cases, the usual test statistics from stages 1 and 2 are asymptotically normal and uncorrelated under the null hypothesis, leading to a straightforward combined testing procedure. In addition, we show that the estimators of the treatment effects from the 2 stages are asymptotically normal and uncorrelated under the null and alternative hypotheses, yielding confidence interval procedures with correct coverage. Simulations and real data analysis demonstrate the utility of the binary and time-to-event SPCD. Copyright © 2018 John Wiley & Sons, Ltd.
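
    The pooling described, two asymptotically uncorrelated stage statistics combined into a single test, can be sketched as a weighted sum of z-statistics. The weight and the stage statistics below are hypothetical placeholders, not values from the paper:

```python
import math

def spcd_combined_z(z1, z2, w=0.6):
    # stage-1 and stage-2 statistics are asymptotically N(0, 1) and
    # uncorrelated under H0, so this weighted combination is also
    # standard normal under the null
    return w * z1 + math.sqrt(1 - w * w) * z2

z = spcd_combined_z(1.8, 1.5)                    # hypothetical stage statistics
p_one_sided = 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail normal p-value
```

    Choosing the weight w trades off the influence of the two stages; because the combined statistic is standard normal under the null, a single P value follows directly, as the abstract states.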

  7. Sequence-specific procedural learning deficits in children with specific language impairment.

    PubMed

    Hsu, Hsinjen Julie; Bishop, Dorothy V M

    2014-05-01

    This study tested the procedural deficit hypothesis of specific language impairment (SLI) by comparing children's performance in two motor procedural learning tasks and an implicit verbal sequence learning task. Participants were 7- to 11-year-old children with SLI (n = 48), typically developing age-matched children (n = 20) and younger typically developing children matched for receptive grammar (n = 28). In a serial reaction time task, the children with SLI performed at the same level as the grammar-matched children, but poorer than age-matched controls in learning motor sequences. When tested with a motor procedural learning task that did not involve learning sequential relationships between discrete elements (i.e. pursuit rotor), the children with SLI performed comparably with age-matched children and better than younger grammar-matched controls. In addition, poor implicit learning of word sequences in a verbal memory task (the Hebb effect) was found in the children with SLI. Together, these findings suggest that SLI might be characterized by deficits in learning sequence-specific information, rather than generally weak procedural learning. © 2014 The Authors. Developmental Science Published by John Wiley & Sons Ltd.

  8. Inference for High-dimensional Differential Correlation Matrices *

    PubMed Central

    Cai, T. Tony; Zhang, Anru

    2015-01-01

    Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. The minimax rate of convergence is established, and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, a subset of which has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of differential covariance matrices, and estimation of differential cross-correlation matrices, are also discussed. PMID:26500380

  9. Framework for adaptive multiscale analysis of nonhomogeneous point processes.

    PubMed

    Helgason, Hannes; Bartroff, Jay; Abry, Patrice

    2011-01-01

    We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process's non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
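
    The generalized likelihood ratio idea can be illustrated on the simplest possible template: a two-bin piecewise-constant rate tested against a homogeneous rate. This toy sketch uses the standard point-process likelihood and is not the paper's dynamic-programming algorithm; the event counts below are hypothetical:

    ```python
    import math

    def glr_two_bin(n1, n2, t1, t2):
        """Generalized likelihood ratio for a Poisson process observed
        on [0, t1 + t2]: H0 constant rate vs. H1 a two-bin
        piecewise-constant template. n1, n2 are event counts per bin.
        Under H0 the statistic is asymptotically chi-squared, 1 df."""
        n, T = n1 + n2, t1 + t2
        def ll(k, t):  # point-process log-likelihood at the MLE rate k/t
            return k * math.log(k / t) - k if k > 0 else 0.0
        return 2 * (ll(n1, t1) + ll(n2, t2) - ll(n, T))

    stat = glr_two_bin(30, 10, 1.0, 1.0)  # ~10.5, beyond the 5% cutoff 3.84
    ```

    Scanning such statistics over a family of templates is what makes the multiple testing scheme necessary.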

  10. Prediction of pilot opinion ratings using an optimal pilot model. [of aircraft handling qualities in multiaxis tasks

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1977-01-01

    A brief review of some of the more pertinent applications of analytical pilot models to the prediction of aircraft handling qualities is undertaken. The relative ease with which multiloop piloting tasks can be modeled via the optimal control formulation makes the use of optimal pilot models particularly attractive for handling qualities research. To this end, a rating hypothesis is introduced which relates the numerical pilot opinion rating assigned to a particular vehicle and task to the numerical value of the index of performance resulting from an optimal pilot modeling procedure as applied to that vehicle and task. This hypothesis is tested using data from piloted simulations and is shown to be reasonable. An example concerning a helicopter landing approach is introduced to outline the predictive capability of the rating hypothesis in multiaxis piloting tasks.

  11. The relationship between energy consumption and economic growth in Malaysia: ARDL bound test approach

    NASA Astrophysics Data System (ADS)

    Razali, Radzuan; Khan, Habib; Shafie, Afza; Hassan, Abdul Rahman

    2016-11-01

    The objective of this paper is to examine the short-run and long-run dynamic causal relationship between energy consumption and income per capita, in both bivariate and multivariate frameworks, over the period 1971-2014 in the case of Malaysia [1]. The study applies the ARDL bound test procedure for long-run co-integration and the Granger causality test to investigate the causal link between the variables. The ARDL bound test confirms the existence of a long-run co-integration relationship between the variables. The causality test shows a feedback (bidirectional) relationship between income per capita and energy consumption in Malaysia over the period.

  12. Using the memory activation capture (MAC) procedure to investigate the temporal dynamics of hypothesis generation.

    PubMed

    Lange, Nicholas D; Buttaccio, Daniel R; Davelaar, Eddy J; Thomas, Rick P

    2014-02-01

    Research investigating top-down capture has demonstrated a coupling of working memory content with attention and eye movements. By capitalizing on this relationship, we have developed a novel methodology, called the memory activation capture (MAC) procedure, for measuring the dynamics of working memory content supporting complex cognitive tasks (e.g., decision making, problem solving). The MAC procedure employs briefly presented visual arrays containing task-relevant information at critical points in a task. By observing which items are preferentially fixated, we gain a measure of working memory content as the task evolves through time. The efficacy of the MAC procedure was demonstrated in a dynamic hypothesis generation task in which some of its advantages over existing methods for measuring changes in the contents of working memory over time are highlighted. In two experiments, the MAC procedure was able to detect the hypothesis that was retrieved and placed into working memory. Moreover, the results from Experiment 2 suggest a two-stage process following hypothesis retrieval, whereby the hypothesis undergoes a brief period of heightened activation before entering a lower activation state in which it is maintained for output. The results of both experiments are of additional general interest, as they represent the first demonstrations of top-down capture driven by participant-established WM content retrieved from long-term memory.

  13. Evaluation of information technology impact on effective internal control in the University system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanusi Fasilat, A., E-mail: Fasilat17@gmail.com; Hassan, Haslinda, E-mail: lynn@uum.edu.my

    Information Technology (IT) plays a key role in the internal control systems of various organizations in terms of maintaining records and other internal services. An internal control system is defined as the set of control procedures established by a firm to safeguard resources and to assure the reliability and accuracy of both financial and non-financial records, in line with applicable governance and procedures, so as to achieve established goals and objectives. This paper focuses on the impact of IT on the internal control system in Nigerian universities. Data are collected from three different universities via questionnaire. Descriptive statistics are used to analyze the data; a chi-square test is performed to test the hypothesis. The results show that IT has a positive relationship with effective internal control activities in the university system. It is concluded that the adoption of IT will significantly improve the effectiveness of internal control operations in the university in terms of quality of service delivery.

  14. Evaluation of information technology impact on effective internal control in the University system

    NASA Astrophysics Data System (ADS)

    Sanusi Fasilat, A.; Hassan, Haslinda

    2015-12-01

    Information Technology (IT) plays a key role in the internal control systems of various organizations in terms of maintaining records and other internal services. An internal control system is defined as the set of control procedures established by a firm to safeguard resources and to assure the reliability and accuracy of both financial and non-financial records, in line with applicable governance and procedures, so as to achieve established goals and objectives. This paper focuses on the impact of IT on the internal control system in Nigerian universities. Data are collected from three different universities via questionnaire. Descriptive statistics are used to analyze the data; a chi-square test is performed to test the hypothesis. The results show that IT has a positive relationship with effective internal control activities in the university system. It is concluded that the adoption of IT will significantly improve the effectiveness of internal control operations in the university in terms of quality of service delivery.
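
    The chi-square analysis described in this record can be sketched as follows; the 2x2 table and its counts are hypothetical questionnaire tallies, not the paper's data:

    ```python
    def chi_square_2x2(a, b, c, d):
        """Pearson chi-square statistic for a 2x2 contingency table
        [[a, b], [c, d]], e.g. IT adoption (yes/no) cross-tabulated
        with internal-control effectiveness (effective/not)."""
        n = a + b + c + d
        # shortcut form: chi2 = n (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))
        return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

    stat = chi_square_2x2(40, 10, 20, 30)   # hypothetical counts
    significant = stat > 3.841               # chi2 critical value, df=1, alpha=0.05
    ```

    A significant statistic here only indicates association between the two categorical variables; the direction of the relationship is read off the table itself.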

  15. The Infection Dynamics of a Hypothetical Virus in a High School: Use of an Ultraviolet Detectable Powder

    ERIC Educational Resources Information Center

    Baltezore, Joan M.; Newbrey, Michael G.

    2007-01-01

    The purpose of this paper is to provide background information about the spread of viruses in a population, to introduce an adaptable procedure to further the understanding of epidemiology in the high school setting, and to show how hypothesis testing and statistics can be incorporated into a high school lab exercise. It describes a project which…

  16. Reduced incidence of stress ulcer in germ-free Sprague Dawley rats.

    PubMed

    Paré, W P; Burken, M I; Allen, E D; Kluczynski, J M

    1993-01-01

    Recent findings with respect to the role of spiral gram-negative bacteria in peptic ulcer disease have stimulated interest in discerning the role of these agents in stress ulcer disease. We tested the hypothesis that a standard restraint-cold ulcerogenic procedure would fail to produce ulcers in axenic rats. Axenic, as well as normal Sprague Dawley rats, were exposed to a cold-restraint procedure. The germ-free condition was maintained throughout the study in the axenic rats. Axenic rats had significantly fewer ulcers as compared to normal rats exposed to the standard cold-restraint procedure, as well as handling control rats. The data represent the first report suggesting a microbiologic component in the development of stress ulcer using the rat model.

  17. Adaptive seamless designs: selection and prospective testing of hypotheses.

    PubMed

    Jennison, Christopher; Turnbull, Bruce W

    2007-01-01

    There is a current trend towards clinical protocols which involve an initial "selection" phase followed by a hypothesis testing phase. The selection phase may involve a choice between competing treatments or different dose levels of a drug, between different target populations, between different endpoints, or between a superiority and a non-inferiority hypothesis. Clearly there can be benefits in elapsed time and economy in organizational effort if both phases can be designed up front as one experiment, with little downtime between phases. Adaptive designs have been proposed as a way to handle these selection/testing problems. They offer flexibility and allow final inferences to depend on data from both phases, while maintaining control of overall false positive rates. We review and critique the methods, give worked examples and discuss the efficiency of adaptive designs relative to more conventional procedures. Where gains are possible using the adaptive approach, a variety of logistical, operational, data handling and other practical difficulties remain to be overcome if adaptive, seamless designs are to be effectively implemented.

  18. The relationship between the hypnotic induction profile and the stanford hypnotic susceptibility scale, form C: revisited.

    PubMed

    Frischholz, Edward J; Tryon, Warren W; Spiegel, Herbert; Fisher, Stanley

    2015-01-01

    Hilgard's comment raises some important issues, although many of these have little to do with the primary purpose of the study under discussion. This purpose was to objectively examine the relationship between three conceptually and operationally different procedures for measuring hypnotic responsivity. Hilgard's concern over the magnitude of the correlation between the HIP and SHSS:C is unfounded. A cross-validated correlation of .66 was found between the HIP and SHSS:C in a new sample of 44 student volunteers. This demonstrates that the HIP correlates about the same with SHSS:C as the Harvard Group Scale of Hypnotic Susceptibility. Hilgard's conception of the Eye-Roll (ER) hypothesis is clarified. Evidence which utilizes all cases in the correlational analysis is presented in support of the ER hypothesis. Happily, we all agree on a new methodology which will be definitive in testing the validity of the ER hypothesis.

  19. Impact of attributed audit on procedural performance in cardiac electrophysiology catheter laboratory.

    PubMed

    Sawhney, V; Volkova, E; Shaukat, M; Khan, F; Segal, O; Ahsan, S; Chow, A; Ezzat, V; Finlay, M; Lambiase, P; Lowe, M; Dhinoja, M; Sporton, S; Earley, M J; Hunter, R J; Schilling, R J

    2018-06-01

    Audit has played a key role in monitoring and improving clinical practice. However, audit often fails to drive change, as summative institutional data alone may be insufficient to do so. We hypothesised that the practice of attributed audit, wherein each individual's procedural performance is presented, would have a greater impact on clinical practice. This hypothesis was tested in an observational study evaluating improvement in fluoroscopy times for AF ablation. We retrospectively analysed fluoroscopy times for AF ablations at the Barts Heart Centre (BHC) from 2012-2017. Fluoroscopy times were compared before and after the introduction of attributed audit in 2012 at St Bartholomew's Hospital (SBH). To test the hypothesis further, the concept was introduced to a second group of experienced operators from the Heart Hospital (HH) as part of a merger of the two institutions in 2015, and the change in fluoroscopy times was recorded. A significant drop in fluoroscopy times (33.3 ± 9.14 to 8.95 ± 2.50 min, p < 0.0001) from 2012-2014 was noted after the introduction of attributed audit. At the time of the merger, a significant difference in fluoroscopy times between operators from the two centres was seen in 2015. Each operator's procedural performance was shared openly at the audit meeting. Subsequent audits showed a steady decrease in fluoroscopy times for each operator, with the fluoroscopy time (min, mean ± SD) decreasing from 13.29 ± 7.3 in 2015 to 8.84 ± 4.8 (p < 0.0001) in 2017 across the entire group. Systematic improvement in fluoroscopy times for AF ablation procedures was achieved by evaluating individual operators' performance. Attributing audit data to individual physicians can prompt significant improvement and hence should be adopted in clinical practice.

  20. What is the safety of nonemergent operative procedures performed at night? A study of 10,426 operations at an academic tertiary care hospital using the American College of Surgeons national surgical quality program improvement database.

    PubMed

    Turrentine, Florence E; Wang, Hongkun; Young, Jeffrey S; Calland, James Forrest

    2010-08-01

    Ever-increasing numbers of in-house acute care surgeons and competition for operating room time during normal daytime business hours have led to an increased frequency of nonemergent general and vascular surgery procedures occurring at night, when there are fewer residents, consultants, nurses, and support staff available for assistance. This investigation tests the hypothesis that patients undergoing such procedures after hours are at increased risk for postoperative morbidity and mortality. Clinical data for 10,426 operative procedures performed over a 5-year period at a single academic tertiary care hospital were obtained from the American College of Surgeons National Surgical Quality Improvement Program database. The prevalence of preoperative comorbid conditions, postoperative length of stay, morbidity, and mortality was compared between two cohorts of patients: one that underwent nonemergent operative procedures at night and another that underwent similar procedures during the day. Statistical comparisons used chi-square tests for categorical variables and F-tests for continuous variables. Patients undergoing procedures at night had a greater prevalence of serious preoperative comorbid conditions. Procedure complexity as measured by relative value units did not differ between groups, but length of stay was longer after night procedures (7.8 days vs. 4.3 days, p < 0.0001). Patients undergoing nonemergent general and vascular surgery procedures at night in an academic medical center do not seem to be at increased risk for postoperative morbidity or mortality. Performing nonemergent procedures at night seems to be a safe solution for daytime overcrowding of operating rooms.

  1. [The dilemma of the null hypothesis in experimental tests of ecological hypotheses].

    PubMed

    Li, Ji

    2016-06-01

    Experimental testing is one of the major methods for testing ecological hypotheses, though there are many arguments over the use of the null hypothesis. Quinn and Dunham (1983) analyzed the hypothesis-deduction model of Platt (1964) and concluded that there is no null hypothesis in ecology that can be strictly tested by experiments. Fisher's falsificationism and Neyman-Pearson (N-P) non-decisivity prevent the statistical null hypothesis from being strictly tested. Moreover, since the null hypothesis H0 (α=1, β=0) and alternative hypothesis H1' (α'=1, β'=0) in ecological processes differ from those in classical physics, the ecological null hypothesis cannot be strictly tested experimentally either. These dilemmas of the null hypothesis can be relieved via reduction of the P value, careful selection of the null hypothesis, non-centralization of the non-null hypothesis, and two-tailed tests. However, statistical null hypothesis significance testing (NHST) should not be equated with the logical test of causality in an ecological hypothesis. Hence, findings and conclusions of methodological studies and experimental tests based on NHST are not always logically reliable.

  2. A Test by Any Other Name: P Values, Bayes Factors, and Statistical Inference.

    PubMed

    Stern, Hal S

    2016-01-01

    Procedures used for statistical inference are receiving increased scrutiny as the scientific community studies the factors associated with insuring reproducible research. This note addresses recent negative attention directed at p values, the relationship of confidence intervals and tests, and the role of Bayesian inference and Bayes factors, with an eye toward better understanding these different strategies for statistical inference. We argue that researchers and data analysts too often resort to binary decisions (e.g., whether to reject or accept the null hypothesis) in settings where this may not be required.
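
    The contrast the note draws between a binary significance decision and graded evidence can be made concrete for a normal mean with known variance. The simple point alternative mu1 below is an illustrative assumption (real Bayes factors usually integrate over a prior on the alternative):

    ```python
    import math

    def p_and_bf(xbar, n, sigma=1.0, mu1=0.5):
        """Two-sided p value for H0: mu = 0, and a simple-vs-simple
        Bayes factor BF10 comparing H1: mu = mu1 against H0, for a
        normal mean with known sigma. xbar ~ N(mu, sigma^2 / n)."""
        z = xbar * math.sqrt(n) / sigma
        p = math.erfc(abs(z) / math.sqrt(2))          # two-sided p value
        se2 = sigma**2 / n
        # log likelihood ratio of H1 to H0 at the sufficient statistic xbar
        log_bf10 = (xbar**2 - (xbar - mu1)**2) / (2 * se2)
        return p, math.exp(log_bf10)

    p_val, bf10 = p_and_bf(0.4, 25)  # hypothetical sample mean, n = 25
    ```

    Here the p value sits just under 0.05 while the Bayes factor gives moderate, not decisive, support to the alternative: two summaries of the same data that need not lead to the same binary decision.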

  3. Long memory and multifractality: A joint test

    NASA Astrophysics Data System (ADS)

    Goddard, John; Onali, Enrico

    2016-06-01

    The properties of statistical tests for hypotheses concerning the parameters of the multifractal model of asset returns (MMAR) are investigated, using Monte Carlo techniques. We show that, in the presence of multifractality, conventional tests of long memory tend to over-reject the null hypothesis of no long memory. Our test addresses this issue by jointly estimating long memory and multifractality. The estimation and test procedures are applied to exchange rate data for 12 currencies. Among the nested model specifications that are investigated, in 11 out of 12 cases, daily returns are most appropriately characterized by a variant of the MMAR that applies a multifractal time-deformation process to NIID returns. There is no evidence of long memory.

  4. Techniques for recognizing identity of several response functions from the data of visual inspection

    NASA Astrophysics Data System (ADS)

    Nechval, Nicholas A.

    1996-08-01

    The purpose of this paper is to present some efficient techniques for recognizing, from observed data, whether several response functions are identical to each other. For example, in an industrial setting the problem may be to determine whether the production coefficients established in a small-scale pilot study apply to each of several large-scale production facilities. The techniques proposed here combine sensor information from automated visual inspection of manufactured products, which is carried out by means of pixel-by-pixel comparison of the sensed image of the product to be inspected with some reference pattern (or image). Let a1, ..., am be p-dimensional parameters associated with m response models of the same type. This study is concerned with the simultaneous comparison of a1, ..., am. A generalized maximum likelihood ratio (GMLR) test is derived for testing equality of these parameters, where each of the parameters represents a corresponding vector of regression coefficients. The GMLR test reduces to an equivalent test based on a statistic that has an F distribution. The main advantage of the test lies in its relative simplicity and the ease with which it can be applied. Another interesting test for the same problem is an application of Fisher's method of combining independent test statistics, which can be considered a parallel procedure to the GMLR test. The combination of independent test statistics does not appear to have been used much in applied statistics. There does, however, seem to be potential data-analytic value in techniques for combining distributional assessments for statistically independent samples that are of joint experimental relevance. In addition, a new iterated test for the problem defined above is presented. A rejection of the null hypothesis by this test provides evidence that not all of the parameters are equal. A numerical example is discussed in the context of the proposed procedures for hypothesis testing.

  5. Hypothesis testing in functional linear regression models with Neyman's truncation and wavelet thresholding for longitudinal data.

    PubMed

    Yang, Xiaowei; Nie, Kun

    2008-03-15

    Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.
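
    The third step of the procedure, testing the transformed coefficients with an adaptive Neyman statistic, can be sketched as below. The statistic scans truncation points k and keeps the most significant standardized partial sum of (z_i^2 - 1); the inputs z are assumed to be standardized coefficient contrasts from the model fitted in the Fourier domain:

    ```python
    import math

    def adaptive_neyman(z):
        """Adaptive Neyman statistic: for each truncation point k,
        form the partial sum of (z_i^2 - 1) over the first k
        standardized coefficients, normalize by sqrt(2k), and keep
        the maximum. Large values indicate a group difference
        concentrated in the low-frequency coefficients."""
        best, s = -math.inf, 0.0
        for k, zk in enumerate(z, start=1):
            s += zk * zk - 1.0
            best = max(best, s / math.sqrt(2 * k))
        return best

    stat = adaptive_neyman([3.0, 0.0, 0.0])  # signal in the first coefficient
    ```

    The adaptivity is the max over k: the test does not need to know in advance how many transform coefficients carry the signal.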

  6. Précis of statistical significance: rationale, validity, and utility.

    PubMed

    Chow, S L

    1998-04-01

    The null-hypothesis significance-test procedure (NHSTP) is defended in the context of the theory-corroboration experiment, as well as the following contrasts: (a) substantive hypotheses versus statistical hypotheses, (b) theory corroboration versus statistical hypothesis testing, (c) theoretical inference versus statistical decision, (d) experiments versus nonexperimental studies, and (e) theory corroboration versus treatment assessment. The null hypothesis can be true because it is the hypothesis that errors are randomly distributed in data. Moreover, the null hypothesis is never used as a categorical proposition. Statistical significance means only that chance influences can be excluded as an explanation of data; it does not identify the nonchance factor responsible. The experimental conclusion is drawn with the inductive principle underlying the experimental design. A chain of deductive arguments gives rise to the theoretical conclusion via the experimental conclusion. The anomalous relationship between statistical significance and the effect size often used to criticize NHSTP is more apparent than real. The absolute size of the effect is not an index of evidential support for the substantive hypothesis. Nor is the effect size, by itself, informative as to the practical importance of the research result. Being a conditional probability, statistical power cannot be the a priori probability of statistical significance. The validity of statistical power is debatable because statistical significance is determined with a single sampling distribution of the test statistic based on H0, whereas it takes two distributions to represent statistical power or effect size. Sample size should not be determined in the mechanical manner envisaged in power analysis. It is inappropriate to criticize NHSTP for nonstatistical reasons. 
At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of data. Neither can any of them fulfill the nonstatistical functions expected of them by critics.

  7. Short- and long-term effects of clinical audits on compliance with procedures in CT scanning.

    PubMed

    Oliveri, Antonio; Howarth, Nigel; Gevenois, Pierre Alain; Tack, Denis

    2016-08-01

    To test the hypothesis that quality clinical audits improve compliance with procedures in computed tomography (CT) scanning. This retrospective study was conducted in two hospitals, based on 6950 examinations and four procedures, focusing on the acquisition length in lumbar spine CT, the default tube current applied in unenhanced abdominal CT, the tube potential selection for portal-phase abdominal CT, and the use of a specific "paediatric brain CT" procedure. The first clinical audit reported compliance with these procedures. After presenting the results to the stakeholders, a second audit was conducted to measure the impact of this information on compliance, and was repeated the next year. Comparisons of proportions were performed using the Pearson chi-square test. Depending on the procedure, the compliance rate ranged from 27 to 88 % during the first audit. After presentation of the audit results to the stakeholders, the compliance rate ranged from 68 to 93 % and was significantly improved for all procedures (P ranging from <0.001 to 0.031) in both hospitals, and it remained unchanged during the third audit (P ranging from 0.114 to 0.999). Repeated audits of compliance with CT procedures durably improve that compliance. • Compliance with CT procedures is operator-dependent and not perfect. • Compliance differs between procedures and hospitals, even within a unified department. • Compliance is improved through audits followed by communication to the stakeholders. • This improvement is sustainable over a one-year period.

  8. A procedure for the significance testing of unmodeled errors in GNSS observations

    NASA Astrophysics Data System (ADS)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most existing studies focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. A first question, therefore, is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors by the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, stationary signal and white noise, are identified. The procedure is tested using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further confirmed by applying the time-domain Allan variance analysis and the frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors commonly exist in GNSS observations and are mainly governed by residual atmospheric biases and multipath. Their patterns may also be affected by the receiver.
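
    The Allan variance used above to cross-check the noise decomposition can be sketched in its simplest non-overlapping form (the paper's exact variant is not specified in the abstract). For white noise the Allan variance decays like 1/m with cluster size m, so departures from that slope flag correlated residual errors:

    ```python
    def allan_variance(y, m):
        """Non-overlapping Allan variance at cluster size m: half the
        mean squared difference of successive cluster averages of the
        residual series y. White noise gives values decaying ~ 1/m."""
        k = len(y) // m                               # number of clusters
        means = [sum(y[i * m:(i + 1) * m]) / m for i in range(k)]
        diffs = [(means[j + 1] - means[j]) ** 2 for j in range(k - 1)]
        return sum(diffs) / (2 * len(diffs))
    ```

    Computing this over a range of m and plotting on log-log axes is the usual way to read off which noise process dominates at each averaging scale.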

  9. Clinical supervision, emotional exhaustion, and turnover intention: A study of substance abuse treatment counselors in NIDA’s Clinical Trials Network

    PubMed Central

    Knudsen, Hannah K.; Ducharme, Lori J.; Roman, Paul M

    2008-01-01

    An intriguing hypothesis is that clinical supervision may protect against counselor turnover. This idea has been mentioned in recent discussions of the substance abuse treatment workforce. To test this hypothesis, we extend our previous research on emotional exhaustion and turnover intention among counselors by estimating the associations between clinical supervision and these variables in a large sample (n = 823). An exploratory analysis reveals that clinical supervision was negatively associated with emotional exhaustion and turnover intention. Given our previous findings that emotional exhaustion and turnover intention were associated with job autonomy, procedural justice, and distributive justice, we estimate a structural equation model to examine whether these variables mediated clinical supervision’s associations with emotional exhaustion and turnover intention. These data support the fully mediated model. We found the perceived quality of clinical supervision is strongly associated with counselors’ perceptions of job autonomy, procedural justice, and distributive justice, which are, in turn, associated with emotional exhaustion and turnover intention. These data offer support for the protective role of clinical supervision in substance abuse treatment counselors’ turnover and occupational wellbeing. PMID:18424048

  10. Estimating False Discovery Proportion Under Arbitrary Covariance Dependence*

    PubMed Central

    Fan, Jianqing; Han, Xu; Gu, Weijie

    2012-01-01

    Multiple hypothesis testing is a fundamental problem in high dimensional inference, with wide applications in many scientific fields. In genome-wide association studies, tens of thousands of tests are performed simultaneously to find if any SNPs are associated with some traits and those tests are correlated. When test statistics are correlated, false discovery control becomes very challenging under arbitrary dependence. In the current paper, we propose a novel method based on principal factor approximation, which successfully subtracts the common dependence and weakens significantly the correlation structure, to deal with an arbitrary dependence structure. We derive an approximate expression for false discovery proportion (FDP) in large scale multiple testing when a common threshold is used and provide a consistent estimate of realized FDP. This result has important applications in controlling FDR and FDP. Our estimate of realized FDP compares favorably with Efron (2007)’s approach, as demonstrated in the simulated examples. Our approach is further illustrated by some real data applications. We also propose a dependence-adjusted procedure, which is more powerful than the fixed threshold procedure. PMID:24729644
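
    For context, the standard fixed-threshold baseline in this setting is the Benjamini-Hochberg step-up procedure. A minimal sketch of that baseline (this is the classical procedure, not the paper's dependence-adjusted method; the p values are hypothetical):

    ```python
    def benjamini_hochberg(pvals, alpha=0.05):
        """Benjamini-Hochberg step-up procedure for FDR control:
        reject the hypotheses with the k smallest p values, where k is
        the largest rank with p_(k) <= alpha * k / m. Returns the
        indices of rejected hypotheses."""
        m = len(pvals)
        order = sorted(range(m), key=lambda i: pvals[i])
        k = 0
        for rank, i in enumerate(order, start=1):
            if pvals[i] <= alpha * rank / m:
                k = rank
        return sorted(order[:k])

    rejected = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.2, 0.6])
    ```

    Under strong correlation between test statistics, the realized false discovery proportion of such a fixed-threshold rule can fluctuate widely, which is the motivation for estimating and adjusting for the common dependence as the paper proposes.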

  11. The effect of aging in recollective experience: the processing speed and executive functioning hypothesis.

    PubMed

    Bugaiska, Aurélia; Clarys, David; Jarry, Caroline; Taconnat, Laurence; Tapia, Géraldine; Vanneste, Sandrine; Isingrini, Michel

    2007-12-01

This study was designed to investigate the effects of aging on consciousness in recognition memory, using the Remember/Know/Guess procedure (Gardiner, J. M., & Richardson-Klavehn, A. (2000). Remembering and Knowing. In E. Tulving & F. I. M. Craik (Eds.), The Oxford Handbook of Memory. Oxford University Press.). In recognition memory, older participants report fewer occasions on which recognition is accompanied by recollection of the original encoding context. Two main hypotheses were tested: the speed mediation hypothesis (Salthouse, T. A. (1996). The processing-speed theory of adult age differences in cognition. Psychological Review, 3, 403-428) and the executive-aging hypothesis (West, R. L. (1996). An application of prefrontal cortex function theory to cognitive aging. Psychological Bulletin, 120, 272-292). A group of young adults and a group of older adults took a recognition test in which they classified their responses according to Gardiner and Richardson-Klavehn's (2000) remember-know-guess paradigm. Subsequently, participants completed processing speed and executive function tests. The results showed that among the older participants, R responses decreased, but K responses did not. Moreover, a hierarchical regression analysis supported the view that the effect of age on recollective experience is determined by frontal lobe integrity and not by a diminution of processing speed.

  12. Back to Anatomy: Improving Landmarking Accuracy of Clinical Procedures Using a Novel Approach to Procedural Teaching.

    PubMed

    Zeller, Michelle; Cristancho, Sayra; Mangel, Joy; Goldszmidt, Mark

    2015-06-01

    Many believe that knowledge of anatomy is essential for performing clinical procedures; however, unlike their surgical counterparts, internal medicine (IM) programs rarely incorporate anatomy review into procedural teaching. This study tested the hypothesis that an educational intervention focused on teaching relevant surface and underlying anatomy would result in improved bone marrow procedure landmarking accuracy. This was a preintervention-postintervention prospective study on landmarking accuracy of consenting IM residents attending their mandatory academic half-day. The intervention included an interactive video and visualization exercise; the video was developed specifically to teach the relevant underlying anatomy and includes views of live volunteers, cadavers, and skeletons. Thirty-one IM residents participated. At pretest, 48% (15/31) of residents landmarked accurately. Inaccuracy of pretest landmarking varied widely (n = 16, mean 20.06 mm; standard deviation 30.03 mm). At posttest, 74% (23/31) of residents accurately performed the procedure. McNemar test revealed a nonsignificant trend toward increased performance at posttest (P = 0.076; unadjusted odds for discordant pairs 3; 95% confidence interval 0.97-9.3). The Wilcoxon signed rank test demonstrated a significant difference between pre- and posttest accuracy in the 16 residents who were inaccurate at pretest (P = 0.004). No association was detected between participant baseline characteristics and pretest accuracy. This study demonstrates that residents who were initially inaccurate were able to significantly improve their landmarking skills by interacting with an educational tool emphasizing the relation between the surface and underlying anatomy. Our results support the use of basic anatomy in teaching bone marrow procedures. Results also support the proper use of video as an effective means for incorporating anatomy teaching around procedural skills.
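The reported McNemar result can be reproduced as an exact binomial test on the discordant pairs. The abstract does not report the discordant counts directly; the split below (12 residents improved, 4 worsened) is inferred from the stated odds of 3 and the change from 15/31 to 23/31 accurate, so treat those counts as an assumption:

```python
from scipy.stats import binomtest

# Exact McNemar test = binomial test on discordant pairs.
# Counts are INFERRED from the abstract (15/31 accurate at pretest, 23/31 at
# posttest, odds for discordant pairs = 3); they are not reported directly.
b = 12   # inaccurate at pretest -> accurate at posttest
c = 4    # accurate at pretest -> inaccurate at posttest

# H0: improvement and decline are equally likely among discordant pairs
result = binomtest(b, n=b + c, p=0.5)
print(f"odds = {b / c:.1f}, exact two-sided P = {result.pvalue:.4f}")
# -> odds = 3.0, exact two-sided P = 0.0768 (the abstract reports P = 0.076)
```

With the two-sided exact test symmetric around p = 0.5, the P value is twice the upper binomial tail, consistent with the nonsignificant trend described in the abstract.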

  13. Psychotherapy Augmentation through Preconscious Priming

    PubMed Central

    Borgeat, François; O’Connor, Kieron; Amado, Danielle; St-Pierre-Delorme, Marie-Ève

    2013-01-01

Objective: To test the hypothesis that repeated preconscious (masked) priming of personalized positive cognitions could augment cognitive change and facilitate achievement of patients’ goals following a therapy. Methods: Twenty social phobic patients (13 women) completed a 36-week study beginning with 12 weeks of group behavioral therapy. After the therapy, they received 6 weeks of preconscious priming and 6 weeks of a control procedure in a randomized cross-over design. The Priming condition involved listening twice daily, with a passive attitude, to a recording of individualized formulations of appropriate cognitions and attitudes masked by music. The Control condition involved listening to an indistinguishable recording in which the formulations had been replaced by random numbers. Changes in social cognitions were measured by the Social Interaction Self-Statement Test (SISST). Results: Patients improved following therapy. The Priming procedure was associated with increased positive cognitions and decreased negative cognitions on the SISST, while the Control procedure was not. The Priming procedure induced more cognitive change when applied immediately after the group therapy. Conclusion: An effect of priming was observed on social phobia-related cognitions in the expected direction. This self-administered addition to a therapy could be seen as an augmentation strategy. PMID:23508724

  14. A Demographic Analysis of Suicide Among U.S. Navy Personnel

    DTIC Science & Technology

    1997-08-01

estimates of a Poisson-distributed variable according to the procedure described in Lilienfeld and Lilienfeld.27 Based on averaged age-specific rates of...n suicides, the total number of pairs will be n(n-1)/2). The Knox method tests the null hypothesis that the event of a pair of suicides being close...significantly differ. It is likely, however, that the military’s required suicide prevention programs and psychological autopsies help to ascertain as

  15. Ensembles vs. information theory: supporting science under uncertainty

    NASA Astrophysics Data System (ADS)

    Nearing, Grey S.; Gupta, Hoshin V.

    2018-05-01

    Multi-model ensembles are one of the most common ways to deal with epistemic uncertainty in hydrology. This is a problem because there is no known way to sample models such that the resulting ensemble admits a measure that has any systematic (i.e., asymptotic, bounded, or consistent) relationship with uncertainty. Multi-model ensembles are effectively sensitivity analyses and cannot - even partially - quantify uncertainty. One consequence of this is that multi-model approaches cannot support a consistent scientific method - in particular, multi-model approaches yield unbounded errors in inference. In contrast, information theory supports a coherent hypothesis test that is robust to (i.e., bounded under) arbitrary epistemic uncertainty. This paper may be understood as advocating a procedure for hypothesis testing that does not require quantifying uncertainty, but is coherent and reliable (i.e., bounded) in the presence of arbitrary (unknown and unknowable) uncertainty. We conclude by offering some suggestions about how this proposed philosophy of science suggests new ways to conceptualize and construct simulation models of complex, dynamical systems.

  16. The effect of urbanization and industrialization on carbon emissions in Turkey: evidence from ARDL bounds testing procedure.

    PubMed

    Pata, Ugur Korkut

    2018-03-01

This paper examines the dynamic short- and long-term relationship between per capita GDP, per capita energy consumption, financial development, urbanization, industrialization, and per capita carbon dioxide (CO2) emissions within the framework of the environmental Kuznets curve (EKC) hypothesis for Turkey, covering the period from 1974 to 2013. According to the results of the autoregressive distributed lag bounds testing approach, an increase in per capita GDP, per capita energy consumption, financial development, urbanization, and industrialization has a positive effect on per capita CO2 emissions in the long term, and the variables other than urbanization also increase per capita CO2 emissions in the short term. In addition, the findings support the validity of the EKC hypothesis for Turkey in the short and long term. However, the turning points obtained from the long-term regressions lie outside the sample period. Therefore, as per capita GDP increases in Turkey, per capita CO2 emissions continue to increase.
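Under the quadratic-in-logs EKC specification, the turning point follows from setting the derivative of log emissions with respect to log GDP to zero. The coefficients below are hypothetical illustrations, not the paper's estimates:

```python
import math

# EKC turning point for a quadratic-in-logs specification
#   ln(CO2pc) = b0 + b1*ln(GDPpc) + b2*ln(GDPpc)**2,  with b2 < 0.
# The coefficients here are hypothetical, chosen only for illustration.
b1, b2 = 3.2, -0.12

ln_turn = -b1 / (2 * b2)       # solves d ln(CO2)/d ln(GDP) = b1 + 2*b2*ln(GDP) = 0
gdp_turn = math.exp(ln_turn)   # per capita GDP at the emissions peak
print(f"turning point: ln(GDP) = {ln_turn:.2f}, GDP per capita = {gdp_turn:,.0f}")
```

A turning point far above any income level observed in the sample, as in this sketch, is exactly how "turning points outside the sample period" manifests: emissions keep rising over the whole observed range even though the fitted curve eventually bends down.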

  17. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    NASA Astrophysics Data System (ADS)

    Marinkovic, Slavica; Guillemot, Christine

    2006-12-01

Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure- and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  18. The Use of Climatic Niches in Screening Procedures for Introduced Species to Evaluate Risk of Spread: A Case with the American Eastern Grey Squirrel

    PubMed Central

    Di Febbraro, Mirko; Lurz, Peter W. W.; Genovesi, Piero; Maiorano, Luigi; Girardello, Marco; Bertolino, Sandro

    2013-01-01

Species introduction represents one of the most serious threats to biodiversity. The realized climatic niche of an invasive species can be used to predict its potential distribution in new areas, providing a basis for screening procedures in the compilation of black and white lists to prevent new introductions. We tested this assertion by modeling the realized climatic niche of the Eastern grey squirrel Sciurus carolinensis. Maxent was used to develop three models: one considering only records from the native range (NRM), a second including records from the native and invasive ranges (NIRM), and a third calibrated with invasive occurrences and projected onto the native range (RCM). Niche conservatism was tested with both a niche equivalency and a niche similarity test. NRM failed to predict suitable parts of the currently invaded range in Europe, while RCM underestimated the suitability in the native range. NIRM accurately predicted both the native and invasive ranges. The niche equivalency hypothesis was rejected due to a significant difference between the grey squirrel’s niche in the native and invasive ranges. The niche similarity test yielded no significant results. Our analyses support the hypothesis of a shift in the species’ climatic niche in the areas of introduction. Species distribution models (SDMs) appear to be a useful tool in the compilation of black lists, allowing the identification of areas vulnerable to invasion. We advise caution in the use of SDMs based only on the native range of a species for the compilation of white lists for other geographic areas, due to the significant risk of underestimating its potential invasive range. PMID:23843957

  19. A new modeling and inference approach for the Systolic Blood Pressure Intervention Trial outcomes.

    PubMed

    Yang, Song; Ambrosius, Walter T; Fine, Lawrence J; Bress, Adam P; Cushman, William C; Raj, Dominic S; Rehman, Shakaib; Tamariz, Leonardo

    2018-06-01

Background/aims: In clinical trials with time-to-event outcomes, the significance tests and confidence intervals are usually based on a proportional hazards model. Thus, the temporal pattern of the treatment effect is not directly considered. This could be problematic if the proportional hazards assumption is violated, as such violation could impact both interim and final estimates of the treatment effect. Methods: We describe the application of inference procedures developed recently in the literature for time-to-event outcomes when the treatment effect may or may not be time-dependent. The inference procedures are based on a new model which contains the proportional hazards model as a sub-model. The temporal pattern of the treatment effect can then be expressed and displayed. The average hazard ratio is used as the summary measure of the treatment effect. The test of the null hypothesis uses adaptive weights that often lead to improvement in power over the log-rank test. Results: Without needing to assume proportional hazards, the new approach yields results consistent with previously published findings in the Systolic Blood Pressure Intervention Trial. It provides a visual display of the time course of the treatment effect. At four of the five scheduled interim looks, the new approach yields smaller p values than the log-rank test. The average hazard ratio and its confidence interval indicate a treatment effect nearly a year earlier than a restricted mean survival time-based approach. Conclusion: When the hazards are proportional between the comparison groups, the new methods yield results very close to those of the traditional approaches. When the proportional hazards assumption is violated, the new methods continue to be applicable and can potentially be more sensitive to departure from the null hypothesis.

  20. Evaluating the Stage Learning Hypothesis.

    ERIC Educational Resources Information Center

    Thomas, Hoben

    1980-01-01

    A procedure for evaluating the Genevan stage learning hypothesis is illustrated by analyzing Inhelder, Sinclair, and Bovet's guided learning experiments (in "Learning and the Development of Cognition." Cambridge: Harvard University Press, 1974). (Author/MP)

  1. Explorations in statistics: hypothesis tests and P values.

    PubMed

    Curran-Everett, Douglas

    2009-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of Explorations in Statistics delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what we observe in the experiment to what we expect to see if the null hypothesis is true. The P value associated with the magnitude of that test statistic answers this question: if the null hypothesis is true, what proportion of possible values of the test statistic are at least as extreme as the one I got? Although statisticians continue to stress the limitations of hypothesis tests, there are two realities we must acknowledge: hypothesis tests are ingrained within science, and the simple test of a null hypothesis can be useful. As a result, it behooves us to explore the notions of hypothesis tests, test statistics, and P values.
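The question the P value answers can be made concrete with a one-sample t test: compute the test statistic, then ask what proportion of the null distribution is at least as extreme. The sample values below are invented for illustration:

```python
import math
import numpy as np
from scipy import stats

# One-sample t test as a concrete instance of "test statistic + P value".
# The sample below is made up purely for illustration.
sample = np.array([98.2, 101.5, 99.8, 102.3, 100.9, 103.1, 99.5, 101.8])
mu0 = 100.0                     # null hypothesis: the population mean is 100

# Test statistic: how far the observed mean lies from the null expectation,
# in units of estimated standard error.
t = (sample.mean() - mu0) / (sample.std(ddof=1) / math.sqrt(len(sample)))

# P value: proportion of the null distribution at least as extreme as t
# (two-sided, so both tails count as "at least as extreme").
p = 2 * stats.t.sf(abs(t), df=len(sample) - 1)

t_check, p_check = stats.ttest_1samp(sample, mu0)   # same result via SciPy
print(f"t = {t:.3f}, P = {p:.3f}")
```

The manual computation and `ttest_1samp` agree, which is the point: the P value is nothing more than a tail probability of the test statistic under the null.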

  2. PEPA test: fast and powerful differential analysis from relative quantitative proteomics data using shared peptides.

    PubMed

    Jacob, Laurent; Combes, Florence; Burger, Thomas

    2018-06-18

We propose a new hypothesis test for the differential abundance of proteins in mass-spectrometry-based relative quantification. An important feature of this type of high-throughput analysis is that it involves an enzymatic digestion of the sample proteins into peptides prior to identification and quantification. Due to numerous homology sequences, different proteins can lead to peptides with identical amino acid chains, so that their parent protein is ambiguous. These so-called shared peptides make the protein-level statistical analysis a challenge and are often not accounted for. In this article, we use a linear model describing peptide-protein relationships to build a likelihood ratio test of differential abundance for proteins. We show that the likelihood ratio statistic can be computed in time linear in the number of peptides. We also provide the asymptotic null distribution of a regularized version of our statistic. Experiments on both real and simulated datasets show that our procedure outperforms state-of-the-art methods. The procedures are available via the pepa.test function of the DAPAR Bioconductor R package.
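The likelihood ratio machinery used here can be sketched for a generic nested-model comparison under Gaussian errors. This is not the pepa.test model, which encodes peptide-protein relationships; it only shows the statistic-plus-asymptotic-chi-squared pattern on simulated data:

```python
import numpy as np
from scipy.stats import chi2

# Schematic likelihood ratio test for nested linear models (NOT the actual
# pepa.test implementation). H0: one common mean; H1: one mean per condition.
rng = np.random.default_rng(1)
n = 60
group = np.repeat([0, 1], n // 2)               # condition labels
y = 2.0 + 0.8 * group + rng.normal(0, 1, n)     # simulated abundances

rss_null = np.sum((y - y.mean()) ** 2)          # residual SS under H0
means = np.array([y[group == 0].mean(), y[group == 1].mean()])
rss_alt = np.sum((y - means[group]) ** 2)       # residual SS under H1

lr = n * np.log(rss_null / rss_alt)             # LR statistic (Gaussian errors)
p = chi2.sf(lr, df=1)                           # asymptotic chi-squared null
print(f"LR = {lr:.2f}, P = {p:.4g}")
```

The alternative model always fits at least as well as the null, so the statistic is nonnegative; its asymptotic chi-squared null distribution plays the same role as the regularized asymptotic distribution the authors derive.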

  3. Mechanization of library procedures in the medium-sized medical library. 8. Computer applications in hospital departmental libraries.

    PubMed

    Howard, E; Kharibian, G

    1972-07-01

To test the hypothesis that a standard library system could be designed for hospital departmental libraries, a system was developed and partially tested for four departmental libraries in the Washington University School of Medicine and Associated Hospitals. The system, from determination of needs through design and evaluation, is described. The system was limited by specific constraints to control of the monograph collection. Products of control include catalog cards, an accessions list, a new book list, a location list, a fund list, a missing book list, and a discard book list. A sample data form and pages from a procedure manual are given, and conversion from a manual to an automated system is outlined. The question of standardization of library records and procedures is discussed, with indications of the way in which modular design, as utilized in this system, could contribute to greater flexibility in the design of future systems. Reference is made to anticipating needs for organizing departmental libraries in developing regional medical library programs and to exploring the role of the departmental library in a medical library network.

  4. Smoking cue reactivity across massed extinction trials: negative affect and gender effects.

    PubMed

    Collins, Bradley N; Nair, Uma S; Komaroff, Eugene

    2011-04-01

    Designing and implementing cue exposure procedures to treat nicotine dependence remains a challenge. This study tested the hypothesis that gender and negative affect (NA) influence changes in smoking urge over time using data from a pilot project testing the feasibility of massed extinction procedures. Forty-three smokers and ex-smokers completed the behavioral laboratory procedures. All participants were over 17 years old, smoked at least 10 cigarettes daily over the last year (or the year prior to quitting) and had expired CO below 10 ppm at the beginning of the ~4-hour session. After informed consent, participants completed 45 min of baseline assessments, and then completed a series of 12 identical, 5-minute exposure trials with inter-trial breaks. Smoking cues included visual, tactile, and olfactory cues with a lit cigarette, in addition to smoking-related motor behaviors without smoking. After each trial, participants reported urge and negative affect (NA). Logistic growth curve models supported the hypothesis that across trials, participants would demonstrate an initial linear increase followed by a decrease in smoking urge (quadratic effect). Data supported hypothesized gender, NA, and gender×NA effects. Significant linear increases in urge were observed among high and low NA males, but not among females in either NA subgroup. A differential quadratic effect showed a significant decrease in urge for the low NA subgroup, but a non-significant decrease in urge in the high NA group. This is the first study to demonstrate gender differences and the effects of NA on the extinction process using a smoking cue exposure paradigm. Results could guide future cue reactivity research and exposure interventions for nicotine dependence. Copyright © 2010 Elsevier B.V. All rights reserved.

  5. Effect of Simplifying Drilling Technique on Heat Generation During Osteotomy Preparation for Dental Implant.

    PubMed

    El-Kholey, Khalid E; Ramasamy, Saravanan; Kumar R, Sheetal; Elkomy, Aamna

    2017-12-01

To test the hypothesis that there would be no difference in heat production when the number of drills used during implant site preparation is reduced relative to the conventional drilling sequence. A total of 120 implant site preparations with 3 different diameters (3.6, 4.3, and 4.6 mm) were performed on bovine ribs. Within the same diameter group, half of the preparations were performed with a simplified drilling procedure (pilot drill + final-diameter drill) and the other half using the conventional drilling protocol (pilot drill followed by a graduated series of drills to widen the site). Heat production by the different drilling techniques was evaluated by measuring the bone temperature with a K-type thermocouple and a sensitive thermometer before and after each drill. The mean maximum temperature increase during site preparation of the 3.6-, 4.3-, and 4.6-mm implants was 2.45°C, 2.60°C, and 2.95°C, respectively, when the site was prepared by the simplified procedure, whereas it was 2.85°C, 3.10°C, and 3.60°C for the sites prepared by the conventional technique. No significant difference in temperature increase was found when implants of the 3 different diameters were prepared by either the conventional or the simplified drilling procedure. The simplified drilling technique produced an amount of heat similar to that of the conventional technique, which confirmed the initial hypothesis.

  6. Effect of the Drilling Technique on Heat Generation During Osteotomy Preparation for Wide-Diameter Implants.

    PubMed

    El-Kholey, Khalid E; Elkomy, Aamna

    2016-12-01

To test the hypothesis that there would be no difference in heat generation when the number of drills used during implant site preparation is reduced relative to the conventional drilling sequence. A total of 80 implant site preparations with 2 different diameters (5.6 and 6.2 mm) were performed on bovine ribs. Within the same diameter group, half of the preparations were performed with a simplified drilling procedure (pilot drill + final-diameter drill) and the other half using the conventional drilling protocol, where multiple drills of increasing diameter were utilized. Heat production by the different drilling techniques was evaluated by measuring the bone temperature with a K-type thermocouple and a sensitive thermometer before and after each drill. The mean maximum temperature increase during site preparation of the 5.6- and 6.2-mm implants was 2.20°C and 2.55°C, respectively, when the site was prepared by the simplified procedure, whereas it was 2.80°C and 2.95°C for the sites prepared by the conventional technique. No significant difference in temperature increase was found when implants of the 2 chosen diameters were prepared by either the conventional or the simplified drilling procedure. The simplified drilling protocol produced an amount of heat similar to that of the conventional technique, which confirmed the initial hypothesis.

  7. [Issues of research in medicine].

    PubMed

    Topić, Elizabeta

    2006-01-01

Research in medicine is subject to all the rules and standards that apply to research in other natural sciences, since medicine as a science and service fully meets the general definition of science: it is a common, integrated, organized and systematized knowledge of mankind, whereby the physician, more or less aware of doing so, applies scientific thinking and scientific methods in his daily activities. The procedure of problem solving in scientific work and in medical practice is characterized by many similarities as well as differences. In scientific research, the observation of some phenomenon that cannot be explained by the known facts and theories is followed by making a hypothesis, then planning and carrying out an experimental investigation resulting in data. Interpretation of these data then provides evidence to confirm or reject the hypothesis. In medical practice, quite a similar procedure is followed: the initial examination of a patient, when his condition cannot be explained by the data thus obtained, is identical to the observation of a phenomenon which cannot be explained by the known facts; the working diagnosis would correspond to making the hypothesis; and experimental investigation would compare to laboratory and other diagnostic studies. The working diagnosis is accepted or rejected depending on these results. Of course, there are also differences in the problem-solving procedure between scientific research and daily medical practice. For example, in research a single hypothesis is posed and a single experiment with successive testing and/or repeats is performed, whereas in medical practice several hypotheses are made and multiple studies are concurrently performed to reject current hypotheses and to make new ones. Scientific investigation produces an abundance of systematic data, whereas in medical practice targeted data are generated, yet not systematically.
Definitive decision making also differs greatly, as in scientific research it ensues only from conclusive evidence, whereas in medical practice a definitive decision is made and therapeutic procedures are performed even before final evidence is reached. The general strategy of work and research in medicine can be briefly described by four principles: good knowledge of one's own work; continuing upgrading of one's own work in collaboration with respective institutions (laboratories, universities, and research institutes); implementation of standard, up-to-date and scientific methods most of the time; and publishing work results on a regular basis. This strategy ensures constant progress and treatment quality improvement while allowing due validation and evaluation of the work by society. Scientific research is based on pre-existing knowledge of the problem under study, and should be supervised, systematic and planned. Research produces data that may represent some new concepts, or such concepts are developed by further data processing. In research, the scientific procedure includes a number of steps that have to be taken to reach a new scientific result. This procedure includes (a) thinking about a scientific issue; (b) making a scientific hypothesis, i.e. the main objective of the study; (c) research ethics; (d) determination of sources and mode of data collection; (e) research performance; (f) collection and analysis of all research data; (g) interpretation of results and evidence; and (h) publication. The next section of this chapter presents an example of scientific research in the field of medicine, where the procedures carried out during the research are briefly described; other chapters of this supplement deal with the statistical methodology used in processing the data obtained in the study, which is most frequently employed in scientific work in the field of medicine.

  8. Auditory phase and frequency discrimination: a comparison of nine procedures.

    PubMed

    Creelman, C D; Macmillan, N A

    1979-02-01

    Two auditory discrimination tasks were thoroughly investigated: discrimination of frequency differences from a sinusoidal signal of 200 Hz and discrimination of differences in relative phase of mixed sinusoids of 200 Hz and 400 Hz. For each task psychometric functions were constructed for three observers, using nine different psychophysical measurement procedures. These procedures included yes-no, two-interval forced-choice, and various fixed- and variable-standard designs that investigators have used in recent years. The data showed wide ranges of apparent sensitivity. For frequency discrimination, models derived from signal detection theory for each psychophysical procedure seem to account for the performance differences. For phase discrimination the models do not account for the data. We conclude that for some discriminative continua the assumptions of signal detection theory are appropriate, and underlying sensitivity may be derived from raw data by appropriate transformations. For other continua the models of signal detection theory are probably inappropriate; we speculate that phase might be discriminable only on the basis of comparison or change and suggest some tests of our hypothesis.
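The signal detection theory reasoning invoked here, namely that different psychophysical procedures yield different raw performance for the same underlying sensitivity, can be sketched with the standard d' formulas; the hit and false-alarm rates below are illustrative, not data from the study:

```python
from math import sqrt
from scipy.stats import norm

# Signal detection theory: the same underlying sensitivity d' predicts
# different raw performance under different psychophysical procedures.
hits, false_alarms = 0.80, 0.30       # illustrative yes-no proportions

# Yes-no estimate of sensitivity: z(hit rate) - z(false-alarm rate)
d_prime = norm.ppf(hits) - norm.ppf(false_alarms)

# Predicted proportion correct for an unbiased observer:
pc_yes_no = norm.cdf(d_prime / 2)     # yes-no, criterion midway between means
pc_2ifc = norm.cdf(d_prime / sqrt(2)) # two-interval forced choice

print(f"d' = {d_prime:.2f}, predicted Pc: yes-no = {pc_yes_no:.3f}, "
      f"2IFC = {pc_2ifc:.3f}")
```

The same d' predicts higher percent correct in 2IFC than in yes-no, which is one way "wide ranges of apparent sensitivity" can arise from the measurement procedure alone, and why the transformations matter when comparing procedures.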

  9. Vertical ridge augmentation using xenogenous bone blocks: a comparison between the flap and tunneling procedures.

    PubMed

    Xuan, Feng; Lee, Chun-Ui; Son, Jeong-Seog; Fang, Yiqin; Jeong, Seung-Mi; Choi, Byung-Ho

    2014-09-01

    Previous studies have shown that the subperiosteal tunneling procedure in vertical ridge augmentation accelerates healing after grafting and prevents graft exposure, with minor postoperative complications. It is conceivable that new bone formation would be greater with the tunneling procedure than with the flap procedure, because the former is minimally invasive. This hypothesis was tested in this study by comparing new bone formation between the flap and tunneling procedures after vertical ridge augmentation using xenogenous bone blocks in a canine mandible model. Two Bio-Oss blocks were placed on the edentulous ridge in each side of the mandibles of 6 mongrel dogs. The blocks in each side were randomly assigned to grafting with a flap procedure (flap group) or grafting with a tunneling procedure (tunneling group). The mean percentage of newly formed bone within the block was 15.3 ± 6.6% in the flap group and 46.6 ± 23.4% in the tunneling group. Based on data presented in this study, when a tunneling procedure is used to place xenogenous bone blocks for vertical ridge augmentation, bone formation in the graft sites is significantly greater than when a flap procedure is used. Copyright © 2014 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  10. Are minimally invasive procedures harder to acquire than conventional surgical procedures?

    PubMed

    Hiemstra, Ellen; Kolkman, Wendela; le Cessie, Saskia; Jansen, Frank Willem

    2011-01-01

    It is frequently suggested that minimally invasive surgery (MIS) is harder to acquire than conventional surgery. To test this hypothesis, residents' learning curves of both surgical skills are compared. Residents had to be assessed using a general global rating scale of the OSATS (Objective Structured Assessment of Technical Skills) for every procedure they performed as primary surgeon during a 3-month clinical rotation in gynecological surgery. Nine postgraduate-year-4 residents collected a total of 319 OSATS during the 2 years and 3 months investigation period. These assessments concerned 129 MIS (laparoscopic and hysteroscopic) and 190 conventional (open abdominal and vaginal) procedures. Learning curves (in this study defined as OSATS score plotted against procedure-specific caseload) for MIS and conventional surgery were compared using a linear mixed model. The MIS curve revealed to be steeper than the conventional curve (1.77 vs. 0.75 OSATS points per assessed procedure; 95% CI 1.19-2.35 vs. 0.15-1.35, p < 0.01). Basic MIS procedures do not seem harder to acquire during residency than conventional surgical procedures. This may have resulted from the incorporation of structured MIS training programs in residency. Hopefully, this will lead to a more successful implementation of the advanced MIS procedures. Copyright © 2010 S. Karger AG, Basel.
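As a simplified sketch of the learning-curve comparison (the study fitted a linear mixed model across nine residents; here a plain least-squares line is fitted to one simulated series per group, with true slopes chosen to echo the reported 1.77 and 0.75 OSATS points per assessed procedure):

```python
import numpy as np

# Illustrative slope comparison for two learning curves (OSATS score vs.
# procedure-specific caseload). The data are simulated; intercepts, noise
# level, and caseload range are assumptions, not values from the study.
rng = np.random.default_rng(2)
caseload = np.arange(1, 16)

osats_mis = 12 + 1.77 * caseload + rng.normal(0, 1.5, caseload.size)
osats_conv = 18 + 0.75 * caseload + rng.normal(0, 1.5, caseload.size)

slope_mis, _ = np.polyfit(caseload, osats_mis, 1)    # fitted MIS slope
slope_conv, _ = np.polyfit(caseload, osats_conv, 1)  # fitted conventional slope
print(f"fitted slopes: MIS = {slope_mis:.2f}, conventional = {slope_conv:.2f}")
```

A steeper fitted slope for MIS corresponds to the paper's finding: scores improve faster per assessed MIS procedure, which is the basis for concluding that basic MIS is not harder to acquire. The mixed model additionally pools information across residents, which this per-group fit does not.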

  11. A study on the effect of varying sequence of lab performance skills on lab performance of high school physics students

    NASA Astrophysics Data System (ADS)

    Bournia-Petrou, Ethel A.

    The main goal of this investigation was to study how student rank in class, student gender and skill sequence affect high school students' performance on the lab skills involved in a laboratory-based inquiry task in physics. The focus of the investigation was the effect of skill sequence as determined by the particular task. The skills considered were: Hypothesis, Procedure, Planning, Data, Graph, Calculations and Conclusion. Three physics lab tasks based on the simple pendulum concept were administered to 282 Regents physics high school students. The reliability of the designed tasks was high. Student performance was evaluated from individual students' written responses using a scoring rubric. The tasks had high discrimination power and were of moderate difficulty (65%). It was found that student performance was weak on Conclusion (42%), Hypothesis (48%), and Procedure (51%), where the numbers in parentheses represent the mean as a percentage of the maximum possible score. Student performance was strong on Calculations (91%), Data (82%), Graph (74%) and Plan (68%). Of all seven skills, Procedure had the strongest correlation (.73) with overall task performance. Correlation analysis revealed some strong relationships among the seven skills, which were grouped into two distinct clusters: Hypothesis, Procedure and Plan belong to one, and Data, Graph, Calculations, and Conclusion belong to the other. This distinction may indicate different mental processes at play within each skill cluster. The effect of student rank was not statistically significant according to the MANOVA results, owing to the large variation of rank levels among the participating schools. The effect of gender was significant on the entire test because of performance differences on Calculations and Graph, where male students performed better than female students. Skill sequence had a significant effect on the skills of Procedure, Plan, Data and Conclusion. 
Students were rather weak in proposing a sensible, detailed procedure for an inquiry task involving a "novel" concept. However, they performed better on Procedure and Plan if the "novel" task was not preceded by another that explicitly offered step-by-step procedure instructions. It was concluded that the format of detailed, structured instructions often adopted by many commercial and school-developed lab books and conventional lab practices fails to prepare students to propose a successful, detailed procedure when faced with a slightly "novel", lab-based inquiry task. Student performance on Data collection was higher in the tasks that involved the more familiar experimental arrangement than in the tasks using the slightly "novel" equipment. Student performance on Conclusion was better in tasks where students had to collect the Data themselves than in tasks where all relevant Data information was given to them.

  12. Acceptance sampling for attributes via hypothesis testing and the hypergeometric distribution

    NASA Astrophysics Data System (ADS)

    Samohyl, Robert Wayne

    2017-10-01

    This paper questions some aspects of attribute acceptance sampling in light of the original concepts of hypothesis testing from Neyman and Pearson (NP). Attribute acceptance sampling in industry, as developed by Dodge and Romig (DR), generally follows the international standards of ISO 2859, and similarly the Brazilian standards NBR 5425 to NBR 5427 and the United States standard ANSI/ASQC Z1.4. The paper evaluates and extends the area of acceptance sampling in two directions. First, it suggests the use of the hypergeometric distribution to calculate the parameters of sampling plans, avoiding unnecessary approximations such as the binomial or Poisson distributions. We show that, under usual conditions, the discrepancies can be large. The conclusion is that the hypergeometric distribution, ubiquitously available in commonly used software, is more appropriate than other distributions for acceptance sampling. Second, and more importantly, we elaborate the theory of acceptance sampling in terms of hypothesis testing, rigorously following the original concepts of NP. By offering a common theoretical structure, hypothesis testing from NP can produce a better understanding of applications even beyond the usual areas of industry and commerce, such as public health and political polling. With the new procedures, both sample size and sample error can be reduced. What is unclear in traditional acceptance sampling is the necessity of linking the acceptable quality limit (AQL) exclusively to the producer and the lot tolerance percent defective (LTPD) exclusively to the consumer. In reality, the consumer should also be preoccupied with a value of AQL, as should the producer with LTPD. Furthermore, we can question why type I error is always uniquely associated with the producer as producer risk; the same question arises for consumer risk, which is necessarily associated with type II error. The resolution of these questions is new to the literature. 
The article presents R code throughout.
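
    The paper's first point, that the exact hypergeometric distribution can differ noticeably from the binomial approximation, is easy to illustrate. The paper presents R code; the sketch below uses Python's scipy.stats instead, with made-up plan parameters.

```python
from scipy.stats import hypergeom, binom

# Acceptance probability of a single-sampling plan (accept the lot if the
# number of defectives in the sample is <= c), computed exactly with the
# hypergeometric distribution and with the common binomial approximation.
# Plan parameters below are illustrative, not from the paper.
N, n, c = 500, 50, 2  # lot size, sample size, acceptance number
D = 25                # defectives in the lot (5% defective)

p_hyper = hypergeom.cdf(c, N, D, n)  # exact: sampling without replacement
p_binom = binom.cdf(c, n, D / N)     # approximation: ignores the finite lot

print(f"hypergeometric acceptance prob.: {p_hyper:.4f}")
print(f"binomial acceptance prob.:       {p_binom:.4f}")
```

The discrepancy grows with the sampling fraction n/N, which is the paper's argument for using the exact distribution now that it is ubiquitously available in software.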

  13. False belief in infancy: a fresh look.

    PubMed

    Heyes, Cecilia

    2014-09-01

    Can infants appreciate that others have false beliefs? Do they have a theory of mind? In this article I provide a detailed review of more than 20 experiments that have addressed these questions, and offered an affirmative answer, using nonverbal 'violation of expectation' and 'anticipatory looking' procedures. Although many of these experiments are both elegant and ingenious, I argue that their results can be explained by the operation of domain-general processes and in terms of 'low-level novelty'. This hypothesis suggests that the infants' looking behaviour is a function of the degree to which the observed (perceptual novelty) and remembered or expected (imaginal novelty) low-level properties of the test stimuli - their colours, shapes and movements - are novel with respect to events encoded by the infants earlier in the experiment. If the low-level novelty hypothesis is correct, research on false belief in infancy currently falls short of demonstrating that infants have even an implicit theory of mind. However, I suggest that the use of two experimental strategies - inanimate control procedures, and self-informed belief induction - could be used in combination with existing methods to bring us much closer to understanding the evolutionary and developmental origins of theory of mind. © 2014 John Wiley & Sons Ltd.

  14. Clinical supervision, emotional exhaustion, and turnover intention: a study of substance abuse treatment counselors in the Clinical Trials Network of the National Institute on Drug Abuse.

    PubMed

    Knudsen, Hannah K; Ducharme, Lori J; Roman, Paul M

    2008-12-01

    An intriguing hypothesis is that clinical supervision may protect against counselor turnover. This idea has been mentioned in recent discussions of the substance abuse treatment workforce. To test this hypothesis, we extend our previous research on emotional exhaustion and turnover intention among counselors by estimating the associations between clinical supervision and these variables in a large sample (N = 823). An exploratory analysis reveals that clinical supervision was negatively associated with emotional exhaustion and turnover intention. Given our previous findings that emotional exhaustion and turnover intention were associated with job autonomy, procedural justice, and distributive justice, we estimate a structural equation model to examine whether these variables mediated clinical supervision's associations with emotional exhaustion and turnover intention. These data support the fully mediated model. We found that the perceived quality of clinical supervision is strongly associated with counselors' perceptions of job autonomy, procedural justice, and distributive justice, which are, in turn, associated with emotional exhaustion and turnover intention. These data offer support for the protective role of clinical supervision in substance abuse treatment counselors' turnover and occupational well-being.

  15. Receiver operating characteristic analysis. Application to the study of quantum fluctuation effects in optic nerve of Rana pipiens

    PubMed Central

    1975-01-01

    Receiver operating characteristic (ROC) analysis of nerve messages is described. The hypothesis that quantum fluctuations provide the only limit to the ability of frog ganglion cells to signal luminance change information is examined using ROC analysis. In the context of ROC analysis, the quantum fluctuation hypothesis predicts (a) the detectability of a luminance change signal should rise proportionally to the size of the change, (b) detectability should decrease as the square root of background, an implication of which is the deVries-Rose law, and (c) ROC curves should exhibit a shape particular to underlying Poisson distributions. Each of these predictions is confirmed for the responses of dimming ganglion cells to brief luminance decrements at scotopic levels, but none could have been tested using classical nerve message analysis procedures. PMID:172597
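
    The prediction that ROC curves should exhibit a shape particular to underlying Poisson distributions can be sketched numerically. This is a hedged illustration with invented mean quantal counts, not the paper's data: sweeping a count criterion across Poisson "noise" and "signal" distributions traces out the ROC curve.

```python
import numpy as np
from scipy.stats import poisson

# ROC curve implied by underlying Poisson distributions: hit and false-alarm
# rates as the decision criterion sweeps over possible quantal counts.
background, signal = 10.0, 16.0  # mean quanta: noise vs. noise + increment
criteria = np.arange(0, 40)

# P(count >= criterion) under each distribution
false_alarms = 1 - poisson.cdf(criteria - 1, background)
hits = 1 - poisson.cdf(criteria - 1, signal)

for c in (8, 12, 16):
    print(f"criterion {c}: FA={false_alarms[c]:.3f}, hit={hits[c]:.3f}")
```

Plotting hits against false alarms (or their z-transforms) yields the characteristic Poisson ROC shape the paper tests against the ganglion-cell data.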

  16. Prediction of perceptual defense from experimental stress and susceptibility to stress as indicated by thematic apperception.

    PubMed

    Tuma, J M

    1975-02-01

    The present investigation tested the hypothesis advanced by J. Inglis (1961) that perceptual defense and perceptual vigilance result from an interaction between personality differences and degrees of experimental stress. The design, which controlled for questionable procedures used in previous studies, utilized 32 introverts and 32 extraverts, half male and half female, in an experiment with a visual recognition task. Results indicated that under low-stress conditions, introverts and extraverts, identified by their responses to a thematic apperception task, react to threatening stimuli with perceptual defense and perceptual vigilance, respectively. Under high-stress conditions, the type of avoidance activity reverses: extraverts react with perceptual defense and introverts with perceptual vigilance. It was suggested that, when both personality and stress variables are controlled, results of the perceptual defense paradigm are predictable and consistent, in support of Inglis' hypothesis.

  17. Is the U.S. shale gas boom having an effect on the European gas market?

    NASA Astrophysics Data System (ADS)

    Yao, Isaac

    This thesis focuses on the impact of the American shale gas boom on the European natural gas market. The study presents different tests in order to analyze the dynamics of natural gas prices in the U.S., U.K. and German natural gas markets. The question of cointegration between these markets is analyzed using several tests. More specifically, the augmented Dickey-Fuller (ADF) test checks for the presence of a unit root, and the error correction model test and the Johansen cointegration procedure are applied in order to accept or reject the hypothesis of an integrated market. The results suggest no evidence of cointegration between these markets. There is currently no evidence of an impact of the U.S. shale gas boom on the European market.

  18. Two self-test methods applied to an inertial system problem. [estimating gyroscope and accelerometer bias

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.; Deyst, J. J.; Crawford, B. S.

    1975-01-01

    The paper describes two self-test procedures applied to the problem of estimating the biases in accelerometers and gyroscopes on an inertial platform. The first technique is the weighted sum-squared residual (WSSR) test, with which accelerometer bias jumps are easily isolated but gyro bias jumps are difficult to isolate. The WSSR method does not take full advantage of the knowledge of system dynamics. The other technique is a multiple hypothesis method developed by Buxbaum and Haddad (1969). It has the advantage of directly providing jump isolation information, but suffers from computational problems. It might be possible to use the WSSR to detect state jumps and then switch to the Buxbaum-Haddad method for jump isolation and estimate compensation.

  19. Hypothesis test of mediation effect in causal mediation model with high-dimensional continuous mediators.

    PubMed

    Huang, Yen-Tsung; Pan, Wen-Chi

    2016-06-01

    Causal mediation modeling has become a popular approach for studying the effect of an exposure on an outcome through a mediator. However, current methods are not applicable to the setting with a large number of mediators. We propose a testing procedure for mediation effects of high-dimensional continuous mediators. We characterize the marginal mediation effect, the multivariate component-wise mediation effects, and the L2 norm of the component-wise effects, and develop a Monte-Carlo procedure for evaluating their statistical significance. To accommodate the setting with a large number of mediators and a small sample size, we further propose a transformation model using the spectral decomposition. Under the transformation model, mediation effects can be estimated using a series of regression models with a univariate transformed mediator, and examined by our proposed testing procedure. Extensive simulation studies are conducted to assess the performance of our methods for continuous and dichotomous outcomes. We apply the methods to analyze genomic data investigating the effect of microRNA miR-223 on a dichotomous survival status of patients with glioblastoma multiforme (GBM). We identify nine gene ontology sets with expression values that significantly mediate the effect of miR-223 on GBM survival. © 2015, The International Biometric Society.
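
    The Monte-Carlo idea behind testing a mediation (indirect) effect can be sketched for a single continuous mediator. The paper's method handles high-dimensional mediators via spectral decomposition, which this simplified illustration omits; all data and coefficients below are simulated.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200

# Simulated single-mediator data (illustrative, not the paper's genomic data)
x = rng.normal(size=n)                      # exposure
m = 0.5 * x + rng.normal(size=n)            # mediator
y = 0.4 * m + 0.2 * x + rng.normal(size=n)  # outcome

def ols(X, yv):
    """Return OLS coefficient estimates and their standard errors."""
    coef, *_ = np.linalg.lstsq(X, yv, rcond=None)
    resid = yv - X @ coef
    sigma2 = resid @ resid / (len(yv) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return coef, se

a_hat, a_se = ols(np.column_stack([np.ones(n), x]), m)     # x -> m
b_hat, b_se = ols(np.column_stack([np.ones(n), m, x]), y)  # m -> y given x

# Monte-Carlo interval for the indirect effect alpha * beta: resample the two
# coefficients from their sampling distributions and take product quantiles.
draws = rng.normal(a_hat[1], a_se[1], 10000) * rng.normal(b_hat[1], b_se[1], 10000)
lo, hi = np.quantile(draws, [0.025, 0.975])
print(f"indirect effect ~= {a_hat[1] * b_hat[1]:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

If the interval excludes zero, the mediation effect is judged significant; this is the univariate analogue of the Monte-Carlo significance evaluation the paper applies to its high-dimensional effect measures.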

  20. Seeking health information on the web: positive hypothesis testing.

    PubMed

    Kayhan, Varol Onur

    2013-04-01

    The goal of this study is to investigate positive hypothesis testing among consumers of health information when they search the Web. After demonstrating the extent of positive hypothesis testing using Experiment 1, we conduct Experiment 2 to test the effectiveness of two debiasing techniques. A total of 60 undergraduate students searched a tightly controlled online database developed by the authors to test the validity of a hypothesis. The database had four abstracts that confirmed the hypothesis and three abstracts that disconfirmed it. Findings of Experiment 1 showed that the majority of participants (85%) exhibited positive hypothesis testing. In Experiment 2, we found that the recommendation technique was not effective in reducing positive hypothesis testing since none of the participants assigned to this server could retrieve disconfirming evidence. Experiment 2 also showed that the incorporation technique successfully reduced positive hypothesis testing since 75% of the participants could retrieve disconfirming evidence. Positive hypothesis testing on the Web is an understudied topic. More studies are needed to validate the effectiveness of the debiasing techniques discussed in this study and to develop new techniques. Search engine developers should consider developing new options for users so that both confirming and disconfirming evidence can be presented in search results as users test hypotheses using search engines. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  1. Detecting temporal trends in species assemblages with bootstrapping procedures and hierarchical models

    USGS Publications Warehouse

    Gotelli, Nicholas J.; Dorazio, Robert M.; Ellison, Aaron M.; Grossman, Gary D.

    2010-01-01

    Quantifying patterns of temporal trends in species assemblages is an important analytical challenge in community ecology. We describe methods of analysis that can be applied to a matrix of counts of individuals that is organized by species (rows) and time-ordered sampling periods (columns). We first developed a bootstrapping procedure to test the null hypothesis of random sampling from a stationary species abundance distribution with temporally varying sampling probabilities. This procedure can be modified to account for undetected species. We next developed a hierarchical model to estimate species-specific trends in abundance while accounting for species-specific probabilities of detection. We analysed two long-term datasets on stream fishes and grassland insects to demonstrate these methods. For both assemblages, the bootstrap test indicated that temporal trends in abundance were more heterogeneous than expected under the null model. We used the hierarchical model to estimate trends in abundance and identified sets of species in each assemblage that were steadily increasing, decreasing or remaining constant in abundance over more than a decade of standardized annual surveys. Our methods of analysis are broadly applicable to other ecological datasets, and they represent an advance over most existing procedures, which do not incorporate effects of incomplete sampling and imperfect detection.
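
    The bootstrap null-model test can be sketched as follows: resample individuals from a stationary species abundance distribution with period-specific totals fixed (temporally varying sampling effort), and compare an observed trend-heterogeneity statistic against the simulated distribution. The toy count matrix and the particular test statistic below are illustrative, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy species-by-period count matrix (rows: species, columns: sampling periods)
counts = np.array([[30, 25, 18, 12,  8],
                   [ 5, 10, 14, 20, 26],
                   [12, 11, 13, 12, 12]])

def trend_heterogeneity(m):
    """Illustrative statistic: variance across species of linear trend slopes."""
    periods = np.arange(m.shape[1])
    slopes = [np.polyfit(periods, row, 1)[0] for row in m]
    return np.var(slopes)

obs = trend_heterogeneity(counts)

# Null model: individuals sampled from a stationary species abundance
# distribution, with period-specific totals fixed (varying sampling effort).
species_p = counts.sum(axis=1) / counts.sum()
col_totals = counts.sum(axis=0)

null_stats = []
for _ in range(999):
    sim = np.column_stack([rng.multinomial(t, species_p) for t in col_totals])
    null_stats.append(trend_heterogeneity(sim))

p_value = (1 + sum(s >= obs for s in null_stats)) / 1000
print(f"bootstrap p-value: {p_value:.3f}")
```

A small p-value, as in the stream fish and grassland insect datasets, indicates temporal trends more heterogeneous than expected under stationary random sampling.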

  2. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

    PubMed

    Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

    2015-01-01

    The problem of establishing noninferiority of a new treatment relative to a standard (control) treatment is discussed for ordinal categorical data. A measure of treatment effect is used, and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed in which the estimation of variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of existing ones, and the results show that the proposed test statistics perform better in terms of deviation from the nominal level and power.
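
    The shifted-null logic of a noninferiority Z test can be sketched with a simplified difference-in-proportions measure. The paper's measure is for ordinal data with U-statistic variance estimation, which this sketch does not reproduce; all numbers below are invented.

```python
import numpy as np
from scipy.stats import norm

# Noninferiority Z test under a shifted null hypothesis, simplified to a
# difference in success proportions (illustrative parameters throughout).
p_new, n_new = 0.78, 200  # success rate and sample size: new treatment
p_std, n_std = 0.80, 200  # standard (control) treatment
delta = 0.10              # noninferiority margin

diff = p_new - p_std
se = np.sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)
z = (diff + delta) / se          # shifted null: p_new - p_std <= -delta
p_value = 1 - norm.cdf(z)        # one-sided test

print(f"Z = {z:.2f}, one-sided p = {p_value:.4f}")
```

Rejecting the shifted null (p < 0.05 here) supports noninferiority: the new treatment is at worst delta below the standard.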

  3. Educational Attainment is not a Good Proxy for Cognitive Function in Methamphetamine Dependence

    PubMed Central

    Dean, Andy C.; Hellemann, Gerhard; Sugar, Catherine A.; London, Edythe D.

    2014-01-01

    We sought to test the hypothesis that methamphetamine use interferes with both the quantity and quality of one's education, such that the years of education obtained by methamphetamine dependent individuals serves to underestimate general cognitive functioning and overestimate the quality of academic learning. Thirty-six methamphetamine-dependent participants and 42 healthy comparison subjects completed cognitive tests and self-report measures in Los Angeles, California. An overall cognitive battery score was used to assess general cognition, and vocabulary knowledge was used as a proxy for the quality of academic learning. Linear regression procedures were used for analyses. Supporting the hypothesis that methamphetamine use interferes with the quantity of education, we found that a) earlier onset of methamphetamine use was associated with fewer years of education (p < .01); b) using a normative model developed in healthy participants, methamphetamine-dependent participants had lower educational attainment than predicted from their demographics and performance on the cognitive battery score (p < .01); and c) greater differences between methamphetamine-dependent participants' predicted and actual educational attainment were associated with an earlier onset of MA use (p ≤ .01). Supporting the hypothesis that methamphetamine use interferes with the quality of education, years of education received prior to the onset of methamphetamine use was a better predictor of a proxy for academic learning, vocabulary knowledge, than was the total years of education obtained. Results support the hypothesis that methamphetamine use interferes with the quantity and quality of educational exposure, leading to under- and overestimation of cognitive function and academic learning, respectively. PMID:22206606

  4. Educational attainment is not a good proxy for cognitive function in methamphetamine dependence.

    PubMed

    Dean, Andy C; Hellemann, Gerhard; Sugar, Catherine A; London, Edythe D

    2012-06-01

    We sought to test the hypothesis that methamphetamine use interferes with both the quantity and quality of one's education, such that the years of education obtained by methamphetamine dependent individuals serves to underestimate general cognitive functioning and overestimate the quality of academic learning. Thirty-six methamphetamine-dependent participants and 42 healthy comparison subjects completed cognitive tests and self-report measures in Los Angeles, California. An overall cognitive battery score was used to assess general cognition, and vocabulary knowledge was used as a proxy for the quality of academic learning. Linear regression procedures were used for analyses. Supporting the hypothesis that methamphetamine use interferes with the quantity of education, we found that (a) earlier onset of methamphetamine use was associated with fewer years of education (p<.01); (b) using a normative model developed in healthy participants, methamphetamine-dependent participants had lower educational attainment than predicted from their demographics and performance on the cognitive battery score (p<.01); and (c) greater differences between methamphetamine-dependent participants' predicted and actual educational attainment were associated with an earlier onset of MA use (p≤.01). Supporting the hypothesis that methamphetamine use interferes with the quality of education, years of education received prior to the onset of methamphetamine use was a better predictor of a proxy for academic learning, vocabulary knowledge, than was the total years of education obtained. Results support the hypothesis that methamphetamine use interferes with the quantity and quality of educational exposure, leading to under- and overestimation of cognitive function and academic learning, respectively. Copyright © 2011. Published by Elsevier Ireland Ltd.

  5. Procedural learning and dyslexia.

    PubMed

    Nicolson, R I; Fawcett, A J; Brookes, R L; Needle, J

    2010-08-01

    Three major 'neural systems', specialized for different types of information processing, are the sensory, declarative, and procedural systems. It has been proposed (Trends Neurosci., 30(4), 135-141) that dyslexia may be attributable to impaired function in the procedural system together with intact declarative function. We provide a brief overview of the increasing evidence relating to the hypothesis, noting that the framework involves two main claims: first, that 'neural systems' provides a productive level of description, avoiding the underspecificity of cognitive descriptions and the overspecificity of brain structural accounts; and second, that a distinctive feature of procedural learning is its extended time course, covering from minutes to months. In this article, we focus on the second claim. Three studies (speeded single-word reading, long-term response learning, and overnight skill consolidation) are reviewed which together provide clear evidence of difficulties in procedural learning for individuals with dyslexia, even when the tasks are outside the literacy domain. The educational implications of the results are then discussed, in particular the potential difficulties that impaired overnight procedural consolidation would entail. It is proposed that response to intervention could be better predicted if diagnostic tests on the different forms of learning were first undertaken. 2010 John Wiley & Sons, Ltd.

  6. Moderate Levels of Activation Lead to Forgetting In the Think/No-Think Paradigm

    PubMed Central

    Detre, Greg J.; Natarajan, Annamalai; Gershman, Samuel J.; Norman, Kenneth A.

    2013-01-01

    Using the think/no-think paradigm (Anderson & Green, 2001), researchers have found that suppressing retrieval of a memory (in the presence of a strong retrieval cue) can make it harder to retrieve that memory on a subsequent test. This effect has been replicated numerous times, but the size of the effect is highly variable. Also, it is unclear from a neural mechanistic standpoint why preventing recall of a memory now should impair your ability to recall that memory later. Here, we address both of these puzzles using the idea, derived from computational modeling and studies of synaptic plasticity, that the function relating memory activation to learning is U-shaped, such that moderate levels of memory activation lead to weakening of the memory and higher levels of activation lead to strengthening. According to this view, forgetting effects in the think/no-think paradigm occur when the suppressed item activates moderately during the suppression attempt, leading to weakening; the effect is variable because sometimes the suppressed item activates strongly (leading to strengthening) and sometimes it does not activate at all (in which case no learning takes place). To test this hypothesis, we ran a think/no-think experiment where participants learned word-picture pairs; we used pattern classifiers, applied to fMRI data, to measure how strongly the picture associates were activating when participants were trying not to retrieve these associates, and we used a novel Bayesian curve-fitting procedure to relate this covert neural measure of retrieval to performance on a later memory test. In keeping with our hypothesis, the curve-fitting procedure revealed a nonmonotonic relationship between memory activation (as measured by the classifier) and subsequent memory, whereby moderate levels of activation of the to-be-suppressed item led to diminished performance on the final memory test, and higher levels of activation led to enhanced performance on the final test. 
PMID:23499722
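
    The nonmonotonic (U-shaped) relationship between activation and subsequent memory can be sketched by fitting a quadratic to simulated activation/memory data. The study uses a Bayesian curve-fitting procedure on classifier-derived activation measures; here that is approximated by ordinary least squares on invented data, purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

# Simulated data: memory change as a U-shaped function of activation, with
# weakening at moderate activation and strengthening at high activation.
activation = rng.uniform(0, 1, 120)
truth = 0.5 - 1.8 * activation + 2.0 * activation ** 2
memory = truth + rng.normal(0, 0.05, activation.size)

def ushape(a, c0, c1, c2):
    """Quadratic stand-in for the plasticity curve."""
    return c0 + c1 * a + c2 * a ** 2

params, _ = curve_fit(ushape, activation, memory)
vertex = -params[1] / (2 * params[2])  # activation level of maximal weakening
print(f"fitted curve dips at activation ~= {vertex:.2f}")
```

A positive quadratic coefficient with an interior vertex reproduces the qualitative finding: moderate activation predicts the worst subsequent memory.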

  7. Moderate levels of activation lead to forgetting in the think/no-think paradigm.

    PubMed

    Detre, Greg J; Natarajan, Annamalai; Gershman, Samuel J; Norman, Kenneth A

    2013-10-01

    Using the think/no-think paradigm (Anderson & Green, 2001), researchers have found that suppressing retrieval of a memory (in the presence of a strong retrieval cue) can make it harder to retrieve that memory on a subsequent test. This effect has been replicated numerous times, but the size of the effect is highly variable. Also, it is unclear from a neural mechanistic standpoint why preventing recall of a memory now should impair your ability to recall that memory later. Here, we address both of these puzzles using the idea, derived from computational modeling and studies of synaptic plasticity, that the function relating memory activation to learning is U-shaped, such that moderate levels of memory activation lead to weakening of the memory and higher levels of activation lead to strengthening. According to this view, forgetting effects in the think/no-think paradigm occur when the suppressed item activates moderately during the suppression attempt, leading to weakening; the effect is variable because sometimes the suppressed item activates strongly (leading to strengthening) and sometimes it does not activate at all (in which case no learning takes place). To test this hypothesis, we ran a think/no-think experiment where participants learned word-picture pairs; we used pattern classifiers, applied to fMRI data, to measure how strongly the picture associates were activating when participants were trying not to retrieve these associates, and we used a novel Bayesian curve-fitting procedure to relate this covert neural measure of retrieval to performance on a later memory test. In keeping with our hypothesis, the curve-fitting procedure revealed a nonmonotonic relationship between memory activation (as measured by the classifier) and subsequent memory, whereby moderate levels of activation of the to-be-suppressed item led to diminished performance on the final memory test, and higher levels of activation led to enhanced performance on the final test. 
Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. The protective effects of acute cardiovascular exercise on the interference of procedural memory.

    PubMed

    Jo, J S; Chen, J; Riechman, S; Roig, M; Wright, D L

    2018-04-10

    Numerous studies have reported a positive impact of acute exercise on procedural skill memory. Previous work has revealed this effect, but those findings are confounded by a potential contribution of a night of sleep to the reported exercise-mediated reduction in interference. Thus, it remains unclear whether exposure to a brief bout of exercise can protect a newly acquired motor memory. The primary objective of the present study was to examine whether a single bout of moderate-intensity cardiovascular exercise after practice of a novel motor sequence reduces the susceptibility to retroactive interference. To address this shortcoming, 17 individuals in a control condition practiced a novel motor sequence that was followed by a test after a 6-h wake-filled interval. A separate group of 17 individuals experienced practice with an interfering motor sequence 45 min after practice with the original sequence and were then administered test trials 6 h later. One additional group of 12 participants was exposed to an acute bout of exercise immediately after practice with the original motor sequence but prior to practice with the interfering motor sequence and the subsequent test. In comparison with the control condition, increased response times were revealed during the 6-h test for the individuals who were exposed to interference. The introduction of an acute bout of exercise between the practice of the two motor sequences reduced interference from practice with the second task at the time of test; however, this effect was not statistically significant. These data reinforce the hypothesis that while there may be a contribution from exercise to post-practice consolidation of procedural skills that is independent of sleep, sleep may interact with exercise to strengthen the effects of the latter on procedural memory.

  9. Cosmetic surgery procedures as luxury goods: measuring price and demand in facial plastic surgery.

    PubMed

    Alsarraf, Ramsey; Alsarraf, Nicole W; Larrabee, Wayne F; Johnson, Calvin M

    2002-01-01

    To evaluate the relationship between cosmetic facial plastic surgery procedure price and demand, and to test the hypothesis that these procedures function as luxury goods in the marketplace, with an upward-sloping demand curve. Data were derived from a survey that was sent to every (N = 1727) active fellow, member, or associate of the American Academy of Facial Plastic and Reconstructive Surgery, assessing the costs and frequency of 4 common cosmetic facial plastic surgery procedures (face-lift, brow-lift, blepharoplasty, and rhinoplasty) for 1999 and 1989. An economic analysis was performed to assess the relationship of price and demand for these procedures. A significant association was found between increasing surgeons' fees and total charges for cosmetic facial plastic surgery procedures and increasing demand for these procedures, as measured by their annual frequency (P

  10. Earthquake likelihood model testing

    USGS Publications Warehouse

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. 
A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a wide range of possible testing procedures exist. Jolliffe and Stephenson (2003) present different forecast verifications from atmospheric science, among them likelihood testing of probability forecasts and testing the occurrence of binary events. Testing binary events requires that for each forecasted event, the spatial, temporal and magnitude limits be given. Although major earthquakes can be considered binary events, the models within the RELM project express their forecasts on a spatial grid and in 0.1 magnitude units; thus the results are a distribution of rates over space and magnitude. These forecasts can be tested with likelihood tests. In general, likelihood tests assume a valid null hypothesis against which a given hypothesis is tested. The outcome is either a rejection of the null hypothesis in favor of the test hypothesis or a nonrejection, meaning the test hypothesis cannot outperform the null hypothesis at a given significance level. Within RELM, there is no accepted null hypothesis and thus the likelihood test needs to be expanded to allow comparable testing of equipollent hypotheses. To test models against one another, we require that forecasts are expressed in a standard format: the average rate of earthquake occurrence within pre-specified limits of hypocentral latitude, longitude, depth, magnitude, time period, and focal mechanisms. Focal mechanisms should either be described as the inclination of P-axis, declination of P-axis, and inclination of the T-axis, or as strike, dip, and rake angles. Schorlemmer and Gerstenberger (2007, this issue) designed classes of these parameters such that similar models will be tested against each other. These classes make the forecasts comparable between models. 
Additionally, we are limited to testing only what is precisely defined and consistently reported in earthquake catalogs. Therefore it is currently not possible to test such information as fault rupture length or area, asperity location, etc. Also, to account for data quality issues, we allow for location and magnitude uncertainties as well as the probability that an event is dependent on another event. As we mentioned above, only models with comparable forecasts can be tested against each other. Our current tests are designed to examine grid-based models. This requires that any fault-based model be adapted to a grid before testing is possible. While this is a limitation of the testing, it is an inherent difficulty in any such comparative testing. Please refer to appendix B for a statistical evaluation of the application of the Poisson hypothesis to fault-based models. The testing suite we present consists of three different tests: L-Test, N-Test, and R-Test. These tests are defined similarly to those of Kagan and Jackson (1995). The first two tests examine the consistency of the hypotheses with the observations while the last test compares the spatial performances of the models.
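As an illustration of the consistency (L-) test described in this record: for a gridded Poisson forecast, the joint log-likelihood of an observed catalog is the sum of Poisson log-probabilities over bins, and consistency is judged by where that value falls among catalogs simulated from the forecast itself. The rates and counts below are invented for illustration; the actual RELM tests additionally handle magnitude classes and data uncertainties.

```python
import numpy as np
from scipy.stats import poisson

def joint_log_likelihood(rates, counts):
    """Joint log-likelihood of observed bin counts under a Poisson
    forecast giving the expected number of events per bin."""
    return poisson.logpmf(counts, rates).sum()

# Hypothetical forecast: expected rates in four space-magnitude bins
forecast = np.array([0.5, 1.2, 0.1, 2.0])
observed = np.array([1, 1, 0, 3])
ll = joint_log_likelihood(forecast, observed)

# L-test: compare the observed log-likelihood against catalogs
# simulated from the forecast itself; a very small quantile gamma
# means the observation is inconsistent with the forecast.
rng = np.random.default_rng(0)
sims = rng.poisson(forecast, size=(10_000, forecast.size))
sim_ll = poisson.logpmf(sims, forecast).sum(axis=1)
gamma = np.mean(sim_ll <= ll)
```

Because there is no privileged null hypothesis among the RELM models, the same machinery is run symmetrically for each pair of models in the R-Test.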

  11. Post-traumatic cognitions and quality of life in terrorism victims: the role of well-being in indirect versus direct exposure.

    PubMed

    Bajo, Miriam; Blanco, Amalio; Stavraki, Maria; Gandarillas, Beatriz; Cancela, Ana; Requero, Blanca; Díaz, Darío

    2018-05-15

The effect of indirect (versus direct) exposure to a traumatic event on the quality of life of terrorist attack victims has received considerable attention in the literature. However, more research is required to examine whether the symptoms and underlying processes caused by both types of exposure are equivalent. Our main hypothesis is that well-being plays a different role depending on indirect vs. direct trauma exposure. In this cross-sectional study, eighty direct victims of 11-M terrorist attacks (people who were traveling in trains where bombs were placed) and two hundred indirect victims (individuals highly exposed to the 11-M terrorist attacks through communications media) voluntarily participated without compensation. To test our hypothesis regarding the mediating role of indirect exposure, we conducted a bias-corrected bootstrapping procedure. To test our hypothesis regarding the moderating role of direct exposure, data were subjected to a hierarchical regression analysis. As predicted, for indirect trauma exposure, well-being mediated the relationship between post-traumatic dysfunctional cognitions and trauma symptoms. However, for direct trauma exposure, well-being moderated the relationship between post-traumatic dysfunctional cognitions and trauma symptoms. The results of our study indicate that the different role of well-being found between indirect (causal factor) and direct exposure (protective factor) should be taken into consideration in interventions designed to improve victims' health.
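The bootstrapping procedure mentioned in this record builds a confidence interval for the indirect (mediated) effect a×b by resampling participants. A minimal sketch with synthetic data and a plain percentile interval (the bias-correction step is omitted for brevity); the variable names and effect sizes are illustrative, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Synthetic stand-ins (illustrative only):
x = rng.normal(size=n)                      # post-traumatic cognitions
m = 0.5 * x + rng.normal(size=n)            # well-being (mediator)
y = 0.4 * m + 0.1 * x + rng.normal(size=n)  # trauma symptoms

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]              # path a: M regressed on X
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # path b: Y on M given X
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)             # resample participants
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
# Mediation is supported when the interval excludes zero.
```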

  12. Interaction Models for Functional Regression.

    PubMed

    Usset, Joseph; Staicu, Ana-Maria; Maity, Arnab

    2016-02-01

    A functional regression model with a scalar response and multiple functional predictors is proposed that accommodates two-way interactions in addition to their main effects. The proposed estimation procedure models the main effects using penalized regression splines, and the interaction effect by a tensor product basis. Extensions to generalized linear models and data observed on sparse grids or with measurement error are presented. A hypothesis testing procedure for the functional interaction effect is described. The proposed method can be easily implemented through existing software. Numerical studies show that fitting an additive model in the presence of interaction leads to both poor estimation performance and lost prediction power, while fitting an interaction model where there is in fact no interaction leads to negligible losses. The methodology is illustrated on the AneuRisk65 study data.

  13. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

This research paper discusses the method of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator. Takeshi Amemiya [1] explained this method. However, in the present research paper, a modified Wald test statistic due to Engle, Robert [6] is proposed to test the nonlinear hypothesis using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses using the iterative NLLS estimator based on nonlinear studentized residuals has been proposed. In this research article an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained the methods of testing nonlinear hypotheses. This paper uses asymptotic properties of the nonlinear least squares estimator proposed by Jennrich [8]. The main purpose of this paper is to provide very innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors, and also studied the problem of heteroscedasticity with reference to nonlinear regression models with a suitable illustration. William Greene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
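The Wald test underlying this record compares a nonlinear restriction g(θ̂) against its delta-method variance G V G′, where G is the gradient of g at the estimate; under the null, W = g(θ̂)² / (G V G′) is asymptotically chi-squared. A sketch on an invented linear model with the nonlinear restriction β₁β₂ = 1 (the model, restriction, and data are illustrative, not the paper's):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta = np.array([0.0, 2.0, 0.5])            # beta1 * beta2 = 1, so H0 holds
y = X @ beta + rng.normal(size=n)

b = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ b
sigma2 = resid @ resid / (n - X.shape[1])
V = sigma2 * np.linalg.inv(X.T @ X)         # estimated covariance of b

g = b[1] * b[2] - 1.0                       # restriction g(b) = 0 under H0
G = np.array([0.0, b[2], b[1]])             # gradient of g at b
W = g**2 / (G @ V @ G)                      # Wald statistic, ~ chi2(1) under H0
p = chi2.sf(W, df=1)
```

In the NLLS setting of the paper, b and V would come from the iterative nonlinear estimator rather than from ordinary least squares, but the statistic has the same form.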

  14. Effects of demographic and health variables on Rasch scaled cognitive scores.

    PubMed

    Zelinski, Elizabeth M; Gilewski, Michael J

    2003-08-01

    To determine whether demographic and health variables interact to predict cognitive scores in Asset and Health Dynamics of the Oldest-Old (AHEAD), a representative survey of older Americans, as a test of the developmental discontinuity hypothesis. Rasch modeling procedures were used to rescale cognitive measures into interval scores, equating scales across measures, making it possible to compare predictor effects directly. Rasch scaling also reduces the likelihood of obtaining spurious interactions. Tasks included combined immediate and delayed recall, the Telephone Interview for Cognitive Status (TICS), Series 7, and an overall cognitive score. Demographic variables most strongly predicted performance on all scores, with health variables having smaller effects. Age interacted with both demographic and health variables, but patterns of effects varied. Demographic variables have strong effects on cognition. The developmental discontinuity hypothesis that health variables have stronger effects than demographic ones on cognition in older adults was not supported.

  15. A Brief Mindfulness Exercise Promotes the Correspondence Between the Implicit Affiliation Motive and Goal Setting.

    PubMed

    Strick, Madelijn; Papies, Esther K

    2017-05-01

    People often choose to pursue goals that are dissociated from their implicit motives, which jeopardizes their motivation and well-being. We hypothesized that mindfulness may attenuate this dissociation to the degree that it increases sensitivity to internal cues that signal one's implicit preferences. We tested this hypothesis with a longitudinal repeated measures experiment. In Session 1, participants' implicit affiliation motive was assessed. In Session 2, half of the participants completed a mindfulness exercise while the other half completed a control task before indicating their motivation toward pursuing affiliation and nonaffiliation goals. In Session 3, this procedure was repeated with reversed assignment to conditions. The results confirmed our hypothesis that, irrespective of the order of the conditions, the implicit affiliation motive predicted a preference to pursue affiliation goals immediately after the mindfulness exercise, but not after the control task. We discuss implications of these findings for satisfaction and well-being.

  16. A Brief Mindfulness Exercise Promotes the Correspondence Between the Implicit Affiliation Motive and Goal Setting

    PubMed Central

    Strick, Madelijn; Papies, Esther K.

    2017-01-01

    People often choose to pursue goals that are dissociated from their implicit motives, which jeopardizes their motivation and well-being. We hypothesized that mindfulness may attenuate this dissociation to the degree that it increases sensitivity to internal cues that signal one’s implicit preferences. We tested this hypothesis with a longitudinal repeated measures experiment. In Session 1, participants’ implicit affiliation motive was assessed. In Session 2, half of the participants completed a mindfulness exercise while the other half completed a control task before indicating their motivation toward pursuing affiliation and nonaffiliation goals. In Session 3, this procedure was repeated with reversed assignment to conditions. The results confirmed our hypothesis that, irrespective of the order of the conditions, the implicit affiliation motive predicted a preference to pursue affiliation goals immediately after the mindfulness exercise, but not after the control task. We discuss implications of these findings for satisfaction and well-being. PMID:28903636

  17. Tryptophan depletion in chronic fatigue syndrome, a pilot cross-over study.

    PubMed

    The, Gerard K H; Verkes, Robbert J; Fekkes, Durk; Bleijenberg, Gijs; van der Meer, Jos W M; Buitelaar, Jan K

    2014-09-16

Chronic fatigue syndrome (CFS) is still an enigmatic disorder. CFS can be regarded as a complex disorder with tremendous impact on the lives of CFS-patients. Full recovery without treatment is rare. A somatic explanation for the fatigue is lacking. There is clinical and experimental evidence implicating enhanced serotonergic neurotransmission in CFS. Genetic studies and imaging studies support the hypothesis of an upregulated serotonin system in CFS. In line with the hypothesis of an increased serotonergic state in CFS, we performed a randomised clinical trial investigating the effect of 5-HT3 receptor antagonism in CFS. No benefit was found of the 5-HT3 receptor antagonist ondansetron compared to placebo. To further investigate the involvement of serotonin in CFS we performed a placebo-controlled cross-over pilot study investigating the effect of Acute Tryptophan Depletion (ATD). Five female CFS-patients who met the US Center for Disease Control and Prevention criteria for CFS were recruited. There were two test days, one week apart. Each participant received placebo and ATD. To evaluate the efficacy of the ATD procedure, tryptophan and the large neutral amino acids were measured. The outcome measures were fatigue severity, concentration and mood states. ATD resulted in a significant plasma tryptophan to large neutral amino acid ratio reduction of 96%. There were no significant differences in fatigue, depression, and concentration between the placebo and ATD conditions. These first five CFS-patients did not respond to the ATD procedure. However, a much larger sample size is needed to draw final conclusions on the hypothesis of an increased serotonergic state in the pathophysiology of CFS. ISRCTN07518149.

  18. Classification image analysis: estimation and statistical inference for two-alternative forced-choice experiments

    NASA Technical Reports Server (NTRS)

    Abbey, Craig K.; Eckstein, Miguel P.

    2002-01-01

    We consider estimation and statistical hypothesis testing on classification images obtained from the two-alternative forced-choice experimental paradigm. We begin with a probabilistic model of task performance for simple forced-choice detection and discrimination tasks. Particular attention is paid to general linear filter models because these models lead to a direct interpretation of the classification image as an estimate of the filter weights. We then describe an estimation procedure for obtaining classification images from observer data. A number of statistical tests are presented for testing various hypotheses from classification images based on some more compact set of features derived from them. As an example of how the methods we describe can be used, we present a case study investigating detection of a Gaussian bump profile.
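For a linear-filter observer, one common classification-image estimator from 2AFC data averages the noise-field differences separately over correct and incorrect trials; the difference of those averages is proportional to the observer's template. A self-contained simulation sketch (template, signal, and trial counts are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
d, trials = 16, 20_000
w = np.zeros(d); w[:4] = 1.0                # the observer's (unknown) template
signal = np.zeros(d); signal[:4] = 0.5      # signal present in one interval

n_sig = rng.normal(size=(trials, d))        # noise, signal interval
n_non = rng.normal(size=(trials, d))        # noise, non-signal interval
# A linear observer chooses the interval with the larger template response
correct = (signal + n_sig) @ w > n_non @ w

dn = n_sig - n_non
# Classification image: mean noise difference on correct trials minus
# mean noise difference on incorrect trials; proportional to w on average
ci = dn[correct].mean(axis=0) - dn[~correct].mean(axis=0)
```

The recovered image is large on the four template pixels and near zero elsewhere, which is the basis for the hypothesis tests on derived features that the record describes.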

  19. Hypothesis testing in hydrology: Theory and practice

    NASA Astrophysics Data System (ADS)

    Kirchner, James; Pfister, Laurent

    2017-04-01

    Well-posed hypothesis tests have spurred major advances in hydrological theory. However, a random sample of recent research papers suggests that in hydrology, as in other fields, hypothesis formulation and testing rarely correspond to the idealized model of the scientific method. Practices such as "p-hacking" or "HARKing" (Hypothesizing After the Results are Known) are major obstacles to more rigorous hypothesis testing in hydrology, along with the well-known problem of confirmation bias - the tendency to value and trust confirmations more than refutations - among both researchers and reviewers. Hypothesis testing is not the only recipe for scientific progress, however: exploratory research, driven by innovations in measurement and observation, has also underlain many key advances. Further improvements in observation and measurement will be vital to both exploratory research and hypothesis testing, and thus to advancing the science of hydrology.

  20. POLYMAT-C: a comprehensive SPSS program for computing the polychoric correlation matrix.

    PubMed

    Lorenzo-Seva, Urbano; Ferrando, Pere J

    2015-09-01

    We provide a free noncommercial SPSS program that implements procedures for (a) obtaining the polychoric correlation matrix between a set of ordered categorical measures, so that it can be used as input for the SPSS factor analysis (FA) program; (b) testing the null hypothesis of zero population correlation for each element of the matrix by using appropriate simulation procedures; (c) obtaining valid and accurate confidence intervals via bootstrap resampling for those correlations found to be significant; and (d) performing, if necessary, a smoothing procedure that makes the matrix amenable to any FA estimation procedure. For the main purpose (a), the program uses a robust unified procedure that allows four different types of estimates to be obtained at the user's choice. Overall, we hope the program will be a very useful tool for the applied researcher, not only because it provides an appropriate input matrix for FA, but also because it allows the researcher to carefully check the appropriateness of the matrix for this purpose. The SPSS syntax, a short manual, and data files related to this article are available as Supplemental materials that are available for download with this article.
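POLYMAT-C itself is an SPSS program, and polychoric estimation involves fitting thresholds to a latent bivariate normal; as a language-neutral sketch of step (c) only, a bootstrap percentile confidence interval for an ordinary Pearson correlation looks like this (data and effect size are invented):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
x = rng.normal(size=n)
y = 0.6 * x + 0.8 * rng.normal(size=n)      # population correlation 0.6

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)             # resample cases with replacement
    boot.append(np.corrcoef(x[idx], y[idx])[0, 1])
lo, hi = np.percentile(boot, [2.5, 97.5])   # 95% percentile interval
```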

  1. The Effect of Simplifying Dental Implant Drilling Sequence on Osseointegration: An Experimental Study in Dogs

    PubMed Central

    Giro, Gabriela; Tovar, Nick; Marin, Charles; Bonfante, Estevam A.; Jimbo, Ryo; Suzuki, Marcelo; Janal, Malvin N.; Coelho, Paulo G.

    2013-01-01

    Objectives. To test the hypothesis that there would be no differences in osseointegration by reducing the number of drills for site preparation relative to conventional drilling sequence. Methods. Seventy-two implants were bilaterally placed in the tibia of 18 beagle dogs and remained for 1, 3, and 5 weeks. Thirty-six implants were 3.75 mm in diameter and the other 36 were 4.2 mm. Half of the implants of each diameter were placed under a simplified technique (pilot drill + final diameter drill) and the other half were placed under conventional drilling where multiple drills of increasing diameter were utilized. After euthanisation, the bone-implant samples were processed and referred to histological analysis. Bone-to-implant contact (BIC) and bone-area-fraction occupancy (BAFO) were assessed. Statistical analyses were performed by GLM ANOVA at 95% level of significance considering implant diameter, time in vivo, and drilling procedure as independent variables and BIC and BAFO as the dependent variables. Results. Both techniques led to implant integration. No differences in BIC and BAFO were observed between drilling procedures as time elapsed in vivo. Conclusions. The simplified drilling protocol presented comparable osseointegration outcomes to the conventional protocol, which proved the initial hypothesis. PMID:23431303

  2. Letter and symbol identification: No evidence for letter-specific crowding mechanisms.

    PubMed

    Castet, Eric; Descamps, Marine; Denis-Noël, Ambre; Colé, Pascale

    2017-09-01

    It has been proposed that letters, as opposed to symbols, trigger specialized crowding processes, boosting identification of the first and last letters of words. This hypothesis is based on evidence that single-letter accuracy as a function of within-string position has a W shape (the classic serial position function [SPF] in psycholinguistics) whereas an inverted V shape is obtained when measured with symbols. Our main goal was to test the robustness of the latter result. Our hypothesis was that any letter/symbol difference might result from short-term visual memory processes (due to the partial report [PR] procedures used in SPF studies) rather than from crowding. We therefore removed the involvement of short-term memory by precueing target-item position and compared SPFs with precueing and postcueing. Perimetric complexity was stringently matched between letters and symbols. In postcueing conditions similar to previous studies, we did not reproduce the inverted V shape for symbols: Clear-cut W shapes were observed with an overall smaller accuracy for symbols compared to letters. This letter/symbol difference was dramatically reduced in precueing conditions in keeping with our prediction. Our results are not consistent with the claim that letter strings trigger specialized crowding processes. We argue that PR procedures are not fit to isolate crowding processes.

  3. The Mantel-Haenszel procedure revisited: models and generalizations.

    PubMed

    Fidler, Vaclav; Nagelkerke, Nico

    2013-01-01

    Several statistical methods have been developed for adjusting the Odds Ratio of the relation between two dichotomous variables X and Y for some confounders Z. With the exception of the Mantel-Haenszel method, commonly used methods, notably binary logistic regression, are not symmetrical in X and Y. The classical Mantel-Haenszel method however only works for confounders with a limited number of discrete strata, which limits its utility, and appears to have no basis in statistical models. Here we revisit the Mantel-Haenszel method and propose an extension to continuous and vector valued Z. The idea is to replace the observed cell entries in strata of the Mantel-Haenszel procedure by subject specific classification probabilities for the four possible values of (X,Y) predicted by a suitable statistical model. For situations where X and Y can be treated symmetrically we propose and explore the multinomial logistic model. Under the homogeneity hypothesis, which states that the odds ratio does not depend on Z, the logarithm of the odds ratio estimator can be expressed as a simple linear combination of three parameters of this model. Methods for testing the homogeneity hypothesis are proposed. The relationship between this method and binary logistic regression is explored. A numerical example using survey data is presented.
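The classical Mantel-Haenszel estimator that this record generalizes pools stratum-specific 2×2 tables as OR_MH = Σᵢ(aᵢdᵢ/nᵢ) / Σᵢ(bᵢcᵢ/nᵢ). A minimal sketch with hypothetical counts, which also demonstrates the symmetry in X and Y emphasized in the abstract:

```python
import numpy as np

# One 2x2 table per stratum of the confounder Z;
# rows are X = 1, 0 and columns are Y = 1, 0 (hypothetical counts)
tables = [np.array([[30, 10], [20, 20]]),
          np.array([[15, 25], [10, 30]])]

num = sum(t[0, 0] * t[1, 1] / t.sum() for t in tables)
den = sum(t[0, 1] * t[1, 0] / t.sum() for t in tables)
or_mh = num / den                            # pooled odds ratio across strata

# Symmetry in X and Y: transposing every table leaves the estimate unchanged
or_mh_t = (sum(t.T[0, 0] * t.T[1, 1] / t.sum() for t in tables)
           / sum(t.T[0, 1] * t.T[1, 0] / t.sum() for t in tables))
```

The extension proposed in the record replaces the observed cell counts with model-predicted classification probabilities, so that continuous or vector-valued Z can be handled.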

  4. The Mantel-Haenszel Procedure Revisited: Models and Generalizations

    PubMed Central

    Fidler, Vaclav; Nagelkerke, Nico

    2013-01-01

    Several statistical methods have been developed for adjusting the Odds Ratio of the relation between two dichotomous variables X and Y for some confounders Z. With the exception of the Mantel-Haenszel method, commonly used methods, notably binary logistic regression, are not symmetrical in X and Y. The classical Mantel-Haenszel method however only works for confounders with a limited number of discrete strata, which limits its utility, and appears to have no basis in statistical models. Here we revisit the Mantel-Haenszel method and propose an extension to continuous and vector valued Z. The idea is to replace the observed cell entries in strata of the Mantel-Haenszel procedure by subject specific classification probabilities for the four possible values of (X,Y) predicted by a suitable statistical model. For situations where X and Y can be treated symmetrically we propose and explore the multinomial logistic model. Under the homogeneity hypothesis, which states that the odds ratio does not depend on Z, the logarithm of the odds ratio estimator can be expressed as a simple linear combination of three parameters of this model. Methods for testing the homogeneity hypothesis are proposed. The relationship between this method and binary logistic regression is explored. A numerical example using survey data is presented. PMID:23516463

  5. "Is It the Real Thing?" Cola Lab.

    ERIC Educational Resources Information Center

    McGuire, Neva; McGraw, Dana

    1988-01-01

    Introduces an interdisciplinary activity using a cola drink. Describes the lesson plan, including objectives, procedures, evaluation, projects, and conclusions. Provides two laboratory sheets containing problem, hypothesis, materials, procedure, observations, and conclusions, vocabulary table, and data table. (YP)

  6. Facilitating the Furrowed Brow: An Unobtrusive Test of the Facial Feedback Hypothesis Applied to Unpleasant Affect.

    PubMed

    Larsen, Randy J; Kasimatis, Margaret; Frey, Kurt

    1992-09-01

We examined the hypothesis that muscle contractions in the face influence subjective emotional experience. Previously, researchers have been critical of experiments designed to test this facial feedback hypothesis, particularly in terms of methodological problems that may lead to demand characteristics. In an effort to surmount these methodological problems Strack, Martin, and Stepper (1988) developed an experimental procedure whereby subjects were induced to contract facial muscles involved in the production of an emotional pattern, without being asked to actually simulate an emotion. Specifically, subjects were required to hold a pen in their teeth, which unobtrusively creates a contraction of the zygomaticus major muscles, the muscles involved in the production of a human smile. This manipulation minimises the likelihood that subjects are able to interpret their zygomaticus contractions as representing a particular emotion, thereby preventing subjects from determining the purpose of the experiment. Strack et al. (1988) found support for the facial feedback hypothesis applied to pleasant affect, in that subjects in the pen-in-teeth condition rated humorous cartoons as being funnier than subjects in the control condition (in which zygomaticus contractions were inhibited). The present study represents an extension of this unobtrusive methodology to an investigation of the facial feedback of unpleasant affect. Consistent with the Strack et al. procedure, we wanted to have subjects furrow their brow without actually instructing them to do so and without asking them to produce any emotional facial pattern at all. This was achieved by attaching two golf tees to the subject's brow region (just above the inside corner of each eye) and then instructing them to touch the tips of the golf tees together as part of a "divided-attention" experiment. 
Touching the tips of the golf tees together could only be achieved by a contraction of the corrugator supercilii muscles, the muscles involved in the production of a sad emotional facial pattern. Subjects reported significantly more sadness in response to aversive photographs while touching the tips of the golf tees together than under conditions which inhibited corrugator contractions. These results provide evidence, using a new and unobtrusive manipulation, that facial feedback operates for unpleasant affect to a degree similar to that previously found for pleasant affect.

  7. Multi-arm group sequential designs with a simultaneous stopping rule.

    PubMed

    Urach, S; Posch, M

    2016-12-30

Multi-arm group sequential clinical trials are efficient designs to compare multiple treatments to a control. They allow one to test for treatment effects already in interim analyses and can have a lower average sample number than fixed sample designs. Their operating characteristics depend on the stopping rule: We consider simultaneous stopping, where the whole trial is stopped as soon as for any of the arms the null hypothesis of no treatment effect can be rejected, and separate stopping, where only recruitment to arms for which a significant treatment effect could be demonstrated is stopped, but the other arms are continued. For both stopping rules, the family-wise error rate can be controlled by the closed testing procedure applied to group sequential tests of intersection and elementary hypotheses. The group sequential boundaries for the separate stopping rule also control the family-wise error rate if the simultaneous stopping rule is applied. However, we show that for the simultaneous stopping rule, one can apply improved, less conservative stopping boundaries for local tests of elementary hypotheses. We derive corresponding improved Pocock and O'Brien type boundaries as well as optimized boundaries to maximize the power or average sample number and investigate the operating characteristics and small sample properties of the resulting designs. To control the power to reject at least one null hypothesis, the simultaneous stopping rule requires a lower average sample number than the separate stopping rule. This comes at the cost of a lower power to reject all null hypotheses. Some of this loss in power can be regained by applying the improved stopping boundaries for the simultaneous stopping rule. The procedures are illustrated with clinical trials in systemic sclerosis and narcolepsy. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  8. Procedural Motor Learning in Children with Specific Language Impairment

    ERIC Educational Resources Information Center

    Sanjeevan, Teenu; Mainela-Arnold, Elina

    2017-01-01

    Purpose: Specific language impairment (SLI) is a developmental disorder that affects language and motor development in the absence of a clear cause. An explanation for these impairments is offered by the procedural deficit hypothesis (PDH), which argues that motor difficulties in SLI are due to deficits in procedural memory. The aim of this study…

  9. Testing the null hypothesis: the forgotten legacy of Karl Popper?

    PubMed

    Wilkinson, Mick

    2013-01-01

    Testing of the null hypothesis is a fundamental aspect of the scientific method and has its basis in the falsification theory of Karl Popper. Null hypothesis testing makes use of deductive reasoning to ensure that the truth of conclusions is irrefutable. In contrast, attempting to demonstrate the new facts on the basis of testing the experimental or research hypothesis makes use of inductive reasoning and is prone to the problem of the Uniformity of Nature assumption described by David Hume in the eighteenth century. Despite this issue and the well documented solution provided by Popper's falsification theory, the majority of publications are still written such that they suggest the research hypothesis is being tested. This is contrary to accepted scientific convention and possibly highlights a poor understanding of the application of conventional significance-based data analysis approaches. Our work should remain driven by conjecture and attempted falsification such that it is always the null hypothesis that is tested. The write up of our studies should make it clear that we are indeed testing the null hypothesis and conforming to the established and accepted philosophical conventions of the scientific method.

  10. Differential Expression of Ethanol-Induced Hypothermia in Adolescent and Adult Rats Induced by Pretest Familiarization to the Handling/Injection Procedure

    PubMed Central

    Ristuccia, Robert C.; Hernandez, Michael; Wilmouth, Carrie E.; Spear, Linda P.

    2007-01-01

    Background: Previous work examining ethanol’s autonomic effects has found contrasting patterns of age-related differences in ethanol-induced hypothermia between adolescent and adult rats. Most studies have found adolescents to be less sensitive than adults to this effect, although other work has indicated that adolescents may be more sensitive than adults under certain testing conditions. To test the hypothesis that adolescents show more ethanol hypothermia than adults when the amount of disruption induced by the test procedures is low, but less hypothermia when the experimental perturbation is greater, the present study examined the consequences of manipulating the amount of perturbation at the time of testing on ethanol-induced hypothermia in adolescent and adult rats. Methods: The amount of test disruption was manipulated by administering ethanol through a chronically indwelling gastric cannula (low perturbation) versus via intragastric intubation (higher perturbation) in Experiment 1 or by either familiarizing animals to the handling and injection procedure for several days pretest or leaving them unmanipulated before testing in Experiment 2. Results: The results showed that the handling manipulation, but not the use of gastric cannulae, altered the expression of ethanol-induced hypothermia differentially across age. When using a familiarization protocol sufficient to reduce the corticosterone response to the handling and injection procedure associated with testing, adolescents showed greater hypothermia than adults. In contrast, the opposite pattern of age differences in hypothermia was evident in animals that were not manipulated before the test day. Surprisingly, however, this difference across testing circumstances was driven by a marked reduction in hypothermia among adults who had been handled before testing, with handling having relatively little impact on ethanol hypothermia among adolescents.
Conclusions: Observed differences between adolescents and adults in the autonomic consequences of ethanol were dramatically influenced by whether animals were familiarized with the handling/injection process before testing. Under these circumstances, adolescents were less susceptible than adults to the impact of experimental perturbation on ethanol-induced hypothermia. These findings suggest that seemingly innocuous aspects of experimental design can influence conclusions reached on ontogenetic differences in sensitivity to ethanol, at least when indexed by ethanol-induced hypothermia. PMID:17374036

  11. Differential expression of ethanol-induced hypothermia in adolescent and adult rats induced by pretest familiarization to the handling/injection procedure.

    PubMed

    Ristuccia, Robert C; Hernandez, Michael; Wilmouth, Carrie E; Spear, Linda P

    2007-04-01

    Previous work examining ethanol's autonomic effects has found contrasting patterns of age-related differences in ethanol-induced hypothermia between adolescent and adult rats. Most studies have found adolescents to be less sensitive than adults to this effect, although other work has indicated that adolescents may be more sensitive than adults under certain testing conditions. To test the hypothesis that adolescents show more ethanol hypothermia than adults when the amount of disruption induced by the test procedures is low, but less hypothermia when the experimental perturbation is greater, the present study examined the consequences of manipulating the amount of perturbation at the time of testing on ethanol-induced hypothermia in adolescent and adult rats. The amount of test disruption was manipulated by administering ethanol through a chronically indwelling gastric cannula (low perturbation) versus via intragastric intubation (higher perturbation) in Experiment 1 or by either familiarizing animals to the handling and injection procedure for several days pretest or leaving them unmanipulated before testing in Experiment 2. The results showed that the handling manipulation, but not the use of gastric cannulae, altered the expression of ethanol-induced hypothermia differentially across age. When using a familiarization protocol sufficient to reduce the corticosterone response to the handling and injection procedure associated with testing, adolescents showed greater hypothermia than adults. In contrast, the opposite pattern of age differences in hypothermia was evident in animals that were not manipulated before the test day. Surprisingly, however, this difference across testing circumstances was driven by a marked reduction in hypothermia among adults who had been handled before testing, with handling having relatively little impact on ethanol hypothermia among adolescents. 
Observed differences between adolescents and adults in the autonomic consequences of ethanol were dramatically influenced by whether animals were familiarized with the handling/injection process before testing. Under these circumstances, adolescents were less susceptible than adults to the impact of experimental perturbation on ethanol-induced hypothermia. These findings suggest that seemingly innocuous aspects of experimental design can influence conclusions reached on ontogenetic differences in sensitivity to ethanol, at least when indexed by ethanol-induced hypothermia.

  12. Qualitative computer aided evaluation of dental impressions in vivo.

    PubMed

    Luthardt, Ralph G; Koch, Rainer; Rudolph, Heike; Walter, Michael H

    2006-01-01

    Clinical investigations dealing with the precision of different impression techniques are rare. The objective of the present study was to develop and evaluate a procedure for the qualitative analysis of three-dimensional impression precision based on an established in-vitro procedure. The null hypothesis to be tested was that the precision of impressions does not differ depending on the impression technique used (single-step, monophase, and two-step techniques) or on clinical variables. Digital surface data of patients' teeth prepared for crowns were gathered from standardized manufactured master casts after impressions with three different techniques were taken in randomized order. Data sets were analyzed for each patient in comparison with the one-step impression chosen as the reference. The qualitative analysis was limited to data points within the 99.5% range. Based on the color-coded representation, areas with maximum deviations were determined (preparation margin, mantle surface, and occlusal surface). To qualitatively analyze the precision of the impression techniques, the hypothesis was tested in linear models for repeated-measures factors (p < 0.05). For the positive 99.5% deviations, no variables with significant influence were identified in the statistical analysis. In contrast, the impression technique and the position of the preparation margin significantly influenced the negative 99.5% deviations. The influence of clinical parameters on the deviations between impression techniques can be determined reliably using the 99.5th percentile of the deviations. An analysis of the areas with maximum deviations showed high clinical relevance. The preparation margin emerged as the weak spot of impression taking.

  13. Nervous temperament in infant monkeys is associated with reduced sensitivity of leukocytes to cortisol's influence on trafficking.

    PubMed

    Capitanio, John P; Mendoza, Sally P; Cole, Steve W

    2011-01-01

    There is growing evidence that temperament/personality factors are associated with immune function and health-related outcomes. Neuroticism, in particular, is a risk factor for several diseases, many with a strong inflammatory component. We propose that neuroticism (or nervous temperament in monkeys) is related to dysregulation of immune function by glucocorticoids. The present study tested the hypothesis that animals with a nervous temperament would show no relationship between cortisol concentrations and leukocyte numbers in peripheral blood (an easily obtainable measure of glucocorticoid-mediated immune function), while animals low on this factor would show expected relationships. Infant rhesus monkeys (n=1507) experienced a standardized testing procedure involving blood sampling, behavioral tests, and temperament ratings. Results confirmed the hypothesis: low-nervous animals showed the expected positive relationship between cortisol levels and neutrophil numbers, while high-nervous animals showed no relationship. High-nervous animals also showed elevated cortisol concentrations at most sample points, and responded to a human challenge with more negative emotional behavior. These data suggest that individuals with a nervous temperament show evidence of glucocorticoid desensitization of immune cells. Differences from other studies, including the specific types of leukocytes that are affected, are discussed, and implications for disease processes are suggested. Copyright © 2010 Elsevier Inc. All rights reserved.

  14. Harnessing Multivariate Statistics for Ellipsoidal Data in Structural Geology

    NASA Astrophysics Data System (ADS)

    Roberts, N.; Davis, J. R.; Titus, S.; Tikoff, B.

    2015-12-01

    Most structural geology articles do not state significance levels, report confidence intervals, or perform regressions to find trends. This is, in part, because structural data tend to include directions, orientations, ellipsoids, and tensors, which are not treatable by elementary statistics. We describe a full procedural methodology for the statistical treatment of ellipsoidal data. We use a reconstructed dataset of deformed ooids in Maryland from Cloos (1947) to illustrate the process. Normalized ellipsoids have five degrees of freedom and can be represented by a second-order tensor. This tensor can be permuted into a five-dimensional vector that belongs to a vector space and can be treated with standard multivariate statistics. Cloos made several claims about the distribution of deformation in the South Mountain fold, Maryland, and we reexamine two particular claims using hypothesis testing: 1) octahedral shear strain increases towards the axial plane of the fold; 2) finite strain orientation varies systematically along the trend of the axial trace as it bends with the Appalachian orogen. We then test the null hypothesis that the southern segment of South Mountain is the same as the northern segment. This test illustrates the application of ellipsoidal statistics, which combine both orientation and shape. We report confidence intervals for each test, and graphically display our results with novel plots. This poster illustrates the importance of statistics in structural geology, especially when working with noisy or small datasets.
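
    The tensor-to-vector step described above can be sketched concretely (a minimal illustration under standard conventions, not the authors' code): a normalized ellipsoid's log-shape tensor is symmetric and traceless, leaving five degrees of freedom, so projecting it onto an orthonormal basis of traceless symmetric matrices yields the five-dimensional vector used for multivariate statistics.

```python
import numpy as np

# Orthonormal basis (Frobenius inner product) for traceless symmetric 3x3 matrices.
E = [
    np.diag([1.0, -1.0, 0.0]) / np.sqrt(2.0),
    np.diag([1.0, 1.0, -2.0]) / np.sqrt(6.0),
]
for i, j in [(0, 1), (0, 2), (1, 2)]:
    M = np.zeros((3, 3))
    M[i, j] = M[j, i] = 1.0 / np.sqrt(2.0)
    E.append(M)

def to_vector(T):
    """Project a traceless symmetric tensor onto the basis -> 5-vector."""
    return np.array([np.sum(T * Ek) for Ek in E])

def to_tensor(v):
    """Inverse map: rebuild the tensor from its 5-vector."""
    return sum(vk * Ek for vk, Ek in zip(v, E))

# Example: the log of a unit-determinant ellipsoid shape matrix is traceless.
A = np.diag([2.0, 1.0, 0.5])          # semi-axis lengths, det = 1
T = np.diag(np.log(np.diag(A)))       # log-shape tensor, trace = 0
v = to_vector(T)                      # 5-vector ready for multivariate statistics
assert np.allclose(to_tensor(v), T)   # round trip recovers the tensor
```

    With each ellipsoid reduced to such a vector, standard multivariate tools (e.g., Hotelling's T² comparisons of means) apply directly, which is the spirit of the procedure the abstract describes.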

  15. Working, declarative and procedural memory in specific language impairment

    PubMed Central

    Lum, Jarrad A.G.; Conti-Ramsden, Gina; Page, Debra; Ullman, Michael T.

    2012-01-01

    According to the Procedural Deficit Hypothesis (PDH), abnormalities of brain structures underlying procedural memory largely explain the language deficits in children with specific language impairment (SLI). These abnormalities are posited to result in core deficits of procedural memory, which in turn explain the grammar problems in the disorder. The abnormalities are also likely to lead to problems with other, non-procedural functions, such as working memory, that rely at least partly on the affected brain structures. In contrast, declarative memory is expected to remain largely intact, and should play an important compensatory role for grammar. These claims were tested by examining measures of working, declarative and procedural memory in 51 children with SLI and 51 matched typically-developing (TD) children (mean age 10). Working memory was assessed with the Working Memory Test Battery for Children, declarative memory with the Children’s Memory Scale, and procedural memory with a visuo-spatial Serial Reaction Time task. As compared to the TD children, the children with SLI were impaired at procedural memory, even when holding working memory constant. In contrast, they were spared at declarative memory for visual information, and at declarative memory in the verbal domain after controlling for working memory and language. Visuo-spatial short-term memory was intact, whereas verbal working memory was impaired, even when language deficits were held constant. Correlation analyses showed neither visuo-spatial nor verbal working memory was associated with either lexical or grammatical abilities in either the SLI or TD children. Declarative memory correlated with lexical abilities in both groups of children. Finally, grammatical abilities were associated with procedural memory in the TD children, but with declarative memory in the children with SLI. These findings replicate and extend previous studies of working, declarative and procedural memory in SLI. 
Overall, we suggest that the evidence largely supports the predictions of the PDH. PMID:21774923

  16. Extraordinary Claims Require Extraordinary Evidence: The Case of Non-Local Perception, a Classical and Bayesian Review of Evidences

    PubMed Central

    Tressoldi, Patrizio E.

    2011-01-01

    Starting from the famous phrase “extraordinary claims require extraordinary evidence,” we will present the evidence supporting the concept that human visual perception may have non-local properties, in other words, that it may operate beyond the space and time constraints of sensory organs, in order to discuss which criteria can be used to define evidence as extraordinary. This evidence has been obtained from seven databases which are related to six different protocols used to test the reality and the functioning of non-local perception, analyzed using both a frequentist and a new Bayesian meta-analysis statistical procedure. According to the frequentist meta-analysis, the null hypothesis can be rejected for all six protocols even if the effect sizes range from 0.007 to 0.28. According to the Bayesian meta-analysis, the Bayes factors provide strong evidence to support the alternative hypothesis (H1) over the null hypothesis (H0), but only for three out of the six protocols. We will discuss whether quantitative psychology can contribute to defining the criteria for the acceptance of new scientific ideas in order to avoid the inconclusive controversies between supporters and opponents. PMID:21713069
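
    The frequentist side of such an analysis can be sketched in a few lines (a fixed-effect sketch with made-up effect sizes, not the paper's data): study effects are pooled with inverse-variance weights, and the null hypothesis of zero mean effect is tested with a z statistic.

```python
import math

# Hypothetical per-study effect sizes and their sampling variances (illustrative only).
effects = [0.12, 0.25, 0.07, 0.18]
variances = [0.010, 0.020, 0.015, 0.012]

# Inverse-variance weighting: precise studies count more.
weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
z = pooled / se

# Two-sided p-value from the standard normal distribution via erf.
p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

    A Bayesian meta-analysis replaces this p-value with a Bayes factor comparing the zero-effect and nonzero-effect models, which is the contrast the abstract draws.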

  17. Nonword repetition in lexical decision: support for two opposing processes.

    PubMed

    Wagenmakers, Eric-Jan; Zeelenberg, René; Steyvers, Mark; Shiffrin, Richard; Raaijmakers, Jeroen

    2004-10-01

    We tested and confirmed the hypothesis that the effect of prior presentation of nonwords in lexical decision is the net result of two opposing processes: (1) a relatively fast inhibitory process based on global familiarity; and (2) a relatively slow facilitatory process based on the retrieval of specific episodic information. In three studies, we manipulated speed-stress to influence the balance between the two processes. Experiment 1 showed item-specific improvement for repeated nonwords in a standard "respond-when-ready" lexical decision task. Experiment 2 used a 400-ms deadline procedure and showed performance for nonwords to be unaffected by up to four prior presentations. In Experiment 3 we used a signal-to-respond procedure with variable time intervals and found negative repetition priming for repeated nonwords. These results can be accounted for by dual-process models of lexical decision.

  18. Does it really matter whether students' contributions are spoken versus typed in an intelligent tutoring system with natural language?

    PubMed

    D'Mello, Sidney K; Dowell, Nia; Graesser, Arthur

    2011-03-01

    An open question is whether learning differs when students speak versus type their responses when interacting with intelligent tutoring systems with natural language dialogues. Theoretical bases exist for three contrasting hypotheses. The speech facilitation hypothesis predicts that spoken input will increase learning, whereas the text facilitation hypothesis predicts typed input will be superior. The modality equivalence hypothesis claims that learning gains will be equivalent. Previous experiments that tested these hypotheses were confounded by automated speech recognition systems with substantial error rates that were detected by learners. We addressed this concern in two experiments via a Wizard of Oz procedure, where a human intercepted the learner's speech and transcribed the utterances before submitting them to the tutor. The overall pattern of the results supported the following conclusions: (1) learning gains associated with spoken and typed input were on par and quantitatively higher than a no-intervention control, (2) participants' evaluations of the session were not influenced by modality, and (3) there were no modality effects associated with differences in prior knowledge and typing proficiency. Although the results generally support the modality equivalence hypothesis, highly motivated learners reported lower cognitive load and demonstrated increased learning when typing compared with speaking. We discuss the implications of our findings for intelligent tutoring systems that can support typed and spoken input.

  19. P value and the theory of hypothesis testing: an explanation for new researchers.

    PubMed

    Biau, David Jean; Jolles, Brigitte M; Porcher, Raphaël

    2010-03-01

    In the 1920s, Ronald Fisher developed the theory behind the p value and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true rather than the probability of obtaining the difference observed, or one that is more extreme, considering the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
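
    The Type I error level that investigators choose is easy to demonstrate by simulation (a minimal sketch, not from the paper): when the null hypothesis is actually true and we reject whenever p < 0.05, repeated experiments should reject in roughly 5% of cases.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_sims, n = 10_000, 30

rejections = 0
for _ in range(n_sims):
    # Data generated under H0: the true mean really is 0.
    sample = rng.normal(loc=0.0, scale=1.0, size=n)
    t, p = stats.ttest_1samp(sample, popmean=0.0)
    if p < alpha:
        rejections += 1

type_i_rate = rejections / n_sims   # hovers near alpha = 0.05
```

    Every rejection in this simulation is a false positive, which is exactly the "falsely significant" risk the abstract warns about when many tests are run.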

  20. POST-RETRIEVAL PROPRANOLOL TREATMENT DOES NOT MODULATE RECONSOLIDATION OR EXTINCTION OF ETHANOL-INDUCED CONDITIONED PLACE PREFERENCE

    PubMed Central

    Font, Laura; Cunningham, Christopher L.

    2012-01-01

    The reconsolidation hypothesis posits that established emotional memories, when reactivated, become labile and susceptible to disruption. Post-retrieval injection of propranolol (PRO), a nonspecific β-adrenergic receptor antagonist, impairs subsequent retention performance of a cocaine- and a morphine-induced conditioned place preference (CPP), implicating the noradrenergic system in the reconsolidation processes of drug-seeking behavior. An important question is whether post-retrieval PRO disrupts memory for the drug-cue associations, or instead interferes with extinction. In the present study, we evaluated the role of the β-adrenergic system on the reconsolidation and extinction of ethanol-induced CPP. Male DBA/2J mice were trained using a weak or a strong conditioning procedure, achieved by varying the ethanol conditioning dose (1 or 2 g/kg) and the number of ethanol trials (2 or 4). After acquisition of ethanol CPP, animals were given a single post-retrieval injection of PRO (0, 10 or 30 mg/kg) and tested for memory reconsolidation 24 h later. Also, after the first reconsolidation test, mice received 18 additional 15-min choice extinction tests in which PRO was injected immediately after every test. Contrary to the prediction of the reconsolidation hypothesis, a single PRO injection after the retrieval test did not modify subsequent memory retention. In addition, repeated post-retrieval administration of PRO did not interfere with extinction of CPP in mice. Overall, our data suggest that the β-adrenergic receptor does not modulate the associative processes underlying ethanol CPP. PMID:22285323

  1. Psychophysics of associative learning: Quantitative properties of subjective contingency.

    PubMed

    Maia, Susana; Lefèvre, Françoise; Jozefowiez, Jérémie

    2018-01-01

    Allan and collaborators (Allan, Hannah, Crump, & Siegel, 2008; Allan, Siegel, & Tangen, 2005; Siegel, Allan, Hannah, & Crump, 2009) recently proposed to apply signal detection theory to the analysis of contingency judgment tasks. When exposed to a flow of stimuli, participants are asked to judge whether there is a contingent relation between a cue and an outcome, that is, whether the subjective cue-outcome contingency exceeds a decision threshold. In this context, we tested the following hypotheses regarding the relation between objective and subjective cue-outcome contingency: (a) The underlying distributions of subjective cue-outcome contingency are Gaussian; (b) The mean distribution of subjective contingency is a linear function of objective cue-outcome contingency; and (c) The variance in the distribution of subjective contingency is constant. The hypotheses were tested by combining a streamed-trial contingency assessment task with a confidence rating procedure. Participants were exposed to rapid flows of stimuli at the end of which they had to judge whether an outcome was more (Experiment 1) or less (Experiment 2) likely to appear following a cue and how sure they were of their judgment. We found that although Hypothesis A seems reasonable, Hypotheses B and C were not supported. Regarding Hypothesis B, participants were more sensitive to positive than to negative contingencies. Regarding Hypothesis C, the perceived cue-outcome contingency became more variable when the contingency became more positive or negative, but only to a slight extent. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
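
    The signal-detection framing above can be made concrete with the standard equal-variance Gaussian model (a textbook sketch, not the authors' analysis): sensitivity d' is the separation between the two subjective-contingency distributions, estimated from hit and false-alarm rates.

```python
from scipy.stats import norm

def dprime(hit_rate, fa_rate):
    """Equal-variance Gaussian SDT: d' = z(hits) - z(false alarms)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def criterion(hit_rate, fa_rate):
    """Decision threshold c; 0 means unbiased responding."""
    return -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))

# A participant who says "contingent" on 84% of contingent streams
# and 16% of non-contingent streams:
d = dprime(0.84, 0.16)     # about 2.0: the subjective distributions sit ~2 SD apart
c = criterion(0.84, 0.16)  # about 0: no response bias
```

    In the authors' application, d' indexes how well the subjective contingency separates contingent from non-contingent streams, and c is the decision threshold that the confidence-rating procedure helps locate.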

  2. Online biospeckle assessment without loss of definition and resolution by motion history image

    NASA Astrophysics Data System (ADS)

    Godinho, R. P.; Silva, M. M.; Nozela, J. R.; Braga, R. A.

    2012-03-01

    The use of dynamic laser speckle as a reliable instrument for producing maps of activity in biological material is well documented in the optics and laser literature. Its application, particularly to live specimens such as animals and human beings, has required approaches that avoid movement of the bodies, which changes the speckle patterns and undermines the monitoring of biological activity. The adoption of online techniques circumvented the noise generated by such movement, but at a considerable cost in the resolution and definition of the activity maps. This work presents a feasible alternative to routine online methods based on the Motion History Image (MHI) methodology. The MHI approach was tested on biological and non-biological samples and compared with online as well as offline procedures of biospeckle image analysis. Tests on paint drying associated with alcohol volatilization, and tests on a maize seed and on growing roots, confirmed the hypothesis that MHI can implement an online approach without reducing the resolution and definition of the resulting images, in some cases yielding results comparable to the offline procedures.

  3. Debates—Hypothesis testing in hydrology: Theory and practice

    NASA Astrophysics Data System (ADS)

    Pfister, Laurent; Kirchner, James W.

    2017-03-01

    The basic structure of the scientific method—at least in its idealized form—is widely championed as a recipe for scientific progress, but the day-to-day practice may be different. Here, we explore the spectrum of current practice in hypothesis formulation and testing in hydrology, based on a random sample of recent research papers. This analysis suggests that in hydrology, as in other fields, hypothesis formulation and testing rarely correspond to the idealized model of the scientific method. Practices such as "p-hacking" or "HARKing" (Hypothesizing After the Results are Known) are major obstacles to more rigorous hypothesis testing in hydrology, along with the well-known problem of confirmation bias—the tendency to value and trust confirmations more than refutations—among both researchers and reviewers. Nonetheless, as several examples illustrate, hypothesis tests have played an essential role in spurring major advances in hydrological theory. Hypothesis testing is not the only recipe for scientific progress, however. Exploratory research, driven by innovations in measurement and observation, has also underlain many key advances. Further improvements in observation and measurement will be vital to both exploratory research and hypothesis testing, and thus to advancing the science of hydrology.

  4. Bayesian inference for psychology. Part II: Example applications with JASP.

    PubMed

    Wagenmakers, Eric-Jan; Love, Jonathon; Marsman, Maarten; Jamil, Tahira; Ly, Alexander; Verhagen, Josine; Selker, Ravi; Gronau, Quentin F; Dropmann, Damian; Boutin, Bruno; Meerhoff, Frans; Knight, Patrick; Raj, Akash; van Kesteren, Erik-Jan; van Doorn, Johnny; Šmíra, Martin; Epskamp, Sacha; Etz, Alexander; Matzke, Dora; de Jong, Tim; van den Bergh, Don; Sarafoglou, Alexandra; Steingroever, Helen; Derks, Koen; Rouder, Jeffrey N; Morey, Richard D

    2018-02-01

    Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP ( http://www.jasp-stats.org ), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder's BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
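
    Under the hood, such software computes a Bayes factor by integrating the likelihood over a prior on effect size. A rough sketch of the default JZS Bayes factor for a one-sample t-test (following Rouder and colleagues' formulation, on which the BayesFactor package is based; the function name and the r = 0.707 scale are illustrative choices, not JASP's actual code):

```python
import numpy as np
from scipy import integrate

def jzs_bf10(t, n, r=0.707):
    """JZS Bayes factor BF10 for a one-sample t-test (sketch).

    Compares the marginal likelihood under H1 (Cauchy prior with
    scale r on the standardized effect, written as a scale mixture
    over g) against the likelihood under H0 (zero effect).
    """
    nu = n - 1

    def integrand(g):
        return ((1 + n * g) ** -0.5
                * (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
                * r / np.sqrt(2 * np.pi)
                * g ** -1.5
                * np.exp(-r**2 / (2 * g)))

    marginal_h1, _ = integrate.quad(integrand, 0, np.inf)
    likelihood_h0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)
    return marginal_h1 / likelihood_h0
```

    For t = 0 the result falls below 1 (the data favor H0), and large t values push BF10 well above 1, which illustrates the evidence-quantifying behavior the abstract highlights.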

  5. Teaching Hypothesis Testing by Debunking a Demonstration of Telepathy.

    ERIC Educational Resources Information Center

    Bates, John A.

    1991-01-01

    Discusses a lesson designed to demonstrate hypothesis testing to introductory college psychology students. Explains that a psychology instructor demonstrated apparent psychic abilities to students. Reports that students attempted to explain the instructor's demonstrations through hypothesis testing and revision. Provides instructions on performing…

  6. Trends in hypothesis testing and related variables in nursing research: a retrospective exploratory study.

    PubMed

    Lash, Ayhan Aytekin; Plonczynski, Donna J; Sehdev, Amikar

    2011-01-01

    To compare the inclusion and the influences of selected variables on hypothesis testing during the 1980s and 1990s. In spite of the emphasis on conducting inquiry consistent with the tenets of logical positivism, there have been no studies investigating the frequency and patterns of hypothesis testing in nursing research. The sample was obtained from the journal Nursing Research, the research journal with the highest circulation during the period under study. All quantitative studies published during the two decades, including briefs and historical studies, were included in the analyses. A retrospective design was used to select the sample: five years each from the 1980s and the 1990s were randomly selected from Nursing Research. Of the 582 studies, 517 met inclusion criteria. Findings suggest that there was a decline in the use of hypothesis testing in the last decades of the 20th century. Further research is needed to identify the factors that influence the conduct of research with hypothesis testing. Hypothesis testing in nursing research showed a steady decline from the 1980s to the 1990s. Research purposes of explanation and prediction/control increased the likelihood of hypothesis testing. Hypothesis testing strengthens the quality of quantitative studies, increases the generalizability of findings, and provides dependable knowledge. This is particularly true for quantitative studies that aim to explore, explain, and predict/control phenomena and/or test theories. The findings also have implications for doctoral programmes, the research preparation of nurse-investigators, and theory testing.

  7. The revelation effect: A meta-analytic test of hypotheses.

    PubMed

    Aßfalg, André; Bernstein, Daniel M; Hockley, William

    2017-12-01

    Judgments can depend on the activity directly preceding them. An example is the revelation effect whereby participants are more likely to claim that a stimulus is familiar after a preceding task, such as solving an anagram, than without a preceding task. We test conflicting predictions of four revelation-effect hypotheses in a meta-analysis of 26 years of revelation-effect research. The hypotheses' predictions refer to three subject areas: (1) the basis of judgments that are subject to the revelation effect (recollection vs. familiarity vs. fluency), (2) the degree of similarity between the task and test item, and (3) the difficulty of the preceding task. We use a hierarchical multivariate meta-analysis to account for dependent effect sizes and variance in experimental procedures. We test the revelation-effect hypotheses with a model selection procedure, where each model corresponds to a prediction of a revelation-effect hypothesis. We further quantify the amount of evidence for one model compared to another with Bayes factors. The results of this analysis suggest that none of the extant revelation-effect hypotheses can fully account for the data. The general vagueness of revelation-effect hypotheses and the scarcity of data were the major limiting factors in our analyses, emphasizing the need for formalized theories and further research into the puzzling revelation effect.

  8. Effects of Phasor Measurement Uncertainty on Power Line Outage Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Chen; Wang, Jianhui; Zhu, Hao

    2014-12-01

    Phasor measurement unit (PMU) technology provides an effective tool to enhance the wide-area monitoring systems (WAMSs) in power grids. Although extensive studies have been conducted to develop several PMU applications in power systems (e.g., state estimation, oscillation detection and control, voltage stability analysis, and line outage detection), the uncertainty aspects of PMUs have not been adequately investigated. This paper focuses on quantifying the impact of PMU uncertainty on power line outage detection and identification, in which a limited number of PMUs installed at a subset of buses are utilized to detect and identify the line outage events. Specifically, the line outage detection problem is formulated as a multi-hypothesis test, and a general Bayesian criterion is used for the detection procedure, in which the PMU uncertainty is analytically characterized. We further apply the minimum detection error criterion for the multi-hypothesis test and derive the expected detection error probability in terms of PMU uncertainty. The framework proposed provides fundamental guidance for quantifying the effects of PMU uncertainty on power line outage detection. Case studies are provided to validate our analysis and show how PMU uncertainty influences power line outage detection.
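
    The multi-hypothesis detection step can be illustrated with a toy MAP detector (a hedged sketch with synthetic numbers, not the paper's model): each outage hypothesis predicts a different measurement mean, PMU uncertainty is modeled as Gaussian noise, and the detector picks the hypothesis with the highest posterior probability.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical phasor-angle signatures for K = 4 outage hypotheses.
means = np.array([0.0, 1.0, 2.0, 3.0])
sigma = 0.3                      # PMU measurement uncertainty (std dev)
priors = np.full(4, 0.25)        # equal prior probability per hypothesis

def map_detect(y):
    """Pick the hypothesis maximizing prior x Gaussian likelihood."""
    log_post = np.log(priors) - (y - means) ** 2 / (2 * sigma**2)
    return int(np.argmax(log_post))

# Monte Carlo estimate of the detection error probability.
n_trials = 5000
errors = 0
for _ in range(n_trials):
    k = rng.integers(4)                   # true (randomly chosen) hypothesis
    y = rng.normal(means[k], sigma)       # noisy PMU measurement
    if map_detect(y) != k:
        errors += 1
error_prob = errors / n_trials
```

    Shrinking sigma drives error_prob toward zero, which mirrors how measurement uncertainty enters the expected detection error probability derived in the paper.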

  9. Bayesian adaptive phase II screening design for combination trials.

    PubMed

    Cai, Chunyan; Yuan, Ying; Johnson, Valen E

    2013-01-01

    Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease. To more efficiently handle the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to simultaneously select among possible treatment combinations involving multiple agents. Our design is based on formulating the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination is equated to a single hypothesis. During the trial conduct, we use the current values of the posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Simulation studies show that the proposed design substantially outperforms the conventional multiarm balanced factorial trial design. The proposed design yields a significantly higher probability for selecting the best treatment while allocating substantially more patients to efficacious treatments. The proposed design is most appropriate for trials that combine multiple agents and screen for the efficacious combination to be further investigated. The proposed Bayesian adaptive phase II screening design substantially outperformed the conventional complete factorial design. Our design allocates more patients to better treatments while providing higher power to identify the best treatment at the end of the trial.
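
    The posterior-probability-driven allocation can be sketched with a Beta-Bernoulli model (illustrative response counts, not the trial's design): Monte Carlo draws from each combination's Beta posterior estimate the posterior probability that each combination is best, and the next cohort is allocated in proportion to these probabilities.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical response counts for three treatment combinations.
successes = np.array([12, 20, 15])
failures = np.array([18, 10, 15])

# Draw from each arm's Beta(1 + s, 1 + f) posterior (uniform prior).
draws = rng.beta(1 + successes, 1 + failures, size=(100_000, 3))

# Posterior probability that each arm has the highest response rate.
p_best = np.bincount(np.argmax(draws, axis=1), minlength=3) / draws.shape[0]
```

    Allocating patients proportionally to p_best concentrates the trial on the currently best-supported combination, which is the adaptive behavior the abstract contrasts with a balanced factorial design.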

  10. Semantically enabled and statistically supported biological hypothesis testing with tissue microarray databases

    PubMed Central

    2011-01-01

    Background Although many biological databases are applying semantic web technologies, meaningful biological hypothesis testing cannot be easily achieved. Database-driven high throughput genomic hypothesis testing requires both the capability to obtain semantically relevant experimental data and the capability to perform relevant statistical testing on the retrieved data. Tissue Microarray (TMA) data are semantically rich and contain many biologically important hypotheses waiting for high throughput conclusions. Methods An application-specific ontology was developed for managing TMA and DNA microarray databases by semantic web technologies. Data were represented as Resource Description Framework (RDF) according to the framework of the ontology. Applications for hypothesis testing (Xperanto-RDF) for TMA data were designed and implemented by (1) formulating the syntactic and semantic structures of the hypotheses derived from TMA experiments, (2) formulating SPARQLs to reflect the semantic structures of the hypotheses, and (3) performing statistical tests with the result sets returned by the SPARQLs. Results When a user designs a hypothesis in Xperanto-RDF and submits it, the hypothesis can be tested against TMA experimental data stored in Xperanto-RDF. When we evaluated four previously validated hypotheses as an illustration, all the hypotheses were supported by Xperanto-RDF. Conclusions We demonstrated the utility of high throughput biological hypothesis testing. We believe that such preliminary investigation can be beneficial before performing highly controlled experiments. PMID:21342584

  11. Direct and indirect effects of birth order on personality and identity: support for the null hypothesis.

    PubMed

    Dunkel, Curtis S; Harbke, Colin R; Papini, Dennis R

    2009-06-01

    The authors proposed that birth order affects psychosocial outcomes through differential investment from parent to child and differences in the degree of identification from child to parent. The authors conducted this study to test these 2 models. Despite the use of statistical and methodological procedures to increase sensitivity and reduce error, the authors did not find support for the models. They discuss results in the context of the mixed-research findings regarding birth order and suggest further research on the proposed developmental dynamics that may produce birth-order effects.

  12. Dynamic testing in schizophrenia: does training change the construct validity of a test?

    PubMed

    Wiedl, Karl H; Schöttke, Henning; Green, Michael F; Nuechterlein, Keith H

    2004-01-01

    Dynamic testing typically involves specific interventions for a test to assess the extent to which test performance can be modified, beyond level of baseline (static) performance. This study used a dynamic version of the Wisconsin Card Sorting Test (WCST) that is based on cognitive remediation techniques within a test-training-test procedure. From results of previous studies with schizophrenia patients, we concluded that the dynamic and static versions of the WCST should have different construct validity. This hypothesis was tested by examining the patterns of correlations with measures of executive functioning, secondary verbal memory, and verbal intelligence. Results demonstrated a specific construct validity of WCST dynamic (i.e., posttest) scores as an index of problem solving (Tower of Hanoi) and secondary verbal memory and learning (Auditory Verbal Learning Test), whereas the impact of general verbal capacity and selective attention (Verbal IQ, Stroop Test) was reduced. It is concluded that the construct validity of the test changes with dynamic administration and that this difference helps to explain why the dynamic version of the WCST predicts functional outcome better than the static version.

  13. Behavioral mechanisms of context fear generalization in mice

    PubMed Central

    Huckleberry, Kylie A.; Ferguson, Laura B.

    2016-01-01

    There is growing interest in generalization of learned contextual fear, driven in part by the hypothesis that mood and anxiety disorders stem from impaired hippocampal mechanisms of fear generalization and discrimination. However, there has been relatively little investigation of the behavioral and procedural mechanisms that might control generalization of contextual fear. We assessed the relative contribution of different contextual features to context fear generalization and characterized how two common conditioning protocols—foreground (uncued) and background (cued) contextual fear conditioning—affected context fear generalization. In one experiment, mice were fear conditioned in context A, and then tested for contextual fear both in A and in an alternate context created by changing a subset of A's elements. The results suggest that floor configuration and odor are more salient features than chamber shape. A second experiment compared context fear generalization in background and foreground context conditioning. Although foreground conditioning produced more context fear than background conditioning, the two procedures produced equal amounts of generalized fear. Finally, results indicated that the order of context tests (original first versus alternate first) significantly modulates context fear generalization, perhaps because the original and alternate contexts are differentially sensitive to extinction. Overall, results demonstrate that context fear generalization is sensitive to procedural variations and likely reflects the operation of multiple interacting psychological and neural mechanisms. PMID:27918275

  14. Image-Based Patient-Specific Ventricle Models with Fluid-Structure Interaction for Cardiac Function Assessment and Surgical Design Optimization

    PubMed Central

    Tang, Dalin; Yang, Chun; Geva, Tal; del Nido, Pedro J.

    2010-01-01

    Recent advances in medical imaging technology and computational modeling techniques are making it possible to construct patient-specific computational ventricle models and use them to test surgical hypotheses, replacing empirical and often risky clinical experimentation in examining the efficiency and suitability of various reconstructive procedures in diseased hearts. In this paper, we provide a brief review of recent developments in ventricle modeling and its potential application in surgical planning and management of tetralogy of Fallot (ToF) patients. Aspects of data acquisition, model selection and construction, tissue material properties, ventricle layer structure and tissue fiber orientations, pressure condition, model validation, and virtual surgery procedures (changing patient-specific ventricle data and performing computer simulations) were reviewed. Results from a case study using patient-specific cardiac magnetic resonance (CMR) imaging and a right/left ventricle and patch (RV/LV/Patch) combination model with fluid-structure interactions (FSI) were reported. The models were used to evaluate and optimize the human pulmonary valve replacement/insertion (PVR) surgical procedure and patch design, and to test a surgical hypothesis that PVR with a small patch and aggressive scar tissue trimming may lead to improved recovery of RV function and reduced stress/strain conditions in the patch area. PMID:21344066

  15. Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Dale; Selby, Neil

    2012-08-14

    Well established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event screening hypothesis test (Fisher's and Tippett's tests). The commonly used standard error in the Ms:mb event screening hypothesis test is not fully consistent with its physical basis. An improved standard error gives better agreement with the physical basis, correctly partitions error to include model error as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope (β = 1, Selby et al.), whereas the improved standard error 'fails to reject' H0.
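
    Fisher's and Tippett's combination tests mentioned above have simple closed forms. A self-contained sketch (an illustration, not the authors' code): Fisher's statistic is X = -2 Σ ln p_i, chi-square with 2k degrees of freedom under H0 (for even degrees of freedom the tail probability has a closed Erlang form), while Tippett's test rejects when the smallest p-value is small:

```python
import math

def fisher_combine(pvals):
    """Fisher's method: X = -2 * sum(ln p_i) ~ chi-square with 2k d.o.f. under H0.
    For even d.o.f. 2k the survival function is exp(-x/2) * sum_{j<k} (x/2)^j / j!."""
    k = len(pvals)
    half = -sum(math.log(p) for p in pvals)  # x/2
    return math.exp(-half) * sum(half ** j / math.factorial(j) for j in range(k))

def tippett_combine(pvals):
    """Tippett's method: combined p-value = 1 - (1 - min p)^k."""
    k = len(pvals)
    return 1.0 - (1.0 - min(pvals)) ** k
```

    For example, two single-phenomenology p-values of 0.04 and 0.10 combine to roughly 0.026 under Fisher's method and 0.078 under Tippett's.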

  16. Memory inhibition as a critical factor preventing creative problem solving.

    PubMed

    Gómez-Ariza, Carlos J; Del Prete, Francesco; Prieto Del Val, Laura; Valle, Tania; Bajo, M Teresa; Fernandez, Angel

    2017-06-01

    The hypothesis that reduced accessibility to relevant information can negatively affect problem solving in a remote associate test (RAT) was tested by using, immediately before the RAT, a retrieval practice procedure to hinder access to target solutions. The results of 2 experiments clearly showed that, relative to baseline, target words that had been competitors during selective retrieval were much less likely to be provided as solutions in the RAT, demonstrating that performance in the problem-solving task was strongly influenced by the predetermined accessibility status of the solutions in memory. Importantly, this was so even when participants were unaware of the relationship between the memory and the problem-solving procedures in the experiments. This finding is consistent with an inhibitory account of retrieval-induced forgetting effects and, more generally, constitutes support for the idea that the activation status of mental representations originating in a given task (e.g., episodic memory) can unwittingly have significant consequences for a different, unrelated task (e.g., problem solving). (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. ON THE SUBJECT OF HYPOTHESIS TESTING

    PubMed Central

    Ugoni, Antony

    1993-01-01

    In this paper, the definition of a statistical hypothesis is discussed, and the considerations which need to be addressed when testing a hypothesis. In particular, the p-value, significance level, and power of a test are reviewed. Finally, the often quoted confidence interval is given a brief introduction. PMID:17989768
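
    The quantities reviewed here (p-value, significance level, power, and confidence interval) can all be illustrated with the one-sample z-test with known sigma; this is a textbook sketch added for illustration, not material from the paper:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

Z_05 = 1.959963984540054  # two-sided critical value at significance level 0.05

def one_sample_z(xbar, mu0, sigma, n):
    """Two-sided one-sample z-test (sigma known): returns the p-value, the
    reject-at-0.05 decision, and a 95% confidence interval for the mean."""
    se = sigma / math.sqrt(n)
    z = (xbar - mu0) / se
    p = 2.0 * (1.0 - norm_cdf(abs(z)))
    ci = (xbar - Z_05 * se, xbar + Z_05 * se)
    return p, p < 0.05, ci

def power(mu_true, mu0, sigma, n):
    """Power: probability of rejecting H0: mu = mu0 when the true mean is mu_true."""
    se = sigma / math.sqrt(n)
    shift = (mu_true - mu0) / se
    return (1.0 - norm_cdf(Z_05 - shift)) + norm_cdf(-Z_05 - shift)
```

    Note that when mu_true equals mu0 the power function returns the significance level itself, which is exactly the relationship between the two concepts the paper reviews.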

  18. Some consequences of using the Horsfall-Barratt scale for hypothesis testing

    USDA-ARS?s Scientific Manuscript database

    Comparing treatment effects by hypothesis testing is a common practice in plant pathology. Nearest percent estimates (NPEs) of disease severity were compared to Horsfall-Barratt (H-B) scale data to explore whether there was an effect of assessment method on hypothesis testing. A simulation model ba...

  19. Hypothesis Testing in Task-Based Interaction

    ERIC Educational Resources Information Center

    Choi, Yujeong; Kilpatrick, Cynthia

    2014-01-01

    Whereas studies show that comprehensible output facilitates L2 learning, hypothesis testing has received little attention in Second Language Acquisition (SLA). Following Shehadeh (2003), we focus on hypothesis testing episodes (HTEs) in which learners initiate repair of their own speech in interaction. In the context of a one-way information gap…

  20. Classroom-Based Strategies to Incorporate Hypothesis Testing in Functional Behavior Assessments

    ERIC Educational Resources Information Center

    Lloyd, Blair P.; Weaver, Emily S.; Staubitz, Johanna L.

    2017-01-01

    When results of descriptive functional behavior assessments are unclear, hypothesis testing can help school teams understand how the classroom environment affects a student's challenging behavior. This article describes two hypothesis testing strategies that can be used in classroom settings: structural analysis and functional analysis. For each…

  1. Hypothesis Testing in the Real World

    ERIC Educational Resources Information Center

    Miller, Jeff

    2017-01-01

    Critics of null hypothesis significance testing suggest that (a) its basic logic is invalid and (b) it addresses a question that is of no interest. In contrast to (a), I argue that the underlying logic of hypothesis testing is actually extremely straightforward and compelling. To substantiate that, I present examples showing that hypothesis…

  2. Roles of Abductive Reasoning and Prior Belief in Children's Generation of Hypotheses about Pendulum Motion

    ERIC Educational Resources Information Center

    Kwon, Yong-Ju; Jeong, Jin-Su; Park, Yun-Bok

    2006-01-01

    The purpose of the present study was to test the hypothesis that student's abductive reasoning skills play an important role in the generation of hypotheses on pendulum motion tasks. To test the hypothesis, a hypothesis-generating test on pendulum motion, and a prior-belief test about pendulum motion were developed and administered to a sample of…

  3. Behavioral phenotyping of mice in pharmacological and toxicological research.

    PubMed

    Karl, Tim; Pabst, Reinhard; von Hörsten, Stephan

    2003-07-01

    The evaluation of behavioral effects is an important component for the in vivo screening of drugs or potentially toxic compounds in mice. Ideally, such screening should be composed of monitoring general health, sensory functions, and motor abilities, right before specific behavioral domains are tested. A rational strategy in the design and procedure of testing as well as an effective composition of different well-established and reproducible behavioral tests can minimize the risk of false positive and false negative results in drug screening. In the present review we describe such basic considerations in planning experiments, selecting strains of mice, and propose groups of behavioral tasks suitable for a reliable detection of differences in specific behavioral domains in mice. Screening of general health and neurophysiologic functions (reflexes, sensory abilities) and motor function (pole test, wire hang test, beam walking, rotarod, accelerod, and footprint) as well as specific hypothesis-guided testing in the behavioral domains of learning and memory (water maze, radial maze, conditioned fear, and avoidance tasks), emotionality (open field, hole board, elevated plus maze, and object exploration), nociception (tail flick, hot plate), psychiatric-like conditions (Porsolt swim test, acoustic startle response, and prepulse inhibition), and aggression (isolation-induced aggression, spontaneous aggression, and territorial aggression) are described in further detail. This review is designed to describe a general approach, which increases reliability of behavioral screening. Furthermore, it provides an overview of a selection of specific procedures suitable for, but not limited to, behavioral screening in pharmacology and toxicology.

  4. Identification of potential neuromotor mechanisms of manual therapy in patients with musculoskeletal disablement: rationale and description of a clinical trial.

    PubMed

    Fisher, Beth E; Davenport, Todd E; Kulig, Kornelia; Wu, Allan D

    2009-05-21

    Many health care practitioners use a variety of hands-on treatments to improve symptoms and disablement in patients with musculoskeletal pathology. Research to date indirectly suggests a potentially broad effect of manual therapy on the neuromotor processing of functional behavior within the supraspinal central nervous system (CNS) in a manner that may be independent of modification at the level of local spinal circuits. However, the effect of treatment speed, as well as the specific mechanism and locus of CNS changes, remain unclear. We developed a placebo-controlled, randomized study to test the hypothesis that manual therapy procedures directed to the talocrural joint in individuals with post-acute ankle sprain induce a change in corticospinal excitability that is relevant to improve the performance of lower extremity functional behavior. This study is designed to identify potential neuromotor changes associated with manual therapy procedures directed to the appendicular skeleton, compare the relative effect of treatment speed on potential neuromotor effects of manual therapy procedures, and determine the behavioral relevance of potential neuromotor effects of manual therapy procedures. http://www.clinicaltrials.gov identifier NCT00847769.

  5. Identification of potential neuromotor mechanisms of manual therapy in patients with musculoskeletal disablement: rationale and description of a clinical trial

    PubMed Central

    Fisher, Beth E; Davenport, Todd E; Kulig, Kornelia; Wu, Allan D

    2009-01-01

    Background Many health care practitioners use a variety of hands-on treatments to improve symptoms and disablement in patients with musculoskeletal pathology. Research to date indirectly suggests a potentially broad effect of manual therapy on the neuromotor processing of functional behavior within the supraspinal central nervous system (CNS) in a manner that may be independent of modification at the level of local spinal circuits. However, the effect of treatment speed, as well as the specific mechanism and locus of CNS changes, remain unclear. Methods/Design We developed a placebo-controlled, randomized study to test the hypothesis that manual therapy procedures directed to the talocrural joint in individuals with post-acute ankle sprain induce a change in corticospinal excitability that is relevant to improve the performance of lower extremity functional behavior. Discussion This study is designed to identify potential neuromotor changes associated with manual therapy procedures directed to the appendicular skeleton, compare the relative effect of treatment speed on potential neuromotor effects of manual therapy procedures, and determine the behavioral relevance of potential neuromotor effects of manual therapy procedures. Trial Registration identifier NCT00847769. PMID:19460169

  6. Number Processing and Heterogeneity of Developmental Dyscalculia: Subtypes With Different Cognitive Profiles and Deficits.

    PubMed

    Skagerlund, Kenny; Träff, Ulf

    2016-01-01

    This study investigated if developmental dyscalculia (DD) in children with different profiles of mathematical deficits has the same or different cognitive origins. The defective approximate number system hypothesis and the access deficit hypothesis were tested using two different groups of children with DD (11-13 years old): a group with arithmetic fact dyscalculia (AFD) and a group with general dyscalculia (GD). Several different aspects of number magnitude processing were assessed in these two groups and compared with age-matched typically achieving children. The GD group displayed weaknesses with both symbolic and nonsymbolic number processing, whereas the AFD group displayed problems only with symbolic number processing. These findings provide evidence that the origins of DD in children with different profiles of mathematical problems diverge. Children with GD have impairment in the innate approximate number system, whereas children with AFD suffer from an access deficit. These findings have implications for researchers' selection procedures when studying dyscalculia, and also for practitioners in the educational setting. © Hammill Institute on Disabilities 2014.

  7. Unbiased estimation in seamless phase II/III trials with unequal treatment effect variances and hypothesis-driven selection rules.

    PubMed

    Robertson, David S; Prevost, A Toby; Bowden, Jack

    2016-09-30

    Seamless phase II/III clinical trials offer an efficient way to select an experimental treatment and perform confirmatory analysis within a single trial. However, combining the data from both stages in the final analysis can induce bias into the estimates of treatment effects. Methods for bias adjustment developed thus far have made restrictive assumptions about the design and selection rules followed. In order to address these shortcomings, we apply recent methodological advances to derive the uniformly minimum variance conditionally unbiased estimator for two-stage seamless phase II/III trials. Our framework allows for the precision of the treatment arm estimates to take arbitrary values, can be utilised for all treatments that are taken forward to phase III and is applicable when the decision to select or drop treatment arms is driven by a multiplicity-adjusted hypothesis testing procedure. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  8. Memory Operations That Support Language Comprehension: Evidence From Verb-Phrase Ellipsis

    PubMed Central

    Martin, Andrea E.; McElree, Brian

    2010-01-01

    Comprehension of verb-phrase ellipsis (VPE) requires reevaluation of recently processed constituents, which often necessitates retrieval of information about the elided constituent from memory. A. E. Martin and B. McElree (2008) argued that representations formed during comprehension are content addressable and that VPE antecedents are retrieved from memory via a cue-dependent direct-access pointer rather than via a search process. This hypothesis was further tested by manipulating the location of interfering material—either before the onset of the antecedent (proactive interference; PI) or intervening between antecedent and ellipsis site (retroactive interference; RI). The speed–accuracy tradeoff procedure was used to measure the time course of VPE processing. The location of the interfering material affected VPE comprehension accuracy: RI conditions engendered lower accuracy than PI conditions. Crucially, location did not affect the speed of processing VPE, which is inconsistent with both forward and backward search mechanisms. The observed time-course profiles are consistent with the hypothesis that VPE antecedents are retrieved via a cue-dependent direct-access operation. PMID:19686017

  9. A categorical recall strategy does not explain animacy effects in episodic memory.

    PubMed

    VanArsdall, Joshua E; Nairne, James S; Pandeirada, Josefa N S; Cogdill, Mindi

    2017-04-01

    Animate stimuli are better remembered than matched inanimate stimuli in free recall. Three experiments tested the hypothesis that animacy advantages are due to a more efficient use of a categorical retrieval cue. Experiment 1 developed an "embedded list" procedure that was designed to disrupt participants' ability to perceive category structure at encoding; a strong animacy effect remained. Experiments 2 and 3 employed animate and inanimate word lists consisting of tightly constrained categories (four-footed animals and furniture). Experiment 2 failed to find an animacy advantage when the categorical structure was readily apparent, but the advantage returned in Experiment 3 when the embedded list procedure was employed using the same target words. These results provide strong evidence against an organizational account of the animacy effect, indicating that the animacy effect in episodic memory is probably due to item-specific factors related to animacy.

  10. Flapless versus Conventional Flapped Dental Implant Surgery: A Meta-Analysis

    PubMed Central

    Chrcanovic, Bruno Ramos; Albrektsson, Tomas; Wennerberg, Ann

    2014-01-01

    The aim of this study was to test the null hypothesis of no difference in the implant failure rates, postoperative infection, and marginal bone loss for patients rehabilitated with dental implants inserted by a flapless surgical procedure versus the open flap technique, against the alternative hypothesis of a difference. An electronic search without time or language restrictions was undertaken in March 2014. Eligibility criteria included clinical human studies, either randomized or not. The search strategy resulted in 23 publications. The I2 statistic was used to express the percentage of the total variation across studies due to heterogeneity. The inverse variance method was used for the random-effects or fixed-effects model, when indicated. The estimates of relative effect were expressed as risk ratio (RR) and mean difference (MD) in millimeters. Sixteen studies were judged to be at high risk of bias, whereas two studies were considered of moderate risk of bias, and five studies of low risk of bias. The funnel plots indicated absence of publication bias for the three outcomes analyzed. The test for overall effect showed that the difference between the procedures (flapless vs. open flap surgery) significantly affects the implant failure rates (P = 0.03), with a RR of 1.75 (95% CI 1.07–2.86). However, a sensitivity analysis revealed differences when studies of high and low risk of bias were pooled separately. Thus, the results must be interpreted carefully. No apparent significant effects of the flapless technique on the occurrence of postoperative infection (P = 0.96; RR 0.96, 95% CI 0.23–4.03) or on the marginal bone loss (P = 0.16; MD −0.07 mm, 95% CI −0.16 to 0.03) were observed. PMID:24950053
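
    The meta-analytic machinery mentioned in the abstract (inverse-variance pooling of risk ratios, Cochran's Q, and the I² heterogeneity statistic) can be sketched as follows; the study values are made up for illustration and are not the data of this review:

```python
import math

# Each tuple is (risk ratio, standard error of log RR) for a hypothetical study.
studies = [(1.50, 0.40), (2.10, 0.55), (1.20, 0.30)]

def pool_fixed(studies):
    """Fixed-effect inverse-variance pooling on the log-RR scale, with a 95% CI
    and the I^2 heterogeneity statistic derived from Cochran's Q."""
    w = [1.0 / se ** 2 for _, se in studies]        # inverse-variance weights
    lr = [math.log(rr) for rr, _ in studies]        # log risk ratios
    wsum = sum(w)
    pooled = sum(wi * li for wi, li in zip(w, lr)) / wsum
    se_pooled = math.sqrt(1.0 / wsum)
    ci = (math.exp(pooled - 1.96 * se_pooled),
          math.exp(pooled + 1.96 * se_pooled))
    q = sum(wi * (li - pooled) ** 2 for wi, li in zip(w, lr))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0   # fraction of variation from heterogeneity
    return math.exp(pooled), ci, i2
```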

  11. Methodology and results of calculating central California surface temperature trends: Evidence of human-induced climate change?

    USGS Publications Warehouse

    Christy, J.R.; Norris, W.B.; Redmond, K.; Gallo, K.P.

    2006-01-01

    A procedure is described to construct time series of regional surface temperatures and is then applied to interior central California stations to test the hypothesis that century-scale trend differences between irrigated and nonirrigated regions may be identified. The procedure requires documentation of every point in time at which a discontinuity in a station record may have occurred through (a) the examination of metadata forms (e.g., station moves) and (b) simple statistical tests. From this, "homogeneous segments" of temperature records for each station are defined. Biases are determined for each segment relative to all others through a method employing mathematical graph theory. The debiased segments are then merged, forming a complete regional time series. Time series of daily maximum and minimum temperatures for stations in the irrigated San Joaquin Valley (Valley) and nearby nonirrigated Sierra Nevada (Sierra) were generated for 1910-2003. Results show that twentieth-century Valley minimum temperatures are warming at a highly significant rate in all seasons, being greatest in summer and fall (> +0.25 °C decade-1). The Valley trend of annual mean temperatures is +0.07 ± 0.07 °C decade-1. Sierra summer and fall minimum temperatures appear to be cooling, but at a less significant rate, while the trend of annual mean Sierra temperatures is an unremarkable -0.02 ± 0.10 °C decade-1. A working hypothesis is that the relative positive trends in Valley minus Sierra minima (> 0.4 °C decade-1 for summer and fall) are related to the altered surface environment brought about by the growth of irrigated agriculture, essentially changing a high-albedo desert into a darker, moister, vegetated plain. © 2006 American Meteorological Society.

  12. Use of lignin extracted from different plant sources as standards in the spectrophotometric acetyl bromide lignin method.

    PubMed

    Fukushima, Romualdo S; Kerley, Monty S

    2011-04-27

    A nongravimetric acetyl bromide lignin (ABL) method was evaluated to quantify lignin concentration in a variety of plant materials. The traditional approach to lignin quantification required extraction of lignin with acidic dioxane and its isolation from each plant sample to construct a standard curve via spectrophotometric analysis. Lignin concentration was then measured in pre-extracted plant cell walls. However, this presented a methodological complexity because extraction and isolation procedures are lengthy and tedious, particularly if there are many samples involved. This work was targeted to simplify lignin quantification. Our hypothesis was that any lignin, regardless of its botanical origin, could be used to construct a standard curve for the purpose of determining lignin concentration in a variety of plants. To test our hypothesis, lignins were isolated from a range of diverse plants and, along with three commercial lignins, standard curves were built and compared among them. Slopes and intercepts derived from these standard curves were close enough to allow utilization of a mean extinction coefficient in the regression equation to estimate lignin concentration in any plant, independent of its botanical origin. Lignin quantification by use of a common regression equation obviates the steps of lignin extraction, isolation, and standard curve construction, which substantially expedites the ABL method. The acetyl bromide lignin method is a fast, convenient analytical procedure that may routinely be used to quantify lignin.
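
    The standard-curve logic, fitting absorbance against known lignin concentrations by least squares and inverting the regression line to estimate an unknown sample, can be sketched as below; the concentration and absorbance values are invented for illustration and are not the paper's measurements:

```python
# Hypothetical standard-curve data: absorbance readings for lignin standards.
conc = [0.2, 0.4, 0.6, 0.8, 1.0]          # standard concentrations (mg/mL)
absorbance = [0.11, 0.22, 0.33, 0.43, 0.55]  # corresponding absorbances

def fit_line(x, y):
    """Ordinary least-squares fit: returns (slope, intercept).
    The slope plays the role of the extinction coefficient in the regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

slope, intercept = fit_line(conc, absorbance)

def predict_conc(a):
    """Invert the standard curve: concentration for a measured absorbance."""
    return (a - intercept) / slope
```

    Using a common (mean) extinction coefficient, as the paper proposes, amounts to reusing one such fitted slope across plant sources instead of rebuilding the curve per sample.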

  13. Making Knowledge Delivery Failsafe: Adding Step Zero in Hypothesis Testing

    ERIC Educational Resources Information Center

    Pan, Xia; Zhou, Qiang

    2010-01-01

    Knowledge of statistical analysis is increasingly important for professionals in modern business. For example, hypothesis testing is one of the critical topics for quality managers and team workers in Six Sigma training programs. Delivering the knowledge of hypothesis testing effectively can be an important step for the incapable learners or…

  14. Arithmetic Procedures are Induced from Examples.

    DTIC Science & Technology

    1985-08-13

    concrete numerals (e.g., coins, Dienes blocks, poker chips, Montessori rods, etc.). Analogy is included as a third hypothesis even though it is not particularly...collections of coins, Dienes blocks, Montessori rods, and so forth. This is a mapping between two kinds of numerals, and not two procedures. Later, this

  15. Multiplicity: discussion points from the Statisticians in the Pharmaceutical Industry multiplicity expert group.

    PubMed

    Phillips, Alan; Fletcher, Chrissie; Atkinson, Gary; Channon, Eddie; Douiri, Abdel; Jaki, Thomas; Maca, Jeff; Morgan, David; Roger, James Henry; Terrill, Paul

    2013-01-01

    In May 2012, the Committee of Health and Medicinal Products issued a concept paper on the need to review the points to consider document on multiplicity issues in clinical trials. In preparation for the release of the updated guidance document, Statisticians in the Pharmaceutical Industry held a one-day expert group meeting in January 2013. Topics debated included multiplicity and the drug development process, the usefulness and limitations of newly developed strategies to deal with multiplicity, multiplicity issues arising from interim decisions and multiregional development, and the need for simultaneous confidence intervals (CIs) corresponding to multiple test procedures. A clear message from the meeting was that multiplicity adjustments need to be considered when the intention is to make a formal statement about efficacy or safety based on hypothesis tests. Statisticians have a key role when designing studies to assess what adjustment really means in the context of the research being conducted. More thought during the planning phase needs to be given to multiplicity adjustments for secondary endpoints given these are increasing in importance in differentiating products in the market place. No consensus was reached on the role of simultaneous CIs in the context of superiority trials. It was argued that unadjusted intervals should be employed as the primary purpose of the intervals is estimation, while the purpose of hypothesis testing is to formally establish an effect. The opposing view was that CIs should correspond to the test decision whenever possible. Copyright © 2013 John Wiley & Sons, Ltd.
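
    A common multiplicity adjustment of the kind debated at the meeting is the Holm step-down procedure; the sketch below is an illustration of that standard method, not a recommendation from the expert group. It computes Holm-adjusted p-values as the running maximum of (m - rank + 1) * p over the sorted p-values, capped at 1:

```python
def holm_adjust(pvals):
    """Holm step-down adjusted p-values. A hypothesis is rejected at level
    alpha exactly when its adjusted p-value is <= alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    adjusted = [0.0] * m
    running = 0.0
    for rank, idx in enumerate(order):  # rank is 0-based
        # multiplier (m - rank) equals m - (rank + 1) + 1 in 1-based terms
        running = max(running, (m - rank) * pvals[idx])
        adjusted[idx] = min(1.0, running)
    return adjusted
```

    For example, raw p-values [0.01, 0.04, 0.03] adjust to roughly [0.03, 0.06, 0.06]: the smallest is tripled, and monotonicity forces the remaining two to share the larger of their step-down values.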

  16. Testing of Hypothesis in Equivalence and Non Inferiority Trials-A Concept.

    PubMed

    Juneja, Atul; Aggarwal, Abha R; Adhikari, Tulsi; Pandey, Arvind

    2016-04-01

    Establishing the appropriate hypothesis is one of the important steps in carrying out statistical tests and analyses, and understanding it is important for interpreting the results. The current communication presents the concept of hypothesis testing in non-inferiority and equivalence trials, where the null hypothesis is just the reverse of what is set up for conventional superiority trials. As in conventional trials, it is the null hypothesis that is sought to be rejected in order to establish what the researcher intends to prove. It is important to note that equivalence or non-inferiority cannot be proved by accepting a null hypothesis of no difference. Hence, establishing the appropriate statistical hypothesis is extremely important for arriving at meaningful conclusions for the set research objectives.
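
    The reversed null hypothesis can be made concrete with a one-sided non-inferiority z-test for two proportions; this is a generic normal-approximation sketch with invented numbers, not the paper's worked example:

```python
import math

def non_inferiority_z(p_new, p_ref, n_new, n_ref, margin):
    """One-sided non-inferiority z-test for proportions (normal approximation).
    H0: p_ref - p_new >= margin  (new treatment worse by at least the margin)
    H1: p_ref - p_new <  margin  (non-inferiority)
    Note the H0/H1 roles are reversed relative to a superiority trial:
    rejecting H0 here establishes non-inferiority."""
    diff = p_ref - p_new
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
    z = (diff - margin) / se
    p = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # lower-tail p-value
    return z, p

# Equal observed response rates with a 10-point margin: H0 can be rejected,
# i.e., the new treatment is shown non-inferior.
z, p = non_inferiority_z(0.80, 0.80, 200, 200, 0.10)
```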

  17. Fast mapping rapidly integrates information into existing memory networks.

    PubMed

    Coutanche, Marc N; Thompson-Schill, Sharon L

    2014-12-01

    Successful learning involves integrating new material into existing memory networks. A learning procedure known as fast mapping (FM), thought to simulate the word-learning environment of children, has recently been linked to distinct neuroanatomical substrates in adults. This idea has suggested the (never-before-tested) hypothesis that FM may promote rapid incorporation into cortical memory networks. We test this hypothesis here in 2 experiments. In our 1st experiment, we introduced 50 participants to 16 unfamiliar animals and names through FM or explicit encoding (EE) and tested participants on the training day, and again after sleep. Learning through EE produced strong declarative memories, without immediate lexical competition, as expected from slow-consolidation models. Learning through FM, however, led to almost immediate lexical competition, which continued to the next day. Additionally, the learned words began to prime related concepts on the day following FM (but not EE) training. In a 2nd experiment, we replicated the lexical integration results and determined that presenting an already-known item during learning was crucial for rapid integration through FM. The findings presented here indicate that learned items can be integrated into cortical memory networks at an accelerated rate through fast mapping. The retrieval of a related known concept, in order to infer the target of the FM question, is critical for this effect. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  18. Pre-Mission Input Requirements to Enable Successful Sample Collection by A Remote Field/EVA Team

    NASA Technical Reports Server (NTRS)

    Cohen, B. A.; Lim, D. S. S.; Young, K. E.; Brunner, A.; Elphic, R. E.; Horne, A.; Kerrigan, M. C.; Osinski, G. R.; Skok, J. R.; Squyres, S. W.; hide

    2016-01-01

    The FINESSE (Field Investigations to Enable Solar System Science and Exploration) team, part of the Solar System Exploration Virtual Institute (SSERVI), is a field-based research program aimed at generating strategic knowledge in preparation for human and robotic exploration of the Moon, near-Earth asteroids, Phobos and Deimos, and beyond. In contrast to other technology-driven NASA analog studies, the FINESSE WCIS activity is science-focused and, moreover, is sampling-focused with the explicit intent to return the best samples for geochronology studies in the laboratory. We used the FINESSE field excursion to the West Clearwater Lake Impact structure (WCIS) as an opportunity to test factors related to sampling decisions. We examined the in situ sample characterization and real-time decision-making process of the astronauts, with a guiding hypothesis that pre-mission training that included detailed background information on the analytical fate of a sample would better enable future astronauts to select samples that would best meet science requirements. We conducted three tests of this hypothesis over several days in the field. Our investigation was designed to document processes, tools and procedures for crew sampling of planetary targets. This was not meant to be a blind, controlled test of crew efficacy, but rather an effort to explicitly recognize the relevant variables that enter into sampling protocol and to be able to develop recommendations for crew and backroom training in future endeavors.

  19. Accreditation status and geographic location of outpatient vascular testing facilities among Medicare beneficiaries: the VALUE (Vascular Accreditation, Location & Utilization Evaluation) study.

    PubMed

    Rundek, Tatjana; Brown, Scott C; Wang, Kefeng; Dong, Chuanhui; Farrell, Mary Beth; Heller, Gary V; Gornik, Heather L; Hutchisson, Marge; Needleman, Laurence; Benenati, James F; Jaff, Michael R; Meier, George H; Perese, Susana; Bendick, Phillip; Hamburg, Naomi M; Lohr, Joann M; LaPerna, Lucy; Leers, Steven A; Lilly, Michael P; Tegeler, Charles; Alexandrov, Andrei V; Katanick, Sandra L

    2014-10-01

    There is limited information on the accreditation status and geographic distribution of vascular testing facilities in the US. The Centers for Medicare & Medicaid Services (CMS) provide reimbursement to facilities regardless of accreditation status. The aims were to: (1) identify the proportion of Intersocietal Accreditation Commission (IAC) accredited vascular testing facilities in a 5% random national sample of Medicare beneficiaries receiving outpatient vascular testing services; (2) describe the geographic distribution of these facilities. The VALUE (Vascular Accreditation, Location & Utilization Evaluation) Study examines the proportion of IAC accredited facilities providing vascular testing procedures nationally, and the geographic distribution and utilization of these facilities. The data set containing all facilities that billed Medicare for outpatient vascular testing services in 2011 (5% CMS Outpatient Limited Data Set (LDS) file) was examined, and locations of outpatient vascular testing facilities were obtained from the 2011 CMS/Medicare Provider of Services (POS) file. Of 13,462 total vascular testing facilities billing Medicare for vascular testing procedures in a 5% random Outpatient LDS for the US in 2011, 13% (n=1730) of facilities were IAC accredited. The percentage of IAC accredited vascular testing facilities in the LDS file varied significantly by US region, p<0.0001: 26%, 12%, 11%, and 7% for the Northeast, South, Midwest, and Western regions, respectively. Findings suggest that the proportion of outpatient vascular testing facilities that are IAC accredited is low and varies by region. Increasing the number of accredited vascular testing facilities to improve test quality is a hypothesis that should be tested in future research. © The Author(s) 2014.

  20. Journal news

    USGS Publications Warehouse

    Conroy, M.J.; Samuel, M.D.; White, Joanne C.

    1995-01-01

    Statistical power (and conversely, Type II error) is often ignored by biologists. Power is important to consider in the design of studies, to ensure that sufficient resources are allocated to address a hypothesis under examination. Determining appropriate sample size when designing experiments or calculating power for a statistical test requires an investigator to consider the importance of making incorrect conclusions about the experimental hypothesis and the biological importance of the alternative hypothesis (or the biological effect size researchers are attempting to measure). Poorly designed studies frequently provide results that are at best equivocal, and do little to advance science or assist in decision making. Authors of completed studies that fail to reject Ho should consider power and the related probability of a Type II error in the interpretation of results, particularly when implicit or explicit acceptance of Ho is used to support a biological hypothesis or management decision. Investigators must consider the biological question they wish to answer (Tacha et al. 1982) and assess power on the basis of biologically significant differences (Taylor and Gerrodette 1993). Power calculations are somewhat subjective, because the author must specify either f or the minimum difference that is biologically important. Biologists may have different ideas about what values are appropriate. While determining biological significance is of central importance in power analysis, it is also an issue of importance in wildlife science. Procedures, references, and computer software to compute power are accessible; therefore, authors should consider power. We welcome comments or suggestions on this subject.
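
    The power considerations above can be made concrete with a normal-approximation power calculation for a two-sample comparison of means. This is a sketch only: the function name and the numbers are hypothetical, and for small samples a t-based calculation would be more accurate.

```python
from statistics import NormalDist

def power_two_sample_means(delta, sigma, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test to detect a
    mean difference `delta` (the biologically important effect size),
    with common standard deviation `sigma` and `n_per_group` subjects
    per group.  Normal approximation, for illustration."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    ncp = delta / (sigma * (2 / n_per_group) ** 0.5)  # noncentrality
    return 1 - nd.cdf(z_crit - ncp) + nd.cdf(-z_crit - ncp)

# Power to detect a 0.5-SD difference with 30 subjects per group
power = power_two_sample_means(delta=0.5, sigma=1.0, n_per_group=30)
```

    A run like this makes the article's point directly: with 30 per group the study has roughly a coin-flip chance of detecting a half-SD effect, so "failure to reject Ho" would say little about the biological hypothesis.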

  1. MATERNAL REPRESENTATIONS AND INFANT ATTACHMENT: AN EXAMINATION OF THE PROTOTYPE HYPOTHESIS.

    PubMed

    Madigan, Sheri; Hawkins, Erinn; Plamondon, Andre; Moran, Greg; Benoit, Diane

    2015-01-01

    The prototype hypothesis suggests that attachment representations derived in infancy continue to influence subsequent relationships over the life span, including those formed with one's own children. In the current study, we test the prototype hypothesis by exploring (a) whether child-specific representations following actual experience in interaction with a specific child impact caregiver-child attachment over and above the prenatal forecast of that representation and (b) whether maternal attachment representations exert their influence on infant attachment via the more child-specific representation of that relationship. In a longitudinal study of 84 mother-infant dyads, mothers' representations of their attachment history were obtained prenatally with the Adult Attachment Interview (AAI; M. Main, R. Goldwyn, & E. Hesse, 2002), representations of relationship with a specific child were assessed with the Working Model of the Child Interview (WMCI; C.H. Zeanah, D. Benoit, & L. Barton, 1986), collected both prenatally and again at infant age 11 months, and infant attachment was assessed in the Strange Situation Procedure (M.D.S. Ainsworth, M.C. Blehar, E. Waters, & S. Wall, 1978) when infants were 11 months of age. Consistent with the prototype hypothesis, considerable correspondence was found between mothers' AAI and WMCI classifications. A mediation analysis showed that WMCI fully accounted for the association between AAI and infant attachment. Postnatal WMCI measured at 11 months' postpartum did not add to the prediction of infant attachment, over and above that explained by the prenatal WMCI. Implications for these findings are discussed. © 2015 Michigan Association for Infant Mental Health.

  2. Sequential Probability Ratio Test for Spacecraft Collision Avoidance Maneuver Decisions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2013-01-01

    A document discusses sequential probability ratio tests that explicitly allow decision-makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming, highly elliptical orbit formation flying mission.
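
    The core idea of a sequential probability ratio test with explicit false-alarm and missed-detection risks can be sketched for a simple Bernoulli hypothesis pair. This is an illustrative toy with hypothetical names and rates, not the constrained filter-bank formulation the document describes.

```python
from math import log

def sprt_bernoulli(observations, p0, p1, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test for a Bernoulli rate.
    H0: p = p0 versus H1: p = p1, with target false-alarm rate `alpha`
    and missed-detection rate `beta`.  Stops as soon as the cumulative
    log-likelihood ratio crosses either decision threshold."""
    upper = log((1 - beta) / alpha)   # accept H1 at or above this
    lower = log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for k, x in enumerate(observations, start=1):
        llr += log(p1 / p0) if x else log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", k
        if llr <= lower:
            return "accept H0", k
    return "continue sampling", len(observations)

# A run of detections quickly drives the test to the alternative
decision, n_used = sprt_bernoulli([1] * 20, p0=0.1, p1=0.5)
```

    The appeal for maneuver decisions is visible even in this toy: the risks alpha and beta are design inputs that set the thresholds, rather than by-products of a fixed probability cutoff.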

  3. Approaches to informed consent for hypothesis-testing and hypothesis-generating clinical genomics research.

    PubMed

    Facio, Flavia M; Sapp, Julie C; Linn, Amy; Biesecker, Leslie G

    2012-10-10

    Massively-parallel sequencing (MPS) technologies create challenges for informed consent of research participants given the enormous scale of the data and the wide range of potential results. We propose that the consent process in these studies be based on whether they use MPS to test a hypothesis or to generate hypotheses. To demonstrate the differences in these approaches to informed consent, we describe the consent processes for two MPS studies. The purpose of our hypothesis-testing study is to elucidate the etiology of rare phenotypes using MPS. The purpose of our hypothesis-generating study is to test the feasibility of using MPS to generate clinical hypotheses, and to approach the return of results as an experimental manipulation. Issues to consider in both designs include: volume and nature of the potential results, primary versus secondary results, return of individual results, duty to warn, length of interaction, target population, and privacy and confidentiality. The categorization of MPS studies as hypothesis-testing versus hypothesis-generating can help to clarify the issue of so-called incidental or secondary results for the consent process, and aid the communication of the research goals to study participants.

  4. Upper extremity palsy following cervical decompression surgery results from a transient spinal cord lesion.

    PubMed

    Hasegawa, Kazuhiro; Homma, Takao; Chiba, Yoshikazu

    2007-03-15

    Retrospective analysis. To test the hypothesis that spinal cord lesions cause postoperative upper extremity palsy. Postoperative paresis, so-called C5 palsy, of the upper extremities is a common complication of cervical surgery. Although there are several hypotheses regarding the etiology of C5 palsy, convincing evidence with a sufficient study population, statistical analysis, and clear radiographic images illustrating the nerve root impediment has not been presented. We hypothesized that the palsy is caused by spinal cord damage following the surgical decompression performed for chronic compressive cervical disorders. The study population comprised 857 patients with chronic cervical cord compressive lesions who underwent decompression surgery. Anterior decompression and fusion was performed in 424 cases, laminoplasty in 345 cases, and laminectomy in 88 cases. Neurologic characteristics of patients with postoperative upper extremity palsy were investigated. Relationships between the palsy, and patient sex, age, diagnosis, procedure, area of decompression, and preoperative Japanese Orthopaedic Association score were evaluated with a risk factor analysis. Radiographic examinations were performed for all palsy cases. Postoperative upper extremity palsy occurred in 49 cases (5.7%). The common features of the palsy cases were solely chronic compressive spinal cord disorders and decompression surgery to the cord. There was no difference in the incidence of palsy among the procedures. Cervical segments beyond C5 were often disturbed with frequent multiple segment involvement. There was a tendency for spontaneous improvement of the palsy. Age, decompression area (anterior procedure), and diagnosis (ossification of the posterior longitudinal ligament) are the highest risk factors of the palsy. The results of the present study support our hypothesis that the etiology of the palsy is a transient disturbance of the spinal cord following a decompression procedure. 
It appears to be caused by reperfusion after decompression of a chronic compressive lesion of the cervical cord. We recommend that physicians inform patients and surgeons of the potential risk of a spinal cord deficit after cervical decompression surgery.

  5. DFLOWZ: A free program to evaluate the area potentially inundated by a debris flow

    NASA Astrophysics Data System (ADS)

    Berti, M.; Simoni, A.

    2014-06-01

    The transport and deposition mechanisms of debris flows are still poorly understood due to the complexity of the interactions governing the behavior of water-sediment mixtures. Empirical-statistical methods can therefore be used, instead of more sophisticated numerical methods, to predict the depositional behavior of these highly dangerous gravitational movements. We use widely accepted semi-empirical scaling relations and propose an automated procedure (DFLOWZ) to estimate the area potentially inundated by a debris flow event. Besides a digital elevation model (DEM), the procedure has only two input requirements: the debris flow volume and the possible flow-path. The procedure is implemented in Matlab and a Graphical User Interface helps to visualize initial conditions, flow propagation and final results. Different hypotheses about the depositional behavior of an event can be tested together with the possible effect of simple remedial measures. Uncertainties associated with scaling relations can be treated and their impact on results evaluated. Our freeware application aims to facilitate and speed up the process of susceptibility mapping. We discuss limits and advantages of the method in order to inform inexperienced users.

  6. Hypothesis Testing, "p" Values, Confidence Intervals, Measures of Effect Size, and Bayesian Methods in Light of Modern Robust Techniques

    ERIC Educational Resources Information Center

    Wilcox, Rand R.; Serang, Sarfaraz

    2017-01-01

    The article provides perspectives on p values, null hypothesis testing, and alternative techniques in light of modern robust statistical methods. Null hypothesis testing and "p" values can provide useful information provided they are interpreted in a sound manner, which includes taking into account insights and advances that have…

  7. Hypothesis Testing Using Spatially Dependent Heavy Tailed Multisensor Data

    DTIC Science & Technology

    2014-12-01

    Office of Research, 113 Bowne Hall, Syracuse, NY 13244-1200. Report... consistent with the null hypothesis of linearity and can be used to estimate the distribution of a test statistic that can discriminate between the null... Test for nonlinearity. Histogram is generated using the surrogate data. The statistic of the original time series is represented by the solid line.

  8. Ethanol induces impulsive-like responding in a delay-of-reward operant choice procedure: impulsivity predicts autoshaping.

    PubMed

    Tomie, A; Aguado, A S; Pohorecky, L A; Benjamin, D

    1998-10-01

    Autoshaping conditioned responses (CRs) are reflexive and targeted motor responses expressed as a result of experience with reward. To evaluate the hypothesis that autoshaping may be a form of impulsive responding, within-subjects correlations between performance on autoshaping and impulsivity tasks were assessed in 15 Long-Evans hooded rats. Autoshaping procedures [insertion of retractable lever conditioned stimulus (CS) followed by the response-independent delivery of food (US)] were followed by testing for impulsive-like responding in a two-choice lever-press operant delay-of-reward procedure (immediate small food reward versus delayed large food reward). Delay-of-reward functions revealed two distinct subject populations. Subjects in the Sensitive group (n=7) were more impulsive-like, increasing immediate reward choices at longer delays for large reward, while those in the Insensitive group (n=8) responded predominantly on only one lever. During the prior autoshaping phase, the Sensitive group had performed more autoshaping CRs, and correlations revealed that impulsive subjects acquired the autoshaping CR in fewer trials. In the Sensitive group, acute injections of ethanol (0, 0.25, 0.50, 1.00, 1.50 g/kg) given immediately before delay-of-reward sessions yielded an inverted U-shaped dose-response curve with increased impulsivity induced by the 0.25, 0.50, and 1.00 g/kg doses of ethanol, while choice strategy of the Insensitive group was not influenced by ethanol dose. Ethanol induced impulsive-like responding only in rats that were flexible in their response strategy (Sensitive group), and this group also performed more autoshaping CRs. Data support the hypothesis that autoshaping and impulsivity are linked.

  9. Parallel deterioration to language processing in a bilingual speaker.

    PubMed

    Druks, Judit; Weekes, Brendan Stuart

    2013-01-01

    The convergence hypothesis [Green, D. W. (2003). The neural basis of the lexicon and the grammar in L2 acquisition: The convergence hypothesis. In R. van Hout, A. Hulk, F. Kuiken, & R. Towell (Eds.), The interface between syntax and the lexicon in second language acquisition (pp. 197-218). Amsterdam: John Benjamins] assumes that the neural substrates of language representations are shared between the languages of a bilingual speaker. One prediction of this hypothesis is that neurodegenerative disease should produce parallel deterioration to lexical and grammatical processing in bilingual aphasia. We tested this prediction with a late bilingual Hungarian (first language, L1)-English (second language, L2) speaker J.B. who had nonfluent progressive aphasia (NFPA). J.B. had acquired L2 in adolescence but was premorbidly proficient and used English as his dominant language throughout adult life. Our investigations showed comparable deterioration to lexical and grammatical knowledge in both languages during a one-year period. Parallel deterioration to language processing in a bilingual speaker with NFPA challenges the assumption that L1 and L2 rely on different brain mechanisms as assumed in some theories of bilingual language processing [Ullman, M. T. (2001). The neural basis of lexicon and grammar in first and second language: The declarative/procedural model. Bilingualism: Language and Cognition, 4(1), 105-122].

  10. The role of responsibility and fear of guilt in hypothesis-testing.

    PubMed

    Mancini, Francesco; Gangemi, Amelia

    2006-12-01

    Recent theories argue that both perceived responsibility and fear of guilt increase obsessive-like behaviours. We propose that hypothesis-testing might account for this effect. Both perceived responsibility and fear of guilt would influence subjects' hypothesis-testing, by inducing a prudential style. This style implies focusing on and confirming the worst hypothesis, and reiterating the testing process. In our experiment, we manipulated the responsibility and fear of guilt of 236 normal volunteers who executed a deductive task. The results show that perceived responsibility is the main factor that influenced individuals' hypothesis-testing. Fear of guilt, however, has a significant additive effect. Guilt-fearing participants preferred to carry on with the diagnostic process, even when faced with initial favourable evidence, whereas participants in the responsibility condition only did so when confronted with unfavourable evidence. Implications for the understanding of obsessive-compulsive disorder (OCD) are discussed.

  11. Stratified exact tests for the weak causal null hypothesis in randomized trials with a binary outcome.

    PubMed

    Chiba, Yasutaka

    2017-09-01

    Fisher's exact test is commonly used to compare two groups when the outcome is binary in randomized trials. In the context of causal inference, this test explores the sharp causal null hypothesis (i.e. the causal effect of treatment is the same for all subjects), but not the weak causal null hypothesis (i.e. the causal risks are the same in the two groups). Therefore, in general, rejection of the null hypothesis by Fisher's exact test does not mean that the causal risk difference is not zero. Recently, Chiba (Journal of Biometrics and Biostatistics 2015; 6: 244) developed a new exact test for the weak causal null hypothesis when the outcome is binary in randomized trials; the new test is not based on any large sample theory and does not require any assumption. In this paper, we extend the new test; we create a version of the test applicable to a stratified analysis. The stratified exact test that we propose is general in nature and can be used in several approaches toward the estimation of treatment effects after adjusting for stratification factors. The stratified Fisher's exact test of Jung (Biometrical Journal 2014; 56: 129-140) tests the sharp causal null hypothesis. This test applies a crude estimator of the treatment effect and can be regarded as a special case of our proposed exact test. Our proposed stratified exact test can be straightforwardly extended to analysis of noninferiority trials and to construct the associated confidence interval. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
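
    The unstratified building block discussed above, Fisher's exact test for a single 2x2 table, can be sketched from first principles via the hypergeometric distribution. This is a minimal one-sided version with hypothetical table values; the stratified test in the paper combines such tables across strata.

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for a 2x2 table
        [[a, b],
         [c, d]]
    with both margins fixed.  The p-value is the probability, under
    independence, of a table at least as extreme (a' >= a)."""
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, col1)
    p = 0.0
    for x in range(a, min(row1, col1) + 1):
        if col1 - x <= n - row1:  # skip infeasible tables
            p += comb(row1, x) * comb(n - row1, col1 - x) / denom
    return p

p = fisher_exact_one_sided(8, 2, 1, 5)
```

    Because this conditional null fixes every individual outcome pattern, it corresponds to the sharp causal null the paper describes, which is precisely why rejecting it does not by itself rule out a zero causal risk difference.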

  12. Extensive Training Is Insufficient to Produce The Work-Ethic Effect In Pigeons

    PubMed Central

    Vasconcelos, Marco; Urcuioli, Peter J

    2009-01-01

    Zentall and Singer (2007a) hypothesized that our failure to replicate the work-ethic effect in pigeons (Vasconcelos, Urcuioli, & Lionello-DeNolf, 2007) was due to insufficient overtraining following acquisition of the high- and low-effort discriminations. We tested this hypothesis using the original work-ethic procedure (Experiment 1) and one similar to that used with starlings (Experiment 2) by providing at least 60 overtraining sessions. Despite this extensive overtraining, neither experiment revealed a significant preference for stimuli obtained after high effort. Together with other findings, these data support our contention that pigeons do not reliably show a work-ethic effect. PMID:19230517

  13. A statistical test to show negligible trend

    Treesearch

    Philip M. Dixon; Joseph H.K. Pechmann

    2005-01-01

    The usual statistical tests of trend are inappropriate for demonstrating the absence of trend. This is because failure to reject the null hypothesis of no trend does not prove that null hypothesis. The appropriate statistical method is based on an equivalence test. The null hypothesis is that the trend is not zero, i.e., outside an a priori specified equivalence region...
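
    The equivalence-test logic described above can be sketched as a two one-sided tests (TOST) procedure applied to a trend estimate. This is a minimal sketch under a normal approximation; the slope, standard error, and equivalence bound below are hypothetical, and a t-based version would be appropriate for small samples.

```python
from statistics import NormalDist

def negligible_trend_tost(slope, se, bound, alpha=0.05):
    """Two one-sided tests (TOST) for negligible trend.
    H0: |trend| >= bound (non-negligible) vs H1: |trend| < bound,
    where `bound` defines the a priori equivalence region and `slope`
    and `se` are the estimated trend and its standard error."""
    nd = NormalDist()
    p_lower = 1 - nd.cdf((slope + bound) / se)  # tests trend <= -bound
    p_upper = nd.cdf((slope - bound) / se)      # tests trend >= +bound
    p = max(p_lower, p_upper)                   # TOST p-value
    return p, p < alpha  # True means trend shown negligible

p, negligible = negligible_trend_tost(slope=0.02, se=0.05, bound=0.15)
```

    Here the burden of proof is reversed, as the abstract requires: a small p-value demonstrates that the trend lies inside the equivalence region, whereas a conventional trend test could never demonstrate absence of trend.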

  14. Unadjusted Bivariate Two-Group Comparisons: When Simpler is Better.

    PubMed

    Vetter, Thomas R; Mascha, Edward J

    2018-01-01

    Hypothesis testing involves posing both a null hypothesis and an alternative hypothesis. This basic statistical tutorial discusses the appropriate use, including their so-called assumptions, of the common unadjusted bivariate tests for hypothesis testing and thus comparing study sample data for a difference or association. The appropriate choice of a statistical test is predicated on the type of data being analyzed and compared. The unpaired or independent samples t test is used to test the null hypothesis that the 2 population means are equal, against the alternative hypothesis that the 2 population means are not equal. The unpaired t test is intended for comparing independent continuous (interval or ratio) data from 2 study groups. A common mistake is to apply several unpaired t tests when comparing data from 3 or more study groups. In this situation, an analysis of variance with post hoc (posttest) intragroup comparisons should instead be applied. Another common mistake is to apply a series of unpaired t tests when comparing sequentially collected data from 2 study groups. In this situation, a repeated-measures analysis of variance, with tests for group-by-time interaction, and post hoc comparisons, as appropriate, should instead be applied in analyzing data from sequential collection points. The paired t test is used to assess the difference in the means of 2 study groups when the sample observations have been obtained in pairs, often before and after an intervention in each study subject. The Pearson chi-square test is widely used to test the null hypothesis that 2 unpaired categorical variables, each with 2 or more nominal levels (values), are independent of each other. When the null hypothesis is rejected, one concludes that there is a probable association between the 2 unpaired categorical variables. When comparing 2 groups on an ordinal or nonnormally distributed continuous outcome variable, the 2-sample t test is usually not appropriate. 
The Wilcoxon-Mann-Whitney test is instead preferred. When making paired comparisons on data that are ordinal, or continuous but nonnormally distributed, the Wilcoxon signed-rank test can be used. In analyzing their data, researchers should consider the continued merits of these simple yet equally valid unadjusted bivariate statistical tests. However, the appropriate use of an unadjusted bivariate test still requires a solid understanding of its utility, assumptions (requirements), and limitations. This understanding will mitigate the risk of misleading findings, interpretations, and conclusions.
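
    For the categorical case above, the Pearson chi-square test for a 2x2 table can be sketched directly, using the identity that a chi-square variate with 1 degree of freedom is the square of a standard normal. The counts are hypothetical; for small expected counts (commonly below 5 per cell) Fisher's exact test would be preferred.

```python
from statistics import NormalDist

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square test of independence for a 2x2 table
    [[a, b], [c, d]].  Expected counts come from the table margins
    under independence; the p-value uses the 1-df chi-square tail,
    computed via the standard normal CDF."""
    n = a + b + c + d
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    stat = sum((o - e) ** 2 / e for o, e in zip([a, b, c, d], expected))
    p_value = 2 * (1 - NormalDist().cdf(stat ** 0.5))
    return stat, p_value

stat, p = chi_square_2x2(30, 10, 15, 25)
```

    As with the tutorial's other unadjusted bivariate tests, the computation itself is simple; the work lies in checking that the data type and assumptions make it the right test to run.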

  15. Strain rates, stress markers and earthquake clustering (Invited)

    NASA Astrophysics Data System (ADS)

    Fry, B.; Gerstenberger, M.; Abercrombie, R. E.; Reyners, M.; Eberhart-Phillips, D. M.

    2013-12-01

    The 2010-present Canterbury earthquakes comprise a well-recorded sequence in a relatively low strain-rate shallow crustal region. We present new scientific results to test the hypothesis that: Earthquake sequences in low-strain rate areas experience high stress drop events, low post-seismic relaxation, and accentuated seismic clustering. This hypothesis is based on a physical description of the aftershock process in which the spatial distribution of stress accumulation and stress transfer are controlled by fault strength and orientation. Following large crustal earthquakes, time-dependent forecasts are often developed by fitting parameters defined by Omori's aftershock decay law. In high-strain rate areas, simple forecast models utilizing a single p-value fit observed aftershock sequences well. In low-strain rate areas such as Canterbury, assumptions of simple Omori decay may not be sufficient to capture the clustering (sub-sequence) nature exhibited by the punctuated rise in activity following significant child events. In Canterbury, the moment release is more clustered than in more typical Omori sequences. The individual earthquakes in these clusters also exhibit somewhat higher stress drops than in the average crustal sequence in high-strain rate regions, suggesting the earthquakes occur on strong Andersonian-oriented faults, possibly juvenile or well-healed. We use the spectral ratio procedure outlined in (Viegas et al., 2010) to determine corner frequencies and Madariaga stress-drop values for over 800 events in the sequence. Furthermore, we will discuss the relevance of tomographic results of Reyners and Eberhart-Phillips (2013) documenting post-seismic stress-driven fluid processes following the three largest events in the sequence as well as anisotropic patterns in surface wave tomography (Fry et al., 2013). 
These tomographic studies are both compatible with the hypothesis, providing strong evidence for the presence of widespread and hydrated regional upper crustal cracking parallel to sub-parallel to the dominant transverse failure plane in the sequence. Joint interpretation of the three separate datasets provide a positive first attempt at testing our fundamental hypothesis.

  16. Longitudinal Dimensionality of Adolescent Psychopathology: Testing the Differentiation Hypothesis

    ERIC Educational Resources Information Center

    Sterba, Sonya K.; Copeland, William; Egger, Helen L.; Costello, E. Jane; Erkanli, Alaattin; Angold, Adrian

    2010-01-01

    Background: The differentiation hypothesis posits that the underlying liability distribution for psychopathology is of low dimensionality in young children, inflating diagnostic comorbidity rates, but increases in dimensionality with age as latent syndromes become less correlated. This hypothesis has not been adequately tested with longitudinal…

  17. A large scale test of the gaming-enhancement hypothesis.

    PubMed

    Przybylski, Andrew K; Wang, John C

    2016-01-01

    A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people's gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses were compared. Results provided no substantive evidence supporting the idea that having preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support of the null hypothesis over what was predicted. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work.
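
    The kind of Bayesian comparison described above, weighing evidence for a null against an alternative, can be illustrated with an exact Bayes factor for binomial data. This is a toy analogue with hypothetical numbers, not the study's actual model; BF01 above 1 favours the null.

```python
from math import comb

def bayes_factor_binomial(k, n, p_null=0.5):
    """Bayes factor BF01 for k successes in n Bernoulli trials,
    comparing H0: p = p_null against H1: p ~ Uniform(0, 1).
    Under the uniform prior the marginal likelihood of any k is
    1/(n+1), a Beta-function identity, so BF01 is exact."""
    like_null = comb(n, k) * p_null ** k * (1 - p_null) ** (n - k)
    like_alt = 1 / (n + 1)
    return like_null / like_alt

bf01 = bayes_factor_binomial(52, 100)  # near-chance data
```

    Unlike a p-value, a Bayes factor like this can quantify support *for* the null, which is what allowed the study to report evidence ranging from equivocal to very strong against the gaming-enhancement prediction.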

  18. Using impedance cardiography to assess left ventricular systolic function via postural change in patients with heart failure.

    PubMed

    DeMarzo, Arthur P; Calvin, James E; Kelly, Russell F; Stamos, Thomas D

    2005-01-01

    For the diagnosis and management of heart failure, it would be useful to have a simple point-of-care test for assessing ventricular function that could be performed by a nurse. An impedance cardiography (ICG) parameter called systolic amplitude (SA) can serve as an indicator of left ventricular systolic function (LVSF). This study tested the hypothesis that patients with normal LVSF should have a significant increase in SA in response to an increase in end-diastolic volume caused by postural change from sitting upright to supine, while patients with depressed LVSF associated with heart failure should have a minimal increase or a decrease in SA from upright to supine. ICG data were obtained in 12 patients without heart disease and with normal LVSF and 18 patients with clinically diagnosed heart failure. Consistent with the hypothesis, patients with normal LVSF had a significant increase in SA from upright to supine, whereas heart failure patients had a minimal increase or a decrease in SA from upright to supine. This ICG procedure may be useful for monitoring the trend of patient response to titration of beta blockers and other medications. ICG potentially could be used to detect worsening LVSF and provide a means of measurement for adjusting treatment.

  19. An introduction to medical statistics for health care professionals: Hypothesis tests and estimation.

    PubMed

    Thomas, Elaine

    2005-01-01

    This article is the second in a series of three that will give health care professionals (HCPs) a sound introduction to medical statistics (Thomas, 2004). The objective of research is to find out about the population at large. However, it is generally not possible to study the whole of the population and research questions are addressed in an appropriate study sample. The next crucial step is then to use the information from the sample of individuals to make statements about the wider population of like individuals. This procedure of drawing conclusions about the population, based on study data, is known as inferential statistics. The findings from the study give us the best estimate of what is true for the relevant population, given the sample is representative of the population. It is important to consider how accurate this best estimate is, based on a single sample, when compared to the unknown population figure. Any difference between the observed sample result and the population characteristic is termed the sampling error. This article will cover the two main forms of statistical inference (hypothesis tests and estimation) along with issues that need to be addressed when considering the implications of the study results. Copyright (c) 2005 Whurr Publishers Ltd.

  20. Bayesian adaptive phase II screening design for combination trials

    PubMed Central

    Cai, Chunyan; Yuan, Ying; Johnson, Valen E

    2013-01-01

    Background: Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease. To more efficiently handle the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to simultaneously select among possible treatment combinations involving multiple agents. Methods: Our design is based on formulating the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination is equated to a single hypothesis. During the trial conduct, we use the current values of the posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Results: Simulation studies show that the proposed design substantially outperforms the conventional multiarm balanced factorial trial design. The proposed design yields a significantly higher probability for selecting the best treatment while allocating substantially more patients to efficacious treatments. Limitations: The proposed design is most appropriate for trials that combine multiple agents and screen for efficacious combinations to be investigated further. Conclusions: The proposed Bayesian adaptive phase II screening design substantially outperformed the conventional complete factorial design. Our design allocates more patients to better treatments while providing higher power to identify the best treatment at the end of the trial. PMID:23359875
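
    The adaptive step described in the Methods, allocating patients using the current posterior probabilities of the hypotheses, can be sketched with a simple Beta-Bernoulli model (a minimal illustration with invented arm counts, not the trial's actual design):

```python
import numpy as np

def prob_best(successes, failures, ndraw=10_000, rng=None):
    """Monte-Carlo posterior probability that each arm has the highest
    response rate, under independent Beta(1, 1) priors."""
    rng = rng if rng is not None else np.random.default_rng(0)
    a = 1 + np.asarray(successes)[:, None]
    b = 1 + np.asarray(failures)[:, None]
    draws = rng.beta(a, b, size=(len(successes), ndraw))  # posterior samples per arm
    best = np.argmax(draws, axis=0)                       # which arm wins each draw
    return np.bincount(best, minlength=len(successes)) / ndraw

# Three hypothetical combination arms after 60 patients.
weights = prob_best(successes=[8, 12, 15], failures=[12, 8, 5])
print(weights)  # could serve as allocation weights for the next cohort
```

    Arms with higher posterior probability of being best receive proportionally more of the next cohort, which is the intuition behind this class of adaptive allocation rules.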

  1. Null but not void: considerations for hypothesis testing.

    PubMed

    Shaw, Pamela A; Proschan, Michael A

    2013-01-30

    Standard statistical theory teaches us that once the null and alternative hypotheses have been defined for a parameter, the choice of the statistical test is clear. Standard theory does not teach us how to choose the null or alternative hypothesis appropriate to the scientific question of interest. Neither does it tell us that in some cases, depending on which alternatives are realistic, we may want to define our null hypothesis differently. Problems in statistical practice are frequently not as pristinely summarized as the classic theory in our textbooks. In this article, we present examples in statistical hypothesis testing in which seemingly simple choices are in fact rich with nuance that, when given full consideration, make the choice of the right hypothesis test much less straightforward. Published 2012. This article is a US Government work and is in the public domain in the USA.

  2. Effect of climate-related mass extinctions on escalation in molluscs

    NASA Astrophysics Data System (ADS)

    Hansen, Thor A.; Kelley, Patricia H.; Melland, Vicky D.; Graham, Scott E.

    1999-12-01

    We test the hypothesis that escalated species (e.g., those with antipredatory adaptations such as heavy armor) are more vulnerable to extinctions caused by changes in climate. If this hypothesis is valid, recovery faunas after climate-related extinctions should include significantly fewer species with escalated shell characteristics, and escalated species should undergo greater rates of extinction than nonescalated species. This hypothesis is tested for the Cretaceous-Paleocene, Eocene-Oligocene, middle Miocene, and Pliocene-Pleistocene mass extinctions. Gastropod and bivalve molluscs from the U.S. coastal plain were evaluated for 10 shell characters that confer resistance to predators. Of 40 tests, one supported the hypothesis; highly ornamented gastropods underwent greater levels of Pliocene-Pleistocene extinction than did nonescalated species. All remaining tests were nonsignificant. The hypothesis that escalated species are more vulnerable to climate-related mass extinctions is not supported.

  3. The Misconceptions of Abuse by School Personnel: A Public School Perspective

    ERIC Educational Resources Information Center

    Starling, Kathleen

    2011-01-01

    The purpose of this study was to examine school personnel's perceptions of abuse, their need to review their sexual abuse policies and procedures, and the provision of appropriate training to disseminate those policies and procedures to all stakeholders in their educational school setting. A mixed methods study was used to explore the hypothesis and…

  4. On Restructurable Control System Theory

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1983-01-01

    The state of stochastic system and control theory as it impacts restructurable control issues is addressed. The multivariable characteristics of the control problem are addressed. The failure detection/identification problem is discussed as a multi-hypothesis testing problem. Control strategy reconfiguration, static multivariable controls, static failure hypothesis testing, dynamic multivariable controls, fault-tolerant control theory, dynamic hypothesis testing, generalized likelihood ratio (GLR) methods, and adaptive control are discussed.

  5. Perspectives on the Use of Null Hypothesis Statistical Testing. Part III: the Various Nuts and Bolts of Statistical and Hypothesis Testing

    ERIC Educational Resources Information Center

    Marmolejo-Ramos, Fernando; Cousineau, Denis

    2017-01-01

    The number of articles showing dissatisfaction with the null hypothesis statistical testing (NHST) framework has been progressively increasing over the years. Alternatives to NHST have been proposed and the Bayesian approach seems to have achieved the highest amount of visibility. In this last part of the special issue, a few alternative…

  6. Revised standards for statistical evidence.

    PubMed

    Johnson, Valen E

    2013-11-26

    Recent advances in Bayesian hypothesis testing have led to the development of uniformly most powerful Bayesian tests, which represent an objective, default class of Bayesian hypothesis tests that have the same rejection regions as classical significance tests. Based on the correspondence between these two classes of tests, it is possible to equate the size of classical hypothesis tests with evidence thresholds in Bayesian tests, and to equate P values with Bayes factors. An examination of these connections suggests that recent concerns over the lack of reproducibility of scientific studies can be attributed largely to the conduct of significance tests at unjustifiably high levels of significance. To correct this problem, evidence thresholds required for the declaration of a significant finding should be increased to 25-50:1, and to 100-200:1 for the declaration of a highly significant finding. In terms of classical hypothesis tests, these evidence standards mandate the conduct of tests at the 0.005 or 0.001 level of significance.
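
    A related, simpler calibration of P values to Bayes factors is the Sellke-Bayarri upper bound (this is not Johnson's UMPBT construction, which yields the larger 25-50:1 and 100-200:1 thresholds quoted above, but it illustrates the same direction of the mapping, that small P values correspond to only modest evidence):

```python
import math

def bayes_factor_bound(p):
    """Sellke-Bayarri upper bound on the Bayes factor against the null
    implied by a p-value: BF <= -1 / (e * p * ln p), valid for p < 1/e."""
    return -1.0 / (math.e * p * math.log(p))

for alpha in (0.05, 0.005, 0.001):
    print(alpha, round(bayes_factor_bound(alpha), 1))
```

    Under this bound, p = 0.05 corresponds to at most about 2.5:1 evidence against the null, which is one way to see why tightening significance levels has been proposed.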

  7. Grammar Predicts Procedural Learning and Consolidation Deficits in Children with Specific Language Impairment

    PubMed Central

    Hedenius, Martina; Persson, Jonas; Tremblay, Antoine; Adi-Japha, Esther; Veríssimo, João; Dye, Cristina D.; Alm, Per; Jennische, Margareta; Tomblin, J. Bruce; Ullman, Michael T.

    2011-01-01

    The Procedural Deficit Hypothesis (PDH) posits that Specific Language Impairment (SLI) can be largely explained by abnormalities of brain structures that subserve procedural memory. The PDH predicts impairments of procedural memory itself, and that such impairments underlie the grammatical deficits observed in the disorder. Previous studies have indeed reported procedural learning impairments in SLI, and have found that these are associated with grammatical difficulties. The present study extends this research by examining the consolidation and longer-term procedural sequence learning in children with SLI. The Alternating Serial Reaction Time (ASRT) task was given to children with SLI and typically-developing (TD) children in an initial learning session and an average of three days later to test for consolidation and longer-term learning. Although both groups showed evidence of initial sequence learning, only the TD children showed clear signs of consolidation, even though the two groups did not differ in longer-term learning. When the children were re-categorized on the basis of grammar deficits rather than broader language deficits, a clearer pattern emerged. Whereas both the grammar impaired and normal grammar groups showed evidence of initial sequence learning, only those with normal grammar showed consolidation and longer-term learning. Indeed, the grammar-impaired group appeared to lose any sequence knowledge gained during the initial testing session. These findings held even when controlling for vocabulary or a broad non-grammatical language measure, neither of which were associated with procedural memory. When grammar was examined as a continuous variable over all children, the same relationships between procedural memory and grammar, but not vocabulary or the broader language measure, were observed. Overall, the findings support and further specify the PDH. 
They suggest that consolidation and longer-term procedural learning are impaired in SLI, but that these impairments are specifically tied to the grammatical deficits in the disorder. The possibility that consolidation and longer-term learning are problematic in the disorder suggests a locus of potential study for therapeutic approaches. In sum, this study clarifies our understanding of the underlying deficits in SLI, and suggests avenues for further research. PMID:21840165

  8. Proficiency Testing for Evaluating Aerospace Materials Test Anomalies

    NASA Technical Reports Server (NTRS)

    Hirsch, D.; Motto, S.; Peyton, S.; Beeson, H.

    2006-01-01

    ASTM G 86 and ASTM G 74 are commonly used to evaluate materials' susceptibility to ignition in liquid and gaseous oxygen systems. However, the methods have been known for their lack of repeatability. The inherent problems identified with the test logic would not allow precise identification of either the existence or the magnitude of problems related to running the tests, such as lack of consistency of systems performance, lack of adherence to procedures, etc. Excessive variability leads to increasing instances of accepting the null hypothesis erroneously, and so to the false logical deduction that problems are nonexistent when they really do exist. This paper attempts to develop and recommend an approach that could lead to increased accuracy in problem diagnostics by using the 50% reactivity point, which has been shown to be more repeatable. The initial tests conducted indicate that PTFE and Viton A (for pneumatic impact) and Buna S (for mechanical impact) would be good choices for additional testing and consideration for inter-laboratory evaluations. The approach presented could also be used to evaluate variable effects with increased confidence and tolerance optimization.

  9. Biostatistics Series Module 2: Overview of Hypothesis Testing.

    PubMed

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Hypothesis testing (or statistical inference) is one of the major applications of biostatistics. Much of medical research begins with a research question that can be framed as a hypothesis. Inferential statistics begins with a null hypothesis that reflects the conservative position of no change or no difference in comparison to baseline or between groups. Usually, the researcher has reason to believe that there is some effect or some difference which is the alternative hypothesis. The researcher therefore proceeds to study samples and measure outcomes in the hope of generating evidence strong enough for the statistician to be able to reject the null hypothesis. The concept of the P value is almost universally used in hypothesis testing. It denotes the probability of obtaining by chance a result at least as extreme as that observed, even when the null hypothesis is true and no real difference exists. Usually, if P is < 0.05 the null hypothesis is rejected and sample results are deemed statistically significant. With the increasing availability of computers and access to specialized statistical software, the drudgery involved in statistical calculations is now a thing of the past, once the learning curve of the software has been traversed. The life sciences researcher is therefore free to devote oneself to optimally designing the study, carefully selecting the hypothesis tests to be applied, and taking care in conducting the study well. Unfortunately, selecting the right test seems difficult initially. Thinking of the research hypothesis as addressing one of five generic research questions helps in selection of the right hypothesis test. In addition, it is important to be clear about the nature of the variables (e.g., numerical vs. categorical; parametric vs. nonparametric) and the number of groups or data sets being compared (e.g., two or more than two) at a time. The same research question may be explored by more than one type of hypothesis test. 
While this may be of utility in highlighting different aspects of the problem, merely reapplying different tests to the same issue in the hope of finding a P < 0.05 is a wrong use of statistics. Finally, it is becoming the norm that an estimate of the size of any effect, expressed with its 95% confidence interval, is required for meaningful interpretation of results. A large study is likely to have a small (and therefore "statistically significant") P value, but a "real" estimate of the effect would be provided by the 95% confidence interval. If the intervals overlap between two interventions, then the difference between them is not so clear-cut even if P < 0.05. The two approaches are now considered complementary to one another.
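
    The complementary roles of the P value and the confidence interval described in this module can be illustrated with a minimal two-sample comparison (a sketch on simulated data, using a large-sample normal approximation rather than exact t quantiles):

```python
import math
import random
import statistics

def two_sample_test(x, y, alpha=0.05):
    """Welch-type two-sample test returning a p-value and a (1 - alpha)
    confidence interval for the mean difference; the p-value uses a
    large-sample normal approximation to the t reference distribution."""
    nx, ny = len(x), len(y)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    se = math.sqrt(statistics.variance(x) / nx + statistics.variance(y) / ny)
    z_stat = (mx - my) / se
    nd = statistics.NormalDist()
    p = 2 * (1 - nd.cdf(abs(z_stat)))            # two-sided p-value
    half = nd.inv_cdf(1 - alpha / 2) * se        # CI half-width
    return p, (mx - my - half, mx - my + half)

random.seed(1)
treated = [random.gauss(0.5, 1.0) for _ in range(200)]  # simulated groups
control = [random.gauss(0.0, 1.0) for _ in range(200)]
p_value, ci = two_sample_test(treated, control)
print(p_value, ci)  # a small p-value, plus an interval estimate of the effect size
```

    The interval, not the P value alone, conveys the plausible size of the effect, which is the point the abstract makes about the two approaches being complementary.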

  10. Biostatistics Series Module 2: Overview of Hypothesis Testing

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Hypothesis testing (or statistical inference) is one of the major applications of biostatistics. Much of medical research begins with a research question that can be framed as a hypothesis. Inferential statistics begins with a null hypothesis that reflects the conservative position of no change or no difference in comparison to baseline or between groups. Usually, the researcher has reason to believe that there is some effect or some difference which is the alternative hypothesis. The researcher therefore proceeds to study samples and measure outcomes in the hope of generating evidence strong enough for the statistician to be able to reject the null hypothesis. The concept of the P value is almost universally used in hypothesis testing. It denotes the probability of obtaining by chance a result at least as extreme as that observed, even when the null hypothesis is true and no real difference exists. Usually, if P is < 0.05 the null hypothesis is rejected and sample results are deemed statistically significant. With the increasing availability of computers and access to specialized statistical software, the drudgery involved in statistical calculations is now a thing of the past, once the learning curve of the software has been traversed. The life sciences researcher is therefore free to devote oneself to optimally designing the study, carefully selecting the hypothesis tests to be applied, and taking care in conducting the study well. Unfortunately, selecting the right test seems difficult initially. Thinking of the research hypothesis as addressing one of five generic research questions helps in selection of the right hypothesis test. In addition, it is important to be clear about the nature of the variables (e.g., numerical vs. categorical; parametric vs. nonparametric) and the number of groups or data sets being compared (e.g., two or more than two) at a time. The same research question may be explored by more than one type of hypothesis test. 
While this may be of utility in highlighting different aspects of the problem, merely reapplying different tests to the same issue in the hope of finding a P < 0.05 is a wrong use of statistics. Finally, it is becoming the norm that an estimate of the size of any effect, expressed with its 95% confidence interval, is required for meaningful interpretation of results. A large study is likely to have a small (and therefore “statistically significant”) P value, but a “real” estimate of the effect would be provided by the 95% confidence interval. If the intervals overlap between two interventions, then the difference between them is not so clear-cut even if P < 0.05. The two approaches are now considered complementary to one another. PMID:27057011

  11. Statistical Validation of Surrogate Endpoints: Another Look at the Prentice Criterion and Other Criteria.

    PubMed

    Saraf, Sanatan; Mathew, Thomas; Roy, Anindya

    2015-01-01

    For the statistical validation of surrogate endpoints, an alternative formulation is proposed for testing Prentice's fourth criterion, under a bivariate normal model. In such a setup, the criterion involves inference concerning an appropriate regression parameter, and the criterion holds if the regression parameter is zero. Testing such a null hypothesis has been criticized in the literature since it can only be used to reject a poor surrogate, and not to validate a good surrogate. In order to circumvent this, an equivalence hypothesis is formulated for the regression parameter, namely the hypothesis that the parameter is equivalent to zero. Such an equivalence hypothesis is formulated as an alternative hypothesis, so that the surrogate endpoint is statistically validated when the null hypothesis is rejected. Confidence intervals for the regression parameter and tests for the equivalence hypothesis are proposed using bootstrap methods and small sample asymptotics, and their performances are numerically evaluated and recommendations are made. The choice of the equivalence margin is a regulatory issue that needs to be addressed. The proposed equivalence testing formulation is also adopted for other parameters that have been proposed in the literature on surrogate endpoint validation, namely, the relative effect and proportion explained.
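
    The equivalence formulation can be sketched as two one-sided tests (TOST): the regression parameter is declared equivalent to zero when the 90% confidence interval lies entirely inside the equivalence margin (a simplified normal-theory sketch, not the paper's bootstrap or small-sample-asymptotic procedures; the margin `delta` and the estimates are hypothetical):

```python
import statistics

def tost_equivalence(est, se, delta, alpha=0.05):
    """Two one-sided tests (TOST): declare the parameter 'equivalent to zero'
    when the (1 - 2*alpha) confidence interval lies inside (-delta, +delta)."""
    z = statistics.NormalDist().inv_cdf(1 - alpha)  # ~1.645 for alpha = 0.05
    lo, hi = est - z * se, est + z * se
    return -delta < lo and hi < delta

print(tost_equivalence(est=0.02, se=0.01, delta=0.1))  # interval inside the margin
print(tost_equivalence(est=0.08, se=0.05, delta=0.1))  # interval crosses the margin
```

    Note how rejection now supports the surrogate: the null is "not equivalent," so a rejection validates rather than merely fails to refute, which is the inversion the abstract describes.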

  12. [The "diagnosis" in the light of Charles S. Peirce, Sherlock Holmes, Sigmund Freud and modern neurobiology].

    PubMed

    Adler, R H

    2006-05-10

    A diagnostic hypothesis is a causa ficta. It is an assumption, suitable to explain phenomena, which is not yet proven to be the only and valid explanation of the observed. One of Wilhelm Hauff's fairy tales illustrates how a hypothesis is generated. It is based on the interpretation of signs. Signs are of an ikonic, an indexical or a symbolic nature. According to C. S. Peirce, a hypothesis is created by abduction, according to Conan Doyle's Sherlock Holmes by immersion into thoughts, and according to S. Freud by free-floating attention. The three procedures are alike. Neurobiological structures and functions which correspond to these processes are described, especially the emotional-implicit memory. The technique of hypothesis generation is meaningful to clinical medicine.

  13. The Psychological Inventory of Criminal Thinking Styles and Psychopathy Checklist: screening version as incrementally valid predictors of recidivism.

    PubMed

    Walters, Glenn D

    2009-12-01

    A follow-up of 107 male federal prison inmates previously tested with the Psychological Inventory of Criminal Thinking Styles (PICTS) and Psychopathy Checklist: Screening Version (PCL:SV) was conducted to test the incremental validity of both measures. The PICTS General Criminal Thinking (GCT) score was found to predict general recidivism and serious recidivism when age, prior charges, and the PCL:SV were controlled. The PCL:SV, on the other hand, failed to predict general and serious recidivism when age, prior charges, and the PICTS were controlled. These findings support the hypothesis that content-relevant self-report measures like the PICTS are capable of predicting crime-relevant outcomes above and beyond the contributions of basic demographic variables like age, criminal history, and such popular non-self-report rating procedures as the PCL:SV.

  14. Piagetian conservation of discrete quantities in bonobos (Pan paniscus), chimpanzees (Pan troglodytes), and orangutans (Pongo pygmaeus).

    PubMed

    Suda, Chikako; Call, Josep

    2005-10-01

    This study investigated whether physical discreteness helps apes to understand the concept of Piagetian conservation (i.e. the invariance of quantities). Subjects were four bonobos, three chimpanzees, and five orangutans. Apes were tested on their ability to conserve discrete/continuous quantities in an over-conservation procedure in which two unequal quantities of edible rewards underwent various transformations in front of subjects. Subjects were examined to determine whether they could track the larger quantity of reward after the transformation. Comparison between the two types of conservation revealed that tests with bonobos supported the discreteness hypothesis. Bonobos, but neither chimpanzees nor orangutans, performed significantly better with discrete quantities than with continuous ones. The results suggest that at least bonobos could benefit from the discreteness of stimuli in their acquisition of conservation skills.

  15. A bootstrap based Neyman-Pearson test for identifying variable importance.

    PubMed

    Ditzler, Gregory; Polikar, Robi; Rosen, Gail

    2015-04-01

    Selection of the most informative features, those that lead to a small loss on future data, is arguably one of the most important steps in classification, data analysis, and model selection. Several feature selection (FS) algorithms are available; however, due to noise present in any data set, FS algorithms are typically accompanied by an appropriate cross-validation scheme. In this brief, we propose a statistical hypothesis test derived from the Neyman-Pearson lemma for determining if a feature is statistically relevant. The proposed approach can be applied as a wrapper to any FS algorithm, regardless of the FS criteria used by that algorithm, to determine whether a feature belongs in the relevant set. Perhaps more importantly, this procedure efficiently determines the number of relevant features given an initial starting point. We provide freely available software implementations of the proposed methodology.
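
    The flavor of such a wrapper can be sketched as follows (an illustration under assumed linear models, not the paper's exact Neyman-Pearson construction): bootstrap-resample the data, fit models with and without a candidate feature, and test whether dropping the feature reliably hurts out-of-bag error:

```python
import numpy as np

def feature_relevance(X, y, j, n_boot=200, alpha=0.05, rng=None):
    """Hedged sketch (not the paper's exact test): feature j is declared
    relevant if removing it worsens out-of-bag squared error in at least
    (1 - alpha) of the bootstrap resamples."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, d = X.shape
    keep = [c for c in range(d) if c != j]
    wins = trials = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                  # bootstrap sample
        oob = np.setdiff1d(np.arange(n), idx)        # out-of-bag rows
        if oob.size == 0:
            continue

        def oob_mse(cols):
            A = np.column_stack([np.ones(idx.size), X[np.ix_(idx, cols)]])
            beta, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
            B = np.column_stack([np.ones(oob.size), X[np.ix_(oob, cols)]])
            return np.mean((y[oob] - B @ beta) ** 2)

        wins += oob_mse(keep) > oob_mse(list(range(d)))
        trials += 1
    return wins / trials >= 1 - alpha

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = 2.0 * X[:, 0] + rng.normal(size=300)   # only feature 0 carries signal
print(feature_relevance(X, y, 0, rng=rng))  # informative feature
print(feature_relevance(X, y, 2, rng=rng))  # pure-noise feature
```

    Out-of-bag rather than in-sample error is essential here: in-sample RSS always drops when a feature is added, so only held-out loss can separate signal features from noise.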

  16. Later learning stages in procedural memory are impaired in children with Specific Language Impairment.

    PubMed

    Desmottes, Lise; Meulemans, Thierry; Maillart, Christelle

    2016-01-01

    According to the Procedural Deficit Hypothesis (PDH), difficulties in the procedural memory system may contribute to the language difficulties encountered by children with Specific Language Impairment (SLI). Most studies investigating the PDH have used the sequence learning paradigm; however, these studies have principally focused on initial sequence learning in a single practice session. The present study sought to extend these investigations by assessing the consolidation stage and longer-term retention of implicit sequence-specific knowledge in 42 children with or without SLI. Both groups of children completed a serial reaction time task and were tested 24 h and one week after practice. Results showed that children with SLI succeeded as well as children with typical development (TD) in the early acquisition stage of the sequence learning task. However, as training blocks progressed, only the TD children continued to improve their sequence knowledge, while the children with SLI showed no further improvement. Moreover, children with SLI lacked the consolidation gains in sequence knowledge displayed by the TD children. Overall, these results were in line with the predictions of the PDH and suggest that later learning stages in procedural memory are impaired in SLI. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Temporal Discontiguity Is neither Necessary nor Sufficient for Learning-Induced Effects on Adult Neurogenesis

    PubMed Central

    Leuner, Benedetta; Waddell, Jaylyn; Gould, Elizabeth; Shors, Tracey J.

    2012-01-01

    Some, but not all, types of learning and memory can influence neurogenesis in the adult hippocampus. Trace eyeblink conditioning has been shown to enhance the survival of new neurons, whereas delay eyeblink conditioning has no such effect. The key difference between the two training procedures is that the conditioning stimuli are separated in time during trace but not delay conditioning. These findings raise the question of whether temporal discontiguity is necessary for enhancing the survival of new neurons. Here we used two approaches to test this hypothesis. First, we examined the influence of a delay conditioning task in which the duration of the conditioned stimulus (CS) was increased nearly twofold, a procedure that critically engages the hippocampus. Although the CS and unconditioned stimulus are contiguous, this very long delay conditioning procedure increased the number of new neurons that survived. Second, we examined the influence of learning the trace conditioned response (CR) after having acquired the CR during delay conditioning, a procedure that renders trace conditioning hippocampal-independent. In this case, trace conditioning did not enhance the survival of new neurons. Together, these results demonstrate that associative learning increases the survival of new neurons in the adult hippocampus, regardless of temporal contiguity. PMID:17192426

  18. Readability of Invasive Procedure Consent Forms.

    PubMed

    Eltorai, Adam E M; Naqvi, Syed S; Ghanian, Soha; Eberson, Craig P; Weiss, Arnold-Peter C; Born, Christopher T; Daniels, Alan H

    2015-12-01

    Informed consent is a pillar of ethical medicine which requires patients to fully comprehend relevant issues including the risks, benefits, and alternatives of an intervention. Given that the average reading skill of US adults is at the 8th grade level, the American Medical Association (AMA) and the National Institutes of Health (NIH) recommend that patient information materials not exceed a 6th grade reading level. We hypothesized that text provided in invasive procedure consent forms would exceed recommended readability guidelines for medical information. To test this hypothesis, we gathered procedure consent forms from all surgical inpatient hospitals in the state of Rhode Island. For each consent form, readability was measured with the following measures: Flesch Reading Ease Formula, Flesch-Kincaid Grade Level, Fog Scale, SMOG Index, Coleman-Liau Index, Automated Readability Index, and Linsear Write Formula. These readability scores were used to calculate a composite Text Readability Consensus Grade Level. Invasive procedure consent forms were found to be written at an average 15th grade level (i.e., third year of college), which is significantly higher than the average US adult reading level of 8th grade (p < 0.0001) and the AMA/NIH recommended readability guideline for patient materials of 6th grade (p < 0.0001). Invasive procedure consent forms have readability levels that make comprehension difficult or impossible for many patients. Efforts to improve the readability of procedural consent forms should improve patient understanding regarding their healthcare decisions. © 2015 Wiley Periodicals, Inc.
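
    The Flesch-Kincaid grade level used in analyses like this has a closed form: 0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59. A sketch with a crude vowel-group syllable heuristic (real readability tools count syllables more carefully, so scores will differ slightly):

```python
import re

def count_syllables(word):
    """Crude heuristic: count vowel groups, with a silent final 'e' rule."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text):
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

sentence = ("The patient must understand the risks, benefits, "
            "and alternatives of the proposed intervention.")
print(round(fk_grade(sentence), 1))  # consent-form prose scores far above grade 6
```

    Long sentences and polysyllabic words both drive the grade upward, which is why legalistic consent language lands near the college level reported above.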

  19. Readability of Invasive Procedure Consent Forms

    PubMed Central

    Eltorai, Adam E. M.; Naqvi, Syed S.; Ghanian, Soha; Eberson, Craig P.; Weiss, Arnold‐Peter C.; Born, Christopher T.

    2015-01-01

    Background: Informed consent is a pillar of ethical medicine which requires patients to fully comprehend relevant issues including the risks, benefits, and alternatives of an intervention. Given that the average reading skill of US adults is at the 8th grade level, the American Medical Association (AMA) and the National Institutes of Health (NIH) recommend that patient information materials not exceed a 6th grade reading level. We hypothesized that text provided in invasive procedure consent forms would exceed recommended readability guidelines for medical information. Materials and methods: To test this hypothesis, we gathered procedure consent forms from all surgical inpatient hospitals in the state of Rhode Island. For each consent form, readability was measured with the following measures: Flesch Reading Ease Formula, Flesch–Kincaid Grade Level, Fog Scale, SMOG Index, Coleman–Liau Index, Automated Readability Index, and Linsear Write Formula. These readability scores were used to calculate a composite Text Readability Consensus Grade Level. Results: Invasive procedure consent forms were found to be written at an average 15th grade level (i.e., third year of college), which is significantly higher than the average US adult reading level of 8th grade (p < 0.0001) and the AMA/NIH recommended readability guideline for patient materials of 6th grade (p < 0.0001). Conclusion: Invasive procedure consent forms have readability levels that make comprehension difficult or impossible for many patients. Efforts to improve the readability of procedural consent forms should improve patient understanding regarding their healthcare decisions. PMID:26678039

  20. An evaluation of inferential procedures for adaptive clinical trial designs with pre-specified rules for modifying the sample size.

    PubMed

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2014-09-01

    Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to yield shorter confidence intervals on average and to produce higher probabilities of P-values below important thresholds than the alternative approaches. The bias-adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.

  1. The Procedural Learning Deficit Hypothesis of Language Learning Disorders: We See Some Problems

    ERIC Educational Resources Information Center

    West, Gillian; Vadillo, Miguel A.; Shanks, David R.; Hulme, Charles

    2018-01-01

    Impaired procedural learning has been suggested as a possible cause of developmental dyslexia (DD) and specific language impairment (SLI). This study examined the relationship between measures of verbal and non-verbal implicit and explicit learning and measures of language, literacy and arithmetic attainment in a large sample of 7 to 8-year-old…

  2. Emotional Development across Adulthood: Differential Age-Related Emotional Reactivity and Emotion Regulation in a Negative Mood Induction Procedure

    ERIC Educational Resources Information Center

    Kliegel, Matthias; Jager, Theodor; Phillips, Louise H.

    2007-01-01

    The present study examines the hypothesis that older adults might differentially react to a negative versus neutral mood induction procedure than younger adults. The rationale for this expectation was derived from Socioemotional Selectivity Theory (SST), which postulates differential salience of emotional information and ability to regulate…

  3. The Importance of Teaching Power in Statistical Hypothesis Testing

    ERIC Educational Resources Information Center

    Olinsky, Alan; Schumacher, Phyllis; Quinn, John

    2012-01-01

    In this paper, we discuss the importance of teaching power considerations in statistical hypothesis testing. Statistical power analysis determines the ability of a study to detect a meaningful effect size, where the effect size is the difference between the hypothesized value of the population parameter under the null hypothesis and the true value…

  4. The Relation between Parental Values and Parenting Behavior: A Test of the Kohn Hypothesis.

    ERIC Educational Resources Information Center

    Luster, Tom; And Others

    1989-01-01

    Used data on 65 mother-infant dyads to test Kohn's hypothesis concerning the relation between values and parenting behavior. Findings support Kohn's hypothesis that parents who value self-direction would emphasize supportive function of parenting and parents who value conformity would emphasize their obligations to impose restraints. (Author/NB)

  5. Cognitive Biases in the Interpretation of Autonomic Arousal: A Test of the Construal Bias Hypothesis

    ERIC Educational Resources Information Center

    Ciani, Keith D.; Easter, Matthew A.; Summers, Jessica J.; Posada, Maria L.

    2009-01-01

    According to Bandura's construal bias hypothesis, derived from social cognitive theory, persons with the same heightened state of autonomic arousal may experience either pleasant or deleterious emotions depending on the strength of perceived self-efficacy. The current study tested this hypothesis by proposing that college students' preexisting…

  6. Is Conscious Stimulus Identification Dependent on Knowledge of the Perceptual Modality? Testing the “Source Misidentification Hypothesis”

    PubMed Central

    Overgaard, Morten; Lindeløv, Jonas; Svejstrup, Stinna; Døssing, Marianne; Hvid, Tanja; Kauffmann, Oliver; Mouridsen, Kim

    2013-01-01

    This paper reports an experiment intended to test a particular hypothesis derived from blindsight research, which we name the “source misidentification hypothesis.” According to this hypothesis, a subject may be correct about a stimulus without being correct about how she had access to this knowledge (whether the stimulus was visual, auditory, or something else). We test this hypothesis in healthy subjects, asking them to report whether a masked stimulus was presented auditorily or visually, what the stimulus was, and how clearly they experienced the stimulus using the Perceptual Awareness Scale (PAS). We suggest that knowledge about perceptual modality may be a necessary precondition in order to issue correct reports of which stimulus was presented. Furthermore, we find that PAS ratings correlate with correctness, and that subjects are at chance level when reporting no conscious experience of the stimulus. To demonstrate that particular levels of reporting accuracy are obtained, we employ a statistical strategy, which operationally tests the hypothesis of non-equality, such that the usual rejection of the null-hypothesis admits the conclusion of equivalence. PMID:23508677
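
    The closing sentence describes an equivalence-testing logic: instead of merely failing to reject a null of equality, one rejects a null of non-equivalence. As an illustrative sketch only (the record does not spell out the paper's exact procedure), a two one-sided z-test (TOST) against prespecified equivalence bounds works like this:

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def tost_pvalue(estimate, se, lower, upper):
    """Two one-sided z-tests for equivalence.

    The null is non-equivalence ('mean <= lower OR mean >= upper');
    rejecting both one-sided nulls supports the conclusion that the mean
    lies within (lower, upper). The TOST p-value is the larger of the two.
    """
    p_lower = 1.0 - norm_cdf((estimate - lower) / se)  # tests mean <= lower
    p_upper = norm_cdf((estimate - upper) / se)        # tests mean >= upper
    return max(p_lower, p_upper)
```

    With this framing, a small p-value licenses the positive conclusion of equivalence, which is the inversion of the usual accept/reject reading the abstract alludes to.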

  7. A large scale test of the gaming-enhancement hypothesis

    PubMed Central

    Wang, John C.

    2016-01-01

    A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people’s gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses was compared. Results provided no substantive evidence supporting the idea that having a preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support of the null hypothesis over the gaming-enhancement hypothesis. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work. PMID:27896035

  8. Damage of GABAergic neurons in the medial septum impairs spatial working memory and extinction of active avoidance: effects on proactive interference.

    PubMed

    Pang, Kevin C H; Jiao, Xilu; Sinha, Swamini; Beck, Kevin D; Servatius, Richard J

    2011-08-01

    The medial septum and diagonal band (MSDB) are important in spatial learning and memory. On the basis of the excitotoxic damage of GABAergic MSDB neurons, we have recently suggested a role for these neurons in controlling proactive interference. Our study sought to test this hypothesis in different behavioral procedures using a new GABAergic immunotoxin. GABA-transporter-saporin (GAT1-SAP) was administered into the MSDB of male Sprague-Dawley rats. Following surgery, rats were trained in a reference memory water maze procedure for 5 days, followed by a working memory (delayed match to position) water maze procedure. Other rats were trained in a lever-press avoidance procedure after intraseptal GAT1-SAP or sham surgery. Intraseptal GAT1-SAP extensively damaged GABAergic neurons while sparing most cholinergic MSDB neurons. Rats treated with GAT1-SAP were not impaired in acquiring a spatial reference memory, learning the location of the escape platform as rapidly as sham rats. In contrast, GAT1-SAP rats were slower than sham rats to learn the platform location in a delayed match to position procedure, in which the platform location was changed every day. Moreover, GAT1-SAP rats returned to previous platform locations more often than sham rats. In the active avoidance procedure, intraseptal GAT1-SAP impaired extinction but not acquisition of the avoidance response. Although this study used a different neurotoxin and different behavioral procedures from previous studies, the results paint a similar picture: GABAergic MSDB neurons are important for controlling proactive interference. Copyright © 2010 Wiley-Liss, Inc.

  9. The motivation to self-administer is increased after a history of spiking brain levels of cocaine.

    PubMed

    Zimmer, Benjamin A; Oleson, Erik B; Roberts, David C S

    2012-07-01

    Recent attempts to model the addiction process in rodents have focused on cocaine self-administration procedures that provide extended daily access. Such procedures produce a characteristic loading phase during which blood levels rapidly rise and then are maintained within an elevated range for the duration of the session. The present experiments tested the hypothesis that multiple fast-rising spikes in cocaine levels contribute to the addiction process more robustly than constant, maintained drug levels. Here, we compared the effects of various cocaine self-administration procedures that produced very different patterns of drug intake and drug dynamics on Pmax, a behavioral economic measure of the motivation to self-administer drug. Two groups received intermittent access (IntA) to cocaine during daily 6-h sessions. Access was limited to twelve 5-min trials that alternated with 25-min timeout periods, using either a hold-down procedure or a fixed ratio 1 (FR1). Cocaine levels could not be maintained with this procedure; instead the animals experienced 12 fast-rising spikes in cocaine levels each day. The IntA groups were compared with groups given 6-h FR1 long access and 2-h short access sessions and two other control groups. Here, we report that cocaine self-administration procedures resulting in repeatedly spiking drug levels produce more robust increases in Pmax than procedures resulting in maintained high levels of cocaine. These results suggest that rapid spiking of brain-cocaine levels is sufficient to increase the motivation to self-administer cocaine.

  10. LIKELIHOOD RATIO TESTS OF HYPOTHESES ON MULTIVARIATE POPULATIONS, VOLUME II, TEST OF HYPOTHESIS--STATISTICAL MODELS FOR THE EVALUATION AND INTERPRETATION OF EDUCATIONAL CRITERIA. PART 4.

    ERIC Educational Resources Information Center

    SAW, J.G.

    This paper deals with some tests of hypothesis frequently encountered in the analysis of multivariate data. The type of hypothesis considered is that which the statistician can answer in the negative or affirmative. The Doolittle method makes it possible to evaluate the determinant of a matrix of high order, to solve a matrix equation, or to…

  11. The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective.

    PubMed

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.
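
    For a concrete (if simplified) contrast between the two approaches, consider a binomial rate. A frequentist test yields a p-value; the Bayesian route compares the marginal likelihood of the data under a point null with that under a prior on the alternative. The sketch below is a textbook example rather than anything from the article: the Bayes factor BF01 for H0: theta = 0.5 against a uniform prior on theta.

```python
from math import comb

def bayes_factor_01(k, n):
    """Bayes factor BF01 for H0: theta = 0.5 versus H1: theta ~ Uniform(0, 1),
    given k successes in n Bernoulli trials.

    p(data | H0) = C(n, k) * 0.5**n, while the marginal likelihood under the
    uniform prior integrates to exactly 1 / (n + 1); BF01 is their ratio.
    """
    return comb(n, k) * 0.5**n * (n + 1)
```

    For 5 successes in 10 trials BF01 is about 2.7 (modest evidence for the null); for 9 in 10 it drops below 1, favoring the alternative. Values near 1 express the equivocal region that a binary reject/fail-to-reject decision cannot.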

  12. A meta-analytic review of collaborative inhibition and postcollaborative memory: Testing the predictions of the retrieval strategy disruption hypothesis.

    PubMed

    Marion, Stéphanie B; Thorley, Craig

    2016-11-01

    The retrieval strategy disruption hypothesis (Basden, Basden, Bryner, & Thomas, 1997) is the most widely cited theoretical explanation for why the memory performance of collaborative groups is inferior to the pooled performance of individual group members remembering alone (i.e., collaborative inhibition). This theory also predicts that several variables will moderate collaborative inhibition. This meta-analysis tests the veracity of the theory by systematically examining whether or not these variables do moderate the presence and strength of collaborative inhibition. A total of 75 effect sizes from 64 studies were included in the analysis. Collaborative inhibition was found to be a robust effect. Moreover, it was enhanced when remembering took place in larger groups, when uncategorized content items were retrieved, when group members followed free-flowing and free-order procedures, and when group members did not know one another. These findings support the retrieval strategy disruption hypothesis as a general theoretical explanation for the collaborative inhibition effect. Several additional analyses were also conducted to elucidate the potential contributions of other cognitive mechanisms to collaborative inhibition. Some results suggest that a contribution of retrieval inhibition is possible, but we failed to find any evidence to suggest retrieval blocking and encoding specificity impact upon collaborative inhibition effects. In a separate analysis (27 effect sizes), moderating factors of postcollaborative memory performance were examined. Generally, collaborative remembering tends to benefit later individual retrieval. Moderator analyses suggest that reexposure to study material may be partly responsible for this postcollaborative memory enhancement. Some applied implications of the meta-analyses are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  13. Current Perspectives on the Cerebellum and Reading Development.

    PubMed

    Alvarez, Travis A; Fiez, Julie A

    2018-05-03

    The dominant neural models of typical and atypical reading focus on the cerebral cortex. However, Nicolson et al. (2001) proposed a model, the cerebellar deficit hypothesis, in which the cerebellum plays an important role in reading. To evaluate the evidence in support of this model, we qualitatively review the current literature and employ meta-analytic tools examining patterns of functional connectivity between the cerebellum and the cerebral reading network. We find evidence for a phonological circuit with connectivity between the cerebellum and a dorsal fronto-parietal pathway, and a semantic circuit with cerebellar connectivity to a ventral fronto-temporal pathway. Furthermore, both cerebral pathways have functional connections with the mid-fusiform gyrus, a region implicated in orthographic processing. Consideration of these circuits within the context of the current literature suggests the cerebellum is positioned to influence both phonological and word-based decoding procedures for recognizing unfamiliar printed words. Overall, multiple lines of research provide support for the cerebellar deficit hypothesis, while also highlighting the need for further research to test mechanistic hypotheses. Copyright © 2018. Published by Elsevier Ltd.

  14. Alien abduction: a medical hypothesis.

    PubMed

    Forrest, David V

    2008-01-01

    In response to a new psychological study of persons who believe they have been abducted by space aliens, which found that sleep paralysis, a history of being hypnotized, and preoccupation with the paranormal and extraterrestrial were predisposing experiences, I note that many of the frequently reported particulars of the abduction experience bear more than a passing resemblance to medical-surgical procedures, and I propose that experience with these procedures may also be contributory. There is the altered state of consciousness, uniformly colored figures with prominent eyes, in a high-tech room under a round bright saucerlike object; there is nakedness, pain, and a loss of control while the body's boundaries are being probed; and yet the figures are thought benevolent. No medical-surgical history was apparently taken in the above-mentioned study, but psychological laboratory work evaluated false memory formation. I discuss problems in assessing intraoperative awareness and ways in which the medical hypothesis could be elaborated and tested. If physicians are causing this syndrome in a percentage of patients, we should know about it, and persons who feel they have been abducted should be encouraged to inform their surgeons and anesthesiologists without challenging their beliefs.

  15. The Role of Noncriterial Recollection in Estimating Recollection and Familiarity

    PubMed Central

    Parks, Colleen M.

    2007-01-01

    Noncriterial recollection (ncR) is recollection of details that are irrelevant to task demands. It has been shown to elevate familiarity estimates and to be functionally equivalent to familiarity in the process dissociation procedure (Yonelinas & Jacoby, 1996). However, Toth and Parks (2006) found no ncR in older adults, and hypothesized that this absence was related to older adults’ criterial recollection deficit. To test this hypothesis, as well as whether ncR is functionally equivalent to familiarity and increases the subjective experience of familiarity, remember-know and confidence-rating methods were used to estimate recollection and familiarity with young adults, young adults in a divided-attention condition (Experiment 1), and older adults. Supporting Toth and Parks’ hypothesis, ncR was found in all groups, but was consistently larger for groups with higher criterial recollection. Response distributions and receiver-operating characteristics revealed further similarities to criterial recollection and suggested that neither the experience nor usefulness of familiarity was enhanced by ncR. Overall, the results suggest that ncR does not differ fundamentally from criterial recollection. PMID:18591986

  16. Calibrated dilatometer exercise to probe thermoplastic properties of coal in pressurized CO2

    DOE PAGES

    Romanov, Vyacheslav N.; Lynn, Ronald J.; Warzinski, Robert P.

    2017-07-03

    This research was aimed at testing the hypothesis that at elevated CO2 pressure coal can soften at temperatures well below those obtained in the presence of other gases. That could have serious negative implications for injection of CO2 into deep coal seams. Here, we have examined the experimental design issues and procedures used in the previously published studies, and experimentally investigated the physical behavior of a similar coal in the presence of CO2 as a function of pressure and temperature, using the same high-pressure micro-dilatometer refurbished and carefully calibrated for this purpose. No notable reduction in coal softening temperature was observed in this study.

  17. Calibrated dilatometer exercise to probe thermoplastic properties of coal in pressurized CO2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romanov, Vyacheslav N.; Lynn, Ronald J.; Warzinski, Robert P.

    This research was aimed at testing the hypothesis that at elevated CO2 pressure coal can soften at temperatures well below those obtained in the presence of other gases. That could have serious negative implications for injection of CO2 into deep coal seams. Here, we have examined the experimental design issues and procedures used in the previously published studies, and experimentally investigated the physical behavior of a similar coal in the presence of CO2 as a function of pressure and temperature, using the same high-pressure micro-dilatometer refurbished and carefully calibrated for this purpose. No notable reduction in coal softening temperature was observed in this study.

  18. Lateralized goal framing: how selective presentation impacts message effectiveness.

    PubMed

    McCormick, Michael; Seta, John J

    2012-11-01

    We tested whether framing a message as a gain or loss would alter its effectiveness by using a dichotic listening procedure to selectively present a health related message to the left or right hemisphere. A significant goal framing effect (losses > gains) was found when right, but not left, hemisphere processing was initially enhanced. The results support the position that the contextual processing style of the right hemisphere is especially sensitive to the associative implications of the frame. We discussed the implications of these findings for goal framing research, and the valence hypothesis. We also discussed how these findings converge with prior valence framing research and how they can be of potential use to health care providers.

  19. Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial

    PubMed Central

    Hallgren, Kevin A.

    2012-01-01

    Many research designs require the assessment of inter-rater reliability (IRR) to demonstrate consistency among observational ratings provided by multiple coders. However, many studies use incorrect statistical procedures, fail to fully report the information necessary to interpret their results, or do not address how IRR affects the power of their subsequent analyses for hypothesis testing. This paper provides an overview of methodological issues related to the assessment of IRR with a focus on study design, selection of appropriate statistics, and the computation, interpretation, and reporting of some commonly-used IRR statistics. Computational examples include SPSS and R syntax for computing Cohen’s kappa and intra-class correlations to assess IRR. PMID:22833776
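
    The tutorial's computational examples are in SPSS and R; as a language-neutral illustration (not taken from the tutorial itself), Cohen's kappa for two raters reduces to observed agreement corrected for the agreement expected by chance:

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters coding the same items:

        kappa = (p_o - p_e) / (1 - p_e)

    where p_o is the observed agreement rate and p_e is the chance
    agreement implied by each rater's marginal category rates.
    """
    assert len(rater1) == len(rater2)
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    p_e = sum((rater1.count(c) / n) * (rater2.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)
```

    Kappa is 1 for perfect agreement, 0 for chance-level agreement, and negative when raters agree less often than chance, which is why it is preferred over raw percent agreement for IRR reporting.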

  20. Function word repetitions emerge when speakers are operantly conditioned to reduce frequency of silent pauses.

    PubMed

    Howell, P; Sackin, S

    2001-09-01

    Beattie and Bradbury (1979) reported a study in which, in one condition, they punished speakers when they produced silent pauses (by lighting a light they were supposed to keep switched off). They found speakers were able to reduce silent pauses and that this was not achieved at the expense of reduced overall speech rate. They reported an unexpected increase in word repetition rate. A recent theory proposed by Howell, Au-Yeung, and Sackin (1999) predicts that the change in word repetition rate will occur on function, not content words. This hypothesis is tested and confirmed. The results are used to assess the theory and to consider practical applications of this conditioning procedure.

  1. Phase II design with sequential testing of hypotheses within each stage.

    PubMed

    Poulopoulou, Stavroula; Karlis, Dimitris; Yiannoutsos, Constantin T; Dafni, Urania

    2014-01-01

    The main goal of a Phase II clinical trial is to decide whether a particular therapeutic regimen is effective enough to warrant further study. The hypothesis tested by Fleming's Phase II design (Fleming, 1982) is H0: p ≤ p0 versus H1: p ≥ p1, with level α and with power 1 − β at p = p1, where p0 is chosen to represent the response probability achievable with standard treatment and p1 is chosen such that the difference p1 − p0 represents a targeted improvement with the new treatment. This hypothesis creates a misinterpretation, mainly among clinicians, that rejection of the null hypothesis is tantamount to accepting the alternative, and vice versa. As mentioned by Storer (1992), this introduces ambiguity in the evaluation of type I and II errors and the choice of the appropriate decision at the end of the study. Instead of testing this hypothesis, an alternative class of designs is proposed in which two hypotheses are tested sequentially. The hypothesis H0: p ≤ p0 versus H1: p > p0 is tested first. If this null hypothesis is rejected, the hypothesis H0: p < p1 versus H1: p ≥ p1 is tested next, in order to examine whether the therapy is effective enough to consider further testing in a Phase III study. For the derivation of the proposed design, the exact binomial distribution is used to calculate the decision cut-points. The optimal design parameters are chosen so as to minimize the average sample number (ASN) under specific upper bounds on the error levels. The optimal values for the design were found using a simulated annealing method.
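
    The exact binomial calculation behind such decision cut-points can be sketched directly. The function below is a generic single-stage illustration, not the authors' sequential ASN optimizer: it finds the smallest response count r whose upper-tail probability under p0 stays within the nominal level.

```python
from math import comb

def binom_tail(n, r, p):
    """P(X >= r) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r, n + 1))

def rejection_cutpoint(n, p0, alpha):
    """Smallest r such that rejecting H0: p <= p0 whenever X >= r keeps
    the exact type I error at or below alpha."""
    for r in range(n + 1):
        if binom_tail(n, r, p0) <= alpha:
            return r
    return n + 1  # no achievable cut-point at this n
```

    For example, with n = 20, p0 = 0.2, and alpha = 0.05, the cut-point is 8 responses, since P(X ≥ 8) ≈ 0.032 while P(X ≥ 7) ≈ 0.087. Because the binomial is discrete, the attained level is typically below the nominal one, which is part of what the authors' optimization must trade off.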

  2. Learning from number board games: you learn what you encode.

    PubMed

    Laski, Elida V; Siegler, Robert S

    2014-03-01

    We tested the hypothesis that encoding the numerical-spatial relations in a number board game is a key process in promoting learning from playing such games. Experiment 1 used a microgenetic design to examine the effects on learning of the type of counting procedure that children use. As predicted, having kindergartners count-on from their current number on the board while playing a 0-100 number board game facilitated their encoding of the numerical-spatial relations on the game board and improved their number line estimates, numeral identification, and count-on skill. Playing the same game using the standard count-from-1 procedure led to considerably less learning. Experiment 2 demonstrated that comparable improvement in number line estimation does not occur with practice encoding the numerals 1-100 outside of the context of a number board game. The general importance of aligning learning activities and physical materials with desired mental representations is discussed. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  3. Do statistical segmentation abilities predict lexical-phonological and lexical-semantic abilities in children with and without SLI?

    PubMed Central

    Mainela-Arnold, Elina; Evans, Julia L.

    2014-01-01

    This study tested the predictions of the procedural deficit hypothesis by investigating the relationship between sequential statistical learning and two aspects of lexical ability, lexical-phonological and lexical-semantic, in children with and without specific language impairment (SLI). Participants included 40 children (ages 8;5–12;3), 20 children with SLI and 20 with typical development. Children completed Saffran’s statistical word segmentation task, a lexical-phonological access task (gating task), and a word definition task. Poor statistical learners were also poor at managing lexical-phonological competition during the gating task. However, statistical learning was not a significant predictor of semantic richness in word definitions. The ability to track statistical sequential regularities may be important for learning the inherently sequential structure of lexical-phonology, but not as important for learning lexical-semantic knowledge. Consistent with the procedural/declarative memory distinction, the brain networks associated with the two types of lexical learning are likely to have different learning properties. PMID:23425593

  4. An Extension of RSS-based Model Comparison Tests for Weighted Least Squares

    DTIC Science & Technology

    2012-08-22

    use the model comparison test statistic to analyze the null hypothesis. Under the null hypothesis, the weighted least squares cost functional is J_WLS(q̂_WLS,H) = 10.3040 × 10^6. Under the alternative hypothesis, the weighted least squares cost functional is J_WLS(q̂_WLS) = 8.8394 × 10^6. Thus the model
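
    The quantities in this fragment are weighted least squares cost functionals evaluated at the constrained (null-hypothesis) and unconstrained estimators. A minimal sketch of the pieces involved, with the statistic written in the standard RSS-comparison form n(J_null − J_alt)/J_alt (an assumption here, since the report's exact definition falls outside this excerpt; n is the number of observations, which the fragment does not give):

```python
def wls_cost(residuals, weights):
    """Weighted least squares cost functional: J_WLS = sum_i w_i * r_i**2."""
    return sum(w * r * r for w, r in zip(weights, residuals))

def model_comparison_statistic(j_null, j_alt, n):
    """RSS-based comparison statistic U_n = n * (J_null - J_alt) / J_alt.

    Large values indicate that the alternative model's extra freedom reduces
    the weighted cost by more than sampling noise would explain."""
    return n * (j_null - j_alt) / j_alt
```

    With the fragment's values (J_null ≈ 10.3040 × 10^6 versus J_alt ≈ 8.8394 × 10^6), the null fit is roughly 17% more costly, and the statistic scales that relative gap by the sample size.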

  5. Evaluating sufficient similarity for drinking-water disinfection by-product (DBP) mixtures with bootstrap hypothesis test procedures.

    PubMed

    Feder, Paul I; Ma, Zhenxu J; Bull, Richard J; Teuschler, Linda K; Rice, Glenn

    2009-01-01

    In chemical mixtures risk assessment, the use of dose-response data developed for one mixture to estimate risk posed by a second mixture depends on whether the two mixtures are sufficiently similar. While evaluations of similarity may be made using qualitative judgments, this article uses nonparametric statistical methods based on the "bootstrap" resampling technique to address the question of similarity among mixtures of chemical disinfectant by-products (DBP) in drinking water. The bootstrap resampling technique is a general-purpose, computer-intensive approach to statistical inference that substitutes empirical sampling for theoretically based parametric mathematical modeling. Nonparametric, bootstrap-based inference involves fewer assumptions than parametric normal theory based inference. The bootstrap procedure is appropriate, at least in an asymptotic sense, whether or not the parametric, distributional assumptions hold, even approximately. The statistical analysis procedures in this article are initially illustrated with data from 5 water treatment plants (Schenck et al., 2009), and then extended using data developed from a study of 35 drinking-water utilities (U.S. EPA/AMWA, 1989), which permits inclusion of a greater number of water constituents and increased structure in the statistical models.
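
    A common way to implement such a bootstrap hypothesis test, sketched here for a two-sample comparison of means under a null of equal means (an illustration in the spirit of the article, not its actual DBP-mixture procedure):

```python
import random
from statistics import mean

def bootstrap_mean_test(x, y, n_boot=2000, seed=0):
    """Bootstrap test of H0: mean(x) == mean(y).

    Both samples are shifted to share the pooled mean (imposing H0), then
    resampled with replacement; the p-value is the fraction of resampled
    mean differences at least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = abs(mean(x) - mean(y))
    pooled = mean(list(x) + list(y))
    x0 = [v - mean(x) + pooled for v in x]   # x recentered under H0
    y0 = [v - mean(y) + pooled for v in y]   # y recentered under H0
    extreme = 0
    for _ in range(n_boot):
        xs = [rng.choice(x0) for _ in x0]
        ys = [rng.choice(y0) for _ in y0]
        if abs(mean(xs) - mean(ys)) >= observed:
            extreme += 1
    return (extreme + 1) / (n_boot + 1)  # add-one to avoid a zero p-value
```

    Because the resampling is purely empirical, no normality or other parametric distributional assumption is needed, which is the robustness property the abstract emphasizes.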

  6. Selective mood-induced body image disparagement and enhancement effects: are they due to cognitive priming or subjective mood?

    PubMed

    Rotenberg, Ken J; Taylor, Daniel; Davis, Ron

    2004-04-01

    The study evaluated the effects of mood induction procedures on body image. Eighty female undergraduates participated in combinations of two valences (negative vs. positive) and two types (self-referent vs. other-referent) of mood induction procedures (MIPs). A measure of subjective mood and seven measures of body image were administered before and after the MIPs. Individuals in the self-referent MIP who had high negative body image at the pretest demonstrated increases in negative body image after exposure to the negative valence MIP (a disparagement effect) and decreases in negative body image after exposure to the positive valence MIP (an enhancement effect). This pattern was not evident in the other-referent MIP. Also, changes in negative body image were not appreciably associated with changes in subjective mood. The findings yielded support for the cognitive priming hypothesis but not for the subjective mood hypothesis. Further means of examining the cognitive priming hypothesis were outlined. Copyright 2004 by Wiley Periodicals, Inc. Int J Eat Disord 35: 317-332, 2004.

  7. Sex ratios in the two Germanies: a test of the economic stress hypothesis.

    PubMed

    Catalano, Ralph A

    2003-09-01

    Literature describing temporal variation in the secondary sex ratio among humans reports an association between population stressors and declines in the odds of male birth. Explanations of this phenomenon draw on reports that stressed females spontaneously abort male more than female fetuses, and that stressed males exhibit reduced sperm motility. This work has led to the argument that population stress induced by a declining economy reduces the human sex ratio. No direct test of this hypothesis appears in the literature. Here, a test is offered based on a comparison of the sex ratio in East and West Germany for the years 1946 to 1999. The theory suggests that the East German sex ratio should be lower in 1991, when East Germany's economy collapsed, than expected from its own history and from the sex ratio in West Germany. The hypothesis is tested using time-series modelling methods. The data support the hypothesis. The sex ratio in East Germany was at its lowest in 1991. This first direct test supports the hypothesis that economic decline reduces the human sex ratio.

  8. Understanding suicide terrorism: premature dismissal of the religious-belief hypothesis.

    PubMed

    Liddle, James R; Machluf, Karin; Shackelford, Todd K

    2010-07-06

    We comment on work by Ginges, Hansen, and Norenzayan (2009), in which they compare two hypotheses for predicting individual support for suicide terrorism: the religious-belief hypothesis and the coalitional-commitment hypothesis. Although we appreciate the evidence provided in support of the coalitional-commitment hypothesis, we argue that their method of testing the religious-belief hypothesis is conceptually flawed, thus calling into question their conclusion that the religious-belief hypothesis has been disconfirmed. In addition to critiquing the methodology implemented by Ginges et al., we provide suggestions on how the religious-belief hypothesis may be properly tested. It is possible that the premature and unwarranted conclusions reached by Ginges et al. may deter researchers from examining the effect of specific religious beliefs on support for terrorism, and we hope that our comments can mitigate this possibility.

  9. Testing hypotheses and the advancement of science: recent attempts to falsify the equilibrium point hypothesis.

    PubMed

    Feldman, Anatol G; Latash, Mark L

    2005-02-01

    Criticisms of the equilibrium point (EP) hypothesis have recently appeared that are based on misunderstandings of some of its central notions. Starting from such interpretations of the hypothesis, incorrect predictions are made and tested. When the incorrect predictions prove false, the hypothesis is claimed to be falsified. In particular, the hypothesis has been rejected based on the wrong assumptions that it conflicts with empirically defined joint stiffness values or that it is incompatible with violations of equifinality under certain velocity-dependent perturbations. Typically, such attempts use notions describing the control of movements of artificial systems in place of physiologically relevant ones. While appreciating constructive criticisms of the EP hypothesis, we feel that incorrect interpretations have to be clarified by reiterating what the EP hypothesis does and does not predict. We conclude that the recent claims of falsifying the EP hypothesis and the calls for its replacement by the EMG-force control hypothesis are unsubstantiated. The EP hypothesis goes far beyond the EMG-force control view. In particular, the former offers a resolution for the famous posture-movement paradox, while the latter fails to resolve it.

  10. On the performance of tests for the detection of signatures of selection: a case study with the Spanish autochthonous beef cattle populations.

    PubMed

    González-Rodríguez, Aldemar; Munilla, Sebastián; Mouresan, Elena F; Cañas-Álvarez, Jhon J; Díaz, Clara; Piedrafita, Jesús; Altarriba, Juan; Baro, Jesús Á; Molina, Antonio; Varona, Luis

    2016-10-28

    Procedures for the detection of signatures of selection can be classified according to the source of information they use to reject the null hypothesis of absence of selection. Three main groups of tests can be identified that are based on: (1) the analysis of the site frequency spectrum, (2) the study of the extension of the linkage disequilibrium across the length of the haplotypes that surround the polymorphism, and (3) the differentiation among populations. The aim of this study was to compare the performance of a subset of these procedures by using a dataset on seven Spanish autochthonous beef cattle populations. Analysis of the correlations between the logarithms of the statistics that were obtained by 11 tests for detecting signatures of selection at each single nucleotide polymorphism confirmed that they can be clustered into the three main groups mentioned above. A factor analysis summarized the results of the 11 tests into three canonical axes that were each associated with one of the three groups. Moreover, the signatures of selection identified with the first and second groups of tests were shared across populations, whereas those with the third group were more breed-specific. Nevertheless, an enrichment analysis identified the metabolic pathways that were associated with each group; they coincided with canonical axes and were related to immune response, muscle development, protein biosynthesis, skin and pigmentation, glucose metabolism, fat metabolism, embryogenesis and morphology, heart and uterine metabolism, regulation of the hypothalamic-pituitary-thyroid axis, hormonal, cellular cycle, cell signaling and extracellular receptors. We show that the results of the procedures used to identify signals of selection differed substantially between the three groups of tests. However, they can be classified using a factor analysis. 
Moreover, each canonical factor that coincided with a group of tests identified different signals of selection, which could be attributed to processes of selection that occurred at different evolutionary times. Nevertheless, the metabolic pathways that were associated with each group of tests were similar, which suggests that the selection events that occurred during the evolutionary history of the populations probably affected the same group of traits.
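
The clustering of test statistics described above can be illustrated with a small numpy sketch. Everything here is simulated: three latent axes stand in for the three families of tests, and the eleven "statistics" and their group sizes (4/4/3) are assumptions for illustration, not the study's actual tests.

```python
import numpy as np

rng = np.random.default_rng(0)
n_snps = 2000

# Three latent axes stand in for the three families of tests
# (site-frequency spectrum, haplotype/LD, population differentiation).
latent = rng.normal(size=(n_snps, 3))

# Eleven hypothetical test statistics, each loading mainly on one axis.
loadings = np.zeros((11, 3))
loadings[:4, 0] = 1.0
loadings[4:8, 1] = 1.0
loadings[8:, 2] = 1.0
stats = latent @ loadings.T + 0.3 * rng.normal(size=(n_snps, 11))

corr = np.corrcoef(stats, rowvar=False)        # 11 x 11 correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
explained = eigvals[:3].sum() / eigvals.sum()  # share captured by 3 axes
```

Here the three dominant eigenvalues of the correlation matrix recover the three groups, which is the essence of summarizing many tests with three canonical axes.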

  11. Action perception as hypothesis testing.

    PubMed

    Donnarumma, Francesco; Costantini, Marcello; Ambrosini, Ettore; Friston, Karl; Pezzulo, Giovanni

    2017-04-01

    We present a novel computational model that describes action perception as an active inferential process that combines motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions - and underlying hypotheses. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction; but uncertainty induces a more reactive gaze strategy, via tracking the observed movements. Our model offers a novel perspective on action observation that highlights its active nature based on prediction dynamics and hypothesis testing. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
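
The saccade-selection idea (directing gaze to wherever observations best disambiguate the hypotheses) can be sketched as expected information gain over a toy two-hypothesis, two-location problem. The likelihood values below are invented for illustration and are not the paper's generative model.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# likelihood[h, l]: P(a "consistent" cue at gaze location l | hypothesis h).
# Location 0 is diagnostic (the two hypotheses disagree there); location 1 is not.
likelihood = np.array([[0.9, 0.5],    # hypothesis A, e.g. "grasp"
                       [0.1, 0.5]])   # hypothesis B, e.g. "reach"
prior = np.array([0.5, 0.5])

def expected_info_gain(location):
    # average reduction in entropy over hypotheses, across both possible cues
    h0, gain = entropy(prior), 0.0
    for consistent in (True, False):
        like = likelihood[:, location] if consistent else 1 - likelihood[:, location]
        marg = float((prior * like).sum())
        post = prior * like / marg
        gain += marg * (h0 - entropy(post))
    return gain

best_saccade = max(range(2), key=expected_info_gain)   # -> location 0
```

The model saccades to the diagnostic location; an uninformative location yields zero expected gain, which mirrors the shift to reactive tracking when no fixation is predictive.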

  12. Recall and recognition hypermnesia for Socratic stimuli.

    PubMed

    Kazén, Miguel; Solís-Macías, Víctor M

    2016-01-01

    In two experiments, we investigate hypermnesia, net memory improvements with repeated testing of the same material after a single study trial. In the first experiment, we found hypermnesia across three trials for the recall of word solutions to Socratic stimuli (dictionary-like definitions of concepts) replicating Erdelyi, Buschke, and Finkelstein and, for the first time using these materials, for their recognition. In the second experiment, we had two "yes/no" recognition groups, a Socratic stimuli group presented with concrete and abstract verbal materials and a word-only control group. Using signal detection measures, we found hypermnesia for concrete Socratic stimuli-and stable performance for abstract stimuli across three recognition tests. The control group showed memory decrements across tests. We interpret these findings with the alternative retrieval pathways (ARP) hypothesis, contrasting it with alternative theories of hypermnesia, such as depth of processing, generation and retrieve-recognise. We conclude that recognition hypermnesia for concrete Socratic stimuli is a reliable phenomenon, which we found in two experiments involving both forced-choice and yes/no recognition procedures.
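
The signal detection analysis mentioned above can be sketched with Python's standard library: compute d' (z of hits minus z of false alarms) across successive recognition tests of the same material, where rising d' after a single study trial is the hypermnesia pattern. The hit and false-alarm rates below are hypothetical.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    # signal-detection sensitivity: z(hits) - z(false alarms)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical yes/no recognition outcomes on three successive tests:
# hits rise while false alarms stay roughly flat, so d' increases.
tests = [(0.60, 0.20), (0.66, 0.20), (0.71, 0.21)]
sensitivities = [d_prime(h, f) for h, f in tests]
hypermnesia = all(a < b for a, b in zip(sensitivities, sensitivities[1:]))
```

Using d' rather than raw hit rate is what lets the authors separate genuine net memory improvement from a mere shift in response bias across tests.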

  13. Early Adoption of a Multi-target Stool DNA Test for Colorectal Cancer Screening

    PubMed Central

    Finney Rutten, Lila J.; Jacobson, Robert M.; Wilson, Patrick M.; Jacobson, Debra J.; Fan, Chun; Kisiel, John B.; Sweetser, Seth R.; Tulledge-Scheitel, Sidna M.; St. Sauver, Jennifer L.

    2017-01-01

    Objective To characterize early adoption of a novel multi-target stool deoxyribonucleic acid (MT-sDNA) screening test for colorectal cancer (CRC) and test the hypothesis that adoption differs by demographic characteristics and prior CRC screening behavior, and proceeds predictably over time. Patients and Methods We used the Rochester Epidemiology Project infrastructure to assess MT-sDNA screening test use among adults aged 50–75 years, and identified 27,147 individuals eligible/due for screening colonoscopy from November 1, 2014 through November 30, 2015, and living in Olmsted County, Minnesota in 2014. We used electronic Current Procedural Terminology and Health Care Common Procedure codes to evaluate early adoption of the MT-sDNA screening test in this population and to test whether early adoption varies by age, sex, race, and prior screening behavior. Results Overall, 2,193 (8.1%) and 974 (3.6%) of individuals were screened by colonoscopy and MT-sDNA, respectively. Age, sex, race, and prior screening were significantly and independently associated with MT-sDNA screening use compared to colonoscopy use after adjustment for all other variables. Rates of adoption of MT-sDNA screening increased over time and were highest among those aged 50–54 years, females, whites, and those with a prior history of screening. MT-sDNA screening use varied predictably by insurance coverage. Rates of colonoscopy decreased over time, while overall CRC screening rates remained steady. Conclusion Our results are generally consistent with predictions derived from prior research and the Diffusion of Innovation framework, pointing to increasing use of the new screening test over time, and early adoption by younger patients, females, whites, and those with prior CRC screening. PMID:28473037

  14. Picture-Perfect Is Not Perfect for Metamemory: Testing the Perceptual Fluency Hypothesis with Degraded Images

    ERIC Educational Resources Information Center

    Besken, Miri

    2016-01-01

    The perceptual fluency hypothesis claims that items that are easy to perceive at encoding induce an illusion that they will be easier to remember, despite the finding that perception does not generally affect recall. The current set of studies tested the predictions of the perceptual fluency hypothesis with a picture generation manipulation.…

  15. Adolescents' Body Image Trajectories: A Further Test of the Self-Equilibrium Hypothesis

    ERIC Educational Resources Information Center

    Morin, Alexandre J. S.; Maïano, Christophe; Scalas, L. Francesca; Janosz, Michel; Litalien, David

    2017-01-01

    The self-equilibrium hypothesis underlines the importance of having a strong core self, which is defined as a high and developmentally stable self-concept. This study tested this hypothesis in relation to body image (BI) trajectories in a sample of 1,006 adolescents (M[subscript age] = 12.6, including 541 males and 465 females) across a 4-year…

  16. Does Merit-Based Aid Improve College Affordability? Testing the Bennett Hypothesis in the Era of Merit-Based Aid

    ERIC Educational Resources Information Center

    Lee, Jungmin

    2016-01-01

    This study tested the Bennett hypothesis by examining whether four-year colleges changed listed tuition and fees, the amount of institutional grants per student, and room and board charges after their states implemented statewide merit-based aid programs. According to the Bennett hypothesis, increases in government financial aid make it easier for…

  17. Human female orgasm as evolved signal: a test of two hypotheses.

    PubMed

    Ellsworth, Ryan M; Bailey, Drew H

    2013-11-01

    We present the results of a study designed to empirically test predictions derived from two hypotheses regarding human female orgasm behavior as an evolved communicative trait or signal. One hypothesis tested was the female fidelity hypothesis, which posits that human female orgasm signals a woman's sexual satisfaction and therefore her likelihood of future fidelity to a partner. The other was the sire choice hypothesis, which posits that women's orgasm behavior signals increased chances of fertilization. To test the two hypotheses of human female orgasm, we administered a questionnaire to 138 females and 121 males who reported that they were currently in a romantic relationship. Key predictions of the female fidelity hypothesis were not supported. In particular, orgasm was not associated with female sexual fidelity, nor was orgasm associated with male perceptions of partner sexual fidelity. However, faked orgasm was associated with female sexual infidelity and lower male relationship satisfaction. Overall, results were in greater support of the sire choice signaling hypothesis than the female fidelity hypothesis. Results also suggest that male satisfaction with, investment in, and sexual fidelity to a mate are benefits that favored the selection of orgasmic signaling in ancestral females.

  18. Sex-Biased Parental Investment among Contemporary Chinese Peasants: Testing the Trivers-Willard Hypothesis.

    PubMed

    Luo, Liqun; Zhao, Wei; Weng, Tangmei

    2016-01-01

    The Trivers-Willard hypothesis predicts that high-status parents will bias their investment to sons, whereas low-status parents will bias their investment to daughters. Among humans, tests of this hypothesis have yielded mixed results. This study tests the hypothesis using data collected among contemporary peasants in Central South China. We use current family status (rated by our informants) and father's former class identity (assigned by the Chinese Communist Party in the early 1950s) as measures of parental status, and proportion of sons in offspring and offspring's years of education as measures of parental investment. Results show that (i) those families with a higher former class identity such as landlord and rich peasant tend to have a higher socioeconomic status currently, (ii) high-status parents are more likely to have sons than daughters among their biological offspring, and (iii) in higher-status families, the years of education obtained by sons exceed that obtained by daughters to a larger extent than in lower-status families. Thus, the first assumption and the two predictions of the hypothesis are supported by this study. This article contributes a contemporary Chinese case to the testing of the Trivers-Willard hypothesis.

  19. Hypothesis testing of a change point during cognitive decline among Alzheimer's disease patients.

    PubMed

    Ji, Ming; Xiong, Chengjie; Grundman, Michael

    2003-10-01

    In this paper, we present a statistical hypothesis test for detecting a change point over the course of cognitive decline among Alzheimer's disease (AD) patients. The model under the null hypothesis assumes a constant rate of cognitive decline over time, and the model under the alternative hypothesis is a general bilinear model with an unknown change point. When the change point is unknown, however, the null distribution of the test statistic is not analytically tractable and has to be simulated by parametric bootstrap. When the alternative hypothesis that a change point exists is accepted, we propose an estimate of its location based on Akaike's Information Criterion. We applied our method to a data set from the Neuropsychological Database Initiative, implementing our hypothesis-testing method to analyze Mini Mental Status Exam (MMSE) scores based on a random-slope and random-intercept model with a bilinear fixed effect. Our results show that, despite a large amount of missing data, accelerated decline did occur for MMSE among AD patients. Our finding supports the clinical belief of the existence of a change point during cognitive decline among AD patients and suggests the use of change point models for the longitudinal modeling of cognitive decline in AD research.
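
A minimal numpy sketch of the testing scheme: compare a straight-line decline against a broken-stick (bilinear) model maximized over candidate change points, and simulate the null distribution of the likelihood-ratio statistic by parametric bootstrap. The data, candidate grid, and bootstrap size are illustrative assumptions, not the NDI data or the paper's exact random-effects implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def rss_linear(t, y):
    X = np.column_stack([np.ones_like(t), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

def rss_bilinear(t, y, tau):
    # broken stick: the extra slope applies only after the change point tau
    X = np.column_stack([np.ones_like(t), t, np.clip(t - tau, 0, None)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

def lr_stat(t, y, taus):
    r0 = rss_linear(t, y)
    r1 = min(rss_bilinear(t, y, tau) for tau in taus)
    return len(y) * np.log(r0 / r1)

# Synthetic decline with a true change point at t = 6 (accelerated decline).
t = np.linspace(0, 10, 60)
y = 30 - 0.5 * t - 1.5 * np.clip(t - 6, 0, None) + rng.normal(0, 1, t.size)
taus = np.linspace(2, 8, 13)
observed = lr_stat(t, y, taus)

# Parametric bootstrap of the null (constant-rate decline) distribution.
X0 = np.column_stack([np.ones_like(t), t])
b0, *_ = np.linalg.lstsq(X0, y, rcond=None)
sigma0 = np.sqrt(rss_linear(t, y) / (t.size - 2))
boot = [lr_stat(t, X0 @ b0 + rng.normal(0, sigma0, t.size), taus)
        for _ in range(200)]
p_value = np.mean([b >= observed for b in boot])
```

Maximizing over tau is why the null distribution is non-standard and must be bootstrapped; once the test rejects, the tau minimizing an AIC-type criterion estimates the change point's location.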

  20. Detection of Undocumented Changepoints Using Multiple Test Statistics and Composite Reference Series.

    NASA Astrophysics Data System (ADS)

    Menne, Matthew J.; Williams, Claude N., Jr.

    2005-10-01

    An evaluation of three hypothesis test statistics that are commonly used in the detection of undocumented changepoints is described. The goal of the evaluation was to determine whether the use of multiple tests could improve undocumented, artificial changepoint detection skill in climate series. The use of successive hypothesis testing is compared to optimal approaches, both of which are designed for situations in which multiple undocumented changepoints may be present. In addition, the importance of the form of the composite climate reference series is evaluated, particularly with regard to the impact of undocumented changepoints in the various component series that are used to calculate the composite. In a comparison of single test changepoint detection skill, the composite reference series formulation is shown to be less important than the choice of the hypothesis test statistic, provided that the composite is calculated from the serially complete and homogeneous component series. However, each of the evaluated composite series is not equally susceptible to the presence of changepoints in its components, which may be erroneously attributed to the target series. Moreover, a reference formulation that is based on the averaging of the first-difference component series is susceptible to random walks when the composition of the component series changes through time (e.g., values are missing), and its use is, therefore, not recommended. When more than one test is required to reject the null hypothesis of no changepoint, the number of detected changepoints is reduced proportionately less than the number of false alarms in a wide variety of Monte Carlo simulations. Consequently, a consensus of hypothesis tests appears to improve undocumented changepoint detection skill, especially when reference series homogeneity is violated.
A consensus of successive hypothesis tests using a semihierarchic splitting algorithm also compares favorably to optimal solutions, even when changepoints are not hierarchic.
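
A consensus rule of the kind evaluated here can be sketched with two common changepoint statistics, flagging a break only when both exceed their thresholds. The statistics below (a maximal two-sample t and a rank-based Pettitt-type statistic) are generic stand-ins, and the critical values are illustrative placeholders, not the tabulated ones used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def max_t(series):
    # maximal two-sample t statistic over candidate break positions
    n = len(series)
    best = 0.0
    for k in range(5, n - 5):
        a, b = series[:k], series[k:]
        sp = np.sqrt(((len(a) - 1) * a.var(ddof=1) +
                      (len(b) - 1) * b.var(ddof=1)) / (n - 2))
        t = abs(a.mean() - b.mean()) / (sp * np.sqrt(1 / len(a) + 1 / len(b)))
        best = max(best, t)
    return best

def pettitt(series):
    # rank-based Pettitt-type statistic, scaled to be comparable across n
    n = len(series)
    r = series.argsort().argsort() + 1            # ranks 1..n
    u = np.abs(2 * np.cumsum(r)[:-1] - np.arange(1, n) * (n + 1))
    return u.max() / (n * np.sqrt(n))

def consensus_changepoint(series, t_crit=3.2, p_crit=0.8):
    # thresholds are illustrative placeholders, not tabulated critical values
    return max_t(series) > t_crit and pettitt(series) > p_crit

shifted = np.concatenate([rng.normal(0, 1, 50), rng.normal(1.5, 1, 50)])
homogeneous = rng.normal(0, 1, 100)
```

Requiring agreement costs some detections but suppresses false alarms disproportionately, which is the trade-off the Monte Carlo results above quantify.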

  1. A Smart Unconscious? Procedural Origins of Automatic Partner Attitudes in Marriage

    PubMed Central

    Murray, Sandra L.; Holmes, John G.; Pinkus, Rebecca T.

    2010-01-01

    The paper examines potential origins of automatic (i.e., unconscious) attitudes toward one’s marital partner. It tests the hypothesis that early experiences in conflict-of-interest situations predict one’s later automatic inclination to approach (or avoid) the partner. A longitudinal study linked daily experiences in conflict-of-interest situations in the initial months of new marriages to automatic evaluations of the partner assessed four years later using the Implicit Associations Test. The results revealed that partners who were initially (1) treated less responsively and (2) evidenced more self-protective and less connectedness-promoting “if-then” contingencies in their thoughts and behavior later evidenced less positive automatic partner attitudes. However, these factors did not predict changes in love, satisfaction, or explicit beliefs about the partner. The findings hint at the existence of a “smart” relationship unconscious that captures behavioral realities conscious reflection can miss. PMID:20526450

  2. The embodied nature of motor imagery processes highlighted by short-term limb immobilization.

    PubMed

    Meugnot, Aurore; Almecija, Yves; Toussaint, Lucette

    2014-01-01

    We investigated the embodied nature of motor imagery processes through a recent use-dependent plasticity approach, a short-term limb immobilization paradigm. A splint placed on the participants' left-hand during a brief period of 24 h was used for immobilization. The immobilized participants performed two mental rotation tasks (a hand mental rotation task and a number mental rotation task) before (pre-test) and immediately after (post-test) the splint removal. The control group did not undergo the immobilization procedure. The main results showed an immobilization-induced effect on left-hand stimuli, resulting in a lack of task-repetition benefit. By contrast, accuracy was higher and response times were shorter for right-hand stimuli. No immobilization-induced effects appeared for number stimuli. These results revealed that the cognitive representation of hand movements can be modified by a brief period of sensorimotor deprivation, supporting the hypothesis of the embodied nature of motor simulation processes.

  3. Visualization-based analysis of multiple response survey data

    NASA Astrophysics Data System (ADS)

    Timofeeva, Anastasiia

    2017-11-01

    During a survey, respondents are often allowed to tick more than one answer option for a question. Analysis and visualization of such data are difficult because of the need to process multiple response variables. With standard representations such as pie and bar charts, information about the association between different answer options is lost. The author proposes a visualization approach for multiple response variables based on Venn diagrams. For a more informative representation with a large number of overlapping groups, it is suggested to use similarity and association matrices. Some aggregate indicators of dissimilarity (similarity) are proposed based on the determinant of the similarity matrix and the maximum eigenvalue of the association matrix. The application of the proposed approaches is illustrated by the example of the analysis of advertising sources. Intersection of sets indicates that the same consumer audience is covered by several advertising sources. This information is very important for the allocation of the advertising budget. The differences between target groups in advertising sources are of interest. To identify such differences, hypotheses of homogeneity and independence are tested. Recent approaches to the problem are briefly reviewed and compared. An alternative procedure is suggested. It is based on partitioning a consumer audience into pairwise disjoint subsets and includes hypothesis testing of the difference between the population proportions. It turned out to be more suitable for the real problem being solved.
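
The proposed aggregate indicators can be sketched in numpy: build a Jaccard similarity matrix over answer options, then summarize overlap via its determinant and largest eigenvalue. The response matrix below is simulated, and the 1 − det(S) form is one plausible reading of the determinant-based dissimilarity indicator, not necessarily the author's exact definition.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 0/1 ticks: 300 respondents x 4 advertising sources, where
# sources 0 and 1 reach nearly the same audience (strong overlap).
seen = rng.random(300) < 0.5
responses = np.column_stack([
    seen,
    seen ^ (rng.random(300) < 0.1),   # source 1 mostly mirrors source 0
    rng.random(300) < 0.4,
    rng.random(300) < 0.3,
]).astype(int)

def jaccard_similarity(X):
    inter = X.T @ X                                   # co-mention counts
    counts = X.sum(axis=0)
    union = counts[:, None] + counts[None, :] - inter
    return inter / np.maximum(union, 1)

S = jaccard_similarity(responses)
overlap_index = 1.0 - np.linalg.det(S)    # nearer 1: audiences overlap heavily
dominant = np.max(np.linalg.eigvalsh(S))  # weight of the largest shared block
```

A determinant near zero (overlap index near one) means the option sets cover nearly the same respondents, which is exactly the budget-allocation signal discussed in the abstract.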

  4. [Toward exploration of morphological diversity of measurable traits of mammalian skull. 2. Scalar and vector parameters of the forms of group variation].

    PubMed

    Lisovskiĭ, A A; Pavlinov, I Ia

    2008-01-01

    Any morphospace is partitioned by the forms of group variation; its structure is described by a set of scalar (range, overlap) and vector (direction) characteristics. These are analyzed quantitatively for sex and age variation in a sample of 200 skulls of the pine marten described by 14 measurable traits. Standard dispersion and variance-components analyses are employed, accompanied by several resampling methods (randomization and bootstrap); effects of changes in the analysis design on the results of these methods are also considered. The maximum-likelihood algorithm of variance-components analysis is shown to give adequate estimates of the portions of particular forms of group variation within the overall disparity. It is quite stable with respect to changes in the analysis design and therefore could be used in explorations of real data with variously unbalanced designs. A new algorithm for estimating the co-directionality of particular forms of group variation within the overall disparity is elaborated, which includes angle measures between eigenvectors of covariance matrices of the effects of group variation calculated by dispersion analysis. A null hypothesis of a random portion of a given group variation can be tested by means of randomization of the respective grouping variable. A null hypothesis of equality of both portions and directionalities of different forms of group variation can be tested by means of the bootstrap procedure.
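
The angle-between-eigenvectors measure and the randomization null can be sketched as follows. The data are simulated, and comparing only the leading eigenvectors is a simplification of comparing full sets of eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(4)

def principal_axis_angle(A, B):
    # angle in degrees between the leading eigenvectors of two covariance matrices
    va = np.linalg.eigh(A)[1][:, -1]
    vb = np.linalg.eigh(B)[1][:, -1]
    cos = abs(va @ vb)                  # an eigenvector's sign is arbitrary
    return np.degrees(np.arccos(np.clip(cos, 0.0, 1.0)))

# Hypothetical trait samples for two groups (e.g. sexes), 14 traits each,
# drawn from the same underlying covariance structure.
n, p = 100, 14
shared = rng.normal(size=(p, p))
group1 = rng.normal(size=(n, p)) @ shared
group2 = rng.normal(size=(n, p)) @ shared

angle = principal_axis_angle(np.cov(group1, rowvar=False),
                             np.cov(group2, rowvar=False))

# Randomization null: shuffle group labels and recompute the angle.
pooled = np.vstack([group1, group2])
null_angles = []
for _ in range(200):
    perm = rng.permutation(2 * n)
    g1, g2 = pooled[perm[:n]], pooled[perm[n:]]
    null_angles.append(principal_axis_angle(np.cov(g1, rowvar=False),
                                            np.cov(g2, rowvar=False)))
p_value = np.mean([a <= angle for a in null_angles])
```

Permuting the grouping variable generates the null for co-directionality; a bootstrap over individuals would analogously test equality of portions and directions between two forms of variation.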

  5. WH Craib: a critical account of his work

    PubMed Central

    Naidoo, DP

    2009-01-01

    Summary One hundred years after its introduction, the ECG remains the most commonly used cardiovascular laboratory procedure. It fulfils all the requirements of a diagnostic test: it is non-invasive, simple to record, highly reproducible and can be applied serially. It is the first laboratory test to be performed in a patient with chest pain, syncope or cardiac arrhythmias. It is also a prognostic tool that aids in risk stratification and clinical management. Among the many South Africans who have made remarkable contributions in the field of electrocardiography, Don Craib was the first to investigate the changing patterns of the ECG action potential in isolated skeletal muscle strips under varying conditions. It was during his stay at Johns Hopkins Hospital in Baltimore and Sir Thomas Lewis's laboratory in London that Craib made singular observations about the fundamental origins of electrical signals in the skeletal muscle, and from these developed his hypothesis on the generation of the action potential in the electrocardiogram. His proposals went contrary to scientific opinion at the time and he was rebuffed by the scientific community. Frank Wilson subsequently went on to develop Craib’s doublet hypothesis into the dipole theory, acknowledging Craib’s work. Today the dipole theory is fundamental to the understanding of the spread of electrical activation in the myocardium and the genesis of the action potential. PMID:19287808

  6. Testing for purchasing power parity in the long-run for ASEAN-5

    NASA Astrophysics Data System (ADS)

    Choji, Niri Martha; Sek, Siok Kun

    2017-04-01

    For more than a decade, there has been substantial interest in empirically testing the validity of the purchasing power parity (PPP) hypothesis. This paper tests for long-run relative purchasing power parity for a group of ASEAN-5 countries over the period 1996-2016 using monthly data. For this purpose, we used the Pedroni co-integration method to test the long-run hypothesis of purchasing power parity. We first tested for the stationarity of the variables and found that the variables are non-stationary in levels but stationary at first difference. Results of the Pedroni test rejected the null hypothesis of no co-integration, meaning that we have enough evidence to support PPP in the long run for the ASEAN-5 countries over the period 1996-2016. In other words, the rejection of the null hypothesis implies a long-run relation between nominal exchange rates and relative prices.
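
Pedroni's panel test pools unit-root evidence across countries; a single-country, Engle-Granger-style sketch conveys the core two-step logic (cointegrating regression, then a unit-root test on its residuals). The data are simulated so that PPP holds, and the quoted critical value is an approximate Engle-Granger 5% value given for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 240  # monthly observations, matching a 1996-2016 span

# Hypothetical cointegrated pair: the log exchange rate tracks relative
# prices with only stationary deviations (PPP holding in the long run).
rel_price = np.cumsum(rng.normal(0, 0.01, T))    # non-stationary (random walk)
exch = rel_price + rng.normal(0, 0.02, T)        # + stationary PPP gap

def df_tstat(series):
    # Dickey-Fuller regression without constant: d(u_t) = rho * u_{t-1} + e_t
    du, lag = np.diff(series), series[:-1]
    rho = (lag @ du) / (lag @ lag)
    resid = du - rho * lag
    se = np.sqrt(resid @ resid / (len(du) - 1)) / np.sqrt(lag @ lag)
    return rho / se

# Step 1: cointegrating regression; Step 2: unit-root test on residuals.
X = np.column_stack([np.ones(T), rel_price])
beta, *_ = np.linalg.lstsq(X, exch, rcond=None)
resid = exch - X @ beta
t_resid = df_tstat(resid)

# A strongly negative t favours cointegration, i.e. long-run PPP.
# (-3.37 approximates an Engle-Granger 5% critical value; illustrative.)
ppp_supported = t_resid < -3.37
```

The panel version gains power by combining such residual-based statistics across the five countries, which is why Pedroni's method suits a small country group with a moderate time span.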

  7. UNIFORMLY MOST POWERFUL BAYESIAN TESTS

    PubMed Central

    Johnson, Valen E.

    2014-01-01

    Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerful Bayesian tests to be tests that maximize the probability that the Bayes factor, in favor of the alternative hypothesis, exceeds a specified threshold. Like their classical counterpart, uniformly most powerful Bayesian tests are most easily defined in one-parameter exponential family models, although extensions outside of this class are possible. The connection between uniformly most powerful tests and uniformly most powerful Bayesian tests can be used to provide an approximate calibration between p-values and Bayes factors. Finally, issues regarding the strong dependence of resulting Bayes factors and p-values on sample size are discussed. PMID:24659829
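
For the one-parameter normal-mean case, the UMPBT construction is explicit enough to compute directly: the point alternative that maximizes the probability that the Bayes factor exceeds γ is μ* = σ√(2 ln γ / n), and the Bayes factor crosses γ exactly where a classical one-sided z-test with critical value √(2 ln γ) rejects, which is the p-value/Bayes-factor calibration mentioned above. A small sketch (σ = 1 assumed):

```python
import math

def umpbt_normal_mean(gamma, n, sigma=1.0):
    # UMPBT alternative for H0: mu = 0 vs mu > 0 with known sigma
    return sigma * math.sqrt(2.0 * math.log(gamma) / n)

def bayes_factor(xbar, mu, n, sigma=1.0):
    # Bayes factor for the point alternative mu against mu = 0
    return math.exp(n * mu * xbar / sigma**2 - n * mu**2 / (2 * sigma**2))

n, gamma = 25, 10.0
mu_star = umpbt_normal_mean(gamma, n)

# The Bayes factor equals gamma exactly at xbar = mu_star, so the Bayesian
# rejection region coincides with a one-sided z-test at z = sqrt(2 ln gamma).
bf_at_threshold = bayes_factor(mu_star, mu_star, n)
implied_z = math.sqrt(2 * math.log(gamma))   # between the 1% and 5% z levels
```

Note how the evidence threshold γ, not the sample size, fixes the implied z level, which is the source of the strong sample-size dependence the article discusses.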

  8. Empirical null estimation using zero-inflated discrete mixture distributions and its application to protein domain data.

    PubMed

    Gauran, Iris Ivy M; Park, Junyong; Lim, Johan; Park, DoHwan; Zylstra, John; Peterson, Thomas; Kann, Maricel; Spouge, John L

    2017-09-22

    In recent mutation studies, analyses based on protein domain positions are gaining popularity over gene-centric approaches, since the latter have limitations in considering the functional context that the position of the mutation provides. This presents a large-scale simultaneous inference problem, with hundreds of hypothesis tests to consider at the same time. This article aims to select significant mutation counts while controlling a given level of Type I error via False Discovery Rate (FDR) procedures. One main assumption is that the mutation counts follow a zero-inflated model in order to account for the true zeros in the count model and the excess zeros. The class of models considered is the Zero-inflated Generalized Poisson (ZIGP) distribution. Furthermore, we assumed that there exists a cut-off value such that smaller counts than this value are generated from the null distribution. We present several data-dependent methods to determine the cut-off value. We also consider a two-stage procedure based on a screening process, in which mutation counts exceeding a certain value are considered significant. Simulated and protein domain data sets are used to illustrate this procedure in estimation of the empirical null using a mixture of discrete distributions. Overall, while maintaining control of the FDR, the proposed two-stage testing procedure has superior empirical power. © 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of the International Biometric Society.
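
The overall shape of the procedure (an excess-zero null, upper-tail p-values, FDR control) can be sketched with a zero-inflated Poisson stand-in for the ZIGP model. The simulated counts, the crude null-rate estimate, and the plain Benjamini-Hochberg step are illustrative simplifications of the paper's method.

```python
import math
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical mutation counts per domain position: excess zeros, a Poisson
# null bulk, and a few truly elevated positions.
n_null, n_signal = 950, 50
null_counts = rng.poisson(1.0, n_null) * (rng.random(n_null) > 0.4)
signal_counts = rng.poisson(8.0, n_signal)
counts = np.concatenate([null_counts, signal_counts])

def poisson_sf(k, lam):
    # P(X >= k) for X ~ Poisson(lam)
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))

# Upper-tail p-values under a crudely estimated non-zero null rate
# (counts in 1..4 are treated as the null bulk below the cut-off).
lam_hat = counts[(counts > 0) & (counts <= 4)].mean()
pvals = np.array([poisson_sf(int(k), lam_hat) for k in counts])

def benjamini_hochberg(p, q=0.05):
    order = np.argsort(p)
    m = len(p)
    passed = np.nonzero(np.sort(p) <= q * (np.arange(1, m + 1) / m))[0]
    k = passed.max() + 1 if passed.size else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

significant = benjamini_hochberg(pvals)
```

The cut-off below which counts are treated as null is the quantity the paper estimates by data-dependent methods; here it is simply fixed at 4 for illustration.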

  9. Protein oxidation and aging. I. Difficulties in measuring reactive protein carbonyls in tissues using 2,4-dinitrophenylhydrazine.

    PubMed

    Cao, G; Cutler, R G

    1995-06-20

    A current hypothesis explaining the aging process implicates the accumulation of oxidized protein in animal tissues. This hypothesis is based on a series of reports showing an age-dependent increase in protein carbonyl content and an age-dependent loss of enzyme function. This hypothesis is also supported by the report of a novel effect of N-tert-butyl-alpha-phenylnitrone (PBN) in reversing these age-dependent changes. Here we specifically study the method that was used to measure reactive protein carbonyls in tissues. This method uses 2,4-dinitrophenylhydrazine (DNPH) and includes a washing procedure. Our results indicate that reactive protein carbonyls in normal crude tissue extracts cannot be reliably measured by this method, although it does reliably measure reactive carbonyls in purified proteins which have been oxidatively modified in vitro. The nucleic acids in tissues could be a major problem encountered in the assay. Using the streptomycin sulfate treatment combined with a dialysis step, we were successful in removing most nucleic acids from a crude tissue extract, but then the reactive carbonyl level in the crude tissue extract was too low to be reliably measured. This streptomycin sulfate treatment procedure, however, had no effect on the reactive carbonyl measurement of an oxidized protein sample. The unwashed free DNPH was another major problem in the assay because of its very strong absorption around 370 nm, where reactive carbonyls were quantitated. Nevertheless, on using the procedure described in the literature to measure total "reactive carbonyls" in rat liver and gerbil brain cortex, no change with age or PBN treatment was found. Then, we investigated a HPLC procedure which uses sodium dodecyl sulfate in the mobile phase but this was also found to be unsuitable for the reactive protein carbonyl assay in tissues.

  10. Indexing the relative abundance of age-0 white sturgeons in an impoundment of the lower Columbia River from highly skewed trawling data

    USGS Publications Warehouse

    Counihan, T.D.; Miller, Allen I.; Parsley, M.J.

    1999-01-01

    The development of recruitment monitoring programs for age-0 white sturgeons Acipenser transmontanus is complicated by the statistical properties of catch-per-unit-effort (CPUE) data. We found that age-0 CPUE distributions from bottom trawl surveys violated assumptions of statistical procedures based on normal probability theory. Further, no single data transformation uniformly satisfied these assumptions because CPUE distribution properties varied with the sample mean (x̄CPUE). Given these analytic problems, we propose that an additional index of age-0 white sturgeon relative abundance, the proportion of positive tows (Ep), be used to estimate sample sizes before conducting age-0 recruitment surveys and to evaluate statistical hypothesis tests comparing the relative abundance of age-0 white sturgeons among years. Monte Carlo simulations indicated that Ep was consistently more precise than x̄CPUE, and because Ep is binomially rather than normally distributed, surveys can be planned and analyzed without violating the assumptions of procedures based on normal probability theory. However, we show that Ep may underestimate changes in relative abundance at high levels and confound our ability to quantify responses to management actions if relative abundance is consistently high. If data suggest that most samples will contain age-0 white sturgeons, estimators of relative abundance other than Ep should be considered. Because Ep may also obscure correlations to climatic and hydrologic variables if high abundance levels are present in time series data, we recommend x̄CPUE be used to describe relations to environmental variables. The use of both Ep and x̄CPUE will facilitate the evaluation of hypothesis tests comparing relative abundance levels and correlations to variables affecting age-0 recruitment. Estimated sample sizes for surveys should therefore be based on detecting predetermined differences in Ep, but data necessary to calculate x̄CPUE should also be collected.
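    Because Ep is a binomial proportion, survey sample sizes can be planned with the standard normal-approximation formula for detecting a difference between two proportions. The sketch below is illustrative only; the function name and the textbook formula are ours, not the authors' procedure:

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size needed to detect a change in a binomial
    proportion (e.g. the proportion of positive tows, Ep) from p1 to p2,
    using the usual normal approximation for a two-sided test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_b = NormalDist().inv_cdf(power)           # power quantile
    p_bar = (p1 + p2) / 2
    n = ((z_a * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return math.ceil(n)
```

Note how the required n grows rapidly as the two proportions approach each other, and that the formula becomes uninformative when Ep saturates near 1, mirroring the abstract's caveat about consistently high abundance.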

  11. [Experimental testing of Pflüger's reflex hypothesis of menstruation in late 19th century].

    PubMed

    Simmer, H H

    1980-07-01

    Pflüger's hypothesis of a nerve reflex as the cause of menstruation, published in 1865 and accepted by many, nonetheless did not lead to experimental investigations for 25 years. According to this hypothesis, the nerve reflex starts in the ovary with an increase of intraovarian pressure produced by the growing follicles. In 1884 Adolph Kehrer proposed a program to test the nerve reflex, but only in 1890 did Cohnstein artificially increase the intraovarian pressure in women, by bimanual compression from the outside and through the vagina. His results were not convincing. Six years later, Strassmann injected fluids into the ovaries of animals and obtained changes in the uterus resembling those of oestrus. His results seemed to verify a prediction derived from Pflüger's hypothesis. Thus, after a long interval, that hypothesis had become a paradigm. Though reasons can be given for the delay, it is little understood why experimental testing started so late.

  12. Incidental learning of sound categories is impaired in developmental dyslexia.

    PubMed

    Gabay, Yafit; Holt, Lori L

    2015-12-01

    Developmental dyslexia is commonly thought to arise from specific phonological impairments. However, recent evidence is consistent with the possibility that phonological impairments arise as symptoms of an underlying dysfunction of procedural learning. The nature of the link between impaired procedural learning and phonological dysfunction is unresolved. Motivated by the observation that speech processing involves the acquisition of procedural category knowledge, the present study investigates the possibility that procedural learning impairment may affect phonological processing by interfering with the typical course of phonetic category learning. The present study tests this hypothesis while controlling for linguistic experience and possible speech-specific deficits by comparing auditory category learning across artificial, nonlinguistic sounds among dyslexic adults and matched controls in a specialized first-person shooter videogame that has been shown to engage procedural learning. Nonspeech auditory category learning was assessed online via within-game measures and also with a post-training task involving overt categorization of familiar and novel sound exemplars. Each measure reveals that dyslexic participants do not acquire procedural category knowledge as effectively as age- and cognitive-ability matched controls. This difference cannot be explained by differences in perceptual acuity for the sounds. Moreover, poor nonspeech category learning is associated with slower phonological processing. Whereas phonological processing impairments have been emphasized as the cause of dyslexia, the current results suggest that impaired auditory category learning, general in nature and not specific to speech signals, could contribute to phonological deficits in dyslexia with subsequent negative effects on language acquisition and reading. Implications for the neuro-cognitive mechanisms of developmental dyslexia are discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Incidental Learning of Sound Categories is Impaired in Developmental Dyslexia

    PubMed Central

    Gabay, Yafit; Holt, Lori L.

    2015-01-01

    Developmental dyslexia is commonly thought to arise from specific phonological impairments. However, recent evidence is consistent with the possibility that phonological impairments arise as symptoms of an underlying dysfunction of procedural learning. The nature of the link between impaired procedural learning and phonological dysfunction is unresolved. Motivated by the observation that speech processing involves the acquisition of procedural category knowledge, the present study investigates the possibility that procedural learning impairment may affect phonological processing by interfering with the typical course of phonetic category learning. The present study tests this hypothesis while controlling for linguistic experience and possible speech-specific deficits by comparing auditory category learning across artificial, nonlinguistic sounds among dyslexic adults and matched controls in a specialized first-person shooter videogame that has been shown to engage procedural learning. Nonspeech auditory category learning was assessed online via within-game measures and also with a post-training task involving overt categorization of familiar and novel sound exemplars. Each measure reveals that dyslexic participants do not acquire procedural category knowledge as effectively as age- and cognitive-ability matched controls. This difference cannot be explained by differences in perceptual acuity for the sounds. Moreover, poor nonspeech category learning is associated with slower phonological processing. Whereas phonological processing impairments have been emphasized as the cause of dyslexia, the current results suggest that impaired auditory category learning, general in nature and not specific to speech signals, could contribute to phonological deficits in dyslexia with subsequent negative effects on language acquisition and reading. Implications for the neuro-cognitive mechanisms of developmental dyslexia are discussed. PMID:26409017

  14. When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment.

    PubMed

    Szucs, Denes; Ioannidis, John P A

    2017-01-01

    Null hypothesis significance testing (NHST) has several shortcomings that are likely contributing factors behind the widely debated replication crisis of (cognitive) neuroscience, psychology, and biomedical science in general. We review these shortcomings and suggest that, after sustained negative experience, NHST should no longer be the default, dominant statistical practice of all biomedical and psychological research. If theoretical predictions are weak, we should not rely on all-or-nothing hypothesis tests. Different inferential methods may be most suitable for different types of research questions. Whenever researchers use NHST, they should justify its use and publish pre-study power calculations and effect sizes, including negative findings. Hypothesis-testing studies should be pre-registered and, optimally, raw data published. The current "statistics lite" educational approach for students that has sustained the widespread, spurious use of NHST should be phased out.
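    A pre-study power calculation of the kind the authors call for can be as simple as the following normal-approximation sketch for a two-sample comparison (illustrative; not from the paper):

```python
from statistics import NormalDist

def approx_power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample comparison for a
    standardized effect size d (Cohen's d) with n_per_group subjects
    per arm, using the normal approximation to the t distribution."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    noncentrality = abs(d) * (n_per_group / 2) ** 0.5
    return NormalDist().cdf(noncentrality - z_a)
```

For a medium effect (d = 0.5), roughly 64 subjects per group are needed for about 80% power, which is far more than many underpowered studies collect.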

  15. When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment

    PubMed Central

    Szucs, Denes; Ioannidis, John P. A.

    2017-01-01

    Null hypothesis significance testing (NHST) has several shortcomings that are likely contributing factors behind the widely debated replication crisis of (cognitive) neuroscience, psychology, and biomedical science in general. We review these shortcomings and suggest that, after sustained negative experience, NHST should no longer be the default, dominant statistical practice of all biomedical and psychological research. If theoretical predictions are weak, we should not rely on all-or-nothing hypothesis tests. Different inferential methods may be most suitable for different types of research questions. Whenever researchers use NHST, they should justify its use and publish pre-study power calculations and effect sizes, including negative findings. Hypothesis-testing studies should be pre-registered and, optimally, raw data published. The current "statistics lite" educational approach for students that has sustained the widespread, spurious use of NHST should be phased out. PMID:28824397

  16. Analyzing thematic maps and mapping for accuracy

    USGS Publications Warehouse

    Rosenfield, G.H.

    1982-01-01

    Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table, sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors of commission, and the remaining elements of the columns represent errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by either the row totals or the column totals from the original classification error matrices. In hypothesis testing, when the results of tests of multiple sample cases prove to be significant, some form of statistical test must be used to separate any results that differ significantly from the others. In the past, many analyses of the data in this error matrix were made by comparing the relative magnitudes of the percentage of correct classifications, for either individual categories, the entire map, or both. More rigorous analyses have used data transformations and/or two-way classification analysis of variance. A more sophisticated approach would be to analyze the entire classification error matrices using the methods of discrete multivariate analysis or of multivariate analysis of variance.
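    The error-matrix bookkeeping described above is mechanical enough to sketch in code (a hypothetical helper; rows are interpretation, columns are verification, as in the abstract):

```python
def accuracy_summary(matrix):
    """Overall accuracy plus per-category commission and omission
    accuracies from a square classification error matrix whose rows
    are the interpretation and whose columns are the verification."""
    k = len(matrix)
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(k))
    row_totals = [sum(matrix[i]) for i in range(k)]
    col_totals = [sum(matrix[i][j] for i in range(k)) for j in range(k)]
    # Off-diagonal row entries are errors of commission, so the diagonal
    # over the row total is 1 minus the commission error rate.
    commission = [matrix[i][i] / row_totals[i] for i in range(k)]
    # Off-diagonal column entries are errors of omission.
    omission = [matrix[j][j] / col_totals[j] for j in range(k)]
    return correct / total, commission, omission
```

The binomial proportions mentioned in the abstract are exactly these per-category ratios, which is what makes them usable in follow-up contingency-table tests.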

  17. Testing fundamental ecological concepts with a Pythium-Prunus pathosystem

    USDA-ARS?s Scientific Manuscript database

    The study of plant-pathogen interactions has enabled tests of basic ecological concepts on plant community assembly (Janzen-Connell Hypothesis) and plant invasion (Enemy Release Hypothesis). We used a field experiment to (#1) test whether Pythium effects depended on host (seedling) density and/or d...

  18. A checklist to facilitate objective hypothesis testing in social psychology research.

    PubMed

    Washburn, Anthony N; Morgan, G Scott; Skitka, Linda J

    2015-01-01

    Social psychology is not a very politically diverse area of inquiry, something that could negatively affect the objectivity of social psychological theory and research, as Duarte et al. argue in the target article. This commentary offers a number of checks to help researchers uncover possible biases and identify when they are engaging in hypothesis confirmation and advocacy instead of hypothesis testing.

  19. Testing the stress-gradient hypothesis during the restoration of tropical degraded land using the shrub Rhodomyrtus tomentosa as a nurse plant

    Treesearch

    Nan Liu; Hai Ren; Sufen Yuan; Qinfeng Guo; Long Yang

    2013-01-01

    The relative importance of facilitation and competition between pairwise plants across abiotic stress gradients as predicted by the stress-gradient hypothesis has been confirmed in arid and temperate ecosystems, but the hypothesis has rarely been tested in tropical systems, particularly across nutrient gradients. The current research examines the interactions between a...

  20. Phase II Clinical Trials: D-methionine to Reduce Noise-Induced Hearing Loss

    DTIC Science & Technology

    2012-03-01

    loss (NIHL) and tinnitus in our troops. Hypotheses: Primary Hypothesis: Administration of oral D-methionine prior to and during weapons...reduce or prevent noise-induced tinnitus . Primary outcome to test the primary hypothesis: Pure tone air-conduction thresholds. Primary outcome to...test the secondary hypothesis: Tinnitus questionnaires. Specific Aims: 1. To determine whether administering oral D-methionine (D-met) can

  1. Cost-Minimization Analysis of Open and Endoscopic Carpal Tunnel Release.

    PubMed

    Zhang, Steven; Vora, Molly; Harris, Alex H S; Baker, Laurence; Curtin, Catherine; Kamal, Robin N

    2016-12-07

    Carpal tunnel release is the most common upper-limb surgical procedure performed annually in the U.S. There are 2 surgical methods of carpal tunnel release: open or endoscopic. Currently, there is no clear clinical or economic evidence supporting the use of one procedure over the other. We completed a cost-minimization analysis of open and endoscopic carpal tunnel release, testing the null hypothesis that there is no difference between the procedures in terms of cost. We conducted a retrospective review using a private-payer and Medicare Advantage database composed of 16 million patient records from 2007 to 2014. The cohort consisted of records with an ICD-9 (International Classification of Diseases, Ninth Revision) diagnosis of carpal tunnel syndrome and a CPT (Current Procedural Terminology) code for carpal tunnel release. Payer fees were used to define cost. We also assessed other associated costs of care, including those of electrodiagnostic studies and occupational therapy. Bivariate comparisons were performed using the chi-square test and the Student t test. Data showed that 86% of the patients underwent open carpal tunnel release. Reimbursement fees for endoscopic release were significantly higher than for open release. Facility fees were responsible for most of the difference between the procedures in reimbursement: facility fees averaged $1,884 for endoscopic release compared with $1,080 for open release (p < 0.0001). Endoscopic release also demonstrated significantly higher physician fees than open release (an average of $555 compared with $428; p < 0.0001). Occupational therapy fees associated with endoscopic release were less than those associated with open release (an average of $237 per session compared with $272; p = 0.07). The total average annual reimbursement per patient for endoscopic release (facility, surgeon, and occupational therapy fees) was significantly higher than for open release ($2,602 compared with $1,751; p < 0.0001). Our data showed that the total average fees per patient for endoscopic release were significantly higher than those for open release, although there currently is no strong evidence supporting better clinical outcomes of either technique. Value-based health-care models that favor delivering high-quality care and improving patient health, while also minimizing costs, may favor open carpal tunnel release.
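    The bivariate fee comparisons in the abstract rest on the pooled two-sample Student t statistic, which can be written out directly (a generic textbook implementation, not the study's analysis code):

```python
import math

def students_t(sample1, sample2):
    """Pooled two-sample Student t statistic; the p-value would come
    from the t distribution with len(sample1) + len(sample2) - 2
    degrees of freedom."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = sum(sample1) / n1, sum(sample2) / n2
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    pooled_var = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var * (1 / n1 + 1 / n2))
```

With the very large record counts reported here, even modest fee differences yield large t statistics and the small p-values quoted in the abstract.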

  2. An omnibus test for the global null hypothesis.

    PubMed

    Futschik, Andreas; Taus, Thomas; Zehetmayer, Sonja

    2018-01-01

    Global hypothesis tests are a useful tool in the context of clinical trials, genetic studies, or meta-analyses, when researchers are not interested in testing individual hypotheses but in testing whether none of the hypotheses is false. There are several ways to test the global null hypothesis when the individual null hypotheses are independent. If it is assumed that many of the individual null hypotheses are false, combination tests have been recommended to maximize power. If, however, it is assumed that only one or a few null hypotheses are false, global tests based on individual test statistics are more powerful (e.g. the Bonferroni or Simes test). However, usually there is no a priori knowledge of the number of false individual null hypotheses. We therefore propose an omnibus test based on cumulative sums of the transformed p-values. We show that this test yields an impressive overall performance. The proposed method is implemented in an R package called omnibus.
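    The exact statistic is defined in the authors' R package omnibus; the sketch below only illustrates the general idea of scanning cumulative sums of transformed p-values, so that small k is sensitive to a few strong signals and large k to many weak ones (the names and the crude standardization are our assumptions):

```python
import math

def omnibus_statistic(pvalues):
    """Illustrative cumulative-sum scan over transformed p-values.
    Under an individual null, -log(p) is Exp(1); we sort the transforms
    in decreasing order, form partial sums, center and scale each
    against the iid Exp(1) benchmark (mean k, variance k), and take the
    maximum. This centering is only a rough device for illustration:
    the null distribution of the maximum would be calibrated by Monte
    Carlo simulation in practice."""
    z = sorted((-math.log(p) for p in pvalues), reverse=True)
    partial, best = 0.0, float("-inf")
    for k, zk in enumerate(z, start=1):
        partial += zk
        best = max(best, (partial - k) / math.sqrt(k))
    return best
```

A single tiny p-value dominates the statistic at k = 1 (Bonferroni-like regime), while many moderately small p-values push up the later partial sums (combination-test regime).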

  3. Optically transparent and durable Al2O3 coatings for harsh environments by ultra short pulsed laser deposition

    NASA Astrophysics Data System (ADS)

    Korhonen, Hannu; Syväluoto, Aki; Leskinen, Jari T. T.; Lappalainen, Reijo

    2018-01-01

    Nowadays, environmental protection is needed for a number of optical applications in conditions that quickly impair the clarity of optical surfaces. Abrasion-resistant optical coatings applied onto plastics are usually based on alumina or polysiloxane technology. In many applications, transparent glasses and ceramics need a combination of abrasive and chemical resistance, provided by shielding or other protective solutions such as coatings. In this study, we tested the hypothesis that a clear and pore-free alumina coating can be uniformly deposited on glass prisms by the ultra-short pulsed laser deposition (USPLD) technique to protect the sensitive surfaces against abrasives. Abrasive wear tests were carried out with SiC emery paper using specified standard procedures. After the wear tests, the measured transparencies of coated prisms turned out to be close to those of the prisms before coating. The coating on sensitive surfaces consistently displayed enhanced wear resistance, still exhibiting high quality even after severe wear testing. Furthermore, the coating shifted the surface properties toward a hydrophobic nature, in contrast to untreated prisms, which became very hydrophilic especially due to wear.

  4. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jha, Sumit Kumar; Pullum, Laura L; Ramanathan, Arvind

    Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.

  5. Acquiring Procedural Skills from Lesson Sequences.

    DTIC Science & Technology

    1985-08-13

    Teachers of Mathematics. Washington, DC: NCTM. Brueckner, L.J. (1930) Diagnostic and remedial teaching in arithmetic. Philadelphia, PA: Winston. Burton...arithmetic and algebra, from multi-lesson curricula. The central hypothesis is that students and teachers obey conventions that cause the goal hierarchy of the acquired procedure to be a particular structural function of the sequential

  6. Novel statistical framework to identify differentially expressed genes allowing transcriptomic background differences.

    PubMed

    Ling, Zhi-Qiang; Wang, Yi; Mukaisho, Kenichi; Hattori, Takanori; Tatsuta, Takeshi; Ge, Ming-Hua; Jin, Li; Mao, Wei-Min; Sugihara, Hiroyuki

    2010-06-01

    Tests of differentially expressed genes (DEGs) from microarray experiments are based on the null hypothesis that genes that are irrelevant to the phenotype/stimulus are expressed equally in the target and control samples. However, this strict hypothesis is not always true, as there can be several transcriptomic background differences between target and control samples, including different cell/tissue types, different cell cycle stages and different biological donors. These differences lead to increased false positives, which have little biological/medical significance. In this article, we propose a statistical framework to identify DEGs between target and control samples from expression microarray data allowing transcriptomic background differences between these samples by introducing a modified null hypothesis that the gene expression background difference is normally distributed. We use an iterative procedure to perform robust estimation of the null hypothesis and identify DEGs as outliers. We evaluated our method using our own triplicate microarray experiment, followed by validations with reverse transcription-polymerase chain reaction (RT-PCR) and on the MicroArray Quality Control dataset. The evaluations suggest that our technique (i) results in fewer false positive and false negative results, as measured by the degree of agreement with RT-PCR of the same samples, (ii) can be applied to different microarray platforms and results in better reproducibility as measured by the degree of DEG identification concordance both intra- and inter-platforms and (iii) can be applied efficiently with only a few microarray replicates. Based on these evaluations, we propose that this method not only identifies more reliable and biologically/medically significant DEGs, but also reduces the power-cost tradeoff problem in the microarray field. Source code and binaries are freely available for download at http://comonca.org.cn/fdca/resources/softwares/deg.zip.
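    The core idea, a normally distributed background difference estimated robustly so that DEGs surface as outliers, can be caricatured in a few lines (the 3-sigma cut and the simple iteration scheme are our illustrative choices, not the paper's estimator):

```python
import math

def flag_outlier_genes(log_ratios, z_cut=3.0, max_iter=50):
    """Iteratively estimate the mean and standard deviation of the
    per-gene expression background difference after excluding points
    beyond z_cut standard deviations, then report the genes falling
    outside the final fitted normal as candidate DEGs."""
    kept = list(log_ratios)
    for _ in range(max_iter):
        m = sum(kept) / len(kept)
        sd = math.sqrt(sum((x - m) ** 2 for x in kept) / (len(kept) - 1))
        trimmed = [x for x in kept if abs(x - m) <= z_cut * sd]
        if len(trimmed) == len(kept):
            break  # converged: no further points excluded
        kept = trimmed
    return [x for x in log_ratios if abs(x - m) > z_cut * sd]
```

Because the null parameters are re-estimated without the extreme genes, a strong DEG no longer inflates the estimated background variance and mask other outliers.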

  7. Explorations in Statistics: Hypothesis Tests and P Values

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of "Explorations in Statistics" delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what…

  8. Rugby versus Soccer in South Africa: Content Familiarity Contributes to Cross-Cultural Differences in Cognitive Test Scores

    ERIC Educational Resources Information Center

    Malda, Maike; van de Vijver, Fons J. R.; Temane, Q. Michael

    2010-01-01

    In this study, cross-cultural differences in cognitive test scores are hypothesized to depend on a test's cultural complexity (Cultural Complexity Hypothesis: CCH), here conceptualized as its content familiarity, rather than on its cognitive complexity (Spearman's Hypothesis: SH). The content familiarity of tests assessing short-term memory,…

  9. The effectiveness of problem-based learning on students’ problem solving ability in vector analysis course

    NASA Astrophysics Data System (ADS)

    Mushlihuddin, R.; Nurafifah; Irvan

    2018-01-01

    Students’ low ability in mathematical problem solving points to a less-than-effective learning process in the classroom. Effective learning is learning that improves students’ mathematical skills, one of which is problem-solving ability. Problem-solving ability consists of several stages: understanding the problem, planning the solution, solving the problem as planned, and re-examining the procedure and the outcome. The purpose of this research was to determine: (1) whether the PBL model has an influence in improving students’ mathematical problem-solving ability in a vector analysis course; and (2) whether the PBL model is effective in improving students’ mathematical problem-solving skills in vector analysis courses. This research was a quasi-experiment. The data analysis proceeded from descriptive statistics through a normality test as a prerequisite, to hypothesis testing using the ANCOVA test and a gain test. The results showed that: (1) the PBL model had an influence in improving students’ mathematical problem-solving abilities in the vector analysis course; and (2) the PBL model was effective in improving students’ problem-solving skills in the vector analysis course, with a medium gain category.

  10. Is it better to select or to receive? Learning via active and passive hypothesis testing.

    PubMed

    Markant, Douglas B; Gureckis, Todd M

    2014-02-01

    People can test hypotheses through either selection or reception. In a selection task, the learner actively chooses observations to test his or her beliefs, whereas in reception tasks data are passively encountered. People routinely use both forms of testing in everyday life, but the critical psychological differences between selection and reception learning remain poorly understood. One hypothesis is that selection learning improves learning performance by enhancing generic cognitive processes related to motivation, attention, and engagement. Alternatively, we suggest that differences between these 2 learning modes derive from a hypothesis-dependent sampling bias that is introduced when a person collects data to test his or her own individual hypothesis. Drawing on influential models of sequential hypothesis-testing behavior, we show that such a bias (a) can lead to the collection of data that facilitates learning compared with reception learning and (b) can be more effective than observing the selections of another person. We then report a novel experiment based on a popular category learning paradigm that compares reception and selection learning. We additionally compare selection learners to a set of "yoked" participants who viewed the exact same sequence of observations under reception conditions. The results revealed systematic differences in performance that depended on the learner's role in collecting information and the abstract structure of the problem.

  11. Sleep-related memory consolidation in primary insomnia.

    PubMed

    Nissen, Christoph; Kloepfer, Corinna; Feige, Bernd; Piosczyk, Hannah; Spiegelhalder, Kai; Voderholzer, Ulrich; Riemann, Dieter

    2011-03-01

    It has been suggested that healthy sleep facilitates the consolidation of newly acquired memories and underlying brain plasticity. The authors tested the hypothesis that patients with primary insomnia (PI) would show deficits in sleep-related memory consolidation compared to good sleeper controls (GSC). The study used a four-group parallel design (n=86) to investigate the effects of 12 h of night-time, including polysomnographically monitored sleep ('sleep condition' in PI and GSC), versus 12 h of daytime wakefulness ('wake condition' in PI and GSC) on procedural (mirror tracing task) and declarative memory consolidation (visual and verbal learning task). Demographic characteristics and memory encoding did not differ between the groups at baseline. Polysomnography revealed a significantly disturbed sleep profile in PI compared to GSC in the sleep condition. Night-time periods including sleep in GSC were associated with (i) a significantly enhanced procedural and declarative verbal memory consolidation compared to equal periods of daytime wakefulness in GSC and (ii) a significantly enhanced procedural memory consolidation compared to equal periods of daytime wakefulness and night-time sleep in PI. Across retention intervals of daytime wakefulness, no differences between the experimental groups were observed. This pattern of results suggests that healthy sleep fosters the consolidation of new memories, and that this process is impaired for procedural memories in patients with PI. Future work is needed to investigate the impact of treatment on improving sleep and memory. © 2010 European Sleep Research Society.

  12. Testing for purchasing power parity in 21 African countries using several unit root tests

    NASA Astrophysics Data System (ADS)

    Choji, Niri Martha; Sek, Siok Kun

    2017-04-01

    Purchasing power parity (PPP) is used as a basis for international income and expenditure comparisons through the exchange rate theory. However, empirical studies disagree on the validity of PPP. In this paper, we test the validity of the PPP hypothesis using a panel data approach. We apply seven different panel unit root tests to quarterly data on the real effective exchange rates of 21 African countries for the period 1971:Q1-2012:Q4. All seven tests rejected the hypothesis of stationarity, meaning that absolute PPP does not hold in these African countries. This result confirms the claim from previous studies that standard panel unit root tests fail to support the PPP hypothesis.
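    A single-series building block of such panel procedures is the Dickey-Fuller regression; the minimal version below (no lagged differences or trend, large-sample critical value quoted from standard tables) shows the mechanics only and is not any of the seven panel tests used in the paper:

```python
import math
import random

def dickey_fuller_stat(series):
    """Dickey-Fuller t-statistic with a constant: regress the first
    difference dy_t on the lagged level y_{t-1}. Values well below the
    large-sample 5% critical value of about -2.86 reject the unit root,
    i.e. the series looks stationary. Panel unit root tests pool such
    regressions across countries."""
    x = series[:-1]
    dy = [series[i + 1] - series[i] for i in range(len(series) - 1)]
    n = len(x)
    mx, md = sum(x) / n, sum(dy) / n
    sxx = sum((v - mx) ** 2 for v in x)
    sxy = sum((v - mx) * (w - md) for v, w in zip(x, dy))
    b = sxy / sxx                        # coefficient on y_{t-1}
    a = md - b * mx                      # intercept
    rss = sum((w - a - b * v) ** 2 for v, w in zip(x, dy))
    se = math.sqrt(rss / (n - 2) / sxx)
    return b / se
```

A stationary AR(1) series yields a strongly negative statistic, while a random walk built from the same shocks generally does not.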

  13. Does Testing Increase Spontaneous Mediation in Learning Semantically Related Paired Associates?

    ERIC Educational Resources Information Center

    Cho, Kit W.; Neely, James H.; Brennan, Michael K.; Vitrano, Deana; Crocco, Stephanie

    2017-01-01

    Carpenter (2011) argued that the testing effect she observed for semantically related but associatively unrelated paired associates supports the mediator effectiveness hypothesis. This hypothesis asserts that after the cue-target pair "mother-child" is learned, relative to restudying mother-child, a review test in which…

  14. Do perceived high performance work systems influence the relationship between emotional labour, burnout and intention to leave? A study of Australian nurses.

    PubMed

    Bartram, Timothy; Casimir, Gian; Djurkovic, Nick; Leggat, Sandra G; Stanton, Pauline

    2012-07-01

    The purpose of this article was to explore the relationships between perceived high performance work systems, emotional labour, burnout and intention to leave among nurses in Australia. Previous studies show that emotional labour and burnout are associated with an increase in nurses' intention to leave. There is evidence that high performance work systems are associated with a decrease in turnover. No previous studies have examined the relationship between high performance work systems and emotional labour. A cross-sectional, correlational survey was conducted in Australia in 2008 with 183 nurses. Three hypotheses were tested with validated measures of emotional labour, burnout, intention to leave, and perceived high performance work systems. Principal component analysis was used to examine the structure of the measures. The mediation hypothesis was tested using Baron and Kenny's procedure, and the moderation hypothesis was tested using hierarchical regression with a product term. Emotional labour is positively associated with both burnout and intention to leave. Burnout mediates the relationship between emotional labour and intention to leave. Perceived high performance work systems negatively moderate the relationship between emotional labour and burnout: they not only reduce the strength of the effect of emotional labour on burnout but also have a unique negative effect on intention to leave. Ensuring effective human resource management practice through the implementation of high performance work systems may reduce the burnout associated with emotional labour. This may assist healthcare organizations to reduce nurse turnover. © 2012 Blackwell Publishing Ltd.

  15. Effect of glucose on fatigue-induced changes in the microstructure and mechanical properties of demineralized bovine cortical bone.

    PubMed

    Trębacz, Hanna; Zdunek, Artur; Wlizło-Dyś, Ewa; Cybulska, Justyna; Pieczywek, Piotr

    2015-10-16

    The aim of this study was to test the hypothesis that fatigue-induced weakening of cortical bone is intensified in bone incubated in glucose and that this weakening is revealed in the microstructure and mechanical competence of the bone matrix. Cubic specimens of bovine femoral shaft were incubated in glucose solution (G) or in buffer (NG). One half of the G samples and one half of the NG samples were axially loaded for 300 cycles (30 mm/min) at constant deformation (F); the other half served as controls (C). Samples from each group (GF, NGF, GC, NGC) were completely demineralized. Slices from demineralized samples were used for microscopic image analysis. The combined effect of glycation and fatigue on demineralized bone was tested in compression (10 mm/min). Damage to samples during the test was examined by acoustic emission (AE) analysis. During the fatigue procedure, resistance to loading decreased by 14.5% in glycated samples but by only 8.1% in nonglycated samples. In glycated samples, fatigue resulted in increased porosity, with pores significantly larger than in the other groups. Under compression, strain at failure in demineralized bone was significantly affected by glucose and fatigue. AE from the demineralized bone matrix was considerably related to the largest pores in the tissue. The results confirm the hypothesis that the effect of fatigue on cortical bone tissue is intensified after incubation in glucose, both in terms of the mechanical competence of the bone tissue and the structural changes in the collagenous matrix of bone.

  16. A note on the correlation between circular and linear variables with an application to wind direction and air temperature data in a Mediterranean climate

    NASA Astrophysics Data System (ADS)

    Lototzis, M.; Papadopoulos, G. K.; Droulia, F.; Tseliou, A.; Tsiros, I. X.

    2018-04-01

    There are several cases where a circular variable is associated with a linear one. A typical example is wind direction, which is often associated with linear quantities such as air temperature and air humidity. A statistical relationship of this kind can be tested with parametric and non-parametric methods, each of which has its own advantages and drawbacks. This work deals with correlation analysis using both the parametric and the non-parametric procedure on a small set of meteorological data on air temperature and wind direction during a summer period in a Mediterranean climate. Correlations were examined between hourly, daily and maximum-prevailing values, under typical and non-typical meteorological conditions. Both tests indicated a strong correlation between mean hourly wind direction and mean hourly air temperature, whereas mean daily wind direction and mean daily air temperature did not appear to be correlated. In some cases, however, the two procedures gave quite dissimilar levels of significance for the rejection or not of the null hypothesis of no correlation. The simple statistical analysis presented in this study, appropriately extended to large sets of meteorological data, may be a useful tool for estimating the effects of wind in local climate studies.
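    The standard parametric procedure for circular-linear data is Mardia's correlation coefficient: correlate the linear variable with the sine and cosine of the angle and combine the three pairwise correlations. A minimal sketch on synthetic data (not the study's measurements):

```python
# Mardia's circular-linear correlation between a linear variable x (e.g.
# air temperature) and an angle theta (e.g. wind direction, in radians).
import numpy as np
from scipy.stats import chi2

def circ_linear_corr(x, theta):
    rxc = np.corrcoef(x, np.cos(theta))[0, 1]
    rxs = np.corrcoef(x, np.sin(theta))[0, 1]
    rcs = np.corrcoef(np.cos(theta), np.sin(theta))[0, 1]
    r2 = (rxc**2 + rxs**2 - 2 * rxc * rxs * rcs) / (1 - rcs**2)
    # Under the null of no correlation, n * r2 is approximately chi-square(2).
    p = chi2.sf(len(x) * r2, df=2)
    return r2, p

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 200)
x = np.cos(theta) + 0.3 * rng.normal(size=200)  # temperature tied to direction
print(circ_linear_corr(x, theta))
```

    The non-parametric alternative replaces x with its ranks before forming the same statistic, trading some power for robustness.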

  17. Interspecies comparison of subchondral bone properties important for cartilage repair.

    PubMed

    Chevrier, Anik; Kouao, Ahou S M; Picard, Genevieve; Hurtig, Mark B; Buschmann, Michael D

    2015-01-01

    Microfracture repair tissue in young adult humans and in rabbit trochlea is frequently of higher quality than in corresponding ovine or horse models or in the rabbit medial femoral condyle (MFC). This may be related to differences in subchondral properties since repair is initiated from the bone. We tested the hypothesis that subchondral bone from rabbit trochlea and the human MFC are structurally similar. Trochlea and MFC samples from rabbit, sheep, and horse were micro-CT scanned and histoprocessed. Samples were also collected from normal and lesional areas of human MFC. The subchondral bone of the rabbit trochlea was the most similar to human MFC, where both had a relatively thin bone plate and a more porous and less dense character of subchondral bone. MFC from animals all displayed thicker bone plates, denser and less porous bone and thicker trabeculae, which may be more representative of older or osteoarthritic patients, while both sheep trochlear ridges and the horse lateral trochlea shared some structural features with human MFC. Since several cartilage repair procedures rely on subchondral bone for repair, subchondral properties should be accounted for when choosing animal models to study and test procedures that are intended for human cartilage repair. © 2014 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  18. [A test of the focusing hypothesis for category judgment: an explanation using the mental-box model].

    PubMed

    Hatori, Tsuyoshi; Takemura, Kazuhisa; Fujii, Satoshi; Ideno, Takashi

    2011-06-01

    This paper presents a new model of category judgment. The model hypothesizes that, when more attention is focused on a category, the psychological range of the category gets narrower (category-focusing hypothesis). We explain this hypothesis by using the metaphor of a "mental-box" model: the more attention that is focused on a mental box (i.e., a category set), the smaller the size of the box becomes (i.e., a cardinal number of the category set). The hypothesis was tested in an experiment (N = 40), where the focus of attention on prescribed verbal categories was manipulated. The obtained data gave support to the hypothesis: category-focusing effects were found in three experimental tasks (regarding the category of "food", "height", and "income"). The validity of the hypothesis was discussed based on the results.

  19. Function Word Repetitions Emerge When Speakers Are Operantly Conditioned to Reduce Frequency of Silent Pauses

    PubMed Central

    Howell, Peter; Sackin, Stevie

    2007-01-01

    Beattie and Bradbury (1979) reported a study in which, in one condition, they punished speakers when they produced silent pauses (by lighting a light they were supposed to keep switched off). They found speakers were able to reduce silent pauses and that this was not achieved at the expense of reduced overall speech rate. They reported an unexpected increase in word repetition rate. A recent theory proposed by Howell, Au-Yeung, and Sackin (1999) predicts that the change in word repetition rate will occur on function, not content words. This hypothesis is tested and confirmed. The results are used to assess the theory and to consider practical applications of this conditioning procedure. PMID:11529422

  20. Technical note: Application of the Box-Cox data transformation to animal science experiments.

    PubMed

    Peltier, M R; Wilcox, C J; Sharp, D C

    1998-03-01

    In the use of ANOVA for hypothesis testing in animal science experiments, the assumption of homogeneity of errors often is violated because of scale effects and the nature of the measurements. We demonstrate a method for transforming data so that the assumptions of ANOVA are met (or violated to a lesser degree) and apply it in analysis of data from a physiology experiment. Our study examined whether melatonin implantation would affect progesterone secretion in cycling pony mares. Overall treatment variances were greater in the melatonin-treated group, and several common transformation procedures failed. Application of the Box-Cox transformation algorithm reduced the heterogeneity of error and permitted the assumption of equal variance to be met.
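    The transformation itself is straightforward to apply. A minimal sketch with SciPy's maximum-likelihood choice of the Box-Cox parameter on simulated right-skewed data (not the mares' progesterone measurements):

```python
# Box-Cox transformation with maximum-likelihood lambda on skewed data;
# the data here are simulated, not the study's progesterone measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
y = rng.lognormal(mean=1.0, sigma=0.6, size=200)  # positive, skewed response

# y_t = (y**lam - 1) / lam, reducing to log(y) as lam -> 0.
y_t, lam = stats.boxcox(y)
print(f"estimated lambda = {lam:.2f}")

# The transformation pulls the skewness toward zero, making the equal-
# variance and normality assumptions of ANOVA more plausible.
print(stats.skew(y), stats.skew(y_t))
```

    In an ANOVA setting, the transformed response is then analyzed in place of the raw measurements.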

  1. Techniques for controlling variability in gram staining of obligate anaerobes.

    PubMed Central

    Johnson, M J; Thatcher, E; Cox, M E

    1995-01-01

    Identification of anaerobes recovered from clinical samples is complicated by the fact that certain gram-positive anaerobes routinely stain gram negative; Peptostreptococcus asaccharolyticus, Eubacterium plautii, Clostridium ramosum, Clostridium symbiosum, and Clostridium clostridiiforme are among the nonconformists with regard to conventional Gram-staining procedures. Accurate Gram staining of American Type Culture Collection strains of these anaerobic bacteria is possible by implementing fixing and staining techniques within a gloveless anaerobic chamber. Under anaerobic conditions, gram-positive staining occurred in all test organisms with "quick" fixing techniques with both absolute methanol and formalin. The results support the hypothesis that, when anaerobic bacteria are exposed to oxygen, a breakdown of the physical integrity of the cell wall occurs, introducing Gram stain variability in gram-positive anaerobes. PMID:7538512

  2. What are the most important variables for Poaceae airborne pollen forecasting?

    PubMed

    Navares, Ricardo; Aznarte, José Luis

    2017-02-01

    In this paper, the problem of predicting future concentrations of airborne pollen is addressed through a computational intelligence, data-driven approach. The proposed method is able to identify the most important variables among those considered by other authors (mainly recent pollen concentrations and weather parameters), without any prior assumptions about the phenological relevance of the variables. Furthermore, an inferential procedure based on non-parametric hypothesis testing is presented to provide statistical evidence for the results, which are consistent with the literature and outperform previous proposals in terms of accuracy. The study is built upon Poaceae airborne pollen concentrations recorded in seven different locations across the Spanish province of Madrid. Copyright © 2016 Elsevier B.V. All rights reserved.
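    A common non-parametric check of this kind is a Wilcoxon signed-rank test on paired forecast errors from two competing models. The sketch below uses invented models and simulated pollen counts, and is not necessarily the authors' exact procedure.

```python
# Hypothetical non-parametric comparison of two forecasting models via a
# Wilcoxon signed-rank test on paired absolute errors; data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
observed = rng.gamma(2.0, 10.0, size=50)            # "observed" pollen counts
model_a = observed + rng.normal(0, 2.0, size=50)    # more accurate model
model_b = observed + rng.normal(0, 6.0, size=50)    # noisier model

err_a = np.abs(model_a - observed)
err_b = np.abs(model_b - observed)
# The signed-rank test asks whether the paired error differences are
# symmetrically distributed around zero (i.e. equal accuracy).
res = stats.wilcoxon(err_a, err_b)
print(res.pvalue)
```

    A small p-value provides distribution-free evidence that the accuracy difference is unlikely to be chance.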

  3. Reasoning Maps: A Generally Applicable Method for Characterizing Hypothesis-Testing Behaviour. Research Report

    ERIC Educational Resources Information Center

    White, Brian

    2004-01-01

    This paper presents a generally applicable method for characterizing subjects' hypothesis-testing behaviour based on a synthesis that extends previous work. Beginning with a transcript of subjects' speech and a videotape of their actions, a Reasoning Map is created that depicts the flow of their hypotheses, tests, predictions, results, and…

  4. Why Is Test-Restudy Practice Beneficial for Memory? An Evaluation of the Mediator Shift Hypothesis

    ERIC Educational Resources Information Center

    Pyc, Mary A.; Rawson, Katherine A.

    2012-01-01

    Although the memorial benefits of testing are well established empirically, the mechanisms underlying this benefit are not well understood. The authors evaluated the mediator shift hypothesis, which states that test-restudy practice is beneficial for memory because retrieval failures during practice allow individuals to evaluate the effectiveness…

  5. Bayesian Approaches to Imputation, Hypothesis Testing, and Parameter Estimation

    ERIC Educational Resources Information Center

    Ross, Steven J.; Mackey, Beth

    2015-01-01

    This chapter introduces three applications of Bayesian inference to common and novel issues in second language research. After a review of the critiques of conventional hypothesis testing, our focus centers on ways Bayesian inference can be used for dealing with missing data, for testing theory-driven substantive hypotheses without a default null…

  6. Should Bouchet's hypothesis be taken into account in rainfall-runoff modelling? An assessment over 308 catchments

    NASA Astrophysics Data System (ADS)

    Oudin, Ludovic; Michel, Claude; Andréassian, Vazken; Anctil, François; Loumagne, Cécile

    2005-12-01

    An implementation of the complementary relationship hypothesis (Bouchet's hypothesis) for estimating regional evapotranspiration within two rainfall-runoff models is proposed and evaluated in terms of streamflow simulation efficiency over a large sample of 308 catchments located in Australia, France and the USA. Complementary relationship models are attractive approaches to estimating actual evapotranspiration because they rely solely on climatic variables. They are all the more interesting because they are supported by a conceptual description of the interactions between the evapotranspirating surface and the atmospheric boundary layer, which was highlighted by Bouchet (1963). However, these approaches appear to contradict the methods prevailing in rainfall-runoff models, which compute actual evapotranspiration using soil moisture accounting procedures. The approach adopted in this article is to introduce the estimates of actual evapotranspiration provided by complementary relationship models (the complementary relationship areal evapotranspiration and advection-aridity models) into two rainfall-runoff models. Results show that directly using the complementary relationship approach to estimate actual evapotranspiration does not give better results than the soil moisture accounting procedures. Finally, we discuss feedback mechanisms between potential evapotranspiration and soil water availability, and their possible impact on rainfall-runoff modelling.

  7. Distrust and the positive test heuristic: dispositional and situated social distrust improves performance on the Wason rule discovery task.

    PubMed

    Mayo, Ruth; Alfasi, Dana; Schwarz, Norbert

    2014-06-01

    Feelings of distrust alert people not to take information at face value, which may influence their reasoning strategy. Using the Wason (1960) rule identification task, we tested whether chronic and temporary distrust increase the use of negative hypothesis testing strategies suited to falsify one's own initial hunch. In Study 1, participants who were low in dispositional trust were more likely to engage in negative hypothesis testing than participants high in dispositional trust. In Study 2, trust and distrust were induced through an alleged person-memory task. Paralleling the effects of chronic distrust, participants exposed to a single distrust-eliciting face were 3 times as likely to engage in negative hypothesis testing as participants exposed to a trust-eliciting face. In both studies, distrust increased negative hypothesis testing, which was associated with better performance on the Wason task. In contrast, participants' initial rule generation was not consistently affected by distrust. These findings provide first evidence that distrust can influence which reasoning strategy people adopt. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  8. In Defense of the Play-Creativity Hypothesis

    ERIC Educational Resources Information Center

    Silverman, Irwin W.

    2016-01-01

    The hypothesis that pretend play facilitates the creative thought process in children has received a great deal of attention. In a literature review, Lillard et al. (2013, p. 8) concluded that the evidence for this hypothesis was "not convincing." This article focuses on experimental and training studies that have tested this hypothesis.…

  9. Improving data analysis in herpetology: Using Akaike's information criterion (AIC) to assess the strength of biological hypotheses

    USGS Publications Warehouse

    Mazerolle, M.J.

    2006-01-01

    In ecology, researchers frequently use observational studies to explain a given pattern, such as the number of individuals in a habitat patch, with a large number of explanatory (i.e., independent) variables. To elucidate such relationships, ecologists have long relied on hypothesis testing to include or exclude variables in regression models, although the conclusions often depend on the approach used (e.g., forward, backward, stepwise selection). Though better tools surfaced in the mid-1970s, they are still underutilized in certain fields, particularly in herpetology. This is the case for the Akaike information criterion (AIC), which is markedly superior for model selection (i.e., variable selection) to hypothesis-based approaches. It is simple to compute and easy to understand, but more importantly, for a given data set, it provides a measure of the strength of evidence for each model that represents a plausible biological hypothesis, relative to the entire set of models considered. Using this approach, one can then compute a weighted average of the estimate and standard error for any given variable of interest across all the models considered. This procedure, termed model-averaging or multimodel inference, yields precise and robust estimates. In this paper, I illustrate the use of the AIC in model selection and inference, as well as the interpretation of results analysed in this framework, with two real herpetological data sets. The AIC and measures derived from it should be routinely adopted by herpetologists. © Koninklijke Brill NV 2006.

  10. The frequentist implications of optional stopping on Bayesian hypothesis tests.

    PubMed

    Sanborn, Adam N; Hills, Thomas T

    2014-04-01

    Null hypothesis significance testing (NHST) is the most commonly used statistical methodology in psychology. The probability of achieving a value as extreme or more extreme than the statistic obtained from the data is evaluated, and if it is low enough, the null hypothesis is rejected. However, because common experimental practice often clashes with the assumptions underlying NHST, these calculated probabilities are often incorrect. Most commonly, experimenters use tests that assume that sample sizes are fixed in advance of data collection but then use the data to determine when to stop; in the limit, experimenters can use data monitoring to guarantee that the null hypothesis will be rejected. Bayesian hypothesis testing (BHT) provides a solution to these ills because the stopping rule used is irrelevant to the calculation of a Bayes factor. In addition, there are strong mathematical guarantees on the frequentist properties of BHT that are comforting for researchers concerned that stopping rules could influence the Bayes factors produced. Here, we show that these guaranteed bounds have limited scope and often do not apply in psychological research. Specifically, we quantitatively demonstrate the impact of optional stopping on the resulting Bayes factors in two common situations: (1) when the truth is a combination of the hypotheses, such as in a heterogeneous population, and (2) when a hypothesis is composite (taking multiple parameter values), such as the alternative hypothesis in a t-test. We found that, for these situations, while the Bayesian interpretation remains correct regardless of the stopping rule used, the choice of stopping rule can, in some situations, greatly increase the chance of experimenters finding evidence in the direction they desire. We suggest ways to control these frequentist implications of stopping rules on BHT.
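    The NHST side of the problem is easy to demonstrate in simulation: testing after every batch of observations and stopping as soon as p < .05 inflates the false-positive rate well beyond the nominal 5%, even when the null is true. The batch size and look schedule below are arbitrary choices for illustration.

```python
# Simulation of optional stopping ("data peeking") under a true null:
# repeatedly test and stop at the first p < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def peeking_rejects(max_n=200, batch=10):
    x = []
    while len(x) < max_n:
        x.extend(rng.normal(size=batch))  # null is true: mean is 0
        if stats.ttest_1samp(x, 0.0).pvalue < 0.05:
            return True  # stop as soon as "significance" is reached
    return False

rate = np.mean([peeking_rejects() for _ in range(1000)])
print(f"false-positive rate with peeking: {rate:.2f}")
```

    With 20 looks at the data, the realized Type I error rate is several times the nominal 5%, which is the pathology the Bayes factor's stopping-rule irrelevance is meant to avoid.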

  11. Yttrium-90 Resin Microsphere Radioembolization Using an Antireflux Catheter: An Alternative to Traditional Coil Embolization for Nontarget Protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morshedi, Maud M., E-mail: maud.morshedi@my.rfums.org; Bauman, Michael, E-mail: mbauman@ucsd.edu; Rose, Steven C., E-mail: scrose@ucsd.edu

    2015-04-15

    Purpose: Serious complications can result from nontarget embolization during yttrium-90 (Y-90) transarterial radioembolization. Hepatoenteric artery coil embolization has traditionally been performed to prevent nontarget radioembolization. The U.S. Food and Drug Administration-approved Surefire Infusion System (SIS) catheter, designed to prevent reflux, is an alternative to coils. The hypothesis that quantifiable SIS procedural parameters are comparable to coil embolization was tested. Methods: Fourteen patients aged 36-79 years with colorectal, neuroendocrine, hepatocellular, and other predominantly bilobar hepatic tumors who underwent resin microsphere Y-90 radioembolization using only the SIS catheter (n = 7) versus only detachable coils (n = 7) for nontarget protection were reviewed retrospectively. Procedure time, fluoroscopy time, contrast dose, radiation dose, and cost were evaluated. Results: Multivariate analysis identified significant cohort differences in the procedural parameters evaluated (F(10, 3) = 10.39, p = 0.04). Between-group comparisons of the pretreatment planning procedure in the SIS catheter group compared to the coil embolization group demonstrated a significant reduction in procedure time (102.6 vs. 192.1 min, respectively, p = 0.0004), fluoroscopy time (14.3 vs. 49.7 min, respectively, p = 0.0016), and contrast material dose (mean dose of 174.3 vs. 265.0 mL, respectively, p = 0.0098). Procedural parameters were not significantly different between the two groups during subsequent dose delivery procedures. Overall cost of combined first-time radioembolization procedures was significantly less in the SIS group ($4252) compared to retrievable coil embolization ($11,123; p = 0.001). Conclusion: The SIS catheter results in a reduction in procedure time, fluoroscopy time, and contrast material dose and may be an attractive cost-effective alternative to detachable coil embolization for prevention of nontarget radioembolization.

  12. Rehearsal significantly improves immediate and delayed recall on the Rey Auditory Verbal Learning Test.

    PubMed

    Hessen, Erik

    2011-10-01

    A repeated observation during memory assessment with the Rey Auditory Verbal Learning Test (RAVLT) is that patients who spontaneously employ a memory rehearsal strategy by repeating the word list more than once achieve better scores than patients who repeat the word list only once. This observation raises concern about the ability of the standard test procedure of the RAVLT and similar tests to elicit the best possible recall scores. The purpose of the present study was to test the hypothesis that a rehearsal strategy of repeating the word list more than once would result in improved recall scores on the RAVLT. We report on differences in outcome between standard and experimental administration of the Immediate and Delayed Recall measures of the RAVLT in 50 patients. The experimental administration resulted in significantly improved scores on all the variables employed. Additionally, patients who failed effort screening showed significantly poorer improvement on Delayed Recall than those who passed. The clear general improvement in both raw scores and T-scores demonstrates that recall performance can be significantly influenced by the strategy of the patient or by small variations in the examiner's instructions.

  13. An investigation on the determinants of carbon emissions for OECD countries: empirical evidence from panel models robust to heterogeneity and cross-sectional dependence.

    PubMed

    Dogan, Eyup; Seker, Fahri

    2016-07-01

    This empirical study analyzes the impacts of real income, energy consumption, financial development and trade openness on CO2 emissions for the OECD countries in the Environmental Kuznets Curve (EKC) framework, using panel econometric approaches that account for heterogeneity and cross-sectional dependence. Results from the Pesaran CD test, the Pesaran-Yamagata homogeneity test, the CADF and CIPS unit root tests, the LM bootstrap cointegration test, the DSUR estimator, and the Emirmahmutoglu-Kose Granger causality test indicate that (i) the panel time-series data are heterogeneous and cross-sectionally dependent; (ii) CO2 emissions, real income, quadratic income, energy consumption, financial development and openness are integrated of order one; (iii) the analyzed data are cointegrated; (iv) the EKC hypothesis is validated for the OECD countries; (v) increases in openness and financial development mitigate the level of emissions, whereas energy consumption contributes to carbon emissions; (vi) a variety of Granger causal relationships are detected among the analyzed variables; and (vii) the empirical results and policy recommendations are accurate and efficient, since the panel econometric models used in this study account for heterogeneity and cross-sectional dependence in their estimation procedures.

  14. Examination of long-term visual memorization capacity in the Clark's nutcracker (Nucifraga columbiana).

    PubMed

    Qadri, Muhammad A J; Leonard, Kevin; Cook, Robert G; Kelly, Debbie M

    2018-02-15

    Clark's nutcrackers exhibit remarkable cache recovery behavior, remembering thousands of seed locations over the winter. No direct laboratory test of their visual memory capacity, however, has yet been performed. Here, two nutcrackers were tested in an operant procedure used to measure different species' visual memory capacities. The nutcrackers were incrementally tested with an ever-expanding pool of pictorial stimuli in a two-alternative discrimination task. Each picture was randomly assigned to either a right or a left choice response, forcing the nutcrackers to memorize each picture-response association. The nutcrackers' visual memorization capacity was estimated at a little over 500 pictures, and the testing suggested effects of primacy, recency, and memory decay over time. The size of this long-term visual memory was less than the approximately 800-picture capacity established for pigeons. These results support the hypothesis that nutcrackers' spatial memory is a specialized adaptation tied to their natural history of food-caching and recovery, and not to a larger long-term, general memory capacity. Furthermore, despite millennia of separate and divergent evolution, the mechanisms of visual information retention seem to reflect common memory systems of differing capacities across the different species tested in this design.

  15. TRANSGENIC MOUSE MODELS AND PARTICULATE MATTER (PM)

    EPA Science Inventory

    The hypothesis to be tested is that metal-catalyzed oxidative stress can contribute to the biological effects of particulate matter. We acquired several transgenic mouse strains to test this hypothesis. Breeding of the mice was accomplished by Duke University. Particles employed ...

  16. Hypothesis Testing Using the Films of the Three Stooges

    ERIC Educational Resources Information Center

    Gardner, Robert; Davidson, Robert

    2010-01-01

    The use of The Three Stooges' films as a source of data in an introductory statistics class is described. The Stooges' films are separated into three populations. Using these populations, students may conduct hypothesis tests with data they collect.
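    For instance, students might compare mean short-film running times between two of the populations with a two-sample t-test. The numbers below are invented for illustration; they are not data from the films.

```python
# Hypothetical classroom example: Welch's two-sample t-test on invented
# running times (minutes) for shorts from two of the three populations.
from scipy import stats

curly = [16.5, 17.0, 18.2, 16.8, 17.5, 18.0, 16.2, 17.8]
shemp = [15.9, 16.1, 16.8, 15.5, 16.4, 16.0, 15.8, 16.6]

# Null hypothesis: the two populations have equal mean running times.
res = stats.ttest_ind(curly, shemp, equal_var=False)
print(res.statistic, res.pvalue)
```

    A small p-value would lead students to reject the null of equal means for these two populations.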

  17. Conditional associative memory for musical stimuli in nonmusicians: implications for absolute pitch.

    PubMed

    Bermudez, Patrick; Zatorre, Robert J

    2005-08-24

    A previous positron emission tomography (PET) study of musicians with and without absolute pitch put forth the hypothesis that the posterior dorsolateral prefrontal cortex is involved in the conditional associative aspect of pitch identification. In the work presented here, we tested this hypothesis by training eight nonmusicians to associate each of four different complex musical sounds (triad chords) with an arbitrary number in a task designed to have limited analogy to absolute-pitch identification. Each subject underwent a functional magnetic resonance imaging scanning procedure both before and after training. Comparisons of the active condition (identification of chords) with the control condition (amplitude-matched noise bursts) for the pretraining scan showed no significant activation maxima. The same comparison for the posttraining scan revealed significant peaks of activation in posterior dorsolateral prefrontal, ventrolateral prefrontal, and parietal areas. A conjunction analysis was performed to show that the posterior dorsolateral prefrontal activity in this study is similar to that observed in the aforementioned PET study. We conclude that the posterior dorsolateral prefrontal cortex is selectively involved in the conditional association aspect of our task, as it is in the attribution of a verbal label to a note by absolute-pitch musicians.

  18. Hybridisation is associated with increased fecundity and size in invasive taxa: meta-analytic support for the hybridisation-invasion hypothesis

    PubMed Central

    Hovick, Stephen M; Whitney, Kenneth D

    2014-01-01

    The hypothesis that interspecific hybridisation promotes invasiveness has received much recent attention, but tests of the hypothesis can suffer from important limitations. Here, we provide the first systematic review of studies experimentally testing the hybridisation-invasion (H-I) hypothesis in plants, animals and fungi. We identified 72 hybrid systems for which hybridisation has been putatively associated with invasiveness, weediness or range expansion. Within this group, 15 systems (comprising 34 studies) experimentally tested performance of hybrids vs. their parental species and met our other criteria. Both phylogenetic and non-phylogenetic meta-analyses demonstrated that wild hybrids were significantly more fecund and larger than their parental taxa, but did not differ in survival. Resynthesised hybrids (which typically represent earlier generations than do wild hybrids) did not consistently differ from parental species in fecundity, survival or size. Using meta-regression, we found that fecundity increased (but survival decreased) with generation in resynthesised hybrids, suggesting that natural selection can play an important role in shaping hybrid performance – and thus invasiveness – over time. We conclude that the available evidence supports the H-I hypothesis, with the caveat that our results are clearly driven by tests in plants, which are more numerous than tests in animals and fungi. PMID:25234578

  19. The Harm Done to Reproducibility by the Culture of Null Hypothesis Significance Testing.

    PubMed

    Lash, Timothy L

    2017-09-15

    In the last few years, stakeholders in the scientific community have raised alarms about a perceived lack of reproducibility of scientific results. In reaction, guidelines for journals have been promulgated and grant applicants have been asked to address the rigor and reproducibility of their proposed projects. Neither solution addresses a primary culprit, which is the culture of null hypothesis significance testing that dominates statistical analysis and inference. In an innovative research enterprise, selection of results for further evaluation based on null hypothesis significance testing is doomed to yield a low proportion of reproducible results and a high proportion of effects that are initially overestimated. In addition, the culture of null hypothesis significance testing discourages quantitative adjustments to account for systematic errors and quantitative incorporation of prior information. These strategies would otherwise improve reproducibility and have not been previously proposed in the widely cited literature on this topic. Without discarding the culture of null hypothesis significance testing and implementing these alternative methods for statistical analysis and inference, all other strategies for improving reproducibility will yield marginal gains at best. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
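The selection mechanism criticized above can be illustrated with a small simulation (a hedged sketch with hypothetical numbers, not data from any cited study): when results are selected for follow-up only if they pass a significance filter, the surviving estimates systematically overestimate the true effect.

```python
# Sketch: significance-based selection inflates effect estimates
# ("winner's curse"). All numbers are hypothetical.
import random
import statistics
from statistics import NormalDist

random.seed(42)

TRUE_EFFECT = 0.2    # assumed true standardized mean difference
N = 50               # hypothetical per-group sample size
SE = (2 / N) ** 0.5  # standard error of a two-group mean difference

estimates, significant = [], []
for _ in range(10_000):
    est = random.gauss(TRUE_EFFECT, SE)      # one study's estimate
    z = est / SE
    p = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value
    estimates.append(est)
    if p < 0.05:
        significant.append(est)

print(statistics.mean(estimates))    # close to the true effect
print(statistics.mean(significant))  # noticeably inflated
```

The unconditional mean recovers the true effect, while the mean among "significant" studies is biased upward, which is one reason such results fail to reproduce at their initially estimated size.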

  20. Effects of distraction on negative behaviors and salivary α-amylase under mildly stressful medical procedures for brief inpatient children.

    PubMed

    Tsumura, Hideki; Shimada, Hironori; Morimoto, Hiroshi; Hinuma, Chihiro; Kawano, Yoshiko

    2014-08-01

    Inconsistent results have been reported on the effects of distraction on negative emotions during medical procedures in infants. These differing results may be attributable to the fact that the effects are apparent under a mildly stressful medical procedure. A total of 17 infants, 18 preschoolers, and 15 school-aged children who were hospitalized underwent vital-signs monitoring, a mildly stressful medical procedure, administered by a nurse wearing a uniform with attractive character designs as a distractor. Consistent with the hypothesis, participating infants showed fewer negative behaviors and lower salivary α-amylase levels when distracted. The results support the efficacy of distraction in infants under a mildly stressful medical procedure. © The Author(s) 2013.

  1. The Impact of Economic Factors and Acquisition Reforms on the Cost of Defense Weapon Systems

    DTIC Science & Technology

    2006-03-01

    test for homoskedasticity, the Breusch-Pagan test is employed. The null hypothesis of the Breusch-Pagan test is that the variance is equal to zero...made. Using the Breusch-Pagan test shown in Table 19 below, the prob>chi2 is greater than α = .05; therefore, we fail to reject the null hypothesis...Breusch-Pagan Test (Ho = Constant Variance)
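The Breusch-Pagan procedure quoted above can be sketched in a few lines (a minimal illustration with simulated data, not the report's cost data): regress y on x, regress the squared residuals on x, and compare LM = n·R² from that auxiliary regression to a chi-squared distribution, here with one degree of freedom since there is a single regressor.

```python
# Minimal Breusch-Pagan sketch with one regressor, simulated data.
import math
import random
import statistics

random.seed(1)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
y = [1.0 + 2.0 * xi + random.gauss(0, 1) for xi in x]  # homoskedastic errors

def ols_residuals(x, y):
    # residuals from a simple OLS regression of y on x (with intercept)
    mx, my = statistics.mean(x), statistics.mean(y)
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx
    return [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]

def r_squared(x, z):
    # R^2 from a simple OLS regression of z on x
    sse = sum(r ** 2 for r in ols_residuals(x, z))
    mz = statistics.mean(z)
    sst = sum((zi - mz) ** 2 for zi in z)
    return 1 - sse / sst

u2 = [r * r for r in ols_residuals(x, y)]  # squared residuals
lm = n * r_squared(x, u2)                  # LM statistic = n * R^2
p = math.erfc(math.sqrt(lm / 2))           # chi-squared(1 df) survival function
print(lm, p)  # under homoskedastic errors we usually fail to reject
```

The chi-squared(1) survival function is computed via `erfc` because a chi-squared variable with one degree of freedom is a squared standard normal.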

  2. Scoring correction for MMPI-2 Hs scale with patients experiencing a traumatic brain injury: a test of measurement invariance.

    PubMed

    Alkemade, Nathan; Bowden, Stephen C; Salzman, Louis

    2015-02-01

    It has been suggested that MMPI-2 scoring requires removal of some items when assessing patients after a traumatic brain injury (TBI). Gass (1991. MMPI-2 interpretation and closed head injury: A correction factor. Psychological Assessment, 3, 27-31) proposed a correction procedure in line with the hypothesis that MMPI-2 endorsement may be affected by symptoms of TBI. This study assessed the validity of the Gass correction procedure in a sample of patients with a TBI (n = 242) and a random subset of the MMPI-2 normative sample (n = 1,786). The correction procedure implies a failure of measurement invariance across populations. This study examined measurement invariance of one of the MMPI-2 scales (Hs) that includes TBI correction items. A four-factor model of the MMPI-2 Hs items was defined. The factor model was found to meet the criteria for partial measurement invariance. Analysis of the change in sensitivity and specificity values implied by partial measurement invariance failed to indicate a significant practical impact of partial invariance. Overall, the results support continued use of all Hs items to assess psychological well-being in patients with TBI. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. An Inferential Confidence Interval Method of Establishing Statistical Equivalence that Corrects Tryon's (2001) Reduction Factor

    ERIC Educational Resources Information Center

    Tryon, Warren W.; Lewis, Charles

    2008-01-01

    Evidence of group matching frequently takes the form of a nonsignificant test of statistical difference. Theoretical hypotheses of no difference are also tested in this way. These practices are flawed in that null hypothesis statistical testing provides evidence against the null hypothesis, and failing to reject H₀ is not evidence…
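The point above, that failing to reject H₀ is not evidence of equivalence, is commonly addressed by testing equivalence directly, for example with the two one-sided tests (TOST) procedure. The following is a minimal large-sample z-based sketch with hypothetical summary statistics; it is a standard related technique, not the inferential confidence interval method the article itself proposes.

```python
# TOST sketch (large-sample z approximation, hypothetical numbers):
# equivalence is concluded only if the difference is significantly
# above -margin AND significantly below +margin.
from statistics import NormalDist

def tost_z(diff, se, margin):
    # H0: |true difference| >= margin (non-equivalence)
    z_lower = (diff + margin) / se           # tests diff <= -margin
    z_upper = (diff - margin) / se           # tests diff >= +margin
    p_lower = 1 - NormalDist().cdf(z_lower)
    p_upper = NormalDist().cdf(z_upper)
    return max(p_lower, p_upper)             # overall TOST p-value

# hypothetical matched groups: small observed difference, margin 0.5
p = tost_z(diff=0.1, se=0.15, margin=0.5)
print(p)  # small p supports equivalence within the margin
```

Note the logic is inverted relative to a difference test: a small TOST p-value is positive evidence that the groups are equivalent within the stated margin, which a nonsignificant difference test can never provide.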

  4. Effects of Item Exposure for Conventional Examinations in a Continuous Testing Environment.

    ERIC Educational Resources Information Center

    Hertz, Norman R.; Chinn, Roberta N.

    This study explored the effect of item exposure on two conventional examinations administered as computer-based tests. A principal hypothesis was that item exposure would have little or no effect on average difficulty of the items over the course of an administrative cycle. This hypothesis was tested by exploring conventional item statistics and…

  5. Directional and Non-directional Hypothesis Testing: A Survey of SIG Members, Journals, and Textbooks.

    ERIC Educational Resources Information Center

    McNeil, Keith

    The use of directional and nondirectional hypothesis testing was examined from the perspectives of textbooks, journal articles, and members of editorial boards. Three widely used statistical texts were reviewed in terms of how directional and nondirectional tests of significance were presented. Texts reviewed were written by: (1) D. E. Hinkle, W.…

  6. The Feminization of School Hypothesis Called into Question among Junior and High School Students

    ERIC Educational Resources Information Center

    Verniers, Catherine; Martinot, Delphine; Dompnier, Benoît

    2016-01-01

    Background: The feminization of school hypothesis suggests that boys underachieve in school compared to girls because school rewards feminine characteristics that are at odds with boys' masculine features. Aims: The feminization of school hypothesis lacks empirical evidence. The aim of this study was to test this hypothesis by examining the extent…

  7. The limits to pride: A test of the pro-anorexia hypothesis.

    PubMed

    Cornelius, Talea; Blanton, Hart

    2016-01-01

    Many social psychological models propose that positive self-conceptions promote self-esteem. An extreme version of this hypothesis is advanced in "pro-anorexia" communities: identifying with anorexia, in conjunction with disordered eating, can lead to higher self-esteem. The current study empirically tested this hypothesis. Results challenge the pro-anorexia hypothesis. Although those with higher levels of pro-anorexia identification trended towards higher self-esteem with increased disordered eating, this did not overcome the strong negative main effect of pro-anorexia identification. These data suggest a more effective strategy for promoting self-esteem is to encourage rejection of disordered eating and an anorexic identity.

  8. Does the Slow-Growth, High-Mortality Hypothesis Apply Below Ground?

    PubMed

    Hourston, James E; Bennett, Alison E; Johnson, Scott N; Gange, Alan C

    2016-01-01

    Belowground tri-trophic study systems present a challenging environment in which to study plant-herbivore-natural enemy interactions. For this reason, belowground examples are rarely available for testing general ecological theories. To redress this imbalance, we present, for the first time, data on a belowground tri-trophic system to test the slow-growth, high-mortality hypothesis. We investigated whether the differing performance of entomopathogenic nematodes (EPNs) in controlling the common pest black vine weevil Otiorhynchus sulcatus could be linked to differently resistant cultivars of the red raspberry Rubus idaeus. The O. sulcatus larvae recovered from R. idaeus plants showed significantly slower growth and higher mortality on the Glen Rosa cultivar, relative to the more commercially favored Glen Ample cultivar, creating a convenient system for testing this hypothesis. Heterorhabditis megidis was found to be less effective at controlling O. sulcatus than Steinernema kraussei, but conformed to the hypothesis. However, S. kraussei maintained high levels of O. sulcatus mortality regardless of how larval growth was influenced by R. idaeus cultivar. We link this to direct effects that S. kraussei had on reducing O. sulcatus larval mass, indicating potential sub-lethal effects of S. kraussei, which the slow-growth, high-mortality hypothesis does not account for. Possible origins of these sub-lethal effects of EPN infection, and how they may affect a hypothesis designed and tested with aboveground predator and parasitoid systems, are discussed.

  9. A critique of statistical hypothesis testing in clinical research

    PubMed Central

    Raha, Somik

    2011-01-01

    Many have documented the difficulty of using the current paradigm of Randomized Controlled Trials (RCTs) to test and validate the effectiveness of alternative medical systems such as Ayurveda. This paper critiques the applicability of RCTs for all clinical knowledge-seeking endeavors, of which Ayurveda research is a part. This is done by examining statistical hypothesis testing, the underlying foundation of RCTs, from a practical and philosophical perspective. In the philosophical critique, the two main worldviews of probability considered are the Bayesian and the frequentist. The frequentist worldview is a special case of the Bayesian worldview, requiring the unrealistic assumptions of knowing nothing about the universe and believing that all observations are unrelated to each other. Many have claimed that the first belief is necessary for science, and this claim is debunked by comparing variations in learning with different prior beliefs. Moving beyond the Bayesian and frequentist worldviews, the notion of hypothesis testing itself is challenged on the grounds that a hypothesis is an unclear distinction, and assigning a probability to an unclear distinction is an exercise that does not lead to clarity of action. This critique is of the theory itself and not of any particular application of statistical hypothesis testing. A decision-making frame is proposed as a way of both addressing this critique and transcending ideological debates on probability. An example of a Bayesian decision-making approach is shown as an alternative to statistical hypothesis testing, utilizing data from a past clinical trial that studied the effect of Aspirin on heart attacks in a sample population of doctors. Because a major reason for the prevalence of RCTs in academia is legislation requiring them, the ethics of legislating the use of statistical methods for clinical research is also examined. PMID:22022152
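The Bayesian decision-making alternative mentioned above can be sketched as a conjugate Beta-Binomial comparison of two trial arms. The counts below are hypothetical illustrations, not the aspirin trial's actual data.

```python
# Beta-Binomial sketch (hypothetical counts): posterior probability
# that the treatment arm has a lower event rate than the control arm,
# estimated by Monte Carlo draws from the two posteriors.
import random

random.seed(7)

# hypothetical events / totals for control and treatment arms
events_c, n_c = 30, 1000
events_t, n_t = 18, 1000

# Beta(1, 1) uniform priors updated by the observed counts
post_c = (1 + events_c, 1 + n_c - events_c)
post_t = (1 + events_t, 1 + n_t - events_t)

draws = 20_000
wins = sum(
    random.betavariate(*post_t) < random.betavariate(*post_c)
    for _ in range(draws)
)
prob_treatment_better = wins / draws
print(prob_treatment_better)  # posterior P(rate_t < rate_c)
```

Rather than a binary reject/fail-to-reject verdict, the output is a probability that can be weighed directly against the costs and benefits of each clinical action, which is the decision-making frame the paper advocates.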

  10. [Effect of filial imprinting procedure on cell proliferation in the chick brain].

    PubMed

    Komissarova, N V; Anokhin, K V

    2007-01-01

    In the present study, we tested the hypothesis that memory formation during visual imprinting might be related to the generation of new cells in the brain of newborn domestic chicks. Cell proliferation was examined in the intermediate medial mesopallium (IMM), arcopallium intermedium (AI), medial part of the nidopallium and mesopallium (MNM), nidopallium dorso-caudalis (Ndc), hippocampus (Hp), and area parahippocampalis (APH), as well as in the corresponding ventricular zones. The number of new cells was measured by BrdU incorporation 24 h or 7 days after training; BrdU was injected before training. At 24 h after imprinting, the number of BrdU-positive cells had increased significantly in the IMM. At 7 days after training, no changes were observed in the IMM, while the number of new cells had decreased in the MNM and Ndc in comparison with the control group. These data suggest that newly generated cells in the brain of young chicks are influenced by the imprinting procedure, which has opposite short-term and long-term effects. A possible reason for this double action of imprinting, in contrast to conventional learning, may be its additional stimulation of the development of a predisposition for features of the natural parents.

  11. The epistemology of mathematical and statistical modeling: a quiet methodological revolution.

    PubMed

    Rodgers, Joseph Lee

    2010-01-01

    A quiet methodological revolution, a modeling revolution, has occurred over the past several decades, almost without discussion. In contrast, the 20th century ended with contentious argument over the utility of null hypothesis significance testing (NHST). The NHST controversy may have been at least partially irrelevant, because in certain ways the modeling revolution obviated the NHST argument. I begin with a history of NHST and modeling and their relation to one another. Next, I define and illustrate principles involved in developing and evaluating mathematical models. I then discuss the difference between using statistical procedures within a rule-based framework and building mathematical models from a scientific epistemology. Only the former is treated carefully in most psychology graduate training. The pedagogical implications of this imbalance and the revised pedagogy required to account for the modeling revolution are described. To conclude, I discuss how attention to modeling implies shifting statistical practice in certain progressive ways. The epistemological basis of statistics has moved away from being a set of procedures, applied mechanistically, and moved toward building and evaluating statistical and scientific models. Copyright 2009 APA, all rights reserved.

  12. Validity of temporomandibular disorder examination procedures for assessment of temporomandibular joint status.

    PubMed

    Schmitter, Marc; Kress, Bodo; Leckel, Michael; Henschel, Volkmar; Ohlmann, Brigitte; Rammelsberg, Peter

    2008-06-01

    This hypothesis-generating study was performed to determine which items in the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD), and which additional diagnostic tests, have the best predictive accuracy for joint-related diagnoses. One hundred forty-nine TMD patients and 43 symptom-free subjects underwent clinical examination and magnetic resonance imaging (MRI). The importance of each variable of the clinical examination for a correct joint-related diagnosis was assessed by using the MRI diagnoses. For this purpose, "random forest" statistical software (based on classification trees) was used. Maximum unassisted jaw opening, maximum assisted jaw opening, history of locked jaw, joint sound with and without compression, joint pain, facial pain, pain on palpation of the lateral pterygoid area, and overjet proved suitable for distinguishing between subtypes of joint-related TMD. Measurements of excursion, protrusion, and midline deviation were less important. The validity of clinical TMD examination procedures can be enhanced by using the 16 variables of greatest importance identified in this study. In addition to other variables, maximum unassisted and assisted opening and a history of locked jaw were important when assessing the status of the TMJ.

  13. Beyond Volume: Hospital-Based Healthcare Technology for Better Outcomes in Cerebrovascular Surgical Patients Diagnosed With Ischemic Stroke: A Population-Based Nationwide Cohort Study From 2002 to 2013.

    PubMed

    Kim, Jae-Hyun; Park, Eun-Cheol; Lee, Sang Gyu; Lee, Tae-Hyun; Jang, Sung-In

    2016-03-01

    We examined whether the level of hospital-based healthcare technology was related to the 30-day postoperative mortality rates, after adjusting for hospital volume, of ischemic stroke patients who underwent a cerebrovascular surgical procedure. Using the National Health Insurance Service-Cohort Sample Database, we reviewed records from 2002 to 2013 for data on patients with ischemic stroke who underwent cerebrovascular surgical procedures. Statistical analysis was performed using Cox proportional hazard models to test our hypothesis. A total of 798 subjects were included in our study. After adjusting for hospital volume of cerebrovascular surgical procedures as well as all for other potential confounders, the hazard ratio (HR) of 30-day mortality in low healthcare technology hospitals as compared to high healthcare technology hospitals was 2.583 (P < 0.001). We also found that, although the HR of 30-day mortality in low healthcare technology hospitals with high volume as compared to high healthcare technology hospitals with high volume was the highest (10.014, P < 0.0001), cerebrovascular surgical procedure patients treated in low healthcare technology hospitals had the highest 30-day mortality rate, irrespective of hospital volume. Although results of our study provide scientific evidence for a hospital volume/30-day mortality rate relationship in ischemic stroke patients who underwent cerebrovascular surgical procedures, our results also suggest that the level of hospital-based healthcare technology is associated with mortality rates independent of hospital volume. Given these results, further research into what components of hospital-based healthcare technology significantly impact mortality is warranted.

  14. Robust inference from multiple test statistics via permutations: a better alternative to the single test statistic approach for randomized trials.

    PubMed

    Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie

    2013-01-01

    Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is small. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
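The minimum p-value procedure described above can be sketched roughly as follows. The data are simulated and the two candidate statistics (absolute mean and median differences) are illustrative choices, not the authors' specifications; the key steps are converting each statistic to a permutation p-value and then referring the observed minimum p-value to its own permutation distribution.

```python
# Sketch: min-p permutation test with two candidate statistics.
import random
import statistics

random.seed(3)

# hypothetical two-arm data with a true shift of 0.5
treatment = [random.gauss(0.5, 1) for _ in range(15)]
control = [random.gauss(0.0, 1) for _ in range(15)]

def stats_pair(a, b):
    # two candidate test statistics: absolute mean and median differences
    return (abs(statistics.mean(a) - statistics.mean(b)),
            abs(statistics.median(a) - statistics.median(b)))

pooled = treatment + control
n, B = len(treatment), 1000

perm = []
for _ in range(B):
    random.shuffle(pooled)               # relabel arms under the null
    perm.append(stats_pair(pooled[:n], pooled[n:]))

dists = list(zip(*perm))                 # permutation distribution per statistic
obs = stats_pair(treatment, control)

def p_value(value, dist):
    # permutation p-value for a single statistic
    return sum(d >= value for d in dist) / len(dist)

obs_minp = min(p_value(o, d) for o, d in zip(obs, dists))

# null distribution of the minimum p-value, from the same permutations
perm_minp = [min(p_value(s[i], dists[i]) for i in range(2)) for s in perm]
adjusted_p = sum(m <= obs_minp for m in perm_minp) / B
print(obs_minp, adjusted_p)
```

Because the minimum is taken over statistics that were each calibrated by permutation, the final comparison against the permutation distribution of the minimum preserves the type I error rate, as the abstract describes.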

  15. Chromium release from new stainless steel, recycled and nickel-free orthodontic brackets.

    PubMed

    Sfondrini, Maria Francesca; Cacciafesta, Vittorio; Maffia, Elena; Massironi, Sarah; Scribante, Andrea; Alberti, Giancarla; Biesuz, Raffaela; Klersy, Catherine

    2009-03-01

    To test the hypothesis that there is no difference in the amounts of chromium released from new stainless steel brackets, recycled stainless steel brackets, and nickel-free (Ni-free) orthodontic brackets. This in vitro study was performed using a classic batch procedure by immersion of the samples in artificial saliva at various acidities (pH 4.2, 6.5, and 7.6) over an extended time interval (t(1) = 0.25 h, t(2) = 1 h, t(3) = 24 h, t(4) = 48 h, t(5) = 120 h). The amount of chromium release was determined using an atomic absorption spectrophotometer and an inductively coupled plasma atomic emission spectrometer. Statistical analysis included a linear regression model for repeated measures, with calculation of Huber-White robust standard errors to account for intrabracket correlation of data. For post hoc comparisons the Bonferroni correction was applied. The greatest amount of chromium was released from new stainless steel brackets (0.52 +/- 1.083 microg/g), whereas the recycled brackets released 0.27 +/- 0.38 microg/g. The smallest release was measured with Ni-free brackets (0.21 +/- 0.51 microg/g). The difference between recycled brackets and Ni-free brackets was not statistically significant (P = .13). For all brackets, the greatest release (P = .000) was measured at pH 4.2, and a significant increase was reported between all time intervals (P < .002). The hypothesis is rejected, but the amount of chromium released in all test solutions was well below the daily dietary intake level.

  16. The biological clock of Neurospora in a microgravity environment.

    PubMed

    Ferraro, J S; Fuller, C A; Sulzman, F M

    1989-01-01

    The circadian rhythm of conidiation in Neurospora crassa is thought to be an endogenously derived circadian oscillation; however, several investigators have suggested that circadian rhythms may, instead, be driven by some geophysical time cue(s). An experiment was conducted on space shuttle flight STS-9 in order to test this hypothesis; during the first 7-8 cycles in space, there were several minor alterations observed in the conidiation rhythm, including an increase in the period of the oscillation, an increase in the variability of the growth rate and a diminished rhythm amplitude, which eventually damped out in 25% of the flight tubes. On day seven of flight, the tubes were exposed to light while their growth fronts were marked. Some aspect of the marking process reinstated a robust rhythm in all the tubes which continued throughout the remainder of the flight. These results from the last 86 hours of flight demonstrated that the rhythm can persist in space. Since the aberrant rhythmicity occurred prior to the marking procedure, but not after, it was hypothesized that the damping on STS-9 may have resulted from the hypergravity pulse of launch. To test this hypothesis, we conducted investigations into the effects of altered gravitational forces on conidiation. Exposure to hypergravity (via centrifugation), simulated microgravity (via the use of a clinostat) and altered orientations (via alterations in the vector of a 1 g force) were used to examine the effects of gravity upon the circadian rhythm of conidiation.

  17. Reappraisal of the corticothalamic and thalamocortical interactions that contribute to the augmenting response in the rat.

    PubMed

    Mishima, K; Ohta, M

    1992-01-01

    In urethane-anesthetized rats, low-frequency electrical stimulation of the thalamic radiation (TR) evoked an augmenting response in the somatosensory cortex (SCx), which was followed by rhythmic slow waves. The augmenting response mainly consists of the incremental secondary response (II-response). Simultaneously, augmentation also occurs in the ventrobasal nucleus of the thalamus (VB) on the late component responses, C- and D-waves, to TR stimulation. The latencies of these augmented responses were shorter for the C-wave and the accompanying unit discharges in the VB relay neurons than for the D-wave and the II-response. We hypothesized that the thalamo-cortico-thalamic reverberating circuit is crucial in generating the augmenting response in the SCx. To test this hypothesis, we temporarily blocked corticothalamic glutamatergic transmission by microinjecting kynurenate (KYN), a glutamate antagonist, into the VB at doses above 2 mM. This local procedure blocked all of the augmenting phenomena completely, with full recovery after a duration that depended on the dose of KYN. Moreover, while the II-response to the test TR stimuli was completely blocked, augmentation could be restored by adding a short train of high-frequency TR stimuli that mimicked a burst discharge of VB relay neurons. These results, which support the hypothesis, call for a reappraisal of the functional significance of the reverberating circuit in augmentation, which has recently been controversial.

  18. Phylogenetic classification of Aureobasidium pullulans strains for production of pullulan and xylanase

    USDA-ARS?s Scientific Manuscript database

    This study tests the hypothesis that phylogenetic classification can predict whether A. pullulans strains will produce useful levels of the commercial polysaccharide, pullulan, or the valuable enzyme, xylanase. To test this hypothesis, 19 strains of A. pullulans with previously described phenotypes...

  19. Formulating appropriate statistical hypotheses for treatment comparison in clinical trial design and analysis.

    PubMed

    Huang, Peng; Ou, Ai-hua; Piantadosi, Steven; Tan, Ming

    2014-11-01

    We discuss the problem of properly defining treatment superiority through the specification of hypotheses in clinical trials. The need to precisely define the notion of superiority in a one-sided hypothesis test problem has been well recognized by many authors. Ideally designed null and alternative hypotheses should correspond to a partition of all possible scenarios of underlying true probability models P = {P(ω) : ω ∈ Ω}, such that the alternative hypothesis Ha = {P(ω) : ω ∈ Ωa} can be inferred upon rejection of the null hypothesis Ho = {P(ω) : ω ∈ Ωo}. However, in many cases, tests are carried out and recommendations are made without a precise definition of superiority or a specification of the alternative hypothesis. Moreover, in some applications, the union of probability models specified by the chosen null and alternative hypotheses does not constitute the complete model collection P (i.e., Ho ∪ Ha is smaller than P). This not only imposes a strong non-validated assumption about the underlying true models, but also leads to different superiority claims depending on which test is used, instead of on scientific plausibility. Different ways to partition P for testing treatment superiority often have different implications for sample size, power, and significance in both efficacy and comparative effectiveness trial design. Such differences are often overlooked. We provide a theoretical framework for evaluating the statistical properties of different specifications of superiority in typical hypothesis testing. This can help investigators select proper hypotheses for treatment comparison in clinical trial design. Copyright © 2014 Elsevier Inc. All rights reserved.
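The partition requirement described above can be restated compactly in the abstract's own notation:

```latex
H_o = \{P(\omega) : \omega \in \Omega_o\}, \qquad
H_a = \{P(\omega) : \omega \in \Omega_a\}, \qquad
\Omega_o \cup \Omega_a = \Omega, \quad \Omega_o \cap \Omega_a = \emptyset.
```

If instead $\Omega_o \cup \Omega_a \subsetneq \Omega$, rejecting $H_o$ does not by itself license the inference $H_a$: the test quietly assumes the true model lies in the specified subfamily, which is the non-validated assumption the authors criticize.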

  20. The potential for increased power from combining P-values testing the same hypothesis.

    PubMed

    Ganju, Jitendra; Julie Ma, Guoguang

    2017-02-01

    The conventional approach to hypothesis testing for formal inference is to prespecify a single test statistic thought to be optimal. However, we usually have more than one test statistic in mind for testing the null hypothesis of no treatment effect, but we do not know which one is the most powerful. Rather than relying on a single p-value, one can combine p-values from multiple prespecified test statistics for inference. Combining functions include Fisher's combination test and the minimum p-value. Using randomization-based tests, the increase in power can be remarkable when compared with a single test and Simes's method. The versatility of the method is that it also applies when the number of covariates exceeds the number of observations. The increase in power is large enough to prefer combined p-values over a single p-value. The limitation is that the method does not provide an unbiased estimator of the treatment effect and does not apply to situations in which the model includes a treatment-by-covariate interaction.
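Fisher's combination test mentioned above has a simple closed form: under the null, X = -2 Σ ln pᵢ follows a chi-squared distribution with 2k degrees of freedom when the k p-values are independent. A stdlib sketch with illustrative p-values:

```python
# Fisher's method for combining k independent p-values.
import math

def chi2_sf_even_df(x, df):
    # survival function of chi-squared with even df = 2k:
    # P(X > x) = exp(-x/2) * sum_{i<k} (x/2)^i / i!
    k = df // 2
    term, total = 1.0, 0.0
    for i in range(k):
        if i > 0:
            term *= (x / 2) / i
        total += term
    return math.exp(-x / 2) * total

def fisher_combine(pvalues):
    stat = -2 * sum(math.log(p) for p in pvalues)
    return stat, chi2_sf_even_df(stat, 2 * len(pvalues))

stat, p = fisher_combine([0.08, 0.10, 0.15])
print(round(p, 4))  # combined p is smaller than any single input here
```

Note the contrast with the minimum p-value approach of the abstract: Fisher's statistic pools evidence from all the p-values, so several individually nonsignificant results can combine into a significant one, as in this example.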

  1. The group engagement model: procedural justice, social identity, and cooperative behavior.

    PubMed

    Tyler, Tom R; Blader, Steven L

    2003-01-01

    The group engagement model expands the insights of the group-value model of procedural justice and the relational model of authority into an explanation for why procedural justice shapes cooperation in groups, organizations, and societies. It hypothesizes that procedures are important because they shape people's social identity within groups, and social identity in turn influences attitudes, values, and behaviors. The model further hypothesizes that resource judgments exercise their influence indirectly by shaping social identity. This social identity mediation hypothesis explains why people focus on procedural justice, and in particular on procedural elements related to the quality of their interpersonal treatment, because those elements carry the most social identity-relevant information. In this article, we review several key insights of the group engagement model, relate these insights to important trends in psychological research on justice, and discuss implications of the model for the future of procedural justice research.

  2. A test of the orthographic recoding hypothesis

    NASA Astrophysics Data System (ADS)

    Gaygen, Daniel E.

    2003-04-01

    The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of the words. Listeners have a stable orthographic representation of words, but no phonological representation, when those words have been read frequently but never heard or spoken. Such may be the case for low frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for words presented visually. In the first experiment, the recognition of auditorily presented nonwords was facilitated when they previously appeared on a visually presented list. The second experiment was similar, but included a concurrent articulation task during a visual word list presentation, thus preventing covert rehearsal of the nonwords. The results were similar to the first experiment. The third experiment was an indirect test of memory (auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.

  3. Early Adoption of a Multitarget Stool DNA Test for Colorectal Cancer Screening.

    PubMed

    Finney Rutten, Lila J; Jacobson, Robert M; Wilson, Patrick M; Jacobson, Debra J; Fan, Chun; Kisiel, John B; Sweetser, Seth; Tulledge-Scheitel, Sidna M; St Sauver, Jennifer L

    2017-05-01

    To characterize early adoption of a novel multitarget stool DNA (MT-sDNA) screening test for colorectal cancer (CRC) screening and to test the hypothesis that adoption differs by demographic characteristics and prior CRC screening behavior and proceeds predictably over time. We used the Rochester Epidemiology Project research infrastructure to assess the use of the MT-sDNA screening test in adults aged 50 to 75 years living in Olmsted County, Minnesota, in 2014 and identified 27,147 individuals eligible or due for screening colonoscopy from November 1, 2014, through November 30, 2015. We used electronic Current Procedural Terminology and Health Care Common Procedure codes to evaluate early adoption of the MT-sDNA screening test in this population and to test whether early adoption varies by age, sex, race, and prior CRC screening behavior. Overall, 2193 (8.1%) and 974 (3.6%) individuals were screened by colonoscopy and MT-sDNA, respectively. Age, sex, race, and prior CRC screening behavior were significantly and independently associated with MT-sDNA screening use compared with colonoscopy use after adjustment for all other variables (P<.05 for all). The rates of adoption of MT-sDNA screening increased over time and were highest in those aged 50 to 54 years, women, whites, and those who had a history of screening. The use of the MT-sDNA screening test varied predictably by insurance coverage. The rates of colonoscopy decreased over time, whereas overall CRC screening rates remained steady. The results of the present study are generally consistent with predictions derived from prior research and the diffusion of innovation framework, pointing to increasing use of the new screening test over time and early adoption by younger patients, women, whites, and those with prior CRC screening. Copyright © 2017 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.

  4. Reporting Practices and Use of Quantitative Methods in Canadian Journal Articles in Psychology.

    PubMed

    Counsell, Alyssa; Harlow, Lisa L

    2017-05-01

    With recent focus on the state of research in psychology, it is essential to assess the nature of the statistical methods and analyses used and reported by psychological researchers. To that end, we investigated the prevalence of different statistical procedures and the nature of statistical reporting practices in recent articles from the four major Canadian psychology journals. The majority of authors evaluated their research hypotheses through the use of analysis of variance (ANOVA), t-tests, and multiple regression. Multivariate approaches were less common. Null hypothesis significance testing remains a popular strategy, but the majority of authors reported a standardized or unstandardized effect size measure alongside their significance test results. Confidence intervals on effect sizes were infrequently employed. Many authors provided minimal details about their statistical analyses, and fewer than a third of the articles reported data complications such as missing data and violations of statistical assumptions. Strengths of, and areas needing improvement in, the reporting of quantitative results are highlighted. The paper concludes with recommendations for how researchers and reviewers can improve comprehension and transparency in statistical reporting.
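The practice the survey highlights, reporting an effect size alongside the significance test, can be illustrated with a minimal sketch: Cohen's d computed with a pooled standard deviation (hypothetical data; not taken from the surveyed articles):

```python
import math

def cohens_d(group1, group2):
    """Standardized mean difference using the pooled sample SD."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Unbiased sample variances (denominator n - 1)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical scores for two groups
d = cohens_d([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
```

Reporting d (or a comparable standardized measure) next to the p value is exactly the pairing the authors found in the majority of articles.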

  5. Management of chronic low back pain: rationales, principles, and targets of imaging-guided spinal injections.

    PubMed

    Fritz, Jan; Niemeyer, Thomas; Clasen, Stephan; Wiskirchen, Jakub; Tepe, Gunnar; Kastler, Bruno; Nägele, Thomas; König, Claudius W; Claussen, Claus D; Pereira, Philippe L

    2007-01-01

    If low back pain does not improve with conservative management, the cause of the pain must be determined before further therapy is initiated. Information obtained from the patient's medical history, physical examination, and imaging may suffice to rule out many common causes of chronic pain (eg, fracture, malignancy, visceral or metabolic abnormality, deformity, inflammation, and infection). However, in most cases, the initial clinical and imaging findings have a low predictive value for the identification of specific pain-producing spinal structures. Diagnostic spinal injections performed in conjunction with imaging may be necessary to test the hypothesis that a particular structure is the source of pain. To ensure a valid test result, diagnostic injection procedures should be monitored with fluoroscopy, computed tomography, or magnetic resonance imaging. The use of controlled and comparative injections helps maximize the reliability of the test results. After a symptomatic structure has been identified, therapeutic spinal injections may be administered as an adjunct to conservative management, especially in patients with inoperable conditions. Therapeutic injections also may help hasten the recovery of patients with persistent or recurrent pain after spinal surgery. RSNA, 2007

  6. The picture superiority effect in conceptual implicit memory: a conceptual distinctiveness hypothesis.

    PubMed

    Hamilton, Maryellen; Geraci, Lisa

    2006-01-01

    According to leading theories, the picture superiority effect is driven by conceptual processing, yet this effect has been difficult to obtain using conceptual implicit memory tests. We hypothesized that the picture superiority effect results from conceptual processing of a picture's distinctive features rather than a picture's semantic features. To test this hypothesis, we used 2 conceptual implicit general knowledge tests; one cued conceptually distinctive features (e.g., "What animal has large eyes?") and the other cued semantic features (e.g., "What animal is the figurehead of Tootsie Roll?"). Results showed a picture superiority effect only on the conceptual test using distinctive cues, supporting our hypothesis that this effect is mediated by conceptual processing of a picture's distinctive features.

  7. Why do mothers favor girls and fathers, boys? A hypothesis and a test of investment disparity.

    PubMed

    Godoy, Ricardo; Reyes-García, Victoria; McDade, Thomas; Tanner, Susan; Leonard, William R; Huanca, Tomás; Vadez, Vincent; Patel, Karishma

    2006-06-01

    Growing evidence suggests mothers invest more in girls than boys and fathers more in boys than girls. We develop a hypothesis that predicts preference for girls by the parent facing more resource constraints and preference for boys by the parent facing fewer constraints. We test the hypothesis with panel data from the Tsimane', a foraging-farming society in the Bolivian Amazon. Tsimane' mothers face more resource constraints than fathers. As predicted, mothers' wealth protected girls' BMI, but fathers' wealth had weak effects on boys' BMI. Numerous tests yielded robust results, including those that controlled for fixed effects of child and household.

  8. Addendum to the article: Misuse of null hypothesis significance testing: Would estimation of positive and negative predictive values improve certainty of chemical risk assessment?

    PubMed

    Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf

    2015-03-01

    We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: They inform the researcher about the probability of statistically significant and non-significant test outcomes actually being true. Although commonly misunderstood, a reported p value estimates only the probability of obtaining the results, or more extreme results, if the null hypothesis of no effect were true. Calculations of the more informative PPV and NPV require an a priori estimate of the probability (R). The present document discusses challenges of estimating R.
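The PPV and NPV described above follow from the Type I error rate α, the statistical power, and an a priori probability that a real effect exists. A minimal sketch, using a plain prior probability as a stand-in for the authors' R (their exact formulation may differ):

```python
def predictive_values(alpha, power, prior):
    """Probability that significant / non-significant outcomes are true.

    alpha: Type I error rate; power: 1 - Type II error rate;
    prior: a priori probability that a real effect exists
    (a stand-in for the R discussed in the abstract).
    """
    ppv = power * prior / (power * prior + alpha * (1 - prior))
    npv = ((1 - alpha) * (1 - prior)
           / ((1 - alpha) * (1 - prior) + (1 - power) * prior))
    return ppv, npv

# With conventional alpha = .05, power = .80, and an even prior,
# a significant result is true about 94% of the time
ppv, npv = predictive_values(alpha=0.05, power=0.8, prior=0.5)
```

The sketch makes the authors' point concrete: both values depend on the prior, which is exactly why estimating R is the hard part.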

  9. Functional imaging of brain responses to different outcomes of hypothesis testing: revealed in a category induction task.

    PubMed

    Li, Fuhong; Cao, Bihua; Luo, Yuejia; Lei, Yi; Li, Hong

    2013-02-01

    Functional magnetic resonance imaging (fMRI) was used to examine differences in brain activation that occur when a person receives the different outcomes of hypothesis testing (HT). Participants were provided with a series of images of batteries and were asked to learn a rule governing what kinds of batteries were charged. Within each trial, the first two charged batteries were sequentially displayed, and participants would generate a preliminary hypothesis based on the perceptual comparison. Next, a third battery that served to strengthen, reject, or was irrelevant to the preliminary hypothesis was displayed. The fMRI results revealed that (1) no significant differences in brain activation were found between the 2 hypothesis-maintain conditions (i.e., strengthen and irrelevant conditions); and (2) compared with the hypothesis-maintain conditions, the hypothesis-reject condition activated the left medial frontal cortex, bilateral putamen, left parietal cortex, and right cerebellum. These findings are discussed in terms of the neural correlates of the subcomponents of HT and working memory manipulation. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Effects of Model Salivary Esterases and MMP Inhibition on the Restoration's Marginal Integrity and Potential Degradative Contribution of Cariogenic Bacteria

    NASA Astrophysics Data System (ADS)

    Huang, Bo

    Enzyme-catalyzed degradation of the restoration-tooth interface compromises interfacial integrity, thereby contributing to secondary caries, which is a major cause of resin-based restoration failure. It is hypothesized that in addition to salivary esterases, the cariogenic bacterium Streptococcus mutans has specific esterases that degrade the resin-dentin interface, releasing biodegradation by-products (BBPs) such as bis-hydroxy-propoxy-phenyl-propane (BisHPPP). In turn, BisHPPP affects S. mutans by stimulating the expression of esterases. Another hypothesis is that the biostability of the resin-dentin interface is affected by simulated salivary esterases, dentinal matrix metalloproteinase (MMP) inhibition, and restorative materials. To test the first hypothesis, putative esterase genes in S. mutans UA159 were identified, purified, and characterized. SMU_118c was identified as the dominant esterase in S. mutans UA159 and showed a similar hydrolytic activity profile to salivary esterases. BisHPPP upregulated expression of the SMU_118c gene and related protein in a concentration-dependent manner. This positive feedback process could accelerate the degradation of the restoration-tooth interface and lead to premature restoration failure. To test the second hypothesis, an in vitro model was established to evaluate the effects of salivary esterases, MMP inhibition and restorative materials on interfacial integrity. It was confirmed that interfacial integrity was compromised with time and was further deteriorated by simulated salivary esterases, as indicated by the greater depth of bacterial ingress and more bacterial biomass of biofilm along the interface. However, this process could be modulated by using different restorative materials and MMP inhibition.
This project elucidated the mechanistic interaction between oral bacteria and restorative materials and established a new, in vitro, and physiologically relevant model to assess the effect of material chemistry, properties, and application modes on bacterial penetration and biofilm formation. These findings offer the oral health community practical ways to reduce secondary caries by altering material composition and restorative procedures.

  11. Animal Models for Testing the DOHaD Hypothesis

    EPA Science Inventory

    Since the seminal work in human populations by David Barker and colleagues, several species of animals have been used in the laboratory to test the Developmental Origins of Health and Disease (DOHaD) hypothesis. Rats, mice, guinea pigs, sheep, pigs and non-human primates have bee...

  12. A "Projective" Test of the Golden Section Hypothesis.

    ERIC Educational Resources Information Center

    Lee, Chris; Adams-Webber, Jack

    1987-01-01

    In a projective test of the golden section hypothesis, 24 high school students rated themselves and 10 comic strip characters on basis of 12 bipolar constructs. Overall proportion of cartoon figures which subjects assigned to positive poles of constructs was very close to golden section. (Author/NB)
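The golden section hypothesis predicts that the proportion of positive-pole assignments approximates (√5 − 1)/2 ≈ 0.618. A one-line check of an observed proportion against this constant (the count below is hypothetical, not the study's data):

```python
import math

# The golden section constant the hypothesis predicts
golden_section = (math.sqrt(5) - 1) / 2   # ~0.618

# Hypothetical tally: 89 positive-pole assignments out of 144 ratings
observed = 89 / 144
deviation = abs(observed - golden_section)
```

A deviation near zero is what "very close to golden section" means operationally in studies of this kind.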

  13. Consolidation through the looking-glass: sleep-dependent proactive interference on visuomotor adaptation in children.

    PubMed

    Urbain, Charline; Houyoux, Emeline; Albouy, Geneviève; Peigneux, Philippe

    2014-02-01

    Although a beneficial role of post-training sleep for declarative memory has been consistently evidenced in children, as in adults, available data suggest that procedural memory consolidation does not benefit from sleep in children. However, besides the absence of performance gains in children, sleep-dependent plasticity processes involved in procedural memory consolidation might be expressed through differential interference effects on the learning of novel but related procedural material. To test this hypothesis, 32 10-12-year-old children were trained on a motor rotation adaptation task. After either a sleep or a wake period, they were first retested on the same rotation applied at learning, thus assessing offline sleep-dependent changes in performance, then on the opposite (unlearned) rotation to assess sleep-dependent modulations in proactive interference coming from the consolidated visuomotor memory trace. Results show that children gradually improve performance over the learning session, showing effective adaptation to the imposed rotation. In line with previous findings, no sleep-dependent changes in performance were observed for the learned rotation. However, presentation of the opposite, unlearned deviation elicited significantly higher interference effects after post-training sleep than wakefulness in children. Considering that a definite feature of procedural motor memory and skill acquisition is the implementation of highly automatized motor behaviour, thus lacking flexibility, our results suggest a better integration and/or automation of motor adaptation skills after post-training sleep, eventually resulting in higher proactive interference effects on untrained material. © 2013 European Sleep Research Society.

  14. Pasture succession in the Neotropics: extending the nucleation hypothesis into a matrix discontinuity hypothesis.

    PubMed

    Peterson, Chris J; Dosch, Jerald J; Carson, Walter P

    2014-08-01

    The nucleation hypothesis appears to explain widespread patterns of succession in tropical pastures, specifically the tendency for isolated trees to promote woody species recruitment. Still, the nucleation hypothesis has usually been tested explicitly for only short durations and in some cases isolated trees fail to promote woody recruitment. Moreover, at times, nucleation occurs in other key habitat patches. Thus, we propose an extension, the matrix discontinuity hypothesis: woody colonization will occur in focal patches that function to mitigate the herbaceous vegetation effects, thus providing safe sites or regeneration niches. We tested predictions of the classical nucleation hypothesis, the matrix discontinuity hypothesis, and a distance from forest edge hypothesis, in five abandoned pastures in Costa Rica, across the first 11 years of succession. Our findings confirmed the matrix discontinuity hypothesis: specifically, rotting logs and steep slopes significantly enhanced woody colonization. Surprisingly, isolated trees did not consistently significantly enhance recruitment; only larger trees did so. Finally, woody recruitment consistently decreased with distance from forest. Our results as well as results from others suggest that the nucleation hypothesis needs to be broadened beyond its historical focus on isolated trees or patches; the matrix discontinuity hypothesis focuses attention on a suite of key patch types or microsites that promote woody species recruitment. We argue that any habitat discontinuities that ameliorate the inhibition by dense graminoid layers will be foci for recruitment. Such patches could easily be manipulated to speed the transition of pastures to closed canopy forests.

  15. Humans have evolved specialized skills of social cognition: the cultural intelligence hypothesis.

    PubMed

    Herrmann, Esther; Call, Josep; Hernàndez-Lloreda, María Victoria; Hare, Brian; Tomasello, Michael

    2007-09-07

    Humans have many cognitive skills not possessed by their nearest primate relatives. The cultural intelligence hypothesis argues that this is mainly due to a species-specific set of social-cognitive skills, emerging early in ontogeny, for participating and exchanging knowledge in cultural groups. We tested this hypothesis by giving a comprehensive battery of cognitive tests to large numbers of two of humans' closest primate relatives, chimpanzees and orangutans, as well as to 2.5-year-old human children before literacy and schooling. Supporting the cultural intelligence hypothesis and contradicting the hypothesis that humans simply have more "general intelligence," we found that the children and chimpanzees had very similar cognitive skills for dealing with the physical world but that the children had more sophisticated cognitive skills than either of the ape species for dealing with the social world.

  16. Multiple-object tracking as a tool for parametrically modulating memory reactivation

    PubMed Central

    Poppenk, J.; Norman, K.A.

    2017-01-01

    Converging evidence supports the “non-monotonic plasticity” hypothesis that although complete retrieval may strengthen memories, partial retrieval weakens them. Yet, the classic experimental paradigms used to study effects of partial retrieval are not ideally suited to doing so, because they lack the parametric control needed to ensure that the memory is activated to the appropriate degree (i.e., that there is some retrieval, but not enough to cause memory strengthening). Here we present a novel procedure designed to accommodate this need. After participants learned a list of word-scene associates, they completed a cued mental visualization task that was combined with a multiple-object tracking (MOT) procedure, which we selected for its ability to interfere with mental visualization in a parametrically adjustable way (by varying the number of MOT targets). We also used fMRI data to successfully train an “associative recall” classifier for use in this task: this classifier revealed greater memory reactivation during trials in which associative memories were cued while participants tracked one, rather than five MOT targets. However, the classifier was insensitive to task difficulty when recall was not taking place, suggesting it had indeed tracked memory reactivation rather than task difficulty per se. Consistent with the classifier findings, participants’ introspective ratings of visualization vividness were modulated by MOT task difficulty. In addition, we observed reduced classifier output and slowing of responses in a post-reactivation memory test, consistent with the hypothesis that partial reactivation, induced by MOT, weakened memory. These results serve as a “proof of concept” that MOT can be used to parametrically modulate memory retrieval – a property that may prove useful in future investigation of partial retrieval effects, e.g., in closed-loop experiments. PMID:28387587

  17. Formal ontology for natural language processing and the integration of biomedical databases.

    PubMed

    Simon, Jonathan; Dos Santos, Mariana; Fielding, James; Smith, Barry

    2006-01-01

    The central hypothesis underlying this communication is that the methodology and conceptual rigor of a philosophically inspired formal ontology can bring significant benefits in the development and maintenance of application ontologies [A. Flett, M. Dos Santos, W. Ceusters, Some Ontology Engineering Procedures and their Supporting Technologies, EKAW2002, 2003]. This hypothesis has been tested in the collaboration between Language and Computing (L&C), a company specializing in software for supporting natural language processing especially in the medical field, and the Institute for Formal Ontology and Medical Information Science (IFOMIS), an academic research institution concerned with the theoretical foundations of ontology. In the course of this collaboration L&C's ontology, LinKBase, which is designed to integrate and support reasoning across a plurality of external databases, has been subjected to a thorough auditing on the basis of the principles underlying IFOMIS's Basic Formal Ontology (BFO) [B. Smith, Basic Formal Ontology, 2002. http://ontology.buffalo.edu/bfo]. The goal is to transform a large terminology-based ontology into one with the ability to support reasoning applications. Our general procedure has been the implementation of a meta-ontological definition space in which the definitions of all the concepts and relations in LinKBase are standardized in the framework of first-order logic. In this paper we describe how this principles-based standardization has led to a greater degree of internal coherence of the LinKBase structure, and how it has facilitated the construction of mappings between external databases using LinKBase as translation hub. We argue that the collaboration here described represents a new phase in the quest to solve the so-called "Tower of Babel" problem of ontology integration [F. Montayne, J. Flanagan, Formal Ontology: The Foundation for Natural Language Processing, 2003. http://www.landcglobal.com/].

  18. Fertility and female employment in Lagos, Nigeria.

    PubMed

    Feyisetan, B J

    1985-01-01

    This paper investigates the relationship between fertility and female employment in a Nigerian urban center, Lagos. The study is built upon data derived from the Survey of Household Structure, Family Employment, and the Small Family Ideal carried out in 1974. The study centered on currently married women aged 15-49, living in 2 residential areas chosen to include wage-earning and non-wage-earning workers. It is principally a test of the maternal role incompatibility hypothesis, whose major tenet is that the maternal role and the function of worker are incompatible with each other. On the basis of this assumption, the fertility and female employment equations are estimated by the two-stage least squares procedure, and the estimates are compared with those derived from the ordinary least squares procedure. The results demonstrate that mothering and working tend to conflict only if employment is undertaken in the formal sector of the labor market; a positive association is discernible between the proclivity to engage in non-domestic but irregular activities of the informal sector and the bearing and rearing of children. While the conflict between fertility and female employment in the formal sector suggests possible trade-offs between the number of children and employment, the positive association between fertility and female employment in the informal sector suggests the compatibility of the roles of a mother and of a worker in that sector. The results further demonstrate the inadequacy of using a mere rural-urban dichotomy in the examination of the maternal role incompatibility hypothesis as done in some earlier works. The urban labor market, especially in a less developed country like Nigeria, needs formal disaggregation into formal and informal sectors on the basis of the activities being undertaken.
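The two-stage least squares estimator the study contrasts with OLS can be sketched as follows. The data here are synthetic (a single instrument, a known true coefficient, and a deliberate confounder), purely to illustrate why the two estimators diverge; nothing below reflects the Lagos survey itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Synthetic data: u confounds x and y; z is a valid instrument for x
u = rng.normal(size=n)
z = rng.normal(size=n)
x = 0.8 * z + u + 0.5 * rng.normal(size=n)
y = 2.0 * x + 3.0 * u + rng.normal(size=n)   # true coefficient on x is 2.0

def ols(X, y):
    """Ordinary least squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

Z = np.column_stack([np.ones(n), z])
X = np.column_stack([np.ones(n), x])

# Stage 1: project the endogenous regressor onto the instrument
x_hat = Z @ ols(Z, x)
# Stage 2: regress the outcome on the fitted values
beta_2sls = ols(np.column_stack([np.ones(n), x_hat]), y)[1]
beta_ols = ols(X, y)[1]
```

With the confounder present, the OLS slope is biased well away from 2.0 while the 2SLS slope recovers it, which is the methodological point of preferring 2SLS when fertility and employment jointly determine each other.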

  19. A Search for Factors Causing Training Costs to Rise by Examining the U. S. Navy’s AT, AW, and AX Ratings during their First Enlistment Period

    DTIC Science & Technology

    1986-09-01

    HYPOTHESIS TEST; III. TIME TO GET RATED TWO FACTOR ANOVA RESULTS; IV. TIME TO GET RATED TUKEY'S PAIRED COMPARISON TEST RESULTS A; V. TIME TO GET RATED TUKEY'S PAIRED COMPARISON TEST RESULTS B; VI. SINGLE FACTOR ANOVA HYPOTHESIS TEST #1; VII. AT: TIME TO GET RATED ANOVA TEST RESULTS

  20. Dielectric Barrier Discharge (DBD) Plasma Actuators Thrust-Measurement Methodology Incorporating New Anti-Thrust Hypothesis

    NASA Technical Reports Server (NTRS)

    Ashpis, David E.; Laun, Matthew C.

    2014-01-01

    We discuss thrust measurements of Dielectric Barrier Discharge (DBD) plasma actuator devices used for aerodynamic active flow control. After a review of our experience with conventional thrust measurement and significant non-repeatability of the results, we devised a suspended actuator test setup, and now present a methodology of thrust measurements with decreased uncertainty. The methodology consists of frequency scans at constant voltages. The procedure consists of increasing the frequency in a step-wise fashion from several Hz to the maximum frequency of several kHz, followed by frequency decrease back down to the start frequency of several Hz. This sequence is performed first at the highest voltage of interest, then repeated at lower voltages. The data in the descending-frequency direction are more consistent and were selected for reporting. Sample results show strong dependence of thrust on humidity, which also affects the consistency and fluctuations of the measurements. We also observed negative values of thrust, or "anti-thrust", at low frequencies between 4 and 64 Hz. The anti-thrust is proportional to the mean-squared voltage and is frequency independent. Departures from the parabolic anti-thrust curve are correlated with the appearance of visible plasma discharges. We propose the anti-thrust hypothesis. It states that the measured thrust is a sum of plasma thrust and anti-thrust, and assumes that the anti-thrust exists at all frequencies and voltages. The anti-thrust depends on actuator geometry and materials and on the test installation. It enables the separation of the plasma thrust from the measured total thrust. This approach enables more meaningful comparisons between actuators at different installations and laboratories. The dependence on test installation was validated by surrounding the actuator with a large diameter, grounded, metal sleeve.
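Under the anti-thrust hypothesis, the frequency-independent, voltage-squared anti-thrust term can be fitted from low-frequency data (where plasma thrust is assumed negligible) and subtracted from the measured total. A numerical sketch with invented numbers and units; the authors' actual calibration procedure may differ:

```python
import numpy as np

# Hypothetical low-frequency calibration data, where measured thrust is
# assumed to be pure anti-thrust: T_meas = -c * V**2
V_cal = np.array([2.0, 4.0, 6.0, 8.0])    # applied voltage (hypothetical units)
c_true = 1.5e-3                            # hypothetical anti-thrust coefficient
T_cal = -c_true * V_cal**2

# Least-squares fit of T = -c * V**2 through the origin
c_hat = -(V_cal**2 @ T_cal) / (V_cal**4).sum()

# At operating frequencies, recover plasma thrust by adding back
# the fitted anti-thrust term
V_op = np.array([4.0, 6.0, 8.0])
T_plasma_true = np.array([0.5, 1.2, 2.1])  # hypothetical plasma thrust
T_measured = T_plasma_true - c_true * V_op**2
T_plasma_est = T_measured + c_hat * V_op**2
```

Because the correction depends only on voltage, the same fitted coefficient applies across the whole frequency scan, which is what makes cross-installation comparisons possible.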

  1. Wistar-Kyoto rats as an animal model of anxiety vulnerability: support for a hypervigilance hypothesis.

    PubMed

    McAuley, J D; Stewart, A L; Webber, E S; Cromwell, H C; Servatius, R J; Pang, K C H

    2009-12-01

    Inbred Wistar-Kyoto (WKY) rats have been proposed as a model of anxiety vulnerability as they display behavioral inhibition and a constellation of learning and reactivity abnormalities relative to outbred Sprague-Dawley (SD) rats. Together, the behaviors of the WKY rat suggest a hypervigilant state that may contribute to its anxiety vulnerability. To test this hypothesis, open-field behavior, acoustic startle, pre-pulse inhibition and timing behavior were assessed in WKY and Sprague-Dawley (SD) rats. Timing behavior was evaluated using a modified version of the peak-interval timing procedure. Training and testing of timing first occurred without audio-visual (AV) interference. Following this initial test, AV interference was included on some trials. Overall, WKY rats took much longer to leave the center of the arena, made fewer line crossings, and reared less, than did SD rats. WKY rats showed much greater startle responses to acoustic stimuli and significantly greater pre-pulse inhibition than did the SD rats. During timing conditions without AV interference, timing accuracy for both strains was similar; peak times for WKY and SD rats were not different. During interference conditions, however, the timing behavior of the two strains was very different. Whereas peak times for SD rats were similar between non-interference and interference conditions, peak times for WKY rats were shorter and response rates higher in interference conditions than in non-interference conditions. The enhanced acoustic startle response, greater prepulse inhibition and altered timing behavior with audio-visual interference supports a characterization of WKY strain as hypervigilant and provides further evidence for the use of the WKY strain as a model of anxiety vulnerability.

  2. Eddy covariance measurements of carbon dioxide, latent and sensible energy fluxes above a meadow on a mountain slope

    PubMed Central

    Hammerle, Albin; Haslwanter, Alois; Schmitt, Michael; Bahn, Michael; Tappeiner, Ulrike; Cernusca, Alexander; Wohlfahrt, Georg

    2014-01-01

    Carbon dioxide, latent and sensible energy fluxes were measured by means of the eddy covariance method above a mountain meadow situated on a steep slope in the Stubai Valley/Austria, based on the hypothesis that, due to the low canopy height, measurements can be made in the shallow equilibrium layer where the wind field exhibits characteristics akin to level terrain. In order to test the validity of this hypothesis and to identify effects of complex terrain in the turbulence measurements, data were subjected to a rigorous testing procedure using a series of quality control measures established for surface layer flows. The resulting high-quality data set comprised 36 % of the original observations, the substantial reduction being mainly due to a change in surface roughness and associated fetch limitations in the wind sector dominating during nighttime and transition periods. The validity of the high-quality data set was further assessed by two independent tests: i) a comparison with the net ecosystem carbon dioxide exchange measured by means of ecosystem chambers and ii) the ability of the eddy covariance measurements to close the energy balance. The net ecosystem CO2 exchange measured by the eddy covariance method agreed reasonably with ecosystem chamber measurements. The assessment of the energy balance closure showed that there was no significant difference in the correspondence between the meadow on the slope and another one situated on flat ground at the bottom of the Stubai Valley, available energy being underestimated by 28 and 29 %, respectively. We thus conclude that, appropriate quality control provided, the eddy covariance measurements made above a mountain meadow on a steep slope are of similar quality as compared to flat terrain. PMID:24465032
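The energy balance closure check described above compares the turbulent fluxes (H + LE) against the available energy (Rn − G). A minimal sketch of the closure fraction over a set of averaging periods (the half-hourly values below are hypothetical, not the Stubai Valley data):

```python
# Hypothetical half-hourly fluxes in W m^-2
H  = [120.0,  90.0, 150.0]   # sensible heat flux
LE = [180.0, 140.0, 210.0]   # latent heat flux
Rn = [430.0, 330.0, 520.0]   # net radiation
G  = [ 20.0,  10.0,  30.0]   # ground heat flux

turbulent = sum(h + le for h, le in zip(H, LE))
available = sum(rn - g for rn, g in zip(Rn, G))

closure = turbulent / available   # 1.0 would be perfect closure
underestimate = 1.0 - closure     # fraction of available energy missed
```

An underestimate in the range of the paper's 28-29 % means the eddy covariance fluxes account for only about 70 % of the available energy, a shortfall typical of such sites.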

  3. The use of a behavioral response system in the USF/NASA toxicity screening test method

    NASA Technical Reports Server (NTRS)

    Hilado, C. J.; Cumming, H. J.; Packham, S. C.

    1977-01-01

    Relative toxicity data on the pyrolysis effluents from bisphenol A polycarbonate and wool fabric were obtained, based on visual observations of the behavior of free-moving mice and on an avoidance response behavioral paradigm of restrained rats monitored by an instrumented behavioral system. The initial experiments show an essentially 1:1 correlation between the two systems with regard to first signs of incapacitation, collapse, and death from pyrolysis effluents from polycarbonate. It is hypothesized that similarly good correlations between these two systems might exist for other materials exhibiting predominantly carbon monoxide mechanisms of intoxication. This hypothesis needs to be confirmed, however, by additional experiments. Data with wool fabric exhibited greater variability with both procedures, indicating possibly different mechanisms of intoxication for wool as compared with bisphenol A polycarbonate.

  4. Sensory discrimination and intelligence: testing Spearman's other hypothesis.

    PubMed

    Deary, Ian J; Bell, P Joseph; Bell, Andrew J; Campbell, Mary L; Fazal, Nicola D

    2004-01-01

    At the centenary of Spearman's seminal 1904 article, his general intelligence hypothesis remains one of the most influential in psychology. Less well known is the article's other hypothesis that there is "a correspondence between what may provisionally be called 'General Discrimination' and 'General Intelligence' which works out with great approximation to one or absoluteness" (Spearman, 1904, p. 284). Studies that do not find high correlations between psychometric intelligence and single sensory discrimination tests do not falsify this hypothesis. This study is the first directly to address Spearman's general intelligence-general sensory discrimination hypothesis. It attempts to replicate his findings with a similar sample of schoolchildren. In a well-fitting structural equation model of the data, general intelligence and general discrimination correlated .92. In a reanalysis of data published by Acton and Schroeder (2001), general intelligence and general sensory ability correlated .68 in men and women. One hundred years after its conception, Spearman's other hypothesis achieves some confirmation. The association between general intelligence and general sensory ability remains to be replicated and explained.
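Spearman's original 1904 route to a near-unity association was the correction for attenuation, which disattenuates an observed correlation by the reliabilities of the two measures. A minimal sketch with hypothetical values (the study above used structural equation modeling, a more general approach to the same idea):

```python
import math

def disattenuate(r_xy, r_xx, r_yy):
    """Spearman's correction for attenuation: the estimated true-score
    correlation is r_xy / sqrt(r_xx * r_yy), where r_xx and r_yy are
    the reliabilities of the two measures."""
    return r_xy / math.sqrt(r_xx * r_yy)

# Hypothetical values: a modest observed correlation between two noisy
# composites implies a much stronger latent association
r_true = disattenuate(r_xy=0.38, r_xx=0.55, r_yy=0.45)
```

With unreliable single tests, observed correlations are necessarily attenuated, which is why low raw correlations between intelligence and single discrimination tests do not falsify the hypothesis.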

  5. Dynamic test input generation for multiple-fault isolation

    NASA Technical Reports Server (NTRS)

    Schaefer, Phil

    1990-01-01

    Recent work in Causal Reasoning has provided practical techniques for multiple fault diagnosis. These techniques provide a hypothesis/measurement diagnosis cycle. Using probabilistic methods, they choose the best measurements to make, then update fault hypotheses in response. For many applications such as computers and spacecraft, few measurement points may be accessible, or values may change quickly as the system under diagnosis operates. In these cases, a hypothesis/measurement cycle is insufficient. A technique is presented for a hypothesis/test-input/measurement diagnosis cycle. In contrast to generating tests a priori for determining device functionality, it dynamically generates tests in response to current knowledge about fault probabilities. It is shown how the mathematics previously used for measurement specification can be applied to the test input generation process. An example from an efficient implementation called Multi-Purpose Causal (MPC) is presented.

  6. Landslide risk models for decision making.

    PubMed

    Bonachea, Jaime; Remondo, Juan; de Terán, José Ramón Díaz; González-Díez, Alberto; Cendrero, Antonio

    2009-11-01

    This contribution presents a quantitative procedure for landslide risk analysis and zoning considering hazard, exposure (or value of elements at risk), and vulnerability. The method provides the means to obtain landslide risk models (expressing expected damage due to landslides on material elements and economic activities in monetary terms, according to different scenarios and periods) useful to identify areas where mitigation efforts will be most cost-effective. It allows the identification of priority areas for actions to reduce vulnerability (of elements) or hazard (of processes). The procedure proposed can also be used as a preventive tool, through its application to strategic environmental impact analysis (SEIA) of land-use plans. The underlying hypothesis is that reliable predictions about hazard and risk can be made using models based on a detailed analysis of past landslide occurrences in connection with conditioning factors and data on past damage. The results show that the approach proposed and the hypothesis formulated are essentially correct, providing estimates of the order of magnitude of expected losses for a given time period. Uncertainties, strengths, and shortcomings of the procedure and results obtained are discussed and potential lines of research to improve the models are indicated. Finally, comments and suggestions are provided to generalize this type of analysis.

  7. In silico experiment system for testing hypothesis on gene functions using three condition specific biological networks.

    PubMed

    Lee, Chai-Jin; Kang, Dongwon; Lee, Sangseon; Lee, Sunwon; Kang, Jaewoo; Kim, Sun

    2018-05-25

    Determining the functions of a gene requires time-consuming, expensive biological experiments. Scientists can speed up this experimental process if literature information and biological networks are adequately provided. In this paper, we present a web-based information system that can perform in silico experiments that computationally test hypotheses about the function of a gene. A hypothesis specified in English by the user is converted to genes using a literature and knowledge mining system called BEST. Condition-specific TF, miRNA and PPI (protein-protein interaction) networks are automatically generated by projecting gene and miRNA expression data onto template networks. An in silico experiment then tests how well the target genes are connected from the knockout gene through the condition-specific networks. The test result visualizes the paths from the knockout gene to the target genes in the three networks. Statistical and information-theoretic scores are provided on the resulting web page to help scientists either accept or reject the hypothesis being tested. Our web-based system was extensively tested using three knockout data sets: E2f1, Lrrk2, and Dicer1. We were able to reproduce gene functions reported in the original research papers. In addition, we comprehensively tested the system with all disease names in MalaCards as hypotheses to show its effectiveness. Our in silico experiment system can be very useful in suggesting biological mechanisms which can be further tested in vivo or in vitro. http://biohealth.snu.ac.kr/software/insilico/. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Ecological Effects in Cross-Cultural Differences Between U.S. and Japanese Color Preferences.

    PubMed

    Yokosawa, Kazuhiko; Schloss, Karen B; Asano, Michiko; Palmer, Stephen E

    2016-09-01

    We investigated cultural differences between U.S. and Japanese color preferences and the ecological factors that might influence them. Japanese and U.S. color preferences have both similarities (e.g., peaks around blue, troughs around dark-yellow, and preferences for saturated colors) and differences (Japanese participants like darker colors less than U.S. participants do). Complex gender differences were also evident that did not conform to previously reported effects. Palmer and Schloss's (2010) weighted affective valence estimate (WAVE) procedure was used to test the Ecological Valence Theory's (EVT's) prediction that within-culture WAVE-preference correlations should be higher than between-culture WAVE-preference correlations. The results supported several, but not all, predictions. In the second experiment, we tested color preferences of Japanese-U.S. multicultural participants who could read and speak both Japanese and English. Multicultural color preferences were intermediate between U.S. and Japanese preferences, consistent with the hypothesis that culturally specific personal experiences during one's lifetime influence color preferences. Copyright © 2015 Cognitive Science Society, Inc.

  9. A statistical method for measuring activation of gene regulatory networks.

    PubMed

    Esteves, Gustavo H; Reis, Luiz F L

    2018-06-13

    Gene expression data analysis is of great importance for modern molecular biology, given our ability to measure the expression profiles of thousands of genes, enabling studies rooted in systems biology. In this work, we propose a simple statistical model for measuring the activation of gene regulatory networks, instead of the traditional gene co-expression networks. We present the mathematical construction of a statistical procedure for testing hypotheses regarding gene regulatory network activation. The true probability distribution of the test statistic is evaluated by a permutation-based study. To illustrate the functionality of the proposed methodology, we also present a simple example based on a small hypothetical network and the activation measurement of two KEGG networks, both based on gene expression data collected from gastric and esophageal samples. The two KEGG networks were also analyzed using a public database, available through NCBI-GEO, presented as Supplementary Material. This method was implemented in an R package that is available at the BioConductor project website under the name maigesPack.
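    The permutation strategy described above (evaluating the null distribution of an activation statistic by relabeling samples) can be sketched as follows. The score function and data here are illustrative toys, not the maigesPack implementation:

```python
import numpy as np

def permutation_pvalue(expr, labels, stat_fn, n_perm=2000, seed=0):
    """Estimate a one-sided p-value for stat_fn by permuting sample labels."""
    rng = np.random.default_rng(seed)
    observed = stat_fn(expr, labels)
    null = np.empty(n_perm)
    for i in range(n_perm):
        null[i] = stat_fn(expr, rng.permutation(labels))
    # add-one correction keeps the estimate away from exactly zero
    return (1 + np.sum(null >= observed)) / (1 + n_perm)

def mean_diff(expr, labels):
    """Toy 'activation' score: mean expression difference between groups."""
    return expr[labels == 1].mean() - expr[labels == 0].mean()

# simulated expression values for 20 control and 20 shifted samples
rng = np.random.default_rng(1)
expr = np.concatenate([rng.normal(0.0, 1.0, 20), rng.normal(2.0, 1.0, 20)])
labels = np.array([0] * 20 + [1] * 20)
p = permutation_pvalue(expr, labels, mean_diff)
```

    Any statistic computed on labeled samples can be dropped into `stat_fn`; the permutation loop is what delivers the "real" null distribution without parametric assumptions.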

  10. Clairvoyant fusion: a new methodology for designing robust detection algorithms

    NASA Astrophysics Data System (ADS)

    Schaum, Alan

    2016-10-01

    Many realistic detection problems cannot be solved with simple statistical tests for known alternative probability models. Uncontrollable environmental conditions, imperfect sensors, and other uncertainties transform simple detection problems with likelihood ratio solutions into composite hypothesis (CH) testing problems. Recently many multi- and hyperspectral sensing CH problems have been addressed with a new approach. Clairvoyant fusion (CF) integrates the optimal detectors ("clairvoyants") associated with every unspecified value of the parameters appearing in a detection model. For problems with discrete parameter values, logical rules emerge for combining the decisions of the associated clairvoyants. For many problems with continuous parameters, analytic methods of CF have been found that produce closed-form solutions, or approximations for intractable problems. Here the principles of CF are reviewed and mathematical insights are described that have proven useful in the derivation of solutions. It is also shown how a second-stage fusion procedure can be used to create theoretically superior detection algorithms for all discrete parameter problems.

  11. The role of unconscious memory errors in judgments of confidence for sentence recognition.

    PubMed

    Sampaio, Cristina; Brewer, William F

    2009-03-01

    The present experiment tested the hypothesis that unconscious reconstructive memory processing can lead to the breakdown of the relationship between memory confidence and memory accuracy. Participants heard deceptive schema-inference sentences and nondeceptive sentences and were tested with either simple or forced-choice recognition. The nondeceptive items showed a positive relation between confidence and accuracy in both simple and forced-choice recognition. However, the deceptive items showed a strong negative confidence/accuracy relationship in simple recognition and a low positive relationship in forced choice. The mean levels of confidence for erroneous responses for deceptive items were inappropriately high in simple recognition but lower in forced choice. These results suggest that unconscious reconstructive memory processes involved in memory for the deceptive schema-inference items led to inaccurate confidence judgments and that, when participants were made aware of the deceptive nature of the schema-inference items through the use of a forced-choice procedure, they adjusted their confidence accordingly.

  12. Presumed fair: ironic effects of organizational diversity structures.

    PubMed

    Kaiser, Cheryl R; Major, Brenda; Jurcevic, Ines; Dover, Tessa L; Brady, Laura M; Shapiro, Jenessa R

    2013-03-01

    This research tests the hypothesis that the presence (vs. absence) of organizational diversity structures causes high-status group members (Whites, men) to perceive organizations with diversity structures as procedurally fairer environments for underrepresented groups (racial minorities, women), even when it is clear that underrepresented groups have been unfairly disadvantaged within these organizations. Furthermore, this illusory sense of fairness derived from the mere presence of diversity structures causes high-status group members to legitimize the status quo by becoming less sensitive to discrimination targeted at underrepresented groups and reacting more harshly toward underrepresented group members who claim discrimination. Six experiments support these hypotheses in designs using 4 types of diversity structures (diversity policies, diversity training, diversity awards, idiosyncratically generated diversity structures from participants' own organizations) among 2 high-status groups in tests involving several types of discrimination (discriminatory promotion practices, adverse impact in hiring, wage discrimination). Implications of these experiments for organizational diversity and employment discrimination law are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved

  13. Killeen's (2005) "p[subscript rep]" Coefficient: Logical and Mathematical Problems

    ERIC Educational Resources Information Center

    Maraun, Michael; Gabriel, Stephanie

    2010-01-01

    In his article, "An Alternative to Null-Hypothesis Significance Tests," Killeen (2005) urged the discipline to abandon the practice of "p[subscript obs]"-based null hypothesis testing and to quantify the signal-to-noise characteristics of experimental outcomes with replication probabilities. He described the coefficient that he…

  14. Using VITA Service Learning Experiences to Teach Hypothesis Testing and P-Value Analysis

    ERIC Educational Resources Information Center

    Drougas, Anne; Harrington, Steve

    2011-01-01

    This paper describes a hypothesis testing project designed to capture student interest and stimulate classroom interaction and communication. Using an online survey instrument, the authors collected student demographic information and data regarding university service learning experiences. Introductory statistics students performed a series of…

  15. A Rational Analysis of the Selection Task as Optimal Data Selection.

    ERIC Educational Resources Information Center

    Oaksford, Mike; Chater, Nick

    1994-01-01

    Experimental data on human reasoning in hypothesis-testing tasks is reassessed in light of a Bayesian model of optimal data selection in inductive hypothesis testing. The rational analysis provided by the model suggests that reasoning in such tasks may be rational rather than subject to systematic bias. (SLD)

  16. Random Effects Structure for Confirmatory Hypothesis Testing: Keep It Maximal

    ERIC Educational Resources Information Center

    Barr, Dale J.; Levy, Roger; Scheepers, Christoph; Tily, Harry J.

    2013-01-01

    Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the…

  17. The effects of rater bias and assessment method used to estimate disease severity on hypothesis testing

    USDA-ARS?s Scientific Manuscript database

    The effects of bias (over- and underestimates) in estimates of disease severity on hypothesis testing using different assessment methods were explored. Nearest percent estimates (NPE), the Horsfall-Barratt (H-B) scale, and two different linear category scales (10% increments, with and without addition...

  18. A Multivariate Test of the Bott Hypothesis in an Urban Irish Setting

    ERIC Educational Resources Information Center

    Gordon, Michael; Downing, Helen

    1978-01-01

    Using a sample of 686 married Irish women in Cork City, the Bott hypothesis was tested, and the results of a multivariate regression analysis revealed that neither network connectedness nor the strength of the respondent's emotional ties to the network had any explanatory power. (Author)

  19. Polarization, Definition, and Selective Media Learning.

    ERIC Educational Resources Information Center

    Tichenor, P. J.; And Others

    The traditional hypothesis that extreme attitudinal positions on controversial issues are likely to produce low understanding of messages on these issues--especially when the messages represent opposing views--is tested. Data for test of the hypothesis are from two field studies, each dealing with reader attitudes and decoding of one news article…

  20. The Lasting Effects of Introductory Economics Courses.

    ERIC Educational Resources Information Center

    Sanders, Philip

    1980-01-01

    Reports research which tests the Stigler Hypothesis. The hypothesis suggests that students who have taken introductory economics courses and those who have not show little difference in test performance five years after completing college. Results of the author's research illustrate that economics students do retain some knowledge of economics…

  1. Concerns regarding a call for pluralism of information theory and hypothesis testing

    USGS Publications Warehouse

    Lukacs, P.M.; Thompson, W.L.; Kendall, W.L.; Gould, W.R.; Doherty, P.F.; Burnham, K.P.; Anderson, D.R.

    2007-01-01

    1. Stephens et al. (2005) argue for 'pluralism' in statistical analysis, combining null hypothesis testing and information-theoretic (I-T) methods. We show that I-T methods are more informative even in single variable problems and we provide an ecological example. 2. I-T methods allow inferences to be made from multiple models simultaneously. We believe multimodel inference is the future of data analysis, which cannot be achieved with null hypothesis-testing approaches. 3. We argue for a stronger emphasis on critical thinking in science in general and less reliance on exploratory data analysis and data dredging. Deriving alternative hypotheses is central to science; deriving a single interesting science hypothesis and then comparing it to a default null hypothesis (e.g. 'no difference') is not an efficient strategy for gaining knowledge. We think this single-hypothesis strategy has been relied upon too often in the past. 4. We clarify misconceptions presented by Stephens et al. (2005). 5. We think inference should be made about models, directly linked to scientific hypotheses, and their parameters conditioned on data, Prob(Hj | data). I-T methods provide a basis for this inference. Null hypothesis testing merely provides a probability statement about the data conditioned on a null model, Prob(data | H0). 6. Synthesis and applications. I-T methods provide a more informative approach to inference. I-T methods provide a direct measure of evidence for or against hypotheses and a means to consider multiple hypotheses simultaneously as a basis for rigorous inference. Progress in our science can be accelerated if modern methods can be used intelligently; this includes various I-T and Bayesian methods.
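    The multimodel inference advocated here is commonly operationalized with Akaike weights, which turn AIC differences into relative evidence for each candidate model. A minimal sketch, using hypothetical AIC values rather than anything from the paper:

```python
import math

def akaike_weights(aics):
    """Convert AIC scores into model weights (relative evidence).

    Each weight is exp(-delta_i / 2) normalized over the model set,
    where delta_i is the AIC difference from the best model.
    """
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]  # relative likelihoods
    total = sum(rel)
    return [r / total for r in rel]

# three candidate models with hypothetical AIC scores
weights = akaike_weights([100.0, 102.0, 110.0])
```

    The weights sum to one and can be read as the relative support each model has from the data, which is the Prob(Hj | data)-style statement the authors contrast with null hypothesis testing.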

  2. Rail-Highway Crossing Resource Allocation Procedure. User's Guide. 2nd edition.

    DOT National Transportation Integrated Search

    2006-01-01

    This report presents findings from a customer satisfaction study conducted in Cobb County, Georgia. The primary hypothesis of this study is that it is possible to develop customer satisfaction measures that are a reliable determinant of roadway quali...

  3. Graphomotor skills in children with developmental coordination disorder (DCD): Handwriting and learning a new letter.

    PubMed

    Huau, Andréa; Velay, Jean-Luc; Jover, Marianne

    2015-08-01

    The aim of the present study was to analyze handwriting difficulties in children with developmental coordination disorder (DCD) and investigate the hypothesis that a deficit in procedural learning could help to explain them. The experimental set-up was designed to compare the performances of children with DCD with those of a non-DCD group on tasks that rely on motor learning in different ways, namely handwriting and learning a new letter. Ten children with DCD and 10 non-DCD children, aged 8-10 years, were asked to perform handwriting tasks (letter/word/sentence; normal/fast), and a learning task (new letter) on a graphic tablet. The BHK concise assessment scale for children's handwriting was used to evaluate their handwriting quality. Results showed that both the handwriting and learning tasks differentiated between the groups. Furthermore, when speed or length constraints were added, handwriting was more impaired in children with DCD than in non-DCD children. Greater intra-individual variability was observed in the group of children with DCD, arguing in favor of a deficit in motor pattern stabilization. The results of this study could support both the hypothesis of a deficit in procedural learning and the hypothesis of neuromotor noise in DCD. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Transdermal Photopolymerization for Minimally Invasive Implantation

    NASA Astrophysics Data System (ADS)

    Elisseeff, J.; Anseth, K.; Sims, D.; McIntosh, W.; Randolph, M.; Langer, R.

    1999-03-01

    Photopolymerizations are widely used in medicine to create polymer networks for use in applications such as bone restorations and coatings for artificial implants. These photopolymerizations occur by directly exposing materials to light in "open" environments such as the oral cavity or during invasive procedures such as surgery. We hypothesized that light, which penetrates tissue including skin, could cause a photopolymerization indirectly. Liquid materials then could be injected s.c. and solidified by exposing the exterior surface of the skin to light. To test this hypothesis, the penetration of UVA and visible light through skin was studied. Modeling predicted the feasibility of transdermal polymerization with only 2 min of light exposure required to photopolymerize an implant underneath human skin. To establish the validity of these modeling studies, transdermal photopolymerization first was applied to tissue engineering by using "injectable" cartilage as a model system. Polymer/chondrocyte constructs were injected s.c. and transdermally photopolymerized. Implants harvested at 2, 4, and 7 weeks demonstrated collagen and proteoglycan production and histology with tissue structure comparable to native neocartilage. To further examine this phenomenon and test the applicability of transdermal photopolymerization for drug release devices, albumin, a model protein, was released for 1 week from photopolymerized hydrogels. With further study, transdermal photopolymerization potentially could be used to create a variety of new, minimally invasive surgical procedures in applications ranging from plastic and orthopedic surgery to tissue engineering and drug delivery.

  5. The effect of ursodeoxycholic acid in liver functional restoration of patients with obstructive jaundice after endoscopic treatment: a prospective, randomized, and controlled study.

    PubMed

    Fekaj, Enver; Gjata, Arben; Maxhuni, Mehmet

    2013-09-22

    In patients with obstructive jaundice, multi-organ dysfunction may develop. This trial is a prospective, open-label, randomized, and controlled study with the objective of evaluating the effect of ursodeoxycholic acid (UDCA) in liver functional restoration of patients with obstructive jaundice after endoscopic treatment. The hypothesis of this trial is that patients with obstructive jaundice who receive UDCA in the early phase after endoscopic intervention will have better and faster functional restoration of the liver than patients in the control group. Patients with obstructive jaundice will be randomly divided into two groups: (A) a test group, in which ursodeoxycholic acid will be administered beginning twenty-four hours after the endoscopic procedure and continuing for fourteen days, and (B) a control group. Serum testing will include determination of bilirubin, alanine transaminase, aspartate transaminase, gamma-glutamyl transpeptidase, alkaline phosphatase, albumin, and cholesterol levels. These parameters will be determined one day prior to the endoscopic procedure, and on the third, fifth, seventh, tenth, twelfth, and fourteenth days after endoscopic intervention.

  6. Independent test assessment using the extreme value distribution theory.

    PubMed

    Almeida, Marcio; Blondell, Lucy; Peralta, Juan M; Kent, Jack W; Jun, Goo; Teslovich, Tanya M; Fuchsberger, Christian; Wood, Andrew R; Manning, Alisa K; Frayling, Timothy M; Cingolani, Pablo E; Sladek, Robert; Dyer, Thomas D; Abecasis, Goncalo; Duggirala, Ravindranath; Blangero, John

    2016-01-01

    The new generation of whole genome sequencing platforms offers great possibilities and challenges for dissecting the genetic basis of complex traits. With a very high number of sequence variants, a naïve multiple hypothesis threshold correction hinders the identification of reliable associations through the over-reduction of statistical power. In this report, we examine 2 alternative approaches to improve the statistical power of a whole genome association study to detect reliable genetic associations. The approaches were tested using the Genetic Analysis Workshop 19 (GAW19) whole genome sequencing data. The first tested method estimates the real number of effective independent tests actually being performed in a whole genome association project by the use of an extreme value distribution and a set of phenotype simulations. Given the familial nature of the GAW19 data and the finite number of pedigree founders in the sample, the number of correlations between genotypes is greater than in a set of unrelated samples. Using our procedure, we estimate that the effective number represents only 15 % of the total number of independent tests performed. However, even using this corrected significance threshold, no genome-wide significant association could be detected for systolic and diastolic blood pressure traits. The second approach implements a biological relevance-driven hypothesis test by exploiting prior computational predictions of the effect of nonsynonymous genetic variants detected in a whole genome sequencing association study. This guided testing approach was able to identify 2 promising single-nucleotide polymorphisms (SNPs), 1 for each trait, targeting biologically relevant genes that could help shed light on the genesis of human hypertension. The first gene, PFH14, associated with systolic blood pressure, interacts directly with genes involved in calcium-channel formation; the second gene, MAP4, encodes a microtubule-associated protein and had already been detected by previous genome-wide association study experiments conducted in an Asian population. Our results highlight the necessity of developing alternative approaches to improve the efficiency of detecting reasonable candidate associations in whole genome sequencing studies.
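    The idea of an effective number of independent tests can be sketched as follows. This is a simplified illustration on synthetic genotypes, using a crude median match on the minimum p-value rather than the paper's extreme value distribution fit:

```python
import numpy as np
from scipy.stats import norm

def effective_tests(G, n_sim=500, seed=0):
    """Estimate the effective number of independent tests (M_eff).

    Simulates null phenotypes, records the minimum association p-value
    across all markers, and matches its median to the independent-tests
    formula P(min p < t) = 1 - (1 - t)^M.
    """
    rng = np.random.default_rng(seed)
    n, m = G.shape
    Gz = (G - G.mean(0)) / G.std(0)          # standardized genotype columns
    min_ps = np.empty(n_sim)
    for s in range(n_sim):
        y = rng.normal(size=n)               # null phenotype
        yz = (y - y.mean()) / y.std()
        r = Gz.T @ yz / n                    # marker-phenotype correlations
        z = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)
        min_ps[s] = (2 * norm.sf(np.abs(z))).min()
    med = np.median(min_ps)
    return np.log(0.5) / np.log(1 - med)

# 10 truly independent markers, each duplicated 5 times -> 50 correlated columns
rng = np.random.default_rng(1)
base = rng.normal(size=(200, 10))
G = np.repeat(base, 5, axis=1)
meff = effective_tests(G)
```

    With perfectly duplicated markers the estimate lands near 10 rather than 50, so the corrected significance threshold 0.05 / M_eff is far less conservative than a naïve Bonferroni over all columns.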

  7. Testing the status-legitimacy hypothesis: A multilevel modeling approach to the perception of legitimacy in income distribution in 36 nations.

    PubMed

    Caricati, Luca

    2017-01-01

    The status-legitimacy hypothesis was tested by analyzing cross-national data about social inequality. Several indicators were used as indexes of social advantage: social class, personal income, and self-position in the social hierarchy. Moreover, inequality and freedom in nations, as indexed by Gini and by the human freedom index, were considered. Results from 36 nations worldwide showed no support for the status-legitimacy hypothesis. The perception that income distribution was fair tended to increase as social advantage increased. Moreover, national context increased the difference between advantaged and disadvantaged people in the perception of social fairness: Contrary to the status-legitimacy hypothesis, disadvantaged people were more likely than advantaged people to perceive income distribution as too large, and this difference increased in nations with greater freedom and equality. The implications for the status-legitimacy hypothesis are discussed.

  8. Technical intelligence and culture: Nut cracking in humans and chimpanzees.

    PubMed

    Boesch, Christophe; Bombjaková, Daša; Boyette, Adam; Meier, Amelia

    2017-06-01

    According to the technical intelligence hypothesis, humans are superior to all other animal species in understanding and using tools. However, the vast majority of comparative studies between humans and chimpanzees, both proficient tool users, have not controlled for the effects of age, prior knowledge, past experience, rearing conditions, or differences in experimental procedures. We tested whether humans are superior to chimpanzees in selecting better tools, using them more dexterously, achieving higher performance and gaining access to more resources as predicted under the technical intelligence hypothesis. Aka and Mbendjele hunter-gatherers in the rainforest of Central African Republic and the Republic of Congo, respectively, and Taï chimpanzees in the rainforest of Côte d'Ivoire were observed cracking hard Panda oleosa nuts with different tools, as well as the soft Coula edulis and Elaeis guineensis nuts. The nut-cracking techniques, hammer material selection and two efficiency measures were compared. As predicted, the Aka and the Mbendjele were able to exploit more species of hard nuts in the forest than chimpanzees. However, the chimpanzees were sometimes more efficient than the humans. Social roles differed between the two species, with the Aka and especially the Mbendjele exhibiting cooperation between nut-crackers whereas the chimpanzees were mainly individualistic. Observations of nut-cracking by humans and chimpanzees only partially supported the technical intelligence hypothesis as higher degrees of flexibility in tool selection seen in chimpanzees compensated for use of less efficient tool material than in humans. Nut cracking was a stronger social undertaking in humans than in chimpanzees. © 2017 Wiley Periodicals, Inc.

  9. Tests of the Giant Impact Hypothesis

    NASA Technical Reports Server (NTRS)

    Jones, J. H.

    1998-01-01

    The giant impact hypothesis has gained popularity as a means of explaining a volatile-depleted Moon that still has a chemical affinity to the Earth. As Taylor's Axiom decrees, the best models of lunar origin are testable, but this is difficult with the giant impact model. The energy associated with the impact would be sufficient to totally melt and partially vaporize the Earth. And this means that there should be no geological vestige of earlier times. Accordingly, it is important to devise tests that may be used to evaluate the giant impact hypothesis. Three such tests are discussed here. None of these is supportive of the giant impact model, but neither do they disprove it.

  10. Genetics and recent human evolution.

    PubMed

    Templeton, Alan R

    2007-07-01

    Starting with "mitochondrial Eve" in 1987, genetics has played an increasingly important role in studies of the last two million years of human evolution. It initially appeared that genetic data resolved the basic models of recent human evolution in favor of the "out-of-Africa replacement" hypothesis in which anatomically modern humans evolved in Africa about 150,000 years ago, started to spread throughout the world about 100,000 years ago, and subsequently drove to complete genetic extinction (replacement) all other human populations in Eurasia. Unfortunately, many of the genetic studies on recent human evolution have suffered from scientific flaws, including misrepresenting the models of recent human evolution, focusing upon hypothesis compatibility rather than hypothesis testing, committing the ecological fallacy, and failing to consider a broader array of alternative hypotheses. Once these flaws are corrected, there is actually little genetic support for the out-of-Africa replacement hypothesis. Indeed, when genetic data are used in a hypothesis-testing framework, the out-of-Africa replacement hypothesis is strongly rejected. The model of recent human evolution that emerges from a statistical hypothesis-testing framework does not correspond to any of the traditional models of human evolution, but it is compatible with fossil and archaeological data. These studies also reveal that any one gene or DNA region captures only a small part of human evolutionary history, so multilocus studies are essential. As more and more loci become available, genetics will undoubtedly offer additional insights and resolutions of human evolution.

  11. Age Dedifferentiation Hypothesis: Evidence from the WAIS III.

    ERIC Educational Resources Information Center

    Juan-Espinosa, Manuel; Garcia, Luis F.; Escorial, Sergio; Rebollo, Irene; Colom, Roberto; Abad, Francisco J.

    2002-01-01

    Used the Spanish standardization of the Wechsler Adult Intelligence Scale III (WAIS III) (n=1,369) to test the age dedifferentiation hypothesis. Results show no changes in the percentage of variance accounted for by "g" and four group factors when restriction of range is controlled. Discusses an age indifferentation hypothesis. (SLD)

  12. Hypothesis tests for the detection of constant speed radiation moving sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir

    2015-07-01

    Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, which are inefficient under too low signal to noise ratios, temporal correlation algorithms have been introduced. Hypothesis test methods based on empirically estimated mean and variance of the signals delivered by the different channels have shown significant gain in terms of a tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive background, and a vehicle source carrier under the same respectively high and low count rate radioactive background, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm, while guaranteeing the stability of its optimization parameter regardless of signal to noise ratio variations between 2 and 0.8. (authors)
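    A Poisson-based count test of the general kind described above can be sketched as follows; the background rate, counting window, and alarm threshold are illustrative choices, not the authors' parameters:

```python
import math

def poisson_sf(k, mu):
    """P(N >= k) for N ~ Poisson(mu), via the complement of the CDF."""
    cdf = sum(math.exp(-mu) * mu ** i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

def detect(count, background_rate, window, alpha=1e-3):
    """Flag a counting channel whose observed count is improbably high
    under the background-only Poisson hypothesis."""
    mu = background_rate * window
    return poisson_sf(count, mu) < alpha

# background of 5 counts/s over a 2 s window: expected mu = 10 counts
alarm_quiet = detect(12, 5.0, 2.0)   # consistent with background
alarm_source = detect(30, 5.0, 2.0)  # strong excess over background
```

    Exploiting the Poisson form directly, as above, avoids the empirical estimation of a variance that the paper's benchmark compares against.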

  13. Multiple Hypothesis Testing for Experimental Gingivitis Based on Wilcoxon Signed Rank Statistics

    PubMed Central

    Preisser, John S.; Sen, Pranab K.; Offenbacher, Steven

    2011-01-01

    Dental research often involves repeated multivariate outcomes on a small number of subjects for which there is interest in identifying outcomes that exhibit change in their levels over time as well as to characterize the nature of that change. In particular, periodontal research often involves the analysis of molecular mediators of inflammation for which multivariate parametric methods are highly sensitive to outliers and deviations from Gaussian assumptions. In such settings, nonparametric methods may be favored over parametric ones. Additionally, there is a need for statistical methods that control an overall error rate for multiple hypothesis testing. We review univariate and multivariate nonparametric hypothesis tests and apply them to longitudinal data to assess changes over time in 31 biomarkers measured from the gingival crevicular fluid in 22 subjects whereby gingivitis was induced by temporarily withholding tooth brushing. To identify biomarkers that can be induced to change, multivariate Wilcoxon signed rank tests for a set of four summary measures based upon area under the curve are applied for each biomarker and compared to their univariate counterparts. Multiple hypothesis testing methods with choice of control of the false discovery rate or strong control of the family-wise error rate are examined. PMID:21984957
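The multiple-testing control discussed in this record can be illustrated with a minimal sketch (not from the paper; the p-values below are hypothetical) of the Benjamini-Hochberg step-up procedure for controlling the false discovery rate across a set of biomarker tests:

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: return the (original) indices
    of hypotheses rejected while controlling the FDR at level alpha."""
    m = len(pvalues)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * alpha:
            k_max = rank
    # Reject the hypotheses with the k_max smallest p-values.
    return sorted(order[:k_max])

# Hypothetical p-values for five biomarkers:
pvals = [0.001, 0.008, 0.039, 0.041, 0.60]
print(benjamini_hochberg(pvals))  # -> [0, 1]
```

Note the step-up logic: even though 0.039 exceeds its own threshold (3/5 × 0.05 = 0.03), the two smallest p-values still clear theirs, so exactly those two hypotheses are rejected.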

  14. Tests for linkage and association in nuclear families.

    PubMed Central

    Martin, E R; Kaplan, N L; Weir, B S

    1997-01-01

    The transmission/disequilibrium test (TDT) originally was introduced to test for linkage between a genetic marker and a disease-susceptibility locus, in the presence of association. Recently, the TDT has been used to test for association in the presence of linkage. The motivation for this is that linkage analysis typically identifies large candidate regions, and further refinement is necessary before a search for the disease gene is begun, on the molecular level. Evidence of association and linkage may indicate which markers in the region are closest to a disease locus. As a test of linkage, transmissions from heterozygous parents to all of their affected children can be included in the TDT; however, the TDT is a valid chi2 test of association only if transmissions to unrelated affected children are used in the analysis. If the sample contains independent nuclear families with multiple affected children, then one procedure that has been used to test for association is to select randomly a single affected child from each sibship and to apply the TDT to those data. As an alternative, we propose two statistics that use data from all of the affected children. The statistics give valid chi2 tests of the null hypothesis of no association or no linkage and generally are more powerful than the TDT with a single, randomly chosen, affected child from each family. PMID:9311750
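As a rough illustration of the original TDT (a sketch of the classic statistic, not of the authors' proposed alternatives; the counts are hypothetical), the test reduces to a McNemar-type chi-square on allele transmissions from heterozygous parents:

```python
import math

def tdt_statistic(transmitted, untransmitted):
    """Transmission/disequilibrium test: b = times heterozygous parents
    transmitted marker allele M1, c = times they transmitted M2 instead.
    Under the null of no linkage/association, (b - c)^2 / (b + c) is
    approximately chi-square with 1 degree of freedom."""
    b, c = transmitted, untransmitted
    chi2 = (b - c) ** 2 / (b + c)
    # Chi-square(1 df) upper-tail probability via the complementary error function.
    pvalue = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, pvalue

chi2, p = tdt_statistic(62, 38)  # hypothetical transmission counts
print(round(chi2, 2))  # -> 5.76
```

With 62 versus 38 transmissions the statistic is 5.76 (p ≈ 0.016), suggesting preferential transmission of one allele.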

  15. Evaluation of sampling plans to detect Cry9C protein in corn flour and meal.

    PubMed

    Whitaker, Thomas B; Trucksess, Mary W; Giesbrecht, Francis G; Slate, Andrew B; Thomas, Francis S

    2004-01-01

    StarLink is a genetically modified corn that produces an insecticidal protein, Cry9C. Studies were conducted to determine the variability and Cry9C distribution among sample test results when Cry9C protein was estimated in a bulk lot of corn flour and meal. Emphasis was placed on measuring sampling and analytical variances associated with each step of the test procedure used to measure Cry9C in corn flour and meal. Two commercially available enzyme-linked immunosorbent assay kits were used: one for the determination of Cry9C protein concentration and the other for % StarLink seed. The sampling and analytical variances associated with each step of the Cry9C test procedures were determined for flour and meal. Variances were found to be functions of Cry9C concentration, and regression equations were developed to describe the relationships. Because of the larger particle size, sampling variability associated with cornmeal was about double that for corn flour. For cornmeal, the sampling variance accounted for 92.6% of the total testing variability. The observed sampling and analytical distributions were compared with the Normal distribution. In almost all comparisons, the null hypothesis that the Cry9C protein values were sampled from a Normal distribution could not be rejected at 95% confidence limits. The Normal distribution and the variance estimates were used to evaluate the performance of several Cry9C protein sampling plans for corn flour and meal. Operating characteristic curves were developed and used to demonstrate the effect of increasing sample size on reducing false positives (seller's risk) and false negatives (buyer's risk).
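A point on an operating characteristic curve of the kind described here can be sketched as follows (an illustration with hypothetical numbers; the paper's concentration-dependent variance regressions are not reproduced), showing how increasing sample size sharpens the curve and reduces both buyer's and seller's risk:

```python
import math

def normal_cdf(x, mean, sd):
    """Cumulative distribution function of a Normal(mean, sd) variable."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

def accept_probability(true_level, accept_limit, variance_per_sample, n_samples):
    """One point on an operating characteristic curve: the probability that
    the mean of n sample test results falls below the accept/reject limit,
    assuming results are normally distributed around the true level."""
    sd_mean = math.sqrt(variance_per_sample / n_samples)
    return normal_cdf(accept_limit, true_level, sd_mean)

# Hypothetical numbers: a lot whose true level (1.2) exceeds the limit (1.0).
# Its acceptance probability (the buyer's risk) shrinks as n grows.
for n in (1, 2, 4):
    print(n, round(accept_probability(1.2, 1.0, 0.16, n), 3))
```

Symmetrically, for a lot whose true level sits below the limit, the acceptance probability rises with n, so the seller's risk of a false rejection also falls.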

  16. Elements of a Research Report.

    ERIC Educational Resources Information Center

    Schurter, William J.

    This guide for writing research or technical reports discusses eleven basic elements of such reports and provides examples of "good" and "bad" wordings. These elements are the title, problem statement, purpose statement, need statement, hypothesis, assumptions, procedures, limitations, terminology, conclusion and recommendations. This guide is…

  17. Knowledge Base Refinement as Improving an Incorrect and Incomplete Domain Theory

    DTIC Science & Technology

    1990-04-01

Ginsberg et al., 1985), and RL (Fu and Buchanan, 1985), which perform empirical induction over a library of test cases. This chapter describes a new...state knowledge. Examples of high-level goals are: to test a hypothesis, to differentiate between several plausible hypotheses, to ask a clarifying...

  18. A robust hypothesis test for the sensitive detection of constant speed radiation moving sources

    NASA Astrophysics Data System (ADS)

    Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir; Boudergui, Karim; Moline, Yoann; Sannié, Guillaume; Gameiro, Jordan; Normand, Stéphane; Méchin, Laurence

    2015-09-01

    Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Test hypothesis methods based on empirically estimated mean and variance of the signals delivered by the different channels have shown significant gain in terms of a tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive backgrounds, and a vehicle source carrier under the same respectively high and low count rate radioactive backgrounds, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm. It also guarantees that the optimal coverage factor for this compromise remains stable regardless of signal-to-noise ratio variations between 2 and 0.8, therefore allowing the final user to parametrize the test with the sole prior knowledge of background amplitude.
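The core idea of exploiting the Poisson nature of the counting signals can be sketched as follows (a minimal illustration, not the authors' algorithm; the thresholds and count rates are hypothetical):

```python
import math

def poisson_sf(k, mu):
    """P(N >= k) for N ~ Poisson(mu), via the complement of the lower tail."""
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))

def poisson_alarm(count, background_mean, alpha=1e-3):
    """Flag a channel when the registered count is improbably high for the
    known Poisson background; alpha bounds the false-alarm probability.
    This exact tail test replaces empirical mean/variance estimates."""
    return poisson_sf(count, background_mean) < alpha

print(poisson_alarm(30, background_mean=12.0))  # True: far above background
print(poisson_alarm(15, background_mean=12.0))  # False: ordinary fluctuation
```

Because the test uses the exact counting distribution rather than empirical moments, its behaviour stays stable at the low count rates where moment estimates become noisy.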

  19. The [Geo]Scientific Method; Hypothesis Testing and Geoscience Proposal Writing for Students

    ERIC Educational Resources Information Center

    Markley, Michelle J.

    2010-01-01

    Most undergraduate-level geoscience texts offer a paltry introduction to the nuanced approach to hypothesis testing that geoscientists use when conducting research and writing proposals. Fortunately, there are a handful of excellent papers that are accessible to geoscience undergraduates. Two historical papers by the eminent American geologists G.…

  20. Mental Abilities and School Achievement: A Test of a Mediation Hypothesis

    ERIC Educational Resources Information Center

    Vock, Miriam; Preckel, Franzis; Holling, Heinz

    2011-01-01

    This study analyzes the interplay of four cognitive abilities--reasoning, divergent thinking, mental speed, and short-term memory--and their impact on academic achievement in school in a sample of adolescents in grades seven to 10 (N = 1135). Based on information processing approaches to intelligence, we tested a mediation hypothesis, which states…

  1. The Relation between Parental Values and Parenting Behavior: A Test of the Kohn Hypothesis.

    ERIC Educational Resources Information Center

    Luster, Tom; Rhoades, Kelly

    To investigate how values influence parenting beliefs and practices, a test was made of Kohn's hypothesis that parents valuing self-direction emphasize the supportive function of parenting, while parents valuing conformity emphasize control of unsanctioned behaviors. Participating in the study were 65 mother-infant dyads. Infants ranged in age…

  2. Chromosome Connections: Compelling Clues to Common Ancestry

    ERIC Educational Resources Information Center

    Flammer, Larry

    2013-01-01

    Students compare banding patterns on hominid chromosomes and see striking evidence of their common ancestry. To test this, human chromosome no. 2 is matched with two shorter chimpanzee chromosomes, leading to the hypothesis that human chromosome 2 resulted from the fusion of the two shorter chromosomes. Students test that hypothesis by looking for…

  3. POTENTIAL FOR INVASION OF UNDERGROUND SOURCES OF DRINKING WATER THROUGH MUD-PLUGGED WELLS: AN EXPERIMENTAL APPRAISAL

    EPA Science Inventory

The main objective of the feasibility study described here was to test the hypothesis that properly plugged wells are effectively sealed by drilling mud. In the process of testing the hypothesis, evidence about dynamics of building mud cake on the wellbore-face was obtained, as ...

  4. A test of the predator satiation hypothesis, acorn predator size, and acorn preference

    Treesearch

    C.H. Greenberg; S.J. Zarnoch

    2018-01-01

    Mast seeding is hypothesized to satiate seed predators with heavy production and reduce populations with crop failure, thereby increasing seed survival. Preference for red or white oak acorns could influence recruitment among oak species. We tested the predator satiation hypothesis, acorn preference, and predator size by concurrently...

  5. The Need for Nuance in the Null Hypothesis Significance Testing Debate

    ERIC Educational Resources Information Center

    Häggström, Olle

    2017-01-01

    Null hypothesis significance testing (NHST) provides an important statistical toolbox, but there are a number of ways in which it is often abused and misinterpreted, with bad consequences for the reliability and progress of science. Parts of contemporary NHST debate, especially in the psychological sciences, is reviewed, and a suggestion is made…

  6. Acorn Caching in Tree Squirrels: Teaching Hypothesis Testing in the Park

    ERIC Educational Resources Information Center

    McEuen, Amy B.; Steele, Michael A.

    2012-01-01

    We developed an exercise for a university-level ecology class that teaches hypothesis testing by examining acorn preferences and caching behavior of tree squirrels (Sciurus spp.). This exercise is easily modified to teach concepts of behavioral ecology for earlier grades, particularly high school, and provides students with a theoretical basis for…

  7. Shaping Up the Practice of Null Hypothesis Significance Testing.

    ERIC Educational Resources Information Center

    Wainer, Howard; Robinson, Daniel H.

    2003-01-01

    Discusses criticisms of null hypothesis significance testing (NHST), suggesting that historical use of NHST was reasonable, and current users should read Sir Ronald Fisher's applied work. Notes that modifications to NHST and interpretations of its outcomes might better suit the needs of modern science. Concludes that NHST is most often useful as…

  8. SOME EFFECTS OF DOGMATISM IN ELEMENTARY SCHOOL PRINCIPALS AND TEACHERS.

    ERIC Educational Resources Information Center

    BENTZEN, MARY M.

    THE HYPOTHESIS THAT RATINGS ON CONGENIALITY AS A COWORKER GIVEN TO TEACHERS WILL BE IN PART A FUNCTION OF THE ORGANIZATIONAL STATUS OF THE RATER WAS TESTED. A SECONDARY PROBLEM WAS TO TEST THE HYPOTHESIS THAT DOGMATIC SUBJECTS MORE THAN NONDOGMATIC SUBJECTS WOULD EXHIBIT COGNITIVE BEHAVIOR WHICH INDICATED (1) GREATER DISTINCTION BETWEEN POSITIVE…

  9. Thou Shalt Not Bear False Witness against Null Hypothesis Significance Testing

    ERIC Educational Resources Information Center

    García-Pérez, Miguel A.

    2017-01-01

    Null hypothesis significance testing (NHST) has been the subject of debate for decades and alternative approaches to data analysis have been proposed. This article addresses this debate from the perspective of scientific inquiry and inference. Inference is an inverse problem and application of statistical methods cannot reveal whether effects…

  10. Test-potentiated learning: three independent replications, a disconfirmed hypothesis, and an unexpected boundary condition.

    PubMed

    Wissman, Kathryn T; Rawson, Katherine A

    2018-04-01

    Arnold and McDermott [(2013). Test-potentiated learning: Distinguishing between direct and indirect effects of testing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39, 940-945] isolated the indirect effects of testing and concluded that encoding is enhanced to a greater extent following more versus fewer practice tests, referred to as test-potentiated learning. The current research provided further evidence for test-potentiated learning and evaluated the covert retrieval hypothesis as an alternative explanation for the observed effect. Learners initially studied foreign language word pairs and then completed either one or five practice tests before restudy occurred. Results of greatest interest concern performance on test trials following restudy for items that were not correctly recalled on the test trials that preceded restudy. Results replicate Arnold and McDermott (2013) by demonstrating that more versus fewer tests potentiate learning when trial time is limited. Results also provide strong evidence against the covert retrieval hypothesis concerning why the effect occurs (i.e., it does not reflect differential covert retrieval during pre-restudy trials). In addition, outcomes indicate that the magnitude of the test-potentiated learning effect decreases as trial length increases, revealing an unexpected boundary condition to test-potentiated learning.

  11. The estimation of soil water fluxes using lysimeter data

    NASA Astrophysics Data System (ADS)

    Wegehenkel, M.

    2009-04-01

The validation of soil water balance models regarding soil water fluxes in the field is still a problem. This requires time series of measured model outputs. In our study, a soil water balance model was validated using lysimeter time series of measured model outputs. The soil water balance model used in our study was the Hydrus-1D model. This model was tested by a comparison of simulated with measured daily rates of actual evapotranspiration, soil water storage, groundwater recharge and capillary rise. These rates were obtained from twelve weighable lysimeters with three different soils and two different lower boundary conditions for the time period from January 1, 1996 to December 31, 1998. In that period, grass vegetation was grown on all lysimeters. These lysimeters are located in Berlin, Germany. One potential source of error in lysimeter experiments is preferential flow caused by an artificial channeling of water due to the occurrence of air space between the soil monolith and the inside wall of the lysimeters. To analyse such sources of errors, Hydrus-1D was applied with different modelling procedures. The first procedure consists of a general uncalibrated application of Hydrus-1D. The second one includes a calibration of soil hydraulic parameters via inverse modelling of different percolation events with Hydrus-1D. In the third procedure, the model DUALP_1D was applied with the optimized hydraulic parameter set to test the hypothesis of the existence of preferential flow paths in the lysimeters. The results of the different modelling procedures indicated that, in addition to a precise determination of the soil water retention functions, vegetation parameters such as rooting depth should also be taken into account. Without such information, the rooting depth is a calibration parameter. However, in some cases, the uncalibrated application of both models also led to an acceptable fit between measured and simulated model outputs.

  12. Hypothesis testing and earthquake prediction.

    PubMed

    Jackson, D D

    1996-04-30

Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.

  13. Hypothesis testing and earthquake prediction.

    PubMed Central

    Jackson, D D

    1996-01-01

Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions. PMID:11607663

  14. Mismatch or cumulative stress: toward an integrated hypothesis of programming effects.

    PubMed

    Nederhof, Esther; Schmidt, Mathias V

    2012-07-16

    This paper integrates the cumulative stress hypothesis with the mismatch hypothesis, taking into account individual differences in sensitivity to programming. According to the cumulative stress hypothesis, individuals are more likely to suffer from disease as adversity accumulates. According to the mismatch hypothesis, individuals are more likely to suffer from disease if a mismatch occurs between the early programming environment and the later adult environment. These seemingly contradicting hypotheses are integrated into a new model proposing that the cumulative stress hypothesis applies to individuals who were not or only to a small extent programmed by their early environment, while the mismatch hypothesis applies to individuals who experienced strong programming effects. Evidence for the main effects of adversity as well as evidence for the interaction between adversity in early and later life is presented from human observational studies and animal models. Next, convincing evidence for individual differences in sensitivity to programming is presented. We extensively discuss how our integrated model can be tested empirically in animal models and human studies, inviting researchers to test this model. Furthermore, this integrated model should tempt clinicians and other intervenors to interpret symptoms as possible adaptations from an evolutionary biology perspective. Copyright © 2011 Elsevier Inc. All rights reserved.

  15. The Relation Among the Likelihood Ratio-, Wald-, and Lagrange Multiplier Tests and Their Applicability to Small Samples,

    DTIC Science & Technology

    1982-04-01

S. (1979), "Conflict Among Criteria for Testing Hypothesis: Extension and Comments," Econometrica, 47, 203-207...Breusch, T. S. and Pagan, A. R. (1980...Savin, N. E. (1977), "Conflict Among Criteria for Testing Hypothesis in the Multivariate Linear Regression Model," Econometrica, 45, 1263-1278...Breusch, T...

  16. Do pain-associated contexts increase pain sensitivity? An investigation using virtual reality.

    PubMed

    Harvie, Daniel S; Sterling, Michele; Smith, Ashley D

    2018-04-30

Pain is not a linear result of nociception, but is dependent on multisensory inputs, psychological factors, and prior experience. Since nociceptive models appear insufficient to explain chronic pain, understanding non-nociceptive contributors is imperative. Several recent models propose that cues associatively linked to painful events might acquire the capacity to augment, or even cause, pain. This experiment aimed to determine whether contexts associated with pain could modulate mechanical pain thresholds and pain intensity. Forty-eight healthy participants underwent a contextual conditioning procedure, where three neutral virtual reality contexts were paired with either unpredictable noxious stimulation, unpredictable vibrotactile stimulation, or no stimulation. Following the conditioning procedure, mechanical pain thresholds and pain evoked by a test stimulus were examined in each context. In the test phase, the effect of expectancy was equalised across conditions by informing participants when thresholds and painful stimuli would be presented. Contrary to our hypothesis, scenes that were associated with noxious stimulation did not increase mechanical sensitivity (p=0.08) or pain intensity (p=0.46). However, an interaction with sex highlighted the possibility that pain-associated contexts may alter pain sensitivity in females but not males (p=0.03). Overall, our data do not support the idea that pain-associated contexts can alter pain sensitivity in healthy asymptomatic individuals. That an effect was shown in females highlights the possibility that some subgroups may be susceptible to such an effect, although the magnitude of the effect may lack real-world significance. If pain-associated cues prove to have a relevant pain-augmenting effect in some subgroups, procedures aimed at extinguishing pain-related associations may have therapeutic potential.

  17. A hypothesis-driven physical examination learning and assessment procedure for medical students: initial validity evidence.

    PubMed

    Yudkowsky, Rachel; Otaki, Junji; Lowenstein, Tali; Riddle, Janet; Nishigori, Hiroshi; Bordage, Georges

    2009-08-01

    Diagnostic accuracy is maximised by having clinical signs and diagnostic hypotheses in mind during the physical examination (PE). This diagnostic reasoning approach contrasts with the rote, hypothesis-free screening PE learned by many medical students. A hypothesis-driven PE (HDPE) learning and assessment procedure was developed to provide targeted practice and assessment in anticipating, eliciting and interpreting critical aspects of the PE in the context of diagnostic challenges. This study was designed to obtain initial content validity evidence, performance and reliability estimates, and impact data for the HDPE procedure. Nineteen clinical scenarios were developed, covering 160 PE manoeuvres. A total of 66 Year 3 medical students prepared for and encountered three clinical scenarios during required formative assessments. For each case, students listed anticipated positive PE findings for two plausible diagnoses before examining the patient; examined a standardised patient (SP) simulating one of the diagnoses; received immediate feedback from the SP, and documented their findings and working diagnosis. The same students later encountered some of the scenarios during their Year 4 clinical skills examination. On average, Year 3 students anticipated 65% of the positive findings, correctly performed 88% of the PE manoeuvres and documented 61% of the findings. Year 4 students anticipated and elicited fewer findings overall, but achieved proportionally more discriminating findings, thereby more efficiently achieving a diagnostic accuracy equivalent to that of students in Year 3. Year 4 students performed better on cases on which they had received feedback as Year 3 students. Twelve cases would provide a reliability of 0.80, based on discriminating checklist items only. The HDPE provided medical students with a thoughtful, deliberate approach to learning and assessing PE skills in a valid and reliable manner.

  18. The thresholds for statistical and clinical significance – a five-step procedure for evaluation of intervention effects in randomised clinical trials

    PubMed Central

    2014-01-01

Background Thresholds for statistical significance are insufficiently demonstrated by 95% confidence intervals or P-values when assessing results from randomised clinical trials. First, a P-value only shows the probability of getting a result assuming that the null hypothesis is true and does not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore, assessment of intervention effects in randomised clinical trials deserves more rigour in order to become more valid. Methods Several methodologies for assessing the statistical and clinical significance of intervention effects in randomised clinical trials were considered. Balancing simplicity and comprehensiveness, a simple five-step procedure was developed. Results For a more valid assessment of results from a randomised clinical trial we propose the following five steps: (1) report the confidence intervals and the exact P-values; (2) report Bayes factor for the primary outcome, being the ratio of the probability that a given trial result is compatible with a ‘null’ effect (corresponding to the P-value) divided by the probability that the trial result is compatible with the intervention effect hypothesised in the sample size calculation; (3) adjust the confidence intervals and the statistical significance threshold if the trial is stopped early or if interim analyses have been conducted; (4) adjust the confidence intervals and the P-values for multiplicity due to number of outcome comparisons; and (5) assess clinical significance of the trial results. Conclusions If the proposed five-step procedure is followed, this may increase the validity of assessments of intervention effects in randomised clinical trials. PMID:24588900
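The Bayes factor of step (2) can be sketched for a normally distributed test statistic (an illustrative simplification, not the paper's exact computation; the z-values below are hypothetical):

```python
import math

def normal_pdf(x, mean=0.0, sd=1.0):
    """Density of a Normal(mean, sd) variable at x."""
    return math.exp(-((x - mean) / sd) ** 2 / 2.0) / (sd * math.sqrt(2.0 * math.pi))

def bayes_factor_null_vs_alternative(z_observed, z_hypothesized):
    """Likelihood ratio of the observed z-statistic under the null (mean 0)
    versus under the effect hypothesised in the sample size calculation.
    Values below 1 favour the hypothesised effect over the null."""
    return normal_pdf(z_observed, 0.0) / normal_pdf(z_observed, z_hypothesized)

# Observed z = 2.2 in a trial powered to detect an effect giving z ~ 2.8:
bf = bayes_factor_null_vs_alternative(2.2, 2.8)
print(round(bf, 3))  # -> 0.106, i.e. the data favour the hypothesised effect
```

A Bayes factor near 1 would instead signal that the trial cannot discriminate the null from the anticipated effect, however small the P-value.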

  19. No evidence for the 'expensive-tissue hypothesis' from an intraspecific study in a highly variable species.

    PubMed

    Warren, D L; Iglesias, T L

    2012-06-01

    The 'expensive-tissue hypothesis' states that investment in one metabolically costly tissue necessitates decreased investment in other tissues and has been one of the keystone concepts used in studying the evolution of metabolically expensive tissues. The trade-offs expected under this hypothesis have been investigated in comparative studies in a number of clades, yet support for the hypothesis is mixed. Nevertheless, the expensive-tissue hypothesis has been used to explain everything from the evolution of the human brain to patterns of reproductive investment in bats. The ambiguous support for the hypothesis may be due to interspecific differences in selection, which could lead to spurious results both positive and negative. To control for this, we conduct a study of trade-offs within a single species, Thalassoma bifasciatum, a coral reef fish that exhibits more intraspecific variation in a single tissue (testes) than is seen across many of the clades previously analysed in studies of tissue investment. This constitutes a robust test of the constraints posited under the expensive-tissue hypothesis that is not affected by many of the factors that may confound interspecific studies. However, we find no evidence of trade-offs between investment in testes and investment in liver or brain, which are typically considered to be metabolically expensive. Our results demonstrate that the frequent rejection of the expensive-tissue hypothesis may not be an artefact of interspecific differences in selection and suggests that organisms may be capable of compensating for substantial changes in tissue investment without sacrificing mass in other expensive tissues. © 2012 The Authors. Journal of Evolutionary Biology © 2012 European Society For Evolutionary Biology.

  20. Risk-Based, Hypothesis-Driven Framework for Hydrological Field Campaigns with Case Studies

    NASA Astrophysics Data System (ADS)

    Harken, B.; Rubin, Y.

    2014-12-01

There are several stages in any hydrological modeling campaign, including: formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration or plume travel time. These predictions often have significant bearing on a decision that must be made. Examples include: how to allocate limited remediation resources between contaminated groundwater sites or where to place a waste repository site. Answering such questions depends on predictions of EPMs using forward models as well as levels of uncertainty related to these predictions. Uncertainty in EPM predictions stems from uncertainty in model parameters, which can be reduced by measurements taken in field campaigns. The costly nature of field measurements motivates a rational basis for determining a measurement strategy that is optimal with respect to the uncertainty in the EPM prediction. The tool of hypothesis testing allows this uncertainty to be quantified by computing the significance of the test resulting from a proposed field campaign. The significance of the test gives a rational basis for determining the optimality of a proposed field campaign. This hypothesis testing framework is demonstrated and discussed using various synthetic case studies. This study involves contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a specified location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical amount of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. The optimality of different field campaigns is assessed by computing the significance of the test resulting from each one. Evaluating the level of significance caused by a field campaign involves steps including likelihood-based inverse modeling and semi-analytical conditional particle tracking.
