New methods of testing nonlinear hypothesis using iterative NLLS estimator
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This research paper discusses methods of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator. Takeshi Amemiya [1] explained this method. In the present paper, however, a modified Wald test statistic due to Engle [6] is proposed for testing a nonlinear hypothesis using the iterative NLLS estimator. An alternative method for testing a nonlinear hypothesis, using an iterative NLLS estimator based on nonlinear studentized residuals, is also proposed, and an innovative method of testing a nonlinear hypothesis using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses, and this paper uses the asymptotic properties of the nonlinear least squares estimator established by Jennrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus the nonlinear regression model with heteroscedastic errors, and studied the problem of heteroscedasticity in nonlinear regression models with suitable illustrations. William Greene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
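A Wald test of a nonlinear restriction on NLLS estimates, as discussed above, can be sketched as follows. The model, data, and restriction h(θ) = a·b − 1 = 0 are invented for illustration; only the generic form of the statistic, W = h(θ̂)ᵀ[H V Hᵀ]⁻¹h(θ̂), is taken as given.

```python
# Sketch: Wald test of a nonlinear restriction on NLLS estimates.
# Model, data, and restriction are hypothetical illustrations.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

rng = np.random.default_rng(0)
x = np.linspace(0.1, 2.0, 80)
y = 2.0 * np.exp(0.5 * x) + rng.normal(0, 0.1, x.size)  # true a=2, b=0.5

def model(x, a, b):
    return a * np.exp(b * x)

theta, V = curve_fit(model, x, y, p0=(1.0, 1.0))  # V: estimated covariance

# H0: h(theta) = a*b - 1 = 0 (a nonlinear restriction; true values satisfy it)
h = theta[0] * theta[1] - 1.0
H = np.array([theta[1], theta[0]])   # gradient of h w.r.t. (a, b)
W = h**2 / (H @ V @ H)               # Wald statistic, ~ chi2(1) under H0
p = chi2.sf(W, df=1)
print(W, p)
```

Since the generated data satisfy the restriction, W should usually be small and the p-value moderate to large.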
[Dilemma of the null hypothesis in experimental tests of ecological hypotheses].
Li, Ji
2016-06-01
Experimental testing is one of the major ways of testing ecological hypotheses, though there are many arguments over the null hypothesis. Quinn and Dunham (1983) analyzed the hypothesis-deduction model of Platt (1964) and concluded that no null hypothesis in ecology can be strictly tested by experiments. Fisher's falsificationism and Neyman-Pearson (N-P) non-decisivity prevent a statistical null hypothesis from being strictly tested. Moreover, since the null hypothesis H0 (α=1, β=0) and the alternative hypothesis H1' (α'=1, β'=0) in ecological processes differ from those in classical physics, the ecological null hypothesis cannot be strictly tested experimentally either. These dilemmas of the null hypothesis can be relieved by reducing the P value, carefully selecting the null hypothesis, non-centralizing the non-null hypothesis, and using two-tailed tests. However, statistical null hypothesis significance testing (NHST) should not be equated with the logical test of causality in an ecological hypothesis. Hence, the findings and conclusions of methodological studies and experimental tests based on NHST are not always logically reliable.
Unscaled Bayes factors for multiple hypothesis testing in microarray experiments.
Bertolino, Francesco; Cabras, Stefano; Castellanos, Maria Eugenia; Racugno, Walter
2015-12-01
Multiple hypothesis testing collects a series of techniques, usually based on p-values, as a summary of the available evidence from many statistical tests. In hypothesis testing under a Bayesian perspective, the evidence for a specified hypothesis against an alternative, conditionally on data, is given by the Bayes factor. In this study, we approach multiple hypothesis testing based on both Bayes factors and p-values, regarding it as a multiple model selection problem. To obtain the Bayes factors we assume default priors that are typically improper; in this case the Bayes factor is usually undetermined because of the ratio of prior pseudo-constants. We show that ignoring the prior pseudo-constants leads to unscaled Bayes factors, which do not invalidate the inferential procedure in multiple hypothesis testing, because they are used within a comparative scheme. In fact, using partial information from the p-values, we are able to approximate the sampling null distribution of the unscaled Bayes factor and use it within Efron's multiple testing procedure. A simulation study suggests that, under a normal sampling model and even with small sample sizes, our approach yields false positive and false negative proportions lower than those of other common multiple hypothesis testing approaches based only on p-values. The proposed procedure is illustrated in two simulation studies, and the advantages of its use are shown in the analysis of two microarray experiments. © The Author(s) 2011.
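As a minimal illustration of Bayes-factor evidence for a point null against an alternative, here is a toy conjugate example (normal data, known sigma, a proper N(0, τ²) prior under H1); this illustrates the concept only, not the paper's unscaled-Bayes-factor procedure.

```python
# Sketch: Bayes factor for H0: mu = 0 vs H1: mu ~ N(0, tau^2),
# normal data with known sigma. Toy illustration with invented numbers.
import numpy as np
from scipy.stats import norm

def bf01(xbar, n, sigma=1.0, tau=1.0):
    s2 = sigma**2 / n
    m0 = norm.pdf(xbar, 0.0, np.sqrt(s2))            # marginal under H0
    m1 = norm.pdf(xbar, 0.0, np.sqrt(s2 + tau**2))   # marginal under H1
    return m0 / m1

print(bf01(0.0, 25))   # sample mean at 0: evidence favours the null
print(bf01(1.0, 25))   # sample mean far from 0: evidence favours H1
```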
ERIC Educational Resources Information Center
Lee, Jungmin
2016-01-01
This study tested the Bennett hypothesis by examining whether four-year colleges changed listed tuition and fees, the amount of institutional grants per student, and room and board charges after their states implemented statewide merit-based aid programs. According to the Bennett hypothesis, increases in government financial aid make it easier for…
Hypothesis Testing in Task-Based Interaction
ERIC Educational Resources Information Center
Choi, Yujeong; Kilpatrick, Cynthia
2014-01-01
Whereas studies show that comprehensible output facilitates L2 learning, hypothesis testing has received little attention in Second Language Acquisition (SLA). Following Shehadeh (2003), we focus on hypothesis testing episodes (HTEs) in which learners initiate repair of their own speech in interaction. In the context of a one-way information gap…
Classroom-Based Strategies to Incorporate Hypothesis Testing in Functional Behavior Assessments
ERIC Educational Resources Information Center
Lloyd, Blair P.; Weaver, Emily S.; Staubitz, Johanna L.
2017-01-01
When results of descriptive functional behavior assessments are unclear, hypothesis testing can help school teams understand how the classroom environment affects a student's challenging behavior. This article describes two hypothesis testing strategies that can be used in classroom settings: structural analysis and functional analysis. For each…
A statistical test to show negligible trend
Philip M. Dixon; Joseph H.K. Pechmann
2005-01-01
The usual statistical tests of trend are inappropriate for demonstrating the absence of trend. This is because failure to reject the null hypothesis of no trend does not prove that null hypothesis. The appropriate statistical method is based on an equivalence test. The null hypothesis is that the trend is not zero, i.e., outside an a priori specified equivalence region...
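The equivalence logic described above can be sketched with a TOST-style (two one-sided tests) procedure on a regression slope; the equivalence margin delta and the data below are illustrative assumptions.

```python
# Sketch: equivalence (negligible-trend) test via TOST on a regression slope.
# H0: |slope| >= delta (non-negligible trend); small p demonstrates the trend
# lies inside the a priori equivalence region (-delta, delta).
import numpy as np
from scipy import stats

def negligible_trend(x, y, delta, alpha=0.05):
    res = stats.linregress(x, y)
    df = len(x) - 2
    t_lo = (res.slope + delta) / res.stderr   # one-sided H0: slope <= -delta
    t_hi = (res.slope - delta) / res.stderr   # one-sided H0: slope >= +delta
    return max(stats.t.sf(t_lo, df), stats.t.cdf(t_hi, df))

rng = np.random.default_rng(1)
x = np.arange(30.0)
y = 5.0 + rng.normal(0, 0.5, 30)   # stable series, no real trend
print(negligible_trend(x, y, delta=0.2))   # small: trend demonstrably negligible
```

Note the reversal relative to the usual test: here a *small* p-value is evidence of *absence* of a meaningful trend.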
Improving the Crossing-SIBTEST Statistic for Detecting Non-uniform DIF.
Chalmers, R Philip
2018-06-01
This paper demonstrates that, after applying a simple modification to Li and Stout's (Psychometrika 61(4):647-677, 1996) CSIBTEST statistic, an improved variant of the statistic could be realized. It is shown that this modified version of CSIBTEST has a more direct association with the SIBTEST statistic presented by Shealy and Stout (Psychometrika 58(2):159-194, 1993). In particular, the asymptotic sampling distributions and general interpretation of the effect size estimates are the same for SIBTEST and the new CSIBTEST. Given the more natural connection to SIBTEST, it is shown that Li and Stout's hypothesis testing approach is insufficient for CSIBTEST; thus, an improved hypothesis testing procedure is required. Based on the presented arguments, a new chi-squared-based hypothesis testing approach is proposed for the modified CSIBTEST statistic. Positive results from a modest Monte Carlo simulation study strongly suggest the original CSIBTEST procedure and randomization hypothesis testing approach should be replaced by the modified statistic and hypothesis testing method.
NASA Astrophysics Data System (ADS)
Sirenko, M. A.; Tarasenko, P. F.; Pushkarev, M. I.
2017-01-01
One of the most notable features of sign-based statistical procedures is the opportunity to build an exact test for simple hypothesis testing of parameters in a regression model. In this article, we extend a sign-based approach to the nonlinear case with dependent noise. The examined model is a multi-quantile regression, which makes it possible to test hypotheses not only about regression parameters but also about noise parameters.
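The exactness of sign-based testing can be sketched for a simple hypothesis in a toy nonlinear median-regression model: under H0 the residual signs are iid fair coin flips, so the count of positive residuals is exactly Binomial(n, 1/2) regardless of the noise distribution. This is an illustration of the sign-based idea only, not the paper's multi-quantile procedure.

```python
# Sketch: exact sign-based test of theta = theta0 in y = f(x, theta) + noise
# with median-zero noise. Model and data are hypothetical illustrations.
import numpy as np
from scipy.stats import binomtest

def sign_test(x, y, f, theta0):
    s = np.sum(y - f(x, theta0) > 0)               # positive residuals
    return binomtest(int(s), n=len(x), p=0.5).pvalue  # exact under H0

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 40)
y = np.exp(1.0 * x) + rng.standard_t(df=3, size=40)  # heavy-tailed noise

f = lambda x, th: np.exp(th * x)
print(sign_test(x, y, f, 1.0))   # H0 true: p typically large
print(sign_test(x, y, f, 3.0))   # H0 false: p typically small
```

The exactness holds even for the heavy-tailed t(3) noise used here, which is the selling point of sign-based procedures.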
Experimental comparisons of hypothesis test and moving average based combustion phase controllers.
Gao, Jinwu; Wu, Yuhu; Shen, Tielong
2016-11-01
For engine control, combustion phase is the most effective and direct parameter for improving fuel efficiency. In this paper, a statistical control strategy based on a hypothesis-test criterion is discussed. Taking the location of peak pressure (LPP) as the combustion phase indicator, a statistical model of LPP is first proposed, and the controller design method is then discussed on the basis of both Z and T tests. For comparison, a moving-average-based control strategy is also presented and implemented in this study. Experiments on a spark ignition gasoline engine at various operating conditions show that the hypothesis-test-based controller regulates LPP close to the set point while maintaining a rapid transient response, and that the variance of LPP is also well constrained. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
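A minimal sketch of one hypothesis-test-based regulation step, with invented signal names and numbers (the real controller acts on spark timing; only the test-then-correct logic is shown here): the actuator moves only when the mean LPP deviates significantly from the set point, which filters out cycle-to-cycle combustion noise.

```python
# Sketch: Z-test-based regulation step for location of peak pressure (LPP).
# Window data, units, gain, and noise level are hypothetical.
import numpy as np
from scipy.stats import norm

def lpp_controller_step(lpp_window, setpoint, sigma, gain=0.5, alpha=0.05):
    n = len(lpp_window)
    mean = np.mean(lpp_window)
    z = (mean - setpoint) / (sigma / np.sqrt(n))
    if abs(z) > norm.ppf(1 - alpha / 2):   # reject H0: mean LPP == setpoint
        return -gain * (mean - setpoint)   # statistically significant: correct
    return 0.0                             # inside the noise band: hold

rng = np.random.default_rng(3)
window = 10.0 + rng.normal(0, 1.5, 30)   # 30 cycles of LPP (deg ATDC, invented)
print(lpp_controller_step(window, setpoint=8.0, sigma=1.5))   # acts, ~ -1
print(lpp_controller_step(window, setpoint=10.0, sigma=1.5))  # usually holds
```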
Testing the null hypothesis: the forgotten legacy of Karl Popper?
Wilkinson, Mick
2013-01-01
Testing of the null hypothesis is a fundamental aspect of the scientific method and has its basis in the falsification theory of Karl Popper. Null hypothesis testing makes use of deductive reasoning to ensure that the truth of conclusions is irrefutable. In contrast, attempting to demonstrate new facts by testing the experimental or research hypothesis relies on inductive reasoning and is prone to the problem of the Uniformity of Nature assumption described by David Hume in the eighteenth century. Despite this issue and the well-documented solution provided by Popper's falsification theory, the majority of publications are still written as though the research hypothesis is being tested. This is contrary to accepted scientific convention and possibly highlights a poor understanding of the application of conventional significance-based data analysis approaches. Our work should remain driven by conjecture and attempted falsification, such that it is always the null hypothesis that is tested. The write-up of our studies should make it clear that we are indeed testing the null hypothesis and conforming to the established and accepted philosophical conventions of the scientific method.
An Extension of RSS-based Model Comparison Tests for Weighted Least Squares
2012-08-22
use the model comparison test statistic to analyze the null hypothesis. Under the null hypothesis, the weighted least squares cost functional is J_WLS(q̂_WLS,H) = 10.3040 × 10^6. Under the alternative hypothesis, the weighted least squares cost functional is J_WLS(q̂_WLS) = 8.8394 × 10^6. Thus the model
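Using the two cost values quoted above, one common form of RSS-based model comparison statistic, U = n(J_H − J)/J, asymptotically chi-squared with r degrees of freedom, can be evaluated as follows. The sample size n and number of restrictions r are assumed values, since the excerpt does not state them.

```python
# Sketch: RSS-based model comparison statistic from nested WLS fits.
# n and r below are hypothetical; the cost values come from the excerpt.
from scipy.stats import chi2

J_H = 10.3040e6   # WLS cost under the null (restricted) fit
J   = 8.8394e6    # WLS cost under the alternative (unrestricted) fit
n, r = 100, 1     # sample size and number of restrictions: assumed

U = n * (J_H - J) / J          # one common form of the comparison statistic
p = chi2.sf(U, df=r)
print(U, p)                    # large U / tiny p -> reject the restricted model
```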
An omnibus test for the global null hypothesis.
Futschik, Andreas; Taus, Thomas; Zehetmayer, Sonja
2018-01-01
Global hypothesis tests are a useful tool in the context of clinical trials, genetic studies, or meta-analyses, when researchers are not interested in testing individual hypotheses but in testing whether none of the hypotheses is false. There are several possibilities for testing the global null hypothesis when the individual null hypotheses are independent. If it is assumed that many of the individual null hypotheses are false, combination tests have been recommended to maximize power. If, however, it is assumed that only one or a few null hypotheses are false, global tests based on individual test statistics are more powerful (e.g. the Bonferroni or Simes test). Usually, however, there is no a priori knowledge of the number of false individual null hypotheses. We therefore propose an omnibus test based on cumulative sums of the transformed p-values. We show that this test yields an impressive overall performance. The proposed method is implemented in an R package called omnibus.
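A generic construction in the spirit of the proposed test, with standardized cumulative sums of −log-transformed, sorted p-values calibrated by Monte Carlo under the global null, can be sketched as follows. The details (transformation, standardization, maximization) are illustrative assumptions, not the omnibus package's exact statistic.

```python
# Sketch: omnibus global test from cumulative sums of transformed p-values,
# calibrated by simulation under the global null. Details are illustrative.
import numpy as np

def cumsum_stat(pvals):
    z = -np.log(np.sort(pvals))           # E=1, Var=1 under the null
    k = np.arange(1, len(pvals) + 1)
    return np.max((np.cumsum(z) - k) / np.sqrt(k))  # scan over prefix sums

def omnibus_pvalue(pvals, n_sim=2000, seed=0):
    rng = np.random.default_rng(seed)
    obs = cumsum_stat(pvals)
    null = np.array([cumsum_stat(rng.uniform(size=len(pvals)))
                     for _ in range(n_sim)])
    return (1 + np.sum(null >= obs)) / (n_sim + 1)

rng = np.random.default_rng(4)
p_null = rng.uniform(size=20)                                   # global null true
p_alt = np.concatenate([rng.uniform(size=18), [1e-5, 1e-6]])    # two false nulls
print(omnibus_pvalue(p_null), omnibus_pvalue(p_alt))
```

Scanning over all prefix sums is what gives the statistic power against both "one very small p-value" and "many moderately small p-values" alternatives.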
Feldman, Anatol G; Latash, Mark L
2005-02-01
Criticisms of the equilibrium point (EP) hypothesis have recently appeared that are based on misunderstandings of some of its central notions. Starting from such interpretations of the hypothesis, incorrect predictions are made and tested. When the incorrect predictions prove false, the hypothesis is claimed to be falsified. In particular, the hypothesis has been rejected based on the wrong assumptions that it conflicts with empirically defined joint stiffness values or that it is incompatible with violations of equifinality under certain velocity-dependent perturbations. Typically, such attempts use notions describing the control of movements of artificial systems in place of physiologically relevant ones. While appreciating constructive criticisms of the EP hypothesis, we feel that incorrect interpretations have to be clarified by reiterating what the EP hypothesis does and does not predict. We conclude that the recent claims of falsifying the EP hypothesis and the calls for its replacement by EMG-force control hypothesis are unsubstantiated. The EP hypothesis goes far beyond the EMG-force control view. In particular, the former offers a resolution for the famous posture-movement paradox while the latter fails to resolve it.
Test of association: which one is the most appropriate for my study?
Gonzalez-Chica, David Alejandro; Bastos, João Luiz; Duquia, Rodrigo Pereira; Bonamigo, Renan Rangel; Martínez-Mesa, Jeovany
2015-01-01
Hypothesis tests are statistical tools widely used for assessing whether or not there is an association between two or more variables. These tests provide a probability of type 1 error (the p-value), which is used to accept or reject the null study hypothesis. This article provides a practical guide to help researchers carefully select the most appropriate procedure to answer their research question. We discuss the logic of hypothesis testing and present the prerequisites of each procedure based on practical examples.
A shift from significance test to hypothesis test through power analysis in medical research.
Singh, G
2006-01-01
Until recently, the medical research literature exhibited substantial dominance of Fisher's significance test approach to statistical inference, which concentrates on the probability of type I error, over Neyman-Pearson's hypothesis test, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomises results into significant or non-significant with a P value; the Neyman-Pearson approach speaks of acceptance or rejection of the null hypothesis. Based on the same theory, these two approaches address the same objective and conclude in their own ways. Advances in computing techniques and the availability of statistical software have resulted in increasing application of power calculations in medical research, and thereby in reporting the results of significance tests in light of the power of the test as well. The significance test approach, when it incorporates power analysis, contains the essence of the hypothesis test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to Neyman-Pearson's hypothesis test procedure.
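The power analysis that links the two traditions can be illustrated for a two-sided one-sample Z test of H0: μ = 0 with known σ, where power at effect δ is Φ(δ√n/σ − z₁₋α/₂) plus a usually negligible contribution from the opposite tail.

```python
# Sketch: power of a two-sided one-sample Z test at effect size delta.
import numpy as np
from scipy.stats import norm

def power_z(delta, n, sigma=1.0, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = delta * np.sqrt(n) / sigma          # noncentrality of the test
    return norm.sf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

print(round(power_z(0.5, 32), 3))   # medium effect at n = 32: ~0.81
```

Inverting this relation for a target power (e.g. 0.8 or 0.9) is exactly the sample-size calculation that made Neyman-Pearson considerations routine in medical research.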
Bayesian inference for psychology. Part II: Example applications with JASP.
Wagenmakers, Eric-Jan; Love, Jonathon; Marsman, Maarten; Jamil, Tahira; Ly, Alexander; Verhagen, Josine; Selker, Ravi; Gronau, Quentin F; Dropmann, Damian; Boutin, Bruno; Meerhoff, Frans; Knight, Patrick; Raj, Akash; van Kesteren, Erik-Jan; van Doorn, Johnny; Šmíra, Martin; Epskamp, Sacha; Etz, Alexander; Matzke, Dora; de Jong, Tim; van den Bergh, Don; Sarafoglou, Alexandra; Steingroever, Helen; Derks, Koen; Rouder, Jeffrey N; Morey, Richard D
2018-02-01
Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP ( http://www.jasp-stats.org ), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder's BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
Hypothesis testing of a change point during cognitive decline among Alzheimer's disease patients.
Ji, Ming; Xiong, Chengjie; Grundman, Michael
2003-10-01
In this paper, we present a statistical hypothesis test for detecting a change point over the course of cognitive decline among Alzheimer's disease (AD) patients. The model under the null hypothesis assumes a constant rate of cognitive decline over time, and the model under the alternative hypothesis is a general bilinear model with an unknown change point. When the change point is unknown, however, the null distribution of the test statistic is not analytically tractable and has to be simulated by parametric bootstrap. When the alternative hypothesis that a change point exists is accepted, we propose an estimate of its location based on Akaike's Information Criterion. We applied our method to a data set from the Neuropsychological Database Initiative, analyzing Mini-Mental State Examination (MMSE) scores with a random-slope and random-intercept model with a bilinear fixed effect. Our results show that, despite a large amount of missing data, accelerated decline did occur for MMSE among AD patients. Our finding supports the clinical belief that a change point exists during cognitive decline among AD patients and suggests the use of change point models for the longitudinal modeling of cognitive decline in AD research.
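A simplified, fixed-effects version of the change-point test can be sketched as a broken-stick fit with a parametric bootstrap of the statistic's null distribution. This is illustrative only: the paper's model has random effects, and for brevity the bootstrap generator below uses the true null parameters rather than a fitted null model.

```python
# Sketch: change-point test via broken-stick fit + parametric bootstrap.
# Fixed-effects toy version of the idea; data and grid are invented.
import numpy as np

def fit_rss(t, y, cp=None):
    cols = [np.ones_like(t), t]
    if cp is not None:
        cols.append(np.maximum(t - cp, 0.0))   # hinge term after change point
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

def stat(t, y, grid):
    rss0 = fit_rss(t, y)                       # straight-line (null) fit
    rss1 = min(fit_rss(t, y, cp) for cp in grid)   # best bilinear fit
    return (rss0 - rss1) / rss0                # relative RSS reduction

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 60)
y = 30.0 - 0.5 * t - 1.5 * np.maximum(t - 6.0, 0.0) + rng.normal(0, 1, 60)
grid = np.linspace(2, 8, 13)
obs = stat(t, y, grid)

boot = [stat(t, 30.0 - 0.5 * t + rng.normal(0, 1, 60), grid)  # null world
        for _ in range(200)]
p_boot = (1 + sum(b >= obs for b in boot)) / 201
print(obs, p_boot)    # small p_boot -> evidence of a change point
```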
Debates—Hypothesis testing in hydrology: Theory and practice
NASA Astrophysics Data System (ADS)
Pfister, Laurent; Kirchner, James W.
2017-03-01
The basic structure of the scientific method—at least in its idealized form—is widely championed as a recipe for scientific progress, but the day-to-day practice may be different. Here, we explore the spectrum of current practice in hypothesis formulation and testing in hydrology, based on a random sample of recent research papers. This analysis suggests that in hydrology, as in other fields, hypothesis formulation and testing rarely correspond to the idealized model of the scientific method. Practices such as "p-hacking" or "HARKing" (Hypothesizing After the Results are Known) are major obstacles to more rigorous hypothesis testing in hydrology, along with the well-known problem of confirmation bias—the tendency to value and trust confirmations more than refutations—among both researchers and reviewers. Nonetheless, as several examples illustrate, hypothesis tests have played an essential role in spurring major advances in hydrological theory. Hypothesis testing is not the only recipe for scientific progress, however. Exploratory research, driven by innovations in measurement and observation, has also underlain many key advances. Further improvements in observation and measurement will be vital to both exploratory research and hypothesis testing, and thus to advancing the science of hydrology.
Confidence intervals for single-case effect size measures based on randomization test inversion.
Michiels, Bart; Heyvaert, Mieke; Meulders, Ann; Onghena, Patrick
2017-02-01
In the current paper, we present a method to construct nonparametric confidence intervals (CIs) for single-case effect size measures in the context of various single-case designs. We use the relationship between a two-sided statistical hypothesis test at significance level α and a 100 (1 - α) % two-sided CI to construct CIs for any effect size measure θ that contain all point null hypothesis θ values that cannot be rejected by the hypothesis test at significance level α. This method of hypothesis test inversion (HTI) can be employed using a randomization test as the statistical hypothesis test in order to construct a nonparametric CI for θ. We will refer to this procedure as randomization test inversion (RTI). We illustrate RTI in a situation in which θ is the unstandardized and the standardized difference in means between two treatments in a completely randomized single-case design. Additionally, we demonstrate how RTI can be extended to other types of single-case designs. Finally, we discuss a few challenges for RTI as well as possibilities when using the method with other effect size measures, such as rank-based nonoverlap indices. Supplementary to this paper, we provide easy-to-use R code, which allows the user to construct nonparametric CIs according to the proposed method.
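A minimal sketch of randomization test inversion (RTI) for the unstandardized mean difference in a completely randomized two-treatment design: shift the A-phase data by each candidate θ₀, run an exact randomization test, and keep the θ₀ values that are not rejected. The data, grid, and α below are illustrative.

```python
# Sketch: nonparametric CI by inverting an exact randomization test over a
# grid of point nulls. Data and grid are hypothetical illustrations.
import numpy as np
from itertools import combinations

def randomization_p(a, b, theta0):
    a0 = np.asarray(a, float) - theta0        # shift A so H0 becomes diff == 0
    pooled = np.concatenate([a0, np.asarray(b, float)])
    obs = np.mean(a0) - np.mean(b)
    n, k = len(pooled), len(a0)
    diffs = [np.mean(pooled[list(idx)]) -
             np.mean(np.delete(pooled, list(idx)))
             for idx in combinations(range(n), k)]   # all reassignments
    return np.mean([abs(d) >= abs(obs) - 1e-12 for d in diffs])

def rti_ci(a, b, grid, alpha=0.05):
    keep = [th for th in grid if randomization_p(a, b, th) > alpha]
    return min(keep), max(keep)               # all non-rejected point nulls

a = [8, 9, 7, 9, 10, 8]   # treatment A measurements (invented)
b = [4, 5, 6, 5, 4, 6]    # treatment B measurements (invented)
print(rti_ci(a, b, np.arange(0.0, 7.01, 0.25)))
```

With 6 observations per condition there are C(12, 6) = 924 reassignments, so the smallest attainable two-sided p-value (2/924) is below α = 0.05 and the CI is informative.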
Test of the prey-base hypothesis to explain use of red squirrel midden sites by American martens
Dean E. Pearson; Leonard F. Ruggiero
2001-01-01
We tested the prey-base hypothesis to determine whether selection of red squirrel (Tamiasciurus hudsonicus) midden sites (cone caches) by American martens (Martes americana) for resting and denning could be attributed to greater abundance of small-mammal prey. Five years of livetrapping at 180 sampling stations in 2 drainages showed that small mammals,...
ERIC Educational Resources Information Center
White, Brian
2004-01-01
This paper presents a generally applicable method for characterizing subjects' hypothesis-testing behaviour based on a synthesis that extends previous work. Beginning with a transcript of subjects' speech and videotape of their actions, a Reasoning Map is created that depicts the flow of their hypotheses, tests, predictions, results, and…
Comparing Web, Group and Telehealth Formats of a Military Parenting Program
2017-06-01
Comparative effectiveness will be tested by specifying a non-equivalence hypothesis for group-based, web-facilitated, and individually facilitated formats relative to self-directed approaches. Finalize human subjects protocol and consent documents for review and approval. 1a. Finalize human subjects protocol and consent documents for pilot group (N=5 families) and randomized controlled
Effects of Item Exposure for Conventional Examinations in a Continuous Testing Environment.
ERIC Educational Resources Information Center
Hertz, Norman R.; Chinn, Roberta N.
This study explored the effect of item exposure on two conventional examinations administered as computer-based tests. A principal hypothesis was that item exposure would have little or no effect on average difficulty of the items over the course of an administrative cycle. This hypothesis was tested by exploring conventional item statistics and…
A large scale test of the gaming-enhancement hypothesis.
Przybylski, Andrew K; Wang, John C
2016-01-01
A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people's gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses was compared. Results provided no substantive evidence supporting the idea that having a preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support of the null hypothesis over what was predicted. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work.
Facio, Flavia M; Sapp, Julie C; Linn, Amy; Biesecker, Leslie G
2012-10-10
Massively-parallel sequencing (MPS) technologies create challenges for informed consent of research participants given the enormous scale of the data and the wide range of potential results. We propose that the consent process in these studies be based on whether they use MPS to test a hypothesis or to generate hypotheses. To demonstrate the differences in these approaches to informed consent, we describe the consent processes for two MPS studies. The purpose of our hypothesis-testing study is to elucidate the etiology of rare phenotypes using MPS. The purpose of our hypothesis-generating study is to test the feasibility of using MPS to generate clinical hypotheses, and to approach the return of results as an experimental manipulation. Issues to consider in both designs include: volume and nature of the potential results, primary versus secondary results, return of individual results, duty to warn, length of interaction, target population, and privacy and confidentiality. The categorization of MPS studies as hypothesis-testing versus hypothesis-generating can help to clarify the issue of so-called incidental or secondary results for the consent process, and aid the communication of the research goals to study participants.
Revised standards for statistical evidence.
Johnson, Valen E
2013-11-26
Recent advances in Bayesian hypothesis testing have led to the development of uniformly most powerful Bayesian tests, which represent an objective, default class of Bayesian hypothesis tests that have the same rejection regions as classical significance tests. Based on the correspondence between these two classes of tests, it is possible to equate the size of classical hypothesis tests with evidence thresholds in Bayesian tests, and to equate P values with Bayes factors. An examination of these connections suggests that recent concerns over the lack of reproducibility of scientific studies can be attributed largely to the conduct of significance tests at unjustifiably high levels of significance. To correct this problem, evidence thresholds required for the declaration of a significant finding should be increased to 25-50:1, and to 100-200:1 for the declaration of a highly significant finding. In terms of classical hypothesis tests, these evidence standards mandate the conduct of tests at the 0.005 or 0.001 level of significance.
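A related, standard calibration (the Sellke-Bayarri-Berger bound, not Johnson's UMPBT construction) makes the same point numerically: a p-value p < 1/e can represent at most −1/(e·p·ln p) to 1 in Bayes-factor evidence against the null.

```python
# Sketch: the Sellke-Bayarri-Berger upper bound on the Bayes factor
# that a given p-value can represent (valid for p < 1/e).
import math

def max_bayes_factor(p):
    assert 0 < p < 1 / math.e
    return -1.0 / (math.e * p * math.log(p))

for p in (0.05, 0.005, 0.001):
    print(p, round(max_bayes_factor(p), 1))
```

The bound caps p = 0.05 near 2.5:1, while p = 0.005 and p = 0.001 correspond to roughly 14:1 and 53:1, which is broadly consonant with the stricter thresholds advocated above.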
On selecting evidence to test hypotheses: A theory of selection tasks.
Ragni, Marco; Kola, Ilir; Johnson-Laird, Philip N
2018-05-21
How individuals choose evidence to test hypotheses is a long-standing puzzle. According to an algorithmic theory that we present, it is based on dual processes: individuals' intuitions, depending on mental models of the hypothesis, yield selections of evidence matching instances of the hypothesis, but their deliberations yield selections of potential counterexamples to the hypothesis. The results of 228 experiments using Wason's selection task corroborated the theory's predictions. Participants made dependent choices of items of evidence: the selections in 99 experiments were significantly more redundant (using Shannon's measure) than those of 10,000 simulations of each experiment based on independent selections. Participants tended to select evidence corresponding to instances of hypotheses, or to their counterexamples, or to both. Given certain contents, instructions, or framings of the task, they were more likely to select potential counterexamples to the hypothesis. When participants received feedback about their selections in the "repeated" selection task, they switched from selections of instances of the hypothesis to selections of potential counterexamples. These results eliminated most of the 15 alternative theories of selecting evidence. In a meta-analysis, the model theory yielded a better fit of the results of 228 experiments than the one remaining theory based on reasoning rather than meaning. We discuss the implications of the model theory for hypothesis testing and for a well-known paradox of confirmation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
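The Shannon redundancy measure mentioned above can be computed directly: redundancy compares the entropy of the observed selection-pattern frequencies with the maximum (uniform) entropy over all possible patterns. The pattern labels and counts below are invented for illustration.

```python
# Sketch: Shannon redundancy of selection patterns. Each participant's choice
# over the four Wason cards is one of 16 possible patterns. Counts invented.
import math
from collections import Counter

def redundancy(patterns, n_possible=16):
    counts = Counter(patterns)
    n = len(patterns)
    H = -sum(c / n * math.log2(c / n) for c in counts.values())
    return 1.0 - H / math.log2(n_possible)   # 0 = uniform, 1 = one pattern

# Hypothetical sample: most participants pick one of a few common patterns.
observed = ["p"] * 40 + ["pq"] * 35 + ["p,not-q"] * 15 + ["pq,not-q"] * 10
print(round(redundancy(observed), 3))   # well above 0: dependent choices
```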
Killeen's (2005) "p[subscript rep]" Coefficient: Logical and Mathematical Problems
ERIC Educational Resources Information Center
Maraun, Michael; Gabriel, Stephanie
2010-01-01
In his article, "An Alternative to Null-Hypothesis Significance Tests," Killeen (2005) urged the discipline to abandon the practice of "p[subscript obs]"-based null hypothesis testing and to quantify the signal-to-noise characteristics of experimental outcomes with replication probabilities. He described the coefficient that he…
Robust Approach to Verifying the Weak Form of the Efficient Market Hypothesis
NASA Astrophysics Data System (ADS)
Střelec, Luboš
2011-09-01
The weak form of the efficient markets hypothesis states that prices incorporate only past information about the asset. An implication of this form of the hypothesis is that one cannot detect mispriced assets and consistently outperform the market through technical analysis of past prices. One possible formulation of the efficient market hypothesis used for weak-form tests is that share prices follow a random walk, meaning that returns are realizations of an IID sequence of random variables. Consequently, for verifying the weak form of the efficient market hypothesis, we can use distribution tests, among others, i.e. some tests of normality and/or some graphical methods. Many procedures for testing the normality of univariate samples have been proposed in the literature [7]. Today the most popular omnibus test of normality for general use is the Shapiro-Wilk test. The Jarque-Bera test is the most widely adopted omnibus test of normality in econometrics and related fields. In particular, the Jarque-Bera test (i.e. a test based on the classical measures of skewness and kurtosis) is frequently used when one is more concerned about heavy-tailed alternatives. As these measures are based on moments of the data, this test has a zero breakdown value [2]; in other words, a single outlier can make the test worthless. The reason so many classical procedures are nonrobust to outliers is that the parameters of the model are expressed in terms of moments, and their classical estimators are expressed in terms of sample moments, which are very sensitive to outliers. Another approach to robustness is to concentrate on the parameters of interest suggested by the problem under study.
Consequently, novel robust procedures for testing normality are presented in this paper to overcome the shortcomings of classical normality tests on financial data, which typically feature remote data points and additional types of deviations from normality. The study also discusses results of simulation power studies of these tests against selected alternatives. Based on the outcome of the power simulation study, selected normality tests were then used to verify the weak form of efficiency in Central European stock markets.
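The zero-breakdown behaviour of the Jarque-Bera test described above, JB = n/6·(S² + (K − 3)²/4) for sample skewness S and kurtosis K, can be demonstrated directly: a single remote observation is enough to reject normality.

```python
# Sketch: one outlier drives the moment-based Jarque-Bera test to rejection,
# illustrating its zero breakdown value. Data are simulated.
import numpy as np
from scipy.stats import jarque_bera

rng = np.random.default_rng(6)
clean = rng.normal(0, 1, 200)            # genuinely normal sample
dirty = np.append(clean, 10.0)           # plus one remote observation

print(jarque_bera(clean).pvalue)         # typically non-significant
print(jarque_bera(dirty).pvalue)         # collapses toward zero
```

This is exactly the failure mode on financial return series, where a few extreme observations are the norm rather than the exception, motivating the robust procedures proposed above.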
Mental Abilities and School Achievement: A Test of a Mediation Hypothesis
ERIC Educational Resources Information Center
Vock, Miriam; Preckel, Franzis; Holling, Heinz
2011-01-01
This study analyzes the interplay of four cognitive abilities--reasoning, divergent thinking, mental speed, and short-term memory--and their impact on academic achievement in school in a sample of adolescents in grades seven to 10 (N = 1135). Based on information processing approaches to intelligence, we tested a mediation hypothesis, which states…
Action perception as hypothesis testing.
Donnarumma, Francesco; Costantini, Marcello; Ambrosini, Ettore; Friston, Karl; Pezzulo, Giovanni
2017-04-01
We present a novel computational model that describes action perception as an active inferential process that combines motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions - and underlying hypotheses. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction; but uncertainty induces a more reactive gaze strategy, via tracking the observed movements. Our model offers a novel perspective on action observation that highlights its active nature based on prediction dynamics and hypothesis testing. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
A large scale test of the gaming-enhancement hypothesis
Wang, John C.
2016-01-01
A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people's gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses was compared. Results provided no substantive evidence supporting the idea that having a preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support of the null hypothesis over the predicted gaming-enhancement effect. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work. PMID:27896035
Multiple Hypothesis Testing for Experimental Gingivitis Based on Wilcoxon Signed Rank Statistics
Preisser, John S.; Sen, Pranab K.; Offenbacher, Steven
2011-01-01
Dental research often involves repeated multivariate outcomes on a small number of subjects for which there is interest in identifying outcomes that exhibit change in their levels over time as well as to characterize the nature of that change. In particular, periodontal research often involves the analysis of molecular mediators of inflammation for which multivariate parametric methods are highly sensitive to outliers and deviations from Gaussian assumptions. In such settings, nonparametric methods may be favored over parametric ones. Additionally, there is a need for statistical methods that control an overall error rate for multiple hypothesis testing. We review univariate and multivariate nonparametric hypothesis tests and apply them to longitudinal data to assess changes over time in 31 biomarkers measured from the gingival crevicular fluid in 22 subjects whereby gingivitis was induced by temporarily withholding tooth brushing. To identify biomarkers that can be induced to change, multivariate Wilcoxon signed rank tests for a set of four summary measures based upon area under the curve are applied for each biomarker and compared to their univariate counterparts. Multiple hypothesis testing methods with choice of control of the false discovery rate or strong control of the family-wise error rate are examined. PMID:21984957
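A minimal sketch of this workflow, i.e. univariate Wilcoxon signed rank tests across many biomarkers followed by FDR or family-wise error control, might look as follows. The data are simulated and the dimensions merely mirror those reported above, so this is not the authors' analysis.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
n_subjects, n_biomarkers = 22, 31

# Hypothetical paired data: biomarker levels at baseline and after induced gingivitis.
baseline = rng.normal(10, 2, (n_subjects, n_biomarkers))
induced = baseline + rng.normal(0.5, 2, (n_subjects, n_biomarkers))

# Univariate Wilcoxon signed rank test for each biomarker.
pvals = np.array([stats.wilcoxon(induced[:, j], baseline[:, j]).pvalue
                  for j in range(n_biomarkers)])

# Control the FDR (Benjamini-Hochberg) or the FWER (Holm) across the 31 tests.
fdr_reject, fdr_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
fwer_reject, fwer_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(fdr_reject.sum(), fwer_reject.sum())
```

Strong FWER control (Holm) never rejects more hypotheses than FDR control (Benjamini-Hochberg) on the same p-values, reflecting the trade-off discussed in the abstract.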
Risk-Based, Hypothesis-Driven Framework for Hydrological Field Campaigns with Case Studies
NASA Astrophysics Data System (ADS)
Harken, B.; Rubin, Y.
2014-12-01
There are several stages in any hydrological modeling campaign, including: formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration or plume travel time. These predictions often have significant bearing on a decision that must be made. Examples include: how to allocate limited remediation resources between contaminated groundwater sites or where to place a waste repository site. Answering such questions depends on predictions of EPMs using forward models as well as levels of uncertainty related to these predictions. Uncertainty in EPM predictions stems from uncertainty in model parameters, which can be reduced by measurements taken in field campaigns. The costly nature of field measurements motivates a rational basis for determining a measurement strategy that is optimal with respect to the uncertainty in the EPM prediction. The tool of hypothesis testing allows this uncertainty to be quantified by computing the significance of the test resulting from a proposed field campaign. The significance of the test gives a rational basis for determining the optimality of a proposed field campaign. This hypothesis testing framework is demonstrated and discussed using various synthetic case studies. This study involves contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a specified location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical amount of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. The optimality of different field campaigns is assessed by computing the significance of the test resulting from each one. 
Evaluating the level of significance caused by a field campaign involves steps including likelihood-based inverse modeling and semi-analytical conditional particle tracking.
Testing for purchasing power parity in 21 African countries using several unit root tests
NASA Astrophysics Data System (ADS)
Choji, Niri Martha; Sek, Siok Kun
2017-04-01
Purchasing power parity (PPP) is used as a basis for international income and expenditure comparison through the exchange rate theory. However, empirical studies disagree on the validity of PPP. In this paper, we test the validity of PPP using a panel data approach. We apply seven different panel unit root tests to the PPP hypothesis based on quarterly data on the real effective exchange rate for 21 African countries over the period 1971:Q1-2012:Q4. All seven tests rejected the hypothesis of stationarity, meaning that absolute PPP does not hold in those African countries. This result confirms the claim from previous studies that standard panel unit root tests fail to support the PPP hypothesis.
A Clinical Evaluation of the Competing Sources of Input Hypothesis
ERIC Educational Resources Information Center
Fey, Marc E.; Leonard, Laurence B.; Bredin-Oja, Shelley L.; Deevy, Patricia
2017-01-01
Purpose: Our purpose was to test the competing sources of input (CSI) hypothesis by evaluating an intervention based on its principles. This hypothesis proposes that children's use of main verbs without tense is the result of their treating certain sentence types in the input (e.g., "Was 'she laughing'?") as models for declaratives…
Chiba, Yasutaka
2017-09-01
Fisher's exact test is commonly used to compare two groups when the outcome is binary in randomized trials. In the context of causal inference, this test explores the sharp causal null hypothesis (i.e. the causal effect of treatment is the same for all subjects), but not the weak causal null hypothesis (i.e. the causal risks are the same in the two groups). Therefore, in general, rejection of the null hypothesis by Fisher's exact test does not mean that the causal risk difference is not zero. Recently, Chiba (Journal of Biometrics and Biostatistics 2015; 6: 244) developed a new exact test for the weak causal null hypothesis when the outcome is binary in randomized trials; the new test is not based on any large sample theory and does not require any assumption. In this paper, we extend the new test; we create a version of the test applicable to a stratified analysis. The stratified exact test that we propose is general in nature and can be used in several approaches toward the estimation of treatment effects after adjusting for stratification factors. The stratified Fisher's exact test of Jung (Biometrical Journal 2014; 56: 129-140) tests the sharp causal null hypothesis. This test applies a crude estimator of the treatment effect and can be regarded as a special case of our proposed exact test. Our proposed stratified exact test can be straightforwardly extended to analysis of noninferiority trials and to construct the associated confidence interval. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
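Chiba's stratified exact test for the weak causal null is not available in standard libraries, so the sketch below shows only the classical Fisher's exact test applied within each stratum of a hypothetical trial, i.e. a test of the sharp causal null discussed above. The 2x2 tables are invented for illustration.

```python
from scipy.stats import fisher_exact

# Hypothetical stratified randomized trial: one 2x2 table per stratum,
# rows = treatment/control, columns = responders/non-responders.
strata = {
    "stratum 1": [[8, 2], [4, 6]],
    "stratum 2": [[5, 5], [3, 7]],
}

# Classical Fisher's exact test within each stratum (sharp causal null);
# a stratified test for the weak causal null would combine the strata.
for name, table in strata.items():
    odds_ratio, p = fisher_exact(table, alternative="two-sided")
    print(name, odds_ratio, p)
```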
Sex ratios in the two Germanies: a test of the economic stress hypothesis.
Catalano, Ralph A
2003-09-01
Literature describing temporal variation in the secondary sex ratio among humans reports an association between population stressors and declines in the odds of male birth. Explanations of this phenomenon draw on reports that stressed females spontaneously abort male more than female fetuses, and that stressed males exhibit reduced sperm motility. This work has led to the argument that population stress induced by a declining economy reduces the human sex ratio. No direct test of this hypothesis appears in the literature. Here, a test is offered based on a comparison of the sex ratio in East and West Germany for the years 1946 to 1999. The theory suggests that the East German sex ratio should be lower in 1991, when East Germany's economy collapsed, than expected from its own history and from the sex ratio in West Germany. The hypothesis is tested using time-series modelling methods. The data support the hypothesis. The sex ratio in East Germany was at its lowest in 1991. This first direct test supports the hypothesis that economic decline reduces the human sex ratio.
Knowledge Base Refinement as Improving an Incorrect and Incomplete Domain Theory
1990-04-01
Ginsberg et al., 1985), and RL (Fu and Buchanan, 1985), which perform empirical induction over a library of test cases. This chapter describes a new... state knowledge. Examples of high-level goals are: to test a hypothesis, to differentiate between several plausible hypotheses, to ask a clarifying...
2004-2006 Puget Sound Traffic Choices Study | Transportation Secure Data Center | NREL
The 2004-2006 Puget Sound Traffic Choices Study tested the hypothesis that time-of-day variable tolling affects traffic choices, conducted with the Federal Highway Administration as a pilot project on congestion-based tolling.
Sagnac effect and Ritz ballistic hypothesis (Review)
NASA Astrophysics Data System (ADS)
Malykin, G. B.
2010-12-01
It is shown that the Ritz ballistic hypothesis, which is based on the vector summation of the speed of light with the velocity of the radiation source, contradicts the existence of the Sagnac effect. Using the particular example of a three-mirror ring interferometer, it is shown that applying the Ritz ballistic hypothesis leads to an obvious calculation error, namely, to the appearance of a difference in the propagation times of counterpropagating waves in the absence of rotation. A review is given of experiments, of the processing of astronomical observations, and of discussions devoted to testing the Ritz ballistic hypothesis. A number of other physical phenomena that refute the Ritz ballistic hypothesis are considered.
Hatori, Tsuyoshi; Takemura, Kazuhisa; Fujii, Satoshi; Ideno, Takashi
2011-06-01
This paper presents a new model of category judgment. The model hypothesizes that, when more attention is focused on a category, the psychological range of the category becomes narrower (the category-focusing hypothesis). We explain this hypothesis using the metaphor of a "mental-box" model: the more attention that is focused on a mental box (i.e., a category set), the smaller the size of the box becomes (i.e., the cardinal number of the category set). The hypothesis was tested in an experiment (N = 40) in which the focus of attention on prescribed verbal categories was manipulated. The data supported the hypothesis: category-focusing effects were found in three experimental tasks (regarding the categories of "food", "height", and "income"). The validity of the hypothesis is discussed based on the results.
Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene
2015-05-01
In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number). © 2014 The British Psychological Society.
NASA Astrophysics Data System (ADS)
Thompson, E. M.; Hewlett, J. B.; Baise, L. G.; Vogel, R. M.
2011-01-01
Annual maximum (AM) time series are incomplete (i.e., censored) when no events are included above the assumed censoring threshold (i.e., magnitude of completeness). We introduce a distributional hypothesis test for left-censored Gumbel observations based on the probability plot correlation coefficient (PPCC). Critical values of the PPCC hypothesis test statistic are computed from Monte Carlo simulations and are a function of sample size, censoring level, and significance level. When applied to a global catalog of earthquake observations, the left-censored Gumbel PPCC tests are unable to reject the Gumbel hypothesis for 45 of 46 seismic regions. We apply four different field significance tests for combining individual tests into a collective hypothesis test. None of the field significance tests is able to reject the global hypothesis that AM earthquake magnitudes arise from a Gumbel distribution. Because the field significance levels are not conclusive, we also compute the likelihood that these field significance tests are unable to reject the Gumbel model when the samples arise from a more complex distributional alternative. A power study documents that the censored Gumbel PPCC test is unable to reject some important and viable Generalized Extreme Value (GEV) alternatives. Thus, we cannot rule out the possibility that the global AM earthquake time series could arise from a GEV distribution with a finite upper bound, also known as a reverse Weibull distribution. Our power study also indicates that the binomial and uniform field significance tests are substantially more powerful than the more commonly used Bonferroni and false discovery rate multiple comparison procedures.
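For complete (uncensored) samples, a PPCC test against the Gumbel distribution with Monte Carlo critical values can be sketched as follows; the left-censored version used in the paper is more involved and is not shown. The sample size, distribution parameters, and number of simulations are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def gumbel_ppcc(sample):
    # Correlation coefficient of the probability plot against Gumbel quantiles.
    (osm, osr), (slope, intercept, r) = stats.probplot(sample, dist="gumbel_r")
    return r

n = 50
observed = stats.gumbel_r.rvs(loc=6.0, scale=0.5, size=n, random_state=rng)
r_obs = gumbel_ppcc(observed)

# Monte Carlo null distribution of the PPCC for Gumbel samples of the same size;
# the test rejects the Gumbel hypothesis when r_obs falls below the lower
# critical value at the chosen significance level.
r_null = np.array([gumbel_ppcc(stats.gumbel_r.rvs(size=n, random_state=rng))
                   for _ in range(2000)])
critical = np.quantile(r_null, 0.05)  # 5% significance level
print(r_obs, critical)
```

Because the PPCC is a correlation, it is invariant to the location and scale of the data, so the simulated null distribution does not depend on the unknown Gumbel parameters.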
Lee, Chai-Jin; Kang, Dongwon; Lee, Sangseon; Lee, Sunwon; Kang, Jaewoo; Kim, Sun
2018-05-25
Determining the functions of a gene requires time-consuming, expensive biological experiments. Scientists can speed up this experimental process if literature information and biological networks are adequately provided. In this paper, we present a web-based information system that can perform in silico experiments that computationally test hypotheses on the function of a gene. A hypothesis specified in English by the user is converted to genes using a literature and knowledge mining system called BEST. Condition-specific TF, miRNA, and PPI (protein-protein interaction) networks are automatically generated by projecting gene and miRNA expression data onto template networks. An in silico experiment then tests how well the target genes are connected from the knockout gene through the condition-specific networks. The test result visualizes paths from the knockout gene to the target genes in the three networks. Statistical and information-theoretic scores are provided on the resulting web page to help scientists either accept or reject the hypothesis being tested. Our web-based system was extensively tested using three data sets: E2f1, Lrrk2, and Dicer1 knockout data sets. We were able to reproduce gene functions reported in the original research papers. In addition, we tested comprehensively with all disease names in MalaCards as hypotheses to show the effectiveness of our system. Our in silico experiment system can be very useful in suggesting biological mechanisms which can be further tested in vivo or in vitro. http://biohealth.snu.ac.kr/software/insilico/. Copyright © 2018 Elsevier Inc. All rights reserved.
Does Maltreatment Beget Maltreatment? A Systematic Review of the Intergenerational Literature
Thornberry, Terence P.; Knight, Kelly E.; Lovegrove, Peter J.
2014-01-01
In this paper, we critically review the literature testing the cycle of maltreatment hypothesis which posits continuity in maltreatment across adjacent generations. That is, we examine whether a history of maltreatment victimization is a significant risk factor for the later perpetration of maltreatment. We begin by establishing 11 methodological criteria that studies testing this hypothesis should meet. They include such basic standards as using representative samples, valid and reliable measures, prospective designs, and different reporters for each generation. We identify 47 studies that investigated this issue and then evaluate them with regard to the 11 methodological criteria. Overall, most of these studies report findings consistent with the cycle of maltreatment hypothesis. Unfortunately, at the same time, few of them satisfy the basic methodological criteria that we established; indeed, even the stronger studies in this area only meet about half of them. Moreover, the methodologically stronger studies present mixed support for the hypothesis. As a result, the positive association often reported in the literature appears to be based largely on the methodologically weaker designs. Based on our systematic methodological review, we conclude that this small and methodologically weak body of literature does not provide a definitive test of the cycle of maltreatment hypothesis. We conclude that it is imperative to develop more robust and methodologically adequate assessments of this hypothesis to more accurately inform the development of prevention and treatment programs. PMID:22673145
Hypothesis testing for band size detection of high-dimensional banded precision matrices.
An, Baiguo; Guo, Jianhua; Liu, Yufeng
2014-06-01
Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, crossvalidation is commonly used; however, we show that crossvalidation not only is computationally intensive but can be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.
Earthquake likelihood model testing
Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.
2007-01-01
INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful, but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified "bins" with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models.
A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a wide range of possible testing procedures exist. Jolliffe and Stephenson (2003) present different forecast verifications from atmospheric science, among them likelihood testing of probability forecasts and testing the occurrence of binary events. Testing binary events requires that for each forecasted event, the spatial, temporal and magnitude limits be given. Although major earthquakes can be considered binary events, the models within the RELM project express their forecasts on a spatial grid and in 0.1 magnitude units; thus the results are a distribution of rates over space and magnitude. These forecasts can be tested with likelihood tests. In general, likelihood tests assume a valid null hypothesis against which a given hypothesis is tested. The outcome is either a rejection of the null hypothesis in favor of the test hypothesis or a nonrejection, meaning the test hypothesis cannot outperform the null hypothesis at a given significance level. Within RELM, there is no accepted null hypothesis and thus the likelihood test needs to be expanded to allow comparable testing of equipollent hypotheses. To test models against one another, we require that forecasts are expressed in a standard format: the average rate of earthquake occurrence within pre-specified limits of hypocentral latitude, longitude, depth, magnitude, time period, and focal mechanisms. Focal mechanisms should either be described as the inclination of the P-axis, the declination of the P-axis, and the inclination of the T-axis, or as strike, dip, and rake angles. Schorlemmer and Gerstenberger (2007, this issue) designed classes of these parameters such that similar models will be tested against each other. These classes make the forecasts comparable between models.
Additionally, we are limited to testing only what is precisely defined and consistently reported in earthquake catalogs. Therefore it is currently not possible to test such information as fault rupture length or area, asperity location, etc. Also, to account for data quality issues, we allow for location and magnitude uncertainties as well as the probability that an event is dependent on another event. As we mentioned above, only models with comparable forecasts can be tested against each other. Our current tests are designed to examine grid-based models. This requires that any fault-based model be adapted to a grid before testing is possible. While this is a limitation of the testing, it is an inherent difficulty in any such comparative testing. Please refer to appendix B for a statistical evaluation of the application of the Poisson hypothesis to fault-based models. The testing suite we present consists of three different tests: the L-Test, N-Test, and R-Test. These tests are defined similarly to those of Kagan and Jackson (1995). The first two tests examine the consistency of the hypotheses with the observations, while the last test compares the spatial performances of the models.
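As a sketch of the likelihood idea for gridded forecasts, assuming independent Poisson counts per bin (in the spirit of the tests described here but much simplified), one might compare the joint log-likelihood of an observed catalog under two rate models. The bin count, rate distributions, and models are all invented for illustration.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(5)
n_bins = 200  # space-magnitude bins of a gridded forecast

# Two hypothetical forecast models: expected earthquake rates per bin.
model_a = rng.gamma(2.0, 0.05, n_bins)
model_b = rng.gamma(2.0, 0.05, n_bins)

# "Observed" catalog counts generated from model A.
observed = rng.poisson(model_a)

# Joint log-likelihood of the observed counts under each forecast,
# assuming independent Poisson counts per bin.
ll_a = poisson.logpmf(observed, model_a).sum()
ll_b = poisson.logpmf(observed, model_b).sum()
print(ll_a - ll_b)  # positive values favour model A
```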
A more powerful test based on ratio distribution for retention noninferiority hypothesis.
Deng, Ling; Chen, Gang
2013-03-11
Rothmann et al. (2003) proposed a method for the statistical inference of the fraction retention noninferiority (NI) hypothesis. A fraction retention hypothesis is defined as a ratio of the new treatment effect versus the control effect in the context of a time-to-event endpoint. One of the major concerns in using this method in the design of an NI trial is that, with a limited sample size, the power of the study is usually very low. This makes an NI trial inapplicable, particularly when using a time-to-event endpoint. To improve power, Wang et al. (2006) proposed a ratio test based on asymptotic normality theory. Under a strong assumption (equal variance of the NI test statistic under the null and alternative hypotheses), the sample size using Wang's test was much smaller than that using Rothmann's test. However, in practice, the assumption of equal variance is generally questionable for an NI trial design. This assumption is removed in the ratio test proposed in this article, which is derived directly from a Cauchy-like ratio distribution. In addition, using this method, the fundamental assumption used in Rothmann's test, that the observed control effect is always positive, i.e., that the observed hazard ratio for placebo over the control is greater than 1, is no longer necessary. Without assuming equal variance under the null and alternative hypotheses, the sample size required for an NI trial can be significantly reduced by using the proposed ratio test for a fraction retention NI hypothesis.
Wald Sequential Probability Ratio Test for Space Object Conjunction Assessment
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F Landis
2014-01-01
This paper shows how satellite owner/operators may use sequential estimates of collision probability, along with a prior assessment of the base risk of collision, in a compound hypothesis ratio test to inform decisions concerning collision risk mitigation maneuvers. The compound hypothesis test reduces to a simple probability ratio test, which appears to be a novel result. The test satisfies tolerances related to targeted false alarm and missed detection rates. This result is independent of the method one uses to compute the probability density that one integrates to compute collision probability. A well-established test case from the literature shows that this test yields acceptable results within the constraints of a typical operational conjunction assessment decision timeline. Another example illustrates the use of the test in a practical conjunction assessment scenario based on operations of the International Space Station.
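A minimal sketch of a Wald SPRT for a Bernoulli parameter is given below; it illustrates the thresholds derived from targeted false-alarm and missed-detection rates, but it is not the paper's compound probability ratio test for collision assessment, and all numerical values are illustrative.

```python
import math
import random

def sprt(observations, p0=0.1, p1=0.5, alpha=0.01, beta=0.05):
    # Wald's thresholds from the targeted false-alarm (alpha) and
    # missed-detection (beta) rates.
    upper = math.log((1 - beta) / alpha)   # accept H1 when crossed
    lower = math.log(beta / (1 - alpha))   # accept H0 when crossed
    llr = 0.0
    for k, x in enumerate(observations, start=1):
        # Log-likelihood ratio increment for one Bernoulli observation.
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "accept H1", k
        if llr <= lower:
            return "accept H0", k
    return "continue", len(observations)

random.seed(6)
data = [1 if random.random() < 0.5 else 0 for _ in range(200)]  # data from H1
print(sprt(data))
```

The sequential character is the point: the test usually terminates after far fewer observations than a fixed-sample test of the same error rates would need.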
An Assessment of the Impact of Hafting on Paleoindian Point Variability
Buchanan, Briggs; O'Brien, Michael J.; Kilby, J. David; Huckell, Bruce B.; Collard, Mark
2012-01-01
It has long been argued that the form of North American Paleoindian points was affected by hafting. According to this hypothesis, hafting constrained point bases such that they are less variable than point blades. The results of several studies have been claimed to be consistent with this hypothesis. However, there are reasons to be skeptical of these results. None of the studies employed statistical tests, and all of them focused on points recovered from kill and camp sites, which makes it difficult to be certain that the differences in variability are the result of hafting rather than a consequence of resharpening. Here, we report a study in which we tested the predictions of the hafting hypothesis by statistically comparing the variability of different parts of Clovis points. We controlled for the potentially confounding effects of resharpening by analyzing largely unused points from caches as well as points from kill and camp sites. The results of our analyses were not consistent with the predictions of the hypothesis. We found that several blade characters and point thickness were no more variable than the base characters. Our results indicate that the hafting hypothesis does not hold for Clovis points and indicate that there is a need to test its applicability in relation to post-Clovis Paleoindian points. PMID:22666320
Is it better to select or to receive? Learning via active and passive hypothesis testing.
Markant, Douglas B; Gureckis, Todd M
2014-02-01
People can test hypotheses through either selection or reception. In a selection task, the learner actively chooses observations to test his or her beliefs, whereas in reception tasks data are passively encountered. People routinely use both forms of testing in everyday life, but the critical psychological differences between selection and reception learning remain poorly understood. One hypothesis is that selection learning improves learning performance by enhancing generic cognitive processes related to motivation, attention, and engagement. Alternatively, we suggest that differences between these two learning modes derive from a hypothesis-dependent sampling bias that is introduced when a person collects data to test his or her own individual hypothesis. Drawing on influential models of sequential hypothesis-testing behavior, we show that such a bias (a) can lead to the collection of data that facilitates learning compared with reception learning and (b) can be more effective than observing the selections of another person. We then report a novel experiment based on a popular category learning paradigm that compares reception and selection learning. We additionally compare selection learners to a set of "yoked" participants who viewed the exact same sequence of observations under reception conditions. The results revealed systematic differences in performance that depended on the learner's role in collecting information and the abstract structure of the problem.
2011-01-01
Using an automated shuttlebox system, we conducted patch choice experiments with 32 bluegill sunfish (Lepomis macrochirus; 8–12 g) to test a behavioral energetics hypothesis of habitat choice. When patch temperature and food levels were held constant within patches but different between patches, we expected bluegill to choose patches that maximized growth based on the bioenergetic integration of food and temperature, as predicted by a bioenergetics model. Alternative hypotheses were that bluegill may choose patches based only on food (optimal foraging) or temperature (behavioral thermoregulation). The behavioral energetics hypothesis was not a good predictor of short-term (from minutes to weeks) patch choice by bluegill; the behavioral thermoregulation hypothesis was the best predictor. In the short term, food and temperature appeared to affect patch choice hierarchically; temperature was more important, although food can alter temperature preference during feeding periods. Over a 19-d experiment, mean temperatures occupied by fish offered low rations did decline as predicted by the behavioral energetics hypothesis, but the decline was less than 1.0 °C as opposed to a possible 5 °C decline. A short-term, bioenergetic response to food and temperature may be precluded by physiological costs of acclimation not considered explicitly in the behavioral energetics hypothesis.
The potential for increased power from combining P-values testing the same hypothesis.
Ganju, Jitendra; Julie Ma, Guoguang
2017-02-01
The conventional approach to hypothesis testing for formal inference is to prespecify a single test statistic thought to be optimal. However, we usually have more than one test statistic in mind for testing the null hypothesis of no treatment effect, but we do not know which one is the most powerful. Rather than relying on a single p-value, combining p-values from prespecified multiple test statistics can be used for inference. Combining functions include Fisher's combination test and the minimum p-value. With randomization-based tests, the increase in power over a single test or Simes's method can be remarkable. The versatility of the method is that it also applies when the number of covariates exceeds the number of observations. The increase in power is large enough to prefer combined p-values over a single p-value. The limitation is that the method does not provide an unbiased estimator of the treatment effect and does not apply to situations when the model includes a treatment by covariate interaction.
A Space Object Detection Algorithm using Fourier Domain Likelihood Ratio Test
NASA Astrophysics Data System (ADS)
Becker, D.; Cain, S.
Space object detection is of great importance in the highly dependent yet competitive and congested space domain. Detection algorithms employed play a crucial role in fulfilling the detection component in the situational awareness mission to detect, track, characterize and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator to make a detection decision at a single pixel point of a spatial image based on the assumption that the data follows a Gaussian distribution. This paper explores the potential for detection performance advantages when operating in the Fourier domain of long exposure images of small and/or dim space objects from ground based telescopes. A binary hypothesis test is developed based on the joint probability distribution function of the image under the hypothesis that an object is present and under the hypothesis that the image only contains background noise. The detection algorithm tests each pixel point of the Fourier transformed images to make the determination if an object is present based on the criteria threshold found in the likelihood ratio test. Using simulated data, the performance of the Fourier domain detection algorithm is compared to the current algorithm used in space situational awareness applications to evaluate its value.
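The per-pixel decision described above can be sketched as a Gaussian matched-filter log-likelihood ratio whose correlation sum is evaluated in the Fourier domain via Parseval's theorem. The PSF shape, image size, and noise level below are illustrative assumptions, not the paper's telescope model:

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_matched_filter_llr(image, template, sigma):
    """Log-likelihood ratio for 'template present' vs 'noise only' under
    i.i.d. Gaussian noise, with the correlation evaluated in the Fourier
    domain (by Parseval, the sum below equals sum(image * template))."""
    X = np.fft.fft2(image)
    S = np.fft.fft2(template)
    corr = np.real(np.sum(X * np.conj(S))) / X.size
    return corr / sigma**2 - np.sum(template**2) / (2 * sigma**2)

# hypothetical 32x32 scene: a dim Gaussian PSF plus background noise
n, sigma = 32, 1.0
yy, xx = np.mgrid[:n, :n]
psf = 5.0 * np.exp(-((xx - 16)**2 + (yy - 16)**2) / (2 * 2.0**2))
noise_only = sigma * rng.standard_normal((n, n))
with_object = psf + sigma * rng.standard_normal((n, n))

llr0 = fourier_matched_filter_llr(noise_only, psf, sigma)   # below threshold
llr1 = fourier_matched_filter_llr(with_object, psf, sigma)  # above threshold
```

A detection is declared when the ratio exceeds a threshold chosen for the desired false-alarm rate; the Fourier-domain form also lets the same products feed a per-frequency test as the paper develops.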
D’Onofrio, Brian M.; Class, Quetzal A.; Lahey, Benjamin B.; Larsson, Henrik
2014-01-01
The Developmental Origin of Health and Disease (DOHaD) hypothesis is a broad theoretical framework that emphasizes how early risk factors have a causal influence on psychopathology. Researchers have raised concerns about the causal interpretation of statistical associations between early risk factors and later psychopathology because most existing studies have been unable to rule out the possibility of environmental and genetic confounding. In this paper we illustrate how family-based quasi-experimental designs can test the DOHaD hypothesis by ruling out alternative hypotheses. We review the logic underlying sibling-comparison, co-twin control, offspring of siblings/twins, adoption, and in vitro fertilization designs. We then present results from studies using these designs focused on broad indices of fetal development (low birth weight and gestational age) and a particular teratogen, smoking during pregnancy. The results provide mixed support for the DOHaD hypothesis for psychopathology, illustrating the critical need to use design features that rule out unmeasured confounding. PMID:25364377
Testing an earthquake prediction algorithm
Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.
1997-01-01
A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
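The random-assignment null hypothesis can be illustrated with a simple binomial calculation: if each earthquake independently falls inside an alarmed region with some fixed probability, the chance of matching the algorithm's score follows a binomial tail. The alarm fraction of 0.45 below is a hypothetical value for illustration, not the coverage of the actual M8 alarms:

```python
import math

def p_at_least(k_success, n_trials, alarm_fraction):
    """Probability that randomly assigned alarms (each earthquake lands in
    an alarmed region independently with probability alarm_fraction)
    score at least k_success hits out of n_trials."""
    return sum(
        math.comb(n_trials, i)
        * alarm_fraction**i * (1 - alarm_fraction)**(n_trials - i)
        for i in range(k_success, n_trials + 1)
    )

# hypothetical alarm fraction of 0.45; chance random alarms match the
# retroactive 8-of-10 score
p_value = p_at_least(8, 10, 0.45)
```

A small tail probability means a score like 8 of 10 is unlikely to arise from randomly placed alarms with the same space-time coverage.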
NASA Astrophysics Data System (ADS)
Menne, Matthew J.; Williams, Claude N., Jr.
2005-10-01
An evaluation of three hypothesis test statistics that are commonly used in the detection of undocumented changepoints is described. The goal of the evaluation was to determine whether the use of multiple tests could improve undocumented, artificial changepoint detection skill in climate series. The use of successive hypothesis testing is compared to optimal approaches, both of which are designed for situations in which multiple undocumented changepoints may be present. In addition, the importance of the form of the composite climate reference series is evaluated, particularly with regard to the impact of undocumented changepoints in the various component series that are used to calculate the composite. In a comparison of single test changepoint detection skill, the composite reference series formulation is shown to be less important than the choice of the hypothesis test statistic, provided that the composite is calculated from serially complete and homogeneous component series. However, not all of the evaluated composite series are equally susceptible to the presence of changepoints in their components, which may be erroneously attributed to the target series. Moreover, a reference formulation that is based on the averaging of the first-difference component series is susceptible to random walks when the composition of the component series changes through time (e.g., values are missing), and its use is, therefore, not recommended. When more than one test is required to reject the null hypothesis of no changepoint, the number of detected changepoints is reduced proportionately less than the number of false alarms in a wide variety of Monte Carlo simulations. Consequently, a consensus of hypothesis tests appears to improve undocumented changepoint detection skill, especially when reference series homogeneity is violated.
A consensus of successive hypothesis tests using a semihierarchic splitting algorithm also compares favorably to optimal solutions, even when changepoints are not hierarchic.
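A minimal sketch of the consensus idea, assuming two common single-changepoint statistics (a maximal two-sample t and an SNHT-type statistic) with 95th-percentile critical values simulated under the no-changepoint null; this illustrates the consensus rule, not the paper's exact test battery:

```python
import numpy as np

rng = np.random.default_rng(1)

def max_t_stat(x):
    """Maximal two-sample t statistic over candidate changepoints."""
    n, best = len(x), 0.0
    for k in range(5, n - 5):  # keep a few points in each segment
        a, b = x[:k], x[k:]
        s = np.sqrt(a.var(ddof=1) / k + b.var(ddof=1) / (n - k))
        best = max(best, abs(a.mean() - b.mean()) / s)
    return best

def snht_stat(x):
    """Standard normal homogeneity test statistic (Alexandersson-type)."""
    n, best = len(x), 0.0
    z = (x - x.mean()) / x.std(ddof=1)
    for k in range(5, n - 5):
        best = max(best, k * z[:k].mean()**2 + (n - k) * z[k:].mean()**2)
    return best

def consensus_detect(x, n_sim=500):
    """Reject 'no changepoint' only if BOTH tests exceed their simulated
    null critical values (the consensus rule trades a small loss in
    detections for a larger drop in false alarms)."""
    n = len(x)
    null_t = [max_t_stat(rng.standard_normal(n)) for _ in range(n_sim)]
    null_s = [snht_stat(rng.standard_normal(n)) for _ in range(n_sim)]
    crit_t = np.quantile(null_t, 0.95)
    crit_s = np.quantile(null_s, 0.95)
    return max_t_stat(x) > crit_t and snht_stat(x) > crit_s

# simulated series with an artificial 1.5-sigma step at midpoint
series = np.concatenate([rng.normal(0, 1, 60), rng.normal(1.5, 1, 60)])
detected = consensus_detect(series)
```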
Combination of Interventions Can Change Students' Epistemological Beliefs
ERIC Educational Resources Information Center
Kalman, Calvin S.; Sobhanzadeh, Mandana; Thompson, Robert; Ibrahim, Ahmed; Wang, Xihui
2015-01-01
This study was based on the hypothesis that students' epistemological beliefs could become more expertlike with a combination of appropriate instructional activities: (i) preclass reading with metacognitive reflection, and (ii) in-class active learning that produces cognitive dissonance. This hypothesis was tested through a five-year study…
The Harm Done to Reproducibility by the Culture of Null Hypothesis Significance Testing.
Lash, Timothy L
2017-09-15
In the last few years, stakeholders in the scientific community have raised alarms about a perceived lack of reproducibility of scientific results. In reaction, guidelines for journals have been promulgated, and grant applicants have been asked to address the rigor and reproducibility of their proposed projects. Neither solution addresses a primary culprit, which is the culture of null hypothesis significance testing that dominates statistical analysis and inference. In an innovative research enterprise, selection of results for further evaluation based on null hypothesis significance testing is doomed to yield a low proportion of reproducible results and a high proportion of effects that are initially overestimated. In addition, the culture of null hypothesis significance testing discourages quantitative adjustments to account for systematic errors and quantitative incorporation of prior information. These strategies would otherwise improve reproducibility and have not been previously proposed in the widely cited literature on this topic. Without discarding the culture of null hypothesis significance testing and implementing these alternative methods for statistical analysis and inference, all other strategies for improving reproducibility will yield marginal gains at best. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Li, Fuhong; Cao, Bihua; Luo, Yuejia; Lei, Yi; Li, Hong
2013-02-01
Functional magnetic resonance imaging (fMRI) was used to examine differences in brain activation that occur when a person receives the different outcomes of hypothesis testing (HT). Participants were provided with a series of images of batteries and were asked to learn a rule governing what kinds of batteries were charged. Within each trial, the first two charged batteries were sequentially displayed, and participants would generate a preliminary hypothesis based on the perceptual comparison. Next, a third battery that served to strengthen, reject, or was irrelevant to the preliminary hypothesis was displayed. The fMRI results revealed that (1) no significant differences in brain activation were found between the 2 hypothesis-maintain conditions (i.e., strengthen and irrelevant conditions); and (2) compared with the hypothesis-maintain conditions, the hypothesis-reject condition activated the left medial frontal cortex, bilateral putamen, left parietal cortex, and right cerebellum. These findings are discussed in terms of the neural correlates of the subcomponents of HT and working memory manipulation. Copyright © 2012 Elsevier Inc. All rights reserved.
A default Bayesian hypothesis test for mediation.
Nuijten, Michèle B; Wetzels, Ruud; Matzke, Dora; Dolan, Conor V; Wagenmakers, Eric-Jan
2015-03-01
In order to quantify the relationship between multiple variables, researchers often carry out a mediation analysis. In such an analysis, a mediator (e.g., knowledge of a healthy diet) transmits the effect from an independent variable (e.g., classroom instruction on a healthy diet) to a dependent variable (e.g., consumption of fruits and vegetables). Almost all mediation analyses in psychology use frequentist estimation and hypothesis-testing techniques. A recent exception is Yuan and MacKinnon (Psychological Methods, 14, 301-322, 2009), who outlined a Bayesian parameter estimation procedure for mediation analysis. Here we complete the Bayesian alternative to frequentist mediation analysis by specifying a default Bayesian hypothesis test based on the Jeffreys-Zellner-Siow approach. We further extend this default Bayesian test by allowing a comparison to directional or one-sided alternatives, using Markov chain Monte Carlo techniques implemented in JAGS. All Bayesian tests are implemented in the R package BayesMed (Nuijten, Wetzels, Matzke, Dolan, & Wagenmakers, 2014).
Investigating prior probabilities in a multiple hypothesis test for use in space domain awareness
NASA Astrophysics Data System (ADS)
Hardy, Tyler J.; Cain, Stephen C.
2016-05-01
The goal of this research effort is to improve Space Domain Awareness (SDA) capabilities of current telescope systems through improved detection algorithms. Ground-based optical SDA telescopes are often spatially under-sampled, or aliased. This fact negatively impacts the detection performance of traditionally proposed binary and correlation-based detection algorithms. A Multiple Hypothesis Test (MHT) algorithm has been previously developed to mitigate the effects of spatial aliasing. This is done by testing potential Resident Space Objects (RSOs) against several sub-pixel shifted Point Spread Functions (PSFs). An MHT has been shown to increase detection performance for the same false alarm rate. In this paper, the assumption of a priori probability used in an MHT algorithm is investigated. First, an analysis of the pixel decision space is completed to determine alternate hypothesis prior probabilities. These probabilities are then implemented in an MHT algorithm, which is tested against previous MHT algorithms using simulated RSO data. Results are reported with Receiver Operating Characteristic (ROC) curves and probability of detection, Pd, analysis.
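A minimal sketch of an MHT decision rule under assumed Gaussian noise: each alternative hypothesis is a sub-pixel-shifted PSF with an assigned prior, H0 is noise only, and the maximum a posteriori hypothesis wins. The PSF model, shift grid, and equal-split priors are illustrative assumptions, not the paper's derived priors:

```python
import numpy as np

rng = np.random.default_rng(2)

def shifted_psf(n, dx, dy, width=1.0, amp=4.0):
    """Gaussian PSF centered at a sub-pixel offset from the patch center."""
    yy, xx = np.mgrid[:n, :n]
    return amp * np.exp(-(((xx - n / 2 - dx)**2 + (yy - n / 2 - dy)**2)
                          / (2 * width**2)))

def mht_detect(patch, sigma, priors):
    """MAP decision among H0 (noise only, key None) and sub-pixel-shifted
    PSF hypotheses; 'priors' maps (dx, dy) -> prior probability, with
    the remaining prior mass on H0."""
    n = patch.shape[0]
    log_post = {None: np.log(1 - sum(priors.values()))
                      - np.sum(patch**2) / (2 * sigma**2)}
    for (dx, dy), pr in priors.items():
        s = shifted_psf(n, dx, dy)
        log_post[(dx, dy)] = (np.log(pr)
                              - np.sum((patch - s)**2) / (2 * sigma**2))
    return max(log_post, key=log_post.get)  # None means "no object"

# equal split of half the prior mass over a 3x3 grid of sub-pixel shifts
priors = {(dx, dy): 0.5 / 9
          for dx in (-0.5, 0.0, 0.5) for dy in (-0.5, 0.0, 0.5)}
sigma, n = 1.0, 9
noise = sigma * rng.standard_normal((n, n))
target = shifted_psf(n, 0.0, 0.5) + sigma * rng.standard_normal((n, n))

decision_noise = mht_detect(noise, sigma, priors)    # expect H0
decision_object = mht_detect(target, sigma, priors)  # expect some shift
```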
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, P.; Seth, D.L.; Ray, A.K.
A detailed and systematic study of the nature of the discretization error associated with the upwind finite-difference method is presented. A basic model problem has been identified and, based upon the results for this problem, a basic hypothesis regarding the accuracy of the computational solution of the Spencer-Lewis equation is formulated. The basic hypothesis is then tested under various systematic single complexifications of the basic model problem. The results of these tests provide the framework for the refined hypothesis presented in the concluding comments. 27 refs., 3 figs., 14 tabs.
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2010-01-01
This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…
Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie
2013-01-01
Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is modest. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
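The permutation-based minimum p-value approach can be sketched as follows, with two illustrative candidate statistics (mean and median differences) standing in for the trial statistics the authors consider; the min-p reference distribution comes from the same permutations, which is what keeps the overall type I error at its nominal level:

```python
import numpy as np

rng = np.random.default_rng(3)

def min_p_test(x, y, n_perm=999):
    """Permutation test of 'no treatment effect' based on the minimum of
    the p-values from two candidate statistics."""
    pooled = np.concatenate([x, y])
    nx = len(x)

    def stats_pair(a, b):
        return (abs(a.mean() - b.mean()),
                abs(np.median(a) - np.median(b)))

    obs = stats_pair(x, y)
    perms = np.array([stats_pair(p[:nx], p[nx:])
                      for p in (rng.permutation(pooled)
                                for _ in range(n_perm))])

    # permutation p-value of every row (observed last), per statistic,
    # so the null distribution of the min-p statistic is also available
    all_vals = np.vstack([perms, obs])          # shape (n_perm + 1, 2)
    counts = (all_vals[None, :, :] >= all_vals[:, None, :]).sum(axis=1)
    pvals = counts / (n_perm + 1)
    min_p = pvals.min(axis=1)
    return (min_p <= min_p[-1]).mean()          # adjusted p-value

x = rng.normal(0.0, 1.0, 30)   # control arm (simulated)
y = rng.normal(1.5, 1.0, 30)   # treated arm with a real effect
p_combined = min_p_test(x, y)
```

Because the minimum is referred to its own permutation distribution, adding a low-powered candidate statistic costs little, matching the abstract's claim.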
Yang, Xiaowei; Nie, Kun
2008-03-15
Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.
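The three-step procedure can be sketched for a two-group comparison using Fan's adaptive Neyman statistic on standardized Fourier coefficients. The two-sample z statistics are treated as approximately standard normal under the null, and the data are simulated, so this is an illustration of the pipeline rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

def adaptive_neyman(z):
    """Fan's adaptive Neyman statistic for standardized coefficients z:
    max over m of the partial sums of (z_k^2 - 1), scaled by sqrt(2m)."""
    m = np.arange(1, len(z) + 1)
    return np.max(np.cumsum(z**2 - 1) / np.sqrt(2 * m))

def fourier_group_test(curves_a, curves_b, n_coef=20, n_sim=2000):
    """Step 1: FFT each sampled curve. Step 2: fit the group contrast per
    Fourier coefficient (two-sample z statistics). Step 3: test the
    coefficients jointly with the adaptive Neyman statistic, with its
    null distribution approximated by simulation."""
    fa = np.fft.rfft(curves_a, axis=1)[:, 1:n_coef + 1]
    fb = np.fft.rfft(curves_b, axis=1)[:, 1:n_coef + 1]
    z = []
    for ca, cb in ((fa.real, fb.real), (fa.imag, fb.imag)):
        se = np.sqrt(ca.var(axis=0, ddof=1) / len(ca)
                     + cb.var(axis=0, ddof=1) / len(cb))
        z.append((ca.mean(axis=0) - cb.mean(axis=0)) / se)
    t_obs = adaptive_neyman(np.concatenate(z))
    null = np.array([adaptive_neyman(rng.standard_normal(2 * n_coef))
                     for _ in range(n_sim)])
    return t_obs, float(np.mean(null >= t_obs))

# hypothetical repeated measures: 25 subjects per group, 64 time points;
# group B carries a smooth sinusoidal treatment effect
t = np.arange(64) / 64
a = rng.standard_normal((25, 64))
b = rng.standard_normal((25, 64)) + 1.2 * np.sin(2 * np.pi * t)
t_an, p_value = fourier_group_test(a, b)
```

A wavelet transform with a thresholding statistic would slot into the same three steps in place of the FFT and adaptive Neyman pair.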
On resilience studies of system detection and recovery techniques against stealthy insider attacks
NASA Astrophysics Data System (ADS)
Wei, Sixiao; Zhang, Hanlin; Chen, Genshe; Shen, Dan; Yu, Wei; Pham, Khanh D.; Blasch, Erik P.; Cruz, Jose B.
2016-05-01
With the explosive growth of network technologies, insider attacks have become a major concern to business operations that largely rely on computer networks. To better detect insider attacks that marginally manipulate network traffic over time, and to recover the system from attacks, in this paper we implement a temporal-based detection scheme using the sequential hypothesis testing technique. Two hypothetical states are considered: the null hypothesis that the collected information is from benign historical traffic and the alternative hypothesis that the network is under attack. The objective of such a detection scheme is to recognize the change within the shortest time by comparing the two defined hypotheses. In addition, once the attack is detected, a server migration-based system recovery scheme can be triggered to recover the system to the state prior to the attack. To understand the mitigation of insider attacks, a multi-functional web display of the detection analysis was developed for real-time analytics. Experiments using real-world traffic traces evaluate the effectiveness of the Detection System and Recovery (DeSyAR) scheme. The evaluation data validate that the detection scheme based on sequential hypothesis testing and the server migration-based system recovery scheme perform well in effectively detecting insider attacks and recovering the system under attack.
Bayesian models based on test statistics for multiple hypothesis testing problems.
Ji, Yuan; Lu, Yiling; Mills, Gordon B
2008-04-01
We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. Finally, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
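A minimal sketch of the idea of modeling test statistics directly: a two-component Gaussian mixture yields a posterior null probability per statistic, and rejections are chosen so the average posterior null probability among them (a Bayesian FDR) stays below a target. The mixture parameters are assumed known here for illustration, whereas a full treatment would estimate them from the data:

```python
import numpy as np

rng = np.random.default_rng(6)

def bayesian_fdr_select(z, pi0, mu1, sd1, q=0.10):
    """Posterior null probabilities under a two-component mixture on the
    test statistics, then the largest rejection set whose average
    posterior null probability (Bayesian FDR) stays below q."""
    norm = lambda x, m, s: (np.exp(-(x - m)**2 / (2 * s**2))
                            / (s * np.sqrt(2 * np.pi)))
    f0 = pi0 * norm(z, 0.0, 1.0)         # null component
    f1 = (1 - pi0) * norm(z, mu1, sd1)   # alternative component
    p_null = f0 / (f0 + f1)              # local fdr per statistic
    order = np.argsort(p_null)
    running = np.cumsum(p_null[order]) / np.arange(1, len(z) + 1)
    n_reject = int(np.sum(running <= q))
    rejected = np.zeros(len(z), dtype=bool)
    rejected[order[:n_reject]] = True
    return rejected, p_null

# simulated statistics: 90% null N(0,1), 10% alternative N(3,1)
is_alt = rng.random(2000) < 0.10
z = np.where(is_alt, rng.normal(3.0, 1.0, 2000),
             rng.normal(0.0, 1.0, 2000))
rejected, p_null = bayesian_fdr_select(z, pi0=0.90, mu1=3.0, sd1=1.0)
realized_fdr = float(np.mean(~is_alt[rejected]))
```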
Resiliency in the Face of Adversity: A Short Longitudinal Test of the Trait Hypothesis.
Karaırmak, Özlem; Figley, Charles
2017-01-01
Resilience represents coping with adversity and is in line with a more positive paradigm for viewing responses to adversity. Most research has focused on resilience as coping, a state-based response to adversity. However, a competing hypothesis views resilience or resiliency as a trait that exists across time and types of adversity. We tested undergraduates enrolled in social work classes at a large southern university at two time periods during a single semester using measures of adversity, positive and negative affect, and trait-based resiliency. Consistent with trait-based resiliency, and in contrast to state-based resilience, resiliency scores were not strongly correlated with adversity at either testing point but were correlated with positive affect, and resiliency scores remained the same over time despite variations in adversity. There was no gender or ethnic group difference in resilience scores. Black/African Americans reported significantly less negative affect and more positive affect than White/Caucasians.
Bayes factors for testing inequality constrained hypotheses: Issues with prior specification.
Mulder, Joris
2014-02-01
Several issues are discussed when testing inequality constrained hypotheses using a Bayesian approach. First, the complexity (or size) of the inequality constrained parameter spaces can be ignored. This is the case when using the posterior probability that the inequality constraints of a hypothesis hold, Bayes factors based on non-informative improper priors, and partial Bayes factors based on posterior priors. Second, the Bayes factor may not be invariant for linear one-to-one transformations of the data. This can be observed when using balanced priors which are centred on the boundary of the constrained parameter space with a diagonal covariance structure. Third, the information paradox can be observed. When testing inequality constrained hypotheses, the information paradox occurs when the Bayes factor of an inequality constrained hypothesis against its complement converges to a constant as the evidence for the first hypothesis accumulates while keeping the sample size fixed. This paradox occurs when using Zellner's g prior as a result of too much prior shrinkage. Therefore, two new methods are proposed that avoid these issues. First, partial Bayes factors are proposed based on transformed minimal training samples. These training samples result in posterior priors that are centred on the boundary of the constrained parameter space with the same covariance structure as in the sample. Second, a g prior approach is proposed by letting g go to infinity. This is possible because the Jeffreys-Lindley paradox is not an issue when testing inequality constrained hypotheses. A simulation study indicated that the Bayes factor based on this g prior approach converges fastest to the true inequality constrained hypothesis. © 2013 The British Psychological Society.
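The encompassing-prior construction behind such Bayes factors can be sketched for a single inequality constraint: the Bayes factor of the constrained hypothesis against the unconstrained model is the posterior probability of the constraint divided by its prior probability (1/2 for a symmetric prior). The plug-in variance and normal prior below are simplifying assumptions; note that the sketch's upper bound of 2 is exactly the kind of convergence to a constant the abstract describes as the information paradox:

```python
import numpy as np

rng = np.random.default_rng(7)

def encompassing_bf(x, y, prior_sd=10.0, n_draws=100_000):
    """Bayes factor of H1: mu_x > mu_y against the encompassing model,
    estimated as posterior probability of the constraint over its prior
    probability (1/2 under the symmetric prior)."""
    def posterior_draws(data):
        # normal likelihood with plug-in variance, N(0, prior_sd^2) prior
        n, s2 = len(data), data.var(ddof=1)
        prec = n / s2 + 1.0 / prior_sd**2
        mean = (n / s2) * data.mean() / prec
        return rng.normal(mean, 1.0 / np.sqrt(prec), n_draws)
    post_prob = np.mean(posterior_draws(x) > posterior_draws(y))
    return post_prob / 0.5

x = rng.normal(0.8, 1.0, 40)   # group with the larger true mean
y = rng.normal(0.0, 1.0, 40)
bf = encompassing_bf(x, y)     # > 1 favors the constraint mu_x > mu_y
```

However strong the evidence for the constraint becomes, the posterior probability cannot exceed 1, so the Bayes factor cannot exceed 2; this is why the paper's proposed priors matter for inequality-constrained testing.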
Hypothesis testing and earthquake prediction.
Jackson, D D
1996-04-30
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
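Test (i), comparing the number of actual earthquakes to the number predicted, can be sketched as an exact Poisson consistency check on the forecast rate; the forecast of 5 expected events below is hypothetical:

```python
import math

def number_test(n_observed, rate_forecast):
    """Two-sided Poisson tail probability that a forecast expecting
    'rate_forecast' events yields a count as extreme as the one
    observed (a self-consistency test of the forecast)."""
    pmf = lambda k: (math.exp(-rate_forecast) * rate_forecast**k
                     / math.factorial(k))
    cdf = sum(pmf(k) for k in range(n_observed + 1))   # P(N <= n)
    upper = 1.0 - cdf + pmf(n_observed)                # P(N >= n)
    return 2.0 * min(cdf, upper, 0.5)

# hypothetical forecast: 5 events expected over the test period
p_consistent = number_test(6, 5.0)    # 6 observed: no evidence against
p_rejected = number_test(14, 5.0)     # 14 observed: forecast rejected
```

Tests (ii) and (iii) would instead score each observed event's location-magnitude cell under the specified rate density, which requires the full conditional rate density rather than just its integral.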
Gollwitzer, Mario
2004-12-01
According to psychoanalytic theory, punitiveness is based on a projection of one's own immoral desires and the moral conflict they cause (scapegoat hypothesis). This hypothesis implies that transgressors impose harsher punishment onto comparable wrongdoers. This effect should be amplified by the strength of decision conflict. An alternative hypothesis based on blame-avoidance motivation is derived. Participants (N = 291) were asked to indicate whether they would commit an unlawful act in a moral temptation situation and how conflicted they felt in making their decision. Later, they had to judge convicts in criminal cases that were similar to the previous temptation situations. Authoritarianism was assessed as a covariate. In contrast to the scapegoat but consistent with the blame-avoidance interpretation, transgressors were more lenient than nontransgressors. Authoritarianism had main effects on punitiveness. Decision conflict was neither directly nor indirectly related to punitiveness. The findings challenge the validity of the scapegoat hypothesis.
Morehead, Kayla; Dunlosky, John; Rawson, Katherine A; Bishop, Melissa; Pyc, Mary A
2018-04-01
When study is spaced across sessions (versus massed within a single session), final performance is greater after spacing. This spacing effect may have multiple causes, and according to the mediator hypothesis, part of the effect can be explained by the use of mediator-based strategies. This hypothesis proposes that when study is spaced across sessions, rather than massed within a session, the mediators generated will be longer lasting, and hence more mediators will be available to support criterion recall. In two experiments, participants were randomly assigned to study paired associates using either a spaced or massed schedule. They reported strategy use for each item during study trials and during the final test. Consistent with the mediator hypothesis, participants who had spaced (as compared to massed) practice reported using more mediators on the final test. This use of effective mediators also statistically accounted for some, but not all, of the spacing effect on final performance.
ERIC Educational Resources Information Center
Coene, Martine; Schauwers, Karen; Gillis, Steven; Rooryck, Johan; Govaerts, Paul J.
2011-01-01
Recent neurobiological studies have advanced the hypothesis that language development is not continuously plastic but is governed by biological constraints that may be modified by experience within a particular time window. This hypothesis is tested based on spontaneous speech data from deaf cochlear-implanted (CI) children with access to…
Hypothesis tests for the detection of constant speed radiation moving sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir
2015-07-01
Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, which are inefficient under too low signal to noise ratios, temporal correlation algorithms have been introduced. Hypothesis test methods based on empirically estimated means and variances of the signals delivered by the different channels have shown a significant gain in the tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive background, and a vehicle source carrier under the same respectively high and low count rate radioactive background, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm, while guaranteeing the stability of its optimization parameter for signal to noise ratio variations between 2 and 0.8.
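The Poisson-based idea can be sketched as an exact tail test on the summed counts from the channels a moving source would correlate across, in contrast to a threshold built from empirically estimated means and variances. The channel counts and background rate below are illustrative, not the paper's benchmark configurations:

```python
import math

def poisson_sf(k, lam):
    """P(N >= k) for N ~ Poisson(lam), computed from the exact CDF."""
    cdf = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

def poisson_alarm(counts, background_rate, false_alarm=1e-3):
    """Exact Poisson hypothesis test on the summed channel counts: alarm
    when the sum is improbably large under the background-only
    hypothesis (sums of Poisson backgrounds are again Poisson)."""
    total = sum(counts)
    lam = background_rate * len(counts)
    return poisson_sf(total, lam) < false_alarm

# illustration: three correlated channels, background 20 counts each
quiet = [18, 22, 19]     # background only
source = [30, 34, 29]    # a passing source adds ~10 counts per channel
alarm_quiet = poisson_alarm(quiet, 20.0)     # no alarm
alarm_source = poisson_alarm(source, 20.0)   # alarm
```

Because the threshold comes from the exact counting distribution rather than estimated moments, its false alarm rate does not drift as the signal to noise ratio changes, which is the stability property the abstract emphasizes.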
Short- and long-term rhythmic interventions: perspectives for language rehabilitation.
Schön, Daniele; Tillmann, Barbara
2015-03-01
This paper brings together different perspectives on the investigation and understanding of temporal processing and temporal expectations. We aim to bridge different temporal deficit hypotheses in dyslexia, dysphasia, or deafness in a larger framework, taking into account multiple nested temporal scales. We present data testing the hypothesis that temporal attention can be influenced by external rhythmic auditory stimulation (i.e., musical rhythm) and benefits subsequent language processing, including syntax processing and speech production. We also present data testing the hypothesis that phonological awareness can be influenced by several months of musical training and, more particularly, rhythmic training, which in turn improves reading skills. Together, our data support the hypothesis of a causal role of rhythm-based processing for language processing and acquisition. These results open new avenues for music-based remediation of language and hearing impairment. © 2015 New York Academy of Sciences.
Kilborn, Joshua P; Jones, David L; Peebles, Ernst B; Naar, David F
2017-04-01
Clustering data continues to be a highly active area of data analysis, and resemblance profiles are being incorporated into ecological methodologies as a hypothesis testing-based approach to clustering multivariate data. However, these new clustering techniques have not been rigorously tested to determine the performance variability based on the algorithm's assumptions or any underlying data structures. Here, we use simulation studies to estimate the statistical error rates for the hypothesis test for multivariate structure based on dissimilarity profiles (DISPROF). We concurrently tested a widely used algorithm that employs the unweighted pair group method with arithmetic mean (UPGMA) to estimate the proficiency of clustering with DISPROF as a decision criterion. We simulated unstructured multivariate data from different probability distributions with increasing numbers of objects and descriptors, and grouped data with increasing overlap, overdispersion for ecological data, and correlation among descriptors within groups. Using simulated data, we measured the resolution and correspondence of clustering solutions achieved by DISPROF with UPGMA against the reference grouping partitions used to simulate the structured test datasets. Our results highlight the dynamic interactions between dataset dimensionality, group overlap, and the properties of the descriptors within a group (i.e., overdispersion or correlation structure) that are relevant to resemblance profiles as a clustering criterion for multivariate data. These methods are particularly useful for multivariate ecological datasets that benefit from distance-based statistical analyses. We propose guidelines for using DISPROF as a clustering decision tool that will help future users avoid potential pitfalls during the application of methods and the interpretation of results.
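The DISPROF idea of comparing an observed resemblance (dissimilarity) profile against profiles computed from permuted data can be sketched in a few lines. Everything below is a hypothetical, stdlib-only illustration, not the authors' code: the data, the Euclidean dissimilarity, the column-shuffling null model, and the pi-style statistic are simplified stand-ins.

```python
import itertools
import random

random.seed(1)

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def profile(data):
    """Sorted vector of all pairwise dissimilarities (the resemblance profile)."""
    return sorted(euclid(a, b) for a, b in itertools.combinations(data, 2))

# Hypothetical structured data: two well-separated groups of 2-D points.
data = ([[random.gauss(0, 0.3), random.gauss(0, 0.3)] for _ in range(10)] +
        [[random.gauss(3, 0.3), random.gauss(3, 0.3)] for _ in range(10)])
obs = profile(data)

def column_shuffled(data):
    """Null model: shuffle each descriptor (column) independently, which
    destroys multivariate structure but preserves marginal distributions."""
    cols = [list(col) for col in zip(*data)]
    for col in cols:
        random.shuffle(col)
    return [list(row) for row in zip(*cols)]

perm_profiles = [profile(column_shuffled(data)) for _ in range(200)]
mean_profile = [sum(p[i] for p in perm_profiles) / len(perm_profiles)
                for i in range(len(obs))]

def pi_stat(prof):
    """DISPROF-style pi: total deviation from the mean permuted profile."""
    return sum(abs(x - m) for x, m in zip(prof, mean_profile))

pi_obs = pi_stat(obs)
# NOTE: a full DISPROF uses an independent second set of permutations here.
p = sum(pi_stat(pp) >= pi_obs for pp in perm_profiles) / len(perm_profiles)
print(f"pi = {pi_obs:.2f}, P = {p:.3f}")
```

With clearly grouped data the observed profile deviates far more than any permuted profile, so the test rejects the no-structure null.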
Explorations in statistics: hypothesis tests and P values.
Curran-Everett, Douglas
2009-06-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of Explorations in Statistics delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what we observe in the experiment to what we expect to see if the null hypothesis is true. The P value associated with the magnitude of that test statistic answers this question: if the null hypothesis is true, what proportion of possible values of the test statistic are at least as extreme as the one I got? Although statisticians continue to stress the limitations of hypothesis tests, there are two realities we must acknowledge: hypothesis tests are ingrained within science, and the simple test of a null hypothesis can be useful. As a result, it behooves us to explore the notions of hypothesis tests, test statistics, and P values.
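The definition of the P value quoted above ("what proportion of possible values of the test statistic are at least as extreme as the one I got?") can be made concrete with a small simulation. This is a hypothetical stdlib-only sketch; the data and the choice of the sample mean as the test statistic are illustrative assumptions.

```python
import random

random.seed(42)

# Hypothetical example: H0 says each measurement has mean 0 and SD 1.
observed = [0.6, -0.2, 1.1, 0.4, 0.9, -0.1, 0.7, 0.3, 1.2, 0.5,
            0.0, 0.8, -0.3, 0.6, 1.0, 0.2, 0.4, 0.9, -0.4, 0.7]
n = len(observed)
t_obs = sum(observed) / n  # test statistic: the sample mean

# Simulate the null distribution of the test statistic: the values we
# expect to see for the mean if H0 (mean 0, SD 1) is actually true.
null_means = []
for _ in range(10_000):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    null_means.append(sum(sample) / n)

# Two-sided P value: the proportion of null statistics at least as
# extreme (in absolute value) as the one we observed.
p_value = sum(abs(m) >= abs(t_obs) for m in null_means) / len(null_means)
print(f"observed mean = {t_obs:.3f}, P = {p_value:.4f}")
```

The simulated proportion plays exactly the role of the P value in the abstract's question: it compares what we observed to what H0 leads us to expect.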
Tissue interactions with nonionizing electromagnetic fields. Final report, March 1979-February 1986
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adey, W.R.; Bawin, S.M.; Byus, C.V.
1986-08-01
This report provides an overview of this research program focused on basic research in nervous system responses to electric fields at 60 Hz. The emphasis in this project was to determine the fundamental mechanisms underlying some phenomena of electric field interactions in neural systems. The five studies of the initial program were: tests of behavioral responses in the rat based upon the hypothesis that electric field detection might follow psychophysical rules known from prior research with light, sound and other stimuli; tests of electrophysiological responses to "normal" forms of stimulation in rat brain tissue exposed in vitro to electric fields, based on the hypothesis that the excitability of brain tissue might be affected by fields in the extracellular environment; tests of electrophysiological responses of spontaneously active pacemaker neurons of the Aplysia abdominal ganglion, based on the hypothesis that electric field interactions at the cell membrane might affect the balance among the several membrane-related processes that govern pacemaker activity; studies of mechanisms of low frequency electromagnetic field interactions with bone cells in the context of field therapy of ununited fractures; and manipulation of cell surface receptor proteins in studies of their mobility during EM field exposure.
Testing 40 Predictions from the Transtheoretical Model Again, with Confidence
ERIC Educational Resources Information Center
Velicer, Wayne F.; Brick, Leslie Ann D.; Fava, Joseph L.; Prochaska, James O.
2013-01-01
Testing Theory-based Quantitative Predictions (TTQP) represents an alternative to traditional Null Hypothesis Significance Testing (NHST) procedures and is more appropriate for theory testing. The theory generates explicit effect size predictions and these effect size estimates, with related confidence intervals, are used to test the predictions.…
Reiss, Philip T
2015-08-01
The "ten ironic rules for statistical reviewers" presented by Friston (2012) prompted a rebuttal by Lindquist et al. (2013), which was followed by a rejoinder by Friston (2013). A key issue left unresolved in this discussion is the use of cross-validation to test the significance of predictive analyses. This note discusses the role that cross-validation-based and related hypothesis tests have come to play in modern data analyses, in neuroimaging and other fields. It is shown that such tests need not be suboptimal and can fill otherwise-unmet inferential needs. Copyright © 2015 Elsevier Inc. All rights reserved.
Estimating equivalence with quantile regression
Cade, B.S.
2011-01-01
Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.
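The two one-sided tests (TOST) logic behind equivalence testing, and its confidence interval formulation, can be sketched as follows. The data, the equivalence margin delta, and the normal (z) approximation are all illustrative assumptions; a real analysis like the paper's would use quantile regression and t-based intervals rather than a mean comparison.

```python
from statistics import NormalDist, mean, stdev

# Hypothetical example: are mean arsenic concentrations at a remediated
# site equivalent to a reference site, within a margin of delta = 2 units?
site = [10.1, 11.3, 9.8, 10.6, 10.9, 11.1, 10.4, 10.2, 10.8, 10.5]
ref = [10.0, 10.7, 10.2, 10.9, 10.3, 10.6, 10.1, 10.4, 10.8, 10.2]
delta = 2.0

d = mean(site) - mean(ref)
# Standard error of the difference (equal n, z approximation; a real
# analysis would use t quantiles and check the variance assumption).
se = ((stdev(site) ** 2 + stdev(ref) ** 2) / len(site)) ** 0.5

# TOST: conclude equivalence only if BOTH one-sided nulls are rejected.
z_lower = (d + delta) / se      # H0: d <= -delta
z_upper = (delta - d) / se      # H0: d >= +delta
p_lower = 1 - NormalDist().cdf(z_lower)
p_upper = 1 - NormalDist().cdf(z_upper)
p_tost = max(p_lower, p_upper)

# Equivalent CI view: the (1 - 2*alpha) interval must lie inside (-delta, delta).
alpha = 0.05
zc = NormalDist().inv_cdf(1 - alpha)
ci = (d - zc * se, d + zc * se)
print(f"d = {d:.3f}, TOST P = {p_tost:.4g}, 90% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

Note the reversed burden of proof the abstract describes: the null here is inequivalence, so a small P value is evidence *for* equivalence.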
Introduction to Permutation and Resampling-Based Hypothesis Tests
ERIC Educational Resources Information Center
LaFleur, Bonnie J.; Greevy, Robert A.
2009-01-01
A resampling-based method of inference--permutation tests--is often used when distributional assumptions are questionable or unmet. Not only are these methods useful for obvious departures from parametric assumptions (e.g., normality) and small sample sizes, but they are also more robust than their parametric counterparts in the presence of…
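A minimal two-sample permutation test, of the kind the article introduces, might look like this. The data are hypothetical, and the difference in group means is just one possible choice of test statistic.

```python
import random

random.seed(7)

# Hypothetical two-group comparison with small samples, where parametric
# assumptions (e.g., normality) may be questionable.
treatment = [12.4, 15.1, 13.8, 16.2, 14.9, 15.7]
control = [11.2, 12.8, 10.9, 12.1, 11.7, 12.5]

obs_diff = sum(treatment) / len(treatment) - sum(control) / len(control)

# Permutation null: under H0 the group labels are exchangeable, so
# reshuffle the labels many times and recompute the statistic each time.
pooled = treatment + control
n_t = len(treatment)
count = 0
n_perm = 10_000
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = (sum(pooled[:n_t]) / n_t
            - sum(pooled[n_t:]) / (len(pooled) - n_t))
    if abs(diff) >= abs(obs_diff):
        count += 1

p_value = count / n_perm
print(f"observed diff = {obs_diff:.2f}, permutation P = {p_value:.4f}")
```

No distributional form is assumed: the null distribution is built entirely from relabelings of the observed data.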
A robust hypothesis test for the sensitive detection of constant speed radiation moving sources
NASA Astrophysics Data System (ADS)
Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir; Boudergui, Karim; Moline, Yoann; Sannié, Guillaume; Gameiro, Jordan; Normand, Stéphane; Méchin, Laurence
2015-09-01
Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Hypothesis testing methods based on empirically estimated mean and variance of the signals delivered by the different channels have shown significant gain in terms of a tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive backgrounds, and a vehicle source carrier under the same respectively high and low count rate radioactive backgrounds, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm. It also guarantees that the optimal coverage factor for this compromise remains stable regardless of signal-to-noise ratio variations between 2 and 0.8, therefore allowing the final user to parametrize the test with the sole prior knowledge of background amplitude.
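The advantage claimed for the Poisson-based test — the null distribution follows from the known background rate rather than from empirically estimated mean and variance — can be illustrated with a single-channel sketch. The background rate, observed count, and alarm threshold below are invented for illustration; the paper's actual test operates on temporal correlations across several channels.

```python
from math import exp, factorial

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam), by summing the pmf up to k - 1."""
    return 1.0 - sum(exp(-lam) * lam ** i / factorial(i) for i in range(k))

# Hypothetical single-channel screening: the background rate of a portal
# monitor channel is taken as known (8.0 counts per time window), so under
# H0 the registered count is Poisson(8.0) -- no empirical mean/variance
# estimation is needed, which is the advantage over the empirical test.
background = 8.0
observed_count = 20

p = poisson_sf(observed_count, background)
alarm = p < 1e-3  # threshold fixed by the false-alarm budget
print(f"P(X >= {observed_count} | Poisson({background})) = {p:.2e}, alarm = {alarm}")
```

Because the Poisson null is fully specified by the background amplitude, the alarm threshold maps directly onto a false alarm probability.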
Luo, Zhenhua; Tang, Songhua; Li, Chunwang; Fang, Hongxia; Hu, Huijian; Yang, Ji; Ding, Jingjing; Jiang, Zhigang
2012-01-01
Explaining species richness patterns is a central issue in biogeography and macroecology. Several hypotheses have been proposed to explain the mechanisms driving biodiversity patterns, but the causes of species richness gradients remain unclear. In this study, we aimed to explain the impacts of energy, environmental stability, and habitat heterogeneity factors on variation of vertebrate species richness (VSR), based on the VSR pattern in China, so as to test the energy hypothesis, the environmental stability hypothesis, and the habitat heterogeneity hypothesis. A dataset was compiled containing the distributions of 2,665 vertebrate species and eleven ecogeographic predictive variables in China. We grouped these variables into categories of energy, environmental stability, and habitat heterogeneity and transformed the data into 100 × 100 km quadrat systems. To test the three hypotheses, AIC-based model selection was carried out between VSR and the variables in each group and correlation analyses were conducted. There was a decreasing VSR gradient from the southeast to the northwest of China. Our results showed that energy explained 67.6% of the VSR variation, with the annual mean temperature as the main factor, which was followed by annual precipitation and NDVI. Environmental stability factors explained 69.1% of the VSR variation and both temperature annual range and precipitation seasonality had important contributions. By contrast, habitat heterogeneity variables explained only 26.3% of the VSR variation. Significantly positive correlations were detected among VSR, annual mean temperature, annual precipitation, and NDVI, whereas the relationship of VSR and temperature annual range was strongly negative. In addition, other variables showed moderate or ambiguous relations to VSR. The energy hypothesis and the environmental stability hypothesis were supported, whereas little support was found for the habitat heterogeneity hypothesis.
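The AIC-based model selection step can be illustrated with a toy comparison of an intercept-only model against a single-predictor model. The richness/temperature numbers and the `ols_aic` helper are hypothetical, and the sketch assumes a Gaussian likelihood, as is standard for least-squares AIC.

```python
from math import log, pi

def ols_aic(y, x_cols):
    """AIC of a least-squares fit with Gaussian errors. x_cols is a list
    of predictor columns; an intercept is added automatically."""
    n = len(y)
    cols = [[1.0] * n] + x_cols
    k = len(cols)
    # Normal equations (X'X) b = X'y, solved by Gaussian elimination.
    A = [[sum(ci[t] * cj[t] for t in range(n)) for cj in cols] for ci in cols]
    b = [sum(ci[t] * y[t] for t in range(n)) for ci in cols]
    for i in range(k):
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [ajm - f * aim for ajm, aim in zip(A[j], A[i])]
            b[j] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    rss = sum((y[t] - sum(beta[j] * cols[j][t] for j in range(k))) ** 2
              for t in range(n))
    ll = -0.5 * n * (log(2 * pi * rss / n) + 1)  # max Gaussian log-likelihood
    return 2 * (k + 1) - 2 * ll                  # +1 parameter for the variance

# Hypothetical quadrat data: does temperature predict richness better
# than an intercept-only null model?
temp = [2.0, 5.0, 8.0, 11.0, 14.0, 17.0, 20.0, 23.0]
richness = [12.0, 19.0, 24.0, 33.0, 38.0, 47.0, 52.0, 60.0]

aic_null = ols_aic(richness, [])
aic_temp = ols_aic(richness, [temp])
print(f"AIC(null) = {aic_null:.1f}, AIC(temperature) = {aic_temp:.1f}")
```

The model with the lower AIC is preferred; the gap quantifies how much the extra parameter is worth after penalizing model complexity.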
Dorn, Patricia L; de la Rúa, Nicholas M; Axen, Heather; Smith, Nicholas; Richards, Bethany R; Charabati, Jirias; Suarez, Julianne; Woods, Adrienne; Pessoa, Rafaela; Monroy, Carlota; Kilpatrick, C William; Stevens, Lori
2016-10-01
The widespread and diverse Triatoma dimidiata is the kissing bug species most important for Chagas disease transmission in Central America and a secondary vector in Mexico and northern South America. Its diversity may contribute to different Chagas disease prevalence in different localities and has led to conflicting systematic hypotheses describing various populations as subspecies or cryptic species. To resolve these conflicting hypotheses, we sequenced a nuclear (internal transcribed spacer 2, ITS-2) and mitochondrial gene (cytochrome b) from an extensive sampling of T. dimidiata across its geographic range. We evaluated the congruence of ITS-2 and cyt b phylogenies and tested the support for the previously proposed subspecies (inferred from ITS-2) by: (1) overlaying the ITS-2 subspecies assignments on a cyt b tree and, (2) assessing the statistical support for a cyt b topology constrained by the subspecies hypothesis. Unconstrained phylogenies inferred from ITS-2 and cyt b are congruent and reveal three clades including two putative cryptic species in addition to T. dimidiata sensu stricto. Neither the cyt b phylogeny nor hypothesis testing support the proposed subspecies inferred from ITS-2. Additionally, the two cryptic species are supported by phylogenies inferred from mitochondrially-encoded genes cytochrome c oxidase I and NADH dehydrogenase 4. In summary, our results reveal two cryptic species. Phylogenetic relationships indicate T. dimidiata sensu stricto is not subdivided into monophyletic clades consistent with subspecies. Based on increased support by hypothesis testing, we propose an updated systematic hypothesis for T. dimidiata based on extensive taxon sampling and analysis of both mitochondrial and nuclear genes. Copyright © 2016 Elsevier B.V. All rights reserved.
Feature Centrality and Property Induction
ERIC Educational Resources Information Center
Hadjichristidis, Constantinos; Sloman, Steven; Stevenson, Rosemary; Over, David
2004-01-01
A feature is central to a concept to the extent that other features depend on it. Four studies tested the hypothesis that people will project a feature from a base concept to a target concept to the extent that they believe the feature is central to the two concepts. This centrality hypothesis implies that feature projection is guided by a…
Lozano, José H
2016-02-01
Previous research aimed at testing the situational strength hypothesis suffers from serious limitations regarding the conceptualization of strength. In order to overcome these limitations, the present study attempts to test the situational strength hypothesis based on the operationalization of strength as reinforcement contingencies. One dispositional factor of proven effect on cooperative behavior, social value orientation (SVO), was used as a predictor of behavior in four social dilemmas with varying degree of situational strength. The moderating role of incentive condition (hypothetical vs. real) on the relationship between SVO and behavior was also tested. One hundred undergraduates were presented with the four social dilemmas and the Social Value Orientation Scale. One-half of the sample played the social dilemmas using real incentives, whereas the other half used hypothetical incentives. Results supported the situational strength hypothesis in that no behavioral variability and no effect of SVO on behavior were found in the strongest situation. However, situational strength did not moderate the effect of SVO on behavior in situations where behavior showed variability. No moderating effect was found for incentive condition either. The implications of these results for personality theory and assessment are discussed. © 2014 Wiley Periodicals, Inc.
Seeking health information on the web: positive hypothesis testing.
Kayhan, Varol Onur
2013-04-01
The goal of this study is to investigate positive hypothesis testing among consumers of health information when they search the Web. After demonstrating the extent of positive hypothesis testing using Experiment 1, we conduct Experiment 2 to test the effectiveness of two debiasing techniques. A total of 60 undergraduate students searched a tightly controlled online database developed by the authors to test the validity of a hypothesis. The database had four abstracts that confirmed the hypothesis and three abstracts that disconfirmed it. Findings of Experiment 1 showed that the majority of participants (85%) exhibited positive hypothesis testing. In Experiment 2, we found that the recommendation technique was not effective in reducing positive hypothesis testing since none of the participants assigned to this server could retrieve disconfirming evidence. Experiment 2 also showed that the incorporation technique successfully reduced positive hypothesis testing since 75% of the participants could retrieve disconfirming evidence. Positive hypothesis testing on the Web is an understudied topic. More studies are needed to validate the effectiveness of the debiasing techniques discussed in this study and develop new techniques. Search engine developers should consider developing new options for users so that both confirming and disconfirming evidence can be presented in search results as users test hypotheses using search engines. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Reynolds, Matthew R; Scheiber, Caroline; Hajovsky, Daniel B; Schwartz, Bryanna; Kaufman, Alan S
2015-01-01
The gender similarities hypothesis (J. S. Hyde, 2005), based on large-scale reviews of studies, concludes that boys and girls are more alike than different on most psychological variables, including academic skills such as reading and math. Writing is an academic skill that may be an exception. The authors investigated gender differences in academic achievement using a large, nationally stratified sample of children and adolescents ranging from ages 7-19 years (N = 2,027). Achievement data were from the conormed sample for the Kaufman intelligence and achievement tests. Multiple-indicator, multiple-cause, and multigroup mean and covariance structure models were used to test for mean differences. Girls had higher latent reading ability and higher scores on a test of math computation, but the effect sizes were consistent with the gender similarities hypothesis. Conversely, girls scored higher on spelling and written expression, with effect sizes inconsistent with the gender similarities hypothesis. The findings remained the same after controlling for cognitive ability. Girls outperform boys on tasks of writing.
Statistical modeling, detection, and segmentation of stains in digitized fabric images
NASA Astrophysics Data System (ADS)
Gururajan, Arunkumar; Sari-Sarraf, Hamed; Hequet, Eric F.
2007-02-01
This paper describes a novel, automated system based on a computer vision approach for objective evaluation of stain release on cotton fabrics. Digitized color images of the stained fabrics are obtained, and the pixel values in the color and intensity planes of these images are probabilistically modeled as a Gaussian Mixture Model (GMM). Stain detection is posed as a decision-theoretic problem, where the null hypothesis corresponds to the absence of a stain. The null hypothesis and the alternate hypothesis mathematically translate into a first-order GMM and a second-order GMM, respectively. The parameters of the GMM are estimated using a modified Expectation-Maximization (EM) algorithm. Minimum Description Length (MDL) is then used as the test statistic to decide the verity of the null hypothesis. The stain is then segmented by a decision rule based on the probability map generated by the EM algorithm. The proposed approach was tested on a dataset of 48 fabric images soiled with stains of ketchup, corn oil, mustard, Ragu sauce, Revlon makeup, and grape juice. The decision-theoretic part of the algorithm produced a correct detection rate (true positive) of 93% and a false alarm rate of 5% on this set of images.
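The detection scheme — fit a one-component and a two-component GMM by EM and let a description-length criterion choose between them — can be sketched in one dimension. The intensity data, the EM details, and the BIC-style code length below are simplified illustrative assumptions, not the authors' modified EM.

```python
import random
from math import exp, log, pi, sqrt

random.seed(3)

def gauss_pdf(x, mu, var):
    return exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

def loglik_one(xs):
    """Log-likelihood of a single-Gaussian fit (the no-stain hypothesis)."""
    mu = sum(xs) / len(xs)
    var = max(1e-6, sum((x - mu) ** 2 for x in xs) / len(xs))
    return sum(log(gauss_pdf(x, mu, var)) for x in xs)

def loglik_two(xs, iters=200):
    """EM for a two-component 1-D Gaussian mixture (the stain hypothesis)."""
    mu1, mu2 = min(xs), max(xs)
    v1 = v2 = max(1e-6, sum((x - sum(xs) / len(xs)) ** 2 for x in xs) / len(xs))
    w = 0.5
    for _ in range(iters):
        # E step: responsibility of component 1 for each pixel value.
        r = []
        for x in xs:
            p1 = w * gauss_pdf(x, mu1, v1)
            p2 = (1 - w) * gauss_pdf(x, mu2, v2)
            r.append(p1 / (p1 + p2))
        # M step: reweighted means, variances, and mixing proportion.
        n1 = sum(r)
        n2 = len(xs) - n1
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / n2
        v1 = max(1e-6, sum(ri * (x - mu1) ** 2 for ri, x in zip(r, xs)) / n1)
        v2 = max(1e-6, sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, xs)) / n2)
        w = n1 / len(xs)
    return sum(log(w * gauss_pdf(x, mu1, v1) + (1 - w) * gauss_pdf(x, mu2, v2))
               for x in xs)

def mdl(loglik, n_params, n):
    # Two-part, BIC-style code length: -log L + (k/2) log n.
    return -loglik + 0.5 * n_params * log(n)

# Hypothetical intensity samples: background pixels near 0.8 plus a
# darker stained region near 0.3 (values in [0, 1]).
xs = ([random.gauss(0.8, 0.05) for _ in range(300)] +
      [random.gauss(0.3, 0.05) for _ in range(100)])

mdl1 = mdl(loglik_one(xs), 2, len(xs))   # H0: no stain (mu, var)
mdl2 = mdl(loglik_two(xs), 5, len(xs))   # H1: stain (2 mu, 2 var, weight)
print("stain detected" if mdl2 < mdl1 else "no stain")
```

The shorter description length wins: when the data are genuinely bimodal, the extra parameters of the mixture pay for themselves.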
cit: hypothesis testing software for mediation analysis in genomic applications.
Millstein, Joshua; Chen, Gary K; Breton, Carrie V
2016-08-01
The challenges of successfully applying causal inference methods include: (i) satisfying underlying assumptions, (ii) limitations in data/models accommodated by the software and (iii) low power of common multiple testing approaches. The causal inference test (CIT) is based on hypothesis testing rather than estimation, allowing the testable assumptions to be evaluated in the determination of statistical significance. A user-friendly software package provides P-values and optionally permutation-based FDR estimates (q-values) for potential mediators. It can handle single and multiple binary and continuous instrumental variables, binary or continuous outcome variables, and adjustment covariates. Also, the permutation-based FDR option provides a non-parametric implementation. Simulation studies demonstrate the validity of the cit package and show a substantial advantage of permutation-based FDR over other common multiple testing strategies. The cit open-source R package is freely available from the CRAN website (https://cran.r-project.org/web/packages/cit/index.html) with embedded C++ code that utilizes the GNU Scientific Library, also freely available (http://www.gnu.org/software/gsl/). joshua.millstein@usc.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
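For contrast with the permutation-based FDR the package offers, here is a minimal sketch of the standard Benjamini-Hochberg q-value computation that such permutation approaches are benchmarked against. The `bh_qvalues` helper and the P-values are hypothetical illustration, not the cit package's code.

```python
def bh_qvalues(pvals):
    """Benjamini-Hochberg step-up FDR adjustment (q-values)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    q = [0.0] * m
    prev = 1.0
    # Walk from the largest P value down, enforcing monotone q-values.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        q[i] = prev
    return q

# Hypothetical mediation P-values for five candidate mediators.
pvals = [0.001, 0.01, 0.03, 0.20, 0.80]
qvals = bh_qvalues(pvals)
print([round(q, 3) for q in qvals])
```

A q-value below the chosen FDR level (say 0.05) flags a mediator as significant after multiplicity adjustment; permutation-based FDR replaces the parametric P-values feeding this step with resampling-based ones.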
ERIC Educational Resources Information Center
Quinino, Roberto C.; Reis, Edna A.; Bessegato, Lupercio F.
2013-01-01
This article proposes the use of the coefficient of determination as a statistic for hypothesis testing in multiple linear regression based on distributions acquired by beta sampling. (Contains 3 figures.)
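The idea of using the coefficient of determination as a test statistic rests on its monotone relation to the usual overall F statistic. A sketch with hypothetical data, using a single predictor for brevity (the article treats the multiple-regression case):

```python
# Hypothetical data: one predictor, six observations.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]
n, k = len(x), 1  # k = number of predictors

mx, my = sum(x) / n, sum(y) / n
beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        / sum((xi - mx) ** 2 for xi in x))
alpha = my - beta * mx

ss_res = sum((yi - (alpha + beta * xi)) ** 2 for xi, yi in zip(x, y))
ss_tot = sum((yi - my) ** 2 for yi in y)
r2 = 1 - ss_res / ss_tot

# Under H0 (all slope coefficients zero), the overall F statistic is a
# monotone function of R^2, so R^2 can itself serve as the test statistic.
f_stat = (r2 / k) / ((1 - r2) / (n - k - 1))
print(f"R^2 = {r2:.4f}, F = {f_stat:.1f}")
```

Because the mapping from R² to F is monotone, rejecting for large R² is equivalent to the usual F test, which is what makes R² a legitimate test statistic once its sampling distribution is known.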
The Leaf Fell (the Leaf): The Online Processing of Unaccusatives
Friedmann, Naama; Taranto, Gina; Shapiro, Lewis P.; Swinney, David
2012-01-01
According to the Unaccusative Hypothesis, unaccusative subjects are base-generated in object position and move to subject position. We examined this hypothesis using the cross-modal lexical priming technique, which tests whether and when an antecedent is reactivated during the online processing of a sentence. We compared sentences containing unergative verbs with sentences containing unaccusatives, both alternating and nonalternating, and found that subjects of unaccusatives reactivate after the verb, while subjects of unergatives do not. Alternating unaccusatives showed a mixed pattern of reactivation. The research directly supports the Unaccusative Hypothesis. PMID:22822348
Using Backward Design in Education Research: A Research Methods Essay †
Jensen, Jamie L.; Bailey, Elizabeth G.; Kummer, Tyler A.; Weber, K. Scott
2017-01-01
Education research within the STEM disciplines applies a scholarly approach to teaching and learning, with the intent of better understanding how people learn and of improving pedagogy at the undergraduate level. Most of the professionals practicing in this field have ‘crossed over’ from other disciplinary fields and thus have faced challenges in becoming experts in a new discipline. In this article, we offer a novel framework for approaching education research design called Backward Design in Education Research. It is patterned on backward curricular design and provides a three-step, systematic approach to designing education projects: 1) Define a research question that leads to a testable causal hypothesis based on a theoretical rationale; 2) Choose or design the assessment instruments to test the research hypothesis; and 3) Develop an experimental protocol that will be effective in testing the research hypothesis. This approach provides a systematic method to develop and carry out evidence-based research design. PMID:29854045
The GABA Hypothesis in Essential Tremor: Lights and Shadows.
Gironell, Alexandre
2014-01-01
The gamma-aminobutyric acid (GABA) hypothesis in essential tremor (ET) implies a disturbance of the GABAergic system, especially involving the cerebellum. This review examines the evidence of the GABA hypothesis. The review is based on published data about GABA dysfunction in ET, taking into account studies on cerebrospinal fluid, pathology, electrophysiology, genetics, neuroimaging, experimental animal models, and human drug therapies. Findings from several studies support the GABA hypothesis in ET. The hypothesis follows four steps: 1) cerebellar neurodegeneration with Purkinje cell loss; 2) a decrease in GABA system activity in deep cerebellar neurons; 3) disinhibition in output deep cerebellar neurons with pacemaker activity; and 4) an increase in rhythmic activity of the thalamus and thalamo-cortical circuit, contributing to the generation of tremor. Doubts have been cast on this hypothesis, however, by the fact that it is based on relatively few works, controversial post-mortem findings, and negative genetic studies on the GABA system. Furthermore, GABAergic drug efficacy is low and some GABAergic drugs do not have antitremoric efficacy. The GABA hypothesis continues to be the most robust pathophysiological hypothesis to explain ET. There is light in all GABA hypothesis steps, but a number of shadows cannot be overlooked. We need more studies to clarify the neurodegenerative nature of the disease, to confirm the decrease of GABA activity in the cerebellum, and to test more therapies that enhance the GABA transmission specifically in the cerebellum area.
Vanderstraeten, Jacques; Burda, Hynek; Verschaeve, Luc; De Brouwer, Christophe
2015-07-01
It has been suggested that weak 50/60 Hz [extremely low frequency (ELF)] magnetic fields (MF) could affect circadian biorhythms by disrupting the clock function of cryptochromes (the "cryptochrome hypothesis," currently under study). That hypothesis is based on the premise that weak (Earth strength) static magnetic fields affect the redox balance of cryptochromes, thus possibly their signaling state as well. An appropriate method for testing this postulate could be real time or short-term study of the circadian clock function of retinal cryptochromes under exposure to the static field intensities that elicit the largest redox changes (maximal "low field" and "high field" effects, respectively) compared to zero field. Positive results might encourage further study of the cryptochrome hypothesis itself. However, they would indicate the need for performing a similar study, this time comparing the effects of only slight intensity changes (low field range) in order to explore the possible role of the proximity of metal structures and furniture as a confounder under the cryptochrome hypothesis.
What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2013-01-01
This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…
Testing the single-state dominance hypothesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Álvarez-Rodríguez, R.; Moreno, O.; Moya de Guerra, E.
2013-12-30
We present a theoretical analysis of the single-state dominance hypothesis for the two-neutrino double-beta decay process. The theoretical framework is a proton-neutron QRPA based on a deformed Hartree-Fock mean field with BCS pairing correlations. We focus on the decays of ¹⁰⁰Mo, ¹¹⁶Cd and ¹²⁸Te. We do not find clear evidence for single-state dominance within the present approach.
Shayer, Michael; Adhami, Mundher
2010-09-01
In the context of the British Government's policy directed on improving standards in schools, this paper presents research on the effects of a programme intended to promote the cognitive development of children in the first 2 years of primary school (Y1 & 2, aged 5-7 years). The programme is based on earlier work dealing with classroom-based interventions with older children at both primary and secondary levels of schooling. The hypothesis tested is that it is possible to increase the cognitive ability of children by assisting teachers towards that aim in the context of mathematics. A corollary hypothesis is that such an increase would result in an increase in long-term school achievement. The participants were 8 teachers in one local education authority (LEA) and 10 teachers in another. Data were analysed on 275 children present at Year 1 pre-test in 2002 and at long-term Key Stage 2 post-test in 2008. Two intervention methods were employed: a Y1 set of interactive activities designed around Piagetian concrete operational schemata, and mathematics lessons in both Y1 and Y2 designed from a theory-base derived from both Piaget and Vygotsky. At post-test in 2004, the mean effect sizes for cognitive development of the children - assessed by the Piagetian test Spatial Relations - were 0.71 SD in one LEA and 0.60 SD in the other. Five classes achieved a median increase of 1.3 SD. The mean gains over pre-test in 2002 for all children in Key Stage 1 English in 2004 were 0.51 SD, and at Key Stage 2 English in 2008 - the long-term effect - were 0.36 SD, an improvement of 14 percentile points. The main hypothesis was supported by the data on cognitive development. The corollary hypothesis is supported by the gains in English. 
The implications of this study are that relative intelligence can be increased and is not fixed, and that children can be led into collaborating with each other to the benefit of their own thinking, and that there does exist a theory-based methodology for the improvement of teaching.
A Ratio Test of Interrater Agreement with High Specificity
ERIC Educational Resources Information Center
Cousineau, Denis; Laurencelle, Louis
2015-01-01
Existing tests of interrater agreements have high statistical power; however, they lack specificity. If the ratings of the two raters do not show agreement but are not random, the current tests, some of which are based on Cohen's kappa, will often reject the null hypothesis, leading to the wrong conclusion that agreement is present. A new test of…
Achievement Goals as Mediators of the Relationship between Competence Beliefs and Test Anxiety
ERIC Educational Resources Information Center
Putwain, David W.; Symes, Wendy
2012-01-01
Background: Previous work suggests that the expectation of failure is related to higher test anxiety and achievement goals grounded in a fear of failure. Aim: To test the hypothesis, based on the work of Elliot and Pekrun (2007), that the relationship between perceived competence and test anxiety is mediated by achievement goal orientations.…
Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong
2013-01-01
As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for binary outcomes was developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the Hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, the likelihood function under the null hypothesis offers an alternative, indirect way to identify the potential cluster, and the test statistic is the extreme value of the likelihood function. As in Kulldorff's method, we adopt a Monte Carlo test of significance. Both methods are applied to detect spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. Simulations on independent benchmark data indicate that the test statistic based on the Hypergeometric model outperforms Kulldorff's statistics for clusters of high population density or large size; otherwise, Kulldorff's statistics are superior.
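The Monte Carlo significance test described in this abstract can be illustrated with a minimal sketch (not the authors' implementation; for simplicity each single region is its own candidate window, and all names are illustrative): under the null, window case counts follow a hypergeometric law, the test statistic is the most extreme window likelihood, and the p-value is the rank of the observed statistic among simulated replicates.

```python
import math
import random

def hypergeom_logpmf(k, K, n, N):
    # log P(X = k) for X ~ Hypergeometric(N, K, n): k cases in a window
    # holding n of the N individuals, with K cases in total.
    return (math.log(math.comb(K, k)) + math.log(math.comb(N - K, n - k))
            - math.log(math.comb(N, n)))

def scan_statistic(cases, pops):
    # Most extreme (smallest) null log-likelihood over candidate windows;
    # for simplicity, each single region is its own candidate window.
    N, K = sum(pops), sum(cases)
    return min(hypergeom_logpmf(c, K, p, N) for c, p in zip(cases, pops))

def monte_carlo_p(cases, pops, n_sim=999, seed=1):
    # Monte Carlo test of significance: under H0 the K cases fall at random
    # among the N individuals; the p-value is the rank of the observed
    # statistic among the simulated ones.
    rng = random.Random(seed)
    K = sum(cases)
    labels = [i for i, p in enumerate(pops) for _ in range(p)]
    obs = scan_statistic(cases, pops)
    hits = 0
    for _ in range(n_sim):
        sim = [0] * len(pops)
        for lab in rng.sample(labels, K):
            sim[lab] += 1
        if scan_statistic(sim, pops) <= obs:
            hits += 1
    return (hits + 1) / (n_sim + 1)
```

A strong cluster (e.g. 30 of 35 cases in one of three equal regions) yields a small p-value, while an even spread does not.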
The Effects of a Translation Bias on the Scores for the "Basic Economics Test"
ERIC Educational Resources Information Center
Hahn, Jinsoo; Jang, Kyungho
2012-01-01
International comparisons of economic understanding generally require a translation of a standardized test written in English into another language. Test results can differ based on how researchers translate the English written exam into one in their own language. To confirm this hypothesis, two differently translated versions of the "Basic…
NASA Technical Reports Server (NTRS)
Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.
2011-01-01
Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming highly-elliptical orbit formation flying mission.
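For intuition about the false-alarm and missed-detection thresholds mentioned above, here is a textbook Wald SPRT for a Gaussian mean — a simplified stand-in for the paper's filter-bank formulation, not the constrained-Kalman-filter test itself:

```python
import math

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    # Wald's sequential probability ratio test for the mean of Gaussian data
    # with known sigma. alpha = false-alarm risk, beta = missed-detection
    # risk. Returns ('H0' | 'H1' | 'continue', number of samples used).
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr, n = 0.0, 0
    for n, x in enumerate(samples, 1):
        # log-likelihood-ratio increment for one observation, H1 vs. H0
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return 'H1', n
        if llr <= lower:
            return 'H0', n
    return 'continue', n
```

With mu0 = 0, mu1 = 1, sigma = 1 and alpha = beta = 0.01, each observation near mu1 contributes 0.5 to the log-likelihood ratio, so a decision is typically reached after roughly ten samples.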
Burby, Joshua W.; Lacker, Daniel
2016-01-01
Systems as diverse as the interacting species in a community, alleles at a genetic locus, and companies in a market are characterized by competition (over resources, space, capital, etc) and adaptation. Neutral theory, built around the hypothesis that individual performance is independent of group membership, has found utility across the disciplines of ecology, population genetics, and economics, both because of the success of the neutral hypothesis in predicting system properties and because deviations from these predictions provide information about the underlying dynamics. However, most tests of neutrality are weak, based on static system properties such as species-abundance distributions or the number of singletons in a sample. Time-series data provide a window onto a system’s dynamics, and should furnish tests of the neutral hypothesis that are more powerful for detecting deviations from neutrality and more informative about the type of competitive asymmetry that drives the deviation. Here, we present a neutrality test for time-series data. We apply this test to several microbial and financial time series and find that most of these systems are not neutral. Our test isolates the covariance structure of neutral competition, thus facilitating further exploration of the nature of asymmetry in the covariance structure of competitive systems. Much like neutrality tests from population genetics that use relative abundance distributions have enabled researchers to scan entire genomes for genes under selection, we anticipate our time-series test will be useful for quick significance tests of neutrality across a range of ecological, economic, and sociological systems for which time-series data are available. Future work can use our test to categorize and compare the dynamic fingerprints of particular competitive asymmetries (frequency dependence, volatility smiles, etc) to improve forecasting and management of complex adaptive systems. PMID:27689714
Cantalapiedra, Juan L; Hernández Fernández, Manuel; Morales, Jorge
2011-01-01
The resource-use hypothesis proposed by E.S. Vrba predicts that specialist species have higher speciation and extinction rates than generalists because they are more susceptible to environmental changes and vicariance. In this work, we test some of the predictions derived from this hypothesis on the 197 extant and recently extinct species of Ruminantia (Cetartiodactyla, Mammalia) using the biomic specialization index (BSI) of each species, which is based on its distribution within different biomes. We ran 10000 Monte Carlo simulations of our data in order to get a null distribution of BSI values against which to contrast the observed data. Additionally, we drew on a supertree of the ruminants and a phylogenetic likelihood-based method (QuaSSE) for testing whether the degree of biomic specialization affects speciation rates in ruminant lineages. Our results are consistent with the predictions of the resource-use hypothesis, which foretells a higher speciation rate of lineages restricted to a single biome (BSI = 1) and higher frequency of specialist species in biomes that underwent high degree of contraction and fragmentation during climatic cycles. Bovids and deer present differential specialization across biomes; cervids show higher specialization in biomes with a marked hydric seasonality (tropical deciduous woodlands and schlerophyllous woodlands), while bovids present higher specialization in a greater variety of biomes. This might be the result of divergent physiological constraints as well as a different biogeographic and evolutionary history.
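The Monte Carlo null distribution described above can be sketched as follows (illustrative only: here BSI is reduced to the number of distinct biomes a species occupies, and the null model simply reassigns each species' biome records uniformly at random — the paper's actual index and randomization scheme may differ):

```python
import random

def specialist_fraction(occupancy):
    # Fraction of species occurring in exactly one biome (BSI = 1).
    # occupancy: one list of biome labels per species.
    return sum(1 for biomes in occupancy if len(set(biomes)) == 1) / len(occupancy)

def null_specialist_fractions(occupancy, n_biomes, n_sim=1000, seed=42):
    # Null distribution: each species keeps its number of biome records,
    # but the records are reassigned to biomes uniformly at random.
    rng = random.Random(seed)
    sizes = [len(biomes) for biomes in occupancy]
    sims = []
    for _ in range(n_sim):
        sim = [[rng.randrange(n_biomes) for _ in range(k)] for k in sizes]
        sims.append(specialist_fraction(sim))
    return sims
```

The observed specialist fraction is then contrasted against this simulated distribution, exactly in the spirit of the 10000 simulations the authors describe.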
Hypothesis testing in hydrology: Theory and practice
NASA Astrophysics Data System (ADS)
Kirchner, James; Pfister, Laurent
2017-04-01
Well-posed hypothesis tests have spurred major advances in hydrological theory. However, a random sample of recent research papers suggests that in hydrology, as in other fields, hypothesis formulation and testing rarely correspond to the idealized model of the scientific method. Practices such as "p-hacking" or "HARKing" (Hypothesizing After the Results are Known) are major obstacles to more rigorous hypothesis testing in hydrology, along with the well-known problem of confirmation bias - the tendency to value and trust confirmations more than refutations - among both researchers and reviewers. Hypothesis testing is not the only recipe for scientific progress, however: exploratory research, driven by innovations in measurement and observation, has also underlain many key advances. Further improvements in observation and measurement will be vital to both exploratory research and hypothesis testing, and thus to advancing the science of hydrology.
A Bayesian Approach to the Paleomagnetic Conglomerate Test
NASA Astrophysics Data System (ADS)
Heslop, David; Roberts, Andrew P.
2018-02-01
The conglomerate test has served the paleomagnetic community for over 60 years as a means to detect remagnetizations. The test states that if a suite of clasts within a bed have uniformly random paleomagnetic directions, then the conglomerate cannot have experienced a pervasive event that remagnetized the clasts in the same direction. The current form of the conglomerate test is based on null hypothesis testing, which results in a binary "pass" (uniformly random directions) or "fail" (nonrandom directions) outcome. We have recast the conglomerate test in a Bayesian framework with the aim of providing more information concerning the level of support a given data set provides for a hypothesis of uniformly random paleomagnetic directions. Using this approach, we place the conglomerate test in a fully probabilistic framework that allows for inconclusive results when insufficient information is available to draw firm conclusions concerning the randomness or nonrandomness of directions. With our method, sample sets larger than those typically employed in paleomagnetism may be required to achieve strong support for a hypothesis of random directions. Given the potentially detrimental effect of unrecognized remagnetizations on paleomagnetic reconstructions, it is important to provide a means to draw statistically robust data-driven inferences. Our Bayesian analysis provides a means to do this for the conglomerate test.
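The Bayesian idea behind this recast test can be sketched in a simplified two-dimensional (circular) setting — the paper works with full 3-D paleomagnetic directions, so this is an analogue, not their method. The sketch compares a clustered model (von Mises with fixed concentration kappa and a uniform prior on the unknown mean direction) against uniformly random directions via a Bayes factor:

```python
import cmath
import math

def log_i0(x):
    # log of the modified Bessel function I0, via its power series
    # (adequate for moderate arguments).
    term, total, k = 1.0, 1.0, 1
    while term > total * 1e-16:
        term *= (x / 2.0) ** 2 / k ** 2
        total += term
        k += 1
    return math.log(total)

def conglomerate_bayes_factor(angles, kappa=2.0):
    # Bayes factor: clustered (von Mises, concentration kappa, uniform prior
    # on the mean direction) vs. uniformly random directions on the circle.
    # BF >> 1 favors a shared remagnetization direction; BF << 1 favors
    # random clast directions, i.e. the conglomerate test "passes";
    # intermediate values are inconclusive.
    n = len(angles)
    resultant = abs(sum(cmath.exp(1j * a) for a in angles))
    return math.exp(log_i0(kappa * resultant) - n * log_i0(kappa))
```

Unlike a binary pass/fail test, the Bayes factor can sit near 1 when the sample is too small to support either hypothesis strongly, mirroring the "inconclusive" outcome the authors argue for.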
BIOLOGICALLY ENHANCED OXYGEN TRANSFER IN THE ACTIVATED SLUDGE PROCESS (JOURNAL)
Biologically enhanced oxygen transfer has been hypothesized to explain observed oxygen transfer rates in activated sludge systems that were well above those predicted from aerator clean-water testing. The enhanced oxygen transfer rates were based on tests using BOD bottle oxygen ...
Alertness and cognitive control: Testing the early onset hypothesis.
Schneider, Darryl W
2018-05-01
Previous research has revealed a peculiar interaction between alertness and cognitive control in selective-attention tasks: Congruency effects are larger on alert trials (on which an alerting cue is presented briefly in advance of the imperative stimulus) than on no-alert trials, despite shorter response times (RTs) on alert trials. One explanation for this finding is the early onset hypothesis, which is based on the assumptions that increased alertness shortens stimulus-encoding time and that cognitive control involves gradually focusing attention during a trial. The author tested the hypothesis in 3 experiments by manipulating alertness and stimulus quality (which were intended to shorten and lengthen stimulus-encoding time, respectively) in an arrow-based flanker task involving congruent and incongruent stimuli. Replicating past findings, the alerting manipulation led to shorter RTs but larger congruency effects on alert trials than on no-alert trials. The stimulus-quality manipulation led to longer RTs and larger congruency effects for degraded stimuli than for intact stimuli. These results provide mixed support for the early onset hypothesis, but the author discusses how data and theory might be reconciled if stimulus quality affects stimulus-encoding time and the rate of evidence accumulation in the decision process. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
In silico model-based inference: a contemporary approach for hypothesis testing in network biology
Klinke, David J.
2014-01-01
Inductive inference plays a central role in the study of biological systems where one aims to increase their understanding of the system by reasoning backwards from uncertain observations to identify causal relationships among components of the system. These causal relationships are postulated from prior knowledge as a hypothesis or simply a model. Experiments are designed to test the model. Inferential statistics are used to establish a level of confidence in how well our postulated model explains the acquired data. This iterative process, commonly referred to as the scientific method, either improves our confidence in a model or suggests that we revisit our prior knowledge to develop a new model. Advances in technology impact how we use prior knowledge and data to formulate models of biological networks and how we observe cellular behavior. However, the approach for model-based inference has remained largely unchanged since Fisher, Neyman and Pearson developed the ideas in the early 1900’s that gave rise to what is now known as classical statistical hypothesis (model) testing. Here, I will summarize conventional methods for model-based inference and suggest a contemporary approach to aid in our quest to discover how cells dynamically interpret and transmit information for therapeutic aims that integrates ideas drawn from high performance computing, Bayesian statistics, and chemical kinetics. PMID:25139179
Discontinuous categories affect information-integration but not rule-based category learning.
Maddox, W Todd; Filoteo, J Vincent; Lauritzen, J Scott; Connally, Emily; Hejl, Kelli D
2005-07-01
Three experiments were conducted that provide a direct examination of within-category discontinuity manipulations on the implicit, procedural-based learning and the explicit, hypothesis-testing systems proposed in F. G. Ashby, L. A. Alfonso-Reese, A. U. Turken, and E. M. Waldron's (1998) competition between verbal and implicit systems model. Discontinuous categories adversely affected information-integration but not rule-based category learning. Increasing the magnitude of the discontinuity did not lead to a significant decline in performance. The distance to the bound provides a reasonable description of the generalization profile associated with the hypothesis-testing system, whereas the distance to the bound plus the distance to the trained response region provides a reasonable description of the generalization profile associated with the procedural-based learning system. These results suggest that within-category discontinuity differentially impacts information-integration but not rule-based category learning and provides information regarding the detailed processing characteristics of each category learning system. ((c) 2005 APA, all rights reserved).
Timing and proximate causes of mortality in wild bird populations: testing Ashmole’s hypothesis
Barton, Daniel C.; Martin, Thomas E.
2012-01-01
Fecundity in birds is widely recognized to increase with latitude across diverse phylogenetic groups and regions, yet the causes of this variation remain enigmatic. Ashmole’s hypothesis is one of the most broadly accepted explanations for this pattern. This hypothesis suggests that increasing seasonality leads to increasing overwinter mortality due to resource scarcity during the lean season (e.g., winter) in higher latitude climates. This mortality is then thought to yield increased per-capita resources for breeding that allow larger clutch sizes at high latitudes. Support for this hypothesis has been based on indirect tests, whereas the underlying mechanisms and assumptions remain poorly explored. We used a meta-analysis of over 150 published studies to test two underlying and critical assumptions of Ashmole’s hypothesis: first, that adult mortality is greatest during the season of greatest resource scarcity, and second, that most mortality is caused by starvation. We found that the lean season (winter) was generally not the season of greatest mortality. Instead, spring or summer was most frequently the season of greatest mortality. Moreover, monthly survival rates were not explained by monthly productivity, again opposing predictions from Ashmole’s hypothesis. Finally, predation, rather than starvation, was the most frequent proximate cause of mortality. Our results do not support the mechanistic predictions of Ashmole’s hypothesis, and suggest alternative explanations of latitudinal variation in clutch size should remain under consideration. Our meta-analysis also highlights a paucity of data available on the timing and causes of mortality in many bird populations, particularly tropical bird populations, despite the clear theoretical and empirical importance of such data.
The Wigner distribution and 2D classical maps
NASA Astrophysics Data System (ADS)
Sakhr, Jamal
2017-07-01
The Wigner spacing distribution has a long and illustrious history in nuclear physics and in the quantum mechanics of classically chaotic systems. In this paper, a novel connection between the Wigner distribution and 2D classical mechanics is introduced. Based on a well-known correspondence between the Wigner distribution and the 2D Poisson point process, the hypothesis that typical pseudo-trajectories of a 2D ergodic map have a Wignerian nearest-neighbor spacing distribution (NNSD) is put forward and numerically tested. The standard Euclidean metric is used to compute the interpoint spacings. In all test cases, the hypothesis is upheld, and the range of validity of the hypothesis appears to be robust in the sense that it is not affected by the presence or absence of: (i) mixing; (ii) time-reversal symmetry; and/or (iii) dissipation.
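The numerical test described above can be sketched in a few lines: iterate an ergodic 2D map (here the Arnold cat map, a standard example; the specific maps in the paper may differ), compute mean-normalized nearest-neighbor spacings with the Euclidean metric, and compare with the Wigner surmise P(s) = (pi/2) s exp(-pi s^2/4), which is also the NNSD of a 2D Poisson point process:

```python
import math

def cat_map_trajectory(n, x0=0.1234, y0=0.5678):
    # Arnold's cat map: a standard ergodic, chaotic 2D map on the unit torus.
    pts, x, y = [], x0, y0
    for _ in range(n):
        x, y = (2 * x + y) % 1.0, (x + y) % 1.0
        pts.append((x, y))
    return pts

def nn_spacings(pts):
    # Nearest-neighbor Euclidean distances, normalized to unit mean spacing
    # (O(n^2) brute force; adequate for a sketch).
    raw = []
    for i, (xi, yi) in enumerate(pts):
        raw.append(min(math.hypot(xi - xj, yi - yj)
                       for j, (xj, yj) in enumerate(pts) if j != i))
    mean = sum(raw) / len(raw)
    return [d / mean for d in raw]

def wigner_pdf(s):
    # Wigner surmise for unit mean spacing.
    return (math.pi / 2) * s * math.exp(-math.pi * s * s / 4)
```

If the hypothesis holds, the sample variance of the normalized spacings should be near the Wigner value 4/pi - 1 (about 0.27).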
Taroni, F; Biedermann, A; Bozza, S
2016-02-01
Many people regard the concept of hypothesis testing as fundamental to inferential statistics. Various schools of thought, in particular frequentist and Bayesian, have promoted radically different solutions for taking a decision about the plausibility of competing hypotheses. Comprehensive philosophical comparisons of their advantages and drawbacks are widely available and continue to fuel extensive debate in the literature. More recently, controversial discussion was initiated by an editorial decision of a scientific journal [1] to refuse any paper submitted for publication containing null hypothesis testing procedures. Since the large majority of papers published in forensic journals propose the evaluation of statistical evidence based on so-called p-values, it is of interest to bring the discussion of this journal's decision to the forensic science community. This paper aims to provide forensic science researchers with a primer on the main concepts and their implications for making informed methodological choices. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
A Critique of One-Tailed Hypothesis Test Procedures in Business and Economics Statistics Textbooks.
ERIC Educational Resources Information Center
Liu, Tung; Stone, Courtenay C.
1999-01-01
Surveys introductory business and economics statistics textbooks and finds that they differ over the best way to explain one-tailed hypothesis tests: the simple null-hypothesis approach or the composite null-hypothesis approach. Argues that the composite null-hypothesis approach contains methodological shortcomings that make it more difficult for…
The membrane pacemaker hypothesis: novel tests during the ontogeny of endothermy.
Price, Edwin R; Sirsat, Tushar S; Sirsat, Sarah K G; Curran, Thomas; Venables, Barney J; Dzialowski, Edward M
2018-03-29
The 'membrane pacemaker' hypothesis proposes a biochemical explanation for among-species variation in resting metabolism, based on the positive correlation between membrane docosahexaenoic acid (DHA) and metabolic rate. We tested this hypothesis using a novel model, altricial red-winged blackbird nestlings, predicting that the proportion of DHA in muscle and liver membranes should increase with the increasing metabolic rate of the nestling as it develops endothermy. We also used a dietary manipulation, supplementing the natural diet with fish oil (high DHA) or sunflower oil (high linoleic acid) to alter membrane composition and then assessed metabolic rate. In support of the membrane pacemaker hypothesis, DHA proportions increased in membranes from pectoralis muscle, muscle mitochondria and liver during post-hatch development. By contrast, elevated dietary DHA had no effect on resting metabolic rate, despite causing significant changes to membrane lipid composition. During cold challenges, higher metabolic rates were achieved by birds that had lower DHA and higher linoleic acid in membrane phospholipids. Given the mixed support for this hypothesis, we conclude that correlations between membrane DHA and metabolic rate are likely spurious, and should be attributed to a still-unidentified confounding variable. © 2018. Published by The Company of Biologists Ltd.
Does RAIM with Correct Exclusion Produce Unbiased Positions?
Teunissen, Peter J. G.; Imparato, Davide; Tiberius, Christian C. J. M.
2017-01-01
As the navigation solution of exclusion-based RAIM follows from a combination of least-squares estimation and a statistically based exclusion process, the computation of the integrity of the navigation solution has to take the propagated uncertainty of the combined estimation-testing procedure into account. In this contribution, we analyse, theoretically as well as empirically, the effect that this combination has on the first statistical moment, i.e., the mean, of the computed navigation solution. It will be shown that, although statistical testing is intended to remove biases from the data, biases will always remain under the alternative hypothesis, even when the correct alternative hypothesis is properly identified. The a posteriori exclusion of a biased satellite range from the position solution will therefore never remove the bias in the position solution completely. PMID:28672862
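The paper's central point — that combining estimation with statistical testing propagates bias into the accepted solution — can be illustrated with a much simpler scalar Monte Carlo (an analogue of conditioning on a test outcome, not the RAIM exclusion setup itself; all parameter values are illustrative):

```python
import random

def conditional_mean_after_test(mu=0.3, sigma=1.0, n=10, z_crit=1.96,
                                trials=20000, seed=7):
    # Mean of the sample-mean estimator, conditional on a two-sided z-test
    # rejecting H0: mu = 0. Even though the estimator is unbiased for mu
    # unconditionally, conditioning on the test outcome biases the accepted
    # estimate away from the true value.
    rng = random.Random(seed)
    se = sigma / n ** 0.5
    kept = []
    for _ in range(trials):
        xbar = rng.gauss(mu, se)      # draw from the estimator's distribution
        if abs(xbar) > z_crit * se:   # keep only solutions that pass the test
            kept.append(xbar)
    return sum(kept) / len(kept)
```

With a true mean of 0.3, the conditional mean of the accepted estimates comes out well above 0.3, because only draws extreme enough to reject H0 survive the screening.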
Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie
2016-01-01
The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges.
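The sequential-hypothesis-testing principle behind ASAP can be sketched generically (a toy illustration of online Bayesian model comparison, not the ASAP pipeline; the model names and Gaussian likelihoods are illustrative assumptions):

```python
import math

def sequential_model_comparison(stream, models, threshold=0.95):
    # Online Bayesian model comparison: update posterior model probabilities
    # after each observation and stop once one model exceeds `threshold`.
    # `models` maps a model name to the log-likelihood of one observation.
    names = list(models)
    logp = {m: math.log(1.0 / len(names)) for m in names}  # uniform prior
    post, n = {}, 0
    for n, x in enumerate(stream, 1):
        for m in names:
            logp[m] += models[m](x)
        top = max(logp.values())                 # normalize stably in log space
        z = sum(math.exp(v - top) for v in logp.values())
        post = {m: math.exp(logp[m] - top) / z for m in names}
        best = max(post, key=post.get)
        if post[best] >= threshold:
            return best, n, post                 # stop early: enough evidence
    return None, n, post
```

In an adaptive design, the same posterior would additionally drive the choice of the next stimulus; here only the early-stopping aspect is shown.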
Suner, Aslı; Karakülah, Gökhan; Dicle, Oğuz
2014-01-01
Statistical hypothesis testing is an essential component of biological and medical studies for making inferences and estimations from the collected data; however, misuse of statistical tests is widespread. In order to prevent possible errors in selecting a suitable statistical test, it is currently possible to consult available test-selection algorithms developed for various purposes. However, the lack of an algorithm presenting the most common statistical tests used in biomedical research in a single flowchart causes several problems, such as shifting users among the algorithms, poor decision support in test selection, and dissatisfaction among potential users. Herein, we demonstrate a unified flowchart, covering the statistical tests most commonly used in the biomedical domain, to provide decision support to non-statistician users in choosing the appropriate statistical test for their hypothesis. We also discuss some of the findings made while integrating the flowcharts into a single, more comprehensive decision algorithm.
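A flowchart of this kind is essentially a decision tree. A toy fragment might look like the following — the branches shown are standard textbook mappings and only a small, illustrative subset of the comprehensive flowchart the paper describes:

```python
def suggest_test(outcome, groups, paired, normal):
    # Toy rule-based sketch of a test-selection flowchart.
    # outcome: 'continuous' or 'categorical'; groups: number of groups;
    # paired: repeated measures on the same subjects; normal: whether the
    # continuous outcome is approximately normally distributed.
    if outcome == 'categorical':
        return 'McNemar test' if paired else 'chi-squared test'
    if groups == 2:
        if paired:
            return 'paired t-test' if normal else 'Wilcoxon signed-rank test'
        return 'two-sample t-test' if normal else 'Mann-Whitney U test'
    if paired:
        return 'repeated-measures ANOVA' if normal else 'Friedman test'
    return 'one-way ANOVA' if normal else 'Kruskal-Wallis test'
```

For example, two independent groups with a normal continuous outcome lead to a two-sample t-test, while three unpaired groups with a non-normal outcome lead to the Kruskal-Wallis test.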
ERIC Educational Resources Information Center
Stice, Eric; Marti, C. Nathan; Rohde, Paul; Shaw, Heather
2011-01-01
Objective: Test the hypothesis that reductions in thin-ideal internalization and body dissatisfaction mediate the effects of a dissonance-based eating disorder prevention program on reductions in eating disorder symptoms over 1-year follow-up. Method: Data were drawn from a randomized effectiveness trial in which 306 female high school students…
Repeated Challenge Studies: A Comparison of Union-Intersection Testing with Linear Modeling.
ERIC Educational Resources Information Center
Levine, Richard A.; Ohman, Pamela A.
1997-01-01
Challenge studies can be used to see whether there is a causal relationship between an agent of interest and a response. An approach based on union-intersection testing is presented that allows researchers to examine observations on a single subject and test the hypothesis of interest. An application using psychological data is presented. (SLD)
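The union-intersection idea — reject the global null exactly when the most extreme component test rejects — can be sketched as follows (a generic z-statistic version with a Šidák-adjusted critical value, assuming independent components; not the paper's specific procedure):

```python
from statistics import NormalDist

def ui_test(z_stats, alpha=0.05):
    # Union-intersection test of the global null H0 (the intersection of the
    # component nulls) from two-sided z-statistics: reject H0 iff the largest
    # |z| exceeds a Sidak-adjusted critical value, which controls the
    # familywise error rate at alpha under independence.
    k = len(z_stats)
    per_test_alpha = 1 - (1 - alpha) ** (1 / k)
    crit = NormalDist().inv_cdf(1 - per_test_alpha / 2)
    return max(abs(z) for z in z_stats) > crit, crit
```

With three components at alpha = 0.05, the adjusted critical value is about 2.39, so one clearly significant component statistic suffices to reject the global null.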
The Effect of Instruction on the Acquisition of Conservation of Volume.
ERIC Educational Resources Information Center
Butts, David P.; Howe, Ann C.
Tested was the hypothesis that science instruction based on task analysis will lead to the acquisition of the ability to perform certain Piaget volume tasks which have been characterized as requiring formal operations for their solutions. A Test on Formal Operations and a Learning Hierarchies Test were given to fourth- and sixth-grade students in…
A modeling process to understand complex system architectures
NASA Astrophysics Data System (ADS)
Robinson, Santiago Balestrini
2009-12-01
In recent decades, several tools have been developed by the armed forces, and their contractors, to test the capability of a force. These campaign-level analysis tools, often characterized as constructive simulations, are generally expensive to create and execute, and at best they are extremely difficult to verify and validate. This central observation, that analysts are relying more and more on constructive simulations to predict the performance of future networks of systems, leads to the two central objectives of this thesis: (1) to enable the quantitative comparison of architectures in terms of their ability to satisfy a capability without resorting to constructive simulations, and (2) when constructive simulations must be created, to quantitatively determine how to spend the modeling effort amongst the different system classes. The first objective led to Hypothesis A, the first main hypothesis, which states that by studying the relationships between the entities that compose an architecture, one can infer how well it will perform a given capability. The method used to test the hypothesis is based on two assumptions: (1) that the capability can be defined as a cycle of functions, and (2) that it must be possible to estimate the probability that a function-based relationship occurs between any two types of entities. If these two requirements are met, then by creating random functional networks, different architectures can be compared in terms of their ability to satisfy a capability. In order to test this hypothesis, a novel process for creating representative functional networks of large-scale system architectures was developed. The process, named the Digraph Modeling for Architectures (DiMA), was tested by comparing its results to those of complex constructive simulations.
Results indicate that if the inputs assigned to DiMA are correct (in the tests they were based on time-averaged data obtained from the ABM), DiMA is able to identify which of any two architectures is better more than 98% of the time. The second objective led to Hypothesis B, the second of the main hypotheses. This hypothesis stated that by studying the functional relations, the most critical entities composing the architecture could be identified. The critical entities are those whose slight variations in behavior cause large variations in the behavior of the overall architecture. These are the entities that must be modeled more carefully and where modeling effort should be expended. This hypothesis was tested by simplifying agent-based models to the non-trivial minimum, and executing a large number of different simulations in order to obtain statistically significant results. The tests were conducted by evolving the complex model without any error induced, and then evolving the model once again for each ranking, assigning error to any of the nodes with a probability inversely proportional to the ranking. The results from this hypothesis test indicate that, depending on the structural characteristics of the functional relations, it is useful to use one of the two intelligent rankings tested, or it is best to expend effort equally amongst all the entities. Random ranking always performed worse than uniform ranking, indicating that if modeling effort is to be prioritized amongst the entities composing the large-scale system architecture, it should be prioritized intelligently. The benefit threshold between intelligent prioritization and no prioritization lies at the large-scale system's chaotic boundary. If the large-scale system behaves chaotically, small variations in any of the entities tend to have a great impact on the behavior of the entire system.
Therefore, even low ranking entities can still affect the behavior of the model greatly, and error should not be concentrated in any one entity. It was discovered that the threshold can be identified from studying the structure of the networks, in particular the cyclicity, the Off-diagonal Complexity, and the Digraph Algebraic Connectivity. (Abstract shortened by UMI.)
Power Enhancement in High Dimensional Cross-Sectional Tests
Fan, Jianqing; Liao, Yuan; Yao, Jiawei
2016-01-01
We propose a novel technique to boost the power of testing a high-dimensional vector H : θ = 0 against sparse alternatives where the null hypothesis is violated only by a couple of components. Existing tests based on quadratic forms such as the Wald statistic often suffer from low powers due to the accumulation of errors in estimating high-dimensional parameters. More powerful tests for sparse alternatives such as thresholding and extreme-value tests, on the other hand, require either stringent conditions or bootstrap to derive the null distribution and often suffer from size distortions due to the slow convergence. Based on a screening technique, we introduce a “power enhancement component”, which is zero under the null hypothesis with high probability, but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. The null distribution does not require stringent regularity conditions, and is completely determined by that of the pivotal statistic. As specific applications, the proposed methods are applied to testing the factor pricing models and validating the cross-sectional independence in panel data models. PMID:26778846
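The screening idea can be made concrete with a small numerical sketch. This is a simplified illustration, not the paper's exact construction: the threshold form and the scaling of the enhancement component are assumptions chosen only to show the mechanism. The component J0 is zero unless some studentized component exceeds a slowly growing threshold, while a standardized quadratic statistic J1 supplies the asymptotically pivotal part:

```python
import math

def power_enhanced_stat(theta_hat, se, n):
    """Illustrative power-enhancement statistic: a screened component J0
    (zero with high probability under H0) plus a standardized quadratic J1."""
    p = len(theta_hat)
    t = [th / s for th, s in zip(theta_hat, se)]            # studentized components
    delta = math.sqrt(math.log(math.log(n)) * math.log(p))  # slowly growing threshold (assumed form)
    J0 = math.sqrt(p) * sum(v * v for v in t if abs(v) > delta)
    J1 = (sum(v * v for v in t) - p) / math.sqrt(2 * p)     # Wald-type, asymptotically pivotal
    return J0 + J1
```

Under the null all studentized components stay below delta, so the statistic reduces to J1 alone; a single large spiked component sends J0, and hence the statistic, sharply upward — the power boost under sparse alternatives.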
García-Navas, Vicente; Ortego, Joaquín; Sanz, Juan José
2009-01-01
The general hypothesis of mate choice based on non-additive genetic traits suggests that individuals would gain important benefits by choosing genetically dissimilar mates (compatible mate hypothesis) and/or more heterozygous mates (heterozygous mate hypothesis). In this study, we test these hypotheses in a socially monogamous bird, the blue tit (Cyanistes caeruleus). We found no evidence for a relatedness-based mating pattern, but heterozygosity was positively correlated between social mates, suggesting that blue tits may base their mating preferences on partner's heterozygosity. We found evidence that the observed heterozygosity-based assortative mating could be maintained by both direct and indirect benefits. Heterozygosity reflected individual quality in both sexes: egg production and quality increased with female heterozygosity while more heterozygous males showed higher feeding rates during the brood-rearing period. Further, estimated offspring heterozygosity correlated with both paternal and maternal heterozygosity, suggesting that mating with heterozygous individuals can increase offspring genetic quality. Finally, plumage crown coloration was associated with male heterozygosity, and this could explain mutual mate preferences for highly heterozygous and more ornamented individuals. Overall, this study suggests that non-additive genetic traits may play an important role in the evolution of mating preferences and offers empirical support to the resolution of the lek paradox from the perspective of the heterozygous mate hypothesis. PMID:19474042
P value and the theory of hypothesis testing: an explanation for new researchers.
Biau, David Jean; Jolles, Brigitte M; Porcher, Raphaël
2010-03-01
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the difference observed, or one more extreme, given that the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
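The distinction between the two theories can be seen side by side in a toy one-sample z-test (population sigma known; the numbers below are purely illustrative): the p value measures evidence against H0 on a continuous scale, while the Neyman-Pearson decision only checks whether the statistic lands in a pre-chosen critical region.

```python
import math

def two_sided_z_test(xbar, mu0, sigma, n, z_crit=1.959963984540054):
    """One-sample two-sided z-test (sigma known).
    Returns (z, p_value, reject); reject uses the alpha = 0.05
    critical region matching the default z_crit."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    # Fisher's p value: probability under H0 of a statistic at least this extreme
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    # Neyman-Pearson decision: does z fall in the pre-chosen critical region?
    return z, p_value, abs(z) > z_crit
```

With xbar = 103, mu0 = 100, sigma = 15 and n = 100, the statistic is z = 2.0 and p ≈ 0.046: graded evidence against H0 on Fisher's account, and a binary rejection at the 5% level on Neyman-Pearson's.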
2014-01-01
Background In complex large-scale experiments, in addition to simultaneously considering a large number of features, multiple hypotheses are often being tested for each feature. This leads to a problem of multi-dimensional multiple testing. For example, in gene expression studies over ordered categories (such as time-course or dose-response experiments), interest is often in testing differential expression across several categories for each gene. In this paper, we consider a framework for testing multiple sets of hypotheses, which can be applied to a wide range of problems. Results We adopt the concept of the overall false discovery rate (OFDR) for controlling false discoveries at the hypothesis-set level. Based on an existing procedure for identifying differentially expressed gene sets, we discuss a general two-step hierarchical hypothesis-set testing procedure, which controls the overall false discovery rate under independence across hypothesis sets. In addition, we discuss the concept of the mixed-directional false discovery rate (mdFDR), and extend the general procedure to enable directional decisions for two-sided alternatives. We applied the framework to the case of microarray time-course/dose-response experiments, and proposed three procedures for testing differential expression and making multiple directional decisions for each gene. Simulation studies confirm the control of the OFDR and mdFDR by the proposed procedures under independence and positive correlations across genes. Simulation results also show that two of our new procedures achieve higher power than previous methods. Finally, the proposed methodology is applied to a microarray dose-response study, to identify 17β-estradiol-sensitive genes in breast cancer cells that are induced at low concentrations. Conclusions The framework we discuss provides a platform for multiple testing procedures covering situations involving two (or potentially more) sources of multiplicity.
The framework is easy to use and adaptable to various practical settings that frequently occur in large-scale experiments. Procedures generated from the framework are shown to maintain control of the OFDR and mdFDR, quantities that are especially relevant in the case of multiple hypothesis set testing. The procedures work well in both simulations and real datasets, and are shown to have better power than existing methods. PMID:24731138
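The two-step idea can be sketched in simplified form. This is a schematic under the assumption that the within-set testing level is scaled by the proportion of sets selected in step 1; the paper's actual procedures differ in detail:

```python
def benjamini_hochberg(pvals, q):
    """Boolean rejections from the Benjamini-Hochberg step-up procedure at level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank
    reject = [False] * m
    for i in order[:k]:
        reject[i] = True
    return reject

def two_step_hierarchical(set_pvals, within_pvals, q=0.05):
    """Step 1: BH across hypothesis sets (e.g., genes).
    Step 2: within each selected set, test components at a level scaled
    by the fraction of selected sets (an illustrative choice)."""
    sel = benjamini_hochberg(set_pvals, q)
    R = sum(sel)
    alpha2 = q * R / len(set_pvals) if R else 0.0
    decisions = [[s and p <= alpha2 for p in pv]
                 for s, pv in zip(sel, within_pvals)]
    return sel, decisions
```

Only sets surviving the first screen are opened at all, which is what keeps the false discoveries controlled at the set level rather than only per individual hypothesis.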
ERIC Educational Resources Information Center
Hoover, H. D.; Plake, Barbara
The relative power of the Mann-Whitney statistic, the t-statistic, the median test, a test based on exceedances (A,B), and two special cases of (A,B), the Tukey quick test and the revised Tukey quick test, was investigated via a Monte Carlo experiment. These procedures were compared across four population probability models: uniform, beta, normal,…
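A minimal Monte Carlo comparison in the same spirit might look like the following. This sketch covers only normal populations and uses a normal-approximation cutoff of 1.96 for both statistics, so the nominal levels are approximate:

```python
import math
import random

def t_stat(x, y):
    """Pooled two-sample t statistic."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / math.sqrt(sp * (1 / nx + 1 / ny))

def mw_z(x, y):
    """Normal-approximation z for the Mann-Whitney U statistic (continuous data, no ties)."""
    nx, ny = len(x), len(y)
    combined = sorted(x + y)
    rank_sum_x = sum(combined.index(v) + 1 for v in x)
    u = rank_sum_x - nx * (nx + 1) / 2
    mu = nx * ny / 2
    sd = math.sqrt(nx * ny * (nx + ny + 1) / 12)
    return (u - mu) / sd

def power(shift, n=20, reps=2000, seed=1):
    """Empirical rejection rates of both tests against a mean shift."""
    rng = random.Random(seed)
    hits_t = hits_mw = 0
    for _ in range(reps):
        x = [rng.gauss(shift, 1) for _ in range(n)]
        y = [rng.gauss(0, 1) for _ in range(n)]
        hits_t += abs(t_stat(x, y)) > 1.96
        hits_mw += abs(mw_z(x, y)) > 1.96
    return hits_t / reps, hits_mw / reps
```

Running `power(1.0)` versus `power(0.0)` contrasts power under a true shift with the empirical size under the null, which is exactly the kind of comparison the Monte Carlo experiment above tabulates across population shapes.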
Monitoring Items in Real Time to Enhance CAT Security
ERIC Educational Resources Information Center
Zhang, Jinming; Li, Jie
2016-01-01
An IRT-based sequential procedure is developed to monitor items for enhancing test security. The procedure uses a series of statistical hypothesis tests to examine whether the statistical characteristics of each item under inspection have changed significantly during CAT administration. This procedure is compared with a previously developed…
Testing Evolutionary Hypotheses in the Classroom with MacClade Software.
ERIC Educational Resources Information Center
Codella, Sylvio G.
2002-01-01
Introduces MacClade which is a Macintosh-based software package that uses the techniques of cladistic analysis to explore evolutionary patterns. Describes a novel and effective exercise that allows undergraduate biology majors to test a hypothesis about behavioral evolution in insects. (Contains 13 references.) (Author/YDS)
Differentiating between rights-based and relational ethical approaches.
Trobec, Irena; Herbst, Majda; Zvanut, Bostjan
2009-05-01
When forced treatment in mental health care is under consideration, two approaches guide clinicians in their actions: the dominant rights-based approach and the relational ethical approach. We hypothesized that nurses with bachelor's degrees differentiate better between the two approaches than nurses without a degree. To test this hypothesis a survey was performed in major Slovenian health institutions. We found that nurses emphasize the importance of ethics and personal values, but 55.4% of all the nurse participants confused the two approaches. The results confirmed our hypothesis and indicate the importance of nurses' formal education, especially when caring for patients with mental illness.
Contextual effects on the perceived health benefits of exercise: the exercise rank hypothesis.
Maltby, John; Wood, Alex M; Vlaev, Ivo; Taylor, Michael J; Brown, Gordon D A
2012-12-01
Many accounts of social influences on exercise participation describe how people compare their behaviors to those of others. We develop and test a novel hypothesis, the exercise rank hypothesis, of how this comparison can occur. The exercise rank hypothesis, derived from evolutionary theory and the decision by sampling model of judgment, suggests that individuals' perceptions of the health benefits of exercise are influenced by how individuals believe the amount of exercise ranks in comparison with other people's amounts of exercise. Study 1 demonstrated that individuals' perceptions of the health benefits of their own current exercise amounts were as predicted by the exercise rank hypothesis. Study 2 demonstrated that the perceptions of the health benefits of an amount of exercise can be manipulated by experimentally changing the ranked position of the amount within a comparison context. The discussion focuses on how social norm-based interventions could benefit from using rank information.
NASA Astrophysics Data System (ADS)
Harken, B.; Geiges, A.; Rubin, Y.
2013-12-01
There are several stages in any hydrological modeling campaign, including: formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and forward modeling and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration, plume travel time, or aquifer recharge rate. These predictions often have significant bearing on some decision that must be made. Examples include: how to allocate limited remediation resources between multiple contaminated groundwater sites, where to place a waste repository site, and what extraction rates can be considered sustainable in an aquifer. Providing an answer to these questions depends on predictions of EPMs using forward models as well as levels of uncertainty related to these predictions. Uncertainty in model parameters, such as hydraulic conductivity, leads to uncertainty in EPM predictions. Often, field campaigns and inverse modeling efforts are planned and undertaken with reduction of parametric uncertainty as the objective. The tool of hypothesis testing allows this to be taken one step further by considering uncertainty reduction in the ultimate prediction of the EPM as the objective, and gives a rational basis for weighing costs and benefits at each stage. When using the tool of statistical hypothesis testing, the EPM is cast into a binary outcome. This is formulated as null and alternative hypotheses, which can be accepted and rejected with statistical formality. When accounting for all sources of uncertainty at each stage, the level of significance of this test provides a rational basis for planning, optimization, and evaluation of the entire campaign. Case-specific information, such as the consequences of prediction error and site-specific costs, can be used in establishing selection criteria based on what level of risk is deemed acceptable.
This framework is demonstrated and discussed using various synthetic case studies. The case studies involve contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a given location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical value of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. Different field campaigns are analyzed based on effectiveness in reducing the probability of selecting the wrong hypothesis, which in this case corresponds to reducing uncertainty in the prediction of plume arrival time. To examine the role of inverse modeling in this framework, case studies involving both Maximum Likelihood parameter estimation and Bayesian inversion are used.
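The travel-time formulation can be sketched with a minimal Monte Carlo propagation. Everything numeric here is invented for illustration (domain length, gradient, porosity, the lognormal conductivity parameters, and the critical time), and the forward model is a bare advection formula rather than the case studies' transport model:

```python
import math
import random

def arrival_time_test(length=100.0, gradient=0.01, phi=0.3, t_crit=3650.0,
                      mu_lnK=0.0, sd_lnK=1.0, reps=5000, seed=7):
    """Propagate uncertainty in hydraulic conductivity K (lognormal;
    all parameter values invented for illustration) to plume travel
    time T = length * phi / (K * gradient), and estimate P(H0: T < t_crit)."""
    rng = random.Random(seed)
    early = 0
    for _ in range(reps):
        K = math.exp(rng.gauss(mu_lnK, sd_lnK))      # m/day
        T = length * phi / (K * gradient)            # advective travel time, days
        early += T < t_crit
    p_h0 = early / reps
    # Simplified decision rule: pick the more probable hypothesis.  The
    # framework in the abstract instead fixes a significance level and
    # weighs case-specific costs of selecting the wrong hypothesis.
    return p_h0, ("H0: arrival before t_crit" if p_h0 > 0.5 else "H1: arrival after t_crit")
```

A field campaign that narrows the spread of ln K shrinks the probability mass on the wrong side of t_crit, which is the uncertainty-reduction objective cast in decision terms.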
Teaching Hypothesis Testing by Debunking a Demonstration of Telepathy.
ERIC Educational Resources Information Center
Bates, John A.
1991-01-01
Discusses a lesson designed to demonstrate hypothesis testing to introductory college psychology students. Explains that a psychology instructor demonstrated apparent psychic abilities to students. Reports that students attempted to explain the instructor's demonstrations through hypothesis testing and revision. Provides instructions on performing…
2014-10-06
to a subset Θ̃ of ℓ-dimensional Euclidean space. The sub-σ-algebra F_n = F_n^X = σ(X_1^n) of F is generated by the stochastic process X_1^n = (X_1, ...). ...the developed asymptotic hypothesis testing theory is based on the SLLN and rates of convergence in the strong law for the LLR processes, specifically by... Write λ_n(θ, θ̃) = log(dP_θ^n / dP_θ̃^n) = Σ_{k=1}^n log[ p_θ(X_k | X_1^{k-1}) / p_θ̃(X_k | X_1^{k-1}) ] for the log-likelihood ratio (LLR) process. Assume that there
Why Current Statistics of Complementary Alternative Medicine Clinical Trials is Invalid.
Pandolfi, Maurizio; Carreras, Giulia
2018-06-07
It is not sufficiently known that frequentist statistics cannot provide direct information on the probability that the research hypothesis tested is correct. The error resulting from this misunderstanding is compounded when the hypotheses under scrutiny have precarious scientific bases, as those of complementary alternative medicine (CAM) generally do. In such cases, it is mandatory to use inferential methods that take into account the prior probability that the hypothesis tested is true, such as Bayesian statistics. The authors show that, under such circumstances, no real statistical significance can be achieved in CAM clinical trials. In this respect, CAM trials involving human material are also hardly defensible from an ethical viewpoint.
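The authors' point can be made quantitative with Bayes' rule: the probability that a "significant" finding reflects a true hypothesis depends heavily on the prior. In this sketch the alpha and power values are conventional assumptions, not taken from the paper:

```python
def posterior_prob_true(prior, alpha=0.05, power=0.8):
    """P(hypothesis true | statistically significant result), by Bayes' rule.
    alpha is the false-positive rate, power the true-positive rate."""
    return power * prior / (power * prior + alpha * (1 - prior))
```

With a prior of 0.01, of the kind a scientifically implausible CAM hypothesis might warrant, a significant result leaves only about a 14% chance the hypothesis is true; with a prior of 0.5 it is about 94%. This is the sense in which frequentist significance alone says little about the hypothesis being correct.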
ERIC Educational Resources Information Center
Cobos, Pedro L.; Gutiérrez-Cobo, María J.; Morís, Joaquín; Luque, David
2017-01-01
In our study, we tested the hypothesis that feature-based and rule-based generalization involve different types of processes that may affect each other producing different results depending on time constraints and on how generalization is measured. For this purpose, participants in our experiments learned cue-outcome relationships that followed…
Lash, Ayhan Aytekin; Plonczynski, Donna J; Sehdev, Amikar
2011-01-01
To compare the inclusion and the influences of selected variables on hypothesis testing during the 1980s and 1990s. In spite of the emphasis on conducting inquiry consistent with the tenets of logical positivism, there have been no studies investigating the frequency and patterns of hypothesis testing in nursing research. The sample was obtained from Nursing Research, the research journal with the highest circulation during the study period. All quantitative studies published during the two decades, including briefs and historical studies, were included in the analyses. A retrospective design was used to select the sample: five years each from the 1980s and 1990s were randomly selected from the journal. Of the 582 studies, 517 met inclusion criteria. Findings suggest that there was a decline in the use of hypothesis testing in the last decades of the 20th century. Further research is needed to identify the factors that influence the conduct of research with hypothesis testing. Hypothesis testing in nursing research showed a steady decline from the 1980s to the 1990s. Research purposes of explanation and prediction/control increased the likelihood of hypothesis testing. Hypothesis testing strengthens the quality of quantitative studies, increases the generality of findings, and provides dependable knowledge. This is particularly true for quantitative studies that aim to explore, explain, and predict/control phenomena and/or test theories. The findings also have implications for doctoral programmes, the research preparation of nurse-investigators, and theory testing.
Modeling Aspects Of Nature Of Science To Preservice Elementary Teachers
NASA Astrophysics Data System (ADS)
Ashcraft, Paul
2007-01-01
Nature of science was modeled using guided inquiry activities in the university classroom with elementary education majors. A physical science content course initially used an Aristotelian model in which students discussed the relationship between distance from a constant radiation source and the amount of radiation received based on accepted ``truths'' or principles, and concluded that there was an inverse relationship. The class then became Galilean in nature, using the scientific method to test that hypothesis. Examining data, the class rejected their hypothesis and concluded that there is an inverse square relationship. Assignments given before and after the hypothesis testing show the students' misconceptions and their acceptance of scientifically acceptable conceptions. Answers on exam questions further support this conceptual change. Students spent less class time on the inverse square relationship later when examining electrostatic force, magnetic force, gravity, and planetary solar radiation because they related this particular experience to other physical relationships.
2011-01-01
Background Although many biological databases are applying semantic web technologies, meaningful biological hypothesis testing cannot be easily achieved. Database-driven high throughput genomic hypothesis testing requires both the capability of obtaining semantically relevant experimental data and that of performing relevant statistical testing on the retrieved data. Tissue Microarray (TMA) data are semantically rich and contain many biologically important hypotheses waiting for high throughput conclusions. Methods An application-specific ontology was developed for managing TMA and DNA microarray databases by semantic web technologies. Data were represented as Resource Description Framework (RDF) according to the framework of the ontology. Applications for hypothesis testing (Xperanto-RDF) for TMA data were designed and implemented by (1) formulating the syntactic and semantic structures of the hypotheses derived from TMA experiments, (2) formulating SPARQLs to reflect the semantic structures of the hypotheses, and (3) performing statistical tests with the result sets returned by the SPARQLs. Results When a user designs a hypothesis in Xperanto-RDF and submits it, the hypothesis can be tested against TMA experimental data stored in Xperanto-RDF. When we evaluated four previously validated hypotheses as an illustration, all the hypotheses were supported by Xperanto-RDF. Conclusions We demonstrated the utility of high throughput biological hypothesis testing. We believe that such preliminary investigation can be beneficial before performing highly controlled experiments. PMID:21342584
Testing the Model-Observer Similarity Hypothesis with Text-Based Worked Examples
ERIC Educational Resources Information Center
Hoogerheide, Vincent; Loyens, Sofie M. M.; Jadi, Fedora; Vrins, Anna; van Gog, Tamara
2017-01-01
Example-based learning is a very effective and efficient instructional strategy for novices. It can be implemented using text-based worked examples that provide a written demonstration of how to perform a task, or (video) modelling examples in which an instructor (the "model") provides a demonstration. The model-observer similarity (MOS)…
USDA-ARS's Scientific Manuscript database
Microbial-based inoculants have been reported to stimulate plant growth and nutrient uptake. However, their effect may vary depending on the growth stage when evaluated and on the chemical fertilizer applied. Thus, the objective of this study was to test the hypothesis that microbial-based inoculant...
Significance tests for functional data with complex dependence structure.
Staicu, Ana-Maria; Lahiri, Soumen N; Carroll, Raymond J
2015-01-01
We propose an L2-norm-based global testing procedure for the null hypothesis that multiple group mean functions are equal, for functional data with complex dependence structure. Specifically, we consider the setting of functional data with a multilevel structure of the form groups-clusters or subjects-units, where the unit-level profiles are spatially correlated within the cluster, and the cluster-level data are independent. Orthogonal series expansions are used to approximate the group mean functions and the test statistic is estimated using the basis coefficients. The asymptotic null distribution of the test statistic is developed, under mild regularity conditions. To our knowledge this is the first work that studies hypothesis testing, when data have such complex multilevel functional and spatial structure. Two small-sample alternatives, including a novel block bootstrap for functional data, are proposed, and their performance is examined in simulation studies. The paper concludes with an illustration of a motivating experiment.
Knowledge dimensions in hypothesis test problems
NASA Astrophysics Data System (ADS)
Krishnan, Saras; Idris, Noraini
2012-05-01
The reformation in statistics education over the past two decades has predominantly shifted the focus of statistical teaching and learning from procedural understanding to conceptual understanding. The emphasis of procedural understanding is on formulas and calculation procedures. Meanwhile, conceptual understanding emphasizes students knowing why they are using a particular formula or executing a specific procedure. In addition, the Revised Bloom's Taxonomy offers a two-dimensional framework to describe learning objectives, comprising the six revised cognition levels of the original Bloom's taxonomy and four knowledge dimensions. Depending on the level of complexity, the four knowledge dimensions essentially distinguish basic understanding from more connected understanding. This study identifies the factual, procedural and conceptual knowledge dimensions in hypothesis test problems. The hypothesis test, being an important tool for making inferences about a population from sample information, is taught in many introductory statistics courses. However, researchers find that students in these courses still have difficulty in understanding the underlying concepts of the hypothesis test. Past studies also show that even though students can perform the hypothesis testing procedure, they may not understand the rationale for executing these steps or know how to apply them in novel contexts. Besides knowing the procedural steps in conducting a hypothesis test, students must have fundamental statistical knowledge and a deep understanding of the underlying inferential concepts such as the sampling distribution and the central limit theorem. By identifying the knowledge dimensions of hypothesis test problems in this study, suitable instructional and assessment strategies can be developed in future to enhance students' learning of the hypothesis test as a valuable inferential tool.
Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dale; Selby, Neil
2012-08-14
Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event-screening hypothesis test (Fisher's and Tippett's tests). The standard error commonly used in the Ms:mb event-screening hypothesis test is not fully consistent with its physical basis. An improved standard error gives better agreement with the physical basis, correctly partitions error to include model error as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope (β = 1, Selby et al.), whereas the improved standard error 'fails to reject' H0.
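The two combination rules named here are standard and easy to sketch. The chi-square critical value in the usage note below is for α = 0.05; applying either rule to real Ms:mb and depth p-values would of course require the improved standard errors the abstract describes:

```python
import math

def fisher_combined(pvals):
    """Fisher's method: X2 = -2 * sum(ln p_i) ~ chi-square with 2k df under H0."""
    x2 = -2 * sum(math.log(p) for p in pvals)
    return x2, 2 * len(pvals)

def tippett_combined(pvals, alpha=0.05):
    """Tippett's method: reject H0 when min p_i < 1 - (1 - alpha)**(1/k)."""
    return min(pvals), 1 - (1 - alpha) ** (1 / len(pvals))
```

Combining illustrative p-values of 0.04 and 0.03 gives X2 ≈ 13.45 on 4 df, past the 9.49 critical value at the 5% level, while Tippett's threshold (≈ 0.0253) is not crossed by either p-value: the two rules can disagree on borderline multi-phenomenology evidence.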
Confidence bounds and hypothesis tests for normal distribution coefficients of variation
Steve P. Verrill; Richard A. Johnson
2007-01-01
For normally distributed populations, we obtain confidence bounds on a ratio of two coefficients of variation, provide a test for the equality of k coefficients of variation, and provide confidence bounds on a coefficient of variation shared by k populations. To develop these confidence bounds and test, we first establish that estimators based on Newton steps from n-...
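A simpler stand-in for the paper's Newton-step estimators is the large-sample delta-method bound for a single normal coefficient of variation; this sketch is an approximation under normality, not the authors' construction:

```python
import math

def cv_confidence_interval(mean, sd, n, z=1.96):
    """Approximate 95% large-sample confidence bounds for a normal
    coefficient of variation, via the delta method:
    se(cv_hat) ~= cv * sqrt((0.5 + cv**2) / n)."""
    cv = sd / mean
    se = cv * math.sqrt((0.5 + cv ** 2) / n)
    return cv - z * se, cv + z * se
```

For a sample with mean 10, standard deviation 2 and n = 100, the point estimate is cv = 0.2 with approximate bounds (0.171, 0.229); a test for equality of two CVs can be built the same way from the difference of estimates and its combined standard error.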
Chalfoun, A.D.; Martin, T.E.
2009-01-01
1. Predation is an important and ubiquitous selective force that can shape habitat preferences of prey species, but tests of alternative mechanistic hypotheses of habitat influences on predation risk are lacking. 2. We studied predation risk at nest sites of a passerine bird and tested two hypotheses based on theories of predator foraging behaviour. The total-foliage hypothesis predicts that predation will decline in areas of greater overall vegetation density by impeding cues for detection by predators. The potential-prey-site hypothesis predicts that predation decreases where predators must search more unoccupied potential nest sites. 3. Both observational data and results from a habitat manipulation provided clear support for the potential-prey-site hypothesis and rejection of the total-foliage hypothesis. Birds chose nest patches containing both greater total foliage and potential nest site density (which were correlated in their abundance) than at random sites, yet only potential nest site density significantly influenced nest predation risk. 4. Our results therefore provided a clear and rare example of adaptive nest site selection that would have been missed had structural complexity or total vegetation density been considered alone. 5. Our results also demonstrated that interactions between predator foraging success and habitat structure can be more complex than simple impedance or occlusion by vegetation. © 2008 British Ecological Society.
Schultheiss, Oliver C.
2013-01-01
Traditionally, implicit motives (i.e., non-conscious preferences for specific classes of incentives) are assessed through semantic coding of imaginative stories. The present research tested the marker-word hypothesis, which states that implicit motives are reflected in the frequencies of specific words. Using Linguistic Inquiry and Word Count (LIWC; Pennebaker et al., 2001), Study 1 identified word categories that converged with a content-coding measure of the implicit motives for power, achievement, and affiliation in picture stories collected in German and US student samples, showed discriminant validity with self-reported motives, and predicted well-validated criteria of implicit motives (gender difference for the affiliation motive; in interaction with personal-goal progress: emotional well-being). Study 2 demonstrated LIWC-based motive scores' causal validity by documenting their sensitivity to motive arousal. PMID:24137149
Deterministic versus evidence-based attitude towards clinical diagnosis.
Soltani, Akbar; Moayyeri, Alireza
2007-08-01
Generally, two basic classes have been proposed for the scientific explanation of events. Deductive reasoning emphasizes reaching conclusions about a hypothesis based on verification of universal laws pertinent to that hypothesis, while inductive or probabilistic reasoning explains an event by calculating the probability that the event is related to a given hypothesis. Although both types of reasoning are used in clinical practice, evidence-based medicine stresses the advantages of the second approach for most instances of medical decision making. While 'probabilistic or evidence-based' reasoning seems at first glance to involve more mathematical formulas, this attitude is more dynamic and less imprisoned by the rigidity of mathematics than the 'deterministic or mathematical attitude'. In the field of medical diagnosis, appreciation of uncertainty in clinical encounters and utilization of the likelihood ratio as a measure of accuracy seem to be the most important characteristics of evidence-based doctors. Other characteristics include the use of series of tests for refining probability, changing diagnostic thresholds in light of external evidence and the nature of the disease, and attention to confidence intervals to estimate the uncertainty of research-derived parameters.
1996-09-01
Generalized Likelihood Ratio (GLR) and voting techniques. The third class consisted of multiple hypothesis filter detectors, specifically the MMAE. The...vector version, versus a tensor if we use the matrix version of the power spectral density estimate. Using this notation, we will derive an...as MATLAB, have an intrinsic sample covariance computation available, which makes this method quite easy to implement. In practice, the mean for the
2008-09-01
in the Input Hypothesis (IH) of Krashen (1981) and the Expectancy Hypothesis (EH) of Oller and Richard-Amato (1983). Situations where readers attempt...hypothesizing what will come next can be conceptualized in terms of grammar-based expectancies (Oller, 1983). New elements are interspersed with known...Testing. Applied Language Learning 1993, 4. Clifford, R.; Granoien, N.; Jones, D. A.; Shen, W.; Weinstein, C. J. The Effect of Text Difficulty on
Profitability of HMOs: does non-profit status make a difference?
Bryce, H J
1994-06-01
This study, based on 163 HMOs, tests the hypothesis that the rates of return on assets (ROA) are not significantly different between for-profit and non-profit HMOs. It finds no statistical support for rejecting the hypothesis. The marked similarity in profitability is fully explained by analyzing methods of cost control and accounting, operational incentives and constraints, and price determination. The paper concludes that profitability is not a defining distinction in the operation of managed care.
ON THE SUBJECT OF HYPOTHESIS TESTING
Ugoni, Antony
1993-01-01
In this paper, the definition of a statistical hypothesis is discussed, along with the considerations that need to be addressed when testing a hypothesis. In particular, the p-value, significance level, and power of a test are reviewed. Finally, the often-quoted confidence interval is given a brief introduction. PMID:17989768
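The quantities this review covers (p-value, significance level, power, confidence interval) can be illustrated with a minimal one-sample z-test assuming a known population standard deviation (a textbook sketch, not code from the paper; the 1.96 critical value corresponds to a two-sided alpha of 0.05):

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_test(xbar, mu0, sigma, n, alpha=0.05):
    # Two-sided one-sample z-test of H0: mean = mu0, with known sigma.
    se = sigma / math.sqrt(n)
    z = (xbar - mu0) / se
    p_value = 2.0 * (1.0 - norm_cdf(abs(z)))      # two-sided p-value
    ci = (xbar - 1.96 * se, xbar + 1.96 * se)     # 95% confidence interval
    return z, p_value, p_value < alpha, ci

def power(mu_true, mu0, sigma, n):
    # Probability of rejecting H0 at alpha = 0.05 (two-sided)
    # when the true mean is mu_true.
    se = sigma / math.sqrt(n)
    shift = (mu_true - mu0) / se
    return (1.0 - norm_cdf(1.96 - shift)) + norm_cdf(-1.96 - shift)
```

For example, a sample of n = 100 with mean 52, sigma = 10, and H0: mean = 50 gives z = 2 and a p-value of about 0.046, so H0 is rejected at the 5% level, while the power against a true mean of 52 is only about 0.52.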
Some consequences of using the Horsfall-Barratt scale for hypothesis testing
USDA-ARS?s Scientific Manuscript database
Comparing treatment effects by hypothesis testing is a common practice in plant pathology. Nearest percent estimates (NPEs) of disease severity were compared to Horsfall-Barratt (H-B) scale data to explore whether there was an effect of assessment method on hypothesis testing. A simulation model ba...
Hypothesis Testing in the Real World
ERIC Educational Resources Information Center
Miller, Jeff
2017-01-01
Critics of null hypothesis significance testing suggest that (a) its basic logic is invalid and (b) it addresses a question that is of no interest. In contrast to (a), I argue that the underlying logic of hypothesis testing is actually extremely straightforward and compelling. To substantiate that, I present examples showing that hypothesis…
Sun, Hao; Zhou, Chi; Huang, Xiaoqin; Lin, Keqin; Shi, Lei; Yu, Liang; Liu, Shuyuan; Chu, Jiayou; Yang, Zhaoqing
2013-01-01
Tai people are widely distributed in Thailand, Laos and southwestern China and are a large population of Southeast Asia. Although most anthropologists and historians agree that modern Tai people are from southwestern China and northern Thailand, the place from which they historically migrated remains controversial. Three popular hypotheses have been proposed: a northern origin, a southern origin, or an indigenous origin. We compared the genetic relationships between the Tai in China and their "siblings" to test the different hypotheses by analyzing 10 autosomal microsatellites. The genetic data of 916 samples from 19 populations were analyzed in this survey. The autosomal STR data from 15 of the 19 populations came from our previous study (Lin et al., 2010); 194 samples from four additional populations were genotyped in this study: Han (Yunnan), Dai (Dehong), Dai (Yuxi) and Mongolian. The results of genetic distance comparisons, genetic structure analyses and admixture analyses all indicate that the populations associated with the northern origin hypothesis have large genetic distances from, and are clearly differentiated from, the Tai. The simulation-based ABC analysis also indicates this: the posterior probability of the northern origin hypothesis is just 0.04 [95%CI: (0.01-0.06)]. Conversely, genetic relationships were very close between the Tai and the populations associated with a southern or indigenous origin. Simulation-based ABC analyses were also used to distinguish the southern origin hypothesis from the indigenous origin hypothesis. The results indicate that the posterior probability of the southern origin hypothesis [0.640, 95%CI: (0.524-0.757)] is greater than that of the indigenous origin hypothesis [0.324, 95%CI: (0.211-0.438)]. We therefore propose that the genetic evidence does not support the hypothesis of a northern origin.
Our genetic data indicate that the southern origin hypothesis is statistically more probable than the other two, suggesting that the Tai people most likely originated from southern China.
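The simulation-based ABC model choice used in this study can be sketched as a toy rejection sampler (the simulators, summary statistic, and tolerance below are hypothetical stand-ins; real analyses simulate demographic models and compare many summary statistics of the STR data):

```python
import random

def abc_model_choice(observed_stat, simulators, n_sims=20000, tol=0.05):
    """Approximate Bayesian computation by rejection: simulate each model
    with equal prior weight, keep runs whose summary statistic falls within
    `tol` of the observed value, and estimate posterior model probabilities
    as shares of the accepted runs."""
    accepted = {name: 0 for name in simulators}
    for _ in range(n_sims):
        name = random.choice(list(simulators))
        if abs(simulators[name]() - observed_stat) < tol:
            accepted[name] += 1
    total = sum(accepted.values()) or 1
    return {name: count / total for name, count in accepted.items()}

random.seed(0)
# Toy models: each origin hypothesis implies a different expected value
# of some genetic-distance summary statistic (numbers are illustrative).
models = {
    "southern": lambda: random.gauss(0.30, 0.10),
    "northern": lambda: random.gauss(0.70, 0.10),
}
posterior = abc_model_choice(observed_stat=0.32, simulators=models)
```

Because the observed statistic sits close to the "southern" model's predictions, that model collects nearly all accepted simulations, mirroring how the study's ABC analysis assigns the northern origin hypothesis a posterior probability of only 0.04.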
ERIC Educational Resources Information Center
Kwon, Yong-Ju; Jeong, Jin-Su; Park, Yun-Bok
2006-01-01
The purpose of the present study was to test the hypothesis that student's abductive reasoning skills play an important role in the generation of hypotheses on pendulum motion tasks. To test the hypothesis, a hypothesis-generating test on pendulum motion, and a prior-belief test about pendulum motion were developed and administered to a sample of…
Conditional Covariance-Based Subtest Selection for DIMTEST
ERIC Educational Resources Information Center
Froelich, Amy G.; Habing, Brian
2008-01-01
DIMTEST is a nonparametric hypothesis-testing procedure designed to test the assumptions of a unidimensional and locally independent item response theory model. Several previous Monte Carlo studies have found that using linear factor analysis to select the assessment subtest for DIMTEST results in a moderate to severe loss of power when the exam…
An algorithm for testing the efficient market hypothesis.
Boboc, Ioana-Andreea; Dinică, Mihai-Cristian
2013-01-01
The objective of this research is to examine the efficiency of the EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH).
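Two of the indicators named above can be sketched with their standard textbook definitions, together with a hypothetical crossover rule of the kind such a genetic algorithm would parameterize (the rule and default periods are illustrative, not the paper's optimized strategy):

```python
def ema(prices, n):
    # Exponential moving average with smoothing factor alpha = 2 / (n + 1).
    alpha = 2.0 / (n + 1)
    out = [prices[0]]
    for price in prices[1:]:
        out.append(alpha * price + (1 - alpha) * out[-1])
    return out

def rsi(prices, n=14):
    # Relative Strength Index over the first n price changes.
    changes = [b - a for a, b in zip(prices, prices[1:])]
    avg_gain = sum(c for c in changes[:n] if c > 0) / n
    avg_loss = sum(-c for c in changes[:n] if c < 0) / n
    if avg_loss == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)

def crossover_signal(prices, short=12, long=26):
    # Hypothetical rule: recommend buying when the short EMA is above the long EMA.
    return "buy" if ema(prices, short)[-1] > ema(prices, long)[-1] else "sell"
```

A genetic algorithm in this setting would search over parameters such as the EMA periods and RSI thresholds for the rule set that maximizes profit in the training window.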
The Influence of a Desirability Halo Effect on Ratings of Institutional Environment
ERIC Educational Resources Information Center
Mitchell, James V., Jr.
1973-01-01
A rationale is provided for hypothesizing that a counterpart of the social desirability variable influences environmental ratings based on student perceptions, and a test is made of the hypothesis. (Editor)
Krypotos, Angelos-Miltiadis; Klugkist, Irene; Engelhard, Iris M.
2017-01-01
Threat conditioning procedures have allowed the experimental investigation of the pathogenesis of Post-Traumatic Stress Disorder. The findings of these procedures have also provided stable foundations for the development of relevant intervention programs (e.g. exposure therapy). Statistical inference of threat conditioning procedures is commonly based on p-values and Null Hypothesis Significance Testing (NHST). Nowadays, however, there is a growing concern about this statistical approach, as many scientists point to the various limitations of p-values and NHST. As an alternative, the use of Bayes factors and Bayesian hypothesis testing has been suggested. In this article, we apply this statistical approach to threat conditioning data. In order to enable the easy computation of Bayes factors for threat conditioning data we present a new R package named condir, which can be used either via the R console or via a Shiny application. This article provides both a non-technical introduction to Bayesian analysis for researchers using the threat conditioning paradigm, and the necessary tools for computing Bayes factors easily. PMID:29038683
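As a minimal illustration of what a Bayes factor is (not the default JZS Bayes factors that condir computes, which integrate over a prior on effect size), consider two point hypotheses about a normal mean with known standard deviation:

```python
import math

def normal_pdf(x, mu, sigma):
    # Density of the normal distribution with mean mu and std dev sigma.
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def bayes_factor_10(data, mu0, mu1, sigma):
    """BF10 = p(data | H1: mean = mu1) / p(data | H0: mean = mu0)
    for i.i.d. normal data with known sigma, accumulated on the
    log scale for numerical stability."""
    log_bf = sum(
        math.log(normal_pdf(x, mu1, sigma)) - math.log(normal_pdf(x, mu0, sigma))
        for x in data
    )
    return math.exp(log_bf)
```

Unlike a p-value, the Bayes factor quantifies relative evidence: BF10 > 1 favors H1, BF10 < 1 favors H0, and values near 1 indicate the data are uninformative between the two.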
The late Neandertal supraorbital fossils from Vindija Cave, Croatia: a biased sample?
Ahern, James C M; Lee, Sang-Hee; Hawks, John D
2002-09-01
The late Neandertal sample from Vindija (Croatia) has been described as transitional between the earlier Central European Neandertals from Krapina (Croatia) and modern humans. However, the morphological differences indicating this transition may rather be the result of different sex and/or age compositions between the samples. This study tests the hypothesis that the metric differences between the Krapina and Vindija supraorbital samples are due to sampling bias. We focus upon the supraorbital region because past studies have posited this region as particularly indicative of the Vindija sample's transitional nature. Furthermore, the supraorbital region varies significantly with both age and sex. We analyzed four chords and two derived indices of supraorbital torus form as defined by Smith & Ranyard (1980, Am. J. Phys. Anthrop. 93, pp. 589-610). For each variable, we analyzed relative sample bias of the Krapina and Vindija samples using three sampling methods. In order to test the hypothesis that the Vindija sample contains an over-representation of females and/or young while the Krapina sample is normal or also female/young biased, we determined the probability of drawing a sample of the same size as and with a mean equal to or less than Vindija's from a Krapina-based population. In order to test the hypothesis that the Vindija sample is female/young biased while the Krapina sample is male/old biased, we determined the probability of drawing a sample of the same size as and with a mean equal to or less than Vindija's from a generated population whose mean is halfway between Krapina's and Vindija's. Finally, in order to test the hypothesis that the Vindija sample is normal while the Krapina sample contains an over-representation of males and/or old, we determined the probability of drawing a sample of the same size as and with a mean equal to or greater than Krapina's from a Vindija-based population.
Unless we assume that the Vindija sample is female/young and the Krapina sample is male/old biased, our results falsify the hypothesis that the metric differences between the Krapina and Vindija samples are due to sample bias.
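The sampling probabilities described in this abstract can be estimated by Monte Carlo (a generic sketch; the population values, sample size, and comparison mean below are hypothetical stand-ins for the torus measurements, not the paper's data):

```python
import random

def prob_mean_at_most(population, n, observed_mean, n_draws=20000, rng=None):
    """Monte Carlo estimate of the probability that a random sample of
    size n drawn (with replacement) from `population` has a mean less
    than or equal to `observed_mean`."""
    rng = rng or random.Random(0)
    hits = 0
    for _ in range(n_draws):
        sample = [rng.choice(population) for _ in range(n)]
        if sum(sample) / n <= observed_mean:
            hits += 1
    return hits / n_draws
```

A small estimated probability would indicate that a sample with a mean as low as the observed one is unlikely to arise from the reference population by chance alone, which is the logic behind each of the three bias hypotheses tested above.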
ERIC Educational Resources Information Center
Spires, Hiller A.; Rowe, Jonathan P.; Mott, Bradford W.; Lester, James C.
2011-01-01
Targeted as a highly desired skill for contemporary work and life, problem solving is central to game-based learning research. In this study, middle grade students achieved significant learning gains from gameplay interactions that required solving a science mystery based on microbiology content. Student trace data results indicated that effective…
Yang, Yang; DeGruttola, Victor
2016-01-01
Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients. PMID:22740584
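For reference, the classical univariate Bartlett statistic mentioned above can be written directly (this is the textbook variance-homogeneity form as a sketch; the paper's version operates on covariance matrices with robustly estimated, resampled moments):

```python
import math
from statistics import variance

def bartlett_statistic(groups):
    """Bartlett's test statistic for homogeneity of variances across k groups.
    Under the null hypothesis of equal variances it is approximately
    chi-squared distributed with k - 1 degrees of freedom."""
    k = len(groups)
    ns = [len(g) for g in groups]
    vs = [variance(g) for g in groups]            # unbiased sample variances
    N = sum(ns)
    pooled = sum((n - 1) * v for n, v in zip(ns, vs)) / (N - k)
    num = (N - k) * math.log(pooled) - sum(
        (n - 1) * math.log(v) for n, v in zip(ns, vs)
    )
    corr = 1 + (sum(1 / (n - 1) for n in ns) - 1 / (N - k)) / (3 * (k - 1))
    return num / corr
```

The statistic is zero when all group variances are equal and grows as they diverge; a resampling version, as in the paper, would recompute it on resampled standardized residuals to calibrate its null distribution.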
Washburne, Alex D.; Burby, Joshua W.; Lacker, Daniel; ...
2016-09-30
Systems as diverse as the interacting species in a community, alleles at a genetic locus, and companies in a market are characterized by competition (over resources, space, capital, etc) and adaptation. Neutral theory, built around the hypothesis that individual performance is independent of group membership, has found utility across the disciplines of ecology, population genetics, and economics, both because of the success of the neutral hypothesis in predicting system properties and because deviations from these predictions provide information about the underlying dynamics. However, most tests of neutrality are weak, based on static system properties such as species-abundance distributions or the number of singletons in a sample. Time-series data provide a window onto a system's dynamics, and should furnish tests of the neutral hypothesis that are more powerful for detecting deviations from neutrality and more informative about the type of competitive asymmetry that drives the deviation. Here, we present a neutrality test for time-series data. We apply this test to several microbial time-series and financial time-series and find that most of these systems are not neutral. Our test isolates the covariance structure of neutral competition, thus facilitating further exploration of the nature of asymmetry in the covariance structure of competitive systems. Much like neutrality tests from population genetics that use relative abundance distributions have enabled researchers to scan entire genomes for genes under selection, we anticipate our time-series test will be useful for quick significance tests of neutrality across a range of ecological, economic, and sociological systems for which time-series data are available. Future work can use our test to categorize and compare the dynamic fingerprints of particular competitive asymmetries (frequency dependence, volatility smiles, etc) to improve forecasting and management of complex adaptive systems.
Making Knowledge Delivery Failsafe: Adding Step Zero in Hypothesis Testing
ERIC Educational Resources Information Center
Pan, Xia; Zhou, Qiang
2010-01-01
Knowledge of statistical analysis is increasingly important for professionals in modern business. For example, hypothesis testing is one of the critical topics for quality managers and team workers in Six Sigma training programs. Delivering the knowledge of hypothesis testing effectively can be an important step for the incapable learners or…
Values, Norms, and Peer Effects on Weight Status.
Nie, Peng; Gwozdz, Wencke; Reisch, Lucia; Sousa-Poza, Alfonso
2017-01-01
This study uses data from the European Social Survey in order to test the Prinstein-Dodge hypothesis that posits that peer effects may be larger in collectivistic than in individualistic societies. When defining individualism and collectivism at the country level, our results show that peer effects on obesity are indeed larger in collectivistic than in individualistic societies. However, when defining individualism and collectivism with individual values based on the Shalom Schwartz universal values theory, we find little support for this hypothesis.
Increased Course Structure Improves Performance in Introductory Biology
Freeman, Scott; Haak, David; Wenderoth, Mary Pat
2011-01-01
We tested the hypothesis that highly structured course designs, which implement reading quizzes and/or extensive in-class active-learning activities and weekly practice exams, can lower failure rates in an introductory biology course for majors, compared with low-structure course designs that are based on lecturing and a few high-risk assessments. We controlled for 1) instructor effects by analyzing data from quarters when the same instructor taught the course, 2) exam equivalence with new assessments called the Weighted Bloom's Index and Predicted Exam Score, and 3) student equivalence using a regression-based Predicted Grade. We also tested the hypothesis that points from reading quizzes, clicker questions, and other “practice” assessments in highly structured courses inflate grades and confound comparisons with low-structure course designs. We found no evidence that points from active-learning exercises inflate grades or reduce the impact of exams on final grades. When we controlled for variation in student ability, failure rates were lower in a moderately structured course design and were dramatically lower in a highly structured course design. This result supports the hypothesis that active-learning exercises can make students more skilled learners and help bridge the gap between poorly prepared students and their better-prepared peers. PMID:21633066
Testing of Hypothesis in Equivalence and Non Inferiority Trials-A Concept.
Juneja, Atul; Aggarwal, Abha R; Adhikari, Tulsi; Pandey, Arvind
2016-04-01
Establishing the appropriate hypothesis is one of the important steps in carrying out statistical tests and analyses, and its understanding is important for interpreting the results of statistical analysis. The current communication attempts to explain the concept of testing of hypothesis in non-inferiority and equivalence trials, where the null hypothesis is just the reverse of what is set up for conventional superiority trials. It is similarly looked for rejection, to establish the fact that the researcher intends to prove. It is important to mention that equivalence or non-inferiority cannot be proved by accepting the null hypothesis of no difference. Hence, establishing the appropriate statistical hypothesis is extremely important for arriving at meaningful conclusions for the set objectives in research.
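The reversed null hypothesis can be made concrete with a normal-approximation non-inferiority test (a sketch with hypothetical means, standard error, and margin; the 1.645 critical value corresponds to a one-sided alpha of 0.05):

```python
def non_inferiority_test(mean_new, mean_ref, se_diff, margin, z_alpha=1.645):
    """One-sided non-inferiority test. The null hypothesis is that the new
    treatment is worse than the reference by at least `margin`
    (diff <= -margin); the alternative is non-inferiority (diff > -margin).
    H0 is rejected when the lower one-sided confidence bound for the
    difference lies above -margin."""
    diff = mean_new - mean_ref
    lower_bound = diff - z_alpha * se_diff
    z = (diff + margin) / se_diff      # test statistic for H0: diff = -margin
    return {"difference": diff, "lower_bound": lower_bound,
            "z": z, "non_inferior": lower_bound > -margin}
```

Note that a non-significant conventional test of "no difference" would say nothing here; non-inferiority is established only by rejecting the reversed null, exactly as the abstract emphasizes.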
ADOLESCENTS’ EXPOSURE TO COMMUNITY VIOLENCE: ARE NEIGHBORHOOD YOUTH ORGANIZATIONS PROTECTIVE?
Gardner, Margo; Brooks-Gunn, Jeanne
2011-01-01
Using data from the Project on Human Development in Chicago Neighborhoods (PHDCN), we identified a significant inverse association between the variety of youth organizations available at the neighborhood level and adolescents’ exposure to community violence. We examined two non-competing explanations for this finding. First, at the individual level, we tested the hypothesis that access to a greater variety of neighborhood youth organizations predicts adolescents’ participation in organized community-based activities, which, in turn, protects against community violence exposure. Second, at the neighborhood level, we tested the hypothesis that lower violent crime rates explain the inverse relation between neighborhood youth organization variety and community violence exposure. Our findings supported the latter of these two mechanisms. PMID:21666761
Increasing arousal enhances inhibitory control in calm but not excitable dogs
Bray, Emily E.; MacLean, Evan L.; Hare, Brian A.
2015-01-01
The emotional-reactivity hypothesis proposes that problem-solving abilities can be constrained by temperament, within and across species. One way to test this hypothesis is with the predictions of the Yerkes-Dodson law. The law posits that arousal level, a component of temperament, affects problem solving in an inverted U-shaped relationship: optimal performance is reached at intermediate levels of arousal and impeded by high and low levels. Thus, a powerful test of the emotional-reactivity hypothesis is to compare cognitive performance in dog populations that have been bred and trained based in part on their arousal levels. We therefore compared a group of pet dogs to a group of assistance dogs bred and trained for low arousal (N = 106) on a task of inhibitory control involving a detour response. Consistent with the Yerkes-Dodson law, assistance dogs, which began the test with lower levels of baseline arousal, showed improvements when arousal was artificially increased. In contrast, pet dogs, which began the test with higher levels of baseline arousal, were negatively affected when their arousal was increased. Furthermore, the dogs’ baseline levels of arousal, as measured in their rate of tail wagging, differed by population in the expected directions. Low-arousal assistance dogs showed the most inhibition in a detour task when humans eagerly encouraged them while more highly aroused pet dogs performed worst on the same task with strong encouragement. Our findings support the hypothesis that selection on temperament can have important implications for cognitive performance. PMID:26169659
Green, Mark A
2013-06-01
The equalisation hypothesis argues that during adolescence and early adulthood, inequalities in mortality decline and begin to even out. However, the evidence for this phenomenon is contested and based mainly on old data. This study examines how age-specific inequalities in mortality rates have changed over the past decade, during a time of widening health inequalities. To test this, mortality rates were calculated for deprivation quintiles in England, split by individual age and sex for three time periods (2002-2004, 2005-2007 and 2008-2010). The results showed evidence for equalisation, with a clear decline in the ratio of mortality rates during late adolescence. However, this decline was not accounted for by traditional explanations of the hypothesis. Overall, geographical inequalities were shown to be widening for the majority of ages, although some narrowing of patterns was observed.
Mourning dove hunting regulation strategy based on annual harvest statistics and banding data
Otis, D.L.
2006-01-01
Although managers should strive to base game bird harvest management strategies on mechanistic population models, the monitoring programs required to build and continuously update these models may not be in place. Alternatively, if estimates of total harvest and harvest rates are available, then population estimates derived from these harvest data can serve as the basis for hunting regulation decisions keyed to the population growth rates they imply. I present a statistically rigorous approach to regulation decision-making using a hypothesis-testing framework and an assumed set of 3 hunting regulation alternatives. I illustrate and evaluate the technique with historical data on the mid-continent mallard (Anas platyrhynchos) population. I evaluate the statistical properties of the hypothesis-testing framework using the best available data on mourning doves (Zenaida macroura), and I use these results to discuss practical implementation of the technique as an interim harvest strategy for mourning doves until reliable mechanistic population models and associated monitoring programs are developed.
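As a rough illustration of how a hypothesis-testing framework could drive a choice among 3 regulation alternatives, the sketch below tests whether the mean log growth rate (estimated from harvest-derived population estimates) differs from zero. The decision rule, the 3 package names, and the normal approximation are all hypothetical stand-ins for illustration, not Otis's actual procedure.

```python
import math

def choose_regulation(growth_rates, z_crit=1.645):
    """Pick one of 3 hypothetical hunting-regulation alternatives from annual
    population growth-rate estimates (lambda) derived from harvest data.
    Builds a ~90% normal-approximation CI for the mean log growth rate and
    asks whether it excludes 0 (a stable population)."""
    logs = [math.log(g) for g in growth_rates]
    n = len(logs)
    mean = sum(logs) / n
    var = sum((x - mean) ** 2 for x in logs) / (n - 1)
    half = z_crit * math.sqrt(var / n)
    if mean + half < 0:   # whole interval below 0: population declining
        return "restrictive"
    if mean - half > 0:   # whole interval above 0: population growing
        return "liberal"
    return "moderate"     # growth indistinguishable from stable
```

For example, a run of clearly declining estimates such as `[0.92, 0.94, 0.90, 0.93, 0.91]` selects the restrictive package under this sketch, while estimates scattered around 1.0 select the moderate one.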
Short-term earthquake forecasting based on an epidemic clustering model
NASA Astrophysics Data System (ADS)
Console, Rodolfo; Murru, Maura; Falcone, Giuseppe
2016-04-01
The application of rigorous statistical tools, with the aim of verifying any prediction method, requires a univocal definition of the hypothesis, or the model, characterizing the anomaly or precursor concerned, so that it can be objectively recognized in any circumstance and by any observer. This is mandatory in order to move beyond the old-fashioned approach consisting only of retrospective anecdotal study of past cases. A rigorous definition of an earthquake forecasting hypothesis should lead to the objective identification of particular sub-volumes (usually named alarm volumes) of the total space-time volume within which the probability of occurrence of strong earthquakes is higher than usual. Testing such a hypothesis requires the observation of a sufficient number of past cases on which a statistical analysis is possible. This analysis should aim to determine the rate at which the precursor has been followed (success rate) or not followed (false alarm rate) by the target seismic event, and the rate at which a target event has been preceded (alarm rate) or not preceded (failure rate) by the precursor. The binary table obtained from this kind of analysis leads to the definition of the model parameters that achieve the maximum number of successes and the minimum number of false alarms for a specific class of precursors. The mathematical tools suitable for this purpose include the Probability Gain and the R-Score, as well as popular plots such as the Molchan error diagram and the ROC diagram. Another tool for evaluating the validity of a forecasting method is the likelihood ratio (also named performance factor) of occurrence and non-occurrence of seismic events under different hypotheses. Whatever method is chosen for building up a new hypothesis, usually based on retrospective data, the final assessment of its validity should be carried out by a test on a new and independent set of observations.
The implementation of this step can be problematic for seismicity characterized by long-term recurrence. However, separating the database collected in the past into two sections (one on which the best fit of the parameters is carried out, and another on which the hypothesis is tested) can be a viable solution, known as retrospective-forward testing. In this study we show examples of application of the above-mentioned concepts to the analysis of the Italian catalog of instrumental seismicity, making use of an epidemic algorithm developed to model short-term clustering features. This model, for which the precursory anomaly is simply the occurrence of seismic activity, does not need the retrospective categorization of earthquakes as foreshocks, mainshocks and aftershocks. It was introduced more than 15 years ago and has been tested so far in a number of real cases. It is now run by several seismological centers around the world in forward, real-time mode for testing purposes.
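The success/false-alarm bookkeeping described above can be made concrete with a small function over the 2x2 alarm-event table. Note that naming conventions vary in the forecast-verification literature (for example, "false alarm rate" sometimes means b/(a+b) rather than b/(b+d)); the convention below is one common choice, not necessarily the authors' exact definitions.

```python
def forecast_scores(hits, false_alarms, misses, correct_negatives):
    """Verification scores from a binary alarm/event contingency table:
    hits (a): alarm issued and event occurred
    false_alarms (b): alarm issued, no event
    misses (c): no alarm, event occurred
    correct_negatives (d): no alarm, no event."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    n = a + b + c + d
    success_rate = a / (a + b)       # fraction of alarms followed by an event
    alarm_rate = a / (a + c)         # fraction of events preceded by an alarm
    failure_rate = c / (a + c)
    false_alarm_rate = b / (b + d)
    base_rate = (a + c) / n          # unconditional event probability
    return {
        "success_rate": success_rate,
        "alarm_rate": alarm_rate,
        "failure_rate": failure_rate,
        "false_alarm_rate": false_alarm_rate,
        # how much an alarm raises the event probability over the base rate
        "probability_gain": success_rate / base_rate,
        # Hanssen-Kuipers-style skill: hit rate minus false alarm rate
        "r_score": alarm_rate - false_alarm_rate,
    }
```

With 8 hits, 2 false alarms, 2 misses and 88 correct negatives, the base rate is 0.1 and the success rate 0.8, so the probability gain is 8: an alarm makes the target event eight times more likely than usual.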
Enriching plausible new hypothesis generation in PubMed.
Baek, Seung Han; Lee, Dahee; Kim, Minjoo; Lee, Jong Ho; Song, Min
2017-01-01
Most earlier studies in the field of literature-based discovery have adopted Swanson's ABC model, which links pieces of knowledge entailed in disjoint literatures. However, the issue of their practicability remains unsolved, since most of them did not deal with the context surrounding the discovered associations and were usually not accompanied by clinical confirmation. In this study, we propose a method that expands and elaborates an existing hypothesis using advanced text mining techniques for capturing context. We extend the ABC model to allow for multiple B terms with various biological types. Using the proposed method, we were able to concretize a specific, metabolite-related hypothesis with abundant contextual information. Starting from explaining the relationship between lactosylceramide and arterial stiffness, the hypothesis was extended to suggest a potential pathway consisting of lactosylceramide, nitric oxide, malondialdehyde, and arterial stiffness. Evaluation by domain experts showed that it is clinically valid. The proposed method is designed to provide plausible candidates for the concretized hypothesis, based on extracted heterogeneous entities and detailed relation information, along with a reliable ranking criterion. Statistical tests conducted collaboratively with biomedical experts establish the validity and practical usefulness of the method, unlike previous studies. Applied to other cases, the proposed method should help biologists support an existing hypothesis and readily anticipate the logical process within it.
Try and Prove It: An Exercise in the Logic of Science.
ERIC Educational Resources Information Center
Bady, Richard J.; Enyeart, Morris A.
1978-01-01
Presents a classroom activity which can be used both to assess students' modes of thought and to clarify the logic of hypothesis testing. The task is based on Piaget's theories of cognitive development. (HM)
Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A
2015-05-01
Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. Copyright © 2015 Elsevier Ltd. All rights reserved.
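The difference between a 0D (pointwise) interval and a 1D (whole-trajectory) interval can be sketched with a max-statistic bootstrap band, one non-parametric way to obtain simultaneous coverage over an entire curve. This is an illustrative sketch in the spirit the paper describes, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_band(Y, n_boot=2000, alpha=0.05):
    """Simultaneous (1D) bootstrap confidence band for the mean trajectory.
    Y: (n_subjects, n_timepoints). Uses the max-statistic approach: the
    critical value is the bootstrap quantile of the maximum |t| over all
    time points, so the band covers the whole curve at level 1-alpha,
    unlike a pointwise (0D-style) interval."""
    n, _ = Y.shape
    mean = Y.mean(axis=0)
    sd = Y.std(axis=0, ddof=1)
    max_stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)           # resample subjects, not points
        m_b = Y[idx].mean(axis=0)
        s_b = Y[idx].std(axis=0, ddof=1)
        max_stats[b] = np.max(np.abs(m_b - mean) / (s_b / np.sqrt(n)))
    crit = np.quantile(max_stats, 1 - alpha)  # simultaneous critical value
    half = crit * sd / np.sqrt(n)
    return mean - half, mean + half
```

Because the critical value comes from the maximum statistic over all time points, the resulting band is necessarily wider than a pointwise 0D interval built from the same data, which is exactly the bias the paper describes when 0D procedures are applied to 1D trajectories.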
Wight, Vanessa; Kaushal, Neeraj; Waldfogel, Jane; Garfinkel, Irv
2014-01-02
This paper examines the association between poverty and food insecurity among children, using two different definitions of poverty-the official poverty measure (OPM) and the new supplemental poverty measure (SPM) of the Census Bureau, which is based on a more inclusive definition of family resources and needs. Our analysis is based on data from the 2001-11 Current Population Survey and shows that food insecurity and very low food security among children decline as income-to-needs ratio increases. The point estimates show that the associations are stronger as measured by the new supplemental measure of income-to-needs ratio than when estimated through the official measure. Statistical tests reject the hypothesis that poor households' odds of experiencing low food security are the same whether the SPM or OPM measure is used; but the tests do not reject the hypothesis when very low food security is the outcome.
An Exercise for Illustrating the Logic of Hypothesis Testing
ERIC Educational Resources Information Center
Lawton, Leigh
2009-01-01
Hypothesis testing is one of the more difficult concepts for students to master in a basic, undergraduate statistics course. Students often are puzzled as to why statisticians simply don't calculate the probability that a hypothesis is true. This article presents an exercise that forces students to lay out on their own a procedure for testing a…
ERIC Educational Resources Information Center
Wilcox, Rand R.; Serang, Sarfaraz
2017-01-01
The article provides perspectives on p values, null hypothesis testing, and alternative techniques in light of modern robust statistical methods. Null hypothesis testing and "p" values can provide useful information provided they are interpreted in a sound manner, which includes taking into account insights and advances that have…
Hypothesis Testing Using Spatially Dependent Heavy Tailed Multisensor Data
2014-12-01
Office of Research, 113 Bowne Hall, Syracuse, NY 13244-1200. ABSTRACT: HYPOTHESIS TESTING USING SPATIALLY DEPENDENT HEAVY-TAILED MULTISENSOR DATA. Report ... consistent with the null hypothesis of linearity and can be used to estimate the distribution of a test statistic that can discriminate between the null ... Test for nonlinearity. Histogram is generated using the surrogate data. The statistic of the original time series is represented by the solid line
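The surrogate-data test for nonlinearity alluded to in these report fragments can be sketched as follows: phase-randomized surrogates share the original series' power spectrum (the linear Gaussian null), so the histogram of a nonlinearity statistic over many surrogates gives a null distribution against which the original statistic (the "solid line") is compared. The time-reversal-asymmetry statistic used here is a standard choice for this kind of test, not necessarily the report's.

```python
import numpy as np

rng = np.random.default_rng(1)

def phase_randomized_surrogate(x):
    """Surrogate series with the same power spectrum (hence the same linear
    autocorrelation) as x but randomized Fourier phases, consistent with
    the null hypothesis of a linear Gaussian process."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2 * np.pi, len(X))
    phases[0] = 0.0                 # keep the mean (DC bin)
    if len(x) % 2 == 0:
        phases[-1] = 0.0            # Nyquist bin must stay real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def rev_asym(x):
    """Time-reversal asymmetry: a simple statistic that is ~0 for
    time-reversible (linear Gaussian) processes."""
    d = x[1:] - x[:-1]
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

def nonlinearity_test(x, n_sur=99):
    """Rank-based p-value of the original statistic within the surrogate
    null distribution (small p = evidence against linearity)."""
    stat = rev_asym(x)
    sur = np.array([rev_asym(phase_randomized_surrogate(x))
                    for _ in range(n_sur)])
    p = (np.sum(np.abs(sur) >= abs(stat)) + 1) / (n_sur + 1)
    return stat, p
```

A sawtooth (slow rise, sharp fall) is strongly time-asymmetric, so the test rejects the linear null for it, while the surrogates themselves reproduce its spectrum exactly.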
2016-10-27
• Ask a research question • Test a hypothesis • Measure outcomes of ... • Implement practice change based on best evidence • Generate new ... doing the best thing, right, and continuously? • Monitor evidence-based practice change and outcomes continuously • Modify practice based ... Lynn Gallagher-Ford; Center for Transdisciplinary Evidence-based Practice ... Non-Research vs Research: It's all about the question
The role of responsibility and fear of guilt in hypothesis-testing.
Mancini, Francesco; Gangemi, Amelia
2006-12-01
Recent theories argue that both perceived responsibility and fear of guilt increase obsessive-like behaviours. We propose that hypothesis-testing might account for this effect: both perceived responsibility and fear of guilt would influence subjects' hypothesis-testing by inducing a prudential style. This style implies focusing on and confirming the worst hypothesis, and reiterating the testing process. In our experiment, we manipulated the responsibility and fear of guilt of 236 normal volunteers who performed a deductive task. The results show that perceived responsibility is the main factor influencing individuals' hypothesis-testing; fear of guilt, however, has a significant additive effect. Guilt-fearing participants preferred to carry on with the diagnostic process even when faced with initially favourable evidence, whereas participants in the responsibility condition did so only when confronted with unfavourable evidence. Implications for the understanding of obsessive-compulsive disorder (OCD) are discussed.
Effects of Instructional Design with Mental Model Analysis on Learning.
ERIC Educational Resources Information Center
Hong, Eunsook
This paper presents a model for systematic instructional design that includes mental model analysis together with the procedures used in developing computer-based instructional materials in the area of statistical hypothesis testing. The instructional design model is based on the premise that the objective for learning is to achieve expert-like…
ERIC Educational Resources Information Center
Lehman, Melissa; Smith, Megan A.; Karpicke, Jeffrey D.
2014-01-01
We tested the predictions of 2 explanations for retrieval-based learning: while the elaborative retrieval hypothesis assumes that the retrieval of studied information promotes the generation of semantically related information, which aids in later retrieval (Carpenter, 2009), the episodic context account proposed by Karpicke, Lehman, and Aue (in…
Phonological Bases for L2 Morphological Learning
ERIC Educational Resources Information Center
Hu, Chieh-Fang
2010-01-01
Two experiments examined the hypothesis that L1 phonological awareness plays a role in children's ability to extract morphological patterns of English as L2 from the auditory input. In Experiment 1, 84 Chinese-speaking third graders were tested on whether they extracted the alternation pattern between the base and the derived form (e.g.,…
The Influence of Performance-Based Accountability on the Distribution of Teacher Salary Increases
ERIC Educational Resources Information Center
Bifulco, Robert
2010-01-01
This study examines how aspects of a district's institutional and policy environment influence the distribution of teacher salary increases. The primary hypothesis tested is that statewide performance-based accountability policies influence the extent to which districts backload teacher salary increases. I use data on teacher salaries from the…
Effects of Inquiry-Based Agriscience Instruction on Student Scientific Reasoning
ERIC Educational Resources Information Center
Thoron, Andrew C.; Myers, Brian E.
2012-01-01
The purpose of this study was to determine the effect of inquiry-based agriscience instruction on student scientific reasoning. Scientific reasoning is defined as the use of the scientific method and of inductive and deductive reasoning to develop and test hypotheses. Developing scientific reasoning skills can provide learners with a connection to the…
Executive functioning and general cognitive ability in pregnant women and matched controls.
Onyper, Serge V; Searleman, Alan; Thacher, Pamela V; Maine, Emily E; Johnson, Alicia G
2010-11-01
The current study compared the performances of pregnant women with education- and age-matched controls on a variety of measures that assessed perceptual speed, short-term and working memory capacity, subjective memory complaints, sleep quality, level of fatigue, executive functioning, episodic and prospective memory, and crystallized and fluid intelligence. A primary purpose was to test the hypothesis of Henry and Rendell (2007) that pregnancy-related declines in cognitive functioning would be especially evident in tasks that place a high demand on executive processes. We also investigated a parallel hypothesis: that the pregnant women would experience a broad-based reduction in cognitive capability. Very limited support was found for the executive functioning hypothesis. Pregnant women scored lower only on the measure of verbal fluency (Controlled Oral Word Association Test, COWAT) but not on the Wisconsin Card Sorting Task or on any working memory measures. Furthermore, group differences in COWAT performance disappeared after controlling for verbal IQ (Shipley vocabulary). In addition, there was no support for the general decline hypothesis. We conclude that pregnancy-associated differences in performance observed in the current study were relatively mild and rarely reached either clinical or practical significance.
Varying execution discipline to increase performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, P.L.; Maccabe, A.B.
1993-12-22
This research investigates the relationship between execution discipline and performance. The hypothesis has two parts: 1. Different execution disciplines exhibit different performance for different computations, and 2. These differences can be effectively predicted by heuristics. A machine model is developed that can vary its execution discipline; that is, the model can execute a given program using either the control-driven, data-driven or demand-driven execution discipline. This model is referred to as a "variable-execution-discipline" machine. The instruction set for the model is the Program Dependence Web (PDW). The first part of the hypothesis will be tested by simulating the execution of the machine model on a suite of computations, based on the Livermore Fortran Kernel (LFK) Test (a.k.a. the Livermore Loops), using all three execution disciplines. Heuristics are developed to predict relative performance. These heuristics predict (a) the execution time under each discipline for one iteration of each loop and (b) the number of iterations taken by that loop; the heuristics then combine those predictions into a prediction for the execution of the entire loop. Similar calculations are performed for branch statements. The second part of the hypothesis will be tested by comparing the results of the simulated execution with the predictions produced by the heuristics. If the hypothesis is supported, then the door is open for the development of machines that can vary execution discipline to increase performance.
ERIC Educational Resources Information Center
Amundsen, Ellen J.; Ravndal, Edle
2010-01-01
Aim: To test whether the school-based Olweus prevention programme against bullying may have lasting effects on substance use, a hypothesis based on the characteristics of bullies having misconduct behaviour associated with substance use. Methods: The Olweus programme was introduced from grades 7 through 9 in four schools and monitored up to grade…
Firsov, Mikhail L; Donner, Kristian; Govardovskii, Victor I
2002-01-01
Thermal activation of the visual pigment constitutes a fundamental constraint on visual sensitivity. Its electrical correlate in the membrane current of dark-adapted rods is the random occurrence of discrete ‘dark events’ indistinguishable from responses to single photons. It has been proposed that thermal activation occurs in a small subpopulation of rhodopsin molecules where the Schiff base linking the chromophore to the protein part is unprotonated. On this hypothesis, rates of thermal activation should increase strongly with rising pH. The hypothesis has been tested by measuring the effect of pH changes on the frequency of discrete dark events in red rods of the common toad Bufo bufo. Dark noise was recorded from isolated rods using the suction pipette technique. Changes in cytoplasmic pH upon manipulations of extracellular pH were quantified by measuring, using fast single-cell microspectrophotometry, the pH-dependent metarhodopsin I-metarhodopsin II equilibrium and subsequent metarhodopsin III formation. These measurements show that, in the conditions of the electrophysiological experiments, changing perfusion pH from 6.5 to 9.3 resulted in a cytoplasmic pH shift from 7.6 to 8.5 that was readily sensed by the rhodopsin. This shift, which implies an 8-fold decrease in cytoplasmic [H+], did not increase the rate of dark events. The results contradict the hypothesis that thermal pigment activation depends on prior deprotonation of the Schiff base. PMID:11897853
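The "8-fold decrease in cytoplasmic [H+]" quoted above follows directly from the reported pH shift, since pH is the negative base-10 logarithm of the hydrogen-ion concentration:

```python
# pH = -log10([H+]), so a cytoplasmic pH rise from 7.6 to 8.5
# lowers [H+] by a factor of 10**(8.5 - 7.6) ~= 7.9, i.e. the
# roughly 8-fold decrease stated in the abstract.
fold_decrease = 10 ** (8.5 - 7.6)
print(round(fold_decrease, 1))  # 7.9
```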
Conceptual biology, hypothesis discovery, and text mining: Swanson's legacy.
Bekhuis, Tanja
2006-04-03
Innovative biomedical librarians and information specialists who want to expand their roles as expert searchers need to know about profound changes in biology and parallel trends in text mining. In recent years, conceptual biology has emerged as a complement to empirical biology. This is partly in response to the availability of massive digital resources such as the network of databases for molecular biologists at the National Center for Biotechnology Information. Developments in text mining and hypothesis discovery systems based on the early work of Swanson, a mathematician and information scientist, are coincident with the emergence of conceptual biology. Very little has been written to introduce biomedical digital librarians to these new trends. In this paper, background for data and text mining, as well as for knowledge discovery in databases (KDD) and in text (KDT) is presented, then a brief review of Swanson's ideas, followed by a discussion of recent approaches to hypothesis discovery and testing. 'Testing' in the context of text mining involves partially automated methods for finding evidence in the literature to support hypothetical relationships. Concluding remarks follow regarding (a) the limits of current strategies for evaluation of hypothesis discovery systems and (b) the role of literature-based discovery in concert with empirical research. Report of an informatics-driven literature review for biomarkers of systemic lupus erythematosus is mentioned. Swanson's vision of the hidden value in the literature of science and, by extension, in biomedical digital databases, is still remarkably generative for information scientists, biologists, and physicians.
Testing a single regression coefficient in high dimensional linear models
Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling
2017-01-01
In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively. PMID:28663668
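The FDR-controlling multiple-testing step can be illustrated with the Benjamini-Hochberg step-up procedure. The paper says only that the false discovery rate is controlled at the nominal level, so BH is shown here as the standard choice rather than as the authors' confirmed implementation.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: with m sorted p-values
    p_(1) <= ... <= p_(m), find the largest k with p_(k) <= k*q/m and
    reject the nulls for all p-values up to and including p_(k).
    Controls the false discovery rate at level q under independence."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest index meeting the bound
        rejected[order[:k + 1]] = True     # reject everything up to it
    return rejected
```

Note the step-up character: a p-value may be rejected even though it exceeds its own threshold, provided some larger p-value meets its threshold; conversely, in the example below, 0.039 is not rejected because no p-value at or after it clears the bound.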
Hanrahan, Lawrence P.; Anderson, Henry A.; Busby, Brian; Bekkedal, Marni; Sieger, Thomas; Stephenson, Laura; Knobeloch, Lynda; Werner, Mark; Imm, Pamela; Olson, Joseph
2004-01-01
In this article we describe the development of an information system for environmental childhood cancer surveillance. The Wisconsin Cancer Registry annually receives more than 25,000 incident case reports. Approximately 269 cases per year involve children. Over time, there has been considerable community interest in understanding the role the environment plays as a cause of these cancer cases. Wisconsin’s Public Health Information Network (WI-PHIN) is a robust web portal integrating both Health Alert Network and National Electronic Disease Surveillance System components. WI-PHIN is the information technology platform for all public health surveillance programs. Functions include the secure, automated exchange of cancer case data between public health–based and hospital-based cancer registrars; web-based supplemental data entry for environmental exposure confirmation and hypothesis testing; automated data analysis, visualization, and exposure–outcome record linkage; directories of public health and clinical personnel for role-based access control of sensitive surveillance information; public health information dissemination and alerting; and information technology security and critical infrastructure protection. For hypothesis generation, cancer case data are sent electronically to WI-PHIN and populate the integrated data repository. Environmental data are linked and the exposure–disease relationships are explored using statistical tools for ecologic exposure risk assessment. For hypothesis testing, case–control interviews collect exposure histories, including parental employment and residential histories. This information technology approach can thus serve as the basis for building a comprehensive system to assess environmental cancer etiology. PMID:15471739
Testing for Marshall-Lerner hypothesis: A panel approach
NASA Astrophysics Data System (ADS)
Azizan, Nur Najwa; Sek, Siok Kun
2014-12-01
The relationship between the real exchange rate and trade balances is documented in many theories, one of which is the so-called Marshall-Lerner condition. In this study, we test the validity of the Marshall-Lerner hypothesis, i.e. we ask whether depreciation of the real exchange rate leads to an improvement in trade balances. We focus on the ASEAN-5 countries and their main trade partners, the U.S., Japan and China. The dynamic panel data pooled mean group (PMG) approach is used to test the Marshall-Lerner hypothesis among the ASEAN-5, and between the ASEAN-5 and the U.S., Japan and China respectively. The estimation is based on the Autoregressive Distributed Lag (ARDL) model for the period 1970-2012. The paper concludes that the Marshall-Lerner theory does not hold in bilateral trades in the four groups of countries; the trade balances of the ASEAN-5 are mainly determined by the domestic income level and foreign production cost.
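A single-country flavor of the ARDL regression behind the PMG estimator can be sketched on simulated data. The variable names and the data-generating process below are invented for illustration; the actual study estimates a panel PMG/ARDL system across countries, not this simple OLS.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated series standing in for the log trade balance (tb) and the log
# real exchange rate (reer); both names are illustrative only.
T = 200
reer = np.cumsum(rng.normal(0.0, 0.02, T))      # random-walk exchange rate
tb = np.zeros(T)
for t in range(1, T):
    tb[t] = 0.5 * tb[t - 1] + 0.3 * reer[t] - 0.1 * reer[t - 1] \
            + rng.normal(0.0, 0.01)

# ARDL(1,1): tb_t = c + a*tb_{t-1} + b0*reer_t + b1*reer_{t-1} + e_t
X = np.column_stack([np.ones(T - 1), tb[:-1], reer[1:], reer[:-1]])
y = tb[1:]
c, a, b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]

# Long-run effect of the exchange rate on the trade balance. A positive
# long-run coefficient after depreciation is what the Marshall-Lerner
# condition predicts; the DGP above has (0.3 - 0.1) / (1 - 0.5) = 0.4.
long_run = (b0 + b1) / (1 - a)
print(round(long_run, 2))
```

The long-run coefficient (b0 + b1)/(1 - a) is exactly the quantity the PMG approach constrains to be common across panel members while letting the short-run dynamics differ.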
Testing the implicit processing hypothesis of precognitive dream experience.
Valášek, Milan; Watt, Caroline; Hutton, Jenny; Neill, Rebecca; Nuttall, Rachel; Renwick, Grace
2014-08-01
Seemingly precognitive (prophetic) dreams may be a result of one's unconscious processing of environmental cues and having an implicit inference based on these cues manifest itself in one's dreams. We present two studies exploring this implicit processing hypothesis of precognitive dream experience. Study 1 investigated the relationship between implicit learning, transliminality, and precognitive dream belief and experience. Participants completed the Serial Reaction Time task and several questionnaires. We predicted a positive relationship between the variables. With the exception of relationships between transliminality and precognitive dream belief and experience, this prediction was not supported. Study 2 tested the hypothesis that differences in the ability to notice subtle cues explicitly might account for precognitive dream beliefs and experiences. Participants completed a modified version of the flicker paradigm. We predicted a negative relationship between the ability to explicitly detect changes and precognitive dream variables. This relationship was not found. There was also no relationship between precognitive dream belief and experience and implicit change detection. Copyright © 2014 Elsevier Inc. All rights reserved.
2010-01-01
Inventions combine technological features. When features are barely related, burdensomely broad knowledge is required to identify the situations that they share. When features are overly related, burdensomely broad knowledge is required to identify the situations that distinguish them. Thus, according to my first hypothesis, when features are moderately related, the costs of connecting and costs of synthesizing are cumulatively minimized, and the most useful inventions emerge. I also hypothesize that continued experimentation with a specific set of features is likely to lead to the discovery of decreasingly useful inventions; the earlier-identified connections reflect the more common consumer situations. Covering data from all industries, the empirical analysis provides broad support for the first hypothesis. Regressions to test the second hypothesis are inconclusive when examining industry types individually. Yet, this study represents an exploratory investigation, and future research should test refined hypotheses with more sophisticated data, such as that found in literature-based discovery research. PMID:21297855
Predicting fatty acid profiles in blood based on food intake and the FADS1 rs174546 SNP.
Hallmann, Jacqueline; Kolossa, Silvia; Gedrich, Kurt; Celis-Morales, Carlos; Forster, Hannah; O'Donovan, Clare B; Woolhead, Clara; Macready, Anna L; Fallaize, Rosalind; Marsaux, Cyril F M; Lambrinou, Christina-Paulina; Mavrogianni, Christina; Moschonis, George; Navas-Carretero, Santiago; San-Cristobal, Rodrigo; Godlewska, Magdalena; Surwiłło, Agnieszka; Mathers, John C; Gibney, Eileen R; Brennan, Lorraine; Walsh, Marianne C; Lovegrove, Julie A; Saris, Wim H M; Manios, Yannis; Martinez, Jose Alfredo; Traczyk, Iwona; Gibney, Michael J; Daniel, Hannelore
2015-12-01
A high intake of n-3 PUFA provides health benefits via changes in the n-6/n-3 ratio in blood. In addition to such dietary PUFAs, variants in the fatty acid desaturase 1 (FADS1) gene are also associated with altered PUFA profiles. We used mathematical modeling to predict levels of PUFA in whole blood, based on food items selected by multiple hypothesis testing and bootstrapped LASSO, anthropometric and lifestyle factors, and the rs174546 genotypes in FADS1 from 1607 participants (Food4Me Study). The models were developed using data from the first reported time point (training set) and their predictive power was evaluated using data from the last reported time point (test set). Among other food items, fish, pizza, chicken, and cereals were identified as being associated with the PUFA profiles. Using these food items and the rs174546 genotypes as predictors, the models explained 26-43% of the variability in PUFA concentrations in the training set and 22-33% in the test set. Selecting food items using multiple hypothesis testing is a valuable way to determine predictors, as our models' predictive power is higher than that of analogous studies. As a unique feature, we additionally confirmed our models' power on a test set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
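The "bootstrapped LASSO" selection idea can be sketched with a minimal solver. The ISTA solver, the penalty value, and the simulated data below are stand-ins (the paper does not specify its solver or tuning), and the procedure shown is stability-selection style: count how often each predictor survives the LASSO across bootstrap resamples.

```python
import numpy as np

rng = np.random.default_rng(3)

def lasso_ista(X, y, lam, n_iter=500):
    """LASSO by iterative soft-thresholding (ISTA), minimizing
    (1/2n)||y - X b||^2 + lam * ||b||_1. A minimal stand-in for
    whatever LASSO solver the paper actually used."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        z = beta - grad / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return beta

def bootstrap_selection(X, y, lam=0.2, n_boot=100):
    """Fraction of bootstrap resamples in which each predictor receives a
    nonzero LASSO coefficient; high-frequency predictors are the stable
    'selected' ones."""
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        counts += lasso_ista(X[idx], y[idx], lam) != 0
    return counts / n_boot
```

On simulated data where only the first two of eight predictors carry signal, those two are selected in nearly every resample while the noise predictors appear only sporadically, which is the separation the bootstrapped LASSO exploits.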
Unadjusted Bivariate Two-Group Comparisons: When Simpler is Better.
Vetter, Thomas R; Mascha, Edward J
2018-01-01
Hypothesis testing involves posing both a null hypothesis and an alternative hypothesis. This basic statistical tutorial discusses the appropriate use, including their so-called assumptions, of the common unadjusted bivariate tests for hypothesis testing and thus comparing study sample data for a difference or association. The appropriate choice of a statistical test is predicated on the type of data being analyzed and compared. The unpaired or independent samples t test is used to test the null hypothesis that the 2 population means are equal, thereby accepting the alternative hypothesis that the 2 population means are not equal. The unpaired t test is intended for comparing independent continuous (interval or ratio) data from 2 study groups. A common mistake is to apply several unpaired t tests when comparing data from 3 or more study groups. In this situation, an analysis of variance with post hoc (posttest) intragroup comparisons should instead be applied. Another common mistake is to apply a series of unpaired t tests when comparing sequentially collected data from 2 study groups. In this situation, a repeated-measures analysis of variance, with tests for group-by-time interaction, and post hoc comparisons, as appropriate, should instead be applied in analyzing data from sequential collection points. The paired t test is used to assess the difference in the means of 2 study groups when the sample observations have been obtained in pairs, often before and after an intervention in each study subject. The Pearson chi-square test is widely used to test the null hypothesis that 2 unpaired categorical variables, each with 2 or more nominal levels (values), are independent of each other. When the null hypothesis is rejected, one concludes that there is a probable association between the 2 unpaired categorical variables. When comparing 2 groups on an ordinal or nonnormally distributed continuous outcome variable, the 2-sample t test is usually not appropriate. 
The Wilcoxon-Mann-Whitney test is instead preferred. When making paired comparisons on data that are ordinal, or continuous but nonnormally distributed, the Wilcoxon signed-rank test can be used. In analyzing their data, researchers should consider the continued merits of these simple yet equally valid unadjusted bivariate statistical tests. However, the appropriate use of an unadjusted bivariate test still requires a solid understanding of its utility, assumptions (requirements), and limitations. This understanding will mitigate the risk of misleading findings, interpretations, and conclusions.
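As a rough illustration of the tests surveyed in this tutorial, the corresponding SciPy calls, applied to synthetic data invented for the demonstration, look like this:

```python
import numpy as np
from scipy import stats

# Synthetic two-group data for illustration only.
rng = np.random.default_rng(1)
a = rng.normal(10.0, 2.0, 30)                 # group A, continuous
b = rng.normal(12.0, 2.0, 30)                 # group B, continuous

t_u, p_unpaired = stats.ttest_ind(a, b)       # unpaired t test: independent groups
t_p, p_paired = stats.ttest_rel(a, b)         # paired t test: paired observations
u_stat, p_mw = stats.mannwhitneyu(a, b)       # Wilcoxon-Mann-Whitney: non-normal, unpaired
w_stat, p_wsr = stats.wilcoxon(a, b)          # Wilcoxon signed-rank: non-normal, paired

table = np.array([[20, 10], [12, 18]])        # 2x2 counts of two categorical variables
chi2, p_chi, dof, expected = stats.chi2_contingency(table)  # Pearson chi-square
```

Each call embodies the matching of test to data type stressed in the abstract: continuous versus categorical, and independent versus paired observations.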
Foraging behaviour of coypus Myocastor coypus: why do coypus consume aquatic plants?
NASA Astrophysics Data System (ADS)
Guichón, M. L.; Benítez, V. B.; Abba, A.; Borgnia, M.; Cassini, M. H.
2003-12-01
Foraging behaviour of wild coypu was studied to examine two hypotheses that had been previously proposed to explain the species' preference for aquatic plants. First, the nutritional benefit hypothesis which states that aquatic plants are more nutritional than terrestrial plants. Second, the behavioural trade-off hypothesis which states that coypus avoid foraging far from the water because of the costs associated with other types of behaviour. In order to test the nutritional benefit hypothesis, we studied the diet composition of coypus in relation to the protein content of the diet and of the plants available in the environment. Fieldwork was conducted seasonally from November 1999 to August 2000 at one study site located in the Province of Buenos Aires, east central Argentina. Behavioural observations showed that coypus remained foraging in the water and microhistological analysis of faeces indicated that their diet was principally composed of hygrophilic monocotyledons (Lemna spp. and Eleocharis spp.) throughout the year. We did not find support for the nutritional benefit hypothesis: nutritional quality (based on nitrogen content) of hygrophilic plants was not higher than that of terrestrial plants, and seasonal changes in diet quality did not match either fluctuations in vegetation quality or proportion of hygrophilic plants in the diet. Although not directly tested, the behavioural trade-off hypothesis may explain why coypus prefer to forage in or near the water as a mechanism for reducing predation risk.
Transform-Based Wideband Array Processing
1992-01-31
Using the test of Breusch and Pagan [2], it is possible to test which model, AR or random coefficient, will better fit typical array data. The test indicates that correlations in the observations do not obey an AR relationship across the array; through the use of a binary hypothesis test, it is possible to determine which model better fits the observations. References cited include "…bearing estimation problems," Proc. IEEE, vol. 70, no. 9, pp. 1018-1028, 1982, and [2] T. S. Breusch and A. R. Pagan, "A simple test for heteroscedasticity and random coefficient variation."
Longitudinal Dimensionality of Adolescent Psychopathology: Testing the Differentiation Hypothesis
ERIC Educational Resources Information Center
Sterba, Sonya K.; Copeland, William; Egger, Helen L.; Costello, E. Jane; Erkanli, Alaattin; Angold, Adrian
2010-01-01
Background: The differentiation hypothesis posits that the underlying liability distribution for psychopathology is of low dimensionality in young children, inflating diagnostic comorbidity rates, but increases in dimensionality with age as latent syndromes become less correlated. This hypothesis has not been adequately tested with longitudinal…
A Clinical Evaluation of the Competing Sources of Input Hypothesis
Leonard, Laurence B.; Bredin-Oja, Shelley L.; Deevy, Patricia
2017-01-01
Purpose Our purpose was to test the competing sources of input (CSI) hypothesis by evaluating an intervention based on its principles. This hypothesis proposes that children's use of main verbs without tense is the result of their treating certain sentence types in the input (e.g., Was she laughing ?) as models for declaratives (e.g., She laughing). Method Twenty preschoolers with specific language impairment were randomly assigned to receive either a CSI-based intervention or a more traditional intervention that lacked the novel CSI features. The auxiliary is and the third-person singular suffix –s were directly treated over a 16-week period. Past tense –ed was monitored as a control. Results The CSI-based group exhibited greater improvements in use of is than did the traditional group (d = 1.31), providing strong support for the CSI hypothesis. There were no significant between-groups differences in the production of the third-person singular suffix –s or the control (–ed), however. Conclusions The group differences in the effects on the 2 treated morphemes may be due to differences in their distribution in interrogatives and declaratives (e.g., Is he hiding/He is hiding vs. Does he hide/He hide s). Refinements in the intervention could address this issue and lead to more general effects across morphemes. PMID:28114610
Null but not void: considerations for hypothesis testing.
Shaw, Pamela A; Proschan, Michael A
2013-01-30
Standard statistical theory teaches us that once the null and alternative hypotheses have been defined for a parameter, the choice of the statistical test is clear. Standard theory does not teach us how to choose the null or alternative hypothesis appropriate to the scientific question of interest. Neither does it tell us that in some cases, depending on which alternatives are realistic, we may want to define our null hypothesis differently. Problems in statistical practice are frequently not as pristinely summarized as the classic theory in our textbooks. In this article, we present examples in statistical hypothesis testing in which seemingly simple choices are in fact rich with nuance that, when given full consideration, make the choice of the right hypothesis test much less straightforward. Published 2012. This article is a US Government work and is in the public domain in the USA.
Effect of climate-related mass extinctions on escalation in molluscs
NASA Astrophysics Data System (ADS)
Hansen, Thor A.; Kelley, Patricia H.; Melland, Vicky D.; Graham, Scott E.
1999-12-01
We test the hypothesis that escalated species (e.g., those with antipredatory adaptations such as heavy armor) are more vulnerable to extinctions caused by changes in climate. If this hypothesis is valid, recovery faunas after climate-related extinctions should include significantly fewer species with escalated shell characteristics, and escalated species should undergo greater rates of extinction than nonescalated species. This hypothesis is tested for the Cretaceous-Paleocene, Eocene-Oligocene, middle Miocene, and Pliocene-Pleistocene mass extinctions. Gastropod and bivalve molluscs from the U.S. coastal plain were evaluated for 10 shell characters that confer resistance to predators. Of 40 tests, one supported the hypothesis; highly ornamented gastropods underwent greater levels of Pliocene-Pleistocene extinction than did nonescalated species. All remaining tests were nonsignificant. The hypothesis that escalated species are more vulnerable to climate-related mass extinctions is not supported.
On Restructurable Control System Theory
NASA Technical Reports Server (NTRS)
Athans, M.
1983-01-01
The state of stochastic system and control theory as it impacts restructurable control issues is addressed. The multivariable characteristics of the control problem are addressed. The failure detection/identification problem is discussed as a multi-hypothesis testing problem. Control strategy reconfiguration, static multivariable controls, static failure hypothesis testing, dynamic multivariable controls, fault-tolerant control theory, dynamic hypothesis testing, generalized likelihood ratio (GLR) methods, and adaptive control are discussed.
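The generalized likelihood ratio (GLR) approach mentioned above can be illustrated with a toy failure-detection test for an unknown sensor bias in Gaussian noise; the signal model, threshold, and data are hypothetical and not drawn from the report:

```python
import numpy as np

# Toy GLR failure-detection test: decide between H0 (zero-mean Gaussian
# noise, no failure) and H1 (an unknown constant sensor bias b).  Under H1
# the MLE of b is the sample mean, so the log-GLR reduces to
# n * mean(x)^2 / (2 * sigma^2).
def glr_bias_test(x, sigma, threshold):
    n = len(x)
    b_hat = np.mean(x)                        # MLE of the bias under H1
    log_glr = n * b_hat**2 / (2 * sigma**2)
    return log_glr > threshold, log_glr       # declare failure above threshold

rng = np.random.default_rng(2)
healthy = rng.normal(0.0, 1.0, 100)           # nominal sensor
faulty = rng.normal(0.8, 1.0, 100)            # sensor with a 0.8 bias
ok_flag, _ = glr_bias_test(healthy, 1.0, 5.0)
bad_flag, _ = glr_bias_test(faulty, 1.0, 5.0)
```

The same maximize-then-compare structure underlies the multi-hypothesis failure detection/identification problem, with one likelihood per candidate failure mode.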
ERIC Educational Resources Information Center
Marmolejo-Ramos, Fernando; Cousineau, Denis
2017-01-01
The number of articles showing dissatisfaction with the null hypothesis statistical testing (NHST) framework has been progressively increasing over the years. Alternatives to NHST have been proposed and the Bayesian approach seems to have achieved the highest amount of visibility. In this last part of the special issue, a few alternative…
ERIC Educational Resources Information Center
Kegel, Cornelia A. T.; Bus, Adriana G.
2014-01-01
Children showing poor executive functioning may not fully benefit from learning experiences at home and school and may lag behind in literacy skills. This hypothesis was tested in a sample of 276 kindergarten children. Executive functions and literacy skills were tested at about 61 months and again a year later. In line with earlier studies,…
The Effect of DBAE Approach on Teaching Painting of Undergraduate Art Students
ERIC Educational Resources Information Center
Hedayat, Mina; Kahn, Sabzali Musa; Honarvar, Habibeh; Bakar, Syed Alwi Syed Abu; Samsuddin, Mohd Effindi
2013-01-01
The aim of this study is to implement a new method of teaching painting which uses the Discipline-Based Art Education (DBAE) approach for the undergraduate art students at Tehran University. In the current study, the quasi-experimental method was used to test the hypothesis three times (pre, mid and post-tests). Thirty students from two classes…
2012-03-22
…shapes tested, when the objective parameter set was confined to a dictionary's defined parameter space. These physical characteristics included… (table-of-contents fragments: 2.3 Hypothesis Testing and Detection Theory; 2.4 3-D SAR Scattering Models) …The basis pursuit de-noising (BPDN) algorithm is chosen to perform extraction due to its inherent efficiency and error tolerance. Multiple shape dictionaries…
Meta-Analysis of Rare Binary Adverse Event Data
Bhaumik, Dulal K.; Amatya, Anup; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D.
2013-01-01
We examine the use of fixed-effects and random-effects moment-based meta-analytic methods for analysis of binary adverse event data. Special attention is paid to the case of rare adverse events which are commonly encountered in routine practice. We study estimation of model parameters and between-study heterogeneity. In addition, we examine traditional approaches to hypothesis testing of the average treatment effect and detection of the heterogeneity of treatment effect across studies. We derive three new methods, simple (unweighted) average treatment effect estimator, a new heterogeneity estimator, and a parametric bootstrapping test for heterogeneity. We then study the statistical properties of both the traditional and new methods via simulation. We find that in general, moment-based estimators of combined treatment effects and heterogeneity are biased and the degree of bias is proportional to the rarity of the event under study. The new methods eliminate much, but not all of this bias. The various estimators and hypothesis testing methods are then compared and contrasted using an example dataset on treatment of stable coronary artery disease. PMID:23734068
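A minimal sketch of a simple (unweighted) average treatment effect on the log odds ratio scale, with a 0.5 continuity correction for zero cells, might look like the following; the trial counts are hypothetical and this is not the authors' exact estimator:

```python
import numpy as np

# Simple (unweighted) average treatment effect across studies on the
# log odds ratio scale.  Rare events produce zero cells, so 0.5 is added
# to every cell of each 2x2 table before taking the odds ratio.
def unweighted_log_or(events_t, n_t, events_c, n_c):
    log_ors = []
    for a, nt, c, nc in zip(events_t, n_t, events_c, n_c):
        b, d = nt - a, nc - c                              # non-event counts
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5    # continuity correction
        log_ors.append(np.log((a * d) / (b * c)))
    return float(np.mean(log_ors))                         # unweighted average

# Three hypothetical small trials with rare adverse events.
effect = unweighted_log_or([1, 0, 2], [100, 120, 90], [0, 1, 1], [100, 115, 95])
```

Giving each study equal weight avoids the bias that inverse-variance weights incur when cell counts are near zero, which is the motivation the abstract gives for the simple average.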
Joo, Aeran; Park, Inhyae
2010-04-01
The purpose of this study was to determine the effects of an empowerment education program (EEP) on internet games addiction, empowerment, and stress in middle school students. The EEP used in this study was based on Freire's Empowerment Education Model. The research design of this study was a non-equivalent control group pretest-posttest design for 48 middle school students, who were conveniently assigned to an experimental group or a control group. The data were collected from May 29 to June 19, 2005. Data were analyzed using the SPSS/PC program with frequencies, χ²-test, Fisher's exact test, t-test, means, standard deviations, and ANCOVA. 1) The first hypothesis that, "the experimental group would have higher empowerment scores than the control group." was supported. 2) The second hypothesis that, "the experimental group would have lower internet games addiction scores than the control group." was supported. 3) The third hypothesis that, "the experimental group would have lower stress scores than the control group." was supported. We suggest, therefore, that the EEP should be used with adolescents to help them control their stress and internet games addiction and to increase their empowerment.
Sexual difference in polychlorinated biphenyl accumulation rates of walleye (Stizostedion vitreum)
Madenjian, Charles P.; Noguchi, George E.; Haas, Robert C.; Schrouder, Kathrin S.
1998-01-01
Adult male walleye (Stizostedion vitreum) exhibited significantly higher polychlorinated biphenyl (PCB) concentrations than similarly aged female walleye from Saginaw Bay (Lake Huron). To explain this difference, we tested the following three hypotheses: (i) females showed a considerably greater reduction in PCB concentration immediately following spawning than males, (ii) females grew at a faster rate and therefore exhibited lower PCB concentrations than males, and (iii) males spent more time in the Saginaw River system than females, and therefore received a greater exposure to PCBs. The first hypothesis was tested by comparing PCB concentration in gonadal tissue with whole-body concentration, the second hypothesis was tested via bioenergetics modeling, and we used mark-recapture data from the Saginaw Bay walleye fishery to address the third hypothesis. The only plausible explanation for the observed difference in PCB accumulation rate was that males spent substantially more time in the highly contaminated Saginaw River system than females, and therefore were exposed to greater environmental concentrations of PCBs. Based on the results of our study, we strongly recommend a stratified random sampling design for monitoring PCB concentration in Saginaw Bay walleye, with fixed numbers of females and males sampled each year.
Bellomo, A; Inbar, G
1997-01-01
One of the theories of human motor control is the gamma Equilibrium Point Hypothesis. It is an attractive theory since it offers a simple control scheme in which the planned trajectory shifts monotonically from an initial to a final equilibrium state. The feasibility of this model was tested by reconstructing the virtual trajectory and the stiffness profiles for movements performed with different inertial loads and examining them. Three types of movements were tested: passive movements, targeted movements, and repetitive movements. Each of the movements was performed with five different inertial loads. Plausible virtual trajectories and stiffness profiles were reconstructed based on the gamma Equilibrium Point Hypothesis for the three different types of movements performed with different inertial loads. However, the simple control strategy supported by the model, where the planned trajectory shifts monotonically from an initial to a final equilibrium state, could not be supported for targeted movements performed with added inertial load. To test the feasibility of the model further, we must examine the probability that the human motor control system would choose a planned trajectory more complicated than the actual trajectory to be controlled.
Barth, Amy E; Denton, Carolyn A; Stuebing, Karla K; Fletcher, Jack M; Cirino, Paul T; Francis, David J; Vaughn, Sharon
2010-05-01
The cerebellar hypothesis of dyslexia posits that cerebellar deficits are associated with reading disabilities and may explain why some individuals with reading disabilities fail to respond to reading interventions. We tested these hypotheses in a sample of children who participated in a grade 1 reading intervention study (n = 174) and a group of typically achieving children (n = 62). At posttest, children were classified as adequately responding to the intervention (n = 82), inadequately responding with decoding and fluency deficits (n = 36), or inadequately responding with only fluency deficits (n = 56). Based on the Bead Threading and Postural Stability subtests from the Dyslexia Screening Test-Junior, we found little evidence that assessments of cerebellar functions were associated with academic performance or responder status. In addition, we did not find evidence supporting the hypothesis that cerebellar deficits are more prominent for poor readers with "specific" reading disabilities (i.e., with discrepancies relative to IQ) than for poor readers with reading scores consistent with IQ. In contrast, measures of phonological awareness, rapid naming, and vocabulary were strongly associated with responder status and academic outcomes. These results add to accumulating evidence that fails to associate cerebellar functions with reading difficulties.
Problems in Bibliographic Access to Non-Print Materials. Project Media Base: Final Report.
ERIC Educational Resources Information Center
Brong, Gerald; And Others
Project Media Base reports its conclusions and recommendations for the establishment of bibliographic control of audiovisual resources as a part of an overall objective to plan, develop, and implement a nationwide network of library and information services. The purpose of this project was to test the hypothesis that the essential elements of a…
Biostatistics Series Module 2: Overview of Hypothesis Testing.
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Hypothesis testing (or statistical inference) is one of the major applications of biostatistics. Much of medical research begins with a research question that can be framed as a hypothesis. Inferential statistics begins with a null hypothesis that reflects the conservative position of no change or no difference in comparison to baseline or between groups. Usually, the researcher has reason to believe that there is some effect or some difference which is the alternative hypothesis. The researcher therefore proceeds to study samples and measure outcomes in the hope of generating evidence strong enough for the statistician to be able to reject the null hypothesis. The concept of the P value is almost universally used in hypothesis testing. It denotes the probability of obtaining by chance a result at least as extreme as that observed, even when the null hypothesis is true and no real difference exists. Usually, if P is < 0.05 the null hypothesis is rejected and sample results are deemed statistically significant. With the increasing availability of computers and access to specialized statistical software, the drudgery involved in statistical calculations is now a thing of the past, once the learning curve of the software has been traversed. The life sciences researcher is therefore free to devote oneself to optimally designing the study, carefully selecting the hypothesis tests to be applied, and taking care in conducting the study well. Unfortunately, selecting the right test seems difficult initially. Thinking of the research hypothesis as addressing one of five generic research questions helps in selection of the right hypothesis test. In addition, it is important to be clear about the nature of the variables (e.g., numerical vs. categorical; parametric vs. nonparametric) and the number of groups or data sets being compared (e.g., two or more than two) at a time. The same research question may be explored by more than one type of hypothesis test. 
While this may be of utility in highlighting different aspects of the problem, merely reapplying different tests to the same issue in the hope of finding a P < 0.05 is a wrong use of statistics. Finally, it is becoming the norm that an estimate of the size of any effect, expressed with its 95% confidence interval, is required for meaningful interpretation of results. A large study is likely to have a small (and therefore "statistically significant") P value, but a "real" estimate of the effect would be provided by the 95% confidence interval. If the intervals overlap between two interventions, then the difference between them is not so clear-cut even if P < 0.05. The two approaches are now considered complementary to one another.
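The complementarity of P values and confidence intervals described above can be shown numerically; the sample sizes and effect are invented for the demonstration:

```python
import numpy as np
from scipy import stats

# With very large samples, even a tiny mean difference yields a small
# P value; the 95% confidence interval for the difference is what conveys
# the size of the effect.
rng = np.random.default_rng(3)
a = rng.normal(100.0, 15.0, 20000)
b = rng.normal(101.0, 15.0, 20000)           # true difference of only 1 unit

t_stat, p = stats.ttest_ind(a, b)
diff = b.mean() - a.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
ci = (diff - 1.96 * se, diff + 1.96 * se)    # approximate 95% CI for the difference
```

Here the P value is "statistically significant", while the interval makes clear that the estimated difference is about one unit on a scale with a standard deviation of fifteen.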
Saraf, Sanatan; Mathew, Thomas; Roy, Anindya
2015-01-01
For the statistical validation of surrogate endpoints, an alternative formulation is proposed for testing Prentice's fourth criterion, under a bivariate normal model. In such a setup, the criterion involves inference concerning an appropriate regression parameter, and the criterion holds if the regression parameter is zero. Testing such a null hypothesis has been criticized in the literature since it can only be used to reject a poor surrogate, and not to validate a good surrogate. In order to circumvent this, an equivalence hypothesis is formulated for the regression parameter, namely the hypothesis that the parameter is equivalent to zero. Such an equivalence hypothesis is formulated as an alternative hypothesis, so that the surrogate endpoint is statistically validated when the null hypothesis is rejected. Confidence intervals for the regression parameter and tests for the equivalence hypothesis are proposed using bootstrap methods and small sample asymptotics, and their performances are numerically evaluated and recommendations are made. The choice of the equivalence margin is a regulatory issue that needs to be addressed. The proposed equivalence testing formulation is also adopted for other parameters that have been proposed in the literature on surrogate endpoint validation, namely, the relative effect and proportion explained.
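One way to sketch the equivalence formulation, under the simplifying assumptions that the regression parameter is a plain slope and that the margin is chosen arbitrarily, is a bootstrap confidence interval checked against the equivalence margin:

```python
import numpy as np

# Sketch: the surrogate is "validated" when a bootstrap 95% CI for the
# regression slope lies entirely inside a pre-chosen equivalence margin
# around zero.  Data, margin, and model are hypothetical.
rng = np.random.default_rng(4)
n = 300
surrogate = rng.normal(size=n)
endpoint = 0.02 * surrogate + rng.normal(size=n)   # slope close to zero

def slope(x, y):
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)               # bootstrap resample of pairs
    boot.append(slope(surrogate[idx], endpoint[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

margin = 0.3                                        # hypothetical equivalence margin
validated = (-margin < lo) and (hi < margin)        # reject "non-equivalence"
```

Note the reversal the abstract emphasizes: equivalence to zero is the alternative hypothesis, so rejecting the null (the CI falling inside the margin) is what validates the surrogate, rather than a failure to reject.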
Clinicians' perceptions of the benefits and harms of prostate and colorectal cancer screening.
Elstad, Emily A; Sutkowi-Hemstreet, Anne; Sheridan, Stacey L; Vu, Maihan; Harris, Russell; Reyna, Valerie F; Rini, Christine; Earp, Jo Anne; Brewer, Noel T
2015-05-01
Clinicians' perceptions of screening benefits and harms influence their recommendations, which in turn shape patients' screening decisions. We sought to understand clinicians' perceptions of the benefits and harms of cancer screening by comparing 2 screening tests that differ in their balance of potential benefits to harms: colonoscopy, which results in net benefit for many adults, and prostate-specific antigen (PSA) testing, which may do more harm than good. In this cross-sectional study, 126 clinicians at 24 family/internal medicine practices completed surveys in which they listed and rated the magnitude of colonoscopy and PSA testing benefits and harms for a hypothetical 70-year-old male patient and then estimated the likelihood that these tests would cause harm and lengthen the life of 100 similar men in the next 10 years. We tested the hypothesis that the availability heuristic would explain the association of screening test to perceived likelihood of benefit/harm and a competing hypothesis that clinicians' gist of screening tests as good or bad would mediate this association. Clinicians perceived PSA testing to have a greater likelihood of harm and a lower likelihood of lengthening life relative to colonoscopy. Consistent with our gist hypothesis, these associations were mediated by clinicians' gist of screening (balance of perceived benefits to perceived harms). Generalizability beyond academic clinicians remains to be established. Targeting clinicians' gist of screening, for example through graphical displays that allow clinicians to make gist-based relative magnitude comparisons, may influence their risk perception and possibly reduce overrecommendation of screening. © The Author(s) 2015.
Farming fit? Dispelling the Australian agrarian myth
2011-01-01
Background Rural Australians face a higher mental health and lifestyle disease burden (obesity, diabetes and cardiovascular disease) than their urban counterparts. Our ongoing research reveals that the Australian farming community has even poorer physical and mental health outcomes than rural averages. In particular, farm men and women have high rates of overweightness, obesity, abdominal adiposity, high blood pressure and psychological distress when compared against Australian averages. Within our farming cohort we observed a significant association between psychological distress and obesity, abdominal adiposity and body fat percentage in the farming population. Presentation of hypothesis This paper presents a hypothesis based on preliminary data obtained from an ongoing study that could potentially explain the complex correlation between obesity, psychological distress and physical activity among a farming population. We posit that spasmodic physical activity, changing farm practices and climate variability induce prolonged stress in farmers. This increases systemic cortisol that, in turn, promotes abdominal adiposity and weight gain. Testing the hypothesis The hypothesis will be tested by anthropometric, biochemical and psychological analysis matched against systemic cortisol levels and the physical activity of the subjects. Implications of the hypothesis tested Previous studies indicate that farming populations have elevated rates of psychological distress and high rates of suicide. Australian farmers have recently experienced challenging climatic conditions including prolonged drought, floods and cyclones. Through our interactions and through the media it is not uncommon for farmers to describe the effect of this long-term stress with feelings of 'defeat'. By gaining a greater understanding of the role cortisol and physical activity have on mental and physical health we may positively impact the current rates of psychological distress in farmers. 
Trial registration ACTRN12610000827033 PMID:21447192
Engel, Samantha; Shapiro, Lewis P; Love, Tracy
2018-02-01
To evaluate processing and comprehension of pronouns and reflexives in individuals with agrammatic (Broca's) aphasia and age-matched control participants. Specifically, we evaluate processing and comprehension patterns in terms of a specific hypothesis, the Intervener Hypothesis, which posits that the difficulty of individuals with agrammatic (Broca's) aphasia results from similarity-based interference caused by the presence of an intervening NP between two elements of a dependency chain. We used an eye tracking-while-listening paradigm to investigate real-time processing (Experiment 1) and a sentence-picture matching task to investigate final interpretive comprehension (Experiment 2) of sentences containing proforms in complement phrase and subject relative constructions. Individuals with agrammatic aphasia demonstrated a greater proportion of gazes to the correct referent of reflexives relative to pronouns and significantly greater comprehension accuracy of reflexives relative to pronouns. These results provide support for the Intervener Hypothesis, previous support for which comes from studies of Wh- questions and unaccusative verbs, and we argue that this account provides an explanation for the deficits of individuals with agrammatic aphasia across a growing set of sentence constructions. The current study extends this hypothesis beyond filler-gap dependencies to referential dependencies and allows us to refine the hypothesis in terms of the structural constraints that meet the description of the Intervener Hypothesis.
Précis of statistical significance: rationale, validity, and utility.
Chow, S L
1998-04-01
The null-hypothesis significance-test procedure (NHSTP) is defended in the context of the theory-corroboration experiment, as well as the following contrasts: (a) substantive hypotheses versus statistical hypotheses, (b) theory corroboration versus statistical hypothesis testing, (c) theoretical inference versus statistical decision, (d) experiments versus nonexperimental studies, and (e) theory corroboration versus treatment assessment. The null hypothesis can be true because it is the hypothesis that errors are randomly distributed in data. Moreover, the null hypothesis is never used as a categorical proposition. Statistical significance means only that chance influences can be excluded as an explanation of data; it does not identify the nonchance factor responsible. The experimental conclusion is drawn with the inductive principle underlying the experimental design. A chain of deductive arguments gives rise to the theoretical conclusion via the experimental conclusion. The anomalous relationship between statistical significance and the effect size often used to criticize NHSTP is more apparent than real. The absolute size of the effect is not an index of evidential support for the substantive hypothesis. Nor is the effect size, by itself, informative as to the practical importance of the research result. Being a conditional probability, statistical power cannot be the a priori probability of statistical significance. The validity of statistical power is debatable because statistical significance is determined with a single sampling distribution of the test statistic based on H0, whereas it takes two distributions to represent statistical power or effect size. Sample size should not be determined in the mechanical manner envisaged in power analysis. It is inappropriate to criticize NHSTP for nonstatistical reasons. 
At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of data. Neither can any of them fulfill the nonstatistical functions expected of them by critics.
Causal Learning with Local Computations
ERIC Educational Resources Information Center
Fernbach, Philip M.; Sloman, Steven A.
2009-01-01
The authors proposed and tested a psychological theory of causal structure learning based on local computations. Local computations simplify complex learning problems via cues available on individual trials to update a single causal structure hypothesis. Structural inferences from local computations make minimal demands on memory, require…
Carter, Cindy L; Onicescu, Georgiana; Cartmell, Kathleen B; Sterba, Katherine R; Tomsic, James; Alberg, Anthony J
2012-08-01
Physical activity benefits cancer survivors, but the comparative effectiveness of a team-based delivery approach remains unexplored. The hypothesis tested was that a team-based physical activity intervention delivery approach has added physical and psychological benefits compared to a group-based approach. A team-based sport accessible to survivors is dragon boating, which requires no previous experience and allows for diverse skill levels. In a non-randomized trial, cancer survivors chose between two similarly structured 8-week programs, a dragon boat paddling team (n = 68) or group-based walking program (n = 52). Three separate intervention rounds were carried out in 2007-2008. Pre-post testing measured physical and psychosocial outcomes. Compared to walkers, paddlers had significantly greater (all p < 0.01) team cohesion, program adherence/attendance, and increased upper-body strength. For quality-of-life outcomes, both interventions were associated with pre-post improvements, but with no clear-cut pattern of between-intervention differences. These hypothesis-generating findings suggest that a short-term, team-based physical activity program (dragon boat paddling) was associated with increased cohesion and adherence/attendance. Improvements in physical fitness and psychosocial benefits were comparable to a traditional, group-based walking program. Compared to a group-based intervention delivery format, the team-based intervention delivery format holds promise for promoting physical activity program adherence/attendance in cancer survivors.
The Importance of Teaching Power in Statistical Hypothesis Testing
ERIC Educational Resources Information Center
Olinsky, Alan; Schumacher, Phyllis; Quinn, John
2012-01-01
In this paper, we discuss the importance of teaching power considerations in statistical hypothesis testing. Statistical power analysis determines the ability of a study to detect a meaningful effect size, where the effect size is the difference between the hypothesized value of the population parameter under the null hypothesis and the true value…
The Relation between Parental Values and Parenting Behavior: A Test of the Kohn Hypothesis.
ERIC Educational Resources Information Center
Luster, Tom; And Others
1989-01-01
Used data on 65 mother-infant dyads to test Kohn's hypothesis concerning the relation between values and parenting behavior. Findings support Kohn's hypothesis that parents who value self-direction would emphasize supportive function of parenting and parents who value conformity would emphasize their obligations to impose restraints. (Author/NB)
Cognitive Biases in the Interpretation of Autonomic Arousal: A Test of the Construal Bias Hypothesis
ERIC Educational Resources Information Center
Ciani, Keith D.; Easter, Matthew A.; Summers, Jessica J.; Posada, Maria L.
2009-01-01
According to Bandura's construal bias hypothesis, derived from social cognitive theory, persons with the same heightened state of autonomic arousal may experience either pleasant or deleterious emotions depending on the strength of perceived self-efficacy. The current study tested this hypothesis by proposing that college students' preexisting…
Same-Sex and Race-Based Disparities in Statutory Rape Arrests.
Chaffin, Mark; Chenoweth, Stephanie; Letourneau, Elizabeth J
2016-01-01
This study tests a liberation hypothesis for statutory rape incidents, specifically that there may be same-sex and race/ethnicity arrest disparities among statutory rape incidents and that these will be greater among statutory rape than among forcible sex crime incidents. 26,726 reported incidents of statutory rape as defined under state statutes and 96,474 forcible sex crime incidents were extracted from National Incident-Based Reporting System data sets. Arrest outcomes were tested using multilevel modeling. Same-sex statutory rape pairings were rare but had much higher arrest odds. A victim-offender romantic relationship amplified arrest odds for same-sex pairings, but damped arrest odds for male-on-female pairings. Same-sex disparities were larger among statutory than among forcible incidents. Female-on-male incidents had uniformly lower arrest odds. Race/ethnicity effects were smaller than gender effects and more complexly patterned. The findings support the liberation hypothesis for same-sex statutory rape arrest disparities, particularly among same-sex romantic pairings. Support for race/ethnicity-based arrest disparities was limited and mixed. © The Author(s) 2014.
Overgaard, Morten; Lindeløv, Jonas; Svejstrup, Stinna; Døssing, Marianne; Hvid, Tanja; Kauffmann, Oliver; Mouridsen, Kim
2013-01-01
This paper reports an experiment intended to test a particular hypothesis derived from blindsight research, which we name the “source misidentification hypothesis.” According to this hypothesis, a subject may be correct about a stimulus without being correct about how she had access to this knowledge (whether the stimulus was visual, auditory, or something else). We test this hypothesis in healthy subjects, asking them to report whether a masked stimulus was presented auditorily or visually, what the stimulus was, and how clearly they experienced the stimulus using the Perceptual Awareness Scale (PAS). We suggest that knowledge about perceptual modality may be a necessary precondition in order to issue correct reports of which stimulus was presented. Furthermore, we find that PAS ratings correlate with correctness, and that subjects are at chance level when reporting no conscious experience of the stimulus. To demonstrate that particular levels of reporting accuracy are obtained, we employ a statistical strategy, which operationally tests the hypothesis of non-equality, such that the usual rejection of the null-hypothesis admits the conclusion of equivalence. PMID:23508677
Bracha, H Stefan; Bienvenu, O Joseph; Eaton, William W
2007-01-01
The research agenda for the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-V) has emphasized the need for a more etiologically-based classification system, especially for stress-induced and fear-circuitry disorders. Testable hypotheses based on threats to survival during particular segments of the human era of evolutionary adaptedness (EEA) may be useful in developing a brain-evolution-based classification for the wide spectrum of disorders ranging from disorders which are mostly overconsolidational, such as PTSD, to fear-circuitry disorders which are mostly innate, such as specific phobias. The recently presented Paleolithic-human-warfare hypothesis posits that blood-injection phobia can be traced to a "survival (fitness) enhancing" trait, which evolved in some females of reproductive-age during the millennia of intergroup warfare in the Paleolithic EEA. The study presented here tests the key a priori prediction of this hypothesis: that current blood-injection phobia will have higher prevalence in reproductive-age women than in post-menopausal women. The Diagnostic Interview Schedule (version III-R), which included a section on blood and injection phobia, was administered to 1920 subjects in the Baltimore ECA Follow-up Study. Data on BII phobia were available for 1724 subjects (1078 women and 646 men). The prevalence of current blood-injection phobia was 3.3% in women aged 27-49 and 1.1% in women over age 50 (OR 3.05, 95% CI 1.20-7.73). [The corresponding figures for males were 0.8% and 0.7% (OR 1.19, 95% CI 0.20-7.14)]. This epidemiological study provides one source of support for the Paleolithic-human-warfare (Paleolithic-threat) hypothesis regarding the evolutionary (distal) etiology of bloodletting-related phobia, and may contribute to a more brain-evolution-based re-conceptualization and classification of this fear circuitry-related trait for the DSM-V. 
In addition, the finding reported here may also stimulate new research directions on more proximal mechanisms which can lead to the development of evidence-based psychopharmacological preventive interventions for this common and sometimes disabling fear-circuitry disorder.
A Hypothesis-Driven Approach to Site Investigation
NASA Astrophysics Data System (ADS)
Nowak, W.
2008-12-01
Variability of subsurface formations and the scarcity of data lead to the notion of aquifer parameters as geostatistical random variables. Given an information need and limited resources for field campaigns, site investigation is often put into the context of optimal design. In optimal design, the types, numbers and positions of samples are optimized under case-specific objectives to meet the information needs. Past studies feature optimal data worth (balancing maximum financial profit in an engineering task versus the cost of additional sampling), or aim at a minimum prediction uncertainty of stochastic models for a prescribed investigation budget. Recent studies also account for other sources of uncertainty outside the hydrogeological range, such as uncertain toxicity, ingestion and behavioral parameters of the affected population when predicting the human health risk from groundwater contaminations. The current study looks at optimal site investigation from a new angle. Answering a yes/no question under uncertainty directly requires recasting the original question as a hypothesis test. Otherwise, false confidence in the resulting answer would be pretended. A straightforward example is whether a recent contaminant spill will cause contaminant concentrations in excess of a legal limit at a nearby drinking water well. This question can only be answered down to a specified chance of error, i.e., based on the significance level used in hypothesis tests. Optimal design is placed into the hypothesis-driven context by using the chance of providing a false yes/no answer as new criterion to be minimized. Different configurations apply for one-sided and two-sided hypothesis tests. If a false answer entails financial liability, the hypothesis-driven context can be re-cast in the context of data worth. The remaining difference is that failure is a hard constraint in the data worth context versus a monetary punishment term in the hypothesis-driven context. 
The basic principle is discussed and illustrated on the case of a hypothetical contaminant spill and the exceedance of critical contaminant levels at a downstream location. A tempting and important side question is whether site investigation could be tweaked towards a yes or no answer in maliciously biased campaigns by unfair formulation of the optimization objective.
Foverskov, Else; Holm, Anders
2016-02-01
Despite social inequality in health being well documented, it is still debated which causal mechanism best explains the negative association between socioeconomic position (SEP) and health. This paper is concerned with testing the explanatory power of three widely proposed causal explanations for social inequality in health in adulthood: the social causation hypothesis (SEP determines health), the health selection hypothesis (health determines SEP) and the indirect selection hypothesis (no causal relationship). We employ dynamic data of respondents aged 30 to 60 from the last nine waves of the British Household Panel Survey. Household income and location on the Cambridge Scale are included as measures of different dimensions of SEP, and health is measured as a latent factor score. The causal hypotheses are tested using a time-based Granger approach by estimating dynamic fixed effects panel regression models following the method suggested by Anderson and Hsiao. We propose using this method to estimate the associations over time since it allows one to control for all unobserved time-invariant factors and hence lower the chances of biased estimates due to unobserved heterogeneity. The results showed no support for the social causation hypothesis over a one to five year period, and limited support for the health selection hypothesis was seen only for men in relation to household income. These findings were robust in multiple sensitivity analyses. We conclude that the indirect selection hypothesis may be the most important in explaining social inequality in health in adulthood, indicating that the well-known cross-sectional correlations between health and SEP in adulthood seem not to be driven by a causal relationship, but instead by dynamics and influences in place before the respondents turn 30 years old that affect both their health and SEP onwards. 
The conclusion is limited in that we do not consider the effect of specific diseases and causal relationships in adulthood may be present over a longer timespan than 5 years. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
SAW, J.G.
This paper deals with some tests of hypothesis frequently encountered in the analysis of multivariate data. The type of hypothesis considered is that which the statistician can answer in the negative or affirmative. The Doolittle method makes it possible to evaluate the determinant of a matrix of high order, to solve a matrix equation, or to…
Kruschke, John K; Liddell, Torrin M
2018-02-01
In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.
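As a concrete illustration of the estimation contrast drawn here, the following is a minimal sketch, not taken from the article: it compares a frequentist Wald confidence interval with a Bayesian equal-tailed credible interval for a binomial proportion. The data (17 successes in 20 trials) and the flat Beta(1, 1) prior are illustrative assumptions.

```python
# Hedged sketch: frequentist vs. Bayesian interval estimates for a
# binomial proportion. Data and prior are illustrative assumptions.
import math
from scipy import stats

k, n = 17, 20
phat = k / n

# Frequentist 95% Wald interval: phat +/- 1.96 * sqrt(phat*(1-phat)/n)
se = math.sqrt(phat * (1 - phat) / n)
wald = (phat - 1.96 * se, phat + 1.96 * se)

# Bayesian 95% equal-tailed credible interval: with a Beta(1, 1) prior
# the posterior is Beta(1 + k, 1 + n - k), and the interval is read off
# its quantiles directly.
posterior = stats.beta(1 + k, 1 + n - k)
credible = (posterior.ppf(0.025), posterior.ppf(0.975))

print(f"Wald CI:           ({wald[0]:.3f}, {wald[1]:.3f})")
print(f"Credible interval: ({credible[0]:.3f}, {credible[1]:.3f})")
```

Note how the credible interval is, by construction, confined to (0, 1), while the Wald interval can spill past 1 for proportions near the boundary; this kind of difference is one practical argument in the estimation-centered debate.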
Gagliardo, Anna; Ioalè, Paolo; Filannino, Caterina; Wikelski, Martin
2011-01-01
A large body of evidence has shown that anosmic pigeons are impaired in their navigation. However, the role of odours in navigation is still subject to debate. While according to the olfactory navigation hypothesis homing pigeons possess a navigational map based on the distribution of environmental odours, the olfactory activation hypothesis proposes that odour perception is only needed to activate a navigational mechanism based on cues of another nature. Here we tested experimentally whether the perception of artificial odours is sufficient to allow pigeons to navigate, as expected from the olfactory activation hypothesis. We transported three groups of pigeons in air-tight containers to release sites 53 and 61 km from home in three different olfactory conditions. The Control group received natural environmental air; both the Pure Air and the Artificial Odour groups received pure air filtered through an active charcoal filter. Only the Artificial Odour group received additional puffs of artificial odours until release. We then released pigeons while recording their tracks with 1 Hz GPS data loggers. We also followed non-homing pigeons using an aerial data readout to a Cessna plane, allowing, for the first time, the tracking of non-homing homing pigeons. Within the first hour after release, the pigeons in both the Artificial Odour and the Pure Air group (receiving no environmental odours) showed impaired navigational performances at each release site. Our data provide evidence against an activation role of odours in navigation, and document that pigeons only navigate well when they perceive environmental odours.
Testing for Polytomies in Phylogenetic Species Trees Using Quartet Frequencies.
Sayyari, Erfan; Mirarab, Siavash
2018-02-28
Phylogenetic species trees typically represent the speciation history as a bifurcating tree. Speciation events that simultaneously create more than two descendants, thereby creating polytomies in the phylogeny, are possible. Moreover, the inability to resolve relationships is often shown as a (soft) polytomy. Both types of polytomies have been traditionally studied in the context of gene tree reconstruction from sequence data. However, polytomies in the species tree cannot be detected or ruled out without considering gene tree discordance. In this paper, we describe a statistical test based on properties of the multi-species coalescent model to test the null hypothesis that a branch in an estimated species tree should be replaced by a polytomy. On both simulated and biological datasets, we show that the null hypothesis is rejected for all but the shortest branches, and in most cases, it is retained for true polytomies. The test, available as part of the Accurate Species TRee ALgorithm (ASTRAL) package, can help systematists decide whether their datasets are sufficient to resolve specific relationships of interest.
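The core idea of the test can be sketched as follows: under the multi-species coalescent, a true polytomy predicts the three possible quartet topologies around a branch in equal 1/3 proportions, so a goodness-of-fit test against uniformity serves as the null-hypothesis test. This is a hedged illustration only, not ASTRAL's implementation, and the quartet counts are made up.

```python
# Hedged illustration (not ASTRAL's code): test H0 "this branch is a
# polytomy" via a chi-square goodness-of-fit of the three observed
# quartet-topology counts against equal expected frequencies (1/3 each),
# as predicted by the multi-species coalescent under a polytomy.
from scipy.stats import chisquare

def polytomy_test(quartet_counts):
    """Chi-square test of H0: the three topology frequencies are all 1/3."""
    # chisquare defaults to uniform expected counts with the same total
    stat, pvalue = chisquare(quartet_counts)
    return stat, pvalue

# A strongly resolved branch: one topology dominates, so H0 is rejected.
print(polytomy_test([300, 50, 50]))
# A polytomy-like branch: near-uniform counts, so H0 is retained.
print(polytomy_test([130, 135, 135]))
```

In practice the counts would come from gene-tree quartets around the branch of interest; the point of the sketch is only the shape of the null hypothesis.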
Distributed Immune Systems for Wireless Network Information Assurance
2010-04-26
ratio test (SPRT), where the goal is to optimize a hypothesis testing problem given a trade-off between the probability of errors and the...using cumulative sum (CUSUM) and Girshik-Rubin-Shiryaev (GRSh) statistics. In sequential versions of the problem the sequential probability ratio ...the more complicated problems, in particular those where no clear mean can be established. We developed algorithms based on the sequential probability
A simple test of association for contingency tables with multiple column responses.
Decady, Y J; Thomas, D R
2000-09-01
Loughin and Scherer (1998, Biometrics 54, 630-637) investigated tests of association in two-way tables when one of the categorical variables allows for multiple-category responses from individual respondents. Standard chi-squared tests are invalid in this case, and they developed a bootstrap test procedure that provides good control of test levels under the null hypothesis. This procedure and some others that have been proposed are computationally involved and are based on techniques that are relatively unfamiliar to many practitioners. In this paper, the methods introduced by Rao and Scott (1981, Journal of the American Statistical Association 76, 221-230) for analyzing complex survey data are used to develop a simple test based on a corrected chi-squared statistic.
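The Rao-Scott-corrected statistic itself involves survey-sampling machinery not reproduced here; as a hedged illustration of the underlying problem, the sketch below uses a Monte Carlo permutation test instead (a deliberately different, simpler technique): a Pearson-type statistic is summed over the per-item 2 x groups tables, and its null distribution is obtained by permuting group labels, which remains valid even though respondents give multiple correlated column responses. The data are fabricated.

```python
# Hedged sketch (a permutation test, NOT the paper's corrected
# chi-squared): association between a grouping variable and a
# multiple-response item set, where each respondent may pick several
# items and the ordinary chi-squared test is therefore invalid.
import numpy as np

rng = np.random.default_rng(0)

def item_chisq(groups, picks):
    """Sum of per-item Pearson statistics from 2 x n_groups tables."""
    total = 0.0
    for j in range(picks.shape[1]):
        tab = np.array([[np.sum((groups == g) & (picks[:, j] == v))
                         for g in np.unique(groups)] for v in (0, 1)], float)
        expected = tab.sum(1, keepdims=True) * tab.sum(0) / tab.sum()
        total += np.sum((tab - expected) ** 2 / expected)
    return total

def permutation_test(groups, picks, n_perm=2000):
    """Monte Carlo p-value: how often permuted labels beat the observed."""
    observed = item_chisq(groups, picks)
    null = [item_chisq(rng.permutation(groups), picks) for _ in range(n_perm)]
    return observed, np.mean(np.array(null) >= observed)

# Fabricated data: 100 respondents in 2 groups, 3 yes/no item columns,
# generated with no true association.
groups = np.repeat([0, 1], 50)
picks = rng.integers(0, 2, size=(100, 3))
obs, p = permutation_test(groups, picks)
```

Permuting whole rows keeps each respondent's within-person correlation across items intact, which is exactly what the naive chi-squared test ignores.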
Young, Anna M.; Cordier, Breanne; Mundry, Roger; Wright, Timothy F.
2014-01-01
In many social species, group members share acoustically similar calls. Functional hypotheses have been proposed for call sharing, but previous studies have been limited by an inability to distinguish among these hypotheses. We examined the function of vocal sharing in female budgerigars with a two-part experimental design that allowed us to distinguish between two functional hypotheses. The social association hypothesis proposes that shared calls help animals mediate affiliative and aggressive interactions, while the password hypothesis proposes that shared calls allow animals to distinguish group identity and exclude nonmembers. We also tested the labeling hypothesis, a mechanistic explanation which proposes that shared calls are used to address specific individuals within the sender–receiver relationship. We tested the social association hypothesis by creating four–member flocks of unfamiliar female budgerigars (Melopsittacus undulatus) and then monitoring the birds’ calls, social behaviors, and stress levels via fecal glucocorticoid metabolites. We tested the password hypothesis by moving immigrants into established social groups. To test the labeling hypothesis, we conducted additional recording sessions in which individuals were paired with different group members. The social association hypothesis was supported by the development of multiple shared call types in each cage and a correlation between the number of shared call types and the number of aggressive interactions between pairs of birds. We also found support for calls serving as a labeling mechanism using discriminant function analysis with a permutation procedure. Our results did not support the password hypothesis, as there was no difference in stress or directed behaviors between immigrant and control birds. PMID:24860236
New Approaches to Robust Confidence Intervals for Location: A Simulation Study.
1984-06-01
obtain a denominator for the test statistic. Those statistics based on location estimates derived from Hampel’s redescending influence function or v...defined an influence function for a test in terms of the behavior of its P-values when the data are sampled from a model distribution modified by point...proposal could be used for interval estimation as well as hypothesis testing, the extension is immediate. Once an influence function has been defined
1990-09-12
electronics reading to the next. To test this hypothesis and the suitability of EBL to acquiring schemas, I have implemented an automated reader/learner as...used. For example, testing the utility of a kidnapping schema using several readings about kidnapping can only go so far toward establishing the...the cost of carrying the new rules while processing unrelated material will be underestimated. The present research tests the utility of new schemas in
Modular, Semantics-Based Composition of Biosimulation Models
ERIC Educational Resources Information Center
Neal, Maxwell Lewis
2010-01-01
Biosimulation models are valuable, versatile tools used for hypothesis generation and testing, codification of biological theory, education, and patient-specific modeling. Driven by recent advances in computational power and the accumulation of systems-level experimental data, modelers today are creating models with an unprecedented level of…
Neutral aggregation in finite-length genotype space
NASA Astrophysics Data System (ADS)
Houchmandzadeh, Bahram
2017-01-01
The advent of modern genome sequencing techniques allows for a more stringent test of the neutrality hypothesis of Darwinian evolution, where all individuals have the same fitness. Using the individual-based model of Wright and Fisher, we compute the amplitude of neutral aggregation in the genome space, i.e., the probability of finding two individuals at genetic (Hamming) distance k as a function of the genome size L, population size N, and mutation probability per base ν. In well-mixed populations, we show that for Nν < 1/L, neutral aggregation is the dominant force and most individuals are found at short genetic distances from each other. For Nν > 1, on the contrary, individuals are randomly dispersed in genome space. The results are extended to a geographically dispersed population, where the controlling parameter is shown to be a combination of mutation and migration probability. The theory we develop can be used to test the neutrality hypothesis in various ecological and evolutionary systems.
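The quantity studied here is easy to probe numerically. Below is a minimal sketch, with illustrative parameter values that are assumptions rather than the paper's: a Wright-Fisher simulation on binary genomes of length L with per-base mutation probability nu, from which the distribution of pairwise Hamming distances is tabulated.

```python
# Hedged sketch: individual-based Wright-Fisher model on binary genomes,
# used to estimate P(two individuals are at Hamming distance k).
# N, L, nu and the run length are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def wright_fisher(N=50, L=20, nu=0.01, generations=200):
    pop = np.zeros((N, L), dtype=np.int8)  # start from a single genotype
    for _ in range(generations):
        parents = rng.integers(0, N, size=N)   # neutral resampling step
        pop = pop[parents]
        flips = rng.random((N, L)) < nu        # per-base mutation step
        pop = np.where(flips, 1 - pop, pop)
    return pop

def hamming_distribution(pop):
    """Empirical distribution of pairwise Hamming distances."""
    N, L = pop.shape
    dists = [np.sum(pop[i] != pop[j])
             for i in range(N) for j in range(i + 1, N)]
    return np.bincount(dists, minlength=L + 1) / len(dists)

pop = wright_fisher()
pk = hamming_distribution(pop)  # pk[k]: fraction of pairs at distance k
```

With N*nu = 0.5 and 1/L = 0.05 these illustrative values sit in the regime Nν > 1/L; sweeping nu up or down moves the mass of pk between the aggregated and dispersed regimes described in the abstract.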
Reboiro-Jato, Miguel; Arrais, Joel P; Oliveira, José Luis; Fdez-Riverola, Florentino
2014-01-30
The diagnosis and prognosis of several diseases can be shortened through the use of different large-scale genome experiments. In this context, microarrays can generate expression data for a huge set of genes. However, to obtain solid statistical evidence from the resulting data, it is necessary to train and to validate many classification techniques in order to find the best discriminative method. This is a time-consuming process that normally depends on intricate statistical tools. geneCommittee is a web-based interactive tool for routinely evaluating the discriminative classification power of custom hypotheses in the form of biologically relevant gene sets. While the user can work with different gene set collections and several microarray data files to configure specific classification experiments, the tool is able to run several tests in parallel. Provided with a straightforward and intuitive interface, geneCommittee is able to render valuable information for diagnostic analyses and clinical management decisions based on systematically evaluating custom hypotheses over different data sets using complementary classifiers, a key aspect in clinical research. geneCommittee allows the enrichment of raw microarray data with gene functional annotations, producing integrated datasets that simplify the construction of better discriminative hypotheses, and allows the creation of a set of complementary classifiers. The trained committees can then be used for clinical research and diagnosis. Full documentation including common use cases and guided analysis workflows is freely available at http://sing.ei.uvigo.es/GC/.
ERIC Educational Resources Information Center
Posada, German; Lu, Ting; Trumbell, Jill; Kaloustian, Garene; Trudel, Marcel; Plata, Sandra J.; Peña, Paola P.; Perez, Jennifer; Tereno, Susana; Dugravier, Romain; Coppola, Gabrielle; Constantini, Alessandro; Cassibba, Rosalinda; Kondo-Ikemura, Kiyomi; Nóblega, Magaly; Haya, Ines M.; Pedraglio, Claudia; Verissimo, Manuela; Santos, Antonio J.; Monteiro, Ligia; Lay, Keng-Ling
2013-01-01
The evolutionary rationale offered by Bowlby implies that secure base relationships are common in child-caregiver dyads and thus, child secure behavior observable across diverse social contexts and cultures. This study offers a test of the universality hypothesis. Trained observers in nine countries used the Attachment Q-set to describe the…
Spatial Abilities in an Elective Course of Applied Anatomy after a Problem-Based Learning Curriculum
ERIC Educational Resources Information Center
Langlois, Jean; Wells, George A.; Lecourtois, Marc; Bergeron, Germain; Yetisir, Elizabeth; Martin, Marcel
2009-01-01
A concern on the level of anatomy knowledge reached after a problem-based learning curriculum has been documented in the literature. Spatial anatomy, arguably the highest level in anatomy knowledge, has been related to spatial abilities. Our first objective was to test the hypothesis that residents are interested in a course of applied anatomy…
ERIC Educational Resources Information Center
Mahasneh, Omar. M.; Farajat, Amani. M.
2015-01-01
The present research was conducted to identify the effectiveness of a training program based on practice of careers in vocational interests development, to answer questions about the study and test its hypothesis the training program had been prepared and the adoption of a measure of vocational interests, as validity and reliability of each of…
Fast Poisson noise removal by biorthogonal Haar domain hypothesis testing
NASA Astrophysics Data System (ADS)
Zhang, B.; Fadili, M. J.; Starck, J.-L.; Digel, S. W.
2008-07-01
Methods based on hypothesis tests (HTs) in the Haar domain are widely used to denoise Poisson count data. Facing large datasets or real-time applications, Haar-based denoisers have to use the decimated transform to meet limited-memory or computation-time constraints. Unfortunately, for regular underlying intensities, decimation yields discontinuous estimates and strong “staircase” artifacts. In this paper, we propose to combine the HT framework with the decimated biorthogonal Haar (Bi-Haar) transform instead of the classical Haar. The Bi-Haar filter bank is normalized such that the p-values of Bi-Haar coefficients (p_BH) provide a good approximation to those of Haar (p_H) for high-intensity settings or large scales; for low-intensity settings and small scales, we show that p_BH are essentially upper-bounded by p_H. Thus, we may apply the Haar-based HTs to Bi-Haar coefficients to control a prefixed false positive rate. By doing so, we benefit from the regular Bi-Haar filter bank to gain a smooth estimate while always maintaining a low computational complexity. A Fisher-approximation-based threshold implementing the HTs is also established. The efficiency of this method is illustrated on an example of hyperspectral-source-flux estimation.
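The Haar-domain hypothesis test this builds on can be illustrated with a small sketch (the Bi-Haar normalization itself is not reproduced here; the threshold level is an illustrative assumption): for adjacent Poisson counts x1 and x2 with equal underlying intensity, conditionally on n = x1 + x2 the count x1 is Binomial(n, 1/2), so an exact binomial test gives a p-value for the Haar detail coefficient x1 - x2.

```python
# Hedged illustration of a Haar-domain HT for Poisson counts: under
# H0 (equal intensities), x1 | (x1 + x2 = n) ~ Binomial(n, 1/2), so a
# two-sided binomial test decides whether the detail coefficient
# x1 - x2 is significant. Non-significant details are zeroed to denoise.
from scipy.stats import binomtest

def haar_detail_significant(x1, x2, alpha=1e-3):
    """Return True if the Haar detail x1 - x2 rejects H0 at level alpha."""
    n = x1 + x2
    if n == 0:
        return False  # no counts, nothing to test
    return binomtest(x1, n, p=0.5).pvalue < alpha

print(haar_detail_significant(12, 10))  # similar counts: detail killed
print(haar_detail_significant(40, 5))   # strong contrast: detail kept
```

Controlling the per-coefficient false positive rate this way is what the abstract refers to as applying the Haar-based HTs, here shown on raw Haar pairs rather than Bi-Haar coefficients.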
Yannelli, F A; Koch, C; Jeschke, J M; Kollmann, J
2017-03-01
Several hypotheses have been proposed to explain biotic resistance of a recipient plant community based on reduced niche opportunities for invasive alien plant species. The limiting similarity hypothesis predicts that invasive species are less likely to establish in communities of species holding similar functional traits. Likewise, Darwin's naturalization hypothesis states that invasive species closely related to the native community would be less successful. We tested both using the invasive alien Ambrosia artemisiifolia L. and Solidago gigantea Aiton, and grassland species used for ecological restoration in central Europe. We classified all plant species into groups based on functional traits obtained from trait databases and calculated the phylogenetic distance among them. In a greenhouse experiment, we submitted the two invasive species at two propagule pressures to competition with communities of ten native species from the same functional group. In another experiment, they were submitted to pairwise competition with native species selected from each functional group. At the community level, highest suppression for both invasive species was observed at low propagule pressure and not explained by similarity in functional traits. Moreover, suppression decreased asymptotically with increasing phylogenetic distance to species of the native community. When submitted to pairwise competition, suppression for both invasive species was also better explained by phylogenetic distance. Overall, our results support Darwin's naturalization hypothesis but not the limiting similarity hypothesis based on the selected traits. Biotic resistance of native communities against invasive species at an early stage of establishment is enhanced by competitive traits and phylogenetic relatedness.
Linguistic Phylogenies Support Back-Migration from Beringia to Asia
Sicoli, Mark A.; Holton, Gary
2014-01-01
Recent arguments connecting Na-Dene languages of North America with Yeniseian languages of Siberia have been used to assert a central or western Asian origin for Native Americans. We apply phylogenetic methods to test support for this hypothesis against an alternative hypothesis that Yeniseian represents a back-migration to Asia from a Beringian ancestral population. We coded a linguistic dataset of typological features and used neighbor-joining network algorithms and Bayesian model comparison based on Bayes factors to test the fit between the data and the linguistic phylogenies modeling two dispersal hypotheses. Our results indicate that a Dene-Yeniseian connection more likely represents radiation out of Beringia with back-migration into central Asia than a migration from central or western Asia to North America. PMID:24621925
Phase II design with sequential testing of hypotheses within each stage.
Poulopoulou, Stavroula; Karlis, Dimitris; Yiannoutsos, Constantin T; Dafni, Urania
2014-01-01
The main goal of a Phase II clinical trial is to decide whether a particular therapeutic regimen is effective enough to warrant further study. The hypothesis tested by Fleming's Phase II design (Fleming, 1982) is [Formula: see text] versus [Formula: see text], with level [Formula: see text] and with a power [Formula: see text] at [Formula: see text], where [Formula: see text] is chosen to represent the response probability achievable with standard treatment and [Formula: see text] is chosen such that the difference [Formula: see text] represents a targeted improvement with the new treatment. This hypothesis creates a misinterpretation, mainly among clinicians, that rejection of the null hypothesis is tantamount to accepting the alternative, and vice versa. As mentioned by Storer (1992), this introduces ambiguity in the evaluation of type I and II errors and the choice of the appropriate decision at the end of the study. Instead of testing this hypothesis, an alternative class of designs is proposed in which two hypotheses are tested sequentially. The hypothesis [Formula: see text] versus [Formula: see text] is tested first. If this null hypothesis is rejected, the hypothesis [Formula: see text] versus [Formula: see text] is tested next, in order to examine whether the therapy is effective enough to consider further testing in a Phase III study. For the derivation of the proposed design, the exact binomial distribution is used to calculate the decision cut-points. The optimal design parameters are chosen so as to minimize the average sample number (ASN) under specific upper bounds for error levels. The optimal values for the design were found using a simulated annealing method.
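Exact binomial cut-points of the kind mentioned above can be computed directly. A minimal sketch for a Fleming-type single test of H0: p ≤ p0 (the parameter names n, p0 and alpha are generic, not the paper's notation):

```python
from math import comb

def binom_sf(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

def rejection_cutpoint(n: int, p0: float, alpha: float) -> int:
    """Smallest response count r such that observing >= r responses in n
    patients rejects H0: p <= p0 at exact (conservative) level alpha."""
    for r in range(n + 1):
        if binom_sf(r, n, p0) <= alpha:
            return r
    return n + 1  # no cut-point: H0 cannot be rejected with this n
```

For example, with n = 25, p0 = 0.2 and alpha = 0.05 the exact cut-point is 9 responses, since P(X >= 9) ≈ 0.047 while P(X >= 8) ≈ 0.109.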
Monocular precrash vehicle detection: features and classifiers.
Sun, Zehang; Bebis, George; Miller, Ronald
2006-07-01
Robust and reliable vehicle detection from images acquired by a moving vehicle (i.e., on-road vehicle detection) is an important problem with applications to driver assistance systems and autonomous, self-guided vehicles. The focus of this work is on the issues of feature extraction and classification for rear-view vehicle detection. Specifically, by treating the problem of vehicle detection as a two-class classification problem, we have investigated several different feature extraction methods such as principal component analysis, wavelets, and Gabor filters. To evaluate the extracted features, we have experimented with two popular classifiers, neural networks and support vector machines (SVMs). Based on our evaluation results, we have developed an on-board real-time monocular vehicle detection system that is capable of acquiring grey-scale images, using Ford's proprietary low-light camera, achieving an average detection rate of 10 Hz. Our vehicle detection algorithm consists of two main steps: a multiscale driven hypothesis generation step and an appearance-based hypothesis verification step. During the hypothesis generation step, image locations where vehicles might be present are extracted. This step uses multiscale techniques not only to speed up detection, but also to improve system robustness. The appearance-based hypothesis verification step verifies the hypotheses using Gabor features and SVMs. The system has been tested in Ford's concept vehicle under different traffic conditions (e.g., structured highway, complex urban streets, and varying weather conditions), illustrating good performance.
Place recognition using batlike sonar.
Vanderelst, Dieter; Steckel, Jan; Boen, Andre; Peremans, Herbert; Holderied, Marc W
2016-08-02
Echolocating bats have excellent spatial memory and are able to navigate to salient locations using bio-sonar. Navigating and route-following require animals to recognize places. Currently, it is mostly unknown how bats recognize places using echolocation. In this paper, we propose that template-based place recognition might underlie sonar-based navigation in bats. Under this hypothesis, bats recognize places by remembering their echo signature - rather than their 3D layout. Using a large body of ensonification data collected in three different habitats, we test the viability of this hypothesis by assessing two critical properties of the proposed echo signatures: (1) they can be uniquely classified and (2) they vary continuously across space. Based on the results presented, we conclude that the proposed echo signatures satisfy both criteria. We discuss how these two properties of the echo signatures can support navigation and building a cognitive map.
Hypothesis testing of scientific Monte Carlo calculations.
Wallerberger, Markus; Gull, Emanuel
2017-11-01
The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
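As a minimal illustration of the idea (not the authors' test suite), a z-test can check whether a Monte Carlo estimate is statistically consistent with a known exact value:

```python
import math
import random

def mc_consistent(sample_mean: float, sample_std: float, n: int,
                  exact: float, alpha: float = 0.01) -> bool:
    """Two-sided z-test of an MC estimate against a known exact value.
    Returns True when the estimate is statistically consistent with it."""
    z = (sample_mean - exact) / (sample_std / math.sqrt(n))
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return p_value >= alpha

# Example: MC estimate of E[U] for U ~ Uniform(0, 1); the exact answer is 0.5.
random.seed(0)
xs = [random.random() for _ in range(100_000)]
m = sum(xs) / len(xs)
s = math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
```

A test like this, run automatically, flags both programming bugs (wrong answer) and broken error bars (variance misestimated), which is exactly the class of problems the abstract targets.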
Chiang, Kuo-Szu; Bock, Clive H; Lee, I-Hsuan; El Jarroudi, Moussa; Delfosse, Philippe
2016-12-01
The effect of rater bias and assessment method on hypothesis testing was studied for representative experimental designs for plant disease assessment using balanced and unbalanced data sets. Data sets with the same number of replicate estimates for each of two treatments are termed "balanced" and those with unequal numbers of replicate estimates are termed "unbalanced". The three assessment methods considered were nearest percent estimates (NPEs), an amended 10% incremental scale, and the Horsfall-Barratt (H-B) scale. Estimates of severity of Septoria leaf blotch on leaves of winter wheat were used to develop distributions for a simulation model. The experimental designs are presented here in the context of simulation experiments which consider the optimal design for the number of specimens (individual units sampled) and the number of replicate estimates per specimen for a fixed total number of observations (total sample size for the treatments being compared). The criterion used to gauge each method was the power of the hypothesis test. As expected, at a given fixed number of observations, the balanced experimental designs invariably resulted in a higher power compared with the unbalanced designs at different disease severity means, mean differences, and variances. Based on these results, with unbiased estimates using NPE, the recommended number of replicate estimates taken per specimen is 2 (from a sample of specimens of at least 30), because this conserves resources. Furthermore, for biased estimates, an apparent difference in the power of the hypothesis test was observed between assessment methods and between experimental designs. Results indicated that, regardless of experimental design or rater bias, an amended 10% incremental scale has slightly less power compared with NPEs, and that the H-B scale is more likely than the others to cause a type II error. 
These results suggest that choice of assessment method, optimizing sample number and number of replicate estimates, and using a balanced experimental design are important criteria to consider to maximize the power of hypothesis tests for comparing treatments using disease severity estimates.
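The advantage of balanced over unbalanced designs at a fixed total sample size can be reproduced in miniature by simulation. The sketch below uses a hypothetical two-sample pooled t-test on normal data, not the authors' Septoria severity distributions:

```python
import math
import random

def pooled_t(x, y):
    """Two-sample pooled-variance t statistic."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    sp2 = (sum((v - mx) ** 2 for v in x)
           + sum((v - my) ** 2 for v in y)) / (nx + ny - 2)
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

def simulated_power(n1, n2, delta=1.0, reps=2000, t_crit=2.048):
    """MC power of a two-sided pooled t-test; t_crit = t(0.975, df = 28),
    valid here because n1 + n2 - 2 = 28 in both designs compared below."""
    hits = 0
    for _ in range(reps):
        x = [random.gauss(delta, 1.0) for _ in range(n1)]
        y = [random.gauss(0.0, 1.0) for _ in range(n2)]
        hits += abs(pooled_t(x, y)) > t_crit
    return hits / reps

random.seed(42)
p_balanced = simulated_power(15, 15)   # total of 30 observations, split evenly
p_unbalanced = simulated_power(5, 25)  # same total, split unevenly
```

The balanced split minimizes the standard error of the mean difference at fixed total n, so its estimated power is consistently higher, mirroring the paper's finding.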
Understanding suicide terrorism: premature dismissal of the religious-belief hypothesis.
Liddle, James R; Machluf, Karin; Shackelford, Todd K
2010-07-06
We comment on work by Ginges, Hansen, and Norenzayan (2009), in which they compare two hypotheses for predicting individual support for suicide terrorism: the religious-belief hypothesis and the coalitional-commitment hypothesis. Although we appreciate the evidence provided in support of the coalitional-commitment hypothesis, we argue that their method of testing the religious-belief hypothesis is conceptually flawed, thus calling into question their conclusion that the religious-belief hypothesis has been disconfirmed. In addition to critiquing the methodology implemented by Ginges et al., we provide suggestions on how the religious-belief hypothesis may be properly tested. It is possible that the premature and unwarranted conclusions reached by Ginges et al. may deter researchers from examining the effect of specific religious beliefs on support for terrorism, and we hope that our comments can mitigate this possibility.
Tanoue, Naomi
2007-10-01
For any kind of research, "Research Design" is the most important element. The design is used to structure the research and to show how all of the major parts of the research project fit together. All researchers should plan the research design before beginning the research: what is the main theme, what are the background and references, what kind of data are needed, and what kind of analysis is needed. It may seem a roundabout route, but, in fact, it is a shortcut. The research methods must be appropriate to the objectives of the study. For hypothesis-testing research, the traditional style of research, a research design based on statistics is undoubtedly necessary, considering that such research basically proves a hypothesis with data and statistical theory. For a clinical trial, the clinical version of hypothesis-testing research, the statistical method must be specified in the trial plan. This report describes the basics of research design for a prosthodontics study.
Statistical power analysis in wildlife research
Steidl, R.J.; Hayes, J.P.
1997-01-01
Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true. 
We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.
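The claim that retrospective power computed at the observed effect size is uninformative can be checked directly for a two-sided z-test: when the observed z exactly equals the critical value (p = α), the "observed power" is about 0.5, and any nonsignificant result gives less. A textbook illustration (not the authors' code):

```python
from statistics import NormalDist

def observed_power(z_obs: float, alpha: float = 0.05) -> float:
    """Post-hoc 'power' of a two-sided z-test, evaluated at the observed
    effect size itself -- the quantity the abstract warns against."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(z_obs - z_crit) + nd.cdf(-z_obs - z_crit)
```

Because observed power is a deterministic function of the p-value, it adds no information beyond the test result, which is why the authors recommend confidence intervals instead.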
[Screening for psychiatric risk factors in facial trauma patients. Validating a questionnaire].
Foletti, J M; Bruneau, S; Farisse, J; Thiery, G; Chossegros, C; Guyot, L
2014-12-01
We recorded similarities between patients managed in the psychiatry department and in the maxillo-facial surgical unit. Our hypothesis was that some psychiatric conditions act as risk factors for facial trauma. Our aim was to test this hypothesis and to validate a simple and efficient questionnaire to identify these psychiatric disorders. Fifty-eight consenting patients with facial trauma, recruited prospectively in the 3 maxillo-facial surgery departments of the Marseille area during 3 months (December 2012-March 2013), completed a self-questionnaire based on the French version of 3 validated screening tests (Self Reported Psychopathy test, Rapid Alcohol Problem Screening test quantity-frequency, and Personal Health Questionnaire). This preliminary study confirmed that psychiatric conditions detected by our questionnaire, namely alcohol abuse and dependence, substance abuse, and depression, were risk factors for facial trauma. Maxillo-facial surgeons are often unaware of psychiatric disorders that may be the cause of facial trauma. The self-screening test we propose allows documenting the psychiatric history of patients and implementing earlier psychiatric care. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
On Some Assumptions of the Null Hypothesis Statistical Testing
ERIC Educational Resources Information Center
Patriota, Alexandre Galvão
2017-01-01
Bayesian and classical statistical approaches are based on different types of logical principles. In order to avoid mistaken inferences and misguided interpretations, the practitioner must respect the inference rules embedded into each statistical method. Ignoring these principles leads to the paradoxical conclusions that the hypothesis…
Building Intuitions about Statistical Inference Based on Resampling
ERIC Educational Resources Information Center
Watson, Jane; Chance, Beth
2012-01-01
Formal inference, which makes theoretical assumptions about distributions and applies hypothesis testing procedures with null and alternative hypotheses, is notoriously difficult for tertiary students to master. The debate about whether this content should appear in Years 11 and 12 of the "Australian Curriculum: Mathematics" has gone on…
ERIC Educational Resources Information Center
McDonough, Ian M.; Gallo, David A.
2008-01-01
Retrieval monitoring enhances episodic memory accuracy. For instance, false recognition is reduced when participants base their decisions on more distinctive recollections, a retrieval monitoring process called the distinctiveness heuristic. The experiments reported here tested the hypothesis that autobiographical elaboration during study (i.e.,…
Confirmatory and Competitive Evaluation of Alternative Gene-Environment Interaction Hypotheses
ERIC Educational Resources Information Center
Belsky, Jay; Pluess, Michael; Widaman, Keith F.
2013-01-01
Background: Most gene-environment interaction (GXE) research, though based on clear, vulnerability-oriented hypotheses, is carried out using exploratory rather than hypothesis-informed statistical tests, limiting power and making formal evaluation of competing GXE propositions difficult. Method: We present and illustrate a new regression technique…
ERIC Educational Resources Information Center
Besken, Miri
2016-01-01
The perceptual fluency hypothesis claims that items that are easy to perceive at encoding induce an illusion that they will be easier to remember, despite the finding that perception does not generally affect recall. The current set of studies tested the predictions of the perceptual fluency hypothesis with a picture generation manipulation.…
Adolescents' Body Image Trajectories: A Further Test of the Self-Equilibrium Hypothesis
ERIC Educational Resources Information Center
Morin, Alexandre J. S.; Maïano, Christophe; Scalas, L. Francesca; Janosz, Michel; Litalien, David
2017-01-01
The self-equilibrium hypothesis underlines the importance of having a strong core self, which is defined as a high and developmentally stable self-concept. This study tested this hypothesis in relation to body image (BI) trajectories in a sample of 1,006 adolescents (M[subscript age] = 12.6, including 541 males and 465 females) across a 4-year…
ERIC Educational Resources Information Center
Trafimow, David
2017-01-01
There has been much controversy over the null hypothesis significance testing procedure, with much of the criticism centered on the problem of inverse inference. Specifically, p gives the probability of the finding (or one more extreme) given the null hypothesis, whereas the null hypothesis significance testing procedure involves drawing a…
Human female orgasm as evolved signal: a test of two hypotheses.
Ellsworth, Ryan M; Bailey, Drew H
2013-11-01
We present the results of a study designed to empirically test predictions derived from two hypotheses regarding human female orgasm behavior as an evolved communicative trait or signal. One hypothesis tested was the female fidelity hypothesis, which posits that human female orgasm signals a woman's sexual satisfaction and therefore her likelihood of future fidelity to a partner. The other was the sire choice hypothesis, which posits that women's orgasm behavior signals increased chances of fertilization. To test the two hypotheses of human female orgasm, we administered a questionnaire to 138 females and 121 males who reported that they were currently in a romantic relationship. Key predictions of the female fidelity hypothesis were not supported. In particular, orgasm was not associated with female sexual fidelity, nor was orgasm associated with male perceptions of partner sexual fidelity. However, faked orgasm was associated with female sexual infidelity and lower male relationship satisfaction. Overall, results were in greater support of the sire choice signaling hypothesis than the female fidelity hypothesis. Results also suggest that male satisfaction with, investment in, and sexual fidelity to a mate are benefits that favored the selection of orgasmic signaling in ancestral females.
Luo, Liqun; Zhao, Wei; Weng, Tangmei
2016-01-01
The Trivers-Willard hypothesis predicts that high-status parents will bias their investment to sons, whereas low-status parents will bias their investment to daughters. Among humans, tests of this hypothesis have yielded mixed results. This study tests the hypothesis using data collected among contemporary peasants in Central South China. We use current family status (rated by our informants) and father's former class identity (assigned by the Chinese Communist Party in the early 1950s) as measures of parental status, and proportion of sons in offspring and offspring's years of education as measures of parental investment. Results show that (i) those families with a higher former class identity such as landlord and rich peasant tend to have a higher socioeconomic status currently, (ii) high-status parents are more likely to have sons than daughters among their biological offspring, and (iii) in higher-status families, the years of education obtained by sons exceed that obtained by daughters to a larger extent than in lower-status families. Thus, the first assumption and the two predictions of the hypothesis are supported by this study. This article contributes a contemporary Chinese case to the testing of the Trivers-Willard hypothesis.
A Classic Test of the Hubbert-Rubey Weakening Mechanism: M7.6 Thrust-Belt Earthquake Taiwan
NASA Astrophysics Data System (ADS)
Yue, L.; Suppe, J.
2005-12-01
The Hubbert-Rubey (1959) fluid-pressure hypothesis has long been accepted as a classic solution to the problem of the apparent weakness of long thin thrust sheets. This hypothesis, in its classic form, argues that ambient high pore-fluid pressures, which are common in sedimentary basins, reduce the normalized shear traction on the fault: τ_b/(ρgH) = μ_b(1 − λ_b), where λ_b = P_f/(ρgH) is the normalized pore-fluid pressure and μ_b is the coefficient of friction. Remarkably, there have been few large-scale tests of this classic hypothesis. Here we document ambient pore-fluid pressures surrounding the active frontal thrusts of western Taiwan, including the Chulungpu thrust that slipped in the 1999 Mw7.6 Chi-Chi earthquake. We show from 3-D mapping of these thrusts that they flatten to a shallow detachment at about 5 km depth in the Pliocene Chinshui Shale. Using critical-taper wedge theory and the dip of the detachment and surface slope, we constrain the basal shear traction τ_b/(ρgH) ≈ 0.1, which is substantially weaker than common lab friction values of Byerlee's law (μ_b = 0.85-0.6). We have determined the pore-fluid pressures as a function of depth in 76 wells, based on in-situ formation tests, sonic logs and mud densities. Fluid pressures are regionally controlled stratigraphically by sedimentary facies. The top of overpressures is everywhere below the base of the Chinshui Shale, therefore the entire Chinshui thrust system is at ambient hydrostatic pore-fluid pressures (λ_b ≈ 0.4). According to the classic Hubbert-Rubey hypothesis the required basal coefficient of friction is therefore μ_b ≈ 0.1-0.2. Therefore the classic Hubbert & Rubey mechanism involving static ambient excess fluid pressures is not the cause of extreme fault weakening in this western Taiwan example. We must look to other mechanisms of large-scale fault weakening, many of which are difficult to test.
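The arithmetic of the final step is a one-liner: solving the Hubbert-Rubey relation for the friction coefficient with the values quoted in the abstract gives μ_b well below Byerlee friction.

```python
def required_friction(normalized_traction: float, lam: float) -> float:
    """Basal friction mu_b implied by the Hubbert-Rubey relation
    tau_b/(rho*g*H) = mu_b * (1 - lambda_b)."""
    return normalized_traction / (1.0 - lam)

# Values quoted in the abstract: tau_b/(rho*g*H) ~ 0.1 at hydrostatic
# pore pressure (lambda_b ~ 0.4) implies mu_b ~ 0.17, far below
# Byerlee friction (0.6-0.85).
mu_b = required_friction(0.1, 0.4)
```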
Diaz, Francisco J.; McDonald, Peter R.; Pinter, Abraham; Chaguturu, Rathnam
2018-01-01
Biomolecular screening research frequently searches for the chemical compounds that are most likely to make a biochemical or cell-based assay system produce a strong continuous response. Several doses are tested with each compound and it is assumed that, if there is a dose-response relationship, the relationship follows a monotonic curve, usually a version of the median-effect equation. However, the null hypothesis of no relationship cannot be statistically tested using this equation. We used a linearized version of this equation to define a measure of pharmacological effect size, and use this measure to rank the investigated compounds in order of their overall capability to produce strong responses. The null hypothesis that none of the examined doses of a particular compound produced a strong response can be tested with this approach. The proposed approach is based on a new statistical model of the important concept of response detection limit, a concept that is usually neglected in the analysis of dose-response data with continuous responses. The methodology is illustrated with data from a study searching for compounds that neutralize the infection by a human immunodeficiency virus of brain glioblastoma cells. PMID:24905187
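The median-effect equation is commonly linearized as log10(fa/(1 − fa)) = m·log10(D) − m·log10(Dm), where fa is the fraction affected, D the dose, Dm the median-effect dose and m the slope. A least-squares sketch of that linearization (an assumed textbook form; the paper's actual model additionally incorporates a response detection limit):

```python
import math

def fit_median_effect(doses, fractions_affected):
    """Ordinary least squares on the linearized median-effect equation
    log10(fa / (1 - fa)) = m * log10(D) - m * log10(Dm).
    Returns the slope m and the median-effect dose Dm."""
    xs = [math.log10(d) for d in doses]
    ys = [math.log10(fa / (1.0 - fa)) for fa in fractions_affected]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    m = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    dm = 10.0 ** (xbar - ybar / m)  # intercept = -m * log10(Dm)
    return m, dm

# Synthetic check: data generated with m = 2 and Dm = 10 are recovered.
doses = [1.0, 10.0, 100.0]
fa = [(d / 10.0) ** 2 / (1.0 + (d / 10.0) ** 2) for d in doses]
m_hat, dm_hat = fit_median_effect(doses, fa)
```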
Uno, Hajime; Tian, Lu; Claggett, Brian; Wei, L J
2015-12-10
With censored event time observations, the logrank test is the most popular tool for testing the equality of two underlying survival distributions. Although this test is asymptotically distribution free, it may not be powerful when the proportional hazards assumption is violated. Various other novel testing procedures have been proposed, which generally are derived by assuming a class of specific alternative hypotheses with respect to the hazard functions. The test considered by Pepe and Fleming (1989) is based on a linear combination of weighted differences of the two Kaplan-Meier curves over time and is a natural tool to assess the difference of two survival functions directly. In this article, we take a similar approach but choose weights that are proportional to the observed standardized difference of the estimated survival curves at each time point. The new proposal automatically makes weighting adjustments empirically. The new test statistic is aimed at a one-sided general alternative hypothesis and is distributed with a short right tail under the null hypothesis but with a heavy tail under the alternative. The results from extensive numerical studies demonstrate that the new procedure performs well under various general alternatives, with the caveat of a minor inflation of the type I error rate when the sample size or the number of observed events is small. The survival data from a recent cancer comparative study are utilized to illustrate the implementation of the procedure. Copyright © 2015 John Wiley & Sons, Ltd.
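Both the Pepe-Fleming statistic and the proposal above are built from Kaplan-Meier curves. A minimal KM estimator sketch (illustrative, not the authors' implementation):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate: returns [(t, S(t))] at each
    distinct event time. events: 1 = observed event, 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = removed = 0
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= removed  # drop both events and censored subjects
    return curve

# events at t = 1, 2, 4; the subject at t = 3 is censored
curve = kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])
```

A weighted-difference statistic then integrates w(t)·(S1(t) − S2(t)) over the observed event times, with the weights chosen as in the article.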
Central Atlantic regional ecological test site
NASA Technical Reports Server (NTRS)
Alexander, R. H.
1972-01-01
The work of the Central Atlantic Regional Ecological Test Site (CARETS) project is discussed. The primary aim of CARETS is to test the hypothesis that data from ERTS-A can be made an integral part of a regional land resources information system, encompassing both inventory of the resource base and monitoring of changes, along with their effects on the quality of the environment. Another objective of the project is to determine scaling factors for developing land use information and regional analysis for regions of any given size.
The continuum fusion theory of signal detection applied to a bi-modal fusion problem
NASA Astrophysics Data System (ADS)
Schaum, A.
2011-05-01
A new formalism has been developed that produces detection algorithms for model-based problems in which one or more parameter values are unknown. Continuum Fusion can be used to generate different flavors of algorithm for any composite hypothesis testing problem. The methodology is defined by a fusion logic that can be translated into max/min conditions. Here it is applied to a simple sensor fusion model, but one for which the generalized likelihood ratio test is intractable. By contrast, a fusion-based response to the same problem can be devised that is solvable in closed form and represents a good approximation to the GLR test.
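For contrast with the fusion-based alternative: the GLR recipe that becomes intractable in the paper's bi-modal model is straightforward in a textbook case. For i.i.d. N(μ, σ²) data with known σ and composite alternative μ ≠ 0, maximizing the likelihood over μ gives a closed-form statistic (illustrative only, not the paper's sensor model):

```python
import math

def glr_statistic(xs, sigma: float = 1.0) -> float:
    """2*ln(GLR) for H0: mu = 0 vs H1: mu free, i.i.d. N(mu, sigma^2) data.
    Maximizing over mu gives mu_hat = xbar, so the statistic reduces to
    n * xbar^2 / sigma^2, chi-squared with 1 df under H0."""
    n = len(xs)
    xbar = sum(xs) / n
    return n * xbar ** 2 / sigma ** 2

def glr_pvalue(stat: float) -> float:
    """Chi-squared(1) survival function: P(X >= stat)."""
    return math.erfc(math.sqrt(stat / 2.0))
```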
Timing of quizzes during learning: Effects on motivation and retention.
Healy, Alice F; Jones, Matt; Lalchandani, Lakshmi A; Tack, Lindsay Anderson
2017-06-01
This article investigates how the timing of quizzes given during learning impacts retention of studied material. We investigated the hypothesis that interspersing quizzes among study blocks increases student engagement, thus improving learning. Participants learned 8 artificial facts about each of 8 plant categories, with the categories blocked during learning. Quizzes about 4 of the 8 facts from each category occurred either immediately after studying the facts for that category (standard) or after studying the facts from all 8 categories (postponed). In Experiment 1, participants were given tests shortly after learning and several days later, including both the initially quizzed and unquizzed facts. Test performance was better in the standard than in the postponed condition, especially for categories learned later in the sequence. This result held even for the facts not quizzed during learning, suggesting that the advantage cannot be due to any direct testing effects. Instead the results support the hypothesis that interrupting learning with quiz questions is beneficial because it can enhance learner engagement. Experiment 2 provided further support for this hypothesis, based on participants' retrospective ratings of their task engagement during the learning phase. These findings have practical implications for when to introduce quizzes in the classroom. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Davis, O S P; Kovas, Y; Harlaar, N; Busfield, P; McMillan, A; Frances, J; Petrill, S A; Dale, P S; Plomin, R
2008-06-01
A key translational issue for neuroscience is to understand how genes affect individual differences in brain function. Although it is reasonable to suppose that genetic effects on specific learning abilities, such as reading and mathematics, as well as general cognitive ability (g), will overlap very little, the counterintuitive finding emerging from multivariate genetic studies is that the same genes affect these diverse learning abilities: a Generalist Genes hypothesis. To conclusively test this hypothesis, we exploited the widespread access to inexpensive and fast Internet connections in the UK to assess 2541 pairs of 10-year-old twins for reading, mathematics and g, using a web-based test battery. Heritabilities were 0.38 for reading, 0.49 for mathematics and 0.44 for g. Multivariate genetic analysis showed substantial genetic correlations between learning abilities: 0.57 between reading and mathematics, 0.61 between reading and g, and 0.75 between mathematics and g, providing strong support for the Generalist Genes hypothesis. If genetic effects on cognition are so general, the effects of these genes on the brain are also likely to be general. In this way, generalist genes may prove invaluable in integrating top-down and bottom-up approaches to the systems biology of the brain.
Gurven, Michael; Fenelon, Andrew
2012-01-01
G.C. Williams’ 1957 hypothesis famously argues that higher age-independent, or “extrinsic”, mortality should select for faster rates of senescence. Long-lived species should therefore show relatively few deaths from extrinsic causes such as predation and starvation. Theoretical explorations and empirical tests of Williams’ hypothesis have flourished in the past decade but it has not yet been tested empirically among humans. We test Williams’ hypothesis using mortality data from subsistence populations and from historical cohorts from Sweden and England/Wales, and examine whether rates of actuarial aging declined over the past two centuries. We employ three aging measures: mortality rate doubling time (MRDT), Ricklef’s ω, and the slope of mortality hazard from ages sixty to seventy, m’60–70, and model mortality using both Weibull and Gompertz-Makeham hazard models. We find that (1) actuarial aging in subsistence societies is similar to that of early Europe, (2) actuarial senescence has slowed in later European cohorts, (3) reductions in extrinsic mortality associate with slower actuarial aging in longitudinal samples, and (4) men senesce more rapidly than women, especially in later cohorts. To interpret these results, we attempt to bridge population-based evolutionary analysis with individual-level proximate mechanisms. PMID:19220451
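Of the three aging measures, the MRDT has a closed form under a Gompertz(-Makeham) hazard μ(x) = c + a·exp(γx): the senescent component doubles every ln(2)/γ years. A one-line sketch (the γ value below is an illustrative human-like figure, not a number from the paper):

```python
import math

def mrdt(gamma: float) -> float:
    """Mortality rate doubling time of the senescent (Gompertz) component
    a*exp(gamma*x); the Makeham constant does not affect the doubling."""
    return math.log(2.0) / gamma

human_like = mrdt(0.0866)  # gamma ~ 0.087/yr gives an MRDT near 8 years
```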
Rahman, Nafisur; Kashif, Mohammad
2010-03-01
Point and interval hypothesis tests performed to validate two simple and economical kinetic spectrophotometric methods for the assay of lansoprazole are described. The methods are based on the formation of a chelate complex of the drug with Fe(III) and Zn(II). The reaction is followed spectrophotometrically by measuring the rate of change of absorbance of the coloured chelates of the drug with Fe(III) and Zn(II) at 445 and 510 nm, respectively. The stoichiometric ratios of lansoprazole to Fe(III) and Zn(II) were found to be 1:1 and 2:1, respectively. The initial-rate and fixed-time methods are adopted for determination of drug concentrations. The calibration graphs are linear in the range 50-200 µg ml⁻¹ (initial-rate method) and 20-180 µg ml⁻¹ (fixed-time method) for the lansoprazole-Fe(III) complex, and 120-300 µg ml⁻¹ (initial-rate method) and 90-210 µg ml⁻¹ (fixed-time method) for the lansoprazole-Zn(II) complex. The inter-day and intra-day precision data showed good accuracy and precision of the proposed procedure for analysis of lansoprazole. The point and interval hypothesis tests indicate that the proposed procedures are not biased. Copyright © 2010 John Wiley & Sons, Ltd.
Microbes in Mascara: Hypothesis-Driven Research in a Nonmajor Biology Lab †
Burleson, Kathryn M.; Martinez-Vaz, Betsy M.
2011-01-01
In this laboratory exercise, students were taught concepts of microbiology and scientific process through an everyday activity — cosmetic use. The students’ goals for the lab were to develop a hypothesis regarding microbial contamination in cosmetics, learn techniques to culture and differentiate microorganisms from cosmetics, and propose best practices in cosmetics use based on their findings. Prior to the lab, students took a pretest to assess their knowledge of scientific hypotheses, microbiology, and cosmetic safety. In the first week, students were introduced to microbiological concepts and methodologies, and cosmetic terminology and safety. Students completed a hypothesis-writing exercise before formulating and testing their own hypotheses regarding cosmetic contamination. Students provided a cosmetic of their own and, in consultation with their lab group, chose one product for testing. Samples were serially diluted and plated on a variety of selective media. In the second week, students analyzed their plates to determine the presence and diversity of microbes and if their hypotheses were supported. Students completed a worksheet of their results and were given a posttest to assess their knowledge. Average test scores improved from 5.2 (pretest) to 7.8 (posttest), with p-values < 0.0001. Seventy-nine percent (79%) of students correctly identified hypotheses that were not falsifiable or lacked variables, and 89% of students improved their scores on questions concerning safe cosmetic use. Ninety-one percent (91%) of students demonstrated increased knowledge of microbial concepts and methods. Based on our results, this lab is an easy, yet effective, way to enhance knowledge of scientific concepts for nonmajors, while maintaining relevance to everyday life. PMID:23653761
Bayesian Methods for Determining the Importance of Effects
USDA-ARS?s Scientific Manuscript database
Criticisms have plagued the frequentist null-hypothesis significance testing (NHST) procedure since the day it was created from the Fisher Significance Test and Hypothesis Test of Jerzy Neyman and Egon Pearson. Alternatives to NHST exist in frequentist statistics, but competing methods are also avai...
Testing for purchasing power parity in the long-run for ASEAN-5
NASA Astrophysics Data System (ADS)
Choji, Niri Martha; Sek, Siok Kun
2017-04-01
For more than a decade, there has been substantial interest in empirically testing the validity of the purchasing power parity (PPP) hypothesis. This paper tests for long-run relative purchasing power parity across the ASEAN-5 countries for the period 1996-2016 using monthly data. For this purpose, we used the Pedroni co-integration method to test the long-run purchasing power parity hypothesis. We first tested the stationarity of the variables and found that they are non-stationary in levels but stationary in first differences. The Pedroni test rejected the null hypothesis of no co-integration, meaning that there is sufficient evidence to support PPP in the long run for the ASEAN-5 countries over the period 1996-2016. In other words, the rejection of the null hypothesis implies a long-run relation between nominal exchange rates and relative prices.
Wilcoxon's signed-rank statistic: what null hypothesis and why it matters.
Li, Heng; Johnson, Terri
2014-01-01
In statistical literature, the term 'signed-rank test' (or 'Wilcoxon signed-rank test') has been used to refer to two distinct tests: a test for symmetry of distribution and a test for the median of a symmetric distribution, sharing a common test statistic. To avoid potential ambiguity, we propose to refer to those two tests by different names, as 'test for symmetry based on signed-rank statistic' and 'test for median based on signed-rank statistic', respectively. The utility of such terminological differentiation should become evident through our discussion of how those tests connect and contrast with sign test and one-sample t-test. Published 2014. This article is a U.S. Government work and is in the public domain in the USA. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
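The distinction can be made concrete by running the three tests discussed above side by side on a set of hypothetical paired differences (data invented for illustration):

```python
# Signed-rank, one-sample t, and sign test on the same paired differences.
import numpy as np
from scipy.stats import wilcoxon, ttest_1samp, binomtest

d = np.array([1.8, 0.7, 2.1, -0.4, 1.2, 0.9, -0.1, 1.5, 2.3, 0.6])

p_signed_rank = wilcoxon(d).pvalue                  # signed-rank statistic
p_t = ttest_1samp(d, popmean=0.0).pvalue            # one-sample t-test
p_sign = binomtest(int((d > 0).sum()), n=len(d)).pvalue  # sign test, H0: p=0.5

print(p_signed_rank, p_t, p_sign)
```

The signed-rank test uses both the signs and the magnitudes of the differences, so here it rejects where the sign test (signs only) does not.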
UNIFORMLY MOST POWERFUL BAYESIAN TESTS
Johnson, Valen E.
2014-01-01
Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerful Bayesian tests to be tests that maximize the probability that the Bayes factor, in favor of the alternative hypothesis, exceeds a specified threshold. Like their classical counterpart, uniformly most powerful Bayesian tests are most easily defined in one-parameter exponential family models, although extensions outside of this class are possible. The connection between uniformly most powerful tests and uniformly most powerful Bayesian tests can be used to provide an approximate calibration between p-values and Bayes factors. Finally, issues regarding the strong dependence of resulting Bayes factors and p-values on sample size are discussed. PMID:24659829
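For a normal mean with known σ (a one-parameter exponential family case like those mentioned above), the calibration between Bayes factor thresholds and p-values takes a closed form: the UMPBT(γ) rejection region is z > √(2 ln γ). A minimal sketch, assuming this one-sided z-test setting:

```python
# Hedged sketch: map a Bayes factor evidence threshold gamma to the
# matching z cutoff and one-sided p-value in the known-variance normal case.
import math
from statistics import NormalDist

def umpbt_z_calibration(gamma):
    z_crit = math.sqrt(2.0 * math.log(gamma))       # BF > gamma  <=>  z > z_crit
    p_one_sided = 1.0 - NormalDist().cdf(z_crit)    # equivalent p-value threshold
    return z_crit, p_one_sided

z_crit, alpha = umpbt_z_calibration(3.87)
print(f"gamma=3.87 -> z > {z_crit:.3f}, one-sided p < {alpha:.4f}")
```

A threshold of γ ≈ 3.87 lands near the conventional one-sided 0.05 cutoff, which illustrates the approximate calibration between p-values and Bayes factors the abstract refers to.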
ERIC Educational Resources Information Center
Abdel Latif, Muhammad M.
2009-01-01
This article reports on a study aimed at testing the hypothesis that, because of strategic and temporal variables, composing rate and text quantity may not be valid measures of writing fluency. A second objective was to validate the mean length of writers' translating episodes as a process-based indicator that mirrors their fluent written…
Colangelo, Annette; Buchanan, Lori
2006-12-01
The failure of inhibition hypothesis posits a theoretical distinction between implicit and explicit access in deep dyslexia. Specifically, the effects of failure of inhibition are assumed only in conditions that have an explicit selection requirement in the context of production (i.e., aloud reading). In contrast, the failure of inhibition hypothesis proposes that implicit processing and explicit access to semantic information without production demands are intact in deep dyslexia. Evidence for intact implicit and explicit access requires that performance in deep dyslexia parallels that observed in neurologically intact participants on tasks based on implicit and explicit processes. In other words, deep dyslexics should produce normal effects in conditions with implicit task demands (i.e., lexical decision) and on tasks based on explicit access without production (i.e., forced choice semantic decisions) because failure of inhibition does not impact the availability of lexical information, only explicit retrieval in the context of production. This research examined the distinction between implicit and explicit processes in deep dyslexia using semantic blocking in lexical decision and forced choice semantic decisions as a test for the failure of inhibition hypothesis. The results of the semantic blocking paradigm support the distinction between implicit and explicit processing and provide evidence for failure of inhibition as an explanation for semantic errors in deep dyslexia.
[Experimental testing of Pflüger's reflex hypothesis of menstruation in late 19th century].
Simmer, H H
1980-07-01
Pflüger's hypothesis of a nerve reflex as the cause of menstruation, published in 1865 and accepted by many, nonetheless did not lead to experimental investigations for 25 years. According to this hypothesis, the nerve reflex starts in the ovary with an increase of intraovarian pressure caused by the growing follicles. In 1884 Adolph Kehrer proposed a program to test the nerve reflex, but only in 1890 did Cohnstein artificially increase the intraovarian pressure in women by bimanual compression from the outside and the vagina. His results were not convincing. Six years later, Strassmann injected fluids into the ovaries of animals and obtained changes in the uterus resembling those of oestrus. His results seemed to verify a prediction derived from Pflüger's hypothesis. Thus, after a long interval, that hypothesis had become a paradigm. Though reasons can be given for the delay, it remains poorly understood why experimental testing started so late.
DOT National Transportation Integrated Search
1995-10-01
This investigation was completed as part of the ITS-IDEA program, which is one of three IDEA programs managed by the Transportation Research Board (TRB) to foster innovations in surface transportation. It focuses on products and results for the develop...
2017-04-06
Research Hypothesis ... Research Design ... user community and of accommodating advancing software applications by the vendors. Research Design: My approach to this project was to conduct ... design descriptions, requirements specifications, test documentation, interface requirement specifications, product specifications, and software
Spontaneous Facial Mimicry in Response to Dynamic Facial Expressions
ERIC Educational Resources Information Center
Sato, Wataru; Yoshikawa, Sakiko
2007-01-01
Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing…
Does Active Learning Improve Students' Knowledge of and Attitudes toward Research Methods?
ERIC Educational Resources Information Center
Campisi, Jay; Finn, Kevin E.
2011-01-01
We incorporated an active, collaborative-based research project in our undergraduate Research Methods course for first-year sports medicine majors. Working in small groups, students identified a research question, generated a hypothesis to be tested, designed an experiment, implemented the experiment, analyzed the data, and presented their…
A Scheme for Categorizing Traumatic Military Events
ERIC Educational Resources Information Center
Stein, Nathan R.; Mills, Mary Alice; Arditte, Kimberly; Mendoza, Crystal; Borah, Adam M.; Resick, Patricia A.; Litz, Brett T.
2012-01-01
A common assumption among clinicians and researchers is that war trauma primarily involves fear-based reactions to life-threatening situations. However, the authors believe that there are multiple types of trauma in the military context, each with unique perievent and postevent response patterns. To test this hypothesis, they reviewed structured…
Comparisons of Means Using Exploratory and Confirmatory Approaches
ERIC Educational Resources Information Center
Kuiper, Rebecca M.; Hoijtink, Herbert
2010-01-01
This article discusses comparisons of means using exploratory and confirmatory approaches. Three methods are discussed: hypothesis testing, model selection based on information criteria, and Bayesian model selection. Throughout the article, an example is used to illustrate and evaluate the two approaches and the three methods. We demonstrate that…
USDA-ARS?s Scientific Manuscript database
Background - Optimal nutritional choices are linked with better health but most current interventions to improve diet have limited effect. We tested the hypothesis that providing personalized nutrition (PN) advice based on collected information on individual diet and lifestyle, phenotype or genotype...
The Effect of Family Communication Patterns on Adopted Adolescent Adjustment
ERIC Educational Resources Information Center
Rueter, Martha A.; Koerner, Ascan F.
2008-01-01
Adoption and family communication both affect adolescent adjustment. We proposed that adoption status and family communication interact such that adopted adolescents in families with certain communication patterns are at greater risk for adjustment problems. We tested this hypothesis using a community-based sample of 384 adoptive and 208…
This project was based upon outcomes from earlier work conducted under APM 465 to test the hypothesis that the chlorotriazine herbicide, atrazine (ATR), causes an increase in serum estrogens through an induction of aromatase (CYP19) gene expression. The current research has invol...
Restorative Justice in Schools: The Influence of Race on Restorative Discipline
ERIC Educational Resources Information Center
Payne, Allison Ann; Welch, Kelly
2015-01-01
Schools today are more frequently using punitive discipline practices to control student behavior, despite the greater effectiveness of community-building techniques on compliance that are based on restorative justice principles found in the criminal justice system. Prior research testing the racial threat hypothesis has found that the racial…
When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment.
Szucs, Denes; Ioannidis, John P A
2017-01-01
Null hypothesis significance testing (NHST) has several shortcomings that are likely contributing factors behind the widely debated replication crisis of (cognitive) neuroscience, psychology, and biomedical science in general. We review these shortcomings and suggest that, after sustained negative experience, NHST should no longer be the default, dominant statistical practice of all biomedical and psychological research. If theoretical predictions are weak we should not rely on all or nothing hypothesis tests. Different inferential methods may be most suitable for different types of research questions. Whenever researchers use NHST they should justify its use, and publish pre-study power calculations and effect sizes, including negative findings. Hypothesis-testing studies should be pre-registered and optimally raw data published. The current statistics lite educational approach for students that has sustained the widespread, spurious use of NHST should be phased out.
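A pre-study power calculation of the kind the authors recommend can be done in one line; the effect size d = 0.5 below is an assumed value chosen only for illustration:

```python
# Sample size needed for a two-sample t-test to detect a medium effect
# (Cohen's d = 0.5) with 80% power at alpha = 0.05.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"two-sample t-test, d=0.5: n ≈ {n_per_group:.1f} per group")
```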
NASA Technical Reports Server (NTRS)
Hussain, A. K. M. F.
1980-01-01
Comparisons of the distributions of large-scale structures in turbulent flow with distributions based on time-dependent signals from stationary probes and the Taylor hypothesis are presented. The study investigated the near field of a 7.62 cm circular air jet at a Re of 32,000, with coherent structures induced through small-amplitude controlled excitation and stable vortex pairing in the jet column mode. Hot-wire and X-wire anemometry were employed to establish phase-averaged spatial distributions of longitudinal and lateral velocities, coherent Reynolds stress and vorticity, background turbulent intensities, streamlines and pseudo-stream functions. The Taylor hypothesis was used to calculate spatial distributions of the phase-averaged properties; the results indicate that using the local time-average velocity or streamwise velocity produces large distortions.
The individual tolerance concept is not the sole explanation for the probit dose-effect model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, M.C.; McCloskey, J.T.
2000-02-01
Predominant methods for analyzing dose- or concentration-effect data (i.e., probit analysis) are based on the concept of individual tolerance or individual effective dose (IED, the smallest characteristic dose needed to kill an individual). An alternative explanation (stochasticity hypothesis) is that individuals do not have unique tolerances: death results from stochastic processes occurring similarly in all individuals. These opposing hypotheses were tested with two types of experiments. First, time to stupefaction (TTS) was measured for zebra fish (Brachydanio rerio) exposed to benzocaine. The same 40 fish were exposed during five trials to test if the same order for TTS was maintained among trials. The IED hypothesis was supported with a minor stochastic component being present. Second, eastern mosquitofish (Gambusia holbrooki) were exposed to sublethal or lethal NaCl concentrations until a large portion of the lethally exposed fish died. After sufficient time for recovery, fish sublethally exposed and fish surviving lethal exposure were exposed simultaneously to lethal NaCl concentrations. No statistically significant effect was found of previous exposure on survival time but a large stochastic component to the survival dynamics was obvious. Repetition of this second type of test with pentachlorophenol also provided no support for the IED hypothesis. The authors conclude that neither hypothesis alone was the sole or dominant explanation for the lognormal (probit) model. Determination of the correct explanation (IED or stochastic) or the relative contributions of each is crucial to predicting consequences to populations after repeated or chronic exposures to any particular toxicant.
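The lognormal (probit) dose-effect model at issue can be sketched as a fit of the normal CDF to mortality versus log dose; the doses and mortality fractions below are invented for illustration:

```python
# Probit dose-effect sketch: mortality modeled as Phi((log10(dose) - mu)/sigma).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

log_dose = np.log10([1, 2, 4, 8, 16, 32])
mortality = np.array([0.03, 0.10, 0.30, 0.62, 0.88, 0.98])  # fraction dead

def probit_model(x, mu, sigma):
    # Under the IED reading, mu and sigma describe the lognormal
    # distribution of individual effective doses across the population.
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(probit_model, log_dose, mortality, p0=[0.7, 0.4])
lc50 = 10 ** mu  # dose at which half the population dies
print(f"LC50 ≈ {lc50:.2f}")
```

The fitted curve is identical under both readings of the model; as the abstract stresses, the data alone do not distinguish individual tolerances from a shared stochastic death process.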
Testing fundamental ecological concepts with a Pythium-Prunus pathosystem
USDA-ARS?s Scientific Manuscript database
The study of plant-pathogen interactions has enabled tests of basic ecological concepts on plant community assembly (Janzen-Connell Hypothesis) and plant invasion (Enemy Release Hypothesis). We used a field experiment to (#1) test whether Pythium effects depended on host (seedling) density and/or d...
A checklist to facilitate objective hypothesis testing in social psychology research.
Washburn, Anthony N; Morgan, G Scott; Skitka, Linda J
2015-01-01
Social psychology is not a very politically diverse area of inquiry, something that could negatively affect the objectivity of social psychological theory and research, as Duarte et al. argue in the target article. This commentary offers a number of checks to help researchers uncover possible biases and identify when they are engaging in hypothesis confirmation and advocacy instead of hypothesis testing.
Nan Liu; Hai Ren; Sufen Yuan; Qinfeng Guo; Long Yang
2013-01-01
The relative importance of facilitation and competition between pairwise plants across abiotic stress gradients as predicted by the stress-gradient hypothesis has been confirmed in arid and temperate ecosystems, but the hypothesis has rarely been tested in tropical systems, particularly across nutrient gradients. The current research examines the interactions between a...
Phase II Clinical Trials: D-methionine to Reduce Noise-Induced Hearing Loss
2012-03-01
loss (NIHL) and tinnitus in our troops. Hypotheses: Primary Hypothesis: Administration of oral D-methionine prior to and during weapons ... reduce or prevent noise-induced tinnitus. Primary outcome to test the primary hypothesis: Pure tone air-conduction thresholds. Primary outcome to ... test the secondary hypothesis: Tinnitus questionnaires. Specific Aims: 1. To determine whether administering oral D-methionine (D-met) can
NASA Astrophysics Data System (ADS)
Saputra, K. V. I.; Cahyadi, L.; Sembiring, U. A.
2018-01-01
In this paper, we assess our traditional elementary statistics education and introduce elementary statistics with simulation-based inference. To assess our statistics class, we adapt the well-known CAOS (Comprehensive Assessment of Outcomes in Statistics) test, which serves as an external measure of students’ basic statistical literacy and is generally accepted as such a measure. We also introduce a new teaching method in the elementary statistics class. Unlike the traditional elementary statistics course, we introduce a simulation-based inference method for conducting hypothesis testing. The literature has shown that this new teaching method works very well in increasing students’ understanding of statistics.
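A minimal randomization test of the kind used in simulation-based inference courses (two-sample mean difference; the data are invented for illustration):

```python
# Permutation test: shuffle group labels many times and count how often the
# shuffled mean difference is at least as extreme as the observed one.
import numpy as np

rng = np.random.default_rng(1)
a = np.array([12.1, 13.4, 11.8, 14.0, 12.7, 13.9])
b = np.array([10.2, 11.1, 10.8, 11.9, 10.5, 11.4])
observed = a.mean() - b.mean()

pooled = np.concatenate([a, b])
count = 0
n_sim = 10_000
for _ in range(n_sim):
    rng.shuffle(pooled)
    diff = pooled[:len(a)].mean() - pooled[len(a):].mean()
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / n_sim
print(f"observed diff={observed:.2f}, permutation p={p_value:.4f}")
```

The p-value is simply the proportion of relabelings as extreme as the data, which makes the logic of the test visible without any distributional formula.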
The Function and Organization of Lateral Prefrontal Cortex: A Test of Competing Hypotheses
Reynolds, Jeremy R.; O'Reilly, Randall C.; Cohen, Jonathan D.; Braver, Todd S.
2012-01-01
The present experiment tested three hypotheses regarding the function and organization of lateral prefrontal cortex (PFC). The first account (the information cascade hypothesis) suggests that the anterior-posterior organization of lateral PFC is based on the timing with which cue stimuli reduce uncertainty in the action selection process. The second account (the levels-of-abstraction hypothesis) suggests that the anterior-posterior organization of lateral PFC is based on the degree of abstraction of the task goals. The current study began by investigating these two hypotheses, and identified several areas of lateral PFC that were predicted to be active by both the information cascade and levels-of-abstraction accounts. However, the pattern of activation across experimental conditions was inconsistent with both theoretical accounts. Specifically, an anterior area of mid-dorsolateral PFC exhibited sensitivity to experimental conditions that, according to both accounts, should have selectively engaged only posterior areas of PFC. We therefore investigated a third possible account (the adaptive context maintenance hypothesis) that postulates that both posterior and anterior regions of PFC are reliably engaged in task conditions requiring active maintenance of contextual information, with the temporal dynamics of activity in these regions flexibly tracking the duration of maintenance demands. Activity patterns in lateral PFC were consistent with this third hypothesis: regions across lateral PFC exhibited transient activation when contextual information had to be updated and maintained in a trial-by-trial manner, but sustained activation when contextual information had to be maintained over a series of trials. These findings prompt a reconceptualization of current views regarding the anterior-posterior organization of lateral PFC, but do support other findings regarding the active maintenance role of lateral PFC in sequential working memory paradigms. PMID:22355309
Last night I had the strangest dream: Varieties of rational thought processes in dream reports.
Wolman, Richard N; Kozmová, Miloslava
2007-12-01
From the neurophysiological perspective, thinking in dreaming and the quality of dream thought have been considered hallucinatory, bizarre, illogical, improbable, or even impossible. This empirical phenomenological research concentrates on testing whether dream thought can be defined as rational in the sense of an intervening mental process between sensory perception and the creation of meaning, leading to a conclusion or to taking action. From 10 individual dream journals of male participants aged 22-59 years and female participants aged 25-49 years, we delimited four dreams per journal and randomly selected five thought units from each dream for scoring. The units provided a base for testing a hypothesis that the thought processes of dream construction are rational. The results support the hypothesis and demonstrate that eight fundamental rational thought processes can be applied to the dreaming process.
Explorations in Statistics: Hypothesis Tests and P Values
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2009-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of "Explorations in Statistics" delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what…
Planned Hypothesis Tests Are Not Necessarily Exempt from Multiplicity Adjustment
ERIC Educational Resources Information Center
Frane, Andrew V.
2015-01-01
Scientific research often involves testing more than one hypothesis at a time, which can inflate the probability that a Type I error (false discovery) will occur. To prevent this Type I error inflation, adjustments can be made to the testing procedure that compensate for the number of tests. Yet many researchers believe that such adjustments are…
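One common adjustment of the kind described above is Holm's step-down procedure; a sketch with hypothetical p-values from four planned tests:

```python
# Holm correction: controls the familywise Type I error rate across tests
# while being uniformly more powerful than plain Bonferroni.
from statsmodels.stats.multitest import multipletests

p_raw = [0.012, 0.034, 0.041, 0.22]
reject, p_adj, _, _ = multipletests(p_raw, alpha=0.05, method="holm")
print(list(zip(p_raw, p_adj.round(3), reject)))
```

Note that 0.034 and 0.041 would each be "significant" in isolation but no longer survive the familywise adjustment, which is exactly the inflation the abstract warns about.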
ERIC Educational Resources Information Center
Malda, Maike; van de Vijver, Fons J. R.; Temane, Q. Michael
2010-01-01
In this study, cross-cultural differences in cognitive test scores are hypothesized to depend on a test's cultural complexity (Cultural Complexity Hypothesis: CCH), here conceptualized as its content familiarity, rather than on its cognitive complexity (Spearman's Hypothesis: SH). The content familiarity of tests assessing short-term memory,…
Genomics, "Discovery Science," Systems Biology, and Causal Explanation: What Really Works?
Davidson, Eric H
2015-01-01
Diverse and non-coherent sets of epistemological principles currently inform research in the general area of functional genomics. Here, from the personal point of view of a scientist with over half a century of immersion in hypothesis driven scientific discovery, I compare and deconstruct the ideological bases of prominent recent alternatives, such as "discovery science," some productions of the ENCODE project, and aspects of large data set systems biology. The outputs of these types of scientific enterprise qualitatively reflect their radical definitions of scientific knowledge, and of its logical requirements. Their properties emerge in high relief when contrasted (as an example) to a recent, system-wide, predictive analysis of a developmental regulatory apparatus that was instead based directly on hypothesis-driven experimental tests of mechanism.
NASA Astrophysics Data System (ADS)
Přibil, Jiří; Přibilová, Anna; Frollo, Ivan
2017-12-01
The paper focuses on two methods for evaluating the success of speech signal enhancement recorded in the open-air magnetic resonance imager during phonation for 3D human vocal tract modeling. The first approach provides a comparison based on statistical analysis by ANOVA and hypothesis tests. The second method is based on classification by Gaussian mixture models (GMM). The performed experiments have confirmed that the proposed ANOVA and GMM classifiers for automatic evaluation of speech quality are functional and produce results fully comparable with the standard evaluation based on the listening test method.
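The ANOVA step can be sketched on hypothetical quality scores for two enhancement conditions (all values invented for illustration):

```python
# One-way ANOVA comparing a speech-quality measure across two conditions.
from scipy.stats import f_oneway

method_1 = [4.1, 3.9, 4.3, 4.0, 4.2]  # e.g., enhanced recordings
method_2 = [3.2, 3.0, 3.5, 3.1, 3.3]  # e.g., unenhanced recordings
stat, p = f_oneway(method_1, method_2)
print(f"F={stat:.2f}, p={p:.4f}")
```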
Place recognition using batlike sonar
Vanderelst, Dieter; Steckel, Jan; Boen, Andre; Peremans, Herbert; Holderied, Marc W
2016-01-01
Echolocating bats have excellent spatial memory and are able to navigate to salient locations using bio-sonar. Navigating and route-following require animals to recognize places. Currently, it is mostly unknown how bats recognize places using echolocation. In this paper, we propose that template-based place recognition might underlie sonar-based navigation in bats. Under this hypothesis, bats recognize places by remembering their echo signature, rather than their 3D layout. Using a large body of ensonification data collected in three different habitats, we test the viability of this hypothesis by assessing two critical properties of the proposed echo signatures: (1) they can be uniquely classified and (2) they vary continuously across space. Based on the results presented, we conclude that the proposed echo signatures satisfy both criteria. We discuss how these two properties of the echo signatures can support navigation and building a cognitive map. DOI: http://dx.doi.org/10.7554/eLife.14188.001 PMID:27481189
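The echo-signature idea can be sketched as nearest-template classification; the signature vectors below are invented stand-ins for the ensonification features:

```python
# Toy template-based place recognition: each remembered place is a stored
# echo-signature vector, and a new echo is assigned to the nearest template.
import numpy as np

templates = {
    "place_A": np.array([0.9, 0.1, 0.4, 0.0]),
    "place_B": np.array([0.1, 0.8, 0.2, 0.5]),
    "place_C": np.array([0.3, 0.3, 0.9, 0.7]),
}

def recognize(echo):
    # nearest template by Euclidean distance
    return min(templates, key=lambda k: np.linalg.norm(templates[k] - echo))

query = np.array([0.85, 0.15, 0.35, 0.05])  # echo recorded near place A
print(recognize(query))
```

If signatures vary continuously across space (the paper's second criterion), distance to the nearest template also gives a graded cue for navigating toward a remembered place.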
Testing the hypothesis of hierarchical predictability in ecological restoration and succession.
Abella, Scott R; Schetter, Timothy A; Walters, Timothy L
2018-02-01
To advance predictive ecology, the hypothesis of hierarchical predictability proposes that community measures for which species are interchangeable (e.g., structure and species richness) are more predictable than measures for which species identity matters (e.g., community composition). Predictability is hypothesized to decrease for response measures in order of the following categories: structure, species richness, function, and species composition. We tested this hypothesis using a 14-year, oak savanna-prairie restoration experiment that removed non-native pine plantations at 24 sites in northwestern Ohio, USA. Based on 24 response measures, the data showed minimal support for the hypothesis, because response measures varied in predictability within categories. Half of the response measures had over half their variability modeled using fixed (restoration treatment and year) and random plot effects, and these "predictable" measures occurred in all four categories. Pine basal area, environment (e.g., soil texture), and antecedent vegetation accounted for over half the variation in change within the first three post-restoration years for 77% of response measures. Change between the 3rd and 14th years was less predictable, but most restoration measures increased favorably via sites achieving them in unique ways. We propose that variation will not conform to the hypothesis of hierarchical predictability in ecosystems with vegetation dynamics driven by stochastic processes such as seed dispersal, or where vegetation structure and species richness are influenced by species composition. The ability to predict a community measure may be driven more by the number of combinations of causal factors affecting a measure than by the number of values it can have.
Does Testing Increase Spontaneous Mediation in Learning Semantically Related Paired Associates?
ERIC Educational Resources Information Center
Cho, Kit W.; Neely, James H.; Brennan, Michael K.; Vitrano, Deana; Crocco, Stephanie
2017-01-01
Carpenter (2011) argued that the testing effect she observed for semantically related but associatively unrelated paired associates supports the mediator effectiveness hypothesis. This hypothesis asserts that after the cue-target pair "mother-child" is learned, relative to restudying mother-child, a review test in which…
Brion, Mélanie; Pitel, Anne-Lise; Beaunieux, Hélène; Maurage, Pierre
2014-01-01
Korsakoff syndrome (KS) is a neurological condition mostly caused by alcohol-dependence and leading to disproportionate episodic memory deficits. KS patients present more severe anterograde amnesia than Alcohol-Dependent Subjects (ADS), which led to the continuum hypothesis postulating a progressive increase in brain and cognitive damage during the evolution from ADS to KS. This hypothesis has been extensively examined for memory but is still debated for other abilities, notably executive functions (EF). To date, EF have been explored in KS only with non-specific tasks, and few studies have explored their interactions with memory. Exploring EF in KS with specific tasks based on current EF models could thus renew the exploration of the continuum hypothesis. This paper proposes a research program aiming to: (1) clarify the extent of executive dysfunction in KS using tasks focusing on specific EF subcomponents; (2) determine the differential EF deficits in ADS and KS; (3) explore EF-memory interactions in KS with innovative tasks. At the fundamental level, this exploration will test the continuum hypothesis beyond memory. At the clinical level, it will propose new rehabilitation tools focusing on the EF specifically impaired in KS.
Internal attention to features in visual short-term memory guides object learning
Fan, Judith E.; Turk-Browne, Nicholas B.
2013-01-01
Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals. PMID:23954925
Toward a Theory of Stuttering.
Mawson, Anthony R; Radford, Nola T; Jacob, Binu
2016-01-01
Stuttering affects about 1% of the general population and from 8 to 11% of children. The onset of persistent developmental stuttering (PDS) typically occurs between 2 and 4 years of age. The etiology of stuttering is unknown and a unifying hypothesis is lacking. Clues to the pathogenesis of stuttering include the following observations: PDS is associated with adverse perinatal outcomes and birth-associated trauma; stuttering can recur or develop in adulthood following traumatic events such as brain injury and stroke; PDS is associated with structural and functional abnormalities in the brain associated with speech and language; and stuttering resolves spontaneously in a high percentage of affected children. Evidence marshaled from the literature on stuttering and from related sources suggests the hypothesis that stuttering is a neuro-motor disorder resulting from perinatal or later-onset hypoxic-ischemic injury (HII), and that chronic stuttering and its behavioral correlates are manifestations of recurrent transient ischemic episodes affecting speech-motor pathways. The hypothesis could be tested by comparing children who stutter and nonstutterers (controls) in terms of the occurrence of perinatal trauma, based on birth records, and by determining rates of stuttering in children exposed to HII during the perinatal period. Subject to testing, the hypothesis suggests that interventions to increase brain perfusion directly could be effective both in the treatment of stuttering and its prevention at the time of birth or later trauma. © 2016 S. Karger AG, Basel.
Fowler, Charles W; Hobbs, Larry
2003-01-01
The principles and tenets of management require action to avoid sustained abnormal/pathological conditions. For the sustainability of interactive systems, each system should fall within its normal range of natural variation. This applies to individuals (as for fevers and hypertension, in medicine), populations (e.g. outbreaks of crop pests in agriculture), species (e.g. the rarity of endangerment in conservation) and ecosystems (e.g. abnormally low productivity or diversity in 'ecosystem-based management'). In this paper, we report tests of the hypothesis that the human species is ecologically normal. We reject the hypothesis for almost all of the cases we tested. Our species rarely falls within statistical confidence limits that envelop the central tendencies in variation among other species. For example, our population size, CO(2) production, energy use, biomass consumption and geographical range size differ from those of other species by orders of magnitude. We argue that other measures should be tested in a similar fashion to assess the prevalence of such differences and their practical implications. PMID:14728780
Role of glucose in chewing gum-related facilitation of cognitive function.
Stephens, Richard; Tunney, Richard J
2004-10-01
This study tests the hypothesis that chewing gum leads to cognitive benefits through improved delivery of glucose to the brain, by comparing the cognitive performance effects of gum and glucose administered separately and together. Participants completed a battery of cognitive tests in a fully related 2 x 2 design, where one factor was Chewing Gum (gum vs. mint sweet) and the other factor was Glucose Co-administration (consuming a 25 g glucose drink vs. consuming water). For four tests (AVLT Immediate Recall, Digit Span, Spatial Span and Grammatical Transformation), beneficial effects of chewing and glucose were found, supporting the study hypothesis. However, on AVLT Delayed Recall, enhancement due to chewing gum was not paralleled by glucose enhancement, suggesting an alternative mechanism. The glucose delivery model is supported with respect to the cognitive domains: working memory, immediate episodic long-term memory and language-based attention and processing speed. However, some other mechanism is more likely to underlie the facilitatory effect of chewing gum on delayed episodic long-term memory.
Schiffer, Anne-Marike; Nevado-Holgado, Alejo J; Johnen, Andreas; Schönberger, Anna R; Fink, Gereon R; Schubotz, Ricarda I
2015-11-01
Action observation is known to trigger predictions of the ongoing course of action and thus considered a hallmark example for predictive perception. A related task, which explicitly taps into the ability to predict actions based on their internal representations, is action segmentation; the task requires participants to demarcate where one action step is completed and another one begins. It thus benefits from a temporally precise prediction of the current action. Formation and exploitation of these temporal predictions of external events is now closely associated with a network including the basal ganglia and prefrontal cortex. Because decline of dopaminergic innervation leads to impaired function of the basal ganglia and prefrontal cortex in Parkinson's disease (PD), we hypothesised that PD patients would show increased temporal variability in the action segmentation task, especially under medication withdrawal (hypothesis 1). Another crucial aspect of action segmentation is its reliance on a semantic representation of actions. There is no evidence to suggest that action representations are substantially altered, or cannot be accessed, in non-demented PD patients. We therefore expected action segmentation judgments to follow the same overall patterns in PD patients and healthy controls (hypothesis 2), resulting in comparable segmentation profiles. Both hypotheses were tested with a novel classification approach. We present evidence for both hypotheses in the present study: classifier performance was slightly decreased when it was tested for its ability to predict the identity of movies segmented by PD patients, and a measure of normativity of response behaviour was decreased when patients segmented movies under medication-withdrawal without access to an episodic memory of the sequence. This pattern of results is consistent with hypothesis 1. 
However, the classifier analysis also revealed that responses given by patients and controls create very similar action-specific patterns, thus delivering evidence in favour of hypothesis 2. In terms of methodology, the use of classifiers in the present study allowed us to establish similarity of behaviour across groups (hypothesis 2). The approach opens up a new avenue that standard statistical methods often fail to provide and is discussed in terms of its merits for measuring hypothesised similarities across study populations. Copyright © 2015 Elsevier Ltd. All rights reserved.
[Effectiveness of the Military Mental Health Promotion Program].
Woo, Chung Hee; Kim, Sun Ah
2014-12-01
This study was done to evaluate the Military Mental Health Promotion Program, an email-based cognitive behavioral intervention. The research design was a quasi-experimental study with a non-equivalent control group pretest-posttest design. Participants were 32 soldiers who agreed to participate in the program. Data were collected at three different times from January 2012 to March 2012: pre-test, post-test, and a one-month follow-up test. The data were statistically analyzed using SPSS 18.0, and the effectiveness of the program was tested by repeated-measures ANOVA. The first hypothesis, that the level of depression in the experimental group would decrease compared to the control group, was not supported: the group-by-time interaction was not statistically significant (F=2.19, p=.121). The second and third hypotheses, concerning anxiety and self-esteem, were supported by significant group-by-time interactions (F=7.41, p=.001 and F=11.67, p<.001, respectively). Results indicate that the program is effective in improving soldiers' mental health status in the areas of anxiety and self-esteem.
Chaurasia, Ashok; Harel, Ofer
2015-02-10
Tests for regression coefficients such as global, local, and partial F-tests are common in applied research. In the framework of multiple imputation, there are several papers addressing tests for regression coefficients. However, for simultaneous hypothesis testing, the existing methods are computationally intensive because they involve calculation with vectors and (inversion of) matrices. In this paper, we propose a simple method based on the scalar entity, coefficient of determination, to perform (global, local, and partial) F-tests with multiply imputed data. The proposed method is evaluated using simulated data and applied to suicide prevention data. Copyright © 2014 John Wiley & Sons, Ltd.
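The scalar approach described above builds on the standard identity linking the coefficient of determination to the global F statistic for a regression. A minimal sketch of that identity on synthetic data (the paper's multiple-imputation pooling rules are not reproduced here; data and coefficients are illustrative assumptions):

```python
import numpy as np

# Sketch: a global F-test of H0 "all slope coefficients are zero",
# computed purely from the scalar R^2 rather than from coefficient
# vectors and covariance matrices.

rng = np.random.default_rng(0)
n, k = 200, 3                                  # observations, predictors
X = rng.normal(size=(n, k))
y = 2.0 + X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=n)

# Ordinary least squares with an intercept.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Global F statistic expressed in terms of R^2 alone.
F = (r2 / k) / ((1.0 - r2) / (n - k - 1))
print(round(F, 1))
```

The same statistic is conventionally written with explained and residual sums of squares; the R^2 form is algebraically identical, which is what makes a scalar-only route possible.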
Why is the age-standardized incidence of low-trauma fractures rising in many elderly populations?
Kannus, Pekka; Niemi, Seppo; Parkkari, Jari; Palvanen, Mika; Heinonen, Ari; Sievänen, Harri; Järvinen, Teppo; Khan, Karim; Järvinen, Markku
2002-08-01
Low-trauma fractures of elderly people are a major public health burden worldwide, and as the number and mean age of older adults in the population continue to increase, the number of fractures is also likely to increase. Epidemiologically, however, an additional concern is that, for unknown reasons, the age-standardized incidence (average individual risk) of fracture has also risen in many populations during recent decades. Possible reasons for this rise include a birth cohort effect, deterioration in average bone strength over time, and an increased average risk of (serious) falls. The literature provides evidence that the rise is not due to a birth cohort effect, whereas no study shows whether bone fragility has increased during this relatively short period of time. This osteoporosis hypothesis could, however, be tested if researchers were now to repeat the population measurements of bone mass and density that were made in the late 1980s and the 1990s. If such studies proved that women's and men's age-standardized mean values of bone mass and density have declined over time, the osteoporosis hypothesis would receive scientific support. The third explanation is based on the hypothesis that the number and/or severity of falls has risen in elderly populations during recent decades. Although no study has directly tested this hypothesis, a great deal of indirect epidemiologic evidence supports this contention. For example, the age-standardized incidence of fall-induced severe head injuries, bruises and contusions, and joint distortions and dislocations has increased among elderly people similarly to the low-trauma fractures. The fall hypothesis could also be tested in the coming years because the 1990s saw many research teams reporting age- and sex-specific incidences of falling for elderly populations, and the same could be done now to provide data comparing the current incidence rates of falls with the earlier ones.
On the interpretation of the geomagnetic energy spectrum
Benton, E.R.; Alldredge, L.R.
1987-01-01
Two recent high-degree magnetic energy spectra, based mostly on MAGSAT data, are compared and found to agree very well out to degree and order n = 15, but the spectrum remains somewhat uncertain for higher degrees. The hypothesis that a primary break in the slope of the spectrum, plotted semi-logarithmically, is due to a transition from dominance by core sources to dominance by crustal magnetization is tested. Simple arrays of dipoles and current loops are found whose combined fields fit the spectrum. Two distinctly different ranges of source depth are found to be adequate. Because one range is shallow and the other deep, the hypothesis is supported. © 1987.
Markolf, Matthias; Kappeler, Peter M
2013-11-14
Due to its remarkable species diversity and micro-endemism, Madagascar has recently been suggested to serve as a biogeographic model region. However, hypothesis-based tests of various diversification mechanisms that have been proposed for the evolution of the island's micro-endemic lineages are still limited. Here, we test the fit of several diversification hypotheses with new data on the broadly distributed genus Eulemur using coalescent-based phylogeographic analyses. Time-calibrated species tree analyses and population genetic clustering resolved the previously polytomic species relationships among eulemurs. The most recent common ancestor of eulemurs was estimated to have lived about 4.45 million years ago (mya). Divergence date estimates furthermore suggested a very recent diversification among the members of the "brown lemur complex", i.e. former subspecies of E. fulvus, during the Pleistocene (0.33-1.43 mya). Phylogeographic model comparisons of past migration rates showed significant levels of gene flow between lineages of neighboring river catchments as well as between eastern and western populations of the redfronted lemur (E. rufifrons). Together, our results are concordant with the centers of endemism hypothesis (Wilmé et al. 2006, Science 312:1063-1065), highlight the importance of river catchments for the evolution of Madagascar's micro-endemic biota, and they underline the usefulness of testing diversification mechanisms using coalescent-based phylogeographic methods.
The apparency hypothesis applied to a local pharmacopoeia in the Brazilian northeast
2014-01-01
Background Data from an ethnobotanical study were analyzed to see if they agreed with the biochemical basis of the apparency hypothesis, based on an analysis of a pharmacopeia in a rural community adjacent to the Araripe National Forest (Floresta Nacional do Araripe - FLONA) in northeastern Brazil. The apparency hypothesis considers two groups of plants, apparent and non-apparent, that are characterized by their conspicuity to herbivores (humans) and their chemical defenses. Methods This study involved 153 interviewees and used semi-structured interviews. The plants were grouped by habit and lignification to evaluate the behavior of these categories in terms of ethnospecies richness, use value, and practical and commercial importance. Information about sites for collecting medicinal plants was also obtained. The salience of the ethnospecies was calculated. G-tests were used to test for differences in ethnospecies richness among collection sites, and the Kruskal-Wallis test was used to identify differences in the use values of plants depending on habit and lignification (plants were classed as woody or non-woody, the first group comprising trees, shrubs, and lignified climbers (vines) and the latter comprising herbs and non-lignified climbers). Spearman's correlation test was performed to relate salience to use value, and to relate both factors to the commercial value of the plants. Results A total of 222 medicinal plants were cited. Herbaceous and woody plants exhibited the highest ethnospecies richness, the non-woody and herbaceous plants had the most practical value (current use), and anthropogenic areas were the main sources of woody and non-woody medicinal plants; herbs and trees were equally versatile in treating diseases and did not differ with regard to use value. Trees were highlighted as the most commercially important growth habit.
Conclusions From the perspective of its biochemical fundamentals, the apparency hypothesis does not have the predictive potential to explain the use value and commercial value of medicinal plants. On the other hand, the herbaceous habit showed the highest ethnospecies richness in the community pharmacopeia, an expected prediction that corroborates the apparency hypothesis. PMID:24410756
Debates—Hypothesis testing in hydrology: Introduction
NASA Astrophysics Data System (ADS)
Blöschl, Günter
2017-03-01
This paper introduces the papers in the "Debates—Hypothesis testing in hydrology" series. The four articles in the series discuss whether and how the process of testing hypotheses leads to progress in hydrology. Repeated experiments with controlled boundary conditions are rarely feasible in hydrology. Research is therefore not easily aligned with the classical scientific method of testing hypotheses. Hypotheses in hydrology are often enshrined in computer models which are tested against observed data. Testability may be limited due to model complexity and data uncertainty. All four articles suggest that hypothesis testing has contributed to progress in hydrology and is needed in the future. However, the procedure is usually not as systematic as the philosophy of science suggests. A greater emphasis on a creative reasoning process on the basis of clues and explorative analyses is therefore needed.
Why Is Test-Restudy Practice Beneficial for Memory? An Evaluation of the Mediator Shift Hypothesis
ERIC Educational Resources Information Center
Pyc, Mary A.; Rawson, Katherine A.
2012-01-01
Although the memorial benefits of testing are well established empirically, the mechanisms underlying this benefit are not well understood. The authors evaluated the mediator shift hypothesis, which states that test-restudy practice is beneficial for memory because retrieval failures during practice allow individuals to evaluate the effectiveness…
Bayesian Approaches to Imputation, Hypothesis Testing, and Parameter Estimation
ERIC Educational Resources Information Center
Ross, Steven J.; Mackey, Beth
2015-01-01
This chapter introduces three applications of Bayesian inference to common and novel issues in second language research. After a review of the critiques of conventional hypothesis testing, our focus centers on ways Bayesian inference can be used for dealing with missing data, for testing theory-driven substantive hypotheses without a default null…
Mayo, Ruth; Alfasi, Dana; Schwarz, Norbert
2014-06-01
Feelings of distrust alert people not to take information at face value, which may influence their reasoning strategy. Using the Wason (1960) rule identification task, we tested whether chronic and temporary distrust increase the use of negative hypothesis testing strategies suited to falsifying one's own initial hunch. In Study 1, participants who were low in dispositional trust were more likely to engage in negative hypothesis testing than participants high in dispositional trust. In Study 2, trust and distrust were induced through an alleged person-memory task. Paralleling the effects of chronic distrust, participants exposed to a single distrust-eliciting face were 3 times as likely to engage in negative hypothesis testing as participants exposed to a trust-eliciting face. In both studies, distrust increased negative hypothesis testing, which was associated with better performance on the Wason task. In contrast, participants' initial rule generation was not consistently affected by distrust. These findings provide the first evidence that distrust can influence which reasoning strategy people adopt. PsycINFO Database Record (c) 2014 APA, all rights reserved.
In Defense of the Play-Creativity Hypothesis
ERIC Educational Resources Information Center
Silverman, Irwin W.
2016-01-01
The hypothesis that pretend play facilitates the creative thought process in children has received a great deal of attention. In a literature review, Lillard et al. (2013, p. 8) concluded that the evidence for this hypothesis was "not convincing." This article focuses on experimental and training studies that have tested this hypothesis.…
Decision Support Systems: Applications in Statistics and Hypothesis Testing.
ERIC Educational Resources Information Center
Olsen, Christopher R.; Bozeman, William C.
1988-01-01
Discussion of the selection of appropriate statistical procedures by educators highlights a study conducted to investigate the effectiveness of decision aids in facilitating the use of appropriate statistics. Experimental groups and a control group using a printed flow chart, a computer-based decision aid, and a standard text are described. (11…
Socio-Economic Status, Parenting Practices and Early Learning at French Kindergartens
ERIC Educational Resources Information Center
Tazouti, Youssef; Jarlégan, Annette
2014-01-01
The present research tests the hypothesis that parental values and educational practices are intermediary variables between the socio-economic status (SES) of families and early learning in children. Our empirical study was based on 299 parents with children in their final year at eight French kindergartens. We constructed an explanatory…
Integrating Articulatory Constraints into Models of Second Language Phonological Acquisition
ERIC Educational Resources Information Center
Colantoni, Laura; Steele, Jeffrey
2008-01-01
Models such as Eckman's markedness differential hypothesis, Flege's speech learning model, and Brown's feature-based theory of perception seek to explain and predict the relative difficulty second language (L2) learners face when acquiring new or similar sounds. In this paper, we test their predictive adequacy as concerns native English speakers'…
An improved Ångström-type model for estimating solar radiation over the Tibetan Plateau
USDA-ARS?s Scientific Manuscript database
Sunshine- and temperature-based empirical models are widely used for solar radiation estimation over the world, but the coefficients of the models are mostly site-dependent. The coefficients are expected to vary more under complex terrain conditions than under flat terrains. To test this hypothesis,...
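The sunshine-based models referred to above are typically of the Ångström-Prescott form H/H0 = a + b(S/S0), where H is global radiation, H0 extraterrestrial radiation, S/S0 relative sunshine duration, and a, b the site-dependent coefficients. A minimal sketch of fitting these coefficients by least squares on synthetic data (the coefficient values and noise level are illustrative assumptions, not taken from the study):

```python
import numpy as np

# Sketch: fit the Angstrom-Prescott model H/H0 = a + b * (S/S0)
# to synthetic daily data. Both coefficients are site-dependent,
# which is why they must be re-estimated for new terrain.

rng = np.random.default_rng(2)
s_ratio = rng.uniform(0.2, 0.9, size=60)      # relative sunshine, 60 days
a_true, b_true = 0.25, 0.50                   # assumed typical magnitudes
h_ratio = a_true + b_true * s_ratio + rng.normal(0, 0.02, size=60)

# Least-squares estimate of (a, b).
A = np.column_stack([np.ones_like(s_ratio), s_ratio])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, h_ratio, rcond=None)
print(round(a_hat, 2), round(b_hat, 2))
```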
ERIC Educational Resources Information Center
Hoglund, Wendy L. G.; Jones, Stephanie M.; Brown, Joshua L.; Aber, J. Lawrence
2015-01-01
The current study examines 3 alternative conceptual models of the directional associations between parent involvement in schooling (homework assistance, home-school conferencing, school-based support) and child adjustment (academic and social competence, aggressive behaviors). The parent socialization model tests the hypothesis that parent…
Remediating Misconception on Climate Change among Secondary School Students in Malaysia
ERIC Educational Resources Information Center
Karpudewan, Mageswary; Roth, Wolff-Michael; Chandrakesan, Kasturi
2015-01-01
Existing studies report on secondary school students' misconceptions related to climate change; they also report on the methods of teaching as reinforcing misconceptions. This quasi-experimental study was designed to test the null hypothesis that a curriculum based on constructivist principles does not lead to greater understanding and fewer…
Arabic Digit Naming Speed: Task Context and Redundancy Gain
ERIC Educational Resources Information Center
Campbell, Jamie I. D.; Metcalfe, Arron W. S.
2008-01-01
There is evidence for both semantic and asemantic routes for naming Arabic digits, but neuropsychological dissociations suggest that number-fact retrieval (2x3=6) can inhibit the semantic route for digit naming. Here, we tested the hypothesis that such inhibition should slow digit naming, based on the principle that reduced access to multiple…
Vitamin A Status is Associated With T-Cell Responses In Bangladeshi Men
USDA-ARS?s Scientific Manuscript database
Recommendations for vitamin A intake are based on maintaining liver stores of equal to or greater than 0.070 umol/g, which is sufficient to maintain normal vision. We propose that higher levels may be required to maintain normal immune function. To test this hypothesis, we conducted an 8 wk resident...
An Articulated English Program: A Hypothesis to Test.
ERIC Educational Resources Information Center
Publications of the Modern Language Association of America, 1959
1959-01-01
This report is the result of discussion of some 35 interrelated issues which have contributed to a loss of definition of "English" programs at all educational levels. Fearing further fragmentation of the curriculum would take place without reform, the conferees propose an articulated English program based on four cardinal principles. They seek to:…
ERIC Educational Resources Information Center
Raby, K. Lee; Lawler, Jamie M.; Shlafer, Rebecca J.; Hesemeyer, Paloma S.; Collins, W. Andrew; Sroufe, L. Alan
2015-01-01
This study drew on prospective, longitudinal data to test the hypothesis that the intergenerational transmission of positive parenting is mediated by competence in subsequent relationships with peers and romantic partners. Interview-based ratings of supportive parenting were completed with a sample of 113 individuals (46% male) followed from birth…
ERIC Educational Resources Information Center
Millis, Richard M.; Dyson, Sharon; Cannon, Dawn
2009-01-01
The advent of internet-based delivery of basic medical science lectures may unintentionally lead to decreased classroom attendance and participation, thereby creating a distance learning paradigm. In this study, we tested the hypothesis that classroom attendance/participation may be positively correlated with performance on a written examination…
Truth, Evasion, and Deception: A Study of Communicative Behavior.
ERIC Educational Resources Information Center
Kardes, Frank; And Others
Based on research which suggests that individuals transmit good news more than bad news and that people are motivated to project a positive image of themselves, 48 college students participated in a study to test the hypothesis that individuals would be more conscientious in giving information when future social interaction was anticipated.…
Race, Exposure, and Initial Affective Ratings in Interpersonal Attraction.
ERIC Educational Resources Information Center
Nikels, Kenneth W.; Hamm, Norman H.
To test the mere exposure hypothesis, subjects were exposed to 20 slides of black and white stimulus persons. Based upon pre-experimental ratings, each slide had been initially assigned to one of four groups: high favorable black, high favorable white, low favorable black, and low favorable white. The experimental group, consisting of 25 white…
Sensory Processing Relates to Attachment to Childhood Comfort Objects of College Students
ERIC Educational Resources Information Center
Kalpidou, Maria
2012-01-01
The author tested the hypothesis that attachment to comfort objects is based on the sensory processing characteristics of the individual. Fifty-two undergraduate students with and without a childhood comfort object reported sensory responses and performed a tactile threshold task. Those with a comfort object described their object and rated their…
USDA-ARS?s Scientific Manuscript database
Soy protein is effective at preventing hepatic steatosis; however, the mechanisms are poorly understood. We tested the hypothesis that soy versus dairy protein-based diet would alter microbiota and attenuate hepatic steatosis in hyperphagic Otsuka Long-Evans Tokushima Fatty (OLETF) rats. Male OLETF ...
ERIC Educational Resources Information Center
Naaz, Farah; Chariker, Julia H.; Pani, John R.
2014-01-01
A study was conducted to test the hypothesis that instruction with graphically integrated representations of whole and sectional neuroanatomy is especially effective for learning to recognize neural structures in sectional imagery (such as magnetic resonance imaging [MRI]). Neuroanatomy was taught to two groups of participants using computer…
Evolution of Protein Lipograms: A Bioinformatics Problem
ERIC Educational Resources Information Center
White, Harold B., III; Dhurjati, Prasad
2006-01-01
A protein lacking one of the 20 common amino acids is a protein lipogram. This open-ended problem-based learning assignment deals with the evolution of proteins with biased amino acid composition. It has students query protein and metabolic databases to test the hypothesis that natural selection has reduced the frequency of each amino acid…
Use of the disease severity index for null hypothesis testing
USDA-ARS?s Scientific Manuscript database
A disease severity index (DSI) is a single number for summarizing a large amount of disease severity information. It is used to indicate relative resistance of cultivars, to relate disease severity to yield loss, or to compare treatments. The DSI has most often been based on a special type of ordina...
Quantifying (dis)agreement between direct detection experiments in a halo-independent way
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feldstein, Brian; Kahlhoefer, Felix, E-mail: brian.feldstein@physics.ox.ac.uk, E-mail: felix.kahlhoefer@physics.ox.ac.uk
We propose an improved method to study recent and near-future dark matter direct detection experiments with small numbers of observed events. Our method determines in a quantitative and halo-independent way whether the experiments point towards a consistent dark matter signal and identifies the best-fit dark matter parameters. To achieve true halo independence, we apply a recently developed method based on finding the velocity distribution that best describes a given set of data. For a quantitative global analysis we construct a likelihood function suitable for small numbers of events, which allows us to determine the best-fit particle physics properties of dark matter considering all experiments simultaneously. Based on this likelihood function we propose a new test statistic that quantifies how well the proposed model fits the data and how large the tension between different direct detection experiments is. We perform Monte Carlo simulations in order to determine the probability distribution function of this test statistic and to calculate the p-value for both the dark matter hypothesis and the background-only hypothesis.
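The Monte Carlo calibration described above follows a generic recipe that can be sketched independently of the dark-matter likelihood: simulate the test statistic many times under the hypothesis in question and report the fraction of simulated values at least as extreme as the observed one. The toy statistic below (the maximum of a few exponential "event weights") is purely illustrative and not the paper's statistic.

```python
import random

def mc_pvalue(t_obs, simulate_t, rng, n_sim=2000):
    """Monte Carlo p-value: the fraction of statistics simulated under the
    hypothesis that are >= the observed value, with the standard +1
    correction so the estimate is never exactly zero."""
    t_sims = [simulate_t(rng) for _ in range(n_sim)]
    return (1 + sum(t >= t_obs for t in t_sims)) / (n_sim + 1)

# illustrative statistic: maximum of 5 exponential 'event weights'
def sim_stat(rng):
    return max(rng.expovariate(1.0) for _ in range(5))

rng = random.Random(7)
p = mc_pvalue(4.0, sim_stat, rng)
print(f"Monte Carlo p-value: {p:.3f}")
```

The same loop works for any statistic whose null distribution is awkward analytically but easy to simulate, which is exactly the situation with small event counts.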
Uncued Low SNR Detection with Likelihood from Image Multi Bernoulli Filter
NASA Astrophysics Data System (ADS)
Murphy, T.; Holzinger, M.
2016-09-01
Both space situational awareness (SSA) and space domain awareness (SDA) necessitate uncued, partially informed detection and orbit determination efforts for small space objects, which often produce only low-strength electro-optical signatures. General frame-to-frame detection and tracking methods include moving target indicator, multiple hypothesis testing, direct track-before-detect, and random-finite-set-based multi-object tracking. This paper applies the multi-Bernoulli filter to low signal-to-noise ratio (SNR), uncued detection of space objects for space domain awareness applications. The primary innovation in this paper is a detailed analysis of the existing state-of-the-art likelihood functions alongside a likelihood function, based on a binary hypothesis, previously proposed by the authors. The algorithm is tested on electro-optical imagery obtained from a variety of sensors at Georgia Tech, including the GT-SORT 0.5 m Raven-class telescope and a twenty-degree field-of-view, high-frame-rate CMOS sensor. In particular, a data set from an extended pass of the Hitomi (Astro-H) satellite, approximately 3 days after loss of communication and potential break-up, is examined.
Chronic Motivational State Interacts with Task Reward Structure in Dynamic Decision-Making
Cooper, Jessica A.; Worthy, Darrell A.; Maddox, W. Todd
2015-01-01
Research distinguishes between a habitual, model-free system motivated toward immediately rewarding actions, and a goal-directed, model-based system motivated toward actions that improve future state. We examined the balance of processing in these two systems during state-based decision-making. We tested a regulatory fit hypothesis (Maddox & Markman, 2010) that predicts that global trait motivation affects the balance of habitual- vs. goal-directed processing but only through its interaction with the task framing as gain-maximization or loss-minimization. We found support for the hypothesis that a match between an individual’s chronic motivational state and the task framing enhances goal-directed processing, and thus state-based decision-making. Specifically, chronic promotion-focused individuals under gain-maximization and chronic prevention-focused individuals under loss-minimization both showed enhanced state-based decision-making. Computational modeling indicates that individuals in a match between global chronic motivational state and local task reward structure engaged more goal-directed processing, whereas those in a mismatch engaged more habitual processing. PMID:26520256
A load-based mechanism for inter-leg coordination in insects
2017-01-01
Animals rely on an adaptive coordination of legs during walking. However, which specific mechanisms underlie coordination during natural locomotion remains largely unknown. One hypothesis is that legs can be coordinated mechanically based on a transfer of body load from one leg to another. To test this hypothesis, we simultaneously recorded leg kinematics, ground reaction forces and muscle activity in freely walking stick insects (Carausius morosus). Based on torque calculations, we show that load sensors (campaniform sensilla) at the proximal leg joints are well suited to encode the unloading of the leg in individual steps. The unloading coincides with a switch from stance to swing muscle activity, consistent with a load reflex promoting the stance-to-swing transition. Moreover, a mechanical simulation reveals that the unloading can be ascribed to the loading of a specific neighbouring leg, making it exploitable for inter-leg coordination. We propose that mechanically mediated load-based coordination is used across insects analogously to mammals. PMID:29187626
The frequentist implications of optional stopping on Bayesian hypothesis tests.
Sanborn, Adam N; Hills, Thomas T
2014-04-01
Null hypothesis significance testing (NHST) is the most commonly used statistical methodology in psychology. The probability of achieving a value as extreme or more extreme than the statistic obtained from the data is evaluated, and if it is low enough, the null hypothesis is rejected. However, because common experimental practice often clashes with the assumptions underlying NHST, these calculated probabilities are often incorrect. Most commonly, experimenters use tests that assume that sample sizes are fixed in advance of data collection but then use the data to determine when to stop; in the limit, experimenters can use data monitoring to guarantee that the null hypothesis will be rejected. Bayesian hypothesis testing (BHT) provides a solution to these ills because the stopping rule used is irrelevant to the calculation of a Bayes factor. In addition, there are strong mathematical guarantees on the frequentist properties of BHT that are comforting for researchers concerned that stopping rules could influence the Bayes factors produced. Here, we show that these guaranteed bounds have limited scope and often do not apply in psychological research. Specifically, we quantitatively demonstrate the impact of optional stopping on the resulting Bayes factors in two common situations: (1) when the truth is a combination of the hypotheses, such as in a heterogeneous population, and (2) when a hypothesis is composite (taking multiple parameter values), such as the alternative hypothesis in a t-test. We found that, for these situations, while the Bayesian interpretation remains correct regardless of the stopping rule used, the choice of stopping rule can, in some situations, greatly increase the chance of experimenters finding evidence in the direction they desire. We suggest ways to control these frequentist implications of stopping rules on BHT.
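The optional-stopping scenario quantified above can be reproduced in miniature. The sketch below uses the BIC approximation to the Bayes factor for a one-sample t-test (an assumption of this sketch, not the authors' exact priors) and stops data collection as soon as BF10 crosses a threshold; counting how often pure null data "succeeds" exposes the frequentist effect of the stopping rule.

```python
import numpy as np

def bf10_bic(x):
    """Approximate Bayes factor BF10 for a one-sample t-test via the BIC
    approximation BF10 ~ exp((BIC_null - BIC_alt) / 2)."""
    n = len(x)
    rss_null = np.sum(x ** 2)                 # mean fixed at 0 (1 param: sigma)
    rss_alt = np.sum((x - x.mean()) ** 2)     # mean estimated (2 params)
    bic_null = n * np.log(rss_null / n) + 1 * np.log(n)
    bic_alt = n * np.log(rss_alt / n) + 2 * np.log(n)
    return np.exp((bic_null - bic_alt) / 2)

def optional_stopping_run(rng, threshold=3.0, n_min=10, n_max=200, batch=10):
    """Collect null (standard normal) data in batches, stopping early as
    soon as BF10 exceeds the threshold."""
    x = rng.standard_normal(n_min)
    while len(x) < n_max:
        if bf10_bic(x) > threshold:
            return True, len(x)               # 'evidence' for H1 found
        x = np.append(x, rng.standard_normal(batch))
    return bf10_bic(x) > threshold, len(x)

rng = np.random.default_rng(1)
hits = sum(optional_stopping_run(rng)[0] for _ in range(500))
print(f"runs reaching BF10 > 3 under the null: {hits}/500")
```

As the abstract notes, the Bayes factor computation itself is unchanged by the stopping rule; what the simulation shows is how often a motivated stopper can reach a chosen evidence threshold when the null is true.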
Starrett, James; Hayashi, Cheryl Y; Derkarabetian, Shahan; Hedin, Marshal
2018-01-01
The relative roles of ecological niche conservatism versus niche divergence in promoting montane speciation remain an important topic in biogeography. Here, our aim was to test whether lineage diversification in a species complex of trapdoor spiders corresponds with riverine barriers or with an ecological gradient associated with elevational tiering. Aliatypus janus was sampled from throughout its range, with emphasis on populations in the southern Sierra Nevada Mountains of California. We collected multi-locus genetic data to generate a species tree for A. janus and its close relatives. Coalescent-based hypothesis tests were conducted to determine if genetic breaks within A. janus conform to riverine barriers. Ecological niche models (ENM) under current and Last Glacial Maximum (LGM) conditions were generated and hypothesis tests of niche conservatism and divergence were performed. Coalescent analyses reveal deeply divergent genetic lineages within A. janus, likely corresponding to cryptic species. Two primary lineages meet along an elevational gradient on the western slopes of the southern Sierra Nevada Mountains. ENMs under both current and LGM conditions indicate that these groups occupy largely non-overlapping niches. ENM hypothesis testing rejected niche identity between the two groups, and supported a sharp ecological gradient occurring where the groups meet. However, the niche similarity test indicated that the two groups may not inhabit different background niches. The Sierra Nevada Mountains provide a natural laboratory for simultaneously testing ecological niche divergence and conservatism and their role in speciation across a diverse range of taxa. Aliatypus janus represents a species complex with cryptic lineages that may have diverged due to parapatric speciation along an ecological gradient, or been maintained by the evolution of ecological niche differences following allopatric speciation. Copyright © 2017 Elsevier Inc. All rights reserved.
Octopamine influences honey bee foraging preference.
Giray, Tugrul; Galindo-Cardona, Alberto; Oskay, Devrim
2007-07-01
Colony condition and differences in individual preferences influence forage type collected by bees. Physiological bases for the changing preferences of individual foragers are just beginning to be examined. Recently, octopamine has been shown to influence honey bees' age at onset of foraging and their probability of dancing for rewards. However, octopamine has not been causally linked with foraging preference in the field. We tested the hypothesis that changes in octopamine may alter forage type (preference hypothesis). We treated identified foragers orally with octopamine, its immediate precursor tyramine, or sucrose syrup (control). Octopamine-treated foragers switched type of material collected; control bees did not. Tyramine group results were not different from the control group. In addition, sugar concentrations of nectar collected by foragers after octopamine treatment were lower than before treatment, indicating a change in preference. In contrast, before and after nectar concentrations for bees in the control group were similar. These results, taken together, support the preference hypothesis.
McCann, Stewart J H
2015-01-01
The precocity-longevity hypothesis that those who reach career milestones earlier in life have shorter life spans was tested with the 430 men elected to serve in the House of Representatives for the 71st U.S. Congress in 1929-1930 who were alive throughout 1930. There was no tendency for those first serving at an earlier age to die sooner or those serving first at a later age to die later than expected based on individual life expectancy in 1930. Although age at first serving was correlated with death age, the correlation was not significant when expected death age was controlled. The results cast serious doubt on the contention of the precocity-longevity hypothesis that the developmental aspects of the prerequisites, concomitants, and consequences of early career achievement peaks actively enhance the conditions for an earlier death.
NASA Astrophysics Data System (ADS)
Huang, M.; Bazurto, R.; Camparo, J.
2018-01-01
The ring-mode to red-mode transition in alkali metal inductively coupled plasmas (ICPs) (i.e., rf-discharge lamps) is perhaps the most important physical phenomenon affecting these devices as optical pumping light sources for atomic clocks and magnetometers. It sets the limit on useful ICP operating temperature, thereby setting a limit on ICP light output for atomic-clock/magnetometer signal generation, and it is a temperature region of ICP operation associated with discharge instability. Previous work has suggested that the mechanism driving the ring-mode to red-mode transition is associated with radiation trapping, but definitive experimental evidence validating that hypothesis has been lacking. Based on that hypothesis, one would predict that the introduction of an alkali-fluorescence quenching gas (i.e., N2) into the ICP would increase the ring-mode to red-mode transition temperature. Here, we test that prediction, finding direct evidence supporting the radiation-trapping hypothesis.
Studying biodiversity: is a new paradigm really needed?
Nichols, James D.; Cooch, Evan G.; Nichols, Jonathan M.; Sauer, John R.
2012-01-01
Authors in this journal have recommended a new approach to the conduct of biodiversity science. This data-driven approach requires the organization of large amounts of ecological data, analysis of these data to discover complex patterns, and subsequent development of hypotheses corresponding to detected patterns. This proposed new approach has been contrasted with more-traditional knowledge-based approaches in which investigators deduce consequences of competing hypotheses to be confronted with actual data, providing a basis for discriminating among the hypotheses. We note that one approach is directed at hypothesis generation, whereas the other is also focused on discriminating among competing hypotheses. Here, we argue for the importance of applying existing knowledge to the separate issues of (a) hypothesis selection and generation and (b) hypothesis discrimination and testing. In times of limited conservation funding, the relative efficiency of different approaches to learning should be an important consideration in decisions about how to study biodiversity.
Mechanical implications of the mandibular coronoid process morphology in Neandertals.
Marom, Assaf; Rak, Yoel
2018-06-01
Among the diagnostic features of the Neandertal mandible are the broad base of the coronoid process and its straight posterior margin. The adaptive value of these (and other) anatomical features has been linked to the Neandertal's need to cope with a large gape. The present study aims to test this hypothesis with regard to the morphology of the coronoid process. This admittedly simple, intuitive hypothesis was tested here via a comparative finite-element study of the primitive versus modified state of the coronoid process, using two-dimensional models of the mandible. Our simulations demonstrate that a large gape has an unfavorable effect on the primitive state of the coronoid process: the diagonal, almost horizontal, component of the temporalis muscle resultant (relative to the long axis of the coronoid process) bends the process in the sagittal plane. Furthermore, we show that the modification of the coronoid process morphology alone reduces the process's bending in a wide gape, increasing the compression-to-tension ratio. These results provide indirect evidence in support of the hypothesis that the modification of the coronoid process in Neandertals is necessary for enabling their mandible to cope with a large gape. © 2018 Wiley Periodicals, Inc.
Decentralized Hypothesis Testing in Energy Harvesting Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Tarighati, Alla; Gross, James; Jalden, Joakim
2017-09-01
We consider the problem of decentralized hypothesis testing in a network of energy harvesting sensors, where sensors make noisy observations of a phenomenon and send quantized information about the phenomenon towards a fusion center. The fusion center makes a decision about the present hypothesis using the aggregate received data during a time interval. We explicitly consider a scenario under which the messages are sent through parallel access channels towards the fusion center. To avoid limited lifetime issues, we assume each sensor is capable of harvesting all the energy it needs for the communication from the environment. Each sensor has an energy buffer (battery) to save its harvested energy for use in other time intervals. Our key contribution is to formulate the problem of decentralized detection in a sensor network with energy harvesting devices. Our analysis is based on a queuing-theoretic model for the battery and we propose a sensor decision design method by considering long term energy management at the sensors. We show how the performance of the system changes for different battery capacities. We then numerically show how our findings can be used in the design of sensor networks with energy harvesting sensors.
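Setting aside the energy-harvesting battery dynamics, the core decision chain of such a network (sensors quantize noisy observations to one bit, and a fusion center applies a counting rule to the aggregate) can be sketched as follows. The thresholds, noise model, and k-out-of-n rule are illustrative assumptions, not values from the paper.

```python
import random

def sensor_bit(observation, threshold=0.5):
    """One-bit quantizer: each sensor reports whether its noisy
    observation exceeds a local threshold."""
    return 1 if observation > threshold else 0

def fusion_decision(bits, k):
    """k-out-of-n counting rule at the fusion center."""
    return 1 if sum(bits) >= k else 0

def trial(rng, signal, n_sensors=9, k=5):
    """One time interval: each sensor sees signal + unit Gaussian noise
    and sends its bit over a parallel access channel."""
    obs = [signal + rng.gauss(0.0, 1.0) for _ in range(n_sensors)]
    return fusion_decision([sensor_bit(o) for o in obs], k)

rng = random.Random(0)
n_trials = 2000
p_fa = sum(trial(rng, 0.0) for _ in range(n_trials)) / n_trials  # H0: no signal
p_d = sum(trial(rng, 1.0) for _ in range(n_trials)) / n_trials   # H1: signal = 1
print(f"false-alarm rate: {p_fa:.3f}, detection rate: {p_d:.3f}")
```

In the paper's setting, the interesting design question is how each sensor's quantizer should change when its transmissions are constrained by a battery fed from harvested energy; the sketch above is only the fixed-threshold baseline.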
1994-03-01
labels of α, which are called significance levels. The hypothesis tests are done based on the α levels. The maximum probabilities of making a type II error...critical values at specific α levels. This procedure is done for each of the 50,000 samples. The number of the samples passing each test at those specific... α levels is counted. The ratio of the number of accepted samples to 50,000 gives the percentage point. Then, subtracting that value from one would
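The counting procedure described in this fragment can be sketched at a smaller scale (5,000 samples rather than 50,000): test each simulated sample at a fixed α, take the ratio of accepted samples as the percentage point, and subtract from one to recover the rejection rate. The two-sided z-test and standard normal data are illustrative assumptions, since the fragment does not name the test used.

```python
import math
import random

def z_test_accepts(sample, mu0=0.0):
    """Two-sided z-test at alpha = 0.05 for the mean of unit-variance
    data; returns True if H0 is not rejected (sample 'accepted')."""
    n = len(sample)
    z = (sum(sample) / n - mu0) * math.sqrt(n)
    return abs(z) <= 1.959964  # Phi^{-1}(0.975), hard-coded for alpha = 0.05

rng = random.Random(42)
n_samples, n_obs = 5000, 30
accepted = sum(
    z_test_accepts([rng.gauss(0.0, 1.0) for _ in range(n_obs)])
    for _ in range(n_samples)
)
ratio = accepted / n_samples  # the 'percentage point' of the procedure
print(f"accepted ratio: {ratio:.3f}; one minus ratio: {1 - ratio:.3f}")
```

Under H0 the accepted ratio estimates 1 - α, so one minus the ratio recovers the empirical type I error rate; applied to samples drawn under an alternative, the same subtraction would estimate power.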
TRANSGENIC MOUSE MODELS AND PARTICULATE MATTER (PM)
The hypothesis to be tested is that metal catalyzed oxidative stress can contribute to the biological effects of particulate matter. We acquired several transgenic mouse strains to test this hypothesis. Breeding of the mice was accomplished by Duke University. Particles employed ...
Hypothesis Testing Using the Films of the Three Stooges
ERIC Educational Resources Information Center
Gardner, Robert; Davidson, Robert
2010-01-01
The use of The Three Stooges' films as a source of data in an introductory statistics class is described. The Stooges' films are separated into three populations. Using these populations, students may conduct hypothesis tests with data they collect.
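With counts collected from the three film populations, students can run exactly this kind of test. The figures below are made-up classroom data (hypothetical slapstick-hit counts per film), not measurements from the films; a two-sample t-test between two of the populations is shown.

```python
from scipy import stats

# hypothetical counts of slapstick hits per film in two 'populations'
curly_era = [21, 25, 19, 30, 27, 22, 26, 24]
shemp_era = [15, 18, 20, 14, 17, 19, 16, 21]

# two-sample t-test of equal mean hit counts
t_stat, p_val = stats.ttest_ind(curly_era, shemp_era)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
```

With all three populations in hand, `stats.f_oneway` extends the same exercise to a one-way ANOVA.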
Hovick, Stephen M; Whitney, Kenneth D
2014-01-01
The hypothesis that interspecific hybridisation promotes invasiveness has received much recent attention, but tests of the hypothesis can suffer from important limitations. Here, we provide the first systematic review of studies experimentally testing the hybridisation-invasion (H-I) hypothesis in plants, animals and fungi. We identified 72 hybrid systems for which hybridisation has been putatively associated with invasiveness, weediness or range expansion. Within this group, 15 systems (comprising 34 studies) experimentally tested performance of hybrids vs. their parental species and met our other criteria. Both phylogenetic and non-phylogenetic meta-analyses demonstrated that wild hybrids were significantly more fecund and larger than their parental taxa, but did not differ in survival. Resynthesised hybrids (which typically represent earlier generations than do wild hybrids) did not consistently differ from parental species in fecundity, survival or size. Using meta-regression, we found that fecundity increased (but survival decreased) with generation in resynthesised hybrids, suggesting that natural selection can play an important role in shaping hybrid performance – and thus invasiveness – over time. We conclude that the available evidence supports the H-I hypothesis, with the caveat that our results are clearly driven by tests in plants, which are more numerous than tests in animals and fungi. PMID:25234578
Coherent single pion production by antineutrino charged current interactions and test of PCAC
NASA Astrophysics Data System (ADS)
Marage, P.; Aderholz, M.; Allport, P.; Armenise, N.; Baton, J. P.; Berggren, M.; Bertrand, D.; Brisson, V.; Bullock, F. W.; Burkot, W.; Calicchio, M.; Clayton, E. F.; Coghen, T.; Cooper-Sarkar, A. M.; Erriquez, O.; Fitch, P. J.; Gerbier, G.; Guy, J.; Hamisi, F.; Hulth, P. O.; Jones, G. T.; Kasper, P.; Klein, H.; Middleton, R. P.; Miller, D. B.; Mobayyen, M. M.; Morrison, D. R. O.; Natali, S.; Neveu, M.; O'Neale, S. W.; Parker, M. A.; Petiau, P.; Sacton, J.; Sansum, R. A.; Simopoulou, E.; Vallée, C.; Varvell, K.; Vayaki, A.; Venus, W.; Wachsmuth, H.; Wells, J.; Wittek, W.
1986-06-01
The cross section for coherent production of a single π⁻ meson in charged-current antineutrino interactions on neon nuclei has been measured in BEBC to be (175 ± 25) × 10⁻⁴⁰ cm²/neon nucleus, averaged over the energy spectrum of the antineutrino wide-band beam at the CERN SPS; this corresponds to (0.9 ± 0.1)% of the total charged-current ν̄_μ cross section. The distributions of kinematical variables are in agreement with theoretical predictions based on the PCAC hypothesis and the meson dominance model; in particular, the Q² dependence is well described by a propagator containing a mass m = (1.35 ± 0.18) GeV. The absolute value of the cross section is also in agreement with the model. This analysis thus provides a test of the PCAC hypothesis in the antineutrino energy range 5-150 GeV.
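The "propagator containing a mass m" quoted above is conventionally the dipole factor of meson-dominance models; in the standard form (written here from the general model, not copied from the paper) the coherent cross section falls with Q² as

```latex
\frac{d\sigma}{dQ^2} \;\propto\; \left(\frac{m^2}{Q^2 + m^2}\right)^{2},
\qquad m = (1.35 \pm 0.18)\ \text{GeV},
```

so the fitted mass sets the Q² scale over which the coherent signal is concentrated.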
Autocorrelation of location estimates and the analysis of radiotracking data
Otis, D.L.; White, Gary C.
1999-01-01
The wildlife literature has been contradictory about the importance of autocorrelation in radiotracking data used for home range estimation and hypothesis tests of habitat selection. By definition, the concept of a home range involves autocorrelated movements, but estimates or hypothesis tests based on sampling designs that predefine a time frame of interest, and that generate representative samples of an animal's movement during this time frame, should not be affected by length of the sampling interval and autocorrelation. Intensive sampling of the individual's home range and habitat use during the time frame of the study leads to improved estimates for the individual, but use of location estimates as the sample unit to compare across animals is pseudoreplication. We therefore recommend against use of habitat selection analysis techniques that use locations instead of individuals as the sample unit. We offer a general outline for sampling designs for radiotracking studies.
Halloran, Michael J; Kashima, Emiko S
2004-07-01
In this article, the authors report an investigation of the relationship between terror management and social identity processes by testing for the effects of social identity salience on worldview validation. Two studies, with distinct populations, were conducted to test the hypothesis that mortality salience would lead to worldview validation of values related to a salient social identity. In Study 1, reasonable support for this hypothesis was found with bicultural Aboriginal Australian participants (N = 97). It was found that thoughts of death led participants to validate ingroup and reject outgroup values depending on the social identity that had been made salient. In Study 2, when their student and Australian identities were primed, respectively, Anglo-Australian students (N = 119) validated values related to those identities, exclusively. The implications of the findings for identity-based worldview validation are discussed.
One-way ANOVA based on interval information
NASA Astrophysics Data System (ADS)
Hesamian, Gholamreza
2016-08-01
This paper deals with extending one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, a notion of interval random variable is first introduced. In particular, a normal distribution with interval parameters is introduced to test hypotheses about the equality of interval means or the homogeneity of interval variances. Moreover, the least significant difference (LSD) method for multiple comparison of interval means is developed for the case where the null hypothesis of equal means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic and the related interval critical value as a criterion to accept or reject the null interval hypothesis of interest. Finally, the decision-making method yields degrees of acceptance or rejection of the interval hypotheses. An applied example is used to show the performance of this method.
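As a point of reference for the interval extension, the classical point-valued pipeline it generalizes (one-way ANOVA, followed by pairwise LSD comparisons only when the omnibus null is rejected) looks like this. The group data are simulated, and the formulas are the textbook ones, not the paper's interval versions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# three groups of 20 observations; the third has a shifted mean
groups = [rng.normal(mu, 1.0, 20) for mu in (0.0, 0.0, 1.5)]

f_stat, p_val = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_val:.4g}")

if p_val < 0.05:  # only then inspect pairwise LSD comparisons
    k = len(groups)
    n = sum(len(g) for g in groups)
    # pooled within-group mean square error
    mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n - k)
    t_crit = stats.t.ppf(0.975, n - k)
    for i in range(k):
        for j in range(i + 1, k):
            lsd = t_crit * np.sqrt(mse * (1 / len(groups[i]) + 1 / len(groups[j])))
            diff = abs(groups[i].mean() - groups[j].mean())
            verdict = "significant" if diff > lsd else "ns"
            print(f"groups {i},{j}: |diff| = {diff:.2f}, LSD = {lsd:.2f}, {verdict}")
```

The interval method replaces each of these point quantities (means, MSE, critical value) with interval counterparts and compares them through the index described in the abstract.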
A web-portal for interactive data exploration, visualization, and hypothesis testing
Bartsch, Hauke; Thompson, Wesley K.; Jernigan, Terry L.; Dale, Anders M.
2014-01-01
Clinical research studies generate data that need to be shared and statistically analyzed by their participating institutions. The distributed nature of research and the different domains involved present major challenges to data sharing, exploration, and visualization. The Data Portal infrastructure was developed to support ongoing research in the areas of neurocognition, imaging, and genetics. Researchers benefit from the integration of data sources across domains, the explicit representation of knowledge from domain experts, and user interfaces providing convenient access to project specific data resources and algorithms. The system provides an interactive approach to statistical analysis, data mining, and hypothesis testing over the lifetime of a study and fulfills a mandate of public sharing by integrating data sharing into a system built for active data exploration. The web-based platform removes barriers for research and supports the ongoing exploration of data. PMID:24723882
Evidence that insect herbivores are deterred by ant pheromones.
Offenberg, Joachim; Nielsen, Mogens Gissel; MacIntosh, Donald J; Havanon, Sopon; Aksornkoae, Sanit
2004-01-01
It is well documented that ants can protect plants against insect herbivores, but the underlying mechanisms remain almost undocumented. We propose and test the pheromone avoidance hypothesis: an indirect mechanism whereby insect herbivores are repelled not only by ants but also by ant pheromones. Herbivores subjected to ant predation will experience a selective advantage if they evolve mechanisms enabling them to avoid feeding within ant territories. Such a mechanism could be based on the ability to detect and evade ant pheromones. Field observations and data from the literature showed that the ant Oecophylla smaragdina distributes persistent pheromones throughout its territory. In addition, a laboratory test showed that the beetle Rhyparida wallacei, which this ant preys on, was reluctant to feed on leaves sampled within ant territories compared with leaves sampled outside territories. Thus, this study provides an example of an ant-herbivore system conforming to the pheromone avoidance hypothesis. PMID:15801596
The Impact of Economic Factors and Acquisition Reforms on the Cost of Defense Weapon Systems
2006-03-01
test for homoskedasticity, the Breusch-Pagan test is employed. The null hypothesis of the Breusch-Pagan test is that the variance is constant...made. Using the Breusch-Pagan test shown in Table 19 below, the prob>chi2 is greater than α = .05; therefore, we fail to reject the null hypothesis... [Table 19: Breusch-Pagan test (H0: constant variance), estimated variance and standard deviation for the variables overrunpercentfp100 and overrunpercent100.]
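The Breusch-Pagan statistic referenced above can be computed directly: regress the squared OLS residuals on the regressors and compare LM = n·R² to a χ² distribution. The data below are simulated for illustration (the report's overrunpercent100 variable is not reproduced here).

```python
import numpy as np
from scipy import stats

def breusch_pagan(y, X):
    """Breusch-Pagan LM test: regress squared OLS residuals on X;
    LM = n * R^2 is chi-square with (k - 1) df under H0 of constant
    variance. Returns (LM, p-value)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u2 = (y - X @ beta) ** 2                       # squared residuals
    gamma, *_ = np.linalg.lstsq(X, u2, rcond=None)  # auxiliary regression
    fitted = X @ gamma
    r2 = 1 - np.sum((u2 - fitted) ** 2) / np.sum((u2 - u2.mean()) ** 2)
    lm = len(y) * r2
    return lm, stats.chi2.sf(lm, X.shape[1] - 1)

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])
y_het = 2 + 0.5 * x + rng.normal(0, 0.2 + 0.3 * x)  # error spread grows with x
y_hom = 2 + 0.5 * x + rng.normal(0, 1.0, n)          # constant error spread

lm_het, p_het = breusch_pagan(y_het, X)
lm_hom, p_hom = breusch_pagan(y_hom, X)
print(f"heteroskedastic: p = {p_het:.4f}; homoskedastic: p = {p_hom:.4f}")
```

A p-value (the report's "prob>chi2") above α = .05, as in Table 19, means the null of constant variance is not rejected.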
Kawabe, Kouichi
2017-08-01
The two-hit hypothesis has been used to explain the onset mechanism of schizophrenia. It assumes that predisposition to schizophrenia is originally attributed to vulnerability in the brain which stems from genetic or early developmental factors, and that onset is triggered by exposure to later detrimental factors such as stress in adolescence or adulthood. Based on this hypothesis, the present study examined whether rats that had received neonatal repeated treatment with an N-methyl-d-aspartate (NMDA) receptor antagonist (MK-801), an animal model of schizophrenia, were vulnerable to chronic stress. Rats were treated with MK-801 (0.2 mg/kg) or saline twice daily on postnatal days 7-20, and animals in the stress subgroups were subjected to 20 days (5 days/week × 4 weeks) of forced-swim stress in adulthood. Following this, behavioral tests (prepulse inhibition, spontaneous alternation, open-field, and forced-swim tests) were carried out. The results indicate that neonatal repeated MK-801 treatment in rats inhibits an increase in immobility in the forced-swim test after they have experienced chronic forced-swim stress. This suggests that rats that have undergone chronic neonatal repeated NMDA receptor blockade could have a reduced ability to habituate or adapt to a stressful situation, and supports the hypothesis that these rats are sensitive or vulnerable to stress. Copyright © 2017 Elsevier Inc. All rights reserved.
Stanciu, Adrian
2017-01-01
Alignment of individuals on more than one diversity attribute (i.e., faultlines) may lead to intergroup biases in teams, undermining expected gains in efficiency. Research has yet to examine whether this can be a consequence of a stereotypical consistency between social and informational attributes of diversity. The present study tests the hypothesis that, in a team with a stereotype-based faultline (a stereotypical consistency between gender and skills), there is increased out-group derogation compared to a team with a stereotype-inconsistent faultline. Furthermore, the study proposes that tasks can activate stereotypes and that the need for cognition dictates whether stereotypes are applied. The findings confirm the hypothesis and additionally provide evidence that tasks that activate gender stereotypes emphasize out-group derogation, especially for team members with low need for cognition.
Extended target recognition in cognitive radar networks.
Wei, Yimin; Meng, Huadong; Liu, Yimin; Wang, Xiqin
2010-01-01
We address the problem of adaptive waveform design for extended target recognition in cognitive radar networks. A closed-loop active target recognition radar system is extended to the case of a centralized cognitive radar network, in which a generalized likelihood ratio (GLR) based sequential hypothesis testing (SHT) framework is employed. Using Doppler velocities measured by multiple radars, the target aspect angle for each radar is calculated. The joint probability of each target hypothesis is then updated using observations from different radar lines of sight (LOS). Based on these probabilities, a minimum correlation algorithm is proposed to adaptively design the transmit waveform for each radar in an amplitude fluctuation situation. Simulation results demonstrate performance improvements due to the cognitive radar network and adaptive waveform design. Our minimum correlation algorithm outperforms the eigen-waveform solution and other non-cognitive waveform design approaches.
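The sequential hypothesis testing at the heart of such a framework generalizes Wald's classical two-hypothesis SPRT, which is easy to sketch for a Gaussian observation stream; the paper's multi-hypothesis GLR machinery and waveform adaptation are beyond this snippet.

```python
import math

def sprt(stream, mu0, mu1, sigma=1.0, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for the mean of a
    Gaussian stream: accumulate the log-likelihood ratio and stop as
    soon as it crosses the lower (accept H0) or upper (accept H1)
    boundary set by the target error rates."""
    lower = math.log(beta / (1 - alpha))
    upper = math.log((1 - beta) / alpha)
    llr, n = 0.0, 0
    for x in stream:
        n += 1
        # Gaussian log-likelihood ratio for H1 (mean mu1) vs H0 (mean mu0)
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n

# deterministic check: observations exactly at mu1 = 1 favor H1 quickly
decision, n_used = sprt([1.0] * 20, mu0=0.0, mu1=1.0)
print(decision, n_used)  # → H1 6
```

Sequential designs like this terminate as soon as the evidence suffices, which is what lets a cognitive radar re-plan its next waveform between decisions.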
ERIC Educational Resources Information Center
Tryon, Warren W.; Lewis, Charles
2008-01-01
Evidence of group matching frequently takes the form of a nonsignificant test of statistical difference. Theoretical hypotheses of no difference are also tested in this way. These practices are flawed in that null hypothesis statistical testing provides evidence against the null hypothesis and failing to reject H₀ is not evidence…
ERIC Educational Resources Information Center
McNeil, Keith
The use of directional and nondirectional hypothesis testing was examined from the perspectives of textbooks, journal articles, and members of editorial boards. Three widely used statistical texts were reviewed in terms of how directional and nondirectional tests of significance were presented. Texts reviewed were written by: (1) D. E. Hinkle, W.…
The Feminization of School Hypothesis Called into Question among Junior and High School Students
ERIC Educational Resources Information Center
Verniers, Catherine; Martinot, Delphine; Dompnier, Benoît
2016-01-01
Background: The feminization of school hypothesis suggests that boys underachieve in school compared to girls because school rewards feminine characteristics that are at odds with boys' masculine features. Aims: The feminization of school hypothesis lacks empirical evidence. The aim of this study was to test this hypothesis by examining the extent…
Supporting shared hypothesis testing in the biomedical domain.
Agibetov, Asan; Jiménez-Ruiz, Ernesto; Ondrésik, Marta; Solimando, Alessandro; Banerjee, Imon; Guerrini, Giovanna; Catalano, Chiara E; Oliveira, Joaquim M; Patanè, Giuseppe; Reis, Rui L; Spagnuolo, Michela
2018-02-08
Pathogenesis of inflammatory diseases can be tracked by studying the causality relationships among the factors contributing to their development. We could, for instance, hypothesize on the connections of the pathogenesis outcomes to the observed conditions. To prove such causal hypotheses we would need a full understanding of the causal relationships, and we would have to provide all the necessary evidence to support our claims. In practice, however, we might not possess all the background knowledge on the causality relationships, and we might be unable to collect all the evidence needed to prove our hypotheses. In this work we propose a methodology for translating biological knowledge on causality relationships of biological processes, and their effects on conditions, into a computational framework for hypothesis testing. The methodology consists of two main points: hypothesis graph construction from the formalization of the background knowledge on causality relationships, and confidence measurement in a causality hypothesis as a normalized weighted path computation in the hypothesis graph. In this framework, we can simulate the collection of evidence and assess confidence in a causality hypothesis by measuring it proportionally to the amount of available knowledge and collected evidence. We evaluate our methodology on a hypothesis graph that represents both the contributing factors which may cause cartilage degradation and the factors which might be caused by cartilage degradation during osteoarthritis. Hypothesis graph construction has proven to be robust to the addition of potentially contradictory information on simultaneously positive and negative effects. The confidence measures obtained for specific causality hypotheses have been validated by our domain experts and correspond closely to their subjective assessments of confidence in the investigated hypotheses.
Overall, our methodology for a shared hypothesis testing framework exhibits important properties that researchers will find useful in literature review for their experimental studies, in planning and prioritizing evidence collection procedures, and in testing their hypotheses with different depths of knowledge on the causal dependencies of biological processes and their effects on the observed conditions.
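The abstract does not give the exact path measure; a minimal sketch of one plausible reading — confidence as collected-evidence path mass, normalized by the full-evidence upper bound obtained when every edge weight is set to 1 — is shown below (the graph encoding, edge weights, and node names are assumptions, not the authors' formalization):

```python
def path_weight_products(graph, src, dst, w=1.0, visited=None):
    """Enumerate simple paths src->dst and yield the product of edge
    weights along each path. graph: {node: {successor: weight}}."""
    visited = (visited or set()) | {src}
    if src == dst:
        yield w
        return
    for succ, weight in graph.get(src, {}).items():
        if succ not in visited:
            yield from path_weight_products(graph, succ, dst, w * weight, visited)

def hypothesis_confidence(graph, src, dst):
    """Confidence in 'src causes dst': collected-evidence path mass divided
    by the full-evidence upper bound (all edge weights set to 1)."""
    collected = sum(path_weight_products(graph, src, dst))
    full_graph = {u: {v: 1.0 for v in nb} for u, nb in graph.items()}
    full = sum(path_weight_products(full_graph, src, dst))
    return collected / full if full else 0.0
```

For a toy chain where half the evidence for the first causal link has been collected, the confidence in the end-to-end hypothesis is 0.5.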
The limits to pride: A test of the pro-anorexia hypothesis.
Cornelius, Talea; Blanton, Hart
2016-01-01
Many social psychological models propose that positive self-conceptions promote self-esteem. An extreme version of this hypothesis is advanced in "pro-anorexia" communities: identifying with anorexia, in conjunction with disordered eating, can lead to higher self-esteem. The current study empirically tested this hypothesis. Results challenge the pro-anorexia hypothesis. Although those with higher levels of pro-anorexia identification trended towards higher self-esteem with increased disordered eating, this did not overcome the strong negative main effect of pro-anorexia identification. These data suggest a more effective strategy for promoting self-esteem is to encourage rejection of disordered eating and an anorexic identity.
Does the Slow-Growth, High-Mortality Hypothesis Apply Below Ground?
Hourston, James E; Bennett, Alison E; Johnson, Scott N; Gange, Alan C
2016-01-01
Belowground tri-trophic study systems present a challenging environment in which to study plant-herbivore-natural enemy interactions. For this reason, belowground examples are rarely available for testing general ecological theories. To redress this imbalance, we present, for the first time, data on a belowground tri-trophic system to test the slow-growth, high-mortality hypothesis. We investigated whether the differing performance of entomopathogenic nematodes (EPNs) in controlling the common pest black vine weevil Otiorhynchus sulcatus could be linked to differently resistant cultivars of the red raspberry Rubus idaeus. The O. sulcatus larvae recovered from R. idaeus plants showed significantly slower growth and higher mortality on the Glen Rosa cultivar, relative to the more commercially favored Glen Ample cultivar, creating a convenient system for testing this hypothesis. Heterorhabditis megidis was found to be less effective at controlling O. sulcatus than Steinernema kraussei, but conformed to the hypothesis. However, S. kraussei maintained high levels of O. sulcatus mortality regardless of how larval growth was influenced by R. idaeus cultivar. We link this to direct effects of S. kraussei in reducing O. sulcatus larval mass, indicating potential sub-lethal effects of S. kraussei, which the slow-growth, high-mortality hypothesis does not account for. Possible origins of these sub-lethal effects of EPN infection, and how they may impact a hypothesis designed and tested with aboveground predator and parasitoid systems, are discussed.
Why Be a Shrub? A Basic Model and Hypotheses for the Adaptive Values of a Common Growth Form
Götmark, Frank; Götmark, Elin; Jensen, Anna M.
2016-01-01
Shrubs are multi-stemmed short woody plants, more widespread than trees, important in many ecosystems, neglected in ecology compared to herbs and trees, but currently in focus due to their global expansion. We present a novel model based on scaling relationships and four hypotheses to explain the adaptive significance of shrubs, including a review of the literature with a test of one hypothesis. Our model describes advantages for a small shrub compared to a small tree with the same above-ground woody volume, based on larger cross-sectional stem area, larger area of photosynthetic tissue in bark and stem, larger vascular cambium area, larger epidermis (bark) area, and larger area for sprouting, and faster production of twigs and canopy. These components form our Hypothesis 1 that predicts higher growth rate for a small shrub than a small tree. This prediction was supported by available relevant empirical studies (14 publications). Further, a shrub will produce seeds faster than a tree (Hypothesis 2), multiple stems in shrubs insure future survival and growth if one or more stems die (Hypothesis 3), and three structural traits of short shrub stems improve survival compared to tall tree stems (Hypothesis 4)—all hypotheses have some empirical support. Multi-stemmed trees may be distinguished from shrubs by more upright stems, reducing bending moment. Improved understanding of shrubs can clarify their recent expansion on savannas, grasslands, and alpine heaths. More experiments and other empirical studies, followed by more elaborate models, are needed to understand why the shrub growth form is successful in many habitats. PMID:27507981
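The scaling argument behind Hypothesis 1 (more cross-sectional and photosynthetic surface area per unit woody volume for multiple short stems) can be illustrated with a toy calculation; the geometric-similarity assumption below (stem height proportional to radius) is mine, not the authors' model:

```python
def total_cross_section(volume, n_stems):
    """Total basal cross-sectional area of n geometrically similar stems
    sharing a fixed above-ground woody volume. With height ~ radius,
    each stem's area scales as (volume / n_stems) ** (2/3)."""
    return n_stems * (volume / n_stems) ** (2 / 3)

# An 8-stemmed "shrub" exposes twice the total basal area of a
# single-stemmed "tree" of the same woody volume: 8 * (1/8)**(2/3) == 2.
```

Under this toy scaling, total area grows as the cube root of stem number, consistent in direction with the model's predicted growth-rate advantage for small shrubs.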
Agent-Based Modeling in Systems Pharmacology.
Cosgrove, J; Butler, J; Alden, K; Read, M; Kumar, V; Cucurull-Sanchez, L; Timmis, J; Coles, M
2015-11-01
Modeling and simulation (M&S) techniques provide a platform for knowledge integration and hypothesis testing to gain insights into biological systems that would not be possible a priori. Agent-based modeling (ABM) is an M&S technique that focuses on describing individual components rather than homogenous populations. This tutorial introduces ABM to systems pharmacologists, using relevant case studies to highlight how ABM-specific strengths have yielded success in the area of preclinical mechanistic modeling.
Prevention and Treatment of Noise-Induced Tinnitus
2012-07-01
process of completing the normative database(s) of VGLUT1, VAT and VGAT immunostaining in the rat AVCN and DCN that will allow assessment of changes under ... our experimental conditions. Initial results indicate some loss of VGLUT1 immunolabeled auditory nerve terminals in the ventral cochlear nucleus ... Research Accomplishments for TASK 3: Test the hypothesis that the loss of AN terminals (marked by VGLUT1 immunolabel) on neurons in the AVCN and
Deciphering the crowd: modeling and identification of pedestrian group motion.
Yücel, Zeynep; Zanlungo, Francesco; Ikeda, Tetsushi; Miyashita, Takahiro; Hagita, Norihiro
2013-01-14
Associating attributes to pedestrians in a crowd is relevant for various areas like surveillance, customer profiling and service providing. The attributes of interest greatly depend on the application domain and might involve such social relations as friends or family as well as the hierarchy of the group including the leader or subordinates. Nevertheless, the complex social setting inherently complicates this task. We attack this problem by exploiting the small group structures in the crowd. The relations among individuals and their peers within a social group are reliable indicators of social attributes. To that end, this paper identifies social groups based on explicit motion models integrated through a hypothesis testing scheme. We develop two models relating positional and directional relations. A pair of pedestrians is identified as belonging to the same group or not by utilizing the two models in parallel, which defines a compound hypothesis testing scheme. By testing the proposed approach on three datasets with different environmental properties and group characteristics, it is demonstrated that we achieve an identification accuracy of 87% to 99%. The contribution of this study lies in its definition of positional and directional relation models, its description of compound evaluations, and the resolution of ambiguities with our proposed uncertainty measure based on the local and global indicators of group relation.
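The paper's positional and directional models are not specified in the abstract; as a hedged sketch of a compound pairwise test, one might combine two likelihood ratios and declare "same group" only when their joint evidence clears a threshold (all model forms and parameter values below are illustrative assumptions):

```python
import math

def gauss_density(x, mu, sigma):
    """Normalized Gaussian density."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def positional_lr(distance_m, group_mu=0.75, lone_mu=2.5, sigma=0.5):
    """Likelihood ratio 'same group' vs 'not' from inter-pedestrian distance."""
    return gauss_density(distance_m, group_mu, sigma) / gauss_density(distance_m, lone_mu, sigma)

def directional_lr(heading_diff_rad, sigma=0.8):
    """Likelihood ratio from heading difference: a peaked-at-zero model
    for group members vs a uniform direction model over (-pi, pi]."""
    return gauss_density(heading_diff_rad, 0.0, sigma) / (1.0 / (2 * math.pi))

def same_group(distance_m, heading_diff_rad, threshold=1.0):
    """Compound test: declare 'same group' when the joint evidence from
    both models exceeds the threshold."""
    return positional_lr(distance_m) * directional_lr(heading_diff_rad) > threshold
```

A close pair walking in near-parallel (0.8 m apart, 0.1 rad heading difference) is accepted as a group; a distant, divergently heading pair is rejected.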
Shi, Yunfei; Yao, Jiang; Young, Jonathan M.; Fee, Judy A.; Perucchio, Renato; Taber, Larry A.
2014-01-01
The morphogenetic process of cardiac looping transforms the straight heart tube into a curved tube that resembles the shape of the future four-chambered heart. Although great progress has been made in identifying the molecular and genetic factors involved in looping, the physical mechanisms that drive this process have remained poorly understood. Recent work, however, has shed new light on this complicated problem. After briefly reviewing the current state of knowledge, we propose a relatively comprehensive hypothesis for the mechanics of the first phase of looping, termed c-looping, as the straight heart tube deforms into a c-shaped tube. According to this hypothesis, differential hypertrophic growth in the myocardium supplies the main forces that cause the heart tube to bend ventrally, while regional growth and cytoskeletal contraction in the omphalomesenteric veins (primitive atria) and compressive loads exerted by the splanchnopleuric membrane drive rightward torsion. A computational model based on realistic embryonic heart geometry is used to test the physical plausibility of this hypothesis. The behavior of the model is in reasonable agreement with available experimental data from control and perturbed embryos, offering support for our hypothesis. The results also suggest, however, that several other mechanisms contribute secondarily to normal looping, and we speculate that these mechanisms play backup roles when looping is perturbed. Finally, some outstanding questions are discussed for future study. PMID:25161623
Berryhill, Marian E.; Chein, Jason; Olson, Ingrid R.
2011-01-01
Portions of the posterior parietal cortex (PPC) play a role in working memory (WM) yet the precise mechanistic function of this region remains poorly understood. The pure storage hypothesis proposes that this region functions as a short-lived modality-specific memory store. Alternatively, the internal attention hypothesis proposes that the PPC functions as an attention-based storage and refreshing mechanism deployable as an alternative to material-specific rehearsal. These models were tested in patients with bilateral PPC lesions. Our findings discount the pure storage hypothesis because variables indexing storage capacity and longevity were not disproportionately affected by PPC damage. Instead, our data support the internal attention account by showing that (a) normal participants tend to use a rehearsal-based WM maintenance strategy for recall tasks but not for recognition tasks; (b) patients with PPC lesions performed normally on WM tasks that relied on material-specific rehearsal strategies but poorly on WM tasks that relied on attention-based maintenance strategies and patient strategy usage could be shifted by task or instructions; (c) patients’ memory deficits extended into the long-term domain. These findings suggest that the PPC maintains or shifts internal attention among the representations of items in WM. PMID:21345344
Murray, Kris A; Skerratt, Lee F; Garland, Stephen; Kriticos, Darren; McCallum, Hamish
2013-01-01
The pandemic amphibian disease chytridiomycosis often exhibits strong seasonality in both prevalence and disease-associated mortality once it becomes endemic. One hypothesis that could explain this temporal pattern is that simple weather-driven pathogen proliferation (population growth) is a major driver of chytridiomycosis disease dynamics. Despite various elaborations of this hypothesis in the literature for explaining amphibian declines (e.g., the chytrid thermal-optimum hypothesis), it has not been formally tested on infection patterns in the wild. In this study we developed a simple process-based model to simulate the growth of the pathogen Batrachochytrium dendrobatidis (Bd) under varying weather conditions, providing an a priori test of a weather-linked pathogen proliferation hypothesis for endemic chytridiomycosis. We found strong support for several predictions of the proliferation hypothesis when applied to our model species, Litoria pearsoniana, sampled across multiple sites and years: the weather-driven simulations of pathogen growth potential (represented as a growth index over the 30 days prior to sampling; GI30) were positively related to both the prevalence and intensity of Bd infections, which were themselves strongly and positively correlated. In addition, a machine-learning classifier achieved ~72% success in classifying positive qPCR results when utilising just three informative predictors: (1) GI30, (2) frog body size, and (3) rain on the day of sampling. Hence, while intrinsic traits of the individuals sampled (species, size, sex) and nuisance sampling variables (rainfall when sampling) influenced the infection patterns obtained when sampling via qPCR, our results also strongly suggest that weather-linked pathogen proliferation plays a key role in the infection dynamics of endemic chytridiomycosis in our study system.
Predictive applications of the model include surveillance design, outbreak preparedness and response, climate change scenario modelling and the interpretation of historical patterns of amphibian decline.
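The abstract does not specify the thermal performance curve behind GI30; a minimal sketch of a GI30-style index under an assumed piecewise-linear temperature response (all cardinal temperatures below are placeholders, not the paper's fitted values) is:

```python
def daily_growth_rate(temp_c, t_min=4.0, t_opt=21.0, t_max=28.0):
    """Illustrative thermal performance curve for Bd: zero outside
    (t_min, t_max), rising linearly to 1.0 at t_opt, falling after."""
    if temp_c <= t_min or temp_c >= t_max:
        return 0.0
    if temp_c <= t_opt:
        return (temp_c - t_min) / (t_opt - t_min)
    return (t_max - temp_c) / (t_max - t_opt)

def growth_index(daily_temps, window=30):
    """GI30-style index: accumulated growth potential over the last
    `window` days of daily mean temperatures."""
    return sum(daily_growth_rate(t) for t in daily_temps[-window:])
```

Thirty days at the assumed optimum yield the maximum index of 30; thirty days above the assumed upper limit yield 0, matching the intuition that both cool and hot spells suppress proliferation.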
Monceau, Karine; Dechaume-Moncharmont, François-Xavier; Moreau, Jérôme; Lucas, Camille; Capoduro, Rémi; Motreuil, Sébastien; Moret, Yannick
2017-07-01
The pace-of-life syndrome (POLS) hypothesis is an extended concept of life-history theory that includes behavioural traits. Studies challenging the POLS hypothesis often focus on the relationships between a single personality trait and a physiological and/or life-history trait. While pathogens represent a major selective pressure, few studies have tested relationships between a behavioural syndrome and several fitness components including immunity. The aim of this study was to address this question in the mealworm beetle, Tenebrio molitor, a model species in immunity studies. The personality score was estimated from a multidimensional syndrome based on four repeatable behavioural traits. In a first experiment, we investigated its relationship with two measures of fitness (reproduction and survival) and three components of innate immunity (haemocyte concentration, and levels of phenoloxidase activity, both the total proenzyme and the naturally activated form) to challenge the POLS hypothesis in T. molitor. Overall, we found a relationship between behavioural syndrome and reproductive success in this species, thus supporting the POLS hypothesis. We also showed a sex-specific relationship between behavioural syndrome and basal immune parameters. In a second experiment, we tested whether this observed relationship with innate immunity could be confirmed in terms of differential survival after challenge with the entomopathogenic bacterium Bacillus thuringiensis. In this case, no significant relationship was evidenced. We recommend that future research on the POLS control for differences in evolutionary trajectory between the sexes and pay attention to the choice of proxy used, especially when looking at immune traits. © 2017 The Authors. Journal of Animal Ecology © 2017 British Ecological Society.
NASA Astrophysics Data System (ADS)
Ebert, Darilyn
The gender gap of women in science is an important and unresolved issue in higher education and occupational opportunities. The present study was motivated by the fact that there are typically fewer females than males advancing in science, and therefore fewer female science instructor role models. This observation inspired the questions: Are female college students influenced in a positive way by female science teaching assistants (TAs), and if so how can their influence be measured? The study tested the hypothesis that female TAs act as role models for female students and thereby encourage interest and increase overall performance. To test this "role model" hypothesis, the reasoning ability and self-efficacy of a sample of 724 introductory college biology students were assessed at the beginning and end of the Spring 2010 semester. Achievement was measured by exams and course work. Performance of four randomly formed groups was compared: 1) female students with female TAs, 2) male students with female TAs, 3) female students with male TAs, and 4) male students with male TAs. Based on the role model hypothesis, female students with female TAs were predicted to perform better than female students with male TAs. However, group comparisons revealed similar performances across all four groups in achievement, reasoning ability and self-efficacy. The slight differences found between the four groups in student exam and coursework scores were not statistically significant. Therefore, the results did not support the role model hypothesis. Given that both lecture professors in the present study were males, and given that professors typically have more teaching experience, finer skills and knowledge of subject matter than do TAs, a future study that includes both female science professors and female TAs, may be more likely to find support for the hypothesis.
A critique of statistical hypothesis testing in clinical research
Raha, Somik
2011-01-01
Many have documented the difficulty of using the current paradigm of Randomized Controlled Trials (RCTs) to test and validate the effectiveness of alternative medical systems such as Ayurveda. This paper critiques the applicability of RCTs for all clinical knowledge-seeking endeavors, of which Ayurveda research is a part. This is done by examining statistical hypothesis testing, the underlying foundation of RCTs, from a practical and philosophical perspective. In the philosophical critique, the two main worldviews of probability are that of the Bayesian and the frequentist. The frequentist worldview is a special case of the Bayesian worldview requiring the unrealistic assumptions of knowing nothing about the universe and believing that all observations are unrelated to each other. Many have claimed that the first belief is necessary for science, and this claim is debunked by comparing variations in learning with different prior beliefs. Moving beyond the Bayesian and frequentist worldviews, the notion of hypothesis testing itself is challenged on the grounds that a hypothesis is an unclear distinction, and assigning a probability to an unclear distinction is an exercise that does not lead to clarity of action. This critique is of the theory itself and not any particular application of statistical hypothesis testing. A decision-making frame is proposed as a way of both addressing this critique and transcending ideological debates on probability. An example of a Bayesian decision-making approach is shown as an alternative to statistical hypothesis testing, utilizing data from a past clinical trial that studied the effect of Aspirin on heart attacks in a sample population of doctors. Because a major reason for the prevalence of RCTs in academia is legislation requiring them, the ethics of legislating the use of statistical methods for clinical research is also examined. PMID:22022152
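The paper's own Bayesian reanalysis is not reproduced here; as an illustrative sketch of the decision-making frame it advocates, one can compute the posterior probability that aspirin lowers the heart-attack rate under flat Beta priors via Monte Carlo (the event counts below are approximate figures from the physicians' aspirin trial, and the priors, seed, and draw count are my assumptions):

```python
import random

def prob_treatment_better(events_t, n_t, events_c, n_c, draws=20000, seed=0):
    """Posterior P(rate_treatment < rate_control) under independent
    Beta(1, 1) priors, estimated by sampling the two Beta posteriors."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        p_t = rng.betavariate(1 + events_t, 1 + n_t - events_t)
        p_c = rng.betavariate(1 + events_c, 1 + n_c - events_c)
        wins += p_t < p_c
    return wins / draws

# Approximate counts (aspirin arm vs placebo arm), for illustration:
# prob_treatment_better(104, 11037, 189, 11034) -> essentially 1.0
```

Unlike a p-value, this quantity answers the decision-relevant question directly: how probable is it that the treatment is better, given the data and priors?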
Chandrasekaran, Srinivas Niranj; Yardimci, Galip Gürkan; Erdogan, Ozgün; Roach, Jeffrey; Carter, Charles W.
2013-01-01
We tested the idea that ancestral class I and II aminoacyl-tRNA synthetases arose on opposite strands of the same gene. We assembled excerpted 94-residue Urgenes for class I tryptophanyl-tRNA synthetase (TrpRS) and class II Histidyl-tRNA synthetase (HisRS) from a diverse group of species, by identifying and catenating three blocks coding for secondary structures that position the most highly conserved, active-site residues. The codon middle-base pairing frequency was 0.35 ± 0.0002 in all-by-all sense/antisense alignments for 211 TrpRS and 207 HisRS sequences, compared with frequencies between 0.22 ± 0.0009 and 0.27 ± 0.0005 for eight different representations of the null hypothesis. Clustering algorithms demonstrate further that profiles of middle-base pairing in the synthetase antisense alignments are correlated along the sequences from one species-pair to another, whereas this is not the case for similar operations on sets representing the null hypothesis. Most probable reconstructed sequences for ancestral nodes of maximum likelihood trees show that middle-base pairing frequency increases to approximately 0.42 ± 0.002 as bacterial trees approach their roots; ancestral nodes from trees including archaeal sequences show a less pronounced increase. Thus, contemporary and reconstructed sequences all validate important bioinformatic predictions based on descent from opposite strands of the same ancestral gene. They further provide novel evidence for the hypothesis that bacteria lie closer than archaea to the origin of translation. Moreover, the inverse polarity of genetic coding, together with a priori α-helix propensities suggest that in-frame coding on opposite strands leads to similar secondary structures with opposite polarity, as observed in TrpRS and HisRS crystal structures. PMID:23576570
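For intuition only, the middle-base pairing statistic can be computed for a pair of in-frame coding sequences as sketched below; the alignment and orientation conventions here are my assumptions, not the authors' bioinformatic pipeline:

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def middle_base_pairing(sense, antisense):
    """Fraction of aligned codons whose middle bases are Watson-Crick
    complementary. `antisense` is given 5'->3', so it is reversed to
    align base-for-base against `sense`."""
    rev = antisense[::-1]
    codons = min(len(sense), len(rev)) // 3
    hits = 0
    for i in range(codons):
        mid_sense = sense[3 * i + 1]
        mid_anti = rev[3 * i + 1]
        hits += COMPLEMENT.get(mid_sense) == mid_anti
    return hits / codons
```

A sequence paired with its exact reverse complement scores 1.0; unrelated in-frame sequences are expected to score near the 0.25 random baseline, against which the reported 0.35 and the ~0.42 ancestral values are elevated.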
Spurlock, Linda B.; Fisch, Michael
2018-01-01
Purpose This study was designed to assess the mechanical properties of two calcium carbonate tempers, limestone and burnt shell. These tempers have been previously compared, in separate studies, to silicate-based grit or sand temper and, relative to the latter, are assumed to possess similar mechanical properties. However, their simultaneous use at the Morrison Village site raises the question: do these two calcium carbonate tempers indeed possess similar mechanical properties? To assess their performance characteristics, a side-by-side controlled experiment was conducted to determine the degree of similarity in providing increased vessel strength and toughness. Methods Standardized ceramic test samples were systematically prepared via a fixed, explicit protocol. An Instron Series IX universal testing machine configured with a four-point flexural test jig was used to perform a flexural strength test of the samples. The Instron load and deflection data were used to calculate three values related to mechanical performance: peak load, modulus of rupture, and modulus of elasticity. Results All four comparative tests clearly show substantial differences in peak load, modulus of rupture, and modulus of elasticity. These differences are statistically significant for each performance attribute in every iteration of the experiment, as indicated by Mann-Whitney U tests. Conclusions These results do not support the hypothesis that limestone and burnt shell offer the same performance characteristics. They have implications for our understanding of prehistoric human selection of temper and the evolution of ceramic technology. Although both carbonate-based tempers are currently thought to offer the same benefits during the initial phase of pottery production, their contrasting post-firing properties would have provided distinct benefits in different contexts.
Future assessments of the Morrison Village ceramic assemblage should focus on residue analysis, or other functional indicators, to support or falsify this hypothesis. PMID:29579085
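The abstract does not state the loading geometry; assuming the common third-point four-point bend configuration, the two moduli follow from standard beam formulas (the configuration, symbols, and SI-unit convention below are assumptions, not details from the study):

```python
def modulus_of_rupture(peak_load, span, width, depth):
    """MOR for four-point bending with third-point loading (assumed):
    sigma = P * L / (b * d**2). Inputs in N and m; result in Pa."""
    return peak_load * span / (width * depth ** 2)

def modulus_of_elasticity(load_slope, span, width, depth):
    """MOE from the slope m (N/m) of the linear load-deflection region,
    third-point loading: E = 23 * L**3 * m / (108 * b * d**3)."""
    return 23 * span ** 3 * load_slope / (108 * width * depth ** 3)

# Example: a 100 N peak load on a 100 mm span, 10 mm x 5 mm bar
# gives MOR = 100 * 0.1 / (0.01 * 0.005**2) = 40 MPa.
```

Because both moduli normalize out specimen geometry, they allow the limestone- and shell-tempered samples to be compared directly even if individual bars vary slightly in size.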
Zhao, Xue-Qiao; Phan, Binh An P; Chu, Baocheng; Bray, Frank; Moore, Andrew B; Polissar, Nayak L; Dodge, J Theodore; Lee, Colin D; Hatsukami, Thomas S; Yuan, Chun
2007-08-01
In vivo testing of the lipid depletion hypothesis in human beings during lipid-modifying therapy has not been possible until recent developments in magnetic resonance imaging (MRI). The Carotid Plaque Composition Study is a prospective, randomized study designed to test the lipid depletion hypothesis in vivo. One hundred twenty-three subjects with coronary artery disease (CAD) or carotid disease and with levels of apolipoprotein B ≥120 mg/dL (low-density lipoprotein levels 100-190 mg/dL) were enrolled and randomized to (1) single therapy--atorvastatin alone, placebos for extended release (ER)-niacin and colesevelam; (2) double therapy--atorvastatin plus ER-niacin (2 g/d), and placebo for colesevelam; (3) triple therapy--atorvastatin, ER-niacin, plus colesevelam (3.8 g/d). All subjects will undergo MRI scans of bilateral carotid arteries at baseline and annually for 3 years for a total of 4 examinations while on active therapy. Among these 123 subjects with mean age of 55 years and mean body mass index of 30 kg/m², 73% are male, 43% have a family history of premature cardiovascular disease, 37% have had a previous myocardial infarction, 80% have clinically established CAD, 52% are hypertensive, 12% have diabetes, 23% are current smokers, and 47% meet the criteria for metabolic syndrome. The baseline carotid disease is evaluated using an MRI-modified American Heart Association lesion type definition. Of the 123 enrolled subjects, 40% have type III lesions with small eccentric plaque, 52% have type IV to V lesions with a necrotic core, and only 4% have calcified plaque based on the most diseased carotid location. The Carotid Plaque Composition Study uses state-of-the-art imaging technology and comprehensive lipid management to test the plaque lipid depletion hypothesis in CAD subjects.
USDA-ARS?s Scientific Manuscript database
This study tests the hypothesis that phylogenetic classification can predict whether A. pullulans strains will produce useful levels of the commercial polysaccharide, pullulan, or the valuable enzyme, xylanase. To test this hypothesis, 19 strains of A. pullulans with previously described phenotypes...
Do pressures to publish increase scientists' bias? An empirical support from US States Data.
Fanelli, Daniele
2010-04-21
The growing competition and "publish or perish" culture in academia might conflict with the objectivity and integrity of research, because it forces scientists to produce "publishable" results at all costs. Papers are less likely to be published and to be cited if they report "negative" results (results that fail to support the tested hypothesis). Therefore, if publication pressures increase scientific bias, the frequency of "positive" results in the literature should be higher in the more competitive and "productive" academic environments. This study verified this hypothesis by measuring the frequency of positive results in a large random sample of papers with a corresponding author based in the US. Across all disciplines, papers were more likely to support a tested hypothesis if their corresponding authors were working in states that, according to NSF data, produced more academic papers per capita. The size of this effect increased when controlling for state's per capita R&D expenditure and for study characteristics that previous research showed to correlate with the frequency of positive results, including discipline and methodology. Although the confounding effect of institutions' prestige could not be excluded (researchers in the more productive universities could be the most clever and successful in their experiments), these results support the hypothesis that competitive academic environments increase not only scientists' productivity but also their bias. The same phenomenon might be observed in other countries where academic competition and pressures to publish are high.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Buizer, James M.; Bartkiewicz, Anna; Szymczak, Marian, E-mail: jdebuizer@sofia.usra.edu
2012-08-01
Milliarcsecond very long baseline interferometry maps of regions containing 6.7 GHz methanol maser emission have led to the recent discovery of ring-like distributions of maser spots and the plausible hypothesis that they may be tracing circumstellar disks around forming high-mass stars. We aimed to test this hypothesis by imaging these regions in the near- and mid-infrared at high spatial resolution and comparing the observed emission to the expected infrared morphologies as inferred from the geometries of the maser rings. In the near-infrared we used the Gemini North adaptive optics system of ALTAIR/NIRI, while in the mid-infrared we used the combination of the Gemini South instrument T-ReCS and super-resolution techniques. Resultant images had a resolution of ~150 mas in both the near-infrared and mid-infrared. We discuss the expected distribution of circumstellar material around young and massive accreting (proto)stars and what infrared emission geometries would be expected for the different maser ring orientations under the assumption that the masers are coming from within circumstellar disks. Based upon the observed infrared emission geometries for the four targets in our sample and the results of spectral energy distribution modeling of the massive young stellar objects associated with the maser rings, we do not find compelling evidence in support of the hypothesis that methanol maser rings reside in circumstellar disks.
Huang, Peng; Ou, Ai-hua; Piantadosi, Steven; Tan, Ming
2014-11-01
We discuss the problem of properly defining treatment superiority through the specification of hypotheses in clinical trials. The need to precisely define the notion of superiority in a one-sided hypothesis test problem has been well recognized by many authors. Ideally, the null and alternative hypotheses should correspond to a partition of all possible scenarios of underlying true probability models P={P(ω):ω∈Ω}, such that the alternative hypothesis Ha={P(ω):ω∈Ωa} can be inferred upon the rejection of the null hypothesis Ho={P(ω):ω∈Ωo}. However, in many cases, tests are carried out and recommendations are made without a precise definition of superiority or a specification of the alternative hypothesis. Moreover, in some applications, the union of probability models specified by the chosen null and alternative hypotheses does not constitute the complete model collection P (i.e., Ho∪Ha is smaller than P). This not only imposes a strong non-validated assumption on the underlying true models, but also leads to different superiority claims depending on which test is used, instead of scientific plausibility. Different ways to partition P for testing treatment superiority often have different implications for sample size, power, and significance in both efficacy and comparative effectiveness trial design. Such differences are often overlooked. We provide a theoretical framework for evaluating the statistical properties of different specifications of superiority in typical hypothesis testing. This can help investigators select proper hypotheses for treatment comparison in clinical trial design. Copyright © 2014 Elsevier Inc. All rights reserved.
Selective interactions among Rh, ABO, and sex ratio of newborns.
Valenzuela, C Y; Walton, R
1985-01-01
The hypothesis that the Rh and ABO blood systems behave like the HLA system in relation to mother-conception tolerance-rejection mechanisms was tested in 25,501 mother-infant pairs. According to this hypothesis, heterozygotes carrying a paternal gene that is not present in their mothers should be better tolerated than homozygotes. Significantly more BO infants born to AO mothers, AO infants born to BO mothers, and Rh(+) heterozygotes born to Rh(-) mothers, and, less significantly, more AO infants born to OO mothers confirm the hypothesis. Fewer homozygotes occurred in Rh(-) infants born to Rh(+) mothers and in O infants born to non-O mothers. Deviations from the Hardy-Weinberg equilibrium found in the ABO system were modified by the Rh and sex of the infant. These data strongly support the hypothesis that at least two feto-maternal systems influence the destiny of pregnancies: the classical known incompatibility system, which operates late in pregnancy, and a new one based on the induction of maternal tolerance early in pregnancy; maternal tolerance seems to be better elicited by heterozygous eggs or embryos carrying a gene not present in the mother. The data also support the hypothesis that the sex ratio is influenced by feto-maternal tolerance-rejection mechanisms associated with the ABO and Rh systems.
What Explains Gender Gaps in Maths Achievement in Primary Schools in Kenya?
ERIC Educational Resources Information Center
Ngware, Moses W.; Ciera, James; Abuya, Benta A.; Oketch, Moses; Mutisya, Maurice
2012-01-01
This paper aims to improve the understanding of classroom-based gender differences that may lead to differential opportunities to learn provided to girls and boys in low and high performing primary schools in Kenya. The paper uses an opportunity to learn framework and tests the hypothesis that teaching practices and classroom interactions explain…
Data-Driven Learning of Speech Acts Based on Corpora of DVD Subtitles
ERIC Educational Resources Information Center
Kitao, S. Kathleen; Kitao, Kenji
2013-01-01
Data-driven learning (DDL) is an inductive approach to language learning in which students study examples of authentic language and use them to find patterns of language use. This inductive approach to learning has the advantages of being learner-centered, encouraging hypothesis testing and learner autonomy, and helping develop learning skills.…
Definition and Formulation of Scientific Prediction and Its Role in Inquiry-Based Laboratories
ERIC Educational Resources Information Center
Mauldin, Robert F.
2011-01-01
The formulation of a scientific prediction by students in college-level laboratories is proposed. This activity will develop the students' ability to apply abstract concepts via deductive reasoning. For instances in which a hypothesis will be tested by an experiment, students should develop a prediction that states what sort of experimental…
The Emergence of a Learning Progression in Middle School Chemistry
ERIC Educational Resources Information Center
Johnson, Philip; Tymms, Peter
2011-01-01
Previously, a small scale, interview-based, 3-year longitudinal study (ages 11-14) in one school had suggested a learning progression related to the concept of a substance. This article presents the results of a large-scale, cross-sectional study which used Rasch modeling to test the hypothesis of the learning progression. Data were collected from…
ERIC Educational Resources Information Center
Liu, Xiufeng
2006-01-01
Based on current theories of chemistry learning, this study intends to test a hypothesis that computer modeling enhanced hands-on chemistry laboratories are more effective than hands-on laboratories or computer modeling laboratories alone in facilitating high school students' understanding of chemistry concepts. Thirty-three high school chemistry…
Acquiring, Representing, and Evaluating a Competence Model of Diagnostic Strategy.
ERIC Educational Resources Information Center
Clancey, William J.
This paper describes NEOMYCIN, a computer program that models one physician's diagnostic reasoning within a limited area of medicine. NEOMYCIN's knowledge base and reasoning procedure constitute a model of how human knowledge is organized and how it is used in diagnosis. The hypothesis is tested that such a procedure can be used to simulate both…
ERIC Educational Resources Information Center
Meng, Yi; Tan, Jing; Li, Jing
2017-01-01
Drawing upon the componential theory of creativity, cognitive evaluation theory and social exchange theory, the study reported in this paper tested a mediating model based on the hypothesis that abusive supervision negatively influences creativity sequentially through leader-member exchange (LMX) and intrinsic motivation. The study employed…
ERIC Educational Resources Information Center
McLeod, Jane D.; Owens, Timothy J.
2004-01-01
Our analysis focuses on the implications of social status characteristics for children's psychological well-being. Drawing on social evaluation theories and stress-based explanations, we hypothesized that disadvantage cumulates across statuses (the double jeopardy hypothesis) and over time as children move into the adolescent years. To test this…
Effect of Subject Types on the Production of Auxiliary "Is" in Young English-Speaking Children
ERIC Educational Resources Information Center
Guo, Ling-Yu; Owen, Amanda J.; Tomblin, J. Bruce
2010-01-01
Purpose: In this study, the authors tested the unique checking constraint (UCC) hypothesis and the usage-based approach concerning why young children variably use tense and agreement morphemes in obligatory contexts by examining the effect of subject types on the production of auxiliary "is". Method: Twenty typically developing 3-year-olds were…
ERIC Educational Resources Information Center
de Walque, Damien
2010-01-01
This paper tests the hypothesis that education improves health and increases life expectancy. The analysis of smoking histories shows that after 1950, when information about the dangers of tobacco started to diffuse, the prevalence of smoking declined earlier and most dramatically for college graduates. I construct panels based on smoking…
A comparative potency method for cancer risk assessment has been developed based upon a constant relative potency hypothesis. This method was developed and tested using data from a battery of short-term mutagenesis bioassays, animal tumorigenicity data and human lung cancer risk ...
Mechanism-Based Causal Reasoning in Young Children
ERIC Educational Resources Information Center
Buchanan, David W.; Sobel, David M.
2011-01-01
The hypothesis that children develop an understanding of causal mechanisms was tested across 3 experiments. In Experiment 1 (N = 48), preschoolers had to choose as efficacious either a cause that had worked in the past, but was now disconnected from its effect, or a cause that had failed to work previously, but was now connected. Four-year-olds…
ERIC Educational Resources Information Center
Streibel, Michael; And Others
1987-01-01
Describes an advice-giving computer system being developed for genetics education called MENDEL that is based on research in learning, genetics problem solving, and expert systems. The value of MENDEL as a design tool and the tutorial function are stressed. Hypothesis testing, graphics, and experiential learning are also discussed. (Author/LRW)
A test of the orthographic recoding hypothesis
NASA Astrophysics Data System (ADS)
Gaygen, Daniel E.
2003-04-01
The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of the words. Listeners have a stable orthographic representation of words, but no phonological representation, when those words have been read frequently but never heard or spoken. Such may be the case for low frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for words presented visually. In the first experiment, the recognition of auditorily presented nonwords was facilitated when they previously appeared on a visually presented list. The second experiment was similar, but included a concurrent articulation task during a visual word list presentation, thus preventing covert rehearsal of the nonwords. The results were similar to the first experiment. The third experiment was an indirect test of memory (auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.
Hamilton, Maryellen; Geraci, Lisa
2006-01-01
According to leading theories, the picture superiority effect is driven by conceptual processing, yet this effect has been difficult to obtain using conceptual implicit memory tests. We hypothesized that the picture superiority effect results from conceptual processing of a picture's distinctive features rather than a picture's semantic features. To test this hypothesis, we used 2 conceptual implicit general knowledge tests; one cued conceptually distinctive features (e.g., "What animal has large eyes?") and the other cued semantic features (e.g., "What animal is the figurehead of Tootsie Roll?"). Results showed a picture superiority effect only on the conceptual test using distinctive cues, supporting our hypothesis that this effect is mediated by conceptual processing of a picture's distinctive features.
Why do mothers favor girls and fathers, boys? : A hypothesis and a test of investment disparity.
Godoy, Ricardo; Reyes-García, Victoria; McDade, Thomas; Tanner, Susan; Leonard, William R; Huanca, Tomás; Vadez, Vincent; Patel, Karishma
2006-06-01
Growing evidence suggests mothers invest more in girls than boys and fathers more in boys than girls. We develop a hypothesis that predicts preference for girls by the parent facing more resource constraints and preference for boys by the parent facing less constraint. We test the hypothesis with panel data from the Tsimane', a foraging-farming society in the Bolivian Amazon. Tsimane' mothers face more resource constraints than fathers. As predicted, mother's wealth protected girl's BMI, but father's wealth had weak effects on boy's BMI. Numerous tests yielded robust results, including those that controlled for fixed effects of child and household.
Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf
2015-03-01
We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: they inform the researcher about the probability of statistically significant and non-significant test outcomes actually being true. Although commonly misunderstood, a reported p value estimates only the probability of obtaining the results, or more extreme results, if the null hypothesis of no effect were true. Calculations of the more informative PPV and NPV require an a priori estimate of the probability (R). The present document discusses challenges of estimating R.
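The quantities described above can be written down directly. A minimal sketch (not the authors' code), assuming R is expressed as the a priori probability that the tested effect is real, with α the significance level and 1 − β the power:

```python
def ppv_npv(alpha, power, r):
    """Positive and negative predictive values of a significance test.

    alpha: probability of a false positive (significance level)
    power: probability of a true positive (1 - beta)
    r:     a priori probability that the tested effect is real
    """
    beta = 1.0 - power
    # PPV: fraction of significant outcomes that reflect a true effect
    ppv = (power * r) / (power * r + alpha * (1.0 - r))
    # NPV: fraction of non-significant outcomes that reflect a true null
    npv = ((1.0 - alpha) * (1.0 - r)) / ((1.0 - alpha) * (1.0 - r) + beta * r)
    return ppv, npv

# Conventional alpha = 0.05, power = 0.8, and an even prior (r = 0.5):
ppv, npv = ppv_npv(0.05, 0.8, 0.5)  # ppv ~ 0.94, npv ~ 0.83
```

With these conventional settings a significant result is somewhat more trustworthy than a non-significant one; as R shrinks (unlikely hypotheses), the PPV drops sharply, which is exactly why the a priori estimate of R matters.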
Wu, Yu-Hsiang; Stangl, Elizabeth; Pang, Carol; Zhang, Xuyang
2014-02-01
Little is known regarding the acoustic features of a stimulus used by listeners to determine the acceptable noise level (ANL). Features suggested by previous research include speech intelligibility (noise is unacceptable when it degrades speech intelligibility to a certain degree; the intelligibility hypothesis) and loudness (noise is unacceptable when the speech-to-noise loudness ratio is poorer than a certain level; the loudness hypothesis). The purpose of the study was to investigate if speech intelligibility or loudness is the criterion feature that determines ANL. To achieve this, test conditions were chosen so that the intelligibility and loudness hypotheses would predict different results. In Experiment 1, the effect of audiovisual (AV) and binaural listening on ANL was investigated; in Experiment 2, the effect of interaural correlation (ρ) on ANL was examined. A single-blinded, repeated-measures design was used. Thirty-two and twenty-five younger adults with normal hearing participated in Experiments 1 and 2, respectively. In Experiment 1, both ANL and speech recognition performance were measured using the AV version of the Connected Speech Test (CST) in three conditions: AV-binaural, auditory only (AO)-binaural, and AO-monaural. Lipreading skill was assessed using the Utley lipreading test. In Experiment 2, ANL and speech recognition performance were measured using the Hearing in Noise Test (HINT) in three binaural conditions, wherein the interaural correlation of noise was varied: ρ = 1 (N(o)S(o) [a listening condition wherein both speech and noise signals are identical across two ears]), -1 (NπS(o) [a listening condition wherein speech signals are identical across two ears whereas the noise signals of two ears are 180 degrees out of phase]), and 0 (N(u)S(o) [a listening condition wherein speech signals are identical across two ears whereas noise signals are uncorrelated across ears]). 
The results were compared to the predictions made based on the intelligibility and loudness hypotheses. The results of the AV and AO conditions appeared to support the intelligibility hypothesis due to the significant correlation between visual benefit in ANL (AV re: AO ANL) and (1) visual benefit in CST performance (AV re: AO CST) and (2) lipreading skill. The results of the N(o)S(o), NπS(o), and N(u)S(o) conditions negated the intelligibility hypothesis because binaural processing benefit (NπS(o) re: N(o)S(o), and N(u)S(o) re: N(o)S(o)) in ANL was not correlated to that in HINT performance. Instead, the results somewhat supported the loudness hypothesis because the pattern of ANL results across the three conditions (N(o)S(o) ≈ NπS(o) ≈ N(u)S(o) ANL) was more consistent with what was predicted by the loudness hypothesis (N(o)S(o) ≈ NπS(o) < N(u)S(o) ANL) than by the intelligibility hypothesis (NπS(o) < N(u)S(o) < N(o)S(o) ANL). The results of the binaural and monaural conditions supported neither hypothesis because (1) binaural benefit (binaural re: monaural) in ANL was not correlated to that in speech recognition performance, and (2) the pattern of ANL results across conditions (binaural < monaural ANL) was not consistent with the prediction made based on previous binaural loudness summation research (binaural ≥ monaural ANL). The study suggests that listeners may use multiple acoustic features to make ANL judgments. The binaural/monaural results showing that neither hypothesis was supported further indicate that factors other than speech intelligibility and loudness, such as psychological factors, may affect ANL. The weightings of different acoustic features in ANL judgments may vary widely across individuals and listening conditions. American Academy of Audiology.
Hiring a Gay Man, Taking a Risk?: A Lab Experiment on Employment Discrimination and Risk Aversion.
Baert, Stijn
2018-01-01
We investigate risk aversion as a driver of labor market discrimination against homosexual men. We show that more hiring discrimination by more risk-averse employers is consistent with taste-based and statistical discrimination. To test this hypothesis we conduct a scenario experiment in which experimental employers take a fictitious hiring decision concerning a heterosexual or homosexual male job candidate. In addition, participants are surveyed on their risk aversion and other characteristics that might correlate with this risk aversion. Analysis of the (post-)experimental data confirms our hypothesis. The likelihood of a beneficial hiring decision for homosexual male candidates decreases by 31.7% when employers are a standard deviation more risk-averse.
Animal Models for Testing the DOHaD Hypothesis
Since the seminal work in human populations by David Barker and colleagues, several species of animals have been used in the laboratory to test the Developmental Origins of Health and Disease (DOHaD) hypothesis. Rats, mice, guinea pigs, sheep, pigs and non-human primates have bee...
A "Projective" Test of the Golden Section Hypothesis.
ERIC Educational Resources Information Center
Lee, Chris; Adams-Webber, Jack
1987-01-01
In a projective test of the golden section hypothesis, 24 high school students rated themselves and 10 comic strip characters on the basis of 12 bipolar constructs. The overall proportion of cartoon figures which subjects assigned to the positive poles of the constructs was very close to the golden section. (Author/NB)
Peterson, Chris J; Dosch, Jerald J; Carson, Walter P
2014-08-01
The nucleation hypothesis appears to explain widespread patterns of succession in tropical pastures, specifically the tendency for isolated trees to promote woody species recruitment. Still, the nucleation hypothesis has usually been tested explicitly for only short durations and in some cases isolated trees fail to promote woody recruitment. Moreover, at times, nucleation occurs in other key habitat patches. Thus, we propose an extension, the matrix discontinuity hypothesis: woody colonization will occur in focal patches that function to mitigate the herbaceous vegetation effects, thus providing safe sites or regeneration niches. We tested predictions of the classical nucleation hypothesis, the matrix discontinuity hypothesis, and a distance from forest edge hypothesis, in five abandoned pastures in Costa Rica, across the first 11 years of succession. Our findings confirmed the matrix discontinuity hypothesis: specifically, rotting logs and steep slopes significantly enhanced woody colonization. Surprisingly, isolated trees did not consistently significantly enhance recruitment; only larger trees did so. Finally, woody recruitment consistently decreased with distance from forest. Our results as well as results from others suggest that the nucleation hypothesis needs to be broadened beyond its historical focus on isolated trees or patches; the matrix discontinuity hypothesis focuses attention on a suite of key patch types or microsites that promote woody species recruitment. We argue that any habitat discontinuities that ameliorate the inhibition by dense graminoid layers will be foci for recruitment. Such patches could easily be manipulated to speed the transition of pastures to closed canopy forests.
Humans have evolved specialized skills of social cognition: the cultural intelligence hypothesis.
Herrmann, Esther; Call, Josep; Hernàndez-Lloreda, María Victoria; Hare, Brian; Tomasello, Michael
2007-09-07
Humans have many cognitive skills not possessed by their nearest primate relatives. The cultural intelligence hypothesis argues that this is mainly due to a species-specific set of social-cognitive skills, emerging early in ontogeny, for participating and exchanging knowledge in cultural groups. We tested this hypothesis by giving a comprehensive battery of cognitive tests to large numbers of two of humans' closest primate relatives, chimpanzees and orangutans, as well as to 2.5-year-old human children before literacy and schooling. Supporting the cultural intelligence hypothesis and contradicting the hypothesis that humans simply have more "general intelligence," we found that the children and chimpanzees had very similar cognitive skills for dealing with the physical world but that the children had more sophisticated cognitive skills than either of the ape species for dealing with the social world.
Assessment of resampling methods for causality testing: A note on the US inflation behavior
Papana, Angeliki; Kyrtsou, Catherine; Kugiumtzis, Dimitris; Diks, Cees
2017-01-01
Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As appropriate test statistic for this setting, the partial transfer entropy (PTE), an information and model-free measure, is used. Two resampling techniques, time-shifted surrogates and the stationary bootstrap, are combined with three independence settings (giving a total of six resampling methods), all approximating the null hypothesis of no Granger causality. In these three settings, the level of dependence is changed, while the conditioning variables remain intact. The empirical null distribution of the PTE, as the surrogate and bootstrapped time series become more independent, is examined along with the size and power of the respective tests. Additionally, we consider a seventh resampling method by contemporaneously resampling the driving and the response time series using the stationary bootstrap. Although this case does not comply with the no causality hypothesis, one can obtain an accurate sampling distribution for the mean of the test statistic since its value is zero under H0. Results indicate that as the resampling setting gets more independent, the test becomes more conservative. Finally, we conclude with a real application. More specifically, we investigate the causal links among the growth rates for the US CPI, money supply and crude oil. Based on the PTE and the seven resampling methods, we consistently find that changes in crude oil cause inflation conditioning on money supply in the post-1986 period. However this relationship cannot be explained on the basis of traditional cost-push mechanisms. PMID:28708870
1986-09-01
List of tables (partial): … Hypothesis Test; III. Time to Get Rated Two Factor ANOVA Results; IV. Time to Get Rated Tukey's Paired Comparison Test Results A; V. Time to Get Rated Tukey's Paired Comparison Test Results B; VI. Single Factor ANOVA Hypothesis Test #1; VII. AT: Time to Get Rated ANOVA Test Results
An estimation of distribution method for infrared target detection based on Copulas
NASA Astrophysics Data System (ADS)
Wang, Shuo; Zhang, Yiqun
2015-10-01
Track-before-detect (TBD) based target detection involves a hypothesis test on merit functions, which measure each track as a possible target track. Its accuracy depends on the precision of the distribution of the merit functions, which determines the threshold for the test. Generally, merit functions are regarded as Gaussian, and the distribution is estimated on this basis, which is true for most methods such as multiple hypothesis tracking (MHT). However, the merit functions of some other methods, such as the dynamic programming algorithm (DPA), are non-Gaussian and cross-correlated. Since existing methods cannot reasonably measure this correlation, the exact distribution can hardly be estimated. If the merit functions are assumed Gaussian and independent, the error between the actual distribution and its approximation may occasionally exceed 30 percent, and it diverges as it propagates. Hence, in this paper, we propose a novel Copula-based estimation of distribution method, by which the distribution can be estimated precisely, with an error of less than 1 percent and no propagation. Moreover, the estimation depends only on the form of the merit functions and the structure of the tracking algorithm, and is invariant to measurements. Thus, the distribution can be estimated in advance, greatly reducing the demand for real-time calculation of distribution functions.
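The core idea, modeling the dependence between non-Gaussian merit functions through a copula rather than assuming independence, can be illustrated generically. This is a Gaussian-copula sketch, not the paper's estimator; squared correlated normals stand in for DPA-style merit functions, and the comparison shows how the independence assumption misestimates a joint threshold probability:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Correlated, non-Gaussian "merit function" samples: squared values of
# correlated normals, a crude stand-in for cross-correlated DPA merits.
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=5000)
m = z ** 2

def normal_scores(x):
    """Map a margin to normal scores via its empirical CDF."""
    ranks = stats.rankdata(x) / (len(x) + 1)  # empirical CDF in (0, 1)
    return stats.norm.ppf(ranks)

# Gaussian copula fit: correlation of the normal scores.
u = np.column_stack([normal_scores(m[:, 0]), normal_scores(m[:, 1])])
rho = np.corrcoef(u.T)[0, 1]

# Probability that BOTH merit functions stay below their 95th percentile.
# Independence predicts 0.95**2; the copula accounts for the dependence.
q = stats.norm.ppf(0.95)
p_indep = 0.95 ** 2
p_copula = stats.multivariate_normal(mean=[0, 0],
                                     cov=[[1, rho], [rho, 1]]).cdf([q, q])
```

Because the fit uses only ranks, it is insensitive to the (non-Gaussian) marginal shapes, which echoes the paper's point that the estimation depends on the form of the merit functions rather than on individual measurements.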
Sub-word based Arabic handwriting analysis for writer identification
NASA Astrophysics Data System (ADS)
Maliki, Makki; Al-Jawad, Naseer; Jassim, Sabah
2013-05-01
Analysing a text or part of it is key to handwriting identification. Generally, handwriting is learnt over time, and people develop habits in their style of writing. These habits are embedded in particular parts of handwritten text. In Arabic, each word consists of one or more sub-word(s); the end of each sub-word is considered to be a connected stroke. The main hypothesis in this paper is that sub-words are an essential reflection of an Arabic writer's habits that can be exploited for writer identification. This hypothesis is tested in experiments that evaluate writer identification, mainly using K-nearest-neighbor matching on groups of sub-words extracted from longer text. The experimental results show that a group of sub-words can identify the writer with a success rate between 52.94% and 82.35% for top-1 matches, rising to 100% for top-5 matches under K-nearest-neighbor. The results show that the majority of writers are identified using 7 sub-words with a reliability confidence of about 90% (i.e., 90% of the rejected templates have significantly larger distances to the tested example than the distance from the correctly identified template). Previous work using complete words, by contrast, shows a success rate of at most 90% in the top 10.
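The identification scheme described above, pooling K-nearest-neighbor evidence over a group of sub-words, can be sketched as follows. The feature vectors here are synthetic stand-ins (the paper's actual shape descriptors are not specified in this abstract), but the voting logic is the generic one:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

def identify_writer(subword_feats, ref_feats, ref_writers, k=5):
    """Identify a writer from a group of sub-word feature vectors.

    Each query sub-word votes with its k nearest reference sub-words
    (Euclidean distance); votes are pooled over the whole group.
    """
    votes = Counter()
    for f in subword_feats:
        d = np.linalg.norm(ref_feats - f, axis=1)
        for idx in np.argsort(d)[:k]:
            votes[ref_writers[idx]] += 1
    return votes.most_common(1)[0][0]

# Synthetic reference set: 3 writers, each a distinct cluster of
# sub-word feature vectors (stand-ins for real shape descriptors).
centers = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
ref_feats = np.vstack([c + 0.3 * rng.standard_normal((30, 2)) for c in centers])
ref_writers = np.repeat(["A", "B", "C"], 30)

# Query: a group of 7 sub-words written by writer "B".
query = centers[1] + 0.3 * rng.standard_normal((7, 2))
who = identify_writer(query, ref_feats, ref_writers)
```

Pooling votes over 7 sub-words rather than deciding per sub-word is what makes the group decision robust, consistent with the paper's finding that about 7 sub-words suffice for most writers.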
NASA Astrophysics Data System (ADS)
Psaltis, Dimitrios; Özel, Feryal; Chan, Chi-Kwan; Marrone, Daniel P.
2015-12-01
The half opening angle of a Kerr black hole shadow is always equal to (5 ± 0.2)GM/Dc², where M is the mass of the black hole and D is its distance from the Earth. Therefore, measuring the size of a shadow and verifying whether it is within this 4% range constitutes a null hypothesis test of general relativity. We show that the black hole in the center of the Milky Way, Sgr A*, is the optimal target for performing this test with upcoming observations using the Event Horizon Telescope (EHT). We use the results of optical/IR monitoring of stellar orbits to show that the mass-to-distance ratio for Sgr A* is already known to an accuracy of ∼4%. We investigate our prior knowledge of the properties of the scattering screen between Sgr A* and the Earth, the effects of which will need to be corrected for in order for the black hole shadow to appear sharp against the background emission. Finally, we explore an edge detection scheme for interferometric data and a pattern matching algorithm based on the Hough/Radon transform and demonstrate that the shadow of the black hole at 1.3 mm can be localized, in principle, to within ∼9%. All these results suggest that our prior knowledge of the properties of the black hole, of scattering broadening, and of the accretion flow can only limit this general relativistic null hypothesis test with EHT observations of Sgr A* to ≲10%.
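The arithmetic behind this null hypothesis test is simple enough to sketch. The mass and distance below are approximate values of the kind quoted for Sgr A*, and the "measured" size is hypothetical; the test just asks whether a measurement falls inside the (5 ± 0.2)GM/Dc² band:

```python
import math

G, c = 6.674e-11, 2.998e8                 # SI units
M_sun, pc = 1.989e30, 3.086e16

M, D = 4.3e6 * M_sun, 8.3e3 * pc          # approximate Sgr A* mass and distance

rad_to_uas = math.degrees(1) * 3600e6     # radians -> microarcseconds

theta = 5 * G * M / (D * c**2) * rad_to_uas   # predicted half opening angle
band = 0.2 / 5                                # the 4% general-relativistic range

theta_meas = 26.0                             # hypothetical measured size, in uas
consistent = abs(theta_meas - theta) / theta <= band
print(round(theta, 1), consistent)
```

With these inputs the predicted half opening angle comes out near 26 microarcseconds, so a hypothetical 26 uas measurement sits comfortably inside the 4% band and the null hypothesis would not be rejected.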
Sensory discrimination and intelligence: testing Spearman's other hypothesis.
Deary, Ian J; Bell, P Joseph; Bell, Andrew J; Campbell, Mary L; Fazal, Nicola D
2004-01-01
At the centenary of Spearman's seminal 1904 article, his general intelligence hypothesis remains one of the most influential in psychology. Less well known is the article's other hypothesis that there is "a correspondence between what may provisionally be called 'General Discrimination' and 'General Intelligence' which works out with great approximation to one or absoluteness" (Spearman, 1904, p. 284). Studies that do not find high correlations between psychometric intelligence and single sensory discrimination tests do not falsify this hypothesis. This study is the first directly to address Spearman's general intelligence-general sensory discrimination hypothesis. It attempts to replicate his findings with a similar sample of schoolchildren. In a well-fitting structural equation model of the data, general intelligence and general discrimination correlated .92. In a reanalysis of data published by Acton and Schroeder (2001), general intelligence and general sensory ability correlated .68 in men and women. One hundred years after its conception, Spearman's other hypothesis achieves some confirmation. The association between general intelligence and general sensory ability remains to be replicated and explained.
NASA Astrophysics Data System (ADS)
Noble, Clifford Elliott, II
2002-09-01
The problem. The purpose of this study was to investigate the ability of three single-task instruments---(a) the Test of English as a Foreign Language, (b) the Aviation Test of Spoken English, and (c) the Single Manual-Tracking Test---and three dual-task instruments---(a) the Concurrent Manual-Tracking and Communication Test, (b) the Certified Flight Instructor's Test, and (c) the Simulation-Based English Test---to predict the language performance of 10 Chinese student pilots speaking English as a second language when operating single-engine and multiengine aircraft within American airspace. Method. This research implemented a correlational design to investigate the ability of the six described instruments to predict the mean score of the criterion evaluation, which was the Examiner's Test. This test assessed the oral communication skill of student pilots on the flight portion of the terminal checkride in the Piper Cadet, Piper Seminole, and Beechcraft King Air airplanes. Results. Data from the Single Manual-Tracking Test, as well as the Concurrent Manual-Tracking and Communication Test, were discarded due to performance ceiling effects. Hypothesis 1, which stated that the average correlation between the mean scores of the dual-task evaluations and that of the Examiner's Test would predict the mean score of the criterion evaluation with a greater degree of accuracy than that of single-task evaluations, was not supported. Hypothesis 2, which stated that the correlation between the mean scores of the participants on the Simulation-Based English Test and the Examiner's Test would predict the mean score of the criterion evaluation with a greater degree of accuracy than that of all single- and dual-task evaluations, was also not supported. The findings suggest that single- and dual-task assessments administered after initial flight training are equivalent predictors of language performance when piloting single-engine and multiengine aircraft.
Dynamic test input generation for multiple-fault isolation
NASA Technical Reports Server (NTRS)
Schaefer, Phil
1990-01-01
Recent work in causal reasoning has provided practical techniques for multiple-fault diagnosis. These techniques provide a hypothesis/measurement diagnosis cycle: using probabilistic methods, they choose the best measurements to make, then update fault hypotheses in response. For many applications, such as computers and spacecraft, few measurement points may be accessible, or values may change quickly as the system under diagnosis operates. In these cases, a hypothesis/measurement cycle is insufficient. A technique is presented for a hypothesis/test-input/measurement diagnosis cycle. In contrast to generating tests a priori for determining device functionality, it dynamically generates tests in response to current knowledge about fault probabilities. It is shown how the mathematics previously used for measurement specification can be applied to the test input generation process. An example from an efficient implementation called Multi-Purpose Causal (MPC) is presented.
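The measurement-selection mathematics the paper reuses for test inputs is, at its core, an expected-information-gain calculation over fault hypotheses. A minimal sketch, with invented fault priors and pass/fail likelihoods (none of these numbers come from MPC):

```python
import math

# Toy fault hypotheses with prior probabilities (illustrative values)
priors = {"F1": 0.5, "F2": 0.3, "F3": 0.2}

# P(test passes | fault) for two candidate test inputs (also illustrative)
likelihood = {
    "test_A": {"F1": 0.9, "F2": 0.1, "F3": 0.5},
    "test_B": {"F1": 0.6, "F2": 0.5, "F3": 0.4},
}

def entropy(p):
    return -sum(x * math.log2(x) for x in p.values() if x > 0)

def expected_posterior_entropy(test):
    """Average remaining uncertainty about the fault after observing the test."""
    h = 0.0
    for passes in (True, False):
        # joint over faults for this outcome, then normalize to the posterior
        joint = {f: priors[f] * (likelihood[test][f] if passes
                                 else 1 - likelihood[test][f]) for f in priors}
        p_outcome = sum(joint.values())
        posterior = {f: v / p_outcome for f, v in joint.items()}
        h += p_outcome * entropy(posterior)
    return h

best = min(likelihood, key=expected_posterior_entropy)
print(best)  # test_A
```

The chosen input is the one whose expected posterior entropy over the fault set is lowest, i.e. the most diagnostic test to apply next; applied in a loop, this yields the hypothesis/test-input/measurement cycle.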
Masicampo, E J; Baumeister, Roy F
2008-03-01
This experiment used the attraction effect to test the hypothesis that ingestion of sugar can reduce reliance on intuitive, heuristic-based decision making. In the attraction effect, a difficult choice between two options is swayed by the presence of a seemingly irrelevant "decoy" option. We replicated this effect and the finding that the effect increases when people have depleted their mental resources performing a previous self-control task. Our hypothesis was based on the assumption that effortful processes require and consume relatively large amounts of glucose (brain fuel), and that this use of glucose is why people use heuristic strategies after exerting self-control. Before performing any tasks, some participants drank lemonade sweetened with sugar, which restores blood glucose, whereas others drank lemonade containing a sugar substitute. Only lemonade with sugar reduced the attraction effect. These results show one way in which the body (blood glucose) interacts with the mind (self-control and reliance on heuristics).
Wilder-Smith, A; Lover, A; Kittayapong, P; Burnham, G
2011-06-01
Dengue infection causes a significant economic, social and medical burden in affected populations in over 100 countries in the tropics and sub-tropics. Current dengue control efforts have generally focused on vector control but have not shown major impact. School-aged children are especially vulnerable to infection, due to sustained human-vector-human transmission in the close proximity environments of schools. Infection in children has a higher rate of complications, including dengue hemorrhagic fever and shock syndromes, than infections in adults. There is an urgent need for integrated and complementary population-based strategies to protect vulnerable children. We hypothesize that insecticide-treated school uniforms will reduce the incidence of dengue in school-aged children. The hypothesis would need to be tested in a community based randomized trial. If proven to be true, insecticide-treated school uniforms would be a cost-effective and scalable community based strategy to reduce the burden of dengue in children. Copyright © 2011 Elsevier Ltd. All rights reserved.
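For a sense of the scale such a community-based trial would need, a standard two-proportion sample-size calculation can be sketched. The incidence figures, significance level, and power below are assumptions for illustration, not values from the paper:

```python
import math

# Assumed annual dengue incidence: control uniforms vs insecticide-treated
p1, p2 = 0.05, 0.03
z_a, z_b = 1.96, 0.84          # z for two-sided alpha = 0.05 and 80% power

# Normal-approximation per-arm sample size for comparing two proportions
p_bar = (p1 + p2) / 2
n = ((z_a * math.sqrt(2 * p_bar * (1 - p_bar))
      + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
     / (p1 - p2) ** 2)
print(math.ceil(n))
```

Per-arm sizes in the low thousands are typical for incidence differences of this magnitude (and cluster randomization by school would inflate them further), which is why the authors frame the test as a community-based randomized trial rather than a small pilot.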
Synergy of SOCS-1 Inhibition and Microbial-Based Cancer Vaccines
2014-11-01
response without causing additional risk to the patient. The goal of our proposal is to modify a live-attenuated vaccine vector based on the food-borne...response after vaccination with a live-attenuated L. monocytogenes. Aim 3: Test the hypothesis that secretion of a SOCS-1 small peptide ...efficient internalization of pathogens and dying cells, processing of this material into peptide antigens that are presented in the context of major
Wilson, Anna J; Revkin, Susannah K; Cohen, David; Cohen, Laurent; Dehaene, Stanislas
2006-01-01
Background In a companion article [1], we described the development and evaluation of software designed to remediate dyscalculia. This software is based on the hypothesis that dyscalculia is due to a "core deficit" in number sense or in its access via symbolic information. Here we review the evidence for this hypothesis, and present results from an initial open-trial test of the software in a sample of nine 7–9 year old children with mathematical difficulties. Methods Children completed adaptive training on numerical comparison for half an hour a day, four days a week, over a period of five weeks. They were tested before and after intervention on their performance in core numerical tasks: counting, transcoding, base-10 comprehension, enumeration, addition, subtraction, and symbolic and non-symbolic numerical comparison. Results Children showed specific increases in performance on core number sense tasks. Speed of subitizing and numerical comparison increased by several hundred msec. Subtraction accuracy increased by an average of 23%. Performance on addition and base-10 comprehension tasks did not improve over the period of the study. Conclusion Initial open-trial testing showed promising results, and suggested that the software was successful in increasing number sense over the short period of the study. However, these results need to be followed up with larger, controlled studies. The issues of transfer to higher-level tasks, and of the best developmental time window for intervention, also need to be addressed. PMID:16734906
Visualization-based analysis of multiple response survey data
NASA Astrophysics Data System (ADS)
Timofeeva, Anastasiia
2017-11-01
During a survey, respondents are often allowed to tick more than one answer option for a question. Analysing and visualizing such data is difficult because multiple response variables must be processed together. With standard representations such as pie and bar charts, information about the association between different answer options is lost. The author proposes a visualization approach for multiple response variables based on Venn diagrams. For a more informative representation with a large number of overlapping groups, it is suggested to use similarity and association matrices. Aggregate indicators of dissimilarity (similarity) are proposed based on the determinant of the similarity matrix and the maximum eigenvalue of the association matrix. The application of the proposed approaches is illustrated by an analysis of advertising sources. Intersection of sets indicates that the same consumer audience is covered by several advertising sources; this information is very important for the allocation of an advertising budget. The differences between target groups across advertising sources are of interest, and to identify such differences hypotheses of homogeneity and independence are tested. Recent approaches to the problem are briefly reviewed and compared, and an alternative procedure is suggested. It is based on partitioning the consumer audience into pairwise disjoint subsets and includes hypothesis testing of the difference between population proportions. It turned out to be more suitable for the real problem being solved.
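The proposed aggregate indicators are easy to compute once the multiple-response data are in a respondent-by-option indicator matrix. A small sketch with invented responses; a Jaccard similarity is used here as one reasonable choice of similarity matrix, and the same matrix stands in for the association matrix (the paper's exact definitions may differ):

```python
import numpy as np

# Toy multiple-response data: rows = respondents, columns = advertising
# sources, 1 = ticked (values are illustrative).
X = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [0, 1, 1],
    [1, 1, 1],
    [0, 0, 1],
    [1, 1, 0],
])

counts = X.T @ X   # co-tick counts: diagonal = marginals, off-diag = overlaps

# Jaccard-style similarity between answer options: |A & B| / |A | B|
union = counts.diagonal()[:, None] + counts.diagonal()[None, :] - counts
S = counts / union

det_S = np.linalg.det(S)            # near 1 -> options rarely co-ticked
lam = np.linalg.eigvalsh(S).max()   # large -> strong overall overlap
print(round(det_S, 3), round(lam, 3))
```

A determinant near 1 indicates nearly disjoint audiences across sources, while a large leading eigenvalue signals that several sources cover the same consumers, which is the case that matters for budget allocation.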
Using VITA Service Learning Experiences to Teach Hypothesis Testing and P-Value Analysis
ERIC Educational Resources Information Center
Drougas, Anne; Harrington, Steve
2011-01-01
This paper describes a hypothesis testing project designed to capture student interest and stimulate classroom interaction and communication. Using an online survey instrument, the authors collected student demographic information and data regarding university service learning experiences. Introductory statistics students performed a series of…
A Rational Analysis of the Selection Task as Optimal Data Selection.
ERIC Educational Resources Information Center
Oaksford, Mike; Chater, Nick
1994-01-01
Experimental data on human reasoning in hypothesis-testing tasks is reassessed in light of a Bayesian model of optimal data selection in inductive hypothesis testing. The rational analysis provided by the model suggests that reasoning in such tasks may be rational rather than subject to systematic bias. (SLD)
Random Effects Structure for Confirmatory Hypothesis Testing: Keep It Maximal
ERIC Educational Resources Information Center
Barr, Dale J.; Levy, Roger; Scheepers, Christoph; Tily, Harry J.
2013-01-01
Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the…
USDA-ARS?s Scientific Manuscript database
The effects of bias (over and underestimates) in estimates of disease severity on hypothesis testing using different assessment methods was explored. Nearest percent estimates (NPE), the Horsfall-Barratt (H-B) scale, and two different linear category scales (10% increments, with and without addition...
A Multivariate Test of the Bott Hypothesis in an Urban Irish Setting
ERIC Educational Resources Information Center
Gordon, Michael; Downing, Helen
1978-01-01
Using a sample of 686 married Irish women in Cork City the Bott hypothesis was tested, and the results of a multivariate regression analysis revealed that neither network connectedness nor the strength of the respondent's emotional ties to the network had any explanatory power. (Author)
Polarization, Definition, and Selective Media Learning.
ERIC Educational Resources Information Center
Tichenor, P. J.; And Others
The traditional hypothesis that extreme attitudinal positions on controversial issues are likely to produce low understanding of messages on these issues--especially when the messages represent opposing views--is tested. Data for test of the hypothesis are from two field studies, each dealing with reader attitudes and decoding of one news article…
The Lasting Effects of Introductory Economics Courses.
ERIC Educational Resources Information Center
Sanders, Philip
1980-01-01
Reports research which tests the Stigler Hypothesis. The hypothesis suggests that students who have taken introductory economics courses and those who have not show little difference in test performance five years after completing college. Results of the author's research illustrate that economics students do retain some knowledge of economics…
Rosas, Antonio; Bastir, Markus; Alarcón, Jose Antonio; Kuroe, Kazuto
2008-09-01
To test the hypothesis that midline basicranial orientation and posterior cranial base length are discriminating factors between adults of different populations, and relate to potential maxillo/mandibular disharmonies. Twenty-nine 2D landmarks of the midline cranial base, the face and the mandible of dry-skull X-rays from three major population samples (45 Asians, 34 Africans, 64 Europeans) were digitized and analysed by geometric morphometrics. We used, first, MANOVA to test for mean shape differences between populations; then principal components analysis (PCA) to assess the overall variation in the sample; and finally canonical variate analysis (CVA) with jack-knife validation (N=1000) to analyse the anatomical features that best distinguish among populations. Significant mean shape differences were found between populations (P<0.001). CVA revealed two significant axes of discrimination (P<0.001). Jack-knife validation correctly identified 92% of 15,000 unknowns. In Africans the whole cranial base is rotated into a forward-downward position, while in Asians it is rotated the opposite way; Europeans occupy an intermediate position. The African and Asian samples showed maxillo/mandibular prognathism: African prognathism was produced by an anteriorly positioned maxilla, Asian prognathism by a retruded anterior cranial base and an increased posterior cranial base length. Europeans showed a trend towards retracted mandibles with relatively shorter posterior cranial bases. The results support the hypothesis that basicranial orientation and posterior cranial base length are valid factors for distinguishing between geographic groups. The whole craniofacial configuration underlying a particular maxillo-facial disharmony must be considered in diagnosis, growth prediction and treatment planning.
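The jack-knifed classification step can be illustrated with a toy stand-in: leave-one-out assignment of "unknowns" to the nearest group centroid in a low-dimensional shape space. The group means, spread, and sample sizes are invented; only the validation logic parallels the paper's CVA procedure:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 2D shape scores for three populations (illustrative values)
labels = np.repeat([0, 1, 2], 40)
means = np.array([[0.0, 0.0], [2.5, 0.0], [1.2, 2.2]])
X = means[labels] + rng.normal(0, 0.8, (120, 2))

correct = 0
for i in range(len(X)):
    keep = np.arange(len(X)) != i                 # leave one specimen out
    cents = np.array([X[keep & (labels == g)].mean(axis=0) for g in range(3)])
    pred = np.argmin(np.linalg.norm(cents - X[i], axis=1))
    correct += (pred == labels[i])

print(round(correct / len(X), 2))  # proportion of unknowns correctly assigned
```

Because each specimen is classified by centroids computed without it, the accuracy estimate is not inflated by reusing the same data for training and testing, which is the point of the jack-knife validation.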
Deblauwe, Vincent; Kennel, Pol; Couteron, Pierre
2012-01-01
Background Independence between observations is a standard prerequisite of traditional statistical tests of association. This condition is, however, violated when autocorrelation is present within the data. In the case of variables that are regularly sampled in space (i.e. lattice data or images), such as those provided by remote-sensing or geographical databases, this problem is particularly acute. Because analytic derivation of the null probability distribution of the test statistic (e.g. Pearson's r) is not always possible when autocorrelation is present, we propose instead the use of a Monte Carlo simulation with surrogate data. Methodology/Principal Findings The null hypothesis that two observed mapped variables are the result of independent pattern generating processes is tested here by generating sets of random image data while preserving the autocorrelation function of the original images. Surrogates are generated by matching the dual-tree complex wavelet spectra (and hence the autocorrelation functions) of white noise images with the spectra of the original images. The generated images can then be used to build the probability distribution function of any statistic of association under the null hypothesis. We demonstrate the validity of a statistical test of association based on these surrogates with both actual and synthetic data and compare it with a corrected parametric test and three existing methods that generate surrogates (randomization, random rotations and shifts, and iterative amplitude adjusted Fourier transform). Type I error control was excellent, even with strong and long-range autocorrelation, which is not the case for alternative methods. Conclusions/Significance The wavelet-based surrogates are particularly appropriate in cases where autocorrelation appears at all scales or is direction-dependent (anisotropy). 
We explore the potential of the method for association tests involving a lattice of binary data and discuss its potential for validation of species distribution models. An implementation of the method in Java for the generation of wavelet-based surrogates is available online as supporting material. PMID:23144961
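The surrogate-testing logic generalizes beyond the paper's dual-tree complex wavelet construction. The sketch below uses a simpler Fourier phase-randomization surrogate (an assumption of this example, not the paper's method) to approximately preserve each image's autocorrelation while destroying any alignment with the other map, and then builds the null distribution of Pearson's r by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(2)

def surrogate(img, rng):
    """Phase-randomized surrogate: approximately keeps the power spectrum
    (hence the autocorrelation) of img while scrambling spatial alignment."""
    F = np.fft.fft2(img - img.mean())
    phase = np.exp(1j * rng.uniform(0, 2 * np.pi, F.shape))
    return np.real(np.fft.ifft2(np.abs(F) * phase))

def smooth_noise(rng, n=64, k=5):
    """Autocorrelated random map: white noise convolved with a k x k kernel."""
    z = rng.normal(size=(n, n))
    kern = np.ones((k, k)) / k**2
    return np.real(np.fft.ifft2(np.fft.fft2(z) * np.fft.fft2(kern, s=(n, n))))

# Two autocorrelated maps generated independently: the null hypothesis is true
A, B = smooth_noise(rng), smooth_noise(rng)
r_obs = np.corrcoef(A.ravel(), B.ravel())[0, 1]

# Null distribution of r from surrogates of A against the fixed map B
null = [np.corrcoef(surrogate(A, rng).ravel(), B.ravel())[0, 1]
        for _ in range(199)]
p = (1 + sum(abs(r) >= abs(r_obs) for r in null)) / 200
print(round(r_obs, 3), round(p, 3))
```

Because the surrogates share the autocorrelation of the original map, the resulting test does not suffer the Type I inflation that a naive parametric r-test shows on spatially autocorrelated lattices.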
Concerns regarding a call for pluralism of information theory and hypothesis testing
Lukacs, P.M.; Thompson, W.L.; Kendall, W.L.; Gould, W.R.; Doherty, P.F.; Burnham, K.P.; Anderson, D.R.
2007-01-01
1. Stephens et al. (2005) argue for 'pluralism' in statistical analysis, combining null hypothesis testing and information-theoretic (I-T) methods. We show that I-T methods are more informative even in single-variable problems, and we provide an ecological example. 2. I-T methods allow inferences to be made from multiple models simultaneously. We believe multimodel inference is the future of data analysis, and it cannot be achieved with null hypothesis-testing approaches. 3. We argue for a stronger emphasis on critical thinking in science in general and less reliance on exploratory data analysis and data dredging. Deriving alternative hypotheses is central to science; deriving a single interesting scientific hypothesis and then comparing it to a default null hypothesis (e.g. 'no difference') is not an efficient strategy for gaining knowledge. We think this single-hypothesis strategy has been relied upon too often in the past. 4. We clarify misconceptions presented by Stephens et al. (2005). 5. We think inference should be made about models, directly linked to scientific hypotheses, and their parameters conditioned on data, Prob(Hj | data). I-T methods provide a basis for this inference. Null hypothesis testing merely provides a probability statement about the data conditioned on a null model, Prob(data | H0). 6. Synthesis and applications. I-T methods provide a more informative approach to inference: a direct measure of evidence for or against hypotheses, and a means to consider multiple hypotheses simultaneously as a basis for rigorous inference. Progress in our science can be accelerated if modern methods are used intelligently; this includes various I-T and Bayesian methods.
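The multimodel side of the I-T approach is commonly operationalized with Akaike weights, which turn AIC differences into relative evidence for each hypothesis in the set. A minimal sketch with invented AIC values:

```python
import math

# Illustrative AIC values for three candidate models (hypothetical numbers)
aic = {"M1": 102.3, "M2": 100.1, "M3": 107.8}

best_aic = min(aic.values())
delta = {m: a - best_aic for m, a in aic.items()}      # AIC differences
raw = {m: math.exp(-d / 2) for m, d in delta.items()}  # relative likelihoods
total = sum(raw.values())
weights = {m: v / total for m, v in raw.items()}       # Akaike weights
print({m: round(v, 3) for m, v in weights.items()})
```

Unlike a null-hypothesis P value, the weights compare all entertained models at once and sum to one over the model set, giving a direct evidence statement of the kind the authors advocate.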
Developmental vitamin D deficiency and risk of schizophrenia: a 10-year update.
McGrath, John J; Burne, Thomas H; Féron, François; Mackay-Sim, Allan; Eyles, Darryl W
2010-11-01
There is an urgent need to generate and test candidate risk factors that may explain gradients in the incidence of schizophrenia. Based on clues from epidemiology, we proposed that developmental vitamin D deficiency may contribute to the risk of developing schizophrenia. This hypothesis may explain diverse epidemiological findings including season of birth, the latitude gradients in incidence and prevalence, the increased risk in dark-skinned migrants to certain countries, and the urban-rural gradient. Animal experiments demonstrate that transient prenatal hypovitaminosis D is associated with persisting changes in brain structure and function, including convergent evidence of altered dopaminergic function. A recent case-control study based on neonatal blood samples identified a significant association between neonatal vitamin D status and risk of schizophrenia. This article provides a concise summary of the epidemiological and animal experimental research that has explored this hypothesis.
Jose de Carli, Gabriel; Campos Pereira, Tiago
2017-09-01
Spontaneous parthenogenetic and androgenetic events occur in humans, but they result in tumours: the ovarian teratoma and the hydatidiform mole, respectively. However, the observation of fetiform (ovarian) teratomas, the serendipitous identification of several chimeric human parthenotes and androgenotes in the last two decades, and the creation of viable bi-maternal mice in the laboratory through minor genetic interventions raise the question of whether natural cases of clinically healthy human parthenotes have gone unnoticed by science. Here we present a hypothesis based on three elements to support the existence of such elusive individuals: mutations affecting (i) genomic imprinting, (ii) meiosis and (iii) oocyte activation. Additionally, we suggest that the routine practice of whole genome sequencing on every newborn worldwide would be the ultimate test of this controversial, yet astonishing, hypothesis. Finally, several medical implications of such an intriguing event are presented. Copyright © 2017 Elsevier Ltd. All rights reserved.
Adaptive ingredients against food spoilage in Japanese cuisine.
Ohtsubo, Yohsuke
2009-12-01
Billing and Sherman proposed the antimicrobial hypothesis to explain the worldwide spice use pattern. The present study explored whether two antimicrobial ingredients (i.e. spices and vinegar) are used in ways consistent with the antimicrobial hypothesis. Four specific predictions were tested: meat-based recipes would call for more spices/vinegar than vegetable-based recipes; summer recipes would call for more spices/vinegar than winter recipes; recipes in hotter regions would call for more spices/vinegar; and recipes including unheated ingredients would call for more spices/vinegar. Spice/vinegar use patterns were compiled from two types of traditional Japanese cookbooks. Dataset I included recipes provided by elderly Japanese housewives. Dataset II included recipes provided by experts in traditional Japanese foods. The analyses of Dataset I revealed that the vinegar use pattern conformed to the predictions. In contrast, analyses of Dataset II generally supported the predictions in terms of spices, but not vinegar.
Filling the gap in functional trait databases: use of ecological hypotheses to replace missing data.
Taugourdeau, Simon; Villerd, Jean; Plantureux, Sylvain; Huguenin-Elie, Olivier; Amiaud, Bernard
2014-04-01
Functional trait databases are powerful tools in ecology, though most of them contain large amounts of missing values. The goal of this study was to test the effect of imputation methods on the evaluation of trait values at species level and on the subsequent calculation of functional diversity indices at community level using functional trait databases. Two simple imputation methods (average and median), two methods based on ecological hypotheses, and one multiple imputation method were tested using a large plant trait database, together with the influence of the percentage of missing data and differences between functional traits. At community level, the complete-case approach and three functional diversity indices calculated from grassland plant communities were included. At the species level, one of the methods based on ecological hypotheses was more accurate for all traits than imputation with the average or median, but the multiple imputation method was superior for most of the traits. The method based on functional proximity between species was the best method for traits with an unbalanced distribution, while the method based on the existence of relationships between traits was the best for traits with a balanced distribution. The ranking of the grassland communities by their functional diversity indices was not robust with the complete-case approach, even for low percentages of missing data. With the imputation methods based on ecological hypotheses, functional diversity indices could be computed with a maximum of 30% of missing data without affecting the ranking between grassland communities. The multiple imputation method performed well, but not better than single imputation based on ecological hypotheses and adapted to the distribution of the trait values, for the functional identity and range of the communities.
Ecological studies using functional trait databases have to deal with missing data using imputation methods corresponding to their specific needs and making the most out of the information available in the databases. Within this framework, this study indicates the possibilities and limits of single imputation methods based on ecological hypothesis and concludes that they could be useful when studying the ranking of communities for their functional diversity indices.
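The contrast between naive and hypothesis-driven imputation can be sketched on synthetic trait data. The two "functional groups" and the 30% missingness below are invented; the point is that imputing from functionally proximate species tracks the true values far better than the global average:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy trait table: species in two functional groups with different trait means,
# mimicking imputation by "functional proximity" (groups/values illustrative).
group = np.repeat([0, 1], 50)
trait = np.where(group == 0, 2.0, 8.0) + rng.normal(0, 1.0, 100)

missing = rng.random(100) < 0.3          # ~30% missing values
observed = ~missing

# (a) global mean imputation vs (b) imputation by group mean (proximity-based)
global_fill = trait[observed].mean()
group_fill = np.array([trait[observed & (group == g)].mean() for g in (0, 1)])

err_global = np.abs(trait[missing] - global_fill).mean()
err_group = np.abs(trait[missing] - group_fill[group[missing]]).mean()
print(round(err_global, 2), round(err_group, 2))
```

When trait values cluster by functional group, the global mean falls between the clusters and misses every species by a wide margin, whereas the group-informed fill errs only by the within-group noise; this mirrors why the ecological-hypothesis methods outperform simple averaging for unbalanced trait distributions.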
Exploring the multiple-hit hypothesis of preterm white matter damage using diffusion MRI.
Barnett, Madeleine L; Tusor, Nora; Ball, Gareth; Chew, Andrew; Falconer, Shona; Aljabar, Paul; Kimpton, Jessica A; Kennea, Nigel; Rutherford, Mary; David Edwards, A; Counsell, Serena J
2018-01-01
Preterm infants are at high risk of diffuse white matter injury and adverse neurodevelopmental outcome. The multiple hit hypothesis suggests that the risk of white matter injury increases with cumulative exposure to multiple perinatal risk factors. Our aim was to test this hypothesis in a large cohort of preterm infants using diffusion weighted magnetic resonance imaging (dMRI). We studied 491 infants (52% male) without focal destructive brain lesions born at < 34 weeks, who underwent structural and dMRI at a specialist Neonatal Imaging Centre. The median (range) gestational age (GA) at birth was 30+1 (23+2 to 33+5) weeks and the median postmenstrual age at scan was 42+1 (38 to 45) weeks. dMRI data were analyzed using tract-based spatial statistics and the relationship between dMRI measures in white matter and individual perinatal risk factors was assessed. We tested the hypothesis that increased exposure to perinatal risk factors was associated with lower fractional anisotropy (FA), and higher radial, axial and mean diffusivity (RD, AD, MD) in white matter. Neurodevelopmental performance was investigated using the Bayley Scales of Infant and Toddler Development, Third Edition (BSITD-III) in a subset of 381 infants at 20 months corrected age. We tested the hypothesis that lower FA and higher RD, AD and MD in white matter were associated with poorer neurodevelopmental performance. Identified risk factors for diffuse white matter injury were lower GA at birth, fetal growth restriction, increased number of days requiring ventilation and parenteral nutrition, necrotizing enterocolitis and male sex. Clinical chorioamnionitis and patent ductus arteriosus were not associated with white matter injury. Multivariate analysis demonstrated that fetal growth restriction and increased number of days requiring ventilation and parenteral nutrition were independently associated with lower FA values.
Exposure to cumulative risk factors was associated with reduced white matter FA, and FA values at term equivalent age were associated with subsequent neurodevelopmental performance. This study suggests multiple perinatal risk factors have an independent association with diffuse white matter injury at term equivalent age and exposure to multiple perinatal risk factors exacerbates dMRI-defined, clinically significant white matter injury. Our findings support the multiple hit hypothesis for preterm white matter injury.
Smid, Henderikus G. O. M.; Bruggeman, Richard; Martens, Sander
2013-01-01
Background: Schizophrenia is associated with impairments of the perception of objects, but how this affects higher cognitive functions, whether this impairment is already present after recent onset of psychosis, and whether it is specific for schizophrenia related psychosis, is not clear. We therefore tested the hypothesis that because schizophrenia is associated with impaired object perception, schizophrenia patients should differ in shifting attention between objects compared to healthy controls. To test this hypothesis, a task was used that allowed us to separately observe space-based and object-based covert orienting of attention. To examine whether impairment of object-based visual attention is related to higher order cognitive functions, standard neuropsychological tests were also administered. Method: Patients with recent onset psychosis and normal controls performed the attention task, in which space- and object-based attention shifts were induced by cue-target sequences that required reorienting of attention within an object, or reorienting attention between objects. Results: Patients with and without schizophrenia showed slower than normal spatial attention shifts, but the object-based component of attention shifts in patients was smaller than normal. Schizophrenia was specifically associated with slowed right-to-left attention shifts. Reorienting speed was significantly correlated with verbal memory scores in controls, and with visual attention scores in patients, but not with speed-of-processing scores in either group. Conclusions: Deficits of object-perception and spatial attention shifting are not only associated with schizophrenia, but are common to all psychosis patients. Schizophrenia patients only differed by having abnormally slow right-to-left visual field reorienting. Deficits of object-perception and spatial attention shifting are already present after recent onset of psychosis.
Studies investigating visual spatial attention should take into account the separable effects of space-based and object-based shifting of attention. Impaired reorienting in patients was related to impaired visual attention, but not to deficits of processing speed and verbal memory. PMID:23536901
de Melo, Warita Alves; Lima-Ribeiro, Matheus S.; Terribile, Levi Carina
2016-01-01
Studies based on contemporary plant occurrences and pollen fossil records have proposed that the current disjunct distribution of seasonally dry tropical forests (SDTFs) across South America is the result of fragmentation of a formerly widespread and continuously distributed dry forest during the arid climatic conditions associated with the Last Glacial Maximum (LGM), which is known as the modern-day dry forest refugia hypothesis. We studied the demographic history of Tabebuia roseoalba (Bignoniaceae) to understand the disjunct geographic distribution of South American SDTFs based on statistical phylogeography and ecological niche modeling (ENM). We specifically tested the dry forest refugia hypothesis; i.e., if the multiple and isolated patches of SDTFs are current climatic relicts of a widespread and continuously distributed dry forest during the LGM. We sampled 235 individuals across 18 populations in Central Brazil and analyzed the polymorphisms at chloroplast (trnS-trnG, psbA-trnH and ycf6-trnC intergenic spacers) and nuclear (ITS nrDNA) genomes. We performed coalescence simulations of alternative hypotheses under demographic expectations from two a priori biogeographic hypotheses ((1) the Pleistocene Arc hypothesis and (2) a range shift to the Amazon Basin) and two other demographic expectations predicted by ENMs ((3) expansion throughout Neotropical South America, including the Amazon Basin, and (4) retraction during the LGM). Phylogenetic analyses based on median-joining network showed haplotype sharing among populations with evidence of incomplete lineage sorting. Coalescent analyses showed smaller effective population sizes for T. roseoalba during the LGM compared to the present-day.
Simulations and ENM also showed that its current spatial pattern of genetic diversity is most likely due to a scenario of range retraction during the LGM instead of the fragmentation from a once extensive and largely contiguous SDTF across South America, not supporting the South American dry forest refugia hypothesis. PMID:27458982
Caricati, Luca
2017-01-01
The status-legitimacy hypothesis was tested by analyzing cross-national data about social inequality. Several indicators were used as indexes of social advantage: social class, personal income, and self-position in the social hierarchy. Moreover, inequality and freedom in nations, as indexed by the Gini coefficient and the human freedom index, were considered. Results from 36 nations worldwide showed no support for the status-legitimacy hypothesis. The perception that income distribution was fair tended to increase as social advantage increased. Moreover, national context increased the difference between advantaged and disadvantaged people in the perception of social fairness: Contrary to the status-legitimacy hypothesis, disadvantaged people were more likely than advantaged people to perceive income differences as too large, and this difference increased in nations with greater freedom and equality. The implications for the status-legitimacy hypothesis are discussed.
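The Gini index used above as a national inequality measure summarizes income dispersion on a 0-1 scale (0 = perfect equality). As a purely illustrative aside (the study relies on published national Gini figures, not raw incomes), a minimal computation from individual incomes might look like:

```python
def gini(incomes):
    """Gini coefficient of a list of non-negative incomes.

    Uses the sorted-values identity:
        G = (2 * sum_i i * x_(i)) / (n * sum(x)) - (n + 1) / n
    where x_(i) is the i-th smallest income (1-indexed).
    """
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0  # degenerate cases: no data or all-zero incomes
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n
```

For equal incomes the coefficient is 0; when one person holds all income it approaches 1 (exactly (n - 1) / n for a finite sample).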
Tests of the Giant Impact Hypothesis
NASA Technical Reports Server (NTRS)
Jones, J. H.
1998-01-01
The giant impact hypothesis has gained popularity as a means of explaining a volatile-depleted Moon that still has a chemical affinity to the Earth. As Taylor's Axiom decrees, the best models of lunar origin are testable, but this is difficult with the giant impact model. The energy associated with the impact would be sufficient to totally melt and partially vaporize the Earth. And this means that there should be no geological vestige of earlier times. Accordingly, it is important to devise tests that may be used to evaluate the giant impact hypothesis. Three such tests are discussed here. None of these is supportive of the giant impact model, but neither do they disprove it.
Genetics and recent human evolution.
Templeton, Alan R
2007-07-01
Starting with "mitochondrial Eve" in 1987, genetics has played an increasingly important role in studies of the last two million years of human evolution. It initially appeared that genetic data resolved the basic models of recent human evolution in favor of the "out-of-Africa replacement" hypothesis in which anatomically modern humans evolved in Africa about 150,000 years ago, started to spread throughout the world about 100,000 years ago, and subsequently drove to complete genetic extinction (replacement) all other human populations in Eurasia. Unfortunately, many of the genetic studies on recent human evolution have suffered from scientific flaws, including misrepresenting the models of recent human evolution, focusing upon hypothesis compatibility rather than hypothesis testing, committing the ecological fallacy, and failing to consider a broader array of alternative hypotheses. Once these flaws are corrected, there is actually little genetic support for the out-of-Africa replacement hypothesis. Indeed, when genetic data are used in a hypothesis-testing framework, the out-of-Africa replacement hypothesis is strongly rejected. The model of recent human evolution that emerges from a statistical hypothesis-testing framework does not correspond to any of the traditional models of human evolution, but it is compatible with fossil and archaeological data. These studies also reveal that any one gene or DNA region captures only a small part of human evolutionary history, so multilocus studies are essential. As more and more loci become available, genetics will undoubtedly offer additional insights and resolutions of human evolution.
A test of the hopelessness theory of depression in unemployed young adults.
Lynd-Stevenson, R M
1996-02-01
Recent research has failed to support the prediction based on hopelessness theory that hopelessness mediates the full relationship between attributional style for negative outcomes and depression. A re-examination of hopelessness theory, however, provides the hypothesis that a measure of hopelessness containing items directly relevant to an ongoing negative life-event will mediate the full relationship between attributional style for negative outcomes and depression. Hopelessness theory was extended with a second hypothesis that attributional style for positive outcomes is involved in the aetiology of depression and that hopelessness also mediates the full relationship between attributional style for positive outcomes and depression. The third hypothesis was that a series of "background variables" (e.g. age, sex) omitted in previous research would be implicated in the generation of depression. The three hypotheses were tested and supported with data collected from a sample of young unemployed adults. A further aspect of hopelessness theory overlooked in most research is an ability to account for reductions in depression associated with the cessation of a negative life-event and occurrence of a positive life-event. The hopelessness theory and the three hypotheses were again supported with data collected from individuals who were unemployed and others who had recently undergone the transition from unemployment to employment.
Stoddard, Mary Caswell; Fayet, Annette L.; Kilner, Rebecca M.; Hinde, Camilla A.
2012-01-01
Many passerine birds lay white eggs with reddish brown speckles produced by protoporphyrin pigment. However, the function of these spots is contested. Recently, the sexually selected eggshell coloration (SSEC) hypothesis proposed that eggshell color is a sexually selected signal through which a female advertises her quality (and hence the potential quality of her future young) to her male partner, thereby encouraging him to contribute more to breeding attempts. We performed a test of the SSEC hypothesis in a common passerine, the great tit Parus major. We used a double cross-fostering design to determine whether males change their provisioning behavior based on eggshell patterns they observe at the nest. We also tested the assumption that egg patterning reflects female and/or offspring quality. Because birds differ from humans in their color and pattern perception, we used digital photography and models of bird vision to quantify egg patterns objectively. Neither male provisioning nor chick growth was related to the pattern of eggs males observed during incubation. Although heavy females laid paler, less speckled eggs, these eggs did not produce chicks that grew faster. Therefore, we conclude that the SSEC hypothesis is an unlikely explanation for the evolution of egg speckling in great tits. PMID:22815730
Stenner, A Jackson; Fisher, William P; Stone, Mark H; Burdick, Donald S
2013-01-01
Rasch's unidimensional models for measurement show how to connect object measures (e.g., reader abilities), measurement mechanisms (e.g., machine-generated cloze reading items), and observational outcomes (e.g., counts correct on reading instruments). Substantive theory shows what interventions or manipulations to the measurement mechanism can be traded off against a change to the object measure to hold the observed outcome constant. A Rasch model integrated with a substantive theory dictates the form and substance of permissible interventions. Rasch analysis, absent construct theory and an associated specification equation, is a black box in which understanding may be more illusory than not. Finally, the quantitative hypothesis can be tested by comparing theory-based trade-off relations with observed trade-off relations. Only quantitative variables (as measured) support such trade-offs. Note that to test the quantitative hypothesis requires more than manipulation of the algebraic equivalencies in the Rasch model or descriptively fitting data to the model. A causal Rasch model involves experimental intervention/manipulation on either reader ability or text complexity or a conjoint intervention on both simultaneously to yield a successful prediction of the resultant observed outcome (count correct). We conjecture that when this type of manipulation is introduced for individual reader text encounters and model predictions are consistent with observations, the quantitative hypothesis is sustained.
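The trade-off relation described above can be made concrete with the dichotomous Rasch model, in which the probability of a correct response depends only on the difference between object measure (e.g., reader ability) and item difficulty (e.g., text complexity): shifting both by the same amount leaves every predicted outcome unchanged. A minimal sketch (variable names and values are illustrative, not from the article):

```python
import math

def rasch_p(ability, difficulty):
    """Dichotomous Rasch model: P(correct) = 1 / (1 + exp(-(ability - difficulty)))."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def expected_count(ability, difficulties):
    """Expected count correct across a set of items for one reader."""
    return sum(rasch_p(ability, d) for d in difficulties)

# The trade-off: raising every item's difficulty by delta while raising
# reader ability by the same delta holds the expected count constant.
items = [-1.0, 0.0, 0.5, 2.0]  # illustrative item difficulties (logits)
delta = 0.7
base = expected_count(1.2, items)
shifted = expected_count(1.2 + delta, [d + delta for d in items])
assert abs(base - shifted) < 1e-9
```

Testing the quantitative hypothesis, as the authors note, means checking that such theory-predicted trade-offs hold under real experimental manipulation, not merely in the model's algebra as above.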
Extracurricular participation and academic outcomes: testing the over-scheduling hypothesis.
Fredricks, Jennifer A
2012-03-01
There is a growing concern that some youth are overscheduled in extracurricular activities, and that this increasing involvement has negative consequences for youth functioning. This article used data from the Educational Longitudinal Study (ELS: 2002), a nationally representative and ethnically diverse longitudinal sample of American high school students, to evaluate this hypothesis (N = 13,130; 50.4% female). On average, 10th graders participated in between 2 and 3 extracurricular activities, for an average of 5 h per week. Only a small percentage of 10th graders reported participating in extracurricular activities at high levels. Moreover, a large percentage of the sample reported no involvement in school-based extracurricular contexts in the after-school hours. Controlling for some demographic factors, prior achievement, and school size, the breadth (i.e., number of extracurricular activities) and the intensity (i.e., time in extracurricular activities) of participation at 10th grade were positively associated with math achievement test scores, grades, and educational expectations at 12th grade. Breadth and intensity of participation at 10th grade also predicted educational status at 2 years post high school. In addition, the non-linear function was significant. At higher breadth and intensity, the academic adjustment of youth declined. Implications of the findings for the over-scheduling hypothesis are discussed.
NASA Astrophysics Data System (ADS)
Ren, Xiaoqiang; Yan, Jiaqi; Mo, Yilin
2018-03-01
This paper studies binary hypothesis testing based on measurements from a set of sensors, a subset of which can be compromised by an attacker. The measurements from a compromised sensor can be manipulated arbitrarily by the adversary. The asymptotic exponential rate, with which the probability of error goes to zero, is adopted to indicate the detection performance of a detector. In practice, we expect the attack on sensors to be sporadic, and therefore the system may operate with all the sensors being benign for an extended period of time. This motivates us to consider the trade-off between the detection performance of a detector, i.e., the probability of error, when the attacker is absent (defined as efficiency) and the worst-case detection performance when the attacker is present (defined as security). We first provide the fundamental limits of this trade-off, and then propose a detection strategy that achieves these limits. We then consider a special case, where there is no trade-off between security and efficiency. In other words, our detection strategy can achieve the maximal efficiency and the maximal security simultaneously. Two extensions of the secure hypothesis testing problem are also studied and fundamental limits and achievability results are provided: 1) a subset of sensors, namely "secure" sensors, are assumed to be equipped with better security countermeasures and hence are guaranteed to be benign; 2) detection performance with an unknown number of compromised sensors. Numerical examples are given to illustrate the main results.
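As an illustrative sketch only (the paper derives its own optimal detector; the trimming rule below is a generic robustness heuristic, not the authors' strategy), a log-likelihood-ratio test over Gaussian sensor measurements, with optional trimming to bound the influence of a few arbitrarily manipulated sensors, might look like:

```python
def llr_gaussian(y, mu0=0.0, mu1=1.0, sigma=1.0):
    """Per-sensor log-likelihood ratio for H1: N(mu1, sigma^2) vs H0: N(mu0, sigma^2)."""
    return ((y - mu0) ** 2 - (y - mu1) ** 2) / (2.0 * sigma ** 2)

def detect(measurements, threshold=0.0, trim=0):
    """Decide H1 (return 1) or H0 (return 0) from sensor measurements.

    Discarding the `trim` largest and `trim` smallest per-sensor LLRs is a
    simple way to cap how far a few compromised sensors can move the sum.
    """
    llrs = sorted(llr_gaussian(y) for y in measurements)
    if trim:
        llrs = llrs[trim:-trim]
    return 1 if sum(llrs) > threshold else 0

# One compromised sensor injects an extreme reading under H0:
# untrimmed, it forces a false alarm; trimmed, the decision is restored.
assert detect([0.1, -0.2, 0.0, 25.0]) == 1
assert detect([0.1, -0.2, 0.0, 25.0], trim=1) == 0
```

Trimming discards information from benign sensors (lower efficiency) in exchange for bounded worst-case damage (higher security), which mirrors the efficiency-security trade-off studied in the paper.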
Age Dedifferentiation Hypothesis: Evidence from the WAIS III.
ERIC Educational Resources Information Center
Juan-Espinosa, Manuel; Garcia, Luis F.; Escorial, Sergio; Rebollo, Irene; Colom, Roberto; Abad, Francisco J.
2002-01-01
Used the Spanish standardization of the Wechsler Adult Intelligence Scale III (WAIS III) (n=1,369) to test the age dedifferentiation hypothesis. Results show no changes in the percentage of variance accounted for by "g" and four group factors when restriction of range is controlled. Discusses an age indifferentiation hypothesis. (SLD)
ERIC Educational Resources Information Center
Charalambous, Charalambos Y.; Kyriakides, Ermis
2017-01-01
For years scholars have attended to either generic or content-specific teaching practices attempting to understand instructional quality and its effects on student learning. Drawing on the TIMSS 2007 and 2011 databases, this exploratory study empirically tests the hypothesis that attending to both types of practices can help better explain student…
Winston P. Smith; Scott M. Gende; Jeffrey V. Nichols
2005-01-01
Management indicator species (MIS) often are selected because their life history and demographics are thought to reflect a suite of ecosystem conditions that are too difficult or costly to measure directly. The northern flying squirrel (Glaucomys sabrinus) has been proposed as an MIS of temperate rain forest of southeastern Alaska based on previous...
Predictors of Study Success from a Teacher's Perspective of the Quality of the Built Environment
ERIC Educational Resources Information Center
Kok, Herman; Mobach, Mark; Omta, Onno
2015-01-01
The article aims to find predictors of study success from a teacher's perspective that relate to the built environment. The research is based on a national online survey among 1752 teachers at 18 Dutch Universities of Applied Sciences. Multivariate data analyses were used to test the hypothesis that the quality of spatial and functional aspects at…
ERIC Educational Resources Information Center
Montani, Francesco; Battistelli, Adalgisa; Odoardi, Carlo
2017-01-01
Building on goal-regulation theory, we develop and test the hypothesis that proactive goal generation fosters individual innovative work behavior. Consistent with a resource-based perspective, we further examine two three-way interactions to assess whether the link between proactive goal generation and innovative behavior is jointly moderated by…
ERIC Educational Resources Information Center
Tomczak, Ewa; Ewert, Anna
2015-01-01
We examine cross-linguistic influence in the processing of motion sentences by L2 users from an embodied cognition perspective. The experiment employs a priming paradigm to test two hypotheses based on previous action and motion research in cognitive psychology. The first hypothesis maintains that conceptual representations of motion are embodied…
ERIC Educational Resources Information Center
Rolison, Jonathan J.; Evans, Jonathan St. B. T.; Dennis, Ian; Walsh, Clare R.
2012-01-01
Multiple cue probability learning (MCPL) involves learning to predict a criterion based on a set of novel cues when feedback is provided in response to each judgment made. But to what extent does MCPL require controlled attention and explicit hypothesis testing? The results of two experiments show that this depends on cue polarity. Learning about…
USDA-ARS?s Scientific Manuscript database
The accuracy and precision of the Horsfall-Barratt (H-B) scale has been questioned, and some of the psychophysical law on which it is based has been found to be inappropriate. It has not been demonstrated whether use of the H-B scale systematically affects the outcome of hypothesis testing. A simulation mode...
ERIC Educational Resources Information Center
Green, Dido; Lingam, Raghu; Mattocks, Calum; Riddoch, Chris; Ness, Andy; Emond, Alan
2011-01-01
The aim of the current study was to test the hypothesis that children with probable Developmental Coordination Disorder have an increased risk of reduced moderate to vigorous physical activity (MVPA), using data from a large population based study. Prospectively collected data from 4331 children (boys = 2065, girls = 2266) who had completed motor…