ON THE SUBJECT OF HYPOTHESIS TESTING
Ugoni, Antony
1993-01-01
In this paper, the definition of a statistical hypothesis is discussed, along with the considerations that need to be addressed when testing a hypothesis. In particular, the p-value, significance level, and power of a test are reviewed. Finally, the often-quoted confidence interval is given a brief introduction. PMID:17989768
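To make the reviewed quantities concrete, here is a normal-approximation sketch of power for a two-sample comparison. This is my own illustration, not from the paper; the function names and numbers are invented.

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def two_sample_power(delta, sd, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided two-sample z test (significance
    level 5% when z_crit = 1.96) to detect a true mean difference delta,
    assuming a common standard deviation sd and equal group sizes."""
    se = sd * math.sqrt(2.0 / n_per_group)
    shift = delta / se
    # Probability the standardized estimate lands beyond either critical value.
    return normal_cdf(shift - z_crit) + normal_cdf(-shift - z_crit)

# Classic benchmark: detecting a half-standard-deviation difference
# with 64 subjects per group gives roughly 80% power.
power = two_sample_power(delta=0.5, sd=1.0, n_per_group=64)
```

Note that with delta = 0 the same formula returns the significance level itself, which is one way to see that power and the significance level are two faces of the same sampling distribution.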
One-way ANOVA based on interval information
NASA Astrophysics Data System (ADS)
Hesamian, Gholamreza
2016-08-01
This paper extends one-way analysis of variance (ANOVA) to the case where the observed data are closed intervals rather than real numbers. First, a notion of interval random variable is introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the assumption of homogeneous interval variances. Moreover, the least significant difference (LSD) method for multiple comparisons of interval means is developed for use when the null hypothesis of equal means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic with the related interval critical value as a criterion for accepting or rejecting the null interval hypothesis of interest. Finally, the decision-making method yields degrees of acceptance or rejection of the interval hypotheses. An applied example illustrates the performance of the method.
ERIC Educational Resources Information Center
Wilcox, Rand R.; Serang, Sarfaraz
2017-01-01
The article provides perspectives on p values, null hypothesis testing, and alternative techniques in light of modern robust statistical methods. Null hypothesis testing and "p" values can provide useful information provided they are interpreted in a sound manner, which includes taking into account insights and advances that have…
Estimating equivalence with quantile regression
Cade, B.S.
2011-01-01
Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based on both inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.
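A simplified two-group analogue of the quantile-equivalence idea can be sketched as follows. This is my own sketch under invented conventions (the paper works with quantile regression estimates and one-tailed intervals): declare equivalence at quantile q when a bootstrap confidence interval for the quantile difference lies entirely inside a pre-specified equivalence region.

```python
import numpy as np

def quantile_equivalence(x, y, q=0.5, region=(-1.0, 1.0),
                         n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the difference in the q-th quantile
    between groups x and y; equivalence is declared when the whole CI
    falls inside the pre-specified equivalence region."""
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=len(x), replace=True)
        yb = rng.choice(y, size=len(y), replace=True)
        diffs[b] = np.quantile(xb, q) - np.quantile(yb, q)
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return (lo, hi), bool(region[0] < lo and hi < region[1])
```

The equivalence region is a scientific judgment made before seeing the data, which is the point of the approach: the burden of proof is to show the difference is negligibly small, not merely non-significant.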
Confidence intervals for single-case effect size measures based on randomization test inversion.
Michiels, Bart; Heyvaert, Mieke; Meulders, Ann; Onghena, Patrick
2017-02-01
In the current paper, we present a method to construct nonparametric confidence intervals (CIs) for single-case effect size measures in the context of various single-case designs. We use the relationship between a two-sided statistical hypothesis test at significance level α and a 100 (1 - α) % two-sided CI to construct CIs for any effect size measure θ that contain all point null hypothesis θ values that cannot be rejected by the hypothesis test at significance level α. This method of hypothesis test inversion (HTI) can be employed using a randomization test as the statistical hypothesis test in order to construct a nonparametric CI for θ. We will refer to this procedure as randomization test inversion (RTI). We illustrate RTI in a situation in which θ is the unstandardized and the standardized difference in means between two treatments in a completely randomized single-case design. Additionally, we demonstrate how RTI can be extended to other types of single-case designs. Finally, we discuss a few challenges for RTI as well as possibilities when using the method with other effect size measures, such as rank-based nonoverlap indices. Supplementary to this paper, we provide easy-to-use R code, which allows the user to construct nonparametric CIs according to the proposed method.
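The inversion described above can be sketched in a few lines: shift one group by a candidate effect theta, run a two-sided randomization test, and keep every theta whose point null hypothesis cannot be rejected at level alpha. This is a minimal sketch with function names and grid choices of my own; the authors' supplementary R code is the authoritative implementation.

```python
import numpy as np

def randomization_p(x, y, rng, n_perm=499):
    """Two-sided randomization test p-value for a difference in means."""
    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        stat = perm[:len(x)].mean() - perm[len(x):].mean()
        if abs(stat) >= abs(observed) - 1e-12:
            hits += 1
    return (hits + 1) / (n_perm + 1)

def rti_interval(x, y, alpha=0.05, n_grid=41, seed=0):
    """Randomization test inversion (RTI): the CI is the set of shifts
    theta for which H0: effect = theta is NOT rejected at level alpha."""
    rng = np.random.default_rng(seed)
    d = x.mean() - y.mean()
    s = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    grid = np.linspace(d - 4 * s, d + 4 * s, n_grid)
    kept = [t for t in grid if randomization_p(x - t, y, rng) > alpha]
    return min(kept), max(kept)
```

Because the test at theta equal to the observed difference always has a p-value near one, the kept set is never empty, and no distributional assumptions enter beyond the randomization itself.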
ERIC Educational Resources Information Center
Tryon, Warren W.; Lewis, Charles
2008-01-01
Evidence of group matching frequently takes the form of a nonsignificant test of statistical difference. Theoretical hypotheses of no difference are also tested in this way. These practices are flawed in that null hypothesis statistical testing provides evidence against the null hypothesis and failing to reject H[subscript 0] is not evidence…
Fagerland, Morten W; Sandvik, Leiv; Mowinckel, Petter
2011-04-13
The number of events per individual is a widely reported variable in medical research papers. Such variables are the most common representation of the general variable type called discrete numerical. There is currently no consensus on how to compare and present such variables, and recommendations are lacking. The objective of this paper is to present recommendations for analysis and presentation of results for discrete numerical variables. Two simulation studies were used to investigate the performance of hypothesis tests and confidence interval methods for variables with outcomes {0, 1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}, and {0, 1, 2, 3, 4, 5}, using the difference between the means as an effect measure. The Welch U test (the T test with adjustment for unequal variances) and its associated confidence interval performed well for almost all situations considered. The Brunner-Munzel test also performed well, except for small sample sizes (10 in each group). The ordinary T test, the Wilcoxon-Mann-Whitney test, the percentile bootstrap interval, and the bootstrap-t interval did not perform satisfactorily. The difference between the means is an appropriate effect measure for comparing two independent discrete numerical variables that have both lower and upper bounds. To analyze this problem, we encourage more frequent use of parametric hypothesis tests and confidence intervals.
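The recommended Welch procedure (the T test with unequal-variance adjustment, plus its matching confidence interval) can be sketched as follows; function names are mine, and the Welch-Satterthwaite degrees of freedom are the standard formula.

```python
import numpy as np
from scipy import stats

def welch_test_and_ci(x, y, alpha=0.05):
    """Welch T test (unequal variances) for a difference in means,
    with the matching two-sided (1 - alpha) confidence interval."""
    nx, ny = len(x), len(y)
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    diff = np.mean(x) - np.mean(y)
    se = np.sqrt(vx / nx + vy / ny)
    # Welch-Satterthwaite approximation to the degrees of freedom.
    df = (vx / nx + vy / ny) ** 2 / (
        (vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    t = diff / se
    p = 2 * stats.t.sf(abs(t), df)
    half = stats.t.ppf(1 - alpha / 2, df) * se
    return p, (diff - half, diff + half)
```

The p-value agrees with `scipy.stats.ttest_ind(x, y, equal_var=False)`; the added value here is the interval on the mean difference, which is the effect measure the paper recommends reporting.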
Biostatistics Series Module 2: Overview of Hypothesis Testing.
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Hypothesis testing (or statistical inference) is one of the major applications of biostatistics. Much of medical research begins with a research question that can be framed as a hypothesis. Inferential statistics begins with a null hypothesis that reflects the conservative position of no change or no difference in comparison to baseline or between groups. Usually, the researcher has reason to believe that there is some effect or some difference which is the alternative hypothesis. The researcher therefore proceeds to study samples and measure outcomes in the hope of generating evidence strong enough for the statistician to be able to reject the null hypothesis. The concept of the P value is almost universally used in hypothesis testing. It denotes the probability of obtaining by chance a result at least as extreme as that observed, even when the null hypothesis is true and no real difference exists. Usually, if P is < 0.05 the null hypothesis is rejected and sample results are deemed statistically significant. With the increasing availability of computers and access to specialized statistical software, the drudgery involved in statistical calculations is now a thing of the past, once the learning curve of the software has been traversed. The life sciences researcher is therefore free to devote oneself to optimally designing the study, carefully selecting the hypothesis tests to be applied, and taking care in conducting the study well. Unfortunately, selecting the right test seems difficult initially. Thinking of the research hypothesis as addressing one of five generic research questions helps in selection of the right hypothesis test. In addition, it is important to be clear about the nature of the variables (e.g., numerical vs. categorical; parametric vs. nonparametric) and the number of groups or data sets being compared (e.g., two or more than two) at a time. The same research question may be explored by more than one type of hypothesis test. 
While this may be of utility in highlighting different aspects of the problem, merely reapplying different tests to the same issue in the hope of finding a P < 0.05 is a wrong use of statistics. Finally, it is becoming the norm that an estimate of the size of any effect, expressed with its 95% confidence interval, is required for meaningful interpretation of results. A large study is likely to have a small (and therefore "statistically significant") P value, but a "real" estimate of the effect would be provided by the 95% confidence interval. If the intervals overlap between two interventions, then the difference between them is not so clear-cut even if P < 0.05. The two approaches are now considered complementary to one another.
PMID:27057011
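The trade-off just described, where a huge sample makes a clinically trivial effect "statistically significant" while the 95% confidence interval exposes how small it is, can be shown numerically. This is a normal-approximation sketch with invented numbers, not data from the module.

```python
import math

def one_sample_z(mean_diff, sd, n, z_crit=1.96):
    """Normal-approximation two-sided p-value and 95% CI for a mean difference."""
    se = sd / math.sqrt(n)
    z = mean_diff / se
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided tail probability
    return p, (mean_diff - z_crit * se, mean_diff + z_crit * se)

# With n = 100,000 a 0.02-unit difference is highly "significant",
# yet the CI shows the whole plausible effect is tiny.
p, ci = one_sample_z(mean_diff=0.02, sd=1.0, n=100_000)
```

Here the p-value alone would be misleading; the interval makes clear that even the upper limit of the effect is negligible, which is exactly why the two approaches are complementary.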
Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient
ERIC Educational Resources Information Center
Krishnamoorthy, K.; Xia, Yanping
2008-01-01
The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
A test of the reward-value hypothesis.
Smith, Alexandra E; Dalecki, Stefan J; Crystal, Jonathon D
2017-03-01
Rats retain source memory (memory for the origin of information) over a retention interval of at least 1 week, whereas their spatial working memory (radial maze locations) decays within approximately 1 day. We have argued that different forgetting functions dissociate memory systems. However, the two tasks, in our previous work, used different reward values. The source memory task used multiple pellets of a preferred food flavor (chocolate), whereas the spatial working memory task provided access to a single pellet of standard chow-flavored food at each location. Thus, according to the reward-value hypothesis, enhanced performance in the source memory task stems from enhanced encoding/memory of a preferred reward. We tested the reward-value hypothesis by using a standard 8-arm radial maze task to compare spatial working memory accuracy of rats rewarded with either multiple chocolate or chow pellets at each location using a between-subjects design. The reward-value hypothesis predicts superior accuracy for high-valued rewards. We documented equivalent spatial memory accuracy for high- and low-value rewards. Importantly, a 24-h retention interval produced equivalent spatial working memory accuracy for both flavors. These data are inconsistent with the reward-value hypothesis and suggest that reward value does not explain our earlier findings that source memory survives unusually long retention intervals.
A test of the reward-contrast hypothesis.
Dalecki, Stefan J; Panoz-Brown, Danielle E; Crystal, Jonathon D
2017-12-01
Source memory, a facet of episodic memory, is the memory of the origin of information. Whereas source memory in rats is sustained for at least a week, spatial memory degrades after approximately a day. Different forgetting functions may suggest that two memory systems (source memory and spatial memory) are dissociated. However, in previous work, the two tasks used baiting conditions consisting of chocolate and chow flavors; notably, the source memory task used the relatively better flavor. Thus, according to the reward-contrast hypothesis, when chocolate and chow were presented within the same context (i.e., within a single radial maze trial), the chocolate location was more memorable than the chow location because of contrast. We tested the reward-contrast hypothesis using baiting configurations designed to produce reward contrast. The reward-contrast hypothesis predicts that under these conditions, spatial memory will survive a 24-h retention interval. We documented elimination of spatial memory performance after a 24-h retention interval using a reward-contrast baiting pattern. These data suggest that reward contrast does not explain our earlier findings that source memory survives unusually long retention intervals. Copyright © 2017 Elsevier B.V. All rights reserved.
A Bayesian bird's eye view of ‘Replications of important results in social psychology’
Schönbrodt, Felix D.; Yao, Yuling; Gelman, Andrew; Wagenmakers, Eric-Jan
2017-01-01
We applied three Bayesian methods to reanalyse the preregistered contributions to the Social Psychology special issue ‘Replications of Important Results in Social Psychology’ (Nosek & Lakens, 2014 Registered reports: a method to increase the credibility of published results. Soc. Psychol. 45, 137–141. (doi:10.1027/1864-9335/a000192)). First, individual-experiment Bayesian parameter estimation revealed that for directed effect size measures, only three out of 44 central 95% credible intervals did not overlap with zero and fell in the expected direction. For undirected effect size measures, only four out of 59 credible intervals contained values greater than 0.10 (10% of variance explained) and only 19 intervals contained values larger than 0.05. Second, a Bayesian random-effects meta-analysis for all 38 t-tests showed that only one out of the 38 hierarchically estimated credible intervals did not overlap with zero and fell in the expected direction. Third, a Bayes factor hypothesis test was used to quantify the evidence for the null hypothesis against a default one-sided alternative. Only seven out of 60 Bayes factors indicated non-anecdotal support in favour of the alternative hypothesis (BF10>3), whereas 51 Bayes factors indicated at least some support for the null hypothesis. We hope that future analyses of replication success will embrace a more inclusive statistical approach by adopting a wider range of complementary techniques. PMID:28280547
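One common way to compute a Bayes factor for a point null is the Savage-Dickey density ratio, shown here only as a hedged sketch: the authors used a default one-sided alternative, whereas this toy version uses a symmetric normal prior and a normal likelihood, with all numbers illustrative.

```python
import math

def normal_pdf(x, mean, sd):
    """Density of N(mean, sd^2) at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def savage_dickey_bf01(effect, se, prior_sd=1.0):
    """Bayes factor BF01 for H0: delta = 0 via the Savage-Dickey ratio:
    posterior density at zero divided by prior density at zero, under a
    N(0, prior_sd^2) prior and a normal likelihood N(effect, se^2)."""
    post_var = 1.0 / (1.0 / prior_sd ** 2 + 1.0 / se ** 2)
    post_mean = post_var * effect / se ** 2
    return normal_pdf(0.0, post_mean, math.sqrt(post_var)) / normal_pdf(0.0, 0.0, prior_sd)
```

A precisely estimated effect near zero concentrates posterior mass at zero and yields BF01 well above one (support for the null), while a large, precise effect drives BF01 toward zero (support for the alternative), mirroring the two directions of evidence tallied in the abstract.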
Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling
ERIC Educational Resources Information Center
Banjanovic, Erin S.; Osborne, Jason W.
2016-01-01
Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…
Kruschke, John K; Liddell, Torrin M
2018-02-01
In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.
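As a minimal illustration of Bayesian estimation with a credible interval (my own sketch, not code from the article): for a binomial proportion under a Beta prior, the posterior is conjugate, so a central credible interval is just two quantile evaluations.

```python
from scipy import stats

def beta_credible_interval(successes, failures, mass=0.95, a=1.0, b=1.0):
    """Central credible interval for a binomial proportion under a
    Beta(a, b) prior (uniform by default); the posterior is
    Beta(a + successes, b + failures)."""
    post = stats.beta(a + successes, b + failures)
    tail = (1.0 - mass) / 2.0
    return post.ppf(tail), post.ppf(1.0 - tail)
```

Unlike a frequentist confidence interval, this interval admits the direct reading the New Statistics asks for: given the prior and data, the parameter lies inside it with 95% posterior probability.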
ERIC Educational Resources Information Center
Hoekstra, Rink; Johnson, Addie; Kiers, Henk A. L.
2012-01-01
The use of confidence intervals (CIs) as an addition or as an alternative to null hypothesis significance testing (NHST) has been promoted as a means to make researchers more aware of the uncertainty that is inherent in statistical inference. Little is known, however, about whether presenting results via CIs affects how readers judge the…
Interval timing in genetically modified mice: a simple paradigm
Balci, F.; Papachristos, E. B.; Gallistel, C. R.; Brunner, D.; Gibson, J.; Shumyatsky, G. P.
2009-01-01
We describe a behavioral screen for the quantitative study of interval timing and interval memory in mice. Mice learn to switch from a short-latency feeding station to a long-latency station when the short latency has passed without a feeding. The psychometric function is the cumulative distribution of switch latencies. Its median measures timing accuracy and its interquartile interval measures timing precision. Next, using this behavioral paradigm, we have examined mice with a gene knockout of the receptor for gastrin-releasing peptide that show enhanced (i.e. prolonged) freezing in fear conditioning. We have tested the hypothesis that the mutants freeze longer because they are more uncertain than wild types about when to expect the electric shock. The knockouts however show normal accuracy and precision in timing, so we have rejected this alternative hypothesis. Last, we conduct the pharmacological validation of our behavioral screen using D-amphetamine and methamphetamine. We suggest including the analysis of interval timing and temporal memory in tests of genetically modified mice for learning and memory and argue that our paradigm allows this to be done simply and efficiently. PMID:17696995
Genome-wide detection of intervals of genetic heterogeneity associated with complex traits
Llinares-López, Felipe; Grimm, Dominik G.; Bodenham, Dean A.; Gieraths, Udo; Sugiyama, Mahito; Rowan, Beth; Borgwardt, Karsten
2015-01-01
Motivation: Genetic heterogeneity, the fact that several sequence variants give rise to the same phenotype, is a phenomenon that is of the utmost interest in the analysis of complex phenotypes. Current approaches for finding regions in the genome that exhibit genetic heterogeneity suffer from at least one of two shortcomings: (i) they require the definition of an exact interval in the genome that is to be tested for genetic heterogeneity, potentially missing intervals of high relevance, or (ii) they suffer from an enormous multiple hypothesis testing problem due to the large number of potential candidate intervals being tested, which results in either many false positives or a lack of power to detect true intervals. Results: Here, we present an approach that overcomes both problems: it allows one to automatically find all contiguous sequences of single nucleotide polymorphisms in the genome that are jointly associated with the phenotype. It also solves both the inherent computational efficiency problem and the statistical problem of multiple hypothesis testing, which are both caused by the huge number of candidate intervals. We demonstrate on Arabidopsis thaliana genome-wide association study data that our approach can discover regions that exhibit genetic heterogeneity and would be missed by single-locus mapping. Conclusions: Our novel approach can contribute to the genome-wide discovery of intervals that are involved in the genetic heterogeneity underlying complex phenotypes. Availability and implementation: The code can be obtained at: http://www.bsse.ethz.ch/mlcb/research/bioinformatics-and-computational-biology/sis.html. Contact: felipe.llinares@bsse.ethz.ch Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26072488
Evaluating the Equal-Interval Hypothesis with Test Score Scales
ERIC Educational Resources Information Center
Domingue, Benjamin Webre
2012-01-01
In psychometrics, it is difficult to verify that measurement instruments can be used to produce numeric values with the desirable property that differences between units are equal-interval because the attributes being measured are latent. The theory of additive conjoint measurement (e.g., Krantz, Luce, Suppes, & Tversky, 1971, ACM) guarantees…
Test-Retest Gains in WAIS Scores after Four Retest Intervals.
ERIC Educational Resources Information Center
Catron, David W.; Thompson, Claudia C.
1979-01-01
To analyze the hypothesis that retest gain scores would decrease in a decelerating curve to zero-gain as the retest interval increased, male college students were administered the WAIS on two occasions with a retest at either 1 week, 2 months, or 4 months. (Author/SJL)
New Approaches to Robust Confidence Intervals for Location: A Simulation Study.
1984-06-01
obtain a denominator for the test statistic. Those statistics based on location estimates derived from Hampel’s redescending influence function or v...defined an influence function for a test in terms of the behavior of its P-values when the data are sampled from a model distribution modified by point...proposal could be used for interval estimation as well as hypothesis testing, the extension is immediate. Once an influence function has been defined
Testing 40 Predictions from the Transtheoretical Model Again, with Confidence
ERIC Educational Resources Information Center
Velicer, Wayne F.; Brick, Leslie Ann D.; Fava, Joseph L.; Prochaska, James O.
2013-01-01
Testing Theory-based Quantitative Predictions (TTQP) represents an alternative to traditional Null Hypothesis Significance Testing (NHST) procedures and is more appropriate for theory testing. The theory generates explicit effect size predictions and these effect size estimates, with related confidence intervals, are used to test the predictions.…
[Experimental testing of Pflüger's reflex hypothesis of menstruation in late 19th century].
Simmer, H H
1980-07-01
Pflüger's hypothesis of a nerve reflex as the cause of menstruation, published in 1865 and accepted by many, nonetheless did not lead to experimental investigations for 25 years. According to this hypothesis, the nerve reflex starts in the ovary with an increase in intraovarian pressure caused by the growing follicles. In 1884 Adolph Kehrer proposed a program to test the nerve reflex, but not until 1890 did Cohnstein artificially increase the intraovarian pressure in women by bimanual compression from the outside and the vagina. His results were not convincing. Six years later, Strassmann injected fluids into the ovaries of animals and obtained changes in the uterus resembling those of oestrus. His results seemed to verify a prediction derived from Pflüger's hypothesis. Thus, after a long interval, that hypothesis had become a paradigm. Though reasons can be given for the delay, it is little understood why experimental testing started so late.
Bayes Factor Approaches for Testing Interval Null Hypotheses
ERIC Educational Resources Information Center
Morey, Richard D.; Rouder, Jeffrey N.
2011-01-01
Psychological theories are statements of constraint. The role of hypothesis testing in psychology is to test whether specific theoretical constraints hold in data. Bayesian statistics is well suited to the task of finding supporting evidence for constraint, because it allows for comparing evidence for 2 hypotheses against one another. One issue…
Saraf, Sanatan; Mathew, Thomas; Roy, Anindya
2015-01-01
For the statistical validation of surrogate endpoints, an alternative formulation is proposed for testing Prentice's fourth criterion, under a bivariate normal model. In such a setup, the criterion involves inference concerning an appropriate regression parameter, and the criterion holds if the regression parameter is zero. Testing such a null hypothesis has been criticized in the literature since it can only be used to reject a poor surrogate, and not to validate a good surrogate. In order to circumvent this, an equivalence hypothesis is formulated for the regression parameter, namely the hypothesis that the parameter is equivalent to zero. Such an equivalence hypothesis is formulated as an alternative hypothesis, so that the surrogate endpoint is statistically validated when the null hypothesis is rejected. Confidence intervals for the regression parameter and tests for the equivalence hypothesis are proposed using bootstrap methods and small sample asymptotics, and their performances are numerically evaluated and recommendations are made. The choice of the equivalence margin is a regulatory issue that needs to be addressed. The proposed equivalence testing formulation is also adopted for other parameters that have been proposed in the literature on surrogate endpoint validation, namely, the relative effect and proportion explained.
ERIC Educational Resources Information Center
Hogan, Lindsey C.; Bell, Matthew; Olson, Ryan
2009-01-01
The vigilance reinforcement hypothesis (VRH) asserts that errors in signal detection tasks are partially explained by operant reinforcement and extinction processes. VRH predictions were tested with a computerized baggage screening task. Our experiment evaluated the effects of signal schedule (extinction vs. variable interval 6 min) and visual…
Accelerated spike resampling for accurate multiple testing controls.
Harrison, Matthew T
2013-02-01
Controlling for multiple hypothesis tests using standard spike resampling techniques often requires prohibitive amounts of computation. Importance sampling techniques can be used to accelerate the computation. The general theory is presented, along with specific examples for testing differences across conditions using permutation tests and for testing pairwise synchrony and precise lagged-correlation between many simultaneously recorded spike trains using interval jitter.
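A toy version of the interval-jitter idea for testing pairwise synchrony might look like the following. The window width, coincidence tolerance, and synchrony statistic are illustrative choices of my own, not the paper's; each surrogate moves every spike to a uniform position inside its own jitter window, which preserves coarse rate structure while destroying fine timing.

```python
import numpy as np

def synchrony_count(train_a, train_b, tol=0.005):
    """Number of spikes in train_a with a partner in train_b within tol seconds."""
    b = np.sort(train_b)
    idx = np.searchsorted(b, train_a)
    count = 0
    for t, i in zip(train_a, idx):
        near = []
        if i < len(b):
            near.append(abs(b[i] - t))
        if i > 0:
            near.append(abs(b[i - 1] - t))
        if near and min(near) <= tol:
            count += 1
    return count

def jitter_p_value(train_a, train_b, width=0.02, n_surr=999, seed=0):
    """Interval-jitter p-value for the observed synchrony count."""
    rng = np.random.default_rng(seed)
    observed = synchrony_count(train_a, train_b)
    windows = np.floor(train_a / width)  # each spike keeps its own window
    exceed = 0
    for _ in range(n_surr):
        surr = (windows + rng.random(len(train_a))) * width
        if synchrony_count(surr, train_b) >= observed:
            exceed += 1
    return (exceed + 1) / (n_surr + 1)
```

Running this surrogate loop for every pair of trains is exactly the computation that becomes prohibitive at scale, which is the cost the paper's importance-sampling acceleration targets.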
ERIC Educational Resources Information Center
Wagstaff, David A.; Elek, Elvira; Kulis, Stephen; Marsiglia, Flavio
2009-01-01
A nonparametric bootstrap was used to obtain an interval estimate of Pearson's "r," and test the null hypothesis that there was no association between 5th grade students' positive substance use expectancies and their intentions to not use substances. The students were participating in a substance use prevention program in which the unit of…
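The pair-resampling bootstrap for Pearson's r described above can be sketched as follows (function names mine; the percentile interval is shown, though other bootstrap intervals exist). The null hypothesis of no association is rejected at level alpha when the interval excludes zero.

```python
import numpy as np

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

def bootstrap_r_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for Pearson's r: resample (x, y) PAIRS
    with replacement, keeping each pair intact."""
    rng = np.random.default_rng(seed)
    n = len(x)
    rs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        rs[b] = pearson_r(x[idx], y[idx])
    return np.quantile(rs, [alpha / 2, 1 - alpha / 2])
```

Resampling pairs rather than individual values is what makes the interval nonparametric with respect to the joint distribution; in a clustered prevention trial like the one described, resampling would instead follow the unit of assignment.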
Chiba, Yasutaka
2017-09-01
Fisher's exact test is commonly used to compare two groups when the outcome is binary in randomized trials. In the context of causal inference, this test explores the sharp causal null hypothesis (i.e. the causal effect of treatment is the same for all subjects), but not the weak causal null hypothesis (i.e. the causal risks are the same in the two groups). Therefore, in general, rejection of the null hypothesis by Fisher's exact test does not mean that the causal risk difference is not zero. Recently, Chiba (Journal of Biometrics and Biostatistics 2015; 6: 244) developed a new exact test for the weak causal null hypothesis when the outcome is binary in randomized trials; the new test is not based on any large sample theory and does not require any assumption. In this paper, we extend the new test; we create a version of the test applicable to a stratified analysis. The stratified exact test that we propose is general in nature and can be used in several approaches toward the estimation of treatment effects after adjusting for stratification factors. The stratified Fisher's exact test of Jung (Biometrical Journal 2014; 56: 129-140) tests the sharp causal null hypothesis. This test applies a crude estimator of the treatment effect and can be regarded as a special case of our proposed exact test. Our proposed stratified exact test can be straightforwardly extended to analysis of noninferiority trials and to construct the associated confidence interval. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
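For reference, the standard Fisher's exact test (which, as the abstract notes, addresses the sharp causal null rather than the weak one) is a one-liner with scipy; the table below is invented, and Chiba's stratified exact test itself is not implemented here.

```python
from scipy import stats

# Hypothetical 2x2 table: rows = treatment/control, columns = event/no event.
table = [[12, 3],
         [4, 11]]

# Returns the sample odds ratio (ad/bc) and the exact two-sided p-value.
odds_ratio, p_value = stats.fisher_exact(table, alternative="two-sided")
```

Rejection here says only that treatment cannot have had an identical effect on every subject; concluding that the causal risk difference is nonzero requires a test of the weak null, which is the gap the proposed stratified exact test addresses.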
Glacial Cycles Influence Marine Methane Hydrate Formation
NASA Astrophysics Data System (ADS)
Malinverno, A.; Cook, A. E.; Daigle, H.; Oryan, B.
2018-01-01
Methane hydrates in fine-grained continental slope sediments often occupy isolated depth intervals surrounded by hydrate-free sediments. As they are not connected to deep gas sources, these hydrate deposits have been interpreted as sourced by in situ microbial methane. We investigate here the hypothesis that these isolated hydrate accumulations form preferentially in sediments deposited during Pleistocene glacial lowstands that contain relatively large amounts of labile particulate organic carbon, leading to enhanced microbial methanogenesis. To test this hypothesis, we apply an advection-diffusion-reaction model with a time-dependent organic carbon deposition controlled by glacioeustatic sea level variations. In the model, hydrate forms in sediments with greater organic carbon content deposited during the penultimate glacial cycle (~120-240 ka). The model predictions match hydrate-bearing intervals detected in three sites drilled on the northern Gulf of Mexico continental slope, supporting the hypothesis of hydrate formation driven by enhanced organic carbon burial during glacial lowstands.
Rahman, Nafisur; Kashif, Mohammad
2010-03-01
Point and interval hypothesis tests performed to validate two simple and economical, kinetic spectrophotometric methods for the assay of lansoprazole are described. The methods are based on the formation of chelate complexes of the drug with Fe(III) and Zn(II). The reaction is followed spectrophotometrically by measuring the rate of change of absorbance of coloured chelates of the drug with Fe(III) and Zn(II) at 445 and 510 nm, respectively. The stoichiometric ratios of the lansoprazole-Fe(III) and lansoprazole-Zn(II) complexes were found to be 1:1 and 2:1, respectively. The initial-rate and fixed-time methods are adopted for determination of drug concentrations. The calibration graphs are linear in the ranges 50-200 µg ml⁻¹ (initial-rate method) and 20-180 µg ml⁻¹ (fixed-time method) for the lansoprazole-Fe(III) complex, and 120-300 µg ml⁻¹ (initial-rate method) and 90-210 µg ml⁻¹ (fixed-time method) for the lansoprazole-Zn(II) complex. The inter-day and intra-day precision data showed good accuracy and precision of the proposed procedure for analysis of lansoprazole. The point and interval hypothesis tests indicate that the proposed procedures are not biased. Copyright © 2010 John Wiley & Sons, Ltd.
Hypothesis testing and earthquake prediction.
Jackson, D D
1996-04-30
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior", that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
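The first and third testing strategies can be illustrated with Poisson forecast checks. This is a generic sketch under an independent-Poisson-rates assumption, not code from the paper; the function names are illustrative.

```python
import math

def number_test_p(n_obs, n_pred):
    """Two-sided tail probability of observing n_obs events when the forecast
    predicts a Poisson mean of n_pred (test (i): comparing event counts)."""
    def pmf(k):
        return math.exp(-n_pred) * n_pred ** k / math.factorial(k)
    cdf = sum(pmf(k) for k in range(n_obs + 1))
    return min(1.0, 2.0 * min(cdf, 1.0 - cdf + pmf(n_obs)))

def log_likelihood_ratio(rates_h1, rates_h0, counts):
    """Test (iii): log-likelihood ratio of a forecast over the null hypothesis,
    with independent Poisson rates per space-time-magnitude bin."""
    return sum(k * math.log(l1 / l0) - (l1 - l0)
               for l1, l0, k in zip(rates_h1, rates_h0, counts))
```

A forecast identical to the null hypothesis yields a log-likelihood ratio of exactly zero, so a positive ratio quantifies the forecast's advantage.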
Decentralized Hypothesis Testing in Energy Harvesting Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Tarighati, Alla; Gross, James; Jalden, Joakim
2017-09-01
We consider the problem of decentralized hypothesis testing in a network of energy harvesting sensors, where sensors make noisy observations of a phenomenon and send quantized information about the phenomenon towards a fusion center. The fusion center makes a decision about the present hypothesis using the aggregate received data during a time interval. We explicitly consider a scenario under which the messages are sent through parallel access channels towards the fusion center. To avoid limited lifetime issues, we assume each sensor is capable of harvesting all the energy it needs for the communication from the environment. Each sensor has an energy buffer (battery) to save its harvested energy for use in other time intervals. Our key contribution is to formulate the problem of decentralized detection in a sensor network with energy harvesting devices. Our analysis is based on a queuing-theoretic model for the battery and we propose a sensor decision design method by considering long term energy management at the sensors. We show how the performance of the system changes for different battery capacities. We then numerically show how our findings can be used in the design of sensor networks with energy harvesting sensors.
Elaborating Selected Statistical Concepts with Common Experience.
ERIC Educational Resources Information Center
Weaver, Kenneth A.
1992-01-01
Presents ways of elaborating statistical concepts so as to make course material more meaningful for students. Describes examples using exclamations, circus and cartoon characters, and falling leaves to illustrate variability, null hypothesis testing, and confidence interval. Concludes that the exercises increase student comprehension of the text…
Performing Contrast Analysis in Factorial Designs: From NHST to Confidence Intervals and Beyond
Wiens, Stefan; Nilsson, Mats E.
2016-01-01
Because of the continuing debates about statistics, many researchers may feel confused about how to analyze and interpret data. Current guidelines in psychology advocate the use of effect sizes and confidence intervals (CIs). However, researchers may be unsure about how to extract effect sizes from factorial designs. Contrast analysis is helpful because it can be used to test specific questions of central interest in studies with factorial designs. It weighs several means and combines them into one or two sets that can be tested with t tests. The effect size produced by a contrast analysis is simply the difference between means. The CI of the effect size informs directly about direction, hypothesis exclusion, and the relevance of the effects of interest. However, any interpretation in terms of precision or likelihood requires the use of likelihood intervals or credible intervals (Bayesian). These various intervals and even a Bayesian t test can be obtained easily with free software. This tutorial reviews these methods to guide researchers in answering the following questions: When I analyze mean differences in factorial designs, where can I find the effects of central interest, and what can I learn about their effect sizes? PMID:29805179
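The contrast computation described above (weights applied to group means, tested as a mean difference with a CI) can be sketched in a few lines. The 1.96 critical value is a large-sample normal approximation, an assumption of this sketch; exact CIs use the t distribution at the pooled degrees of freedom.

```python
import math
import statistics as st

def contrast_ci(groups, weights, z_crit=1.96):
    """Contrast psi = sum(w_i * mean_i) with a pooled-variance standard error
    and an approximate large-sample confidence interval."""
    means = [st.mean(g) for g in groups]
    ns = [len(g) for g in groups]
    df = sum(ns) - len(groups)
    pooled_var = sum((n - 1) * st.variance(g) for g, n in zip(groups, ns)) / df
    psi = sum(w * m for w, m in zip(weights, means))        # the effect size
    se = math.sqrt(pooled_var * sum(w * w / n for w, n in zip(weights, ns)))
    return psi, (psi - z_crit * se, psi + z_crit * se)
```

With weights (1, -1) the contrast reduces to a simple difference between two group means, matching the tutorial's point that the effect size produced by a contrast is just a difference of means.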
Phillips, Alan; Fletcher, Chrissie; Atkinson, Gary; Channon, Eddie; Douiri, Abdel; Jaki, Thomas; Maca, Jeff; Morgan, David; Roger, James Henry; Terrill, Paul
2013-01-01
In May 2012, the Committee of Health and Medicinal Products issued a concept paper on the need to review the points to consider document on multiplicity issues in clinical trials. In preparation for the release of the updated guidance document, Statisticians in the Pharmaceutical Industry held a one-day expert group meeting in January 2013. Topics debated included multiplicity and the drug development process, the usefulness and limitations of newly developed strategies to deal with multiplicity, multiplicity issues arising from interim decisions and multiregional development, and the need for simultaneous confidence intervals (CIs) corresponding to multiple test procedures. A clear message from the meeting was that multiplicity adjustments need to be considered when the intention is to make a formal statement about efficacy or safety based on hypothesis tests. Statisticians have a key role when designing studies to assess what adjustment really means in the context of the research being conducted. More thought during the planning phase needs to be given to multiplicity adjustments for secondary endpoints given these are increasing in importance in differentiating products in the market place. No consensus was reached on the role of simultaneous CIs in the context of superiority trials. It was argued that unadjusted intervals should be employed as the primary purpose of the intervals is estimation, while the purpose of hypothesis testing is to formally establish an effect. The opposing view was that CIs should correspond to the test decision whenever possible. Copyright © 2013 John Wiley & Sons, Ltd.
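One standard multiplicity adjustment of the kind debated at the meeting is Holm's step-down procedure, which controls the familywise error rate with no distributional assumptions. A minimal sketch producing adjusted p-values:

```python
def holm_adjust(pvals):
    """Holm step-down adjusted p-values: the k-th smallest p-value is
    multiplied by (m - k + 1), with monotonicity enforced and capping at 1."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    running = 0.0
    for rank, i in enumerate(order):
        running = max(running, (m - rank) * pvals[i])  # never decrease
        adj[i] = min(1.0, running)
    return adj
```

Rejecting hypotheses whose adjusted p-value falls below the nominal level is equivalent to the usual sequential Holm test, and is uniformly more powerful than a plain Bonferroni correction.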
Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A
2015-05-01
Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. Copyright © 2015 Elsevier Ltd. All rights reserved.
Unadjusted Bivariate Two-Group Comparisons: When Simpler is Better.
Vetter, Thomas R; Mascha, Edward J
2018-01-01
Hypothesis testing involves posing both a null hypothesis and an alternative hypothesis. This basic statistical tutorial discusses the appropriate use, including their so-called assumptions, of the common unadjusted bivariate tests for hypothesis testing and thus comparing study sample data for a difference or association. The appropriate choice of a statistical test is predicated on the type of data being analyzed and compared. The unpaired or independent samples t test is used to test the null hypothesis that the 2 population means are equal and, when it is rejected, to accept the alternative hypothesis that the 2 population means are not equal. The unpaired t test is intended for comparing independent continuous (interval or ratio) data from 2 study groups. A common mistake is to apply several unpaired t tests when comparing data from 3 or more study groups. In this situation, an analysis of variance with post hoc (posttest) intragroup comparisons should instead be applied. Another common mistake is to apply a series of unpaired t tests when comparing sequentially collected data from 2 study groups. In this situation, a repeated-measures analysis of variance, with tests for group-by-time interaction, and post hoc comparisons, as appropriate, should instead be applied in analyzing data from sequential collection points. The paired t test is used to assess the difference in the means of 2 study groups when the sample observations have been obtained in pairs, often before and after an intervention in each study subject. The Pearson chi-square test is widely used to test the null hypothesis that 2 unpaired categorical variables, each with 2 or more nominal levels (values), are independent of each other. When the null hypothesis is rejected, one concludes that there is a probable association between the 2 unpaired categorical variables. When comparing 2 groups on an ordinal or nonnormally distributed continuous outcome variable, the 2-sample t test is usually not appropriate.
The Wilcoxon-Mann-Whitney test is instead preferred. When making paired comparisons on data that are ordinal, or continuous but nonnormally distributed, the Wilcoxon signed-rank test can be used. In analyzing their data, researchers should consider the continued merits of these simple yet equally valid unadjusted bivariate statistical tests. However, the appropriate use of an unadjusted bivariate test still requires a solid understanding of its utility, assumptions (requirements), and limitations. This understanding will mitigate the risk of misleading findings, interpretations, and conclusions.
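The two workhorse tests discussed above can be written out from first principles. This stdlib sketch returns only the test statistic and degrees of freedom; converting to a p-value requires a t distribution, and the equal-variance assumption of the unpaired version is stated in the docstring.

```python
import math
import statistics as st

def unpaired_t(x, y):
    """Student's unpaired (independent samples) t statistic, assuming equal
    variances; returns (t, degrees of freedom)."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * st.variance(x) + (ny - 1) * st.variance(y)) / (nx + ny - 2)
    t = (st.mean(x) - st.mean(y)) / math.sqrt(sp2 * (1 / nx + 1 / ny))
    return t, nx + ny - 2

def paired_t(before, after):
    """Paired t statistic on the within-subject differences."""
    d = [a - b for a, b in zip(after, before)]
    return st.mean(d) / (st.stdev(d) / math.sqrt(len(d))), len(d) - 1
```

Identical samples give a t statistic of zero in the unpaired test, while the paired test works entirely on the per-subject differences, which is what gives it power when subjects vary widely at baseline.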
Lunar phases and crisis center telephone calls.
Wilson, J E; Tobacyk, J J
1990-02-01
The lunar hypothesis, that is, the notion that lunar phases can directly affect human behavior, was tested by time-series analysis of 4,575 crisis center telephone calls (all calls recorded for a 6-month interval). As expected, the lunar hypothesis was not supported. The 28-day lunar cycle accounted for less than 1% of the variance of the frequency of crisis center calls. Also, as hypothesized from an attribution theory framework, crisis center workers reported significantly greater belief in lunar effects than a non-crisis-center-worker comparison group.
Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka
2015-01-01
The problem of establishing noninferiority between a new treatment and a standard (control) treatment is discussed for ordinal categorical data. A measure of treatment effect is used and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed, in which the estimation of variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of the existing ones, and the results show that the proposed test statistics are better in terms of the deviation from the nominal level and the power.
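The logic of a noninferiority Z test can be illustrated in the simpler binary-outcome setting. This is a generic Wald-variance sketch, not the authors' ordinal U-statistic method; the function name and the sign convention are assumptions of this illustration.

```python
import math

def noninferiority_z(p_new, p_std, n_new, n_std, margin):
    """Z statistic for H0: p_std - p_new >= margin (new treatment worse by at
    least the margin) against H1: p_std - p_new < margin, with a simple Wald
    standard error for the difference of two proportions."""
    diff = p_std - p_new
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)
    return (margin - diff) / se     # large positive z favors noninferiority
```

When the observed difference equals the margin exactly the statistic is zero, and when the two proportions coincide the statistic equals margin divided by the standard error.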
Quantile regression models of animal habitat relationships
Cade, Brian S.
2003-01-01
Typically, all factors that limit an organism are not measured and included in statistical models used to investigate relationships with their environment. If important unmeasured variables interact multiplicatively with the measured variables, the statistical models often will have heterogeneous response distributions with unequal variances. Quantile regression is an approach for estimating the conditional quantiles of a response variable distribution in the linear model, providing a more complete view of possible causal relationships between variables in ecological processes. Chapter 1 introduces quantile regression and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of estimates for homogeneous and heterogeneous regression models. Chapter 2 evaluates performance of quantile rankscore tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). A permutation F test maintained better Type I errors than the Chi-square T test for models with smaller n, greater number of parameters p, and more extreme quantiles τ. Both versions of the test required weighting to maintain correct Type I errors when there was heterogeneity under the alternative model. An example application related trout densities to stream channel width:depth. Chapter 3 evaluates a drop in dispersion, F-ratio like permutation test for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). Chapter 4 simulates from a large (N = 10,000) finite population representing grid areas on a landscape to demonstrate various forms of hidden bias that might occur when the effect of a measured habitat variable on some animal was confounded with the effect of another unmeasured variable (spatially and not spatially structured). 
Depending on whether interactions of the measured habitat and unmeasured variable were negative (interference interactions) or positive (facilitation interactions), either upper (τ > 0.5) or lower (τ < 0.5) quantile regression parameters were less biased than mean rate parameters. Sampling (n = 20 - 300) simulations demonstrated that confidence intervals constructed by inverting rankscore tests provided valid coverage of these biased parameters. Quantile regression was used to estimate effects of physical habitat resources on a bivalve mussel (Macomona liliana) in a New Zealand harbor by modeling the spatial trend surface as a cubic polynomial of location coordinates.
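The check-function objective that quantile regression minimizes can be shown with an intercept-only model, for which an optimal fit is simply a sample tau-quantile. This is a brevity sketch; the chapters above use full linear models and rankscore or permutation inference.

```python
def pinball_loss(tau, residual):
    """Check (pinball) loss: tau * residual for positive residuals and
    (tau - 1) * residual for negative ones."""
    return residual * (tau - (residual < 0))

def fit_quantile_intercept(y, tau):
    """Intercept-only quantile 'regression': among candidate constants taken
    from the data (an optimum always lies at a data point), return the one
    minimizing total pinball loss."""
    return min(y, key=lambda c: sum(pinball_loss(tau, yi - c) for yi in y))
```

The asymmetry of the loss is what lets upper quantiles (tau > 0.5) track the least-limited responses and lower quantiles the most-limited ones, which is the key to the hidden-bias argument in Chapter 4.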
Oyanedel, Carlos N; Binder, Sonja; Kelemen, Eduard; Petersen, Kimberley; Born, Jan; Inostroza, Marion
2014-12-15
Our previous experiments showed that sleep in rats enhances consolidation of hippocampus-dependent episodic-like memory, i.e. the ability to remember an event bound into specific spatio-temporal context. Here we tested the hypothesis that this enhancing effect of sleep is linked to the occurrence of slow oscillatory and spindle activity during slow wave sleep (SWS). Rats were tested on an episodic-like memory task and on three additional tasks covering separately the where (object place recognition), when (temporal memory), and what (novel object recognition) components of episodic memory. In each task, the sample phase (encoding) was followed by an 80-min retention interval that covered either a period of regular morning sleep or sleep deprivation. Memory during retrieval was tested using preferential exploration of novelty vs. familiarity. Consistent with previous findings, the rats which had slept during the retention interval showed significantly stronger episodic-like memory and spatial memory, and a trend of improved temporal memory (although not significant). Object recognition memory was similarly retained across sleep and sleep deprivation retention intervals. Recall of episodic-like memory was associated with increased slow oscillatory activity (0.85-2.0Hz) during SWS in the retention interval. Spatial memory was associated with increased proportions of SWS. Against our hypothesis, a relationship between spindle activity and episodic-like memory performance was not detected, but spindle activity was associated with object recognition memory. The results provide support for the role of SWS and slow oscillatory activity in consolidating hippocampus-dependent memory; the role of spindles in this process needs to be further examined. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Dunst, Carl J.; Hamby, Deborah W.
2012-01-01
This paper includes a nontechnical description of methods for calculating effect sizes in intellectual and developmental disability studies. Different hypothetical studies are used to illustrate how null hypothesis significance testing (NHST) and effect size findings can result in quite different outcomes and therefore conflicting results. Whereas…
Asymmetry of Reinforcement and Punishment in Human Choice
ERIC Educational Resources Information Center
Rasmussen, Erin B.; Newland, M. Christopher
2008-01-01
The hypothesis that a penny lost is valued more highly than a penny earned was tested in human choice. Five participants clicked a computer mouse under concurrent variable-interval schedules of monetary reinforcement. In the no-punishment condition, the schedules arranged monetary gain. In the punishment conditions, a schedule of monetary loss was…
Interpreting “statistical hypothesis testing” results in clinical research
Sarmukaddam, Sanjeev B.
2012-01-01
The difference between “clinical significance” and “statistical significance” should be kept in mind while interpreting “statistical hypothesis testing” results in clinical research. This is already known to many but is pointed out again here, because the philosophy of “statistical hypothesis testing” is sometimes unnecessarily criticized, mainly through failure to consider this distinction. Randomized controlled trials are wrongly criticized in a similar way. That a scientific method may not be applicable in some particular situation does not mean that the method is useless. Also remember that “statistical hypothesis testing” is not meant for decision making, and that the field of “decision analysis” is very much an integral part of the science of statistics. It is not correct to say that “confidence intervals have nothing to do with confidence” unless one understands the meaning of the word “confidence” as used in the context of a confidence interval. Interpretation of the results of every study should always consider all possible alternative explanations, such as chance, bias, and confounding. Statistical tests in inferential statistics are, in general, designed to answer the question “How likely is it that the difference found in the random sample(s) is due to chance?”, and the limitation of relying only on statistical significance in making clinical decisions should therefore be avoided. PMID:22707861
Unicorns do exist: a tutorial on "proving" the null hypothesis.
Streiner, David L
2003-12-01
Introductory statistics classes teach us that we can never prove the null hypothesis; all we can do is reject or fail to reject it. However, there are times when it is necessary to try to prove the nonexistence of a difference between groups. This most often happens within the context of comparing a new treatment against an established one and showing that the new intervention is not inferior to the standard. This article first outlines the logic of "noninferiority" testing by differentiating between the null hypothesis (that which we are trying to nullify) and the "nil" hypothesis (there is no difference), reversing the role of the null and alternate hypotheses, and defining an interval within which groups are said to be equivalent. We then work through an example and show how to calculate sample sizes for noninferiority studies.
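The "interval within which groups are said to be equivalent" leads naturally to the two one-sided tests (TOST) decision rule: conclude equivalence when the 90% CI for the difference lies entirely inside the margin. A minimal sketch under a normal approximation (the 1.645 quantile gives two 5% one-sided tests; an exact version would use the t distribution):

```python
def tost_equivalent(mean_diff, se, margin, z_crit=1.645):
    """Two one-sided tests via CI inclusion: equivalence is declared iff the
    90% confidence interval for the group difference lies strictly inside
    (-margin, +margin)."""
    lo, hi = mean_diff - z_crit * se, mean_diff + z_crit * se
    return -margin < lo and hi < margin
```

Note the asymmetry with ordinary testing: here a tight interval around zero is evidence *for* equivalence, whereas a nonsignificant ordinary test is merely an absence of evidence.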
Solnik, Stanislaw; Qiao, Mu; Latash, Mark L.
2017-01-01
This study tested two hypotheses on the nature of unintentional force drifts elicited by removing visual feedback during accurate force production tasks. The role of working memory (memory hypothesis) was explored in tasks with continuous force production, intermittent force production, and rest intervals over the same time interval. The assumption of unintentional drifts in referent coordinate for the fingertips was tested using manipulations of visual feedback: Young healthy subjects performed accurate steady-state force production tasks by pressing with the two index fingers on individual force sensors with visual feedback on the total force, sharing ratio, both, or none. Predictions based on the memory hypothesis have been falsified. In particular, we observed consistent force drifts to lower force values during continuous force production trials only. No force drift or drifts to higher forces were observed during intermittent force production trials and following rest intervals. The hypotheses based on the idea of drifts in referent finger coordinates have been confirmed. In particular, we observed superposition of two drift processes: A drift of total force to lower magnitudes and a drift of the sharing ratio to 50:50. When visual feedback on total force only was provided, the two finger forces showed drifts in opposite directions. We interpret the findings as evidence for the control of motor actions with changes in referent coordinates for participating effectors. Unintentional drifts in performance are viewed as natural relaxation processes in the involved systems; their typical time reflects stability in the direction of the drift. The magnitude of the drift was higher in the right (dominant) hand, which is consistent with the dynamic dominance hypothesis. PMID:28168396
A Gaussian Model-Based Probabilistic Approach for Pulse Transit Time Estimation.
Jang, Dae-Geun; Park, Seung-Hun; Hahn, Minsoo
2016-01-01
In this paper, we propose a new probabilistic approach to pulse transit time (PTT) estimation using a Gaussian distribution model. It is motivated by the hypothesis that PTTs normalized by RR intervals follow the Gaussian distribution. To verify the hypothesis, we demonstrate the effects of arterial compliance on the normalized PTTs using the Moens-Korteweg equation. Furthermore, we observe a Gaussian distribution of the normalized PTTs on real data. In order to estimate the PTT using the hypothesis, we first assumed that R-waves in the electrocardiogram (ECG) can be correctly identified. The R-waves limit searching ranges to detect pulse peaks in the photoplethysmogram (PPG) and to synchronize the results with cardiac beats, i.e., the peaks of the PPG are extracted within the corresponding RR interval of the ECG as pulse peak candidates. Their probabilities of being the actual pulse peak are then calculated using a Gaussian probability function. The parameters of the Gaussian function are automatically updated when a new pulse peak is identified. This update makes the probability function adaptive to variations of cardiac cycles. Finally, the pulse peak is identified as the candidate with the highest probability. The proposed approach is tested on a database where ECG and PPG waveforms are collected simultaneously during the submaximal bicycle ergometer exercise test. The results are promising, suggesting that the method provides a simple but more accurate PTT estimation in real applications.
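The candidate-selection step can be sketched as follows: score each candidate's RR-normalized PTT under the current Gaussian model and keep the most probable one. This is a simplified illustration; the paper's adaptive update of the Gaussian parameters after each identified peak is omitted, and the function names are assumptions.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and std sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def pick_pulse_peak(ptt_candidates, rr_interval, mu, sigma):
    """Among PPG pulse-peak candidates within one RR interval, choose the one
    whose RR-normalized PTT is most probable under the Gaussian model."""
    return max(ptt_candidates, key=lambda p: gaussian_pdf(p / rr_interval, mu, sigma))
```

Normalizing by the RR interval is what makes one fixed Gaussian plausible across changing heart rates, since absolute PTT shortens as the cardiac cycle shortens.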
Brown, Angus M
2010-04-01
The objective of the method described in this paper is to develop a spreadsheet template for the purpose of comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns the test statistic F. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
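The initial ANOVA step can be reproduced outside a spreadsheet in a few lines. This stdlib sketch returns only F; the comparison against the critical F value, and the subsequent pairwise CIs, still require an F distribution and are not shown.

```python
import statistics as st

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = st.mean([x for g in groups for x in g])
    ss_between = sum(len(g) * (st.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - st.mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

Identical groups give F = 0, and widely separated group means with small within-group scatter inflate F, exactly the pattern the spreadsheet template is checking against the critical value.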
ERIC Educational Resources Information Center
Bahrick, Lorraine E.; Hernandez-Reif, Maria; Pickens, Jeffrey N.
1997-01-01
Tested hypothesis from Bahrick and Pickens' infant attention model that retrieval cues increase memory accessibility and shift visual preferences toward greater novelty to resemble recent memories. Found that after retention intervals associated with remote or intermediate memory, previous familiarity preferences shifted to null or novelty…
ERIC Educational Resources Information Center
Msetfi, Rachel M.; Murphy, Robin A.; Simpson, Jane; Kornbrot, Diana E.
2005-01-01
The perception of the effectiveness of instrumental actions is influenced by depressed mood. Depressive realism (DR) is the claim that depressed people are particularly accurate in evaluating instrumentality. In two experiments, the authors tested the DR hypothesis using an action-outcome contingency judgment task. DR effects were a function of…
Harnessing Multivariate Statistics for Ellipsoidal Data in Structural Geology
NASA Astrophysics Data System (ADS)
Roberts, N.; Davis, J. R.; Titus, S.; Tikoff, B.
2015-12-01
Most structural geology articles do not state significance levels, report confidence intervals, or perform regressions to find trends. This is, in part, because structural data tend to include directions, orientations, ellipsoids, and tensors, which are not treatable by elementary statistics. We describe a full procedural methodology for the statistical treatment of ellipsoidal data. We use a reconstructed dataset of deformed ooids in Maryland from Cloos (1947) to illustrate the process. Normalized ellipsoids have five degrees of freedom and can be represented by a second-order tensor. This tensor can be permuted into a five-dimensional vector that belongs to a vector space and can be treated with standard multivariate statistics. Cloos made several claims about the distribution of deformation in the South Mountain fold, Maryland, and we reexamine two particular claims using hypothesis testing: 1) octahedral shear strain increases towards the axial plane of the fold; 2) finite strain orientation varies systematically along the trend of the axial trace as it bends with the Appalachian orogen. We then test the null hypothesis that the southern segment of South Mountain is the same as the northern segment. This test illustrates the application of ellipsoidal statistics, which combine both orientation and shape. We report confidence intervals for each test, and graphically display our results with novel plots. This poster illustrates the importance of statistics in structural geology, especially when working with noisy or small datasets.
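The mapping of a normalized (here, traceless) ellipsoid tensor into a five-dimensional vector can be sketched as below. The particular orthonormal basis is an assumption on our part, but any orthonormal basis of traceless symmetric tensors preserves Frobenius distances, which is what lets standard multivariate statistics apply:

```python
import math

def to_five_vector(T):
    """Map a traceless symmetric 3x3 tensor (nested lists) to a vector in
    R^5 via an orthonormal basis, preserving the Frobenius norm."""
    a, b, c = T[0][0], T[1][1], T[2][2]
    return [
        (a - b) / math.sqrt(2),            # first diagonal coordinate
        (a + b - 2 * c) / math.sqrt(6),    # second diagonal coordinate
        math.sqrt(2) * T[0][1],            # off-diagonal coordinates,
        math.sqrt(2) * T[0][2],            # scaled so the squared norm
        math.sqrt(2) * T[1][2],            # matches the Frobenius norm
    ]
```

Because the map is an isometry, Euclidean means, covariances, and confidence regions of the 5-vectors correspond directly to statistics on the ellipsoids.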
Face recognition ability matures late: evidence from individual differences in young adults.
Susilo, Tirta; Germine, Laura; Duchaine, Bradley
2013-10-01
Does face recognition ability mature early in childhood (early maturation hypothesis) or does it continue to develop well into adulthood (late maturation hypothesis)? This fundamental issue in face recognition is typically addressed by comparing child and adult participants. However, the interpretation of such studies is complicated by children's inferior test-taking abilities and general cognitive functions. Here we examined the developmental trajectory of face recognition ability in an individual differences study of 18-33 year-olds (n = 2,032), an age interval in which participants are competent test takers with comparable general cognitive functions. We found a positive association between age and face recognition, controlling for nonface visual recognition, verbal memory, sex, and own-race bias. Our study supports the late maturation hypothesis in face recognition, and illustrates how individual differences investigations of young adults can address theoretical issues concerning the development of perceptual and cognitive abilities.
Statistical power analysis in wildlife research
Steidl, R.J.; Hayes, J.P.
1997-01-01
Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true.
We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.
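As a concrete instance of recommendation (1), a prospective sample-size calculation for a two-group comparison can be sketched with the usual normal approximation. The two-sided 0.05 significance level and 0.80 power are illustrative assumptions:

```python
import math

def samples_per_group(delta, sigma, z_alpha=1.96, z_power=0.8416):
    """Samples per group needed to detect a minimum biologically
    significant difference delta, with standard deviation sigma, at
    two-sided alpha = 0.05 and power = 0.80
    (z_alpha = z_{0.975} = 1.96, z_power = z_{0.80} = 0.8416)."""
    return math.ceil(2 * ((z_alpha + z_power) * sigma / delta) ** 2)
```

Halving the minimum effect size roughly quadruples the required sample, which is why specifying a biologically meaningful delta matters so much at the planning stage.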
High Impact = High Statistical Standards? Not Necessarily So
Tressoldi, Patrizio E.; Giofré, David; Sella, Francesco; Cumming, Geoff
2013-01-01
What are the statistical practices of articles published in journals with a high impact factor? Are there differences compared with articles published in journals with a somewhat lower impact factor that have adopted editorial policies to reduce the impact of limitations of Null Hypothesis Significance Testing? To investigate these questions, the current study analyzed all articles related to psychological, neuropsychological and medical issues, published in 2011 in four journals with high impact factors: Science, Nature, The New England Journal of Medicine and The Lancet, and three journals with relatively lower impact factors: Neuropsychology, Journal of Experimental Psychology-Applied and the American Journal of Public Health. Results show that Null Hypothesis Significance Testing without any use of confidence intervals, effect size, prospective power and model estimation, is the prevalent statistical practice used in articles published in Nature, 89%, followed by articles published in Science, 42%. By contrast, in all other journals, both with high and lower impact factors, most articles report confidence intervals and/or effect size measures. We interpreted these differences as consequences of the editorial policies adopted by the journal editors, which are probably the most effective means to improve the statistical practices in journals with high or low impact factors. PMID:23418533
Co-Variation of Tonality in the Music and Speech of Different Cultures
Han, Shui' er; Sundararajan, Janani; Bowling, Daniel Liu; Lake, Jessica; Purves, Dale
2011-01-01
Whereas the use of discrete pitch intervals is characteristic of most musical traditions, the size of the intervals and the way in which they are used is culturally specific. Here we examine the hypothesis that these differences arise because of a link between the tonal characteristics of a culture's music and its speech. We tested this idea by comparing pitch intervals in the traditional music of three tone language cultures (Chinese, Thai and Vietnamese) and three non-tone language cultures (American, French and German) with pitch intervals between voiced speech segments. Changes in pitch direction occur more frequently and pitch intervals are larger in the music of tone compared to non-tone language cultures. More frequent changes in pitch direction and larger pitch intervals are also apparent in the speech of tone compared to non-tone language cultures. These observations suggest that the different tonal preferences apparent in music across cultures are closely related to the differences in the tonal characteristics of voiced speech. PMID:21637716
A probabilistic method for testing and estimating selection differences between populations
He, Yungang; Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Xu, Shuhua; Jin, Li
2015-01-01
Human populations around the world encounter various environmental challenges and, consequently, develop genetic adaptations to different selection forces. Identifying the differences in natural selection between populations is critical for understanding the roles of specific genetic variants in evolutionary adaptation. Although numerous methods have been developed to detect genetic loci under recent directional selection, a probabilistic solution for testing and quantifying selection differences between populations is lacking. Here we report the development of a probabilistic method for testing and estimating selection differences between populations. By use of a probabilistic model of genetic drift and selection, we showed that logarithm odds ratios of allele frequencies provide estimates of the differences in selection coefficients between populations. The estimates approximate a normal distribution, and variance can be estimated using genome-wide variants. This allows us to quantify differences in selection coefficients and to determine the confidence intervals of the estimate. Our work also revealed the link between genetic association testing and hypothesis testing of selection differences. It therefore supplies a solution for hypothesis testing of selection differences. This method was applied to a genome-wide data analysis of Han and Tibetan populations. The results confirmed that both the EPAS1 and EGLN1 genes are under statistically different selection in Han and Tibetan populations. We further estimated differences in the selection coefficients for genetic variants involved in melanin formation and determined their confidence intervals between continental population groups. Application of the method to empirical data demonstrated the outstanding capability of this novel approach for testing and quantifying differences in natural selection. PMID:26463656
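The core estimator, a log odds ratio of allele counts with a normal-approximation interval, can be sketched as follows. The Woolf variance formula used here is the textbook version; the paper's genome-wide variance estimation is not reproduced:

```python
import math

def selection_difference_ci(a1, b1, a2, b2, z=1.96):
    """Log odds ratio of allele counts between two populations, with a
    normal-approximation 95% CI (Woolf variance: 1/a1 + 1/b1 + 1/a2 + 1/b2).
    a*, b* are derived/ancestral allele counts in each population."""
    lor = math.log((a1 * b2) / (a2 * b1))
    se = math.sqrt(1 / a1 + 1 / b1 + 1 / a2 + 1 / b2)
    return lor, (lor - z * se, lor + z * se)
```

When the allele frequencies are identical in both populations the estimate is zero and the interval straddles zero; a frequency shift such as 80/20 versus 50/50 yields a positive estimate, suggestive of differential selection at that locus.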
Tian, Guo-Liang; Li, Hui-Qiong
2017-08-01
Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independency assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, the bootstrap confidence interval methods, and the bootstrap hypothesis testing methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independency assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independency assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables and the analysis results again confirm the conclusions obtained from the simulation studies.
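The percentile-bootstrap step behind such interval methods can be sketched generically. This is only the resampling idea; the paper's method instead draws bootstrap samples from the fitted joint distribution of the complete and incomplete counts:

```python
import random

def percentile_bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=1):
    """Generic percentile-bootstrap confidence interval for stat(data):
    resample with replacement, recompute the statistic, and take the
    alpha/2 and 1 - alpha/2 quantiles of the replicates."""
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(data) for _ in data]) for _ in range(n_boot))
    return reps[int(n_boot * alpha / 2)], reps[int(n_boot * (1 - alpha / 2)) - 1]
```

For a sample mean of the integers 0..99, the resulting interval comfortably brackets the true mean of 49.5.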
Change and Continuity in Student Achievement from Grades 3 to 5: A Policy Dilemma
ERIC Educational Resources Information Center
McCaslin, Mary; Burross, Heidi Legg; Good, Thomas L.
2005-01-01
In this article we examine student performance on mandated tests in grades 3, 4, and 5 in one state. We focus on this interval, which we term "the fourth grade window," based on our hypothesis that students in grade four are particularly vulnerable to decrements in achievement. The national focus on the third grade as the critical…
Experimental design, power and sample size for animal reproduction experiments.
Chapman, Phillip L; Seidel, George E
2008-01-01
The present paper concerns statistical issues in the design of animal reproduction experiments, with emphasis on the problems of sample size determination and power calculations. We include examples and non-technical discussions aimed at helping researchers avoid serious errors that may invalidate or seriously impair the validity of conclusions from experiments. Screen shots from interactive power calculation programs and basic SAS power calculation programs are presented to aid in understanding statistical power and computing power in some common experimental situations. Practical issues that are common to most statistical design problems are briefly discussed. These include one-sided hypothesis tests, power level criteria, equality of within-group variances, transformations of response variables to achieve variance equality, optimal specification of treatment group sizes, 'post hoc' power analysis and arguments for the increased use of confidence intervals in place of hypothesis tests.
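A minimal normal-approximation power calculation of the kind such programs perform can be sketched as follows. The z critical values for alpha = 0.05 are hard-coded, and a t-based calculation would give slightly lower power at small n:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_group(n_per_group, delta, sigma, one_sided=False):
    """Approximate power to detect a mean difference delta (SD sigma)
    with n animals per group at alpha = 0.05."""
    z_crit = 1.6449 if one_sided else 1.96
    noncentrality = delta / (sigma * math.sqrt(2.0 / n_per_group))
    return normal_cdf(noncentrality - z_crit)
```

The sketch also illustrates one of the practical points above: for the same design, the one-sided test has higher power than the two-sided test, which is why the choice must be justified before the experiment.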
Lyons-Amos, Mark; Padmadas, Sabu S; Durrant, Gabriele B
2014-08-11
The objective was to test the contraceptive confidence hypothesis in a modern context. The hypothesis is that women using effective or modern contraceptive methods have increased contraceptive confidence and hence a shorter interval between marriage and first birth than users of ineffective or traditional methods. We extend the hypothesis to incorporate the role of abortion, arguing that it acts as a substitute for contraception in the study context. The setting was Moldova, a country in South-East Europe that exhibits high use of traditional contraceptive methods and abortion compared with other European countries. Data are from a secondary analysis of the 2005 Moldovan Demographic and Health Survey, a nationally representative sample survey, from which 5377 women were selected. The outcome measure was the interval between marriage and first birth, modelled using a piecewise-constant hazard regression with abortion and contraceptive method types as primary variables along with relevant sociodemographic controls. Women with high contraceptive confidence (modern method users) have a higher cumulative hazard of first birth 36 months following marriage (0.88 (0.87 to 0.89)) compared with women with low contraceptive confidence (traditional method users, cumulative hazard: 0.85 (0.84 to 0.85)). This is consistent with the contraceptive confidence hypothesis. There is a higher cumulative hazard of first birth among women with low (0.80 (0.79 to 0.80)) and moderate abortion propensities (0.76 (0.75 to 0.77)) than women with no abortion propensity (0.73 (0.72 to 0.74)) 24 months after marriage. Effective contraceptive use tends to increase contraceptive confidence and is associated with a shorter interval between marriage and first birth. Increased use of abortion also tends to increase contraceptive confidence and shorten the birth interval, although this effect is non-linear: women with a very high use of abortion tend to have lengthy intervals between marriage and first birth.
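The piecewise-constant hazard model used here can be illustrated with a small helper. The rates and cut-points in the test are invented numbers for the sketch, not the study's fitted values:

```python
import math

def cumulative_hazard(rates, cuts, t):
    """Cumulative hazard at time t when rates[i] holds on [cuts[i], cuts[i+1])."""
    H = 0.0
    for lam, lo, hi in zip(rates, cuts, cuts[1:]):
        if t <= lo:
            break
        H += lam * (min(t, hi) - lo)
    return H

def cumulative_probability(rates, cuts, t):
    """Probability of a first birth by time t implied by the hazard."""
    return 1.0 - math.exp(-cumulative_hazard(rates, cuts, t))
```

Comparing such cumulative hazards at fixed durations (e.g., 24 or 36 months after marriage) is exactly the kind of contrast the study reports between contraceptive-confidence groups.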
Echolocation system of the bottlenose dolphin
NASA Astrophysics Data System (ADS)
Dubrovsky, N. A.
2004-05-01
The hypothesis put forward by Vel’min and Dubrovsky [1] is discussed. The hypothesis suggests that bottlenose dolphins possess two functionally separate auditory subsystems: one of them serves for analyzing extraneous sounds, as in nonecholocating terrestrial animals, and the other performs the analysis of echoes caused by the echolocation clicks of the animal itself. The first subsystem is called passive hearing, and the second, active hearing. The results of experimental studies of the dolphin’s echolocation system are discussed to confirm the proposed hypothesis. For the active hearing of dolphins, the notion of a critical interval is considered: the interval of time within which a merged auditory image of the echolocation object is formed when all echo highlights of the echo from this object fall within it.
The temporal organization of behavior on periodic food schedules.
Reid, A K; Bacha, G; Morán, C
1993-01-01
Various theories of temporal control and schedule induction imply that periodic schedules temporally modulate an organism's motivational states within interreinforcement intervals. This speculation has been fueled by frequently observed multimodal activity distributions created by averaging across interreinforcement intervals. We tested this hypothesis by manipulating the cost associated with schedule-induced activities and the availability of other activities to determine the degree to which (a) the temporal distributions of activities within the interreinforcement interval are fixed or can be temporally displaced, (b) rats can reallocate activities across different interreinforcement intervals, and (c) noninduced activities can substitute for schedule-induced activities. Obtained multimodal activity distributions created by averaging across interreinforcement intervals were not representative of the transitions occurring within individual intervals, so the averaged multimodal distributions should not be assumed to represent changes in the subject's motivational states within the interval. Rather, the multimodal distributions often result from averaging across interreinforcement intervals in which only a single activity occurs. A direct influence of the periodic schedule on the motivational states implies that drinking and running should occur at different periods within the interval, but in three experiments the starting times of drinking and running within interreinforcement intervals were equal. Thus, the sequential pattern of drinking and running on periodic schedules does not result from temporal modulation of motivational states within interreinforcement intervals. PMID:8433061
Lindsen, Job P; de Jong, Ritske
2010-10-01
Lien, Ruthruff, Remington, & Johnston (2005) reported residual switch cost differences between stimulus-response (S-R) pairs and proposed the partial-mapping preparation (PMP) hypothesis, which states that advance preparation will typically be limited to a subset of S-R pairs because of structural capacity limitations, to account for these differences. Alternatively, the failure-to-engage (FTE) hypothesis does not allow for differences in probability of advance preparation between S-R pairs within a set; it accounts for residual switch cost differences by assuming that benefits of advance preparation may differ between S-R pairs. Three experiments were designed to test between these hypotheses. No capacity limitations of the type assumed by the PMP hypothesis were found for many participants in Experiment 1. In Experiments 2 and 3, no evidence was found for the dependency of residual switch cost differences between S-R pairs on response-stimulus interval that is predicted by the PMP hypothesis. Mixture-model analysis of reaction time distributions in Experiment 3 provided strong support for the FTE hypothesis over the PMP hypothesis. Simulation studies with a computational implementation of the FTE hypothesis showed that it is able to account in great detail for the results of the present study. Together, these results provide strong evidence against the PMP hypothesis and support the FTE hypothesis that advance preparation probabilistically fails or succeeds at the level of the task set.
Hazard ratio estimation and inference in clinical trials with many tied event times.
Mehrotra, Devan V; Zhang, Yiwei
2018-06-13
The medical literature contains numerous examples of randomized clinical trials with time-to-event endpoints in which large numbers of events accrued over relatively short follow-up periods, resulting in many tied event times. A generally common feature across such examples was that the logrank test was used for hypothesis testing and the Cox proportional hazards model was used for hazard ratio estimation. We caution that this common practice is particularly risky in the setting of many tied event times for two reasons. First, the estimator of the hazard ratio can be severely biased if the Breslow tie-handling approximation for the Cox model (the default in SAS and Stata software) is used. Second, the 95% confidence interval for the hazard ratio can include one even when the corresponding logrank test p-value is less than 0.05. To help establish a better practice, with applicability for both superiority and noninferiority trials, we use theory and simulations to contrast Wald and score tests based on well-known tie-handling approximations for the Cox model. Our recommendation is to report the Wald test p-value and corresponding confidence interval based on the Efron approximation. The recommended test is essentially as powerful as the logrank test, the accompanying point and interval estimates of the hazard ratio have excellent statistical properties even in settings with many tied event times, inferential alignment between the p-value and confidence interval is guaranteed, and implementation is straightforward using commonly used software.
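The difference between the two tie-handling approximations can be made concrete with the log partial-likelihood contribution of a single event time with d tied events. This is a textbook sketch of the standard formulas, not the paper's code:

```python
import math

def tied_time_loglik(tied_scores, other_scores, method="efron"):
    """Log partial-likelihood contribution of one event time.
    tied_scores: exp(x'beta) for the d subjects failing at this time;
    other_scores: exp(x'beta) for the remaining subjects at risk."""
    d = len(tied_scores)
    s_tied = sum(tied_scores)
    s_risk = s_tied + sum(other_scores)
    numerator = sum(math.log(s) for s in tied_scores)
    if method == "breslow":
        # Breslow keeps the full risk set in all d denominator terms
        return numerator - d * math.log(s_risk)
    # Efron removes the tied subjects' mass from the risk set gradually
    return numerator - sum(math.log(s_risk - (j / d) * s_tied) for j in range(d))
```

With no ties the two contributions coincide; with ties, Breslow's denominator is too large, which is the source of the bias toward the null that the authors warn about.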
Suicides in Active-Duty Enlisted Navy Personnel
1989-07-03
percent confidence intervals (95 percent CI) were calculated assuming a Poisson distribution (Lilienfeld & Lilienfeld, 1980). Variables analyzed in this... efforts to improve the status of women in the Army played a role in this reduction but we lack data to test this hypothesis (Rothberg et al., 1988)... recruit population. Military Medicine, 141:327-331. Jobes, D.A., Berman, A.L., & Josselsen, A.R. (1986). The impact of psychological autopsies on
Cadenaro, Milena; Navarra, Chiara Ottavia; Mazzoni, Annalisa; Nucci, Cesare; Matis, Bruce A; Di Lenarda, Roberto; Breschi, Lorenzo
2010-04-01
In an in vivo study, the authors tested the hypothesis that no difference in enamel surface roughness is detectable either during or after bleaching with a high-concentration in-office whitening agent. The authors performed profilometric and scanning electron microscopic (SEM) analyses of epoxy resin replicas of the upper right incisors of 20 participants at baseline (control) and after each bleaching treatment with a 38 percent hydrogen peroxide whitening agent, applied four times, at one-week intervals. The authors used analysis of variance for repeated measures to analyze the data statistically. The profilometric analysis of the enamel surface replicas after the in vivo bleaching protocol showed no significant difference in surface roughness parameters (P > .05) compared with those at baseline, irrespective of the time interval. Results of the correlated SEM analysis showed no relevant alteration on the enamel surface. Results of this in vivo study support the tested hypothesis that the application of a 38 percent hydrogen peroxide in-office whitening agent does not alter enamel surface roughness, even after multiple applications. The use of a 38 percent hydrogen peroxide in-office whitening agent induced no roughness alterations of the enamel surface, even after prolonged and repeated applications.
Mesquita, Janaina A; Lacerda-Santos, Rogério; Sampaio, Gêisa A M; Godoy, Gustavo P; Nonaka, Cassiano F W; Alves, Pollianna M
2017-01-01
The focus of this study was to test the hypothesis that there would be no difference between the biocompatibility of resin-modified glass ionomer cements. Sixty male Wistar rats were selected and divided into four groups: Control Group; Crosslink Group; RMO Group and Transbond Group. The materials were inserted into rat subcutaneous tissue. After time intervals of 7, 15 and 30 days, morphological analyses were performed. The histological parameters assessed were: inflammatory infiltrate intensity; reaction of multinucleated giant cells; edema; necrosis; granulation reaction; young fibroblasts and collagenization. The results obtained were statistically analyzed by the Kruskal-Wallis and Dunn tests (P<0.05). After 7 days, Groups RMO and Transbond showed intense inflammatory infiltrate (P=0.004), and only Group RMO presented greater expression of multinucleated giant cell reaction (P=0.003) compared with the control group. After the time intervals of 15 and 30 days, there was evidence of light/moderate inflammatory infiltrate, a lower level of multinucleated giant cell reaction and thicker areas of young fibroblasts in all the groups. The hypothesis was rejected. The Crosslink cement provided good tissue response, since it demonstrated a lower level of inflammatory infiltrate and a higher degree of collagenization, while RMO demonstrated the lowest level of biocompatibility.
A Test by Any Other Name: P Values, Bayes Factors, and Statistical Inference.
Stern, Hal S
2016-01-01
Procedures used for statistical inference are receiving increased scrutiny as the scientific community studies the factors associated with ensuring reproducible research. This note addresses recent negative attention directed at p values, the relationship of confidence intervals and tests, and the role of Bayesian inference and Bayes factors, with an eye toward better understanding these different strategies for statistical inference. We argue that researchers and data analysts too often resort to binary decisions (e.g., whether to reject or accept the null hypothesis) in settings where this may not be required.
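A minimal example of the contrast between a binary reject/accept decision and a graded Bayes factor, for a binomial point null against a uniform alternative (a standard textbook case, not one taken from the article):

```python
import math

def bayes_factor_01(k, n):
    """Bayes factor for H0: p = 0.5 vs H1: p ~ Uniform(0, 1),
    given k successes in n trials. Values above 1 favor the null;
    the result is a graded weight of evidence, not a decision."""
    marginal_h0 = math.comb(n, k) * 0.5 ** n
    marginal_h1 = 1.0 / (n + 1)  # integral of C(n,k) p^k (1-p)^(n-k) dp over [0,1]
    return marginal_h0 / marginal_h1
```

Five successes in ten trials yields a Bayes factor above 1 (evidence for the null), while nine in ten yields one below 1; neither forces a dichotomous conclusion.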
Merchant, Hugo; Honing, Henkjan
2013-01-01
We propose a decomposition of the neurocognitive mechanisms that might underlie interval-based timing and rhythmic entrainment. After reviewing the concepts central to the definition of rhythmic entrainment, we discuss recent studies suggesting that rhythmic entrainment is specific to humans and a selected group of bird species but, surprisingly, is not obvious in non-human primates. On the basis of these studies we propose the gradual audiomotor evolution hypothesis, which suggests that humans fully share interval-based timing with other primates but only partially share the ability of rhythmic entrainment (or beat-based timing). This hypothesis accommodates the fact that non-human primates' (i.e., macaques') performance is comparable to that of humans in single interval tasks (such as interval reproduction, categorization, and interception) but differs in multiple interval tasks (such as rhythmic entrainment, synchronization, and continuation). Furthermore, it is in line with the observation that macaques can, apparently, synchronize in the visual domain but show less sensitivity in the auditory domain. Finally, while macaques are sensitive to interval-based timing and rhythmic grouping, the absence of a strong coupling between the auditory and motor systems of non-human primates might be the reason why macaques cannot rhythmically entrain in the way humans do.
Autocorrelation of location estimates and the analysis of radiotracking data
Otis, D.L.; White, Gary C.
1999-01-01
The wildlife literature has been contradictory about the importance of autocorrelation in radiotracking data used for home range estimation and hypothesis tests of habitat selection. By definition, the concept of a home range involves autocorrelated movements, but estimates or hypothesis tests based on sampling designs that predefine a time frame of interest, and that generate representative samples of an animal's movement during this time frame, should not be affected by length of the sampling interval and autocorrelation. Intensive sampling of the individual's home range and habitat use during the time frame of the study leads to improved estimates for the individual, but use of location estimates as the sample unit to compare across animals is pseudoreplication. We therefore recommend against use of habitat selection analysis techniques that use locations instead of individuals as the sample unit. We offer a general outline for sampling designs for radiotracking studies.
Lee, Peter N
2015-03-20
The "gateway hypothesis" usually refers to the possibility that the taking up of habit A, which is considered harmless (or less harmful), may lead to the subsequent taking up of another habit, B, which is considered harmful (or more harmful). Possible approaches to designing and analysing studies to test the hypothesis are discussed. Evidence relating to the use of snus (A) as a gateway for smoking (B) is then evaluated in detail. The importance of having appropriate data available on the sequence of use of A and B and on other potential confounding factors that may lead to the taking up of B is emphasised. Where randomised trials are impractical, the preferred designs include the prospective cohort study in which ever use of A and of B is recorded at regular intervals, and the cross-sectional survey in which time of starting to use A and B is recorded. Both approaches allow time-stratified analytical methods to be used, in which, in each time period, risk of initiating B among never users of B at the start of the interval is compared according to prior use of A. Adjustment in analysis for the potential confounding factors is essential. Of 11 studies of possible relevance conducted in Sweden, Finland or Norway, only one seriously addresses potential confounding by those other factors involved in the initiation of smoking. Furthermore, 5 of the 11 studies are of a design that does not allow proper testing of the gateway hypothesis for various reasons, and the analysis is unsatisfactory, sometimes seriously, in all the remaining six. While better analyses could be attempted for some of the six studies identified as having appropriate design, the issues of confounding remain, and more studies are clearly needed. To obtain a rapid answer, a properly designed cross-sectional survey is recommended.
Lansdorp-Vogelaar, Iris; van Ballegooijen, Marjolein; Boer, Rob; Zauber, Ann; Habbema, J Dik F
2009-06-01
Estimates of the fecal occult blood test (FOBT) (Hemoccult II) sensitivity differed widely between screening trials and led to divergent conclusions on the effects of FOBT screening. We used microsimulation modeling to estimate a preclinical colorectal cancer (CRC) duration and sensitivity for unrehydrated FOBT from the data of 3 randomized controlled trials of Minnesota, Nottingham, and Funen. In addition to 2 usual hypotheses on the sensitivity of FOBT, we tested a novel hypothesis where sensitivity is linked to the stage of clinical diagnosis in the situation without screening. We used the MISCAN-Colon microsimulation model to estimate sensitivity and duration, accounting for differences between the trials in demography, background incidence, and trial design. We tested 3 hypotheses for FOBT sensitivity: sensitivity is the same for all preclinical CRC stages, sensitivity increases with each stage, and sensitivity is higher for the stage in which the cancer would have been diagnosed in the absence of screening than for earlier stages. Goodness-of-fit was evaluated by comparing expected and observed rates of screen-detected and interval CRC. The hypothesis with a higher sensitivity in the stage of clinical diagnosis gave the best fit. Under this hypothesis, sensitivity of FOBT was 51% in the stage of clinical diagnosis and 19% in earlier stages. The average duration of preclinical CRC was estimated at 6.7 years. Our analysis corroborated a long duration of preclinical CRC, with FOBT most sensitive in the stage of clinical diagnosis. (c) 2009 American Cancer Society.
Confidence intervals for correlations when data are not normal.
Bishara, Anthony J; Hittner, James B
2017-02-01
With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval; for example, a nominal 95% confidence interval could have actual coverage as low as 68%. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
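The contrast between the Fisher z' interval and a bootstrap alternative discussed above can be sketched in a few lines of Python (simulated data; the function names are ours, and the percentile bootstrap shown is the simplest of the bootstrap variants the study compared):

```python
import math
import random

def pearson_r(x, y):
    """Sample Pearson correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def fisher_z_ci(r, n, z_crit=1.96):
    """Classical Fisher z' interval; its accuracy rests on bivariate normality."""
    z, se = math.atanh(r), 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

def percentile_bootstrap_ci(x, y, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap interval; makes no normality assumption."""
    rng, n, rs = random.Random(seed), len(x), []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        rs.append(pearson_r([x[i] for i in idx], [y[i] for i in idx]))
    rs.sort()
    return rs[int(alpha / 2 * n_boot)], rs[int((1 - alpha / 2) * n_boot) - 1]

# Simulated bivariate data (true correlation about 0.71)
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(100)]
y = [xi + rng.gauss(0, 1) for xi in x]
r = pearson_r(x, y)
lo_f, hi_f = fisher_z_ci(r, len(x))
lo_b, hi_b = percentile_bootstrap_ci(x, y)
```

With normal data like this both intervals behave well; the study's point is that under nonnormality only the rank-based and some bootstrap intervals keep their nominal coverage.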
Waynforth, David
2015-10-01
Human birth interval length is indicative of the level of parental investment that a child will receive: a short interval following birth means that parental resources must be split with a younger sibling during a period when the older sibling remains highly dependent on their parents. From a life-history theoretical perspective, it is likely that there are evolved mechanisms that serve to maximize fitness depending on context. One context that would be expected to result in short birth intervals, and lowered parental investment, is after a child with low expected fitness is born. Here, data drawn from a longitudinal British birth cohort study were used to test whether birth intervals were shorter following the birth of a child with a long-term health problem. Data on the timing of 4543 births were analysed using discrete-time event history analysis. The results were consistent with the hypothesis: birth intervals were shorter following the birth of a child diagnosed by a medical professional with a severe but non-fatal medical condition. Covariates in the analysis were also significantly associated with birth interval length: births of twins or multiple births, and relationship break-up were associated with significantly longer birth intervals. © 2015 The Author(s).
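The discrete-time event history analysis used above can be illustrated with a toy person-period expansion, the standard setup for such models (the records, 12-month period length, and field names below are invented for illustration):

```python
# Hypothetical birth-interval records:
# (months until next birth or censoring, 1 = birth observed, older sibling has health problem)
records = [
    (18, 1, True), (24, 1, True), (24, 1, True), (20, 1, True),
    (30, 1, False), (36, 1, False), (40, 0, False), (28, 1, False),
]

PERIOD = 12  # discretize follow-up into 12-month periods

def person_periods(records):
    """Expand each interval into one row per period at risk (the discrete-time setup)."""
    rows = []
    for months, event, exposed in records:
        n_periods = (months + PERIOD - 1) // PERIOD
        for p in range(1, n_periods + 1):
            rows.append({"period": p,
                         "event": event if p == n_periods else 0,
                         "exposed": exposed})
    return rows

def hazard_by_group(rows, exposed):
    """Discrete-time hazard per period: events / person-periods at risk."""
    hazard = {}
    for p in sorted({r["period"] for r in rows}):
        at_risk = [r for r in rows if r["period"] == p and r["exposed"] == exposed]
        if at_risk:
            hazard[p] = sum(r["event"] for r in at_risk) / len(at_risk)
    return hazard

rows = person_periods(records)
h_exposed = hazard_by_group(rows, True)      # shorter intervals in this toy data
h_unexposed = hazard_by_group(rows, False)
```

In a full analysis these person-period rows would feed a logistic regression with covariates (twin births, relationship break-up, etc.), but the expansion itself is the key idea.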
Wilkes, E J A; Cowling, A; Woodgate, R G; Hughes, K J
2016-10-15
Faecal egg counts (FEC) are used widely for monitoring of parasite infection in animals, treatment decision-making and estimation of anthelmintic efficacy. When a single count or sample mean is used as a point estimate of the expectation of the egg distribution over some time interval, the variability in the egg density is not accounted for. Although variability, including quantifying sources, of egg count data has been described, the spatiotemporal distribution of nematode eggs in faeces is not well understood. We believe that statistical inference about the mean egg count for treatment decision-making has not been used previously. The aim of this study was to examine the density of Parascaris eggs in solution and faeces and to describe the use of hypothesis testing for decision-making. Faeces from two foals with Parascaris burdens were mixed with magnesium sulphate solution and 30 McMaster chambers were examined to determine the egg distribution in a well-mixed solution. To examine the distribution of eggs in faeces from an individual animal, three faecal piles from a foal with a known Parascaris burden were obtained, from which 81 counts were performed. A single faecal sample was also collected daily from 20 foals on three consecutive days and a FEC was performed on three separate portions of each sample. As appropriate, Poisson or negative binomial confidence intervals for the distribution mean were calculated. Parascaris eggs in a well-mixed solution conformed to a homogeneous Poisson process, while the egg density in faeces was not homogeneous, but aggregated. This study provides an extension from homogeneous to inhomogeneous Poisson processes, leading to an understanding of why Poisson and negative binomial distributions correspondingly provide a good fit for egg count data. The application of one-sided hypothesis tests for decision-making is presented. Copyright © 2016 Elsevier B.V. All rights reserved.
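A one-sided test of the kind advocated above follows directly from the Poisson model: if eggs in a well-mixed solution follow a homogeneous Poisson process, the sum of chamber counts is itself Poisson, and an exact tail probability supports a treatment decision (the counts and decision threshold below are hypothetical):

```python
import math

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam), from the exact pmf."""
    cdf = sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

def one_sided_poisson_test(total_count, n_chambers, mean_h0):
    """
    Test H0: per-chamber mean <= mean_h0 against H1: mean > mean_h0.
    The sum of n independent Poisson counts is Poisson(n * mean), so the
    p-value is the exact upper tail evaluated at the boundary of H0.
    """
    return poisson_sf(total_count, n_chambers * mean_h0)

counts = [3, 5, 4, 6, 2, 5, 4, 7, 3, 5]   # hypothetical eggs per McMaster chamber
total = sum(counts)
p = one_sided_poisson_test(total, len(counts), mean_h0=3.0)
treat = p < 0.05                          # decision rule at the 5% level

# Wald interval for the per-chamber mean under the Poisson model
mean = total / len(counts)
half = 1.96 * math.sqrt(mean / len(counts))
ci = (mean - half, mean + half)
```

For aggregated egg densities in faeces, the study's point is that a negative binomial model replaces the Poisson, widening the interval and weakening the test accordingly.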
NASA Astrophysics Data System (ADS)
Azila Che Musa, Nor; Mahmud, Zamalia; Baharun, Norhayati
2017-09-01
One of the important skills that is required from any student who are learning statistics is knowing how to solve statistical problems correctly using appropriate statistical methods. This will enable them to arrive at a conclusion and make a significant contribution and decision for the society. In this study, a group of 22 students majoring in statistics at UiTM Shah Alam were given problems relating to topics on testing of hypothesis which require them to solve the problems using confidence interval, traditional and p-value approach. Hypothesis testing is one of the techniques used in solving real problems and it is listed as one of the difficult concepts for students to grasp. The objectives of this study is to explore students’ perceived and actual ability in solving statistical problems and to determine which item in statistical problem solving that students find difficult to grasp. Students’ perceived and actual ability were measured based on the instruments developed from the respective topics. Rasch measurement tools such as Wright map and item measures for fit statistics were used to accomplish the objectives. Data were collected and analysed using Winsteps 3.90 software which is developed based on the Rasch measurement model. The results showed that students’ perceived themselves as moderately competent in solving the statistical problems using confidence interval and p-value approach even though their actual performance showed otherwise. Item measures for fit statistics also showed that the maximum estimated measures were found on two problems. These measures indicate that none of the students have attempted these problems correctly due to reasons which include their lack of understanding in confidence interval and probability values.
Learned helplessness in the rat: effect of response topography in a within-subject design.
dos Santos, Cristiano Valerio; Gehm, Tauane; Hunziker, Maria Helena Leite
2011-02-01
Three experiments investigated learned helplessness in rats manipulating response topography within-subject and different intervals between treatment and tests among groups. In Experiment 1, rats previously exposed to inescapable shocks were tested under an escape contingency where either jumping or nose poking was required to terminate shocks; tests were run either 1, 14 or 28 days after treatment. Most rats failed to jump, as expected, but learned to nose poke, regardless of the interval between treatment and tests and order of testing. The same results were observed in male and female rats from a different laboratory (Experiment 2) and despite increased exposure to the escape contingencies using a within-subject design (Experiment 3). Furthermore, no evidence of helplessness reversal was observed, since animals failed to jump even after having learned to nose-poke in a previous test session. These results are not consistent with a learned helplessness hypothesis, which claims that shock (un)controllability is the key variable responsible for the effect. They are nonetheless consistent with the view that inescapable shocks enhance control by irrelevant features of the relationship between the environment and behavior. Copyright © 2010 Elsevier B.V. All rights reserved.
Précis of statistical significance: rationale, validity, and utility.
Chow, S L
1998-04-01
The null-hypothesis significance-test procedure (NHSTP) is defended in the context of the theory-corroboration experiment, as well as the following contrasts: (a) substantive hypotheses versus statistical hypotheses, (b) theory corroboration versus statistical hypothesis testing, (c) theoretical inference versus statistical decision, (d) experiments versus nonexperimental studies, and (e) theory corroboration versus treatment assessment. The null hypothesis can be true because it is the hypothesis that errors are randomly distributed in data. Moreover, the null hypothesis is never used as a categorical proposition. Statistical significance means only that chance influences can be excluded as an explanation of data; it does not identify the nonchance factor responsible. The experimental conclusion is drawn with the inductive principle underlying the experimental design. A chain of deductive arguments gives rise to the theoretical conclusion via the experimental conclusion. The anomalous relationship between statistical significance and the effect size often used to criticize NHSTP is more apparent than real. The absolute size of the effect is not an index of evidential support for the substantive hypothesis. Nor is the effect size, by itself, informative as to the practical importance of the research result. Being a conditional probability, statistical power cannot be the a priori probability of statistical significance. The validity of statistical power is debatable because statistical significance is determined with a single sampling distribution of the test statistic based on H0, whereas it takes two distributions to represent statistical power or effect size. Sample size should not be determined in the mechanical manner envisaged in power analysis. It is inappropriate to criticize NHSTP for nonstatistical reasons. 
At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of data. Neither can any of them fulfill the nonstatistical functions expected of them by critics.
Characteristics and evolution of writing impairment in Alzheimer's disease.
Platel, H; Lambert, J; Eustache, F; Cadet, B; Dary, M; Viader, F; Lechevalier, B
1993-11-01
Rapcsak et al. (Archs Neurol. 46, 65-67, 1989) proposed a hypothesis describing the evolution of agraphic impairments in dementia of the Alzheimer type (DAT): lexico-semantic disturbances at the beginning of the disease, impairments becoming more and more phonological as the dementia becomes more severe. Our study was conducted in an attempt to prove this hypothesis on the basis of an analysis of the changes observed in the agraphia impairment of patients with DAT. A writing test from dictation was proposed to 22 patients twice, with an interval of 9-12 months between the tests. The results show that within 1 year there was little change in the errors made by the patients in the writing test. The changes observed however were all found to develop within the same logical progression (as demonstrated by Correspondence Analysis). These findings made it possible to develop a general hypothesis indicating that the agraphic impairment evolves through three phases in patients with DAT. The first one is a phase of mild impairment (with a few possible phonologically plausible errors). In the second phase non-phonological spelling errors predominate, phonologically plausible errors are fewer and the errors mostly involve irregular words and non-words. The last phase involves more extreme disorders that affect all types of words. We observe many alterations due to impaired graphic motor capacity. This work would tend to confirm the hypothesis proposed by Rapcsak et al. concerning the development of agraphia, and would emphasize the importance of peripheral impairments, especially grapho-motor impairments which come in addition to the lexical and phonological impairments.
A new modeling and inference approach for the Systolic Blood Pressure Intervention Trial outcomes.
Yang, Song; Ambrosius, Walter T; Fine, Lawrence J; Bress, Adam P; Cushman, William C; Raj, Dominic S; Rehman, Shakaib; Tamariz, Leonardo
2018-06-01
Background/aims: In clinical trials with time-to-event outcomes, usually the significance tests and confidence intervals are based on a proportional hazards model. Thus, the temporal pattern of the treatment effect is not directly considered. This could be problematic if the proportional hazards assumption is violated, as such violation could impact both interim and final estimates of the treatment effect. Methods: We describe the application of inference procedures developed recently in the literature for time-to-event outcomes when the treatment effect may or may not be time-dependent. The inference procedures are based on a new model which contains the proportional hazards model as a sub-model. The temporal pattern of the treatment effect can then be expressed and displayed. The average hazard ratio is used as the summary measure of the treatment effect. The test of the null hypothesis uses adaptive weights that often lead to improvement in power over the log-rank test. Results: Without needing to assume proportional hazards, the new approach yields results consistent with previously published findings in the Systolic Blood Pressure Intervention Trial. It provides a visual display of the time course of the treatment effect. At four of the five scheduled interim looks, the new approach yields smaller p values than the log-rank test. The average hazard ratio and its confidence interval indicates a treatment effect nearly a year earlier than a restricted mean survival time-based approach. Conclusion: When the hazards are proportional between the comparison groups, the new methods yield results very close to the traditional approaches. When the proportional hazards assumption is violated, the new methods continue to be applicable and can potentially be more sensitive to departure from the null hypothesis.
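The log-rank test that serves as the benchmark above can be written out by hand; a minimal two-sample version (our own sketch with toy data, not the trial's code) is:

```python
def logrank_statistic(times_a, events_a, times_b, events_b):
    """
    Two-sample log-rank chi-square statistic (1 df).
    times_*: follow-up times; events_*: 1 = event observed, 0 = censored.
    At each distinct event time, observed events in group A are compared with
    their hypergeometric expectation under the null of equal hazards.
    """
    data = ([(t, e, "a") for t, e in zip(times_a, events_a)] +
            [(t, e, "b") for t, e in zip(times_b, events_b)])
    num = var = 0.0
    for t in sorted({t for t, e, _ in data if e == 1}):
        n = sum(1 for tt, _, _ in data if tt >= t)                 # at risk, both groups
        n_a = sum(1 for tt, _, g in data if tt >= t and g == "a")  # at risk, group A
        d = sum(1 for tt, e, _ in data if tt == t and e == 1)      # events at t
        d_a = sum(1 for tt, e, g in data if tt == t and e == 1 and g == "a")
        num += d_a - d * n_a / n
        if n > 1:
            var += d * (n_a / n) * (1 - n_a / n) * (n - d) / (n - 1)
    return num * num / var

# All group-A events precede all group-B events: a strong separation
stat = logrank_statistic([1, 2, 3, 4, 5], [1] * 5, [6, 7, 8, 9, 10], [1] * 5)
# Identical groups: no evidence against equal hazards
stat_null = logrank_statistic([1, 2, 3], [1, 1, 1], [1, 2, 3], [1, 1, 1])
```

Because every event time is weighted equally here, power suffers when the hazard ratio changes over time, which is the motivation for the adaptively weighted test described in the abstract.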
The effects of environmental support and secondary tasks on visuospatial working memory.
Lilienthal, Lindsey; Hale, Sandra; Myerson, Joel
2014-10-01
In the present experiments, we examined the effects of environmental support on participants' ability to rehearse locations and the role of such support in the effects of secondary tasks on memory span. In Experiment 1, the duration of interitem intervals and the presence of environmental support for visuospatial rehearsal (i.e., the array of possible memory locations) during the interitem intervals were both manipulated across four tasks. When support was provided, memory spans increased as the interitem interval durations increased, consistent with the hypothesis that environmental support facilitates rehearsal. In contrast, when environmental support was not provided, spans decreased as the duration of the interitem intervals increased, consistent with the hypothesis that visuospatial memory representations decay when rehearsal is impeded. In Experiment 2, the ratio of interitem interval duration to intertrial interval duration was kept the same on all four tasks, in order to hold temporal distinctiveness constant, yet forgetting was still observed in the absence of environmental support, consistent with the decay hypothesis. In Experiment 3, the effects of impeding rehearsal were compared to the effects of verbal and visuospatial secondary processing tasks. Forgetting of locations was greater when presentation of to-be-remembered locations alternated with the performance of a secondary task than when rehearsal was impeded by the absence of environmental support. The greatest forgetting occurred when a secondary task required the processing of visuospatial information, suggesting that in addition to decay, both domain-specific and domain-general effects contribute to forgetting on visuospatial working memory tasks.
Two-condition within-participant statistical mediation analysis: A path-analytic framework.
Montoya, Amanda K; Hayes, Andrew F
2017-03-01
Researchers interested in testing mediation often use designs where participants are measured on a dependent variable Y and a mediator M in both of 2 different circumstances. The dominant approach to assessing mediation in such a design, proposed by Judd, Kenny, and McClelland (2001), relies on a series of hypothesis tests about components of the mediation model and is not based on an estimate of or formal inference about the indirect effect. In this article we recast Judd et al.'s approach in the path-analytic framework that is now commonly used in between-participant mediation analysis. By so doing, it is apparent how to estimate the indirect effect of a within-participant manipulation on some outcome through a mediator as the product of paths of influence. This path-analytic approach eliminates the need for discrete hypothesis tests about components of the model to support a claim of mediation, as Judd et al.'s method requires, because it relies only on an inference about the product of paths-the indirect effect. We generalize methods of inference for the indirect effect widely used in between-participant designs to this within-participant version of mediation analysis, including bootstrap confidence intervals and Monte Carlo confidence intervals. Using this path-analytic approach, we extend the method to models with multiple mediators operating in parallel and serially and discuss the comparison of indirect effects in these more complex models. We offer macros and code for SPSS, SAS, and Mplus that conduct these analyses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
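The core of the path-analytic approach, an indirect effect estimated as a product of paths with a bootstrap confidence interval, can be sketched for the two-condition within-participant case (simulated difference scores; this simplification omits the mean-centered sum of the mediator that the full Judd et al./Montoya-Hayes model includes as a covariate):

```python
import random

def mean(v):
    return sum(v) / len(v)

def slope(x, y):
    """OLS slope of y on x."""
    mx, my = mean(x), mean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def indirect_effect(dM, dY):
    """a*b: a = mean within-person change in the mediator,
    b = slope of the outcome difference on the mediator difference."""
    return mean(dM) * slope(dM, dY)

def bootstrap_ci(dM, dY, n_boot=2000, alpha=0.05, seed=2):
    """Percentile bootstrap CI for the indirect effect, resampling participants."""
    rng, n, est = random.Random(seed), len(dM), []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        est.append(indirect_effect([dM[i] for i in idx], [dY[i] for i in idx]))
    est.sort()
    return est[int(alpha / 2 * n_boot)], est[int((1 - alpha / 2) * n_boot) - 1]

# Simulated difference scores for 40 participants: the condition shifts the
# mediator by about 1 unit, and the mediator carries the effect to the outcome.
rng = random.Random(0)
dM = [1.0 + rng.gauss(0, 0.5) for _ in range(40)]
dY = [0.8 * m + rng.gauss(0, 0.5) for m in dM]
ab = indirect_effect(dM, dY)
lo, hi = bootstrap_ci(dM, dY)   # an interval excluding 0 supports mediation
```

The single inference on a*b replaces the series of component-wise hypothesis tests that the Judd et al. procedure requires, which is the article's central point.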
Male sperm whale acoustic behavior observed from multipaths at a single hydrophone
NASA Astrophysics Data System (ADS)
Laplanche, Christophe; Adam, Olivier; Lopatka, Maciej; Motsch, Jean-François
2005-10-01
Sperm whales generate transient sounds (clicks) when foraging. These clicks have been described as echolocation sounds, a result of having measured the source level and the directionality of these signals and having extrapolated results from biosonar tests made on some small odontocetes. The authors propose a passive acoustic technique requiring only one hydrophone to investigate the acoustic behavior of free-ranging sperm whales. They estimate whale pitch angles from the multipath distribution of click energy. They emphasize the close bond between the sperm whale's physical and acoustic activity, leading to the hypothesis that sperm whales might, like some small odontocetes, control click level and rhythm. An echolocation model estimating the range of the sperm whale's targets from the interclick interval is computed and tested during different stages of the whale's dive. Such a hypothesis on the echolocation process would indicate that sperm whales echolocate their prey layer when initiating their dives and follow a methodic technique when foraging.
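Under the echolocation hypothesis described above, the interclick interval is read as two-way travel time, so target range follows from the speed of sound (a simplified sketch; real interclick intervals also include a processing lag, and the sound speed is an assumed typical value):

```python
SOUND_SPEED = 1500.0  # m/s, a typical deep-water value (an assumption)

def target_range_from_ici(ici_s, sound_speed=SOUND_SPEED):
    """
    Range implied by an interclick interval if the interval equals the
    two-way travel time to the target: range = c * ICI / 2.
    """
    return sound_speed * ici_s / 2.0

# A 0.5 s interclick interval corresponds to a target about 375 m away
r = target_range_from_ici(0.5)
```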
Animal choruses emerge from receiver psychology
Greenfield, Michael D.; Esquer-Garrigos, Yareli; Streiff, Réjane; Party, Virginie
2016-01-01
Synchrony and alternation in large animal choruses are often viewed as adaptations by which cooperating males increase their attractiveness to females or evade predators. Alternatively, these seemingly composed productions may simply emerge by default from the receiver psychology of mate choice. This second, emergent property hypothesis has been inferred from findings that females in various acoustic species ignore male calls that follow a neighbor’s by a brief interval, that males often adjust the timing of their call rhythm and reduce the incidence of ineffective, following calls, and from simulations modeling the collective outcome of male adjustments. However, the purported connection between male song timing and female preference has never been tested experimentally, and the emergent property hypothesis has remained speculative. Studying a distinctive katydid species genetically structured as isolated populations, we conducted a comparative phylogenetic analysis of the correlation between male call timing and female preference. We report that across 17 sampled populations male adjustments match the interval over which females prefer leading calls; moreover, this correlation holds after correction for phylogenetic signal. Our study is the first demonstration that male adjustments coevolved with female preferences and thereby confirms the critical link in the emergent property model of chorus evolution. PMID:27670673
Maxcey, Ashleigh M.; Fukuda, Keisuke; Song, Won S.; Woodman, Geoffrey F.
2015-01-01
As researchers who study working memory, we often assume that participants keep a representation of an object in working memory when we present a cue that indicates that object will be tested in a couple of seconds. This intuitively accounts for how well people can remember a cued object relative to their memory for that same object presented without a cue. However, it is possible that this superior memory does not purely reflect storage of the cued object in working memory. We tested the hypothesis that cues presented during a stream of objects, followed by a short retention interval and immediate memory test, can change how information is handled by long-term memory. We tested this hypothesis using a family of frontal event-related potentials (ERPs) believed to reflect long-term memory storage. We found that these frontal indices of long-term memory were sensitive to the task relevance of objects signaled by auditory cues, even when objects repeat frequently such that proactive interference was high. Our findings indicate the problematic nature of assuming process purity in the study of working memory, and demonstrate how frequent stimulus repetitions fail to isolate the role of working memory mechanisms. PMID:25604772
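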
Understanding the Role of P Values and Hypothesis Tests in Clinical Research.
Mark, Daniel B; Lee, Kerry L; Harrell, Frank E
2016-12-01
P values and hypothesis testing methods are frequently misused in clinical research. Much of this misuse appears to be owing to the widespread, mistaken belief that they provide simple, reliable, and objective triage tools for separating the true and important from the untrue or unimportant. The primary focus in interpreting therapeutic clinical research data should be on the treatment ("oomph") effect, a metaphorical force that moves patients given an effective treatment to a different clinical state relative to their control counterparts. This effect is assessed using 2 complementary types of statistical measures calculated from the data, namely, effect magnitude or size and precision of the effect size. In a randomized trial, effect size is often summarized using constructs, such as odds ratios, hazard ratios, relative risks, or adverse event rate differences. How large a treatment effect has to be to be consequential is a matter for clinical judgment. The precision of the effect size (conceptually related to the amount of spread in the data) is usually addressed with confidence intervals. P values (significance tests) were first proposed as an informal heuristic to help assess how "unexpected" the observed effect size was if the true state of nature was no effect or no difference. Hypothesis testing was a modification of the significance test approach that envisioned controlling the false-positive rate of study results over many (hypothetical) repetitions of the experiment of interest. Both can be helpful but, by themselves, provide only a tunnel vision perspective on study results that ignores the clinical effects the study was conducted to measure.
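The distinction drawn above, effect size plus precision rather than a bare p value, can be illustrated for a 2x2 trial table (hypothetical counts; the Wald interval on the log odds ratio is one standard way to express precision):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """
    Effect size and precision for a 2x2 trial table:
    treatment: a events, b non-events; control: c events, d non-events.
    Returns (odds ratio, lower, upper) using the Wald interval on log(OR).
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical trial: 30/200 events on treatment vs 50/200 on control
or_, lo, hi = odds_ratio_ci(30, 170, 50, 150)
# The interval conveys both magnitude and precision; a p value alone does not.
```

Whether an odds ratio of this size is consequential remains, as the article stresses, a matter for clinical judgment.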
Tong, Tom K; Fu, Frank H; Eston, Roger; Chung, Pak-Kwong; Quach, Binh; Lu, Kui
2010-11-01
This study examined the hypothesis that chronic (training) and acute (warm-up) loaded ventilatory activities applied to the inspiratory muscles (IM) in an integrated manner would augment the training volume of an interval running program. This in turn would result in additional improvement in the maximum performance of the Yo-Yo intermittent recovery test in comparison with interval training alone. Eighteen male nonprofessional athletes were allocated to either an inspiratory muscle loading (IML) group or control group. Both groups participated in a 6-week interval running program consisting of 3-4 workouts (1-3 sets of various repetitions of selected distance [100-2,400 m] per workout) per week. For the IML group, 4-week IM training (30 inspiratory efforts at 50% maximal static inspiratory pressure [P0] per set, 2 sets·d⁻¹, 6 d·wk⁻¹) was applied before the interval program. Specific IM warm-up (2 sets of 30 inspiratory efforts at 40% P0) was performed before each workout of the program. For the control group, no IML was applied. In comparison with the control group, the interval training volume as indicated by the repeatability of running bouts at high intensity was approximately 27% greater in the IML group. Greater increase in the maximum performance of the Yo-Yo test (control: 16.9 ± 5.5%; IML: 30.7 ± 4.7% of baseline value) was also observed after training. The enhanced exercise performance was partly attributable to the greater reductions in the sensation of breathlessness and whole-body metabolic stress during the Yo-Yo test. These findings show that the combination of chronic and acute IML into a high-intensity interval running program is a beneficial training strategy for enhancing the tolerance to high-intensity intermittent bouts of running.
NASA Astrophysics Data System (ADS)
Malinverno, A.; Cook, A.; Daigle, H.; Oryan, B.
2017-12-01
Methane hydrates in fine-grained marine sediments are often found within veins and fractures occupying discrete depth intervals that are surrounded by hydrate-free sediments. As they are not connected with gas sources beneath the base of the methane hydrate stability zone (MHSZ), these isolated hydrate-bearing intervals have been interpreted as formed by in situ microbial methane. We investigate here the hypothesis that these hydrate deposits form in sediments that were deposited during glacial lowstands and contain higher amounts of labile particulate organic carbon (POC), leading to enhanced microbial methanogenesis. During Pleistocene lowstands, river loads are deposited near the steep top of the continental slope and turbidity currents transport organic-rich, fine-grained sediments to deep waters. Faster sedimentation rates during glacial periods result in better preservation of POC because of decreased exposure times to oxic conditions. The net result is that more labile POC enters the methanogenic zone and more methane is generated in these sediments. To test this hypothesis, we apply an advection-diffusion-reaction model with a time-dependent deposition of labile POC at the seafloor controlled by glacioeustatic sea level variations in the last 250 kyr. The model is run for parameters estimated at three sites drilled by the 2009 Gulf of Mexico Joint Industry Project: Walker Ridge in the Terrebonne Basin (WR313-G and WR313-H) and Green Canyon near the canyon embayment into the Sigsbee Escarpment (GC955-H). In the model, gas hydrate forms in sediments with higher labile POC content deposited during the glacial cycle between 230 and 130 kyr (marine isotope stages 6 and 7). The corresponding depth intervals in the three sites contain hydrates, as shown by high bulk electrical resistivities and resistive subvertical fracture fills. 
This match supports the hypothesis that enhanced POC burial during glacial lowstands can result in hydrate formation from in situ microbial methanogenesis. Our results have implications for carbon cycling during glacial/interglacial cycles and for hydrate accumulation in the MHSZ. In particular, once hydrate-bearing intervals formed during glacial periods are buried beneath the MHSZ and dissociate, gas bubbles can rise and recycle microbial methane into the MHSZ.
Population-wide folic acid fortification and preterm birth: testing the folate depletion hypothesis.
Naimi, Ashley I; Auger, Nathalie
2015-04-01
We assess whether population-wide folic acid fortification policies were followed by a reduction of preterm and early-term birth rates in Québec among women with short and optimal interpregnancy intervals. We extracted birth certificate data for 1.3 million births between 1981 and 2010 to compute age-adjusted preterm and early-term birth rates stratified by short and optimal interpregnancy intervals. We used Joinpoint regression to detect changes in the preterm and early-term birth rates and assess whether these changes coincided with the implementation of population-wide folic acid fortification. A change in the preterm birth rate occurred in 2000 among women with short (95% confidence interval [CI] = 1994, 2005) and optimal (95% CI = 1995, 2008) interpregnancy intervals. Changes in early-term birth rates did not coincide with the implementation of folic acid fortification. Our results do not indicate a link between folic acid fortification and early-term birth but suggest an improvement in preterm birth rates after implementation of a nationwide folic acid fortification program.
Msetfi, Rachel M; Murphy, Robin A; Simpson, Jane; Kornbrot, Diana E
2005-02-01
The perception of the effectiveness of instrumental actions is influenced by depressed mood. Depressive realism (DR) is the claim that depressed people are particularly accurate in evaluating instrumentality. In two experiments, the authors tested the DR hypothesis using an action-outcome contingency judgment task. DR effects were a function of intertrial interval length and outcome density, suggesting that depressed mood is accompanied by reduced contextual processing rather than increased judgment accuracy. The DR effect was observed only when participants were exposed to extended periods in which no actions or outcomes occurred. This implies that DR may result from an impairment in contextual processing rather than accurate but negative expectations. Therefore, DR is consistent with a cognitive distortion view of depression.
Monitoring and/or Detection of Wellbore Leakage In Energy Storage Wells
NASA Astrophysics Data System (ADS)
Ratigan, J.
2017-12-01
Energy (compressed natural gas, crude oil, NGL, and LPG) storage wells in solution-mined caverns in salt formations are required to be tested for integrity every five years. Rules promulgated for such testing typically assume the cavern interval in the salt formation is inherently impermeable, even though some experience demonstrates that this is not always the case. A protocol for testing this impermeability hypothesis should be developed. The description of the integrity test for the "well" component of the well-and-cavern storage system was developed more than 30 years ago. However, some of the implicit assumptions of that decades-old well test protocol no longer apply to the large-diameter, high-flow-rate wells commonly constructed today. More detailed test protocols are necessary for contemporary energy storage wells.
Mazur, Wojciech; Rivera, Jose M; Khoury, Alexander F; Basu, Abhijeet G; Perez-Verdia, Alejandro; Marks, Gary F; Chang, Su Min; Olmos, Leopoldo; Quiñones, Miguel A; Zoghbi, William A
2003-04-01
Exercise (Ex) echocardiography has been shown to have significant prognostic power, independent of other known predictors of risk from an Ex stress test. The purpose of this study was to evaluate a risk index, incorporating echocardiographic and conventional Ex variables, for a more comprehensive risk stratification and identification of a very low-risk group. Two consecutive, mutually exclusive populations referred for treadmill Ex echocardiography with the Bruce protocol were investigated: hypothesis-generating (388 patients; 268 males; age 55 +/- 13 years) and hypothesis-testing (105 patients; 61 males; age 54 +/- 14 years). Cardiac events included cardiac death, myocardial infarction, late revascularization (>90 days), hospital admission for unstable angina, and admission for heart failure. Mean follow-up in the hypothesis-generating population was 3.1 years. There were 38 cardiac events. Independent predictors of events by multivariate analysis were: Ex wall motion score index (odds ratio [OR] = 2.77/unit; P < .001); ischemic S-T depression ≥ 1 mm (OR = 2.84; P = .002); and treadmill time (OR = 0.87/min; P = .037). A risk index was generated on the basis of the multivariate Cox regression model as: risk index = 1.02 (Ex wall motion score index) + 1.04 (S-T change) - 0.14 (treadmill time). The validity of this index was tested in the hypothesis-testing population. Event rates at 3 years were lowest (0%) in the lower quartile of risk index (-1.22 to -0.47), highest (29.6%) in the upper quartile (+0.66 to +2.02), and intermediate (19.2% to 15.3%) in the intermediate quartiles. The OR of the risk index for predicting cardiac events was 2.94/unit (95% confidence interval: 1.4 to 6.2; P = .0043). Echocardiographic and Ex parameters are independent, powerful predictors of cardiac events after treadmill stress testing. A risk index can be derived from these parameters for a more comprehensive risk stratification with Ex echocardiography.
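The derived risk index is a linear combination that can be computed directly from the three predictors; a minimal sketch using the coefficients quoted above (the function name and example values are mine):

```python
def exercise_echo_risk_index(wall_motion_score_index: float,
                             st_depression: int,
                             treadmill_time_min: float) -> float:
    """Risk index from the abstract's Cox model:
    1.02*(Ex wall motion score index) + 1.04*(S-T change) - 0.14*(treadmill time).
    st_depression is 1 if ischemic S-T depression of 1 mm or more, else 0."""
    return (1.02 * wall_motion_score_index
            + 1.04 * st_depression
            - 0.14 * treadmill_time_min)

# A patient with a normal exercise wall motion score index (1.0),
# no S-T depression, and 9 minutes of treadmill time:
print(round(exercise_echo_risk_index(1.0, 0, 9.0), 2))  # -0.24
```

Lower values of the index correspond to the lower-risk quartiles reported above.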
Do physical leisure time activities prevent fatigue? A 15 month prospective study of nurses' aides.
Eriksen, W; Bruusgaard, D
2004-06-01
To test the hypothesis that physical leisure time activities reduce the risk of developing persistent fatigue. The hypothesis was tested in a sample that was homogeneous with respect to sex and occupation, with a prospective cohort design. Of 6234 vocationally active, female, Norwegian nurses' aides, not on leave because of illness or pregnancy when they completed a mailed questionnaire in 1999, 5341 (85.7%) completed a second questionnaire 15 months later. The main outcome measure was the prevalence of persistent fatigue, that is, always or usually feeling fatigued in the daytime during the preceding 14 days. In participants without persistent fatigue at baseline, reported engagement in physical leisure time activities for 20 minutes or more at least once a week during the three months before baseline was associated with a reduced risk of persistent fatigue at follow-up (odds ratio = 0.70; 95% confidence interval 0.55 to 0.89), after adjustment for age, affective symptoms, sleeping problems, musculoskeletal pain, long-term health problems of any kind, smoking, marital status, tasks of a caring nature during leisure time, and work factors at baseline. The study supports the hypothesis that physical leisure time activities reduce the risk of developing persistent fatigue.
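Odds ratios with confidence intervals like the one quoted (0.70; 95% CI 0.55 to 0.89) are routinely computed on the log-odds scale; a generic sketch from a 2 x 2 table with hypothetical counts (the paper's estimate came from an adjusted model, which this does not reproduce):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for the 2 x 2 table [[a, b], [c, d]] with a Wald-type
    95% CI: exp(ln(OR) +/- z * sqrt(1/a + 1/b + 1/c + 1/d))."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return (or_,
            math.exp(math.log(or_) - z * se),
            math.exp(math.log(or_) + z * se))

# Hypothetical counts: fatigued/not fatigued among active vs. inactive aides.
or_, lo, hi = odds_ratio_ci(100, 900, 130, 870)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```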
Chromium release from new stainless steel, recycled and nickel-free orthodontic brackets.
Sfondrini, Maria Francesca; Cacciafesta, Vittorio; Maffia, Elena; Massironi, Sarah; Scribante, Andrea; Alberti, Giancarla; Biesuz, Raffaela; Klersy, Catherine
2009-03-01
To test the hypothesis that there is no difference in the amounts of chromium released from new stainless steel brackets, recycled stainless steel brackets, and nickel-free (Ni-free) orthodontic brackets. This in vitro study was performed using a classic batch procedure by immersion of the samples in artificial saliva at various acidities (pH 4.2, 6.5, and 7.6) over an extended time interval (t(1) = 0.25 h, t(2) = 1 h, t(3) = 24 h, t(4) = 48 h, t(5) = 120 h). The amount of chromium release was determined using an atomic absorption spectrophotometer and an inductively coupled plasma atomic emission spectrometer. Statistical analysis included a linear regression model for repeated measures, with calculation of Huber-White robust standard errors to account for intrabracket correlation of data. For post hoc comparisons the Bonferroni correction was applied. The greatest amount of chromium was released from new stainless steel brackets (0.52 +/- 1.083 microg/g), whereas the recycled brackets released 0.27 +/- 0.38 microg/g. The smallest release was measured with Ni-free brackets (0.21 +/- 0.51 microg/g). The difference between recycled brackets and Ni-free brackets was not statistically significant (P = .13). For all brackets, the greatest release (P = .000) was measured at pH 4.2, and a significant increase was reported between all time intervals (P < .002). The hypothesis is rejected, but the amount of chromium released in all test solutions was well below the daily dietary intake level.
Neukirchen, Martin; Schaefer, Maximilian S; Kern, Carolin; Brett, Sarah; Werdehausen, Robert; Rellecke, Philipp; Reyle-Hahn, Matthias; Kienbaum, Peter
2015-09-01
Impaired cardiac repolarization, indicated by a prolonged QT interval, may cause critical ventricular arrhythmias. Many anesthetics increase the QT interval by blockade of rapidly acting potassium rectifier channels. Although xenon does not affect these channels in isolated cardiomyocytes, the authors hypothesized that xenon increases the QT interval by direct and/or indirect sympathomimetic effects. Thus, the authors tested the hypothesis that xenon alters the heart rate-corrected cardiac QT (QTc) interval at anesthetic concentrations. The effect of xenon on the QTc interval was evaluated in eight healthy volunteers and in 35 patients undergoing abdominal or trauma surgery. In the volunteers, the QTc interval was recorded while awake, after denitrogenation, and during xenon monoanesthesia (FetXe > 0.65). In patients, the QTc interval was recorded while awake, after anesthesia induction with propofol and remifentanil, and during steady state of xenon/remifentanil anesthesia (FetXe > 0.65). The QTc interval was determined from three consecutive cardiac intervals on electrocardiogram printouts in a blinded manner and corrected with the Bazett formula. In healthy volunteers, xenon did not alter the QTc interval (mean difference: +0.11 ms [95% CI, -22.4 to 22.7]). In patients, no alteration of the QTc interval was noted after anesthesia induction with propofol/remifentanil. After propofol was replaced with xenon, the QTc interval remained unaffected (417 ± 32 ms vs. awake: 414 ± 25 ms) with a mean difference of 4.4 ms (95% CI, -4.6 to 13.5). Xenon monoanesthesia in healthy volunteers and xenon/remifentanil anesthesia in patients without clinically relevant cardiovascular disease do not increase the QTc interval.
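The Bazett correction used above divides the measured QT by the square root of the preceding RR interval; a minimal sketch (units and example values are mine):

```python
import math

def qtc_bazett(qt_ms: float, rr_s: float) -> float:
    """Heart rate-corrected QT by the Bazett formula: QTc = QT / sqrt(RR),
    with QT in milliseconds and the RR interval in seconds."""
    return qt_ms / math.sqrt(rr_s)

# At 60 beats/min (RR = 1 s) the corrected and measured QT coincide:
print(qtc_bazett(400, 1.0))              # 400.0
# At 75 beats/min (RR = 0.8 s) the same measured QT corrects upward:
print(round(qtc_bazett(400, 0.8), 1))    # 447.2
```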
NASA Technical Reports Server (NTRS)
Hausdorff, J. M.; Mitchell, S. L.; Firtion, R.; Peng, C. K.; Cudkowicz, M. E.; Wei, J. Y.; Goldberger, A. L.
1997-01-01
Fluctuations in the duration of the gait cycle (the stride interval) display fractal dynamics and long-range correlations in healthy young adults. We hypothesized that these stride-interval correlations would be altered by changes in neurological function associated with aging and certain disease states. To test this hypothesis, we compared the stride-interval time series of 1) healthy elderly subjects and young controls and of 2) subjects with Huntington's disease and healthy controls. Using detrended fluctuation analysis we computed alpha, a measure of the degree to which one stride interval is correlated with previous and subsequent intervals over different time scales. The scaling exponent alpha was significantly lower in elderly subjects compared with young subjects (elderly: 0.68 +/- 0.14; young: 0.87 +/- 0.15; P < 0.003). The scaling exponent alpha was also smaller in the subjects with Huntington's disease compared with disease-free controls (Huntington's disease: 0.60 +/- 0.24; controls: 0.88 +/- 0.17; P < 0.005). Moreover, alpha was linearly related to degree of functional impairment in subjects with Huntington's disease (r = 0.78, P < 0.0005). These findings demonstrate that stride-interval fluctuations are more random (i.e., less correlated) in elderly subjects and in subjects with Huntington's disease. Abnormal alterations in the fractal properties of gait dynamics are apparently associated with changes in central nervous system control.
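Detrended fluctuation analysis as described (integrate the series, detrend it in windows, and fit the log-log slope of fluctuation against window size) can be sketched as below; the window sizes and the white-noise demonstration are my choices, and uncorrelated input should give alpha near 0.5:

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis of a time series such as stride
    intervals. Returns the scaling exponent alpha: ~0.5 for uncorrelated
    noise, ~1.0 for 1/f-like long-range correlations."""
    y = np.cumsum(x - np.mean(x))               # integrated profile
    fluct = []
    for n in scales:
        f2 = []
        for i in range(len(y) // n):            # non-overlapping windows
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            f2.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    # alpha is the slope of log F(n) against log n
    return float(np.polyfit(np.log(scales), np.log(fluct), 1)[0])

rng = np.random.default_rng(0)
white = rng.normal(size=4096)                   # uncorrelated "stride intervals"
print(round(dfa_alpha(white), 2))               # close to 0.5
```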
Dietary nucleotides and early growth in formula-fed infants: a randomized controlled trial.
Singhal, Atul; Kennedy, Kathy; Lanigan, J; Clough, Helen; Jenkins, Wendy; Elias-Jones, Alun; Stephenson, Terrence; Dudek, Peter; Lucas, Alan
2010-10-01
Dietary nucleotides are nonprotein nitrogenous compounds that are found in high concentrations in breast milk and are thought to be conditionally essential nutrients in infancy. A high nucleotide intake has been suggested to explain some of the benefits of breastfeeding compared with formula feeding and to promote infant growth. However, relatively few large-scale randomized trials have tested this hypothesis in healthy infants. We tested the hypothesis that nucleotide supplementation of formula benefits early infant growth. Occipitofrontal head circumference, weight, and length were assessed in infants who were randomly assigned to groups fed nucleotide-supplemented (31 mg/L; n=100) or control formula without nucleotide supplementation (n=100) from birth to the age of 20 weeks, and in infants who were breastfed (reference group; n=101). Infants fed with nucleotide-supplemented formula had greater occipitofrontal head circumference at ages 8, 16, and 20 weeks than infants fed control formula (mean difference in z scores at 8 weeks: 0.4 [95% confidence interval: 0.1-0.7]; P=.006) even after adjustment for potential confounding factors (P=.002). Weight at 8 weeks and the increase in both occipitofrontal head circumference and weight from birth to 8 weeks were also greater in infants fed nucleotide-supplemented formula than in those fed control formula. Our data support the hypothesis that nucleotide supplementation leads to increased weight gain and head growth in formula-fed infants. Therefore, nucleotides could be conditionally essential for optimal infant growth in some formula-fed populations. Additional research is needed to test the hypothesis that the benefits of nucleotide supplementation for early head growth, a critical period for brain growth, have advantages for long-term cognitive development.
Pellegrino, J W; Siegel, A W; Dhawan, M
1976-01-01
Picture and word triads were tested in a Brown-Peterson short-term retention task at varying delay intervals (3, 10, or 30 sec) and under acoustic and simultaneous acoustic and visual distraction. Pictures were superior to words at all delay intervals under single acoustic distraction. Dual distraction consistently reduced picture retention while simultaneously facilitating word retention. The results were interpreted in terms of the dual coding hypothesis with modality-specific interference effects in the visual and acoustic processing systems. The differential effects of dual distraction were related to the introduction of visual interference and differential levels of functional acoustic interference across dual and single distraction tasks. The latter was supported by a constant 2/1 ratio in the backward counting rates of the acoustic vs. dual distraction tasks. The results further suggest that retention may not depend on total processing load of the distraction task, per se, but rather that processing load operates within modalities.
Intra-fraction motion of the prostate is a random walk
NASA Astrophysics Data System (ADS)
Ballhausen, H.; Li, M.; Hegemann, N.-S.; Ganswindt, U.; Belka, C.
2015-01-01
A random walk model for intra-fraction motion has been proposed, where at each step the prostate moves a small amount from its current position in a random direction. Online tracking data from perineal ultrasound is used to validate or reject this model against alternatives. Intra-fraction motion of a prostate was recorded by 4D ultrasound (Elekta Clarity system) during 84 fractions of external beam radiotherapy of six patients. In total, the center of the prostate was tracked for 8 h in intervals of 4 s. Maximum likelihood model parameters were fitted to the data. The null hypothesis of a random walk was tested with the Dickey-Fuller test. The null hypothesis of stationarity was tested by the Kwiatkowski-Phillips-Schmidt-Shin test. The increase of variance in prostate position over time and the variability in motility between fractions were analyzed. Intra-fraction motion of the prostate was best described as a stochastic process with an auto-correlation coefficient of ρ = 0.92 ± 0.13. The random walk hypothesis (ρ = 1) could not be rejected (p = 0.27). The static noise hypothesis (ρ = 0) was rejected (p < 0.001). The Dickey-Fuller test rejected the null hypothesis ρ = 1 in 25% to 32% of cases. On average, the Kwiatkowski-Phillips-Schmidt-Shin test rejected the null hypothesis ρ = 0 with a probability of 93% to 96%. The variance in prostate position increased linearly over time (r2 = 0.9 ± 0.1). Variance kept increasing and did not settle at a maximum as would be expected from a stationary process. There was substantial variability in motility between fractions and patients with maximum aberrations from isocenter ranging from 0.5 mm to over 10 mm in one patient alone. In conclusion, evidence strongly suggests that intra-fraction motion of the prostate is a random walk and neither static (like inter-fraction setup errors) nor stationary (like a cyclic motion such as breathing, for example). 
The prostate tends to drift away from the isocenter during a fraction, and this variance increases with time, such that shorter fractions are beneficial to the problem of intra-fraction motion. As a consequence, fixed safety margins (which would over-compensate at the beginning and under-compensate at the end of a fraction) cannot optimally account for intra-fraction motion. Instead, online tracking and position correction on-the-fly should be considered as the preferred approach to counter intra-fraction motion.
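The distinction the unit-root tests above draw, rho = 1 for a random walk versus rho = 0 for static noise about a fixed position, can be illustrated with a least-squares estimate of the lag-1 coefficient; a toy simulation under assumed parameters, not the Clarity tracking data:

```python
import numpy as np

def ar1_coefficient(x):
    """Least-squares estimate of rho in x[t] = rho * x[t-1] + noise.
    rho near 1 indicates a random walk; rho near 0, static noise."""
    x0, x1 = x[:-1], x[1:]
    return float(np.dot(x0, x1) / np.dot(x0, x0))

rng = np.random.default_rng(42)
steps = rng.normal(scale=0.1, size=2000)    # small random displacements, mm
walk = np.cumsum(steps)                     # random-walk "prostate position"
noise = rng.normal(scale=0.1, size=2000)    # static noise about the isocenter

print(round(ar1_coefficient(walk), 2))      # near 1
print(round(ar1_coefficient(noise), 2))     # near 0
```

For the random walk, the variance of position also grows linearly with elapsed time, matching the behavior reported above.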
[Dilemma of the null hypothesis in experimental tests of ecological hypotheses].
Li, Ji
2016-06-01
Experimental testing is one of the major ways of testing ecological hypotheses, though there are many arguments over the role of the null hypothesis. Quinn and Dunham (1983) analyzed the hypothesis-deduction model of Platt (1964) and concluded that there is no null hypothesis in ecology that can be strictly tested by experiments. Fisher's falsificationism and Neyman-Pearson (N-P) non-decisivity prevent a statistical null hypothesis from being strictly tested. Moreover, since the null hypothesis H0 (α=1, β=0) and alternative hypothesis H1' (α'=1, β'=0) in ecological processes differ from those in classical physics, the ecological null hypothesis likewise cannot be strictly tested experimentally. These dilemmas of the null hypothesis can be alleviated by reducing the P value, careful selection of the null hypothesis, non-centralization of the non-null hypothesis, and two-tailed testing. However, statistical null hypothesis significance testing (NHST) should not be equated with a logical test of causality for an ecological hypothesis. Hence, findings and conclusions of methodological studies and experimental tests based on NHST are not always logically reliable.
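One of the remedies listed above is the two-tailed test, which doubles the one-sided tail probability of the statistic; a minimal sketch for a standard-normal test statistic, using the error function:

```python
import math

def two_tailed_p(z):
    """Two-tailed p-value for a standard-normal statistic:
    p = 2 * (1 - Phi(|z|)), with Phi computed via math.erf."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(round(two_tailed_p(1.96), 3))   # ~0.05
print(round(two_tailed_p(3.0), 4))    # ~0.0027
```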
A Bayesian framework to estimate diversification rates and their variation through time and space
2011-01-01
Background Patterns of species diversity are the result of speciation and extinction processes, and molecular phylogenetic data can provide valuable information to derive their variability through time and across clades. Bayesian Markov chain Monte Carlo methods offer a promising framework to incorporate phylogenetic uncertainty when estimating rates of diversification. Results We introduce a new approach to estimate diversification rates in a Bayesian framework over a distribution of trees under various constant and variable rate birth-death and pure-birth models, and test it on simulated phylogenies. Furthermore, speciation and extinction rates and their posterior credibility intervals can be estimated while accounting for non-random taxon sampling. The framework is particularly suitable for hypothesis testing using Bayes factors, as we demonstrate analyzing dated phylogenies of Chondrostoma (Cyprinidae) and Lupinus (Fabaceae). In addition, we develop a model that extends the rate estimation to a meta-analysis framework in which different data sets are combined in a single analysis to detect general temporal and spatial trends in diversification. Conclusions Our approach provides a flexible framework for the estimation of diversification parameters and hypothesis testing while simultaneously accounting for uncertainties in the divergence times and incomplete taxon sampling. PMID:22013891
The 'aerobic/resistance/inspiratory muscle training hypothesis in heart failure'.
Laoutaris, Ioannis D
2018-01-01
Evidence from large multicentre exercise intervention trials in heart failure patients, investigating both moderate continuous aerobic training and high intensity interval training, indicates that the 'crème de la crème' exercise programme for this population remains to be found. The 'aerobic/resistance/inspiratory (ARIS) muscle training hypothesis in heart failure' is introduced, suggesting that combined ARIS muscle training may result in maximal exercise pathophysiological and functional benefits in heart failure patients. The hypothesis is based on the decoding of the 'skeletal muscle hypothesis in heart failure' and on revision of experimental evidence to date showing that exercise and functional intolerance in heart failure patients are associated not only with reduced muscle endurance, indication for aerobic training (AT), but also with reduced muscle strength and decreased inspiratory muscle function contributing to weakness, dyspnoea, fatigue and low aerobic capacity, forming the grounds for the addition of both resistance training (RT) and inspiratory muscle training (IMT) to AT. The hypothesis will be tested by comparing all potential exercise combinations, ARIS, AT/RT, AT/IMT, AT, evaluating both functional and cardiac indices in a large sample of heart failure patients of New York Heart Association class II-III and left ventricular ejection fraction ≤35% ad hoc by the multicentre randomized clinical trial, Aerobic Resistance, InSpiratory Training OutcomeS in Heart Failure (ARISTOS-HF trial).
Perception of non-verbal auditory stimuli in Italian dyslexic children.
Cantiani, Chiara; Lorusso, Maria Luisa; Valnegri, Camilla; Molteni, Massimo
2010-01-01
Auditory temporal processing deficits have been proposed as the underlying cause of phonological difficulties in Developmental Dyslexia. The hypothesis was tested in a sample of 20 Italian dyslexic children aged 8-14, and 20 matched control children. Three tasks of auditory processing of non-verbal stimuli, involving discrimination and reproduction of sequences of rapidly presented short sounds were expressly created. Dyslexic subjects performed more poorly than control children, suggesting the presence of a deficit only partially influenced by the duration of the stimuli and of inter-stimulus intervals (ISIs).
Rank score and permutation testing alternatives for regression quantile estimates
Cade, B.S.; Richards, J.D.; Mielke, P.W.
2006-01-01
Performance of quantile rank score tests used for hypothesis testing and for constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1) was evaluated by simulation for models with p = 2 and 6 predictors, moderate collinearity among predictors, homogeneous and heterogeneous errors, small to moderate samples (n = 20–300), and central to upper quantiles (0.50–0.99). Test statistics evaluated were the conventional quantile rank score T statistic, distributed as a χ2 random variable with q degrees of freedom (where q parameters are constrained by H0), and an F statistic with its sampling distribution approximated by permutation. The permutation F-test maintained better Type I errors than the T-test for homogeneous error models with smaller n and more extreme quantiles τ. An F distributional approximation of the F statistic provided some improvement in Type I errors over the T-test for models with > 2 parameters, smaller n, and more extreme quantiles, but not as much improvement as the permutation approximation. Both rank score tests required weighting to maintain correct Type I errors when heterogeneity under the alternative model increased to 5 standard deviations across the domain of X. A double permutation procedure was developed to provide valid Type I errors for the permutation F-test when null models were forced through the origin. Power was similar for conditions where both T- and F-tests maintained correct Type I errors, but the F-test provided some power at smaller n and extreme quantiles where the T-test had no power because of excessively conservative Type I errors. When the double permutation scheme was required for the permutation F-test to maintain valid Type I errors, power was less than for the T-test with decreasing sample size and increasing quantiles.
Confidence intervals on parameters and tolerance intervals for future predictions were constructed based on test inversion for an example application relating trout densities to stream channel width:depth.
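The permutation approximation above builds a statistic's null distribution by shuffling. The quantile rank score machinery is more involved, but the core logic can be sketched with a difference in means (a generic illustration, not the paper's F statistic):

```python
import random

def permutation_p_value(x, y, n_perm=2000, seed=1):
    """Permutation test for a difference in means: shuffle the pooled
    observations, recompute the statistic, and report the fraction of
    permutations at least as extreme as the observed value."""
    rng = random.Random(seed)
    mean = lambda v: sum(v) / len(v)
    observed = abs(mean(x) - mean(y))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(mean(pooled[:len(x)]) - mean(pooled[len(x):])) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)   # add-one to avoid p = 0

print(permutation_p_value([1, 2, 3, 4, 5], [6, 7, 8, 9, 10]))  # small p
print(permutation_p_value([1, 2, 3], [1, 2, 3]))               # p = 1.0
```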
Deterministic versus evidence-based attitude towards clinical diagnosis.
Soltani, Akbar; Moayyeri, Alireza
2007-08-01
Generally, two basic classes of scientific explanation of events have been proposed. Deductive reasoning emphasizes reaching conclusions about a hypothesis by verification of universal laws pertinent to that hypothesis, while inductive or probabilistic reasoning explains an event by calculating the probability that the event is related to a given hypothesis. Although both types of reasoning are used in clinical practice, evidence-based medicine stresses the advantages of the second approach in most instances of medical decision making. While 'probabilistic or evidence-based' reasoning seems to involve more mathematical formulas at first glance, this attitude is more dynamic and less imprisoned by the rigidity of mathematics compared with the 'deterministic or mathematical' attitude. In the field of medical diagnosis, appreciation of uncertainty in clinical encounters and use of the likelihood ratio as a measure of accuracy seem to be the most important characteristics of evidence-based doctors. Other characteristics include use of series of tests to refine probability, changing diagnostic thresholds in light of external evidence and the nature of the disease, and attention to confidence intervals to estimate the uncertainty of research-derived parameters.
Rammsayer, Thomas; Ulrich, Rolf
2011-05-01
The distinct timing hypothesis suggests a sensory mechanism for the processing of durations in the range of milliseconds and a cognitively controlled mechanism for the processing of longer durations. To test this hypothesis, we employed a dual-task approach to investigate the effects of maintenance and elaborative rehearsal on temporal processing of brief and long durations. Unlike mere maintenance rehearsal, elaborative rehearsal as a secondary task involved transfer of information from working to long-term memory and elaboration of information to enhance storage in long-term memory. Duration discrimination of brief intervals was not affected by a secondary cognitive task that required either maintenance or elaborative rehearsal. Concurrent elaborative rehearsal, however, impaired discrimination of longer durations as compared to maintenance rehearsal and a control condition with no secondary task. These findings endorse the distinct timing hypothesis and are in line with the notion that executive functions, such as continuous memory updating and active transfer of information into long-term memory, interfere with temporal processing of durations in the seconds range but not in the milliseconds range.
Fung, Tak; Keenan, Kevin
2014-01-01
The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥ 95%), a sample size of > 30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥ 98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥ 95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥ 95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
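The paper's method constructs exact intervals for finite diploid populations; for intuition about why n > 30 is needed for ±0.05 precision, a large-sample Wilson score interval for a sample allele frequency can serve as a rough stand-in (the Wilson interval is my substitution, not the paper's construction):

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a proportion, applied here to a sample
    allele frequency k/n (k copies of the allele among n gene copies)."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

lo, hi = wilson_ci(12, 30)   # allele observed 12 times among 30 gene copies
print(round(lo, 3), round(hi, 3))
```

With n = 30 the interval spans roughly 0.25 to 0.58, far wider than ±0.05 around the point estimate of 0.4, consistent with the sample-size finding above.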
The effect of lithium chloride on one-trial passive avoidance learning in rats.
Johnson, F N
1976-01-01
1. Expression of a one-trial passive avoidance learning response in rats was examined following injections of lithium chloride or sodium chloride before and after initial training and before the first day of testing. Five tests were given at daily intervals, with the first test 24 h after training. 2. Lithium given before the first day of testing impaired response expression on the first and all subsequent days of testing; the rate of extinction was unaffected. 3. Given both before and immediately after initial training, lithium impaired response expression on the first day of testing but slowed down the subsequent rate of extinction, leading eventually to improved performance on the fifth day, as compared with placebo-treated control subjects. 4. The results are interpreted in the light of the hypothesis that lithium impaired the central processing of sensory information. PMID:1252666
Long-term mobile phone use and brain tumor risk.
Lönn, Stefan; Ahlbom, Anders; Hall, Per; Feychting, Maria
2005-03-15
Handheld mobile phones were introduced in Sweden during the late 1980s. The purpose of this population-based, case-control study was to test the hypothesis that long-term mobile phone use increases the risk of brain tumors. The authors identified all cases aged 20-69 years who were diagnosed with glioma or meningioma during 2000-2002 in certain parts of Sweden. Randomly selected controls were stratified on age, gender, and residential area. Detailed information about mobile phone use was collected from 371 (74%) glioma and 273 (85%) meningioma cases and 674 (71%) controls. For regular mobile phone use, the odds ratio was 0.8 (95% confidence interval: 0.6, 1.0) for glioma and 0.7 (95% confidence interval: 0.5, 0.9) for meningioma. Similar results were found for more than 10 years' duration of mobile phone use. No risk increase was found for ipsilateral phone use for tumors located in the temporal and parietal lobes. Furthermore, the odds ratio did not increase, regardless of tumor histology, type of phone, and amount of use. This study includes a large number of long-term mobile phone users, and the authors conclude that the data do not support the hypothesis that mobile phone use is related to an increased risk of glioma or meningioma.
Remembering Left–Right Orientation of Pictures
Bartlett, James C.; Gernsbacher, Morton Ann; Till, Robert E.
2015-01-01
In a study of recognition memory for pictures, we observed an asymmetry in classifying test items as “same” versus “different” in left–right orientation: Identical copies of previously viewed items were classified more accurately than left–right reversals of those items. Response bias could not explain this asymmetry, and, moreover, correct “same” and “different” classifications were independently manipulable: Whereas repetition of input pictures (one vs. two presentations) affected primarily correct “same” classifications, retention interval (3 hr vs. 1 week) affected primarily correct “different” classifications. In addition, repetition but not retention interval affected judgments that previously seen pictures (both identical and reversed) were “old”. These and additional findings supported a dual-process hypothesis that links “same” classifications to high familiarity, and “different” classifications to conscious sampling of images of previously viewed pictures. PMID:2949051
Increased sex ratio in Russia and Cuba after Chernobyl: a radiological hypothesis
2013-01-01
Background The ratio of male to female offspring at birth may be a simple and non-invasive way to monitor the reproductive health of a population. Except in societies where selective abortion skews the sex ratio, approximately 105 boys are born for every 100 girls. Generally, the human sex ratio at birth is remarkably constant in large populations. After the Chernobyl nuclear power plant accident in April 1986, a long lasting significant elevation in the sex ratio has been found in Russia, i.e. more boys or fewer girls compared to expectation were born. An escalated sex ratio from 1987 onward has also recently been documented and discussed for Cuba in the scientific literature. Presentation of the hypothesis By the late 1980s, about 60% of Cuba's food imports were provided by the former Soviet Union. Due to its difficult economic situation, Cuba had neither the necessary insight nor the political strength to circumvent the detrimental genetic effects of imported radioactively contaminated foodstuffs after Chernobyl. We propose that the long term stable sex ratio increase in Cuba is essentially due to ionizing radiation. Testing of the hypothesis A synoptic trend analysis of Russian and Cuban annual sex ratios discloses upward jumps in 1987. The estimated jump height from 1986 to 1987 in Russia measures 0.51% with a 95% confidence interval (0.28, 0.75), p value < 0.0001. In Cuba the estimated jump height measures 2.99% (2.39, 3.60), p value < 0.0001. The hypothesis may be tested by reconstruction of imports from the world markets to Cuba and by radiological analyses of remains in Cuba for Cs-137 and Sr-90. Implications of the hypothesis If the evidence for the hypothesis is strengthened, there is potential to learn about genetic radiation risks and to prevent similar effects in present and future exposure situations. PMID:23947741
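A jump height of the kind quoted above can be estimated by ordinary least squares on the log sex ratio with a linear trend and a step dummy at 1987. The sketch below is a minimal version of such a level-shift fit; the data and function name are hypothetical, and it omits the confidence intervals and p-values reported in the abstract.

```python
import numpy as np

def jump_height(years, sex_ratio, jump_year):
    """Estimate a level shift (in %) in an annual sex-ratio series via OLS
    with an intercept, a common linear trend, and a step dummy at jump_year."""
    t = np.asarray(years, dtype=float)
    y = np.log(np.asarray(sex_ratio, dtype=float))
    step = (t >= jump_year).astype(float)  # 0 before the jump year, 1 from it on
    X = np.column_stack([np.ones_like(t), t - t.mean(), step])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    # beta[2] is the shift on the log scale; convert to a percent jump
    return 100 * (np.exp(beta[2]) - 1)
```

On a synthetic series with a built-in 3% step at 1987, the fit recovers the jump exactly; on real data the residual noise around the trend would widen the uncertainty.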
Jackson, Dan; Bowden, Jack
2016-09-07
Confidence intervals for the between study variance are useful in random-effects meta-analyses because they quantify the uncertainty in the corresponding point estimates. Methods for calculating these confidence intervals have been developed that are based on inverting hypothesis tests using generalised heterogeneity statistics. Whilst, under the random effects model, these new methods furnish confidence intervals with the correct coverage, the resulting intervals are usually very wide, making them uninformative. We discuss a simple strategy for obtaining 95 % confidence intervals for the between-study variance with a markedly reduced width, whilst retaining the nominal coverage probability. Specifically, we consider the possibility of using methods based on generalised heterogeneity statistics with unequal tail probabilities, where the tail probability used to compute the upper bound is greater than 2.5 %. This idea is assessed using four real examples and a variety of simulation studies. Supporting analytical results are also obtained. Our results provide evidence that using unequal tail probabilities can result in shorter 95 % confidence intervals for the between-study variance. We also show some further results for a real example that illustrates how shorter confidence intervals for the between-study variance can be useful when performing sensitivity analyses for the average effect, which is usually the parameter of primary interest. We conclude that using unequal tail probabilities when computing 95 % confidence intervals for the between-study variance, when using methods based on generalised heterogeneity statistics, can result in shorter confidence intervals. We suggest that those who find the case for using unequal tail probabilities convincing should use the '1-4 % split', where greater tail probability is allocated to the upper confidence bound. The 'width-optimal' interval that we present deserves further investigation.
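The idea of unequal tail probabilities can be sketched with the ordinary Q-profile method for the between-study variance (the paper works with generalised heterogeneity statistics, so this is a simplified stand-in). Passing lower_tail=0.01 and upper_tail=0.04 gives the '1-4% split'; the data below are hypothetical.

```python
import numpy as np
from scipy.stats import chi2
from scipy.optimize import brentq

def q_stat(tau2, y, v):
    """Cochran's Q evaluated at a candidate between-study variance tau2."""
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)  # inverse-variance weighted mean
    return np.sum(w * (y - mu) ** 2)

def tau2_ci(y, v, lower_tail=0.025, upper_tail=0.025, tau2_max=100.0):
    """Q-profile CI for tau2 with possibly unequal tail probabilities.

    tau2_max must exceed the upper bound so brentq can bracket the root.
    """
    df = len(y) - 1
    # Q(tau2) decreases in tau2, so the lower bound solves
    # Q = chi2.ppf(1 - lower_tail, df) and the upper Q = chi2.ppf(upper_tail, df).
    q_hi = chi2.ppf(1 - lower_tail, df)
    q_lo = chi2.ppf(upper_tail, df)
    lo = 0.0 if q_stat(0.0, y, v) < q_hi else \
        brentq(lambda t: q_stat(t, y, v) - q_hi, 0.0, tau2_max)
    hi = 0.0 if q_stat(0.0, y, v) < q_lo else \
        brentq(lambda t: q_stat(t, y, v) - q_lo, 0.0, tau2_max)
    return lo, hi
```

Because moving tail probability to the upper side raises both target quantiles, both bounds shift downward; whether the interval shortens overall depends on the slope of the Q profile, which is the question the paper studies.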
Effects of demographic and health variables on Rasch scaled cognitive scores.
Zelinski, Elizabeth M; Gilewski, Michael J
2003-08-01
To determine whether demographic and health variables interact to predict cognitive scores in Asset and Health Dynamics of the Oldest-Old (AHEAD), a representative survey of older Americans, as a test of the developmental discontinuity hypothesis. Rasch modeling procedures were used to rescale cognitive measures into interval scores, equating scales across measures, making it possible to compare predictor effects directly. Rasch scaling also reduces the likelihood of obtaining spurious interactions. Tasks included combined immediate and delayed recall, the Telephone Interview for Cognitive Status (TICS), Series 7, and an overall cognitive score. Demographic variables most strongly predicted performance on all scores, with health variables having smaller effects. Age interacted with both demographic and health variables, but patterns of effects varied. Demographic variables have strong effects on cognition. The developmental discontinuity hypothesis that health variables have stronger effects than demographic ones on cognition in older adults was not supported.
Test Expectation Enhances Memory Consolidation across Both Sleep and Wake
Wamsley, Erin J.; Hamilton, Kelly; Graveline, Yvette; Manceor, Stephanie; Parr, Elaine
2016-01-01
Memory consolidation benefits from post-training sleep. However, recent studies suggest that sleep does not uniformly benefit all memory, but instead prioritizes information that is important to the individual. Here, we examined the effect of test expectation on memory consolidation across sleep and wakefulness. Following reports that information with strong “future relevance” is preferentially consolidated during sleep, we hypothesized that test expectation would enhance memory consolidation across a period of sleep, but not across wakefulness. To the contrary, we found that expectation of a future test enhanced memory for both spatial and motor learning, but that this effect was equivalent across both wake and sleep retention intervals. These observations differ from those of at least two prior studies, and fail to support the hypothesis that the “future relevance” of learned material moderates its consolidation selectively during sleep. PMID:27760193
van der Ven, E; Dalman, C; Wicks, S; Allebeck, P; Magnusson, C; van Os, J; Selten, J P
2015-03-01
The selection hypothesis posits that the increased rates of psychosis observed among migrants are due to selective migration of people who are predisposed to develop the disorder. To test this hypothesis, we examined whether risk factors for psychosis are more prevalent among future emigrants. A cohort of 49,321 Swedish military conscripts was assessed at age 18 years on cannabis use, IQ, psychiatric diagnosis, social adjustment, history of trauma and urbanicity of place of upbringing. Through data linkage we examined whether these exposures predicted emigration out of Sweden. We also calculated the emigrants' hypothetical relative risk compared with non-emigrants for developing a non-affective psychotic disorder. Low IQ [odds ratio (OR) 0.5, 95% confidence interval (95% CI) 0.3-0.9] and 'poor social adjustment' (OR 0.4, 95% CI 0.2-0.8) were significantly less prevalent among prospective emigrants, whereas a history of urban upbringing (OR 2.3, 95% CI 1.4-3.7) was significantly more common. Apart from a non-significant increase in cannabis use among emigrants (OR 1.6, 95% CI 0.8-3.1), there were no major group differences in any other risk factors. Compared to non-emigrants, hypothetical relative risks for developing non-affective psychotic disorder were 0.7 (95% CI 0.4-1.2) and 0.8 (95% CI 0.7-1.0), respectively, for emigrants narrowly and broadly defined. This study adds to an increasing body of evidence opposing the selection hypothesis.
Travis, Penny B; Goodman, Karen J; O'Rourke, Kathleen M; Groves, Frank D; Sinha, Debajyoti; Nicholas, Joyce S; VanDerslice, Jim; Lackland, Daniel; Mena, Kristina D
2010-03-01
The mode of transmission of Helicobacter pylori, a bacterium causing gastric cancer and peptic ulcer disease, is unknown although waterborne transmission is a likely pathway. This study investigated the hypothesis that access to treated water and a sanitary sewerage system reduces the H. pylori incidence rate, using data from 472 participants in a cohort study that followed children in Juarez, Mexico, and El Paso, Texas, from April 1998, with caretaker interviews and the urea breath test for detecting H. pylori infection at target intervals of six months from birth through 24 months of age. The unadjusted hazard ratio comparing bottled/vending machine water to a municipal water supply was 0.71 (95% confidence interval (CI): 0.50, 1.01) and comparing a municipal sewer connection to a septic tank or cesspool, 0.85 (95% CI: 0.60, 1.20). After adjustment for maternal education and country, the hazard ratios decreased slightly to 0.70 (95% confidence interval: 0.49, 1.00) and 0.77 (95% confidence interval: 0.50, 1.21), respectively. These results provide moderate support for potential waterborne transmission of H. pylori.
Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi
2016-01-01
A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768
Inhibition of return is not impaired but masked by increased facilitation in schizophrenia patients.
Kalogeropoulou, Fenia; Woodruff, Peter W R; Vivas, Ana B
2015-01-01
When attention is attracted to an irrelevant location, performance on a subsequent target is hindered at that location in relation to novel, not previously attended, locations. This phenomenon is known as inhibition of return (IOR). Previous research has shown that IOR is not observed, or its onset is delayed, in schizophrenia patients. In the present study, the authors tested the hypothesis that IOR may be intact but masked by increased facilitation in schizophrenia patients. To test this hypothesis, they used a procedure that usually reduces or eliminates the early facilitation. In the first experiment, the authors used the typical single-cue IOR task in a group of healthy adults (N = 28) and in a group of schizophrenia patients (N = 32). In the second experiment, they manipulated cue-target discriminability by presenting spatially overlapping cues and targets where the cues were more intense than the targets. In Experiment 1, they did not find significant IOR effects in the group of schizophrenia patients, even with cue-target intervals as long as 3,200 ms. However, in Experiment 2, IOR effects were significant at the 350- and 450-ms cue-target intervals for healthy participants and patients, respectively. This is the first study that shows that schizophrenia patients can actually show inhibitory effects very similar to healthy controls, even when no help is provided to shift their attention away from the irrelevant location. The authors suggest that inhibition is intact in schizophrenia patients, but it is usually masked by increased facilitation. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Rothmann, Mark
2005-01-01
When testing the equality of means from two different populations, a t-test or a large-sample normal test is typically performed. For these tests, when the sample size or design for the second sample is dependent on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We will examine the impact on the type I error probabilities for two confidence interval procedures and procedures using test statistics when the design for the second sample or experiment is dependent on the results from the first sample or experiment (or series of experiments). Ways for controlling a desired maximum type I error probability or a desired type I error rate will be discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials where the use of a placebo is unethical.
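One simple instance of such dependence is extending the sample only when the first-stage test is non-significant and then re-testing the pooled data at the nominal level. The Monte Carlo sketch below (hypothetical sample sizes, not the paper's procedures) shows how this naive two-look rule alters the type I error probability.

```python
import numpy as np
from scipy.stats import norm

def two_look_type1(n1=50, n2=50, alpha=0.05, sims=20000, seed=1):
    """Estimate the type I error of a naive two-look z-test.

    Test after n1 observations per arm; if not significant, add n2 more
    per arm and test the pooled data again at the same alpha. Both stages
    are simulated under H0 (equal means, known unit variance)."""
    rng = np.random.default_rng(seed)
    crit = norm.ppf(1 - alpha / 2)
    rejections = 0
    for _ in range(sims):
        x1, y1 = rng.standard_normal(n1), rng.standard_normal(n1)
        z1 = (x1.mean() - y1.mean()) / np.sqrt(2 / n1)
        if abs(z1) > crit:
            rejections += 1
            continue
        # Second sample collected only because the first was non-significant
        x = np.concatenate([x1, rng.standard_normal(n2)])
        y = np.concatenate([y1, rng.standard_normal(n2)])
        z2 = (x.mean() - y.mean()) / np.sqrt(2 / len(x))
        if abs(z2) > crit:
            rejections += 1
    return rejections / sims
```

With two looks at nominal alpha = 0.05 the overall rejection rate under H0 rises to roughly 0.08, which is the kind of alteration the paper's procedures are designed to control.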
Victim Responses to Violence: The Effect of Alcohol Context on Crime Labeling.
Brennan, Iain
2016-03-01
The labeling of an incident as a crime is an essential precursor to the use of criminal law, but the contextual factors that influence this decision are unknown. One such context that is a frequent setting for violence is the barroom. This study explored how the setting of a violent incident is related to the decision by victims to label it as a crime. It tested the hypothesis that violent incidents that took place in or around a licensed premises were less likely to be regarded as crimes than violence in other settings. The hypothesis was tested using a pooled sample of respondents from successive waves of the British Crime Survey (2002/2003-2010/2011). Logistic regression models controlled for demographic factors, victim behavioral characteristics, and incident-specific factors including the seriousness of the violence. Respondents who were in or around a licensed premises at the time of victimization were less likely to regard that violence as a crime (adjusted odds ratio = 0.48, 95% confidence intervals [CIs] = [0.34, 0.67]) than respondents who were victimized in other locations. Despite a disproportionate amount of violence taking place in barrooms, it appears that the criminal nature of violence in these spaces is discounted by victims. The findings emphasize how context affects victim interpretations of crime and suggest a victim-centered reconceptualization of the "moral holiday" hypothesis of alcohol settings. © The Author(s) 2015.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malinverno, Alberto; Cook, Ann; Daigle, Hugh
Methane hydrates in fine-grained marine sediments are often found within veins and fractures occupying discrete depth intervals that are surrounded by hydrate-free sediments. As they are not connected with gas sources beneath the base of the methane hydrate stability zone (MHSZ), these isolated hydrate-bearing intervals have been interpreted as formed by in situ microbial methane. We investigate here the hypothesis that these hydrate deposits form in sediments that were deposited during glacial lowstands and contain higher amounts of labile particulate organic carbon (POC), leading to enhanced microbial methanogenesis. During Pleistocene lowstands, river loads are deposited near the steep top of the continental slope and turbidity currents transport organic-rich, fine-grained sediments to deep waters. Faster sedimentation rates during glacial periods result in better preservation of POC because of decreased exposure times to oxic conditions. The net result is that more labile POC enters the methanogenic zone and more methane is generated in these sediments. To test this hypothesis, we apply an advection-diffusion-reaction model with a time-dependent deposition of labile POC at the seafloor controlled by glacioeustatic sea level variations in the last 250 kyr. The model is run for parameters estimated at three sites drilled by the 2009 Gulf of Mexico Joint Industry Project: Walker Ridge in the Terrebonne Basin (WR313-G and WR313-H) and Green Canyon near the canyon embayment into the Sigsbee Escarpment (GC955-H). In the model, gas hydrate forms in sediments with higher labile POC content deposited during the glacial cycle between 230 and 130 kyr (marine isotope stages 6 and 7). The corresponding depth intervals in the three sites contain hydrates, as shown by high bulk electrical resistivities and resistive subvertical fracture fills.
This match supports the hypothesis that enhanced POC burial during glacial lowstands can result in hydrate formation from in situ microbial methanogenesis. Our results have implications for carbon cycling during glacial/interglacial cycles and for hydrate accumulation in the MHSZ. In particular, once hydrate-bearing intervals formed during glacial periods are buried beneath the MHSZ and dissociate, gas bubbles can rise and recycle microbial methane into the MHSZ.
Long working hours and use of psychotropic medicine: a follow-up study with register linkage.
Hannerz, Harald; Albertsen, Karen
2016-03-01
This study aimed to investigate the possibility of a prospective association between long working hours and use of psychotropic medicine. Survey data drawn from random samples of the general working population of Denmark in the time period 1995-2010 were linked to national registers covering all inhabitants. The participants were followed for first occurrence of redeemed prescriptions for psychotropic medicine. The primary analysis included 25,959 observations (19,259 persons) and yielded a total of 2914 new cases of psychotropic drug use in 99,018 person-years at risk. Poisson regression was used to model incidence rates of redeemed prescriptions for psychotropic medicine as a function of working hours (32-40, 41-48, >48 hours/week). The analysis was controlled for gender, age, sample, shift work, and socioeconomic status. A likelihood ratio test was used to test the null hypothesis, which stated that the incidence rates were independent of weekly working hours. The likelihood ratio test did not reject the null hypothesis (P=0.085). The rate ratio (RR) was 1.04 [95% confidence interval (95% CI) 0.94-1.15] for the contrast 41-48 versus 32-40 work hours/week and 1.15 (95% CI 1.02-1.30) for >48 versus 32-40 hours/week. None of the rate ratios that were estimated in the present study were statistically significant after adjustment for multiple testing. However, stratified analyses, in which 30 RR were estimated, generated the hypothesis that overtime work (>48 hours/week) might be associated with an increased risk among night or shift workers (RR=1.51, 95% CI 1.15-1.98). The present study did not find a statistically significant association between long working hours and incidence of psychotropic drug usage among Danish employees.
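The rate ratios quoted above are on the scale of an incidence rate ratio from person-time data. A minimal crude (unadjusted) Wald-type calculation is sketched below with hypothetical counts, not the study's data; the study's estimates additionally come from a covariate-adjusted Poisson model.

```python
import math

def rate_ratio(cases_exp, pyears_exp, cases_ref, pyears_ref, z=1.96):
    """Crude incidence rate ratio with a Wald 95% CI on the log scale.

    Inputs are event counts and person-years at risk in the exposed
    and reference groups."""
    rr = (cases_exp / pyears_exp) / (cases_ref / pyears_ref)
    se = math.sqrt(1 / cases_exp + 1 / cases_ref)  # SE of log(RR)
    lo, hi = rr * math.exp(-z * se), rr * math.exp(z * se)
    return rr, lo, hi
```

Because the interval is built on the log scale, it is symmetric around log(RR) rather than around RR itself, which is why published intervals such as 1.15 (1.02-1.30) are slightly asymmetric.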
Thorvaldsson, Valgeir; Skoog, Ingmar; Johansson, Boo
2017-03-01
Terminal decline (TD) refers to acceleration in within-person cognitive decline prior to death. The cognitive reserve hypothesis postulates that individuals with higher IQ are able to better tolerate age-related increase in brain pathologies. On average, they will exhibit a later onset of TD, but once they start to decline, their trajectory is steeper relative to those with lower IQ. We tested these predictions using data from initially nondemented individuals (n = 179) in the H70-study repeatedly measured at ages 70, 75, 79, 81, 85, 88, 90, 92, 95, 97, 99, and 100, or until death, on cognitive tests of perceptual-and-motor-speed and spatial and verbal ability. We quantified IQ using the Raven's Coloured Progressive Matrices (RCPM) test administrated at age 70. We fitted random change point TD models to the data, within a Bayesian framework, conditioned on IQ, age of death, education, and sex. In line with predictions, we found that 1 additional standard deviation on the IQ scale was associated with a delay in onset of TD by 1.87 (95% highest density interval [HDI; 0.20, 4.08]) years on speed, 1.96 (95% HDI [0.15, 3.54]) years on verbal ability, but only 0.88 (95% HDI [-0.93, 3.49]) year on spatial ability. Higher IQ was associated with steeper rate of decline within the TD phase on measures of speed and verbal ability, whereas results on spatial ability were nonconclusive. Our findings provide partial support for the cognitive reserve hypothesis and demonstrate that IQ can be a significant moderator of cognitive change trajectories in old age. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Tracking a changing environment: optimal sampling, adaptive memory and overnight effects.
Dunlap, Aimee S; Stephens, David W
2012-02-01
Foraging in a variable environment presents a classic problem of decision making with incomplete information. Animals must track the changing environment, remember the best options and make choices accordingly. While several experimental studies have explored the idea that sampling behavior reflects the amount of environmental change, we take the next logical step in asking how change influences memory. We explore the hypothesis that memory length should be tied to the ecological relevance and the value of the information learned, and that environmental change is a key determinant of the value of memory. We use a dynamic programming model to confirm our predictions and then test memory length in a factorial experiment. In our experimental situation we manipulate rates of change in a simple foraging task for blue jays over a 36 h period. After jays experienced an experimentally determined change regime, we tested them at a range of retention intervals, from 1 to 72 h. Manipulated rates of change influenced learning and sampling rates: subjects sampled more and learned more quickly in the high change condition. Tests of retention revealed significant interactions between retention interval and the experienced rate of change. We observed a striking and surprising difference between the high and low change treatments at the 24h retention interval. In agreement with earlier work we find that a circadian retention interval is special, but we find that the extent of this 'specialness' depends on the subject's prior experience of environmental change. Specifically, experienced rates of change seem to influence how subjects balance recent information against past experience in a way that interacts with the passage of time. Copyright © 2011 Elsevier B.V. All rights reserved.
Kertai, Miklos D; Ji, Yunqi; Li, Yi-Ju; Mathew, Joseph P; Daubert, James P; Podgoreanu, Mihai V
2016-04-01
We characterized cardiac surgery-induced dynamic changes of the corrected QT (QTc) interval and tested the hypothesis that genetic factors are associated with perioperative QTc prolongation independent of clinical and procedural factors. All study subjects were ascertained from a prospective study of patients who underwent elective cardiac surgery during August 1999 to April 2002. We defined a prolonged QTc interval as > 440 msec, measured from 24-hr pre- and postoperative 12-lead electrocardiograms. The association of 37 single nucleotide polymorphisms (SNPs) in 21 candidate genes involved in modulating arrhythmia susceptibility pathways with postoperative QTc changes was investigated in a two-stage design with a stage I cohort (n = 497) nested within a stage II cohort (n = 957). Empirical P values (Pemp) were obtained by permutation tests with 10,000 repeats. After adjusting for clinical and procedural risk factors, we selected four SNPs (P value range, 0.03-0.1) in stage I, which we then tested in the stage II cohort. Two functional SNPs in the pro-inflammatory cytokine interleukin-1β (IL1β), rs1143633 (odds ratio [OR], 0.71; 95% confidence interval [CI], 0.53 to 0.95; Pemp = 0.02) and rs16944 (OR, 1.31; 95% CI, 1.01 to 1.70; Pemp = 0.04), remained independent predictors of postoperative QTc prolongation. The ability of a clinico-genetic model incorporating the two IL1B polymorphisms to classify patients at risk for developing prolonged postoperative QTc was superior to a clinical model alone, with a net reclassification improvement of 0.308 (P = 0.0003) and an integrated discrimination improvement of 0.02 (P = 0.000024). The results suggest a contribution of IL1β in modulating susceptibility to postoperative QTc prolongation after cardiac surgery.
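The net reclassification improvement quoted above compares risk predictions from the clinical and clinico-genetic models. A category-free version can be computed as below on toy data; the study's exact risk categories and significance machinery are not reproduced here.

```python
def nri(risk_old, risk_new, event):
    """Category-free net reclassification improvement.

    Any increase (decrease) in predicted risk from the old to the new model
    counts as upward (downward) movement. NRI sums the net proportion of
    events moving up and the net proportion of non-events moving down."""
    up = [n > o for o, n in zip(risk_old, risk_new)]
    down = [n < o for o, n in zip(risk_old, risk_new)]
    events = [i for i, e in enumerate(event) if e]
    nonevents = [i for i, e in enumerate(event) if not e]
    nri_events = (sum(up[i] for i in events) - sum(down[i] for i in events)) / len(events)
    nri_nonevents = (sum(down[i] for i in nonevents) - sum(up[i] for i in nonevents)) / len(nonevents)
    return nri_events + nri_nonevents
```

For example, if both events move up while the non-events split evenly between up and down, the event component contributes 1 and the non-event component 0, giving an NRI of 1.0.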
On the analysis of studies of choice
Mullins, Eamonn; Agunwamba, Christian C.; Donohoe, Anthony J.
1982-01-01
In a review of 103 sets of data from 23 different studies of choice, Baum (1979) concluded that whereas undermatching was most commonly observed for responses, the time measure generally conformed to the matching relation. A reexamination of the evidence presented by Baum concludes that undermatching is the most commonly observed finding for both measures. Use of the coefficient of determination by both Baum (1979) and de Villiers (1977) for assessing when matching occurs is criticized on statistical grounds. An alternative to the loss-in-predictability criterion used by Baum (1979) is proposed. This alternative statistic has a simple operational meaning and is related to the usual F-ratio test. It can therefore be used as a formal test of the hypothesis that matching occurs. Baum (1979) also suggests that slope values of between .90 and 1.11 can be considered good approximations to matching. It is argued that the establishment of a fixed interval as a criterion for determining when matching occurs, is inappropriate. A confidence interval based on the data from any given experiment is suggested as a more useful method of assessment. PMID:16812271
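The abstract's suggestion, judging matching by a confidence interval for the fitted slope rather than a fixed band such as .90-1.11, can be sketched as a log-ratio regression. The data and names below are illustrative, not Baum's data sets.

```python
import numpy as np
from scipy import stats

def matching_slope_ci(r_ratio, b_ratio, conf=0.95):
    """Fit the generalized matching law log(B1/B2) = s*log(r1/r2) + log(b).

    Returns the slope s with a confidence interval, so matching (s = 1)
    is judged by whether the interval covers 1; s < 1 indicates
    undermatching."""
    x, y = np.log(r_ratio), np.log(b_ratio)
    res = stats.linregress(x, y)
    tcrit = stats.t.ppf(0.5 + conf / 2, len(x) - 2)
    return res.slope, res.slope - tcrit * res.stderr, res.slope + tcrit * res.stderr
```

On synthetic data generated with a true slope of 0.8, the interval typically excludes 1, formalizing a verdict of undermatching for that data set rather than appealing to a fixed criterion.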
The impact of path crossing on visuo-spatial serial memory: encoding or rehearsal effect?
Parmentier, Fabrice B R; Andrés, Pilar
2006-11-01
The determinants of visuo-spatial serial memory have been the object of little research, despite early evidence that not all sequences are equally remembered. Recently, empirical evidence was reported indicating that the complexity of the path formed by the to-be-remembered locations impacted on recall performance, defined for example by the presence of crossings in the path formed by successive locations (Parmentier, Elford, & Maybery, 2005). In this study, we examined whether this effect reflects rehearsal or encoding processes. We examined the effect of a retention interval and spatial interference on the ordered recall of spatial sequences with and without path crossings. Path crossings decreased recall performance, as did a retention interval. In line with the encoding hypothesis, but in contrast with the rehearsal hypothesis, the effect of crossing was not affected by the retention interval nor by tapping. The possible nature of the impact of path crossing on encoding mechanisms is discussed.
Female-female mounting among goats stimulates sexual performance in males.
Shearer, Meagan K; Katz, Larry S
2006-06-01
The hypothesis that female-female mounting is a form of proceptivity in goats, in that male goats are aroused by the visual cues of this mounting behavior, was tested. Once a week, male goats were randomly selected and placed in a test pen in which they were allowed to observe one of six selected social or sexual stimulus conditions. The stimulus conditions were one familiar male with two estrous females (MEE); three estrous females that displayed female-female mounting (E(m)); three estrous females that did not mount (E(nm)); three non-estrous females (N(E)); three familiar males (M); and no animals in the pen (Empty). After 10 min, the stimulus animals were removed, and an estrous female was placed in the test pen with the male for a 20-min sexual performance test. During sexual performance tests, the frequencies and latencies of all sexual behaviors were recorded. This procedure was repeated so all males (n = 6) were tested once each test day, and all the stimulus conditions were presented each test day. This was repeated weekly until all males had been exposed to each stimulus condition. Viewing mounting behavior, whether male-female or female-female, increased the total number of sexual behaviors displayed, increased ejaculation frequency, and decreased latency to first mount and ejaculation, post-ejaculatory interval, and the interval between ejaculations. We conclude that male goats are aroused by the visual cues of mounting behavior, and that female-female mounting is a form of proceptivity in goats.
Visual-Spatial Attention Aids the Maintenance of Object Representations in Visual Working Memory
Williams, Melonie; Pouget, Pierre; Boucher, Leanne; Woodman, Geoffrey F.
2013-01-01
Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual-spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers’ eye movements while remembering a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval would impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy even on the trials in which no probe occurred. These findings support models of working memory in which visual-spatial selection mechanisms contribute to the maintenance of object representations. PMID:23371773
1987-09-01
[Fragment of a 1987 technical report on measuring pollutants by gas chromatographic headspace analysis. Recoverable citations: J. Chrom. 260:23-32; Miller, R. E. 1984, Confidence Intervals and Hypothesis Tests, Chem. Engr. The fragment tabulates injection peak areas, Henry's law constant estimates, and coefficient of variation (COV) values for the component at five temperatures; the table itself is not recoverable.]
Upper gastrointestinal bleeding in patients with CKD.
Liang, Chih-Chia; Wang, Su-Ming; Kuo, Huey-Liang; Chang, Chiz-Tzung; Liu, Jiung-Hsiun; Lin, Hsin-Hung; Wang, I-Kuan; Yang, Ya-Fei; Lu, Yueh-Ju; Chou, Che-Yi; Huang, Chiu-Ching
2014-08-07
Patients with CKD receiving maintenance dialysis are at risk for upper gastrointestinal bleeding. However, the risk of upper gastrointestinal bleeding in patients with early CKD who are not receiving dialysis is unknown. The hypothesis was that their risk of upper gastrointestinal bleeding is inversely related to renal function. To test this hypothesis, the association between eGFR and risk of upper gastrointestinal bleeding in patients with stages 3-5 CKD who were not receiving dialysis was analyzed. Patients with stages 3-5 CKD in the CKD program from 2003 to 2009 were enrolled and prospectively followed until December of 2012 to monitor the development of upper gastrointestinal bleeding. The risk of upper gastrointestinal bleeding was analyzed using competing-risks regression with time-varying covariates. In total, 2968 patients with stages 3-5 CKD who were not receiving dialysis were followed for a median of 1.9 years. The incidence of upper gastrointestinal bleeding per 100 patient-years was 3.7 (95% confidence interval, 3.5 to 3.9) in patients with stage 3 CKD, 5.0 (95% confidence interval, 4.8 to 5.3) in patients with stage 4 CKD, and 13.9 (95% confidence interval, 13.1 to 14.8) in patients with stage 5 CKD. Higher eGFR was associated with a lower risk of upper gastrointestinal bleeding (P=0.03), with a subdistribution hazard ratio of 0.93 (95% confidence interval, 0.87 to 0.99) for every 5 ml/min per 1.73 m² higher eGFR. A history of upper gastrointestinal bleeding (P<0.001) and lower serum albumin (P=0.004) were independently associated with higher upper gastrointestinal bleeding risk. In patients with CKD who are not receiving dialysis, lower renal function is associated with higher risk for upper gastrointestinal bleeding. The risk is higher in patients with a history of upper gastrointestinal bleeding and low serum albumin. Copyright © 2014 by the American Society of Nephrology.
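Incidence rates of the kind reported above (events per 100 patient-years with a 95% confidence interval) can be sketched as follows, assuming a Poisson event count and a normal approximation; the function name and the example numbers are illustrative, not taken from the study.

```python
import math

def incidence_per_100py(events, patient_years, z=1.96):
    """Incidence rate per 100 patient-years with a normal-approximation CI.

    Assumes the event count is Poisson, so Var(events) is approximately
    equal to the observed number of events.
    """
    rate = events / patient_years * 100
    se = math.sqrt(events) / patient_years * 100
    return rate, (rate - z * se, rate + z * se)

# Illustrative numbers only (not the study's data):
rate, (lo, hi) = incidence_per_100py(events=50, patient_years=1000)
```

With 50 events over 1000 patient-years this gives a rate of 5.0 per 100 patient-years with a CI of roughly 3.6 to 6.4.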
2014-01-01
Background Thresholds for statistical significance are insufficiently demonstrated by 95% confidence intervals or P-values when assessing results from randomised clinical trials. First, a P-value only shows the probability of getting a result assuming that the null hypothesis is true and does not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore, assessment of intervention effects in randomised clinical trials deserves more rigour in order to become more valid. Methods Several methodologies for assessing the statistical and clinical significance of intervention effects in randomised clinical trials were considered. Balancing simplicity and comprehensiveness, a simple five-step procedure was developed. Results For a more valid assessment of results from a randomised clinical trial we propose the following five steps: (1) report the confidence intervals and the exact P-values; (2) report the Bayes factor for the primary outcome, being the ratio of the probability that a given trial result is compatible with a ‘null’ effect (corresponding to the P-value) divided by the probability that the trial result is compatible with the intervention effect hypothesised in the sample size calculation; (3) adjust the confidence intervals and the statistical significance threshold if the trial is stopped early or if interim analyses have been conducted; (4) adjust the confidence intervals and the P-values for multiplicity due to the number of outcome comparisons; and (5) assess the clinical significance of the trial results. Conclusions If the proposed five-step procedure is followed, this may increase the validity of assessments of intervention effects in randomised clinical trials. PMID:24588900
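Step (2), the Bayes factor as described above, can be sketched under a normal-likelihood approximation: the ratio of the likelihood of the observed z-statistic under the null to its likelihood under the effect hypothesised in the sample size calculation. This is an illustrative sketch, not the authors' implementation, and all numbers are made up.

```python
import math

def bayes_factor_null_vs_alt(z_obs, z_alt):
    """Likelihood ratio P(z_obs | null) / P(z_obs | alternative),
    approximating the test statistic as N(0, 1) under the null and
    N(z_alt, 1) under the hypothesised intervention effect."""
    like_null = math.exp(-z_obs ** 2 / 2)
    like_alt = math.exp(-(z_obs - z_alt) ** 2 / 2)
    return like_null / like_alt

# z_obs = 1.96 (P ~ 0.05); z_alt = 3.24 is a typical design value
# (two-sided alpha = 0.05, 90% power). BF ~ 0.33, i.e. the result is
# about three times more compatible with the design effect than the null.
bf = bayes_factor_null_vs_alt(1.96, 3.24)
```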
Biomechanical Cadaveric Evaluation of Partial Acute Peroneal Tendon Tears.
Wagner, Emilio; Wagner, Pablo; Ortiz, Cristian; Radkievich, Ruben; Palma, Felipe; Guzmán-Venegas, Rodrigo
2018-06-01
No clear guideline or solid evidence exists for peroneal tendon tears to determine when to repair, resect, or perform a tenodesis on the damaged tendon. The objective of this study was to analyze the mechanical behavior of artificially damaged cadaveric peroneal tendons tested cyclically and to failure. The hypothesis was that no failure would be observed during the cyclic phase. Eight cadaveric long leg specimens were tested on a specially designed frame. A longitudinal full-thickness tendon defect was created, 3 cm in length, behind the tip of the fibula, compromising 66% of the visible width of the peroneal tendons. Cyclic testing was initially performed between 50 and 200 N, followed by a load-to-failure test. Tendon elongation and load to rupture were measured. No tendon failed or lengthened during cyclic testing. The mean load to failure for the peroneus brevis was 416 N (95% confidence interval, 351-481 N) and for the peroneus longus was 723 N (95% confidence interval, 578-868 N). All failures were at the level of the defect created. In a cadaveric model of peroneal tendon tears, 33% of the remaining peroneal tendon could resist high tensile forces, above the physiologic threshold. Some peroneal tendon tears can be treated conservatively without risking spontaneous ruptures. When surgically treating a symptomatic peroneal tendon tear, increased efforts may be undertaken to repair tears previously considered irreparable.
Ischemic preconditioning enhances critical power during a 3 minute all-out cycling test.
Griffin, Patrick J; Ferguson, Richard A; Gissane, Conor; Bailey, Stephen J; Patterson, Stephen D
2018-05-01
This study tested the hypothesis that ischemic preconditioning (IPC) would increase critical power (CP) during a 3 minute all-out cycling test. Twelve males completed two 3 minute all-out cycling tests, in a crossover design, separated by 7 days. These tests were preceded by IPC (4 x 5 minute intervals at 220 mmHg bilateral leg occlusion) or SHAM treatment (4 x 5 minute intervals at 20 mmHg bilateral leg occlusion). CP was calculated as the mean power output during the final 30 s of the 3 minute test with W' taken as the total work done above CP. Muscle oxygenation was measured throughout the exercise period. There was a 15.3 ± 0.3% decrease in muscle oxygenation (TSI; [Tissue saturation index]) during the IPC stimulus, relative to SHAM. CP was significantly increased (241 ± 65 W vs. 234 ± 67 W), whereas W' (18.4 ± 3.8 vs 17.9 ± 3.7 kJ) and total work done (TWD) were not different (61.1 ± 12.7 vs 60.8 ± 12.7 kJ), between the IPC and SHAM trials. IPC enhanced CP during a 3 minute all-out cycling test without impacting W' or TWD. The improved CP after IPC might contribute towards the effect of IPC on endurance performance.
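The CP and W' definitions used above (CP as the mean power over the final 30 s of the 3 minute test, W' as the total work done above CP) can be sketched from a 1-Hz power trace; the synthetic trace below is illustrative, not experimental data.

```python
def cp_and_w_prime(power_watts, dt=1.0, final_window_s=30):
    """CP = mean power over the final window; W' = total work above CP (J)."""
    n_final = int(final_window_s / dt)
    cp = sum(power_watts[-n_final:]) / n_final
    w_prime = sum(max(p - cp, 0.0) * dt for p in power_watts)
    return cp, w_prime

# Synthetic 180-s all-out trace: 400 W for 150 s, then 240 W for 30 s.
trace = [400.0] * 150 + [240.0] * 30
cp, w_prime = cp_and_w_prime(trace)  # cp = 240.0 W, w_prime = 24000.0 J
```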
Life History Correlates and Extinction Risk of Capital-Breeding Fishes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jager, Yetta; Vila-Gispert, Dr Anna; Rose, Kenneth A.
2008-03-01
We consider a distinction for fishes, often made for birds and reptiles, between capital-breeding and income-breeding species. Species that follow a capital-breeding strategy tend to evolve longer intervals between reproductive events and tend to have characteristics that we associate with higher extinction risk. To examine whether these ideas are relevant for fishes, we assembled life-history data for fish species, including an index of extinction risk, the interval between spawning events, the degree of parental care, and whether or not the species migrates to spawn. These data were used to evaluate two hypotheses: 1) fish species with a major accessory activity to spawning (migration or parental care) spawn less often and 2) fish species that spawn less often are at greater risk of extinction. We tested these hypotheses by applying two alternative statistical methods that account for phylogenetic correlation in cross-taxon comparisons. The two methods predicted average intervals between spawning events 0.13 to 0.20 years longer for fishes with a major accessory activity. Both accessories, above-average parental care and spawning migration, were individually associated with longer average spawning intervals. We conclude that the capital-breeding paradigm is relevant for fishes. We also confirmed the second hypothesis, that species in higher IUCN extinction risk categories had longer average spawning intervals. Further research is needed to understand the relationship between extinction risk and spawning interval, within the broader context of life history traits and aquatic habitats.
Time Determines the Neural Circuit Underlying Associative Fear Learning
Guimarãis, Marta; Gregório, Ana; Cruz, Andreia; Guyon, Nicolas; Moita, Marta A.
2011-01-01
Ultimately, associative learning is a function of the temporal features and relationships between experienced stimuli. Nevertheless, how time affects the neural circuit underlying this form of learning remains largely unknown. To address this issue, we used single-trial auditory trace fear conditioning and varied the length of the interval between tone and foot-shock. Through temporary inactivation of the amygdala, medial prefrontal-cortex (mPFC), and dorsal-hippocampus in rats, we tested the hypothesis that different temporal intervals between the tone and the shock influence the neuronal structures necessary for learning. With this study we provide the first experimental evidence showing that temporarily inactivating the amygdala before training impairs auditory fear learning when there is a temporal gap between the tone and the shock. Moreover, imposing a short interval (5 s) between the two stimuli also relies on the mPFC, while learning the association across a longer interval (40 s) becomes additionally dependent on a third structure, the dorsal-hippocampus. Thus, our results suggest that increasing the interval length between tone and shock leads to the involvement of an increasing number of brain areas in order for the association between the two stimuli to be acquired normally. These findings demonstrate that the temporal relationship between events is a key factor in determining the neuronal mechanisms underlying associative fear learning. PMID:22207842
Kuhn, Andrew Warren; Solomon, Gary S
2014-01-01
Computerized neuropsychological testing batteries have provided a time-efficient and cost-efficient way to assess and manage the neurocognitive aspects of patients with sport-related concussion. These tests are straightforward and mostly self-guided, reducing the degree of clinician involvement required by traditional clinical neuropsychological paper-and-pencil tests. To determine if self-reported supervision status affected computerized neurocognitive baseline test performance in high school athletes. Retrospective cohort study. Supervised testing took place in high school computer libraries or sports medicine clinics. Unsupervised testing took place at the participant's home or another location with computer access. From 2007 to 2012, high school athletes across middle Tennessee (n = 3771) completed computerized neurocognitive baseline testing (Immediate Post-Concussion Assessment and Cognitive Testing [ImPACT]). They reported taking the test either supervised by a sports medicine professional or unsupervised. These athletes (n = 2140) were subjected to inclusion and exclusion criteria and then matched based on age, sex, and number of prior concussions. We extracted demographic and performance-based data from each de-identified baseline testing record. Paired t tests were performed between the self-reported supervised and unsupervised groups, comparing the following ImPACT baseline composite scores: verbal memory, visual memory, visual motor (processing) speed, reaction time, impulse control, and total symptom score. For differences that reached P < .05, the Cohen d was calculated to measure the effect size. Lastly, a χ² analysis was conducted to compare the rate of invalid baseline testing between the groups. All statistical tests were performed at the 95% confidence level.
Self-reported supervised athletes demonstrated better visual motor (processing) speed (P = .004; 95% confidence interval [0.28, 1.52]; d = 0.12) and faster reaction time (P < .001; 95% confidence interval [-0.026, -0.014]; d = 0.21) composite scores than self-reported unsupervised athletes. Speed-based tasks were most affected by self-reported supervision status, although the effect sizes were relatively small. These data lend credence to the hypothesis that supervision status may be a factor in the evaluation of ImPACT baseline test scores.
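The Cohen's d effect sizes reported above are conventionally computed as the mean difference divided by a pooled standard deviation; a minimal sketch (with made-up numbers, not the study's data):

```python
import math

def cohens_d(x, y):
    """Cohen's d: difference in means over the pooled sample SD."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd

d = cohens_d([2, 4, 6], [1, 3, 5])  # -> 0.5
```

Values around 0.2, as in the study's speed-based composites, are conventionally read as small effects.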
Explorations in statistics: hypothesis tests and P values.
Curran-Everett, Douglas
2009-06-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of Explorations in Statistics delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what we observe in the experiment to what we expect to see if the null hypothesis is true. The P value associated with the magnitude of that test statistic answers this question: if the null hypothesis is true, what proportion of possible values of the test statistic are at least as extreme as the one I got? Although statisticians continue to stress the limitations of hypothesis tests, there are two realities we must acknowledge: hypothesis tests are ingrained within science, and the simple test of a null hypothesis can be useful. As a result, it behooves us to explore the notions of hypothesis tests, test statistics, and P values.
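The question "what proportion of possible values of the test statistic are at least as extreme as the one I got?" is answered directly by an exact permutation test; a minimal sketch for a difference-in-means statistic (the data are illustrative):

```python
from itertools import combinations

def exact_permutation_pvalue(x, y):
    """Exact two-sided permutation test for a difference in means.

    Enumerates every way of splitting the pooled data into groups of the
    original sizes and counts how often the |mean difference| is at least
    as extreme as the observed one.
    """
    pooled = list(x) + list(y)
    n, total = len(x), len(pooled)
    mean = lambda v: sum(v) / len(v)
    observed = abs(mean(x) - mean(y))
    count, n_splits = 0, 0
    for idx in combinations(range(total), n):
        idx_set = set(idx)
        g1 = [pooled[i] for i in idx]
        g2 = [pooled[i] for i in range(total) if i not in idx_set]
        n_splits += 1
        if abs(mean(g1) - mean(g2)) >= observed - 1e-12:
            count += 1
    return count / n_splits

p = exact_permutation_pvalue([1, 2, 3], [4, 5, 6])  # -> 0.1
```

Here only 2 of the 20 possible splits are as extreme as the observed one, so P = 0.1: the observed arrangement is unusual, but not impossible, under the null.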
Tormos, José María; Barrios, Carlos; Pascual-Leone, Alvaro
2009-01-01
The aetiology of idiopathic scoliosis (IS) remains unknown; however, there is a growing body of evidence suggesting that the spine deformity could be the expression of a subclinical nervous system disorder. A defective sensory input or an anomalous sensorimotor integration may lead to an abnormal postural tone and therefore the development of a spine deformity. Inhibition of motor cortico-cortical excitability is abnormal in dystonia. Therefore, the study of cortico-cortical inhibition may shed some light on the dystonia hypothesis regarding the pathophysiology of IS. Paired pulse transcranial magnetic stimulation was used to study cortico-cortical inhibition and facilitation in nine adolescents with IS, five teenagers with congenital scoliosis (CS) and eight healthy age-matched controls. The effect of a previous conditioning stimulus (80% intensity of resting motor threshold) on the amplitude of the motor-evoked potential induced by the test stimulus (120% of resting motor threshold) was examined at various interstimulus intervals (ISIs) in both abductor pollicis brevis muscles. The results of healthy adolescents and those with CS showed a marked inhibitory effect of the conditioning stimulus on the response to the test stimulus at interstimulus intervals shorter than 6 ms. These findings do not differ from those reported for normal adults. However, children with IS revealed an abnormally reduced cortico-cortical inhibition at the short ISIs. Cortico-cortical inhibition was practically normal on the side of the scoliotic convexity while it was significantly reduced on the side of the scoliotic concavity. In conclusion, these findings support the hypothesis that a dystonic dysfunction underlies IS. Asymmetrical cortical hyperexcitability may play an important role in the pathogenesis of IS and represents an objective neurophysiological finding that could be used clinically. PMID:20033462
Packman-Braun, R
1988-01-01
The purpose of this study was to investigate, in a sample of patients with hemiparesis secondary to cerebrovascular accident, the relationship between the ratio of stimulus on time to off time and muscle fatigue using a commercial electrical stimulation unit. An experimental model was used to test the hypothesis that the smaller the stimulus off time relative to stimulus on time, the greater will be the muscle fatigue over time. The wrist extensor muscles of 18 patients with hemiparesis were stimulated electrically, and isometric force output was recorded continuously using an adapted strain gauge-recorder apparatus. For each testing session, peak on time of the electrical stimulus was set at 5 seconds, and off time was set at 5, 15, or 25 seconds. Six randomly assigned treatment groups participated in three separate treatment sessions in a different order at 48-hour intervals. Treatment sessions were continued either until wrist extensor muscle force output decreased to 50% of its initial value or for a maximum of 30 minutes. Data analysis revealed that significant differences in muscle tension developed among all duty cycles (p less than .01). Duty-cycle ratios of 1:1, 1:3, and 1:5 were shown to be progressively less fatiguing. Within the limits of this investigation, the 1:5 duty-cycle ratio was determined to be the best suited for initial use in programs of prolonged stimulation to the wrist extensor muscles of patients with hemiparesis. The hypothesis was accepted that the smaller the stimulus off time (rest interval) with respect to the stimulus on time, the greater will be the muscle fatigue over time.
Cognitive aspects of haptic form recognition by blind and sighted subjects.
Bailes, S M; Lambert, R M
1986-11-01
Studies using haptic form recognition tasks have generally concluded that the adventitiously blind perform better than the congenitally blind, implicating the importance of early visual experience in improved spatial functioning. The hypothesis was tested that the adventitiously blind have retained some ability to encode successive information obtained haptically in terms of a global visual representation, while the congenitally blind use a coding system based on successive inputs. Eighteen blind (adventitiously and congenitally) and 18 sighted (blindfolded and performing with vision) subjects were tested on their recognition of raised line patterns when the standard was presented in segments: in immediate succession, or with unfilled intersegmental delays of 5, 10, or 15 seconds. The results did not support the above hypothesis. Three main findings were obtained: normally sighted subjects were both faster and more accurate than the other groups; all groups improved in accuracy of recognition as a function of length of interstimulus interval; sighted subjects tended to report using strategies with a strong verbal component while the blind tended to rely on imagery coding. These results are explained in terms of information-processing theory consistent with dual encoding systems in working memory.
Sun, Yanqing; Sun, Liuquan; Zhou, Jie
2013-07-01
This paper studies the generalized semiparametric regression model for longitudinal data where the covariate effects are constant for some and time-varying for others. Different link functions can be used to allow more flexible modelling of longitudinal data. The nonparametric components of the model are estimated using a local linear estimating equation and the parametric components are estimated through a profile estimating function. The method automatically adjusts for heterogeneity of sampling times, allowing the sampling strategy to depend on the past sampling history as well as possibly time-dependent covariates without explicitly modelling such dependence. A [Formula: see text]-fold cross-validation bandwidth selection is proposed as a working tool for locating an appropriate bandwidth. A criterion for selecting the link function is proposed to provide a better fit to the data. Large sample properties of the proposed estimators are investigated. Large sample pointwise and simultaneous confidence intervals for the regression coefficients are constructed. Formal hypothesis testing procedures are proposed to check for the covariate effects and whether the effects are time-varying. A simulation study is conducted to examine the finite sample performances of the proposed estimation and hypothesis testing procedures. The methods are illustrated with a data example.
Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D
2014-10-01
We treat multireader multicase (MRMC) reader studies for which a reader's diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1=P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1-P2 when P1-P2=0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1=P2). To illustrate the utility of our simulation model, we adapt the Obuchowski-Rockette-Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data.
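The paper's simulation model induces a specific correlation structure across readers, cases, and modalities; a much-simplified sketch of the general idea, correlated binary agreement data generated via additive reader and case effects on the agreement probability, is below. This is not the authors' model, and all parameter values are illustrative.

```python
import random

def simulate_binary_agreement(n_readers, n_cases, p=0.8,
                              sd_reader=0.05, sd_case=0.10, seed=0):
    """Simulate reader-by-case binary agreement scores (1 = agrees with truth).

    Correlation across readers and across cases is induced by additive
    Gaussian reader and case effects on the agreement probability.
    Simplified sketch only, not the paper's model.
    """
    rng = random.Random(seed)
    reader_eff = [rng.gauss(0.0, sd_reader) for _ in range(n_readers)]
    case_eff = [rng.gauss(0.0, sd_case) for _ in range(n_cases)]

    def clip(v):
        return min(max(v, 0.0), 1.0)

    return [[1 if rng.random() < clip(p + reader_eff[r] + case_eff[c]) else 0
             for c in range(n_cases)]
            for r in range(n_readers)]

scores = simulate_binary_agreement(n_readers=5, n_cases=200)
```

Repeating such simulations many times is what lets one check, e.g., that a nominal 95% confidence interval for the agreement probability actually covers the truth about 95% of the time.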
Sequential parallel comparison design with binary and time-to-event outcomes.
Silverman, Rachel Kloss; Ivanova, Anastasia; Fine, Jason
2018-04-30
Sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials, especially trials with a potentially high placebo effect. A sequential parallel comparison design is conducted in 2 stages. Participants are randomized between active therapy and placebo in stage 1. Then, stage 1 placebo nonresponders are rerandomized between active therapy and placebo. Data from the 2 stages are pooled to yield a single P value. We consider SPCD with binary and with time-to-event outcomes. For time-to-event outcomes, response is defined as a favorable event prior to the end of follow-up for a given stage of the SPCD. We show that for these cases, the usual test statistics from stages 1 and 2 are asymptotically normal and uncorrelated under the null hypothesis, leading to a straightforward combined testing procedure. In addition, we show that the estimators of the treatment effects from the 2 stages are asymptotically normal and uncorrelated under the null and alternative hypotheses, yielding confidence interval procedures with correct coverage. Simulations and real data analysis demonstrate the utility of the binary and time-to-event SPCD. Copyright © 2018 John Wiley & Sons, Ltd.
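Because the two stage statistics are asymptotically standard normal and uncorrelated under the null, they can be combined into a single standard normal statistic with prespecified weights. The sketch below illustrates this general combination rule; the weights and z-values are illustrative, not from the paper.

```python
import math

def spcd_combined_test(z1, z2, w1=0.5, w2=0.5):
    """Combine two independent standard-normal stage statistics.

    Under the null, z = (w1*z1 + w2*z2) / sqrt(w1^2 + w2^2) is standard
    normal, giving a single two-sided P value for the pooled test.
    """
    z = (w1 * z1 + w2 * z2) / math.sqrt(w1 ** 2 + w2 ** 2)
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # standard normal CDF
    return z, 2.0 * (1.0 - phi)

z, p = spcd_combined_test(1.96, 1.96)  # z ~ 2.77, p ~ 0.006
```

Note how two results that are each borderline (z = 1.96) pool into clear evidence, which is the point of combining the stages.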
NASA Astrophysics Data System (ADS)
Lupien, R.; Russell, J. M.; Cohen, A. S.; Feibel, C. S.; Beck, C.; Castañeda, I. S.
2016-12-01
Climate change is thought to play a critical role in human evolution; however, this hypothesis is difficult to test due to a lack of long, high-quality paleoclimate records from key hominin fossil locales. To address this issue, we examine Plio-Pleistocene lake sediment drill cores from East Africa that were recovered by the Hominin Sites and Paleolakes Drilling Project, an international effort to study the environment in which our hominin ancestors evolved and dispersed. With new data we test various evolutionary hypotheses, such as the "variability selection" hypothesis, which posits that high-frequency environmental variations selected for generalist traits that allowed hominins to expand into variable environments. We analyzed organic geochemical signals of climate in lake cores from West Turkana, Kenya, which span 1.87-1.38 Ma and contain the first fossils from Homo erectus. In particular, we present a compound-specific hydrogen isotopic analysis of terrestrial plant waxes (δDwax) that records regional hydrology. The amount effect dominates water isotope fractionation in the tropics; therefore, these data are interpreted to reflect mean annual rainfall, which affects vegetation structure and thus, hominin habitats. The canonical view of East Africa is that climate became drier and increasingly felt high-latitude glacial-interglacial cycles during the Plio-Pleistocene. However, the drying trend seen in some records is not evident in Turkana δDwax, signifying instead a climate with a steady mean state. Spectral and moving variance analyses indicate paleohydrological variations related to both high-latitude glaciation (41 ky cycle) and local insolation-forced monsoons (21 ky cycle). An interval of particularly high-amplitude rainfall variation occurs at 1.7 Ma, which coincides with the intensification of the Walker Circulation. These results identify high- and low-latitude controls on East African paleohydrology during Homo erectus evolution. 
In particular, the interval of high-amplitude variability coincides with changes in hominin evolution and lends support to the "variability selection" hypothesis. Similar analyses of a drill core from Northern Awash, Ethiopia (3.3-2.9 Ma) will be presented to compare Pliocene and Pleistocene climate variations.
Cavus, H A; Msetfi, Rachel M
2016-11-01
When there is no contingency between actions and outcomes, but outcomes occur frequently, people tend to judge that they have control over those outcomes, a phenomenon known as the outcome density (OD) effect. Recent studies show that the OD effect depends on the duration of the temporal interval between action-outcome conjunctions, with longer intervals inducing stronger effects. However, under some circumstances the OD effect is reduced, for example when participants are mildly depressed. We reasoned that working memory (WM) plays an important role in learning about the context: reduced WM capacity to process contextual information during the intertrial intervals (ITIs) of contingency learning might lead to reduced OD effects (the limited-capacity hypothesis). To test this, we used a novel dual-task procedure that increases the WM load during the ITIs of an operant (e.g., action-outcome) contingency learning task to impact contextual learning. We tested our hypotheses in groups of students with zero (Experiment 1, N=34) and positive (Experiment 2, N=34) contingencies. The findings indicated that WM load during the ITIs reduced the OD effects compared to no-load conditions (Experiments 1 and 2). In Experiment 2, we observed reduced OD effects on action judgements under high load in zero and positive contingencies. However, the participants' judgements were still sensitive to the difference between zero and positive contingencies. We discuss the implications of our findings for the effects of depression and context in contingency learning. Copyright © 2016 Elsevier B.V. All rights reserved.
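The action-outcome contingency manipulated in such studies is standardly quantified as Delta-P, the difference between the outcome probability given an action and given no action; outcome density is simply the overall outcome rate. A minimal sketch with illustrative cell counts (not the study's data):

```python
def delta_p(a_out, a_noout, na_out, na_noout):
    """Contingency Delta-P = P(outcome | action) - P(outcome | no action).

    Arguments are the four cell counts of the 2x2 action-by-outcome table.
    """
    return a_out / (a_out + a_noout) - na_out / (na_out + na_noout)

# High outcome density but zero contingency (the OD-effect situation):
zero = delta_p(30, 10, 30, 10)      # -> 0.0
# Positive contingency:
positive = delta_p(30, 10, 10, 30)  # -> 0.5
```

In the zero-contingency case the outcome occurs 75% of the time regardless of the action, which is exactly the situation in which inflated judgements of control are observed.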
Smith, P; Linscott, L L; Vadivelu, S; Zhang, B; Leach, J L
2016-05-01
Widening of the occipital condyle-C1 interval is the most specific and sensitive means of detecting atlanto-occipital dislocation. Recent studies attempting to define normal measurements of the condyle-C1 interval in children have varied substantially. This study was performed to test the null hypothesis that condyle-C1 interval morphology and joint measurements do not change as a function of age. Imaging review of subjects undergoing CT of the upper cervical spine for reasons unrelated to trauma or developmental abnormality was performed. Four equidistant measurements were obtained for each bilateral condyle-C1 interval on sagittal and coronal images. The cohort was divided into 7 age groups to calculate the mean, SD, and 95% CIs for the average condyle-C1 interval in both planes. The prevalence of a medial occipital condyle notch was calculated. Two hundred forty-eight joints were measured in 124 subjects with an age range of 2 days to 22 years. The condyle-C1 interval varies substantially by age. Average coronal measurements are larger and more variable than sagittal measurements. The medial occipital condyle notch is most prevalent from 1 to 12 years and is uncommon in older adolescents and young adults. The condyle-C1 interval increases during the first several years of life, is largest in the 2- to 4-year age range, and then decreases through late childhood and adolescence. A single threshold value to detect atlanto-occipital dissociation may not be sensitive and specific for all age groups. Application of this normative data to documented cases of atlanto-occipital injury is needed to determine clinical utility. © 2016 by American Journal of Neuroradiology.
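Per-age-group means, SDs, and 95% CIs of the kind reported above can be computed as below, using a normal approximation for the CI of the mean; the sample values are illustrative, not the study's measurements.

```python
import math

def mean_sd_ci(values, z=1.96):
    """Sample mean, sample SD, and normal-approximation 95% CI for the mean."""
    n = len(values)
    m = sum(values) / n
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / (n - 1))
    half = z * sd / math.sqrt(n)
    return m, sd, (m - half, m + half)

# Illustrative interval measurements (mm) for one hypothetical age group:
m, sd, (lo, hi) = mean_sd_ci([1.0, 2.0, 3.0, 4.0, 5.0])
```

Computing this separately within each age group is what reveals the age dependence the study reports, and why a single threshold across all ages is problematic.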
Stochastic model of financial markets reproducing scaling and memory in volatility return intervals
NASA Astrophysics Data System (ADS)
Gontis, V.; Havlin, S.; Kononovicius, A.; Podobnik, B.; Stanley, H. E.
2016-11-01
We investigate the volatility return intervals in the NYSE and FOREX markets. We explain previous empirical findings using a model based on the interacting agent hypothesis instead of the widely-used efficient market hypothesis. We derive macroscopic equations based on the microscopic herding interactions of agents and find that they are able to reproduce various stylized facts of different markets and different assets with the same set of model parameters. We show that the power-law properties and the scaling of return intervals and other financial variables have a similar origin and could be a result of a general class of non-linear stochastic differential equations derived from a master equation of an agent system that is coupled by herding interactions. Specifically, we find that this approach enables us to recover the volatility return interval statistics as well as volatility probability and spectral densities for the NYSE and FOREX markets, for different assets, and for different time-scales. We also find that the historical S&P500 monthly series exhibits the same volatility return interval properties recovered by our proposed model. Our statistical results suggest that human herding is so strong that it persists even when other evolving fluctuations perturb the financial system.
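Empirically, a volatility return interval is the waiting time between successive exceedances of a volatility threshold; a minimal sketch of extracting these intervals from a series (threshold and data illustrative):

```python
def return_intervals(volatility, threshold):
    """Waiting times between successive exceedances of a volatility threshold."""
    exceed_times = [t for t, v in enumerate(volatility) if v > threshold]
    return [t2 - t1 for t1, t2 in zip(exceed_times, exceed_times[1:])]

intervals = return_intervals([0.1, 3.0, 0.2, 0.4, 2.5, 0.3, 2.8],
                             threshold=2.0)  # -> [3, 2]
```

The distributions and autocorrelations of such intervals, across thresholds and time-scales, are the stylized facts the paper's herding model aims to reproduce.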
Net Reclassification Indices for Evaluating Risk-Prediction Instruments: A Critical Review
Kerr, Kathleen F.; Wang, Zheyu; Janes, Holly; McClelland, Robyn L.; Psaty, Bruce M.; Pepe, Margaret S.
2014-01-01
Net reclassification indices have recently become popular statistics for measuring the prediction increment of new biomarkers. We review the various types of net reclassification indices and their correct interpretations. We evaluate the advantages and disadvantages of quantifying the prediction increment with these indices. For pre-defined risk categories, we relate net reclassification indices to existing measures of the prediction increment. We also consider statistical methodology for constructing confidence intervals for net reclassification indices and evaluate the merits of hypothesis testing based on such indices. We recommend that investigators using net reclassification indices should report them separately for events (cases) and nonevents (controls). When there are two risk categories, the components of net reclassification indices are the same as the changes in the true-positive and false-positive rates. We advocate use of true- and false-positive rates and suggest it is more useful for investigators to retain the existing, descriptive terms. When there are three or more risk categories, we recommend against net reclassification indices because they do not adequately account for clinically important differences in shifts among risk categories. The category-free net reclassification index is a new descriptive device designed to avoid pre-defined risk categories. However, it suffers from many of the same problems as other measures such as the area under the receiver operating characteristic curve. In addition, the category-free index can mislead investigators by overstating the incremental value of a biomarker, even in independent validation data. When investigators want to test a null hypothesis of no prediction increment, the well-established tests for coefficients in the regression model are superior to the net reclassification index. 
If investigators want to use net reclassification indices, confidence intervals should be calculated using bootstrap methods rather than published variance formulas. The preferred single-number summary of the prediction increment is the improvement in net benefit. PMID:24240655
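For the two-category case, the review notes that the NRI components equal the changes in the true- and false-positive rates, and it recommends bootstrap confidence intervals over published variance formulas. A minimal sketch of both points on hypothetical risk scores (the threshold and data are illustrative assumptions, not the authors' code):

```python
import numpy as np

def nri_components(y, old_risk, new_risk, threshold=0.2):
    """Event and nonevent NRI for two risk categories.

    With one threshold these equal the change in the true-positive rate
    (events) and minus the change in the false-positive rate (nonevents).
    """
    y = np.asarray(y, dtype=bool)
    up = np.asarray(new_risk) >= threshold
    old_up = np.asarray(old_risk) >= threshold
    nri_event = up[y].mean() - old_up[y].mean()        # delta TPR
    nri_nonevent = old_up[~y].mean() - up[~y].mean()   # -delta FPR
    return nri_event, nri_nonevent

def bootstrap_ci(y, old_risk, new_risk, n_boot=2000, seed=0):
    """Percentile bootstrap CI for the event NRI, resampling subjects."""
    rng = np.random.default_rng(seed)
    y, old_risk, new_risk = map(np.asarray, (y, old_risk, new_risk))
    n = len(y)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        e, _ = nri_components(y[idx], old_risk[idx], new_risk[idx])
        stats.append(e)
    return np.percentile(stats, [2.5, 97.5])
```

Reporting the event and nonevent components separately, as the authors recommend, avoids the ambiguity of a single summed index.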
Identifying Galactic Cosmic Ray Origins With Super-TIGER
NASA Technical Reports Server (NTRS)
deNolfo, Georgia; Binns, W. R.; Israel, M. H.; Christian, E. R.; Mitchell, J. W.; Hams, T.; Link, J. T.; Sasaki, M.; Labrador, A. W.; Mewaldt, R. A.;
2009-01-01
Super-TIGER (Super Trans-Iron Galactic Element Recorder) is a new long-duration balloon-borne instrument designed to test and clarify an emerging model of cosmic-ray origins and models for the atomic processes by which nuclei are selected for acceleration. A sensitive test of the origin of cosmic rays is the measurement of ultra-heavy elemental abundances (Z ≥ 30). Super-TIGER is a large-area (5 m^2) instrument designed to measure the elements in the interval 30 ≤ Z ≤ 42 with individual-element resolution and high statistical precision, and to make exploratory measurements through Z = 60. It will also measure with high statistical accuracy the energy spectra of the more abundant elements in the interval 14 ≤ Z ≤ 30 at energies 0.8 ≤ E ≤ 10 GeV/nucleon. These spectra will give a sensitive test of the hypothesis that microquasars or other sources could superpose spectral features on the otherwise smooth energy spectra previously measured with less statistical accuracy. Super-TIGER builds on the heritage of the smaller TIGER, which produced the first well-resolved measurements of the elemental abundances of Ga (Z = 31), Ge (Z = 32), and Se (Z = 34). We present the Super-TIGER design, schedule, and progress to date, and discuss the relevance of ultra-heavy measurements to cosmic-ray origins.
Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.
Spiess, Martin; Jordan, Pascal; Wendt, Mike
2018-05-07
In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests, where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and of the bootstrap techniques for calculating confidence intervals and conducting hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap techniques is illustrated using data from a task-switch experiment based on a within-subjects design with 32 cells and 33 participants.
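As a generic illustration of the bias-corrected and accelerated (BCa) bootstrap interval the abstract considers, here is a sketch using SciPy on simulated skewed data; the data-generating choices are assumptions for illustration, not the paper's design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical per-participant effect values for 33 participants;
# skewed errors are where BCa intervals outperform normality-based ones.
diffs = rng.gamma(shape=2.0, scale=10.0, size=33) - 15.0

# BCa bootstrap CI for the mean (SciPy supports method='BCa' directly).
res = stats.bootstrap((diffs,), np.mean, confidence_level=0.95,
                      method='BCa', n_resamples=2000)
lo, hi = res.confidence_interval
```

Setting `method='percentile'` instead gives the naive percentile bootstrap mentioned for the Wald-type tests.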
Speech rhythm in Kannada speaking adults who stutter.
Maruthy, Santosh; Venugopal, Sahana; Parakh, Priyanka
2017-10-01
A longstanding hypothesis about the underlying mechanisms of stuttering suggests that speech disfluencies may be associated with problems in the timing and temporal patterning of speech events. Fifteen adults who do and do not stutter read five sentences, and from these, the vocalic and consonantal durations were measured. Using these, the pairwise variability index (raw PVI for consonantal intervals and normalised PVI for vocalic intervals) and interval-based rhythm metrics (PercV, DeltaC, DeltaV, VarcoC and VarcoV) were calculated for all participants. Findings suggested higher mean values in adults who stutter than in adults who do not stutter for all rhythm metrics except VarcoV. Further, a statistically significant difference between the two groups was found for all rhythm metrics except VarcoV. Combining the present results with consistent prior findings of rhythm deficits in children and adults who stutter, there appears to be strong empirical support for the hypothesis that individuals who stutter may have deficits in the generation of rhythmic speech patterns.
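The metrics named above have simple, standard definitions in the rhythm literature: raw PVI is the mean absolute difference between successive interval durations, normalised PVI rate-normalises each difference and scales by 100, and Varco is 100 times the coefficient of variation. A sketch under those standard definitions (not the authors' code; durations are hypothetical, in milliseconds):

```python
import numpy as np

def raw_pvi(durs):
    """Raw PVI: mean absolute difference between successive intervals."""
    d = np.asarray(durs, dtype=float)
    return np.abs(np.diff(d)).mean()

def norm_pvi(durs):
    """Normalised PVI: each difference divided by the pair mean, x100."""
    d = np.asarray(durs, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(d)) / ((d[:-1] + d[1:]) / 2.0))

def varco(durs):
    """Varco (VarcoC/VarcoV): 100 x SD / mean of interval durations."""
    d = np.asarray(durs, dtype=float)
    return 100.0 * d.std(ddof=0) / d.mean()
```

Applied to consonantal intervals these give raw PVI and VarcoC; applied to vocalic intervals, normalised PVI and VarcoV.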
Statistical inference for tumor growth inhibition T/C ratio.
Wu, Jianrong
2010-09-01
The tumor growth inhibition T/C ratio is commonly used to quantify treatment effects in drug screening tumor xenograft experiments. The T/C ratio is converted to an antitumor activity rating using an arbitrary cutoff point and often without any formal statistical inference. Here, we applied a nonparametric bootstrap method and a small sample likelihood ratio statistic to make a statistical inference of the T/C ratio, including both hypothesis testing and a confidence interval estimate. Furthermore, sample size and power are also discussed for statistical design of tumor xenograft experiments. Tumor xenograft data from an actual experiment were analyzed to illustrate the application.
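The nonparametric bootstrap part of this approach can be sketched as a percentile interval for the ratio of arm means; the tumor volumes below are made up, and the authors' small-sample likelihood ratio statistic is not reproduced here:

```python
import numpy as np

def tc_ratio_ci(treated, control, n_boot=5000, alpha=0.05, seed=1):
    """Nonparametric bootstrap CI for the T/C ratio of mean tumor volumes.

    Resamples animals within each arm independently (with replacement)
    and takes percentile limits of the bootstrap ratio distribution.
    """
    rng = np.random.default_rng(seed)
    t = np.asarray(treated, dtype=float)
    c = np.asarray(control, dtype=float)
    ratios = [rng.choice(t, t.size).mean() / rng.choice(c, c.size).mean()
              for _ in range(n_boot)]
    est = t.mean() / c.mean()
    lo, hi = np.percentile(ratios, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return est, lo, hi
```

A formal test of whether the T/C ratio is below an activity cutoff then follows from whether the interval's upper limit falls under that cutoff, rather than from the point estimate alone.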
Traumatic Brain Injury, Sleep Quality, and Suicidal Ideation in Iraq/Afghanistan Era Veterans.
DeBeer, Bryann B; Kimbrel, Nathan A; Mendoza, Corina; Davidson, Dena; Meyer, Eric C; La Bash, Heidi; Gulliver, Suzy Bird; Morissette, Sandra B
2017-07-01
The objective of this study was to test the hypothesis that sleep quality mediates the association between traumatic brain injury (TBI) history and current suicidal ideation. Measures of TBI history, sleep quality, and suicidal ideation were administered to 130 Iraq/Afghanistan veterans. As expected, sleep quality mediated the effect of TBI history on current suicidal ideation (indirect effect, 0.0082; 95% confidence interval, 0.0019-0.0196), such that history of TBI was associated with worse sleep quality, which was, in turn, associated with increased suicidal ideation. These findings highlight the importance of assessing TBI history and sleep quality during suicide risk assessments for veterans.
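The indirect effect reported above is the product of the TBI-to-sleep path and the sleep-to-ideation path, with a bootstrap confidence interval. A generic sketch of that product-of-coefficients approach on hypothetical arrays (not the authors' model or software):

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b indirect effect from two OLS fits: M ~ X and Y ~ M + X."""
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    X1 = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]      # X -> M path
    X2 = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][1]      # M -> Y path, X held
    return a * b

def bootstrap_indirect(x, m, y, n_boot=2000, seed=0):
    """Percentile bootstrap CI for the indirect effect."""
    rng = np.random.default_rng(seed)
    x, m, y = map(np.asarray, (x, m, y))
    n = len(x)
    boot = []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)                     # resample participants
        boot.append(indirect_effect(x[i], m[i], y[i]))
    return np.percentile(boot, [2.5, 97.5])
```

A bootstrap interval that excludes zero, as in the study's (0.0019, 0.0196), supports mediation.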
After p Values: The New Statistics for Undergraduate Neuroscience Education.
Calin-Jageman, Robert J
2017-01-01
Statistical inference is a methodological cornerstone for neuroscience education. For many years this has meant inculcating neuroscience majors into null hypothesis significance testing with p values. There is increasing concern, however, about the pervasive misuse of p values. It is time to start planning statistics curricula for neuroscience majors that replaces or de-emphasizes p values. One promising alternative approach is what Cumming has dubbed the "New Statistics", an approach that emphasizes effect sizes, confidence intervals, meta-analysis, and open science. I give an example of the New Statistics in action and describe some of the key benefits of adopting this approach in neuroscience education.
The Super-TIGER Instrument to Probe Galactic Cosmic-Ray Origins
NASA Astrophysics Data System (ADS)
Ward, John E.
2013-04-01
Super-TIGER is a large area (5.4 m^2) balloon-borne instrument designed to measure cosmic-ray nuclei in the charge interval 30 <= Z <= 42 with individual-element resolution and high statistical precision, and make exploratory measurements through Z = 56. These measurements will provide sensitive tests of the emerging model of cosmic-ray origins in OB associations and models of the mechanism for selection of nuclei for acceleration. Furthermore, Super-TIGER will measure with high statistical accuracy the energy spectra of the more abundant elements in the interval 10 <= Z <= 28 at energies 0.8 < E < 10 GeV/nucleon to test the hypothesis that nearby micro-quasars could superpose features on the energy spectra. Super-TIGER, which builds on the heritage of the smaller TIGER, was constructed by a collaboration involving WUSTL, NASA/GSFC, Caltech, JPL and U Minn. It was successfully launched from Antarctica in December 2012, collecting high-quality data for over one month. Particle charge and energy were measured with a combination of plastic scintillators, acrylic and silica-aerogel Cherenkov detectors, and a scintillating fiber hodoscope. Details of the flight, instrument performance, data analysis and preliminary results of the Super-TIGER flight will be presented.
Seeking health information on the web: positive hypothesis testing.
Kayhan, Varol Onur
2013-04-01
The goal of this study is to investigate positive hypothesis testing among consumers of health information when they search the Web. After demonstrating the extent of positive hypothesis testing in Experiment 1, we conducted Experiment 2 to test the effectiveness of two debiasing techniques. A total of 60 undergraduate students searched a tightly controlled online database developed by the authors to test the validity of a hypothesis. The database had four abstracts that confirmed the hypothesis and three abstracts that disconfirmed it. Findings of Experiment 1 showed that the majority of participants (85%) exhibited positive hypothesis testing. In Experiment 2, we found that the recommendation technique was not effective in reducing positive hypothesis testing, since none of the participants assigned to this server could retrieve disconfirming evidence. Experiment 2 also showed that the incorporation technique successfully reduced positive hypothesis testing, since 75% of the participants could retrieve disconfirming evidence. Positive hypothesis testing on the Web is an understudied topic. More studies are needed to validate the effectiveness of the debiasing techniques discussed in this study and to develop new techniques. Search engine developers should consider developing new options for users so that both confirming and disconfirming evidence can be presented in search results as users test hypotheses using search engines. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Liu, C C; Crone, N E; Franaszczuk, P J; Cheng, D T; Schretlen, D S; Lenz, F A
2011-08-25
The current model of fear conditioning suggests that it is mediated through modules involving the amygdala (AMY), hippocampus (HIP), and frontal lobe (FL). We now test the hypothesis that habituation and acquisition stages of a fear conditioning protocol are characterized by different event-related causal interactions (ERCs) within and between these modules. The protocol used the painful cutaneous laser as the unconditioned stimulus and ERC was estimated by analysis of local field potentials recorded through electrodes implanted for investigation of epilepsy. During the prestimulus interval of the habituation stage FL>AMY ERC interactions were common. For comparison, in the poststimulus interval of the habituation stage, only a subdivision of the FL (dorsolateral prefrontal cortex, dlPFC) still exerted the FL>AMY ERC interaction (dlFC>AMY). For a further comparison, during the poststimulus interval of the acquisition stage, the dlPFC>AMY interaction persisted and an AMY>FL interaction appeared. In addition to these ERC interactions between modules, the results also show ERC interactions within modules. During the poststimulus interval, HIP>HIP ERC interactions were more common during acquisition, and deep hippocampal contacts exerted causal interactions on superficial contacts, possibly explained by connectivity between the perihippocampal gyrus and the HIP. During the prestimulus interval of the habituation stage, AMY>AMY ERC interactions were commonly found, while interactions between the deep and superficial AMY (indirect pathway) were independent of intervals and stages. These results suggest that the network subserving fear includes distributed or widespread modules, some of which are themselves "local networks." ERC interactions between and within modules can be either static or change dynamically across intervals or stages of fear conditioning. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
Information extraction from dynamic PS-InSAR time series using machine learning
NASA Astrophysics Data System (ADS)
van de Kerkhof, B.; Pankratius, V.; Chang, L.; van Swol, R.; Hanssen, R. F.
2017-12-01
Due to the increasing number of SAR satellites, with shorter repeat intervals and higher resolutions, SAR data volumes are exploding. Time series analyses of SAR data, i.e. Persistent Scatterer (PS) InSAR, enable the deformation monitoring of the built environment at an unprecedented scale, with hundreds of scatterers per km2, updated weekly. Potential hazards, e.g. due to failure of aging infrastructure, can be detected at an early stage. Yet, this requires the operational processing of billions of measurement points over hundreds of epochs, updating this data set dynamically as new data come in, and testing whether points (start to) behave in an anomalous way. Moreover, the quality of PS-InSAR measurements is ambiguous and heterogeneous, which will yield false positives and false negatives. Such analyses are numerically challenging. Here we extract relevant information from PS-InSAR time series using machine learning algorithms. We cluster (group together) time series with similar behaviour, even though they may not be spatially close, such that the results can be used for further analysis. First we reduce the dimensionality of the dataset in order to be able to cluster the data, since applying clustering techniques to high-dimensional datasets often yields unsatisfactory results. Our approach is to apply t-distributed Stochastic Neighbor Embedding (t-SNE), a machine learning algorithm for dimensionality reduction of high-dimensional data to a 2D or 3D map, and to cluster the result using Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The results show that we are able to detect and cluster time series with similar behaviour, which is the starting point for more extensive analysis into the underlying driving mechanisms. The results of the methods are compared to conventional hypothesis testing as well as a Self-Organising Map (SOM) approach.
Hypothesis testing is robust and takes the stochastic nature of the observations into account, but is time consuming. Therefore, we apply our machine learning approach in sequence with the hypothesis testing approach, in order to benefit both from the reduced computation time of the machine learning approach and from the robust quality metrics of hypothesis testing. We acknowledge support from NASA AIST NNX15AG84G (PI V. Pankratius).
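The dimensionality-reduction-then-clustering pipeline described above can be sketched with scikit-learn. The synthetic deformation time series below (stable, linearly subsiding, seasonal) stand in for real PS-InSAR data; all sizes and the t-SNE/DBSCAN parameters are illustrative assumptions:

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)                   # 50 acquisition epochs

# Three behaviour classes of 100 scatterers each (displacement in mm).
stable = rng.normal(0.0, 1.0, (100, 50))
subsiding = -12.0 * t + rng.normal(0.0, 1.0, (100, 50))
seasonal = 5.0 * np.sin(4.0 * np.pi * t) + rng.normal(0.0, 1.0, (100, 50))
series = np.vstack([stable, subsiding, seasonal])

# Embed the raw time series in 2-D, then cluster by density;
# DBSCAN labels sparse points -1, a natural flag for anomalous scatterers.
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(series)
labels = DBSCAN(eps=4.0, min_samples=10).fit_predict(embedding)
```

Clusters found this way group series with similar temporal behaviour even when the scatterers are not spatially close, matching the intent described in the abstract.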
Su, Zhong; Zhang, Lisha; Ramakrishnan, V; Hagan, Michael; Anscher, Mitchell
2011-05-01
To evaluate both the Calypso Systems' (Calypso Medical Technologies, Inc., Seattle, WA) localization accuracy in the presence of wireless metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters of dose verification system (DVS, Sicel Technologies, Inc., Morrisville, NC) and the dosimeters' reading accuracy in the presence of wireless electromagnetic transponders inside a phantom. A custom-made, solid-water phantom was fabricated with space for transponders and dosimeters. Two inserts were machined with positioning grooves precisely matching the dimensions of the transponders and dosimeters and were arranged in orthogonal and parallel orientations, respectively. To test the transponder localization accuracy with/without presence of dosimeters (hypothesis 1), multivariate analyses were performed on transponder-derived localization data with and without dosimeters at each preset distance to detect statistically significant localization differences between the control and test sets. To test dosimeter dose-reading accuracy with/without presence of transponders (hypothesis 2), an approach of alternating the transponder presence in seven identical fraction dose (100 cGy) deliveries and measurements was implemented. Two-way analysis of variance was performed to examine statistically significant dose-reading differences between the two groups and the different fractions. A relative-dose analysis method was also used to evaluate transponder impact on dose-reading accuracy after dose-fading effect was removed by a second-order polynomial fit. Multivariate analysis indicated that hypothesis 1 was false; there was a statistically significant difference between the localization data from the control and test sets. 
However, the upper and lower bounds of the 95% confidence intervals of the localized positional differences between the control and test sets were less than 0.1 mm, which is significantly smaller than the minimum clinical localization resolution of 0.5 mm. For hypothesis 2, analysis of variance indicated that there was no statistically significant difference between the dosimeter readings with and without the presence of transponders. In both the orthogonal and parallel configurations, the differences between polynomial-fit and measured dose values were within 1.75%. The phantom study indicated that the Calypso System's localization accuracy was not clinically affected by the presence of DVS wireless MOSFET dosimeters, and that the dosimeter-measured doses were not affected by the presence of transponders. Thus, the same patients could be implanted with both transponders and dosimeters to benefit from the improved accuracy of radiotherapy treatments offered by combined use of the two systems.
Orbit-spin coupling and the interannual variability of global-scale dust storm occurrence on Mars
NASA Astrophysics Data System (ADS)
Shirley, James H.; Mischna, Michael A.
2017-05-01
A new physical hypothesis predicts that a weak coupling of the orbital and rotational motions of extended bodies may give rise to a modulation of circulatory flows within their atmospheres. Driven cycles of intensification and relaxation of large-scale circulatory flows are predicted, with the phasing of these changes linked directly to the rate of change of the orbital angular momentum, dL/dt, with respect to inertial frames. We test the hypothesis that global-scale dust storms (GDS) on Mars may occur when periods of circulatory intensification (associated with positive and negative extrema of the dL/dt waveform) coincide with the southern summer dust storm season on Mars. The orbit-spin coupling hypothesis additionally predicts that the intervening 'transitional' periods, which are characterized by the disappearance and subsequent sign change of dL/dt, may be unfavorable for the occurrence of GDS, when they occur during the southern summer dust storm season. These hypotheses are strongly supported by comparisons between calculated dynamical time series of dL/dt and historic observations. All of the nine known global-scale dust storms on Mars took place during Mars years when circulatory intensification during the dust storm season is 'retrodicted' under the orbit-spin coupling hypothesis. None of the historic global-scale dust storms of our catalog occurred during transitional intervals. Orbit-spin coupling appears to play an important role in the excitation of the interannual variability of the atmospheric circulation of Mars.
Nearby grandmother enhances calf survival and reproduction in Asian elephants
Lahdenperä, Mirkka; Mar, Khyne U.; Lummaa, Virpi
2016-01-01
Usually animals reproduce into old age, but a few species such as humans and killer whales can live for decades after their last reproduction. The grandmother hypothesis proposes that such a life history evolved through older females switching to invest in their existing (grand)offspring, thereby increasing their inclusive fitness and selection for post-reproductive lifespan. However, positive grandmother effects are also found in non-menopausal taxa, but evidence of their associated fitness effects is rare and only a few tests of the hypothesis in such species exist. Here we investigate grandmother effects in Asian elephants. Using a multigenerational demographic dataset on semi-captive elephants in Myanmar, we found that grandcalves of young mothers (<20 years) had an 8 times lower mortality risk if the grandmother resided with her grandcalf than if she resided elsewhere. Resident grandmothers also decreased their daughters' inter-birth intervals by one year. In contrast to the hypothesis's predictions, the grandmother's own reproductive status did not modify these grandmother benefits. That elephant grandmothers increased their inclusive fitness by enhancing their daughters' reproductive rate and success irrespective of their own reproductive status suggests that fitness-enhancing grandmaternal effects are widespread, and challenges the view that grandmother effects alone select for menopause coupled with long post-reproductive lifespan. PMID:27282468
Influence of platform switching on bone-level alterations: a three-year randomized clinical trial.
Enkling, N; Jöhren, P; Katsoulis, J; Bayer, S; Jervøe-Storm, P-M; Mericske-Stern, R; Jepsen, S
2013-12-01
The concept of platform switching has been introduced to implant dentistry based on clinical observations of reduced peri-implant crestal bone loss. However, published data are controversial, and most studies are limited to 12 months. The aim of the present randomized clinical trial was to test the hypothesis that platform switching has a positive impact on crestal bone-level changes after 3 years. Two implants with a diameter of 4 mm were inserted crestally in the posterior mandible of 25 patients. The intraindividual allocation of platform switching (3.3-mm platform) and the standard implant (4-mm platform) was randomized. After 3 months of submerged healing, single-tooth crowns were cemented. Patients were followed up at short intervals for monitoring of healing and oral hygiene. Statistical analysis for the influence of time and platform type on bone levels employed the Brunner-Langer model. At 3 years, the mean radiographic peri-implant bone loss was 0.69 ± 0.43 mm (platform switching) and 0.74 ± 0.57 mm (standard platform). The mean intraindividual difference was 0.05 ± 0.58 mm (95% confidence interval: -0.19, 0.29). Crestal bone-level alteration depended on time (p < .001) but not on platform type (p = .363). The present randomized clinical trial could not confirm the hypothesis of a reduced peri-implant crestal bone loss, when implants had been restored according to the concept of platform switching.
Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S
2014-09-01
Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to average shorter confidence intervals and produce higher probabilities of P-values below important thresholds than alternative approaches. The bias adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.
Lêng, Chhian Hūi; Wang, Jung-Der
2016-01-01
To test the hypothesis that gardening is beneficial for survival after taking time-dependent comorbidities, mobility, and depression into account in a longitudinal middle-aged (50-64 years) and older (≥65 years) cohort in Taiwan. The cohort contained 5,058 nationally sampled adults ≥50 years old from the Taiwan Longitudinal Study on Aging (1996-2007). Gardening was defined as growing flowers, gardening, or cultivating potted plants for pleasure with five different frequencies. We calculated hazard ratios for the mortality risks of gardening and adjusted the analysis for socioeconomic status, health behaviors and conditions, depression, mobility limitations, and comorbidities. Survival models also examined time-dependent effects and risks in each stratum contingent upon baseline mobility and depression. Sensitivity analyses used imputation methods for missing values. Daily home gardening was associated with a high survival rate (hazard ratio: 0.82; 95% confidence interval: 0.71-0.94). The benefits were robust for those with mobility limitations, but without depression at baseline (hazard ratio: 0.64, 95% confidence interval: 0.48-0.87) when adjusted for time-dependent comorbidities, mobility limitations, and depression. Chronic or relapsed depression weakened the protection of gardening. For those without mobility limitations and not depressed at baseline, gardening had no effect. Sensitivity analyses using different imputation methods yielded similar results and corroborated the hypothesis. Daily gardening for pleasure was associated with reduced mortality for Taiwanese >50 years old with mobility limitations but without depression.
Modulation of V1 Spike Response by Temporal Interval of Spatiotemporal Stimulus Sequence
Kim, Taekjun; Kim, HyungGoo R.; Kim, Kayeon; Lee, Choongkil
2012-01-01
The spike activity of single neurons of the primary visual cortex (V1) becomes more selective and reliable in response to wide-field natural scenes compared to smaller stimuli confined to the classical receptive field (RF). However, it is largely unknown what aspects of natural scenes increase the selectivity of V1 neurons. One hypothesis is that modulation by surround interaction is highly sensitive to small changes in spatiotemporal aspects of RF surround. Such a fine-tuned modulation would enable single neurons to hold information about spatiotemporal sequences of oriented stimuli, which extends the role of V1 neurons as a simple spatiotemporal filter confined to the RF. In the current study, we examined the hypothesis in the V1 of awake behaving monkeys, by testing whether the spike response of single V1 neurons is modulated by temporal interval of spatiotemporal stimulus sequence encompassing inside and outside the RF. We used two identical Gabor stimuli that were sequentially presented with a variable stimulus onset asynchrony (SOA): the preceding one (S1) outside the RF and the following one (S2) in the RF. This stimulus configuration enabled us to examine the spatiotemporal selectivity of response modulation from a focal surround region. Although S1 alone did not evoke spike responses, visual response to S2 was modulated for SOA in the range of tens of milliseconds. These results suggest that V1 neurons participate in processing spatiotemporal sequences of oriented stimuli extending outside the RF. PMID:23091631
Peter, Beate
2013-01-01
This study tested the hypothesis that children with speech sound disorder (SSD) have generalized slowed motor speeds. It evaluated associations among oral and hand motor speeds and measures of speech (articulation and phonology) and language (receptive vocabulary, sentence comprehension, sentence imitation) in 11 children with moderate to severe SSD and 11 controls. Syllable durations from a syllable repetition task served as an estimate of maximal oral movement speed. In two imitation tasks, nonwords and clapped rhythms, unstressed vowel durations and quarter-note clap intervals served as estimates of oral and hand movement speed, respectively. Syllable durations were significantly correlated with vowel durations and hand clap intervals. Sentence imitation was correlated with all three timed movement measures. Clustering on syllable repetition durations produced three clusters that also differed in sentence imitation scores. Results are consistent with limited movement speeds across motor systems and with SSD subtypes, defined by motor speeds, as a corollary of expressive language abilities. PMID:22411590
Driven to distraction: A lack of change gives rise to mind wandering.
Faber, Myrthe; Radvansky, Gabriel A; D'Mello, Sidney K
2018-04-01
How does the dynamic structure of the external world direct attention? We examined the relationship between event structure and attention to test the hypothesis that narrative shifts (both theoretical and perceived) negatively predict attentional lapses. Self-caught instances of mind wandering were collected while 108 participants watched a 32.5 min film called The Red Balloon. We used theoretical codings of situational change and human perceptions of event boundaries to predict mind wandering in 5-s intervals. Our findings suggest a temporal alignment between the structural dynamics of the film and mind wandering reports. Specifically, the number of situational changes and likelihood of perceiving event boundaries in the prior 0-15 s interval negatively predicted mind wandering net of low-level audiovisual features. Thus, mind wandering is less likely to occur when there is more event change, suggesting that narrative shifts keep attention from drifting inwards. Copyright © 2018 Elsevier B.V. All rights reserved.
Effects of musical training on sound pattern processing in high-school students.
Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse
2009-05-01
Recognizing melody in music involves detection of both the pitch intervals and the silences between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns, compared to musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed-order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited in different stimulus onset asynchrony (SOA) conditions in non-musicians than in musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than their age-matched peers. Musical training thus facilitates detection of auditory patterns, allowing sequential sound patterns to be automatically recognized over longer time periods than in non-musical counterparts.
Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna
2008-01-01
We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus), originating from 2 types differing in body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested using univariate and multinomial logistic regression models, applying odds ratios and test statistics from the power divergence family. Because of the small sample size and the resulting sparseness of the data table, in hypothesis testing we could not rely on the asymptotic distributions of the tests. Instead, we tried to account for data sparseness by (i) modifying the confidence intervals of the odds ratios; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests, with different approaches for calculating the moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under conditions of small and sparse samples.
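The empirical-P-value route described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the data layout, function names, and the choice of Pearson's chi-square (the λ = 1 member of the power-divergence family) as the test statistic are all assumptions made for the example.

```python
import numpy as np

def pearson_chi2(table):
    # Pearson chi-square: the lambda = 1 member of the power-divergence family
    expected = table.sum(1, keepdims=True) * table.sum(0, keepdims=True) / table.sum()
    return float(((table - expected) ** 2 / expected).sum())

def empirical_p(groups, genotypes, n_boot=500, seed=0):
    """Empirical P value via label permutation, which enforces the null of
    no association while preserving both sets of marginal totals."""
    rng = np.random.default_rng(seed)
    k = int(genotypes.max()) + 1

    def table_of(g):
        t = np.zeros((2, k))
        np.add.at(t, (g, genotypes), 1)  # cross-tabulate group x genotype
        return t

    observed = pearson_chi2(table_of(groups))
    exceed = sum(
        pearson_chi2(table_of(rng.permutation(groups))) >= observed
        for _ in range(n_boot)
    )
    return (exceed + 1) / (n_boot + 1)  # add-one rule keeps P strictly positive
```

Because permutation fixes both margins, the expected counts never vanish even when the observed table is sparse, which is precisely the situation the abstract describes.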
Complexity of cardiovascular rhythms during head-up tilt test by entropy of patterns.
Wejer, Dorota; Graff, Beata; Makowiec, Danuta; Budrejko, Szymon; Struzik, Zbigniew R
2017-05-01
The head-up tilt (HUT) test, which provokes transient dynamical alterations in the regulation of the cardiovascular system, provides insight into the complex organization of this system. Based on signals of heart period intervals (RR-intervals) and/or systolic blood pressure (SBP), we investigate differences in cardiovascular regulation between vasovagal patients (VVS) and a healthy control group (CG). Short-term relations among signal data, represented symbolically by three-beat patterns, allow the complexity of cardiovascular regulation to be qualified and quantified by Shannon entropy. Four types of patterns are used: permutation, ordinal, deterministic and dynamical, and different resolutions of signal values are applied in the symbolization in order to verify how the entropy of patterns depends on the way signal values are preprocessed. At rest, in the physiologically important signal resolution ranges, independently of the type of pattern used in the estimates, the complexity of SBP signals in VVS differs from the complexity found in CG. The entropy of VVS is higher than that of CG, which could be interpreted as a substantial presence of noisy ingredients in the SBP of VVS. After tilting, this relation switches: the entropy of CG becomes significantly higher than that of VVS for SBP signals. In the case of RR-intervals and large resolutions, the complexity after the tilt is reduced compared to the complexity of RR-intervals at rest for both groups; however, in VVS patients this reduction is significantly stronger than in CG. Our observations of opposite switches in entropy between CG and VVS might support a hypothesis that the baroreflex in VVS affects the heart rate more strongly because of inefficient regulation (possibly impaired local vascular tone alterations) of the blood pressure.
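As a minimal sketch of the symbolic-dynamics idea above, the code below computes the Shannon entropy of three-beat ordinal (rank-order) patterns of an RR-interval series. It is a toy under stated assumptions: the study's four pattern types and resolution-dependent symbolizations are not reproduced, and the function names are invented.

```python
import numpy as np
from collections import Counter

def ordinal_patterns(x, m=3):
    """Sequence of rank-order patterns of m consecutive values (three-beat patterns for m=3)."""
    x = np.asarray(x, dtype=float)
    return [tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]

def pattern_entropy(x, m=3):
    # Shannon entropy (bits) of the empirical pattern distribution
    counts = Counter(ordinal_patterns(x, m))
    n = sum(counts.values())
    probs = np.array(list(counts.values())) / n
    return float(-(probs * np.log2(probs)).sum())
```

A strictly monotone series yields a single pattern and zero entropy; richer beat-to-beat variability spreads probability mass over up to 3! = 6 patterns and raises the entropy.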
Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects.
Biesanz, Jeremy C; Falk, Carl F; Savalei, Victoria
2010-08-06
Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years been supplemented by computationally intensive methods such as bootstrapping, the distribution-of-the-product method, and hierarchical Bayesian Markov chain Monte Carlo (MCMC) methods. These different approaches for assessing mediation are illustrated using data from Dunn, Biesanz, Human, and Finn (2007). However, little is known about how these methods perform relative to each other, particularly in more challenging situations, such as with data that are incomplete and/or nonnormal. This article presents an extensive Monte Carlo simulation evaluating a host of approaches for assessing mediation. We examine Type I error rates, power, and coverage. We study normal and nonnormal data as well as complete and incomplete data. In addition, we adapt a method, recently proposed in the statistical literature, that does not rely on confidence intervals (CIs) to test the null hypothesis of no indirect effect. The results suggest that the new inferential method, the partial posterior p value, slightly outperforms existing ones in terms of maintaining Type I error rates while maximizing power, especially with incomplete data. Among confidence interval approaches, the bias-corrected accelerated (BCa) bootstrapping approach often has inflated Type I error rates and inconsistent coverage and is not recommended. In contrast, the bootstrapped percentile confidence interval and the hierarchical Bayesian MCMC method perform best overall, maintaining Type I error rates, exhibiting reasonable power, and producing stable and accurate coverage rates.
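The bootstrapped percentile confidence interval favoured above can be sketched for the simple single-mediator model. Everything here (variable names, the OLS path estimates, the simulated check) is illustrative rather than the article's procedure or data.

```python
import numpy as np

def indirect_effect(x, m, y):
    # a-path: slope of M on X;  b-path: slope of Y on M, controlling for X
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return a * coefs[2]

def percentile_ci(x, m, y, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b: resample cases
    with replacement and take the central 1 - alpha quantile range."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        estimates.append(indirect_effect(x[idx], m[idx], y[idx]))
    lo, hi = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

If zero lies outside (lo, hi), the null hypothesis of no indirect effect is rejected at level alpha.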
Safety of a rapid diagnostic protocol with accelerated stress testing.
Soremekun, Olan A; Hamedani, Azita; Shofer, Frances S; O'Conor, Katie J; Svenson, James; Hollander, Judd E
2014-02-01
Most patients at low to intermediate risk for an acute coronary syndrome (ACS) receive a 12- to 24-hour "rule out." Recently, trials have found that a coronary computed tomographic angiography-based strategy is more efficient. If stress testing were performed within the same time frame as coronary computed tomographic angiography, the 2 strategies would be more similar. We tested the hypothesis that stress testing can safely be performed within several hours of presentation. We performed a retrospective cohort study of patients presenting to a university hospital from January 1, 2009, to December 31, 2011, with potential ACS. Patients placed in a clinical pathway that performed stress testing after 2 negative troponin values 2 hours apart were included. We excluded patients with ST-elevation myocardial infarction or with an elevated initial troponin. The main outcome was safety of immediate stress testing defined as the absence of death or acute myocardial infarction (defined as elevated troponin within 24 hours after the test). A total of 856 patients who presented with potential ACS were enrolled in the clinical pathway and included in this study. Patients had a median age of 55.0 (interquartile range, 48-62) years. Chest pain was the chief concern in 86%, and pain was present on arrival in 73% of the patients. There were no complications observed during the stress test. There were 0 deaths (95% confidence interval, 0%-0.46%) and 4 acute myocardial infarctions within 24 hours (0.5%; 95% confidence interval, 0.14%-1.27%). The peak troponins were small (0.06, 0.07, 0.07, and 0.19 ng/mL). Patients who present to the ED with potential ACS can safely undergo a rapid diagnostic protocol with stress testing. © 2013.
Kertai, Miklos D.; Ji, Yunqi; Li, Yi-Ju; Mathew, Joseph P.; Daubert, James P.; Podgoreanu, Mihai V.
2016-01-01
Background We characterized cardiac surgery-induced dynamic changes of the corrected QT (QTc) interval and tested the hypothesis that genetic factors are associated with perioperative QTc prolongation independent of clinical and procedural factors. Methods All study subjects were ascertained from a prospective study of patients who underwent elective cardiac surgery between August 1999 and April 2002. We defined a prolonged QTc interval as >440 msec, measured from 24-hr pre- and postoperative 12-lead electrocardiograms. The association of 37 single nucleotide polymorphisms (SNPs) in 21 candidate genes involved in modulating arrhythmia susceptibility pathways with postoperative QTc changes was investigated in a two-stage design, with a stage I cohort (n = 497) nested within a stage II cohort (n = 957). Empirical P values (Pemp) were obtained by permutation tests with 10,000 repeats. Results After adjusting for clinical and procedural risk factors, we selected four SNPs (P value range, 0.03-0.1) in stage I, which we then tested in the stage II cohort. Two functional SNPs in the pro-inflammatory cytokine interleukin-1β (IL1B), rs1143633 (odds ratio [OR], 0.71; 95% confidence interval [CI], 0.53 to 0.95; Pemp = 0.02) and rs16944 (OR, 1.31; 95% CI, 1.01 to 1.70; Pemp = 0.04), remained independent predictors of postoperative QTc prolongation. The ability of a clinico-genetic model incorporating the two IL1B polymorphisms to classify patients at risk of developing prolonged postoperative QTc was superior to that of a clinical model alone, with a net reclassification improvement of 0.308 (P = 0.0003) and an integrated discrimination improvement of 0.02 (P = 0.000024). Conclusion The results suggest a contribution of IL1B in modulating susceptibility to postoperative QTc prolongation after cardiac surgery. PMID:26858093
The phase shift hypothesis for the circadian component of winter depression
Lewy, Alfred J.; Rough, Jennifer N.; Songer, Jeannine B.; Mishra, Neelam; Yuhas, Krista; Emens, Jonathan S.
2007-01-01
The finding that bright light can suppress melatonin production led to the study of two situations, indeed, models, of light deprivation: totally blind people and winter depressives. The leading hypothesis for winter depression (seasonal affective disorder, or SAD) is the phase shift hypothesis (PSH). The PSH was recently established in a study in which SAD patients were given low-dose melatonin in the afternoon/evening to cause phase advances, or in the morning to cause phase delays, or placebo. The prototypical phase-delayed patients, as well as the smaller subgroup of phase-advanced patients, responded optimally to melatonin given at the correct time. Symptom severity improved as circadian misalignment was corrected. Circadian misalignment is best measured as the time interval between the dim light melatonin onset (DLMO) and mid-sleep. Using the operational definition of the plasma DLMO as the interpolated time when melatonin levels continuously rise above the threshold of 10 pg/mL, the average interval between DLMO and mid-sleep in healthy controls is 6 hours, which is associated with optimal mood in SAD patients. PMID:17969866
Connectivity precedes function in the development of the visual word form area.
Saygin, Zeynep M; Osher, David E; Norton, Elizabeth S; Youssoufian, Deanna A; Beach, Sara D; Feather, Jenelle; Gaab, Nadine; Gabrieli, John D E; Kanwisher, Nancy
2016-09-01
What determines the cortical location at which a given functionally specific region will arise in development? We tested the hypothesis that functionally specific regions develop in their characteristic locations because of pre-existing differences in the extrinsic connectivity of that region to the rest of the brain. We exploited the visual word form area (VWFA) as a test case, scanning children with diffusion and functional imaging at age 5, before they learned to read, and at age 8, after they learned to read. We found the VWFA developed functionally in this interval and that its location in a particular child at age 8 could be predicted from that child's connectivity fingerprints (but not functional responses) at age 5. These results suggest that early connectivity instructs the functional development of the VWFA, possibly reflecting a general mechanism of cortical development.
New methods of testing nonlinear hypothesis using iterative NLLS estimator
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This research paper discusses methods of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator, a method explained by Takeshi Amemiya [1]. In the present paper, however, a modified Wald test statistic due to Engle, Robert [6] is proposed for testing a nonlinear hypothesis with the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses, using an iterative NLLS estimator based on nonlinear studentized residuals, has also been proposed. In addition, an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator given by Jennrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors, and studied the problem of heteroscedasticity with reference to nonlinear regression models with a suitable illustration. William Greene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
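To make the NLLS-plus-Wald idea concrete, here is a hedged one-parameter sketch, not the paper's derivation or notation: fit y = exp(b·x) by Gauss-Newton nonlinear least squares, then form a Wald statistic for H0: b = b0 from the estimate and its asymptotic variance. The model and all names are invented for the example.

```python
import numpy as np

def fit_exponential(x, y, b0=0.3, iters=30):
    """Gauss-Newton NLLS fit of y = exp(b * x); returns (b_hat, var_b_hat)."""
    b = b0
    for _ in range(iters):
        f = np.exp(b * x)
        J = x * f                      # Jacobian df/db
        b += (J @ (y - f)) / (J @ J)   # Gauss-Newton update
    f = np.exp(b * x)
    J = x * f
    sigma2 = ((y - f) ** 2).sum() / (len(x) - 1)  # residual variance estimate
    return b, sigma2 / (J @ J)         # asymptotic variance of b_hat

def wald_stat(b_hat, var_b, b_null):
    # Asymptotically chi-square with 1 df under H0: b = b_null
    return (b_hat - b_null) ** 2 / var_b
```

Roughly speaking, restricted estimation or studentized residuals change the ingredients that enter this statistic but not its overall quadratic form.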
Sonuga-Barke, Edmund J S; Elgie, Sarah; Hall, Martin
2005-07-20
Children with Attention Deficit/Hyperactivity Disorder (ADHD) often perform poorly on tasks requiring sustained and systematic attention to stimuli for extended periods of time. The current paper tested the hypothesis that such deficits are the result of observable abnormalities in search behaviour (e.g., attention-onset, -duration and -sequencing), and therefore can be explained without reference to deficits in non-observable (i.e., cognitive) processes. Forty boys (20 ADHD and 20 controls) performed a computer-based complex discrimination task adapted from the Matching Familiar Figures Task with four different fixed search interval lengths (5-, 10-, 15- and 20-s). Children with ADHD identified fewer targets than controls (p < 0.001), initiated searches later, spent less time attending to stimuli, and searched in a less intensive and less systematic way (p's < 0.05). There were significant univariate associations between ADHD, task performance and search behaviour. However, there was no support for the hypothesis that abnormalities in search carried the effect of ADHD on performance. The pattern of results in fact suggested that abnormal attending during testing is a statistical marker, rather than a mediator, of ADHD performance deficits. The results confirm the importance of examining covert processes, as well as behavioural abnormalities, when trying to understand the psychopathophysiology of ADHD.
Mass extinctions drove increased global faunal cosmopolitanism on the supercontinent Pangaea.
Button, David J; Lloyd, Graeme T; Ezcurra, Martín D; Butler, Richard J
2017-10-10
Mass extinctions have profoundly impacted the evolution of life through not only reducing taxonomic diversity but also reshaping ecosystems and biogeographic patterns. In particular, they are considered to have driven increased biogeographic cosmopolitanism, but quantitative tests of this hypothesis are rare and have not explicitly incorporated information on evolutionary relationships. Here we quantify faunal cosmopolitanism using a phylogenetic network approach for 891 terrestrial vertebrate species spanning the late Permian through Early Jurassic. This key interval witnessed the Permian-Triassic and Triassic-Jurassic mass extinctions, the onset of fragmentation of the supercontinent Pangaea, and the origins of dinosaurs and many modern vertebrate groups. Our results recover significant increases in global faunal cosmopolitanism following both mass extinctions, driven mainly by new, widespread taxa, leading to homogenous 'disaster faunas'. Cosmopolitanism subsequently declines in post-recovery communities. These shared patterns in both biotic crises suggest that mass extinctions have predictable influences on animal distribution and may shed light on biodiversity loss in extant ecosystems. Mass extinctions are thought to produce 'disaster faunas', communities dominated by a small number of widespread species. Here, Button et al. develop a phylogenetic network approach to test this hypothesis and find that mass extinctions did increase faunal cosmopolitanism across Pangaea during the late Palaeozoic and early Mesozoic.
Kenyon, Chris R; Buyze, Jozefien
2015-01-01
The prevalence of both gender inequality and HIV prevalence vary considerably both within all developing countries and within those in sub-Saharan Africa. We test the hypothesis that the extent of gender inequality is associated with national peak HIV prevalence. Linear regression was used to test the association between national peak HIV prevalence and three markers of gender equality - the gender-related development index (GDI), the gender empowerment measure (GEM), and the gender inequality index (GII). No evidence was found of a positive relationship between gender inequality and HIV prevalence, either in the analyses of all developing countries or those limited to Africa. In the bivariate analyses limited to Africa, there was a positive association between the two measures of gender "equality" and peak HIV prevalence (GDI: coefficient 28, 95% confidence interval (CI) 9.1-46.8; GEM: coefficient 54.8, 95% CI 20.5-89.1). There was also a negative association between the marker of gender "inequality" and peak HIV prevalence (GII: coefficient -66.9, 95% CI -112.8 to -21.0). These associations all disappeared on multivariate analyses. We could not find any evidence to support the hypothesis that variations in the extent of gender inequality explain variations in HIV prevalence in developing countries.
Bangert, Marc; Wiedemann, Anna; Jabusch, Hans-Christian
2014-01-01
Variability of Practice (VOP) refers to the acquisition of a particular target movement by practicing a range of varying targets rather than by focusing on fixed repetitions of the target only. VOP has been demonstrated to have beneficial effects on transfer to a novel task and on skill consolidation. This study extends the line of research to musical practice. In a task resembling a barrier-knockdown paradigm, 36 music students trained to perform a wide left-hand interval leap on the piano. Performance at the target distance was tested before and after a 30-min standardized training session. The high-variability group (VAR) practiced four different intervals including the target. Another group (FIX) practiced the target interval only. A third group (SPA) performed spaced practice on the target only, interweaving with periods of not playing. Transfer was tested by introducing an interval novel to either group. After a 24-h period with no further exposure to the instrument, performance was retested. All groups performed at comparable error levels before training, after training, and after the retention (RET) interval. At transfer, however, the FIX group, unlike the other groups, committed significantly more errors than in the target task. After the RET period, the effect was washed out for the FIX group but then was present for VAR. Thus, the results provide only partial support for the VOP hypothesis for the given setting. Additional exploratory observations suggest tentative benefits of VOP regarding execution speed, loudness, and performance confidence. We derive specific hypotheses and specific recommendations regarding sample selection and intervention duration for future investigations. Furthermore, the proposed leap task measurement is shown to be (a) robust enough to serve as a standard framework for studies in the music domain, yet (b) versatile enough to allow for a wide range of designs not previously investigated for music on a standardized basis. 
PMID:25157223
Reference values for rotational thromboelastometry (ROTEM) in clinically healthy cats.
Marly-Voquer, Charlotte; Riond, Barbara; Jud Schefer, Rahel; Kutter, Annette P N
2017-03-01
To establish reference intervals for rotational thromboelastometry (ROTEM) using feline blood. Prospective study. University teaching hospital. Twenty-three clinically healthy cats between 1 and 15 years of age. For each cat, whole blood was collected via jugular or medial saphenous venipuncture, and blood was placed into a serum tube, a tube containing potassium-EDTA, and tubes containing 3.2% sodium citrate. The tubes were maintained at 37°C for a maximum of 30 minutes before coagulation testing. ROTEM tests included the EXTEM, INTEM, FIBTEM, and APTEM assays. In addition, prothrombin time, activated partial thromboplastin time, thrombin time, and fibrinogen concentration (Clauss method) were analyzed for each cat. Reference intervals for ROTEM were calculated using the 2.5th to 97.5th percentiles for each parameter, and correlation with the standard coagulation profile was performed. Compared to people, clinically healthy cats had similar values for the EXTEM and INTEM assays, but had lower plasma fibrinogen concentrations (0.9-2.2 g/L), resulting in weaker maximum clot firmness (MCF, 3-10 mm) in the FIBTEM test. In 18 cats, maximum lysis (ML) values in the APTEM test were higher than in the EXTEM test, which seems unlikely to have occurred in the presence of aprotinin. It is possible that the observed high maximum lysis values were due to clot retraction rather than true clot lysis. Further studies will be required to test this hypothesis. Cats have a weaker clot in the FIBTEM test, but have a similar clot strength to human blood in the other ROTEM assays, which may be due to a stronger contribution of platelets compared to that found in people. In cats, careful interpretation of the results to diagnose hyperfibrinolysis is advised, especially with the APTEM test, until further data are available. © Veterinary Emergency and Critical Care Society 2017.
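The nonparametric reference-interval computation described above (central 2.5th to 97.5th percentiles) reduces to a percentile calculation; the sketch below is generic, not the study's software. With only 23 animals the percentile estimates carry wide uncertainty; guidelines such as CLSI EP28 suggest on the order of 120 reference individuals for the nonparametric method.

```python
import numpy as np

def reference_interval(values, low=2.5, high=97.5):
    """Nonparametric reference interval: the central 95% of observed values."""
    values = np.asarray(values, dtype=float)
    lo, hi = np.percentile(values, [low, high])
    return lo, hi
```

Note that np.percentile interpolates between order statistics by default, which is one of several accepted conventions for small samples.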
Yost, Chad L; Jackson, Lily J; Stone, Jeffery R; Cohen, Andrew S
2018-03-01
The temporal proximity of the ∼74 ka Toba supereruption to a putative 100-50 ka human population bottleneck is the basis for the volcanic winter/weak Garden of Eden hypothesis, which states that the eruption caused a 6-year-long global volcanic winter and reduced the effective population of anatomically modern humans (AMH) to fewer than 10,000 individuals. To test this hypothesis, we sampled two cores collected from Lake Malawi with cryptotephra previously fingerprinted to the Toba supereruption. Phytolith and charcoal samples were continuously collected at ∼3-4 mm (∼8-9 yr) intervals above and below the Toba cryptotephra position, with no stratigraphic breaks. For samples synchronous or proximal to the Toba interval, we found no change in low elevation tree cover, or in cool climate C3 and warm season C4 xerophytic and mesophytic grass abundance that is outside of normal variability. A spike in locally derived charcoal and xerophytic C4 grasses immediately after the Toba eruption indicates reduced precipitation and die-off of at least some afromontane vegetation, but does not signal volcanic winter conditions. A review of Toba tuff petrological and melt inclusion studies suggests a Tambora-like 50 to 100 Mt SO2 atmospheric injection. However, most Toba climate models use SO2 values that are one to two orders of magnitude higher, thereby significantly overestimating the amount of cooling. A review of recent genetic studies finds no support for a genetic bottleneck at or near ∼74 ka. Based on these previous studies and our new paleoenvironmental data, we find no support for the Toba catastrophe hypothesis and conclude that the Toba supereruption did not 1) produce a 6-year-long volcanic winter in eastern Africa, 2) cause a genetic bottleneck among African AMH populations, or 3) bring humanity to the brink of extinction. Copyright © 2017 Elsevier Ltd. All rights reserved.
Alken, R G; Belz, G G
1984-01-01
We tested the hypothesis that differences exist in the pharmacodynamic patterns of different cardiac glycosides. We conducted a randomized, placebo-controlled study in normal volunteers and evaluated the effects of weekly increased oral dosing of digoxin (n = 10; from 0.25 to 1.0 mg/day), meproscillarin (n = 10; from 0.5 to 2.0 mg/day), and placebo (n = 5). To determine the glycoside effects, corrected electromechanical systole (QS2c) was used to measure inotropy and the PQ interval to test dromotropy. Red-green discrimination and critical flicker fusion (CFF) assessed visual functions. Subjective complaints were collected using rating lists. Both glycosides dose-dependently shortened QS2c and prolonged the PQ interval. PQ prolongations over +20 ms occurred in seven of 10 digoxin subjects, in two of 10 on meproscillarin, and in one of five on placebo. Equi-inotropic response, identified at 12 ms mean QS2c shortening, revealed the relative potency of digoxin to be 2.4 times higher than that of meproscillarin; this ratio increased to sevenfold for equi-effective negative dromotropic effects at 12 ms mean PQ prolongation. Each drug was associated with a dominant subjective complaint: digoxin with anergy and meproscillarin with diarrhea. Red-green discrimination was better under meproscillarin, and CFF was depressed by digoxin. The results indicate that pharmacodynamic differences exist between cardiac glycosides. A differential use of various glycosides should be considered and tested clinically.
Statistics for Radiology Research.
Obuchowski, Nancy A; Subhas, Naveen; Polster, Joshua
2017-02-01
Biostatistics is an essential component in most original research studies in imaging. In this article we discuss five key statistical concepts for study design and analyses in modern imaging research: statistical hypothesis testing, particularly focusing on noninferiority studies; imaging outcomes especially when there is no reference standard; dealing with the multiplicity problem without spending all your study power; relevance of confidence intervals in reporting and interpreting study results; and finally tools for assessing quantitative imaging biomarkers. These concepts are presented first as examples of conversations between investigator and biostatistician, and then more detailed discussions of the statistical concepts follow. Three skeletal radiology examples are used to illustrate the concepts.
Estimation of Renyi exponents in random cascades
Troutman, Brent M.; Vecchia, Aldo V.
1999-01-01
We consider statistical estimation of the Rényi exponent τ(h), which characterizes the scaling behaviour of a singular measure μ defined on a subset of R^d. The Rényi exponent is defined to be lim(δ→0) [log M_δ(h)/(−log δ)], assuming that this limit exists, where M_δ(h) = Σ_i μ^h(Δ_i) and, for δ > 0, {Δ_i} are the cubes of a δ-coordinate mesh that intersect the support of μ. In particular, we demonstrate asymptotic normality of the least-squares estimator of τ(h) when the measure μ is generated by a particular class of multiplicative random cascades, a result which allows construction of interval estimates and application of hypothesis tests for this scaling exponent. Simulation results illustrating this asymptotic normality are presented. © 1999 ISI/BS.
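To make the definition concrete, the sketch below applies the least-squares estimator of τ(h) to a deterministic binomial measure, a standard textbook case (not the random cascades analysed in the paper) for which the definition above gives τ(h) = log2(p^h + (1−p)^h) exactly. Function names are invented for illustration.

```python
import numpy as np

def binomial_measure(p=0.3, levels=12):
    # Deterministic binomial measure on [0,1]: each dyadic cell sends
    # fraction p of its mass left and 1 - p right, `levels` times.
    masses = np.array([1.0])
    for _ in range(levels):
        masses = np.ravel(np.column_stack([masses * p, masses * (1 - p)]))
    return masses  # masses of the 2**levels finest dyadic cells

def renyi_exponent(masses, h):
    """Least-squares slope of log M_delta(h) against -log delta over dyadic scales."""
    levels = int(np.log2(masses.size))
    log_M, neg_log_delta = [], []
    m = np.asarray(masses, dtype=float)
    for j in range(levels, 0, -1):
        log_M.append(np.log(np.sum(m ** h)))   # M_delta(h) = sum_i mu(box_i)**h
        neg_log_delta.append(j * np.log(2.0))  # box width delta = 2**-j
        m = m.reshape(-1, 2).sum(axis=1)       # merge sibling cells: next coarser mesh
    return float(np.polyfit(neg_log_delta, log_M, 1)[0])
```

For random cascades the same regression is applied to realized box masses, and the paper's result is that the slope estimator is asymptotically normal, which is what justifies interval estimates and hypothesis tests for τ(h).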
Spiesberger, John L
2013-02-01
The hypothesis tested is that internal gravity waves limit the coherent integration time of sound at 1346 km in the Pacific ocean at 133 Hz and a pulse resolution of 0.06 s. Six months of continuous transmissions at about 18 min intervals are examined. The source and receiver are mounted on the bottom of the ocean with timing governed by atomic clocks. Measured variability is only due to fluctuations in the ocean. A model for the propagation of sound through fluctuating internal waves is run without any tuning with data. Excellent resemblance is found between the model and data's probability distributions of integration time up to five hours.
Hypothesis testing in hydrology: Theory and practice
NASA Astrophysics Data System (ADS)
Kirchner, James; Pfister, Laurent
2017-04-01
Well-posed hypothesis tests have spurred major advances in hydrological theory. However, a random sample of recent research papers suggests that in hydrology, as in other fields, hypothesis formulation and testing rarely correspond to the idealized model of the scientific method. Practices such as "p-hacking" or "HARKing" (Hypothesizing After the Results are Known) are major obstacles to more rigorous hypothesis testing in hydrology, along with the well-known problem of confirmation bias - the tendency to value and trust confirmations more than refutations - among both researchers and reviewers. Hypothesis testing is not the only recipe for scientific progress, however: exploratory research, driven by innovations in measurement and observation, has also underlain many key advances. Further improvements in observation and measurement will be vital to both exploratory research and hypothesis testing, and thus to advancing the science of hydrology.
Double dynamic scaling in human communication dynamics
NASA Astrophysics Data System (ADS)
Wang, Shengfeng; Feng, Xin; Wu, Ye; Xiao, Jinhua
2017-05-01
In recent decades, human behavior has become much better understood owing to the huge quantities of behavioral data available for study. The main finding in human dynamics is that temporal processes consist of high-activity bursty intervals alternating with long low-activity periods. A model assuming that the initiators of bursts follow a Poisson process is widely used in the modeling of human behavior. Here, we provide further evidence for the hypothesis that different bursty intervals are independent. Furthermore, we introduce a special threshold to quantitatively distinguish the time scales of complex dynamics based on this hypothesis. Our results suggest that human communication behavior is a composite process of double dynamics with midrange memory length. The method for calculating memory length could enhance the performance of many sequence-dependent systems, such as server operation and topic identification.
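The burst-versus-inter-burst decomposition described above is typically operationalised by thresholding inter-event times: consecutive events separated by less than a threshold belong to the same bursty interval. A minimal sketch (the paper derives its threshold from the data; here it is simply given):

```python
import numpy as np

def split_bursts(times, threshold):
    """Group event times into bursts: a gap longer than `threshold`
    starts a new burst. This is a common operationalisation, not
    necessarily the paper's exact procedure."""
    times = np.sort(np.asarray(times, dtype=float))
    gaps = np.diff(times)
    cut = np.where(gaps > threshold)[0]
    return np.split(times, cut + 1)

# Three rapid events, a quiet spell, two more events, another quiet spell.
events = [0.0, 0.1, 0.3, 5.0, 5.2, 11.0]
bursts = split_bursts(events, threshold=1.0)  # three bursts
```

Statistics of within-burst gaps versus between-burst waiting times then probe the two time scales separately.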
2011-01-01
Background The reproductive ground plan hypothesis of social evolution suggests that reproductive controls of a solitary ancestor have been co-opted during social evolution, facilitating the division of labor among social insect workers. Despite substantial empirical support, the generality of this hypothesis is not universally accepted. Thus, we investigated the prediction of particular genes with pleiotropic effects on ovarian traits and social behavior in worker honey bees as a stringent test of the reproductive ground plan hypothesis. We complemented these tests with a comprehensive genome scan for additional quantitative trait loci (QTL) to gain a better understanding of the genetic architecture of the ovary size of honey bee workers, a morphological trait that is significant for understanding social insect caste evolution and general insect biology. Results Back-crossing hybrid European x Africanized honey bee queens to the Africanized parent colony generated two study populations with extraordinarily large worker ovaries. Despite the transgressive ovary phenotypes, several previously mapped QTL for social foraging behavior demonstrated ovary size effects, confirming the prediction of pleiotropic genetic effects on reproductive traits and social behavior. One major QTL for ovary size was detected in each backcross, along with several smaller effects and two QTL for ovary asymmetry. One of the main ovary size QTL coincided with a major QTL for ovary activation, explaining 3/4 of the phenotypic variance, although no simple positive correlation between ovary size and activation was observed. Conclusions Our results provide strong support for the reproductive ground plan hypothesis of evolution in study populations that are independent of the genetic stocks that originally led to the formulation of this hypothesis. 
As predicted, worker ovary size is genetically linked to multiple correlated traits of the complex division of labor in worker honey bees, known as the pollen hoarding syndrome. The genetic architecture of worker ovary size presumably consists of a combination of trait-specific loci and general regulators that affect the whole behavioral syndrome and may even play a role in caste determination. Several promising candidate genes in the QTL intervals await further study to clarify their potential role in social insect evolution and the regulation of insect fertility in general. PMID:21489230
Abnormalities of the QT interval in primary disorders of autonomic failure.
Choy, A M; Lang, C C; Roden, D M; Robertson, D; Wood, A J; Robertson, R M; Biaggioni, I
1998-10-01
Experimental evidence shows that activation of the autonomic nervous system influences ventricular repolarization and, therefore, the QT interval on the ECG. To test the hypothesis that the QT interval is abnormal in autonomic dysfunction, we examined ECGs in patients with severe primary autonomic failure and in patients with congenital dopamine beta-hydroxylase (DbetaH) deficiency who are unable to synthesize norepinephrine and epinephrine. Maximal QT and rate-corrected QT (QTc) intervals and adjusted QTc dispersion [(maximal QTc - minimum QTc on 12-lead ECG)/square root of the number of leads measured] were determined in blinded fashion from ECGs of 67 patients with primary autonomic failure (36 patients with multiple system atrophy [MSA] and 31 patients with pure autonomic failure [PAF]) and 17 age- and sex-matched healthy controls. ECGs of 5 patients with congenital DbetaH deficiency and 6 age- and sex-matched controls were also analyzed. Patients with MSA and PAF had significantly prolonged maximum QTc intervals (492+/-58 ms^1/2 and 502+/-61 ms^1/2 [mean +/- SD], respectively), compared with controls (450+/-18 ms^1/2; P < .05 and P < .01, respectively). A similar but not significant trend was observed for QT. QTc dispersion was also increased in MSA (40+/-20 ms^1/2, P < .05 vs controls) and PAF patients (32+/-19 ms^1/2, NS) compared with controls (21+/-5 ms^1/2). In contrast, patients with congenital DbetaH deficiency did not have significantly different RR, QT, QTc intervals, or QTc dispersion when compared with controls. Patients with primary autonomic failure who have combined parasympathetic and sympathetic failure have abnormally prolonged QT intervals and increased QT dispersion. However, the QT interval in patients with congenital DbetaH deficiency was not significantly different from controls.
It is possible, therefore, that QT abnormalities in patients with primary autonomic failure are not solely caused by lesions of the sympathetic nervous system, and that the parasympathetic nervous system is likely to have a modulatory role in ventricular repolarization.
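The quantities analysed in this abstract are simple to compute from ECG measurements. A sketch of Bazett's rate correction and the adjusted QTc dispersion formula quoted above, with hypothetical interval values:

```python
import math

def qtc_bazett(qt_ms, rr_s):
    """Bazett rate correction: QTc = QT / sqrt(RR), with QT in ms and the
    RR interval in seconds (hence the fractional-power units reported
    in the abstract)."""
    return qt_ms / math.sqrt(rr_s)

def adjusted_qtc_dispersion(qtc_by_lead):
    """(max QTc - min QTc) / sqrt(number of measurable leads),
    the adjusted-dispersion formula given in the abstract."""
    return (max(qtc_by_lead) - min(qtc_by_lead)) / math.sqrt(len(qtc_by_lead))

qtc = qtc_bazett(400.0, 0.64)                               # 400 ms QT at RR = 0.64 s
disp = adjusted_qtc_dispersion([480.0, 500.0, 470.0, 490.0])  # hypothetical per-lead QTc
```

In practice each lead's QTc would come from measured QT and RR on a 12-lead ECG; the four values above are placeholders.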
Testing the null hypothesis: the forgotten legacy of Karl Popper?
Wilkinson, Mick
2013-01-01
Testing of the null hypothesis is a fundamental aspect of the scientific method and has its basis in the falsification theory of Karl Popper. Null hypothesis testing makes use of deductive reasoning to ensure that the truth of conclusions is irrefutable. In contrast, attempting to demonstrate new facts on the basis of testing the experimental or research hypothesis makes use of inductive reasoning and is prone to the problem of the Uniformity of Nature assumption described by David Hume in the eighteenth century. Despite this issue and the well-documented solution provided by Popper's falsification theory, the majority of publications are still written such that they suggest the research hypothesis is being tested. This is contrary to accepted scientific convention and possibly highlights a poor understanding of the application of conventional significance-based data analysis approaches. Our work should remain driven by conjecture and attempted falsification, such that it is always the null hypothesis that is tested. The write-up of our studies should make it clear that we are indeed testing the null hypothesis and conforming to the established and accepted philosophical conventions of the scientific method.
Hunt, Kathleen E.; Lysiak, Nadine S.; Moore, Michael J.; Rolland, Rosalind M.
2016-01-01
Reproduction of mysticete whales is difficult to monitor, and basic parameters, such as pregnancy rate and inter-calving interval, remain unknown for many populations. We hypothesized that baleen plates (keratinous strips that grow downward from the palate of mysticete whales) might record previous pregnancies, in the form of high-progesterone regions in the sections of baleen that grew while the whale was pregnant. To test this hypothesis, longitudinal baleen progesterone profiles from two adult female North Atlantic right whales (Eubalaena glacialis) that died as a result of ship strike were compared with dates of known pregnancies inferred from calf sightings and post-mortem data. We sampled a full-length baleen plate from each female at 4 cm intervals from base (newest baleen) to tip (oldest baleen), each interval representing ∼60 days of baleen growth, with high-progesterone areas then sampled at 2 or 1 cm intervals. Pulverized baleen powder was assayed for progesterone using enzyme immunoassay. The date of growth of each sampling location on the baleen plate was estimated based on the distance from the base of the plate and baleen growth rates derived from annual cycles of stable isotope ratios. Baleen progesterone profiles from both whales showed dramatic elevations (two orders of magnitude higher than baseline) in areas corresponding to known pregnancies. Baleen hormone analysis shows great potential for estimation of recent reproductive history, inter-calving interval and general reproductive biology in this species and, possibly, in other mysticete whales. PMID:27293762
Selective Attention in Pigeon Temporal Discrimination.
Subramaniam, Shrinidhi; Kyonka, Elizabeth
2017-07-27
Cues can vary in how informative they are about when specific outcomes, such as food availability, will occur. This study was an experimental investigation of the functional relation between cue informativeness and temporal discrimination in a peak-interval (PI) procedure. Each session consisted of fixed-interval (FI) 2-s and 4-s schedules of food and occasional, 12-s PI trials during which pecks had no programmed consequences. Across conditions, the phi (ϕ) correlation between key light color and FI schedule value was manipulated. Red and green key lights signaled the onset of either or both FI schedules. Different colors were either predictive (ϕ = 1), moderately predictive (ϕ = 0.2-0.8), or not predictive (ϕ = 0) of a specific FI schedule. This study tested the hypothesis that temporal discrimination is a function of the momentary conditional probability of food; that is, that pigeons peck the most at either 2 s or 4 s when ϕ = 1 and peck at both intervals when ϕ < 1. Response distributions were bimodal Gaussian curves; distributions from red- and green-key PI trials converged when ϕ ≤ 0.6. Peak times estimated by summed Gaussian functions, averaged across conditions and pigeons, were 1.85 s and 3.87 s; however, pigeons did not always maximize the momentary probability of food. When key light color was highly correlated with FI schedules (ϕ ≥ 0.6), estimates of peak times indicated that temporal discrimination accuracy was reduced at the unlikely interval, but not the likely interval. The mechanism of this reduced temporal discrimination accuracy could be interpreted as an attentional process.
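Fitting a summed (two-component) Gaussian function to a bimodal response distribution, as done here to estimate peak times, can be sketched with a standard nonlinear least-squares routine. The data below are synthetic, with peaks placed near the 2-s and 4-s intervals:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, m1, s1, a2, m2, s2):
    """Sum of two Gaussian bumps, the summed-Gaussian form used to
    estimate peak times on peak-interval trials."""
    return (a1 * np.exp(-0.5 * ((t - m1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((t - m2) / s2) ** 2))

# Synthetic bimodal response-rate curve with peaks at 2 s and 4 s.
t = np.linspace(0, 12, 241)
y = two_gaussians(t, 1.0, 2.0, 0.4, 0.8, 4.0, 0.7)

# Fit from rough starting values; the means m1, m2 are the peak times.
p0 = [1.0, 1.5, 0.5, 1.0, 4.5, 0.5]
params, _ = curve_fit(two_gaussians, t, y, p0=p0)
peak_times = sorted([params[1], params[4]])
```

On real PI data one would fit the averaged response rate per time bin; reasonable starting values for the two means matter, since the objective is not convex.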
Unscaled Bayes factors for multiple hypothesis testing in microarray experiments.
Bertolino, Francesco; Cabras, Stefano; Castellanos, Maria Eugenia; Racugno, Walter
2015-12-01
Multiple hypothesis testing collects a series of techniques usually based on p-values as a summary of the available evidence from many statistical tests. In hypothesis testing, under a Bayesian perspective, the evidence for a specified hypothesis against an alternative, conditionally on data, is given by the Bayes factor. In this study, we approach multiple hypothesis testing based on both Bayes factors and p-values, regarding multiple hypothesis testing as a multiple model selection problem. To obtain the Bayes factors we assume default priors that are typically improper. In this case, the Bayes factor is usually undetermined due to the ratio of prior pseudo-constants. We show that ignoring prior pseudo-constants leads to unscaled Bayes factors, which do not invalidate the inferential procedure in multiple hypothesis testing, because they are used within a comparative scheme. In fact, using partial information from the p-values, we are able to approximate the sampling null distribution of the unscaled Bayes factor and use it within Efron's multiple testing procedure. The simulation study suggests that under a normal sampling model, and even with small sample sizes, our approach provides false positive and false negative proportions that are lower than those of other common multiple hypothesis testing approaches based only on p-values. The proposed procedure is illustrated in two simulation studies, and the advantages of its use are shown in the analysis of two microarray experiments. © The Author(s) 2011.
No evidence for spectral jamming avoidance in echolocation behavior of foraging pipistrelle bats
Götze, Simone; Koblitz, Jens C.; Denzinger, Annette; Schnitzler, Hans-Ulrich
2016-01-01
Frequency shifts in signals of bats flying near conspecifics have been interpreted as a spectral jamming avoidance response (JAR). However, several prerequisites supporting a JAR hypothesis have not been controlled for in previous studies. We recorded flight and echolocation behavior of foraging Pipistrellus pipistrellus while flying alone and with a conspecific and tested whether frequency changes were due to a spectral JAR with an increased frequency difference, or whether changes could be explained by other reactions. P. pipistrellus reacted to conspecifics with a reduction of sound duration and often also pulse interval, accompanied by an increase in terminal frequency. This reaction is typical of behavioral situations where targets of interest have captured the bat’s attention and initiated a more detailed exploration. All observed frequency changes were predicted by the attention reaction alone, and do not support the JAR hypothesis of increased frequency separation. Reaction distances of 1–11 m suggest that the attention response may be elicited either by detection of the conspecific by short range active echolocation or by long range passive acoustic detection of echolocation calls. PMID:27502900
Bruxism and Dental Implants: A Meta-Analysis.
Chrcanovic, Bruno Ramos; Albrektsson, Tomas; Wennerberg, Ann
2015-10-01
To test the null hypothesis of no difference in the implant failure rates, postoperative infection, and marginal bone loss after the insertion of dental implants in bruxers compared with the insertion in non-bruxers, against the alternative hypothesis of a difference. An electronic search was undertaken in June 2014. Eligibility criteria included clinical studies, either randomized or not. Ten publications were included, with a total of 760 implants inserted in bruxers (49 failures; 6.45%) and 2989 in non-bruxers (109 failures; 3.65%). Due to lack of information, meta-analyses for the outcomes "postoperative infection" and "marginal bone loss" were not possible. A risk ratio of 2.93 was found (95% confidence interval, 1.48-5.81; P = 0.002). However, because of the limited number of published studies, all characterized by a low level of specificity, and most dealing with a limited number of cases without a control group, these results cannot establish that bruxism affects implant failure rates. Therefore, the real effect of bruxing habits on the osseointegration and survival of endosteal dental implants is still not well established.
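As a worked illustration of a risk ratio with its confidence interval: naively pooling the abstract's raw totals (49/760 vs 109/2989) gives a cruder estimate than the study's meta-analytic RR of 2.93, which weights individual studies, but the CI construction on the log scale is the same idea:

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio with a 95% CI built on the log scale,
    SE(log RR) = sqrt(1/a - 1/n_a + 1/b - 1/n_b).
    A crude pooled estimate, not a proper meta-analysis."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Totals from the abstract: 49/760 failures in bruxers vs 109/2989 in non-bruxers.
rr, lo, hi = risk_ratio_ci(49, 760, 109, 2989)
```

The pooled RR here is smaller than 2.93 precisely because simple pooling ignores between-study structure, which is why the meta-analytic estimate is the one to report.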
A Critique of One-Tailed Hypothesis Test Procedures in Business and Economics Statistics Textbooks.
ERIC Educational Resources Information Center
Liu, Tung; Stone, Courtenay C.
1999-01-01
Surveys introductory business and economics statistics textbooks and finds that they differ over the best way to explain one-tailed hypothesis tests: the simple null-hypothesis approach or the composite null-hypothesis approach. Argues that the composite null-hypothesis approach contains methodological shortcomings that make it more difficult for…
Dynamic association rules for gene expression data analysis.
Chen, Shu-Chuan; Tsai, Tsung-Hsien; Chung, Cheng-Han; Li, Wen-Hsiung
2015-10-14
The purpose of gene expression analysis is to look for associations between the regulation of gene expression levels and phenotypic variations. Such associations based on gene expression profiles have been used to determine whether the induction/repression of genes corresponds to phenotypic variations, including cell regulation, clinical diagnoses and drug development. Statistical analyses of microarray data have been developed to resolve the gene selection issue. However, these methods do not inform us of causality between genes and phenotypes. In this paper, we propose the dynamic association rule algorithm (DAR algorithm), which helps one to efficiently select a subset of significant genes for subsequent analysis. The DAR algorithm is based on association rules from market basket analysis in marketing. We first propose a statistical way, based on constructing a one-sided confidence interval and hypothesis testing, to determine if an association rule is meaningful. Based on the proposed statistical method, we then developed the DAR algorithm for gene expression data analysis. The method was applied to analyze four microarray datasets and one Next Generation Sequencing (NGS) dataset: the Mice Apo A1 dataset, the whole genome expression dataset of mouse embryonic stem cells, expression profiling of the bone marrow of leukemia patients, the Microarray Quality Control (MAQC) dataset and the RNA-seq dataset of a mouse genomic imprinting study. A comparison of the proposed method with the t-test on the expression profiling of the bone marrow of leukemia patients was conducted. We developed a statistical way, based on the concept of confidence intervals, to determine the minimum support and minimum confidence for mining association relationships among items. With the minimum support and minimum confidence, one can find significant rules in one single step. The DAR algorithm was then developed for gene expression data analysis.
Four gene expression datasets showed that the proposed DAR algorithm not only was able to identify a set of differentially expressed genes that largely agreed with that of other methods, but also provided an efficient and accurate way to find influential genes of a disease. In the paper, the well-established association rule mining technique from marketing has been successfully modified to determine the minimum support and minimum confidence based on the concept of confidence interval and hypothesis testing. It can be applied to gene expression data to mine significant association rules between gene regulation and phenotype. The proposed DAR algorithm provides an efficient way to find influential genes that underlie the phenotypic variance.
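The support/confidence statistics of an association rule, together with a one-sided lower confidence bound of the kind the DAR algorithm builds on, can be sketched as follows (the normal-approximation bound here is illustrative; the paper's exact construction may differ):

```python
import math

def rule_stats(transactions, antecedent, consequent, z=1.645):
    """Support and confidence of the rule antecedent -> consequent over a
    list of transactions (sets of items), plus a one-sided lower bound on
    confidence via the normal approximation. A sketch, not the DAR
    algorithm's exact interval."""
    n = len(transactions)
    n_ant = sum(antecedent <= t for t in transactions)          # antecedent present
    n_both = sum((antecedent | consequent) <= t for t in transactions)
    support = n_both / n
    confidence = n_both / n_ant
    se = math.sqrt(confidence * (1 - confidence) / n_ant)
    return support, confidence, confidence - z * se             # 95% one-sided lower bound

# Toy "market baskets"; in the gene setting items would be expression states.
tx = [{"a", "b"}, {"a", "b", "c"}, {"a"}, {"b", "c"}, {"a", "b"}]
sup, conf, conf_lo = rule_stats(tx, {"a"}, {"b"})
```

A rule would then be kept only if its lower bound clears the chosen minimum-confidence threshold, which is the "one single step" selection the abstract describes.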
GIGGLE: a search engine for large-scale integrated genome analysis.
Layer, Ryan M; Pedersen, Brent S; DiSera, Tonya; Marth, Gabor T; Gertz, Jason; Quinlan, Aaron R
2018-02-01
GIGGLE is a genomics search engine that identifies and ranks the significance of genomic loci shared between query features and thousands of genome interval files. GIGGLE (https://github.com/ryanlayer/giggle) scales to billions of intervals and is over three orders of magnitude faster than existing methods. Its speed extends the accessibility and utility of resources such as ENCODE, Roadmap Epigenomics, and GTEx by facilitating data integration and hypothesis generation. PMID:29309061
Langberg, Kurt; Phillips, Matthew; Rueppell, Olav
2018-04-01
The rate of genomic recombination displays evolutionary plasticity and can even vary in response to environmental factors. The western honey bee (Apis mellifera L.) has an extremely high genomic recombination rate, but the mechanistic basis for this genome-wide upregulation is not understood. Based on the hypothesis that meiotic recombination and DNA damage repair share common mechanisms in honey bees, as in other organisms, we predicted that oxidative stress leads to an increase in recombination rate in honey bees. To test this prediction, we subjected honey bee queens to oxidative stress by paraquat injection and measured the rates of genomic recombination in select genome intervals of offspring produced before and after injection. The evaluation of 26 genome intervals in a total of over 1750 offspring of 11 queens by microsatellite genotyping revealed several significant effects, but no overall evidence for a mechanistic link between oxidative stress and increased recombination was found. The results weaken the notion that DNA repair enzymes have a regulatory function in the high rate of meiotic recombination of honey bees, but they do not provide evidence against functional overlap between meiotic recombination and DNA damage repair in honey bees, and more mechanistic studies are needed.
Schwing, Patrick T; Chanton, Jeffrey P; Romero, Isabel C; Hollander, David J; Goddard, Ethan A; Brooks, Gregg R; Larson, Rebekka A
2018-06-01
Following the Deepwater Horizon (DWH) event in 2010, hydrocarbons were deposited on the continental slope in the northeastern Gulf of Mexico through marine oil snow sedimentation and flocculent accumulation (MOSSFA). The objective of this study was to test the hypothesis that benthic foraminiferal δ13C would record this depositional event. From December 2010 to August 2014, a time-series of sediment cores was collected at two impacted sites and one control site in the northeastern Gulf of Mexico. Short-lived radioisotopes (210Pb and 234Th) were employed to establish the pre-DWH, DWH, and post-DWH intervals. Benthic foraminifera (Cibicidoides spp. and Uvigerina spp.) were isolated from these intervals for δ13C measurement. A modest (0.2-0.4‰), but persistent, δ13C depletion in the DWH intervals of impacted sites was observed over a two-year period. This difference was significantly beyond the pre-DWH (background) variability and demonstrated that benthic foraminiferal calcite recorded the depositional event. The longevity of the depletion in the δ13C record suggested that benthic foraminifera may have recorded the change in organic matter caused by MOSSFA from 2010 to 2012. These findings have implications for assessing the subsurface spatial distribution of the DWH MOSSFA event. Copyright © 2018 Elsevier Ltd. All rights reserved.
Colon cancer in Chile before and after the start of the flour fortification program with folic acid.
Hirsch, Sandra; Sanchez, Hugo; Albala, Cecilia; de la Maza, María Pía; Barrera, Gladys; Leiva, Laura; Bunout, Daniel
2009-04-01
Folate depletion is associated with an increased risk of colorectal carcinogenesis. A temporal association between folic acid fortification of enriched cereal grains and an increase in the incidence of colorectal cancer in the USA and Canada has, however, been recently reported. To compare the rates of hospital discharges owing to colon cancer in Chile before and after the start of the mandatory flour fortification program with 220 microg of synthetic folic acid/100 g of wheat flour. Cancer and cardiovascular hospital discharge rates were compared using rate ratios between two study periods: 1992-1996, before folic acid fortification, and 2001-2004, after the flour fortification with folic acid was established in the country. Standard errors of the log rate ratio, used to derive confidence intervals and to test the null hypothesis of no difference, were calculated. The highest rate ratio between the two periods was for colon cancer in the group aged 45-64 years (rate ratio: 2.6; 99% confidence interval: 2.58-2.93) and in the group aged 65-79 years (rate ratio: 2.9; 99% confidence interval: 2.86-3.25). Our data provide new evidence that a folate fortification program could be associated with an additional risk of colon cancer.
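The abstract's method of comparing two periods, a rate ratio with a CI derived from the standard error of the log rate ratio, can be sketched as follows (counts and person-time below are hypothetical, not the Chilean data):

```python
import math

def rate_ratio_ci(cases_1, persontime_1, cases_2, persontime_2, z=2.576):
    """Rate ratio between two periods with a 99% CI built on the log
    scale, SE(log RR) = sqrt(1/cases_1 + 1/cases_2). Mirrors the
    abstract's approach; the inputs here are illustrative."""
    rr = (cases_1 / persontime_1) / (cases_2 / persontime_2)
    se = math.sqrt(1 / cases_1 + 1 / cases_2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical: 520 discharges per 1e6 person-years after fortification
# vs 200 per 1e6 before.
rr, lo, hi = rate_ratio_ci(520, 1.0e6, 200, 1.0e6)
```

A CI that excludes 1 (as here) rejects the null hypothesis of no difference between the periods at the corresponding significance level.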
Pole, Jason D.; Mustard, Cameron A.; To, Teresa; Beyene, Joseph; Allen, Alexander C.
2010-01-01
This study was designed to test the hypothesis that fetal exposure to corticosteroids in the antenatal period is an independent risk factor for the development of asthma in early childhood with little or no effect in later childhood. A population-based cohort study of all pregnant women who resided in Nova Scotia, Canada, and gave birth to a singleton fetus between 1989 and 1998 was undertaken. After a priori specified exclusions, 80,448 infants were available for analysis. Using linked health care utilization records, incident asthma cases developed after 36 months of age were identified. Extended Cox proportional hazards models were used to estimate hazard ratios while controlling for confounders. Exposure to corticosteroids during pregnancy was associated with a risk of asthma in childhood between 3–5 years of age: adjusted hazard ratio of 1.19 (95% confidence interval: 1.03, 1.39), with no association noted after 5 years of age: adjusted hazard ratio for 5–7 years was 1.06 (95% confidence interval: 0.86, 1.30) and for 8 or greater years was 0.74 (95% confidence interval: 0.54, 1.03). Antenatal steroid therapy appears to be an independent risk factor for the development of asthma between 3 and 5 years of age. PMID:21490744
Cryptocurrency price drivers: Wavelet coherence analysis revisited.
Phillips, Ross C; Gorse, Denise
2018-01-01
Cryptocurrencies have experienced recent surges in interest and price. It has been discovered that there are time intervals where cryptocurrency prices and certain online and social media factors appear related. In addition it has been noted that cryptocurrencies are prone to experience intervals of bubble-like price growth. The hypothesis investigated here is that relationships between online factors and price are dependent on market regime. In this paper, wavelet coherence is used to study co-movement between a cryptocurrency price and its related factors, for a number of examples. This is used alongside a well-known test for financial asset bubbles to explore whether relationships change dependent on regime. The primary finding of this work is that medium-term positive correlations between online factors and price strengthen significantly during bubble-like regimes of the price series; this explains why these relationships have previously been seen to appear and disappear over time. A secondary finding is that short-term relationships between the chosen factors and price appear to be caused by particular market events (such as hacks / security breaches), and are not consistent from one time interval to another in the effect of the factor upon the price. In addition, for the first time, wavelet coherence is used to explore the relationships between different cryptocurrencies. PMID:29668765
A Holocene record of ocean productivity and upwelling from the northern California continental slope
Addison, Jason A.; Barron, John A.; Finney, Bruce P.; Kusler, Jennifer E.; Bukry, David; Heusser, Linda E.; Alexander, Clark R.
2018-01-01
The Holocene upwelling history of the northern California continental slope is examined using the high-resolution record of TN062-O550 (40.9°N, 124.6°W, 550 m water depth). This 7-m-long marine sediment core spans the last ∼7500 years, and we use it to test the hypothesis that marine productivity in the California Current System (CCS) driven by coastal upwelling has co-varied with Holocene millennial-scale warm intervals. A combination of biogenic sediment concentrations (opal, total organic C, and total N), stable isotopes (organic matter δ13C and bulk sedimentary δ15N), and key microfossil indicators of upwelling were used to test this hypothesis. The record of biogenic accumulation in TN062-O550 shows considerable Holocene variability despite being located within 50 km of the mouth of the Eel River, which is one of the largest sources of terrigenous sediment to the Northeast Pacific Ocean margin. A key time interval beginning at ∼2900 calibrated years before present (cal yr BP) indicates the onset of modern upwelling in the CCS, and this period also corresponds to the most intense period of upwelling in the last 7500 years. When these results are placed into a regional CCS context during the Holocene, it was found that the timing of upwelling intensification at TN062-O550 corresponds closely to that seen at nearby ODP Site 1019, as well as in the Santa Barbara Basin of southern California. Other CCS records with less refined age control show similar results, which suggest late Holocene upwelling intensification may be synchronous throughout the CCS. Based on the strong correspondence between the alkenone sea surface temperature record at ODP Site 1019 and the onset of late Holocene upwelling in northern California, we suggest that CCS warming may be conducive to upwelling intensification, though future changes are unclear as the mechanisms forcing SST variability may differ.
Mickenautsch, Steffen; Yengopal, Veerasamy
2013-01-01
Background Naïve-indirect comparisons are comparisons between competing clinical interventions’ evidence from separate (uncontrolled) trials. Direct comparisons are comparisons within randomised control trials (RCTs). The objective of this empirical study is to test the null-hypothesis that trends and performance differences inferred from naïve-indirect comparisons and from direct comparisons/RCTs regarding the failure rates of amalgam and direct high-viscosity glass-ionomer cement (HVGIC) restorations in permanent posterior teeth have similar direction and magnitude. Methods A total of 896 citations were identified through a systematic literature search. From these, ten and two uncontrolled clinical longitudinal studies for HVGIC and amalgam, respectively, were included for naïve-indirect comparison and could be matched with three out of twenty RCTs. Summary effect sizes were computed as odds ratios (OR; 95% confidence intervals) and compared with those from RCTs. Trend directions were inferred from 95% confidence interval overlaps and the direction of point estimates; magnitudes of performance differences were inferred from the median point estimates (OR) with 25% and 75% percentile range, for both types of comparison. The Mann-Whitney U test was applied to test for statistically significant differences between point estimates of both comparison types. Results Trends and performance differences inferred from naïve-indirect comparison based on evidence from uncontrolled clinical longitudinal studies and from direct comparisons based on RCT evidence are not the same. The distributions of the point estimates differed significantly for both comparison types (Mann-Whitney U = 25, n_indirect = 26, n_direct = 8; p = 0.0013, two-tailed). Conclusion The null-hypothesis was rejected. Trends and performance differences inferred from either comparison between HVGIC and amalgam restorations’ failure rates in permanent posterior teeth are not the same. 
It is recommended that clinical practice guidance regarding HVGICs should rest on direct comparisons via RCTs and not on naïve-indirect comparisons based on uncontrolled longitudinal studies in order to avoid inflation of effect estimates. PMID:24205220
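The Mann-Whitney U comparison of point estimates described above can be sketched with SciPy; the OR values below are invented for illustration, not the study's data.

```python
# Non-parametric comparison of two sets of odds-ratio point estimates,
# as in the abstract's indirect-vs-direct comparison. Data are invented.
from scipy import stats

indirect_ors = [1.2, 1.5, 0.9, 2.1, 1.8, 1.1, 1.4, 2.5, 1.7, 1.3]  # illustrative
direct_ors = [0.8, 1.0, 0.7, 0.9]                                  # illustrative

u_stat, p_value = stats.mannwhitneyu(indirect_ors, direct_ors,
                                     alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")
```

A small p-value indicates the two distributions of point estimates differ, which is the basis for the abstract's rejection of the null hypothesis.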
Late-Life Depression is Not Associated with Dementia Related Pathology
Wilson, Robert S.; Boyle, Patricia A.; Capuano, Ana W.; Shah, Raj C.; Hoganson, George M.; Nag, Sukriti; Bennett, David A.
2015-01-01
Objective To test the hypothesis that late-life depression is associated with dementia related pathology. Method Older participants (n=1,965) in 3 longitudinal clinical-pathologic cohort studies who had no cognitive impairment at baseline underwent annual clinical evaluations for a mean of 8.0 years (SD = 5.0). We defined depression diagnostically, as major depression during the study period, and psychometrically, as elevated depressive symptoms during the study period, and established their relation to cognitive outcomes (incident dementia, rate of cognitive decline). A total of 657 participants died and underwent a uniform neuropathologic examination. We estimated the association of depression with 6 dementia related markers (tau tangles, beta-amyloid plaques, Lewy bodies, hippocampal sclerosis, gross and microscopic infarcts) in logistic regression models. Results In the full cohort, 9.4% were diagnosed with major depression and 8.6% had chronically elevated depressive symptoms, both of which were related to adverse cognitive outcomes. In the 657 persons who died and had a neuropathologic examination, higher beta-amyloid plaque burden was associated with higher likelihood of major depression (present in 11.0%; odds ratio = 1.392, 95% confidence interval = 1.088, 1.780) but not with elevated depressive symptoms (present in 11.3%; odds ratio = 0.919, 95% confidence interval = 0.726, 1.165). None of the other pathologic markers was related to either of the depression measures. Neither dementia nor antidepressant medication modified the relation of pathology to depression. Conclusion The results do not support the hypothesis that major depression is associated with dementia related pathology. PMID:26237627
Hydrogen-rich water affected blood alkalinity in physically active men.
Ostojic, Sergej M; Stojanovic, Marko D
2014-01-01
The possible application of an effective and safe alkalizing agent in the treatment of metabolic acidosis could be of particular interest for humans experiencing an increase in plasma acidity, such as exercise-induced acidosis. In the present study we tested the hypothesis that daily oral intake of 2 L of hydrogen-rich water (HRW) for 14 days would increase arterial blood alkalinity at baseline and post-exercise as compared with a placebo. This study was a randomized, double-blind, placebo-controlled trial involving 52 presumably healthy, physically active male volunteers. Twenty-six participants received HRW and 26 a placebo (tap water) for 14 days. Arterial blood pH, partial pressure of carbon dioxide (pCO2), and bicarbonates were measured at baseline and post-exercise at the start (day 0) and at the end of the intervention period (day 14). Intake of HRW significantly increased fasting arterial blood pH by 0.04 (95% confidence interval: 0.01-0.08; p < 0.001) and post-exercise pH by 0.07 (95% confidence interval: 0.01-0.10; p = 0.03) after 14 days of intervention. Fasting bicarbonates were significantly higher in the HRW trial after the administration regimen as compared with preadministration (30.5 ± 1.9 mEq/L vs. 28.3 ± 2.3 mEq/L; p < 0.0001). No volunteers withdrew before the end of the study, and no participant reported any vexatious side effects of supplementation. These results support the hypothesis that HRW administration is safe and may have an alkalizing effect in young, physically active men.
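The study's baseline-versus-day-14 comparison is a paired design; a minimal paired t-test sketch on invented pH values (chosen to mimic the reported +0.04 mean change) illustrates the analysis.

```python
# Paired t-test sketch of a pre/post comparison of fasting arterial pH.
# The values are invented around the abstract's reported +0.04 change.
from scipy import stats

ph_day0  = [7.40, 7.41, 7.39, 7.42, 7.40, 7.38, 7.41, 7.40]
ph_day14 = [7.44, 7.45, 7.42, 7.46, 7.45, 7.41, 7.46, 7.44]

t_stat, p_value = stats.ttest_rel(ph_day14, ph_day0)
mean_change = sum(b - a for a, b in zip(ph_day0, ph_day14)) / len(ph_day0)
print(f"mean change = {mean_change:+.3f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```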
Lêng, Chhian Hūi; Wang, Jung-Der
2016-01-01
Aims To test the hypothesis that gardening is beneficial for survival after taking time-dependent comorbidities, mobility, and depression into account in a longitudinal middle-aged (50–64 years) and older (≥65 years) cohort in Taiwan. Methods The cohort contained 5,058 nationally sampled adults ≥50 years old from the Taiwan Longitudinal Study on Aging (1996–2007). Gardening was defined as growing flowers, gardening, or cultivating potted plants for pleasure with five different frequencies. We calculated hazard ratios for the mortality risks of gardening and adjusted the analysis for socioeconomic status, health behaviors and conditions, depression, mobility limitations, and comorbidities. Survival models also examined time-dependent effects and risks in each stratum contingent upon baseline mobility and depression. Sensitivity analyses used imputation methods for missing values. Results Daily home gardening was associated with a high survival rate (hazard ratio: 0.82; 95% confidence interval: 0.71–0.94). The benefits were robust for those with mobility limitations, but without depression at baseline (hazard ratio: 0.64, 95% confidence interval: 0.48–0.87) when adjusted for time-dependent comorbidities, mobility limitations, and depression. Chronic or relapsed depression weakened the protection of gardening. For those without mobility limitations and not depressed at baseline, gardening had no effect. Sensitivity analyses using different imputation methods yielded similar results and corroborated the hypothesis. Conclusion Daily gardening for pleasure was associated with reduced mortality for Taiwanese >50 years old with mobility limitations but without depression. PMID:27486315
Rashid, Jahidur; Patel, Brijeshkumar; Nozik-Grayck, Eva; McMurtry, Ivan F; Stenmark, Kurt R; Ahsan, Fakhrul
2017-03-28
The practice of treating PAH patients with oral or intravenous sildenafil suffers from the limitations of short dosing intervals, peripheral vasodilation, unwanted side effects, and restricted use in pediatric patients. In this study, we sought to test the hypothesis that inhalable poly(lactic-co-glycolic acid) (PLGA) particles of sildenafil prolong the release of the drug, produce pulmonary-specific vasodilation, reduce the systemic exposure of the drug, and may be used as an alternative to oral sildenafil in the treatment of PAH. Thus, we prepared porous PLGA particles of sildenafil using a water-in-oil-in-water double-emulsion solvent evaporation method with polyethyleneimine (PEI) as a porosigen, characterized the formulations for surface morphology, respirability, and in vitro drug release, and evaluated them for in vivo absorption, alveolar macrophage uptake, and safety. PEI increased the particle porosity and drug entrapment and produced drug release for 36 h. Fluorescent particles showed reduced uptake by alveolar macrophages. The polymeric particles were safe to rat pulmonary arterial smooth muscle cells and to the lungs, as evidenced by the cytotoxicity assay and analyses of the injury markers in the bronchoalveolar lavage fluid, respectively. Intratracheally administered sildenafil particles elicited more pulmonary-specific and sustained vasodilation in SUGEN-5416/hypoxia-induced PAH rats than oral, intravenous, or intratracheal plain sildenafil did, when administered at the same dose. Overall, true to the hypothesis, this study shows that inhaled PLGA particles of sildenafil can be administered, as a substitute for the oral form of sildenafil, at a reduced dose and longer dosing interval. Copyright © 2017 Elsevier B.V. All rights reserved.
P value and the theory of hypothesis testing: an explanation for new researchers.
Biau, David Jean; Jolles, Brigitte M; Porcher, Raphaël
2010-03-01
In the 1920s, Ronald Fisher developed the theory behind the p value and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers important quantitative tools to confirm or refute their hypotheses. The p value is the probability to obtain an effect equal to or more extreme than the one observed presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true rather than the probability of obtaining the difference observed, or one that is more extreme, considering the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
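The contrast the abstract draws between Fisher's p value and the Neyman-Pearson decision rule can be made concrete with a one-sample t-test; the data and threshold below are invented for illustration.

```python
# Fisher vs Neyman-Pearson on the same test: report the p value as a
# continuous measure of evidence, then apply a pre-set alpha to decide.
# Sample values and the null mean are invented.
from scipy import stats

data = [5.1, 4.8, 5.6, 5.3, 4.9, 5.4, 5.2, 5.0]  # illustrative sample
mu0 = 5.0                                        # null hypothesis mean

t_stat, p_value = stats.ttest_1samp(data, popmean=mu0)

# Fisher: the p value measures the strength of evidence against H0
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# Neyman-Pearson: a pre-chosen alpha defines the critical region
alpha = 0.05
decision = "reject H0" if p_value < alpha else "fail to reject H0"
print(decision)
```

Note the p value is the probability of data at least this extreme *given* H0, not the probability that H0 is true, which is the misconception the abstract warns against.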
Influence of artificial sweetener on human blood glucose concentration.
Skokan, Ilse; Endler, P Christian; Wulkersdorfer, Beatrix; Magometschnigg, Dieter; Spranger, Heinz
2007-10-05
Artificial sweeteners, such as saccharin or cyclamic acid, are synthetically manufactured sweeteners. Known for their low energetic value, they serve especially diabetic and adipose patients as sugar substitutes. It has been hypothesized that the substitution of sugar with artificial sweeteners may induce a decrease in blood glucose. The aim of this study was to test this hypothesis by comparing the influence of regular table sugar and artificial sweetener on the blood glucose concentration. This pilot study included 16 patients suffering from adiposity, pre-diabetes, and hypertension. In a cross-over design, three test trials were performed at intervals of several weeks, each trial followed by a test-free interval. Within one test trial each patient consumed 150 ml of test solution (water) that contained either 6 g of table sugar or artificial sweetener ("Kandisin"), with a sweetener-free solution serving as control. Tests were performed within 1 hr after lunch to ensure conditions comparable to patients having a dessert. Every participant determined their blood glucose concentration immediately before and 5, 15, 30, and 60 minutes after the intake of the test solution. For statistical analysis, an analysis of variance was performed. The data showed no significant changes in the blood glucose concentration. Neither the application of sugar (F(4;60) = 1.645; p = .175) nor the consumption of an artificial sweetener (F(2.068;31.023) = 1.551; p > .05) caused significant fluctuations in the blood sugar levels. Over a time frame of 60 minutes, a significant decrease of the blood sugar concentration was found in the control group (F(2.457;36.849) = 4.005; p = .020) as a physiological reaction during lunch digestion.
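The study used a repeated-measures ANOVA; as a simplified sketch of the general idea, a one-way ANOVA across the three trial conditions can be run with SciPy. The glucose-change values below are invented and the between-groups design is a simplification of the actual within-subjects design.

```python
# Simplified one-way ANOVA across three conditions (sugar, sweetener,
# control). The study itself used repeated measures; data are invented.
from scipy import stats

sugar     = [2.0, 1.5, 2.3, 1.8, 2.1]   # illustrative glucose changes
sweetener = [1.9, 1.6, 2.2, 1.7, 2.0]
control   = [1.8, 1.7, 2.1, 1.9, 2.2]

f_stat, p_value = stats.f_oneway(sugar, sweetener, control)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")  # similar groups give a large p
```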
Debates—Hypothesis testing in hydrology: Theory and practice
NASA Astrophysics Data System (ADS)
Pfister, Laurent; Kirchner, James W.
2017-03-01
The basic structure of the scientific method—at least in its idealized form—is widely championed as a recipe for scientific progress, but the day-to-day practice may be different. Here, we explore the spectrum of current practice in hypothesis formulation and testing in hydrology, based on a random sample of recent research papers. This analysis suggests that in hydrology, as in other fields, hypothesis formulation and testing rarely correspond to the idealized model of the scientific method. Practices such as "p-hacking" or "HARKing" (Hypothesizing After the Results are Known) are major obstacles to more rigorous hypothesis testing in hydrology, along with the well-known problem of confirmation bias—the tendency to value and trust confirmations more than refutations—among both researchers and reviewers. Nonetheless, as several examples illustrate, hypothesis tests have played an essential role in spurring major advances in hydrological theory. Hypothesis testing is not the only recipe for scientific progress, however. Exploratory research, driven by innovations in measurement and observation, has also underlain many key advances. Further improvements in observation and measurement will be vital to both exploratory research and hypothesis testing, and thus to advancing the science of hydrology.
Use of mobile phones in Norway and risk of intracranial tumours.
Klaeboe, Lars; Blaasaas, Karl Gerhard; Tynes, Tore
2007-04-01
To test the hypothesis that exposure to radio-frequency electromagnetic fields from mobile phones increases the incidence of gliomas, meningiomas, and acoustic neuromas in adults. Incident cases were patients aged 19-69 years diagnosed during 2001-2002 in Southern Norway. Population controls were selected and frequency-matched for age, sex, and residential area. Detailed information about mobile phone use was collected from 289 glioma patients (response rate 77%), 207 meningioma patients (71%), and 45 acoustic neuroma patients (68%), and from 358 (69%) controls. For regular mobile phone use, defined as use on average at least once a week for at least 6 months, the odds ratio was 0.6 (95% confidence interval 0.4-0.9) for gliomas, 0.8 (95% confidence interval 0.5-1.1) for meningiomas, and 0.5 (95% confidence interval 0.2-1.0) for acoustic neuromas. Similar results were found with mobile phone use for 6 years or more for gliomas and acoustic neuromas. An exception was meningiomas, where the odds ratio was 1.2 (95% confidence interval 0.6-2.2). Furthermore, no increasing trend was observed for gliomas or acoustic neuromas with increasing duration of regular use, time since first regular use, or cumulative use of mobile phones. The results of the present study indicate that use of mobile phones is not associated with an increased risk of gliomas, meningiomas, or acoustic neuromas.
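Odds ratios with Wald 95% confidence intervals of the kind reported in this case-control study are computed from a 2x2 exposure table; a minimal sketch follows, with invented counts rather than the study's data.

```python
# Odds ratio and Wald 95% CI from a 2x2 case-control table.
# Counts are invented for illustration only.
import math

exposed_cases, exposed_controls = 150, 220      # regular users (invented)
unexposed_cases, unexposed_controls = 139, 138  # non-users (invented)

odds_ratio = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
se_log_or = math.sqrt(1/exposed_cases + 1/exposed_controls
                      + 1/unexposed_cases + 1/unexposed_controls)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An interval that excludes 1 (as the glioma CI 0.4-0.9 does) is what licenses the abstract's statement of a statistically significant association.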
A leaf wax biomarker record of early Pleistocene hydroclimate from West Turkana, Kenya
NASA Astrophysics Data System (ADS)
Lupien, R. L.; Russell, J. M.; Feibel, C.; Beck, C.; Castañeda, I.; Deino, A.; Cohen, A. S.
2018-04-01
Climate is thought to play a critical role in human evolution; however, this hypothesis is difficult to test due to a lack of long, high-quality paleoclimate records from key hominin fossil locales. To address this issue, we analyzed organic geochemical indicators of climate in a drill core from West Turkana, Kenya, that spans ∼1.9-1.4 Ma, an interval that includes several important hominin evolutionary transitions. We analyzed the hydrogen isotopic composition of terrestrial plant waxes (δDwax) to reconstruct orbital-timescale changes in regional hydrology and their relationship with global climate forcings and the hominin fossil record. Our data indicate little change in the long-term mean hydroclimate during this interval, in contrast to inferred changes in the level of Lake Turkana, suggesting that lake level may have responded dominantly to deltaic progradation or tectonically driven changes in basin configuration rather than to hydroclimate. Time-series spectral analyses of the isotopic data reveal strong precession-band (21 kyr) periodicity, indicating that regional hydroclimate was strongly affected by changes in insolation. We observe an interval of particularly high-amplitude hydrologic variation at ∼1.7 Ma, which occurs during a time of high orbital eccentricity and hence large changes in precessionally driven insolation amplitude. This interval overlaps with multiple hominin species turnovers, the appearance of new stone tool technology, and hominin dispersal out of Africa, supporting the notion that climate variability played an important role in hominin evolution.
Joung, Boyoung; Park, Hyung-Wook; Maruyama, Mitsunori; Tang, Liang; Song, Juan; Han, Seongwook; Piccirillo, Gianfranco; Weiss, James N.; Lin, Shien-Fong; Chen, Peng-Sheng
2012-01-01
Background Anodal stimulation hyperpolarizes the cell membrane and increases the intracellular Ca2+ (Cai) transient. This study tested the hypothesis that the maximum slope of Cai decline (-(dCai/dt)max) corresponds to the timing of the anodal dip on the strength-interval curve and the initiation of repetitive responses and ventricular fibrillation (VF) after a premature stimulus (S2). Methods and Results We simultaneously mapped membrane potential (Vm) and Cai in 23 rabbit ventricles. A dip was observed on the anodal strength-interval curve. During the anodal dip, ventricles were captured by anodal break excitation directly under the S2 electrode. The Cai following anodal stimuli is larger than that following cathodal stimuli. The S1-S2 intervals of the anodal dip (203 ± 10 ms) coincided with the -(dCai/dt)max (199 ± 10 ms, p = NS). BAPTA-AM (n=3), INCX inhibition by low extracellular Na+ (n=3), and combined ryanodine and thapsigargin infusion (n=2) eliminated the anodal supernormality. Strong S2 during the relative refractory period (n=5) induced 29 repetitive responses and 10 VF episodes. The interval between S2 and the first non-driven beat coincided with the time of -(dCai/dt)max. Conclusions The larger Cai transient and INCX activation induced by anodal stimulation produce anodal supernormality. The time of maximum INCX activation coincides with the induction of non-driven beats from the Cai sinkhole after a strong premature stimulation. PMID:21301131
Bayesian inference for psychology. Part II: Example applications with JASP.
Wagenmakers, Eric-Jan; Love, Jonathon; Marsman, Maarten; Jamil, Tahira; Ly, Alexander; Verhagen, Josine; Selker, Ravi; Gronau, Quentin F; Dropmann, Damian; Boutin, Bruno; Meerhoff, Frans; Knight, Patrick; Raj, Akash; van Kesteren, Erik-Jan; van Doorn, Johnny; Šmíra, Martin; Epskamp, Sacha; Etz, Alexander; Matzke, Dora; de Jong, Tim; van den Bergh, Don; Sarafoglou, Alexandra; Steingroever, Helen; Derks, Koen; Rouder, Jeffrey N; Morey, Richard D
2018-02-01
Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP ( http://www.jasp-stats.org ), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder's BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
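JASP reports default Bayes factors built on the BayesFactor package's priors; a much rougher, SciPy-free approximation (the BIC-based Bayes factor) can be sketched in a few lines. The data and the simple equal-variance model below are invented for illustration and are not JASP's actual method.

```python
# Rough BIC-approximation Bayes factor for "two groups share a mean"
# (H0) vs "two groups have separate means" (H1): BF01 ~ exp((BIC1-BIC0)/2).
# This is a coarse stand-in for JASP's default Bayes factors. Data invented.
import math
import numpy as np

def bic_bayes_factor_01(x, y):
    """Approximate BF01 (evidence for H0: equal means) via BIC."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x) + len(y)
    pooled = np.concatenate([x, y])
    rss0 = np.sum((pooled - pooled.mean()) ** 2)            # H0: one mean
    rss1 = np.sum((x - x.mean()) ** 2) + np.sum((y - y.mean()) ** 2)  # H1
    bic0 = n * math.log(rss0 / n) + 2 * math.log(n)  # params: mean, variance
    bic1 = n * math.log(rss1 / n) + 3 * math.log(n)  # params: 2 means, variance
    return math.exp((bic1 - bic0) / 2)

bf01 = bic_bayes_factor_01([5.1, 4.9, 5.2, 5.0, 5.1], [5.0, 5.2, 4.9, 5.1, 5.0])
print(f"BF01 ~ {bf01:.2f}")  # > 1 favours the null of equal means
```

Unlike a p value, BF01 quantifies relative evidence *for* the null and can be monitored as data accumulate, which is the advantage the abstract highlights.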
Teaching Hypothesis Testing by Debunking a Demonstration of Telepathy.
ERIC Educational Resources Information Center
Bates, John A.
1991-01-01
Discusses a lesson designed to demonstrate hypothesis testing to introductory college psychology students. Explains that a psychology instructor demonstrated apparent psychic abilities to students. Reports that students attempted to explain the instructor's demonstrations through hypothesis testing and revision. Provides instructions on performing…
Su, Zhong; Zhang, Lisha; Ramakrishnan, V.; Hagan, Michael; Anscher, Mitchell
2011-01-01
Purpose: To evaluate both the Calypso System's (Calypso Medical Technologies, Inc., Seattle, WA) localization accuracy in the presence of wireless metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters of the dose verification system (DVS, Sicel Technologies, Inc., Morrisville, NC) and the dosimeters' reading accuracy in the presence of wireless electromagnetic transponders inside a phantom. Methods: A custom-made, solid-water phantom was fabricated with space for transponders and dosimeters. Two inserts were machined with positioning grooves precisely matching the dimensions of the transponders and dosimeters, arranged in orthogonal and parallel orientations, respectively. To test the transponder localization accuracy with/without the presence of dosimeters (hypothesis 1), multivariate analyses were performed on transponder-derived localization data with and without dosimeters at each preset distance to detect statistically significant localization differences between the control and test sets. To test dosimeter dose-reading accuracy with/without the presence of transponders (hypothesis 2), an approach of alternating the transponder presence in seven identical fraction dose (100 cGy) deliveries and measurements was implemented. Two-way analysis of variance was performed to examine statistically significant dose-reading differences between the two groups and the different fractions. A relative-dose analysis method was also used to evaluate the transponder impact on dose-reading accuracy after the dose-fading effect was removed by a second-order polynomial fit. Results: Multivariate analysis indicated that hypothesis 1 was false; there was a statistically significant difference between the localization data from the control and test sets.
However, the upper and lower bounds of the 95% confidence intervals of the localized positional differences between the control and test sets were less than 0.1 mm, which is significantly smaller than the minimum clinical localization resolution of 0.5 mm. For hypothesis 2, analysis of variance indicated that there was no statistically significant difference between the dosimeter readings with and without the presence of transponders. Both orthogonal and parallel configurations had differences between polynomial-fit and measured dose values within 1.75%. Conclusions: The phantom study indicated that the Calypso System's localization accuracy was not clinically affected by the presence of DVS wireless MOSFET dosimeters, and the dosimeter-measured doses were not affected by the presence of transponders. Thus, the same patients could be implanted with both transponders and dosimeters to benefit from the improved accuracy of radiotherapy treatments offered by the combined use of the two systems. PMID:21776780
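The pattern in this study, a statistically significant difference that is clinically negligible against a 0.5 mm threshold, is the textbook motivation for equivalence testing. A minimal TOST (two one-sided tests) sketch follows; this is not the authors' analysis, and the positional differences are invented.

```python
# TOST equivalence test: is the mean positional difference within ±0.5 mm?
# Not the study's method; differences (mm) are invented for illustration.
import numpy as np
from scipy import stats

diffs = np.array([0.06, 0.04, 0.08, 0.05, 0.07, 0.03, 0.06, 0.05])  # mm
margin = 0.5  # mm, clinical equivalence bound from the abstract

se = diffs.std(ddof=1) / np.sqrt(len(diffs))
df = len(diffs) - 1
t_lower = (diffs.mean() + margin) / se   # test H0: mean <= -margin
t_upper = (diffs.mean() - margin) / se   # test H0: mean >= +margin
p_tost = max(stats.t.sf(t_lower, df), stats.t.cdf(t_upper, df))
print(f"TOST p = {p_tost:.4f}")  # small p -> mean lies within ±0.5 mm
```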
Lash, Ayhan Aytekin; Plonczynski, Donna J; Sehdev, Amikar
2011-01-01
To compare the inclusion and the influences of selected variables on hypothesis testing during the 1980s and 1990s. In spite of the emphasis on conducting inquiry consistent with the tenets of logical positivism, there have been no studies investigating the frequency and patterns of hypothesis testing in nursing research. The sample was obtained from the journal Nursing Research, the research journal with the highest circulation during the period under study. All quantitative studies published during the two decades, including briefs and historical studies, were included in the analyses. A retrospective design was used to select the sample: five years each from the 1980s and the 1990s were randomly selected from the journal Nursing Research. Of the 582 studies, 517 met inclusion criteria. Findings suggest that there was a decline in the use of hypothesis testing in the last decades of the 20th century. Further research is needed to identify the factors that influence the conduct of research with hypothesis testing. Hypothesis testing in nursing research showed a steady decline from the 1980s to the 1990s. Research purposes of explanation and prediction/control increased the likelihood of hypothesis testing. Hypothesis testing strengthens the quality of quantitative studies, increases the generality of findings, and provides dependable knowledge. This is particularly true for quantitative studies that aim to explore, explain, and predict/control phenomena and/or test theories. The findings also have implications for doctoral programmes, the research preparation of nurse-investigators, and theory testing.
A shift from significance test to hypothesis test through power analysis in medical research.
Singh, G
2006-01-01
Until recently, medical research literature exhibited substantial dominance of Fisher's significance-test approach to statistical inference, which concentrates on the probability of type I error, over the Neyman-Pearson hypothesis-test approach, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomises results into significant or non-significant with a P value. The Neyman-Pearson approach speaks of acceptance or rejection of the null hypothesis. Based on the same underlying theory, these two approaches address the same objective and conclude in their own ways. The advancement of computing techniques and the availability of statistical software have resulted in the increasing application of power calculations in medical research, and thereby in reporting the results of significance tests in the light of the power of the test as well. The significance-test approach, when it incorporates power analysis, contains the essence of the hypothesis-test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to the Neyman-Pearson hypothesis-test procedure.
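The power calculation at the heart of the Neyman-Pearson framework can be written down directly for a one-sided z-test; the design values below (alpha, effect size, sigma, n) are invented for illustration.

```python
# Analytic power of a one-sided z-test: P(reject H0 | H1 true).
# Design values are illustrative only.
import math
from scipy import stats

alpha, effect, sigma, n = 0.05, 0.5, 1.0, 30   # illustrative design values

z_crit = stats.norm.ppf(1 - alpha)             # critical value under H0
shift = effect / (sigma / math.sqrt(n))        # standardized shift under H1
power = 1 - stats.norm.cdf(z_crit - shift)     # probability of rejection
print(f"power = {power:.3f}")
```

Here 1 - power is the type II error rate, the quantity the abstract notes Fisher's approach leaves out of view.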
2011-01-01
Background Although many biological databases are applying semantic web technologies, meaningful biological hypothesis testing cannot be easily achieved. Database-driven high-throughput genomic hypothesis testing requires both the capability to obtain semantically relevant experimental data and the capability to perform relevant statistical testing on the retrieved data. Tissue microarray (TMA) data are semantically rich and contain many biologically important hypotheses waiting for high-throughput conclusions. Methods An application-specific ontology was developed for managing TMA and DNA microarray databases with semantic web technologies. Data were represented as Resource Description Framework (RDF) according to the framework of the ontology. Applications for hypothesis testing (Xperanto-RDF) for TMA data were designed and implemented by (1) formulating the syntactic and semantic structures of the hypotheses derived from TMA experiments, (2) formulating SPARQL queries to reflect the semantic structures of the hypotheses, and (3) performing statistical tests with the result sets returned by the SPARQL queries. Results When a user designs a hypothesis in Xperanto-RDF and submits it, the hypothesis can be tested against TMA experimental data stored in Xperanto-RDF. When we evaluated four previously validated hypotheses as an illustration, all the hypotheses were supported by Xperanto-RDF. Conclusions We demonstrated the utility of high-throughput biological hypothesis testing. We believe that preliminary investigation before performing highly controlled experiments can benefit from this approach. PMID:21342584
Knowledge dimensions in hypothesis test problems
NASA Astrophysics Data System (ADS)
Krishnan, Saras; Idris, Noraini
2012-05-01
The reform in statistics education over the past two decades has predominantly shifted the focus of statistical teaching and learning from procedural understanding to conceptual understanding. The emphasis of procedural understanding is on formulas and calculation procedures, whereas conceptual understanding emphasizes students knowing why they are using a particular formula or executing a specific procedure. In addition, the Revised Bloom's Taxonomy offers a two-dimensional framework to describe learning objectives, comprising the six revised cognition levels of the original Bloom's taxonomy and four knowledge dimensions. Depending on the level of complexity, the four knowledge dimensions essentially distinguish basic understanding from more connected understanding. This study identifies the factual, procedural, and conceptual knowledge dimensions in hypothesis test problems. Hypothesis testing, an important tool in making inferences about a population from sample information, is taught in many introductory statistics courses. However, researchers find that students in these courses still have difficulty in understanding the underlying concepts of hypothesis tests. Past studies also show that even though students can perform the hypothesis testing procedure, they may not understand the rationale for executing these steps or know how to apply them in novel contexts. Besides knowing the procedural steps in conducting a hypothesis test, students must have fundamental statistical knowledge and a deep understanding of underlying inferential concepts such as the sampling distribution and the central limit theorem. By identifying the knowledge dimensions of hypothesis test problems in this study, suitable instructional and assessment strategies can be developed in future to enhance students' learning of hypothesis tests as a valuable inferential tool.
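The sampling distribution and central limit theorem that the abstract identifies as prerequisite concepts can be made concrete with a short simulation; the skewed population below is invented for illustration.

```python
# Simulating the sampling distribution of the mean from a skewed
# (exponential) population to illustrate the central limit theorem.
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=100_000)  # skewed population

# Draw many samples of size 50 and record each sample mean
sample_means = [rng.choice(population, size=50).mean() for _ in range(2_000)]

# CLT: means cluster near the population mean with SE ~ sigma/sqrt(n)
print(f"population mean ~ {population.mean():.2f}")
print(f"mean of sample means ~ {np.mean(sample_means):.2f}")
print(f"empirical SE ~ {np.std(sample_means):.3f}, "
      f"theory ~ {population.std() / 50**0.5:.3f}")
```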
Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dale; Selby, Neil
2012-08-14
Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event screening hypothesis test (Fisher's and Tippett's tests). The commonly used standard error in the Ms:mb event screening hypothesis test is not fully consistent with its physical basis. An improved standard error gives better agreement with the physical basis, correctly partitions error to include model error as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with the better scaling slope (β = 1, Selby et al.), whereas the improved standard error 'fails to reject' H0.
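The combination of single-phenomenology tests that the abstract describes rests on standard p-value combination methods; SciPy implements both Fisher's and Tippett's. The per-phenomenology p-values below are invented.

```python
# Combining independent per-phenomenology p-values into one screening
# decision via Fisher's and Tippett's methods. P-values are invented.
from scipy import stats

p_values = [0.04, 0.20, 0.12]  # illustrative single-phenomenology p-values

fisher_stat, fisher_p = stats.combine_pvalues(p_values, method="fisher")
tippett_stat, tippett_p = stats.combine_pvalues(p_values, method="tippett")
print(f"Fisher:  stat = {fisher_stat:.3f}, combined p = {fisher_p:.4f}")
print(f"Tippett: stat = {tippett_stat:.3f}, combined p = {tippett_p:.4f}")
```

Fisher's method pools evidence across all tests (statistic -2 Σ ln p, chi-squared with 2k df), while Tippett's keys on the single smallest p-value, so the two can disagree on borderline events.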
Robust misinterpretation of confidence intervals.
Hoekstra, Rink; Morey, Richard D; Rouder, Jeffrey N; Wagenmakers, Eric-Jan
2014-10-01
Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more useful alternative to NHST, and their use is strongly encouraged in the APA Manual. Nevertheless, little is known about how researchers interpret CIs. In this study, 120 researchers and 442 students, all in the field of psychology, were asked to assess the truth value of six particular statements involving different interpretations of a CI. Although all six statements were false, both researchers and students endorsed, on average, more than three statements, indicating a gross misunderstanding of CIs. Self-declared experience with statistics was not related to researchers' performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever. Our findings suggest that many researchers do not know the correct interpretation of a CI. The misunderstandings surrounding p-values and CIs are particularly unfortunate because they constitute the main tools by which psychologists draw conclusions from data.
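The interpretation that the surveyed statements get wrong can be made concrete with a small simulation: "95%" describes the long-run coverage of the interval-construction procedure, not the probability that any one computed interval contains the truth. A sketch with illustrative numbers (not data from the study):

```python
import math
import random
import statistics

random.seed(0)
mu, sigma, n, trials = 10.0, 2.0, 25, 5000   # true mean known to the simulator
half_width = 1.96 * sigma / math.sqrt(n)     # known-sigma 95% interval half-width
covered = 0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    m = statistics.mean(sample)
    if m - half_width <= mu <= m + half_width:
        covered += 1
coverage = covered / trials  # close to 0.95 over many repetitions
```

Each individual interval either does or does not contain mu; the 95% figure is a property of the procedure across repetitions, which is exactly the distinction the surveyed statements blur.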
Fundamentals in Biostatistics for Research in Pediatric Dentistry: Part I - Basic Concepts.
Garrocho-Rangel, J A; Ruiz-Rodríguez, M S; Pozos-Guillén, A J
The purpose of this report was to provide the reader with some basic concepts in order to better understand the significance and reliability of the results of any article on Pediatric Dentistry. Currently, Pediatric Dentists need the best evidence available in the literature on which to base their diagnoses and treatment decisions for the children's oral care. Basic understanding of Biostatistics plays an important role during the entire Evidence-Based Dentistry (EBD) process. This report describes Biostatistics fundamentals in order to introduce the basic concepts used in statistics, such as summary measures, estimation, hypothesis testing, effect size, level of significance, p value, confidence intervals, etc., which are available to Pediatric Dentists interested in reading or designing original clinical or epidemiological studies.
Personality change at the intersection of autonomic arousal and stress.
Hart, Daniel; Eisenberg, Nancy; Valiente, Carlos
2007-06-01
We hypothesized that personality change in children can be predicted by the interaction of family risk with susceptibility to autonomic arousal and that children characterized by both high-risk families and highly reactive autonomic nervous systems tend to show maladaptive change. This hypothesis was tested in a 6-year longitudinal study in which personality-type prototypicality, problem behavior, and negative emotional intensity were measured at 2-year intervals. The results indicated that children who both had exaggerated skin conductance responses (a measure of autonomic reactivity) and were living in families with multiple risk factors were most likely to develop an undercontrolled personality type and to exhibit increases in problem behavior and negative emotional intensity. The implications of the results for understanding personality change are discussed.
Murayama, Junko; Kashiwagi, Toshihiro; Kashiwagi, Asako; Mimura, Masaru
2004-10-01
Pre- and postmorbid singing of a patient with amusia due to a right-hemispheric infarction was analyzed acoustically. This particular patient had a premorbid tape recording of her own singing without accompaniment. Appropriateness of pitch interval and rhythm was evaluated based on ratios of pitch and duration between neighboring notes. The results showed that melodic contours and rhythm were preserved but individual pitch intervals were conspicuously distorted. Our results support a hypothesis that pitch and rhythm are subserved by independent neural subsystems. We concluded that action-related acoustic information for controlling pitch intervals is stored in the right hemisphere.
Fatigue Failure of External Hexagon Connections on Cemented Implant-Supported Crowns.
Malta Barbosa, João; Navarro da Rocha, Daniel; Hirata, Ronaldo; Freitas, Gileade; Bonfante, Estevam A; Coelho, Paulo G
2018-01-17
To evaluate the probability of survival and failure modes of different external hexagon connection systems restored with anterior cement-retained single-unit crowns. The postulated null hypothesis was that there would be no differences under accelerated life testing. Fifty-four external hexagon dental implants (∼4 mm diameter) were used for single cement-retained crown replacement and divided into 3 groups: (3i) Full OSSEOTITE, Biomet 3i (n = 18); (OL) OEX P4, Osseolife Implants (n = 18); and (IL) Unihex, Intra-Lock International (n = 18). Abutments were torqued to the implants, and maxillary central incisor crowns were cemented and subjected to step-stress-accelerated life testing in water. Use-level probability Weibull curves and probability of survival for a mission of 100,000 cycles at 200 N (95% 2-sided confidence intervals) were calculated. Stereo and scanning electron microscopes were used for failure inspection. The beta values for 3i, OL, and IL (1.60, 1.69, and 1.23, respectively) indicated that fatigue accelerated the failure of the 3 groups. Reliability of the 3i and OL groups (41% and 68%, respectively) did not differ from each other, but both were significantly lower than that of the IL group (98%). Abutment screw fracture was the failure mode consistently observed in all groups. Because the reliability was significantly different between the 3 groups, our postulated null hypothesis was rejected.
Ni, W; Song, X; Cui, J
2014-03-01
The purpose of this study was to test the mutant selection window (MSW) hypothesis with Escherichia coli exposed to levofloxacin in a rabbit model and to compare in vivo and in vitro exposure thresholds that restrict the selection of fluoroquinolone-resistant mutants. Local infection with E. coli was established in rabbits, and the infected animals were treated orally with various doses of levofloxacin once a day for five consecutive days. Changes in levofloxacin concentration and levofloxacin susceptibility were monitored at the site of infection. The MICs of E. coli increased when levofloxacin concentrations at the site of infection fluctuated between the lower and upper boundaries of the MSW, defined in vitro as the minimum inhibitory concentration (MIC99) and the mutant prevention concentration (MPC), respectively. The pharmacodynamic thresholds at which resistant mutants are not selected in vivo were estimated as AUC24/MPC > 20 h or AUC24/MIC > 60 h, where AUC24 is the area under the drug concentration-time curve over a 24-h interval. Our findings demonstrated that the MSW exists in vivo. The AUC24/MPC ratio that prevented selection of resistant mutants, estimated in vivo, is consistent with that observed in vitro, indicating it might be a reliable index for guiding the optimization of antimicrobial treatment regimens to suppress the selection of antimicrobial resistance.
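The exposure thresholds reported above reduce to simple ratio checks. The sketch below encodes them with hypothetical inputs; the function name, example units, and numbers are mine, and only the >20 h and >60 h cutoffs come from the abstract:

```python
def resistance_suppression_ok(auc24, mpc, mic):
    """Apply the in vivo thresholds estimated in the study:
    selection of resistant mutants is restricted when
    AUC24/MPC > 20 h or AUC24/MIC > 60 h.
    auc24 in mg*h/L; mpc and mic in mg/L (illustrative units)."""
    return (auc24 / mpc > 20.0) or (auc24 / mic > 60.0)
```

For example, a regimen with AUC24 = 50 mg*h/L against a strain with MPC = 2 mg/L meets the first criterion (ratio 25 h), while AUC24 = 30 mg*h/L with MPC = 2 mg/L and MIC = 1 mg/L meets neither.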
Heightened risk of preterm birth and growth restriction after a first-born son.
Bruckner, Tim A; Mayo, Jonathan A; Gould, Jeffrey B; Stevenson, David K; Lewis, David B; Shaw, Gary M; Carmichael, Suzan L
2015-10-01
In Scandinavia, delivery of a first-born son elevates the risk of preterm delivery and intrauterine growth restriction of the next-born infant. External validity of these results remains unclear. We test this hypothesis for preterm delivery and growth restriction using the linked California birth cohort file. We examined the hypothesis separately by race and/or ethnicity. We retrieved data on 2,852,976 births to 1,426,488 mothers with at least two live births. Our within-mother tests applied Cox proportional hazards (preterm delivery, defined as less than 37 weeks gestation) and linear regression models (birth weight for gestational age percentiles). For non-Hispanic whites, Hispanics, Asians, and American Indian and/or Alaska Natives, analyses indicate heightened risk of preterm delivery and growth restriction after a first-born male. The race-specific hazard ratios for preterm delivery range from 1.07 to 1.18. Regression coefficients for birth weight for gestational age percentile range from -0.73 to -1.49. The 95% confidence intervals for all these estimates do not contain the null. By contrast, we could not reject the null for non-Hispanic black mothers. Whereas California findings generally support those from Scandinavia, the null results among non-Hispanic black mothers suggest that we do not detect adverse outcomes after a first-born male in all racial and/or ethnic groups. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Lundberg, J.; Conrad, J.; Rolke, W.; Lopez, A.
2010-03-01
A C++ class was written for the calculation of frequentist confidence intervals using the profile likelihood method. Seven combinations of Binomial, Gaussian, Poissonian and Binomial uncertainties are implemented. The package provides routines for the calculation of upper and lower limits, sensitivity and related properties. It also supports hypothesis tests which take uncertainties into account. It can be used in compiled C++ code, in Python or interactively via the ROOT analysis framework. Program summary: Program title: TRolke version 2.0. Catalogue identifier: AEFT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: MIT license. No. of lines in distributed program, including test data, etc.: 3431. No. of bytes in distributed program, including test data, etc.: 21 789. Distribution format: tar.gz. Programming language: ISO C++. Computer: Unix, GNU/Linux, Mac. Operating system: Linux 2.6 (Scientific Linux 4 and 5, Ubuntu 8.10), Darwin 9.0 (Mac OS X 10.5.8). RAM: ~20 MB. Classification: 14.13. External routines: ROOT (http://root.cern.ch/drupal/). Nature of problem: calculating a frequentist confidence interval on the parameter of a Poisson process with statistical or systematic uncertainties in signal efficiency or background. Solution method: profile likelihood method, analytical. Running time: <10 seconds per extracted limit.
VO2 responses to intermittent swimming sets at velocity associated with VO2max.
Libicz, Sebastien; Roels, Belle; Millet, Gregoire P
2005-10-01
While the physiological adaptations following endurance training are relatively well understood, in swimming there is a dearth of knowledge regarding the metabolic responses to interval training (IT). The hypothesis tested predicted that two different endurance swimming IT sets would induce differences in the total time the subjects swam at a high percentage of maximal oxygen consumption (VO(2)max). Ten trained triathletes underwent an incremental test to exhaustion in swimming so that the swimming velocity associated with VO(2)max (vVO(2)max) could be determined. This was followed by a maximal 400-m test and two intermittent sets at vVO(2)max: (a) 16 x 50 m with 15-s rest (IT(50)); (b) 8 x 100 m with 30-s rest (IT(100)). The times sustained above 95% VO(2)max (68.50 +/- 62.69 vs. 145.01 +/- 165.91 sec) and 95% HRmax (146.67 +/- 131.99 vs. 169.78 +/- 203.45 sec, p = 0.54) did not differ significantly between IT(50) and IT(100) (values are mean +/- SD). In conclusion, swimming IT sets of equal time duration at vVO(2)max but of differing work-interval durations led to slightly different VO(2) and HR responses. The time spent above 95% of VO(2)max was twice as long in IT(100) as in IT(50), and a large variability in mean VO(2) and HR values was also observed.
Some consequences of using the Horsfall-Barratt scale for hypothesis testing
USDA-ARS?s Scientific Manuscript database
Comparing treatment effects by hypothesis testing is a common practice in plant pathology. Nearest percent estimates (NPEs) of disease severity were compared to Horsfall-Barratt (H-B) scale data to explore whether there was an effect of assessment method on hypothesis testing. A simulation model ba...
Hypothesis Testing in Task-Based Interaction
ERIC Educational Resources Information Center
Choi, Yujeong; Kilpatrick, Cynthia
2014-01-01
Whereas studies show that comprehensible output facilitates L2 learning, hypothesis testing has received little attention in Second Language Acquisition (SLA). Following Shehadeh (2003), we focus on hypothesis testing episodes (HTEs) in which learners initiate repair of their own speech in interaction. In the context of a one-way information gap…
Classroom-Based Strategies to Incorporate Hypothesis Testing in Functional Behavior Assessments
ERIC Educational Resources Information Center
Lloyd, Blair P.; Weaver, Emily S.; Staubitz, Johanna L.
2017-01-01
When results of descriptive functional behavior assessments are unclear, hypothesis testing can help school teams understand how the classroom environment affects a student's challenging behavior. This article describes two hypothesis testing strategies that can be used in classroom settings: structural analysis and functional analysis. For each…
Hypothesis Testing in the Real World
ERIC Educational Resources Information Center
Miller, Jeff
2017-01-01
Critics of null hypothesis significance testing suggest that (a) its basic logic is invalid and (b) it addresses a question that is of no interest. In contrast to (a), I argue that the underlying logic of hypothesis testing is actually extremely straightforward and compelling. To substantiate that, I present examples showing that hypothesis…
ERIC Educational Resources Information Center
Kwon, Yong-Ju; Jeong, Jin-Su; Park, Yun-Bok
2006-01-01
The purpose of the present study was to test the hypothesis that student's abductive reasoning skills play an important role in the generation of hypotheses on pendulum motion tasks. To test the hypothesis, a hypothesis-generating test on pendulum motion, and a prior-belief test about pendulum motion were developed and administered to a sample of…
Association of Serum Magnesium on Mortality in Patients Admitted to the Intensive Cardiac Care Unit.
Naksuk, Niyada; Hu, Tiffany; Krittanawong, Chayakrit; Thongprayoon, Charat; Sharma, Sunita; Park, Jae Yoon; Rosenbaum, Andrew N; Gaba, Prakriti; Killu, Ammar M; Sugrue, Alan M; Peeraphatdit, Thoetchai; Herasevich, Vitaly; Bell, Malcolm R; Brady, Peter A; Kapa, Suraj; Asirvatham, Samuel J
2017-02-01
Although electrolyte disturbances may affect the cardiac action potential, little is known about the association between serum magnesium and the corrected QT (QTc) interval, as well as clinical outcomes. A total of 8498 consecutive patients admitted to the Mayo Clinic Hospital-Rochester cardiac care unit (CCU) from January 1, 2004 through December 31, 2013 with 2 or more documented serum magnesium levels were studied to test the hypothesis that serum magnesium levels are associated with in-hospital mortality, sudden cardiac death, and QTc interval. Patients were 67 ± 15 years; 62.2% were male. The primary diagnoses for CCU admission were acute myocardial infarction (50.7%) and acute decompensated heart failure (42.5%). Patients with higher magnesium levels were older, more likely male, and had lower glomerular filtration rates. After multivariate analyses adjusted for clinical characteristics including kidney disease and serum potassium, admission serum magnesium levels were not associated with QTc interval or sudden cardiac death. However, admission magnesium levels ≥2.4 mg/dL were independently associated with an increase in mortality when compared with the reference level (2.0 to <2.2 mg/dL), with an adjusted odds ratio of 1.80 and a 95% confidence interval of 1.25-2.59. Sensitivity analyses examining postadmission magnesium levels, and analyses excluding patients with kidney failure or abnormal serum potassium, yielded similar results. This retrospective study unexpectedly observed no association between serum magnesium levels and QTc interval or sudden cardiac death. However, serum magnesium ≥2.4 mg/dL was an independent predictor of increased in-hospital mortality among CCU patients. Copyright © 2016 Elsevier Inc. All rights reserved.
Keteyian, Steven J; Hibner, Brooks A; Bronsteen, Kyle; Kerrigan, Dennis; Aldred, Heather A; Reasons, Lisa M; Saval, Mathew A; Brawner, Clinton A; Schairer, John R; Thompson, Tracey M S; Hill, Jason; McCulloch, Derek; Ehrman, Jonathon K
2014-01-01
We tested the hypothesis that higher-intensity interval training (HIIT) could be deployed in a standard cardiac rehabilitation (CR) setting and would result in a greater increase in cardiorespiratory fitness (i.e., peak oxygen uptake, V̇O₂) versus moderate-intensity continuous training (MCT). Thirty-nine patients participating in a standard phase 2 CR program were randomized to HIIT or MCT; 15 patients and 13 patients in the HIIT and MCT groups, respectively, completed CR and baseline and follow-up cardiopulmonary exercise testing. No patients in either study group experienced an event that required hospitalization during or within 3 hours after exercise. The changes in resting heart rate and blood pressure at follow-up testing were similar for both HIIT and MCT. V̇O₂ at the ventilatory-derived anaerobic threshold increased more (P < .05) with HIIT (3.0 ± 2.8 mL·kg⁻¹·min⁻¹) versus MCT (0.7 ± 2.2 mL·kg⁻¹·min⁻¹). During follow-up testing, submaximal heart rate at the end of stage 2 of the exercise test was significantly lower within both the HIIT and MCT groups, with no difference noted between groups. Peak V̇O₂ improved more after CR in patients in HIIT versus MCT (3.6 ± 3.1 mL·kg⁻¹·min⁻¹ vs 1.7 ± 1.7 mL·kg⁻¹·min⁻¹; P < .05). Among patients with stable coronary heart disease on evidence-based therapy, HIIT was successfully integrated into a standard CR setting and, when compared with MCT, resulted in greater improvement in peak exercise capacity and submaximal endurance.
Making Knowledge Delivery Failsafe: Adding Step Zero in Hypothesis Testing
ERIC Educational Resources Information Center
Pan, Xia; Zhou, Qiang
2010-01-01
Knowledge of statistical analysis is increasingly important for professionals in modern business. For example, hypothesis testing is one of the critical topics for quality managers and team workers in Six Sigma training programs. Delivering the knowledge of hypothesis testing effectively can be an important step for the incapable learners or…
Statistics 101 for Radiologists.
Anvari, Arash; Halpern, Elkan F; Samir, Anthony E
2015-10-01
Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
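Several of the diagnostic-accuracy quantities the review covers follow directly from a 2x2 table of test results. A brief illustrative sketch (the function name and example counts are mine):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Compute common test-accuracy measures from a 2x2 table:
    tp/fp/fn/tn = true/false positives and negatives."""
    sensitivity = tp / (tp + fn)   # P(test positive | disease present)
    specificity = tn / (tn + fp)   # P(test negative | disease absent)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "LR+": sensitivity / (1.0 - specificity),  # positive likelihood ratio
        "LR-": (1.0 - sensitivity) / specificity,  # negative likelihood ratio
    }
```

With 90 true positives, 10 false positives, 10 false negatives, and 90 true negatives, sensitivity and specificity are both 0.90, LR+ is 9.0, and LR- is about 0.11.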
Sanderson, David J; Good, Mark A; Skelton, Kathryn; Sprengel, Rolf; Seeburg, Peter H; Rawlins, J Nicholas P; Bannerman, David M
2009-06-01
The GluA1 AMPA receptor subunit is a key mediator of hippocampal synaptic plasticity and is especially important for a rapidly-induced, short-lasting form of potentiation. GluA1 gene deletion impairs hippocampus-dependent, spatial working memory, but spares hippocampus-dependent spatial reference memory. These findings may reflect the necessity of GluA1-dependent synaptic plasticity for short-term memory of recently visited places, but not for the ability to form long-term associations between a particular spatial location and an outcome. This hypothesis is in concordance with the theory that short-term and long-term memory depend on dissociable psychological processes. In this study we tested GluA1-/- mice on both short-term and long-term spatial memory using a simple novelty preference task. Mice were given a series of repeated exposures to a particular spatial location (the arm of a Y-maze) before their preference for a novel spatial location (the unvisited arm of the maze) over the familiar spatial location was assessed. GluA1-/- mice were impaired if the interval between the trials was short (1 min), but showed enhanced spatial memory if the interval between the trials was long (24 h). This enhancement was caused by the interval between the exposure trials rather than the interval prior to the test, thus demonstrating enhanced learning and not simply enhanced performance or expression of memory. This seemingly paradoxical enhancement of hippocampus-dependent spatial learning may be caused by GluA1 gene deletion reducing the detrimental effects of short-term memory on subsequent long-term learning. Thus, these results support a dual-process model of memory in which short-term and long-term memory are separate and sometimes competitive processes.
Physician attitudes about prehospital 12-lead ECGs in chest pain patients.
Brainard, Andrew H; Froman, Philip; Alarcon, Maria E; Raynovich, Bill; Tandberg, Dan
2002-01-01
The prehospital 12-lead electrocardiogram (ECG) has become a standard of care. For the prehospital 12-lead ECG to be useful clinically, however, cardiologists and emergency physicians (EP) must view the test as useful. This study measured physician attitudes about the prehospital 12-lead ECG. This study tested the hypothesis that physicians had "no opinion" regarding the prehospital 12-lead ECG. An anonymous survey was conducted to measure EP and cardiologist attitudes toward prehospital 12-lead ECGs. Hypothesis tests against "no opinion" (VAS = 50 mm) were made with 95% confidence intervals (CIs), and intergroup comparisons were made with the Student's t-test. Seventy-one of 87 (81.6%) surveys were returned. Twenty-five (67.6%) cardiologists responded and 45 (90%) EPs responded. Both groups of physicians viewed prehospital 12-lead ECGs as beneficial (mean = 69 mm; 95% CI = 65-74 mm). All physicians perceived that ECGs positively influence preparation of staff (mean = 63 mm; 95% CI = 60-72 mm) and that ECGs transmitted to hospitals would be beneficial (mean = 66 mm; 95% CI = 60-72 mm). Cardiologists had more favorable opinions than did EPs. The ability of paramedics to interpret ECGs was not seen as important (mean = 50 mm; 95% CI = 43-56 mm). The justifiable increase in field time was perceived to be 3.2 minutes (95% CI = 2.7-3.8 minutes), with 23 (32.8%) preferring that it be done on scene, 46 (65.7%) during transport, and one (1.4%) not at all. Prehospital 12-lead ECGs generally are perceived as worthwhile by cardiologists and EPs. Cardiologists have a higher opinion of the value and utility of field ECGs. Since the reduction in mortality from the 12-lead ECG is small, it is likely that positive physician attitudes are attributable to other factors.
Testing of Hypothesis in Equivalence and Non Inferiority Trials-A Concept.
Juneja, Atul; Aggarwal, Abha R; Adhikari, Tulsi; Pandey, Arvind
2016-04-01
Establishing the appropriate hypothesis is one of the important steps in carrying out statistical tests and analyses, and understanding it is important for interpreting the results of statistical analysis. The current communication attempts to explain the concept of hypothesis testing in non-inferiority and equivalence trials, where the null hypothesis is just the reverse of what is set up for conventional superiority trials. As in conventional testing, the null hypothesis is set up to be rejected, in order to establish the fact the researcher intends to prove. It is important to mention that equivalence or non-inferiority cannot be proved by accepting the null hypothesis of no difference. Hence, establishing the appropriate statistical hypothesis is extremely important to arrive at a meaningful conclusion for the objectives set in research.
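One common way to formalize the reversed null described here is the two one-sided tests (TOST) procedure for equivalence, sketched below with a normal approximation. This is an illustration of the general idea, not a method from this communication; the names and numbers are assumptions:

```python
import math

def normal_sf(z):
    # Upper-tail probability of the standard normal, via erfc.
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def tost_equivalence(diff, se, margin, alpha=0.05):
    """Two one-sided tests: the null is non-equivalence
    (true difference <= -margin or >= +margin); equivalence is
    concluded only if BOTH one-sided nulls are rejected."""
    p_lower = normal_sf((diff + margin) / se)  # tests H0: difference <= -margin
    p_upper = normal_sf((margin - diff) / se)  # tests H0: difference >= +margin
    p = max(p_lower, p_upper)
    return p, p < alpha
```

Note that a non-significant ordinary test of difference is not evidence of equivalence; equivalence is established only by rejecting both one-sided nulls against a prespecified margin.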
A hypothesis on the biological origins and social evolution of music and dance.
Wang, Tianyan
2015-01-01
The origins of music and musical emotions are still an enigma; here I propose a comprehensive hypothesis on the origins and evolution of music, dance, and speech from a biological and sociological perspective. I suggest that every pitch interval between neighboring notes in music represents a corresponding movement pattern through interpreting the Doppler effect of sound, which not only provides a possible explanation for the transposition invariance of music, but also integrates music and dance into a common form: rhythmic movement. Accordingly, investigating the origins of music poses the question: why do humans appreciate rhythmic movements? I suggest that human appreciation of rhythmic movements and rhythmic events developed from the natural selection of organisms adapting to internal and external rhythmic environments. The perception and production of, as well as synchronization with, external and internal rhythms are so vital for an organism's survival and reproduction that animals have a rhythm-related reward and emotion (RRRE) system. The RRRE system enables the appreciation of rhythmic movements and events, and is integral to the origination of music, dance and speech. The first type of rewards and emotions (rhythm-related rewards and emotions, RRREs) are evoked by music and dance, and have biological and social functions, which in turn promote the evolution of music, dance and speech. These functions also evoke a second type of rewards and emotions, which I name society-related rewards and emotions (SRREs). The neural circuits of RRREs and SRREs develop in species formation and personal growth, with congenital and acquired characteristics, respectively; that is, music is the combination of nature and culture. This hypothesis provides probable selection pressures and outlines the evolution of music, dance, and speech.
The links between the Doppler effect and the RRREs and SRREs can be empirically tested, making the current hypothesis scientifically concrete.
Statistical significance versus clinical relevance.
van Rijn, Marieke H C; Bech, Anneke; Bouyer, Jean; van den Brand, Jan A J G
2017-04-01
In March this year, the American Statistical Association (ASA) posted a statement on the correct use of P-values, in response to a growing concern that the P-value is commonly misused and misinterpreted. We aim to translate these warnings given by the ASA into a language more easily understood by clinicians and researchers without a deep background in statistics. Moreover, we intend to illustrate the limitations of P-values, even when used and interpreted correctly, and bring more attention to the clinical relevance of study findings, using two recently reported studies as examples. We argue that P-values are often misinterpreted. A common mistake is saying that P < 0.05 means that the null hypothesis is false, and P ≥ 0.05 means that the null hypothesis is true. The correct interpretation of a P-value of 0.05 is that, if the null hypothesis were indeed true, a similar or more extreme result would occur 5% of the time upon repeating the study in a similar sample. In other words, the P-value informs about the likelihood of the data given the null hypothesis, and not the other way around. A possible alternative related to the P-value is the confidence interval (CI). It provides more information on the magnitude of an effect and the imprecision with which that effect was estimated. However, there is no magic bullet to replace P-values and stop erroneous interpretation of scientific results. Scientists and readers alike should make themselves familiar with the correct, nuanced interpretation of statistical tests, P-values and CIs. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
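The correct reading of the P-value described above can be checked by simulation: when the null hypothesis is true, a similar or more extreme result occurs about 5% of the time at the 0.05 threshold. A small illustrative sketch (values are mine, not from the paper):

```python
import math
import random
import statistics

def one_sample_z_p(sample, mu0, sigma):
    """Two-sided p-value for a one-sample z-test with known sigma."""
    z = (statistics.mean(sample) - mu0) / (sigma / math.sqrt(len(sample)))
    return math.erfc(abs(z) / math.sqrt(2.0))  # P(|Z| >= |z|)

random.seed(1)
trials = 5000
false_positives = 0
for _ in range(trials):
    sample = [random.gauss(0.0, 1.0) for _ in range(30)]  # null is true here
    if one_sample_z_p(sample, mu0=0.0, sigma=1.0) < 0.05:
        false_positives += 1
rate = false_positives / trials  # about 0.05: the nominal false-positive rate
```

The simulation says nothing about the probability that the null is true in any given study; it only shows how often the data would look this extreme if it were, which is all a P-value measures.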
Familiarity speeds up visual short-term memory consolidation.
Xie, Weizhen; Zhang, Weiwei
2017-06-01
Existing long-term memory (LTM) can boost the number of retained representations over a short delay in visual short-term memory (VSTM). However, it is unclear whether and how prior LTM affects the initial process of transforming fragile sensory inputs into durable VSTM representations (i.e., VSTM consolidation). The consolidation speed hypothesis predicts faster consolidation for familiar relative to unfamiliar stimuli. Alternatively, the perceptual boost hypothesis predicts that the advantage in perceptual processing of familiar stimuli should add a constant boost for familiar stimuli during VSTM consolidation. To test these competing hypotheses, the present study examined how the large variance in participants' prior multimedia experience with Pokémon affected VSTM for Pokémon. In Experiment 1, the amount of time allowed for VSTM consolidation was manipulated by presenting consolidation masks at different intervals after the onset of to-be-remembered Pokémon characters. First-generation Pokémon characters that participants were more familiar with were consolidated faster into VSTM as compared with recent-generation Pokémon characters that participants were less familiar with. These effects were absent in participants who were unfamiliar with both generations of Pokémon. Although familiarity also increased the number of retained Pokémon characters when consolidation was uninterrupted but still incomplete due to insufficient encoding time in Experiment 1, this capacity effect was absent in Experiment 2 when consolidation was allowed to complete with sufficient encoding time. Together, these results support the consolidation speed hypothesis over the perceptual boost hypothesis and highlight the importance of assessing experimental effects on both processing and representation aspects of VSTM. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
TI-59 Programs for Multiple Regression.
1980-05-01
The general linear hypothesis model of full rank [Graybill, 1961] can be written as Y = Xβ + ε, ε ~ N(0, σ²I), where Y is the n×1 vector of observations, X is the n×k design matrix, β is the k×1 coefficient vector, and ε is the n×1 error vector. ... a "reduced model" solution, and confidence intervals for linear functions of the coefficients can be obtained using (X'X)⁻¹ and σ̂², based on the t distribution. ... For the general linear hypothesis model Y = Xβ + ε, the program calculates ...
Cadenaro, Milena; Breschi, Lorenzo; Nucci, Cesare; Antoniolli, Francesca; Visintini, Erika; Prati, Carlo; Matis, Bruce A; Di Lenarda, Roberto
2008-01-01
This study evaluated the morphological effects produced in vivo by two in-office bleaching agents on enamel surface roughness using a noncontact profilometric analysis of epoxy replicas. The null hypothesis tested was that there would be no difference in the micromorphology of the enamel surface during or after bleaching with two different bleaching agents. Eighteen subjects were selected and randomly assigned to two treatment groups (n=9). The tooth whitening materials tested were 38% hydrogen peroxide (HP) (Opalescence Xtra Boost) and 35% carbamide peroxide (CP) (Rembrandt Quik Start). The bleaching agents were applied in accordance with manufacturer protocols. The treatments were repeated four times at one-week intervals. High precision impressions of the upper right incisor were taken at baseline as the control (CTRL) and after each bleaching treatment (T0: first application, T1: second application at one week, T2: third application at two weeks and T3: fourth application at three weeks). Epoxy resin replicas were poured from impressions, and the surface roughness was analyzed by means of a non-contact profilometer (Talysurf CLI 1000). Epoxy replicas were then observed using SEM. All data were statistically analyzed using ANOVA and differences were determined with a t-test. No significant differences in surface roughness were found on enamel replicas using either 38% hydrogen peroxide or 35% carbamide peroxide in vivo. This in vivo study supports the null hypothesis that two in-office bleaching agents, with either a high concentration of hydrogen or carbamide peroxide, do not alter enamel surface roughness, even after multiple applications.
Periodicity of extinction: A 1988 update
NASA Technical Reports Server (NTRS)
Sepkoski, J. John, Jr.
1988-01-01
The hypothesis that events of mass extinction recur periodically at approximately 26 my intervals is an empirical claim based on analysis of data from the fossil record. The hypothesis has become closely linked with catastrophism because several events in the periodic series are associated with evidence of extraterrestrial impacts, and terrestrial forcing mechanisms with long, periodic recurrences are not easily conceived. Astronomical mechanisms that have been hypothesized include undetected solar companions and solar oscillation about the galactic plane, which induce comet showers and result in impacts on Earth at regular intervals. Because these mechanisms are speculative, they have been the subject of considerable controversy, as has the hypothesis of periodicity of extinction. In response to criticisms and uncertainties, a data base was developed on times of extinction of marine animal genera. A time series is given and analyzed with 49 sample points for the per-genus extinction rate from the Late Permian to the Recent. An unexpected pattern in the data is the uniformity of magnitude of many of the periodic extinction events. Observations suggest that the sequence of extinction events might be the result of two sets of mechanisms: a periodic forcing that normally induces only moderate amounts of extinction, and independent incidents or catastrophes that, when coincident with the periodic forcing, amplify its signal and produce major mass extinctions.
Facio, Flavia M; Sapp, Julie C; Linn, Amy; Biesecker, Leslie G
2012-10-10
Massively-parallel sequencing (MPS) technologies create challenges for informed consent of research participants given the enormous scale of the data and the wide range of potential results. We propose that the consent process in these studies be based on whether they use MPS to test a hypothesis or to generate hypotheses. To demonstrate the differences in these approaches to informed consent, we describe the consent processes for two MPS studies. The purpose of our hypothesis-testing study is to elucidate the etiology of rare phenotypes using MPS. The purpose of our hypothesis-generating study is to test the feasibility of using MPS to generate clinical hypotheses, and to approach the return of results as an experimental manipulation. Issues to consider in both designs include: volume and nature of the potential results, primary versus secondary results, return of individual results, duty to warn, length of interaction, target population, and privacy and confidentiality. The categorization of MPS studies as hypothesis-testing versus hypothesis-generating can help to clarify the issue of so-called incidental or secondary results for the consent process, and aid the communication of the research goals to study participants.
An Exercise for Illustrating the Logic of Hypothesis Testing
ERIC Educational Resources Information Center
Lawton, Leigh
2009-01-01
Hypothesis testing is one of the more difficult concepts for students to master in a basic, undergraduate statistics course. Students often are puzzled as to why statisticians don't simply calculate the probability that a hypothesis is true. This article presents an exercise that forces students to lay out on their own a procedure for testing a…
Hypothesis Testing Using Spatially Dependent Heavy Tailed Multisensor Data
2014-12-01
Office of Research, 113 Bowne Hall, Syracuse, NY 13244-1200. ABSTRACT: Hypothesis testing using spatially dependent heavy-tailed multisensor data. Report... consistent with the null hypothesis of linearity and can be used to estimate the distribution of a test statistic that can discriminate between the null... Test for nonlinearity. Histogram is generated using the surrogate data. The statistic of the original time series is represented by the solid line.
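The surrogate-data procedure the snippet alludes to — building an empirical null distribution for a nonlinearity statistic from spectrum-preserving surrogates — can be sketched generically as follows (a phase-randomization sketch in the style of Theiler's surrogate-data method, not the report's actual code):

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """One surrogate series with the same amplitude spectrum as x but
    random phases, which destroys any nonlinear temporal structure."""
    n = len(x)
    amps = np.abs(np.fft.rfft(x))
    phases = rng.uniform(0.0, 2.0 * np.pi, len(amps))
    phases[0] = 0.0          # keep the DC (mean) term real
    if n % 2 == 0:
        phases[-1] = 0.0     # keep the Nyquist term real for even n
    return np.fft.irfft(amps * np.exp(1j * phases), n)

def surrogate_test(x, statistic, n_surr=99, seed=0):
    """Fraction of surrogates whose statistic meets or exceeds the observed
    one; a small value suggests the original series is not linear noise."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    obs = statistic(x)
    surr = [statistic(phase_randomized_surrogate(x, rng)) for _ in range(n_surr)]
    return sum(s >= obs for s in surr) / n_surr
```

Because the surrogates share the original power spectrum, any statistic sensitive only to linear correlations is distributed identically under the null; a nonlinearity statistic falling in the tail of the surrogate histogram (the solid line versus the histogram in the report's figure) rejects linearity.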
Phytosterol glycosides reduce cholesterol absorption in humans
Lin, Xiaobo; Ma, Lina; Racette, Susan B.; Anderson Spearie, Catherine L.; Ostlund, Richard E.
2009-01-01
Dietary phytosterols inhibit intestinal cholesterol absorption and regulate whole body cholesterol excretion and balance. However, they are biochemically heterogeneous and a portion is glycosylated in some foods with unknown effects on biological activity. We tested the hypothesis that phytosterol glycosides reduce cholesterol absorption in humans. Phytosterol glycosides were extracted and purified from soy lecithin in a novel two-step process. Cholesterol absorption was measured in a series of three single-meal tests given at intervals of 2 wk to each of 11 healthy subjects. In a randomized crossover design, participants received ∼300 mg of added phytosterols in the form of phytosterol glycosides or phytosterol esters, or placebo in a test breakfast also containing 30 mg cholesterol-d7. Cholesterol absorption was estimated by mass spectrometry of plasma cholesterol-d7 enrichment 4–5 days after each test. Compared with the placebo test, phytosterol glycosides reduced cholesterol absorption by 37.6 ± 4.8% (P < 0.0001) and phytosterol esters by 30.6 ± 3.9% (P = 0.0001). These results suggest that natural phytosterol glycosides purified from lecithin are bioactive in humans and should be included in methods of phytosterol analysis and tables of food phytosterol content. PMID:19246636
Small sample mediation testing: misplaced confidence in bootstrapped confidence intervals.
Koopman, Joel; Howe, Michael; Hollenbeck, John R; Sin, Hock-Peng
2015-01-01
Bootstrapping is an analytical tool commonly used in psychology to test the statistical significance of the indirect effect in mediation models. Bootstrapping proponents have particularly advocated for its use for samples of 20-80 cases. This advocacy has been heeded, especially in the Journal of Applied Psychology, as researchers are increasingly utilizing bootstrapping to test mediation with samples in this range. We discuss reasons to be concerned with this escalation, and in a simulation study focused specifically on this range of sample sizes, we demonstrate not only that bootstrapping has insufficient statistical power to provide a rigorous hypothesis test in most conditions but also that bootstrapping has a tendency to exhibit an inflated Type I error rate. We then extend our simulations to investigate an alternative empirical resampling method as well as a Bayesian approach and demonstrate that they exhibit comparable statistical power to bootstrapping in small samples without the associated inflated Type I error. Implications for researchers testing mediation hypotheses in small samples are presented. For researchers wishing to use these methods in their own research, we have provided R syntax in the online supplemental materials. (c) 2015 APA, all rights reserved.
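The resampling procedure under discussion — a percentile bootstrap of the indirect effect a×b in a simple x → m → y mediation model — can be sketched as below. This is an illustrative Python version under standard OLS assumptions; the paper's own R syntax lives in its supplemental materials and is not reproduced here:

```python
import numpy as np

def bootstrap_indirect_ci(x, m, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the indirect effect a*b, where
    a = slope of m on x, and b = slope of y on m controlling for x."""
    rng = np.random.default_rng(seed)
    x, m, y = (np.asarray(v, float) for v in (x, m, y))
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]         # x -> m path
        Z = np.column_stack([np.ones(n), xb, mb])
        coef, *_ = np.linalg.lstsq(Z, yb, rcond=None)
        b = coef[2]                          # m -> y path, controlling for x
        estimates.append(a * b)
    lo, hi = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return float(lo), float(hi)
```

The hypothesis test implied by this interval (reject mediation = 0 when the CI excludes zero) is exactly the procedure whose small-sample power and Type I error the article examines.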
Corrosion behavior of self-ligating and conventional metal brackets.
Maia, Lúcio Henrique Esmeraldo Gurgel; Lopes Filho, Hibernon; Ruellas, Antônio Carlos de Oliveira; Araújo, Mônica Tirre de Souza; Vaitsman, Delmo Santiago
2014-01-01
To test the null hypothesis that the aging process in self-ligating brackets is not higher than in conventional brackets. Twenty-five conventional (GN-3M/Unitek; GE-GAC; VE-Aditek) and 25 self-ligating (SCs-3M/Unitek; INs-GAC; ECs-Aditek) metal brackets from three manufacturers (n = 150) were submitted to an aging process in 0.9% NaCl solution at a constant temperature of 37 ± 1°C for 21 days. The content of nickel, chromium and iron ions in the solution collected at intervals of 7, 14 and 21 days was quantified by atomic absorption spectrophotometry. After the aging process, the brackets were analyzed by scanning electron microscopy (SEM) under 22X and 1,000X magnifications. Comparison of metal release in self-ligating and conventional brackets from the same manufacturer showed that the SCs group released more nickel (p < 0.05) than the GN group after 7 and 14 days, but less chromium (p < 0.05) after 14 days and less iron (p < 0.05) at the three experimental time intervals. The INs group released less iron (p < 0.05) than the GE group after 7 days and less nickel, chromium and iron (p < 0.05) after 14 and 21 days. The ECs group released more nickel, chromium and iron (p < 0.05) than the VE group after 14 days, but released less nickel and chromium (p < 0.05) after 7 days and less chromium and iron (p < 0.05) after 21 days. The SEM analysis revealed alterations on surface topography of conventional and self-ligating brackets. The aging process in self-ligating brackets was not greater than in conventional brackets from the same manufacturer. The null hypothesis was accepted.
Sylvester, Chad M.; Hudziak, James J.; Gaffrey, Michael S.; Barch, Deanna M.; Luby, Joan L.
2015-01-01
Attention biases towards threatening and sad stimuli are associated with pediatric anxiety and depression, respectively. The basic cognitive mechanisms associated with attention biases in youth, however, remain unclear. Here, we tested the hypothesis that threat bias (selective attention for threatening versus neutral stimuli) but not sad bias relies on stimulus-driven attention. We collected measures of stimulus-driven attention, threat bias, sad bias, and current clinical symptoms in youth with a history of an anxiety disorder and/or depression (ANX/DEP; n=40) as well as healthy controls (HC; n=33). Stimulus-driven attention was measured with a non-emotional spatial orienting task, while threat bias and sad bias were measured at a short time interval (150 ms) with a spatial orienting task using emotional faces and at a longer time interval (500 ms) using a dot-probe task. In ANX/DEP but not HC, early attention bias towards threat was negatively correlated with later attention bias to threat, suggesting that early threat vigilance was associated with later threat avoidance. Across all subjects, stimulus-driven orienting was not correlated with early threat bias but was negatively correlated with later threat bias, indicating that rapid stimulus-driven orienting is linked to later threat avoidance. No parallel relationships were detected for sad bias. Current symptoms of depression but not anxiety were related to decreased stimulus-driven attention. Together, these results are consistent with the hypothesis that threat bias but not sad bias relies on stimulus-driven attention. These results inform the design of attention bias modification programs that aim to reverse threat biases and reduce symptoms associated with pediatric anxiety and depression. PMID:25702927
Suárez-Medina, Ramón; Venero-Fernández, Silvia Josefina; Britton, John; Fogarty, Andrew W
2016-09-01
The increase in prevalence of obesity is a possible risk factor for asthma in developed countries. As the people of Cuba experienced an acute population-based decrease in weight in the 1990s, we tested the hypothesis that national weight loss and subsequent weight gain were associated with reciprocal changes in asthma mortality. Data were obtained on mortality rates from asthma and COPD in Cuba from 1964 to 2014, along with data on prevalence of obesity for this period. Joinpoint analysis was used to identify inflexion points in the data. Although the prevalence of obesity decreased from 14% to 7% between 1990 and 1995, over the same time period the rate of asthma mortality increased from 4.5 deaths per 100,000 population to 5.4 deaths per 100,000 population. The prevalence of obesity subsequently increased to 15% in 2010, while the asthma mortality rate dropped to 2.3 deaths per 100,000 population. The optimal model for fit of asthma mortality over time gave an increasing linear association from 1964 to 1995 (95% confidence interval for inflexion point: 1993 to 1997), followed by a decrease in asthma mortality rates from 1995 to 1999 (95% confidence interval for inflexion point: 1997 to 2002). These national data do not support the hypothesis that population-based changes in weight are associated with asthma mortality. Other possible explanations for the large decreases in asthma mortality rates include changes in pollution or better delivery of medical care over the same time period. Copyright © 2016 Elsevier Ltd. All rights reserved.
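The joinpoint analysis used above to locate inflexion points amounts to choosing breakpoints that minimize the residual error of piecewise-linear trends. A minimal single-breakpoint sketch (the real Joinpoint software additionally uses permutation tests and supports multiple breakpoints; the series below is synthetic, not the Cuban data):

```python
import numpy as np

def best_single_joinpoint(years, rates):
    """Grid-search one inflexion point: fit separate linear trends before
    and after each candidate year and keep the split with the lowest
    total squared error."""
    years = np.asarray(years, float)
    rates = np.asarray(rates, float)

    def sse(x, y):
        slope, intercept = np.polyfit(x, y, 1)
        return float(np.sum((y - (slope * x + intercept)) ** 2))

    best = None
    for i in range(2, len(years) - 2):   # require >= 2 points per segment
        total = sse(years[:i], rates[:i]) + sse(years[i:], rates[i:])
        if best is None or total < best[1]:
            best = (years[i], total)
    return best[0]
```

On a series that rises to a peak and then falls, the search recovers a breakpoint at (or immediately after) the kink year.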
Household contact with pets and birds and risk of lymphoma.
Bellizzi, Saverio; Cocco, Pierluigi; Zucca, Mariagrazia; D'Andrea, Ileana; Sesler, Simonetta; Monne, Maria; Onida, Angela; Piras, Giovanna; Uras, Antonella; Angelucci, Emanuele; Gabbas, Attilio; Rais, Marco; Nitsch, Dorothea; Ennas, Maria G
2011-02-01
Contact with household pets has been suggested to be inversely associated with lymphoma risk. We tested the hypothesis in a case-control study of lymphoma in the Sardinia region of Italy. Cases were 326 patients, first diagnosed with lymphoma in 1999-2003. Controls were 464 population controls, frequency matched to cases by age, gender, and area of residence. In-person interviews included self-reported household contact with pets and birds, type of pet(s), and age at starting contact. Frequent contact with birds was inversely associated with lymphoma, and particularly B-cell non-Hodgkin lymphoma (odds ratio [OR] = 0.6, 95% confidence interval [95% CI]: 0.4, 0.9). Contact with chickens accounted for this inverse association, which was strongest for first contact occurring at age ≤8 years (OR = 0.4, 95% CI: 0.2, 1.0). No association was observed when first contact occurred at age 9 or older. Contact with any pets was inversely associated with risk of diffuse large B-cell lymphoma (OR = 0.4, 95% CI: 0.2, 1.0), but not other lymphoma subtypes. Our results support the hypothesis that early-life exposure to pets, birds and particularly with chickens might be associated with a reduced risk of lymphoma.
Smits, Jasper A J; Tart, Candyce D; Presnell, Katherine; Rosenfield, David; Otto, Michael W
2010-01-01
A growing body of work suggests that obese adults are less likely to adhere to exercise than normal-weight adults because they experience greater levels of discomfort and distress during exercise sessions. The present study introduces and provides a preliminary test of the hypothesis that the distress experienced during exercise among persons with elevated body mass index is particularly high among those who fear somatic arousal (i.e. elevated anxiety sensitivity [AS]). Young adults were randomly assigned to complete 20 min of treadmill exercise (at 70% of their age-adjusted predicted maximum heart rate) or 20 min of rest. Body mass, AS, and negative affect were measured at baseline, and fear was measured at 4-min intervals during the experimental phase. Consistent with the authors' hypothesis, there was a significant Exercise × BMI × ASI interaction (sr² = .08), suggesting that the greatest fear levels during exercise were observed among participants with high body mass, but only if they also had elevated AS. These findings offer a new approach for identifying specific vulnerable individuals and have clear clinical implications, given that the amplification factor of AS can be modified with clinical intervention.
NASA Astrophysics Data System (ADS)
Tarduno, John; Bono, Richard; Cottrell, Rory
2015-04-01
Recent estimates of core thermal conductivity are larger than prior values by a factor of approximately three. These new estimates suggest that the inner core is a relatively young feature, perhaps as young as 500 million years old, and that the core-mantle heat flux required to drive the early dynamo was greater than previously assumed (Nimmo, 2015). Here, we focus on paleomagnetic studies of two key time intervals important for understanding core evolution in light of the revisions of core conductivity values. 1. Hadean to Paleoarchean (4.4-3.4 Ga). Single silicate crystal paleointensity analyses suggest a relatively strong magnetic field at 3.4-3.45 Ga (Tarduno et al., 2010). Paleointensity data from zircons of the Jack Hills (Western Australia) further suggest the presence of a geodynamo between 3.5 and 3.6 Ga (Tarduno and Cottrell, 2014). We will discuss our efforts to test for the absence/presence of the geodynamo in older Eoarchean and Hadean times. 2. Ediacaran to Early Cambrian (~635-530 Ma). Disparate directions seen in some paleomagnetic studies from this time interval have been interpreted as recording inertial interchange true polar wander (IITPW). Recent single silicate paleomagnetic analyses fail to find evidence for IITPW; instead a reversing field overprinted by secondary magnetizations is defined (Bono and Tarduno, 2015). Preliminary analyses suggest the field may have been unusually weak. We will discuss our on-going tests of the hypothesis that this interval represents the time of onset of inner core growth. References: Bono, R.K. & Tarduno, J.A., Geology, in press (2015); Nimmo, F., Treatise Geophys., in press (2015); Tarduno, J.A., et al., Science (2010); Tarduno, J.A. & Cottrell, R.D., AGU Fall Meeting (2014).
Pithon, Matheus Melo; dos Santos, Rogerio Lacerda; Judice, Renata Lima Pasini; de Assuncao, Paulo Sergio; Restle, Luciana
2013-11-01
Sterilisation using peracetic acid (PAA) has been advocated for orthodontic elastic bands. However, cane-loaded elastomeric ligatures can also become contaminated during processing, packaging, and manipulation before placement in the oral cavity and are therefore susceptible to, and possible causes of, cross-contamination. To test the hypothesis that 0.25% peracetic acid (PAA), following the sterilisation of elastomers, influences the cytotoxicity of elastomeric ligatures on L929 cell lines. Four hundred and eighty silver elastomeric ligatures were divided into 4 groups of 120 ligatures to produce Group TP (natural latex, bulk pack, TP Orthodontics), Group M1 (Polyurethane, bulk pack, Morelli), Group M2 (Polyurethane, cane-loaded, Morelli) and Group U (Polyurethane, cane-loaded, Uniden). Of the 120 ligatures in each group, 100 were sterilised in 0.25% PAA at time intervals (N = 20) of 1 hour, 2 hours, 3 hours, 4 hours and 5 hours. The 20 remaining elastomeric ligatures in each group were not sterilised and served as controls. Cytotoxicity was assessed using L929 cell lines and a dye-uptake method. Analysis of variance (ANOVA), followed by the Tukey post hoc test (p < 0.05), determined statistical significance. There was a significant difference between TP, Morelli and Uniden elastomerics (p < 0.05), but no difference between the two types of Morelli elastomerics at the 1 hour time interval. In addition, there was a significant difference between Group CC and the other groups assessed, except between Groups CC and TP at the 1 hour time interval. The non-sterilised elastomeric ligatures showed similar cell viability to that observed after 1 hour of standard sterilisation. PAA did not significantly influence the cytotoxicity of elastomeric ligatures after a sterilisation time of 1 hour and is therefore recommended for clinical use.
Lower-body negative-pressure exercise and bed-rest-mediated orthostatic intolerance
NASA Technical Reports Server (NTRS)
Schneider, Suzanne M.; Watenpaugh, Donald E.; Lee, Stuart M C.; Ertl, Andrew C.; Williams, W. Jon; Ballard, Richard E.; Hargens, Alan R.
2002-01-01
PURPOSE: Supine, moderate exercise is ineffective in maintaining orthostatic tolerance after bed rest (BR). Our purpose was to test the hypothesis that adding an orthostatic stress during exercise would maintain orthostatic function after BR. METHODS: Seven healthy men completed duplicate 15-d 6 degrees head-down tilt BR using a crossover design. During one BR, subjects did not exercise (CON). During another BR, subjects exercised for 40 min·d^-1 on a supine treadmill against 50-60 mm Hg LBNP (EX). Exercise training consisted of an interval exercise protocol of 2- to 3-min intervals alternating between 41% and 65% VO2max. Before and after BR, an LBNP tolerance test was performed in which the LBNP chamber was decompressed in 10-mm Hg stages every 3 min until presyncope. RESULTS: LBNP tolerance, as assessed by the cumulative stress index (CSI), decreased after BR in both the CON (830 ± 144 mm Hg·min pre-BR vs 524 ± 56 mm Hg·min post-BR) and the EX (949 ± 118 mm Hg·min pre-BR vs 560 ± 44 mm Hg·min post-BR) conditions. However, subtolerance (0 to -50 mm Hg LBNP) heart rates were lower and systolic blood pressures were better maintained after BR in the EX condition compared with CON. CONCLUSION: Moderate exercise performed against LBNP simulating an upright 1-g environment failed to protect orthostatic tolerance after 15 d of BR.
Hyltén-Cavallius, Louise; Iepsen, Eva W; Christiansen, Michael; Graff, Claus; Linneberg, Allan; Pedersen, Oluf; Holst, Jens J; Hansen, Torben; Torekov, Signe S; Kanters, Jørgen K
2017-08-01
Both hypoglycemia and severe hyperglycemia constitute known risk factors for cardiac repolarization changes potentially leading to malignant arrhythmias. Patients with loss of function mutations in KCNQ1 are characterized by long QT syndrome (LQTS) and may be at increased risk for glucose-induced repolarization disturbances. The purpose of this study was to test the hypothesis that KCNQ1 LQTS patients are at particular risk for cardiac repolarization changes during the relative hyperglycemia that occurs after an oral glucose load. Fourteen KCNQ1 LQTS patients and 28 control participants matched for gender, body mass index, and age underwent a 3-hour oral 75-g glucose tolerance test with ECGs obtained at 7 time points. Fridericia corrected QT interval (QTcF), Bazett corrected QT interval (QTcB), and the Morphology Combination Score (MCS) were calculated. QTc and MCS increased in both groups. MCS remained elevated until 150 minutes after glucose ingestion, and the maximal change from baseline was larger among KCNQ1 LQTS patients compared with control subjects (0.28 ± 0.27 vs 0.15 ± 0.13; P <.05). Relative hyperglycemia induced by ingestion of 75-g glucose caused cardiac repolarization disturbances that were more severe in KCNQ1 LQTS patients compared with control subjects. Copyright © 2017 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
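For reference, the two corrections named in this abstract are conventionally computed as QTcB = QT/√RR (Bazett) and QTcF = QT/RR^(1/3) (Fridericia), with the RR interval in seconds. A direct transcription:

```python
def qtc_bazett(qt_ms, rr_s):
    """Bazett-corrected QT interval: QTc = QT / sqrt(RR), RR in seconds."""
    return qt_ms / (rr_s ** 0.5)

def qtc_fridericia(qt_ms, rr_s):
    """Fridericia-corrected QT interval: QTc = QT / RR**(1/3), RR in seconds."""
    return qt_ms / (rr_s ** (1.0 / 3.0))
```

At a heart rate of 60 bpm (RR = 1 s) both corrections leave QT unchanged; at faster rates (RR < 1 s) both scale QT upward, with Bazett correcting more aggressively than Fridericia.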
40 CFR 1065.672 - Drift correction.
Code of Federal Regulations, 2013 CFR
2013-07-01
... interval gas analyzer response to the span gas concentration. x postspan = post-test interval gas analyzer... = post-test interval gas analyzer response to the zero gas concentration. Example: x refzero = 0 µmol/mol... occurred before one or more previous test intervals. (4) For any post-test interval concentrations, use...
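The drift correction in § 1065.672 interpolates each analyzer reading between the averaged pre- and post-test-interval zero and span responses. A sketch of the standard form of that equation (consult the CFR text for the authoritative version and symbol definitions):

```python
def drift_correct(x_i, x_refzero, x_refspan,
                  x_prezero, x_postzero, x_prespan, x_postspan):
    """Drift-correct a gas analyzer reading x_i using the zero and span
    responses recorded before (pre) and after (post) the test interval,
    mapped back onto the reference zero/span concentrations."""
    num = 2.0 * x_i - (x_prezero + x_postzero)
    den = (x_prespan + x_postspan) - (x_prezero + x_postzero)
    return x_refzero + (x_refspan - x_refzero) * num / den
```

With no drift (zero responses at the reference zero, span responses at the reference span), the correction returns the reading unchanged; if both zero responses drift upward, the same offset is subtracted back out.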
Abnormalities of the QT interval in primary disorders of autonomic failure
NASA Technical Reports Server (NTRS)
Choy, A. M.; Lang, C. C.; Roden, D. M.; Robertson, D.; Wood, A. J.; Robertson, R. M.; Biaggioni, I.
1998-01-01
BACKGROUND: Experimental evidence shows that activation of the autonomic nervous system influences ventricular repolarization and, therefore, the QT interval on the ECG. To test the hypothesis that the QT interval is abnormal in autonomic dysfunction, we examined ECGs in patients with severe primary autonomic failure and in patients with congenital dopamine β-hydroxylase (DβH) deficiency who are unable to synthesize norepinephrine and epinephrine. SUBJECTS AND METHODS: Maximal QT and rate-corrected QT (QTc) intervals and adjusted QTc dispersion [(maximal QTc - minimum QTc on 12-lead ECG)/square root of the number of leads measured] were determined in blinded fashion from ECGs of 67 patients with primary autonomic failure (36 patients with multiple system atrophy [MSA], and 31 patients with pure autonomic failure [PAF]) and 17 age- and sex-matched healthy controls. ECGs of 5 patients with congenital DβH deficiency and 6 age- and sex-matched controls were also analyzed. RESULTS: Patients with MSA and PAF had significantly prolonged maximum QTc intervals (492 ± 58 ms^1/2 and 502 ± 61 ms^1/2 [mean ± SD]), respectively, compared with controls (450 ± 18 ms^1/2, P < .05 and P < .01, respectively). A similar but not significant trend was observed for QT. QTc dispersion was also increased in MSA (40 ± 20 ms^1/2, P < .05 vs controls) and PAF patients (32 ± 19 ms^1/2, NS) compared with controls (21 ± 5 ms^1/2). In contrast, patients with congenital DβH deficiency did not have significantly different RR, QT, QTc intervals, or QTc dispersion when compared with controls. CONCLUSIONS: Patients with primary autonomic failure who have combined parasympathetic and sympathetic failure have abnormally prolonged QT interval and increased QT dispersion. However, QT interval in patients with congenital DβH deficiency was not significantly different from controls. It is possible, therefore, that QT abnormalities in patients with primary autonomic failure are not solely caused by lesions of the sympathetic nervous system, and that the parasympathetic nervous system is likely to have a modulatory role in ventricular repolarization.
The role of responsibility and fear of guilt in hypothesis-testing.
Mancini, Francesco; Gangemi, Amelia
2006-12-01
Recent theories argue that both perceived responsibility and fear of guilt increase obsessive-like behaviours. We propose that hypothesis-testing might account for this effect. Both perceived responsibility and fear of guilt would influence subjects' hypothesis-testing, by inducing a prudential style. This style implies focusing on and confirming the worst hypothesis, and reiterating the testing process. In our experiment, we manipulated the responsibility and fear of guilt of 236 normal volunteers who executed a deductive task. The results show that perceived responsibility is the main factor that influenced individuals' hypothesis-testing. Fear of guilt has however a significant additive effect. Guilt-fearing participants preferred to carry on with the diagnostic process, even when faced with initial favourable evidence, whereas participants in the responsibility condition only did so when confronted with an unfavourable evidence. Implications for the understanding of obsessive-compulsive disorder (OCD) are discussed.
NASA Technical Reports Server (NTRS)
Furlan, R.; Porta, A.; Costa, F.; Tank, J.; Baker, L.; Schiavi, R.; Robertson, D.; Malliani, A.; Mosqueda-Garcia, R.
2000-01-01
BACKGROUND: We tested the hypothesis that a common oscillatory pattern might characterize the rhythmic discharge of muscle sympathetic nerve activity (MSNA) and the spontaneous variability of heart rate and systolic arterial pressure (SAP) during a physiological increase of sympathetic activity induced by the head-up tilt maneuver. METHODS AND RESULTS: Ten healthy subjects underwent continuous recordings of ECG, intra-arterial pressure, respiratory activity, central venous pressure, and MSNA, both in the recumbent position and during 75 degrees head-up tilt. Venous samplings for catecholamine assessment were obtained at rest and during the fifth minute of tilt. Spectrum and cross-spectrum analyses of R-R interval, SAP, and MSNA variabilities and of respiratory activity provided the low (LF, 0.1 Hz) and high frequency (HF, 0.27 Hz) rhythmic components of each signal and assessed their linear relationships. Compared with the recumbent position, tilt reduced central venous pressure, but blood pressure was unchanged. Heart rate, MSNA, and plasma epinephrine and norepinephrine levels increased, suggesting a marked enhancement of overall sympathetic activity. During tilt, LF(MSNA) increased compared with the level in the supine position; this mirrored similar changes observed in the LF components of R-R interval and SAP variabilities. The increase of LF(MSNA) was proportional to the amount of the sympathetic discharge. The coupling between LF components of MSNA and R-R interval and SAP variabilities was enhanced during tilt compared with rest. CONCLUSIONS: During the sympathetic activation induced by tilt, a similar oscillatory pattern based on an increased LF rhythmicity characterized the spontaneous variability of neural sympathetic discharge, R-R interval, and arterial pressure.
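The LF and HF components referred to above come from power spectral analysis of the beat-to-beat series. A minimal periodogram-based sketch; the band edges are the conventional HRV ranges rather than values from this study, and real analyses use evenly resampled R-R series and better spectral estimators:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Approximate power of `signal` in [f_lo, f_hi) Hz from the periodogram."""
    x = np.asarray(signal, float)
    x = x - x.mean()
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * n)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return float(np.sum(psd[mask]) * (freqs[1] - freqs[0]))

def lf_hf_ratio(series, fs):
    """Ratio of low-frequency (0.04-0.15 Hz) to high-frequency (0.15-0.4 Hz)
    power; the abstract's LF and HF components sit near 0.1 and 0.27 Hz."""
    return band_power(series, fs, 0.04, 0.15) / band_power(series, fs, 0.15, 0.4)
```

A shift toward LF dominance in this ratio is the kind of change the study reports during head-up tilt.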
Mamai-Homata, Eleni; Koletsi-Kounari, Haroula; Margaritis, Vasileios
2016-01-01
Background: The aim of this study was to investigate the oral health status and behavior of Greek dental students over time, and to meta-analyze these findings to test the widely documented hypothesis that women have better oral health behavior, oral hygiene, and periodontal status but higher dental caries rates than men. Materials and Methods: A total sample of 385 students was examined using identical indices to assess oral health and behavioral data initially in 1981; the years 2000 and 2010 were selected due to significant changes that took place in the dental curriculum in the 1990s and 2000s. Data by gender concerning the outcome variables recorded in each of the three surveys were analyzed using Mantel–Haenszel and continuous outcomes methods. Results: A significant improvement in the oral health status and behavior of students was observed over time. The meta-analysis of data by gender showed that females brushed their teeth significantly more often than males [summary odds ratio (OR): 1.95 and 95% confidence interval (CI): 1.08–3.54]. Males and females were found to have a similar risk of developing dental caries. Conclusion: The hypothesis that young women have better oral hygiene habits compared to men was confirmed. However, the hypothesis that women have better oral hygiene and periodontal status but exhibit higher dental caries experience than men was not supported by the findings of the study. PMID:27011935
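The Mantel–Haenszel pooling used in this meta-analysis can be sketched as follows. This is a minimal illustration of the summary odds ratio across stratified 2×2 tables; the stratum counts below are hypothetical, not the study's data.

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel summary odds ratio over stratified 2x2 tables.

    Each stratum is (a, b, c, d):
      a = exposed cases,    b = exposed non-cases,
      c = unexposed cases,  d = unexposed non-cases.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# hypothetical strata, e.g. one 2x2 table per survey year
strata = [(30, 20, 15, 35), (25, 25, 10, 40)]
summary_or = mantel_haenszel_or(strata)
```

With a single stratum the formula reduces to the ordinary cross-product odds ratio; pooling weights each stratum by its size, which is why it is the standard choice for combining tables from separate surveys.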
Bandera, Francesco; Generati, Greta; Pellegrino, Marta; Donghi, Valeria; Alfonzetti, Eleonora; Gaeta, Maddalena; Villani, Simona; Guazzi, Marco
2014-09-01
Several cardiovascular diseases are characterized by impaired O2 kinetics during exercise. The lack of a linear increase of the Δoxygen consumption (VO2)/ΔWork Rate (WR) relationship, as assessed by expired gas analysis, is considered an indicator of abnormal cardiovascular efficiency. We aimed at describing the frequency of ΔVO2/ΔWR flattening in a symptomatic population of cardiac patients, characterizing its functional profile, and testing the hypothesis that dynamic pulmonary hypertension and right ventricular contractile reserve play a major role as cardiac determinants. We studied 136 patients, with different cardiovascular diseases, referred for exertional dyspnea. Cardiopulmonary exercise testing combined with simultaneous exercise echocardiography was performed using a symptom-limited protocol. ΔVO2/ΔWR flattening was observed in 36 patients (group A, 26.5% of the population) and was associated with a globally worse functional profile (reduced peak VO2, anaerobic threshold, and O2 pulse; impaired VE/VCO2). At univariate analysis, exercise ejection fraction, exercise mitral regurgitation, rest and exercise tricuspid annular plane systolic excursion, exercise systolic pulmonary artery pressure, and exercise cardiac output were all significantly (P<0.05) impaired in group A. The multivariate analysis identified exercise systolic pulmonary artery pressure (odds ratio, 1.06; confidence interval, 1.01-1.11; P=0.01) and exercise tricuspid annular plane systolic excursion (odds ratio, 0.88; confidence interval, 0.80-0.97; P=0.01) as the main cardiac determinants of ΔVO2/ΔWR flattening; female sex was strongly associated (odds ratio, 6.10; confidence interval, 2.11-17.7; P<0.01). In patients symptomatic for dyspnea, the occurrence of ΔVO2/ΔWR flattening reflects a significantly impaired functional phenotype whose main cardiac determinants are an excessive systolic pulmonary artery pressure increase and reduced peak right ventricular longitudinal systolic function.
© 2014 American Heart Association, Inc.
A statistical test to show negligible trend
Philip M. Dixon; Joseph H.K. Pechmann
2005-01-01
The usual statistical tests of trend are inappropriate for demonstrating the absence of trend. This is because failure to reject the null hypothesis of no trend does not prove that null hypothesis. The appropriate statistical method is based on an equivalence test. The null hypothesis is that the trend is not zero, i.e., outside an a priori specified equivalence region...
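The equivalence logic described above can be made concrete with a minimal, z-based two one-sided tests (TOST) sketch. This is not the authors' exact procedure, which would estimate the trend by regression and use t-distributions; the slope estimate, standard error, and equivalence bounds below are hypothetical.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tost_trend(estimate, se, lower, upper, alpha=0.05):
    """Two one-sided tests: the null is 'trend lies outside [lower, upper]'.

    Rejecting both one-sided nulls demonstrates a negligible trend.
    """
    p_above_lower = 1.0 - norm_cdf((estimate - lower) / se)  # H0: trend <= lower
    p_below_upper = norm_cdf((estimate - upper) / se)        # H0: trend >= upper
    p = max(p_above_lower, p_below_upper)
    return p, p < alpha

# hypothetical slope estimate with a tight standard error
p_tight, negligible = tost_trend(0.0, 0.01, -0.05, 0.05)
# same estimate, but too imprecise to demonstrate equivalence
p_wide, shown_negligible = tost_trend(0.0, 0.10, -0.05, 0.05)
```

Note that a conventional trend test would be non-significant in both cases; only the equivalence test distinguishes a demonstrably negligible trend from data that are simply uninformative.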
Birth defects in relation to Bendectin use in pregnancy. II. Pyloric stenosis.
Mitchell, A A; Schwingl, P J; Rosenberg, L; Louik, C; Shapiro, S
1983-12-01
To test the hypothesis that the use of Bendectin in pregnancy increases the risk of pyloric stenosis, we determined rates of antenatal Bendectin exposure among 325 infants with pyloric stenosis and among two control groups comprising infants with other defects; one consisted of 3,153 infants with other conditions, and the other, a subset of that group, consisted of 724 infants with defects that may have had their origins at any time in pregnancy. Comparisons between the cases and the two control series yielded estimated relative risks of 0.9 (95% confidence interval, 0.6 to 1.2) and 1.0 (0.7 to 1.4), respectively. The findings from this large case-control study suggest that Bendectin does not increase the risk of pyloric stenosis.
Maternal lung cancer and testicular cancer risk in the offspring.
Kaijser, Magnus; Akre, Olof; Cnattingius, Sven; Ekbom, Anders
2003-07-01
It has been hypothesized that smoking during pregnancy could increase the offspring's risk for testicular cancer. This hypothesis is indirectly supported by both ecological studies and studies of cancer aggregations within families. However, results from analytical epidemiological studies are not consistent, possibly due to methodological difficulties. To further study the association between smoking during pregnancy and testicular cancer, we did a population-based cohort study on cancer risk among offspring of women diagnosed with lung cancer. Through the use of the Swedish Cancer Register and the Swedish Second-Generation Register, we identified 8,430 women who developed lung cancer between 1958 and 1997 and delivered sons between 1941 and 1979. Cancer cases among the male offspring were then identified through the Swedish Cancer Register. Standardized incidence ratios were computed, using 95% confidence intervals. We identified 12,592 male offspring of mothers with a subsequent diagnosis of lung cancer, and there were 40 cases of testicular cancer (standardized incidence ratio, 1.90; 95% confidence interval, 1.35-2.58). The association was independent of maternal lung cancer subtype, and the risk of testicular cancer increased stepwise with decreasing time interval between birth and maternal lung cancer diagnosis. Our results support the hypothesis that exposure to cigarette smoking in utero increases the risk of testicular cancer.
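The reported standardized incidence ratio can be approximately reproduced from the abstract's own numbers. Below is a sketch using Byar's approximation to the exact Poisson confidence interval (a standard method, though not necessarily the one the authors used); the expected count is back-calculated from SIR = 40/E = 1.90 and is therefore an assumption.

```python
import math

Z = 1.959964  # two-sided 95% normal quantile

def sir_with_ci(observed, expected):
    """Standardized incidence ratio with Byar's approximate 95% CI."""
    sir = observed / expected
    o = observed
    lower = o * (1 - 1 / (9 * o) - Z / (3 * math.sqrt(o))) ** 3 / expected
    upper = (o + 1) * (1 - 1 / (9 * (o + 1)) + Z / (3 * math.sqrt(o + 1))) ** 3 / expected
    return sir, lower, upper

# 40 observed testicular cancers; expected count implied by the reported SIR of 1.90
sir, lo, hi = sir_with_ci(40, 40 / 1.90)
```

With these inputs the approximation lands close to the reported interval of 1.35-2.58, which illustrates that an SIR is simply an observed-over-expected Poisson count with an interval attached.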
van de Venter, Ec; Oliver, I; Stuart, J M
2015-02-12
Timely outbreak investigations are central to containing communicable disease outbreaks; despite this, no guidance currently exists on expectations of timeliness for investigations. A literature review was conducted to assess the length of epidemiological outbreak investigations in Europe in peer-reviewed publications. We determined time intervals from outbreak declaration to hypothesis generation, and from hypothesis generation to availability of results from an analytical study. Outbreaks were classified into two groups: those with a public health impact across regions within a country and requiring national coordination (level 3) and those with a severe or catastrophic impact requiring direction at national level (levels 4 and 5). Investigations in Europe published between 2003 and 2013 were reviewed. We identified 86 papers for review: 63 level 3 and 23 level 4 and 5 investigations. Time intervals were ascertained from 55 papers. The median period for completion of an analytical study was 15 days (range: 4-32) for levels 4 and 5 and 31 days (range: 9-213) for level 3 investigations. Key factors influencing the speed of completing analytical studies were outbreak level, severity of infection and study design. Our findings suggest that guidance for completing analytical studies could usefully be provided, with different time intervals according to outbreak severity.
Using Bayes to get the most out of non-significant results.
Dienes, Zoltan
2014-01-01
No scientific conclusion follows automatically from a statistically non-significant result, yet people routinely use non-significant results to guide conclusions about the status of theories (or the effectiveness of practices). To know whether a non-significant result counts against a theory, or if it just indicates data insensitivity, researchers must use one of: power, intervals (such as confidence or credibility intervals), or else an indicator of the relative evidence for one theory over another, such as a Bayes factor. I argue Bayes factors allow theory to be linked to data in a way that overcomes the weaknesses of the other approaches. Specifically, Bayes factors use the data themselves to determine their sensitivity in distinguishing theories (unlike power), and they make use of those aspects of a theory's predictions that are often easiest to specify (unlike power and intervals, which require specifying the minimal interesting value in order to address theory). Bayes factors provide a coherent approach to determining whether non-significant results support a null hypothesis over a theory, or whether the data are just insensitive. They allow accepting and rejecting the null hypothesis to be put on an equal footing. Concrete examples are provided to indicate the range of application of a simple online Bayes calculator, which reveal both the strengths and weaknesses of Bayes factors.
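The Bayes-factor logic in this abstract can be illustrated with a minimal sketch. One simple variant, assuming a normal prior on the effect under H1 and a point null under H0 (Dienes' online calculator also supports other priors, such as half-normals), reduces to a ratio of two normal densities; the numbers below are hypothetical.

```python
import math

def normal_pdf(x, mu, sd):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def bayes_factor_01(mean_obs, se, prior_sd):
    """BF01: evidence for H0 (effect = 0) over H1 (effect ~ Normal(0, prior_sd)).

    Marginalizing the normal prior gives a Normal(0, sqrt(se^2 + prior_sd^2))
    predictive distribution under H1, so the Bayes factor is a density ratio.
    """
    like_h0 = normal_pdf(mean_obs, 0.0, se)
    like_h1 = normal_pdf(mean_obs, 0.0, math.sqrt(se ** 2 + prior_sd ** 2))
    return like_h0 / like_h1

bf_null = bayes_factor_01(0.0, 1.0, 3.0)  # null result with a precise estimate
bf_alt = bayes_factor_01(3.0, 1.0, 3.0)   # effect three SEs from zero
```

A common convention reads BF01 > 3 as substantial evidence for the null, BF01 < 1/3 as substantial evidence against it, and intermediate values as data too insensitive to decide, which is exactly the three-way distinction the abstract argues power and p-values cannot make.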
Longitudinal Dimensionality of Adolescent Psychopathology: Testing the Differentiation Hypothesis
ERIC Educational Resources Information Center
Sterba, Sonya K.; Copeland, William; Egger, Helen L.; Costello, E. Jane; Erkanli, Alaattin; Angold, Adrian
2010-01-01
Background: The differentiation hypothesis posits that the underlying liability distribution for psychopathology is of low dimensionality in young children, inflating diagnostic comorbidity rates, but increases in dimensionality with age as latent syndromes become less correlated. This hypothesis has not been adequately tested with longitudinal…
A large scale test of the gaming-enhancement hypothesis.
Przybylski, Andrew K; Wang, John C
2016-01-01
A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people's gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses was compared. Results provided no substantive evidence supporting the idea that having a preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support for the null hypothesis over what was predicted. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work.
The QT interval in lightning injury with implications for the cessation of metabolism hypothesis
NASA Technical Reports Server (NTRS)
Andrews, Christopher J.; Colquhoun, David M.; Darveniza, Mat
1991-01-01
A hypothesis is presented to provide an alternative to the Cessation of Metabolism hypothesis often invoked in lightning injury. Cessation of Metabolism has been proposed to explain the observation of good recovery after a prolonged period in cardiac arrest in some lightning-injured patients. Reevaluation of ECGs from lightning-injured patients shows a high incidence of QT prolongation. Reexamination of the cases used to support Cessation of Metabolism also reveals little evidence to justify the hypothesis. The finding of QT prolongation, coupled with the hyperadrenergic state said to exist in lightning injury, may promote a state of episodic induction of and recovery from Torsade de Pointes ventricular tachycardia (VT). Histological examination of the myocardium supports the new hypothesis. This is the first concerted description of lightning injury as one of the general causes of QT prolongation, which appears to occur frequently after lightning injury and is a prerequisite of and predisposes to episodes of Torsade de Pointes VT. These electrocardiographic abnormalities explain the observations attributed to Cessation of Metabolism, and their recognition may change management and lead to greater survival.
Schaarup, Clara; Hartvigsen, Gunnar; Larsen, Lars Bo; Tan, Zheng-Hua; Årsand, Eirik; Hejlesen, Ole Kristian
2015-01-01
The Online Diabetes Exercise System was developed to motivate people with Type 2 diabetes to do a 25 minutes low-volume high-intensity interval training program. In a previous multi-method evaluation of the system, several usability issues were identified and corrected. Despite the thorough testing, it was unclear whether all usability problems had been identified using the multi-method evaluation. Our hypothesis was that adding the eye-tracking triangulation to the multi-method evaluation would increase the accuracy and completeness when testing the usability of the system. The study design was an eye-tracking triangulation: conventional eye-tracking with predefined tasks followed by the Post-Experience Eye-Tracked Protocol (PEEP). Six Areas of Interest were the basis for the PEEP session. The eye-tracking triangulation gave objective and subjective results, which are believed to be highly relevant for designing, implementing, evaluating and optimizing systems in the field of health informatics. Future work should include testing the method on a larger and more representative group of users and apply the method on different system types.
Reporting Practices and Use of Quantitative Methods in Canadian Journal Articles in Psychology.
Counsell, Alyssa; Harlow, Lisa L
2017-05-01
With recent focus on the state of research in psychology, it is essential to assess the nature of the statistical methods and analyses used and reported by psychological researchers. To that end, we investigated the prevalence of different statistical procedures and the nature of statistical reporting practices in recent articles from the four major Canadian psychology journals. The majority of authors evaluated their research hypotheses through the use of analysis of variance (ANOVA), t-tests, and multiple regression. Multivariate approaches were less common. Null hypothesis significance testing remains a popular strategy, but the majority of authors reported a standardized or unstandardized effect size measure alongside their significance test results. Confidence intervals on effect sizes were infrequently employed. Many authors provided minimal details about their statistical analyses, and less than a third of the articles reported on data complications such as missing data and violations of statistical assumptions. Strengths of and areas needing improvement for reporting quantitative results are highlighted. The paper concludes with recommendations for how researchers and reviewers can improve comprehension and transparency in statistical reporting.
Null but not void: considerations for hypothesis testing.
Shaw, Pamela A; Proschan, Michael A
2013-01-30
Standard statistical theory teaches us that once the null and alternative hypotheses have been defined for a parameter, the choice of the statistical test is clear. Standard theory does not teach us how to choose the null or alternative hypothesis appropriate to the scientific question of interest. Neither does it tell us that in some cases, depending on which alternatives are realistic, we may want to define our null hypothesis differently. Problems in statistical practice are frequently not as pristinely summarized as the classic theory in our textbooks. In this article, we present examples in statistical hypothesis testing in which seemingly simple choices are in fact rich with nuance that, when given full consideration, make the choice of the right hypothesis test much less straightforward. Published 2012. This article is a US Government work and is in the public domain in the USA.
Effect of climate-related mass extinctions on escalation in molluscs
NASA Astrophysics Data System (ADS)
Hansen, Thor A.; Kelley, Patricia H.; Melland, Vicky D.; Graham, Scott E.
1999-12-01
We test the hypothesis that escalated species (e.g., those with antipredatory adaptations such as heavy armor) are more vulnerable to extinctions caused by changes in climate. If this hypothesis is valid, recovery faunas after climate-related extinctions should include significantly fewer species with escalated shell characteristics, and escalated species should undergo greater rates of extinction than nonescalated species. This hypothesis is tested for the Cretaceous-Paleocene, Eocene-Oligocene, middle Miocene, and Pliocene-Pleistocene mass extinctions. Gastropod and bivalve molluscs from the U.S. coastal plain were evaluated for 10 shell characters that confer resistance to predators. Of 40 tests, one supported the hypothesis; highly ornamented gastropods underwent greater levels of Pliocene-Pleistocene extinction than did nonescalated species. All remaining tests were nonsignificant. The hypothesis that escalated species are more vulnerable to climate-related mass extinctions is not supported.
NASA Technical Reports Server (NTRS)
Iwasaki, K. I.; Zhang, R.; Zuckerman, J. H.; Pawelczyk, J. A.; Levine, B. D.; Blomqvist, C. G. (Principal Investigator)
2000-01-01
Adaptation to head-down-tilt bed rest leads to an apparent abnormality of baroreflex regulation of cardiac period. We hypothesized that this "deconditioning response" could primarily be a result of hypovolemia, rather than a unique adaptation of the autonomic nervous system to bed rest. To test this hypothesis, nine healthy subjects underwent 2 wk of -6 degrees head-down bed rest. One year later, five of these same subjects underwent acute hypovolemia with furosemide to produce the same reductions in plasma volume observed after bed rest. We took advantage of power spectral and transfer function analysis to examine the dynamic relationship between blood pressure (BP) and R-R interval. We found that 1) there were no significant differences between these two interventions with respect to changes in numerous cardiovascular indices, including cardiac filling pressures, arterial pressure, cardiac output, or stroke volume; 2) normalized high-frequency (0.15-0.25 Hz) power of R-R interval variability decreased significantly after both conditions, consistent with similar degrees of vagal withdrawal; 3) transfer function gain (BP to R-R interval), used as an index of arterial-cardiac baroreflex sensitivity, decreased significantly to a similar extent after both conditions in the high-frequency range; the gain also decreased similarly when expressed as BP to heart rate x stroke volume, which provides an index of the ability of the baroreflex to alter BP by modifying systemic flow; and 4) however, the low-frequency (0.05-0.15 Hz) power of systolic BP variability decreased after bed rest (-22%) compared with an increase (+155%) after acute hypovolemia, suggesting a differential response for the regulation of vascular resistance (interaction, P < 0.05). 
The similarity of changes in the reflex control of the circulation under both conditions is consistent with the hypothesis that reductions in plasma volume may be largely responsible for the observed changes in cardiac baroreflex control after bed rest. However, changes in vasomotor function associated with these two conditions may be different and may suggest a cardiovascular remodeling after bed rest.
2018-01-01
This study tested the hypothesis that object-based attention modulates the discrimination of level increments in stop-consonant noise bursts. With consonant-vowel-consonant (CvC) words consisting of an ≈80-dB vowel (v), a pre-vocalic (Cv) and a post-vocalic (vC) stop-consonant noise burst (≈60-dB SPL), we measured discrimination thresholds (LDTs) for level increments (ΔL) in the noise bursts presented either in CvC context or in isolation. In the 2-interval 2-alternative forced-choice task, each observation interval presented a CvC word (e.g., /pæk/ /pæk/), and normal-hearing participants had to discern ΔL in the Cv or vC burst. Based on the linguistic word labels, the auditory events of each trial were perceived as two auditory objects (Cv-v-vC and Cv-v-vC) that group together the bursts and vowels, hindering selective attention to ΔL. To discern ΔL in Cv or vC, the events must be reorganized into three auditory objects: the to-be-attended pre-vocalic (Cv–Cv) or post-vocalic burst pair (vC–vC), and the to-be-ignored vowel pair (v–v). Our results suggest that, instead of being automatic, this reorganization requires training, in spite of using familiar CvC words. Relative to bursts in isolation, bursts in context always produced inferior ΔL discrimination accuracy (a context effect), which depended strongly on the acoustic separation between the bursts and the vowel, being much keener for the object apart from (post-vocalic) than for the object adjoining (pre-vocalic) the vowel (a temporal-position effect). Variability in CvC dimensions that did not alter the noise-burst perceptual grouping had minor effects on discrimination accuracy. In addition to being robust and persistent, these effects are relatively general, appearing in forced-choice tasks with one or two observation intervals, with or without variability in the temporal position of ΔL, and with either fixed or roving CvC standards. The results lend support to the hypothesis. PMID:29364931
Algorithmic complexity of real financial markets
NASA Astrophysics Data System (ADS)
Mansilla, R.
2001-12-01
A new approach to the understanding of the complex behavior of financial market indexes, using tools from thermodynamics and statistical physics, is developed. Physical complexity, a quantity rooted in the Kolmogorov-Chaitin theory, is applied to binary sequences built up from real time series of financial market indexes. The study is based on NASDAQ and Mexican IPC data. Different behaviors of this quantity are shown when applied to the intervals of series placed before crashes and to intervals when no financial turbulence is observed. The connection between our results and the efficient market hypothesis is discussed.
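Kolmogorov–Chaitin complexity is uncomputable, so practical studies approximate it. One crude, hedged stand-in (not the physical-complexity measure used in the paper) is the compression ratio of the sign sequence of returns; the series below are simulated, not NASDAQ or IPC data.

```python
import random
import zlib

def sign_bits(returns):
    """Encode each return's sign as '1' (up) or '0' (down/flat)."""
    return ''.join('1' if r > 0 else '0' for r in returns)

def compression_ratio(bits):
    """Compressed size over raw size: lower values indicate more regularity."""
    raw = bits.encode('ascii')
    return len(zlib.compress(raw, 9)) / len(raw)

random.seed(0)
noise_like = sign_bits(random.gauss(0.0, 1.0) for _ in range(2000))
trend_like = sign_bits(1.0 for _ in range(2000))  # perfectly predictable series

r_noise = compression_ratio(noise_like)
r_trend = compression_ratio(trend_like)
```

Under the efficient market hypothesis the sign sequence should look incompressible; detectable structure in pre-crash intervals would show up as systematically lower ratios.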
Mas, Aran; Noble, Peter-John M; Cripps, Peter J; Batchelor, Daniel J; Graham, Peter; German, Alexander J
2012-07-28
Enzyme treatment is the mainstay for management of exocrine pancreatic insufficiency (EPI) in dogs. 'Enteric-coated' preparations have been developed to protect the enzyme from degradation in the stomach, but their efficacy has not been critically evaluated. The hypothesis of the current study was that enteric coating would have no effect on the efficacy of pancreatic enzyme treatment for dogs with EPI. Thirty-eight client-owned dogs with naturally occurring EPI were included in this multicentre, blinded, randomised controlled trial. Dogs received either an enteric-coated enzyme preparation (test treatment) or an identical preparation without the enteric coating (control treatment) over a period of 56 days. There were no significant differences in either signalment or cobalamin status (where cobalamin deficient or not) between the dogs on the test and control treatments. Body weight and body condition score increased in both groups during the trial (P<0.001) but the magnitude of increase was greater for the test treatment compared with the control treatment (P<0.001). By day 56, mean body weight increase was 17% (95% confidence interval 11-23%) in the test treatment group and 9% (95% confidence interval 4-15%) in the control treatment group. The dose of enzyme required increased over time (P<0.001) but there was no significant difference between treatments at any time point (P=0.225). Clinical disease severity score decreased over time for both groups (P=0.011) and no difference was noted between groups (P=0.869). No significant adverse effects were reported, for either treatment, for the duration of the trial. Enteric coating a pancreatic enzyme treatment improves response in canine EPI.
On Restructurable Control System Theory
NASA Technical Reports Server (NTRS)
Athans, M.
1983-01-01
The state of stochastic system and control theory as it impacts restructurable control issues is addressed. The multivariable characteristics of the control problem are addressed. The failure detection/identification problem is discussed as a multi-hypothesis testing problem. Control strategy reconfiguration, static multivariable controls, static failure hypothesis testing, dynamic multivariable controls, fault-tolerant control theory, dynamic hypothesis testing, generalized likelihood ratio (GLR) methods, and adaptive control are discussed.
ERIC Educational Resources Information Center
Marmolejo-Ramos, Fernando; Cousineau, Denis
2017-01-01
The number of articles showing dissatisfaction with the null hypothesis statistical testing (NHST) framework has been progressively increasing over the years. Alternatives to NHST have been proposed and the Bayesian approach seems to have achieved the highest amount of visibility. In this last part of the special issue, a few alternative…
Revised standards for statistical evidence.
Johnson, Valen E
2013-11-26
Recent advances in Bayesian hypothesis testing have led to the development of uniformly most powerful Bayesian tests, which represent an objective, default class of Bayesian hypothesis tests that have the same rejection regions as classical significance tests. Based on the correspondence between these two classes of tests, it is possible to equate the size of classical hypothesis tests with evidence thresholds in Bayesian tests, and to equate P values with Bayes factors. An examination of these connections suggests that recent concerns over the lack of reproducibility of scientific studies can be attributed largely to the conduct of significance tests at unjustifiably high levels of significance. To correct this problem, evidence thresholds required for the declaration of a significant finding should be increased to 25-50:1, and to 100-200:1 for the declaration of a highly significant finding. In terms of classical hypothesis tests, these evidence standards mandate the conduct of tests at the 0.005 or 0.001 level of significance.
Barkla, D H; Tutton, P M
1983-10-01
Normal and DMH-treated male rats aged 18-20 weeks underwent surgical transection and anastomosis of the transverse colon. Animals were subsequently killed at intervals of 14, 30 and 72 days. Three hours prior to sacrifice animals were injected with vinblastine sulphate and mitotic indices were subsequently estimated in histological sections. Possible differences between experimental and control groups were tested using a Student's t-test. The results show that the accumulated mitotic indices in normal and DMH-treated colon are statistically similar. The results also show that transection and anastomosis stimulates cell division in both normal and DMH-treated colon and that the increase is of greater amplitude and more prolonged duration in the DMH-treated rats. Carcinomas developed close to the line of anastomosis in DMH-treated but not in control rats. The results support the hypothesis that non-specific injury to hyperplastic colonic epithelium promotes carcinogenesis.
Lunt, Mark
2015-07-01
In the first article in this series we explored the use of linear regression to predict an outcome variable from a number of predictive factors. It assumed that the predictive factors were measured on an interval scale. However, this article shows how categorical variables can also be included in a linear regression model, enabling predictions to be made separately for different groups and allowing for testing the hypothesis that the outcome differs between groups. The use of interaction terms to measure whether the effect of a particular predictor variable differs between groups is also explained. An alternative approach to testing the difference between groups of the effect of a given predictor, which consists of measuring the effect in each group separately and seeing whether the statistical significance differs between the groups, is shown to be misleading. © The Author 2013. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved.
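The modelling strategy the article describes, a dummy-coded group indicator plus an interaction term, can be sketched with ordinary least squares. This is a generic illustration with synthetic, noise-free data, not the article's example; `ols` is a hypothetical helper that solves the normal equations directly.

```python
def ols(X, y):
    """Least-squares coefficients via the normal equations (X'X) b = X'y."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        c[i], c[p] = c[p], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for j in range(i, k):
                A[r][j] -= f * A[i][j]
            c[r] -= f * c[i]
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

# design: intercept, predictor x, group dummy g (0/1), and x*g interaction
data = [(x, g) for g in (0, 1) for x in range(1, 6)]
X = [[1.0, x, g, x * g] for x, g in data]
y = [1 + 2 * x + 3 * g + 4 * x * g for x, g in data]  # known coefficients
coef = ols(X, y)
```

The interaction coefficient (4 here) is the between-group difference in the predictor's effect, which is the quantity the article recommends testing directly rather than comparing per-group significance.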
Shin, Jaehyuck; Kim, Yong Chul; Lee, Sang Chul; Kim, Jae Hun
2013-11-01
Transforaminal epidural steroid injection (TFESI) is a useful treatment modality for pain management. Most complications of TFESI are minor and transient. However, there is a risk of serious complications such as nerve injury, spinal cord infarct, or paraplegia. Some of the risks are related to direct injury to the vessel or intravascular injection of the particulate steroid. We prospectively tested the hypothesis that the intravascular injection rate of the Whitacre needle is lower than that of the Quincke needle during TFESI. This study was a randomized trial of 1376 TFESIs at the S1 level. We collected data of age, gender, height, weight, laterality (right/left), history of lumbosacral spine operation, history of appropriate interval discontinuation of anticoagulation medicines, and underlying disease. During the S1 TFESI, intrasacral bone contact, a blood aspiration test, and real-time fluoroscopy of the intravascular injection using contrast media were investigated. There were no significant differences in the intravascular injection rate with respect to age, gender, height, weight, hypertension, diabetes mellitus, laterality, history of lumbosacral spine operation, or history of appropriate interval discontinuation of anticoagulation medicines. Intravascular injection was significantly associated with a blood aspiration test (P < 0.001), needle tip type (P = 0.002), intrasacral bone contact (P < 0.001), and physicians (some P < 0.05). The use of Quincke needles and intrasacral bone contact increased the rate of intravascular injection. To reduce the risk of intravascular injection, the use of Whitacre needles without intrasacral bone contact may be a safer and more effective approach.
NASA Astrophysics Data System (ADS)
Thompson, E. M.; Hewlett, J. B.; Baise, L. G.; Vogel, R. M.
2011-01-01
Annual maximum (AM) time series are incomplete (i.e., censored) when no events are included above the assumed censoring threshold (i.e., magnitude of completeness). We introduce a distributional hypothesis test for left-censored Gumbel observations based on the probability plot correlation coefficient (PPCC). Critical values of the PPCC hypothesis test statistic are computed from Monte Carlo simulations and are a function of sample size, censoring level, and significance level. When applied to a global catalog of earthquake observations, the left-censored Gumbel PPCC tests are unable to reject the Gumbel hypothesis for 45 of 46 seismic regions. We apply four different field significance tests for combining individual tests into a collective hypothesis test. None of the field significance tests are able to reject the global hypothesis that AM earthquake magnitudes arise from a Gumbel distribution. Because the field significance levels are not conclusive, we also compute the likelihood that these field significance tests are unable to reject the Gumbel model when the samples arise from a more complex distributional alternative. A power study documents that the censored Gumbel PPCC test is unable to reject some important and viable Generalized Extreme Value (GEV) alternatives. Thus, we cannot rule out the possibility that the global AM earthquake time series could arise from a GEV distribution with a finite upper bound, also known as a reverse Weibull distribution. Our power study also indicates that the binomial and uniform field significance tests are substantially more powerful than the more commonly used Bonferroni and false discovery rate multiple comparison procedures.
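The uncensored core of the PPCC machinery can be sketched as follows: the statistic is the correlation between the ordered sample and Gumbel quantiles at plotting positions, with the critical value simulated by Monte Carlo. Gringorten plotting positions are an assumed (if common) choice for the Gumbel; the paper's positions, censoring adjustment, and simulation settings may differ.

```python
import math
import random

def gumbel_positions(n):
    """Standard Gumbel quantiles at Gringorten plotting positions."""
    return [-math.log(-math.log((i - 0.44) / (n + 0.12))) for i in range(1, n + 1)]

def pearson_r(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (sa * sb)

def ppcc_gumbel(sample):
    """PPCC test statistic against the Gumbel model."""
    return pearson_r(sorted(sample), gumbel_positions(len(sample)))

def critical_value(n, alpha=0.05, reps=2000, seed=1):
    """Monte Carlo critical value: PPCC values below it reject the Gumbel."""
    rng = random.Random(seed)
    sims = sorted(
        ppcc_gumbel([-math.log(-math.log(rng.random())) for _ in range(n)])
        for _ in range(reps)
    )
    return sims[int(alpha * reps)]

cv30 = critical_value(30)
```

A sample whose order statistics track the Gumbel quantiles yields a PPCC near 1; samples from heavier-tailed or bounded alternatives fall below the simulated critical value.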
Carson, Michael P; Morgan, Benjamin; Gussman, Debra; Brown, Monica; Rothenberg, Karen; Wisner, Theresa A
2015-02-01
Over 70% of women with gestational diabetes mellitus (GDM) will develop diabetes mellitus (DM), but only 30% follow through with the recommended postpartum oral glucose tolerance testing (OGTT). HbA1c is approved to diagnose DM, and combined with a fasting plasma glucose it can identify 93% of patients with dysglycemia. We tested the hypothesis that a single blood draw to assess for dysglycemia at the postpartum visit could improve testing rates compared with requiring an OGTT at an outside laboratory. This was a prospective cohort study of all women with GDM who delivered between July 2010 and December 2011. When insurance status required testing at an outside laboratory, an OGTT was ordered; when insurance allowed testing at our center, a random glucose and HbA1c were drawn at the postpartum visit (SUGAR Protocol). Of the 40 women, 36 attended a postpartum visit. In the SUGAR arm, 19 of 19 (100%) were tested versus 9 of 17 (53%) in the OGTT arm; the relative risk of testing was 1.9 (95% confidence interval, 1.2-3.0). Thirty-six percent were glucose intolerant. This pilot study found that an in-office testing model doubled the rate of postpartum testing in this clinic population and was reasonably sensitive at detecting dysglycemia.
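The reported relative risk and confidence interval can be reproduced from the counts above with the standard log-scale Wald (Katz) approximation; a minimal sketch, noting that the paper does not state which CI method was used:

```python
import math

def relative_risk(a, n1, b, n2, z=1.959964):
    """Relative risk of the event in arm 1 (a/n1) vs arm 2 (b/n2), with a
    log-scale Wald (Katz) 95% CI. Illustrative sketch only."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    lo, hi = rr * math.exp(-z * se), rr * math.exp(z * se)
    return rr, lo, hi

# 19/19 tested in the SUGAR arm vs 9/17 in the OGTT arm
rr, lo, hi = relative_risk(19, 19, 9, 17)
```

This recovers RR of about 1.9 with a 95% CI of about 1.2 to 3.0, matching the abstract.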
Test of association: which one is the most appropriate for my study?
Gonzalez-Chica, David Alejandro; Bastos, João Luiz; Duquia, Rodrigo Pereira; Bonamigo, Renan Rangel; Martínez-Mesa, Jeovany
2015-01-01
Hypothesis tests are statistical tools widely used for assessing whether or not there is an association between two or more variables. These tests provide a probability of the type 1 error (p-value), which is used to accept or reject the null study hypothesis. To provide a practical guide to help researchers carefully select the most appropriate procedure to answer the research question. We discuss the logic of hypothesis testing and present the prerequisites of each procedure based on practical examples.
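As a concrete instance of matching a procedure to a research question, one of the workhorse tests such guides cover, the Pearson chi-squared test of association for a 2x2 table, can be computed directly. A sketch, with the small-sample caveat noted in a comment:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared test of association for the 2x2 table [[a, b], [c, d]].
    With small expected counts (< 5), Fisher's exact test is the usual choice
    instead. Returns (statistic, p-value with 1 degree of freedom)."""
    n = a + b + c + d
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n  # expected count under independence
        chi2 += (obs - exp) ** 2 / exp
    # Upper tail of the chi-squared distribution with 1 degree of freedom
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p
```

A balanced table yields a statistic of 0 and p = 1; a strongly associated table yields a small p-value, which is then compared against the chosen significance level.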
Improving the Crossing-SIBTEST Statistic for Detecting Non-uniform DIF.
Chalmers, R Philip
2018-06-01
This paper demonstrates that, after applying a simple modification to Li and Stout's (Psychometrika 61(4):647-677, 1996) CSIBTEST statistic, an improved variant of the statistic could be realized. It is shown that this modified version of CSIBTEST has a more direct association with the SIBTEST statistic presented by Shealy and Stout (Psychometrika 58(2):159-194, 1993). In particular, the asymptotic sampling distributions and general interpretation of the effect size estimates are the same for SIBTEST and the new CSIBTEST. Given the more natural connection to SIBTEST, it is shown that Li and Stout's hypothesis testing approach is insufficient for CSIBTEST; thus, an improved hypothesis testing procedure is required. Based on the presented arguments, a new chi-squared-based hypothesis testing approach is proposed for the modified CSIBTEST statistic. Positive results from a modest Monte Carlo simulation study strongly suggest the original CSIBTEST procedure and randomization hypothesis testing approach should be replaced by the modified statistic and hypothesis testing method.
Central tendency effects in time interval reproduction in autism
Karaminis, Themelis; Cicchini, Guido Marco; Neil, Louise; Cappagli, Giulia; Aagten-Murphy, David; Burr, David; Pellicano, Elizabeth
2016-01-01
Central tendency, the tendency of judgements of quantities (lengths, durations etc.) to gravitate towards their mean, is one of the most robust perceptual effects. A Bayesian account has recently suggested that central tendency reflects the integration of noisy sensory estimates with prior knowledge representations of a mean stimulus, serving to improve performance. The process is flexible, so prior knowledge is weighted more heavily when sensory estimates are imprecise, requiring more integration to reduce noise. In this study we measured central tendency in autism to evaluate a recent theoretical hypothesis suggesting that autistic perception relies less on prior knowledge representations than typical perception. If true, autistic children should show less central tendency than theoretically predicted from their temporal resolution. We tested autistic and age- and ability-matched typical children in two child-friendly tasks: (1) a time interval reproduction task, measuring central tendency in the temporal domain; and (2) a time discrimination task, assessing temporal resolution. Central tendency decreased with age in typical development, while temporal resolution improved. Autistic children performed far worse in temporal discrimination than the matched controls. Computational simulations suggested that central tendency was much weaker in autistic children than predicted by theoretical modelling, given their poor temporal resolution. PMID:27349722
Yamada, M; Mizuta, K; Ito, Y; Furuta, M; Sawai, S; Miyata, H
1999-10-01
A hypothesis has been advanced that autonomic nervous dysfunction (AND) relates to the development of vertigo in Meniere's disease (MD). We studied the causal relationship between AND and vertigo in MD. We evaluated autonomic nervous function in 17 patients with MD (five men and 12 women, ranging in age from 16 to 70 years) by classifying them by their stages of attack and interval of vertigo and with power spectral analysis (PSA) of heart rate variability. Fourteen healthy volunteers were also tested as controls. At the interval stage, parasympathetic nervous hypofunction and significant depression of the sympathetic response to postural change from the supine to the standing position were observed in many of these patients. At the attack stage, sympathetic nervous hypofunction was observed in some of the patients. These findings lead us to conclude that AND relates to vertigo in MD as a predisposing factor. However, whether AND acts as a trigger for vertigo in MD or as a consequence of it has not been adequately resolved in this study. We will make further studies of circadian variation in autonomic nervous function.
Bond, Vernon; Curry, Bryan H.; Adams, R. George; Asadi, M. Sadegh; Stancil, Kimani A.; Millis, Richard M.; Haddad, Georges E.
2014-01-01
Previous studies have shown that beetroot juice (BJ) decreases systolic blood pressure (SBP) and oxygen demand. This study tests the hypothesis that a BJ treatment increases heart rate variability (HRV), measured by the average standard deviation of normal-normal electrocardiogram RR intervals (SDNN) and by the low-frequency (LF), mainly sympathetic, fast Fourier transform spectral index of HRV. The subjects were 13 healthy young adult African-American females. Placebo control orange juice (OJ) and BJ treatments were given on separate days. Blood nitric oxide [NO], SBP and RR intervals were measured at rest and at constant workloads set to 40% and 80% of the predetermined VO2peak. Two hours after ingestion, the BJ treatment increased [NO] and decreased SBP. BJ also increased SDNN at rest and at the 40% VO2peak workload, without significant effects on LF. SDNN was significantly greater after the BJ than after the OJ treatment across the two physical activity conditions, and SDNN was negatively correlated with SBP. These results suggest that BJ decreases SBP and increases HRV at rest and during aerobic exercise. Similar results in subjects with prehypertension or hypertension could translate to a dietary nitrate treatment for hypertension. PMID:25401100
Signal enhancement, not active suppression, follows the contingent capture of visual attention.
Livingstone, Ashley C; Christie, Gregory J; Wright, Richard D; McDonald, John J
2017-02-01
Irrelevant visual cues capture attention when they possess a task-relevant feature. Electrophysiologically, this contingent capture of attention is evidenced by the N2pc component of the visual event-related potential (ERP) and an enlarged ERP positivity over the occipital hemisphere contralateral to the cued location. The N2pc reflects an early stage of attentional selection, but presently it is unclear what the contralateral ERP positivity reflects. One hypothesis is that it reflects the perceptual enhancement of the cued search-array item; another hypothesis is that it is time-locked to the preceding cue display and reflects active suppression of the cue itself. Here, we varied the time interval between a cue display and a subsequent target display to evaluate these competing hypotheses. The results demonstrated that the contralateral ERP positivity is tightly time-locked to the appearance of the search display rather than the cue display, thereby supporting the perceptual enhancement hypothesis and disconfirming the cue-suppression hypothesis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Tomao, Federica; D'Incalci, Maurizio; Biagioli, Elena; Peccatori, Fedro A; Colombo, Nicoletta
2017-09-15
The platinum-free interval is the most important predictive factor of a response to subsequent lines of chemotherapy and the most important prognostic factor for progression-free and overall survival in patients with recurrent epithelial ovarian cancer. A nonplatinum regimen is generally considered the most appropriate approach when the disease recurs very early after the end of chemotherapy, whereas platinum-based chemotherapy is usually adopted when the platinum-free interval exceeds 12 months. However, the therapeutic management of patients with intermediate sensitivity (ie, when the relapse occurs between 6 and 12 months) remains debatable. Preclinical and clinical data suggest that the extension of platinum-free interval (using a nonplatinum-based regimen) might restore platinum sensitivity, thus allowing survival improvement. The objective of this review was to critically analyze preclinical and clinical evidence supporting this hypothesis. Cancer 2017;123:3450-9. © 2017 American Cancer Society.
The Importance of Teaching Power in Statistical Hypothesis Testing
ERIC Educational Resources Information Center
Olinsky, Alan; Schumacher, Phyllis; Quinn, John
2012-01-01
In this paper, we discuss the importance of teaching power considerations in statistical hypothesis testing. Statistical power analysis determines the ability of a study to detect a meaningful effect size, where the effect size is the difference between the hypothesized value of the population parameter under the null hypothesis and the true value…
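The power calculation the abstract describes can be illustrated with the textbook normal approximation for a two-sided one-sample z-test; a teaching sketch (not the authors' material), where effect size, SD and alpha are example inputs:

```python
import math
from statistics import NormalDist

def z_test_power(effect, sigma, n, alpha=0.05):
    """Power of a two-sided one-sample z-test to detect a mean shift of
    `effect` with known SD `sigma` and sample size n (normal approximation)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    ncp = effect / sigma * math.sqrt(n)  # noncentrality of the test statistic
    return nd.cdf(ncp - z_crit) + nd.cdf(-ncp - z_crit)

def required_n(effect, sigma, power=0.80, alpha=0.05):
    """Smallest n giving approximately the requested power (ignores the far tail)."""
    nd = NormalDist()
    z_a, z_b = nd.inv_cdf(1 - alpha / 2), nd.inv_cdf(power)
    return math.ceil(((z_a + z_b) * sigma / effect) ** 2)
```

For example, detecting a half-SD shift at 80% power and alpha = 0.05 requires roughly 32 observations, and power rises as n grows, which is the dependence the authors want students to appreciate.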
The Relation between Parental Values and Parenting Behavior: A Test of the Kohn Hypothesis.
ERIC Educational Resources Information Center
Luster, Tom; And Others
1989-01-01
Used data on 65 mother-infant dyads to test Kohn's hypothesis concerning the relation between values and parenting behavior. Findings support Kohn's hypothesis that parents who value self-direction would emphasize supportive function of parenting and parents who value conformity would emphasize their obligations to impose restraints. (Author/NB)
Cognitive Biases in the Interpretation of Autonomic Arousal: A Test of the Construal Bias Hypothesis
ERIC Educational Resources Information Center
Ciani, Keith D.; Easter, Matthew A.; Summers, Jessica J.; Posada, Maria L.
2009-01-01
According to Bandura's construal bias hypothesis, derived from social cognitive theory, persons with the same heightened state of autonomic arousal may experience either pleasant or deleterious emotions depending on the strength of perceived self-efficacy. The current study tested this hypothesis by proposing that college students' preexisting…
Overgaard, Morten; Lindeløv, Jonas; Svejstrup, Stinna; Døssing, Marianne; Hvid, Tanja; Kauffmann, Oliver; Mouridsen, Kim
2013-01-01
This paper reports an experiment intended to test a particular hypothesis derived from blindsight research, which we name the “source misidentification hypothesis.” According to this hypothesis, a subject may be correct about a stimulus without being correct about how she had access to this knowledge (whether the stimulus was visual, auditory, or something else). We test this hypothesis in healthy subjects, asking them to report whether a masked stimulus was presented auditorily or visually, what the stimulus was, and how clearly they experienced the stimulus using the Perceptual Awareness Scale (PAS). We suggest that knowledge about perceptual modality may be a necessary precondition in order to issue correct reports of which stimulus was presented. Furthermore, we find that PAS ratings correlate with correctness, and that subjects are at chance level when reporting no conscious experience of the stimulus. To demonstrate that particular levels of reporting accuracy are obtained, we employ a statistical strategy, which operationally tests the hypothesis of non-equality, such that the usual rejection of the null-hypothesis admits the conclusion of equivalence. PMID:23508677
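The statistical strategy described, rejecting a null of non-equality so that rejection supports equivalence, is conventionally implemented as two one-sided tests (TOST). A generic z-based sketch of that logic, not necessarily the authors' exact procedure; the margin and standard error are example inputs:

```python
from statistics import NormalDist

def tost_equivalence(diff, se, margin, alpha=0.05):
    """Two one-sided tests (TOST): the null is non-equivalence, so rejecting
    it supports equivalence within +/- margin. Returns (p, equivalent?)."""
    nd = NormalDist()
    p_lower = 1 - nd.cdf((diff + margin) / se)  # H0: true diff <= -margin
    p_upper = nd.cdf((diff - margin) / se)      # H0: true diff >= +margin
    p = max(p_lower, p_upper)                   # both one-sided nulls must be rejected
    return p, p < alpha
```

A small observed difference with a tight standard error rejects both one-sided nulls and supports equivalence; the same difference with a large standard error does not, mirroring how the authors establish chance-level performance rather than merely failing to reject it.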
A large scale test of the gaming-enhancement hypothesis
Wang, John C.
2016-01-01
A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people's gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses was compared. Results provided no substantive evidence supporting the idea that having a preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support of the null hypothesis over the gaming-enhancement hypothesis. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work. PMID:27896035
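The Bayesian comparison of null and alternative hypotheses can be illustrated with the simplest conjugate case, a binomial rate tested against theta = 0.5. This is only a sketch of the Bayes-factor logic; the study's actual models were more elaborate:

```python
import math

def bf01_binomial(k, n):
    """Bayes factor BF01 for H0: theta = 0.5 vs H1: theta ~ Uniform(0, 1),
    given k successes in n trials. Values above 1 favour the null."""
    m0 = math.comb(n, k) * 0.5 ** n  # marginal likelihood under the point null
    m1 = 1.0 / (n + 1)               # binomial likelihood integrated over Uniform(0, 1)
    return m0 / m1
```

Data sitting at the null value (50 successes in 100 trials) yield a Bayes factor of about 8 in favour of the null, which is the kind of graded "support for the null" a significance test cannot express, while 90 of 100 yields overwhelming evidence against it.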
Invited Commentary: The Need for Cognitive Science in Methodology.
Greenland, Sander
2017-09-15
There is no complete solution for the problem of abuse of statistics, but methodological training needs to cover cognitive biases and other psychosocial factors affecting inferences. The present paper discusses 3 common cognitive distortions: 1) dichotomania, the compulsion to perceive quantities as dichotomous even when dichotomization is unnecessary and misleading, as in inferences based on whether a P value is "statistically significant"; 2) nullism, the tendency to privilege the hypothesis of no difference or no effect when there is no scientific basis for doing so, as when testing only the null hypothesis; and 3) statistical reification, treating hypothetical data distributions and statistical models as if they reflect known physical laws rather than speculative assumptions for thought experiments. As commonly misused, null-hypothesis significance testing combines these cognitive problems to produce highly distorted interpretation and reporting of study results. Interval estimation has so far proven to be an inadequate solution because it involves dichotomization, an avenue for nullism. Sensitivity and bias analyses have been proposed to address reproducibility problems (Am J Epidemiol. 2017;186(6):646-647); these methods can indeed address reification, but they can also introduce new distortions via misleading specifications for bias parameters. P values can be reframed to lessen distortions by presenting them without reference to a cutoff, providing them for relevant alternatives to the null, and recognizing their dependence on all assumptions used in their computation; they nonetheless require rescaling for measuring evidence. I conclude that methodological development and training should go beyond coverage of mechanistic biases (e.g., confounding, selection bias, measurement error) to cover distortions of conclusions produced by statistical methods and psychosocial forces. © The Author(s) 2017. 
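One concrete rescaling Greenland has advocated in related work is the S-value, the Shannon surprisal of a p-value; a one-line sketch:

```python
import math

def s_value(p):
    """Shannon surprisal s = -log2(p): rescales a p-value into bits of
    information against the test hypothesis. p = 0.05 is about 4.3 bits,
    roughly as surprising as four heads in a row from a fair coin."""
    return -math.log2(p)
```

Because the scale is logarithmic rather than bounded at a cutoff, it avoids the dichotomania the commentary criticizes: p = 0.5 is 1 bit, p = 0.05 about 4.3 bits, and the measure remains conditional on all assumptions behind the p-value's computation.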
Laidler, Matthew R; Tourdjman, Mathieu; Buser, Genevieve L; Hostetler, Trevor; Repp, Kimberly K; Leman, Richard; Samadpour, Mansour; Keene, William E
2013-10-01
An outbreak of Escherichia coli O157:H7 was identified in Oregon through an increase in Shiga toxin-producing E. coli cases with an indistinguishable, novel pulsed-field gel electrophoresis (PFGE) subtyping pattern. We defined confirmed cases as persons from whom E. coli O157:H7 with the outbreak PFGE pattern was cultured during July-August 2011, and presumptive cases as persons having a household relationship with a case testing positive for E. coli O157:H7 and coincident diarrheal illness. We conducted an investigation that included structured hypothesis-generating interviews, a matched case-control study, and environmental and traceback investigations. We identified 15 cases. Six cases were hospitalized, including 4 with hemolytic uremic syndrome (HUS). Two cases with HUS died. Illness was significantly associated with strawberry consumption from roadside stands or farmers' markets (matched odds ratio, 19.6; 95% confidence interval, 2.9-∞). A single farm was identified as the source of contaminated strawberries. Ten of 111 (9%) initial environmental samples from farm A were positive for E. coli O157:H7. All samples testing positive for E. coli O157:H7 contained deer feces, and 5 tested farm fields had ≥ 1 sample positive with the outbreak PFGE pattern. The investigation identified fresh strawberries as a novel vehicle for E. coli O157:H7 infection, implicated deer feces as the source of contamination, and highlights problems concerning produce contamination by wildlife and regulatory exemptions for locally grown produce. A comprehensive hypothesis-generating questionnaire enabled rapid identification of the implicated product. Good agricultural practices are key barriers to wildlife fecal contamination of produce.
Testing the association between the incidence of schizophrenia and social capital in an urban area.
Kirkbride, J B; Boydell, J; Ploubidis, G B; Morgan, C; Dazzan, P; McKenzie, K; Murray, R M; Jones, P B
2008-08-01
Social capital has been considered aetiologically important in schizophrenia, but the empirical evidence to support this hypothesis is absent. We tested whether social capital, measured at the neighbourhood level, was associated with the incidence of schizophrenia (ICD-10 F20). Method: We administered a cross-sectional questionnaire on social capital to 5% of the adult population in 33 neighbourhoods (wards) in South London (n=16 459). The questionnaire contained items relating to two social capital constructs: social cohesion and trust (SC&T) and social disorganization (SocD). Schizophrenia incidence rates, estimated using data from the Aetiology and Ethnicity in Schizophrenia and Other Psychoses (AESOP) study, provided the outcome. We used multi-level Poisson regression to test our hypothesis while controlling for individual- and neighbourhood-level characteristics. We identified 148 cases during 565 576 person-years at risk. Twenty-six per cent of the variation in incidence rates was attributable to neighbourhood-level characteristics. Response to the social capital survey was 25.7%. The association between SC&T and schizophrenia was U-shaped. Compared with neighbourhoods with medium levels of SC&T, incidence rates were significantly higher in neighbourhoods with low [incidence rate ratio (IRR) 2.0, 95% confidence interval (CI) 1.2-3.3] and high (IRR 2.5, 95% CI 1.3-4.8) levels of SC&T, independent of age, sex, ethnicity, ethnic density, ethnic fragmentation and socio-economic deprivation. Conclusion: Neighbourhood variation in SC&T was non-linearly associated with the incidence of schizophrenia within an urban area. Neighbourhoods with low SC&T may fail to mediate social stress, whereas high-SC&T neighbourhoods may have greater informal social control or may increase the risk of schizophrenia for residents excluded from accessing available social capital.
Benzo, Roberto P; Chang, Chung-Chou H; Farrell, Max H; Kaplan, Robert; Ries, Andrew; Martinez, Fernando J; Wise, Robert; Make, Barry; Sciurba, Frank
2010-01-01
Chronic obstructive pulmonary disease (COPD) is a leading cause of death, and 70% of the cost of COPD is due to hospitalizations. Self-reported daily physical activity and health status have been reported as predictors of a hospitalization in COPD but are not routinely assessed. We tested the hypothesis that self-reported daily physical activity and health status, each assessed by a simple question, were predictors of a hospitalization in a well-characterized cohort of patients with severe emphysema. Investigators gathered daily physical activity and health status data assessed by a simple question in 597 patients with severe emphysema and tested the association of those patient-reported outcomes with the occurrence of a hospitalization in the following year. Multiple logistic regression analyses were used to determine predictors of hospitalization during the first 12 months after randomization. The two variables tested in the hypothesis were significant predictors of a hospitalization after adjusting for all significant univariable predictors: >2 h of physical activity per week had a protective effect [odds ratio (OR) 0.60; 95% confidence interval (95% CI) 0.41-0.88] and self-reported health status of fair or poor had a deleterious effect (OR 1.57; 95% CI 1.10-2.23). In addition, two other variables became significant in the multivariate model: total lung capacity (every 10% increase) had a protective effect (OR 0.88; 95% CI 0.78-0.99) and self-reported anxiety had a deleterious effect (OR 1.75; 95% CI 1.13-2.70). Self-reported daily physical activity and health status are independently associated with COPD hospitalizations. Our findings, based on simple questions, suggest the value of patient-reported outcomes in developing risk assessment tools that are easy to use.
McCarthy-Jones, Simon
2018-05-01
Whilst evidence is mounting that childhood sexual abuse (CSA) can be a cause of auditory verbal hallucinations (AVH), it is unclear what factors mediate this relation. Recent evidence suggests that post-traumatic symptomatology may mediate the CSA-AVH relation in clinical populations, although this hypothesis has not yet been tested in the general population. There is also reason to believe that obsessive ideation could mediate the CSA-AVH relation. To test for evidence to falsify the hypotheses that post-traumatic symptomatology, obsessions, compulsions, anxiety and depression mediate the relation between CSA and AVH in a general population sample. Indirect effects of CSA on AVH via potential mediators were tested for, using a regression-based approach employing data from the 2007 Adult Psychiatric Morbidity Survey (n = 5788). After controlling for demographics, IQ and child physical abuse, it was found that CSA, IQ, post-traumatic symptomatology and compulsions predicted lifetime experience of AVH. Mediation analyses found significant indirect effects of CSA on AVH via post-traumatic symptomatology [odds ratio (OR): 1.11; 95% confidence interval (CI): 1.00-1.29] and compulsions (OR: 1.10, 95% CI: 1.01-1.28). These findings offer further support for the hypothesis that post-traumatic symptomatology is a mediator of the CSA-AVH relation. Although no evidence was found for obsessional thoughts as a mediating variable, a potential mediating role for compulsions is theoretically intriguing. This study's findings reiterate the need to ask about experiences of childhood adversity and post-traumatic symptomatology in people with AVH, as well as the likely therapeutic importance of trauma-informed and trauma-based interventions for this population.
McAuley, J D; Stewart, A L; Webber, E S; Cromwell, H C; Servatius, R J; Pang, K C H
2009-12-01
Inbred Wistar-Kyoto (WKY) rats have been proposed as a model of anxiety vulnerability, as they display behavioral inhibition and a constellation of learning and reactivity abnormalities relative to outbred Sprague-Dawley (SD) rats. Together, the behaviors of the WKY rat suggest a hypervigilant state that may contribute to its anxiety vulnerability. To test this hypothesis, open-field behavior, acoustic startle, pre-pulse inhibition and timing behavior were assessed in WKY and SD rats. Timing behavior was evaluated using a modified version of the peak-interval timing procedure. Training and testing of timing first occurred without audio-visual (AV) interference. Following this initial test, AV interference was included on some trials. Overall, WKY rats took much longer to leave the center of the arena, made fewer line crossings, and reared less than did SD rats. WKY rats showed much greater startle responses to acoustic stimuli and significantly greater pre-pulse inhibition than did the SD rats. During timing conditions without AV interference, timing accuracy for both strains was similar; peak times for WKY and SD rats were not different. During interference conditions, however, the timing behavior of the two strains was very different. Whereas peak times for SD rats were similar between non-interference and interference conditions, peak times for WKY rats were shorter and response rates higher in interference conditions than in non-interference conditions. The enhanced acoustic startle response, greater pre-pulse inhibition and altered timing behavior with audio-visual interference support a characterization of the WKY strain as hypervigilant and provide further evidence for the use of the WKY strain as a model of anxiety vulnerability.
ERIC Educational Resources Information Center
SAW, J.G.
This paper deals with some tests of hypothesis frequently encountered in the analysis of multivariate data. The type of hypothesis considered is that which the statistician can answer in the negative or affirmative. The Doolittle method makes it possible to evaluate the determinant of a matrix of high order, to solve a matrix equation, or to…
NASA Astrophysics Data System (ADS)
Koch, Wolfgang
1996-05-01
Sensor data processing in a dense target/dense clutter environment is inevitably confronted with data association conflicts which correspond with the multiple hypothesis character of many modern approaches (MHT: multiple hypothesis tracking). In this paper we analyze the efficiency of retrodictive techniques that generalize standard fixed interval smoothing to MHT applications. 'Delayed estimation' based on retrodiction provides uniquely interpretable and accurate trajectories from ambiguous MHT output if a certain time delay is tolerated. In a Bayesian framework the theoretical background of retrodiction and its intimate relation to Bayesian MHT is sketched. By a simulated example with two closely-spaced targets, relatively low detection probabilities, and rather high false return densities, we demonstrate the benefits of retrodiction and quantitatively discuss the achievable track accuracies and the time delays involved for typical radar parameters.
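The fixed-interval smoothing that retrodiction generalizes can be sketched in its simplest scalar form. This Rauch-Tung-Striebel sketch assumes a random-walk model and single-hypothesis (non-MHT) tracking; the noise levels and prior are illustrative:

```python
def kalman_rts(zs, q=0.01, r=1.0, x0=0.0, p0=10.0):
    """Fixed-interval (Rauch-Tung-Striebel) smoothing for a scalar
    random-walk model x_k = x_{k-1} + w_k, z_k = x_k + v_k, with process
    noise q, measurement noise r and Gaussian prior (x0, p0)."""
    xf, pf, xp, pp = [], [], [], []  # filtered and predicted means/variances
    x, p = x0, p0
    for z in zs:
        xpred, ppred = x, p + q              # predict (identity dynamics)
        k = ppred / (ppred + r)              # Kalman gain
        x, p = xpred + k * (z - xpred), (1 - k) * ppred
        xf.append(x); pf.append(p); xp.append(xpred); pp.append(ppred)
    xs = xf[:]                               # backward (retrodiction) pass
    for i in range(len(zs) - 2, -1, -1):
        g = pf[i] / pp[i + 1]                # smoother gain
        xs[i] = xf[i] + g * (xs[i + 1] - xp[i + 1])
    return xs
```

The backward pass revises each estimate using later measurements, which is the "delayed estimation" trade-off the paper studies: better accuracy at the price of a time delay, extended in the paper from this single-track setting to ambiguous MHT output.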
Young, Anna M.; Cordier, Breanne; Mundry, Roger; Wright, Timothy F.
2014-01-01
In many social species, group members share acoustically similar calls. Functional hypotheses have been proposed for call sharing, but previous studies have been limited by an inability to distinguish among these hypotheses. We examined the function of vocal sharing in female budgerigars with a two-part experimental design that allowed us to distinguish between two functional hypotheses. The social association hypothesis proposes that shared calls help animals mediate affiliative and aggressive interactions, while the password hypothesis proposes that shared calls allow animals to distinguish group identity and exclude nonmembers. We also tested the labeling hypothesis, a mechanistic explanation which proposes that shared calls are used to address specific individuals within the sender–receiver relationship. We tested the social association hypothesis by creating four-member flocks of unfamiliar female budgerigars (Melopsittacus undulatus) and then monitoring the birds' calls, social behaviors, and stress levels via fecal glucocorticoid metabolites. We tested the password hypothesis by moving immigrants into established social groups. To test the labeling hypothesis, we conducted additional recording sessions in which individuals were paired with different group members. The social association hypothesis was supported by the development of multiple shared call types in each cage and a correlation between the number of shared call types and the number of aggressive interactions between pairs of birds. We also found support for calls serving as a labeling mechanism using discriminant function analysis with a permutation procedure. Our results did not support the password hypothesis, as there was no difference in stress or directed behaviors between immigrant and control birds. PMID:24860236
Andersson, Charlotte; Quiroz, Rene; Enserro, Danielle; Larson, Martin G; Hamburg, Naomi M; Vita, Joseph A; Levy, Daniel; Benjamin, Emelia J; Mitchell, Gary F; Vasan, Ramachandran S
2016-09-01
High arterial stiffness seems to be causally involved in the pathogenesis of hypertension. We tested the hypothesis that offspring of parents with hypertension may display higher arterial stiffness before clinically manifest hypertension, given that hypertension is a heritable condition. We compared arterial tonometry measures in a sample of 1564 nonhypertensive Framingham Heart Study third-generation cohort participants (mean age: 38 years; 55% women) whose parents were enrolled in the Framingham Offspring Study. A total of 468, 715, and 381 participants had 0 (referent), 1, and 2 parents with hypertension. Parental hypertension was associated with greater offspring mean arterial pressure (multivariable-adjusted estimate=2.9 mm Hg; 95% confidence interval, 1.9-3.9, and 4.2 mm Hg; 95% confidence interval, 2.9-5.5, for 1 and 2 parents with hypertension, respectively; P<0.001 for both) and with greater forward pressure wave amplitude (1.6 mm Hg; 95% confidence interval, 0.6-2.7, and 1.9 mm Hg; 95% confidence interval, 0.6-3.2, for 1 and 2 parents with hypertension, respectively; P=0.003 for both). Carotid-femoral pulse wave velocity and augmentation index displayed similar dose-dependent relations with parental hypertension in sex-, age-, and height-adjusted models, but associations were attenuated on further adjustment. Offspring with at least 1 parent in the upper quartile of augmentation index and carotid-femoral pulse wave velocity had significantly higher values themselves (P≤0.02). In conclusion, in this community-based sample of young, nonhypertensive adults, we observed greater arterial stiffness in offspring of parents with hypertension. These observations are consistent with higher vascular stiffness at an early stage in the pathogenesis of hypertension. © 2016 American Heart Association, Inc.
Joung, Boyoung; Park, Hyung-Wook; Maruyama, Mitsunori; Tang, Liang; Song, Juan; Han, Seongwook; Piccirillo, Gianfranco; Weiss, James N; Lin, Shien-Fong; Chen, Peng-Sheng
2011-01-01
Anodal stimulation hyperpolarizes the cell membrane and increases the intracellular Ca(2+) (Ca(i)) transient. This study tested the hypothesis that the maximum slope of the Ca(i) decline (-(dCa(i)/dt)(max)) corresponds to the timing of anodal dip on the strength-interval curve and the initiation of repetitive responses and ventricular fibrillation (VF) after a premature stimulus (S(2)). We simultaneously mapped the membrane potential (V(m)) and Ca(i) in 23 rabbit ventricles. A dip in the anodal strength-interval curve was observed. During the anodal dip, ventricles were captured by anodal break excitation directly under the S(2) electrode. The Ca(i) following anodal stimuli is larger than that following cathodal stimuli. The S(1)-S(2) intervals of the anodal dip (203±10 ms) coincided with the -(dCa(i)/dt)(max) (199±10 ms, P=NS). BAPTA-AM (n=3), inhibition of the electrogenic Na(+)-Ca(2+) exchanger current (I(NCX)) by low extracellular Na(+) (n=3), and combined ryanodine and thapsigargin infusion (n=2) eliminated the anodal supernormality. Strong S(2) during the relative refractory period (n=5) induced 29 repetitive responses and 10 VF episodes. The interval between S(2) and the first non-driven beat was coincidental with the time of -(dCa(i)/dt)(max). Larger Ca(i) transient and I(NCX) activation induced by anodal stimulation produces anodal supernormality. The time of maximum I(NCX) activation is coincidental to the induction of non-driven beats from the Ca(i) sinkhole after a strong premature stimulation. All rights are reserved to the Japanese Circulation Society.
Abnormal P-Wave Axis and Ischemic Stroke: The ARIC Study (Atherosclerosis Risk In Communities).
Maheshwari, Ankit; Norby, Faye L; Soliman, Elsayed Z; Koene, Ryan J; Rooney, Mary R; O'Neal, Wesley T; Alonso, Alvaro; Chen, Lin Y
2017-08-01
Abnormal P-wave axis (aPWA) has been linked to incident atrial fibrillation and mortality; however, the relationship between aPWA and stroke has not been reported. We hypothesized that aPWA is associated with ischemic stroke independent of atrial fibrillation and other stroke risk factors and tested our hypothesis in the ARIC study (Atherosclerosis Risk In Communities), a community-based prospective cohort study. We included 15 102 participants (aged 54.2±5.7 years; 55.2% women; 26.5% blacks) who attended the baseline examination (1987-1989) and were free of prevalent stroke. We defined aPWA as any value outside 0 to 75° using 12-lead ECGs obtained during study visits. Each case of incident ischemic stroke was classified in accordance with criteria from the National Survey of Stroke by a computer algorithm and adjudicated by physician review. Multivariable Cox regression was used to estimate hazard ratios and 95% confidence intervals for the association of aPWA with stroke. During a mean follow-up of 20.2 years, there were 657 incident ischemic stroke cases. aPWA was independently associated with a 1.50-fold (95% confidence interval, 1.22-1.85) increased risk of ischemic stroke in the multivariable model that included atrial fibrillation. When subtyped, aPWA was associated with a 2.04-fold (95% confidence interval, 1.42-2.95) increased risk of cardioembolic stroke and a 1.32-fold (95% confidence interval, 1.03-1.71) increased risk of thrombotic stroke. aPWA is independently associated with ischemic stroke. This association seems to be stronger for cardioembolic strokes. Collectively, our findings suggest that alterations in atrial electric activation may predispose to cardiac thromboembolism independent of atrial fibrillation. © 2017 American Heart Association, Inc.
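The hazard ratios above come with 95% confidence intervals. As a quick consistency sketch (assuming the interval was formed on the log scale as exp(log HR ± 1.96·SE), which the abstract does not state), the implied standard error can be recovered from the interval and the interval reproduced:

```python
import math

def se_from_ci(lower, upper, z=1.96):
    """Recover the log-scale standard error from a 95% confidence interval."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

def ci_from_hr(hr, se, z=1.96):
    """Recompute the 95% CI from the hazard ratio and its log-scale SE."""
    return (math.exp(math.log(hr) - z * se), math.exp(math.log(hr) + z * se))

# Reported ischemic-stroke association: HR 1.50 (95% CI, 1.22-1.85).
se = se_from_ci(1.22, 1.85)
lo, hi = ci_from_hr(1.50, se)
print(round(lo, 2), round(hi, 2))  # 1.22 1.85
```

That the reported interval is recovered (and is symmetric about log 1.50) is consistent with the usual log-scale normal approximation.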
Weaker Seniors Exhibit Motor Cortex Hypoexcitability and Impairments in Voluntary Activation
Taylor, Janet L.; Hong, S. Lee; Law, Timothy D.; Russ, David W.
2015-01-01
Background. Weakness predisposes seniors to a fourfold increase in functional limitations. The potential for age-related degradation in nervous system function to contribute to weakness and physical disability has garnered much interest of late. In this study, we tested the hypothesis that weaker seniors have impairments in voluntary (neural) activation and increased indices of GABAergic inhibition of the motor cortex, assessed using transcranial magnetic stimulation. Methods. Young adults (N = 46; 21.2±0.5 years) and seniors (N = 42; 70.7±0.9 years) had their wrist flexion strength quantified along with voluntary activation capacity (by comparing voluntary and electrically evoked forces). Single-pulse transcranial magnetic stimulation was used to measure motor-evoked potential amplitude and silent period duration during isometric contractions at 15% and 30% of maximum strength. Paired-pulse transcranial magnetic stimulation was used to measure intracortical facilitation and short-interval and long-interval intracortical inhibition. The primary analysis compared seniors to young adults. The secondary analysis compared stronger seniors (top two tertiles) to weaker seniors (bottom tertile) based on strength relative to body weight. Results. The most novel findings were that weaker seniors exhibited: (i) a 20% deficit in voluntary activation; (ii) ~20% smaller motor-evoked potentials during the 30% contraction task; and (iii) nearly twofold higher levels of long-interval intracortical inhibition under resting conditions. Conclusions. These findings indicate that weaker seniors exhibit significant impairments in voluntary activation, and that this impairment may be mechanistically associated with increased GABAergic inhibition of the motor cortex. PMID:25834195
Infant head circumference growth is saltatory and coupled to length growth.
Lampl, Michelle; Johnson, Michael L
2011-05-01
Rapid growth rates of head circumference and body size during infancy have been reported to predict developmental pathologies that emerge during childhood. This study investigated whether growth in head circumference was concordant with growth in body length. Forty infants (16 males) were followed between the ages of 2 days and 21 months for durations ranging from 4 to 21 months (2616 measurements). Longitudinal anthropometric measurements were assessed weekly (n=12), semi-weekly (n=24) and daily (n=4) during home visits. Individual head circumference growth was investigated for the presence of saltatory patterns. Coincident analysis tested the null hypothesis that head growth was randomly coupled to length growth. Head circumference growth during infancy is saltatory (p<0.05), characterized by median increments of 0.20 cm (95% confidence interval, 0.10-0.30 cm) in 24-h, separated by intervals of no growth ranging from 1 to 21 days. Daily assessments identified that head growth saltations were coupled to length growth saltations within a median time frame of 2 days (interquartile 0-4, range 1-8 days). Assessed at semi-weekly and weekly intervals, an average 82% (SD 0.13) of head growth saltations was non-randomly concordant with length growth (p≤0.006). Normal infant head circumference grows by intermittent, episodic saltations that are temporally coupled to growth in total body length by a process of integrated physiology that remains to be described. Copyright © 2011 Elsevier Ltd. All rights reserved.
Reinforcement of schedule-induced drinking in rats by lick-contingent shortening of food delivery.
Álvarez, Beatriz; Íbias, Javier; Pellón, Ricardo
2016-12-01
Schedule-induced drinking has been a theoretical question of concern ever since it was first described more than 50 years ago. It has been classified as adjunctive behavior; that is, behavior that is induced by an incentive but not reinforced by it. Nevertheless, some authors have argued against this view, claiming that adjunctive drinking is actually a type of operant behavior. If this were true, schedule-induced drinking should be controlled by its consequences, which is the major definition of an operant. The present study tested this hypothesis. In a first experimental phase, a single pellet of food was delivered at regular 90-s intervals, but the interfood interval could be shortened depending on the rat's licking. The degree of contingency between licking the bottle spout and hastening the delivery of the food pellet was 100%, 50%, and 0% for 3 separate groups of animals. Rats that could shorten the interval (100% and 50% contingency) drank at a higher rate than those that could not (0%), and the level of acquisition was positively related to the degree of contingency. In a second phase of the experiment, all groups were exposed to a 100% contingency, which resulted in all rats developing high levels of schedule-induced drinking. Licking is enhanced if it hastens reinforcement, and can do so at delays characteristic of those present in studies of schedule-induced drinking, thus supporting the view that adjunctive behavior is an operant.
Phase II design with sequential testing of hypotheses within each stage.
Poulopoulou, Stavroula; Karlis, Dimitris; Yiannoutsos, Constantin T; Dafni, Urania
2014-01-01
The main goal of a Phase II clinical trial is to decide whether a particular therapeutic regimen is effective enough to warrant further study. The hypothesis tested by Fleming's Phase II design (Fleming, 1982) is H0: p ≤ p0 versus H1: p ≥ p1, with level α and with power 1−β at p = p1, where p0 is chosen to represent the response probability achievable with standard treatment and p1 is chosen such that the difference p1 − p0 represents a targeted improvement with the new treatment. This hypothesis creates a misinterpretation, mainly among clinicians, that rejection of the null hypothesis is tantamount to accepting the alternative, and vice versa. As mentioned by Storer (1992), this introduces ambiguity in the evaluation of type I and II errors and the choice of the appropriate decision at the end of the study. Instead of testing this hypothesis, an alternative class of designs is proposed in which two hypotheses are tested sequentially. The hypothesis H0: p ≤ p0 versus H1: p > p0 is tested first. If this null hypothesis is rejected, the hypothesis H0: p ≤ p1 versus H1: p > p1 is tested next, in order to examine whether the therapy is effective enough to consider further testing in a Phase III study. For the derivation of the proposed design, the exact binomial distribution is used to calculate the decision cut-points. The optimal design parameters are chosen so as to minimize the average sample number (ASN) under specific upper bounds for the error levels. The optimal values for the design were found using a simulated annealing method.
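The exact binomial calculation of decision cut-points described above can be sketched as follows. The values n = 25, p0 = 0.20, and α = 0.05 are illustrative, not design parameters from the paper: the boundary is the smallest number of responses r whose exact one-sided tail probability under p0 does not exceed α.

```python
from math import comb

def binom_tail(n, r, p):
    """P(X >= r) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r, n + 1))

def rejection_boundary(n, p0, alpha):
    """Smallest r such that the exact one-sided test has size <= alpha."""
    for r in range(n + 1):
        if binom_tail(n, r, p0) <= alpha:
            return r
    return n + 1  # no attainable boundary at this n

r = rejection_boundary(n=25, p0=0.20, alpha=0.05)
print(r, round(binom_tail(25, r, 0.20), 4))
```

Because the binomial is discrete, the attained size at the boundary is strictly below α in general, which is one reason exact designs optimize over (n, r) pairs rather than fixing n in advance.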
Kumaraswamy autoregressive moving average models for double bounded environmental data
NASA Astrophysics Data System (ADS)
Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme
2017-12-01
In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to real environmental data is presented and discussed.
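One reason the median is a natural modeling target in KARMA is that the Kumaraswamy distribution has closed-form cdf and quantile functions. A minimal sketch for the standard case on (0, 1) (values on (a, b) can be mapped via y = (x − a)/(b − a); the shape parameters p = 2, q = 3 below are illustrative):

```python
# Standard Kumaraswamy distribution on (0, 1) with shape parameters p, q:
# cdf F(x) = 1 - (1 - x**p)**q, which inverts in closed form.
def kuma_cdf(x, p, q):
    return 1.0 - (1.0 - x**p)**q

def kuma_quantile(u, p, q):
    return (1.0 - (1.0 - u)**(1.0 / q))**(1.0 / p)

def kuma_median(p, q):
    """Closed-form median: the u = 0.5 quantile."""
    return (1.0 - 0.5**(1.0 / q))**(1.0 / p)

m = kuma_median(2.0, 3.0)
print(m, kuma_cdf(m, 2.0, 3.0))  # cdf evaluated at the median is 0.5
```

The closed-form quantile function also makes simulation by inversion trivial, which is convenient for the diagnostic and forecasting steps the paper discusses.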
Nonword repetition in lexical decision: support for two opposing processes.
Wagenmakers, Eric-Jan; Zeelenberg, René; Steyvers, Mark; Shiffrin, Richard; Raaijmakers, Jeroen
2004-10-01
We tested and confirmed the hypothesis that the effect of prior presentation of nonwords in lexical decision is the net result of two opposing processes: (1) a relatively fast inhibitory process based on global familiarity; and (2) a relatively slow facilitatory process based on the retrieval of specific episodic information. In three studies, we manipulated speed-stress to influence the balance between the two processes. Experiment 1 showed item-specific improvement for repeated nonwords in a standard "respond-when-ready" lexical decision task. Experiment 2 used a 400-ms deadline procedure and showed performance for nonwords to be unaffected by up to four prior presentations. In Experiment 3 we used a signal-to-respond procedure with variable time intervals and found negative repetition priming for repeated nonwords. These results can be accounted for by dual-process models of lexical decision.
Late winter survival of female mallards in Arkansas
Dugger, B.D.; Reinecke, K.J.; Fredrickson, L.H.
1994-01-01
Determining factors that limit winter survival of waterfowl is necessary to develop effective management plans. We radiomarked immature and adult female mallards (Anas platyrhynchos) after the 1988 and 1989 hunting seasons in east-central Arkansas to test whether natural mortality sources and habitat conditions during late winter limit seasonal survival. We used data from 92 females to calculate survival estimates. We observed no mortalities during 2,510 exposure days, despite differences in habitat conditions between years. We used the binomial distribution to calculate daily and 30-day survival estimates plus 95% confidence intervals of 0.9988 ≤ 0.9997 ≤ 1.00 and 0.9648 ≤ 0.9925 ≤ 1.00, respectively. Our data indirectly support the hypothesis that hunting mortality and habitat conditions during the hunting season are the major determinants of winter survival for female mallards in Arkansas.
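The reported lower bounds can be reproduced under one assumption about their construction (not stated in the abstract): with zero deaths observed, an exact one-sided binomial bound takes the lower 95% confidence limit s for daily survival to satisfy s^n = 0.05, i.e. the probability of observing no deaths in n exposure-days if true daily survival were s.

```python
# Zero-event exact binomial bound: solve s**n = alpha for the lower limit s.
n_days = 2510                      # exposure-days with no observed mortality
alpha = 0.05                       # one-sided 95% lower confidence bound
daily_lower = alpha ** (1.0 / n_days)
monthly_lower = daily_lower ** 30  # compounded to a 30-day survival bound
print(round(daily_lower, 4), round(monthly_lower, 4))  # 0.9988 0.9648
```

Both printed values match the lower limits quoted in the abstract, which supports the guess that a zero-event exact bound was used.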
An Extension of RSS-based Model Comparison Tests for Weighted Least Squares
2012-08-22
use the model comparison test statistic to analyze the null hypothesis. Under the null hypothesis, the weighted least squares cost functional is J_WLS(q̂_WLS^H) = 10.3040 × 10^6. Under the alternative hypothesis, the weighted least squares cost functional is J_WLS(q̂_WLS) = 8.8394 × 10^6. Thus the model
Kambeitz, Joseph; Abi-Dargham, Anissa; Kapur, Shitij; Howes, Oliver D
2014-06-01
The hypothesis that cortical dopaminergic alterations underlie aspects of schizophrenia has been highly influential. Our aim was to bring together and evaluate the imaging evidence for dopaminergic alterations in cortical and other extrastriatal regions in schizophrenia. Electronic databases were searched for in vivo molecular studies of extrastriatal dopaminergic function in schizophrenia. Twenty-three studies (278 patients and 265 controls) were identified. Clinicodemographic and imaging variables were extracted and effect sizes determined for the dopaminergic measures. There were sufficient data to permit meta-analyses for the temporal cortex, thalamus and substantia nigra but not for other regions. The meta-analysis of dopamine D2/D3 receptor availability found summary effect sizes of d = -0.32 (95% CI -0.68 to 0.03) for the thalamus, d = -0.23 (95% CI -0.54 to 0.07) for the temporal cortex and d = 0.04 (95% CI -0.92 to 0.99) for the substantia nigra. Confidence intervals were wide and all included no difference between groups. Evidence for other measures/regions is limited because of the small number of studies and in some instances inconsistent findings, although significant differences were reported for D2/D3 receptors in the cingulate and uncus, for D1 receptors in the prefrontal cortex and for dopamine transporter availability in the thalamus. There is a relative paucity of direct evidence for cortical dopaminergic alterations in schizophrenia, and findings are inconclusive. This is surprising given the wide influence of the hypothesis. Large, well-controlled studies in drug-naive patients are warranted to definitively test this hypothesis. Royal College of Psychiatrists.
Using Bayes to get the most out of non-significant results
Dienes, Zoltan
2014-01-01
No scientific conclusion follows automatically from a statistically non-significant result, yet people routinely use non-significant results to guide conclusions about the status of theories (or the effectiveness of practices). To know whether a non-significant result counts against a theory, or if it just indicates data insensitivity, researchers must use one of: power, intervals (such as confidence or credibility intervals), or else an indicator of the relative evidence for one theory over another, such as a Bayes factor. I argue Bayes factors allow theory to be linked to data in a way that overcomes the weaknesses of the other approaches. Specifically, Bayes factors use the data themselves to determine their sensitivity in distinguishing theories (unlike power), and they make use of those aspects of a theory’s predictions that are often easiest to specify (unlike power and intervals, which require specifying the minimal interesting value in order to address theory). Bayes factors provide a coherent approach to determining whether non-significant results support a null hypothesis over a theory, or whether the data are just insensitive. They allow accepting and rejecting the null hypothesis to be put on an equal footing. Concrete examples are provided to indicate the range of application of a simple online Bayes calculator, which reveal both the strengths and weaknesses of Bayes factors. PMID:25120503
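As a concrete, simplified analogue of the Bayes-factor logic described above (an illustration, not the paper's own online calculator, which addresses normal-theory cases): for binomial data, the evidence for H0: p = 0.5 against a vague H1: p ~ Uniform(0, 1) has a closed form, because the marginal likelihood under the uniform prior integrates to 1/(n + 1).

```python
from math import comb

def bf01_binomial(k, n):
    """Bayes factor for H0: p = 0.5 over H1: p ~ Uniform(0, 1)."""
    like_h0 = comb(n, k) * 0.5**n  # binomial likelihood at the null value
    like_h1 = 1.0 / (n + 1)        # marginal likelihood under the uniform prior
    return like_h0 / like_h1

# 52 successes in 100 trials is far from significant against p = 0.5, and the
# Bayes factor quantifies how strongly such data actively support the null
# over the vague alternative, rather than merely failing to reject it.
print(round(bf01_binomial(52, 100), 2))
```

This is the distinction the paper draws: a non-significant result plus BF01 well above 1 is positive evidence for the null, whereas BF01 near 1 signals that the data are simply insensitive.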
Mechanisms of midsession reversal accuracy: Memory for preceding events and timing.
Smith, Aaron P; Beckmann, Joshua S; Zentall, Thomas R
2017-01-01
The midsession reversal task involves a simultaneous discrimination between 2 stimuli (S1 and S2) in which, for the first half of each session, choice of S1 is reinforced and, for the last half, choice of S2 is reinforced. On this task, pigeons appear to time the occurrence of the reversal rather than using feedback from previous trials, resulting in increased numbers of errors. In the present experiments, we tested the hypothesis that pigeons make so many errors because they fail to remember the last response made and/or the consequence of making that response both of which are needed ideally as cues to respond on the next trial. To facilitate memory, during the 5-s intertrial interval, we differentially lit a houselight correlated with the prior response to S1 or S2 and maintained the hopper light when that response was correct. A control group received uncorrelated houselights and no maintained hopper light. To test for continued use of temporal information, both groups received probe sessions in which the intertrial interval was either halved or doubled. Providing relevant reminder cues of the stimulus chosen and its consequence resulted in improved reversal accuracy and reduced disruption from probe sessions compared with irrelevant cues. Nevertheless, despite the reminder cues, the pigeons in both groups appeared to continue to time the point in the session at which the reversal occurred. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Hypothesis testing of scientific Monte Carlo calculations.
Wallerberger, Markus; Gull, Emanuel
2017-11-01
The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
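The core idea can be sketched as follows: treat the output of a stochastic simulation as data and test it against the known answer with a standard statistic. The "simulation" below is a toy Monte Carlo estimate of E[U²] = 1/3 for U ~ Uniform(0, 1) (an illustrative stand-in, not one of the paper's examples); a buggy sampler would push the z statistic far from zero.

```python
import random

def mc_z_test(n, seed, true_value=1.0 / 3.0):
    """z statistic comparing a Monte Carlo estimate of E[U^2] to a reference value."""
    rng = random.Random(seed)                   # seeded for reproducibility
    xs = [rng.random()**2 for _ in range(n)]
    mean = sum(xs) / n
    var = sum((x - mean)**2 for x in xs) / (n - 1)
    z = (mean - true_value) / (var / n) ** 0.5  # standard error of the mean
    return z

# A correct sampler yields |z| of order 1; testing against a wrong reference
# value (a stand-in for a bug) yields a z far outside any reasonable threshold.
print(mc_z_test(n=100_000, seed=42))
```

In an automated test suite one would reject at a conservative threshold (and account for repeated runs), exactly the kind of procedure the paper advocates making standard.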
Sex ratios in the two Germanies: a test of the economic stress hypothesis.
Catalano, Ralph A
2003-09-01
Literature describing temporal variation in the secondary sex ratio among humans reports an association between population stressors and declines in the odds of male birth. Explanations of this phenomenon draw on reports that stressed females spontaneously abort male fetuses more often than female fetuses, and that stressed males exhibit reduced sperm motility. This work has led to the argument that population stress induced by a declining economy reduces the human sex ratio. No direct test of this hypothesis appears in the literature. Here, a test is offered based on a comparison of the sex ratio in East and West Germany for the years 1946 to 1999. The theory suggests that the East German sex ratio should be lower in 1991, when East Germany's economy collapsed, than expected from its own history and from the sex ratio in West Germany. The hypothesis is tested using time-series modelling methods. The data support the hypothesis. The sex ratio in East Germany was at its lowest in 1991. This first direct test supports the hypothesis that economic decline reduces the human sex ratio.
Liaw, Jen-Jiuan
2003-06-01
This study tested the use of a developmentally supportive care (DSC) training program in the form of videotaped and personalized instruction to increase nurses' cognitive abilities for assessing preterm infant behavioral signals and offering supportive care. The study used a two-group pre-test post-test quasi-experimental repeated measures design. The participants were 25 NICU nurses, 13 in the intervention group, and 12 in the control group. The instrument developed for the study was a video test measuring the effectiveness of the DSC training. The video test questionnaires were administered to the participants twice, with an interval of four weeks. ANCOVA controlling for baseline scores was used for data analysis. In general, the results support the hypothesis that nurses' cognitive abilities were enhanced after the DSC training. The increase in nurses' cognitive abilities is the prerequisite for behavioral change, based on the assumptions of Bandura's Social Cognitive Learning Theory (Bandura, 1986). As nurses' cognitive abilities increased, it would be possible that nurse behaviors in taking care of these preterm infants might change. Therefore, the author recommends that in order to improve NICU care quality and the outcomes of preterm infants, the concepts of developmentally supportive care be incorporated into NICU caregiving practice by educating nurses.
Understanding suicide terrorism: premature dismissal of the religious-belief hypothesis.
Liddle, James R; Machluf, Karin; Shackelford, Todd K
2010-07-06
We comment on work by Ginges, Hansen, and Norenzayan (2009), in which they compare two hypotheses for predicting individual support for suicide terrorism: the religious-belief hypothesis and the coalitional-commitment hypothesis. Although we appreciate the evidence provided in support of the coalitional-commitment hypothesis, we argue that their method of testing the religious-belief hypothesis is conceptually flawed, thus calling into question their conclusion that the religious-belief hypothesis has been disconfirmed. In addition to critiquing the methodology implemented by Ginges et al., we provide suggestions on how the religious-belief hypothesis may be properly tested. It is possible that the premature and unwarranted conclusions reached by Ginges et al. may deter researchers from examining the effect of specific religious beliefs on support for terrorism, and we hope that our comments can mitigate this possibility.
Feldman, Anatol G; Latash, Mark L
2005-02-01
Criticisms of the equilibrium point (EP) hypothesis have recently appeared that are based on misunderstandings of some of its central notions. Starting from such interpretations of the hypothesis, incorrect predictions are made and tested. When the incorrect predictions prove false, the hypothesis is claimed to be falsified. In particular, the hypothesis has been rejected based on the wrong assumptions that it conflicts with empirically defined joint stiffness values or that it is incompatible with violations of equifinality under certain velocity-dependent perturbations. Typically, such attempts use notions describing the control of movements of artificial systems in place of physiologically relevant ones. While appreciating constructive criticisms of the EP hypothesis, we feel that incorrect interpretations have to be clarified by reiterating what the EP hypothesis does and does not predict. We conclude that the recent claims of falsifying the EP hypothesis and the calls for its replacement by EMG-force control hypothesis are unsubstantiated. The EP hypothesis goes far beyond the EMG-force control view. In particular, the former offers a resolution for the famous posture-movement paradox while the latter fails to resolve it.
Preference for a stimulus that follows a relatively aversive event: contrast or delay reduction?
Singer, Rebecca A; Berry, Laura M; Zentall, Thomas R
2007-03-01
Several types of contrast effects have been identified including incentive contrast, anticipatory contrast, and behavioral contrast. Clement, Feltus, Kaiser, and Zentall (2000) proposed a type of contrast that appears to be different from these others and called it within-trial contrast. In this form of contrast the relative value of a reinforcer depends on the events that occur immediately prior to the reinforcer. Reinforcers that follow relatively aversive events are preferred over those that follow less aversive events. In many cases the delay reduction hypothesis proposed by Fantino (1969) also can account for such effects. The current experiments provide a direct test of the delay reduction and contrast hypotheses by manipulating the schedule of reinforcement while holding trial duration constant. In Experiment 1, preference for fixed-interval (FI) versus differential-reinforcement-of-other-behavior (DRO) schedules of reinforcement was assessed. Some pigeons preferred one schedule over the other while others demonstrated a position (side) preference. Thus, no systematic preference was found. In Experiment 2, a simultaneous color discrimination followed the FI or DRO schedule, and following training, preference was assessed by presenting the two positive stimuli simultaneously. Consistent with the contrast hypothesis, pigeons showed a significant preference for the positive stimulus that in training had followed their less preferred schedule.
NASA Astrophysics Data System (ADS)
Horn, M. H.; Whitcombe, C. D.
2015-06-01
We tested the hypothesis that the Elegant Tern (Thalasseus elegans), a plunge-diving predator, is an indicator of changes in the prey community in southern California coastal waters. Shannon diversity (H′) of the tern's diet determined from dropped fish collected variously at the three nesting sites for 18 years over a 21-year interval (1993-2013) showed no significant change in diet diversity. Based on a species-accumulation curve, total diet species represented about 70% of an extrapolated asymptotic richness. Abundance patterns of five prey species making up > 75% of prey numbers for all years were compared with abundance patterns of the same species in independent surveys obtained from zooplankton tows, bottom trawls and power-plant entrapments. Three of the five species - northern anchovy, kelp pipefish and California lizardfish - showed significant, positive correlations between diet and survey abundances. Even though the tern's diet has been dominated by anchovy and pipefish, its diet is still broad, with prey taxa representing > 75% of the 42 species groups making up the California shelf fish fauna. Altogether, our results support the hypothesis that the Elegant Tern, with its flexible diet, is a qualitative indicator, a sentinel, of changes in the prey communities in southern California coastal waters.
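The Shannon diversity H′ used above is −Σ pᵢ ln pᵢ over prey-species proportions. A sketch on illustrative counts (not the study's data), contrasting an anchovy-dominated diet with a perfectly even one:

```python
from math import log

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) from raw species counts."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]  # zero counts contribute nothing
    return -sum(p * log(p) for p in props)

# Five prey species: one dominant (anchovy-like) versus an even community.
# H' is maximized at ln(k) when all k species are equally represented.
print(round(shannon([75, 10, 8, 4, 3]), 3), round(shannon([20] * 5), 3))
```

With a fixed species count, a drop in H′ over time would signal increasing dominance by a few prey species, which is the kind of diet-community change the study monitors.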
Barber, R; Plumb, M; Smith, A G; Cesar, C E; Boulton, E; Jeffreys, A J; Dubrova, Y E
2000-12-20
To test the hypothesis that mouse germline expanded simple tandem repeat (ESTR) mutations are associated with recombination events during spermatogenesis, crossover frequencies were compared with germline mutation rates at ESTR loci in male mice acutely exposed to 1 Gy of X-rays or to 10 mg/kg of the anticancer drug cisplatin. Ionising radiation resulted in a highly significant 2.7-3.6-fold increase in ESTR mutation rate in males mated 4, 5 and 6 weeks after exposure, but not 3 weeks after exposure. In contrast, irradiation had no effect on meiotic crossover frequencies assayed on six chromosomes using 25 polymorphic microsatellite loci spaced at approximately 20 cM intervals and covering 421 cM of the mouse genome. Paternal exposure to cisplatin did not affect either ESTR mutation rates or crossover frequencies, despite a report that cisplatin can increase crossover frequency in mice. Correlation analysis did not reveal any associations between the paternal ESTR mutation rate and crossover frequency in unexposed males and in those exposed to X-rays or cisplatin. This study does not, therefore, support the hypothesis that mutation induction at mouse ESTR loci results from a general genome-wide increase in meiotic recombination rate.
Iron and infection: An investigation of the optimal iron hypothesis in Lima, Peru.
Dorsey, Achsah F; Thompson, Amanda L; Kleinman, Ronald E; Duggan, Christopher P; Penny, Mary E
2018-02-19
This article explores the optimal iron hypothesis through secondary data analysis of the association between hemoglobin levels and morbidity among children living in Canto Grande, a peri-urban community located on the outskirts of Lima, Peru. Risk ratios were used to test whether lower iron status, assessed using the HemoCue B-Hemoglobin System, was associated with an increased relative risk of morbidity symptoms compared to iron replete status, controlling for infant age, sex, weight for height z-score, maternal education, and repeated measures in 515 infants aged 6-12 months. Infants with fewer current respiratory and diarrheal morbidity symptoms had a lower risk of low iron deficiency compared to participants who were iron replete (P < .10). Infants with fewer current respiratory infection symptoms had a statistically significant (P < .05) reduction in risk of moderate iron deficiency compared to infants who were iron replete. In this study, morbidity status was not predictive of iron deficient status over a six-month interval period, but nonreplete iron status was shown to be associated with current morbidity symptoms. These results support investigating iron status as an allostatic system that responds to infection adaptively, rather than expecting an optimal preinfection value. © 2018 Wiley Periodicals, Inc.
Thermal regimes of Mexican spotted owl nest stands
Joseph L. Ganey
2004-01-01
To evaluate the hypothesis that spotted owls (Strix occidentalis) select habitats with cool microclimates to avoid high daytime temperatures, I sampled thermal regimes in nest areas used by Mexican spotted owls (S. o. lucida) in northern Arizona. I sampled air temperature at 30-min intervals in 30 pairs of nest and random sites...
Action perception as hypothesis testing.
Donnarumma, Francesco; Costantini, Marcello; Ambrosini, Ettore; Friston, Karl; Pezzulo, Giovanni
2017-04-01
We present a novel computational model that describes action perception as an active inferential process that combines motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions and their underlying hypotheses. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction; but uncertainty induces a more reactive gaze strategy, via tracking the observed movements. Our model offers a novel perspective on action observation that highlights its active nature based on prediction dynamics and hypothesis testing. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Pikkujamsa, S. M.; Makikallio, T. H.; Sourander, L. B.; Raiha, I. J.; Puukka, P.; Skytta, J.; Peng, C. K.; Goldberger, A. L.; Huikuri, H. V.
1999-01-01
BACKGROUND: New methods of R-R interval variability based on fractal scaling and nonlinear dynamics ("chaos theory") may give new insights into heart rate dynamics. The aims of this study were to (1) systematically characterize and quantify the effects of aging from early childhood to advanced age on 24-hour heart rate dynamics in healthy subjects; (2) compare age-related changes in conventional time- and frequency-domain measures with changes in newly derived measures based on fractal scaling and complexity (chaos) theory; and (3) further test the hypothesis that there is loss of complexity and altered fractal scaling of heart rate dynamics with advanced age. METHODS AND RESULTS: The relationship between age and cardiac interbeat (R-R) interval dynamics from childhood to senescence was studied in 114 healthy subjects (age range, 1 to 82 years) by measurement of the slope, beta, of the power-law regression line (log power-log frequency) of R-R interval variability (10(-4) to 10(-2) Hz), approximate entropy (ApEn), short-term (alpha(1)) and intermediate-term (alpha(2)) fractal scaling exponents obtained by detrended fluctuation analysis, and traditional time- and frequency-domain measures from 24-hour ECG recordings. Compared with young adults (<40 years old, n=29), children (<15 years old, n=27) showed similar complexity (ApEn) and fractal correlation properties (alpha(1), alpha(2), beta) of R-R interval dynamics despite lower spectral and time-domain measures. Progressive loss of complexity (decreased ApEn, r=-0.69, P<0.001) and alterations of long-term fractal-like heart rate behavior (increased alpha(2), r=0.63, decreased beta, r=-0.60, P<0.001 for both) were observed thereafter from middle age (40 to 60 years, n=29) to old age (>60 years, n=29). CONCLUSIONS: Cardiac interbeat interval dynamics change markedly from childhood to old age in healthy subjects. 
Children show complexity and fractal correlation properties of R-R interval time series comparable to those of young adults, despite lower overall heart rate variability. Healthy aging is associated with R-R interval dynamics showing higher regularity and altered fractal scaling consistent with a loss of complex variability.
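Approximate entropy (ApEn), the complexity measure whose age-related decline is reported above, can be computed with a short pure-Python sketch of Pincus's definition; the two test series here are illustrative, not the study's 24-hour R-R data:

```python
import math, random

def apen(series, m=2, r=0.2):
    """Approximate entropy (Pincus): lower values mean a more regular,
    predictable series. r sets the tolerance as a fraction of the SD."""
    n = len(series)
    mean = sum(series) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in series) / n)
    tol = r * sd

    def phi(m):
        templates = [series[i:i + m] for i in range(n - m + 1)]
        logs = []
        for t1 in templates:
            # fraction of templates matching t1 within the tolerance
            c = sum(1 for t2 in templates
                    if max(abs(u - v) for u, v in zip(t1, t2)) <= tol)
            logs.append(math.log(c / len(templates)))
        return sum(logs) / len(templates)

    return phi(m) - phi(m + 1)

rng = random.Random(3)
periodic = [0.0, 1.0] * 50                      # perfectly regular
irregular = [rng.random() for _ in range(100)]  # no temporal structure
a_per, a_irr = apen(periodic), apen(irregular)
print(a_per, a_irr)   # the periodic series scores far lower
```

As in the abstract's interpretation, a drop in ApEn corresponds to a loss of complexity, i.e. more regular dynamics.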
ERIC Educational Resources Information Center
Besken, Miri
2016-01-01
The perceptual fluency hypothesis claims that items that are easy to perceive at encoding induce an illusion that they will be easier to remember, despite the finding that perception does not generally affect recall. The current set of studies tested the predictions of the perceptual fluency hypothesis with a picture generation manipulation.…
Adolescents' Body Image Trajectories: A Further Test of the Self-Equilibrium Hypothesis
ERIC Educational Resources Information Center
Morin, Alexandre J. S.; Maïano, Christophe; Scalas, L. Francesca; Janosz, Michel; Litalien, David
2017-01-01
The self-equilibrium hypothesis underlines the importance of having a strong core self, which is defined as a high and developmentally stable self-concept. This study tested this hypothesis in relation to body image (BI) trajectories in a sample of 1,006 adolescents (M[subscript age] = 12.6, including 541 males and 465 females) across a 4-year…
ERIC Educational Resources Information Center
Trafimow, David
2017-01-01
There has been much controversy over the null hypothesis significance testing procedure, with much of the criticism centered on the problem of inverse inference. Specifically, p gives the probability of the finding (or one more extreme) given the null hypothesis, whereas the null hypothesis significance testing procedure involves drawing a…
ERIC Educational Resources Information Center
Lee, Jungmin
2016-01-01
This study tested the Bennett hypothesis by examining whether four-year colleges changed listed tuition and fees, the amount of institutional grants per student, and room and board charges after their states implemented statewide merit-based aid programs. According to the Bennett hypothesis, increases in government financial aid make it easier for…
Human female orgasm as evolved signal: a test of two hypotheses.
Ellsworth, Ryan M; Bailey, Drew H
2013-11-01
We present the results of a study designed to empirically test predictions derived from two hypotheses regarding human female orgasm behavior as an evolved communicative trait or signal. One hypothesis tested was the female fidelity hypothesis, which posits that human female orgasm signals a woman's sexual satisfaction and therefore her likelihood of future fidelity to a partner. The other was the sire choice hypothesis, which posits that women's orgasm behavior signals increased chances of fertilization. To test the two hypotheses of human female orgasm, we administered a questionnaire to 138 females and 121 males who reported that they were currently in a romantic relationship. Key predictions of the female fidelity hypothesis were not supported. In particular, orgasm was not associated with female sexual fidelity, nor was orgasm associated with male perceptions of partner sexual fidelity. However, faked orgasm was associated with female sexual infidelity and lower male relationship satisfaction. Overall, results were in greater support of the sire choice signaling hypothesis than the female fidelity hypothesis. Results also suggest that male satisfaction with, investment in, and sexual fidelity to a mate are benefits that favored the selection of orgasmic signaling in ancestral females.
Luo, Liqun; Zhao, Wei; Weng, Tangmei
2016-01-01
The Trivers-Willard hypothesis predicts that high-status parents will bias their investment to sons, whereas low-status parents will bias their investment to daughters. Among humans, tests of this hypothesis have yielded mixed results. This study tests the hypothesis using data collected among contemporary peasants in Central South China. We use current family status (rated by our informants) and father's former class identity (assigned by the Chinese Communist Party in the early 1950s) as measures of parental status, and proportion of sons in offspring and offspring's years of education as measures of parental investment. Results show that (i) those families with a higher former class identity such as landlord and rich peasant tend to have a higher socioeconomic status currently, (ii) high-status parents are more likely to have sons than daughters among their biological offspring, and (iii) in higher-status families, the years of education obtained by sons exceed that obtained by daughters to a larger extent than in lower-status families. Thus, the first assumption and the two predictions of the hypothesis are supported by this study. This article contributes a contemporary Chinese case to the testing of the Trivers-Willard hypothesis.
Hypothesis testing of a change point during cognitive decline among Alzheimer's disease patients.
Ji, Ming; Xiong, Chengjie; Grundman, Michael
2003-10-01
In this paper, we present a statistical hypothesis test for detecting a change point over the course of cognitive decline among Alzheimer's disease patients. The model under the null hypothesis assumes a constant rate of cognitive decline over time, and the model under the alternative hypothesis is a general bilinear model with an unknown change point. When the change point is unknown, however, the null distribution of the test statistic is not analytically tractable and has to be simulated by parametric bootstrap. When the alternative hypothesis that a change point exists is accepted, we propose an estimate of its location based on Akaike's Information Criterion. We applied our method to a data set from the Neuropsychological Database Initiative by implementing our hypothesis testing method to analyze Mini Mental Status Exam (MMSE) scores based on a random-slope and random-intercept model with a bilinear fixed effect. Our result shows that despite a large amount of missing data, accelerated decline did occur for MMSE among AD patients. Our finding supports the clinical belief of the existence of a change point during cognitive decline among AD patients and suggests the use of change point models for the longitudinal modeling of cognitive decline in AD research.
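A minimal sketch of the test's logic, with hypothetical MMSE-like data: fit a single line under the null, a bilinear model (grid search over change points) under the alternative, and calibrate the RSS-ratio statistic by parametric bootstrap. For simplicity the two segments are fit independently, and the paper's random slopes and intercepts are omitted:

```python
import math, random

def ols_line(xs, ys):
    """Least-squares line fit; returns intercept, slope, and RSS."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    rss = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, rss

def bilinear_rss(xs, ys):
    """Best total RSS over candidate change points; the two segments
    are fit independently rather than constrained to join."""
    best = float("inf")
    for k in range(3, len(xs) - 2):      # at least 3 points per segment
        rss = ols_line(xs[:k], ys[:k])[2] + ols_line(xs[k:], ys[k:])[2]
        best = min(best, rss)
    return best

def changepoint_pvalue(xs, ys, n_boot=200, seed=0):
    """Parametric-bootstrap p-value: H0 one line vs H1 bilinear."""
    rng = random.Random(seed)
    a, b, rss0 = ols_line(xs, ys)
    stat = rss0 / bilinear_rss(xs, ys)   # large when a break helps
    sigma = math.sqrt(rss0 / (len(xs) - 2))
    exceed = 0
    for _ in range(n_boot):              # simulate new data under H0
        yb = [a + b * x + rng.gauss(0, sigma) for x in xs]
        if ols_line(xs, yb)[2] / bilinear_rss(xs, yb) >= stat:
            exceed += 1
    return (exceed + 1) / (n_boot + 1)

# Hypothetical scores: mild decline, then accelerated decline after t = 10
noise = random.Random(42)
xs = list(range(20))
ys = [28 - 0.2 * t + noise.gauss(0, 0.3) if t < 10
      else 26 - 1.5 * (t - 10) + noise.gauss(0, 0.3) for t in xs]
p = changepoint_pvalue(xs, ys)
print(p)    # small: the accelerated decline after the break is detected
```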
NASA Astrophysics Data System (ADS)
Menne, Matthew J.; Williams, Claude N., Jr.
2005-10-01
An evaluation of three hypothesis test statistics that are commonly used in the detection of undocumented changepoints is described. The goal of the evaluation was to determine whether the use of multiple tests could improve undocumented, artificial changepoint detection skill in climate series. The use of successive hypothesis testing is compared to optimal approaches, both of which are designed for situations in which multiple undocumented changepoints may be present. In addition, the importance of the form of the composite climate reference series is evaluated, particularly with regard to the impact of undocumented changepoints in the various component series that are used to calculate the composite. In a comparison of single-test changepoint detection skill, the composite reference series formulation is shown to be less important than the choice of the hypothesis test statistic, provided that the composite is calculated from the serially complete and homogeneous component series. However, each of the evaluated composite series is not equally susceptible to the presence of changepoints in its components, which may be erroneously attributed to the target series. Moreover, a reference formulation that is based on the averaging of the first-difference component series is susceptible to random walks when the composition of the component series changes through time (e.g., values are missing), and its use is, therefore, not recommended. When more than one test is required to reject the null hypothesis of no changepoint, the number of detected changepoints is reduced proportionately less than the number of false alarms in a wide variety of Monte Carlo simulations. Consequently, a consensus of hypothesis tests appears to improve undocumented changepoint detection skill, especially when reference series homogeneity is violated.
A consensus of successive hypothesis tests using a semihierarchic splitting algorithm also compares favorably to optimal solutions, even when changepoints are not hierarchic.
Yakoob, Mohammad Y; Shi, Peilin; Willett, Walter C; Rexrode, Kathryn M; Campos, Hannia; Orav, E John; Hu, Frank B; Mozaffarian, Dariush
2016-04-26
In prospective studies, the relationship of self-reported consumption of dairy foods with risk of diabetes mellitus is inconsistent. Few studies have assessed dairy fat, using circulating biomarkers, and incident diabetes mellitus. We tested the hypothesis that circulating fatty acid biomarkers of dairy fat, 15:0, 17:0, and t-16:1n-7, are associated with lower incident diabetes mellitus. Among 3333 adults aged 30 to 75 years and free of prevalent diabetes mellitus at baseline, total plasma and erythrocyte fatty acids were measured in blood collected in 1989 to 1990 (Nurses' Health Study) and 1993 to 1994 (Health Professionals Follow-Up Study). Incident diabetes mellitus through 2010 was confirmed by a validated supplementary questionnaire based on symptoms, diagnostic tests, and medications. Risk was assessed by using Cox proportional hazards, with cohort findings combined by meta-analysis. During mean±standard deviation follow-up of 15.2±5.6 years, 277 new cases of diabetes mellitus were diagnosed. In pooled multivariate analyses adjusting for demographics, metabolic risk factors, lifestyle, diet, and other circulating fatty acids, individuals with higher plasma 15:0 had a 44% lower risk of diabetes mellitus (quartiles 4 versus 1, hazard ratio, 0.56; 95% confidence interval, 0.37-0.86; P-trend=0.01); higher plasma 17:0, 43% lower risk (hazard ratio, 0.57; 95% confidence interval, 0.39-0.83; P-trend=0.01); and higher t-16:1n-7, 52% lower risk (hazard ratio, 0.48; 95% confidence interval, 0.33-0.70; P-trend <0.001). Findings were similar for erythrocyte 15:0, 17:0, and t-16:1n-7, although with broader confidence intervals that only achieved statistical significance for 17:0. In 2 prospective cohorts, higher plasma dairy fatty acid concentrations were associated with lower incident diabetes mellitus. Results were similar for erythrocyte 17:0. 
Our findings highlight the need to better understand the potential health effects of dairy fat, and the dietary and metabolic determinants of these fatty acids. © 2016 American Heart Association, Inc.
Time-dependent influence of sensorimotor set on automatic responses in perturbed stance
NASA Technical Reports Server (NTRS)
Chong, R. K.; Horak, F. B.; Woollacott, M. H.; Peterson, B. W. (Principal Investigator)
1999-01-01
These experiments tested the hypothesis that the ability to change sensorimotor set quickly for automatic responses depends on the time interval between successive surface perturbations. Sensorimotor set refers to the influence of prior experience or context on the state of the sensorimotor system. Sensorimotor set for postural responses was influenced by first giving subjects a block of identical backward translations of the support surface, causing forward sway and automatic gastrocnemius responses. The ability to change set quickly was inferred by measuring the suppression of the stretched antagonist gastrocnemius responses to toes-up rotations causing backward sway, following the translations. Responses were examined under short (10-14 s) and long (19-24 s) inter-trial intervals in young healthy subjects. The results showed that subjects in the long-interval group changed set immediately by suppressing gastrocnemius to 51% of translation responses within the first rotation and continued to suppress them over succeeding rotations. In contrast, subjects in the short-interval group did not change set immediately, but required two or more rotations to suppress gastrocnemius responses. By the last rotation, the short-interval group suppressed gastrocnemius responses to 33%, similar to the long-interval group of 29%. Associated surface plantarflexor torque resulting from these responses showed similar results. When rotation and translation perturbations alternated, however, the short-interval group was not able to suppress gastrocnemius responses to rotations as much as the long-interval group, although they did suppress more than in the first rotation trial after a series of translations. Set for automatic responses appears to linger, from one trial to the next. Specifically, sensorimotor set is more difficult to change when surface perturbations are given in close succession, making it appear as if set has become progressively stronger. 
A strong set does not mean that responses become larger over consecutive trials. Rather, it is inferred from the extent of difficulty in changing a response when it is appropriate to do so. These results suggest that the ability to change sensorimotor set quickly is sensitive to whether the change is required after a long or a short series of a prior different response, which in turn depends on the time interval between successive trials. Different rates of gastrocnemius suppression to toes-up rotation of the support surface have been reported in previous studies; this may be partially explained by the different inter-trial time intervals, as demonstrated in this study.
Bayesian Methods for Determining the Importance of Effects
USDA-ARS?s Scientific Manuscript database
Criticisms have plagued the frequentist null-hypothesis significance testing (NHST) procedure since the day it was created from the Fisher Significance Test and Hypothesis Test of Jerzy Neyman and Egon Pearson. Alternatives to NHST exist in frequentist statistics, but competing methods are also avai...
Testing for purchasing power parity in the long-run for ASEAN-5
NASA Astrophysics Data System (ADS)
Choji, Niri Martha; Sek, Siok Kun
2017-04-01
For more than a decade, there has been substantial interest in empirically testing the validity of the purchasing power parity (PPP) hypothesis. This paper tests for long-run relative purchasing power parity in a group of ASEAN-5 countries for the period 1996-2016 using monthly data. For this purpose, we used the Pedroni co-integration method to test the long-run hypothesis of purchasing power parity. We first tested for the stationarity of the variables and found that the variables are non-stationary at levels but stationary at first difference. Results of the Pedroni test rejected the null hypothesis of no co-integration, meaning that we have enough evidence to support PPP in the long run for the ASEAN-5 countries over the period 1996-2016. In other words, the rejection of the null hypothesis implies a long-run relation between nominal exchange rates and relative prices.
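The levels-versus-first-differences finding reflects the standard I(1)/I(0) distinction. A crude stdlib illustration (not the Pedroni panel procedure itself): the lag-1 autocorrelation of a simulated random walk sits near 1, while that of its first differences sits near 0:

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a series."""
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

rng = random.Random(7)
steps = [rng.gauss(0, 1) for _ in range(2000)]   # I(0): stationary shocks
walk = []
level = 0.0
for s in steps:                                  # I(1): cumulative sum
    level += s
    walk.append(level)

ac_level = lag1_autocorr(walk)    # near 1: unit root at levels
ac_diff = lag1_autocorr(steps)    # near 0: stationary after differencing
print(ac_level, ac_diff)
```

Formal unit-root and panel co-integration tests refine this intuition with proper critical values rather than eyeballing the autocorrelation.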
UNIFORMLY MOST POWERFUL BAYESIAN TESTS
Johnson, Valen E.
2014-01-01
Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerful Bayesian tests to be tests that maximize the probability that the Bayes factor, in favor of the alternative hypothesis, exceeds a specified threshold. Like their classical counterpart, uniformly most powerful Bayesian tests are most easily defined in one-parameter exponential family models, although extensions outside of this class are possible. The connection between uniformly most powerful tests and uniformly most powerful Bayesian tests can be used to provide an approximate calibration between p-values and Bayes factors. Finally, issues regarding the strong dependence of resulting Bayes factors and p-values on sample size are discussed. PMID:24659829
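For the normal mean with known variance, the ideas in this abstract can be sketched directly. The Bayes factor for a point alternative is a likelihood ratio, and (as an assumed reading of the one-parameter exponential family result) the UMPBT alternative is mu0 + sigma*sqrt(2 ln gamma / n), which makes the Bayes factor exceed gamma exactly when the z statistic exceeds sqrt(2 ln gamma):

```python
import math

def bf10_normal(xbar, n, mu0, mu1, sigma):
    """Bayes factor (likelihood ratio) for the point alternative mu1
    against the point null mu0, given a normal sample mean xbar with
    known sigma."""
    return math.exp(n * (mu1 - mu0) * (xbar - (mu0 + mu1) / 2) / sigma ** 2)

def umpbt_alternative(mu0, sigma, n, gamma):
    """Alternative that maximizes P(BF10 > gamma) for every data-generating
    mean (assumed form for the known-sigma normal mean problem)."""
    return mu0 + sigma * math.sqrt(2 * math.log(gamma) / n)

mu0, sigma, n, gamma = 0.0, 1.0, 25, 10.0
mu_star = umpbt_alternative(mu0, sigma, n, gamma)

# With this alternative, BF10 > gamma exactly when the usual z statistic
# exceeds sqrt(2 ln gamma): a calibration between test statistics and
# Bayes factors of the kind the abstract refers to.
z_cut = math.sqrt(2 * math.log(gamma))
xbar_hi = mu0 + (z_cut + 0.01) * sigma / math.sqrt(n)
xbar_lo = mu0 + (z_cut - 0.01) * sigma / math.sqrt(n)
hi_exceeds = bf10_normal(xbar_hi, n, mu0, mu_star, sigma) > gamma
lo_exceeds = bf10_normal(xbar_lo, n, mu0, mu_star, sigma) > gamma
print(hi_exceeds, lo_exceeds)
```

That z threshold (about 2.15 for gamma = 10) is one form of the approximate calibration between p-values and Bayes factors mentioned above.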
Biocompatibility of “On-Command” Dissolvable Tympanostomy Tube in the Rat Model
Mai, Johnny P.; Dumont, Matthieu; Rossi, Christopher; Cleary, Kevin; Wiedermann, Joshua; Reilly, Brian K.
2016-01-01
Objectives/Hypothesis A prototype tympanostomy tube, composed of polybutyl/methyl methacrylate-co-dimethyl amino ethyl methacrylate (PBM), was tested to (1) evaluate the effect of PBM tubes on rat dermis as a corollary for biocompatibility and (2) observe the efficacy of dissolution with isopropyl alcohol (iPrOH) and ethanol (EtOH). Subjects and Methods A two-part study was conducted to assess a biocompatible substance with inducible dissolvability as a critical characteristic for a newly engineered tympanostomy tube. First, tympanostomy tubes were inserted subcutaneously in 10 rats, which served as an animal model for biosafety, and compared to traditional tubes with respect to histologic reaction. Tissue surrounding the PBM prototype tubes was submitted for histopathology and demonstrated no tissue reactivity or signs of major inflammation. In the second part, we evaluated the dissolvability of the tube with isopropyl alcohol, ethanol, ofloxacin, Ciprodex, water, or soapy water. PBM tubes were exposed to decreasing concentrations of iPrOH and EtOH with interval qualitative assessment of dissolution. Results (1) Histologic examination did not reveal pathology with PBM tubes; (2) concentrations of at least 50% iPrOH and EtOH dissolved PBM tubes within 48 hours, while concentrations of at least 75% iPrOH and EtOH were required for dissolution when exposure was limited to four 20-minute intervals. Conclusion PBM is biocompatible in the rat model. Additionally, PBM demonstrates rapid dissolution upon alcohol-based stimuli, validating the proof-of-concept of dissolvable “on-command” or biocommandible ear tubes. Further testing of PBM is needed with a less ototoxic dissolver and in a better simulated middle ear environment before testing can be performed in humans. PMID:27796039
Effect of Mobile Phone Radiofrequency Electromagnetic Fields on.
Umar, Z U; Abubakar, M B; Ige, J; Igbokwe, U V; Mojiminiyi, F B O; Isezuo, S A
2014-12-29
Since cell phones emit radiofrequency electromagnetic fields (EMFs), this study tested the hypothesis that cell phones placed near the heart may interfere with the electrical rhythm of the heart or affect the blood pressure. Following informed consent, eighteen randomly selected, apparently healthy male volunteers aged 21.44 ± 0.53 years had their blood pressure, pulse rates and ECG measured before and after acute exposure to a cell phone. The ECG parameters obtained were: heart rate (HR), QRS complex duration (QRS), PR interval (PR) and corrected QT interval (QTc). Results are presented as mean ± SEM. Statistical analyses were done using a two-tailed paired t test for the blood pressure and pulse rate data and a one-way ANOVA with a post hoc Tukey test for the ECG data. P<0.05 was considered statistically significant. The blood pressure and pulse rates before and after exposure to the cell phone showed no significant difference. The ECG parameters (HR: beats/min; QRS: ms; PR: ms; and QTc, respectively) did not differ before (66.33 ± 2.50, 91.78 ± 1.36, 151.67 ± 5.39 and 395.44 ± 4.96), during (66.33 ± 2.40, 91.11 ± 1.61, 153.67 ± 5.06 and 394.33 ± 4.05) and after calls (67.22 ± 2.77, 91.11 ± 1.67, 157.44 ± 4.46 and 396.56 ± 4.93) compared to baseline (67.17 ± 2.19, 94.33 ± 1.57, 150.56 ± 4.93 and 399.56 ± 3.88). These results suggest that acute exposure to EMFs from cell phones placed near the heart may not interfere with the electrical activity of the heart or blood pressure in healthy individuals.
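The paired pre/post comparison used for the blood pressure and pulse data can be sketched with the t statistic computed by hand; the readings below are hypothetical:

```python
import math

def paired_t(before, after):
    """Two-tailed paired t statistic and its degrees of freedom."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n), n - 1

# Hypothetical systolic BP (mmHg) for six subjects, before/after exposure
before = [118, 122, 115, 130, 125, 119]
after = [119, 121, 116, 129, 126, 120]
t, df = paired_t(before, after)
print(round(t, 3), df)   # |t| well below the 5% critical value (2.571, df=5)
```

With |t| below the critical value, the null hypothesis of no mean change is retained, matching the pattern of null findings reported above.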
A hypothesis on the biological origins and social evolution of music and dance
Wang, Tianyan
2015-01-01
The origins of music and musical emotions are still an enigma; here I propose a comprehensive hypothesis on the origins and evolution of music, dance, and speech from a biological and sociological perspective. I suggest that every pitch interval between neighboring notes in music represents a corresponding movement pattern through interpreting the Doppler effect of sound, which not only provides a possible explanation for the transposition invariance of music, but also integrates music and dance into a common form: rhythmic movements. Accordingly, investigating the origins of music poses the question: why do humans appreciate rhythmic movements? I suggest that human appreciation of rhythmic movements and rhythmic events developed from the natural selection of organisms adapting to internal and external rhythmic environments. The perception and production of, as well as synchronization with, external and internal rhythms are so vital for an organism's survival and reproduction that animals have a rhythm-related reward and emotion (RRRE) system. The RRRE system enables the appreciation of rhythmic movements and events, and is integral to the origination of music, dance and speech. The first type of rewards and emotions (rhythm-related rewards and emotions, RRREs) is evoked by music and dance, and has biological and social functions, which in turn promote the evolution of music, dance and speech. These functions also evoke a second type of rewards and emotions, which I name society-related rewards and emotions (SRREs). The neural circuits of RRREs and SRREs develop in species formation and personal growth, with congenital and acquired characteristics, respectively; that is, music combines nature and culture. This hypothesis provides probable selection pressures and outlines the evolution of music, dance, and speech.
The links between the Doppler effect and the RRREs and SRREs can be empirically tested, making the current hypothesis scientifically concrete. PMID:25741232
Solarin, Sakiru Adebola; Gil-Alana, Luis Alberiko; Al-Mulali, Usama
2018-04-13
In this article, we have examined the hypothesis of convergence of renewable energy consumption in 27 OECD countries. However, instead of relying on classical techniques, which are based on the dichotomy between stationarity I(0) and nonstationarity I(1), we consider a more flexible approach based on fractional integration. We employ both parametric and semiparametric techniques. Using parametric methods, evidence of convergence is found in the cases of Mexico, Switzerland and Sweden along with the USA, Portugal, the Czech Republic, South Korea and Spain, and employing semiparametric approaches, we found evidence of convergence in all these eight countries along with Australia, France, Japan, Greece, Italy and Poland. For the remaining 13 countries, even though the orders of integration of the series are smaller than one in all cases except Germany, the confidence intervals are so wide that we cannot reject the hypothesis of unit roots thus not finding support for the hypothesis of convergence.
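Fractional integration generalizes the unit-root dichotomy by allowing the differencing order d to lie between 0 and 1. A minimal sketch of the fractional-difference filter (this is only the filter underlying the notion of "order of integration", not the authors' parametric or semiparametric estimators):

```python
def fracdiff_weights(d, n):
    """First n coefficients of the fractional-difference filter (1 - L)^d,
    via the recursion w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

def fracdiff(series, d):
    """Apply the truncated filter: d = 1 gives ordinary first differences,
    d = 0 leaves the series unchanged, 0 < d < 1 is 'fractional' memory."""
    w = fracdiff_weights(d, len(series))
    return [sum(w[k] * series[t - k] for k in range(t + 1))
            for t in range(len(series))]

print(fracdiff([1, 3, 6, 10], 1))   # ordinary differencing (first term kept)
print(fracdiff_weights(0.4, 5))     # slowly decaying weights: long memory
```

For 0 < d < 1 the weights decay hyperbolically rather than cutting off, which is why a fractionally integrated series can mean-revert (converge) even though classical I(0)/I(1) tests cannot reject a unit root.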
Determinants of postfire recovery and succession in mediterranean-climate shrublands of California
Keeley, J.E.; Fotheringham, C.J.; Baer-Keeley, M.
2005-01-01
Evergreen chaparral and semideciduous sage scrub shrublands were studied for five years after fires in order to evaluate hypothesized determinants of postfire recovery and succession. Residual species present in the immediate postfire environment dominated early succession. By the fifth year postfire, roughly half of the species were colonizers not present in the first year, but they comprised only 7-14% cover. Successional changes were evaluated in the context of four hypotheses: (1) event-dependent, (2) fire interval, (3) self-regulatory, and (4) environmental filter hypotheses. Characteristics specific to the fire event, for example, fire severity and annual fluctuations in precipitation, were important determinants of patterns of change in cover and density, supporting the "event-dependent" hypothesis. The "fire interval" hypothesis is also supported, primarily through the impact of short intervals on reproductive failure in obligate seeding shrubs and the impact of long intervals on fuel accumulation and resultant fire severity. Successional changes in woody cover were correlated with decreases in herb cover, indicating support for "self-regulatory" effects. Across this landscape there were strong "environmental filter" effects that resulted in complex patterns of postfire recovery and succession between coastal and interior associations of both vegetation types. Of relevance to fire managers is the finding that postfire recovery patterns are substantially slower in the interior sage scrub formations, and thus require different management strategies than coastal formations. Also, in sage scrub (but not chaparral), prefire stand age is positively correlated with fire severity, and negatively correlated with postfire cover. Differential responses to fire severity suggest that landscapes with combinations of high and low severity may lead to enhanced biodiversity. 
Predicting postfire management needs is complicated by the fact that vegetation recovery is significantly controlled by patterns of precipitation. © 2005 by the Ecological Society of America.
Timing and Causality in the Generation of Learned Eyelid Responses
Sánchez-Campusano, Raudel; Gruart, Agnès; Delgado-García, José M.
2011-01-01
The cerebellum-red nucleus-facial motoneuron (Mn) pathway has been reported as being involved in the proper timing of classically conditioned eyelid responses. This special type of associative learning serves as a model of event timing for studying the role of the cerebellum in dynamic motor control. Here, we have re-analyzed the firing activities of cerebellar posterior interpositus (IP) neurons and orbicularis oculi (OO) Mns in alert behaving cats during classical eyeblink conditioning, using a delay paradigm. The aim was to revisit the hypothesis that the IP neurons (IPns) can be considered a neuronal phase-modulating device supporting OO Mns firing with an emergent timing mechanism and an explicit correlation code during learned eyelid movements. Optimized experimental and computational tools allowed us to determine the different causal relationships (temporal order and correlation code) during and between trials. These intra- and inter-trial timing strategies, spanning from the sub-second range (millisecond timing) to longer-lasting ranges (interval timing), expanded the functional domain of cerebellar timing beyond motor control. Interestingly, the results supported the above-mentioned hypothesis. The causal inferences were influenced by the precise motor and pre-motor spike timing in the cause-effect interval, and, in addition, the timing of the learned responses depended on cerebellar–Mn network causality. Furthermore, the timing of conditioned responses (CRs) depended upon the probability of simulated causal conditions in the cause-effect interval and not the mere duration of the inter-stimulus interval. In this work, the close relation between timing and causality was verified. It could thus be concluded that the firing activities of IPns may be related more to the proper performance of ongoing CRs (i.e., the proper timing as a consequence of the pertinent causality) than to their generation and/or initiation. PMID:21941469
When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment.
Szucs, Denes; Ioannidis, John P A
2017-01-01
Null hypothesis significance testing (NHST) has several shortcomings that are likely contributing factors behind the widely debated replication crisis of (cognitive) neuroscience, psychology, and biomedical science in general. We review these shortcomings and suggest that, after sustained negative experience, NHST should no longer be the default, dominant statistical practice of all biomedical and psychological research. If theoretical predictions are weak we should not rely on all or nothing hypothesis tests. Different inferential methods may be most suitable for different types of research questions. Whenever researchers use NHST they should justify its use, and publish pre-study power calculations and effect sizes, including negative findings. Hypothesis-testing studies should be pre-registered and optimally raw data published. The current statistics lite educational approach for students that has sustained the widespread, spurious use of NHST should be phased out.
Sreenivasan, Vidhyapriya; Bobier, William R
2015-06-01
This research tested the hypothesis that the successful treatment of convergence insufficiency (CI) with vision-training (VT) procedures leads to an increased capacity of vergence adaptation (VAdapt), allowing a more rapid downward adjustment of the convergence accommodation cross-link. Nine subjects with CI were recruited from a clinical population, based upon reduced fusional vergence amplitudes, receded near point of convergence, or symptomology. VAdapt and the resulting changes to convergence accommodation (CA) were measured at specific intervals over 15 min (pre-training). Separate clinical measures of the accommodative convergence cross-link, horizontal fusion limits and near point of convergence were taken, and a symptomology questionnaire completed. Subjects then participated in a VT program composed of 2.5 h at home and 1 h in-office weekly for 12-14 weeks. Clinical testing was done weekly. VAdapt and CA measures were retaken once clinical measures normalized for 2 weeks (mid-training) and then again when symptoms had cleared (post-training). VAdapt and CA responses as well as the clinical measures were taken on a control group showing normal clinical findings. Six subjects provided complete data sets. CI clinical findings reached normal levels between 4 and 7 weeks of training, but symptoms, VAdapt, and CA output remained significantly different from the controls until 12-14 weeks. The hypothesis was retained. The reduced VAdapt and excessive CA found in CI were normalized through orthoptic treatment. This time course was underestimated by clinical findings but matched symptom amelioration. Copyright © 2015 Elsevier Ltd. All rights reserved.
Test of a motor theory of long-term auditory memory
Schulze, Katrin; Vargha-Khadem, Faraneh; Mishkin, Mortimer
2012-01-01
Monkeys can easily form lasting central representations of visual and tactile stimuli, yet they seem unable to do the same with sounds. Humans, by contrast, are highly proficient in auditory long-term memory (LTM). These mnemonic differences within and between species raise the question of whether the human ability is supported in some way by speech and language, e.g., through subvocal reproduction of speech sounds and by covert verbal labeling of environmental stimuli. If so, the explanation could be that storing rapidly fluctuating acoustic signals requires assistance from the motor system, which is uniquely organized to chain-link rapid sequences. To test this hypothesis, we compared the ability of normal participants to recognize lists of stimuli that can be easily reproduced, labeled, or both (pseudowords, nonverbal sounds, and words, respectively) versus their ability to recognize a list of stimuli that can be reproduced or labeled only with great difficulty (reversed words, i.e., words played backward). Recognition scores after 5-min delays filled with articulatory-suppression tasks were relatively high (75–80% correct) for all sound types except reversed words; the latter yielded scores that were not far above chance (58% correct), even though these stimuli were discriminated nearly perfectly when presented as reversed-word pairs at short intrapair intervals. The combined results provide preliminary support for the hypothesis that participation of the oromotor system may be essential for laying down the memory of speech sounds and, indeed, that speech and auditory memory may be so critically dependent on each other that they had to coevolve. PMID:22511719
Kim, J H; Ohara, S; Lenz, F A
2009-04-01
Primate thalamic action potential bursts associated with low-threshold spikes (LTS) occur during waking sensory and motor activity. We now test the hypothesis that different firing and LTS burst characteristics occur during quiet wakefulness (spontaneous condition) versus mental arithmetic (counting condition). This hypothesis was tested by thalamic recordings during the surgical treatment of tremor. Across all neurons and epochs, preburst interspike intervals (ISIs) were bimodal at median values, consistent with the duration of type A and type B gamma-aminobutyric acid inhibitory postsynaptic potentials. Neuronal spike trains (117 neurons) were categorized by joint ISI distributions into those firing as LTS bursts (G, grouped), firing as single spikes (NG, nongrouped), or firing as single spikes with sporadic LTS bursting (I, intermediate). During the spontaneous condition (46 neurons) only I spike trains changed category. Overall, burst rates (BRs) were lower and firing rates (FRs) were higher during the counting versus the spontaneous condition. Spike trains in the G category sometimes changed to I and NG categories at the transition from the spontaneous to the counting condition, whereas those in the I category often changed to NG. Among spike trains that did not change category by condition, G spike trains had lower BRs during counting, whereas NG spike trains had higher FRs. BRs were significantly greater than zero for G and I categories during wakefulness (both conditions). The changes between the spontaneous and counting conditions are most pronounced for the I category, which may be a transitional firing pattern between the bursting (G) and relay modes of thalamic firing (NG).
Macherey, Olivier; Cazals, Yves
2016-01-01
Most cochlear implants (CIs) stimulate the auditory nerve with trains of symmetric biphasic pulses consisting of two phases of opposite polarity. Animal and human studies have shown that both polarities can elicit neural responses. In human CI listeners, studies have shown that at suprathreshold levels the anodic phase is more effective than the cathodic phase. In contrast, animal studies usually show the opposite trend. Although the reason for this discrepancy remains unclear, computational modelling results have proposed that degeneration of the peripheral processes of the neurons could lead to a higher efficiency of anodic stimulation. We tested this hypothesis in ten guinea pigs that were deafened with an injection of sisomicin and implanted with a single ball electrode inserted in the first turn of the cochlea. Animals were tested at regular intervals starting 1 week after deafening and, for some of them, up to 1 year. Our hypothesis was that if the effect of polarity is determined by the presence or absence of peripheral processes, the difference in polarity efficiency should change over time because of progressive neural degeneration. Stimuli consisted of charge-balanced symmetric and asymmetric pulses allowing us to observe the response to each polarity individually. For all stimuli, the inferior colliculus evoked potential was measured. Results show that the cathodic phase was more effective than the anodic phase and that this remained so even several months after deafening. This suggests that neural degeneration cannot entirely account for the higher efficiency of anodic stimulation observed in human CI listeners.
Preparing for the first meeting with a statistician.
De Muth, James E
2008-12-15
Practical statistical issues that should be considered when performing data collection and analysis are reviewed. The meeting with a statistician should take place early in the research development before any study data are collected. The process of statistical analysis involves establishing the research question, formulating a hypothesis, selecting an appropriate test, sampling correctly, collecting data, performing tests, and making decisions. Once the objectives are established, the researcher can determine the characteristics or demographics of the individuals required for the study, how to recruit volunteers, what type of data are needed to answer the research question(s), and the best methods for collecting the required information. There are two general types of statistics: descriptive and inferential. Presenting data in a more palatable format for the reader is called descriptive statistics. Inferential statistics involve making an inference or decision about a population based on results obtained from a sample of that population. In order for the results of a statistical test to be valid, the sample should be representative of the population from which it is drawn. When collecting information about volunteers, researchers should only collect information that is directly related to the study objectives. Important information that a statistician will require first is an understanding of the type of variables involved in the study and which variables can be controlled by researchers and which are beyond their control. Data can be presented in one of four different measurement scales: nominal, ordinal, interval, or ratio. Hypothesis testing involves two mutually exclusive and exhaustive statements related to the research question. Statisticians should not be replaced by computer software, and they should be consulted before any research data are collected. 
When preparing to meet with a statistician, the pharmacist researcher should be familiar with the steps of statistical analysis and consider several questions related to the study to be conducted.
NASA Astrophysics Data System (ADS)
Lo Brutto, M.; Spera, M. G.
2011-09-01
The Temple of Olympian Zeus in Agrigento (Italy) was one of the largest temples and at the same time one of the most original works of Greek architecture. We do not know exactly what it looked like, because the temple is now almost completely destroyed, but it is well known for the presence of the Telamons, giant statues (about 8 meters high) probably located outside the temple to fill the intervals between the columns. According to the theory most widely accepted by archaeologists, the Telamons were a decorative element and also a support for the structure; however, this hypothesis has never been scientifically proven. One Telamon has been reassembled and is shown at the Archaeological Museum of Agrigento. In 2009 a group of researchers at the University of Palermo began a study to test the hypothesis that the Telamons supported the weight of the upper part of the temple. The study consists of a 3D survey of the Telamon, to reconstruct a detailed 3D digital model, and of a structural analysis with the Finite Element Method (FEM) to test the possibility that the Telamon could support the weight of the upper portion of the temple. In this work the authors describe the 3D survey of the Telamon carried out with Range-Based Modelling (RBM) and Image-Based Modelling (IBM). The RBM was performed with a TOF laser scanner, while the IBM used the ZScan system of Menci Software and Image Master of Topcon. Several tests were conducted to analyze the accuracy of the different 3D models and to evaluate the difference between laser scanning and photogrammetric data. Moreover, an appropriate data reduction to generate a 3D model suitable for FEM analysis was tested.
Perneger, Thomas V; Combescure, Christophe
2017-07-01
Published P-values provide a window into the global enterprise of medical research. The aim of this study was to use the distribution of published P-values to estimate the relative frequencies of null and alternative hypotheses and to seek irregularities suggestive of publication bias. This cross-sectional study included P-values published in 120 medical research articles in 2016 (30 each from the BMJ, JAMA, Lancet, and New England Journal of Medicine). The observed distribution of P-values was compared with expected distributions under the null hypothesis (i.e., uniform between 0 and 1) and the alternative hypothesis (strictly decreasing from 0 to 1). P-values were categorized according to conventional levels of statistical significance and in one-percent intervals. Among 4,158 recorded P-values, 26.1% were highly significant (P < 0.001), 9.1% were moderately significant (P ≥ 0.001 to < 0.01), 11.7% were weakly significant (P ≥ 0.01 to < 0.05), and 53.2% were nonsignificant (P ≥ 0.05). We noted three irregularities: (1) a high proportion of P-values < 0.001, especially in observational studies; (2) an excess of P-values equal to 1; and (3) about twice as many P-values just below 0.05 as just above it. The latter finding was seen in both randomized trials and observational studies, and in most types of analyses, excepting heterogeneity tests and interaction tests. Under plausible assumptions, we estimate that about half of the tested hypotheses were null and the other half were alternative. This analysis suggests that statistical tests published in medical journals are not a random sample of null and alternative hypotheses but that selective reporting is prevalent. In particular, significant results are about twice as likely to be reported as nonsignificant results. Copyright © 2017 Elsevier Inc. All rights reserved.
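The estimate that about half of tested hypotheses are null can be illustrated with a Storey-type estimator: under the null, p-values are uniform on [0, 1], so the mass of published p-values above a cutoff is almost entirely null. A hedged sketch on simulated data follows; the mixture proportions and the Beta-distributed alternative are assumptions for illustration, not the paper's data or method:

```python
import numpy as np

rng = np.random.default_rng(2)

def estimate_pi0(pvals, lam=0.5):
    """Storey-style estimate of the fraction of true null hypotheses:
    null p-values are uniform, so the mass above `lam` is (mostly) null,
    giving pi0 ~= #{p > lam} / ((1 - lam) * m)."""
    pvals = np.asarray(pvals)
    return np.mean(pvals > lam) / (1.0 - lam)

# Simulated literature: half null (uniform p-values), half alternative
# (p-values concentrated near 0, modeled here by a Beta distribution).
null_p = rng.uniform(0.0, 1.0, 5000)
alt_p = rng.beta(0.2, 5.0, 5000)
pi0_hat = estimate_pi0(np.concatenate([null_p, alt_p]))
```

On this simulated mixture the estimate recovers a null fraction near the true 50%, the same quantity the study infers from the shape of the published p-value distribution.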
Testing fundamental ecological concepts with a Pythium-Prunus pathosystem
USDA-ARS?s Scientific Manuscript database
The study of plant-pathogen interactions has enabled tests of basic ecological concepts on plant community assembly (Janzen-Connell Hypothesis) and plant invasion (Enemy Release Hypothesis). We used a field experiment to (#1) test whether Pythium effects depended on host (seedling) density and/or d...
A checklist to facilitate objective hypothesis testing in social psychology research.
Washburn, Anthony N; Morgan, G Scott; Skitka, Linda J
2015-01-01
Social psychology is not a very politically diverse area of inquiry, something that could negatively affect the objectivity of social psychological theory and research, as Duarte et al. argue in the target article. This commentary offers a number of checks to help researchers uncover possible biases and identify when they are engaging in hypothesis confirmation and advocacy instead of hypothesis testing.
Nan Liu; Hai Ren; Sufen Yuan; Qinfeng Guo; Long Yang
2013-01-01
The relative importance of facilitation and competition between pairwise plants across abiotic stress gradients as predicted by the stress-gradient hypothesis has been confirmed in arid and temperate ecosystems, but the hypothesis has rarely been tested in tropical systems, particularly across nutrient gradients. The current research examines the interactions between a...
Phase II Clinical Trials: D-methionine to Reduce Noise-Induced Hearing Loss
2012-03-01
loss (NIHL) and tinnitus in our troops. Hypotheses: Primary Hypothesis: Administration of oral D-methionine prior to and during weapons...reduce or prevent noise-induced tinnitus. Primary outcome to test the primary hypothesis: Pure tone air-conduction thresholds. Primary outcome to...test the secondary hypothesis: Tinnitus questionnaires. Specific Aims: 1. To determine whether administering oral D-methionine (D-met) can
An omnibus test for the global null hypothesis.
Futschik, Andreas; Taus, Thomas; Zehetmayer, Sonja
2018-01-01
Global hypothesis tests are a useful tool in the context of clinical trials, genetic studies, or meta-analyses, when researchers are not interested in testing individual hypotheses but in testing whether none of the hypotheses is false. There are several possibilities for testing the global null hypothesis when the individual null hypotheses are independent. If it is assumed that many of the individual null hypotheses are false, combination tests have been recommended to maximize power. If, however, it is assumed that only one or a few null hypotheses are false, global tests based on individual test statistics are more powerful (e.g., the Bonferroni or Simes test). Usually, however, there is no a priori knowledge of the number of false individual null hypotheses. We therefore propose an omnibus test based on cumulative sums of the transformed p-values. We show that this test yields impressive overall performance. The proposed method is implemented in an R package called omnibus.
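The two regimes the abstract contrasts can be made concrete with the classical baselines it names: a combination test (powerful when many nulls are false) and Simes' test (powerful when only a few are). This sketch shows those baselines, not the paper's cumulative-sum omnibus statistic:

```python
import numpy as np
from scipy import stats

def fisher_global_test(pvals):
    """Fisher's combination test: -2 * sum(log p) ~ chi-square(2k) under the global null."""
    pvals = np.asarray(pvals)
    stat = -2.0 * np.sum(np.log(pvals))
    return stats.chi2.sf(stat, df=2 * len(pvals))

def simes_global_test(pvals):
    """Simes' adjusted p-value: min over i of k * p_(i) / i for sorted p-values."""
    p_sorted = np.sort(np.asarray(pvals))
    k = len(p_sorted)
    return np.min(k * p_sorted / np.arange(1, k + 1))

many_weak = [0.04, 0.06, 0.05, 0.07, 0.03]   # many moderately small p-values
one_strong = [1e-6, 0.5, 0.6, 0.7, 0.8]      # a single very small p-value
```

Fisher's test rejects on `many_weak` while Simes' test does not, and Simes' test rejects decisively on `one_strong`, which is exactly the power trade-off that motivates an omnibus test when the number of false nulls is unknown.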
Explorations in Statistics: Hypothesis Tests and P Values
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2009-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of "Explorations in Statistics" delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what…
Planned Hypothesis Tests Are Not Necessarily Exempt from Multiplicity Adjustment
ERIC Educational Resources Information Center
Frane, Andrew V.
2015-01-01
Scientific research often involves testing more than one hypothesis at a time, which can inflate the probability that a Type I error (false discovery) will occur. To prevent this Type I error inflation, adjustments can be made to the testing procedure that compensate for the number of tests. Yet many researchers believe that such adjustments are…
ERIC Educational Resources Information Center
Malda, Maike; van de Vijver, Fons J. R.; Temane, Q. Michael
2010-01-01
In this study, cross-cultural differences in cognitive test scores are hypothesized to depend on a test's cultural complexity (Cultural Complexity Hypothesis: CCH), here conceptualized as its content familiarity, rather than on its cognitive complexity (Spearman's Hypothesis: SH). The content familiarity of tests assessing short-term memory,…
The effect of inter-set rest intervals on resistance exercise-induced muscle hypertrophy.
Henselmans, Menno; Schoenfeld, Brad J
2014-12-01
Due to a scarcity of longitudinal trials directly measuring changes in muscle girth, previous recommendations for inter-set rest intervals in resistance training programs designed to stimulate muscular hypertrophy were primarily based on the post-exercise endocrinological response and other mechanisms theoretically related to muscle growth. New research regarding the effects of inter-set rest interval manipulation on resistance training-induced muscular hypertrophy is reviewed here to evaluate current practices and provide directions for future research. Of the studies measuring long-term muscle hypertrophy in groups employing different rest intervals, none have found superior muscle growth in the shorter compared with the longer rest interval group and one study has found the opposite. Rest intervals less than 1 minute can result in acute increases in serum growth hormone levels and these rest intervals also decrease the serum testosterone to cortisol ratio. Long-term adaptations may abate the post-exercise endocrinological response and the relationship between the transient change in hormonal production and chronic muscular hypertrophy is highly contentious and appears to be weak. The relationship between the rest interval-mediated effect on immune system response, muscle damage, metabolic stress, or energy production capacity and muscle hypertrophy is still ambiguous and largely theoretical. In conclusion, the literature does not support the hypothesis that training for muscle hypertrophy requires shorter rest intervals than training for strength development or that predetermined rest intervals are preferable to auto-regulated rest periods in this regard.
Nikolakopoulou, Adriani; Mavridis, Dimitris; Furukawa, Toshi A; Cipriani, Andrea; Tricco, Andrea C; Straus, Sharon E; Siontis, George C M; Egger, Matthias; Salanti, Georgia
2018-02-28
To examine whether the continuous updating of networks of prospectively planned randomised controlled trials (RCTs) ("living" network meta-analysis) provides strong evidence against the null hypothesis in comparative effectiveness of medical interventions earlier than the updating of conventional, pairwise meta-analysis. Empirical study of the accumulating evidence about the comparative effectiveness of clinical interventions. Database of network meta-analyses of RCTs identified through searches of Medline, Embase, and the Cochrane Database of Systematic Reviews until 14 April 2015. Network meta-analyses published after January 2012 that compared at least five treatments and included at least 20 RCTs. Clinical experts were asked to identify in each network the treatment comparison of greatest clinical interest. Comparisons for which direct and indirect evidence disagreed, based on a side-splitting (node-splitting) test (P<0.10), were excluded. Cumulative pairwise and network meta-analyses were performed for each selected comparison. Monitoring boundaries of statistical significance were constructed, and the evidence against the null hypothesis was considered strong when the monitoring boundaries were crossed. The significance level was set at α=5%, with power of 90% (β=10%) and an anticipated treatment effect equal to the final estimate from the network meta-analysis. The frequency of, and time to, strong evidence against the null hypothesis were compared between pairwise and network meta-analyses. 49 comparisons of interest from 44 networks were included; most (n=39, 80%) were between active drugs, mainly from the specialties of cardiology, endocrinology, psychiatry, and rheumatology. 29 comparisons were informed by both direct and indirect evidence (59%), 13 by indirect evidence (27%), and 7 by direct evidence (14%). 
Both network and pairwise meta-analysis provided strong evidence against the null hypothesis for seven comparisons, but for an additional 10 comparisons only network meta-analysis provided strong evidence against the null hypothesis (P=0.002). The median time to strong evidence against the null hypothesis was 19 years with living network meta-analysis and 23 years with living pairwise meta-analysis (hazard ratio 2.78, 95% confidence interval 1.00 to 7.72, P=0.05). Studies directly comparing the treatments of interest continued to be published for eight comparisons after strong evidence had emerged in network meta-analysis. In comparative effectiveness research, prospectively planned living network meta-analyses produced strong evidence against the null hypothesis more often and earlier than conventional, pairwise meta-analyses. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Four applications of permutation methods to testing a single-mediator model.
Taylor, Aaron B; MacKinnon, David P
2012-09-01
Four applications of permutation tests to the single-mediator model are described and evaluated in this study. Permutation tests work by rearranging data in many possible ways in order to estimate the sampling distribution for the test statistic. The four applications to mediation evaluated here are the permutation test of ab, the permutation joint significance test, and the noniterative and iterative permutation confidence intervals for ab. A Monte Carlo simulation study was used to compare these four tests with the four best available tests for mediation found in previous research: the joint significance test, the distribution of the product test, and the percentile and bias-corrected bootstrap tests. We compared the different methods on Type I error, power, and confidence interval coverage. The noniterative permutation confidence interval for ab was the best performer among the new methods. It successfully controlled Type I error, had power nearly as good as the most powerful existing methods, and had better coverage than any existing method. The iterative permutation confidence interval for ab had lower power than do some existing methods, but it performed better than any other method in terms of coverage. The permutation confidence interval methods are recommended when estimating a confidence interval is a primary concern. SPSS and SAS macros that estimate these confidence intervals are provided.
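The "permutation test of ab" can be illustrated with a deliberately simplified sketch: estimate a (the X-to-M path) and b (the M-to-Y path controlling for X), then permute X to approximate the null distribution of their product. The paper's four variants, and especially its confidence-interval constructions, are more refined than this illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def ab_estimate(x, m, y):
    """Indirect effect a*b: a from regressing M on X; b is the partial slope of Y on M given X."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

def permutation_test_ab(x, m, y, n_perm=1000):
    """Two-sided permutation p-value for a*b, permuting X to break the mediation chain."""
    observed = ab_estimate(x, m, y)
    null = np.array([ab_estimate(rng.permutation(x), m, y) for _ in range(n_perm)])
    p = (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_perm + 1)
    return observed, p

# Simulated single-mediator data: X -> M (a = 0.8), M -> Y (b = 0.7)
n = 200
x = rng.normal(size=n)
m = 0.8 * x + rng.normal(size=n)
y = 0.7 * m + rng.normal(size=n)
observed_ab, p_value = permutation_test_ab(x, m, y)
```

With a genuine indirect effect the observed a*b sits far in the tail of the permutation distribution, so the p-value is small; under no mediation the permuted and observed products are exchangeable.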
Is it better to select or to receive? Learning via active and passive hypothesis testing.
Markant, Douglas B; Gureckis, Todd M
2014-02-01
People can test hypotheses through either selection or reception. In a selection task, the learner actively chooses observations to test his or her beliefs, whereas in reception tasks data are passively encountered. People routinely use both forms of testing in everyday life, but the critical psychological differences between selection and reception learning remain poorly understood. One hypothesis is that selection learning improves learning performance by enhancing generic cognitive processes related to motivation, attention, and engagement. Alternatively, we suggest that differences between these 2 learning modes derive from a hypothesis-dependent sampling bias that is introduced when a person collects data to test his or her own individual hypothesis. Drawing on influential models of sequential hypothesis-testing behavior, we show that such a bias (a) can lead to the collection of data that facilitates learning compared with reception learning and (b) can be more effective than observing the selections of another person. We then report a novel experiment based on a popular category learning paradigm that compares reception and selection learning. We additionally compare selection learners to a set of "yoked" participants who viewed the exact same sequence of observations under reception conditions. The results revealed systematic differences in performance that depended on the learner's role in collecting information and the abstract structure of the problem.
Testing for purchasing power parity in 21 African countries using several unit root tests
NASA Astrophysics Data System (ADS)
Choji, Niri Martha; Sek, Siok Kun
2017-04-01
Purchasing power parity (PPP) is used as a basis for international income and expenditure comparisons through the exchange rate theory. However, empirical studies disagree on the validity of PPP. In this paper, we test the validity of PPP using a panel data approach. We apply seven different panel unit root tests to the quarterly real effective exchange rates of 21 African countries for the period 1971:Q1 to 2012:Q4. All seven tests rejected the hypothesis of stationarity, meaning that absolute PPP does not hold in those African countries. This result confirms the claim from previous studies that standard panel unit root tests fail to support the PPP hypothesis.
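The unit-root logic behind these tests can be sketched at the univariate level with a simple Dickey-Fuller regression; the paper itself uses seven panel tests that pool such statistics across countries, and this sketch omits lag augmentation and panel pooling:

```python
import numpy as np

def dickey_fuller_t(y):
    """t-statistic on rho in: diff(y)_t = alpha + rho * y_{t-1} + e_t.
    Values well below roughly -2.86 (the 5% critical value with a constant)
    reject the unit-root null, i.e. support stationarity (PPP holding)."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - 2)
    se_rho = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se_rho

rng = np.random.default_rng(3)
e = rng.normal(size=500)
random_walk = np.cumsum(e)            # unit root: PPP-style nonstationarity
ar1 = np.empty(500)
ar1[0] = 0.0
for t in range(1, 500):
    ar1[t] = 0.5 * ar1[t - 1] + e[t]  # stationary AR(1) series
```

The stationary AR(1) series produces a strongly negative statistic while the random walk does not, which is the pattern a real exchange rate would have to show for absolute PPP to hold.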
Does Testing Increase Spontaneous Mediation in Learning Semantically Related Paired Associates?
ERIC Educational Resources Information Center
Cho, Kit W.; Neely, James H.; Brennan, Michael K.; Vitrano, Deana; Crocco, Stephanie
2017-01-01
Carpenter (2011) argued that the testing effect she observed for semantically related but associatively unrelated paired associates supports the mediator effectiveness hypothesis. This hypothesis asserts that after the cue-target pair "mother-child" is learned, relative to restudying mother-child, a review test in which…
Smoking and occupational allergy in workers in a platinum refinery.
Venables, K. M.; Dally, M. B.; Nunn, A. J.; Stevens, J. F.; Stephens, R.; Farrer, N.; Hunter, J. V.; Stewart, M.; Hughes, E. G.; Newman Taylor, A. J.
1989-01-01
OBJECTIVE--To test the hypothesis that smoking increases the risk of sensitisation by occupational allergens. DESIGN--Historical prospective cohort study. SETTING--Platinum refinery. SUBJECTS--91 Workers (86 men) who started work between 1 January 1973 and 31 December 1974 and whose smoking habit and atopic state (on skin prick testing with common allergens) had been noted at joining. MAIN OUTCOME MEASURES--Results of skin prick tests with platinum salts carried out routinely every three to six months and records of any respiratory symptoms noted by the refinery's occupational health service. Follow up was until 1980 or until leaving refinery work, whichever was earlier. RESULTS--57 Workers smoked and 29 were atopic; 22 developed a positive result on skin testing with platinum salts and 49 developed symptoms, including all 22 whose skin test result was positive. Smoking was the only significant predictor of a positive result on skin testing with platinum salts and its effect was greater than that of atopy; the estimated relative risks (95% confidence interval) when both were included in the regression model were: smokers versus non-smokers 5.05 (1.68 to 15.2) and atopic versus non-atopic 2.29 (0.88 to 5.99). Number of cigarettes smoked per day was the only significant predictor of respiratory symptoms. CONCLUSION--Smokers are at increased risk of sensitisation by platinum salts. PMID:2508944
A statistical approach to identify, monitor, and manage incomplete curated data sets.
Howe, Douglas G
2018-04-02
Many biological knowledge bases gather data through expert curation of published literature. High data volume, selective partial curation, delays in access, and publication of data prior to the ability to curate it can result in incomplete curation of published data. Knowing which data sets are incomplete and how incomplete they are remains a challenge. Awareness that a data set may be incomplete is important for proper interpretation and to avoid flawed hypothesis generation, and it can justify further exploration of published literature for additional relevant data. Computational methods to assess data set completeness are needed. One such method is presented here. In this work, a multivariate linear regression model was used to identify genes in the Zebrafish Information Network (ZFIN) Database having incomplete curated gene expression data sets. Starting with 36,655 gene records from ZFIN, data aggregation, cleansing, and filtering reduced the set to 9870 gene records suitable for training and testing the model to predict the number of expression experiments per gene. Feature engineering and selection identified the following predictive variables: the number of journal publications; the number of journal publications already attributed for gene expression annotation; the percent of journal publications already attributed for expression data; the gene symbol; and the number of transgenic constructs associated with each gene. Twenty-five percent of the gene records (2483 genes) were used to train the model. The remaining 7387 genes were used to test the model. Of the 7387 tested genes, 122 and 165 were identified as missing expression annotations based on their residuals falling outside the model's lower or upper 95% confidence interval, respectively. The model had precision of 0.97 and recall of 0.71 at the negative 95% confidence interval and precision of 0.76 and recall of 0.73 at the positive 95% confidence interval. 
This method can be used to identify data sets that are incompletely curated, as demonstrated using the gene expression data set from ZFIN. This information can help both database resources and data consumers gauge when it may be useful to look further for published data to augment the existing expertly curated information.
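The flagging step described above (fit a regression, then treat genes whose residuals fall outside a 95% interval as candidates for incomplete curation) can be sketched as follows. This is a minimal single-predictor illustration on synthetic data, not ZFIN's actual multivariate model; all names and numbers are hypothetical.

```python
import random
import statistics

random.seed(1)

# Hypothetical training data: publication count per gene (x) vs. number of
# curated expression experiments (y). Purely illustrative, not ZFIN data.
x = [random.randint(1, 50) for _ in range(200)]
y = [0.8 * xi + random.gauss(0, 2) for xi in x]

# Ordinary least squares for a single predictor.
n = len(x)
mx, my = statistics.fmean(x), statistics.fmean(y)
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
slope = sxy / sxx
intercept = my - slope * mx

# Residuals and an approximate 95% interval (0 +/- 1.96 * residual SD).
residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
sd = statistics.stdev(residuals)
lo, hi = -1.96 * sd, 1.96 * sd

# Genes whose curated counts fall below the lower bound are candidates for
# incomplete curation (fewer annotations than the model predicts).
flagged = [i for i, r in enumerate(residuals) if r < lo]
print(f"slope={slope:.2f}, flagged {len(flagged)} of {n} genes")
```

With well-behaved residuals, roughly 2.5% of genes fall below the lower bound by chance; a real screen would follow up flagged genes with literature review, as the paper describes.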
Hatori, Tsuyoshi; Takemura, Kazuhisa; Fujii, Satoshi; Ideno, Takashi
2011-06-01
This paper presents a new model of category judgment. The model hypothesizes that, when more attention is focused on a category, the psychological range of the category narrows (the category-focusing hypothesis). We explain this hypothesis using the metaphor of a "mental box": the more attention that is focused on a mental box (i.e., a category set), the smaller the box becomes (i.e., the cardinality of the category set). The hypothesis was tested in an experiment (N = 40) in which the focus of attention on prescribed verbal categories was manipulated. The data supported the hypothesis: category-focusing effects were found in three experimental tasks (concerning the categories "food", "height", and "income"). The validity of the hypothesis is discussed in light of the results.
Extending Theory-Based Quantitative Predictions to New Health Behaviors.
Brick, Leslie Ann D; Velicer, Wayne F; Redding, Colleen A; Rossi, Joseph S; Prochaska, James O
2016-04-01
Traditional null hypothesis significance testing suffers many limitations and is poorly adapted to theory testing. A proposed alternative approach, called Testing Theory-based Quantitative Predictions, uses effect size estimates and confidence intervals to directly test predictions based on theory. This paper replicates findings from previous smoking studies and extends the approach to diet and sun protection behaviors using baseline data from a Transtheoretical Model behavioral intervention (N = 5407). Effect size predictions were developed using two methods: (1) applying refined effect size estimates from previous smoking research or (2) using predictions developed by an expert panel. Thirteen of 15 predictions were confirmed for smoking. For diet, 7 of 14 predictions were confirmed using smoking predictions and 6 of 16 using expert panel predictions. For sun protection, 3 of 11 predictions were confirmed using smoking predictions and 5 of 19 using expert panel predictions. Both expert panel predictions and smoking-based predictions predicted effect sizes for diet and sun protection constructs poorly. Future studies should aim to use previous empirical data to generate predictions whenever possible. The best results occur when there have been several iterations of predictions for a behavior, as with smoking, demonstrating that expected values begin to converge on the population effect size. Overall, the study supports the necessity of strengthening and revising theory with empirical data.
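The core check in this approach, whether a theory-based predicted effect size falls within the confidence interval of the observed estimate, can be illustrated as below. The data, the predicted value, and the use of Cohen's d with a large-sample standard error are assumptions for illustration, not the paper's exact procedure.

```python
import math
import statistics

def cohens_d(a, b):
    """Standardized mean difference between two samples (pooled SD)."""
    na, nb = len(a), len(b)
    sp = math.sqrt(((na - 1) * statistics.variance(a) +
                    (nb - 1) * statistics.variance(b)) / (na + nb - 2))
    return (statistics.fmean(a) - statistics.fmean(b)) / sp

def d_confidence_interval(d, na, nb, z=1.96):
    """Approximate 95% CI for Cohen's d (large-sample SE formula)."""
    se = math.sqrt((na + nb) / (na * nb) + d ** 2 / (2 * (na + nb)))
    return d - z * se, d + z * se

# Hypothetical construct scores in two adjacent stages of change.
pre = [3.1, 2.8, 3.5, 3.0, 2.9, 3.3, 3.2, 2.7, 3.4, 3.1]
act = [3.9, 4.1, 3.6, 4.0, 4.2, 3.8, 3.7, 4.3, 3.9, 4.0]

d = cohens_d(act, pre)
lo, hi = d_confidence_interval(d, len(act), len(pre))
predicted = 3.0   # theory-based prediction (illustrative value)
confirmed = lo <= predicted <= hi
print(f"d={d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], confirmed: {confirmed}")
```

A prediction counts as confirmed when the observed interval contains it; unlike a null hypothesis test, the comparison is against the theoretically expected value, not against zero.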
NASA Astrophysics Data System (ADS)
Kozlowska, M.; Orlecka-Sikora, B.; Kwiatek, G.; Boettcher, M. S.; Dresen, G. H.
2014-12-01
Static stress changes following large earthquakes are known to affect the rate and spatio-temporal distribution of aftershocks. Here we utilize a unique dataset of M ≥ -3.4 earthquakes following a MW 2.2 earthquake in Mponeng gold mine, South Africa, to investigate this process for nano- and pico-scale seismicity at centimeter length scales in shallow mining conditions. The aftershock sequence was recorded during a quiet interval in the mine and thus enabled us to perform the analysis using Dieterich's (1994) rate- and state-dependent friction law. The formulation for earthquake productivity requires estimation of the Coulomb stress changes due to the mainshock, the reference seismicity rate, the frictional resistance parameter, and the duration of the aftershock relaxation time. We divided the area into six depth intervals and for each we estimated the parameters and modeled the spatio-temporal patterns of seismicity rates after the stress perturbation. Comparing the modeled patterns of seismicity with the observed distribution, we found that while the spatial patterns match well, the rate of modeled aftershocks is lower than the observed rate. To test our model, we used four goodness-of-fit metrics. The testing procedure rejected the null hypothesis of no significant difference between seismicity rates only for the depth interval containing the mainshock; for the others, no significant differences were found. Results show that mining-induced earthquakes may be followed by a stress relaxation expressed through aftershocks located on the rupture plane and in regions of positive Coulomb stress change. Furthermore, we demonstrate that the main features of the temporal and spatial distribution of very small, mining-induced earthquakes at shallow depths can be successfully determined using rate- and state-based stress modeling.
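A minimal sketch of the Dieterich (1994) seismicity-rate response to a Coulomb stress step, the relation underlying the modeling described above: a positive stress change elevates the rate immediately after the mainshock, which then decays back toward the background rate over the relaxation time. Parameter values are illustrative, not those estimated for the Mponeng sequence.

```python
import math

def dieterich_rate(t, r, dcfs, a_sigma, t_a):
    """Aftershock rate R(t) after a Coulomb stress step (Dieterich, 1994).

    r        reference (background) seismicity rate
    dcfs     Coulomb stress change imparted by the mainshock
    a_sigma  constitutive parameter A times normal stress (units of dcfs)
    t_a      aftershock relaxation (decay) time, same units as t
    """
    return r / ((math.exp(-dcfs / a_sigma) - 1.0) * math.exp(-t / t_a) + 1.0)

# Illustrative values: rate/day, MPa, MPa, days (hypothetical).
r, dcfs, a_sigma, t_a = 1.0, 0.5, 0.1, 30.0
rates = [dieterich_rate(t, r, dcfs, a_sigma, t_a) for t in (0.1, 1, 10, 100)]
print([round(x, 3) for x in rates])
```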
Fructose content and composition of commercial HFCS-sweetened carbonated beverages.
White, J S; Hobbs, L J; Fernandez, S
2015-01-01
The obesigenic and related health effects of caloric sweeteners are subjects of much current research. Consumers can properly adjust their diets to conform to nutritional recommendations only if the sugars composition of foods and beverages is accurately measured and reported, a matter of recent concern. We tested the hypothesis that high-fructose corn syrup (HFCS) used in commercial carbonated beverages conforms to commonly assumed fructose percentages and industry technical specifications, and fulfills beverage product label regulations and Food Chemicals Codex-stipulated standards. A high-pressure liquid chromatography method was developed and verified for analysis of sugars in carbonated beverages sweetened with HFCS-55. The method was used to measure percent fructose in three carbonated beverage categories. Method verification was demonstrated by acceptable linearity (R² > 0.99), accuracy (94-104% recovery) and precision (RSD < 2%). Fructose comprised 55.58% of total sugars (95% confidence interval 55.51-55.65%), based on 160 total measurements by 2 independent laboratories of 80 randomly selected carbonated beverages sweetened with HFCS-55. The difference in fructose measurements between laboratories was significant but small (0.1%), and lacked relevance. Differences in fructose by product category or by product age were not statistically significant. Total sugars content of carbonated beverages showed close agreement within product categories (95% confidence interval = 0.01-0.54%). Using verified analytical methodology for HFCS-sweetened carbonated beverages, this study confirmed the hypothesis that fructose as a percentage of total sugars is in close agreement with published specifications in industry technical data sheets, published literature values and governmental standards and requirements.
Furthermore, total sugars content of commercial beverages is consistent with common industry practices for canned and bottled products and met the US Federal requirements for nutritional labeling and nutrient claims. Prior concerns about composition were likely owing to use of improper and unverified methodology.
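The headline estimate above, a 95% confidence interval for mean fructose as a percentage of total sugars, can be reproduced in outline as follows. The measurements here are simulated around the reported mean; only the interval arithmetic reflects the study's calculation.

```python
import math
import random
import statistics

random.seed(7)

# Hypothetical fructose measurements (% of total sugars) for HFCS-55
# sweetened beverages; the real study pooled 160 lab measurements.
measurements = [random.gauss(55.58, 0.45) for _ in range(160)]

n = len(measurements)
mean = statistics.fmean(measurements)
se = statistics.stdev(measurements) / math.sqrt(n)

# Large-sample 95% confidence interval for the mean (z = 1.96).
lo, hi = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.2f}%, 95% CI [{lo:.2f}, {hi:.2f}]")
```

With n = 160 the standard error is small, which is why the study's interval (55.51-55.65%) is so narrow around 55.58%.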
Taylor, H E; Bramley, D E P
2012-11-01
The provision of written information is a component of the informed consent process for research participants. We conducted a readability analysis to test the hypothesis that the language used in patient information and consent forms in anaesthesia research in Australia and New Zealand does not meet the readability standards or expectations of the Good Clinical Practice Guidelines, the National Health and Medical Research Council in Australia, and the Health Research Council of New Zealand. We calculated readability scores for 40 patient information and consent forms using the Simple Measure of Gobbledygook (SMOG) and Flesch-Kincaid formulas. The mean grade level of the forms was 12.9 (standard deviation 0.8, 95% confidence interval 12.6 to 13.1) by the SMOG formula and 11.9 (standard deviation 1.1, 95% confidence interval 11.6 to 12.3) by the Flesch-Kincaid formula. This exceeds the average literacy and comprehension of the general population in Australia and New Zealand. Complex language decreases readability and negatively impacts the informed consent process. Care should be exercised when providing written information to research participants to ensure that language and readability are appropriate for the audience.
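Both readability measures used in the study are published closed-form formulas computable from raw text counts: Flesch-Kincaid from words, sentences, and syllables; SMOG from sentences and polysyllabic words (3+ syllables). The counts below are invented for illustration; the coefficients are the standard published ones.

```python
import math

def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid grade level from raw counts."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def smog_grade(polysyllables, sentences):
    """SMOG grade; polysyllables = count of words with 3+ syllables."""
    return 1.043 * math.sqrt(polysyllables * 30 / sentences) + 3.1291

# Illustrative counts for a consent-form excerpt (not the study's data).
words, sentences, syllables, polysyllables = 600, 25, 1080, 90

fk = flesch_kincaid_grade(words, sentences, syllables)
smog = smog_grade(polysyllables, sentences)
print(f"Flesch-Kincaid grade {fk:.1f}, SMOG grade {smog:.1f}")
```

Long sentences (words/sentences) and polysyllabic vocabulary both push the grade level up, which is why dense legal or medical phrasing scores well above the grade 8 level often recommended for patient materials.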
Deficient attention modulation of lateralized alpha power in schizophrenia.
Kustermann, Thomas; Rockstroh, Brigitte; Kienle, Johanna; Miller, Gregory A; Popov, Tzvetan
2016-06-01
Modulation of 8-14 Hz (alpha) activity in posterior brain regions is associated with covert attention deployment in visuospatial tasks. Alpha power decrease contralateral to to-be-attended stimuli is believed to foster subsequent processing, such as retention of task-relevant input. Degradation of this alpha-regulation mechanism may reflect an early stage of disturbed attention regulation contributing to impaired attention and working memory commonly found in schizophrenia. The present study tested this hypothesis of early disturbed attention regulation by examining alpha power modulation in a lateralized cued delayed response task in 14 schizophrenia patients (SZ) and 25 healthy controls (HC). Participants were instructed to remember the location of a 100-ms saccade-target cue in the left or right visual hemifield in order to perform a delayed saccade to that location after a retention interval. As expected, alpha power decrease during the retention interval was larger in contralateral than ipsilateral posterior regions, and SZ showed less of this lateralization than did HC. In particular, SZ failed to show hemifield-specific alpha modulation in posterior right hemisphere. Results suggest less efficient modulation of alpha oscillations that are considered critical for attention deployment and item encoding and, hence, may affect subsequent spatial working memory performance. © 2016 Society for Psychophysiological Research.
Heavy rainfall events and diarrhea incidence: the role of social and environmental factors.
Carlton, Elizabeth J; Eisenberg, Joseph N S; Goldstick, Jason; Cevallos, William; Trostle, James; Levy, Karen
2014-02-01
The impact of heavy rainfall events on waterborne diarrheal diseases is uncertain. We conducted weekly, active surveillance for diarrhea in 19 villages in Ecuador from February 2004 to April 2007 in order to evaluate whether biophysical and social factors modify vulnerability to heavy rainfall events. A heavy rainfall event was defined as 24-hour rainfall exceeding the 90th percentile value (56 mm) in a given 7-day period within the study period. Mixed-effects Poisson regression was used to test the hypothesis that rainfall in the prior 8 weeks, water and sanitation conditions, and social cohesion modified the relationship between heavy rainfall events and diarrhea incidence. Heavy rainfall events were associated with increased diarrhea incidence following dry periods (incidence rate ratio = 1.39, 95% confidence interval: 1.03, 1.87) and decreased diarrhea incidence following wet periods (incidence rate ratio = 0.74, 95% confidence interval: 0.59, 0.92). Drinking water treatment reduced the deleterious impacts of heavy rainfall events following dry periods. Sanitation, hygiene, and social cohesion did not modify the relationship between heavy rainfall events and diarrhea. Heavy rainfall events appear to affect diarrhea incidence through contamination of drinking water, and they present the greatest health risks following periods of low rainfall. Interventions designed to increase drinking water treatment may reduce climate vulnerability.
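The incidence rate ratios reported above compare case counts per unit of person-time between exposure conditions; a minimal version with a Wald confidence interval on the log scale can be sketched as follows. The counts are hypothetical, and this simple two-group comparison ignores the mixed-effects Poisson structure of the actual analysis.

```python
import math

def incidence_rate_ratio(cases_exp, pt_exp, cases_ref, pt_ref, z=1.96):
    """IRR of exposed vs. reference person-time, with a Wald 95% CI.

    The standard error of log(IRR) is sqrt(1/cases_exp + 1/cases_ref).
    """
    irr = (cases_exp / pt_exp) / (cases_ref / pt_ref)
    se = math.sqrt(1 / cases_exp + 1 / cases_ref)
    lo = math.exp(math.log(irr) - z * se)
    hi = math.exp(math.log(irr) + z * se)
    return irr, lo, hi

# Illustrative counts: diarrhea cases per person-week in the week after a
# heavy rainfall event vs. other weeks (not the study's data).
irr, lo, hi = incidence_rate_ratio(60, 4000, 180, 16000)
print(f"IRR = {irr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

An interval that excludes 1 indicates a statistically significant rate change, which is how estimates such as the study's 1.39 (1.03, 1.87) are read.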
van Atteveldt, Nienke; Musacchia, Gabriella; Zion-Golumbic, Elana; Sehatpour, Pejman; Javitt, Daniel C.; Schroeder, Charles
2015-01-01
It is becoming increasingly clear that the brain can adapt its internal neural dynamics to the temporal structure of the sensory environment. It is thought to be metabolically beneficial to align ongoing oscillatory activity to the relevant inputs in a predictable stream, so that they arrive at optimal processing phases of the spontaneously occurring rhythmic excitability fluctuations. However, some contexts have a more predictable temporal structure than others. Here, we tested the hypothesis that the processing of rhythmic sounds is more efficient than the processing of irregularly timed sounds. To do this, we simultaneously measured functional magnetic resonance imaging (fMRI) and electroencephalograms (EEG) while participants detected oddball target sounds in alternating blocks of rhythmic (i.e., with equal inter-stimulus intervals) or random (i.e., with randomly varied inter-stimulus intervals) tone sequences. Behaviorally, participants detected target sounds faster and more accurately when they were embedded in rhythmic streams. The fMRI response in the auditory cortex was stronger during random than during rhythmic tone-sequence processing. Simultaneously recorded N1 responses showed larger peak amplitudes and longer latencies for tones in the random (vs. the rhythmic) streams. These results provide complementary evidence for more efficient neural and perceptual processing in temporally predictable sensory contexts. PMID:26579044
Association of Intrauterine and Early-Life Exposures With Age at Menopause in the Sister Study
Steiner, Anne Z.; D'Aloisio, Aimee A.; DeRoo, Lisa A.; Sandler, Dale P.; Baird, Donna D.
2010-01-01
Oocytes are formed in utero; menopause occurs when the oocyte pool is depleted. The authors hypothesized that early-life events could affect the number of a woman's oocytes and determine age at menopause. To test their hypothesis, the authors conducted a secondary analysis of baseline data from 22,165 participants in the Sister Study (2003–2007) who were aged 35–59 years at enrollment. To estimate the association between early-life events and age at natural menopause, the authors used Cox proportional hazards models to estimate hazard ratios with 95% confidence intervals, adjusting for current age, race/ethnicity, education, childhood family income, and smoking history. Earlier menopause was associated with in-utero diethylstilbestrol exposure (hazard ratio (HR) = 1.45, 95% confidence interval (CI): 1.27, 1.65). Suggestive associations included maternal prepregnancy diabetes (HR = 1.33, 95% CI: 0.89, 1.98) and low birth weight (HR = 1.09, 95% CI: 0.99, 1.20). Having a mother who was aged 35 years or older at the participant's birth appeared to be associated with a later age at menopause (HR = 0.95, 95% CI: 0.89, 1.01). Birth order, in-utero smoke exposure, and having been breastfed were not related to age at menopause. In-utero and perinatal events may subsequently influence age at menopause. PMID:20534821
Madison, Guy
2014-03-01
Timing performance becomes less precise for longer intervals, which makes it difficult to achieve simultaneity when synchronising with a rhythm. The metrical structure of music, characterised by hierarchical levels of binary or ternary subdivisions of time, may function to increase precision by providing additional timing information when the subdivisions are explicit. This hypothesis was tested by comparing synchronisation performance across different numbers of metrical levels conveyed by the loudness of sounds, such that the slowest level was loudest and the fastest was softest. Fifteen participants moved their hand with one of 9 inter-beat intervals (IBIs) ranging from 524 to 3,125 ms in 4 metrical-level (ML) conditions ranging from 1 (one movement for each sound) to 4 (one movement for every 8th sound). The lowest relative variability (SD/IBI < 1.5%) was obtained for the 3 longest IBIs (1,600-3,125 ms) and MLs 3-4, significantly lower than the smallest value (4-5%, at 524-1,024 ms) obtained in any ML 1 condition, in which all sounds are identical. Asynchronies were also more negative with higher ML. In conclusion, metrical subdivision provides information that facilitates temporal performance, which suggests an underlying neural multi-level mechanism capable of integrating information across levels. © 2013.
Braeye, T; DE Schrijver, K; Wollants, E; van Ranst, M; Verhaegen, J
2015-03-01
On 6 December 2010 a fire in Hemiksem, Belgium, was extinguished by the fire brigade with both river water and tap water. Local physicians were asked to report all cases of gastroenteritis. We conducted a retrospective cohort study among 1000 randomly selected households and performed statistical and geospatial analyses. Human stool samples, tap water, and river water were tested for pathogens. Of the 1185 persons living in the 528 responding households, 222 (18.7%) reported symptoms of gastroenteritis during the period 6-13 December. Drinking tap water was significantly associated with an increased risk of gastroenteritis (relative risk 3.67, 95% confidence interval 2.86-4.70), as was place of residence. Campylobacter sp. (2/56), norovirus GI and GII (11/56), rotavirus (1/56) and Giardia lamblia (3/56) were detected in stool samples. Tap water samples tested positive for faecal indicator bacteria and protozoa. The results support the hypothesis that a point-source contamination of the tap water with river water was the cause of this multi-pathogen waterborne outbreak.
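The relative risk reported in such retrospective cohort studies follows the standard 2x2 formula, with a Wald confidence interval on the log scale. A sketch with hypothetical counts (not the outbreak study's raw data):

```python
import math

def relative_risk(a, b, c, d, z=1.96):
    """Cohort relative risk with a Wald 95% CI.

    a/b: ill/not ill among exposed; c/d: ill/not ill among unexposed.
    """
    risk_exp = a / (a + b)
    risk_unexp = c / (c + d)
    rr = risk_exp / risk_unexp
    # Standard error of log(RR).
    se = math.sqrt(b / (a * (a + b)) + d / (c * (c + d)))
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Illustrative 2x2 counts for tap-water drinkers vs. non-drinkers.
rr, lo, hi = relative_risk(200, 600, 22, 363)
print(f"RR = {rr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A lower confidence bound well above 1, as in the study's 3.67 (2.86-4.70), indicates that gastroenteritis risk was substantially elevated among the exposed group.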
Salicylate-Induced Hearing Loss and Gap Detection Deficits in Rats
Radziwon, Kelly E.; Stolzberg, Daniel J.; Urban, Maxwell E.; Bowler, Rachael A.; Salvi, Richard J.
2015-01-01
To test the “tinnitus gap-filling” hypothesis in an animal psychoacoustic paradigm, rats were tested using a go/no-go operant gap detection task in which silent intervals of various durations were embedded within a continuous noise. Gap detection thresholds were measured before and after treatment with a dose of sodium salicylate (200 mg/kg) that reliably induces tinnitus in rats. Noise-burst detection thresholds were also measured to document the amount of hearing loss and aid in interpreting the gap detection results. As in the previous human psychophysical experiments, salicylate had little or no effect on gap thresholds measured in broadband noise presented at high-stimulus levels (30–60 dB SPL); gap detection thresholds were always 10 ms or less. Salicylate also did not affect gap thresholds presented in narrowband noise at 60 dB SPL. Therefore, rats treated with a dose of salicylate that reliably induces tinnitus have no difficulty detecting silent gaps as long as the noise in which they are embedded is clearly audible. PMID:25750635
Critical role of cerebellar fastigial nucleus in programming sequences of saccades
King, Susan A.; Schneider, Rosalyn M.; Serra, Alessandro; Leigh, R. John
2011-01-01
The cerebellum plays an important role in programming accurate saccades. Cerebellar lesions affecting the ocular motor region of the fastigial nucleus (FOR) cause saccadic hypermetria; however, if a second target is presented before a saccade can be initiated (double-step paradigm), saccade hypermetria may be decreased. We tested the hypothesis that the cerebellum, especially FOR, plays a pivotal role in programming sequences of saccades. We studied patients with saccadic hypermetria due either to genetic cerebellar ataxia or surgical lesions affecting FOR and confirmed that the gain of initial saccades made to double-step stimuli was reduced compared with the gain of saccades to single target jumps. Based on measurements of the intersaccadic interval, we found that the ability to perform parallel processing of saccades was reduced or absent in all of our patients with cerebellar disease. Our results support the crucial role of the cerebellum, especially FOR, in programming sequences of saccades. PMID:21950988
Keough, Matthew T; O'Connor, Roisin M
2015-01-01
Reinforcement Sensitivity Theory predicts that those with a strong behavioral inhibition system (BIS) likely experience considerable anxiety and uncertainty during the transition out of university. Accordingly, they may continue to drink heavily to cope during this time (a period associated with normative reductions in heavy drinking), but only if they also have a strong behavioral approach system (BAS) to enhance the anxiolytic effects of drinking. The purpose of this study was to test this hypothesis. Participants completed online measures prior to and at 3-month intervals over the course of the year following graduation. As hypothesized, results showed that an elevated BIS predicted impeded maturing out, but only when the impulsivity facet of BAS was also elevated. In contrast, a strong BIS predicted rapid maturing out if BAS impulsivity was weak. Study findings advance our understanding of BIS-related alcohol misuse trajectories in young adulthood and provide direction for clinical interventions.
Giofrè, David; Cumming, Geoff; Fresc, Luca; Boedker, Ingrid; Tressoldi, Patrizio
2017-01-01
From January 2014, Psychological Science introduced new submission guidelines that encouraged the use of effect sizes, estimation, and meta-analysis (the "new statistics"), required extra detail of methods, and offered badges for use of open science practices. We investigated the use of these practices in empirical articles published by Psychological Science and, for comparison, by the Journal of Experimental Psychology: General, during the period of January 2013 to December 2015. The use of null hypothesis significance testing (NHST) was extremely high at all times and in both journals. In Psychological Science, the use of confidence intervals increased markedly overall, from 28% of articles in 2013 to 70% in 2015, as did the availability of open data (3 to 39%) and open materials (7 to 31%). The other journal showed smaller or much smaller changes. Our findings suggest that journal-specific submission guidelines may encourage desirable changes in authors' practices.
Lambert, Timothy W; Boehmer, Jennifer; Feltham, Jason; Guyn, Lindsay; Shahid, Rizwan
2011-01-01
This paper presents spatial maps of the arsenic, lead, and polycyclic aromatic hydrocarbon (PAH) soil contamination in Sydney, Nova Scotia, Canada. The spatial maps were designed to create exposure cohorts to help understand the observed increase in health effects. To assess whether contamination can be a proxy for exposures, the following hypothesis was tested: residential soils were impacted by the coke oven and steel plant industrial complex. The spatial map showed contaminants are centered on the industrial facility, significantly correlated, and exceed Canadian health risk-based soil quality guidelines. Core samples taken at 5-cm intervals suggest a consistent deposition over time. The concentrations in Sydney significantly exceed background Sydney soil concentrations, and are significantly elevated compared with North Sydney, an adjacent industrial community. The contaminant spatial maps will also be useful for developing cohorts of exposure and guiding risk management decisions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chuankuan; Han, Yi; Chen, Jiquan
2013-08-15
Changes in the characteristics of snowfall and spring freeze–thaw-cycle (FTC) events under the warming climate make it critical to understand biophysical controls on soil CO2 efflux (RS) in seasonally snow-covered ecosystems. We conducted a snow removal experiment and took year-round continuous automated measurements of RS, soil temperature (T5), and soil volumetric water content at the 5 cm depth (W5) at half-hour intervals in a Chinese temperate forest in 2010–2011. Our objectives were to: (1) develop statistical models to describe the seasonality of RS in this forest; (2) quantify the contribution of seasonal RS to the annual budget; (3) examine biophysical effects of snowpack on RS; and (4) test the hypothesis that an FTC-induced enhancement of RS is jointly driven by biological and physical processes.
Dutke, Stephan; Jaitner, Thomas; Berse, Timo; Barenberg, Jonathan
2014-02-01
Research on effects of acute physical exercise on performance in a concurrent cognitive task has generated equivocal evidence. Processing efficiency theory predicts that concurrent physical exercise can increase resource requirements for sustaining cognitive performance even when the level of performance is unaffected. This hypothesis was tested in a dual-task experiment. Sixty young adults worked on a primary auditory attention task and a secondary interval production task while cycling on a bicycle ergometer. Physical load (cycling) and cognitive load of the primary task were manipulated. Neither physical nor cognitive load affected primary task performance, but both factors interacted on secondary task performance. Sustaining primary task performance under increased physical and/or cognitive load increased resource consumption as indicated by decreased secondary task performance. Results demonstrated that physical exercise effects on cognition might be underestimated when only single task performance is the focus.
A New Stress-Based Model of Political Extremism
Canetti-Nisim, Daphna; Halperin, Eran; Sharvit, Keren; Hobfoll, Stevan E.
2011-01-01
Does exposure to terrorism lead to hostility toward minorities? Drawing on theories from clinical and social psychology, we propose a stress-based model of political extremism in which psychological distress—which is largely overlooked in political scholarship—and threat perceptions mediate the relationship between exposure to terrorism and attitudes toward minorities. To test the model, a representative sample of 469 Israeli Jewish respondents was interviewed on three occasions at six-month intervals. Structural Equation Modeling indicated that exposure to terrorism predicted psychological distress (t1), which predicted perceived threat from Palestinian citizens of Israel (t2), which, in turn, predicted exclusionist attitudes toward Palestinian citizens of Israel (t3). These findings provide solid evidence, and a mechanism, for the hypothesis that terrorism fosters nondemocratic attitudes that threaten minority rights. They suggest that psychological distress plays an important role in political decision making and should be incorporated into models drawing upon political psychology. PMID:22140275
Evaluating elements of trust: Race and class in risk communication in post-Katrina New Orleans.
Battistoli, B F
2016-05-01
This study seeks to determine the relative influence of race and class on trust in sources of messages of environmental risk in post-Katrina New Orleans. It poses two hypotheses to test that influence: H1, African-Americans ("Blacks") trust risk message sources less than European Americans ("Whites") do; and H2, the higher the socioeconomic class, the lower the trust in risk message sources. A 37-question telephone survey (landlines and cellphones) was conducted in Orleans Parish in 2012 (n = 414). The overall margin of error was ±4.8% at a 95% confidence interval. A hierarchical regression analysis showed that the first hypothesis was rejected, while the second was supported. Additional data analysis revealed that frequency of use of sources of risk information appears to be a positive factor in building trust. © The Author(s) 2015.
Female cowbirds have more accurate spatial memory than males.
Guigueno, Mélanie F; Snow, Danielle A; MacDougall-Shackleton, Scott A; Sherry, David F
2014-02-01
Brown-headed cowbirds (Molothrus ater) are obligate brood parasites. Only females search for host nests and they find host nests one or more days before placing eggs in them. Past work has shown that females have a larger hippocampus than males, but sex differences in spatial cognition have not been extensively investigated. We tested cowbirds for sex and seasonal differences in spatial memory on a foraging task with an ecologically relevant retention interval. Birds were trained to find one rewarded location among 25 after 24 h. Females made significantly fewer errors than males and took more direct paths to the rewarded location than males. Females and males showed similar search times, indicating there was no sex difference in motivation. This sex difference in spatial cognition is the reverse of that observed in some polygynous mammals and is consistent with the hypothesis that spatial cognition is adaptively specialized in this brood-parasitic species.
The Time-Course of Lexical Activation During Sentence Comprehension in People With Aphasia
Ferrill, Michelle; Love, Tracy; Walenski, Matthew; Shapiro, Lewis P.
2012-01-01
Purpose: To investigate the time-course of processing of lexical items in auditorily presented canonical (subject–verb–object) constructions in young, neurologically unimpaired control participants and in participants with left-hemisphere damage and agrammatic aphasia. Method: A cross-modal picture priming (CMPP) paradigm was used to test 114 control participants and 8 participants with agrammatic aphasia for priming of a lexical item (a direct-object noun) immediately after it is first encountered in the ongoing auditory stream and at 3 additional time points at 400-ms intervals. Results: The control participants demonstrated immediate activation of the lexical item, followed by rapid loss (decay). The participants with aphasia demonstrated delayed activation of the lexical item. Conclusion: This evidence supports the hypothesis of a delay in lexical activation in people with agrammatic aphasia. The delay feeds syntactic processing too slowly, contributing to comprehension deficits in people with agrammatic aphasia. PMID:22355007
Afterslip, tremor, and the Denali fault earthquake
Gomberg, Joan; Prejean, Stephanie; Ruppert, Natalia
2012-01-01
We tested the hypothesis that afterslip should be accompanied by tremor using observations of seismic and aseismic deformation surrounding the 2002 M 7.9 Denali fault, Alaska, earthquake (DFE). Afterslip happens more frequently than spontaneous slow slip and has been observed in a wider range of tectonic environments, and thus the existence or absence of tremor accompanying afterslip may provide new clues about tremor generation. We also searched for precursory tremor, as a proxy for posited accelerating slip leading to rupture. Our search yielded no tremor during the five days prior to the DFE or in several intervals in the three months after. This negative result and an array of other observations all may be explained by rupture penetrating below the presumed locked zone into the frictional transition zone. While not unique, such an explanation corroborates previous models of megathrust and transform earthquake ruptures that extend well into the transition zone.
Listeners modulate temporally selective attention during natural speech processing
Astheimer, Lori B.; Sanders, Lisa D.
2009-01-01
Spatially selective attention allows for the preferential processing of relevant stimuli when more information than can be processed in detail is presented simultaneously at distinct locations. Temporally selective attention may serve a similar function during speech perception by allowing listeners to allocate attentional resources to time windows that contain highly relevant acoustic information. To test this hypothesis, event-related potentials were compared in response to attention probes presented in six conditions during a narrative: concurrently with word onsets, beginning 50 and 100 ms before and after word onsets, and at random control intervals. Times for probe presentation were selected such that the acoustic environments of the narrative were matched for all conditions. Linguistic attention probes presented at and immediately following word onsets elicited larger amplitude N1s than control probes over medial and anterior regions. These results indicate that native speakers selectively process sounds presented at specific times during normal speech perception. PMID:18395316
Cylus, Jonathan; Glymour, M. Maria; Avendano, Mauricio
2014-01-01
The recent economic recession has led to increases in suicide, but whether US state unemployment insurance programs ameliorate this association has not been examined. Exploiting US state variations in the generosity of benefit programs between 1968 and 2008, we tested the hypothesis that more generous unemployment benefit programs reduce the impact of economic downturns on suicide. Using state linear fixed-effect models, we found a negative additive interaction between unemployment rates and benefits among the US working-age (20–64 years) population (β = −0.57, 95% confidence interval: −0.86, −0.27; P < 0.001). The finding of a negative additive interaction was robust across multiple model specifications. Our results suggest that the impact of unemployment rates on suicide is offset by the presence of generous state unemployment benefit programs, though estimated effects are small in magnitude. PMID:24939978
Debates—Hypothesis testing in hydrology: Introduction
NASA Astrophysics Data System (ADS)
Blöschl, Günter
2017-03-01
This paper introduces the papers in the "Debates—Hypothesis testing in hydrology" series. The four articles in the series discuss whether and how the process of testing hypotheses leads to progress in hydrology. Repeated experiments with controlled boundary conditions are rarely feasible in hydrology. Research is therefore not easily aligned with the classical scientific method of testing hypotheses. Hypotheses in hydrology are often enshrined in computer models which are tested against observed data. Testability may be limited due to model complexity and data uncertainty. All four articles suggest that hypothesis testing has contributed to progress in hydrology and is needed in the future. However, the procedure is usually not as systematic as the philosophy of science suggests. A greater emphasis on a creative reasoning process on the basis of clues and explorative analyses is therefore needed.
ERIC Educational Resources Information Center
White, Brian
2004-01-01
This paper presents a generally applicable method for characterizing subjects' hypothesis-testing behaviour based on a synthesis that extends on previous work. Beginning with a transcript of subjects' speech and videotape of their actions, a Reasoning Map is created that depicts the flow of their hypotheses, tests, predictions, results, and…
Why Is Test-Restudy Practice Beneficial for Memory? An Evaluation of the Mediator Shift Hypothesis
ERIC Educational Resources Information Center
Pyc, Mary A.; Rawson, Katherine A.
2012-01-01
Although the memorial benefits of testing are well established empirically, the mechanisms underlying this benefit are not well understood. The authors evaluated the mediator shift hypothesis, which states that test-restudy practice is beneficial for memory because retrieval failures during practice allow individuals to evaluate the effectiveness…
Bayesian Approaches to Imputation, Hypothesis Testing, and Parameter Estimation
ERIC Educational Resources Information Center
Ross, Steven J.; Mackey, Beth
2015-01-01
This chapter introduces three applications of Bayesian inference to common and novel issues in second language research. After a review of the critiques of conventional hypothesis testing, our focus centers on ways Bayesian inference can be used for dealing with missing data, for testing theory-driven substantive hypotheses without a default null…
Mayo, Ruth; Alfasi, Dana; Schwarz, Norbert
2014-06-01
Feelings of distrust alert people not to take information at face value, which may influence their reasoning strategy. Using the Wason (1960) rule identification task, we tested whether chronic and temporary distrust increase the use of negative hypothesis testing strategies suited to falsify one's own initial hunch. In Study 1, participants who were low in dispositional trust were more likely to engage in negative hypothesis testing than participants high in dispositional trust. In Study 2, trust and distrust were induced through an alleged person-memory task. Paralleling the effects of chronic distrust, participants exposed to a single distrust-eliciting face were 3 times as likely to engage in negative hypothesis testing as participants exposed to a trust-eliciting face. In both studies, distrust increased negative hypothesis testing, which was associated with better performance on the Wason task. In contrast, participants' initial rule generation was not consistently affected by distrust. These findings provide first evidence that distrust can influence which reasoning strategy people adopt. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Auditory enhancement of increments in spectral amplitude stems from more than one source.
Carcagno, Samuele; Semal, Catherine; Demany, Laurent
2012-10-01
A component of a test sound consisting of simultaneous pure tones perceptually "pops out" if the test sound is preceded by a copy of itself with that component attenuated. Although this "enhancement" effect was initially thought to be purely monaural, it is also observable when the test sound and the precursor sound are presented contralaterally (i.e., to opposite ears). In experiment 1, we assessed the magnitude of ipsilateral and contralateral enhancement as a function of the time interval between the precursor and test sounds (10, 100, or 600 ms). The test sound, randomly transposed in frequency from trial to trial, was followed by a probe tone, either matched or mismatched in frequency to the test sound component which was the target of enhancement. Listeners' ability to discriminate matched probes from mismatched probes was taken as an index of enhancement magnitude. The results showed that enhancement decays more rapidly for ipsilateral than for contralateral precursors, suggesting that ipsilateral enhancement and contralateral enhancement stem from at least partly different sources. It could be hypothesized that, in experiment 1, contralateral precursors were effective only because they provided attentional cues about the target tone frequency. In experiment 2, this hypothesis was tested by presenting the probe tone before the precursor sound rather than after the test sound. Although the probe tone was then serving as a frequency cue, contralateral precursors were again found to produce enhancement. This indicates that contralateral enhancement cannot be explained by cuing alone and is a genuine sensory phenomenon.
Comparison of futility monitoring guidelines using completed phase III oncology trials.
Zhang, Qiang; Freidlin, Boris; Korn, Edward L; Halabi, Susan; Mandrekar, Sumithra; Dignam, James J
2017-02-01
Futility (inefficacy) interim monitoring is an important component in the conduct of phase III clinical trials, especially in life-threatening diseases. Desirable futility monitoring guidelines allow timely stopping if the new therapy is harmful or if it is unlikely to prove sufficiently effective were the trial to continue to its final analysis. There are a number of analytical approaches that are used to construct futility monitoring boundaries. The most common approaches are based on conditional power, sequential testing of the alternative hypothesis, or sequential confidence intervals. The resulting futility boundaries vary considerably with respect to the level of evidence required for recommending stopping the study. We evaluate the performance of commonly used methods using event histories from completed phase III clinical trials of the Radiation Therapy Oncology Group, Cancer and Leukemia Group B, and North Central Cancer Treatment Group. We considered published superiority phase III trials with survival endpoints initiated after 1990. There are 52 studies available for this analysis from different disease sites. Total sample size and maximum number of events (statistical information) for each study were calculated using protocol-specified effect size, type I and type II error rates. In addition to the common futility approaches, we considered a recently proposed linear inefficacy boundary approach with an early harm look followed by several lack-of-efficacy analyses. For each futility approach, interim test statistics were generated for three schedules with different analysis frequency, and early stopping was recommended if the interim result crossed a futility stopping boundary. For trials not demonstrating superiority, the impact of each rule is summarized as savings on sample size, study duration, and information time scales.
For negative studies, our results show that the futility approaches based on testing the alternative hypothesis and repeated confidence interval rules yielded smaller savings than the other two rules. These boundaries are too conservative, especially during the first half of the study (<50% of information). The conditional power rules are too aggressive during the second half of the study (>50% of information) and may stop a trial even when there is a clinically meaningful treatment effect. The linear inefficacy boundary with three or more interim analyses provided the best results. For positive studies, we demonstrated that none of the futility rules would have stopped the trials. The linear inefficacy boundary futility approach is attractive from statistical, clinical, and logistical standpoints in clinical trials evaluating new anti-cancer agents.
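The conditional power rule discussed in this abstract can be sketched numerically. The following is a minimal illustration, not the authors' trial designs or code: interim monitoring treats the z-statistic times the square root of the information fraction as Brownian motion, assumes a drift (the z-value expected at full information), and computes the probability of rejection at the final analysis; a futility rule stops when that probability falls below a cutoff such as 10%. All numbers below are hypothetical.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_power(z_interim, info_frac, drift, z_alpha=1.96):
    """P(final Z >= z_alpha | interim Z), modeling Z * sqrt(t) as Brownian
    motion with drift `drift` (the z-value expected at full information)."""
    b = z_interim * math.sqrt(info_frac)        # Brownian value at time t
    remaining_mean = drift * (1.0 - info_frac)  # expected drift over (t, 1]
    sd = math.sqrt(1.0 - info_frac)
    return phi((b + remaining_mean - z_alpha) / sd)

# Hypothetical interim look: halfway through, z = 0.0 (no hint of benefit).
# Under the design alternative (drift = 1.96 + 0.84, i.e. 80% power at
# one-sided alpha = 0.025) the chance of still winning is modest; under the
# null it is tiny.
design_drift = 1.96 + 0.84
cp_alt = conditional_power(0.0, 0.5, design_drift)
cp_null = conditional_power(0.0, 0.5, 0.0)
stop_for_futility = cp_alt < 0.10  # an illustrative cutoff
print(round(cp_alt, 3), round(cp_null, 4), stop_for_futility)
```

With these assumed numbers, a completely flat interim result at 50% information still retains roughly a one-in-five chance of final success under the design alternative, which is why cutoffs and look times matter so much to how aggressive a conditional power rule is.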
In Defense of the Play-Creativity Hypothesis
ERIC Educational Resources Information Center
Silverman, Irwin W.
2016-01-01
The hypothesis that pretend play facilitates the creative thought process in children has received a great deal of attention. In a literature review, Lillard et al. (2013, p. 8) concluded that the evidence for this hypothesis was "not convincing." This article focuses on experimental and training studies that have tested this hypothesis.…
NASA Astrophysics Data System (ADS)
Deng, Claudia; Wang, Ping; Zhang, Xiangming; Wang, Ya
2015-04-01
Microgravity reduces the load on muscle and bone, which is a major cause of muscle atrophy as well as bone loss. Currently, physical exercise is the only countermeasure used consistently in the U.S. human space program to counteract microgravity-induced skeletal muscle atrophy and bone loss. However, the almost daily time commitment is significant and represents a potential risk to the accomplishment of other mission operational tasks. Therefore, more efficient exercise programs (requiring less time) to protect astronauts from muscle atrophy and bone loss are needed. Considering the two types of muscle contraction (exercise-induced contraction, a voluntary response mediated by the motor nervous system that prevents microgravity-induced muscle atrophy/bone loss, and cold-exposure-induced contraction, an involuntary response mediated by the vegetative nervous system), we formed a new hypothesis. The main purpose of this pilot study was to test our hypothesis that exercise at 4 °C is more efficient than exercise at room temperature in preventing microgravity-induced muscle atrophy/bone loss and consequently reduces required exercise time. Twenty mice were divided into two groups with or without daily short-term (10 min × 2, at a 12 h interval) cold temperature (4 °C) exposure for 30 days. Whole bodyweight, muscle strength, and bone density were measured after terminating the experiments. The results from the one-month pilot study support our hypothesis and suggest that it would be reasonable to use more mice, in a microgravity environment, and observe for a longer period to reach a firm conclusion. We believe that the results from such a study will help to develop efficient exercise programs, which will ultimately benefit astronauts' health and NASA's missions.
Kemp, Brian M.; González-Oliver, Angélica; Malhi, Ripan S.; Monroe, Cara; Schroeder, Kari Britt; Rhett, Gillian; Resendéz, Andres; Peñaloza-Espinosa, Rosenda I.; Buentello-Malo, Leonor; Gorodesky, Clara; Smith, David Glenn
2010-01-01
The Farming/Language Dispersal Hypothesis posits that prehistoric population expansions, precipitated by the innovation or early adoption of agriculture, played an important role in the uneven distribution of language families recorded across the world. In this case, the most widely spread language families today came to be distributed at the expense of those that have more restricted distributions. In the Americas, Uto-Aztecan is one such language family that may have been spread across Mesoamerica and the American Southwest by ancient farmers. We evaluated this hypothesis with a large-scale study of mitochondrial DNA (mtDNA) and Y-chromosomal DNA variation in indigenous populations from these regions. Partial correlation coefficients, determined with Mantel tests, show that Y-chromosome variation in indigenous populations from the American Southwest and Mesoamerica correlates significantly with linguistic distances (r = 0.33–0.384; P < 0.02), whereas mtDNA diversity correlates significantly with only geographic distance (r = 0.619; P = 0.002). The lack of correlation between mtDNA and Y-chromosome diversity is consistent with differing population histories of males and females in these regions. Although unlikely, if groups of Uto-Aztecan speakers were responsible for the northward spread of agriculture and their languages from Mesoamerica to the Southwest, this migration was possibly biased to males. However, a recent in situ population expansion within the American Southwest (2,105 years before present; 99.5% confidence interval = 1,273–3,773 YBP), one that probably followed the introduction and intensification of maize agriculture in the region, may have blurred ancient mtDNA patterns, which might otherwise have revealed a closer genetic relationship between females in the Southwest and Mesoamerica. PMID:20351276
The frequentist implications of optional stopping on Bayesian hypothesis tests.
Sanborn, Adam N; Hills, Thomas T
2014-04-01
Null hypothesis significance testing (NHST) is the most commonly used statistical methodology in psychology. The probability of achieving a value as extreme or more extreme than the statistic obtained from the data is evaluated, and if it is low enough, the null hypothesis is rejected. However, because common experimental practice often clashes with the assumptions underlying NHST, these calculated probabilities are often incorrect. Most commonly, experimenters use tests that assume that sample sizes are fixed in advance of data collection but then use the data to determine when to stop; in the limit, experimenters can use data monitoring to guarantee that the null hypothesis will be rejected. Bayesian hypothesis testing (BHT) provides a solution to these ills because the stopping rule used is irrelevant to the calculation of a Bayes factor. In addition, there are strong mathematical guarantees on the frequentist properties of BHT that are comforting for researchers concerned that stopping rules could influence the Bayes factors produced. Here, we show that these guaranteed bounds have limited scope and often do not apply in psychological research. Specifically, we quantitatively demonstrate the impact of optional stopping on the resulting Bayes factors in two common situations: (1) when the truth is a combination of the hypotheses, such as in a heterogeneous population, and (2) when a hypothesis is composite (taking multiple parameter values), such as the alternative hypothesis in a t-test. We found that, for these situations, while the Bayesian interpretation remains correct regardless of the stopping rule used, the choice of stopping rule can, in some situations, greatly increase the chance of experimenters finding evidence in the direction they desire. We suggest ways to control these frequentist implications of stopping rules on BHT.
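A toy simulation makes the "stopping rule is irrelevant" guarantee concrete in the one setting where it holds cleanly: a point null with a proper prior on the alternative, where the Bayes factor is a nonnegative martingale under the null and Ville's inequality bounds P(BF ever >= k) by 1/k. The heterogeneous and composite cases analyzed in the abstract are precisely where that argument breaks down. This sketch is not the authors' code; the coin-flip model, uniform prior, threshold, and sample counts are assumptions for illustration.

```python
import math
import random

def bayes_factor_10(heads, tails):
    """BF for H1 (theta ~ Uniform(0, 1)) vs H0 (theta = 0.5) on coin-flip data.
    Marginal likelihood under H1 is the Beta(heads+1, tails+1) integral."""
    n = heads + tails
    log_m1 = math.lgamma(heads + 1) + math.lgamma(tails + 1) - math.lgamma(n + 2)
    log_m0 = n * math.log(0.5)
    return math.exp(log_m1 - log_m0)

def optional_stopping_run(rng, threshold=3.0, max_n=500):
    """Flip a fair coin (so H0 is true), checking the BF after every flip;
    declare 'evidence for H1' as soon as the BF crosses the threshold."""
    heads = tails = 0
    for _ in range(max_n):
        if rng.random() < 0.5:
            heads += 1
        else:
            tails += 1
        if bayes_factor_10(heads, tails) >= threshold:
            return True
    return False

rng = random.Random(1)
runs = 2000
false_alarms = sum(optional_stopping_run(rng) for _ in range(runs))
rate = false_alarms / runs
print(rate)  # bounded above by 1/threshold = 1/3 no matter how often we peek
```

Even with a check after every single flip, the fraction of null-true experiments that ever reach BF >= 3 stays under 1/3 here; the paper's point is that no comparably tight bound protects the composite and heterogeneous cases common in psychology.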
Bovine Polledness – An Autosomal Dominant Trait with Allelic Heterogeneity
Medugorac, Ivica; Seichter, Doris; Graf, Alexander; Russ, Ingolf; Blum, Helmut; Göpel, Karl Heinrich; Rothammer, Sophie; Förster, Martin; Krebs, Stefan
2012-01-01
The persistent horns are an important speciation trait for the family Bovidae, with complex morphogenesis taking place briefly after birth. Polledness is highly favourable in modern cattle breeding systems, but serious animal welfare issues urge a solution for producing hornless cattle other than dehorning. Although the dominant inhibition of horn morphogenesis was discovered more than 70 years ago, and the causative mutation was mapped almost 20 years ago, its molecular nature remained unknown. Here, we report allelic heterogeneity of the POLLED locus. First, we mapped the POLLED locus to a ∼381-kb interval in a multi-breed case-control design. Targeted re-sequencing of an enlarged candidate interval (547 kb) in 16 sires with known POLLED genotype did not detect a common allele associated with polled status. In eight sires of Alpine and Scottish origin (four polled versus four horned), we identified a single candidate mutation, a complex 202 bp insertion-deletion event that showed perfect association with the polled phenotype in various European cattle breeds, except Holstein-Friesian. Analysis of the same candidate interval in eight Holsteins identified five candidate variants which segregate as a 260 kb haplotype, also perfectly associated with the POLLED gene, without recombination or interference with the 202 bp insertion-deletion. We further identified bulls that are progeny-tested as homozygous polled but bear both the 202 bp insertion-deletion and the Friesian haplotype. The distribution of genotypes of the two putative POLLED alleles in a large semi-random sample (1,261 animals) supports the hypothesis of two independent mutations. PMID:22737241
Xu, Wang Hong; Dai, Qi; Xiang, Yong Bing; Long, Ji Rong; Ruan, Zhi Xian; Cheng, Jia Rong; Zheng, Wei; Shu, Xiao Ou
2007-12-15
Certain polyphenols inhibit the activity of aromatase, a critical enzyme in estrogen synthesis that is coded by the CYP19A1 gene. Consumption of polyphenol-rich foods and beverages, thus, may interact with CYP19A1 genetic polymorphisms in the development of endometrial cancer. The authors tested this hypothesis in the Shanghai Endometrial Cancer Study (1997-2003), a population-based case-control study of 1,204 endometrial cancer cases and 1,212 controls. Dietary information was obtained by use of a validated food frequency questionnaire. Genotypes of CYP19A1 at rs28566535, rs1065779, rs752760, rs700519, and rs1870050 were available for 1,042 cases and 1,035 controls. Unconditional logistic regression models were used to calculate odds ratios and their 95% confidence intervals after adjustment for potential confounding factors. Higher intake of soy foods and tea consumption were both inversely associated with the risk of endometrial cancer, with odds ratios of 0.8 (95% confidence interval: 0.6, 1.0) for the highest versus the lowest tertiles of soy intake and 0.8 (95% confidence interval: 0.6, 0.9) for ever tea consumption. The association of single nucleotide polymorphisms rs1065779, rs752760, and rs1870050 with endometrial cancer was modified by tea consumption (p(interaction) < 0.05) but not by soy isoflavone intake. The authors' findings suggest that tea polyphenols may modify the effect of CYP19A1 genetic polymorphisms on the development of endometrial cancer.
Nenko, Ilona; Jasienska, Grazyna
2013-01-01
Women should differ in their reproductive strategies according to their nutritional status. We tested the hypothesis that women who have a good nutritional status early in life, as indicated by a shorter waiting time to the first birth (first birth interval, FBI), are able to afford higher costs of reproduction than women in worse nutritional condition. We collected data on 377 women who married between 1782 and 1882 in a natural-fertility population in rural Poland. The study group was divided into tertiles based on the length of FBI. Women with the shortest FBI had a higher number of children (P = 0.005), a higher number of sons (P = 0.01), and shorter mean interbirth intervals (P = 0.06). Women who had ever given birth to twins had a shorter FBI than mothers of singletons (20.1 and 26.1 months, respectively; P = 0.049). Furthermore, women with a shorter FBI, despite having higher costs of reproduction, did not have a different lifespan than women with a longer FBI. Our results suggest that women who were in better energetic condition (shorter FBI) achieved higher reproductive success without a reduction in lifespan. FBI reflects interindividual variation, which may result from variation in nutritional status early in life and thus may be a good predictor of subsequent reproductive strategy. We propose using FBI as an indicator of women's nutritional status in studies of historical populations, especially when information about social status is not available. Copyright © 2012 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Winkelstern, I. Z.; Surge, D. M.
2010-12-01
Pliocene sea surface temperature (SST) data from the US Atlantic coastal plain is currently insufficient for a detailed understanding of the climatic shifts that occurred during the period. Previous studies, based on oxygen isotope proxy data from marine shells and bryozoan zooid size analysis, have provided constraints on possible annual-scale SST ranges for the region. However, more data are required to fully understand the forcing mechanisms affecting regional Pliocene climate and evaluate modeled temperature projections. Bivalve sclerochronology (growth increment analysis) is an alternative proxy for SST that can provide annually resolved multi-year time series. The method has been validated in previous studies using modern Arctica, Chione, and Mercenaria. We analyzed Pliocene Mercenaria carolinensis shells using sclerochronologic methods and tested the hypothesis that higher SST ranges are reflected in shells selected from the warmest climate interval (3.5-3.3 Ma, upper Yorktown Formation, Virginia) and lower SST ranges are observable in shells selected from the subsequent cooling interval (2.4-1.8 Ma, Chowan River Formation, North Carolina). These results further establish the validity of growth increment analysis using fossil shells and provide the first large dataset (from the region) of reconstructed annual SST from floating time series during these intervals. These data will enhance our knowledge about a warm climate state that has been identified in the 2007 IPCC report as an analogue for expected global warming. Future work will expand this study to include sampling in Florida to gain detailed information about Pliocene SST along a latitudinal gradient.
Kalmbach, Brian; Chitwood, Raymond A.; Mauk, Michael D.
2012-01-01
We have addressed the source and nature of the persistent neural activity that bridges the stimulus-free gap between the conditioned stimulus (CS) and unconditioned stimulus (US) during trace eyelid conditioning. Previous work has demonstrated that this persistent activity is necessary for trace eyelid conditioning: CS-elicited activity in mossy fiber inputs to the cerebellum does not extend into the stimulus-free trace interval, which precludes the cerebellar learning that mediates conditioned response expression. In behaving rabbits we used in vivo recordings from a region of medial prefrontal cortex (mPFC) that is necessary for trace eyelid conditioning to test the hypothesis that neurons there generate activity that persists beyond CS offset. These recordings revealed two patterns of activity during the trace interval that would enable cerebellar learning. Activity in some cells began during the tone CS and persisted to overlap with the US, whereas in other cells, activity began during the stimulus-free trace interval. Injection of anterograde tracers into this same region of mPFC revealed dense labeling in the pontine nuclei, where recordings also revealed tone-evoked persistent activity during trace conditioning. These data suggest a corticopontine pathway that provides an input to the cerebellum during trace conditioning trials that bridges the temporal gap between the CS and US to engage cerebellar learning. As such, trace eyelid conditioning represents a well-characterized and experimentally tractable system that can facilitate mechanistic analyses of cortical persistent activity and how it is used by downstream brain structures to influence behavior. PMID:21957220
Sivakumar, Siddharth S.; Namath, Amalia G.; Tuxhorn, Ingrid E.; Lewis, Stephen J.
2016-01-01
We hypothesized that epilepsy affects the activity of the autonomic nervous system even in the absence of seizures, which should manifest as differences in heart rate variability (HRV) and cardiac cycle. To test this hypothesis, we investigated ECG traces of 91 children and adolescents with generalized epilepsy and 25 neurologically normal controls during 30 min of stage 2 sleep with interictal or normal EEG. Mean heart rate (HR) and high-frequency HRV corresponding to respiratory sinus arrhythmia (RSA) were quantified and compared. Blood pressure (BP) measurements from physical exams of all subjects were also collected and analyzed. RSA was on average significantly stronger in patients with epilepsy, whereas their mean HR was significantly lower after adjusting for age, body mass index, and sex, consistent with increased parasympathetic tone in these patients. In contrast, diastolic (and systolic) BP at rest was not significantly different, indicating that the sympathetic tone is similar. Remarkably, five additional subjects, initially diagnosed as neurologically normal but with enhanced RSA and lower HR, eventually developed epilepsy, suggesting that increased parasympathetic tone precedes the onset of epilepsy in children. ECG waveforms in epilepsy also displayed significantly longer TP intervals (ventricular diastole) relative to the RR interval. The relative TP interval correlated positively with RSA and negatively with HR, suggesting that these parameters are linked through a common mechanism, which we discuss. Altogether, our results provide evidence for imbalanced autonomic function in generalized epilepsy, which may be a key contributing factor to sudden unexpected death in epilepsy. PMID:26888110
Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model
ERIC Educational Resources Information Center
Kim, Kyung Yong; Lee, Won-Chan
2018-01-01
Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…
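As a concrete, simplified illustration of the binomial-error idea this abstract refers to: under Lord's binomial error model, the standard error of measurement for a number-correct score X on an n-item test is sqrt(X(n − X)/(n − 1)), and a rough normal-approximation interval follows. This is a sketch of the simplest case only, not the compound binomial procedure or the weighted composite scores of the article; the example score and test length are hypothetical.

```python
import math

def binomial_sem(score, n_items):
    """Lord's binomial-error-model standard error for a number-correct score."""
    return math.sqrt(score * (n_items - score) / (n_items - 1))

def score_interval(score, n_items, z=1.96):
    """Normal-approximation interval for the true score, truncated to the
    valid score range [0, n_items]."""
    half = z * binomial_sem(score, n_items)
    return (max(0.0, score - half), min(float(n_items), score + half))

# Hypothetical examinee: 30 items correct out of 40.
lo, hi = score_interval(30, 40)
print(round(binomial_sem(30, 40), 2), (round(lo, 1), round(hi, 1)))
```

Note how wide the interval is relative to the 40-point score scale; this imprecision is exactly the information that reporting an interval alongside the score conveys to test users.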
TRANSGENIC MOUSE MODELS AND PARTICULATE MATTER (PM)
The hypothesis to be tested is that metal catalyzed oxidative stress can contribute to the biological effects of particulate matter. We acquired several transgenic mouse strains to test this hypothesis. Breeding of the mice was accomplished by Duke University. Particles employed ...
Hypothesis Testing Using the Films of the Three Stooges
ERIC Educational Resources Information Center
Gardner, Robert; Davidson, Robert
2010-01-01
The use of The Three Stooges' films as a source of data in an introductory statistics class is described. The Stooges' films are separated into three populations. Using these populations, students may conduct hypothesis tests with data they collect.
Hovick, Stephen M; Whitney, Kenneth D
2014-01-01
The hypothesis that interspecific hybridisation promotes invasiveness has received much recent attention, but tests of the hypothesis can suffer from important limitations. Here, we provide the first systematic review of studies experimentally testing the hybridisation-invasion (H-I) hypothesis in plants, animals and fungi. We identified 72 hybrid systems for which hybridisation has been putatively associated with invasiveness, weediness or range expansion. Within this group, 15 systems (comprising 34 studies) experimentally tested performance of hybrids vs. their parental species and met our other criteria. Both phylogenetic and non-phylogenetic meta-analyses demonstrated that wild hybrids were significantly more fecund and larger than their parental taxa, but did not differ in survival. Resynthesised hybrids (which typically represent earlier generations than do wild hybrids) did not consistently differ from parental species in fecundity, survival or size. Using meta-regression, we found that fecundity increased (but survival decreased) with generation in resynthesised hybrids, suggesting that natural selection can play an important role in shaping hybrid performance – and thus invasiveness – over time. We conclude that the available evidence supports the H-I hypothesis, with the caveat that our results are clearly driven by tests in plants, which are more numerous than tests in animals and fungi. PMID:25234578
The Harm Done to Reproducibility by the Culture of Null Hypothesis Significance Testing.
Lash, Timothy L
2017-09-15
In the last few years, stakeholders in the scientific community have raised alarms about a perceived lack of reproducibility of scientific results. In reaction, guidelines for journals have been promulgated and grant applicants have been asked to address the rigor and reproducibility of their proposed projects. Neither solution addresses a primary culprit, which is the culture of null hypothesis significance testing that dominates statistical analysis and inference. In an innovative research enterprise, selection of results for further evaluation based on null hypothesis significance testing is doomed to yield a low proportion of reproducible results and a high proportion of effects that are initially overestimated. In addition, the culture of null hypothesis significance testing discourages quantitative adjustments to account for systematic errors and quantitative incorporation of prior information. These strategies would otherwise improve reproducibility and have not been previously proposed in the widely cited literature on this topic. Without discarding the culture of null hypothesis significance testing and implementing these alternative methods for statistical analysis and inference, all other strategies for improving reproducibility will yield marginal gains at best. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Sheth, Kevin N; Martini, Sharyl R; Moomaw, Charles J; Koch, Sebastian; Elkind, Mitchell S V; Sung, Gene; Kittner, Steven J; Frankel, Michael; Rosand, Jonathan; Langefeld, Carl D; Comeau, Mary E; Waddy, Salina P; Osborne, Jennifer; Woo, Daniel
2015-12-01
The role of antiepileptic drug (AED) prophylaxis after intracerebral hemorrhage (ICH) remains unclear. This analysis describes prevalence of prophylactic AED use, as directed by treating clinicians, in a prospective ICH cohort and tests the hypothesis that it is associated with poor outcome. Analysis included 744 patients with ICH enrolled in the Ethnic/Racial Variations of Intracerebral Hemorrhage (ERICH) study before November 2012. Baseline clinical characteristics and AED use were recorded in standardized fashion. ICH location and volume were recorded from baseline neuroimaging. We analyzed differences in patient characteristics by AED prophylaxis, and we used logistic regression to test whether AED prophylaxis was associated with poor outcome. The primary outcome was 3-month modified Rankin Scale score, with 4 to 6 considered poor outcome. AEDs were used for prophylaxis in 289 (39%) of the 744 subjects; of these, levetiracetam was used in 89%. Patients with lobar ICH, craniotomy, or larger hematomas were more likely to receive prophylaxis. Although prophylactic AED use was associated with poor outcome in an unadjusted model (odds ratio, 1.40; 95% confidence interval, 1.04-1.88; P=0.03), this association was no longer significant after adjusting for clinical and demographic characteristics (odds ratio, 1.11; 95% confidence interval, 0.74-1.65; P=0.62). We found no evidence that AED use (predominantly levetiracetam) is independently associated with poor outcome. A prospective study is required to assess for a more modest effect of AED use on outcome after ICH. © 2015 American Heart Association, Inc.
Pharmacotherapy of attention deficit in neurofibromatosis type 1: effects on cognition.
Lidzba, Karen; Granstroem, Sofia; Leark, Robert A; Kraegeloh-Mann, Inge; Mautner, Victor-Felix
2014-08-01
Attention deficit with or without hyperactivity (AD[H]D) is a common comorbidity of neurofibromatosis type 1 (NF 1). We tested the hypothesis that permanent medication with methylphenidate (MPH) can improve cognitive functioning in children with NF 1 and comorbid AD(H)D. We retrospectively analyzed data of a clinical sample of patients with NF 1 with or without AD(H)D, who underwent standardized neuropsychological diagnostics twice (age range: T1, 6-14 years; T2, 7-16 years; mean interval, 49.09 months). A total of 16 children without AD(H)D (nine females) were compared with 14 unmedicated children with AD(H)D (eight females) and to 13 medicated children with AD(H)D (two females). Effects of medication and attention on cognitive outcome (IQ) were tested by repeated measures analysis of covariance (rmANCOVA). Medicated children with NF 1 improved significantly in full-scale IQ from T1 to T2 (IQ[T1] = 80.38, IQ[T2] = 98.38, confidence interval [diff]: -25.59 to -10.40, p < 0.0001); this effect was not evident for the other groups. With attention measures as covariates, the effect remained marginally significant. Children and adolescents with NF 1 and comorbid AD(H)D may profit from MPH medication regarding general cognition. This effect could be specific for the group of patients with NF 1, and cannot be explained solely by improvements in attention. Controlled, prospective studies are warranted to corroborate our findings. Georg Thieme Verlag KG Stuttgart · New York.
The Impact of Economic Factors and Acquisition Reforms on the Cost of Defense Weapon Systems
2006-03-01
To test for homoskedasticity, the Breusch-Pagan test is employed. The null hypothesis of the Breusch-Pagan test is constant error variance (homoskedasticity). Using the Breusch-Pagan test results shown in Table 19 below, the prob > chi2 value is greater than α = 0.05; therefore, we fail to reject the null hypothesis of constant variance.
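The test described in the excerpt can be sketched from its textbook Lagrange-multiplier form; the regressor and simulated data below are illustrative stand-ins, not the cost-overrun variables of the study:

```python
import random

# Breusch-Pagan LM test for heteroskedasticity in a simple regression,
# written from the textbook definition. Data are simulated for illustration,
# with error variance growing in x, so the sample is heteroskedastic by design.
rng = random.Random(42)
n = 200
x = [rng.uniform(0, 10) for _ in range(n)]
y = [2.0 + 0.5 * xi + rng.gauss(0, 0.2 + 0.3 * xi) for xi in x]

def ols_slope_intercept(x, y):
    """Closed-form OLS fit y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

def breusch_pagan(x, y):
    """LM = n * R^2 from regressing squared residuals on x; ~ chi2(1) under H0."""
    a, b = ols_slope_intercept(x, y)
    e2 = [(yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)]
    a2, b2 = ols_slope_intercept(x, e2)
    me = sum(e2) / len(e2)
    ss_tot = sum((v - me) ** 2 for v in e2)
    ss_res = sum((v - (a2 + b2 * xi)) ** 2 for xi, v in zip(x, e2))
    return len(x) * (1 - ss_res / ss_tot)

lm = breusch_pagan(x, y)
# chi2(1) critical value at alpha = 0.05 is 3.84:
# reject H0 (constant variance) when LM > 3.84.
print(f"LM = {lm:.2f}, reject constant variance: {lm > 3.84}")
```

On homoskedastic data the same function would typically return an LM statistic below 3.84, reproducing the "fail to reject" outcome described in the excerpt.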
Ziehl-Quirós, E Carolina; García-Aguilar, María C; Mellink, Eric
2017-01-24
The relatively small population size and restricted distribution of the Guadalupe fur seal Arctocephalus townsendi could make it highly vulnerable to infectious diseases. We performed a colony-level assessment in this species of the prevalence and presence of Brucella spp. and Leptospira spp., pathogenic bacteria that have been reported in several pinniped species worldwide. Forty-six serum samples were collected in 2014 from pups at Isla Guadalupe, the only place where the species effectively reproduces. Samples were tested for Brucella using 3 consecutive serological tests, and for Leptospira using the microscopic agglutination test. For each bacterium, a Bayesian approach was used to estimate the prevalence of exposure, and an epidemiological model was used to test the null hypothesis that the bacterium was present in the colony. No serum sample tested positive for Brucella, and the statistical analyses concluded that the colony was bacterium-free with a 96.3% confidence level. However, a Brucella surveillance program would be highly advisable. Twelve samples were positive (titers 1:50) to 1 or more serovars of Leptospira. The prevalence was calculated at 27.1% (95% credible interval: 15.6-40.3%), and the posterior analyses indicated that the colony was not Leptospira-free with a 100% confidence level. Serovars Icterohaemorrhagiae, Canicola, and Bratislava were detected, but only further research can unveil whether they affect the fur seal population.
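The reported Leptospira prevalence is consistent with a simple Beta-binomial model under a uniform prior (the abstract does not state the prior used, so that choice is an assumption here): 12 positives out of 46 give a Beta(13, 35) posterior with mean 13/48 ≈ 27.1%, matching the point estimate above. A sketch, with Monte Carlo draws standing in for Beta quantiles since the standard library has no Beta inverse CDF:

```python
import random

# Posterior for seroprevalence with a uniform Beta(1, 1) prior (an assumption;
# the paper does not state its prior): 12 positives out of 46 pups.
pos, n = 12, 46
a, b = 1 + pos, 1 + (n - pos)          # Beta(13, 35) posterior

post_mean = a / (a + b)                # 13/48 = 0.2708..., i.e. ~27.1%

# Monte Carlo draws approximate the equal-tailed 95% credible interval.
rng = random.Random(0)
draws = sorted(rng.betavariate(a, b) for _ in range(100_000))
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
print(f"mean {post_mean:.3f}, 95% credible interval ({lo:.3f}, {hi:.3f})")
```

The interval recovered this way lands close to the 15.6-40.3% reported in the abstract.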
Effects of Item Exposure for Conventional Examinations in a Continuous Testing Environment.
ERIC Educational Resources Information Center
Hertz, Norman R.; Chinn, Roberta N.
This study explored the effect of item exposure on two conventional examinations administered as computer-based tests. A principal hypothesis was that item exposure would have little or no effect on average difficulty of the items over the course of an administrative cycle. This hypothesis was tested by exploring conventional item statistics and…
ERIC Educational Resources Information Center
McNeil, Keith
The use of directional and nondirectional hypothesis testing was examined from the perspectives of textbooks, journal articles, and members of editorial boards. Three widely used statistical texts were reviewed in terms of how directional and nondirectional tests of significance were presented. Texts reviewed were written by: (1) D. E. Hinkle, W.…
The Feminization of School Hypothesis Called into Question among Junior and High School Students
ERIC Educational Resources Information Center
Verniers, Catherine; Martinot, Delphine; Dompnier, Benoît
2016-01-01
Background: The feminization of school hypothesis suggests that boys underachieve in school compared to girls because school rewards feminine characteristics that are at odds with boys' masculine features. Aims: The feminization of school hypothesis lacks empirical evidence. The aim of this study was to test this hypothesis by examining the extent…
Supporting shared hypothesis testing in the biomedical domain.
Agibetov, Asan; Jiménez-Ruiz, Ernesto; Ondrésik, Marta; Solimando, Alessandro; Banerjee, Imon; Guerrini, Giovanna; Catalano, Chiara E; Oliveira, Joaquim M; Patanè, Giuseppe; Reis, Rui L; Spagnuolo, Michela
2018-02-08
Pathogenesis of inflammatory diseases can be tracked by studying the causality relationships among the factors contributing to its development. We could, for instance, hypothesize on the connections of the pathogenesis outcomes to the observed conditions. To prove such causal hypotheses, we would need a full understanding of the causal relationships, and we would have to provide all the necessary evidence to support our claims. In practice, however, we might not possess all the background knowledge on the causality relationships, and we might be unable to collect all the evidence to prove our hypotheses. In this work we propose a methodology for the translation of biological knowledge on causality relationships of biological processes, and their effects on conditions, into a computational framework for hypothesis testing. The methodology consists of two main points: hypothesis graph construction from the formalization of the background knowledge on causality relationships, and confidence measurement in a causality hypothesis as a normalized weighted path computation in the hypothesis graph. In this framework, we can simulate the collection of evidence and assess confidence in a causality hypothesis by measuring it proportionally to the amount of available knowledge and collected evidence. We evaluate our methodology on a hypothesis graph that represents both the contributing factors which may cause cartilage degradation and the factors which might be caused by cartilage degradation during osteoarthritis. Hypothesis graph construction has proven to be robust to the addition of potentially contradictory information on simultaneously positive and negative effects. The obtained confidence measures for the specific causality hypotheses have been validated by our domain experts and correspond closely to their subjective assessments of confidence in the investigated hypotheses.
Overall, our methodology for a shared hypothesis testing framework exhibits important properties that researchers will find useful in literature review for their experimental studies, planning and prioritizing evidence collection acquisition procedures, and testing their hypotheses with different depths of knowledge on causal dependencies of biological processes and their effects on the observed conditions.
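The graph-based confidence idea can be sketched in a toy form. The node names, edge weights, and the best-path measure below are illustrative simplifications invented here; the paper's actual normalized weighted-path computation is more elaborate:

```python
# Toy hypothesis graph: nodes are biological processes/conditions, weighted
# edges encode strength of evidence for a causal link. The confidence measure
# below (best path product of edge weights) is an illustrative simplification,
# not the authors' exact normalized weighted-path computation.
graph = {
    "inflammation": [("enzyme_activation", 0.8)],
    "enzyme_activation": [("cartilage_degradation", 0.7)],
    "cartilage_degradation": [("joint_pain", 0.9), ("reduced_mobility", 0.6)],
}

def confidence(graph, source, target, seen=None):
    """Best-path confidence: max over paths of the product of edge weights."""
    if source == target:
        return 1.0
    seen = (seen or set()) | {source}
    best = 0.0
    for nxt, w in graph.get(source, []):
        if nxt not in seen:
            best = max(best, w * confidence(graph, nxt, target, seen))
    return best

c = confidence(graph, "inflammation", "joint_pain")
print(f"confidence(inflammation -> joint_pain) = {c:.3f}")  # 0.8 * 0.7 * 0.9 = 0.504
```

Adding evidence for a link would raise its edge weight, and the hypothesis confidence grows with it, mirroring the "confidence proportional to collected evidence" behaviour described above.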
O'Gorman, Thomas W
2018-05-01
In the last decade, it has been shown that an adaptive testing method could be used, along with the Robbins-Monro search procedure, to obtain confidence intervals that are often narrower than traditional confidence intervals. However, these confidence interval limits require a great deal of computation and some familiarity with stochastic search methods. We propose a method for estimating the limits of confidence intervals that uses only a few tests of significance. We compare these limits to those obtained by a lengthy Robbins-Monro stochastic search and find that the proposed method is nearly as accurate as the Robbins-Monro search. Adaptive confidence intervals that are produced by the proposed method are often narrower than traditional confidence intervals when the distributions are long-tailed, skewed, or bimodal. Moreover, the proposed method of estimating confidence interval limits is easy to understand, because it is based solely on the p-values from a few tests of significance.
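The underlying idea, reading confidence limits off tests of significance, can be sketched by test inversion: the 95% interval is the set of null values whose two-sided p-value exceeds 0.05. A z-test with known sigma keeps the example minimal; the paper's adaptive tests and Robbins-Monro search are far more elaborate, and the data here are invented:

```python
import math

def z_pvalue(data, mu0, sigma):
    """Two-sided p-value of a z-test of H0: mu = mu0 (sigma known)."""
    n = len(data)
    z = abs(sum(data) / n - mu0) / (sigma / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def crossing(data, sigma, inside, outside, alpha=0.05, tol=1e-8):
    """Bisect between a point inside the interval (p > alpha) and one outside."""
    while abs(outside - inside) > tol:
        mid = (inside + outside) / 2
        if z_pvalue(data, mid, sigma) > alpha:
            inside = mid
        else:
            outside = mid
    return (inside + outside) / 2

data, sigma = [4.1, 5.2, 4.8, 5.5, 4.9, 5.1, 4.6, 5.0], 0.5
mean = sum(data) / len(data)
lower = crossing(data, sigma, mean, mean - 5)
upper = crossing(data, sigma, mean, mean + 5)
print(f"95% CI by test inversion: ({lower:.3f}, {upper:.3f})")
```

For the z-test the inverted limits coincide with the familiar closed form x̄ ± 1.96·σ/√n; the payoff of inversion is that it still works when the test is adaptive and no closed form exists.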
The limits to pride: A test of the pro-anorexia hypothesis.
Cornelius, Talea; Blanton, Hart
2016-01-01
Many social psychological models propose that positive self-conceptions promote self-esteem. An extreme version of this hypothesis is advanced in "pro-anorexia" communities: identifying with anorexia, in conjunction with disordered eating, can lead to higher self-esteem. The current study empirically tested this hypothesis. Results challenge the pro-anorexia hypothesis. Although those with higher levels of pro-anorexia identification trended towards higher self-esteem with increased disordered eating, this did not overcome the strong negative main effect of pro-anorexia identification. These data suggest a more effective strategy for promoting self-esteem is to encourage rejection of disordered eating and an anorexic identity.
Does the Slow-Growth, High-Mortality Hypothesis Apply Below Ground?
Hourston, James E; Bennett, Alison E; Johnson, Scott N; Gange, Alan C
2016-01-01
Belowground tri-trophic study systems present a challenging environment in which to study plant-herbivore-natural enemy interactions. For this reason, belowground examples are rarely available for testing general ecological theories. To redress this imbalance, we present, for the first time, data on a belowground tri-trophic system to test the slow-growth, high-mortality hypothesis. We investigated whether the differing performance of entomopathogenic nematodes (EPNs) in controlling the common pest black vine weevil Otiorhynchus sulcatus could be linked to differently resistant cultivars of the red raspberry Rubus idaeus. The O. sulcatus larvae recovered from R. idaeus plants showed significantly slower growth and higher mortality on the Glen Rosa cultivar, relative to the more commercially favored Glen Ample cultivar, creating a convenient system for testing this hypothesis. Heterorhabditis megidis was found to be less effective at controlling O. sulcatus than Steinernema kraussei, but conformed to the hypothesis. However, S. kraussei maintained high levels of O. sulcatus mortality regardless of how larval growth was influenced by R. idaeus cultivar. We link this to direct effects that S. kraussei had on reducing O. sulcatus larval mass, indicating potential sub-lethal effects of S. kraussei, which the slow-growth, high-mortality hypothesis does not account for. Possible origins of these sub-lethal effects of EPN infection, and how they may impact on a hypothesis designed and tested with aboveground predator and parasitoid systems, are discussed.
Automatic image equalization and contrast enhancement using Gaussian mixture modeling.
Celik, Turgay; Tjahjadi, Tardi
2012-01-01
In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast-equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To take account of the hypothesis that homogeneous regions in the image correspond to homogeneous segments (or sets of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces enhanced images that are better than, or comparable to, those of several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
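The core idea can be sketched in one dimension with a two-component mixture. Everything below is a deliberate simplification of the paper's method (which handles many components, true density intersections, and CDF-based within-interval mapping): here EM fits two Gaussians to simulated gray levels, a variance-weighted midpoint stands in for the intersection point, and each input interval is stretched linearly onto half of the output range:

```python
import math
import random

# Simulated bimodal gray levels standing in for an image histogram.
rng = random.Random(3)
gray = [rng.gauss(60, 10) for _ in range(400)] + [rng.gauss(170, 15) for _ in range(400)]

def em_two_gaussians(x, iters=50):
    """Plain EM for a two-component 1-D Gaussian mixture."""
    m1, m2 = min(x), max(x)
    s1 = s2 = (m2 - m1) / 4
    w = 0.5
    for _ in range(iters):
        r = []
        for xi in x:  # E-step: responsibility of component 1 for each point
            p1 = w * math.exp(-((xi - m1) / s1) ** 2 / 2) / s1
            p2 = (1 - w) * math.exp(-((xi - m2) / s2) ** 2 / 2) / s2
            r.append(p1 / (p1 + p2))
        n1 = sum(r)
        n2 = len(x) - n1
        m1 = sum(ri * xi for ri, xi in zip(r, x)) / n1          # M-step
        m2 = sum((1 - ri) * xi for ri, xi in zip(r, x)) / n2
        s1 = math.sqrt(sum(ri * (xi - m1) ** 2 for ri, xi in zip(r, x)) / n1) or 1.0
        s2 = math.sqrt(sum((1 - ri) * (xi - m2) ** 2 for ri, xi in zip(r, x)) / n2) or 1.0
        w = n1 / len(x)
    return (m1, s1), (m2, s2)

(m1, s1), (m2, s2) = em_two_gaussians(gray)

# Split point between the modes: variance-weighted midpoint, a stand-in for
# solving the exact quadratic of the two Gaussian densities' intersection.
split = (m1 * s2 + m2 * s1) / (s1 + s2)

def equalize(v, lo, hi):
    """Map [lo, split] linearly onto [0, 127] and [split, hi] onto [127, 255]."""
    if v <= split:
        return (v - lo) / (split - lo) * 127.0
    return 127.0 + (v - split) / (hi - split) * 128.0

lo, hi = min(gray), max(gray)
out = [equalize(v, lo, hi) for v in gray]
print(f"split at {split:.1f}; output range [{min(out):.1f}, {max(out):.1f}]")
```

Each mode ends up occupying half of the output dynamic range, which is the interval-to-interval stretching the abstract describes.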
A critique of statistical hypothesis testing in clinical research
Raha, Somik
2011-01-01
Many have documented the difficulty of using the current paradigm of Randomized Controlled Trials (RCTs) to test and validate the effectiveness of alternative medical systems such as Ayurveda. This paper critiques the applicability of RCTs for all clinical knowledge-seeking endeavors, of which Ayurveda research is a part. This is done by examining statistical hypothesis testing, the underlying foundation of RCTs, from a practical and philosophical perspective. In the philosophical critique, the two main worldviews of probability are that of the Bayesian and the frequentist. The frequentist worldview is a special case of the Bayesian worldview requiring the unrealistic assumptions of knowing nothing about the universe and believing that all observations are unrelated to each other. Many have claimed that the first belief is necessary for science, and this claim is debunked by comparing variations in learning with different prior beliefs. Moving beyond the Bayesian and frequentist worldviews, the notion of hypothesis testing itself is challenged on the grounds that a hypothesis is an unclear distinction, and assigning a probability on an unclear distinction is an exercise that does not lead to clarity of action. This critique is of the theory itself and not any particular application of statistical hypothesis testing. A decision-making frame is proposed as a way of both addressing this critique and transcending ideological debates on probability. An example of a Bayesian decision-making approach is shown as an alternative to statistical hypothesis testing, utilizing data from a past clinical trial that studied the effect of Aspirin on heart attacks in a sample population of doctors. Because a major reason for the prevalence of RCTs in academia is legislation requiring their use, the ethics of legislating the use of statistical methods for clinical research is also examined. PMID:22022152
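A Bayesian treatment of such a trial can be sketched directly. The counts below are the commonly cited Physicians' Health Study figures (an assumption about which aspirin trial the paper refers to), and the output is a posterior probability rather than a p-value:

```python
import random

# Bayesian alternative to a significance test, using the commonly cited
# Physicians' Health Study counts (an assumption, not taken from the paper):
# 104 myocardial infarctions among 11,037 physicians on aspirin vs. 189 among
# 11,034 on placebo, with uniform Beta(1, 1) priors on both event rates.
rng = random.Random(7)
mi_asp, n_asp = 104, 11_037
mi_plc, n_plc = 189, 11_034

draws = 50_000
wins = sum(
    rng.betavariate(1 + mi_asp, 1 + n_asp - mi_asp)
    < rng.betavariate(1 + mi_plc, 1 + n_plc - mi_plc)
    for _ in range(draws)
)
prob = wins / draws
print(f"P(aspirin MI rate < placebo MI rate | data) = {prob:.4f}")
```

The posterior probability feeds directly into a decision analysis (weighing actions by expected utility), which is the decision-making frame the paper advocates over accept/reject testing.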
Reassessing the "traditional background hypothesis" for elevated MMPI and MMPI-2 Lie-scale scores.
Rosen, Gerald M; Baldwin, Scott A; Smith, Ronald E
2016-10-01
The Lie (L) scale of the Minnesota Multiphasic Personality Inventory (MMPI) is widely regarded as a measure of conscious attempts to deny common human foibles and to present oneself in an unrealistically positive light. At the same time, the current MMPI-2 manual states that "traditional" and religious backgrounds can account for elevated L scale scores as high as 65T-79T, thereby tempering impression management interpretations for faith-based individuals. To assess the validity of the traditional background hypothesis, we reviewed 11 published studies that employed the original MMPI with religious samples and found that only 1 obtained an elevated mean L score. We then conducted a meta-analysis of 12 published MMPI-2 studies in which we compared L scores of religious samples to the test normative group. The meta-analysis revealed large between-study heterogeneity (I2 = 87.1), L scale scores for religious samples that were somewhat higher but did not approach the upper limits specified in the MMPI-2 manual, and an overall moderate effect size (d¯ = 0.54, p < .001; 95% confidence interval [0.37, 0.70]). Our analyses indicated that religious-group membership accounts, on average, for elevations on L of about 5 t-score points. Whether these scores reflect conscious "fake good" impression management or religious-based virtuousness remains unanswered. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
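The quantities reported above (a pooled effect size with its confidence interval, and the heterogeneity index I²) can be sketched with an inverse-variance meta-analysis. The per-study effect sizes and standard errors below are hypothetical, not the 12 MMPI-2 studies from this meta-analysis:

```python
import math

# Fixed-effect summary, Cochran's Q, and I^2 from per-study standardized mean
# differences (d) and standard errors. The numbers are hypothetical.
d  = [0.30, 0.75, 0.41, 0.90, 0.55, 0.20]
se = [0.10, 0.15, 0.12, 0.20, 0.11, 0.14]

w = [1 / s ** 2 for s in se]                       # inverse-variance weights
d_fixed = sum(wi * di for wi, di in zip(w, d)) / sum(w)
q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, d))
df = len(d) - 1
i2 = max(0.0, (q - df) / q) * 100                  # % of variation due to heterogeneity
se_fixed = 1 / math.sqrt(sum(w))
ci = (d_fixed - 1.96 * se_fixed, d_fixed + 1.96 * se_fixed)
print(f"d = {d_fixed:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), Q = {q:.1f}, I^2 = {i2:.0f}%")
```

A large I², as in the abstract's 87.1, signals that the between-study spread exceeds what sampling error alone would produce, which is why the authors interpret the pooled d cautiously.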
Effect of between-category similarity on basic-level superiority in pigeons
Lazareva, Olga F.; Soto, Fabián A.; Wasserman, Edward A.
2010-01-01
Children categorize stimuli at the basic level faster than at the superordinate level. We hypothesized that between-category similarity may affect this basic-level superiority effect. Dissimilar categories may be easy to distinguish at the basic level but be difficult to group at the superordinate level, whereas similar categories may be easy to group at the superordinate level but be difficult to distinguish at the basic level. Consequently, similar basic-level categories may produce a superordinate-before-basic learning trend, whereas dissimilar basic-level categories may result in a basic-before-superordinate learning trend. We tested this hypothesis in pigeons by constructing superordinate-level categories out of basic-level categories with known similarity. In Experiment 1, we experimentally evaluated the between-category similarity of four basic-level photographic categories using multiple fixed interval-extinction training (Astley & Wasserman, 1992). We used the resultant similarity matrices in Experiment 2 to construct two superordinate-level categories from basic-level categories with high between-category similarity (cars and persons; chairs and flowers). We then trained pigeons to concurrently classify those photographs into either the proper basic-level category or the proper superordinate-level category. Under these conditions, the pigeons learned the superordinate-level discrimination faster than the basic-level discrimination, confirming our hypothesis that basic-level superiority is affected by between-category similarity. PMID:20600696
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boushey, H.A.
1991-11-01
The study examined the hypothesis that ozone inactivates the enzyme neutral endopeptidase, which is responsible for limiting the effects of neuropeptides released from afferent nerve endings. Cough response was assessed with capsaicin solution delivered from a nebulizer at 2 min. intervals until two or more coughs were produced. Other endpoints measured included irritative symptoms as rated by the subjects on a nonparametric scale and spirometry. The effects of each concentration of ozone were compared to those of filtered air in a single-blind randomized sequence. The results indicate that a 2 h. exposure to 0.4 ppm of ozone with intermittent light exercise alters the sensitivity of airway nerves that mediate the cough response to inhaled materials. This dose of ozone also caused a change in FEV1. A lower level of ozone, 0.02 ppm, caused a change in neither cough threshold nor FEV1, even when the duration of exposure was extended to three hours. The findings are consistent with the author's hypothesis that ozone may sensitize nerve endings in the airways by inactivating neutral endopeptidase, an enzyme that regulates their activity, but they do not demonstrate that directly examining an effect mediated by airway nerves allows detection of ozone effects at doses below those causing changes detectable by standard tests of pulmonary function.
Early headgear effects on the eruption pattern of the maxillary canines.
Silvola, Anna-Sofia; Arvonen, Päivi; Julku, Johanna; Lähdesmäki, Raija; Kantomaa, Tuomo; Pirttiniemi, Pertti
2009-05-01
To test the null hypothesis that early headgear (HG) treatment has no effect on the eruption pattern of the maxillary canines in the early mixed dentition. Sixty-eight children (40 boys and 28 girls) with a Class II tendency in occlusion and moderate crowding of the dental arches were randomized into two groups. HG treatment was initiated immediately in the first group. In the second group only minor interceptive procedures were performed during the first follow-up period of 2 years. Orthopantomograms were taken at the baseline, three times at 1-year intervals, and after growth at the age of 16. Eruption geometry was assessed. The space from the maxillary first molar to the lateral incisor was measured on the dental casts. The inclination of the maxillary canine in relation to the midline appeared to be significantly more vertically oriented on the right side in the HG group 1 and 2 years after starting the HG therapy (P = .0098 and P = .0003, respectively). The inclination in relation to the lateral incisors was smaller in the HG group bilaterally after 1 year and 2 years of HG treatment, and on the right side after 3 years of treatment. The hypothesis is rejected. Early HG treatment significantly affects the inclination of the maxillary canine during eruption. The strongest influence was seen after 2 years of HG use, more prominently in the right-side canine.
Hitting Is Contagious in Baseball: Evidence from Long Hitting Streaks
Bock, Joel R.; Maewal, Akhilesh; Gough, David A.
2012-01-01
Data analysis is used to test the hypothesis that “hitting is contagious”. A statistical model is described to study the effect of a hot hitter upon his teammates’ batting during a consecutive game hitting streak. Box score data for entire seasons comprising long hitting streaks were compiled. Treatment and control sample groups were constructed from core lineups of players on the streaking batter’s team. The percentile method bootstrap was used to calculate confidence intervals for statistics representing differences in the mean distributions of two batting statistics between groups. Batters in the treatment group (hot streak active) showed statistically significant improvements in hitting performance, as compared against the control: both mean batting average and the batting heat index introduced here increased during hot streaks. For each performance statistic, the null hypothesis was rejected at the chosen significance level. We conclude that the evidence suggests the potential existence of a “statistical contagion effect”. Psychological mechanisms essential to the empirical results are suggested, as several studies from the scientific literature lend credence to contagious phenomena in sports. Causal inference from these results is difficult, but we suggest and discuss several latent variables that may contribute to the observed results, and offer possible directions for future research. PMID:23251507
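The percentile-method bootstrap used above can be sketched for a difference in group means; the batting averages below are invented placeholders, not the study's box score data:

```python
import random

# Percentile-method bootstrap CI for the difference in mean batting average
# between a "treatment" (streak active) and a "control" group.
# The averages are invented for illustration.
rng = random.Random(11)
treatment = [0.285, 0.310, 0.265, 0.298, 0.305, 0.272, 0.290, 0.301]
control   = [0.260, 0.275, 0.255, 0.268, 0.280, 0.262, 0.270, 0.258]

def mean(xs):
    return sum(xs) / len(xs)

obs_diff = mean(treatment) - mean(control)

# Resample each group with replacement, recompute the difference, and take
# the 2.5th and 97.5th percentiles of the bootstrap distribution.
boot = sorted(
    mean([rng.choice(treatment) for _ in treatment])
    - mean([rng.choice(control) for _ in control])
    for _ in range(10_000)
)
lo, hi = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]
print(f"observed diff {obs_diff:.3f}, 95% percentile CI ({lo:.3f}, {hi:.3f})")
```

If the resulting interval excludes zero, the null hypothesis of no performance difference is rejected at the 5% level, which is the logic the abstract applies to each batting statistic.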
Smirl, Jonathan D; Haykowsky, Mark J; Nelson, Michael D; Tzeng, Yu-Chieh; Marsden, Katelyn R; Jones, Helen; Ainslie, Philip N
2014-12-01
Heart transplant recipients are at an increased risk for cerebral hemorrhage and ischemic stroke; yet, the exact mechanism for this derangement remains unclear. We hypothesized that alterations in cerebrovascular regulation are principally involved. To test this hypothesis, we studied cerebral pressure-flow dynamics in 8 clinically stable male heart transplant recipients (62±8 years of age and 9±7 years post transplant, mean±SD), 9 male age-matched controls (63±8 years), and 10 male donor controls (27±5 years). To increase blood pressure variability and improve assessment of the pressure-flow dynamics, subjects performed squat-stand maneuvers at 0.05 and 0.10 Hz. Beat-to-beat blood pressure, middle cerebral artery velocity, and end-tidal carbon dioxide were continuously measured during 5 minutes of seated rest and throughout the squat-stand maneuvers. Cardiac baroreceptor sensitivity gain and cerebral pressure-flow responses were assessed with linear transfer function analysis. Heart transplant recipients had reductions in R-R interval power and baroreceptor sensitivity low frequency gain (P<0.01) compared with both control groups; however, these changes were unrelated to transfer function metrics. Thus, in contrast to our hypothesis, the increased risk of cerebrovascular complication after heart transplantation does not seem to be related to alterations in cerebral pressure-flow dynamics. Future research is, therefore, warranted. © 2014 American Heart Association, Inc.
Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie
2013-01-01
Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is not much. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than a single statistic particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
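The minimum p-value idea can be sketched with two candidate statistics and a permutation calibration: compute a permutation p-value for each statistic, take the smaller one, and then compare that minimum against its own permutation distribution so the type I error stays at its nominal level. The data and the two statistics chosen below (mean and median differences) are illustrative, not the paper's examples:

```python
import random

# Min-p permutation test with two candidate statistics. Data are invented.
rng = random.Random(5)
group_a = [4.2, 5.1, 6.3, 4.8, 5.9, 6.1, 5.5, 4.9]
group_b = [3.1, 4.0, 3.8, 4.4, 3.5, 4.1, 3.9, 3.6]

def median(xs):
    s = sorted(xs)
    m = len(s) // 2
    return s[m] if len(s) % 2 else (s[m - 1] + s[m]) / 2

def stats(a, b):
    """Two candidate test statistics: |mean diff| and |median diff|."""
    return (abs(sum(a) / len(a) - sum(b) / len(b)), abs(median(a) - median(b)))

def min_p_test(a, b, n_perm=400):
    pooled, na = a + b, len(a)
    perms = []
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perms.append(stats(pooled[:na], pooled[na:]))

    def min_p(s):  # smaller of the two per-statistic permutation p-values
        return min(sum(p[k] >= s[k] for p in perms) / n_perm for k in (0, 1))

    obs = min_p(stats(a, b))
    # Adjusted p-value: how often a permutation achieves a min-p this small.
    return sum(min_p(p) <= obs for p in perms) / n_perm

print(f"adjusted p = {min_p_test(group_a, group_b):.3f}")
```

Taking the raw minimum of several p-values without this second calibration step would inflate the type I error; referring the minimum back to the permutation distribution is what keeps the test valid.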
Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies
Erdoğan, Semra; Gülhan, Orekıcı Temel
2016-01-01
Background/Aim. It is necessary to decide whether newly developed methods are better than the standard or reference test. To decide whether a new diagnostic test is better than the gold standard (or an imperfect standard) test, the differences in estimated sensitivity/specificity are calculated from sample data. However, to generalize these values to the population, they should be reported with confidence intervals. The aim of this study is to evaluate, in a clinical application, the confidence interval methods developed for differences between two dependent sensitivity/specificity values. Materials and Methods. Confidence interval methods such as Asymptotic Intervals, Conditional Intervals, Unconditional Intervals, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As a clinical application, data from the diagnostic study by Dickel et al. (2010) are taken as a sample. Results. The results of the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are tabulated. Conclusion. When choosing among the confidence interval methods, researchers have to consider whether the comparison involves a single ratio or differences between dependent binary ratios, the correlation coefficient between the rates in the two dependent ratios, and the sample sizes. PMID:27478491
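The simplest of the listed approaches, an asymptotic (Wald) interval for a difference between two sensitivities, can be sketched as below. This version assumes independent samples; the dependent-ratio methods compared in the study additionally require the correlation between the two tests, and the counts here are hypothetical:

```python
import math

# Asymptotic (Wald) 95% CI for the difference between two sensitivities,
# shown for independent samples; counts are hypothetical.
tp_new, pos_new = 88, 100   # new test: true positives / diseased subjects
tp_ref, pos_ref = 79, 100   # reference test

p1, p2 = tp_new / pos_new, tp_ref / pos_ref
diff = p1 - p2
se = math.sqrt(p1 * (1 - p1) / pos_new + p2 * (1 - p2) / pos_ref)
ci = (diff - 1.96 * se, diff + 1.96 * se)
print(f"sensitivity difference {diff:.2f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```

When the two sensitivities come from the same subjects, as in the study, the covariance term this formula omits can shrink the interval considerably, which is precisely why the dependent-ratio methods exist.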
USDA-ARS's Scientific Manuscript database
This study tests the hypothesis that phylogenetic classification can predict whether A. pullulans strains will produce useful levels of the commercial polysaccharide, pullulan, or the valuable enzyme, xylanase. To test this hypothesis, 19 strains of A. pullulans with previously described phenotypes...
Zeyrek, C D; Zeyrek, F; Sevinc, E; Demir, E
2006-01-01
The prevalence of asthma and allergic diseases has been reported to be higher in urban than in rural areas between developed and underdeveloped countries and within any given country. Studies in Turkey have yielded different results for different regions. This study aimed to investigate the prevalence of asthma and atopy in Sanliurfa, Turkey, and the influence of environmental factors. We recruited 1108 children from different areas of Sanliurfa and administered the questionnaire of the International Study of Asthma and Allergies in Childhood. Items asking for socioeconomic data were also included. Skin prick and purified protein derivative tests were performed on the children. Measles antibodies were determined and feces were analyzed for parasites. The total prevalence of atopic diseases was 8.6% (n = 95/1108), asthma 1.9% (n=21/1108), allergic rhinitis 2.9% (n=32/1108), and allergic conjunctivitis 3.8% (n=42/1108). The rate of atopic diseases was 5.6% (n=32/573) in children attending schools in peripheral, less urban, slum areas while it was 11.8% (n=63/535) in those attending city-center schools (OR, 2.2; 95% confidence interval [CI]; 1.4-3.5; P<.001). Skin prick test positivity was observed in 3.9% (n=43/1108) overall; at schools in slum areas it was 1.9% (n=11/573), whereas at central schools the rate was 6% (n=32/535) (OR, 4.08; 95% CI, 2.03-8.20; P<.001). The prevalence of asthma and atopic diseases was significantly higher in children who have a family history of atopy, attend a central school, live in an apartment, have more rooms in their homes, and enjoy better economic conditions. We found associations between various factors suggested by the hygiene hypothesis and asthma, and very low rates of prevalence of asthma and atopic diseases both in Sanliurfa in comparison with the more developed western regions and in the peripheral slum areas. The hygiene hypothesis is helpful in explaining these observations.
In vivo performance of a reduced-modulus bone cement
NASA Astrophysics Data System (ADS)
Forehand, Brett Ramsey
Total joint replacement has become one of the most common procedures in the area of orthopedics and is often the solution in patients with diseased or injured hip joints. Component loosening is a significant problem and is primarily caused by bone resorption at the bone-cement interface in cemented implants. It is our hypothesis that localized shear stresses are responsible for the resorption. It was previously shown analytically that local stresses at the interface could be reduced by using a cement of lower modulus. A new reduced modulus cement, polybutyl methylmethacrylate (PBMMA), was developed to test the hypothesis. PBMMA was formulated to exist as polybutyl methacrylate filler in a polymethyl methacrylate matrix. The success of PBMMA cement is based largely on the fact that the polybutyl component of the cement will be in the rubbery state at body temperature. In vitro characterization of the cement was undertaken previously and demonstrated a modulus of approximately one-eighth that of conventional bone cement, polymethyl methacrylate (PMMA) and increased fracture toughness. The purpose of this experiment was to perform an in vivo comparison of the two cements. A sheep model was selected. Total hip arthroplasty was performed on 50 ewes using either PBMMA or PMMA. Radiographs were taken at 6 month intervals. At one year, the contralateral femur of each sheep was implanted so that each animal served as its own control, and the animals were sacrificed. The stiffness of the bone-cement interface of the femoral component within the femur was assessed by applying a torque to the femoral component and demonstrated a significant difference in loosening between the cements when the specimens were tested in external rotation (p < 0.007). Evaluation of the mechanical data also suggests that the PBMMA sheep had a greater amount of loosening for each subject, 59% versus 4% for standard PMMA. 
A radiographic analysis demonstrated more signs of loosening in the PMMA series of subjects. A brief histological examination showed similar bony reaction to both cements; however, study of the interface membrane could not be accomplished. Reasons for the rejection of the hypothesis are discussed.
Huang, Peng; Ou, Ai-hua; Piantadosi, Steven; Tan, Ming
2014-11-01
We discuss the problem of properly defining treatment superiority through the specification of hypotheses in clinical trials. The need to precisely define the notion of superiority in a one-sided hypothesis test has been well recognized by many authors. Ideally, the null and alternative hypotheses should correspond to a partition of all possible scenarios of underlying true probability models P={P(ω):ω∈Ω}, such that the alternative hypothesis Ha={P(ω):ω∈Ωa} can be inferred upon rejection of the null hypothesis Ho={P(ω):ω∈Ωo}. However, in many cases, tests are carried out and recommendations are made without a precise definition of superiority or a specification of the alternative hypothesis. Moreover, in some applications, the union of probability models specified by the chosen null and alternative hypotheses does not constitute the complete model collection P (i.e., Ho∪Ha is smaller than P). This not only imposes a strong, non-validated assumption about the underlying true models, but also leads to superiority claims that depend on which test is used rather than on scientific plausibility. Different ways to partition P for testing treatment superiority often have different implications for sample size, power, and significance in both efficacy and comparative effectiveness trial design. Such differences are often overlooked. We provide a theoretical framework for evaluating the statistical properties of different specifications of superiority in typical hypothesis testing. This can help investigators select proper hypotheses for treatment comparison in clinical trial design. Copyright © 2014 Elsevier Inc. All rights reserved.
The potential for increased power from combining P-values testing the same hypothesis.
Ganju, Jitendra; Julie Ma, Guoguang
2017-02-01
The conventional approach to hypothesis testing for formal inference is to prespecify a single test statistic thought to be optimal. However, we usually have more than one test statistic in mind for testing the null hypothesis of no treatment effect but we do not know which one is the most powerful. Rather than relying on a single p-value, combining p-values from prespecified multiple test statistics can be used for inference. Combining functions include Fisher's combination test and the minimum p-value. Using randomization-based tests, the increase in power can be remarkable when compared with a single test and Simes's method. The versatility of the method is that it also applies when the number of covariates exceeds the number of observations. The increase in power is large enough to prefer combined p-values over a single p-value. The limitation is that the method does not provide an unbiased estimator of the treatment effect and does not apply to situations when the model includes treatment by covariate interaction.
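Fisher's combination test named above can be sketched directly; for the minimum p-value, a simple Bonferroni adjustment is shown here as a stand-in for the randomization-based reference distribution the paper actually uses. The p-values are hypothetical:

```python
import math

def chi2_sf_even_df(x, df):
    # Survival function of a chi-square with even df (exact closed form):
    # P(X > x) = exp(-x/2) * sum_{i=0}^{df/2 - 1} (x/2)^i / i!
    k = df // 2
    half = x / 2.0
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total

def fisher_combined_p(pvals):
    # Fisher's combination: -2 * sum(log p) ~ chi-square with 2k df under H0.
    stat = -2.0 * sum(math.log(p) for p in pvals)
    return chi2_sf_even_df(stat, 2 * len(pvals))

def min_p_combined(pvals):
    # Minimum p-value with a Bonferroni adjustment (a conservative proxy
    # for the randomization-based calibration described in the paper).
    return min(1.0, len(pvals) * min(pvals))

ps = [0.04, 0.10, 0.30]  # hypothetical p-values from three test statistics
```

Here Fisher's combination yields roughly p ≈ 0.036, smaller than any adjusted individual p-value, illustrating how combining prespecified statistics can gain power.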
Hickman, Matthew; Madden, Peter; Henry, John; Baker, Allan; Wallace, Chris; Wakefield, Jon; Stimson, Gerry; Elliott, Paul
2003-04-01
To test the hypothesis that methadone is responsible for a greater increase in overdose deaths than heroin, and causes proportionally more overdose deaths than heroin at weekends. Multivariate analysis of 3961 death certificates mentioning heroin, morphine and/or methadone held on the Office for National Statistics drug-related poisoning mortality database from 1993 to 1998 in England and Wales. Percentage increase in deaths by year by drug, odds ratio (OR) of dying at the weekend from methadone-related overdose compared to dying from heroin/morphine overdose. From 1993 to 1998, annual opiate overdose deaths increased from 378 to 909. There was a 24.7% (95% confidence interval (CI) 22-28%) yearly increase in heroin deaths compared to 9.4% (95% CI 6-13%) for methadone only. This difference was significant (P < 0.001 by test of interaction) after adjustment for sex, age group, polydrug use, area of residence and underlying cause of death. The largest number of deaths occurred on Saturday (673). The OR of death from methadone overdose on Saturday and Sunday was 1.48 (95% CI 1.29-1.71) for methadone-only deaths compared to dying from heroin/morphine at the weekend after adjustment for other covariates, but the OR was not significant (1.09, 95% CI 0.95-1.25) if the weekend was defined as Friday and Saturday. There was no evidence that the threefold increase in deaths over time was due to methadone. There was equivocal support only for the hypothesis that there was an excess of deaths from methadone at weekends. Increased interventions to prevent overdose among injectors in England and Wales are long overdue.
Nevus density and melanoma risk in women: a pooled analysis to test the divergent pathway hypothesis
Olsen, Catherine M.; Zens, Michael S.; Stukel, Therese A.; Sacerdote, Carlotta; Chang, Yu-mei; Armstrong, Bruce K.; Bataille, Veronique; Berwick, Marianne; Elwood, J. Mark; Holly, Elizabeth A.; Kirkpatrick, Connie; Mack, Thomas; Bishop, Julia Newton; Østerlind, Anne; Swerdlow, Anthony J.; Zanetti, Roberto; Green, Adèle C.; Karagas, Margaret R.; Whiteman, David C
2009-01-01
A “divergent pathway” model for the development of cutaneous melanoma has been proposed. The model hypothesizes that melanomas occurring in people with a low tendency to develop nevi will, on average, arise more commonly on habitually sun-exposed body sites such as the head and neck. In contrast, people with an inherent propensity to develop nevi will tend to develop melanomas most often on body sites with large melanocyte populations, such as on the back. We conducted a collaborative analysis to test this hypothesis using the original data from ten case-control studies of melanoma in women (2406 cases and 3119 controls), with assessment of the potential confounding effects of socioeconomic, pigmentary, and sun exposure-related factors. Higher nevus count on the arm was associated specifically with an increased risk of melanoma of the trunk (p for trend=0.0004) and limbs (both upper and lower limb p for trends=0.01), but not of the head and neck (p for trend=0.25). The pooled odds ratios for the highest quartile of non-zero nevus count versus none were 4.6 (95% confidence interval (CI) 2.7–7.6) for melanoma of the trunk, 2.0 (95% CI 0.9–4.5) for the head and neck, 4.2 (95% CI 2.3–7.5) for the upper limbs and 3.4 (95% CI 1.5–7.9) for the lower limbs. Aggregate data from these studies suggest that high nevus counts are strongly associated with melanoma of the trunk, but less so, if at all, with melanoma of the head and neck. This finding supports different etiologic pathways of melanoma development by anatomic site. PMID:19035450
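Odds ratios with Wald-type 95% confidence intervals, as quoted above, are computed from 2×2 counts as follows. This is a generic sketch with hypothetical counts, not the study's pooled data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald CI from a 2x2 table:
    a: exposed cases, b: unexposed cases,
    c: exposed controls, d: unexposed controls.
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) via the delta method.
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    log_or = math.log(or_)
    return or_, (math.exp(log_or - z * se), math.exp(log_or + z * se))

# Hypothetical counts for illustration only.
or_, (lo, hi) = odds_ratio_ci(a=30, b=70, c=10, d=90)
```

Pooled estimates like those in the abstract additionally weight study-specific log-odds ratios (e.g. by inverse variance), which is not shown here.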
Childhood Illness and the Gender Gap in Adolescent Education in Low- and Middle-Income Countries.
Alsan, Marcella; Xing, Anlu; Wise, Paul; Darmstadt, Gary L; Bendavid, Eran
2017-07-01
Achieving gender equality in education is an important development goal. We tested the hypothesis that the gender gap in adolescent education is accentuated by illnesses among young children in the household. Using Demographic and Health Surveys on 41 821 households in 38 low- and middle-income countries, we used linear regression to estimate the difference in the probability that adolescent girls and boys were in school, and how this gap responded to illness episodes among children <5 years old. To test the hypothesis that investments in child health are related to the gender gap in education, we assessed the relationship between the gender gap and national immunization coverage. In our sample of 120 708 adolescent boys and girls residing in 38 countries, girls were 5.08% less likely to attend school than boys in the absence of a recent illness among young children within the same household (95% confidence interval [CI], 5.50%-4.65%). This gap increased to 7.77% (95% CI, 8.24%-7.30%) and 8.53% (95% CI, 9.32%-7.74%) if the household reported 1 and 2 or more illness episodes, respectively. The gender gap in schooling in response to illness was larger in households with a working mother. Increases in child vaccination rates were associated with a closing of the gender gap in schooling (correlation coefficient = 0.34, P = .02). Illnesses among children strongly predict a widening of the gender gap in education. Investments in early childhood health may have important effects on schooling attainment for adolescent girls. Copyright © 2017 by the American Academy of Pediatrics.
Rhythmic Interlimb Coordination Impairments and the Risk for Developing Mobility Limitations.
James, Eric G; Leveille, Suzanne G; Hausdorff, Jeffrey M; Travison, Thomas; Kennedy, David N; Tucker, Katherine L; Al Snih, Soham; Markides, Kyriakos S; Bean, Jonathan F
2017-08-01
The identification of novel rehabilitative impairments that are risk factors for mobility limitations may improve their prevention and treatment among older adults. We tested the hypothesis that impaired rhythmic interlimb ankle and shoulder coordination are risk factors for subsequent mobility limitations among older adults. We conducted a 1-year prospective cohort study of community-dwelling older adults (N = 99) aged 67 years and older who did not have mobility limitations (Short Physical Performance Battery score > 9) at baseline. Participants performed antiphase coordination of the right and left ankles or shoulders while paced by an auditory metronome. Using multivariable logistic regression, we determined odds ratios (ORs) for mobility limitations at 1-year follow-up as a function of coordination variability and asymmetry. After adjusting for age, sex, body mass index, Mini-Mental State Examination score, number of chronic conditions, and baseline Short Physical Performance Battery score, ORs were significant for developing mobility limitations based on a 1 SD difference in the variability of ankle (OR = 1.88; 95% confidence interval [CI]: 1.16-3.05) and shoulder (OR = 1.96; 95% CI: 1.17-3.29) coordination. ORs were significant for asymmetry of shoulder (OR = 2.11; 95% CI: 1.25-3.57), but not ankle (OR = 0.95; 95% CI: 0.59-1.55) coordination. Similar results were found in unadjusted analyses. The results support our hypothesis that impaired interlimb ankle and shoulder coordination are risk factors for the development of mobility limitations. Future work is needed to further examine the peripheral and central mechanisms underlying this relationship and to test whether enhancing coordination alters mobility limitations. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Spine deviations and orthodontic treatment of asymmetric malocclusions in children
2012-01-01
Background The aim of this randomized clinical trial was to assess the effect of early orthodontic treatment for unilateral posterior cross bite in the late deciduous and early mixed dentition using orthopedic parameters. Methods Early orthodontic treatment was performed by initial maxillary expansion and subsequent activator therapy (Münster treatment concept). The patient sample was initially comprised of 80 patients with unilateral posterior cross bite (mean age 7.3 years, SD 2.1 years). After randomization, 77 children attended the initial examination appointment (therapy = 37, control = 40); 31 children in the therapy group and 35 children in the control group were monitored at the follow-up examination (T2). The mean interval between T1 and T2 was 1.1 years (SD 0.2 years). Rasterstereography was used for back shape analysis at T1 and T2. Using the profile, the kyphotic and lordotic angle, the surface rotation, the lateral deviation, pelvic tilt and pelvic torsion, statistical differences at T1 and T2 between the therapy and control groups were calculated (t-test). Our working hypothesis was that early orthodontic treatment can induce negative therapeutic changes in body posture, through thoracic and lumbar position changes, in preadolescents with unilateral cross bite. Results No clinically relevant differences between the control and the therapy groups at T1 and T2 were found for the parameters of kyphotic and lordotic angle, the surface rotation, lateral deviation, pelvic tilt, and pelvic torsion. Conclusions Within the limitations of this study, our working hypothesis was not confirmed. This randomized clinical trial demonstrates that in a juvenile population with unilateral posterior cross bite the selected early orthodontic treatment protocol does not negatively affect the postural parameters. Trial registration DRKS00003497 on DRKS PMID:22906114
A test of the orthographic recoding hypothesis
NASA Astrophysics Data System (ADS)
Gaygen, Daniel E.
2003-04-01
The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of the words. Listeners have a stable orthographic representation of words, but no phonological representation, when those words have been read frequently but never heard or spoken. Such may be the case for low frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for words presented visually. In the first experiment, the recognition of auditorily presented nonwords was facilitated when they previously appeared on a visually presented list. The second experiment was similar, but included a concurrent articulation task during a visual word list presentation, thus preventing covert rehearsal of the nonwords. The results were similar to the first experiment. The third experiment was an indirect test of memory (auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.
Bayesian analysis of multimethod ego-depletion studies favours the null hypothesis.
Etherton, Joseph L; Osborne, Randall; Stephenson, Katelyn; Grace, Morgan; Jones, Chas; De Nadai, Alessandro S
2018-04-01
Ego-depletion refers to the purported decrease in performance on a task requiring self-control after engaging in a previous task involving self-control, with self-control proposed to be a limited resource. Despite many published studies consistent with this hypothesis, recurrent null findings within our laboratory and indications of publication bias have called into question the validity of the depletion effect. This project used three depletion protocols, involving three different depleting initial tasks followed by three different self-control tasks as dependent measures (total n = 840). For each method, effect sizes were not significantly different from zero. When data were aggregated across the three different methods and examined meta-analytically, the pooled effect size was not significantly different from zero (for all priors evaluated, Hedges' g = 0.10 with 95% credibility interval of [-0.05, 0.24]) and Bayes factors reflected strong support for the null hypothesis (Bayes factor > 25 for all priors evaluated). © 2018 The British Psychological Society.
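The Hedges' g reported above is Cohen's d corrected for small-sample bias. A minimal sketch with hypothetical group summaries (the study's raw data are not reproduced here):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g for two independent groups (mean, SD, n per group)."""
    df = n1 + n2 - 2
    # Pooled standard deviation across the two groups.
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    d = (m1 - m2) / s_pooled       # Cohen's d
    j = 1 - 3 / (4 * df - 1)      # small-sample correction factor
    return j * d

# Hypothetical summaries for illustration only.
g = hedges_g(m1=10.5, s1=2.0, n1=50, m2=10.0, s2=2.0, n2=50)
```

The correction factor j shrinks d slightly toward zero; with moderate samples, as here, the adjustment is small.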
The Extended Contact Hypothesis: A Meta-Analysis on 20 Years of Research.
Zhou, Shelly; Page-Gould, Elizabeth; Aron, Arthur; Moyer, Anne; Hewstone, Miles
2018-04-01
According to the extended contact hypothesis, knowing that in-group members have cross-group friends improves attitudes toward this out-group. This meta-analysis covers the 20 years of research that currently exists on the extended contact hypothesis, and consists of 248 effect sizes from 115 studies. The aggregate relationship between extended contact and intergroup attitudes was r = .25, 95% confidence interval (CI) = [.22, .27], which reduced to r = .17, 95% CI = [.14, .19] after removing direct friendship's contribution; these results suggest that extended contact's hypothesized relationship to intergroup attitudes is small-to-medium and exists independently of direct friendship. This relationship was larger when extended contact was perceived versus actual, highlighting the importance of perception in extended contact. Current results on extended contact mostly resembled their direct friendship counterparts, suggesting similarity between these contact types. These unique insights about extended contact and its relationship with direct friendship should enrich and spur growth within this literature.
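A correlation such as the r = .25 above is conventionally interval-estimated via the Fisher z-transform. The sketch below uses a hypothetical sample size; the meta-analysis itself pools effect sizes with study weights not reproduced here:

```python
import math

def fisher_z_ci(r, n, z_crit=1.96):
    """95% CI for a correlation via the Fisher z-transform."""
    # atanh(r) is approximately normal with SE = 1/sqrt(n - 3),
    # which stabilizes the variance of r.
    z = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    # Back-transform the endpoints with tanh.
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# Hypothetical n for illustration only.
lo, hi = fisher_z_ci(r=0.25, n=200)
```

Note that the interval is asymmetric around r after back-transformation, unlike the raw Wald interval on r itself.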
Hamilton, Maryellen; Geraci, Lisa
2006-01-01
According to leading theories, the picture superiority effect is driven by conceptual processing, yet this effect has been difficult to obtain using conceptual implicit memory tests. We hypothesized that the picture superiority effect results from conceptual processing of a picture's distinctive features rather than a picture's semantic features. To test this hypothesis, we used 2 conceptual implicit general knowledge tests; one cued conceptually distinctive features (e.g., "What animal has large eyes?") and the other cued semantic features (e.g., "What animal is the figurehead of Tootsie Roll?"). Results showed a picture superiority effect only on the conceptual test using distinctive cues, supporting our hypothesis that this effect is mediated by conceptual processing of a picture's distinctive features.
Hypothesis testing for band size detection of high-dimensional banded precision matrices.
An, Baiguo; Guo, Jianhua; Liu, Yufeng
2014-06-01
Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, crossvalidation is commonly used; however, we show that crossvalidation not only is computationally intensive but can be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.
Why do mothers favor girls and fathers, boys? A hypothesis and a test of investment disparity.
Godoy, Ricardo; Reyes-García, Victoria; McDade, Thomas; Tanner, Susan; Leonard, William R; Huanca, Tomás; Vadez, Vincent; Patel, Karishma
2006-06-01
Growing evidence suggests mothers invest more in girls than boys and fathers more in boys than girls. We develop a hypothesis that predicts preference for girls by the parent facing more resource constraints and preference for boys by the parent facing less constraint. We test the hypothesis with panel data from the Tsimane', a foraging-farming society in the Bolivian Amazon. Tsimane' mothers face more resource constraints than fathers. As predicted, mother's wealth protected girl's BMI, but father's wealth had weak effects on boy's BMI. Numerous tests yielded robust results, including those that controlled for fixed effects of child and household.
Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf
2015-03-01
We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: They inform the researcher about the probability of statistically significant and non-significant test outcomes actually being true. Although commonly misunderstood, a reported p value estimates only the probability of obtaining the results or more extreme results if the null hypothesis of no effect was true. Calculations of the more informative PPV and NPV require an a priori estimate of the probability (R). The present document discusses challenges of estimating R.
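The PPV and NPV of a significance test follow from the significance level, the power, and an a priori probability that the tested effect is real. A sketch under that framing; note the paper's R may be defined as prior odds rather than a prior probability, and the values used here are purely illustrative:

```python
def ppv_npv(prior, power, alpha):
    """Probability that a significant (PPV) or non-significant (NPV)
    test outcome is actually true, given the prior probability that
    the tested effect is real."""
    # Bayes' rule applied to the four outcomes of a significance test.
    ppv = power * prior / (power * prior + alpha * (1 - prior))
    npv = ((1 - alpha) * (1 - prior) /
           ((1 - alpha) * (1 - prior) + (1 - power) * prior))
    return ppv, npv

# Illustrative values: 50% prior, 80% power, 5% significance level.
ppv, npv = ppv_npv(prior=0.5, power=0.8, alpha=0.05)
```

The sensitivity of both quantities to the prior is exactly why the paper emphasizes the difficulty of estimating R.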
Reinders, Jörn; Sonntag, Robert; Kretzer, Jan Philippe
2014-11-01
Polyethylene (PE) wear is known to be a limiting factor in total joint replacements. However, a standardized wear test (e.g. ISO standard) can only replicate the complex in vivo loading condition in a simplified form. In this study, two different parameters were analyzed: (a) Bovine serum, as a substitute for synovial fluid, is typically replaced every 500,000 cycles. However, a continuous regeneration takes place in vivo. How does the serum-replacement interval affect the wear rate of total knee replacements? (b) Patients with an artificial joint show reduced gait frequencies compared to standardized testing. What is the influence of a reduced frequency? Three knee wear tests were run: (a) reference test (ISO), (b) testing with a shortened lubricant replacement interval, (c) testing with reduced frequency. The wear behavior was determined based on gravimetric measurements and wear particle analysis. The results showed that the reduced test frequency had only a small effect on wear behavior. Testing with 1 Hz frequency is therefore a valid method for wear testing. However, testing with a shortened replacement interval nearly doubled the wear rate. Wear particle analysis revealed only small differences in wear particle size between the different tests. Wear particles were not linearly released within one replacement interval. The ISO standard should be revised to address the marked effects of lubricant replacement interval on wear rate.