GeneTools--application for functional annotation and statistical hypothesis testing.
Beisvag, Vidar; Jünge, Frode K R; Bergum, Hallgeir; Jølsum, Lars; Lydersen, Stian; Günther, Clara-Cecilie; Ramampiaro, Heri; Langaas, Mette; Sandvik, Arne K; Laegreid, Astrid
2006-10-24
Modern biology has shifted from "one gene" approaches to methods for genomic-scale analysis like microarray technology, which allow simultaneous measurement of thousands of genes. This has created a need for tools facilitating interpretation of biological data in "batch" mode. However, such tools often leave the investigator with large volumes of apparently unorganized information. To meet this interpretation challenge, gene-set (or cluster) testing has become a popular analytical tool. Many gene-set testing methods and software packages are now available, most of which use a variety of statistical tests to assess the genes in a set for biological information. However, the field is still evolving, and there is a great need for "integrated" solutions. GeneTools is a web service providing access to a database that brings together information from a broad range of resources. The annotation data are updated weekly, ensuring that users get the most recent data available. Data submitted by the user are stored in the database, where they can easily be updated, shared between users and exported in various formats. GeneTools provides three different tools: i) the NMC Annotation Tool, which offers annotations from several databases like UniGene, Entrez Gene, SwissProt and Gene Ontology, in both single- and batch-search mode; ii) the GO Annotator Tool, with which users can add new Gene Ontology (GO) annotations to genes of interest. These user-defined GO annotations can be used in further analysis or exported for public distribution; iii) eGOn, a tool for visualization and statistical hypothesis testing of GO category representation. eGOn is the first GO tool to support hypothesis testing for three different situations (the master-target situation, the mutually exclusive target-target situation and the intersecting target-target situation). An important additional function is an evidence-code filter that allows users to select the GO annotations used in the analysis. GeneTools is the first "all in one" annotation tool, providing users with rapid extraction of highly relevant gene annotation data for, e.g., thousands of genes or clones at once. It allows a user to define and archive new GO annotations, and it supports hypothesis testing related to GO category representation. GeneTools is freely available at www.genetools.no.
Knowledge dimensions in hypothesis test problems
NASA Astrophysics Data System (ADS)
Krishnan, Saras; Idris, Noraini
2012-05-01
The reform of statistics education over the past two decades has predominantly shifted the focus of statistical teaching and learning from procedural understanding to conceptual understanding. The emphasis of procedural understanding is on formulas and calculation procedures. Conceptual understanding, by contrast, emphasizes students knowing why they are using a particular formula or executing a specific procedure. In addition, the Revised Bloom's Taxonomy offers a two-dimensional framework for describing learning objectives, comprising the six revised cognition levels of the original Bloom's taxonomy and four knowledge dimensions. Depending on the level of complexity, the four knowledge dimensions essentially distinguish basic understanding from more connected understanding. This study identifies the factual, procedural and conceptual knowledge dimensions in hypothesis test problems. Hypothesis testing, an important tool for making inferences about a population from sample information, is taught in many introductory statistics courses. However, researchers find that students in these courses still have difficulty understanding the underlying concepts of hypothesis testing. Past studies also show that even though students can perform the hypothesis testing procedure, they may not understand the rationale for executing these steps or know how to apply them in novel contexts. Besides knowing the procedural steps in conducting a hypothesis test, students must have fundamental statistical knowledge and a deep understanding of the underlying inferential concepts, such as the sampling distribution and the central limit theorem. By identifying the knowledge dimensions of hypothesis test problems, suitable instructional and assessment strategies can be developed in future to enhance students' learning of hypothesis testing as a valuable inferential tool.
Microarray, proteomic, and metabonomic technologies are becoming increasingly accessible as tools for ecotoxicology research. Effective use of these technologies will depend, at least in part, on the ability to apply these techniques within a paradigm of hypothesis-driven research…
Test of association: which one is the most appropriate for my study?
Gonzalez-Chica, David Alejandro; Bastos, João Luiz; Duquia, Rodrigo Pereira; Bonamigo, Renan Rangel; Martínez-Mesa, Jeovany
2015-01-01
Hypothesis tests are statistical tools widely used for assessing whether or not there is an association between two or more variables. These tests provide a probability of the type 1 error (p-value), which is used to accept or reject the null study hypothesis. This paper provides a practical guide to help researchers carefully select the most appropriate procedure to answer their research question. We discuss the logic of hypothesis testing and present the prerequisites of each procedure based on practical examples.
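As a concrete illustration of the kind of prerequisite check such a guide formalizes, the following sketch (ours, not the authors') applies the usual rule of thumb that the chi-square test of association needs adequate expected cell counts, falling back to Fisher's exact test for a 2x2 table otherwise:

```python
# A minimal sketch (ours, not the authors') of one common prerequisite check
# when testing association between two categorical variables.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def association_test(table):
    """Choose chi-square or Fisher's exact test for a 2x2 contingency table."""
    table = np.asarray(table)
    chi2, p, dof, expected = chi2_contingency(table)
    if expected.min() >= 5:           # rule-of-thumb prerequisite for chi-square
        return "chi-square", p
    _, p_exact = fisher_exact(table)  # small expected counts: exact test (2x2 only)
    return "fisher-exact", p_exact

print(association_test([[12, 5], [3, 20]]))
```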
Bayesian models based on test statistics for multiple hypothesis testing problems.
Ji, Yuan; Lu, Yiling; Mills, Gordon B
2008-04-01
We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
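A toy version of the modeling idea described above, under assumptions of our own (z-statistics, a N(0,1) null component, and a single normal alternative component fitted by EM), with a Bayesian FDR cutoff applied to the posterior null probabilities:

```python
# Toy two-component mixture fitted to z-statistics by EM, with a Bayesian FDR
# cutoff on posterior null probabilities. Assumptions are ours: null ~ N(0,1),
# a single normal alternative component.
import numpy as np
from scipy.stats import norm

def fit_mixture(z, n_iter=200):
    pi0, mu, s = 0.9, 2.0, 1.0                 # crude starting values
    for _ in range(n_iter):
        f0 = pi0 * norm.pdf(z, 0.0, 1.0)
        f1 = (1.0 - pi0) * norm.pdf(z, mu, s)
        r1 = f1 / (f0 + f1)                    # P(alternative | z)
        pi0 = 1.0 - r1.mean()
        mu = np.sum(r1 * z) / r1.sum()
        s = np.sqrt(np.sum(r1 * (z - mu) ** 2) / r1.sum())
    return pi0, mu, s, 1.0 - r1                # posterior null probabilities

def bayesian_fdr_reject(post_null, alpha=0.05):
    """Reject hypotheses while the running mean of sorted posterior null
    probabilities (the estimated FDR of the rejection set) stays below alpha."""
    order = np.argsort(post_null)
    running = np.cumsum(post_null[order]) / np.arange(1, len(post_null) + 1)
    k = int(np.sum(running <= alpha))
    reject = np.zeros(len(post_null), dtype=bool)
    reject[order[:k]] = True
    return reject

rng = np.random.default_rng(1)
z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])
pi0, mu, s, post_null = fit_mixture(z)
print(f"pi0 ~ {pi0:.2f}, mu ~ {mu:.2f}, rejections: {bayesian_fdr_reject(post_null).sum()}")
```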
Digging up food: excavation stone tool use by wild capuchin monkeys.
Falótico, Tiago; Siqueira, José O; Ottoni, Eduardo B
2017-07-24
Capuchin monkeys at Serra da Capivara National Park (SCNP) usually forage on the ground for roots and fossorial arthropods, digging primarily with their hands but also using stone tools to loosen the soil and aid the digging process. Here we describe the stone tools used for digging by two groups of capuchins at SCNP. Both groups used tools while digging for three main food resources: Thiloa glaucocarpa tubers, Ocotea sp roots, and trapdoor spiders. One explanation for the occurrence of tool use in primates is the "necessity hypothesis", which states that the main function of tool use is to obtain fallback food. We tested for this, but found only a positive correlation between plant food availability and the frequency of stone tool use. Thus, our data do not support the fallback food hypothesis for the use of tools to access burrowed resources.
An omnibus test for the global null hypothesis.
Futschik, Andreas; Taus, Thomas; Zehetmayer, Sonja
2018-01-01
Global hypothesis tests are a useful tool in the context of clinical trials, genetic studies, or meta-analyses, when researchers are not interested in testing individual hypotheses but in testing whether none of the hypotheses is false. There are several possibilities for testing the global null hypothesis when the individual null hypotheses are independent. If it is assumed that many of the individual null hypotheses are false, combination tests have been recommended to maximize power. If, however, it is assumed that only one or a few null hypotheses are false, global tests based on individual test statistics are more powerful (e.g. the Bonferroni or Simes test). However, there is usually no a priori knowledge of the number of false individual null hypotheses. We therefore propose an omnibus test based on cumulative sums of the transformed p-values. We show that this test yields an impressive overall performance. The proposed method is implemented in an R package called omnibus.
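The omnibus package's exact statistic may differ, but the general construction, scanning standardized cumulative sums of probit-transformed, ordered p-values and calibrating the scan by Monte Carlo simulation under the global null, can be sketched as follows:

```python
# Our reading of the cumulative-sum idea: scan standardized partial sums of
# probit-transformed, ordered p-values; calibrate by Monte Carlo under the
# global null. The omnibus R package's exact statistic may differ.
import numpy as np
from scipy.stats import norm

def cusum_stat(pvals):
    q = np.sort(norm.ppf(pvals))               # most significant first
    k = np.arange(1, len(q) + 1)
    return np.min(np.cumsum(q) / np.sqrt(k))   # standardized partial sums

def omnibus_test(pvals, n_sim=5000, seed=0):
    rng = np.random.default_rng(seed)
    obs = cusum_stat(pvals)
    null = np.array([cusum_stat(rng.uniform(size=len(pvals)))
                     for _ in range(n_sim)])
    return float(np.mean(null <= obs))         # Monte Carlo p-value

rng = np.random.default_rng(42)
p = rng.uniform(size=20)
p[0] = 1e-4                                    # one false individual null
print(omnibus_test(p))
```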
Tool-Use and the Left Hemisphere: What Is Lost in Ideomotor Apraxia?
ERIC Educational Resources Information Center
Sunderland, Alan; Wilkins, Leigh; Dineen, Rob; Dawson, Sophie E.
2013-01-01
Impaired tool related action in ideomotor apraxia is normally ascribed to loss of sensorimotor memories for habitual actions (engrams), but this account has not been tested against a hypothesis of a general deficit in representation of hand-object spatial relationships. Rapid reaching for familiar tools was compared with reaching for abstract…
P value and the theory of hypothesis testing: an explanation for new researchers.
Biau, David Jean; Jolles, Brigitte M; Porcher, Raphaël
2010-03-01
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the difference observed, or one that is more extreme, given that the null is true. Another concern is the risk that a substantial proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
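To make the distinction concrete, here is a small numerical illustration (ours) of the two frameworks for a one-sided one-sample z-test, with Fisher's p-value reported as a graded measure of evidence and the Neyman-Pearson decision made against a pre-specified critical region:

```python
# Contrasting Fisher's p-value with a Neyman-Pearson fixed-level decision for a
# one-sided one-sample z-test (illustrative numbers; sigma assumed known = 1).
import numpy as np
from scipy.stats import norm

x = np.array([0.8, 1.5, 0.2, 1.1, 0.9, 1.6, 0.4, 1.2])   # toy data
z = x.mean() / (1.0 / np.sqrt(len(x)))                    # H0: mu = 0 vs H1: mu > 0

# Fisher: report the p-value as a graded measure of evidence against H0.
p_value = 1 - norm.cdf(z)

# Neyman-Pearson: a pre-chosen alpha fixes the critical region; report a decision.
alpha = 0.05
z_crit = norm.ppf(1 - alpha)
decision = "reject H0" if z > z_crit else "do not reject H0"

print(f"z = {z:.2f}, p = {p_value:.4f}, critical value = {z_crit:.2f} -> {decision}")
```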
The Multisensory Attentional Consequences of Tool Use: A Functional Magnetic Resonance Imaging Study
Holmes, Nicholas P.; Spence, Charles; Hansen, Peter C.; Mackay, Clare E.; Calvert, Gemma A.
2008-01-01
Background: Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used. Methodology/Principal Findings: We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position. Conclusions/Significance: These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use.
The adaptive value of tool-aided defense against wild animal attacks.
Crabb, Peter B; Elizaga, Andrew
2008-01-01
Throughout history humans have faced the persistent threat of attacks by wild animals, and how humans respond to this problem can make the difference between survival and death. In theory, the use of tools to fend off animal attacks should be more effective than resisting bare-handed, yet evidence for the advantage of tool-aided defense is scarce and equivocal. Two studies of news accounts of wild animal attacks against humans were conducted to test the hypothesis that tool-aided defense is indeed associated with reductions in injuries and deaths. Results of both Study 1 (N=172) and Study 2 (N=370) supported the hypothesis. The observed survival advantage of tool-aided defense for modern humans suggests that this tactic would also have worked for human ancestors who lived in closer proximity to dangerous wild animals.
Vanbellingen, Tim; Schumacher, Rahel; Eggenberger, Noëmi; Hopfner, Simone; Cazzoli, Dario; Preisig, Basil C; Bertschi, Manuel; Nyffeler, Thomas; Gutbrod, Klemens; Bassetti, Claudio L; Bohlhalter, Stephan; Müri, René M
2015-05-01
According to the direct matching hypothesis, perceived movements automatically activate existing motor components through matching of the perceived gesture and its execution. The aim of the present study was to test the direct matching hypothesis by assessing whether visual exploration behavior correlates with deficits in gestural imitation in left hemisphere damaged (LHD) patients. Eighteen LHD patients and twenty healthy control subjects took part in the study. Gesture imitation performance was measured by the test for upper limb apraxia (TULIA). Visual exploration behavior was measured by an infrared eye-tracking system. Short videos including forty gestures (20 meaningless and 20 communicative gestures) were presented. Cumulative fixation duration was measured in different regions of interest (ROIs), namely the face, the gesturing hand, the body, and the surrounding environment. Compared to healthy subjects, patients fixated significantly less on the ROIs comprising the face and the gesturing hand during the exploration of emblematic and tool-related gestures. Moreover, visual exploration of tool-related gestures significantly correlated with tool-related imitation as measured by TULIA in LHD patients. Patients and controls did not differ in the visual exploration of meaningless gestures, and no significant relationships were found between visual exploration behavior and the imitation of emblematic and meaningless gestures in TULIA. The present study thus suggests that altered visual exploration may lead to disturbed imitation of tool-related gestures, but not of emblematic and meaningless gestures. Consequently, our findings partially support the direct matching hypothesis.
Reboiro-Jato, Miguel; Arrais, Joel P; Oliveira, José Luis; Fdez-Riverola, Florentino
2014-01-30
The diagnosis and prognosis of several diseases can be shortened through the use of different large-scale genome experiments. In this context, microarrays can generate expression data for a huge set of genes. However, to obtain solid statistical evidence from the resulting data, it is necessary to train and to validate many classification techniques in order to find the best discriminative method. This is a time-consuming process that normally depends on intricate statistical tools. geneCommittee is a web-based interactive tool for routinely evaluating the discriminative classification power of custom hypotheses in the form of biologically relevant gene sets. While the user can work with different gene set collections and several microarray data files to configure specific classification experiments, the tool is able to run several tests in parallel. Provided with a straightforward and intuitive interface, geneCommittee is able to render valuable information for diagnostic analyses and clinical management decisions based on systematically evaluating custom hypotheses over different data sets using complementary classifiers, a key aspect in clinical research. geneCommittee allows the enrichment of raw microarray data with gene functional annotations, producing integrated datasets that simplify the construction of better discriminative hypotheses, and allows the creation of a set of complementary classifiers. The trained committees can then be used for clinical research and diagnosis. Full documentation including common use cases and guided analysis workflows is freely available at http://sing.ei.uvigo.es/GC/.
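geneCommittee itself is a web tool; as a rough sketch of the underlying idea, evaluating a gene-set hypothesis amounts to cross-validating several complementary classifiers restricted to that set's columns of the expression matrix. The gene sets and data below are synthetic stand-ins:

```python
# Rough sketch of evaluating gene-set hypotheses by cross-validating
# complementary classifiers restricted to each set's genes. Data and gene sets
# are synthetic stand-ins, not geneCommittee's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))                 # 60 samples x 200 genes
y = rng.integers(0, 2, size=60)                # two diagnostic classes
X[y == 1, :5] += 1.5                           # genes 0-4 carry real signal

gene_sets = {"pathway_A": [0, 1, 2, 3, 4],     # hypothetical gene sets
             "pathway_B": [50, 60, 70, 80]}
committee = {"logreg": LogisticRegression(max_iter=1000),
             "knn": KNeighborsClassifier()}

for set_name, genes in gene_sets.items():
    for clf_name, clf in committee.items():
        acc = cross_val_score(clf, X[:, genes], y, cv=5).mean()
        print(f"{set_name:10s} {clf_name:7s} CV accuracy = {acc:.2f}")
```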
1996-09-01
Generalized Likelihood Ratio (GLR) and voting techniques. The third class consisted of multiple hypothesis filter detectors, specifically the MMAE. The...vector version, versus a tensor if we use the matrix version of the power spectral density estimate. Using this notation, we will derive an...as MATLAB, have an intrinsic sample covariance computation available, which makes this method quite easy to implement. In practice, the mean for the
Testing the Münch hypothesis of long distance phloem transport in plants.
Knoblauch, Michael; Knoblauch, Jan; Mullendore, Daniel L; Savage, Jessica A; Babst, Benjamin A; Beecher, Sierra D; Dodgen, Adam C; Jensen, Kaare H; Holbrook, N Michele
2016-06-02
Long distance transport in plants occurs in sieve tubes of the phloem. The pressure flow hypothesis introduced by Ernst Münch in 1930 describes a mechanism of osmotically generated pressure differentials that are supposed to drive the movement of sugars and other solutes in the phloem, but this hypothesis has long faced major challenges. The key issue is whether the conductance of sieve tubes, including sieve plate pores, is sufficient to allow pressure flow. We show that with increasing distance between source and sink, sieve tube conductivity and turgor increase dramatically in Ipomoea nil. Our results provide strong support for the Münch hypothesis, while providing new tools for the investigation of one of the least understood plant tissues.
Estuarine modeling: Does a higher grid resolution improve model performance?
Ecological models are useful tools to explore cause-effect relationships, test hypotheses and perform management scenarios. A mathematical model, the Gulf of Mexico Dissolved Oxygen Model (GoMDOM), has been developed and applied to the Louisiana continental shelf of the northern ...
Mangrulkar, Rajesh S.; Watt, John M.; Chapman, Chris M.; Judge, Richard D.; Stern, David T.
2001-01-01
In order to test the hypothesis that self study with a CD-ROM based cardiac auscultation tool would enhance knowledge and skills, we conducted a controlled trial of internal medicine residents and evaluated their performance on a test before and after exposure to the tool. Both intervention and control groups improved their auscultation knowledge and skills scores. However, subjects in the CD-ROM group had significantly higher improvements in skills, knowledge, and total scores than those not exposed to the intervention (all p<0.001). Therefore, protected time for internal medicine residents to use this multimedia computer program enhanced both facets of cardiac auscultation.
Modular, Semantics-Based Composition of Biosimulation Models
ERIC Educational Resources Information Center
Neal, Maxwell Lewis
2010-01-01
Biosimulation models are valuable, versatile tools used for hypothesis generation and testing, codification of biological theory, education, and patient-specific modeling. Driven by recent advances in computational power and the accumulation of systems-level experimental data, modelers today are creating models with an unprecedented level of…
Technical intelligence and culture: Nut cracking in humans and chimpanzees.
Boesch, Christophe; Bombjaková, Daša; Boyette, Adam; Meier, Amelia
2017-06-01
According to the technical intelligence hypothesis, humans are superior to all other animal species in understanding and using tools. However, the vast majority of comparative studies between humans and chimpanzees, both proficient tool users, have not controlled for the effects of age, prior knowledge, past experience, rearing conditions, or differences in experimental procedures. We tested whether humans are superior to chimpanzees in selecting better tools, using them more dexterously, achieving higher performance, and gaining access to more resources, as predicted under the technical intelligence hypothesis. Aka and Mbendjele hunter-gatherers in the rainforests of the Central African Republic and the Republic of Congo, respectively, and Taï chimpanzees in the rainforest of Côte d'Ivoire were observed cracking hard Panda oleosa nuts with different tools, as well as the soft Coula edulis and Elaeis guineensis nuts. The nut-cracking techniques, hammer material selection and two efficiency measures were compared. As predicted, the Aka and the Mbendjele were able to exploit more species of hard nuts in the forest than the chimpanzees. However, the chimpanzees were sometimes more efficient than the humans. Social roles differed between the two species, with the Aka and especially the Mbendjele exhibiting cooperation between nut-crackers, whereas the chimpanzees were mainly individualistic. Observations of nut cracking by humans and chimpanzees thus only partially supported the technical intelligence hypothesis, as higher degrees of flexibility in tool selection seen in chimpanzees compensated for the use of less efficient tool material than in humans. Nut cracking was a stronger social undertaking in humans than in chimpanzees.
Suner, Aslı; Karakülah, Gökhan; Dicle, Oğuz
2014-01-01
Statistical hypothesis testing is an essential component of biological and medical studies for making inferences and estimations from the collected data; however, misuse of statistical tests is widespread. To prevent errors in selecting a suitable statistical test, researchers can currently consult test selection algorithms developed for various purposes. However, the lack of an algorithm presenting the most common statistical tests used in biomedical research in a single flowchart causes several problems, such as shifting users among algorithms, poor decision support in test selection, and lack of satisfaction among potential users. Herein, we present a unified flowchart covering the most commonly used statistical tests in the biomedical domain, to provide decision aid to non-statistician users in choosing the appropriate statistical test for their hypotheses. We also discuss some of the findings made while integrating the flowcharts into a single but more comprehensive decision algorithm.
Krefeld-Schwalb, Antonia; Witte, Erich H.; Zenker, Frank
2018-01-01
In psychology as elsewhere, the main statistical inference strategy to establish empirical effects is null-hypothesis significance testing (NHST). The recent failure to replicate allegedly well-established NHST-results, however, implies that such results lack sufficient statistical power, and thus feature unacceptably high error-rates. Using data-simulation to estimate the error-rates of NHST-results, we advocate the research program strategy (RPS) as a superior methodology. RPS integrates Frequentist with Bayesian inference elements, and leads from a preliminary discovery against a (random) H0-hypothesis to a statistical H1-verification. Not only do RPS-results feature significantly lower error-rates than NHST-results, RPS also addresses key-deficits of a “pure” Frequentist and a standard Bayesian approach. In particular, RPS aggregates underpowered results safely. RPS therefore provides a tool to regain the trust the discipline had lost during the ongoing replicability-crisis.
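The data-simulation argument can be reproduced in a few lines; the toy sketch below (ours, not the authors' code) estimates how often an underpowered two-sample t-test at alpha = 0.05 comes out significant under a modest true effect versus under the null:

```python
# Toy reproduction of the data-simulation argument: significance rates of an
# underpowered two-sample t-test under a modest effect and under the null.
import numpy as np
from scipy.stats import ttest_ind

def sig_rate(effect, n=15, alpha=0.05, n_sim=10000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect, 1.0, n)
        hits += ttest_ind(a, b).pvalue < alpha
    return hits / n_sim

power = sig_rate(effect=0.4)   # true effect d = 0.4, n = 15 per group
fp = sig_rate(effect=0.0)      # type I error rate under H0
print(f"power ~ {power:.2f}, false-positive rate ~ {fp:.2f}")
# With power this low, significant results are unreliable evidence on their own.
```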
Invited Commentary: Can Issues With Reproducibility in Science Be Blamed on Hypothesis Testing?
Weinberg, Clarice R.
2017-01-01
In the accompanying article (Am J Epidemiol. 2017;186(6):646–647), Dr. Timothy Lash makes a forceful case that the problems with reproducibility in science stem from our “culture” of null hypothesis significance testing. He notes that when attention is selectively given to statistically significant findings, the estimated effects will be systematically biased away from the null. Here I revisit the recent history of genetic epidemiology and argue for retaining statistical testing as an important part of the tool kit. Particularly when many factors are considered in an agnostic way, in what Lash calls “innovative” research, investigators need a selection strategy to identify which findings are most likely to be genuine, and hence worthy of further study.
Application of Transformations in Parametric Inference
ERIC Educational Resources Information Center
Brownstein, Naomi; Pensky, Marianna
2008-01-01
The objective of the present paper is to provide a simple approach to statistical inference using the method of transformations of variables. We demonstrate performance of this powerful tool on examples of constructions of various estimation procedures, hypothesis testing, Bayes analysis and statistical inference for the stress-strength systems.…
Staitieh, Bashar S; Saghafi, Ramin; Kempker, Jordan A; Schulman, David A
2016-04-01
Rationale: Hypothesis-driven physical examination emphasizes the role of bedside examination in the refinement of differential diagnoses and improves diagnostic acumen. This approach has not yet been investigated as a tool to improve the ability of higher-level trainees to teach medical students. Objectives: To assess the effect of teaching hypothesis-driven physical diagnosis to pulmonary fellows on their ability to improve the pulmonary examination skills of first-year medical students. Methods: Fellows and students were assessed on teaching and diagnostic skills by self-rating on a Likert scale. One group of fellows received the hypothesis-driven teaching curriculum (the "intervention" group) and another received instruction on head-to-toe examination. Both groups subsequently taught physical diagnosis to a group of first-year medical students. An oral examination was administered to all students after completion of the course. Measurements and Main Results: Fellows were comfortable teaching physical diagnosis to students. Students in both groups reported a lack of comfort with the pulmonary examination at the beginning of the course and improvement in their comfort by the end. Students trained by intervention group fellows outperformed students trained by control group fellows in the interpretation of physical findings (P < 0.05). Conclusions: Teaching hypothesis-driven physical examination to higher-level trainees who teach medical students improves the ability of students to interpret physical findings. This benefit should be confirmed using validated testing tools.
Efficacy of a geriatric oral health CD as a learning tool.
Teasdale, Thomas A; Shaikh, Mehtab
2006-12-01
To better prepare professionals to meet the needs of older patients, a self-instructional computer module on geriatric oral health was previously developed. A follow-up study reported here tested the efficacy of this educational tool for improving student knowledge of geriatric oral care. A convenience sampling procedure was used. Sample size calculation revealed that fifty-six subjects were required to meet clinical and statistical criteria. Paired t-test addressed our hypothesis that use of the educational tool is associated with improvement in knowledge. Fifty-eight first-year dental students and nine third-year medical students completed the pre-intervention test and were given the CD-based educational tool. After seven days, all participants completed the post-intervention test. Knowledge of geriatric oral health improved among the sixty-seven students included in this study (p=0.019). When stratified on the basis of viewing the CD-ROM, the subgroup of thirty-eight students who reported not actually reviewing the CD-ROM had no change in their knowledge scores, while the subgroup of twenty-nine students who reported reviewing the CD had a significant improvement in test scores (p<0.001). Use of a self-instructional e-learning tool in geriatric oral health is effective among those students who choose to employ such tools.
Re-examining the gesture engram hypothesis. New perspectives on apraxia of tool use.
Osiurak, François; Jarry, Christophe; Le Gall, Didier
2011-02-01
In everyday life, we are led to reuse the same tools (e.g., fork, hammer, coffee-maker), raising the question as to whether we have to systematically recreate the idea of the manipulation which is associated with these tools. The gesture engram hypothesis offers a straightforward answer to this issue, by suggesting that activation of gesture engrams provides a processing advantage, avoiding portions of the process from being reconstructed de novo with each experience. At first glance, the gesture engram hypothesis appears very plausible. But, behind this beguiling simplicity lies a set of unresolved difficulties: (1) What is the evidence in favour of the idea that the mere observation of a tool is sufficient to activate the corresponding gesture engram? (2) If tool use can be supported by a direct route between a structural description system and gesture engrams, what is the role of knowledge about tool function? (3) And, more importantly, what does it mean to store knowledge about how to manipulate tools? We begin by outlining some of the main formulations of the gesture engram hypothesis. Then, we address each of these issues in more detail. To anticipate our discussion, the gesture engram hypothesis appears to be clearly unsatisfactory, notably because of its incapacity to offer convincing answers to these different issues. We conclude by arguing that neuropsychology may greatly benefit from adopting the hypothesis that the idea of how to manipulate a tool is recreated de novo with each experience, thus opening interesting perspectives for future research on apraxia.
Risk-Based, Hypothesis-Driven Framework for Hydrological Field Campaigns with Case Studies
NASA Astrophysics Data System (ADS)
Harken, B.; Rubin, Y.
2014-12-01
There are several stages in any hydrological modeling campaign, including: formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration or plume travel time. These predictions often have significant bearing on a decision that must be made. Examples include: how to allocate limited remediation resources between contaminated groundwater sites or where to place a waste repository site. Answering such questions depends on predictions of EPMs using forward models as well as levels of uncertainty related to these predictions. Uncertainty in EPM predictions stems from uncertainty in model parameters, which can be reduced by measurements taken in field campaigns. The costly nature of field measurements motivates a rational basis for determining a measurement strategy that is optimal with respect to the uncertainty in the EPM prediction. The tool of hypothesis testing allows this uncertainty to be quantified by computing the significance of the test resulting from a proposed field campaign. The significance of the test gives a rational basis for determining the optimality of a proposed field campaign. This hypothesis testing framework is demonstrated and discussed using various synthetic case studies. This study involves contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a specified location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical amount of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. The optimality of different field campaigns is assessed by computing the significance of the test resulting from each one. Evaluating the level of significance caused by a field campaign involves steps including likelihood-based inverse modeling and semi-analytical conditional particle tracking.
Metastatic melanoma moves on: translational science in the era of personalized medicine.
Levesque, Mitchell P; Cheng, Phil F; Raaijmakers, Marieke I G; Saltari, Annalisa; Dummer, Reinhard
2017-03-01
Progress in understanding and treating metastatic melanoma is the result of decades of basic and translational research as well as the development of better in vitro tools for modeling the disease. Here, we review the latest therapeutic options for metastatic melanoma and the known genetic and non-genetic mechanisms of resistance to these therapies, as well as the in vitro toolbox that has provided the greatest insights into melanoma progression. These include next-generation sequencing technologies and more complex 2D and 3D cell culture models to functionally test the data generated by genomics approaches. The combination of hypothesis generating and hypothesis testing paradigms reviewed here will be the foundation for the next phase of metastatic melanoma therapies in the coming years.
STOP using just GO: a multi-ontology hypothesis generation tool for high throughput experimentation
2013-01-01
Background: Gene Ontology (GO) enrichment analysis remains one of the most common methods for hypothesis generation from high throughput datasets. However, we believe that researchers strive to test other hypotheses that fall outside of GO. Here, we developed and evaluated a tool for hypothesis generation from gene or protein lists using ontological concepts present in manually curated text that describes those genes and proteins. Results: As a consequence we have developed the method Statistical Tracking of Ontological Phrases (STOP) that expands the realm of testable hypotheses in gene set enrichment analyses by integrating automated annotations of genes to terms from over 200 biomedical ontologies. While not as precise as manually curated terms, we find that the additional enriched concepts have value when coupled with traditional enrichment analyses using curated terms. Conclusion: Multiple ontologies have been developed for gene and protein annotation; by using a dataset of both manually curated GO terms and automatically recognized concepts from curated text we can expand the realm of hypotheses that can be discovered. The web application STOP is available at http://mooneygroup.org/stop/.
Krypotos, Angelos-Miltiadis; Klugkist, Irene; Engelhard, Iris M.
2017-01-01
Threat conditioning procedures have allowed the experimental investigation of the pathogenesis of Post-Traumatic Stress Disorder. The findings of these procedures have also provided stable foundations for the development of relevant intervention programs (e.g. exposure therapy). Statistical inference of threat conditioning procedures is commonly based on p-values and Null Hypothesis Significance Testing (NHST). Nowadays, however, there is a growing concern about this statistical approach, as many scientists point to the various limitations of p-values and NHST. As an alternative, the use of Bayes factors and Bayesian hypothesis testing has been suggested. In this article, we apply this statistical approach to threat conditioning data. In order to enable the easy computation of Bayes factors for threat conditioning data we present a new R package named condir, which can be used either via the R console or via a Shiny application. This article provides both a non-technical introduction to Bayesian analysis for researchers using the threat conditioning paradigm, and the necessary tools for computing Bayes factors easily.
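condir computes Bayes factors in R; for orientation only, here is a rough Python analogue using one standard shortcut, the BIC approximation to the Bayes factor for a paired CS+ versus CS- comparison (condir's default JZS Bayes factor is computed differently, and the data below are made up):

```python
# BIC approximation to the Bayes factor for a paired CS+ vs CS- comparison
# (Wagenmakers, 2007). Illustrative only; condir's default JZS Bayes factor is
# computed differently, and the response values below are made up.
import numpy as np

cs_plus = np.array([6.1, 5.4, 7.0, 6.6, 5.9, 6.8, 7.2, 5.5])   # hypothetical responses
cs_minus = np.array([4.9, 5.1, 5.6, 5.0, 5.2, 5.8, 6.0, 4.7])
d = cs_plus - cs_minus
n = len(d)

rss0 = np.sum(d ** 2)                 # H0: mean difference = 0
rss1 = np.sum((d - d.mean()) ** 2)    # H1: mean difference free
bic0 = n * np.log(rss0 / n) + 1 * np.log(n)
bic1 = n * np.log(rss1 / n) + 2 * np.log(n)

bf10 = np.exp((bic0 - bic1) / 2)      # evidence for a conditioning effect
print(f"BF10 ~ {bf10:.1f}")
```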
Short-term earthquake forecasting based on an epidemic clustering model
NASA Astrophysics Data System (ADS)
Console, Rodolfo; Murru, Maura; Falcone, Giuseppe
2016-04-01
The application of rigorous statistical tools, with the aim of verifying any prediction method, requires a univocal definition of the hypothesis, or the model, characterizing the concerned anomaly or precursor, so that it can be objectively recognized in any circumstance and by any observer. This is necessary to move beyond the old-fashioned approach consisting only of retrospective anecdotal studies of past cases. A rigorous definition of an earthquake forecasting hypothesis should lead to the objective identification of particular sub-volumes (usually named alarm volumes) of the total time-space volume within which the probability of occurrence of strong earthquakes is higher than usual. Testing such a hypothesis requires the observation of a sufficient number of past cases upon which a statistical analysis is possible. This analysis should be aimed at determining the rate at which the precursor has been followed (success rate) or not followed (false alarm rate) by the target seismic event, or the rate at which a target event has been preceded (alarm rate) or not preceded (failure rate) by the precursor. The binary table obtained from this kind of analysis leads to the definition of the parameters of the model that achieve the maximum number of successes and the minimum number of false alarms for a specific class of precursors. The mathematical tools suitable for this purpose include the Probability Gain and the R-Score, as well as popular plots such as the Molchan error diagram and the ROC diagram. Another tool for evaluating the validity of a forecasting method is the likelihood ratio (also named performance factor) of occurrence and non-occurrence of seismic events under different hypotheses. Whatever method is chosen for building up a new hypothesis, usually based on retrospective data, the final assessment of its validity should be carried out by a test on a new and independent set of observations. The implementation of this step can be problematic for seismicity characterized by long-term recurrence. However, separating the database collected in the past into two sections (one on which the best fit of the parameters is carried out, and the other on which the hypothesis is tested) can be a viable solution, known as retrospective-forward testing. In this study we show examples of application of the above mentioned concepts to the analysis of the Italian catalog of instrumental seismicity, making use of an epidemic algorithm developed to model short-term clustering features. This model, for which the precursory anomaly is simply the occurrence of seismic activity, does not need the retrospective categorization of earthquakes into foreshocks, mainshocks and aftershocks. It was introduced more than 15 years ago and has been tested so far in a number of real cases. It is now being run by several seismological centers around the world in forward real-time mode for testing purposes.
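For reference, the skill scores mentioned above can be computed directly from the binary success/failure table, using one common set of definitions (conventions for the R-score vary in the literature):

```python
# Alarm-based skill scores from a binary contingency table
# (one common convention; definitions vary across the literature).
def skill_scores(hits, misses, false_alarms, correct_negatives):
    n = hits + misses + false_alarms + correct_negatives
    success_rate = hits / (hits + false_alarms)   # alarms followed by an event
    alarm_rate = hits / (hits + misses)           # events preceded by an alarm
    base_rate = (hits + misses) / n               # unconditional event rate
    probability_gain = success_rate / base_rate
    r_score = (hits / (hits + misses)
               - false_alarms / (false_alarms + correct_negatives))
    return dict(success_rate=success_rate, alarm_rate=alarm_rate,
                probability_gain=probability_gain, r_score=r_score)

print(skill_scores(hits=12, misses=8, false_alarms=30, correct_negatives=950))
```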
Sex differences in tool use acquisition in bonobos (Pan paniscus).
Boose, Klaree J; White, Frances J; Meinelt, Audra
2013-09-01
All the great ape species are known tool users in both the wild and captivity, although there is great variation in ability and behavioral repertoire. Differences in tool use acquisition between chimpanzees and gorillas have been attributed to differing levels of social tolerance as a result of differences in social structure. Chimpanzees also show sex differences in acquisition and both chimpanzees and bonobos demonstrate a female bias in tool use behaviors. Studies of acquisition are limited in the wild and between species comparisons are complicated in captivity by contexts that often do not reflect natural conditions. Here we investigated tool use acquisition in a captive group of naïve bonobos by simulating naturalistic conditions. We constructed an artificial termite mound fashioned after those that occur in the wild and tested individuals within a social group context. We found sex differences in latencies to attempt and to succeed where females attempted to fish, were successful more quickly, and fished more frequently than males. We compared our results to those reported for chimpanzees and gorillas. Males across all three species did not differ in latency to attempt or to succeed. In contrast, bonobo and chimpanzee females succeeded more quickly than did female gorillas. Female bonobos and female chimpanzees did not differ in either latency to attempt or to succeed. We tested the social tolerance hypothesis by investigating the relationship between tool behaviors and number of neighbors present. We also compared these results to those reported for chimpanzees and gorillas and found that bonobos had the fewest numbers of neighbors present. The results of this study do not support the association between number of neighbors and tool behavior reported for chimpanzees. However, bonobos demonstrated a similar sex difference in tool use acquisition, supporting the hypothesis of a female bias in tool use in Pan.
Clinical prediction of fall risk and white matter abnormalities: a diffusion tensor imaging study
USDA-ARS?s Scientific Manuscript database
The Tinetti scale is a simple clinical tool designed to predict risk of falling by focusing on gait and stance impairment in elderly persons. Gait impairment is also associated with white matter (WM) abnormalities. Objective: To test the hypothesis that elderly subjects at risk for falling, as deter...
Microcomputers in the Classroom: Don't Exclude the Developmentally Disabled.
ERIC Educational Resources Information Center
Schall, William E.; And Others
1985-01-01
This study tested the hypothesis that 15- to 21-year-old educable mentally retarded students could successfully interact with microcomputers and show interest in using them as a learning tool. High interest levels and attention spans and positive microcomputer attitudes displayed by subjects suggest there may be unrealized potential in…
Uno, Hajime; Tian, Lu; Claggett, Brian; Wei, L J
2015-12-10
With censored event time observations, the logrank test is the most popular tool for testing the equality of two underlying survival distributions. Although this test is asymptotically distribution free, it may not be powerful when the proportional hazards assumption is violated. Various other novel testing procedures have been proposed, which generally are derived by assuming a class of specific alternative hypotheses with respect to the hazard functions. The test considered by Pepe and Fleming (1989) is based on a linear combination of weighted differences of the two Kaplan-Meier curves over time and is a natural tool for assessing the difference of two survival functions directly. In this article, we take a similar approach but choose weights that are proportional to the observed standardized difference of the estimated survival curves at each time point. The new proposal automatically makes weighting adjustments empirically. The new test statistic is aimed at a one-sided general alternative hypothesis and is distributed with a short right tail under the null hypothesis but with a heavy tail under the alternative. The results from extensive numerical studies demonstrate that the new procedure performs well under various general alternatives, with the caveat of a minor inflation of the type I error rate when the sample size or the number of observed events is small. Survival data from a recent comparative cancer study are used to illustrate the implementation of the procedure.
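A bare-bones illustration of the general construction, a Kaplan-Meier estimator, an integrated difference of the two curves (here unweighted), and a permutation null, is sketched below; the adaptive standardized-difference weights of the proposed test are deliberately not reproduced:

```python
# Bare-bones integrated-difference test of two survival curves with a
# permutation null. Illustrative only: the adaptive standardized-difference
# weights of the proposed test are not reproduced here.
import numpy as np

def km_curve(time, event, grid):
    """Kaplan-Meier survival estimate evaluated on a fixed time grid."""
    order = np.argsort(time)
    t, e = time[order], event[order]
    surv = np.ones(len(grid))
    s, at_risk, j = 1.0, len(t), 0
    for i, g in enumerate(grid):
        while j < len(t) and t[j] <= g:
            if e[j]:
                s *= 1.0 - 1.0 / at_risk
            at_risk -= 1
            j += 1
        surv[i] = s
    return surv

def integrated_diff(time, event, group, grid):
    s1 = km_curve(time[group == 1], event[group == 1], grid)
    s0 = km_curve(time[group == 0], event[group == 0], grid)
    return np.sum((s1 - s0)[:-1] * np.diff(grid))   # area between curves

rng = np.random.default_rng(3)
n = 80
group = np.repeat([0, 1], n // 2)
time = rng.exponential(1.0 + 0.6 * group)       # group 1 tends to survive longer
event = rng.uniform(size=n) < 0.8               # ~20% flagged as censored
grid = np.linspace(0.0, np.quantile(time, 0.9), 50)

obs = integrated_diff(time, event, group, grid)
perm = np.array([integrated_diff(time, event, rng.permutation(group), grid)
                 for _ in range(2000)])
print(f"stat = {obs:.3f}, one-sided permutation p = {np.mean(perm >= obs):.4f}")
```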
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raj, Sunny; Jha, Sumit Kumar; Pullum, Laura L.
Validating the correctness of human detection vision systems is crucial for safety applications such as pedestrian collision avoidance in autonomous vehicles. The enormous space of possible inputs to such an intelligent system makes it difficult to design test cases for such systems. In this report, we present our tool MAYA that uses an error model derived from a convolutional neural network (CNN) to explore the space of images similar to a given input image, and then tests the correctness of a given human or object detection system on such perturbed images. We demonstrate the capability of our tool on the pre-trained Histogram-of-Oriented-Gradients (HOG) human detection algorithm implemented in the popular OpenCV toolset and the Caffe object detection system pre-trained on the ImageNet benchmark. Our tool may serve as a testing resource for the designers of intelligent human and object detection systems.
Constrained inversion as a hypothesis testing tool, what can we learn about the lithosphere?
NASA Astrophysics Data System (ADS)
Moorkamp, Max; Fishwick, Stewart; Jones, Alan G.
2017-04-01
Inversion of geophysical data constrained by a reference model is typically used to guide the inversion of low resolution data towards a geologically plausible solution. For example, a migrated seismic section can provide the location of lithological boundaries for potential field inversions. Here we consider the inversion of long-period magnetotelluric data constrained by models generated through surface wave inversion. In this case, we do not consider the surface wave model inherently better in any sense; instead, we want to guide the magnetotelluric inversion towards this model in order to test the hypothesis that both datasets can be explained by models with similar structure. If the hypothesis test is successful, i.e. we can fit the observations with a conductivity model with structural similarity to the seismic model, we have found an alternative explanation compared to the individual inversion and can use the differences to learn about the resolution of the magnetotelluric data and improve our interpretation. Conversely, if the test refutes our hypothesis of coincident structure, we have found features in the models that are sensed fundamentally differently by the two methods, which is potentially instructive on the nature of the anomalies. We use an MT dataset acquired in central Botswana over the Okwa terrane and the adjacent Kaapvaal and Zimbabwe Cratons, together with a tomographic model for the region, to illustrate and test this approach. Here, various conductive structures have been identified that bridge the Moho. Furthermore, the thickness of the lithosphere inferred from the different methods differs. In both cases the question is to what extent this is a result of the ill-posed nature of inversion and to what extent these differences can be reconciled. Thus this dataset is an ideal test case for our hypothesis testing approach. Finally, we will demonstrate how we can use the results of the constrained inversion to extract conductivity-velocity relationships in the region and gain further insight into the composition and thermal structure of the lithosphere.
Identification, definition and mapping of terrestrial ecosystems in interior Alaska
NASA Technical Reports Server (NTRS)
Anderson, J. H. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Two new, as yet unfinished vegetation maps are presented. These tend further to substantiate the belief that ERTS-1 imagery is a valuable mapping tool. Newly selected scenes show that vegetation interpretations can be refined through use of non-growing-season imagery, particularly through the different spectral characteristics of vegetation lacking foliage and through the effect of vegetation structure on apparent snow cover. Scenes are now available for all test areas north of the Alaska Range except Mt. McKinley National Park. No support was obtained for the hypothesis that similar interband ratios, from two areas apparently different spectrally because of different sun angles, would indicate similar surface features. However, attempts to test this hypothesis have so far been casual.
ERIC Educational Resources Information Center
Saylor, John M.
The National Science Foundation (NSF) is providing funds for coalitions of engineering educational institutions to improve the quality of undergraduate engineering education. A hypothesis that is being tested is that people can learn better in environments that allow self-paced and/or collaborative learning. The main tools for providing this…
Rethinking Exams and Letter Grades: How Much Can Teachers Delegate to Students?
ERIC Educational Resources Information Center
Kitchen, Elizabeth; King, Summer H.; Robison, Diane F.; Sudweeks, Richard R.; Bradshaw, William S.; Bell, John D.
2006-01-01
In this article we report a 3-yr study of a large-enrollment Cell Biology course focused on developing student skill in scientific reasoning and data interpretation. Specifically, the study tested the hypothesis that converting the role of exams from summative grading devices to formative tools would increase student success in acquiring those…
Using the Moon as a Tool for Discovery-Oriented Learning.
ERIC Educational Resources Information Center
Cummins, Robert Hays; Ritger, Scott David; Myers, Christopher Adam
1992-01-01
Students test the hypothesis that the moon revolves east to west around the earth, determine by observation approximately how many degrees the moon revolves per night, and develop a scale model of the earth-sun-moon system in this laboratory exercise. Students are actively involved in the scientific process and are introduced to the importance of…
ERIC Educational Resources Information Center
Streibel, Michael; And Others
1987-01-01
Describes an advice-giving computer system being developed for genetics education called MENDEL that is based on research in learning, genetics problem solving, and expert systems. The value of MENDEL as a design tool and the tutorial function are stressed. Hypothesis testing, graphics, and experiential learning are also discussed. (Author/LRW)
Schloss, Patrick D; Handelsman, Jo
2006-10-01
The recent advent of tools enabling statistical inferences to be drawn from comparisons of microbial communities has allowed the focus of microbial ecology to move from characterizing biodiversity to describing the distribution of that biodiversity. Although statistical tools have been developed to compare community structures across a phylogenetic tree, we lack tools to compare the memberships and structures of two communities at a particular operational taxonomic unit (OTU) definition. Furthermore, current tests of community structure do not indicate the similarity of the communities but only report the probability of a statistical hypothesis. Here we present a computer program, SONS, which implements nonparametric estimators for the fraction and richness of OTUs shared between two communities.
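As a rough illustration of what comparing the memberships of two communities at a fixed OTU definition involves, the sketch below computes observed shared-OTU summaries for two hypothetical abundance maps; SONS itself implements nonparametric estimators (e.g., for unseen shared OTUs), which this sketch does not reproduce:

```python
def shared_otu_summary(comm_a: dict, comm_b: dict) -> dict:
    # comm_a, comm_b map OTU labels to sequence counts (hypothetical data).
    shared = set(comm_a) & set(comm_b)
    n_a, n_b = sum(comm_a.values()), sum(comm_b.values())
    return {
        "otus_a": len(comm_a),
        "otus_b": len(comm_b),
        "shared_otus": len(shared),
        # fraction of each community's observed richness that is shared
        "frac_richness_a_shared": len(shared) / len(comm_a),
        "frac_richness_b_shared": len(shared) / len(comm_b),
        # fraction of each community's sequences that fall in shared OTUs
        "frac_abundance_a_shared": sum(comm_a[o] for o in shared) / n_a,
        "frac_abundance_b_shared": sum(comm_b[o] for o in shared) / n_b,
    }

print(shared_otu_summary({"otu1": 40, "otu2": 10, "otu3": 2},
                         {"otu1": 25, "otu3": 5, "otu4": 30}))
```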
NASA Astrophysics Data System (ADS)
Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.
2018-01-01
Recurrence networks and the associated statistical measures have become important tools in the analysis of time series data. In this work, we test how effective the recurrence network measures are in analyzing real-world data involving two main types of noise, white noise and colored noise. We use two prominent network measures as discriminating statistics for hypothesis testing using surrogate data, for a specific null hypothesis that the data are derived from a linear stochastic process. We show that the characteristic path length is especially efficient as a discriminating measure, with conclusions remaining reasonably accurate even with a limited number of data points in the time series. We also highlight an additional advantage of the network approach in identifying the dimensionality of the system underlying the time series, through a convergence measure derived from the probability distribution of the local clustering coefficients. As examples of real-world data, we use the light curves from a prominent black hole system and show that a combined analysis using three primary network measures can provide vital information regarding the nature of temporal variability of light curves from different spectroscopic classes.
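A minimal sketch of this surrogate-based test, under several simplifying assumptions: a 1-D (non-embedded) recurrence network, a Euclidean recurrence threshold, and phase-randomized (FFT) surrogates as realizations of the linear-stochastic null. The data and threshold choice are hypothetical:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

def recurrence_network(x, eps):
    # Nodes are time points, linked when their states are closer than eps.
    d = np.abs(x[:, None] - x[None, :])        # 1-D embedding for brevity
    A = (d < eps).astype(int)
    np.fill_diagonal(A, 0)
    return nx.from_numpy_array(A)

def char_path_length(G):
    # Restrict to the largest connected component so the mean is defined.
    H = G.subgraph(max(nx.connected_components(G), key=len))
    return nx.average_shortest_path_length(H)

def phase_surrogate(x):
    # Same power spectrum, randomized phases: a draw from the
    # linear-stochastic null hypothesis.
    X = np.fft.rfft(x)
    ph = rng.uniform(0.0, 2.0 * np.pi, len(X))
    ph[0] = 0.0
    return np.fft.irfft(np.abs(X) * np.exp(1j * ph), len(x))

x = np.cumsum(rng.standard_normal(300))        # hypothetical time series
eps = 0.1 * (x.max() - x.min())
L_obs = char_path_length(recurrence_network(x, eps))
L_null = [char_path_length(recurrence_network(phase_surrogate(x), eps))
          for _ in range(19)]
# With 19 surrogates, L_obs falling outside the null range is
# significant at the 0.05 level (one-sided rank test).
print(L_obs, "vs surrogate range", min(L_null), max(L_null))
```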
Iovita, Radu
2011-01-01
Background Recent findings suggest that the North African Middle Stone Age technocomplex known as the Aterian is both much older than previously assumed, and certainly associated with fossils exhibiting anatomically modern human morphology and behavior. The Aterian is defined by the presence of ‘tanged’ or ‘stemmed’ tools, which have been widely assumed to be among the earliest projectile weapon tips. The present study systematically investigates morphological variation in a large sample of Aterian tools to test the hypothesis that these tools were hafted and/or used as projectile weapons. Methodology/Principal Findings Both classical morphometrics and Elliptical Fourier Analysis of tool outlines are used to show that the shape variation in the sample exhibits size-dependent patterns consistent with a reduction of the tools from the tip down, with the tang remaining intact. Additionally, the process of reduction led to increasing side-to-side asymmetries as the tools got smaller. Finally, a comparison of shape-change trajectories between Aterian tools and Late Paleolithic arrowheads from the North German site of Stellmoor reveals significant differences in terms of the amount and location of the variation. Conclusions/Significance The patterns of size-dependent shape variation strongly support the functional hypothesis of Aterian tools as hafted knives or scrapers with alternating active edges, rather than as weapon tips. Nevertheless, the same morphological patterns are interpreted as some of the earliest evidence for hafting modification, and for the successful combination of different raw materials (haft and stone tip) into one implement, in itself an important achievement in the evolution of hominin technologies. PMID:22216161
ERIC Educational Resources Information Center
Oshima, Jun; Oshima, Ritsuko; Murayama, Isao; Inagaki, Shigenori; Takenaka, Makiko; Nakayama, Hayashi; Yamaguchi, Etsuji
2004-01-01
This paper reports design experiments on two Japanese elementary science lesson units in a sixth-grade classroom supported by computer support for collaborative learning (CSCL) technology as a collaborative reflection tool. We took different approaches in the experiments depending on their instructional goals. In the unit 'air and how things…
NASA Technical Reports Server (NTRS)
Beheshti, Afshin
2018-01-01
GeneLab as a general tool for the scientific community; utilizing GeneLab datasets to generate hypotheses and determine potential biological targets against health risks due to long-term space missions; and how OpenTarget can be used to discover novel drugs to test as countermeasures that can be utilized by astronauts.
Adolescents' Over-Use of the Cyber World--Internet Addiction or Identity Exploration?
ERIC Educational Resources Information Center
Israelashvili, Moshe; Kim, Taejin; Bukobza, Gabriel
2012-01-01
In this study, we tested the hypothesis that the Internet can serve as a valuable tool assisting adolescents in pursuing the developmentally related need for self-concept clarity. Participants in the study were 278 adolescents (48.5% girls; 7th-9th graders) who completed questionnaires relating to their levels of Internet use, Internet addiction,…
Accuracy of Revised and Traditional Parallel Analyses for Assessing Dimensionality with Binary Data
ERIC Educational Resources Information Center
Green, Samuel B.; Redell, Nickalus; Thompson, Marilyn S.; Levy, Roy
2016-01-01
Parallel analysis (PA) is a useful empirical tool for assessing the number of factors in exploratory factor analysis. On conceptual and empirical grounds, we argue for a revision to PA that makes it more consistent with hypothesis testing. Using Monte Carlo methods, we evaluated the relative accuracy of the revised PA (R-PA) and traditional PA…
[Dilemma of the null hypothesis in experimental tests of ecological hypotheses].
Li, Ji
2016-06-01
Experimental testing is one of the major ways of testing ecological hypotheses, though there are many arguments over the null hypothesis. Quinn and Dunham (1983) analyzed the hypothesis deduction model from Platt (1964) and concluded that there is no null hypothesis in ecology that can be strictly tested by experiments. Fisher's falsificationism and Neyman-Pearson (N-P) non-decisivity prevent the statistical null hypothesis from being strictly tested. Moreover, since the null hypothesis H0 (α=1, β=0) and the alternative hypothesis H1' (α'=1, β'=0) in ecological processes differ from those in classical physics, the ecological null hypothesis likewise cannot be strictly tested experimentally. These dilemmas of the null hypothesis could be alleviated via reduction of the P value, careful selection of the null hypothesis, non-centralization of the non-null hypothesis, and two-tailed tests. However, statistical null hypothesis significance testing (NHST) should not be equated with the logical test of causality in ecological hypotheses. Hence, findings and conclusions of methodological studies and experimental tests based on NHST are not always logically reliable.
NASA Astrophysics Data System (ADS)
Harken, B.; Geiges, A.; Rubin, Y.
2013-12-01
There are several stages in any hydrological modeling campaign, including: formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and forward modeling and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration, plume travel time, or aquifer recharge rate. These predictions often have significant bearing on some decision that must be made. Examples include: how to allocate limited remediation resources between multiple contaminated groundwater sites, where to place a waste repository site, and what extraction rates can be considered sustainable in an aquifer. Providing an answer to these questions depends on predictions of EPMs using forward models as well as levels of uncertainty related to these predictions. Uncertainty in model parameters, such as hydraulic conductivity, leads to uncertainty in EPM predictions. Often, field campaigns and inverse modeling efforts are planned and undertaken with reduction of parametric uncertainty as the objective. The tool of hypothesis testing allows this to be taken one step further by considering uncertainty reduction in the ultimate prediction of the EPM as the objective, and gives a rational basis for weighing costs and benefits at each stage. When using the tool of statistical hypothesis testing, the EPM is cast into a binary outcome. This is formulated as null and alternative hypotheses, which can be accepted or rejected with statistical formality. When accounting for all sources of uncertainty at each stage, the level of significance of this test provides a rational basis for planning, optimization, and evaluation of the entire campaign. Case-specific information, such as the consequences of prediction error and site-specific costs, can be used in establishing selection criteria based on what level of risk is deemed acceptable. This framework is demonstrated and discussed using various synthetic case studies. The case studies involve contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a given location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical value of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. Different field campaigns are analyzed based on effectiveness in reducing the probability of selecting the wrong hypothesis, which in this case corresponds to reducing uncertainty in the prediction of plume arrival time. To examine the role of inverse modeling in this framework, case studies involving both Maximum Likelihood parameter estimation and Bayesian inversion are used.
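To make the framework concrete, here is a toy Monte Carlo sketch of casting an EPM (advective plume travel time) into a binary hypothesis test; all numbers are invented, and a real application would propagate the posterior parameter uncertainty from inversion rather than this assumed lognormal distribution:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup: advective travel time over distance L through an aquifer
# with uncertain hydraulic conductivity K. All values are hypothetical.
L = 100.0             # distance to compliance point (m)
porosity, grad = 0.3, 0.01
t_crit = 50.0         # critical arrival time (years)

# Posterior uncertainty in K (m/yr) after inversion; smaller sigma
# mimics a field campaign that constrains the parameters more tightly.
for sigma in (1.0, 0.5, 0.1):
    K = rng.lognormal(mean=np.log(50.0), sigma=sigma, size=100_000)
    t = L * porosity / (K * grad)      # travel time per realization
    p_null = np.mean(t < t_crit)       # P(H0: arrival before t_crit)
    print(f"sigma = {sigma:.1f}  ->  P(H0) = {p_null:.3f}")
```

With the wide prior the decision is ambiguous (P(H0) near 0.4 here), while the tight posterior pushes it close to zero; this is the sense in which alternative field campaigns can be ranked by how much they reduce the probability of selecting the wrong hypothesis.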
NASA Astrophysics Data System (ADS)
Crespo Ramos, Edwin O.
This research was aimed at establishing the differences, if any, between traditional direct teaching and constructive teaching through the use of computer simulations, and their effect on pre-service teachers. It was also intended to gain feedback on the users of these simulations as providers of constructive teaching and learning experiences. The experimental framework used a quantitative method with a descriptive focus. The research was guided by two hypotheses and five inquiries. The data were obtained from a group of twenty-nine students from a private metropolitan university in Puerto Rico, all elementary school pre-service teachers. They were divided into two sub-groups: experimental and control. Two means were used to collect data: tests and surveys. Quantitative data were analyzed through the t-test for paired samples and the non-parametric Wilcoxon test. The results of the pre- and post-tests do not provide enough evidence to conclude that using the simulations as learning tools was more effective than traditional teaching; however, the quantitative results obtained were not enough to reject hypothesis Ho1. On the other hand, an overall positive attitude towards these simulations was obtained from the surveys. The importance of including hands-on activities in daily lesson planning was well recognized among practice teachers. After participating and working with these simulations, the practice teachers expressed being convinced that they would definitely use them as teaching tools in the classroom. Due to these results, hypothesis Ho2 was rejected. Evidence also showed that practice teachers need further professional development to improve their skills in the application of these simulations in the classroom environment. The majority of these practice teachers showed concern about not being instructed on important aspects of the use of simulation as part of their college education curriculum towards becoming teachers.
Nursing students' opinions about acupuncture and Chinese medicine.
Weber, J P
1975-01-01
Eighty senior nursing students at the University of San Francisco (USF) were divided at random into four groups of 20. Two groups were pretested on their knowledge of acupuncture and Chinese medicine. One week later, a 50-minute class on acupuncture and Chinese medicine was given in a community health class to one of the two pretested groups and one of the two untested groups. Following the class, the test was given to all four groups. Using the Solomon four-group design to measure the effects of pretesting and of the class content, significant differences were found between the groups on the questions examined, confirming the first hypothesis that an increase in knowledge about acupuncture and Chinese medicine would be accompanied by an increase in its acceptance as a healing tool and in the desire to learn more about it. A t-test on the results of the pre- and post-tests confirmed the second hypothesis that there would be no difference between groups in history or maturation from one week to the next.
Pion Total Cross Section in Nucleon - Nucleon Collisions
NASA Technical Reports Server (NTRS)
Norbury, John W.
2009-01-01
Total cross section parameterizations for neutral and charged pion production in nucleon - nucleon collisions are compared to experimental data over the projectile momentum range from threshold to 300 GeV. Both proton - proton and proton - neutron reactions are considered. Overall excellent agreement between parameterizations and experiment is found, except for notable disagreements near threshold. In addition, the hypothesis that the neutral pion production cross section can be obtained from the average charged pion cross section is checked. The theoretical formulas presented in the paper obey this hypothesis for projectile momenta below 500 GeV. The results presented provide a test of engineering tools used to calculate the pion component of space radiation.
Is there a link between the crafting of tools and the evolution of cognition?
Taylor, Alex H; Gray, Russell D
2014-11-01
The ability to craft tools is one of the defining features of our species. The technical intelligence hypothesis predicts that tool-making species should have enhanced physical cognition. Here we review how the physical problem-solving performance of tool-making apes and corvids compares to closely related species. We conclude that, while some performance differences have been found, overall the evidence is at best equivocal. We argue that increased sample sizes, novel experimental designs, and a signature-testing approach are required to determine the effect tool crafting has on the evolution of intelligence. WIREs Cogn Sci 2014, 5:693-703. doi: 10.1002/wcs.1322 For further resources related to this article, please visit the WIREs website. The authors have declared no conflicts of interest for this article. © 2014 The Authors. WIREs Cognitive Science published by John Wiley & Sons, Ltd.
Yang, Xiaowei; Nie, Kun
2008-03-15
Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.
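A compact sketch of the three-step procedure (transform, fit, test) under simplifying assumptions: simulated curves in place of the smoking-cessation data, a plain two-group comparison instead of a general FLRM, and permutation calibration of an adaptive-Neyman-type statistic rather than its asymptotic distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

def fourier_coeffs(curves, k):
    # Step 1: keep k low-frequency Fourier coefficients, skipping the
    # DC term (whose imaginary part is identically zero for real data).
    F = np.fft.rfft(curves, axis=1)[:, 1:k + 1]
    return np.hstack([F.real, F.imag])

def adaptive_neyman(z):
    # T = max_m sum_{i<=m} (z_i^2 - 1) / sqrt(2 m): the number of
    # coefficients entering the statistic is chosen adaptively.
    s = np.cumsum(z ** 2 - 1.0)
    m = np.arange(1, len(z) + 1)
    return np.max(s / np.sqrt(2.0 * m))

def test_stat(X, labels):
    # Step 2: standardized group differences, coefficient by coefficient.
    a, b = X[labels == 0], X[labels == 1]
    se = np.sqrt(a.var(axis=0, ddof=1) / len(a) +
                 b.var(axis=0, ddof=1) / len(b))
    return adaptive_neyman((a.mean(axis=0) - b.mean(axis=0)) / se)

# Simulated curves: 40 subjects, 64 time points, small group effect.
t = np.linspace(0.0, 1.0, 64)
group = np.repeat([0, 1], 20)
curves = (np.sin(2 * np.pi * t) + 0.3 * group[:, None] * t
          + 0.5 * rng.standard_normal((40, 64)))

# Step 3: test the group effect, calibrated by label permutation.
X = fourier_coeffs(curves, k=8)
obs = test_stat(X, group)
perm = [test_stat(X, rng.permutation(group)) for _ in range(999)]
p = (1 + sum(s >= obs for s in perm)) / 1000
print(f"adaptive-Neyman statistic = {obs:.2f}, permutation p = {p:.3f}")
```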
Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong
2013-01-01
As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for binary outcomes was developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, the likelihood function under the null hypothesis is an alternative and indirect way to identify the potential cluster, and the test statistic is the extreme value of the likelihood function. As in Kulldorff's methods, we adopt a Monte Carlo test for the test of significance. Both methods are applied to detect spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. A simulation with independent benchmark data indicates that the test statistic based on the hypergeometric model outperforms Kulldorff's statistics for clusters of high population density or large size; otherwise, Kulldorff's statistics are superior.
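The following toy sketch illustrates the ingredients named above: a hypergeometric likelihood evaluated over candidate windows, the most extreme window as the detected cluster, and Monte Carlo replication for significance. It uses a 1-D arrangement of hypothetical regions rather than a true spatial scan:

```python
import numpy as np
from scipy.stats import hypergeom

rng = np.random.default_rng(7)

# Hypothetical 1-D arrangement of regions with populations and cases;
# a cluster of elevated risk is planted in regions 12-14.
pop = rng.integers(200, 1000, size=30)
cases = rng.binomial(pop, 0.01)
cases[12:15] += rng.binomial(pop[12:15], 0.03)

def scan(cases, pop, max_width=8):
    # For every contiguous window, evaluate the hypergeometric
    # probability of its case count given the margins; the least
    # probable window is the candidate cluster.
    N, C = pop.sum(), cases.sum()
    best = (np.inf, None)
    for w in range(1, max_width + 1):
        for i in range(len(pop) - w + 1):
            n, c = pop[i:i + w].sum(), cases[i:i + w].sum()
            p = hypergeom.pmf(c, N, C, n)
            if p < best[0]:
                best = (p, (i, i + w))
    return best

p_obs, window = scan(cases, pop)

# Monte Carlo test: redistribute all cases over regions at random,
# without replacement, in proportion to population (the null model).
C = int(cases.sum())
null = [scan(rng.multivariate_hypergeometric(pop, C), pop)[0]
        for _ in range(99)]
p_value = (1 + sum(pn <= p_obs for pn in null)) / 100
print(f"detected window {window}, Monte Carlo p = {p_value:.2f}")
```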
Brion, Mélanie; Pitel, Anne-Lise; Beaunieux, Hélène; Maurage, Pierre
2014-01-01
Korsakoff syndrome (KS) is a neurological state mostly caused by alcohol-dependence and leading to disproportionate episodic memory deficits. KS patients present more severe anterograde amnesia than Alcohol-Dependent Subjects (ADS), which led to the continuum hypothesis postulating a progressive increase in brain and cognitive damage during the evolution from ADS to KS. This hypothesis has been extensively examined for memory but is still debated for other abilities, notably executive functions (EF). EF have up to now been explored with unspecific tasks in KS, and few studies have explored their interactions with memory. Exploring EF in KS with specific tasks based on current EF models could thus renew the exploration of the continuum hypothesis. This paper proposes a research program aiming at: (1) clarifying the extent of executive dysfunctions in KS by tasks focusing on specific EF subcomponents; (2) determining the differential EF deficits in ADS and KS; (3) exploring EF-memory interactions in KS with innovative tasks. At the fundamental level, this exploration will test the continuum hypothesis beyond memory. At the clinical level, it will propose new rehabilitation tools focusing on the EF specifically impaired in KS.
A modeling process to understand complex system architectures
NASA Astrophysics Data System (ADS)
Robinson, Santiago Balestrini
2009-12-01
In recent decades, several tools have been developed by the armed forces, and their contractors, to test the capability of a force. These campaign-level analysis tools, often characterized as constructive simulations, are generally expensive to create and execute, and at best they are extremely difficult to verify and validate. This central observation, that analysts are relying more and more on constructive simulations to predict the performance of future networks of systems, leads to the two central objectives of this thesis: (1) to enable the quantitative comparison of architectures in terms of their ability to satisfy a capability without resorting to constructive simulations, and (2) when constructive simulations must be created, to quantitatively determine how to spend the modeling effort amongst the different system classes. The first objective led to Hypothesis A, the first main hypothesis, which states that by studying the relationships between the entities that compose an architecture, one can infer how well it will perform a given capability. The method used to test the hypothesis is based on two assumptions: (1) that the capability can be defined as a cycle of functions, and (2) that it is possible to estimate the probability that a function-based relationship occurs between any two types of entities. If these two requirements are met, then by creating random functional networks, different architectures can be compared in terms of their ability to satisfy a capability. In order to test this hypothesis, a novel process for creating representative functional networks of large-scale system architectures was developed. The process, named Digraph Modeling for Architectures (DiMA), was tested by comparing its results to those of complex constructive simulations. Results indicate that if the inputs assigned to DiMA are correct (in the tests they were based on time-averaged data obtained from an agent-based model), DiMA is able to identify which of any two architectures is better more than 98% of the time. The second objective led to Hypothesis B, the second of the main hypotheses. This hypothesis stated that by studying the functional relations, the most critical entities composing the architecture could be identified. The critical entities are those for which slight variations in behavior produce large variations in the behavior of the overall architecture. These are the entities that must be modeled more carefully and where modeling effort should be expended. This hypothesis was tested by simplifying agent-based models to the non-trivial minimum and executing a large number of different simulations in order to obtain statistically significant results. The tests were conducted by evolving the complex model without any error induced, and then evolving the model once again for each ranking, assigning error to any of the nodes with a probability inversely proportional to the ranking. The results from this hypothesis test indicate that, depending on the structural characteristics of the functional relations, it is useful to use one of the two intelligent rankings tested, or it is best to expend effort equally amongst all the entities. Random ranking always performed worse than uniform ranking, indicating that if modeling effort is to be prioritized amongst the entities composing the large-scale system architecture, it should be prioritized intelligently. The benefit threshold between intelligent prioritization and no prioritization lies on the large-scale system's chaotic boundary.
If the large-scale system behaves chaotically, small variations in any of the entities tend to have a great impact on the behavior of the entire system. Therefore, even low-ranking entities can still greatly affect the behavior of the model, and error should not be concentrated in any one entity. It was discovered that the threshold can be identified by studying the structure of the networks, in particular the cyclicity, the Off-diagonal Complexity, and the Digraph Algebraic Connectivity. (Abstract shortened by UMI.)
Kilborn, Joshua P; Jones, David L; Peebles, Ernst B; Naar, David F
2017-04-01
Clustering data continues to be a highly active area of data analysis, and resemblance profiles are being incorporated into ecological methodologies as a hypothesis testing-based approach to clustering multivariate data. However, these new clustering techniques have not been rigorously tested to determine the performance variability based on the algorithm's assumptions or any underlying data structures. Here, we use simulation studies to estimate the statistical error rates for the hypothesis test for multivariate structure based on dissimilarity profiles (DISPROF). We concurrently tested a widely used algorithm that employs the unweighted pair group method with arithmetic mean (UPGMA) to estimate the proficiency of clustering with DISPROF as a decision criterion. We simulated unstructured multivariate data from different probability distributions with increasing numbers of objects and descriptors, and grouped data with increasing overlap, overdispersion for ecological data, and correlation among descriptors within groups. Using simulated data, we measured the resolution and correspondence of clustering solutions achieved by DISPROF with UPGMA against the reference grouping partitions used to simulate the structured test datasets. Our results highlight the dynamic interactions between dataset dimensionality, group overlap, and the properties of the descriptors within a group (i.e., overdispersion or correlation structure) that are relevant to resemblance profiles as a clustering criterion for multivariate data. These methods are particularly useful for multivariate ecological datasets that benefit from distance-based statistical analyses. We propose guidelines for using DISPROF as a clustering decision tool that will help future users avoid potential pitfalls during the application of methods and the interpretation of results.
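For readers unfamiliar with resemblance-profile tests, here is a minimal permutation sketch in the SIMPROF/DISPROF spirit: compare the sorted vector of pairwise dissimilarities against profiles from column-permuted data. It is a schematic of the idea, not the DISPROF implementation evaluated in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def profile(X):
    # Sorted vector of all pairwise Euclidean distances.
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices(len(X), k=1)
    return np.sort(d[iu])

def permute_columns(X):
    # Shuffle each descriptor independently: destroys multivariate
    # structure while preserving each descriptor's marginal distribution.
    return np.column_stack([rng.permutation(X[:, j])
                            for j in range(X.shape[1])])

def disprof_test(X, n_mean=99, n_null=99):
    mean_prof = np.mean([profile(permute_columns(X))
                         for _ in range(n_mean)], axis=0)
    pi_obs = np.abs(profile(X) - mean_prof).sum()
    pi_null = [np.abs(profile(permute_columns(X)) - mean_prof).sum()
               for _ in range(n_null)]
    return pi_obs, (1 + sum(p >= pi_obs for p in pi_null)) / (n_null + 1)

# Two-group data should be flagged as structured...
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(2, 1, (20, 5))])
print("structured:", disprof_test(X))
# ...while homogeneous data should not.
print("unstructured:", disprof_test(rng.normal(0, 1, (40, 5))))
```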
Development of a StandAlone Surgical Haptic Arm.
Jones, Daniel; Lewis, Andrew; Fischer, Gregory S
2011-01-01
When performing telesurgery with current commercially available Minimally Invasive Robotic Surgery (MIRS) systems, a surgeon cannot feel the tool interactions that are inherent in traditional laparoscopy. It is proposed that haptic feedback in the control of MIRS systems could improve the speed, safety and learning curve of robotic surgery. To test this hypothesis, a standalone surgical haptic arm (SASHA) capable of manipulating da Vinci tools has been designed and fabricated with the additional ability of providing information for haptic feedback. This arm was developed as a research platform for developing and evaluating approaches to telesurgery, including various haptic mappings between master and slave and evaluating the effects of latency.
Understanding the Role of P Values and Hypothesis Tests in Clinical Research.
Mark, Daniel B; Lee, Kerry L; Harrell, Frank E
2016-12-01
P values and hypothesis testing methods are frequently misused in clinical research. Much of this misuse appears to be owing to the widespread, mistaken belief that they provide simple, reliable, and objective triage tools for separating the true and important from the untrue or unimportant. The primary focus in interpreting therapeutic clinical research data should be on the treatment ("oomph") effect, a metaphorical force that moves patients given an effective treatment to a different clinical state relative to their control counterparts. This effect is assessed using 2 complementary types of statistical measures calculated from the data, namely, effect magnitude or size and precision of the effect size. In a randomized trial, effect size is often summarized using constructs, such as odds ratios, hazard ratios, relative risks, or adverse event rate differences. How large a treatment effect has to be to be consequential is a matter for clinical judgment. The precision of the effect size (conceptually related to the amount of spread in the data) is usually addressed with confidence intervals. P values (significance tests) were first proposed as an informal heuristic to help assess how "unexpected" the observed effect size was if the true state of nature was no effect or no difference. Hypothesis testing was a modification of the significance test approach that envisioned controlling the false-positive rate of study results over many (hypothetical) repetitions of the experiment of interest. Both can be helpful but, by themselves, provide only a tunnel vision perspective on study results that ignores the clinical effects the study was conducted to measure.
Explorations in statistics: hypothesis tests and P values.
Curran-Everett, Douglas
2009-06-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of Explorations in Statistics delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what we observe in the experiment to what we expect to see if the null hypothesis is true. The P value associated with the magnitude of that test statistic answers this question: if the null hypothesis is true, what proportion of possible values of the test statistic are at least as extreme as the one I got? Although statisticians continue to stress the limitations of hypothesis tests, there are two realities we must acknowledge: hypothesis tests are ingrained within science, and the simple test of a null hypothesis can be useful. As a result, it behooves us to explore the notions of hypothesis tests, test statistics, and P values.
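The question the P value answers can be made concrete with a small permutation sketch (hypothetical data): the statistic compares observed group means, and the P value is the proportion of null-generated statistics at least as extreme as the observed one:

```python
import numpy as np

rng = np.random.default_rng(0)

control = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.3])
treated = np.array([6.2, 5.9, 6.8, 6.1, 5.7, 6.5])

obs = treated.mean() - control.mean()          # the test statistic

pooled = np.concatenate([control, treated])
null_stats = []
for _ in range(10_000):
    perm = rng.permutation(pooled)             # relabel under H0
    null_stats.append(perm[6:].mean() - perm[:6].mean())

p = np.mean(np.abs(null_stats) >= abs(obs))    # two-sided P value
print(f"difference = {obs:.2f}, permutation P = {p:.4f}")
```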
Trivedi, Prinal; Edwards, Jode W; Wang, Jelai; Gadbury, Gary L; Srinivasasainagendra, Vinodh; Zakharkin, Stanislav O; Kim, Kyoungmi; Mehta, Tapan; Brand, Jacob P L; Patki, Amit; Page, Grier P; Allison, David B
2005-04-06
Many efforts in microarray data analysis are focused on providing tools and methods for the qualitative analysis of microarray data. HDBStat! (High-Dimensional Biology-Statistics) is a software package designed for the analysis of high-dimensional biology data such as microarray data. It was initially developed for the analysis of microarray gene expression data, but it can also be used for some applications in proteomics and other aspects of genomics. HDBStat! provides statisticians and biologists a flexible and easy-to-use interface to analyze complex microarray data using a variety of methods for data preprocessing, quality control analysis and hypothesis testing. Results generated from data preprocessing methods, quality control analysis and hypothesis testing methods are output in the form of Excel CSV tables, graphs and an HTML report summarizing the data analysis. HDBStat! is platform-independent software that is freely available to academic institutions and non-profit organizations. It can be downloaded from our website http://www.soph.uab.edu/ssg_content.asp?id=1164.
A closure test for time-specific capture-recapture data
Stanley, T.R.; Burnham, K.P.
1999-01-01
The assumption of demographic closure in the analysis of capture-recapture data under closed-population models is of fundamental importance. Yet, little progress has been made in the development of omnibus tests of the closure assumption. We present a closure test for time-specific data that, in principle, tests the null hypothesis of closed-population model M(t) against the open-population Jolly-Seber model as a specific alternative. This test is chi-square, and can be decomposed into informative components that can be interpreted to determine the nature of closure violations. The test is most sensitive to permanent emigration and least sensitive to temporary emigration, and is of intermediate sensitivity to permanent or temporary immigration. This test is a versatile tool for testing the assumption of demographic closure in the analysis of capture-recapture data.
Self-Monitoring Symptoms in Glaucoma: A Feasibility Study of a Web-Based Diary Tool
McDonald, Leanne; Glen, Fiona C.; Taylor, Deanna J.
2017-01-01
Purpose. Glaucoma patients annually spend only a few hours in an eye clinic but spend more than 5000 waking hours engaged in everything else. We propose that patients could self-monitor changes in visual symptoms, providing valuable between-clinic information; we test the hypothesis that this is feasible using a web-based diary tool. Methods. Ten glaucoma patients with a range of visual field loss took part in an eight-week pilot study. After completing a series of baseline tests, volunteers were prompted to monitor symptoms every three days and complete a diary about their vision during daily life using a bespoke web-based diary tool. Response to an end-of-study questionnaire about the usefulness of the exercise was the main outcome measure. Results. Eight of the 10 patients rated the monitoring scheme to be “valuable” or “very valuable.” The completion rate for items was excellent (96%). Themes from a qualitative synthesis of the diary entries related to behavioural aspects of glaucoma. One patient concluded that a constant focus on monitoring symptoms led to negative feelings. Conclusions. A web-based diary tool for monitoring self-reported glaucoma symptoms is practically feasible. The tool must be carefully designed to ensure participants are benefiting and that it is not increasing anxiety. PMID:28546876
Seeking health information on the web: positive hypothesis testing.
Kayhan, Varol Onur
2013-04-01
The goal of this study is to investigate positive hypothesis testing among consumers of health information when they search the Web. After demonstrating the extent of positive hypothesis testing in Experiment 1, we conducted Experiment 2 to test the effectiveness of two debiasing techniques. A total of 60 undergraduate students searched a tightly controlled online database developed by the authors to test the validity of a hypothesis. The database had four abstracts that confirmed the hypothesis and three abstracts that disconfirmed it. Findings of Experiment 1 showed that the majority of participants (85%) exhibited positive hypothesis testing. In Experiment 2, we found that the recommendation technique was not effective in reducing positive hypothesis testing, since none of the participants assigned to this server could retrieve disconfirming evidence. Experiment 2 also showed that the incorporation technique successfully reduced positive hypothesis testing, since 75% of the participants could retrieve disconfirming evidence. Positive hypothesis testing on the Web is an understudied topic. More studies are needed to validate the effectiveness of the debiasing techniques discussed in this study and to develop new techniques. Search engine developers should consider developing new options for users so that both confirming and disconfirming evidence can be presented in search results as users test hypotheses using search engines. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Using Pedagogical Tools to Help Hispanics be Successful in Computer Science
NASA Astrophysics Data System (ADS)
Irish, Rodger
Irish, Rodger, Using Pedagogical Tools to Help Hispanics Be Successful in Computer Science. Master of Science (MS), July 2017, 68 pp., 4 tables, 2 figures, references 48 titles. Computer science (CS) jobs are a growing field and pay a living wage, but Hispanics are underrepresented in this field. This project seeks to give an overview of several factors contributing to the problem. It then explores some possible solutions and how a combination of tools (teaching methods) can create the best possible outcome. It is my belief that this approach can produce successful Hispanics to fill the needed jobs in the CS field. The project will then test its hypothesis. I will discuss the tools used to measure progress in both the affective and the cognitive domains. I will show how the decision to run a Computer Club was reached and the results of the research. The conclusion will summarize the results and describe future research that still needs to be done.
Stable-isotope fingerprints of biological agents as forensic tools.
Horita, Juske; Vass, Arpad A
2003-01-01
Naturally occurring stable isotopes of light elements in chemical and biological agents may possess unique "stable-isotope fingerprints" depending on their sources and manufacturing processes. To test this hypothesis, two strains of bacteria (Bacillus globigii and Erwinia agglomerans) were grown under controlled laboratory conditions. We observed that cultured bacterial cells faithfully inherited the isotopic compositions (hydrogen, carbon, and nitrogen) of the media water and substrates, in ways predictable from bacterial metabolism, and that even bacterial cells of the same strain, grown in media water and substrates of different isotopic compositions, have readily distinguishable isotopic signatures. These "stable-isotope fingerprints" of chemical and biological agents can be used as forensic tools in the event of biochemical terrorist attacks.
Automated Diagnosis and Control of Complex Systems
NASA Technical Reports Server (NTRS)
Kurien, James; Plaunt, Christian; Cannon, Howard; Shirley, Mark; Taylor, Will; Nayak, P.; Hudson, Benoit; Bachmann, Andrew; Brownston, Lee; Hayden, Sandra;
2007-01-01
Livingstone2 is a reusable, artificial intelligence (AI) software system designed to assist spacecraft, life support systems, chemical plants, or other complex systems by operating with minimal human supervision, even in the face of hardware failures or unexpected events. The software diagnoses the current state of the spacecraft or other system, and recommends commands or repair actions that will allow the system to continue operation. Livingstone2 is an enhancement of the Livingstone diagnosis system that was flight-tested onboard the Deep Space One spacecraft in 1999. This version tracks multiple diagnostic hypotheses, rather than just a single hypothesis as in the previous version, and it is able to revise diagnostic decisions made in the past when additional observations become available; in such cases, the original single-hypothesis Livingstone might have arrived at an incorrect diagnosis. Re-architecting and re-implementing the system in C++ has increased performance. Usability has been improved by creating a set of development tools that is closely integrated with the Livingstone2 engine. In addition to the core diagnosis engine, Livingstone2 includes a compiler that translates diagnostic models written in a Java-like language into Livingstone2's language, and a broad set of graphical tools for model development.
Validation of a pregnancy planning measure for Arabic-speaking women.
Almaghaslah, Eman; Rochat, Roger; Farhat, Ghada
2017-01-01
The prevalence of unplanned pregnancy in Saudi Arabia has not been thoroughly investigated. Our aim was to conduct a psychometric evaluation of the Arabic version of the London Measure of Unplanned Pregnancy (LMUP). To evaluate the psychometric properties of the LMUP, we conducted a self-administered online survey among 796 ever-married Saudi women aged 20-49 years, and a re-test survey among 24 women. The psychometric properties evaluated included content validity, measured by the content validity index (CVI); structural validity, assessed by exploratory factor analysis (EFA); substantive validity, assessed by hypothesis testing; test-retest stability, assessed by weighted kappa; and internal consistency, assessed by Cronbach's alpha. The psychometric analysis showed the Arabic version of the LMUP to be valid and reliable. The CVIs for individual items and at the scale level were >0.7. EFA confirmed a unidimensional structure for the scale items. Hypothesis testing confirmed the expected associations. The tool was stable, with weighted kappa = 0.78 and Cronbach's alpha = 0.88. In this study, the validity and reliability of the Arabic version of the LMUP were confirmed according to well-known psychometric criteria. This LMUP version can be used in research studies among Arabic-speaking women to measure unplanned pregnancy and investigate correlates and outcomes related to unplanned pregnancy.
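As an aside on the reliability measures named here, internal consistency via Cronbach's alpha is simple to compute; the sketch below uses invented item scores, not the LMUP data:

```python
import numpy as np

def cronbach_alpha(items):
    # items: respondents x items score matrix.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1.0) * (1.0 - item_vars / total_var)

# Hypothetical responses from 8 women to a 6-item scale (0-2 per item).
scores = np.array([
    [2, 2, 1, 2, 2, 1],
    [0, 1, 0, 0, 1, 0],
    [2, 2, 2, 1, 2, 2],
    [1, 1, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 0],
    [2, 1, 2, 2, 2, 2],
    [1, 2, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```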
New methods of testing nonlinear hypothesis using iterative NLLS estimator
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This research paper discusses a method of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator, as explained by Takeshi Amemiya [1]. In the present paper, a modified Wald test statistic due to Robert Engle [6] is proposed to test nonlinear hypotheses using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses, using an iterative NLLS estimator based on nonlinear studentized residuals, is also proposed, and an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator given by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors, and studied the problem of heteroscedasticity with reference to nonlinear regression models with suitable illustration. William Grene [13] examined interaction effects in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
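A generic sketch of a Wald test of a nonlinear restriction after NLLS estimation (not the authors' modified statistic): fit by least squares, linearize the restriction g(theta) = 0 at the estimate, and compare W to a chi-squared reference. The model and restriction are invented:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

rng = np.random.default_rng(5)

# Wald test of a nonlinear restriction g(theta) = 0 after NLLS:
#   W = g(b)' [G Cov(b) G']^{-1} g(b)  ~  chi2(q)  under H0,
# where b is the NLLS estimate and G is the Jacobian of g at b.

def model(x, a, b):
    return a * np.exp(b * x)

def g(theta):
    # Hypothetical nonlinear restriction, H0: a * b = 1.
    return np.array([theta[0] * theta[1] - 1.0])

def jac_g(theta, h=1e-6):
    # Central-difference Jacobian of g.
    q, k = len(g(theta)), len(theta)
    G = np.empty((q, k))
    for j in range(k):
        e = np.zeros(k)
        e[j] = h
        G[:, j] = (g(theta + e) - g(theta - e)) / (2.0 * h)
    return G

x = np.linspace(0.0, 2.0, 100)
y = model(x, 2.0, 0.5) + 0.1 * rng.standard_normal(100)  # true a*b = 1

b_hat, cov = curve_fit(model, x, y, p0=[1.0, 1.0])
G = jac_g(b_hat)
W = float(g(b_hat) @ np.linalg.solve(G @ cov @ G.T, g(b_hat)))
print(f"W = {W:.3f}, p = {chi2.sf(W, df=len(g(b_hat))):.3f}")
```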
Vreugdenhil, Jettie; Spek, Bea
2018-03-01
Clinical reasoning in patient care is a skill that cannot be observed directly. So far, no reliable, valid instrument exists for the assessment of nursing students' clinical reasoning skills in hospital practice. Lasater's clinical judgment rubric (LCJR), based on Tanner's model "Thinking like a nurse", has been tested mainly in academic simulation settings. The aim is to develop a Dutch version of the LCJR (D-LCJR) and to test its psychometric properties when used in a hospital traineeship context. A mixed-model approach was used to develop and validate the instrument. Ten dedicated educational units in a university hospital. A well-mixed group of 52 nursing students, nurse coaches and nurse educators. A Delphi panel developed the D-LCJR. Students' clinical reasoning skills were assessed "live" by nurse coaches and nurse educators, and students rated themselves. The psychometric properties tested during the assessment process were reliability, reproducibility, content validity and construct validity, the latter by testing two hypotheses: 1) a positive correlation between assessed and self-reported sum scores (convergent validity), and 2) a linear relation between experience and sum score (clinical validity). The resulting D-LCJR was found to be internally consistent, with Cronbach's alpha 0.93. The rubric is also reproducible, with intraclass correlations between 0.69 and 0.78. Experts judged it to be content valid. Both hypotheses tested significant, providing supporting evidence for construct validity. The translated and modified LCJR is a promising tool for the evaluation of nursing students' development in clinical reasoning in hospital traineeships, by students, nurse coaches and nurse educators. More evidence on construct validity is necessary, in particular for students at the end of their hospital traineeship. Based on our research, the D-LCJR applied in hospital traineeships is a usable and reliable tool. Copyright © 2017 Elsevier Ltd. All rights reserved.
Isotopic niches support the resource breadth hypothesis
Rader, Jonathan A.; Newsome, Seth D.; Sabat, Pablo; Chesser, R. Terry; Dillon, Michael E.; Martinez del Rio, Carlos
2017-01-01
Because a broad spectrum of resource use allows species to persist in a wide range of habitat types, and thus permits them to occupy large geographical areas, and because broadly distributed species have access to more diverse resource bases, the resource breadth hypothesis posits that the diversity of resources used by organisms should be positively related to the extent of their geographic ranges. We investigated isotopic niche width in a small radiation of South American birds in the genus Cinclodes. We analysed feathers of 12 species of Cinclodes to test the isotopic version of the resource breadth hypothesis and to examine the correlation between isotopic niche breadth and morphology. We found a positive correlation between the widths of hydrogen and oxygen isotopic niches (which estimate breadth of elevational range) and widths of the carbon and nitrogen isotopic niches (which estimate the diversity of resources consumed, and hence of habitats used). We also found a positive correlation between broad isotopic niches and wing morphology. Our study not only supports the resource breadth hypothesis but also highlights the usefulness of stable isotope analyses as tools in the exploration of ecological niches. It is an example of a macroecological application of stable isotopes, and it illustrates the importance of scientific collections in ecological studies.
Hypothesis testing in hydrology: Theory and practice
NASA Astrophysics Data System (ADS)
Kirchner, James; Pfister, Laurent
2017-04-01
Well-posed hypothesis tests have spurred major advances in hydrological theory. However, a random sample of recent research papers suggests that in hydrology, as in other fields, hypothesis formulation and testing rarely correspond to the idealized model of the scientific method. Practices such as "p-hacking" or "HARKing" (Hypothesizing After the Results are Known) are major obstacles to more rigorous hypothesis testing in hydrology, along with the well-known problem of confirmation bias - the tendency to value and trust confirmations more than refutations - among both researchers and reviewers. Hypothesis testing is not the only recipe for scientific progress, however: exploratory research, driven by innovations in measurement and observation, has also underlain many key advances. Further improvements in observation and measurement will be vital to both exploratory research and hypothesis testing, and thus to advancing the science of hydrology.
De Meeûs, Thierry
2014-03-01
In population genetics data analysis, researchers are often faced with the problem of making a decision from a series of tests of the same null hypothesis. This is the case, for example, when one wants to test differentiation between pathogens found on different host species sampled from different locations (as many tests as there are locations). Many procedures are available to date, but not all apply to all situations. Finding which tests are significant, or whether the series as a whole is significant, requires different procedures depending on whether the tests are independent or not. In this note I describe several procedures, among the simplest and easiest to undertake, that should allow decision making in most (if not all) situations that population geneticists (or biologists) are likely to meet, in particular in host-parasite systems. Copyright © 2014 Elsevier B.V. All rights reserved.
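Two of the simplest procedures for such a series of tests, assuming independence, can be sketched as follows (the p-values are hypothetical): Fisher's combined test addresses whether the whole series is significant, while Bonferroni and Benjamini-Hochberg address which individual tests survive multiplicity correction:

```python
import numpy as np
from scipy.stats import chi2

pvals = np.array([0.002, 0.03, 0.20, 0.04, 0.65, 0.01, 0.33])
k, alpha = len(pvals), 0.05

# Fisher's method: -2 * sum(log p) ~ chi2 with 2k df under the global null.
X2 = -2.0 * np.log(pvals).sum()
print(f"Fisher combined: X2 = {X2:.2f}, global p = {chi2.sf(X2, df=2 * k):.4f}")

# Bonferroni: reject test i if p_i <= alpha / k (controls family-wise error).
print("Bonferroni rejections:", np.flatnonzero(pvals <= alpha / k))

# Benjamini-Hochberg: largest i with p_(i) <= i * alpha / k (controls FDR).
order = np.argsort(pvals)
thresh = alpha * np.arange(1, k + 1) / k
passed = np.flatnonzero(pvals[order] <= thresh)
n_rej = 0 if passed.size == 0 else passed.max() + 1
print("BH rejections:", np.sort(order[:n_rej]))
```

Note that Fisher's combination assumes independent tests; correlated series (common in spatially structured samples) call for the dedicated procedures the note describes.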
Testing the null hypothesis: the forgotten legacy of Karl Popper?
Wilkinson, Mick
2013-01-01
Testing of the null hypothesis is a fundamental aspect of the scientific method and has its basis in the falsification theory of Karl Popper. Null hypothesis testing makes use of deductive reasoning to ensure that the truth of conclusions is irrefutable. In contrast, attempting to demonstrate new facts by testing the experimental or research hypothesis makes use of inductive reasoning and is prone to the problem of the Uniformity of Nature assumption described by David Hume in the eighteenth century. Despite this issue and the well-documented solution provided by Popper's falsification theory, the majority of publications are still written such that they suggest the research hypothesis is being tested. This is contrary to accepted scientific convention and possibly highlights a poor understanding of the application of conventional significance-based data analysis approaches. Our work should remain driven by conjecture and attempted falsification, such that it is always the null hypothesis that is tested. The write-up of our studies should make it clear that we are indeed testing the null hypothesis and conforming to the established and accepted philosophical conventions of the scientific method.
Unscaled Bayes factors for multiple hypothesis testing in microarray experiments.
Bertolino, Francesco; Cabras, Stefano; Castellanos, Maria Eugenia; Racugno, Walter
2015-12-01
Multiple hypothesis testing collects a series of techniques usually based on p-values as a summary of the available evidence from many statistical tests. In hypothesis testing, under a Bayesian perspective, the evidence for a specified hypothesis against an alternative, conditionally on data, is given by the Bayes factor. In this study, we approach multiple hypothesis testing based on both Bayes factors and p-values, regarding multiple hypothesis testing as a multiple model selection problem. To obtain the Bayes factors we assume default priors that are typically improper. In this case, the Bayes factor is usually undetermined due to the ratio of prior pseudo-constants. We show that ignoring prior pseudo-constants leads to unscaled Bayes factors, which do not invalidate the inferential procedure in multiple hypothesis testing, because they are used within a comparative scheme. In fact, using partial information from the p-values, we are able to approximate the sampling null distribution of the unscaled Bayes factor and use it within Efron's multiple testing procedure. The simulation study suggests that under a normal sampling model, and even with small sample sizes, our approach provides false positive and false negative proportions that are lower than those of other common multiple hypothesis testing approaches based only on p-values. The proposed procedure is illustrated in two simulation studies, and the advantages of its use are shown in the analysis of two microarray experiments. © The Author(s) 2011.
A Critique of One-Tailed Hypothesis Test Procedures in Business and Economics Statistics Textbooks.
ERIC Educational Resources Information Center
Liu, Tung; Stone, Courtenay C.
1999-01-01
Surveys introductory business and economics statistics textbooks and finds that they differ over the best way to explain one-tailed hypothesis tests: the simple null-hypothesis approach or the composite null-hypothesis approach. Argues that the composite null-hypothesis approach contains methodological shortcomings that make it more difficult for…
Monocular tool control, eye dominance, and laterality in New Caledonian crows.
Martinho, Antone; Burns, Zackory T; von Bayern, Auguste M P; Kacelnik, Alex
2014-12-15
Tool use, though rare, is taxonomically widespread, but morphological adaptations for tool use are virtually unknown. We focus on the New Caledonian crow (NCC, Corvus moneduloides), which displays some of the most innovative tool-related behavior among nonhumans. One of their major food sources is larvae extracted from burrows with sticks held diagonally in the bill, oriented with individual, but not species-wide, laterality. Among possible behavioral and anatomical adaptations for tool use, NCCs possess unusually wide binocular visual fields (up to 60°), suggesting that extreme binocular vision may facilitate tool use. Here, we establish that during natural extractions, tool tips can only be viewed by the contralateral eye. Thus, maintaining binocular view of tool tips is unlikely to have selected for wide binocular fields; the selective factor is more likely to have been to allow each eye to see far enough across the midsagittal line to view the tool's tip monocularly. Consequently, we tested the hypothesis that tool side preference follows eye preference and found that eye dominance does predict tool laterality across individuals. This contrasts with humans' species-wide motor laterality and uncorrelated motor-visual laterality, possibly because bill-held tools are viewed monocularly and move in concert with eyes, whereas hand-held tools are visible to both eyes and allow independent combinations of eye preference and handedness. This difference may affect other models of coordination between vision and mechanical control, not necessarily involving tools. Copyright © 2014 Elsevier Ltd. All rights reserved.
HyQue: evaluating hypotheses using Semantic Web technologies.
Callahan, Alison; Dumontier, Michel; Shah, Nigam H
2011-05-17
Key to the success of e-Science is the ability to computationally evaluate expert-composed hypotheses for validity against experimental data. Researchers face the challenge of collecting, evaluating and integrating large amounts of diverse information to compose and evaluate a hypothesis. Confronted with rapidly accumulating data, researchers currently do not have the software tools to undertake the required information integration tasks. We present HyQue, a Semantic Web tool for querying scientific knowledge bases with the purpose of evaluating user submitted hypotheses. HyQue features a knowledge model to accommodate diverse hypotheses structured as events and represented using Semantic Web languages (RDF/OWL). Hypothesis validity is evaluated against experimental and literature-sourced evidence through a combination of SPARQL queries and evaluation rules. Inference over OWL ontologies (for type specifications, subclass assertions and parthood relations) and retrieval of facts stored as Bio2RDF linked data provide support for a given hypothesis. We evaluate hypotheses of varying levels of detail about the genetic network controlling galactose metabolism in Saccharomyces cerevisiae to demonstrate the feasibility of deploying such semantic computing tools over a growing body of structured knowledge in Bio2RDF. HyQue is a query-based hypothesis evaluation system that can currently evaluate hypotheses about the galactose metabolism in S. cerevisiae. Hypotheses as well as the supporting or refuting data are represented in RDF and directly linked to one another allowing scientists to browse from data to hypothesis and vice versa. HyQue hypotheses and data are available at http://semanticscience.org/projects/hyque.
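To illustrate the general pattern (hypotheses and evidence represented as linked RDF, evaluation performed as a query), here is a toy rdflib/SPARQL sketch; the namespace and predicates are invented and do not reflect HyQue's actual knowledge model:

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")

g = Graph()
# Evidence triple (true of the yeast galactose network: Gal4p activates
# GAL1) plus a hypothesis represented as linked data.
g.add((EX.GAL4, EX.activates, EX.GAL1))
g.add((EX.hypothesis1, EX.claims_agent, EX.GAL4))
g.add((EX.hypothesis1, EX.claims_activation_of, EX.GAL1))

# A hypothesis is "supported" when its claimed relation matches an
# evidence triple in the knowledge base.
q = """
PREFIX ex: <http://example.org/>
SELECT ?h WHERE {
    ?h ex:claims_agent ?agent ;
       ex:claims_activation_of ?target .
    ?agent ex:activates ?target .
}
"""
for row in g.query(q):
    print("supported hypothesis:", row.h)
```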
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Brennan T.; Welch, Tim; Witt, Adam M.
The Multi-Year Plan for Research, Development, and Prototype Testing of Standard Modular Hydropower Technology (MYRP) presents a strategy for specifying, designing, testing, and demonstrating the efficacy of standard modular hydropower (SMH) as an environmentally compatible and cost-optimized renewable electricity generation technology. The MYRP provides the context, background, and vision for testing the SMH hypothesis: if standardization, modularity, and preservation of stream functionality become essential and fully realized features of hydropower technology, project design, and regulatory processes, they will enable previously unrealized levels of new project development with increased acceptance, reduced costs, increased predictability of outcomes, and increased value to stakeholders. To achieve success in this effort, the MYRP outlines a framework of stakeholder-validated criteria, models, design tools, testing facilities, and assessment protocols that will facilitate the development of next-generation hydropower technologies.
Debates—Hypothesis testing in hydrology: Theory and practice
NASA Astrophysics Data System (ADS)
Pfister, Laurent; Kirchner, James W.
2017-03-01
The basic structure of the scientific method—at least in its idealized form—is widely championed as a recipe for scientific progress, but the day-to-day practice may be different. Here, we explore the spectrum of current practice in hypothesis formulation and testing in hydrology, based on a random sample of recent research papers. This analysis suggests that in hydrology, as in other fields, hypothesis formulation and testing rarely correspond to the idealized model of the scientific method. Practices such as "p-hacking" or "HARKing" (Hypothesizing After the Results are Known) are major obstacles to more rigorous hypothesis testing in hydrology, along with the well-known problem of confirmation bias—the tendency to value and trust confirmations more than refutations—among both researchers and reviewers. Nonetheless, as several examples illustrate, hypothesis tests have played an essential role in spurring major advances in hydrological theory. Hypothesis testing is not the only recipe for scientific progress, however. Exploratory research, driven by innovations in measurement and observation, has also underlain many key advances. Further improvements in observation and measurement will be vital to both exploratory research and hypothesis testing, and thus to advancing the science of hydrology.
Bayesian inference for psychology. Part II: Example applications with JASP.
Wagenmakers, Eric-Jan; Love, Jonathon; Marsman, Maarten; Jamil, Tahira; Ly, Alexander; Verhagen, Josine; Selker, Ravi; Gronau, Quentin F; Dropmann, Damian; Boutin, Bruno; Meerhoff, Frans; Knight, Patrick; Raj, Akash; van Kesteren, Erik-Jan; van Doorn, Johnny; Šmíra, Martin; Epskamp, Sacha; Etz, Alexander; Matzke, Dora; de Jong, Tim; van den Bergh, Don; Sarafoglou, Alexandra; Steingroever, Helen; Derks, Koen; Rouder, Jeffrey N; Morey, Richard D
2018-02-01
Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists in the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP (http://www.jasp-stats.org), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder's BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
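The kind of analysis JASP automates can be sketched in a few lines of Python; this minimal example assumes the third-party pingouin package (whose ttest function reports a default Bayes factor alongside the frequentist p value) and uses invented data:

```python
# Sketch: frequentist p value vs. default Bayes factor for a two-sample t-test.
# Assumes the third-party pingouin package; the data are invented.
import numpy as np
import pingouin as pg

rng = np.random.default_rng(1)
group_a = rng.normal(loc=100.0, scale=15.0, size=40)
group_b = rng.normal(loc=108.0, scale=15.0, size=40)

result = pg.ttest(group_a, group_b)    # returns a one-row DataFrame
print(result[["T", "p-val", "BF10"]])  # BF10 > 1 favors H1, BF10 < 1 favors H0
```

Unlike a p value, the Bayes factor BF10 can also express evidence in favor of the null hypothesis (BF10 < 1), which is one of the advantages outlined in Part I.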
NASA Astrophysics Data System (ADS)
Nomaguchi, Yutaka; Fujita, Kikuo
This paper proposes a design support framework, named DRIFT (Design Rationale Integration Framework of Three layers), which dynamically captures and manages hypothesis and verification in the design process. The core of DRIFT is a three-layered design process model of action, model operation and argumentation. This model integrates various design support tools and captures the design operations performed on them. The action level captures the sequence of design operations. The model operation level captures the transition of design states, recording design snapshots across design tools. The argumentation level captures the process of setting problems and alternatives. Linking the three levels makes it possible to automatically and efficiently capture and manage iterative hypothesis-and-verification processes through design operations over design tools. In DRIFT, such linkage is extracted through templates of design operations, which are derived from the patterns embedded in design tools such as Design-For-X (DFX) approaches, and the design tools are integrated through an ontology-based representation of design concepts. An argumentation model, gIBIS (graphical Issue-Based Information System), is used for representing dependencies among problems and alternatives, and a Truth Maintenance System (TMS) mechanism is used for managing multiple hypothetical design stages. The paper also demonstrates a prototype implementation of DRIFT and its application to a simple design problem, and concludes with a discussion of future issues.
Teaching Hypothesis Testing by Debunking a Demonstration of Telepathy.
ERIC Educational Resources Information Center
Bates, John A.
1991-01-01
Discusses a lesson designed to demonstrate hypothesis testing to introductory college psychology students. Explains that a psychology instructor demonstrated apparent psychic abilities to students. Reports that students attempted to explain the instructor's demonstrations through hypothesis testing and revision. Provides instructions on performing…
Matheson, Heath E; Familiar, Ariana M; Thompson-Schill, Sharon L
2018-03-02
Theories of embodied cognition propose that we recognize tools in part by reactivating sensorimotor representations of tool use in a process of simulation. If motor simulations play a causal role in tool recognition, then performing a concurrent motor task should differentially modulate recognition of experienced vs. non-experienced tools. We sought to test the hypothesis that an incompatible concurrent motor task modulates conceptual processing of learned vs. non-learned objects by directly manipulating the embodied experience of participants. We trained one group to use a set of novel, 3-D printed tools under the pretense that they were preparing for an archeological expedition to Mars (manipulation group); we trained a second group to report declarative information about how the tools are stored (storage group). With this design, familiarity and visual attention to different object parts were similar for both groups, though their qualitative interactions differed. After learning, participants made familiarity judgments of auditorily presented tool names while performing a concurrent motor task or simply sitting at rest. We showed that familiarity judgments were facilitated by motor state-dependence; specifically, in the manipulation group, familiarity was facilitated by a concurrent motor task, whereas in the storage group familiarity was facilitated while sitting at rest. These results are the first to directly show that manipulation experience differentially modulates conceptual processing of familiar vs. unfamiliar objects, suggesting that embodied representations contribute to recognizing tools.
Leimu, Roosa; Koricheva, Julia
2004-01-01
Temporal changes in the magnitude of research findings have recently been recognized as a general phenomenon in ecology, and have been attributed to the delayed publication of non-significant results and disconfirming evidence. Here we introduce a method of cumulative meta-analysis which allows detection of both temporal trends and publication bias in the ecological literature. To illustrate the application of the method, we used two datasets from recently conducted meta-analyses of studies testing two plant defence theories. Our results revealed three phases in the evolution of the treatment effects. Early studies strongly supported the hypothesis tested, but the magnitude of the effect decreased considerably in later studies. In the latest studies, a trend towards an increase in effect size was observed. In one of the datasets, a cumulative meta-analysis revealed publication bias against studies reporting disconfirming evidence; such studies were published in journals with a lower impact factor compared to studies with results supporting the hypothesis tested. Correlation analysis revealed neither temporal trends nor evidence of publication bias in the datasets analysed. We thus suggest that cumulative meta-analysis should be used as a visual aid to detect temporal trends and publication bias in research findings in ecology in addition to the correlative approach. PMID:15347521
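The core computation of a cumulative meta-analysis is simple enough to sketch; in this illustration (invented effect sizes and variances) an inverse-variance weighted mean effect is recomputed as studies are added in order of publication year, so a drift in the running estimate would suggest a temporal trend:

```python
# Sketch of a cumulative meta-analysis: studies are sorted by publication year
# and an inverse-variance weighted mean effect is recomputed as each study is
# added. All numbers are hypothetical.
import numpy as np

years     = np.array([1990, 1992, 1995, 1998, 2001, 2003])
effects   = np.array([0.80, 0.65, 0.40, 0.35, 0.30, 0.45])  # e.g. log response ratios
variances = np.array([0.10, 0.08, 0.05, 0.06, 0.04, 0.05])

order = np.argsort(years)
w = 1.0 / variances[order]                       # inverse-variance weights
cum_effect = np.cumsum(w * effects[order]) / np.cumsum(w)

for yr, eff in zip(years[order], cum_effect):
    print(f"up to {yr}: cumulative mean effect = {eff:.3f}")
```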
Lash, Ayhan Aytekin; Plonczynski, Donna J; Sehdev, Amikar
2011-01-01
To compare the inclusion and the influences of selected variables on hypothesis testing during the 1980s and 1990s. In spite of the emphasis on conducting inquiry consistent with the tenets of logical positivism, there have been no studies investigating the frequency and patterns of hypothesis testing in nursing research. The sample was obtained from Nursing Research, the research journal with the highest circulation during the period under study. All quantitative studies published during the two decades, including briefs and historical studies, were included in the analyses. A retrospective design was used to select the sample: five years each from the 1980s and 1990s were randomly selected from the journal. Of the 582 studies, 517 met the inclusion criteria. Findings suggest that there was a decline in the use of hypothesis testing in the last decades of the 20th century. Further research is needed to identify the factors that influence the conduct of research with hypothesis testing. Hypothesis testing in nursing research showed a steady decline from the 1980s to the 1990s. Research purposes of explanation and prediction/control increased the likelihood of hypothesis testing. Hypothesis testing strengthens the quality of quantitative studies, increases the generality of findings and provides dependable knowledge. This is particularly true for quantitative studies that aim to explore, explain and predict/control phenomena and/or test theories. The findings also have implications for doctoral programmes, the research preparation of nurse-investigators, and theory testing.
A shift from significance test to hypothesis test through power analysis in medical research.
Singh, G
2006-01-01
Until recently, the medical research literature exhibited a substantial dominance of Fisher's significance-test approach to statistical inference, which concentrates on the probability of type I error, over the Neyman-Pearson hypothesis-test approach, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomises results into significant or non-significant with a P value. The Neyman-Pearson approach speaks of acceptance or rejection of the null hypothesis. Built on the same underlying theory, the two approaches address the same objective and reach conclusions in their own ways. Advances in computing techniques and the availability of statistical software have resulted in the increasing application of power calculations in medical research, and thereby in the reporting of significance-test results in the light of the power of the test. The significance-test approach, when it incorporates power analysis, contains the essence of the hypothesis-test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to the Neyman-Pearson hypothesis-test procedure.
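A hedged sketch of the power calculation that links the two frameworks, assuming the Python statsmodels library; the effect size, alpha and power values are illustrative only:

```python
# Sketch: a priori power analysis for a two-sample t-test, the calculation
# that brings the Neyman-Pearson type II error into study planning.
# Uses statsmodels; the numbers are illustrative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # Cohen's d
                                   alpha=0.05,       # type I error rate
                                   power=0.8,        # 1 - type II error rate
                                   alternative="two-sided")
print(f"required sample size per group: {n_per_group:.1f}")
```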
2011-01-01
Background Although many biological databases are applying semantic web technologies, meaningful biological hypothesis testing cannot be easily achieved. Database-driven high-throughput genomic hypothesis testing requires both the capability of obtaining semantically relevant experimental data and that of performing relevant statistical testing on the retrieved data. Tissue Microarray (TMA) data are semantically rich and contain many biologically important hypotheses awaiting high-throughput conclusions. Methods An application-specific ontology was developed for managing TMA and DNA microarray databases with semantic web technologies. Data were represented as Resource Description Framework (RDF) according to the framework of the ontology. An application for hypothesis testing on TMA data (Xperanto-RDF) was designed and implemented by (1) formulating the syntactic and semantic structures of the hypotheses derived from TMA experiments, (2) formulating SPARQL queries to reflect the semantic structures of the hypotheses, and (3) performing statistical tests with the result sets returned by the SPARQL queries. Results When a user designs a hypothesis in Xperanto-RDF and submits it, the hypothesis can be tested against TMA experimental data stored in Xperanto-RDF. When we evaluated four previously validated hypotheses as an illustration, all were supported by Xperanto-RDF. Conclusions We demonstrated the utility of high-throughput biological hypothesis testing. We believe that preliminary investigation before performing highly controlled experiments can benefit from this approach. PMID:21342584
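The query-then-test pattern described here can be sketched generically; this illustration assumes the Python rdflib and scipy libraries, and the ontology terms (ex:markerStatus, ex:stainScore) and data file tma_data.ttl are hypothetical, not Xperanto-RDF's actual schema:

```python
# Sketch of the general pattern: retrieve hypothesis-relevant observations
# from an RDF store with SPARQL, then run a statistical test on the result
# set. All ontology terms and the data file are hypothetical.
import rdflib
from scipy import stats

g = rdflib.Graph()
g.parse("tma_data.ttl", format="turtle")   # assumed local RDF data file

query = """
PREFIX ex: <http://example.org/tma#>
SELECT ?status ?score WHERE {
    ?core ex:markerStatus ?status ;
          ex:stainScore   ?score .
}
"""
pos = [float(score) for status, score in g.query(query) if str(status) == "positive"]
neg = [float(score) for status, score in g.query(query) if str(status) == "negative"]
print(stats.mannwhitneyu(pos, neg))        # test scores across marker status
```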
Emphasizing the process of science using demonstrations in conceptual chemistry
NASA Astrophysics Data System (ADS)
Lutz, Courtney A.
The purpose of this project was to teach students a method for employing the process of science in a conceptual chemistry classroom when observing a demonstration of a discrepant event. Students observed six demonstrations throughout a trimester study of chemistry and responded to each demonstration by asking as many questions as they could think of, choosing one testable question to answer by making as many hypotheses as possible, and choosing one hypothesis to make predictions about the observed results of this hypothesis when tested. Students were evaluated on their curiosity, confidence, knowledge of the process of science, and knowledge of the nature of science before and after the six demonstrations. Many students showed improvement in, or mastery of, the process of science within the context of conceptual chemistry after six intensive experiences with it. Results of the study also showed students gained confidence in their scientific abilities after completing one trimester of conceptual chemistry. Curiosity and knowledge of the nature of science did not show statistically significant improvement according to the assessment tool. This may have been due to the scope of the demonstration and response activities, which focused on the process of science methodology rather than knowledge of the nature of science, or to the constraints of the assessment tool.
Niknafs, Noushin; Beleva-Guthrie, Violeta; Naiman, Daniel Q.; Karchin, Rachel
2015-01-01
Recent improvements in next-generation sequencing of tumor samples and the ability to identify somatic mutations at low allelic fractions have opened the way for new approaches to model the evolution of individual cancers. The power and utility of these models is increased when tumor samples from multiple sites are sequenced. Temporal ordering of the samples may provide insight into the etiology of both primary and metastatic lesions and rationalizations for tumor recurrence and therapeutic failures. Additional insights may be provided by temporal ordering of evolving subclones—cellular subpopulations with unique mutational profiles. Current methods for subclone hierarchy inference tightly couple the problem of temporal ordering with that of estimating the fraction of cancer cells harboring each mutation. We present a new framework that includes a rigorous statistical hypothesis test and a collection of tools that make it possible to decouple these problems, which we believe will enable substantial progress in the field of subclone hierarchy inference. The methods presented here can be flexibly combined with methods developed by others addressing either of these problems. We provide tools to interpret hypothesis test results, which inform phylogenetic tree construction, and we introduce the first genetic algorithm designed for this purpose. The utility of our framework is systematically demonstrated in simulations. For most tested combinations of tumor purity, sequencing coverage, and tree complexity, good power (≥ 0.8) can be achieved and Type 1 error is well controlled when at least three tumor samples are available from a patient. Using data from three published multi-region tumor sequencing studies of (murine) small cell lung cancer, acute myeloid leukemia, and chronic lymphocytic leukemia, in which the authors reconstructed subclonal phylogenetic trees by manual expert curation, we show how different configurations of our tools can identify either a single tree in agreement with the authors, or a small set of trees, which include the authors’ preferred tree. Our results have implications for improved modeling of tumor evolution and the importance of multi-region tumor sequencing. PMID:26436540
Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dale; Selby, Neil
2012-08-14
Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event-screening hypothesis test (Fisher's and Tippett's tests). The standard error commonly used in the Ms:mb event-screening hypothesis test is not fully consistent with its physical basis. An improved standard error agrees better with the physical basis: it correctly partitions error to include model error as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope (β = 1; Selby et al.), whereas the improved standard error 'fails to reject' H0.
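Both combination rules named above are available in recent versions of SciPy; a minimal sketch with three invented per-phenomenology p values:

```python
# Sketch: combining single-phenomenology p values (e.g. Ms:mb, depth, and a
# third discriminant) into one screening decision using Fisher's and
# Tippett's methods. The p values are invented for illustration.
from scipy.stats import combine_pvalues

p_values = [0.04, 0.20, 0.35]
for method in ("fisher", "tippett"):
    stat, p_combined = combine_pvalues(p_values, method=method)
    print(f"{method}: statistic={stat:.3f}, combined p={p_combined:.3f}")
```

Fisher's method aggregates evidence across all tests, whereas Tippett's method keys on the single most significant test, so the two can disagree when one discriminant is strong and the others are weak.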
Optimizing multiple-choice tests as tools for learning.
Little, Jeri L; Bjork, Elizabeth Ligon
2015-01-01
Answering multiple-choice questions with competitive alternatives can enhance performance on a later test, not only on questions about the information previously tested, but also on questions about related information not previously tested, in particular on questions about information pertaining to the previously incorrect alternatives. In the present research, we assessed a possible explanation for this pattern: when multiple-choice questions contain competitive incorrect alternatives, test-takers are led to retrieve previously studied information pertaining to all of the alternatives in order to discriminate among them and select an answer, with such processing strengthening later access to information associated with both the correct and incorrect alternatives. Supporting this hypothesis, we found enhanced performance on a later cued-recall test for previously nontested questions when their answers had previously appeared as competitive incorrect alternatives in the initial multiple-choice test, but not when they had previously appeared as noncompetitive alternatives. Importantly, however, competitive alternatives were not more likely than noncompetitive alternatives to be intruded as incorrect responses, indicating that a general increased accessibility for previously presented incorrect alternatives could not be the explanation for these results. The present findings, replicated across two experiments (one in which corrective feedback was provided during the initial multiple-choice testing, and one in which it was not), thus strongly suggest that competitive multiple-choice questions can trigger beneficial retrieval processes for both tested and related information, and the results have implications for the effective use of multiple-choice tests as tools for learning.
Isotopic niches support the resource breadth hypothesis.
Rader, Jonathan A; Newsome, Seth D; Sabat, Pablo; Chesser, R Terry; Dillon, Michael E; Martínez Del Rio, Carlos
2017-03-01
Because a broad spectrum of resource use allows species to persist in a wide range of habitat types, and thus permits them to occupy large geographical areas, and because broadly distributed species have access to more diverse resource bases, the resource breadth hypothesis posits that the diversity of resources used by organisms should be positively related with the extent of their geographic ranges. We investigated isotopic niche width in a small radiation of South American birds in the genus Cinclodes. We analysed feathers of 12 species of Cinclodes to test the isotopic version of the resource breadth hypothesis and to examine the correlation between isotopic niche breadth and morphology. We found a positive correlation between the widths of hydrogen and oxygen isotopic niches (which estimate breadth of elevational range) and widths of the carbon and nitrogen isotopic niches (which estimates the diversity of resources consumed, and hence of habitats used). We also found a positive correlation between broad isotopic niches and wing morphology. Our study not only supports the resource breadth hypothesis but it also highlights the usefulness of stable isotope analyses as tools in the exploration of ecological niches. It is an example of a macroecological application of stable isotopes. It also illustrates the importance of scientific collections in ecological studies. © 2016 The Authors. Journal of Animal Ecology © 2016 British Ecological Society.
Tenhaven, Christoph; Tipold, Andrea; Fischer, Martin R; Ehlers, Jan P
2013-01-01
Informal and formal lifelong learning is essential at university and in the workplace. Apart from classical learning techniques, Web 2.0 tools can be used. It is controversial whether a so-called net generation exists among people under 30. To test the hypothesis that a net generation exists among students and young veterinarians, an online survey of students and veterinarians was conducted in the German-speaking countries, advertised via online media and traditional print media. 1780 people took part in the survey. Students and veterinarians have different usage patterns regarding social networks (91.9% vs. 69%) and instant messaging (55.9% vs. 24.5%). All tools were used predominantly passively and in private, and to a lesser extent professionally and for studying. The use of Web 2.0 tools is useful; however, teaching information and media skills, preparing codes of conduct for the internet, and verifying user-generated content are essential.
Gould, Douglas J.; Terrell, Mark A.; Fleming, Jo
2015-01-01
This usability study evaluated users' perceptions of a multimedia prototype for a new e-learning tool: Anatomy of the Central Nervous System: A Multimedia Course. Usability testing is a collection of formative evaluation methods that inform the developmental design of e-learning tools to maximize user acceptance, satisfaction, and adoption. Sixty-two study participants piloted the prototype and completed a usability questionnaire designed to measure two usability properties: program need and program applicability. Statistical analyses were used to test the hypothesis that the multimedia prototype was well designed and highly usable: it was perceived as 1) highly needed across a spectrum of educational contexts, 2) highly applicable in supporting the pedagogical processes of teaching and learning neuroanatomy, and 3) highly usable by all types of users. Three independent variables represented user differences: level of expertise (faculty vs. student), age, and gender. Analysis of the results supports the research hypotheses that the prototype was designed well for different types of users in various educational contexts and for supporting the pedagogy of neuroanatomy. In addition, the results suggest that the multimedia program will be most useful as a neuroanatomy review tool for health-professions students preparing for licensing or board exams. This study demonstrates the importance of integrating quality properties of usability with principles of human learning during the instructional design process for multimedia products. PMID:19177405
Li, Fuyi; Li, Chen; Marquez-Lago, Tatiana T; Leier, André; Akutsu, Tatsuya; Purcell, Anthony W; Smith, A Ian; Lithgow, Trevor; Daly, Roger J; Song, Jiangning; Chou, Kuo-Chen
2018-06-27
Kinase-regulated phosphorylation is a ubiquitous type of post-translational modification (PTM) in both eukaryotic and prokaryotic cells. Phosphorylation plays fundamental roles in many signalling pathways and biological processes, such as protein degradation and protein-protein interactions. Experimental studies have revealed that signalling defects caused by aberrant phosphorylation are highly associated with a variety of human diseases, especially cancers. In light of this, a number of computational methods aiming to accurately predict protein kinase family-specific or kinase-specific phosphorylation sites have been established, thereby facilitating phosphoproteomic data analysis. In this work, we present Quokka, a novel bioinformatics tool that allows users to rapidly and accurately identify human kinase family-regulated phosphorylation sites. Quokka was developed by using a variety of sequence scoring functions combined with an optimized logistic regression algorithm. We evaluated Quokka based on well-prepared up-to-date benchmark and independent test datasets, curated from the Phospho.ELM and UniProt databases, respectively. The independent test demonstrates that Quokka improves the prediction performance compared with state-of-the-art computational tools for phosphorylation prediction. In summary, our tool provides users with high-quality predicted human phosphorylation sites for hypothesis generation and biological validation. The Quokka webserver and datasets are freely available at http://quokka.erc.monash.edu/. Supplementary data are available at Bioinformatics online.
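The general approach, sequence-window features fed to a logistic regression classifier, can be sketched as follows; this is a generic scikit-learn illustration with toy 9-mer windows and labels, not Quokka's actual scoring functions or training data:

```python
# Generic sketch: one-hot encode fixed-length sequence windows around
# candidate sites and fit a logistic regression classifier. The windows and
# labels are invented; Quokka's real pipeline is more elaborate.
import numpy as np
from sklearn.linear_model import LogisticRegression

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(window: str) -> np.ndarray:
    """One-hot encode a peptide window as a flat feature vector."""
    vec = np.zeros((len(window), len(AMINO)))
    for i, aa in enumerate(window):
        vec[i, AMINO.index(aa)] = 1.0
    return vec.ravel()

windows = ["AKRQSPTKA", "GDELSLKRD", "AVLLSGTLI", "PIVGSHEGA"]  # toy 9-mers
labels  = [1, 1, 0, 0]          # 1 = phosphorylated site, 0 = background

X = np.array([one_hot(w) for w in windows])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict_proba(X)[:, 1])   # predicted site probabilities
```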
NASA Technical Reports Server (NTRS)
Bauman, William H., III
2010-01-01
The 12-km resolution North American Mesoscale (NAM) model (MesoNAM) is used by the 45th Weather Squadron (45 WS) Launch Weather Officers at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) to support space launch weather operations. The 45 WS tasked the Applied Meteorology Unit (AMU) to conduct an objective, statistics-based analysis of MesoNAM output compared to wind tower mesonet observations and then develop an operational tool to display the results. The National Centers for Environmental Prediction began running the current version of the MesoNAM in mid-August 2006. The period of record for the dataset was 1 September 2006 - 31 January 2010. The AMU evaluated MesoNAM hourly forecasts from 0 to 84 hours based on model initialization times of 00, 06, 12 and 18 UTC. The MesoNAM forecast winds, temperature and dew point were compared to the observed values of these parameters from the sensors in the KSC/CCAFS wind tower network. The data sets were stratified by model initialization time, month and onshore/offshore flow for each wind tower. Statistics computed included bias (mean difference), standard deviation of the bias, root mean square error (RMSE) and a hypothesis test for bias = 0. Twelve wind towers located in close proximity to key launch complexes were used for the statistical analysis, with the sensors on the towers positioned at varying heights including 6 ft, 30 ft, 54 ft, 60 ft, 90 ft, 162 ft, 204 ft and 230 ft, depending on the launch vehicle and associated weather launch commit criteria being evaluated. These twelve wind towers support activities for the Space Shuttle (launch and landing), Delta IV, Atlas V and Falcon 9 launch vehicles. For all twelve towers, the results indicate a diurnal signal in the bias of temperature (T) and a weaker but discernible diurnal signal in the bias of dew point temperature (T(sub d)) in the MesoNAM forecasts. Also, the standard deviation of the bias and the RMSE of T, T(sub d), wind speed and wind direction indicated that the model error increased with the forecast period for all four parameters. Hypothesis testing uses statistics to determine the probability that a given hypothesis is true. The goal of the hypothesis test was to determine whether the model bias of any of the parameters assessed throughout the model forecast period was statistically zero. For this dataset, if the test statistic at a data point fell between -1.96 and +1.96, the bias at that point was effectively zero and the model forecast for that point was considered to have no error. A graphical user interface (GUI) was developed so the 45 WS would have an operational tool at their disposal that would be easy to navigate among the multiple stratifications of information, including tower locations, month, model initialization times, sensor heights and onshore/offshore flow. The AMU developed the GUI using HyperText Markup Language (HTML) so the tool could be used in most popular web browsers on computers running different operating systems such as Microsoft Windows and Linux.
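The verification statistics described above reduce to a few lines; in this sketch the forecast and observation arrays are invented stand-ins for MesoNAM output and tower observations:

```python
# Sketch of the verification statistics: bias, standard deviation of the
# bias, RMSE, and a z-type test of bias = 0. Data are invented stand-ins.
import numpy as np

rng = np.random.default_rng(7)
obs = rng.normal(25.0, 3.0, size=500)         # e.g. tower temperature (C)
fcst = obs + rng.normal(0.4, 1.2, size=500)   # forecasts with a warm bias

diff = fcst - obs
bias = diff.mean()
sd_bias = diff.std(ddof=1)
rmse = np.sqrt((diff ** 2).mean())
z = bias / (sd_bias / np.sqrt(len(diff)))     # test statistic for bias = 0

print(f"bias={bias:.2f}  sd={sd_bias:.2f}  rmse={rmse:.2f}  z={z:.2f}")
print("bias indistinguishable from 0" if -1.96 <= z <= 1.96 else "bias != 0")
```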
Hanrahan, Lawrence P.; Anderson, Henry A.; Busby, Brian; Bekkedal, Marni; Sieger, Thomas; Stephenson, Laura; Knobeloch, Lynda; Werner, Mark; Imm, Pamela; Olson, Joseph
2004-01-01
In this article we describe the development of an information system for environmental childhood cancer surveillance. The Wisconsin Cancer Registry annually receives more than 25,000 incident case reports. Approximately 269 cases per year involve children. Over time, there has been considerable community interest in understanding the role the environment plays as a cause of these cancer cases. Wisconsin’s Public Health Information Network (WI-PHIN) is a robust web portal integrating both Health Alert Network and National Electronic Disease Surveillance System components. WI-PHIN is the information technology platform for all public health surveillance programs. Functions include the secure, automated exchange of cancer case data between public health–based and hospital-based cancer registrars; web-based supplemental data entry for environmental exposure confirmation and hypothesis testing; automated data analysis, visualization, and exposure–outcome record linkage; directories of public health and clinical personnel for role-based access control of sensitive surveillance information; public health information dissemination and alerting; and information technology security and critical infrastructure protection. For hypothesis generation, cancer case data are sent electronically to WI-PHIN and populate the integrated data repository. Environmental data are linked and the exposure–disease relationships are explored using statistical tools for ecologic exposure risk assessment. For hypothesis testing, case–control interviews collect exposure histories, including parental employment and residential histories. This information technology approach can thus serve as the basis for building a comprehensive system to assess environmental cancer etiology. PMID:15471739
Waese, Jamie; Fan, Jim; Yu, Hans; Fucile, Geoffrey; Shi, Ruian; Cumming, Matthew; Town, Chris; Stuerzlinger, Wolfgang
2017-01-01
A big challenge in current systems biology research arises when different types of data must be accessed from separate sources and visualized using separate tools. The high cognitive load required to navigate such a workflow is detrimental to hypothesis generation. Accordingly, there is a need for a robust research platform that incorporates all data and provides integrated search, analysis, and visualization features through a single portal. Here, we present ePlant (http://bar.utoronto.ca/eplant), a visual analytic tool for exploring multiple levels of Arabidopsis thaliana data through a zoomable user interface. ePlant connects to several publicly available web services to download genome, proteome, interactome, transcriptome, and 3D molecular structure data for one or more genes or gene products of interest. Data are displayed with a set of visualization tools that are presented using a conceptual hierarchy from big to small, and many of the tools combine information from more than one data type. We describe the development of ePlant in this article and present several examples illustrating its integrative features for hypothesis generation. We also describe the process of deploying ePlant as an “app” on Araport. Building on readily available web services, the code for ePlant is freely available for any other biological species research. PMID:28808136
ON THE SUBJECT OF HYPOTHESIS TESTING
Ugoni, Antony
1993-01-01
In this paper, the definition of a statistical hypothesis is discussed, along with the considerations that need to be addressed when testing a hypothesis. In particular, the p-value, significance level, and power of a test are reviewed. Finally, the often-quoted confidence interval is given a brief introduction. PMID:17989768
Some consequences of using the Horsfall-Barratt scale for hypothesis testing
USDA-ARS?s Scientific Manuscript database
Comparing treatment effects by hypothesis testing is a common practice in plant pathology. Nearest percent estimates (NPEs) of disease severity were compared to Horsfall-Barratt (H-B) scale data to explore whether there was an effect of assessment method on hypothesis testing. A simulation model ba...
Hypothesis Testing in Task-Based Interaction
ERIC Educational Resources Information Center
Choi, Yujeong; Kilpatrick, Cynthia
2014-01-01
Whereas studies show that comprehensible output facilitates L2 learning, hypothesis testing has received little attention in Second Language Acquisition (SLA). Following Shehadeh (2003), we focus on hypothesis testing episodes (HTEs) in which learners initiate repair of their own speech in interaction. In the context of a one-way information gap…
Classroom-Based Strategies to Incorporate Hypothesis Testing in Functional Behavior Assessments
ERIC Educational Resources Information Center
Lloyd, Blair P.; Weaver, Emily S.; Staubitz, Johanna L.
2017-01-01
When results of descriptive functional behavior assessments are unclear, hypothesis testing can help school teams understand how the classroom environment affects a student's challenging behavior. This article describes two hypothesis testing strategies that can be used in classroom settings: structural analysis and functional analysis. For each…
Hypothesis Testing in the Real World
ERIC Educational Resources Information Center
Miller, Jeff
2017-01-01
Critics of null hypothesis significance testing suggest that (a) its basic logic is invalid and (b) it addresses a question that is of no interest. In contrast to (a), I argue that the underlying logic of hypothesis testing is actually extremely straightforward and compelling. To substantiate that, I present examples showing that hypothesis…
Pruetz, J D; Bertolani, P; Ontl, K Boyer; Lindshield, S; Shelley, M; Wessling, E G
2015-04-01
For anthropologists, meat eating by primates like chimpanzees (Pan troglodytes) warrants examination given the emphasis on hunting in human evolutionary history. As referential models, apes provide insight into the evolution of hominin hunting, given their phylogenetic relatedness and challenges reconstructing extinct hominin behaviour from palaeoanthropological evidence. Among chimpanzees, adult males are usually the main hunters, capturing vertebrate prey by hand. Savannah chimpanzees (P. t. verus) at Fongoli, Sénégal are the only known non-human population that systematically hunts vertebrate prey with tools, making them an important source for hypotheses of early hominin behaviour based on analogy. Here, we test the hypothesis that sex and age patterns in tool-assisted hunting (n=308 cases) at Fongoli occur and differ from chimpanzees elsewhere, and we compare tool-assisted hunting to the overall hunting pattern. Males accounted for 70% of all captures but hunted with tools less than expected based on their representation on hunting days. Females accounted for most tool-assisted hunting. We propose that social tolerance at Fongoli, along with the tool-assisted hunting method, permits individuals other than adult males to capture and retain control of prey, which is uncommon for chimpanzees. We assert that tool-assisted hunting could have similarly been important for early hominins.
ERIC Educational Resources Information Center
Kwon, Yong-Ju; Jeong, Jin-Su; Park, Yun-Bok
2006-01-01
The purpose of the present study was to test the hypothesis that student's abductive reasoning skills play an important role in the generation of hypotheses on pendulum motion tasks. To test the hypothesis, a hypothesis-generating test on pendulum motion, and a prior-belief test about pendulum motion were developed and administered to a sample of…
THE NASA AMES POLYCYCLIC AROMATIC HYDROCARBON INFRARED SPECTROSCOPIC DATABASE: THE COMPUTED SPECTRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauschlicher, C. W.; Ricca, A.; Boersma, C.
The astronomical emission features, formerly known as the unidentified infrared bands, are now commonly ascribed to polycyclic aromatic hydrocarbons (PAHs). The laboratory experiments and computational modeling done at the NASA Ames Research Center to create a collection of PAH IR spectra relevant to test and refine the PAH hypothesis have been assembled into a spectroscopic database. This database now contains over 800 PAH spectra spanning 2-2000 μm (5000-5 cm⁻¹). These data are now available on the World Wide Web at www.astrochem.org/pahdb. This paper presents an overview of the computational spectra in the database and the tools developed to analyze and interpret astronomical spectra using the database. A description of the online and offline user tools available on the Web site is also presented.
Effects of Phasor Measurement Uncertainty on Power Line Outage Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chen; Wang, Jianhui; Zhu, Hao
2014-12-01
Phasor measurement unit (PMU) technology provides an effective tool to enhance the wide-area monitoring systems (WAMSs) in power grids. Although extensive studies have been conducted to develop several PMU applications in power systems (e.g., state estimation, oscillation detection and control, voltage stability analysis, and line outage detection), the uncertainty aspects of PMUs have not been adequately investigated. This paper focuses on quantifying the impact of PMU uncertainty on power line outage detection and identification, in which a limited number of PMUs installed at a subset of buses are utilized to detect and identify the line outage events. Specifically, the line outage detection problem is formulated as a multi-hypothesis test, and a general Bayesian criterion is used for the detection procedure, in which the PMU uncertainty is analytically characterized. We further apply the minimum detection error criterion for the multi-hypothesis test and derive the expected detection error probability in terms of PMU uncertainty. The framework proposed provides fundamental guidance for quantifying the effects of PMU uncertainty on power line outage detection. Case studies are provided to validate our analysis and show how PMU uncertainty influences power line outage detection.
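A deliberately minimal sketch of a Bayesian multi-hypothesis decision of the kind described, with each hypothesis predicting a mean phasor-angle change and PMU uncertainty entering as Gaussian measurement noise; all numbers are invented:

```python
# Sketch: Bayesian multi-hypothesis decision. Each hypothesis k (an outage
# pattern) predicts a mean phasor-angle change mu[k]; PMU uncertainty is
# modeled as Gaussian noise. Numbers are invented and the model is minimal.
import numpy as np

mu = np.array([[0.0, 0.0],      # H0: no outage
               [0.8, 0.1],      # H1: outage on line 1
               [0.2, 0.9]])     # H2: outage on line 2
sigma = 0.3                     # PMU measurement noise (std dev)
prior = np.array([0.90, 0.05, 0.05])

y = np.array([0.75, 0.15])      # observed angle changes at two PMU buses

# Gaussian log-likelihood of y under each hypothesis, then posterior
loglik = -np.sum((y - mu) ** 2, axis=1) / (2 * sigma ** 2)
post = prior * np.exp(loglik - loglik.max())
post /= post.sum()

print("posterior:", np.round(post, 3), "-> decide H%d" % post.argmax())
```

Choosing the hypothesis with the largest posterior is exactly the minimum-detection-error rule mentioned above; larger sigma (more PMU uncertainty) flattens the posterior and raises the error probability.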
Sun, Yanqing; Sun, Liuquan; Zhou, Jie
2013-07-01
This paper studies the generalized semiparametric regression model for longitudinal data where the covariate effects are constant for some covariates and time-varying for others. Different link functions can be used to allow more flexible modelling of longitudinal data. The nonparametric components of the model are estimated using a local linear estimating equation and the parametric components are estimated through a profile estimating function. The method automatically adjusts for heterogeneity of sampling times, allowing the sampling strategy to depend on the past sampling history as well as possibly time-dependent covariates, without specifically modeling such dependence. A K-fold cross-validation bandwidth selection is proposed as a working tool for locating an appropriate bandwidth. A criterion for selecting the link function is proposed to provide a better fit to the data. Large sample properties of the proposed estimators are investigated. Large sample pointwise and simultaneous confidence intervals for the regression coefficients are constructed. Formal hypothesis testing procedures are proposed to check for the covariate effects and whether the effects are time-varying. A simulation study is conducted to examine the finite sample performance of the proposed estimation and hypothesis testing procedures. The methods are illustrated with a data example.
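The bandwidth-selection idea can be illustrated generically; in this sketch a Nadaraya-Watson kernel smoother stands in for the paper's local linear estimating equations, and the data and candidate bandwidths are invented:

```python
# Generic sketch of K-fold cross-validation bandwidth selection for a kernel
# smoother. A Nadaraya-Watson estimator stands in for the paper's local
# linear estimating equations; data and bandwidths are invented.
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.3, 200)

def nw_predict(t_train, y_train, t_new, h):
    """Nadaraya-Watson estimate with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((t_new[:, None] - t_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

K = 5
folds = np.array_split(rng.permutation(len(t)), K)  # fixed folds for all h

def cv_error(h):
    err = 0.0
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(len(t)), test_idx)
        pred = nw_predict(t[train_idx], y[train_idx], t[test_idx], h)
        err += np.sum((y[test_idx] - pred) ** 2)
    return err / len(t)

bandwidths = [0.02, 0.05, 0.1, 0.2]
print("selected bandwidth:", min(bandwidths, key=cv_error))
```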
Identifying biologically relevant differences between metagenomic communities.
Parks, Donovan H; Beiko, Robert G
2010-03-15
Metagenomics is the study of genetic material recovered directly from environmental samples. Taxonomic and functional differences between metagenomic samples can highlight the influence of ecological factors on patterns of microbial life in a wide range of habitats. Statistical hypothesis tests can help us distinguish ecological influences from sampling artifacts, but knowledge of only the P-value from a statistical hypothesis test is insufficient to make inferences about biological relevance. Current reporting practices for pairwise comparative metagenomics are inadequate, and better tools are needed for comparative metagenomic analysis. We have developed a new software package, STAMP, for comparative metagenomics that supports best practices in analysis and reporting. Examination of a pair of iron mine metagenomes demonstrates that deeper biological insights can be gained using the statistical techniques available in our software. An analysis of the functional potential of 'Candidatus Accumulibacter phosphatis' in two enhanced biological phosphorus removal metagenomes identified several subsystems that differ between the A. phosphatis strains in these related communities, including phosphate metabolism, secretion and metal transport. Python source code and binaries are freely available from our website at http://kiwi.cs.dal.ca/Software/STAMP. Contact: beiko@cs.dal.ca. Supplementary data are available at Bioinformatics online.
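The reporting practice STAMP encourages, a p value paired with an effect size and confidence interval, can be sketched as follows; the counts are invented stand-ins for one functional category in two metagenomes, and the normal-approximation interval is a simplification:

```python
# Sketch: pair a hypothesis-test p value with an effect size (difference of
# proportions and its 95% CI) when comparing a feature between two
# metagenomes. Counts are invented; the CI uses a normal approximation.
import numpy as np
from scipy.stats import fisher_exact

hits = np.array([120, 85])        # reads assigned to the category
totals = np.array([10000, 9500])  # total classified reads per metagenome

_, p = fisher_exact([[hits[0], totals[0] - hits[0]],
                     [hits[1], totals[1] - hits[1]]])

p1, p2 = hits / totals
diff = p1 - p2
se = np.sqrt(p1 * (1 - p1) / totals[0] + p2 * (1 - p2) / totals[1])
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"p={p:.4f}, effect size={diff:.4%}, 95% CI=({lo:.4%}, {hi:.4%})")
```

A tiny p value with a negligible effect size would be statistically significant but biologically irrelevant, which is precisely the distinction the abstract draws.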
Uomini, Natalie Thaïs; Meyer, Georg Friedrich
2013-01-01
The popular theory that complex tool-making and language co-evolved in the human lineage rests on the hypothesis that both skills share underlying brain processes and systems. However, language and stone tool-making have so far only been studied separately using a range of neuroimaging techniques and diverse paradigms. We present the first-ever study of brain activation that directly compares active Acheulean tool-making and language. Using functional transcranial Doppler ultrasonography (fTCD), we measured brain blood flow lateralization patterns (hemodynamics) in subjects who performed two tasks designed to isolate the planning component of Acheulean stone tool-making and cued word generation as a language task. We show highly correlated hemodynamics in the initial 10 seconds of task execution. Stone tool-making and cued word generation cause common cerebral blood flow lateralization signatures in our participants. This is consistent with a shared neural substrate for prehistoric stone tool-making and language, and is compatible with language evolution theories that posit a co-evolution of language and manual praxis. In turn, our results support the hypothesis that aspects of language might have emerged as early as 1.75 million years ago, with the start of Acheulean technology.
Making Knowledge Delivery Failsafe: Adding Step Zero in Hypothesis Testing
ERIC Educational Resources Information Center
Pan, Xia; Zhou, Qiang
2010-01-01
Knowledge of statistical analysis is increasingly important for professionals in modern business. For example, hypothesis testing is one of the critical topics for quality managers and team workers in Six Sigma training programs. Delivering the knowledge of hypothesis testing effectively can be an important step for the incapable learners or…
Testing of Hypothesis in Equivalence and Non Inferiority Trials-A Concept.
Juneja, Atul; Aggarwal, Abha R; Adhikari, Tulsi; Pandey, Arvind
2016-04-01
Establishing the appropriate hypothesis is one of the important steps in carrying out statistical tests and analyses, and its understanding is important for interpreting the results of statistical analysis. The current communication attempts to convey the concept of testing of hypothesis in non-inferiority and equivalence trials, where the null hypothesis is just the reverse of what is set up in conventional superiority trials. It is likewise tested for rejection, in order to establish the fact the researcher intends to prove. It is important to mention that equivalence or non-inferiority cannot be proved by accepting the null hypothesis of no difference. Hence, establishing the appropriate statistical hypothesis is extremely important to arrive at a meaningful conclusion for the set objectives in research.
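The reversal of the null hypothesis can be written compactly; a sketch in standard notation, where δ is the prespecified margin and μT, μC are the treatment and control means:

```latex
\begin{align*}
\text{Superiority:}            \quad & H_0\colon \mu_T - \mu_C \le 0
  \quad \text{vs.} \quad H_1\colon \mu_T - \mu_C > 0 \\
\text{Non-inferiority:}        \quad & H_0\colon \mu_T - \mu_C \le -\delta
  \quad \text{vs.} \quad H_1\colon \mu_T - \mu_C > -\delta \\
\text{Equivalence (two one-sided tests):} \quad & H_0\colon |\mu_T - \mu_C| \ge \delta
  \quad \text{vs.} \quad H_1\colon |\mu_T - \mu_C| < \delta
\end{align*}
```

In each case it is rejection of H0 that establishes the claim of interest; failing to reject a superiority null is never, by itself, evidence of equivalence.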
Testing goodness of fit in regression: a general approach for specified alternatives.
Solari, Aldo; le Cessie, Saskia; Goeman, Jelle J
2012-12-10
When fitting generalized linear models or the Cox proportional hazards model, it is important to have tools to test for lack of fit. Because lack of fit comes in all shapes and sizes, distinguishing among different types of lack of fit is of practical importance. We argue that an adequate diagnosis of lack of fit requires a specified alternative model. Such specification identifies the type of lack of fit the test is directed against, so that if we reject the null hypothesis, we know the direction of the departure from the model. The goodness-of-fit approach of this paper makes it possible to treat different types of lack of fit within a unified general framework and to consider many existing tests as special cases. Connections with penalized likelihood and random effects are discussed, and the application of the proposed approach is illustrated with medical examples. Tailored functions for goodness-of-fit testing have been implemented in the R package globaltest. Copyright © 2012 John Wiley & Sons, Ltd.
Facio, Flavia M; Sapp, Julie C; Linn, Amy; Biesecker, Leslie G
2012-10-10
Massively-parallel sequencing (MPS) technologies create challenges for informed consent of research participants given the enormous scale of the data and the wide range of potential results. We propose that the consent process in these studies be based on whether they use MPS to test a hypothesis or to generate hypotheses. To demonstrate the differences in these approaches to informed consent, we describe the consent processes for two MPS studies. The purpose of our hypothesis-testing study is to elucidate the etiology of rare phenotypes using MPS. The purpose of our hypothesis-generating study is to test the feasibility of using MPS to generate clinical hypotheses, and to approach the return of results as an experimental manipulation. Issues to consider in both designs include: volume and nature of the potential results, primary versus secondary results, return of individual results, duty to warn, length of interaction, target population, and privacy and confidentiality. The categorization of MPS studies as hypothesis-testing versus hypothesis-generating can help to clarify the issue of so-called incidental or secondary results for the consent process, and aid the communication of the research goals to study participants.
An Exercise for Illustrating the Logic of Hypothesis Testing
ERIC Educational Resources Information Center
Lawton, Leigh
2009-01-01
Hypothesis testing is one of the more difficult concepts for students to master in a basic, undergraduate statistics course. Students often are puzzled as to why statisticians simply don't calculate the probability that a hypothesis is true. This article presents an exercise that forces students to lay out on their own a procedure for testing a…
ERIC Educational Resources Information Center
Wilcox, Rand R.; Serang, Sarfaraz
2017-01-01
The article provides perspectives on p values, null hypothesis testing, and alternative techniques in light of modern robust statistical methods. Null hypothesis testing and "p" values can provide useful information provided they are interpreted in a sound manner, which includes taking into account insights and advances that have…
Hypothesis Testing Using Spatially Dependent Heavy Tailed Multisensor Data
2014-12-01
Office of Research, 113 Bowne Hall, Syracuse, NY 13244-1200. Abstract fragments: surrogate data consistent with the null hypothesis of linearity can be used to estimate the distribution of a test statistic that can discriminate between the null and alternative hypotheses. [Figure caption residue: test for nonlinearity; the histogram is generated using the surrogate data, and the statistic of the original time series is represented by the solid line.]
Hand-independent representation of tool-use pantomimes in the left anterior intraparietal cortex.
Ogawa, Kenji; Imai, Fumihito
2016-12-01
Previous neuropsychological studies of ideomotor apraxia (IMA) indicated impairments in pantomime actions for tool use for both right and left hands following lesions of parieto-premotor cortices in the left hemisphere. Using functional magnetic resonance imaging (fMRI) with multi-voxel pattern analysis (MVPA), we tested the hypothesis that the left parieto-premotor cortices are involved in the storage or retrieval of hand-independent representation of tool-use actions. In the fMRI scanner, one of three kinds of tools was displayed in pictures or letters, and the participants made pantomimes of the use of these tools using the right hand for the picture stimuli or with the left hand for the letters. We then used MVPA to classify which kind of tool the subjects were pantomiming. Whole-brain searchlight analysis revealed successful decoding using the activities largely in the contralateral primary sensorimotor region, ipsilateral cerebellum, and bilateral early visual area, which may reflect differences in low-level sensorimotor components for three types of pantomimes. Furthermore, a successful cross-classification between the right and left hands was possible using the activities of the left inferior parietal lobule (IPL) near the junction of the anterior intraparietal sulcus. Our finding indicates that the left anterior intraparietal cortex plays an important role in the production of tool-use pantomimes in a hand-independent manner, and independent of stimuli modality.
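The cross-classification logic can be sketched with scikit-learn; here a linear decoder is trained on right-hand (picture-cued) trials and tested on left-hand (word-cued) trials, with invented arrays standing in for voxel patterns from the left anterior IPL:

```python
# Sketch of MVPA cross-classification: train a decoder on right-hand trials,
# test on left-hand trials; above-chance transfer implies a hand-independent
# representation. All arrays are invented stand-ins for voxel patterns.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_trials, n_voxels = 60, 100
tools = rng.integers(0, 3, size=n_trials)        # 3 tool identities

signal = rng.normal(size=(3, n_voxels))          # shared tool code per identity
X_right = signal[tools] + rng.normal(0, 2.0, (n_trials, n_voxels))
X_left  = signal[tools] + rng.normal(0, 2.0, (n_trials, n_voxels))

clf = SVC(kernel="linear").fit(X_right, tools)   # train on right-hand trials
acc = clf.score(X_left, tools)                   # test on left-hand trials
print(f"cross-hand decoding accuracy: {acc:.2f} (chance = 0.33)")
```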
Faye, Alexandrine; Jacquin-Courtois, Sophie; Osiurak, François
2018-03-01
The purpose of this study was to deepen our understanding of the cognitive bases of human tool use based on the technical reasoning hypothesis (i.e., the reasoning-based approach). This approach assumes that tool use is supported by the ability to reason about an object's physical properties (e.g., length, weight, strength, etc.) to perform mechanical actions (e.g., lever). In this framework, an important issue is to understand whether left-brain-damaged (LBD) individuals with tool-use deficits are still able to estimate the physical object's properties necessary to use the tool. Eleven LBD patients and 12 control participants performed 3 original experimental tasks: Use-Length (visual evaluation of the length of a stick to bring down a target), Visual-Length (to visually compare objects of different lengths) and Addition-Length (to visually compare added lengths). Participants were also tested on conventional tasks: Familiar Tool Use and Mechanical Problem-Solving (novel tools). LBD patients had more difficulties than controls on both conventional tasks. No significant differences were observed for the 3 experimental tasks. These results extend the reasoning-based approach, stressing that it might not be the representation of length that is impaired in LBD patients, but rather the ability to generate mechanical actions based on physical object properties. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
López-Valcárcel, Beatriz G; González-Martel, Christian; Peiro, Salvador
2018-01-01
Objective Newcomb-Benford’s Law (NBL) proposes a regular distribution for first digits, second digits and digit combinations applicable to many different naturally occurring sources of data. Testing deviations from NBL is used in many datasets as a screening tool for identifying data trustworthiness problems. This study aims to compare publicly available waiting list (WL) data from Finland and Spain to test NBL as an instrument to flag up potential manipulation in WLs. Design Analysis of the frequency of first digits in Finnish and Spanish WLs to determine if their distribution is similar to the pattern documented by NBL. Deviations from the expected first digit frequency were analysed using Pearson’s χ2, mean absolute deviation and Kuiper tests. Setting/participants Publicly available WL data from Finland and Spain, two countries with universal health insurance and National Health Systems but characterised by different levels of transparency and good governance standards. Main outcome measures Adjustment of the observed distribution of the numbers reported in Finnish and Spanish WL data to the expected distribution according to NBL. Results WL data reported by the Finnish health system fit first-digit NBL according to all statistical tests used (p=0.6519 in the χ2 test). For the Spanish data, this hypothesis was rejected in all tests (p<0.0001 in the χ2 test). Conclusions Testing deviations from the NBL distribution can be a useful tool to identify problems with WL data trustworthiness and to signal the need for further testing. PMID:29743333
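A first-digit screen of this kind is straightforward to sketch; the following illustration uses SciPy's chi-square test against the Newcomb-Benford first-digit distribution, with invented waiting-list counts (a real screen would need far more than 14 values):

```python
# Sketch: compare observed first-digit frequencies with the Newcomb-Benford
# distribution using a chi-square test. Counts are invented and far too few
# for a real screen; they only illustrate the mechanics.
import numpy as np
from scipy.stats import chisquare

waiting_lists = np.array([1234, 987, 456, 1789, 234, 678, 1023,
                          345, 892, 1567, 210, 458, 1890, 302])

first_digits = np.array([int(str(n)[0]) for n in waiting_lists])
observed = np.bincount(first_digits, minlength=10)[1:]   # digits 1..9

benford = np.log10(1 + 1 / np.arange(1, 10))   # P(d) = log10(1 + 1/d)
expected = benford * len(first_digits)

stat, p = chisquare(observed, f_exp=expected)
print(f"chi2={stat:.2f}, p={p:.3f}")   # small p flags deviation from NBL
```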
The role of responsibility and fear of guilt in hypothesis-testing.
Mancini, Francesco; Gangemi, Amelia
2006-12-01
Recent theories argue that both perceived responsibility and fear of guilt increase obsessive-like behaviours. We propose that hypothesis-testing might account for this effect. Both perceived responsibility and fear of guilt would influence subjects' hypothesis-testing, by inducing a prudential style. This style implies focusing on and confirming the worst hypothesis, and reiterating the testing process. In our experiment, we manipulated the responsibility and fear of guilt of 236 normal volunteers who executed a deductive task. The results show that perceived responsibility is the main factor that influenced individuals' hypothesis-testing. Fear of guilt has however a significant additive effect. Guilt-fearing participants preferred to carry on with the diagnostic process, even when faced with initial favourable evidence, whereas participants in the responsibility condition only did so when confronted with an unfavourable evidence. Implications for the understanding of obsessive-compulsive disorder (OCD) are discussed.
Geometric Mechanics for Continuous Swimmers on Granular Material
NASA Astrophysics Data System (ADS)
Dai, Jin; Faraji, Hossein; Schiebel, Perrin; Gong, Chaohui; Travers, Matthew; Hatton, Ross; Goldman, Daniel; Choset, Howie; Biorobotics Lab Collaboration; Laboratory for Robotics and Applied Mechanics (LRAM) Collaboration; Complex Rheology and Biomechanics Lab Collaboration
Animal experiments have shown that Chionactis occipitalis (N = 10), undulating effectively on granular substrates, exhibits a particular set of waveforms which can be approximated by a sinusoidal variation in curvature, i.e., a serpenoid wave. Furthermore, all snakes tested used a narrow subset of all available waveform parameters, measured as a relative curvature equal to 5.0 +/- 0.3 and a number of waves on the body equal to 1.8 +/- 0.1. We hypothesize that the serpenoid wave with this particular choice of parameters offers a distinct benefit for locomotion on granular material. To test this hypothesis, we used a physical model (snake robot) to empirically explore the space of serpenoid motions, which is linearly spanned by two independent continuous serpenoid basis functions. The empirically derived height function map, which is a geometric mechanics tool for analyzing movements of cyclic gaits, showed that displacement per gait cycle increases with amplitude at small amplitudes, but reaches a peak value of 0.55 body-lengths at a relative curvature equal to 6.0. This work signifies that, with shape basis functions, geometric mechanics tools can be extended to continuous swimmers.
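For readers unfamiliar with serpenoid waves, a body shape can be generated by integrating a sinusoidally varying curvature along the arc length. The sketch below is illustrative only; the normalization chosen for "relative curvature" is our assumption, as definitions vary across papers:

```python
import numpy as np

def serpenoid_shape(n=200, waves=1.8, rel_curvature=5.0, phase=0.0):
    """Backbone (x, y) of a unit-length body with sinusoidal curvature.

    'rel_curvature' is assumed here to mean kappa_max * arc wavelength / (2*pi),
    i.e., kappa_max = 2*pi * rel_curvature * waves for body length 1.
    """
    s = np.linspace(0.0, 1.0, n)                    # arc length along the body
    kappa_max = 2 * np.pi * rel_curvature * waves   # assumed normalization
    kappa = kappa_max * np.sin(2 * np.pi * waves * s + phase)
    theta = np.cumsum(kappa) / n                    # tangent angle = integral of curvature
    x = np.cumsum(np.cos(theta)) / n                # integrate tangent to get position
    y = np.cumsum(np.sin(theta)) / n
    return x, y
```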
Chiba, Yasutaka
2017-09-01
Fisher's exact test is commonly used to compare two groups when the outcome is binary in randomized trials. In the context of causal inference, this test explores the sharp causal null hypothesis (i.e. the causal effect of treatment is the same for all subjects), but not the weak causal null hypothesis (i.e. the causal risks are the same in the two groups). Therefore, in general, rejection of the null hypothesis by Fisher's exact test does not mean that the causal risk difference is not zero. Recently, Chiba (Journal of Biometrics and Biostatistics 2015; 6: 244) developed a new exact test for the weak causal null hypothesis when the outcome is binary in randomized trials; the new test is not based on any large sample theory and does not require any assumption. In this paper, we extend the new test by creating a version applicable to a stratified analysis. The stratified exact test that we propose is general in nature and can be used in several approaches toward the estimation of treatment effects after adjusting for stratification factors. The stratified Fisher's exact test of Jung (Biometrical Journal 2014; 56: 129-140) tests the sharp causal null hypothesis. This test applies a crude estimator of the treatment effect and can be regarded as a special case of our proposed exact test. Our proposed stratified exact test can be straightforwardly extended to the analysis of noninferiority trials and to the construction of the associated confidence interval. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
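The crude, sharp-null analyses that the proposed test generalizes can be illustrated with standard library calls. This sketch assumes scipy's fisher_exact and statsmodels' StratifiedTable interface, with invented counts; the weak-causal-null exact test itself is more involved and is not reproduced here:

```python
import numpy as np
from scipy.stats import fisher_exact
from statsmodels.stats.contingency_tables import StratifiedTable

# Per-stratum 2x2 tables: rows = treatment/control, cols = event/no event
stratum_a = np.array([[12, 38], [5, 45]])
stratum_b = np.array([[20, 30], [11, 39]])

# Fisher's exact test within one stratum (tests the sharp causal null)
odds_ratio, p_single = fisher_exact(stratum_a)

# Mantel-Haenszel-style combined test across strata
combined = StratifiedTable([stratum_a, stratum_b])
result = combined.test_null_odds(correction=True)
print(p_single, result.pvalue)
```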
Functional complexity and ecosystem stability: an experimental approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Voris, P.; O'Neill, R.V.; Shugart, H.H.
1978-01-01
The complexity-stability hypothesis was experimentally tested using intact terrestrial microcosms. Functional complexity was defined as the number and significance of component interactions (i.e., population interactions, physical-chemical reactions, biological turnover rates) influenced by nonlinearities, feedbacks, and time delays. It was postulated that functional complexity could be nondestructively measured through analysis of a signal generated from the system. Power spectra of hourly CO2 efflux from eleven old-field microcosms were analyzed for the number of low frequency peaks and used to rank the functional complexity of each system. Ranking of ecosystem stability was based on the capacity of the system to retain essential nutrients and was measured by net loss of Ca after the system was stressed. Rank correlation supported the hypothesis that increasing ecosystem functional complexity leads to increasing ecosystem stability. The results indicated that complex functional dynamics can serve to stabilize the system. The results also demonstrated that microcosms are useful tools for system-level investigations.
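The signal-analysis step can be sketched as follows (illustrative only; the study's criteria for identifying significant low-frequency peaks are not reproduced, and the cutoff below is an arbitrary placeholder):

```python
import numpy as np
from scipy.signal import periodogram, find_peaks

def low_freq_peak_count(co2_flux_hourly, cutoff_cycles_per_day=1.0):
    """Count spectral peaks below a cutoff frequency in an hourly CO2 efflux series."""
    freqs, power = periodogram(co2_flux_hourly, fs=24.0)  # fs = 24 samples/day
    low = freqs < cutoff_cycles_per_day                   # keep only low frequencies
    peaks, _ = find_peaks(power[low])
    return peaks.size  # used here as a rough functional-complexity ranking score
```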
Balakumar, Pitchai; Inamdar, Mohammed Naseeruddin; Jagadeesh, Gowraganahalli
2013-04-01
An interactive workshop on 'The Critical Steps for Successful Research: The Research Proposal and Scientific Writing' was conducted in conjunction with the 64th Annual Conference of the Indian Pharmaceutical Congress-2012 at Chennai, India. In essence, research is performed to enlighten our understanding of a contemporary issue relevant to the needs of society. To accomplish this, a researcher begins the search for a novel topic based on purpose, creativity, critical thinking, and logic. This leads to the fundamental pieces of the research endeavor: question, objective, hypothesis, experimental tools to test the hypothesis, methodology, and data analysis. When correctly performed, research should produce new knowledge. The four cornerstones of good research are a well-formulated protocol or proposal that is well executed, analyzed, discussed and concluded. This recent workshop educated researchers in the critical steps involved in the development of a scientific idea to its successful execution and eventual publication.
Allergenicity of vertebrate tropomyosins: Challenging an immunological dogma.
González-Fernández, J; Daschner, A; Cuéllar, C
With the exception of tilapia tropomyosin, reports of recognition of tropomyosins of vertebrate origin are anecdotal and generally not accompanied by clinical significance, and the dogma that vertebrate tropomyosins are not allergenic is widely accepted, based mainly on sequence similarity to human tropomyosins. Recently, however, a specific work-up of a tropomyosin-sensitised patient with seafood allergy demonstrated that IgE recognition of tropomyosin from different fish species can be clinically relevant. We hypothesise that some vertebrate tropomyosins could be relevant allergens. The hypothesis is based on the molecular evolution of these proteins and was tested by in silico methods. Fish, which are primitive vertebrates, could have tropomyosins similar to those of invertebrates. If the hypothesis is confirmed, tropomyosin should be included in allergy diagnosis tools to improve medical protocols and the management of patients with digestive or cutaneous symptoms after fish intake. Copyright © 2016 SEICAP. Published by Elsevier España, S.L.U. All rights reserved.
NASA Technical Reports Server (NTRS)
Phatak, A. V.
1980-01-01
A systematic analytical approach to the determination of helicopter IFR precision approach requirements is formulated. The approach is based upon the hypothesis that pilot acceptance level or opinion rating of a given system is inversely related to the degree of pilot involvement in the control task. A nonlinear simulation of the helicopter approach-to-landing task, incorporating appropriate models for the UH-1H aircraft, the environmental disturbances and the human pilot, was developed as a tool for evaluating the pilot acceptance hypothesis. The simulated pilot model is generic in nature and includes analytical representation of the human information acquisition, processing, and control strategies. Simulation analyses in the flight director mode indicate that the pilot model used is reasonable. Results of the simulation are used to identify candidate pilot workload metrics and to test the well-known performance-workload relationship. A pilot acceptance analytical methodology is formulated as a basis for further investigation, development and validation.
Waese, Jamie; Fan, Jim; Pasha, Asher; Yu, Hans; Fucile, Geoffrey; Shi, Ruian; Cumming, Matthew; Kelley, Lawrence A; Sternberg, Michael J; Krishnakumar, Vivek; Ferlanti, Erik; Miller, Jason; Town, Chris; Stuerzlinger, Wolfgang; Provart, Nicholas J
2017-08-01
A big challenge in current systems biology research arises when different types of data must be accessed from separate sources and visualized using separate tools. The high cognitive load required to navigate such a workflow is detrimental to hypothesis generation. Accordingly, there is a need for a robust research platform that incorporates all data and provides integrated search, analysis, and visualization features through a single portal. Here, we present ePlant (http://bar.utoronto.ca/eplant), a visual analytic tool for exploring multiple levels of Arabidopsis thaliana data through a zoomable user interface. ePlant connects to several publicly available web services to download genome, proteome, interactome, transcriptome, and 3D molecular structure data for one or more genes or gene products of interest. Data are displayed with a set of visualization tools that are presented using a conceptual hierarchy from big to small, and many of the tools combine information from more than one data type. We describe the development of ePlant in this article and present several examples illustrating its integrative features for hypothesis generation. We also describe the process of deploying ePlant as an "app" on Araport. Building on readily available web services, the code for ePlant is freely available and can be adapted for research on any other biological species. © 2017 American Society of Plant Biologists. All rights reserved.
A statistical test to show negligible trend
Philip M. Dixon; Joseph H.K. Pechmann
2005-01-01
The usual statistical tests of trend are inappropriate for demonstrating the absence of trend. This is because failure to reject the null hypothesis of no trend does not prove that null hypothesis. The appropriate statistical method is based on an equivalence test. The null hypothesis is that the trend is not zero, i.e., outside an a priori specified equivalence region...
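The equivalence-test logic can be sketched with two one-sided tests (TOST) on a regression slope. This is a generic illustration, not the authors' exact procedure; the equivalence margin delta must be specified a priori:

```python
from scipy import stats

def tost_slope(x, y, delta):
    """Two one-sided tests that a regression slope lies inside (-delta, +delta)."""
    res = stats.linregress(x, y)
    df = len(x) - 2
    t_lower = (res.slope + delta) / res.stderr  # H0: slope <= -delta
    t_upper = (res.slope - delta) / res.stderr  # H0: slope >= +delta
    p_lower = stats.t.sf(t_lower, df)
    p_upper = stats.t.cdf(t_upper, df)
    return max(p_lower, p_upper)  # small value supports a negligible trend
```

Note that the roles of the hypotheses are reversed relative to a conventional trend test: rejecting the null here is evidence that the trend is negligible.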
NASA Astrophysics Data System (ADS)
Collins, Nathan A.; Hughes, Scott A.
2004-06-01
Astronomical observations have established that extremely compact, massive objects are common in the Universe. It is generally accepted that these objects are, in all likelihood, black holes. As observational technology has improved, it has become possible to test this hypothesis in ever greater detail. In particular, it is or will be possible to measure the properties of orbits deep in the strong field of a black hole candidate (using x-ray timing or future gravitational-wave measurements) and to test whether they have the characteristics of black hole orbits in general relativity. Past work has shown that, in principle, such measurements can be used to map the spacetime of a massive compact object, testing in particular whether the object’s multipolar structure satisfies the rather strict constraints imposed by the black hole hypothesis. Performing such a test in practice requires that we be able to compare against objects with the “wrong” multipole structure. In this paper, we present tools for constructing the spacetimes of bumpy black holes: objects that are almost black holes, but that have some multipoles with the wrong value. In this first analysis, we focus on objects with no angular momentum. Generalization to bumpy Kerr black holes should be straightforward, albeit labor intensive. Our construction has two particularly desirable properties. First, the spacetimes which we present are good deep into the strong field of the object—we do not use a “large r” expansion (except to make contact with weak field intuition). Second, our spacetimes reduce to the exact black hole spacetimes of general relativity in a natural way, by dialing the “bumpiness” of the black hole to zero. We propose that bumpy black holes can be used as the foundation for a null experiment: if black hole candidates are indeed the black holes of general relativity, their bumpiness should be zero. By comparing the properties of orbits in a bumpy spacetime with those measured from an astrophysical source, observations should be able to test this hypothesis, stringently testing whether they are in fact the black holes of general relativity.
ERIC Educational Resources Information Center
Tomovska, Ana
2010-01-01
The contact hypothesis has arguably been the leading theoretical paradigm for educational interventions in divided societies. However most of the studies with children have been quantitative, focusing on contact outcomes and failing to take account of children's views. Therefore this paper presents the findings of a qualitative study of…
Pre-Mission Input Requirements to Enable Successful Sample Collection by A Remote Field/EVA Team
NASA Technical Reports Server (NTRS)
Cohen, B. A.; Lim, D. S. S.; Young, K. E.; Brunner, A.; Elphic, R. E.; Horne, A.; Kerrigan, M. C.; Osinski, G. R.; Skok, J. R.; Squyres, S. W.;
2016-01-01
The FINESSE (Field Investigations to Enable Solar System Science and Exploration) team, part of the Solar System Exploration Virtual Institute (SSERVI), is a field-based research program aimed at generating strategic knowledge in preparation for human and robotic exploration of the Moon, near-Earth asteroids, Phobos and Deimos, and beyond. In contrast to other technology-driven NASA analog studies, the FINESSE WCIS activity is science-focused and, moreover, is sampling-focused, with the explicit intent to return the best samples for geochronology studies in the laboratory. We used the FINESSE field excursion to the West Clearwater Lake Impact structure (WCIS) as an opportunity to test factors related to sampling decisions. We examined the in situ sample characterization and real-time decision-making process of the astronauts, with a guiding hypothesis that pre-mission training that included detailed background information on the analytical fate of a sample would better enable future astronauts to select samples that would best meet science requirements. We conducted three tests of this hypothesis over several days in the field. Our investigation was designed to document processes, tools and procedures for crew sampling of planetary targets. This was not meant to be a blind, controlled test of crew efficacy, but rather an effort to explicitly recognize the relevant variables that enter into sampling protocol and to develop recommendations for crew and backroom training in future endeavors.
Unadjusted Bivariate Two-Group Comparisons: When Simpler is Better.
Vetter, Thomas R; Mascha, Edward J
2018-01-01
Hypothesis testing involves posing both a null hypothesis and an alternative hypothesis. This basic statistical tutorial discusses the appropriate use, including their so-called assumptions, of the common unadjusted bivariate tests for hypothesis testing and thus comparing study sample data for a difference or association. The appropriate choice of a statistical test is predicated on the type of data being analyzed and compared. The unpaired or independent samples t test is used to test the null hypothesis that the 2 population means are equal, thereby accepting the alternative hypothesis that the 2 population means are not equal. The unpaired t test is intended for comparing independent continuous (interval or ratio) data from 2 study groups. A common mistake is to apply several unpaired t tests when comparing data from 3 or more study groups. In this situation, an analysis of variance with post hoc (posttest) intragroup comparisons should instead be applied. Another common mistake is to apply a series of unpaired t tests when comparing sequentially collected data from 2 study groups. In this situation, a repeated-measures analysis of variance, with tests for group-by-time interaction, and post hoc comparisons, as appropriate, should instead be applied in analyzing data from sequential collection points. The paired t test is used to assess the difference in the means of 2 study groups when the sample observations have been obtained in pairs, often before and after an intervention in each study subject. The Pearson chi-square test is widely used to test the null hypothesis that 2 unpaired categorical variables, each with 2 or more nominal levels (values), are independent of each other. When the null hypothesis is rejected, one concludes that there is a probable association between the 2 unpaired categorical variables. When comparing 2 groups on an ordinal or nonnormally distributed continuous outcome variable, the 2-sample t test is usually not appropriate; the Wilcoxon-Mann-Whitney test is instead preferred. When making paired comparisons on data that are ordinal, or continuous but nonnormally distributed, the Wilcoxon signed-rank test can be used. In analyzing their data, researchers should consider the continued merits of these simple yet equally valid unadjusted bivariate statistical tests. However, the appropriate use of an unadjusted bivariate test still requires a solid understanding of its utility, assumptions (requirements), and limitations. This understanding will mitigate the risk of misleading findings, interpretations, and conclusions.
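The tests named in this tutorial map directly onto standard statistical library calls. A compact, illustrative mapping in Python's scipy (invented data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a, b = rng.normal(0, 1, 30), rng.normal(0.5, 1, 30)

t, p = stats.ttest_ind(a, b)       # unpaired t test: 2 independent groups
t, p = stats.ttest_rel(a, b)       # paired t test: before/after in the same subjects
u, p = stats.mannwhitneyu(a, b)    # Wilcoxon-Mann-Whitney: ordinal/nonnormal, unpaired
w, p = stats.wilcoxon(a, b)        # Wilcoxon signed-rank: ordinal/nonnormal, paired
chi2, p, dof, exp = stats.chi2_contingency([[20, 10], [12, 18]])  # Pearson chi-square
f, p = stats.f_oneway(a, b, rng.normal(1, 1, 30))  # ANOVA for 3+ groups, not serial t tests
```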
NASA Astrophysics Data System (ADS)
Azila Che Musa, Nor; Mahmud, Zamalia; Baharun, Norhayati
2017-09-01
One of the important skills required from any student who is learning statistics is knowing how to solve statistical problems correctly using appropriate statistical methods. This will enable them to arrive at a conclusion and make a significant contribution and decision for the society. In this study, a group of 22 students majoring in statistics at UiTM Shah Alam were given problems relating to topics on testing of hypothesis which required them to solve the problems using the confidence interval, traditional and p-value approaches. Hypothesis testing is one of the techniques used in solving real problems and it is listed as one of the concepts students find difficult to grasp. The objectives of this study are to explore students’ perceived and actual ability in solving statistical problems and to determine which items in statistical problem solving students find difficult to grasp. Students’ perceived and actual ability were measured based on instruments developed from the respective topics. Rasch measurement tools such as the Wright map and item measures for fit statistics were used to accomplish the objectives. Data were collected and analysed using the Winsteps 3.90 software, which is based on the Rasch measurement model. The results showed that students perceived themselves as moderately competent in solving the statistical problems using the confidence interval and p-value approaches even though their actual performance showed otherwise. Item measures for fit statistics also showed that the maximum estimated measures were found on two problems. These measures indicate that none of the students attempted these problems correctly, for reasons which include their lack of understanding of confidence intervals and probability values.
ERIC Educational Resources Information Center
Wilson, Courtney R.; Trautmann, Nancy M.; MaKinster, James G.; Barker, Barbara J.
2010-01-01
A new online tool called "Science Pipes" allows students to conduct biodiversity investigations. With this free tool, students create and run analyses that would otherwise require access to unwieldy data sets and the ability to write computer code. Using these data, students can conduct guided inquiries or hypothesis-driven research to…
A robust null hypothesis for the potential causes of megadrought in western North America
NASA Astrophysics Data System (ADS)
Ault, T.; St George, S.; Smerdon, J. E.; Coats, S.; Mankin, J. S.; Cruz, C. C.; Cook, B.; Stevenson, S.
2017-12-01
The western United States was affected by several megadroughts during the last 1200 years, most prominently during the Medieval Climate Anomaly (MCA: 800 to 1300 CE). A null hypothesis is developed to test the possibility that, given a sufficiently long period of time, these events are inevitable and occur purely as a consequence of internal climate variability. The null distribution of this hypothesis is populated by a linear inverse model (LIM) constructed from global sea-surface temperature anomalies and self-calibrated Palmer Drought Severity Index data for North America. Despite being trained only on seasonal data from the late 20th century, the LIM produces megadroughts that are comparable in their duration, spatial scale, and magnitude to the most severe events of the last 12 centuries. The null hypothesis therefore cannot be rejected with much confidence when considering these features of megadrought, meaning that similar events are possible today, even without any changes to boundary conditions. In contrast, the observed clustering of megadroughts in the MCA, as well as the change in mean hydroclimate between the MCA and the 1500-2000 period, are more likely to have been caused by either external forcing or by internal climate variability not well sampled during the latter half of the 20th century. Finally, the results demonstrate that the LIM is a viable tool for determining whether events in paleoclimate reconstructions should be ascribed to external forcings, "out of sample" climate mechanisms, or are consistent with the variability observed during the recent period.
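The core LIM construction is standard and can be sketched from lagged covariances. This is a schematic under the assumption of a (time x state) anomaly matrix; the study's seasonal training data, stochastic noise integration, and drought metrics are not reproduced:

```python
import numpy as np
from scipy.linalg import logm

def fit_lim(X, lag=1):
    """Fit dx/dt = L x + noise from a (time, state) anomaly matrix."""
    X0, X1 = X[:-lag], X[lag:]
    C0 = X0.T @ X0 / len(X0)       # contemporaneous covariance
    C_tau = X1.T @ X0 / len(X0)    # lag-tau covariance
    G = C_tau @ np.linalg.inv(C0)  # propagator over one lag
    L = logm(G) / lag              # linear dynamics operator
    return L.real
```

Long synthetic realizations drawn from the fitted operator (plus noise) then populate the null distribution against which reconstructed megadroughts are compared.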
Longitudinal Dimensionality of Adolescent Psychopathology: Testing the Differentiation Hypothesis
ERIC Educational Resources Information Center
Sterba, Sonya K.; Copeland, William; Egger, Helen L.; Costello, E. Jane; Erkanli, Alaattin; Angold, Adrian
2010-01-01
Background: The differentiation hypothesis posits that the underlying liability distribution for psychopathology is of low dimensionality in young children, inflating diagnostic comorbidity rates, but increases in dimensionality with age as latent syndromes become less correlated. This hypothesis has not been adequately tested with longitudinal…
A large scale test of the gaming-enhancement hypothesis.
Przybylski, Andrew K; Wang, John C
2016-01-01
A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people's gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses was compared. Results provided no substantive evidence supporting the idea that having a preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support of the null hypothesis over the gaming-enhancement hypothesis. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work.
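As a rough illustration of Bayesian model comparison in this spirit (not the authors' method, which used default Bayes factors), a BIC-based approximation can be computed from two nested regressions:

```python
import numpy as np
import statsmodels.api as sm

def bic_bayes_factor_01(y, x):
    """Approximate BF01 (evidence for the null) from the BICs of two nested OLS fits."""
    null_fit = sm.OLS(y, np.ones_like(y)).fit()        # intercept-only model
    alt_fit = sm.OLS(y, sm.add_constant(x)).fit()      # gaming predicts reasoning
    return np.exp((alt_fit.bic - null_fit.bic) / 2.0)  # BF01 ~ exp(delta-BIC / 2)
```

BF01 values well above 1 favor the null of no gaming-reasoning association.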
Genetic testing in congenital heart disease: A clinical approach
Chaix, Marie A; Andelfinger, Gregor; Khairy, Paul
2016-01-01
Congenital heart disease (CHD) is the most common type of birth defect. Traditionally, a polygenic model defined by the interaction of multiple genes and environmental factors was hypothesized to account for different forms of CHD. It is now understood that the contribution of genetics to CHD extends beyond a single unified paradigm. For example, monogenic models and chromosomal abnormalities have been associated with various syndromic and non-syndromic forms of CHD. In such instances, genetic investigation and testing may potentially play an important role in clinical care. A family tree with a detailed phenotypic description serves as the initial screening tool to identify potentially inherited defects and to guide further genetic investigation. The selection of a genetic test is contingent upon the particular diagnostic hypothesis generated by clinical examination. Genetic investigation in CHD may carry the potential to improve prognosis by yielding valuable information with regards to personalized medical care, confidence in the clinical diagnosis, and/or targeted patient follow-up. Moreover, genetic assessment may serve as a tool to predict recurrence risk, define the pattern of inheritance within a family, and evaluate the need for further family screening. In some circumstances, prenatal or preimplantation genetic screening could identify fetuses or embryos at high risk for CHD. Although genetics may appear to constitute a highly specialized sector of cardiology, basic knowledge regarding inheritance patterns, recurrence risks, and available screening and diagnostic tools, including their strengths and limitations, could assist the treating physician in providing sound counsel. PMID:26981213
Null but not void: considerations for hypothesis testing.
Shaw, Pamela A; Proschan, Michael A
2013-01-30
Standard statistical theory teaches us that once the null and alternative hypotheses have been defined for a parameter, the choice of the statistical test is clear. Standard theory does not teach us how to choose the null or alternative hypothesis appropriate to the scientific question of interest. Neither does it tell us that in some cases, depending on which alternatives are realistic, we may want to define our null hypothesis differently. Problems in statistical practice are frequently not as pristinely summarized as the classic theory in our textbooks. In this article, we present examples in statistical hypothesis testing in which seemingly simple choices are in fact rich with nuance that, when given full consideration, make the choice of the right hypothesis test much less straightforward. Published 2012. This article is a US Government work and is in the public domain in the USA.
Effect of climate-related mass extinctions on escalation in molluscs
NASA Astrophysics Data System (ADS)
Hansen, Thor A.; Kelley, Patricia H.; Melland, Vicky D.; Graham, Scott E.
1999-12-01
We test the hypothesis that escalated species (e.g., those with antipredatory adaptations such as heavy armor) are more vulnerable to extinctions caused by changes in climate. If this hypothesis is valid, recovery faunas after climate-related extinctions should include significantly fewer species with escalated shell characteristics, and escalated species should undergo greater rates of extinction than nonescalated species. This hypothesis is tested for the Cretaceous-Paleocene, Eocene-Oligocene, middle Miocene, and Pliocene-Pleistocene mass extinctions. Gastropod and bivalve molluscs from the U.S. coastal plain were evaluated for 10 shell characters that confer resistance to predators. Of 40 tests, one supported the hypothesis; highly ornamented gastropods underwent greater levels of Pliocene-Pleistocene extinction than did nonescalated species. All remaining tests were nonsignificant. The hypothesis that escalated species are more vulnerable to climate-related mass extinctions is not supported.
A Virtual Environment for People Who Are Blind – A Usability Study
Lahav, O.; Schloerb, D. W.; Kumar, S.; Srinivasan, M. A.
2013-01-01
For most people who are blind, exploring an unknown environment can be unpleasant, uncomfortable, and unsafe. Over the past years, the use of virtual reality as a learning and rehabilitation tool for people with disabilities has been on the rise. This research is based on the hypothesis that the supply of appropriate perceptual and conceptual information through compensatory sensorial channels may assist people who are blind with anticipatory exploration. In this research we developed and tested the BlindAid system, which allows the user to explore a virtual environment. The two main goals of the research were: (a) evaluation of different modalities (haptic and audio) and navigation tools, and (b) evaluation of spatial cognitive mapping employed by people who are blind. Our research included four participants who are totally blind. The preliminary findings confirm that the system enabled participants to develop comprehensive cognitive maps by exploring the virtual environment. PMID:24353744
Morphology of muscle attachment sites in the modern human hand does not reflect muscle architecture.
Williams-Hatala, E M; Hatala, K G; Hiles, S; Rabey, K N
2016-06-23
Muscle attachment sites (entheses) on dry bones are regularly used by paleontologists to infer soft tissue anatomy and to reconstruct behaviors of extinct organisms. This method is commonly applied to fossil hominin hand bones to assess their abilities to participate in Paleolithic stone tool behaviors. Little is known, however, about how or even whether muscle anatomy and activity regimes influence the morphologies of their entheses, especially in the hand. Using the opponens muscles from a sample of modern humans, we tested the hypothesis that aspects of hand muscle architecture that are known to be influenced by behavior correlate with the size and shape of their associated entheses. Results show no consistent relationships between these behaviorally-influenced aspects of muscle architecture and entheseal morphology. Consequently, it is likely premature to infer patterns of behavior, such as stone tool making in fossil hominins, from these same entheses.
Podometrics as a Potential Clinical Tool for Glomerular Disease Management.
Kikuchi, Masao; Wickman, Larysa; Hodgin, Jeffrey B; Wiggins, Roger C
2015-05-01
Chronic kidney disease culminating in end-stage kidney disease is a major public health problem, costing in excess of $40 billion per year with high morbidity and mortality. Current tools for glomerular disease monitoring lack precision and contribute to poor outcomes. The podocyte depletion hypothesis describes the major mechanisms underlying the progression of glomerular diseases, which are responsible for more than 80% of cases of end-stage kidney disease. The question arises of whether this new knowledge can be used to improve outcomes and reduce costs. Podocytes have unique characteristics that make them an attractive monitoring tool. Methodologies for estimating podocyte number, size, density, glomerular volume and other parameters in routine kidney biopsies, and the rate of podocyte detachment from glomeruli into urine (podometrics), have now been developed and validated. They potentially fill important gaps in the glomerular disease monitoring toolbox. The application of these tools to glomerular disease groups shows good correlation with outcome, although data validating their use for individual decision making are not yet available. Given the urgency of the clinical problem, we argue that the time has come to focus on testing these tools for application to individualized clinical decision making toward more effective progression prevention. Copyright © 2015 Elsevier Inc. All rights reserved.
Pérula, Luis Á; Campiñez, Manuel; Bosch, Josep M; Barragán Brun, Nieves; Arboniés, Juan C; Bóveda Fontán, Julia; Martín Alvarez, Remedios; Prados, Jose A; Martín-Rioboó, Enrique; Massons, Josep; Criado, Margarita; Fernández, José Á; Parras, Juan M; Ruiz-Moral, Roger; Novo, Jesús M
2012-11-22
Lifestyle is one of the main determinants of people's health. It is essential to find the most effective prevention strategies for encouraging behavioral change in patients. Many theories are available that explain change or adherence to specific health behaviors in subjects. In this context, Motivational Interviewing has gained increasing relevance. Few well-validated instruments are available for measuring doctors' communication skills, and more specifically Motivational Interviewing. The hypothesis of this study is that the Scale for Measuring Motivational Interviewing Skills (EVEM questionnaire) is a valid and reliable instrument for measuring primary care professionals' skills in eliciting behavior change in patients. To test the hypothesis we have designed a prospective, observational, multi-center study to validate the measuring instrument. Setting: thirty-two primary care centers in Spain. Sampling and size: a) face and consensual validity: a group of 15 experts in Motivational Interviewing; b) assessment of the psychometric properties of the scale: 50 physician-patient encounters will be videoed; a total of 162 interviews will be conducted with six standardized patients, and another 200 interviews will be conducted with 50 real patients (n=362). Four physicians will be specially trained to assess 30 randomly selected interviews to test the scale's reproducibility. Measurements to test the hypothesis: a) face validity: development of a draft questionnaire based on a theoretical model, using Delphi-type methodology with experts; b) scale psychometric properties: observers will evaluate video-recorded interviews for content-scalability validity (exploratory factor analysis), internal consistency (Cronbach's alpha), intra-/inter-observer reliability (kappa index, intraclass correlation coefficient, Bland & Altman methodology), generalizability, construct validity and sensitivity to change (Pearson product-moment correlation coefficient). Verification of the hypothesis that EVEM is a valid and reliable tool for assessing motivational interviewing would be a major breakthrough in current theoretical and practical knowledge, as the scale could be used to assess whether providers put into practice a patient-centered communication style, both for training and for research purposes. Trial registration: Dislip-EM study, NCT01282190 (ClinicalTrials.gov).
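Of the planned psychometric analyses, internal consistency is the simplest to illustrate. A self-contained sketch of Cronbach's alpha (illustrative only; this is not the EVEM instrument or its data):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (observations x items) score matrix."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return k / (k - 1) * (1 - item_var / total_var)

# Example: 5 raters scoring 4 items each (invented numbers)
alpha = cronbach_alpha([[3, 4, 3, 4], [2, 2, 3, 2], [4, 4, 5, 4], [3, 3, 3, 4], [2, 3, 2, 2]])
```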
Design for Verification: Enabling Verification of High Dependability Software-Intensive Systems
NASA Technical Reports Server (NTRS)
Mehlitz, Peter C.; Penix, John; Markosian, Lawrence Z.; Koga, Dennis (Technical Monitor)
2003-01-01
Strategies to achieve confidence that high-dependability applications are correctly implemented include testing and automated verification. Testing deals mainly with a limited number of expected execution paths. Verification usually attempts to deal with a larger number of possible execution paths. While the impact of architecture design on testing is well known, its impact on most verification methods is not as well understood. The Design for Verification approach considers verification from the application development perspective, in which system architecture is designed explicitly according to the application's key properties. The D4V-hypothesis is that the same general architecture and design principles that lead to good modularity, extensibility and complexity/functionality ratio can be adapted to overcome some of the constraints on verification tools, such as the production of hand-crafted models and the limits on dynamic and static analysis caused by state space explosion.
On Restructurable Control System Theory
NASA Technical Reports Server (NTRS)
Athans, M.
1983-01-01
The state of stochastic system and control theory as it impacts restructurable control issues is addressed. The multivariable characteristics of the control problem are addressed. The failure detection/identification problem is discussed as a multi-hypothesis testing problem. Control strategy reconfiguration, static multivariable controls, static failure hypothesis testing, dynamic multivariable controls, fault-tolerant control theory, dynamic hypothesis testing, generalized likelihood ratio (GLR) methods, and adaptive control are discussed.
ERIC Educational Resources Information Center
Marmolejo-Ramos, Fernando; Cousineau, Denis
2017-01-01
The number of articles showing dissatisfaction with the null hypothesis statistical testing (NHST) framework has been progressively increasing over the years. Alternatives to NHST have been proposed and the Bayesian approach seems to have achieved the highest amount of visibility. In this last part of the special issue, a few alternative…
Kaakinen, Markus; Keipi, Teo; Räsänen, Pekka; Oksanen, Atte
2018-02-01
The wealth of beneficial tools for online interaction, consumption, and access to others also brings new risks of harmful experiences online. This study examines the association between cybercrime victimization and subjective well-being (SWB) and, based on the buffering effect hypothesis, tests the assumption of the protective function of social belonging in cybercrime victimization. Cross-national data from the United States, United Kingdom, Germany, and Finland (N = 3,557; Internet users aged 15-30 years; 49.85 percent female) were analyzed using descriptive statistics and main and moderation effect models. Results show that cybercrime victimization has a negative association with SWB after adjusting for a number of confounding factors. This association concerns both general cybercrime victimization and subcategories such as victimization by offensive cybercrime and cyberfraud. In line with the buffering effect hypothesis, social belonging to offline groups was shown to moderate the negative association between SWB and cybercrime victimization. The same effect was not found for social belonging to online groups. Overall, the study indicates that, analogously to crime victimization in the offline context, cybercrime is a harmful experience whose negative effects mainly concern those users who have weak social ties offline to aid in coping with such stressors.
Revised standards for statistical evidence.
Johnson, Valen E
2013-11-26
Recent advances in Bayesian hypothesis testing have led to the development of uniformly most powerful Bayesian tests, which represent an objective, default class of Bayesian hypothesis tests that have the same rejection regions as classical significance tests. Based on the correspondence between these two classes of tests, it is possible to equate the size of classical hypothesis tests with evidence thresholds in Bayesian tests, and to equate P values with Bayes factors. An examination of these connections suggests that recent concerns over the lack of reproducibility of scientific studies can be attributed largely to the conduct of significance tests at unjustifiably high levels of significance. To correct this problem, evidence thresholds required for the declaration of a significant finding should be increased to 25-50:1, and to 100-200:1 for the declaration of a highly significant finding. In terms of classical hypothesis tests, these evidence standards mandate the conduct of tests at the 0.005 or 0.001 level of significance.
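For a one-sided normal-mean test, the correspondence sketched in this paper maps a significance level onto an evidence threshold of roughly exp(z_alpha^2/2). The following is a simplified, illustrative reading of that correspondence, not a reproduction of the paper's derivations:

```python
import numpy as np
from scipy.stats import norm

# Map classical significance levels onto approximate Bayes-factor thresholds
# for a one-sided normal-mean test (assumed simplification of the UMPBT link).
for alpha in (0.05, 0.005, 0.001):
    z = norm.isf(alpha)                      # classical critical value z_alpha
    print(alpha, round(np.exp(z**2 / 2)))    # ~4, ~28, ~119 to 1 evidence
```

Consistent with the abstract, 0.005 and 0.001 land in the 25-50:1 and 100-200:1 evidence ranges, while 0.05 corresponds to quite weak evidence.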
NASA Astrophysics Data System (ADS)
Thompson, E. M.; Hewlett, J. B.; Baise, L. G.; Vogel, R. M.
2011-01-01
Annual maximum (AM) time series are incomplete (i.e., censored) when no events are included above the assumed censoring threshold (i.e., magnitude of completeness). We introduce a distributional hypothesis test for left-censored Gumbel observations based on the probability plot correlation coefficient (PPCC). Critical values of the PPCC hypothesis test statistic are computed from Monte Carlo simulations and are a function of sample size, censoring level, and significance level. When applied to a global catalog of earthquake observations, the left-censored Gumbel PPCC tests are unable to reject the Gumbel hypothesis for 45 of 46 seismic regions. We apply four different field significance tests for combining individual tests into a collective hypothesis test. None of the field significance tests are able to reject the global hypothesis that AM earthquake magnitudes arise from a Gumbel distribution. Because the field significance levels are not conclusive, we also compute the likelihood that these field significance tests are unable to reject the Gumbel model when the samples arise from a more complex distributional alternative. A power study documents that the censored Gumbel PPCC test is unable to reject some important and viable Generalized Extreme Value (GEV) alternatives. Thus, we cannot rule out the possibility that the global AM earthquake time series could arise from a GEV distribution with a finite upper bound, also known as a reverse Weibull distribution. Our power study also indicates that the binomial and uniform field significance tests are substantially more powerful than the more commonly used Bonferroni and false discovery rate multiple comparison procedures.
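For the uncensored case, the PPCC statistic and its Monte Carlo critical value can be sketched as follows (the paper's left-censoring and field significance machinery are not reproduced):

```python
import numpy as np
from scipy import stats

def gumbel_ppcc(sample):
    """Probability plot correlation coefficient against a Gumbel (EV1) distribution."""
    (osm, osr), (slope, intercept, r) = stats.probplot(sample, dist=stats.gumbel_r)
    return r  # r is the correlation of the probability plot

def ppcc_critical_value(n, alpha=0.05, n_sim=5000, seed=0):
    """Monte Carlo critical value: reject Gumbel if the observed PPCC falls below it."""
    rng = np.random.default_rng(seed)
    sims = [gumbel_ppcc(stats.gumbel_r.rvs(size=n, random_state=rng))
            for _ in range(n_sim)]
    return np.quantile(sims, alpha)
```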
Biostatistics Series Module 2: Overview of Hypothesis Testing.
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Hypothesis testing (or statistical inference) is one of the major applications of biostatistics. Much of medical research begins with a research question that can be framed as a hypothesis. Inferential statistics begins with a null hypothesis that reflects the conservative position of no change or no difference in comparison to baseline or between groups. Usually, the researcher has reason to believe that there is some effect or some difference which is the alternative hypothesis. The researcher therefore proceeds to study samples and measure outcomes in the hope of generating evidence strong enough for the statistician to be able to reject the null hypothesis. The concept of the P value is almost universally used in hypothesis testing. It denotes the probability of obtaining by chance a result at least as extreme as that observed, even when the null hypothesis is true and no real difference exists. Usually, if P is < 0.05 the null hypothesis is rejected and sample results are deemed statistically significant. With the increasing availability of computers and access to specialized statistical software, the drudgery involved in statistical calculations is now a thing of the past, once the learning curve of the software has been traversed. The life sciences researcher is therefore free to devote oneself to optimally designing the study, carefully selecting the hypothesis tests to be applied, and taking care in conducting the study well. Unfortunately, selecting the right test seems difficult initially. Thinking of the research hypothesis as addressing one of five generic research questions helps in selection of the right hypothesis test. In addition, it is important to be clear about the nature of the variables (e.g., numerical vs. categorical; parametric vs. nonparametric) and the number of groups or data sets being compared (e.g., two or more than two) at a time. The same research question may be explored by more than one type of hypothesis test. While this may be of utility in highlighting different aspects of the problem, merely reapplying different tests to the same issue in the hope of finding a P < 0.05 is a wrong use of statistics. Finally, it is becoming the norm that an estimate of the size of any effect, expressed with its 95% confidence interval, is required for meaningful interpretation of results. A large study is likely to have a small (and therefore "statistically significant") P value, but a "real" estimate of the effect would be provided by the 95% confidence interval. If the intervals overlap between two interventions, then the difference between them is not so clear-cut even if P < 0.05. The two approaches are now considered complementary to one another.
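The module's closing point, that a P value and a 95% confidence interval are complementary, can be made concrete with a small example (invented data; pooled-variance t test):

```python
import numpy as np
from scipy import stats

a = np.array([5.1, 4.8, 5.6, 5.0, 5.3, 4.9, 5.2, 5.5])
b = np.array([4.6, 4.9, 4.4, 4.8, 4.5, 5.0, 4.7, 4.3])

t, p = stats.ttest_ind(a, b)                   # P value alone: significant or not
df = len(a) + len(b) - 2
sp2 = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / df
se = np.sqrt(sp2 * (1 / len(a) + 1 / len(b)))  # pooled standard error
diff = a.mean() - b.mean()
ci = diff + np.array([-1, 1]) * stats.t.ppf(0.975, df) * se
print(p, diff, ci)                             # the CI conveys the size of the effect
```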
Saraf, Sanatan; Mathew, Thomas; Roy, Anindya
2015-01-01
For the statistical validation of surrogate endpoints, an alternative formulation is proposed for testing Prentice's fourth criterion, under a bivariate normal model. In such a setup, the criterion involves inference concerning an appropriate regression parameter, and the criterion holds if the regression parameter is zero. Testing such a null hypothesis has been criticized in the literature since it can only be used to reject a poor surrogate, and not to validate a good surrogate. In order to circumvent this, an equivalence hypothesis is formulated for the regression parameter, namely the hypothesis that the parameter is equivalent to zero. Such an equivalence hypothesis is formulated as an alternative hypothesis, so that the surrogate endpoint is statistically validated when the null hypothesis is rejected. Confidence intervals for the regression parameter and tests for the equivalence hypothesis are proposed using bootstrap methods and small sample asymptotics, and their performances are numerically evaluated and recommendations are made. The choice of the equivalence margin is a regulatory issue that needs to be addressed. The proposed equivalence testing formulation is also adopted for other parameters that have been proposed in the literature on surrogate endpoint validation, namely, the relative effect and proportion explained.
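A percentile-bootstrap confidence interval for a regression parameter, of the kind the paper evaluates, can be sketched generically (illustrative only; the surrogate-endpoint model and small-sample asymptotics are not reproduced, and the equivalence margin is a regulatory choice):

```python
import numpy as np
from scipy import stats

def bootstrap_slope_ci(x, y, n_boot=2000, level=0.95, seed=0):
    """Percentile bootstrap CI for a regression slope; x and y are numpy arrays."""
    rng = np.random.default_rng(seed)
    n = len(x)
    slopes = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                       # resample cases with replacement
        slopes[i] = stats.linregress(x[idx], y[idx]).slope
    lo, hi = np.quantile(slopes, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi  # equivalence to zero if (lo, hi) lies within (-margin, +margin)
```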
Improving the Crossing-SIBTEST Statistic for Detecting Non-uniform DIF.
Chalmers, R Philip
2018-06-01
This paper demonstrates that, after applying a simple modification to Li and Stout's (Psychometrika 61(4):647-677, 1996) CSIBTEST statistic, an improved variant of the statistic could be realized. It is shown that this modified version of CSIBTEST has a more direct association with the SIBTEST statistic presented by Shealy and Stout (Psychometrika 58(2):159-194, 1993). In particular, the asymptotic sampling distributions and general interpretation of the effect size estimates are the same for SIBTEST and the new CSIBTEST. Given the more natural connection to SIBTEST, it is shown that Li and Stout's hypothesis testing approach is insufficient for CSIBTEST; thus, an improved hypothesis testing procedure is required. Based on the presented arguments, a new chi-squared-based hypothesis testing approach is proposed for the modified CSIBTEST statistic. Positive results from a modest Monte Carlo simulation study strongly suggest the original CSIBTEST procedure and randomization hypothesis testing approach should be replaced by the modified statistic and hypothesis testing method.
NASA Astrophysics Data System (ADS)
Luo, Xichun; Tong, Zhen; Liang, Yingchun
2014-12-01
In this article, the shape transferability of nanoscale multi-tip diamond tools in diamond turning for the scale-up manufacturing of nanostructures is demonstrated. Atomistic multi-tip diamond tool models were built with different tool geometries, in terms of differences in tip cross-sectional shape, tip angle, and tool tip configuration, to determine their effect on the applied forces and the machined nano-groove geometries. The quality of machined nanostructures was characterized by the thickness of the deformed layers and the dimensional accuracy achieved. Simulation results show that diamond turning using nanoscale multi-tip tools offers tremendous shape transferability in machining nanostructures. Both periodic and non-periodic nano-grooves with different cross-sectional shapes can be successfully fabricated using the multi-tip tools. A hypothesis of a minimum designed ratio of tool tip distance to tip base width (L/Wf) of the nanoscale multi-tip diamond tool for the high precision machining of nanostructures was proposed based on an analytical study of the quality of the nanostructures fabricated using different types of multi-tip tools. Nanometric cutting trials using nanoscale multi-tip diamond tools (differing in L/Wf) fabricated by focused ion beam (FIB) were then conducted to verify the hypothesis. The investigations done in this work imply the potential of using the nanoscale multi-tip diamond tool for the deterministic fabrication of periodic and non-periodic nanostructures, which opens up the feasibility of using the process as a versatile manufacturing technique in nanotechnology.
Prieur, Jacques; Pika, Simone; Blois-Heulin, Catherine; Barbu, Stéphanie
2018-04-14
Understanding variations of apes' laterality between activities is a central issue when investigating the evolutionary origins of human hemispheric specialization of manual functions and language. We assessed the laterality of 39 chimpanzees in a non-communication action similar to termite fishing, which we compared with previously collected data on five frequent conspecific-directed gestures involving a tool in the same subjects. We evaluated, first, population-level manual laterality for tool use in non-communication actions; second, the influence of sociodemographic factors (age, sex, group, and hierarchy) on manual laterality in both non-communication actions and gestures. No significant right-hand bias at the population level was found for non-communication tool use, contrary to our previous findings for gestures involving a tool. A multifactorial analysis revealed that hierarchy and age particularly modulated manual laterality. Dominants and immatures were more right-handed when using a tool in gestures than in non-communication actions. On the contrary, subordinates, adolescents, young and mature adults as well as males were more right-handed when using a tool in non-communication actions than in gestures. Our findings support the hypothesis that some primate species may have a specific left-hemisphere system for processing gestures, distinct from the cerebral system that processes non-communication manual actions, and partly support the tool-use hypothesis. Copyright © 2018 Elsevier B.V. All rights reserved.
Review of quality assessment tools for the evaluation of pharmacoepidemiological safety studies
Neyarapally, George A; Hammad, Tarek A; Pinheiro, Simone P; Iyasu, Solomon
2012-01-01
Objectives Pharmacoepidemiological studies are an important hypothesis-testing tool in the evaluation of postmarketing drug safety. Despite the potential to produce robust value-added data, interpretation of findings can be hindered due to well-recognised methodological limitations of these studies. Therefore, assessment of their quality is essential to evaluating their credibility. The objective of this review was to evaluate the suitability and relevance of available tools for the assessment of pharmacoepidemiological safety studies. Design We created an a priori assessment framework consisting of reporting elements (REs) and quality assessment attributes (QAAs). A comprehensive literature search identified distinct assessment tools and the prespecified elements and attributes were evaluated. Primary and secondary outcome measures The primary outcome measure was the percentage representation of each domain, RE and QAA for the quality assessment tools. Results A total of 61 tools were reviewed. Most tools were not designed to evaluate pharmacoepidemiological safety studies. More than 50% of the reviewed tools considered REs under the research aims, analytical approach, outcome definition and ascertainment, study population and exposure definition and ascertainment domains. REs under the discussion and interpretation, results and study team domains were considered in less than 40% of the tools. Except for the data source domain, quality attributes were considered in less than 50% of the tools. Conclusions Many tools failed to include critical assessment elements relevant to observational pharmacoepidemiological safety studies and did not distinguish between REs and QAAs. Further, there is a lack of considerations on the relative weights of different domains and elements. The development of a quality assessment tool would facilitate consistent, objective and evidence-based assessments of pharmacoepidemiological safety studies. PMID:23015600
AgBase: supporting functional modeling in agricultural organisms
McCarthy, Fiona M.; Gresham, Cathy R.; Buza, Teresia J.; Chouvarine, Philippe; Pillai, Lakshmi R.; Kumar, Ranjit; Ozkan, Seval; Wang, Hui; Manda, Prashanti; Arick, Tony; Bridges, Susan M.; Burgess, Shane C.
2011-01-01
AgBase (http://www.agbase.msstate.edu/) provides resources to facilitate modeling of functional genomics data and structural and functional annotation of agriculturally important animal, plant, microbe and parasite genomes. The website is redesigned to improve accessibility and ease of use, including improved search capabilities. Expanded capabilities include new dedicated pages for horse, cat, dog, cotton, rice and soybean. We currently provide 590 240 Gene Ontology (GO) annotations to 105 454 gene products in 64 different species, including GO annotations linked to transcripts represented on agricultural microarrays. For many of these arrays, this provides the only functional annotation available. GO annotations are available for download and we provide comprehensive, species-specific GO annotation files for 18 different organisms. The tools available at AgBase have been expanded and several existing tools improved based upon user feedback. One of seven new tools available at AgBase, GOModeler, supports hypothesis testing from functional genomics data. We host several associated databases and provide genome browsers for three agricultural pathogens. Moreover, we provide comprehensive training resources (including worked examples and tutorials) via links to Educational Resources at the AgBase website. PMID:21075795
Tenhaven, Christoph; Tipold, Andrea; Fischer, Martin R.; Ehlers, Jan P.
2013-01-01
Introduction: Informal and formal lifelong learning is essential at university and in the workplace. Apart from classical learning techniques, Web 2.0 tools can be used. It is controversial whether there is a so-called net generation amongst people under 30. Aims: To test the hypothesis that a net generation among students and young veterinarians exists. Methods: An online survey of students and veterinarians was conducted in the German-speaking countries which was advertised via online media and traditional print media. Results: 1780 people took part in the survey. Students and veterinarians have different usage patterns regarding social networks (91.9% vs. 69%) and IM (55.9% vs. 24.5%). All tools were predominantly used passively and in private, to a lesser extent also professionally and for studying. Outlook: The use of Web 2.0 tools is useful, however, teaching information and media skills, preparing codes of conduct for the internet and verification of user generated content is essential. PMID:23467682
NASA Astrophysics Data System (ADS)
Aldowaisan, Tariq; Allahverdi, Ali
2016-07-01
This paper describes the process employed by the Industrial and Management Systems Engineering programme at Kuwait University to continuously improve the programme. Using a continuous improvement framework, the paper demonstrates how various qualitative and quantitative analyses methods, such as hypothesis testing and control charts, have been applied to the results of four assessment tools and other data sources to improve performance. Important improvements include the need to reconsider two student outcomes as they were difficult to implement in courses. In addition, through benchmarking and the engagement of Alumni and Employers, key decisions were made to improve the curriculum and enhance employability.
Emotions and Decisions: Beyond Conceptual Vagueness and the Rationality Muddle.
Volz, Kirsten G; Hertwig, Ralph
2016-01-01
For centuries, decision scholars paid little attention to emotions: Decisions were modeled in normative and descriptive frameworks with little regard for affective processes. Recently, however, an "emotions revolution" has taken place, particularly in the neuroscientific study of decision making, putting emotional processes on an equal footing with cognitive ones. Yet disappointingly little theoretical progress has been made. The concepts and processes discussed often remain vague, and conclusions about the implications of emotions for rationality are contradictory and muddled. We discuss three complementary ways to move the neuroscientific study of emotion and decision making from agenda setting to theory building. The first is to use reverse inference as a hypothesis-discovery rather than a hypothesis-testing tool, unless its utility can be systematically quantified (e.g., through meta-analysis). The second is to capitalize on the conceptual inventory advanced by the behavioral science of emotions, testing those concepts and unveiling the underlying processes. The third is to model the interplay between emotions and decisions, harnessing existing cognitive frameworks of decision making and mapping emotions onto the postulated computational processes. To conclude, emotions (like cognitive strategies) are not rational or irrational per se: How (un)reasonable their influence is depends on their fit with the environment. © The Author(s) 2015.
LSD enhances the emotional response to music.
Kaelen, M; Barrett, F S; Roseman, L; Lorenz, R; Family, N; Bolstridge, M; Curran, H V; Feilding, A; Nutt, D J; Carhart-Harris, R L
2015-10-01
There is renewed interest in the therapeutic potential of psychedelic drugs such as lysergic acid diethylamide (LSD). LSD was used extensively in the 1950s and 1960s as an adjunct in psychotherapy, reportedly enhancing emotionality. Music is an effective tool to evoke and study emotion and is considered an important element in psychedelic-assisted psychotherapy; however, the hypothesis that psychedelics enhance the emotional response to music has yet to be investigated in a modern placebo-controlled study. The present study sought to test the hypothesis that music-evoked emotions are enhanced under LSD. Ten healthy volunteers listened to five different tracks of instrumental music during each of two study days, a placebo day followed by an LSD day, separated by 5-7 days. Subjective ratings were completed after each music track and included a visual analogue scale (VAS) and the nine-item Geneva Emotional Music Scale (GEMS-9). Results demonstrated that the emotional response to music is enhanced by LSD, especially the emotions "wonder", "transcendence", "power" and "tenderness". These findings reinforce the long-held assumption that psychedelics enhance music-evoked emotion, and provide tentative and indirect support for the notion that this effect can be harnessed in the context of psychedelic-assisted psychotherapy. Further research is required to test this link directly.
Strand-seq: a unifying tool for studies of chromosome segregation
Falconer, Ester; Lansdorp, Peter M.
2013-01-01
Non-random segregation of sister chromatids has been proposed to help specify daughter cell fate (the Silent Sister Hypothesis [1]) or to protect the genome of long-lived stem cells (the Immortal Strand Hypothesis [2]). The idea that sister chromatids are non-randomly segregated into specific daughter cells is only marginally supported by data from sporadic and often contradictory studies. As a result, the field has moved forward rather slowly. The ability to directly label and differentiate sister chromatids in vivo using fluorescence in situ hybridization [3] was a significant advance for such studies. However, this approach is limited by the need for large tracts of unidirectional repeats on chromosomes and the reliance on quantitative imaging of fluorescent probes and rigorous statistical analysis to discern between the two competing hypotheses. A novel method called Strand-seq, which uses next-generation sequencing to assay sister chromatid inheritance patterns independently for each chromosome [4], offers a comprehensive approach to test for non-random segregation. In addition, Strand-seq enables studies on the deposition of chromatin marks in relation to DNA replication. This method is expected to help unify the field by testing previous claims of non-random segregation in an unbiased way in many model systems in vitro and in vivo. PMID:23665005
NASA Astrophysics Data System (ADS)
Noel, Jean; Prieto, Juan C.; Styner, Martin
2017-03-01
Functional Analysis of Diffusion Tensor Tract Statistics (FADTTS) is a toolbox for analysis of white matter (WM) fiber tracts. It allows associating diffusion properties along major WM bundles with a set of covariates of interest, such as age, diagnostic status and gender, and the structure of the variability of these WM tract properties. However, to use this toolbox, a user must have intermediate knowledge of a scripting language (MATLAB). FADTTSter was created to overcome this issue and make the statistical analysis accessible to any non-technical researcher. FADTTSter is actively being used by researchers at the University of North Carolina. FADTTSter guides non-technical users through a series of steps, including quality control of subjects and fibers, in order to set up the necessary parameters to run FADTTS. Additionally, FADTTSter implements interactive charts for FADTTS' outputs. These interactive charts enhance the researcher experience and facilitate the analysis of the results. FADTTSter's motivation is to improve usability and provide a new analysis tool to the community that complements FADTTS. Ultimately, by making FADTTS accessible to a broader audience, FADTTSter seeks to accelerate hypothesis testing in neuroimaging studies involving heterogeneous clinical data and diffusion tensor imaging. This work is submitted to the Biomedical Applications in Molecular, Structural, and Functional Imaging conference. The source code of this application is available in NITRC.
The Importance of Teaching Power in Statistical Hypothesis Testing
ERIC Educational Resources Information Center
Olinsky, Alan; Schumacher, Phyllis; Quinn, John
2012-01-01
In this paper, we discuss the importance of teaching power considerations in statistical hypothesis testing. Statistical power analysis determines the ability of a study to detect a meaningful effect size, where the effect size is the difference between the hypothesized value of the population parameter under the null hypothesis and the true value…
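The power computation the authors advocate teaching can be sketched in a few lines. The following is a minimal example for a two-sided one-sample z-test, assuming a standardized effect size and known variance; the numbers are illustrative, not from the paper.

```python
from scipy.stats import norm

def z_test_power(effect_size, n, alpha=0.05):
    """Power of a two-sided one-sample z-test, where effect_size is the
    difference between null and true means in standard deviation units."""
    z_crit = norm.ppf(1 - alpha / 2)       # two-sided critical value
    shift = effect_size * n ** 0.5         # noncentrality under the alternative
    # Probability of landing beyond either rejection boundary
    return norm.sf(z_crit - shift) + norm.cdf(-z_crit - shift)

# A 0.5 SD effect with n = 30 observations at alpha = 0.05
print(round(z_test_power(0.5, 30), 3))     # ~0.782
```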
The Relation between Parental Values and Parenting Behavior: A Test of the Kohn Hypothesis.
ERIC Educational Resources Information Center
Luster, Tom; And Others
1989-01-01
Used data on 65 mother-infant dyads to test Kohn's hypothesis concerning the relation between values and parenting behavior. Findings support Kohn's hypothesis that parents who value self-direction would emphasize supportive function of parenting and parents who value conformity would emphasize their obligations to impose restraints. (Author/NB)
Cognitive Biases in the Interpretation of Autonomic Arousal: A Test of the Construal Bias Hypothesis
ERIC Educational Resources Information Center
Ciani, Keith D.; Easter, Matthew A.; Summers, Jessica J.; Posada, Maria L.
2009-01-01
According to Bandura's construal bias hypothesis, derived from social cognitive theory, persons with the same heightened state of autonomic arousal may experience either pleasant or deleterious emotions depending on the strength of perceived self-efficacy. The current study tested this hypothesis by proposing that college students' preexisting…
Overgaard, Morten; Lindeløv, Jonas; Svejstrup, Stinna; Døssing, Marianne; Hvid, Tanja; Kauffmann, Oliver; Mouridsen, Kim
2013-01-01
This paper reports an experiment intended to test a particular hypothesis derived from blindsight research, which we name the “source misidentification hypothesis.” According to this hypothesis, a subject may be correct about a stimulus without being correct about how she had access to this knowledge (whether the stimulus was visual, auditory, or something else). We test this hypothesis in healthy subjects, asking them to report whether a masked stimulus was presented auditorily or visually, what the stimulus was, and how clearly they experienced the stimulus using the Perceptual Awareness Scale (PAS). We suggest that knowledge about perceptual modality may be a necessary precondition in order to issue correct reports of which stimulus was presented. Furthermore, we find that PAS ratings correlate with correctness, and that subjects are at chance level when reporting no conscious experience of the stimulus. To demonstrate that particular levels of reporting accuracy are obtained, we employ a statistical strategy, which operationally tests the hypothesis of non-equality, such that the usual rejection of the null-hypothesis admits the conclusion of equivalence. PMID:23508677
A large scale test of the gaming-enhancement hypothesis
Wang, John C.
2016-01-01
A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people's gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses was compared. Results provided no substantive evidence supporting the idea that having a preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support of the null hypothesis over the gaming-enhancement hypothesis. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work. PMID:27896035
Pinilla, Jaime; López-Valcárcel, Beatriz G; González-Martel, Christian; Peiro, Salvador
2018-05-09
Newcomb-Benford's Law (NBL) proposes a regular distribution for first digits, second digits and digit combinations applicable to many different naturally occurring sources of data. Testing deviations from NBL is used in many datasets as a screening tool for identifying data trustworthiness problems. This study aims to compare publicly available waiting list (WL) data from Finland and Spain for testing NBL as an instrument to flag up potential manipulation in WLs. Analysis of the frequency of Finnish and Spanish WL first digits to determine if their distribution is similar to the pattern documented by NBL. Deviations from the expected first digit frequency were analysed using Pearson's χ², mean absolute deviation and Kuiper tests. Publicly available WL data from Finland and Spain, two countries with universal health insurance and National Health Systems but characterised by different levels of transparency and good governance standards. Adjustment of the observed distribution of the numbers reported in Finnish and Spanish WL data to the expected distribution according to NBL. WL data reported by the Finnish health system fits first digit NBL according to all statistical tests used (p=0.6519, χ² test). For Spanish data, this hypothesis was rejected in all tests (p<0.0001, χ² test). Testing deviations from NBL distribution can be a useful tool to identify problems with WL data trustworthiness and signalling the need for further testing. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
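As a rough sketch of the screening procedure described here, the following compares observed first-digit frequencies against the Benford distribution P(d) = log10(1 + 1/d) with a Pearson χ² goodness-of-fit test; the input numbers are hypothetical placeholders, not the Finnish or Spanish WL data.

```python
import numpy as np
from scipy.stats import chisquare

def benford_first_digit_test(values):
    """Chi-square goodness-of-fit test of first digits against
    the Newcomb-Benford distribution P(d) = log10(1 + 1/d)."""
    first = np.array([int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0])
    observed = np.bincount(first, minlength=10)[1:10]
    expected = np.log10(1 + 1 / np.arange(1, 10)) * observed.sum()
    return chisquare(observed, expected)

# Hypothetical waiting-list counts, for illustration only
data = [1200, 934, 187, 2203, 341, 158, 99, 1764, 2890, 431, 120, 1550]
print(benford_first_digit_test(data))   # small p-value flags a Benford deviation
```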
ERIC Educational Resources Information Center
Saw, J. G.
This paper deals with some tests of hypothesis frequently encountered in the analysis of multivariate data. The type of hypothesis considered is that which the statistician can answer in the negative or the affirmative. The Doolittle method makes it possible to evaluate the determinant of a matrix of high order, to solve a matrix equation, or to…
Use of buckets as tools by Western lowland gorillas.
Margulis, Susan W; Steele, Gary R; Kleinfelder, Raymond E
2012-01-01
While all great apes have been documented to use tools, gorillas are arguably the least proficient tool users. In 2009, a Western lowland gorilla (Gorilla gorilla gorilla) at the Buffalo Zoo was observed using a bucket, which had been provided as part of normal enrichment, as a tool to collect water. We conducted a brief, ad libitum investigation to confirm the validity of the initial observation. We then carried out a systematic investigation of the behavior in 2010. We collected 72 hr of videotaped data and tested the null hypothesis that the gorillas did not differ in their prevalence of engaging in bucket-use behaviors. We documented that all four adult gorillas in the group used buckets as drinking tools; however, there was significant individual variation in frequency and type of use of buckets. Four of the eight behaviors showed significant variation among individuals. The silverback male and the youngest adult female contacted and held the bucket significantly more than the remaining two adult females. The young female carried and drank from the bucket significantly more than any other individual. Furthermore, she was observed to fill the bucket with water four of the six times during which this behavior was observed. These data provide evidence of the ability of gorillas to utilize tools, given the appropriate environmental conditions. We continue to explore the abilities of gorillas to recognize the functionality of buckets as tools. © 2012 Wiley Periodicals, Inc.
Kruschke, John K; Liddell, Torrin M
2018-02-01
In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.
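A minimal sketch of the contrast the authors draw, for a single proportion: a frequentist normal-approximation confidence interval next to a Bayesian credible interval from a conjugate Beta posterior. The counts are hypothetical, and the uniform prior is an assumption made for illustration.

```python
from scipy import stats

successes, n = 27, 50                     # hypothetical data
# Frequentist: normal-approximation 95% confidence interval
p_hat = successes / n
se = (p_hat * (1 - p_hat) / n) ** 0.5
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian: under a uniform Beta(1, 1) prior the posterior is
# Beta(successes + 1, n - successes + 1); take the central 95% interval
posterior = stats.beta(successes + 1, n - successes + 1)
cred = posterior.ppf([0.025, 0.975])

print(ci, cred)
```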
Speech, stone tool-making and the evolution of language.
Cataldo, Dana Michelle; Migliano, Andrea Bamberg; Vinicius, Lucio
2018-01-01
The 'technological hypothesis' proposes that gestural language evolved in early hominins to enable the cultural transmission of stone tool-making skills, with speech appearing later in response to the complex lithic industries of more recent hominins. However, no flintknapping study has assessed the efficiency of speech alone (unassisted by gesture) as a tool-making transmission aid. Here we show that subjects instructed by speech alone underperform in stone tool-making experiments in comparison to subjects instructed through either gesture alone or 'full language' (gesture plus speech), and also report lower satisfaction with their received instruction. The results provide evidence that gesture was likely to be selected over speech as a teaching aid in the earliest hominin tool-makers; that speech could not have replaced gesturing as a tool-making teaching aid in later hominins, possibly explaining the functional retention of gesturing in the full language of modern humans; and that speech may have evolved for reasons unrelated to tool-making. We conclude that speech is unlikely to have evolved as tool-making teaching aid superior to gesture, as claimed by the technological hypothesis, and therefore alternative views should be considered. For example, gestural language may have evolved to enable tool-making in earlier hominins, while speech may have later emerged as a response to increased trade and more complex inter- and intra-group interactions in Middle Pleistocene ancestors of Neanderthals and Homo sapiens; or gesture and speech may have evolved in parallel rather than in sequence.
Young, Anna M.; Cordier, Breanne; Mundry, Roger; Wright, Timothy F.
2014-01-01
In many social species, group members share acoustically similar calls. Functional hypotheses have been proposed for call sharing, but previous studies have been limited by an inability to distinguish among these hypotheses. We examined the function of vocal sharing in female budgerigars with a two-part experimental design that allowed us to distinguish between two functional hypotheses. The social association hypothesis proposes that shared calls help animals mediate affiliative and aggressive interactions, while the password hypothesis proposes that shared calls allow animals to distinguish group identity and exclude nonmembers. We also tested the labeling hypothesis, a mechanistic explanation which proposes that shared calls are used to address specific individuals within the sender–receiver relationship. We tested the social association hypothesis by creating four–member flocks of unfamiliar female budgerigars (Melopsittacus undulatus) and then monitoring the birds’ calls, social behaviors, and stress levels via fecal glucocorticoid metabolites. We tested the password hypothesis by moving immigrants into established social groups. To test the labeling hypothesis, we conducted additional recording sessions in which individuals were paired with different group members. The social association hypothesis was supported by the development of multiple shared call types in each cage and a correlation between the number of shared call types and the number of aggressive interactions between pairs of birds. We also found support for calls serving as a labeling mechanism using discriminant function analysis with a permutation procedure. Our results did not support the password hypothesis, as there was no difference in stress or directed behaviors between immigrant and control birds. PMID:24860236
Phase II design with sequential testing of hypotheses within each stage.
Poulopoulou, Stavroula; Karlis, Dimitris; Yiannoutsos, Constantin T; Dafni, Urania
2014-01-01
The main goal of a Phase II clinical trial is to decide whether a particular therapeutic regimen is effective enough to warrant further study. The hypothesis tested by Fleming's Phase II design (Fleming, 1982) is H0: p ≤ p0 versus H1: p ≥ p1, with level α and with power 1 − β at p = p1, where p0 is chosen to represent the response probability achievable with standard treatment and p1 is chosen such that the difference p1 − p0 represents a targeted improvement with the new treatment. This hypothesis creates a misinterpretation, mainly among clinicians, that rejection of the null hypothesis is tantamount to accepting the alternative, and vice versa. As mentioned by Storer (1992), this introduces ambiguity in the evaluation of type I and II errors and the choice of the appropriate decision at the end of the study. Instead of testing this hypothesis, an alternative class of designs is proposed in which two hypotheses are tested sequentially. The hypothesis H0: p ≤ p0 versus H1: p > p0 is tested first. If this null hypothesis is rejected, a second hypothesis comparing the response probability against the target value p1 is tested next, in order to examine whether the therapy is effective enough to consider further testing in a Phase III study. For the derivation of the proposed design the exact binomial distribution is used to calculate the decision cut-points. The optimal design parameters are chosen so as to minimize the average sample number (ASN) under specific upper bounds for error levels. The optimal values for the design were found using a simulated annealing method.
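The exact-binomial machinery behind such designs is easy to sketch. The snippet below finds, for a single test of H0: p ≤ p0, the smallest response count whose exact tail probability stays below α; it is a simplified single-stage illustration, not the authors' sequential ASN-optimized procedure, and the n and p0 values are assumptions.

```python
from scipy.stats import binom

def exact_cutoff(n, p0, alpha=0.05):
    """Smallest number of responses r such that
    P(X >= r | p = p0) <= alpha for X ~ Binomial(n, p0)."""
    for r in range(n + 1):
        if binom.sf(r - 1, n, p0) <= alpha:   # sf(r-1) = P(X >= r)
            return r
    return n + 1

# Hypothetical trial: n = 40 patients, standard response rate p0 = 0.20
r = exact_cutoff(40, 0.20)
print(r, binom.sf(r - 1, 40, 0.20))   # cutoff and its exact size
```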
An Extension of RSS-based Model Comparison Tests for Weighted Least Squares
2012-08-22
use the model comparison test statistic to analyze the null hypothesis. Under the null hypothesis, the weighted least squares cost functional is J_WLS(q̂_WLS^H) = 10.3040 × 10⁶. Under the alternative hypothesis, the weighted least squares cost functional is J_WLS(q̂_WLS) = 8.8394 × 10⁶. Thus the model
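For orientation, a sketch of how such an RSS-based comparison statistic is typically formed: n(J_H − J)/J referred to a χ² distribution with degrees of freedom equal to the number of constraints, the standard asymptotic form for residual-sum-of-squares model comparison tests. The cost values are the ones quoted in the fragment above; the sample size n and constraint count r are hypothetical, as the fragment does not state them.

```python
from scipy.stats import chi2

# Weighted least squares costs quoted in the report fragment
J_null = 10.3040e6   # cost under the constrained (null) model
J_alt = 8.8394e6     # cost under the unconstrained model
n, r = 100, 2        # hypothetical sample size and number of constraints

U = n * (J_null - J_alt) / J_alt       # RSS-based comparison statistic
p_value = chi2.sf(U, df=r)             # asymptotic chi-square reference
print(U, p_value)
```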
Hypothesis testing of scientific Monte Carlo calculations.
Wallerberger, Markus; Gull, Emanuel
2017-11-01
The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
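A minimal sketch of the idea, assuming the simple case where the simulated expectation has a known reference value: batch the samples, form a z-statistic, and flag the run when the two-sided p-value is small. The batch count is an illustrative choice.

```python
import numpy as np
from scipy.stats import norm

def mc_consistency_test(samples, reference, n_batches=32):
    """Two-sided z-test that a Monte Carlo estimate agrees with a known
    reference value, using batch means to reduce autocorrelation effects."""
    batches = np.array_split(np.asarray(samples), n_batches)
    means = np.array([b.mean() for b in batches])
    err = means.std(ddof=1) / np.sqrt(n_batches)
    z = (means.mean() - reference) / err
    return 2 * norm.sf(abs(z))     # small p-value flags a numerical problem or bug

# Sanity check: estimate E[X] = 0.5 for X ~ Uniform(0, 1)
rng = np.random.default_rng(0)
print(mc_consistency_test(rng.uniform(size=100_000), 0.5))
```

Run repeatedly as part of a test suite, such checks fail at the chosen rate by design, so the threshold should be set with the number of automated runs in mind.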
Sex ratios in the two Germanies: a test of the economic stress hypothesis.
Catalano, Ralph A
2003-09-01
Literature describing temporal variation in the secondary sex ratio among humans reports an association between population stressors and declines in the odds of male birth. Explanations of this phenomenon draw on reports that stressed females spontaneously abort male fetuses more often than female fetuses, and that stressed males exhibit reduced sperm motility. This work has led to the argument that population stress induced by a declining economy reduces the human sex ratio. No direct test of this hypothesis appears in the literature. Here, a test is offered based on a comparison of the sex ratio in East and West Germany for the years 1946 to 1999. The theory suggests that the East German sex ratio should be lower in 1991, when East Germany's economy collapsed, than expected from its own history and from the sex ratio in West Germany. The hypothesis is tested using time-series modelling methods. The data support the hypothesis. The sex ratio in East Germany was at its lowest in 1991. This first direct test supports the hypothesis that economic decline reduces the human sex ratio.
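One hedged way to implement such a test: fit a simple AR model to the pre-collapse series and check whether the 1991 value falls outside the one-step forecast interval. The series below is a simulated placeholder, not the German data, and the AR(1) specification is an assumption; the abstract does not state the paper's exact time-series model.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
# Placeholder series standing in for annual sex ratios, 1946-1990
pre = 0.515 + 0.002 * rng.standard_normal(45)
observed_1991 = 0.508                     # hypothetical post-shock value

fit = ARIMA(pre, order=(1, 0, 0)).fit()   # AR(1) with a constant
fc = fit.get_forecast(steps=1)
lo, hi = fc.conf_int(alpha=0.05)[0]       # 95% forecast interval for 1991
print(lo, hi, not (lo <= observed_1991 <= hi))   # True => outside the interval
```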
Understanding suicide terrorism: premature dismissal of the religious-belief hypothesis.
Liddle, James R; Machluf, Karin; Shackelford, Todd K
2010-07-06
We comment on work by Ginges, Hansen, and Norenzayan (2009), in which they compare two hypotheses for predicting individual support for suicide terrorism: the religious-belief hypothesis and the coalitional-commitment hypothesis. Although we appreciate the evidence provided in support of the coalitional-commitment hypothesis, we argue that their method of testing the religious-belief hypothesis is conceptually flawed, thus calling into question their conclusion that the religious-belief hypothesis has been disconfirmed. In addition to critiquing the methodology implemented by Ginges et al., we provide suggestions on how the religious-belief hypothesis may be properly tested. It is possible that the premature and unwarranted conclusions reached by Ginges et al. may deter researchers from examining the effect of specific religious beliefs on support for terrorism, and we hope that our comments can mitigate this possibility.
ERIC Educational Resources Information Center
Borgmeier, Chris; Horner, Robert H.
2006-01-01
Faced with limited resources, schools require tools that increase the accuracy and efficiency of functional behavioral assessment. Yarbrough and Carr (2000) provided evidence that informant confidence ratings of the likelihood of problem behavior in specific situations offered a promising tool for predicting the accuracy of function-based…
Physical intelligence does matter to cumulative technological culture.
Osiurak, François; De Oliveira, Emmanuel; Navarro, Jordan; Lesourd, Mathieu; Claidière, Nicolas; Reynaud, Emanuelle
2016-08-01
Tool-based culture is not unique to humans, but cumulative technological culture is. The social intelligence hypothesis suggests that this phenomenon is fundamentally based on uniquely human sociocognitive skills (e.g., shared intentionality). An alternative hypothesis is that cumulative technological culture also crucially depends on physical intelligence, which may reflect fluid and crystallized aspects of intelligence and enables people to understand and improve the tools made by predecessors. By using a tool-making-based microsociety paradigm, we demonstrate that physical intelligence is a stronger predictor of cumulative technological performance than social intelligence. Moreover, learners' physical intelligence is critical not only in observational learning but also when learners interact verbally with teachers. Finally, we show that cumulative performance is only slightly influenced by teachers' physical and social intelligence. In sum, human technological culture needs "great engineers" to evolve regardless of the proportion of "great pedagogues." Social intelligence might play a more limited role than commonly assumed, perhaps in tool-use/making situations in which teachers and learners have to share symbolic representations. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Feldman, Anatol G; Latash, Mark L
2005-02-01
Criticisms of the equilibrium point (EP) hypothesis have recently appeared that are based on misunderstandings of some of its central notions. Starting from such interpretations of the hypothesis, incorrect predictions are made and tested. When the incorrect predictions prove false, the hypothesis is claimed to be falsified. In particular, the hypothesis has been rejected based on the wrong assumptions that it conflicts with empirically defined joint stiffness values or that it is incompatible with violations of equifinality under certain velocity-dependent perturbations. Typically, such attempts use notions describing the control of movements of artificial systems in place of physiologically relevant ones. While appreciating constructive criticisms of the EP hypothesis, we feel that incorrect interpretations have to be clarified by reiterating what the EP hypothesis does and does not predict. We conclude that the recent claims of falsifying the EP hypothesis and the calls for its replacement by EMG-force control hypothesis are unsubstantiated. The EP hypothesis goes far beyond the EMG-force control view. In particular, the former offers a resolution for the famous posture-movement paradox while the latter fails to resolve it.
Action perception as hypothesis testing.
Donnarumma, Francesco; Costantini, Marcello; Ambrosini, Ettore; Friston, Karl; Pezzulo, Giovanni
2017-04-01
We present a novel computational model that describes action perception as an active inferential process that combines motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions - and underlying hypotheses. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction; but uncertainty induces a more reactive gaze strategy, via tracking the observed movements. Our model offers a novel perspective on action observation that highlights its active nature based on prediction dynamics and hypothesis testing. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Garcea, Frank E.; Dombovy, Mary; Mahon, Bradford Z.
2013-01-01
A number of studies have observed that the motor system is activated when processing the semantics of manipulable objects. Such phenomena have been taken as evidence that simulation over motor representations is a necessary and intermediary step in the process of conceptual understanding. Cognitive neuropsychological evaluations of patients with impairments for action knowledge permit a direct test of the necessity of motor simulation in conceptual processing. Here, we report the performance of a 47-year-old male individual (Case AA) and six age-matched control participants on a number of tests probing action and object knowledge. Case AA had a large left-hemisphere frontal-parietal lesion and hemiplegia affecting his right arm and leg. Case AA presented with impairments for object-associated action production, and his conceptual knowledge of actions was severely impaired. In contrast, his knowledge of objects such as tools and other manipulable objects was largely preserved. The dissociation between action and object knowledge is difficult to reconcile with strong forms of the embodied cognition hypothesis. We suggest that these, and other similar findings, point to the need to develop tractable hypotheses about the dynamics of information exchange among sensory, motor and conceptual processes. PMID:23641205
Grubb, Stephen C.; Maddatu, Terry P.; Bult, Carol J.; Bogue, Molly A.
2009-01-01
The Mouse Phenome Database (MPD; http://www.jax.org/phenome) is an open source, web-based repository of phenotypic and genotypic data on commonly used and genetically diverse inbred strains of mice and their derivatives. MPD is also a facility for query, analysis and in silico hypothesis testing. Currently MPD contains about 1400 phenotypic measurements contributed by research teams worldwide, including phenotypes relevant to human health such as cancer susceptibility, aging, obesity, susceptibility to infectious diseases, atherosclerosis, blood disorders and neurosensory disorders. Electronic access to centralized strain data enables investigators to select optimal strains for many systems-based research applications, including physiological studies, drug and toxicology testing, modeling disease processes and complex trait analysis. The ability to select strains for specific research applications by accessing existing phenotype data can bypass the need to (re)characterize strains, precluding major investments of time and resources. This functionality, in turn, accelerates research and leverages existing community resources. Since our last NAR reporting in 2007, MPD has added more community-contributed data covering more phenotypic domains and implemented several new tools and features, including a new interactive Tool Demo available through the MPD homepage (quick link: http://phenome.jax.org/phenome/trytools). PMID:18987003
Heckman, James; Moon, Seong Hyeok; Pinto, Rodrigo; Savelyev, Peter; Yavitz, Adam
2012-01-01
Social experiments are powerful sources of information about the effectiveness of interventions. In practice, initial randomization plans are almost always compromised. Multiple hypotheses are frequently tested. “Significant” effects are often reported with p-values that do not account for preliminary screening from a large candidate pool of possible effects. This paper develops tools for analyzing data from experiments as they are actually implemented. We apply these tools to analyze the influential HighScope Perry Preschool Program. The Perry program was a social experiment that provided preschool education and home visits to disadvantaged children during their preschool years. It was evaluated by the method of random assignment. Both treatments and controls have been followed from age 3 through age 40. Previous analyses of the Perry data assume that the planned randomization protocol was implemented. In fact, as in many social experiments, the intended randomization protocol was compromised. Accounting for compromised randomization, multiple-hypothesis testing, and small sample sizes, we find statistically significant and economically important program effects for both males and females. We also examine the representativeness of the Perry study. PMID:23255883
Mau, Ted; Palaparthi, Anil; Riede, Tobias; Titze, Ingo R.
2015-01-01
Objectives/Hypothesis: To test the hypothesis that subligamental cordectomy produces a superior acoustic outcome to subepithelial cordectomy for early (T1-2) glottic cancer that requires complete removal of the superficial lamina propria but does not involve the vocal ligament. Study Design: Computer simulation. Methods: A computational tool for vocal fold surgical planning and simulation (the National Center for Voice and Speech Phonosurgery Optimizer-Simulator) was used to evaluate the acoustic output of alternative vocal fold morphologies. Four morphologies were simulated: normal, subepithelial cordectomy, subligamental cordectomy, and transligamental cordectomy (partial ligament resection). The primary outcome measure was the range of fundamental frequency (F0) and sound pressure level (SPL). A more restricted F0-SPL range was considered less favorable because of reduced acoustic possibilities given the same range of driving subglottic pressure and identical vocal fold posturing. Results: Subligamental cordectomy generated solutions covering an F0-SPL range 82% of normal for a rectangular vocal fold. In contrast, transligamental and subepithelial cordectomies produced significantly smaller F0-SPL ranges, 57% and 19% of normal, respectively. Conclusion: This study illustrates the use of the Phonosurgery Optimizer-Simulator to test a specific hypothesis regarding the merits of two surgical alternatives. These simulation results provide theoretical support for vocal ligament excision with maximum muscle preservation when superficial lamina propria resection is necessary but the vocal ligament can be spared on oncological grounds. The resection of more tissue may paradoxically allow the eventual recovery of a better speaking voice, assuming glottal width is restored. Application of this conclusion to surgical practice will require confirmatory clinical data. PMID:26010240
Confidence intervals for single-case effect size measures based on randomization test inversion.
Michiels, Bart; Heyvaert, Mieke; Meulders, Ann; Onghena, Patrick
2017-02-01
In the current paper, we present a method to construct nonparametric confidence intervals (CIs) for single-case effect size measures in the context of various single-case designs. We use the relationship between a two-sided statistical hypothesis test at significance level α and a 100(1 − α)% two-sided CI to construct CIs for any effect size measure θ that contain all point null hypothesis θ values that cannot be rejected by the hypothesis test at significance level α. This method of hypothesis test inversion (HTI) can be employed using a randomization test as the statistical hypothesis test in order to construct a nonparametric CI for θ. We will refer to this procedure as randomization test inversion (RTI). We illustrate RTI in a situation in which θ is the unstandardized and the standardized difference in means between two treatments in a completely randomized single-case design. Additionally, we demonstrate how RTI can be extended to other types of single-case designs. Finally, we discuss a few challenges for RTI as well as possibilities when using the method with other effect size measures, such as rank-based nonoverlap indices. Supplementary to this paper, we provide easy-to-use R code, which allows the user to construct nonparametric CIs according to the proposed method.
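The authors provide R code; the following is an independent Python sketch of the same RTI idea for the unstandardized mean difference in a completely randomized two-group design. The data and grid are hypothetical, and full enumeration of assignments is feasible only for the small samples typical of single-case research.

```python
import numpy as np
from itertools import combinations

def randomization_p(a, b, theta0):
    """Two-sided randomization p-value for H0: mean(a) - mean(b) = theta0,
    enumerating all treatment assignments of the shifted pooled scores."""
    pooled = np.concatenate([np.asarray(a, float) - theta0, np.asarray(b, float)])
    n, k = len(pooled), len(a)
    t_obs = pooled[:k].mean() - pooled[k:].mean()
    hits = total = 0
    for idx in combinations(range(n), k):
        mask = np.zeros(n, bool)
        mask[list(idx)] = True
        hits += abs(pooled[mask].mean() - pooled[~mask].mean()) >= abs(t_obs) - 1e-12
        total += 1
    return hits / total

def rti_interval(a, b, grid, alpha=0.05):
    """CI by test inversion: all theta0 values the test cannot reject."""
    kept = [t for t in grid if randomization_p(a, b, t) > alpha]
    return min(kept), max(kept)

a, b = [8, 7, 9, 6], [4, 5, 3, 5]      # hypothetical scores from two treatments
print(rti_interval(a, b, grid=np.arange(-1, 8.25, 0.25)))
```

The grid resolution trades precision of the interval endpoints against computation; a coarse scan followed by refinement near the boundaries is a natural improvement.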
NASA Astrophysics Data System (ADS)
Verma, Sneha K.; Chun, Sophia; Liu, Brent J.
2014-03-01
Pain is a common complication after spinal cord injury, with prevalence estimates ranging from 77% to 81%, and it highly affects a patient's lifestyle and well-being. In the current clinical setting, paper-based forms are used to classify pain correctly; however, the accuracy of diagnoses and optimal management of pain largely depend on the expert reviewer, which in many cases is not possible because there are very few experts in this field. The need for a clinical decision support system that can be used by expert and non-expert clinicians has been cited in the literature, but such a system has not been developed. We have designed and developed a stand-alone tool for correctly classifying pain type in spinal cord injury (SCI) patients, using Bayesian decision theory. Various machine learning simulation methods are used to verify the algorithm on a pilot study data set of 48 patients. The data set consists of the paper-based forms collected at the Long Beach VA clinic, with pain classification done by an expert in the field. Using WEKA as the machine learning tool, we have tested on the 48-patient dataset the hypothesis that the attributes collected on the forms and the pain location marked by patients have a very significant impact on pain type classification. This tool will be integrated with an imaging informatics system to support a clinical study that will test the effectiveness of using Proton Beam radiotherapy for treating spinal cord injury (SCI) related neuropathic pain as an alternative to invasive surgical lesioning.
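The abstract does not spell out the classifier's internals, so the sketch below substitutes an off-the-shelf Gaussian naive Bayes model, a simple instance of Bayesian decision theory, on synthetic stand-ins for the form attributes; the features, labels, and evaluation scheme are all placeholders rather than the authors' pipeline.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-ins for form attributes (e.g. intensity, location code)
X = rng.normal(size=(48, 4))               # 48 patients, 4 features
y = rng.integers(0, 3, size=48)            # 3 hypothetical pain classes

clf = GaussianNB()                         # Bayes rule with Gaussian likelihoods
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated accuracy
print(scores.mean())
```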
NASA Technical Reports Server (NTRS)
Loftin, K. C.; Conkin, J.; Powell, M. R.
1997-01-01
BACKGROUND: Several previous studies indicated that exercise during prebreathe with 100% O2 decreased the incidence of hypobaric decompression sickness (DCS). We report a meta-analysis of these investigations combined with a new study in our laboratory to develop a statistical model as a predictive tool for DCS. HYPOTHESIS: Exercise during prebreathe increases N2 elimination in a theoretical 360-min half-time compartment decreasing the incidence of DCS. METHODS: A dose-response probability tissue ratio (TR) model with 95% confidence limits was created for two groups, prebreathe with exercise (n = 113) and resting prebreathe (n = 113), using nonlinear regression analysis with maximum likelihood optimization. RESULTS: The model predicted that prebreathe exercise would reduce the residual N2 in a 360-min half-time compartment to a level analogous to that in a 180-min compartment. This finding supported the hypothesis. The incidence of DCS for the exercise prebreathe group was significantly decreased (Chi-Square = 17.1, p < 0.0001) from the resting prebreathe group. CONCLUSIONS: The results suggested that exercise during prebreathe increases tissue perfusion and N2 elimination approximately 2-fold and markedly lowers the risk of DCS. Based on the model, the prebreathe duration may be reduced from 240 min to a predicted 91 min for the protocol in our study, but this remains to be verified. The model provides a useful planning tool to develop and test appropriate prebreathe exercise protocols and to predict DCS risks for astronauts.
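The compartment arithmetic behind a tissue-ratio model can be sketched with simple exponential washout. The sea-level N2 tension, suit pressure, and prebreathe duration below are assumed round numbers for illustration; the 360- versus 180-minute comparison mirrors the compartment equivalence reported above.

```python
import math

def n2_tension(p0, t_min, half_time_min):
    """N2 tension remaining in a compartment after t minutes of
    100% O2 prebreathe, assuming simple exponential washout."""
    return p0 * math.exp(-math.log(2) * t_min / half_time_min)

p0 = 0.79 * 760                # sea-level N2 tension in mmHg (assumption)
ambient = 222                  # hypothetical suit pressure, mmHg
for half_time in (360, 180):   # resting vs. exercise-equivalent compartment
    p = n2_tension(p0, t_min=240, half_time_min=half_time)
    print(half_time, round(p / ambient, 2))   # tissue ratio TR = P_N2 / P_ambient
```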
ERIC Educational Resources Information Center
Besken, Miri
2016-01-01
The perceptual fluency hypothesis claims that items that are easy to perceive at encoding induce an illusion that they will be easier to remember, despite the finding that perception does not generally affect recall. The current set of studies tested the predictions of the perceptual fluency hypothesis with a picture generation manipulation.…
Adolescents' Body Image Trajectories: A Further Test of the Self-Equilibrium Hypothesis
ERIC Educational Resources Information Center
Morin, Alexandre J. S.; Maïano, Christophe; Scalas, L. Francesca; Janosz, Michel; Litalien, David
2017-01-01
The self-equilibrium hypothesis underlines the importance of having a strong core self, which is defined as a high and developmentally stable self-concept. This study tested this hypothesis in relation to body image (BI) trajectories in a sample of 1,006 adolescents (M_age = 12.6; 541 males and 465 females) across a 4-year…
ERIC Educational Resources Information Center
Trafimow, David
2017-01-01
There has been much controversy over the null hypothesis significance testing procedure, with much of the criticism centered on the problem of inverse inference. Specifically, p gives the probability of the finding (or one more extreme) given the null hypothesis, whereas the null hypothesis significance testing procedure involves drawing a…
ERIC Educational Resources Information Center
Lee, Jungmin
2016-01-01
This study tested the Bennett hypothesis by examining whether four-year colleges changed listed tuition and fees, the amount of institutional grants per student, and room and board charges after their states implemented statewide merit-based aid programs. According to the Bennett hypothesis, increases in government financial aid make it easier for…
Human female orgasm as evolved signal: a test of two hypotheses.
Ellsworth, Ryan M; Bailey, Drew H
2013-11-01
We present the results of a study designed to empirically test predictions derived from two hypotheses regarding human female orgasm behavior as an evolved communicative trait or signal. One hypothesis tested was the female fidelity hypothesis, which posits that human female orgasm signals a woman's sexual satisfaction and therefore her likelihood of future fidelity to a partner. The other was the sire choice hypothesis, which posits that women's orgasm behavior signals increased chances of fertilization. To test the two hypotheses of human female orgasm, we administered a questionnaire to 138 females and 121 males who reported that they were currently in a romantic relationship. Key predictions of the female fidelity hypothesis were not supported. In particular, orgasm was not associated with female sexual fidelity, nor was orgasm associated with male perceptions of partner sexual fidelity. However, faked orgasm was associated with female sexual infidelity and lower male relationship satisfaction. Overall, results were in greater support of the sire choice signaling hypothesis than the female fidelity hypothesis. Results also suggest that male satisfaction with, investment in, and sexual fidelity to a mate are benefits that favored the selection of orgasmic signaling in ancestral females.
Luo, Liqun; Zhao, Wei; Weng, Tangmei
2016-01-01
The Trivers-Willard hypothesis predicts that high-status parents will bias their investment to sons, whereas low-status parents will bias their investment to daughters. Among humans, tests of this hypothesis have yielded mixed results. This study tests the hypothesis using data collected among contemporary peasants in Central South China. We use current family status (rated by our informants) and father's former class identity (assigned by the Chinese Communist Party in the early 1950s) as measures of parental status, and proportion of sons in offspring and offspring's years of education as measures of parental investment. Results show that (i) those families with a higher former class identity such as landlord and rich peasant tend to have a higher socioeconomic status currently, (ii) high-status parents are more likely to have sons than daughters among their biological offspring, and (iii) in higher-status families, the years of education obtained by sons exceed that obtained by daughters to a larger extent than in lower-status families. Thus, the first assumption and the two predictions of the hypothesis are supported by this study. This article contributes a contemporary Chinese case to the testing of the Trivers-Willard hypothesis.
Hypothesis testing of a change point during cognitive decline among Alzheimer's disease patients.
Ji, Ming; Xiong, Chengjie; Grundman, Michael
2003-10-01
In this paper, we present a statistical hypothesis test for detecting a change point over the course of cognitive decline among Alzheimer's disease patients. The model under the null hypothesis assumes a constant rate of cognitive decline over time, and the model under the alternative hypothesis is a general bilinear model with an unknown change point. When the change point is unknown, however, the null distribution of the test statistic is not analytically tractable and has to be simulated by parametric bootstrap. When the alternative hypothesis that a change point exists is accepted, we propose an estimate of its location based on the Akaike Information Criterion. We applied our method to a data set from the Neuropsychological Database Initiative by implementing our hypothesis testing method to analyze Mini Mental Status Exam (MMSE) scores based on a random-slope and random-intercept model with a bilinear fixed effect. Our result shows that, despite a large amount of missing data, accelerated decline did occur in MMSE scores among AD patients. Our finding supports the clinical belief of the existence of a change point during cognitive decline among AD patients and suggests the use of change point models for the longitudinal modeling of cognitive decline in AD research.
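A simplified single-series sketch of the testing scheme described: fit a straight line and a broken-stick (bilinear) model with the change point chosen by grid search, take the RSS improvement as the statistic, and simulate its null distribution by parametric bootstrap. The random-effects structure of the actual model is omitted, and the data are synthetic MMSE-like scores.

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares of the least squares fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(((y - X @ beta) ** 2).sum())

def lin_design(t):
    return np.column_stack([np.ones_like(t), t])

def best_bilinear_rss(t, y):
    """Broken-stick model: grid-search the unknown change point."""
    return min(rss(np.column_stack([lin_design(t), np.maximum(t - tau, 0)]), y)
               for tau in t[2:-2])

def changepoint_test(t, y, n_boot=500, seed=0):
    stat = rss(lin_design(t), y) - best_bilinear_rss(t, y)
    # Parametric bootstrap of the null: straight-line decline plus noise
    beta, *_ = np.linalg.lstsq(lin_design(t), y, rcond=None)
    sd = np.sqrt(rss(lin_design(t), y) / (len(t) - 2))
    rng = np.random.default_rng(seed)
    null = [rss(lin_design(t), yb) - best_bilinear_rss(t, yb)
            for yb in (lin_design(t) @ beta + rng.normal(0, sd, len(t))
                       for _ in range(n_boot))]
    return stat, float(np.mean(np.array(null) >= stat))

t = np.arange(20.0)
rng = np.random.default_rng(1)
y = 28 - 0.3 * t - 0.9 * np.maximum(t - 12, 0) + rng.normal(0, 0.8, 20)
print(changepoint_test(t, y))   # (statistic, bootstrap p-value)
```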
NASA Astrophysics Data System (ADS)
Menne, Matthew J.; Williams, Claude N., Jr.
2005-10-01
An evaluation of three hypothesis test statistics that are commonly used in the detection of undocumented changepoints is described. The goal of the evaluation was to determine whether the use of multiple tests could improve undocumented, artificial changepoint detection skill in climate series. The use of successive hypothesis testing is compared to optimal approaches, both of which are designed for situations in which multiple undocumented changepoints may be present. In addition, the importance of the form of the composite climate reference series is evaluated, particularly with regard to the impact of undocumented changepoints in the various component series that are used to calculate the composite. In a comparison of single test changepoint detection skill, the composite reference series formulation is shown to be less important than the choice of the hypothesis test statistic, provided that the composite is calculated from the serially complete and homogeneous component series. However, each of the evaluated composite series is not equally susceptible to the presence of changepoints in its components, which may be erroneously attributed to the target series. Moreover, a reference formulation that is based on the averaging of the first-difference component series is susceptible to random walks when the composition of the component series changes through time (e.g., values are missing), and its use is, therefore, not recommended. When more than one test is required to reject the null hypothesis of no changepoint, the number of detected changepoints is reduced proportionately less than the number of false alarms in a wide variety of Monte Carlo simulations. Consequently, a consensus of hypothesis tests appears to improve undocumented changepoint detection skill, especially when reference series homogeneity is violated. A consensus of successive hypothesis tests using a semihierarchic splitting algorithm also compares favorably to optimal solutions, even when changepoints are not hierarchic.
Current Perspectives on the Cerebellum and Reading Development.
Alvarez, Travis A; Fiez, Julie A
2018-05-03
The dominant neural models of typical and atypical reading focus on the cerebral cortex. However, Nicolson et al. (2001) proposed a model, the cerebellar deficit hypothesis, in which the cerebellum plays an important role in reading. To evaluate the evidence in support of this model, we qualitatively review the current literature and employ meta-analytic tools examining patterns of functional connectivity between the cerebellum and the cerebral reading network. We find evidence for a phonological circuit with connectivity between the cerebellum and a dorsal fronto-parietal pathway, and a semantic circuit with cerebellar connectivity to a ventral fronto-temporal pathway. Furthermore, both cerebral pathways have functional connections with the mid-fusiform gyrus, a region implicated in orthographic processing. Consideration of these circuits within the context of the current literature suggests the cerebellum is positioned to influence both phonological and word-based decoding procedures for recognizing unfamiliar printed words. Overall, multiple lines of research provide support for the cerebellar deficit hypothesis, while also highlighting the need for further research to test mechanistic hypotheses. Copyright © 2018. Published by Elsevier Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, J. J.; Huang, Y. F.; Lu, T., E-mail: hyf@nju.edu.cn
2015-05-01
Strange-quark matter (SQM) may be the true ground state of hadronic matter, indicating that the observed pulsars may actually be strange stars (SSs), but not neutron stars. According to the SQM hypothesis, the existence of a hydrostatically stable sequence of SQM stars has been predicted, ranging from 1 to 2 solar mass SSs, to smaller strange dwarfs and even strange planets. While gravitational wave (GW) astronomy is expected to open a new window to the universe, it will shed light on the search for SQM stars. Here we show that due to their extreme compactness, strange planets can spiral very close to their host SSs without being tidally disrupted. Like inspiraling neutron stars or black holes, these systems would serve as new sources of GW bursts, producing strong GWs at the final stage. The events occurring in our local universe can be detected by upcoming GW detectors, such as Advanced LIGO and the Einstein Telescope. This effect provides a unique probe of SQM objects and is hopefully a powerful tool for testing the SQM hypothesis.
Balakumar, Pitchai; Inamdar, Mohammed Naseeruddin; Jagadeesh, Gowraganahalli
2013-01-01
An interactive workshop on ‘The Critical Steps for Successful Research: The Research Proposal and Scientific Writing’ was conducted in conjunction with the 64th Annual Conference of the Indian Pharmaceutical Congress-2012 at Chennai, India. In essence, research is performed to enlighten our understanding of a contemporary issue relevant to the needs of society. To accomplish this, a researcher begins search for a novel topic based on purpose, creativity, critical thinking, and logic. This leads to the fundamental pieces of the research endeavor: Question, objective, hypothesis, experimental tools to test the hypothesis, methodology, and data analysis. When correctly performed, research should produce new knowledge. The four cornerstones of good research are the well-formulated protocol or proposal that is well executed, analyzed, discussed and concluded. This recent workshop educated researchers in the critical steps involved in the development of a scientific idea to its successful execution and eventual publication. PMID:23761709
Human-pet interaction and loneliness: a test of concepts from Roy's adaptation model.
Calvert, M M
1989-01-01
This research used two key concepts from Roy's adaptation model of nursing to examine the relationship between human-pet interaction and loneliness in nursing home residents. These concepts included (a) environmental stimuli as factors influencing adaptation and (b) interdependence as a mode of response to the environment. The hypothesis of this study asserted that the residents of a nursing home who had greater levels of interaction with a pet program would experience less loneliness than those who had lower levels of interaction with a pet. The study used an ex post facto nonexperimental design with 65 subjects. The simplified version of the revised UCLA loneliness scale was used to measure loneliness. Reported level of human-pet interaction was measured according to a four-point scale (1 = no interaction, 4 = quite a lot of interaction). The hypothesis was supported at the p less than 0.03 level of significance. Implications for practice through organizing pet programs in situations where loneliness exists are discussed. Recommendations for future research include replicating the study using a larger sample and developing a comprehensive human-pet interaction tool.
Bayesian Methods for Determining the Importance of Effects
USDA-ARS's Scientific Manuscript database
Criticisms have plagued the frequentist null-hypothesis significance testing (NHST) procedure since the day it was created from the Fisher Significance Test and Hypothesis Test of Jerzy Neyman and Egon Pearson. Alternatives to NHST exist in frequentist statistics, but competing methods are also avai...
Testing for purchasing power parity in the long-run for ASEAN-5
NASA Astrophysics Data System (ADS)
Choji, Niri Martha; Sek, Siok Kun
2017-04-01
For more than a decade, there has been substantial interest in empirically testing the validity of the purchasing power parity (PPP) hypothesis. This paper tests for long-run relative purchasing power parity for the ASEAN-5 countries over the period 1996-2016 using monthly data. For this purpose, we used the Pedroni co-integration method to test the long-run hypothesis of purchasing power parity. We first tested for the stationarity of the variables and found that the variables are non-stationary at levels but stationary at first difference. Results of the Pedroni test rejected the null hypothesis of no co-integration, meaning that we have enough evidence to support PPP in the long-run for the ASEAN-5 countries over the period of 1996-2016. In other words, the rejection of the null hypothesis implies a long-run relation between nominal exchange rates and relative prices.
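The first step reported here, unit-root testing at levels and first differences, can be sketched with the augmented Dickey-Fuller test from statsmodels. The series below is a simulated random-walk stand-in for a log real exchange rate, not the ASEAN-5 data; the Pedroni panel co-integration step itself is not shown.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
# Simulated log real exchange rate: a random walk, hence non-stationary in levels
q = np.cumsum(rng.normal(0, 0.02, 250))

for name, series in [("levels", q), ("first difference", np.diff(q))]:
    stat, pval, *_ = adfuller(series)   # H0: the series has a unit root
    print(f"{name}: ADF = {stat:.2f}, p = {pval:.3f}")
```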
UNIFORMLY MOST POWERFUL BAYESIAN TESTS
Johnson, Valen E.
2014-01-01
Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerful Bayesian tests to be tests that maximize the probability that the Bayes factor, in favor of the alternative hypothesis, exceeds a specified threshold. Like their classical counterpart, uniformly most powerful Bayesian tests are most easily defined in one-parameter exponential family models, although extensions outside of this class are possible. The connection between uniformly most powerful tests and uniformly most powerful Bayesian tests can be used to provide an approximate calibration between p-values and Bayes factors. Finally, issues regarding the strong dependence of resulting Bayes factors and p-values on sample size are discussed. PMID:24659829
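A minimal sketch of the idea in the one-sample normal model with known variance, assuming the standard result for this setting: the UMPBT alternative is the point value delta* = sigma*sqrt(2*ln(gamma)/n), which also yields the p-value/Bayes-factor calibration the abstract mentions. All numbers are illustrative.

```python
import numpy as np
from scipy import stats

sigma, n, gamma = 1.0, 50, 10.0

# Alternative that maximizes P(BF10 > gamma) for H0: mu = 0 (assumed result
# for the one-parameter normal model with known sigma):
delta_star = sigma * np.sqrt(2.0 * np.log(gamma) / n)

rng = np.random.default_rng(1)
x = rng.normal(loc=0.3, scale=sigma, size=n)  # hypothetical data

# Bayes factor for H1: mu = delta_star vs H0: mu = 0.
xbar = x.mean()
log_bf10 = n * delta_star * xbar / sigma**2 - n * delta_star**2 / (2 * sigma**2)
print(f"BF10 = {np.exp(log_bf10):.2f}")

# Calibration: BF10 > gamma corresponds to z > sqrt(2 ln gamma), which
# maps the Bayes-factor threshold to a matching one-sided p-value cutoff.
z_cut = np.sqrt(2.0 * np.log(gamma))
print(f"equivalent z cutoff = {z_cut:.2f}, p cutoff = {stats.norm.sf(z_cut):.4f}")
```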
An Independent Filter for Gene Set Testing Based on Spectral Enrichment.
Frost, H Robert; Li, Zhigang; Asselbergs, Folkert W; Moore, Jason H
2015-01-01
Gene set testing has become an indispensable tool for the analysis of high-dimensional genomic data. An important motivation for testing gene sets, rather than individual genomic variables, is to improve statistical power by reducing the number of tested hypotheses. Given the dramatic growth in common gene set collections, however, testing is often performed with nearly as many gene sets as underlying genomic variables. To address the challenge to statistical power posed by large gene set collections, we have developed spectral gene set filtering (SGSF), a novel technique for independent filtering of gene set collections prior to gene set testing. The SGSF method uses as a filter statistic the p-value measuring the statistical significance of the association between each gene set and the sample principal components (PCs), taking into account the significance of the associated eigenvalues. Because this filter statistic is independent of standard gene set test statistics under the null hypothesis but dependent under the alternative, the proportion of enriched gene sets is increased without impacting the type I error rate. As shown using simulated and real gene expression data, the SGSF algorithm accurately filters gene sets unrelated to the experimental outcome resulting in significantly increased gene set testing power.
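A loose sketch of the filtering idea only, not the published SGSF algorithm: score each gene set by the association between a simple set summary and the top sample principal components, and filter out weakly associated sets before testing. All data and thresholds below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_samples, n_genes = 40, 500
X = rng.normal(size=(n_samples, n_genes))             # expression matrix
gene_sets = {f"set{i}": rng.choice(n_genes, 20, replace=False)
             for i in range(100)}                     # hypothetical sets

# Top principal components of the centered data.
Xc = X - X.mean(axis=0)
U, s, _ = np.linalg.svd(Xc, full_matrices=False)
pcs = U[:, :3] * s[:3]                                # first 3 sample PCs

filter_p = {}
for name, idx in gene_sets.items():
    profile = X[:, idx].mean(axis=1)                  # gene-set summary
    # Smallest correlation p-value across the retained PCs.
    filter_p[name] = min(stats.pearsonr(profile, pcs[:, k])[1]
                         for k in range(pcs.shape[1]))

# Keep only sets associated with the major expression axes.
kept = [name for name, p in filter_p.items() if p < 0.1]
print(f"{len(kept)} of {len(gene_sets)} gene sets pass the filter")
```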
[Experimental testing of Pflüger's reflex hypothesis of menstruation in late 19th century].
Simmer, H H
1980-07-01
Pflüger's hypothesis of a nerve reflex as the cause of menstruation, published in 1865 and accepted by many, nonetheless did not lead to experimental investigations for 25 years. According to this hypothesis, the nerve reflex starts in the ovary with an increase of intraovarian pressure by the growing follicles. In 1884 Adolph Kehrer proposed a program to test the nerve reflex, but only in 1890 did Cohnstein artificially increase the intraovarian pressure in women, by bimanual compression from the outside and the vagina. His results were not convincing. Six years later, Strassmann injected fluids into the ovaries of animals and obtained changes in the uterus resembling those of oestrus. His results seemed to verify a prediction derived from Pflüger's hypothesis. Thus, after a long interval, that hypothesis had become a paradigm. Though reasons can be given for the delay, it remains poorly understood why experimental testing started so late.
When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment
Szucs, Denes; Ioannidis, John P. A.
2017-01-01
Null hypothesis significance testing (NHST) has several shortcomings that are likely contributing factors behind the widely debated replication crisis of (cognitive) neuroscience, psychology, and biomedical science in general. We review these shortcomings and suggest that, after sustained negative experience, NHST should no longer be the default, dominant statistical practice of all biomedical and psychological research. If theoretical predictions are weak we should not rely on all or nothing hypothesis tests. Different inferential methods may be most suitable for different types of research questions. Whenever researchers use NHST they should justify its use, and publish pre-study power calculations and effect sizes, including negative findings. Hypothesis-testing studies should be pre-registered and optimally raw data published. The current statistics lite educational approach for students that has sustained the widespread, spurious use of NHST should be phased out. PMID:28824397
SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.
Chu, Annie; Cui, Jenny; Dinov, Ivo D
2009-03-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include models commonly used in undergraduate statistics courses, such as linear models (simple linear regression, multiple linear regression, one-way and two-way ANOVA). In addition, we implemented tests for sample comparisons: the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), in the hope of contributing to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most up-to-date information and newly added models.
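For readers working outside Java, the sample-comparison tests named above have direct SciPy equivalents; a brief illustration on simulated data (the group means are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a, b, c = (rng.normal(loc=m, size=30) for m in (0.0, 0.3, 0.6))

print("t-test:          ", stats.ttest_ind(a, b))
print("Wilcoxon rank sum:", stats.ranksums(a, b))
print("Kruskal-Wallis:  ", stats.kruskal(a, b, c))
print("One-way ANOVA:   ", stats.f_oneway(a, b, c))
```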
NASA Astrophysics Data System (ADS)
Verdaguer-Codina, Joan; Mirallas, Jaume A.
1996-12-01
The technique of execution of any movement in judo is extremely important. Coaches want tests and tools that are easy to use and inexpensive to evaluate the progress of a judoist on the tatami. In this paper we present a test developed by Mirallas and bearing his name, the 'Mirallas test', to evaluate the maximal power capacity of the judoist. Near-infrared spectroscopy (NIRS) signals were obtained to measure the metabolic work of the flexor carpi ulnaris and radialis muscles during execution of the ippon-seoi-nage movement, allowing maximal oxygen uptake to be assessed by NIRS. Tympanic, forehead skin, and biceps brachii temperatures were also recorded during the test and the recovery phase to study the effects of ambient conditions and post-exercise oxygen consumption. The deoxygenation and blood volume signals gave different results, supporting the coaches' hypothesis that some judoists execute the ippon-seoi-nage movement correctly and the rest do not. Heart rate in the group of judoists was between 190-207 bpm, and at minute five of recovery was 114-137 bpm; times on the Mirallas test ranged from 7 min 14 s to 13 min 49 s, and total movements ranged from 199 to 409. The forehead skin and biceps brachii skin data confirm previous work showing that oxygen consumption in the studied muscles remains elevated after exercise. According to the results, the test developed by Mirallas is a good tool for evaluating the performance of judoists at any time, giving better results compared with standard tests.
Testing fundamental ecological concepts with a Pythium-Prunus pathosystem
USDA-ARS?s Scientific Manuscript database
The study of plant-pathogen interactions has enabled tests of basic ecological concepts on plant community assembly (Janzen-Connell Hypothesis) and plant invasion (Enemy Release Hypothesis). We used a field experiment to (#1) test whether Pythium effects depended on host (seedling) density and/or d...
Statistics for Radiology Research.
Obuchowski, Nancy A; Subhas, Naveen; Polster, Joshua
2017-02-01
Biostatistics is an essential component in most original research studies in imaging. In this article we discuss five key statistical concepts for study design and analyses in modern imaging research: statistical hypothesis testing, particularly focusing on noninferiority studies; imaging outcomes especially when there is no reference standard; dealing with the multiplicity problem without spending all your study power; relevance of confidence intervals in reporting and interpreting study results; and finally tools for assessing quantitative imaging biomarkers. These concepts are presented first as examples of conversations between investigator and biostatistician, and then more detailed discussions of the statistical concepts follow. Three skeletal radiology examples are used to illustrate the concepts.
Biorobotics: using robots to emulate and investigate agile locomotion.
Ijspeert, Auke J
2014-10-10
The graceful and agile movements of animals are difficult to analyze and emulate because locomotion is the result of a complex interplay of many components: the central and peripheral nervous systems, the musculoskeletal system, and the environment. The goals of biorobotics are to take inspiration from biological principles to design robots that match the agility of animals, and to use robots as scientific tools to investigate animal adaptive behavior. Used as physical models, biorobots contribute to hypothesis testing in fields such as hydrodynamics, biomechanics, neuroscience, and prosthetics. Their use may contribute to the design of prosthetic devices that more closely take human locomotion principles into account.
A checklist to facilitate objective hypothesis testing in social psychology research.
Washburn, Anthony N; Morgan, G Scott; Skitka, Linda J
2015-01-01
Social psychology is not a very politically diverse area of inquiry, something that could negatively affect the objectivity of social psychological theory and research, as Duarte et al. argue in the target article. This commentary offers a number of checks to help researchers uncover possible biases and identify when they are engaging in hypothesis confirmation and advocacy instead of hypothesis testing.
Nan Liu; Hai Ren; Sufen Yuan; Qinfeng Guo; Long Yang
2013-01-01
The relative importance of facilitation and competition between pairwise plants across abiotic stress gradients as predicted by the stress-gradient hypothesis has been confirmed in arid and temperate ecosystems, but the hypothesis has rarely been tested in tropical systems, particularly across nutrient gradients. The current research examines the interactions between a...
Phase II Clinical Trials: D-methionine to Reduce Noise-Induced Hearing Loss
2012-03-01
... noise-induced hearing loss (NIHL) and tinnitus in our troops. Hypotheses: Primary hypothesis: administration of oral D-methionine prior to and during weapons ... reduce or prevent noise-induced tinnitus. Primary outcome to test the primary hypothesis: pure-tone air-conduction thresholds. Primary outcome to test the secondary hypothesis: tinnitus questionnaires. Specific aims: 1. To determine whether administering oral D-methionine (D-met) can ...
Viewing Objects and Planning Actions: On the Potentiation of Grasping Behaviours by Visual Objects
ERIC Educational Resources Information Center
Makris, Stergios; Hadar, Aviad A.; Yarrow, Kielan
2011-01-01
How do humans interact with tools? Gibson (1979) suggested that humans perceive directly what tools afford in terms of meaningful actions. This "affordances" hypothesis implies that visual objects can potentiate motor responses even in the absence of an intention to act. Here we explore the temporal evolution of motor plans afforded by common…
Explorations in Statistics: Hypothesis Tests and P Values
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2009-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of "Explorations in Statistics" delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what…
Planned Hypothesis Tests Are Not Necessarily Exempt from Multiplicity Adjustment
ERIC Educational Resources Information Center
Frane, Andrew V.
2015-01-01
Scientific research often involves testing more than one hypothesis at a time, which can inflate the probability that a Type I error (false discovery) will occur. To prevent this Type I error inflation, adjustments can be made to the testing procedure that compensate for the number of tests. Yet many researchers believe that such adjustments are…
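The Type I error inflation described above, and one standard adjustment, can be illustrated in a few lines; the Holm method below is one reasonable choice among several, and the data are simulated nulls.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(4)
m = 20  # hypotheses, all truly null
p_values = np.array([stats.ttest_ind(rng.normal(size=30),
                                     rng.normal(size=30)).pvalue
                     for _ in range(m)])

# Chance of at least one false discovery at alpha = .05 across 20
# independent tests without any adjustment:
print("family-wise error ~", 1 - 0.95**m)  # about 0.64

reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="holm")
print("raw rejections: ", (p_values < 0.05).sum())
print("Holm rejections:", reject.sum())
```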
ERIC Educational Resources Information Center
Malda, Maike; van de Vijver, Fons J. R.; Temane, Q. Michael
2010-01-01
In this study, cross-cultural differences in cognitive test scores are hypothesized to depend on a test's cultural complexity (Cultural Complexity Hypothesis: CCH), here conceptualized as its content familiarity, rather than on its cognitive complexity (Spearman's Hypothesis: SH). The content familiarity of tests assessing short-term memory,…
Is it better to select or to receive? Learning via active and passive hypothesis testing.
Markant, Douglas B; Gureckis, Todd M
2014-02-01
People can test hypotheses through either selection or reception. In a selection task, the learner actively chooses observations to test his or her beliefs, whereas in reception tasks data are passively encountered. People routinely use both forms of testing in everyday life, but the critical psychological differences between selection and reception learning remain poorly understood. One hypothesis is that selection learning improves learning performance by enhancing generic cognitive processes related to motivation, attention, and engagement. Alternatively, we suggest that differences between these 2 learning modes derive from a hypothesis-dependent sampling bias that is introduced when a person collects data to test his or her own individual hypothesis. Drawing on influential models of sequential hypothesis-testing behavior, we show that such a bias (a) can lead to the collection of data that facilitates learning compared with reception learning and (b) can be more effective than observing the selections of another person. We then report a novel experiment based on a popular category learning paradigm that compares reception and selection learning. We additionally compare selection learners to a set of "yoked" participants who viewed the exact same sequence of observations under reception conditions. The results revealed systematic differences in performance that depended on the learner's role in collecting information and the abstract structure of the problem.
Testing for purchasing power parity in 21 African countries using several unit root tests
NASA Astrophysics Data System (ADS)
Choji, Niri Martha; Sek, Siok Kun
2017-04-01
Purchasing power parity is used as a basis for international income and expenditure comparisons through exchange rate theory. However, empirical studies disagree on the validity of PPP. In this paper, we test the validity of PPP using a panel data approach. We apply seven different panel unit root tests to the purchasing power parity (PPP) hypothesis based on quarterly data on the real effective exchange rate for 21 African countries over the period 1971:Q1-2012:Q4. All seven tests rejected the hypothesis of stationarity, meaning that absolute PPP does not hold in those African countries. This result confirms the claim from previous studies that standard panel unit root tests fail to support the PPP hypothesis.
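Most of the named panel unit root tests are not packaged in mainstream Python libraries; as a hedged sketch, here are the country-by-country ADF regressions that such tests pool, combined Fisher-style (Maddala-Wu) on simulated random walks:

```python
import numpy as np
from scipy import stats
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(5)
countries = {f"country{i}": np.cumsum(rng.normal(size=168))  # random walks
             for i in range(21)}                             # 1971Q1-2012Q4

# One ADF p-value per country (H0: unit root in that country's series).
p_values = [adfuller(reer)[1] for reer in countries.values()]

# Fisher-type combination: -2 * sum(log p) ~ chi2 with 2N df under H0.
fisher_stat = -2 * np.log(p_values).sum()
p_combined = stats.chi2.sf(fisher_stat, df=2 * len(p_values))
print(f"combined p = {p_combined:.3f} (H0: unit root in every series)")
```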
Does Testing Increase Spontaneous Mediation in Learning Semantically Related Paired Associates?
ERIC Educational Resources Information Center
Cho, Kit W.; Neely, James H.; Brennan, Michael K.; Vitrano, Deana; Crocco, Stephanie
2017-01-01
Carpenter (2011) argued that the testing effect she observed for semantically related but associatively unrelated paired associates supports the mediator effectiveness hypothesis. This hypothesis asserts that after the cue-target pair "mother-child" is learned, relative to restudying mother-child, a review test in which…
Spagnolo, Daniel M; Al-Kofahi, Yousef; Zhu, Peihong; Lezon, Timothy R; Gough, Albert; Stern, Andrew M; Lee, Adrian V; Ginty, Fiona; Sarachan, Brion; Taylor, D Lansing; Chennubhotla, S Chakra
2017-11-01
We introduce THRIVE (Tumor Heterogeneity Research Interactive Visualization Environment), an open-source tool developed to assist cancer researchers in interactive hypothesis testing. The focus of this tool is to quantify spatial intratumoral heterogeneity (ITH), and the interactions between different cell phenotypes and noncellular constituents. Specifically, we foresee applications in phenotyping cells within tumor microenvironments, recognizing tumor boundaries, identifying degrees of immune infiltration and epithelial/stromal separation, and identification of heterotypic signaling networks underlying microdomains. The THRIVE platform provides an integrated workflow for analyzing whole-slide immunofluorescence images and tissue microarrays, including algorithms for segmentation, quantification, and heterogeneity analysis. THRIVE promotes flexible deployment, a maintainable code base using open-source libraries, and an extensible framework for customizing algorithms with ease. THRIVE was designed with highly multiplexed immunofluorescence images in mind, and, by providing a platform to efficiently analyze high-dimensional immunofluorescence signals, we hope to advance these data toward mainstream adoption in cancer research. Cancer Res; 77(21); e71-74. ©2017 AACR.
2012-01-01
Background: Lifestyle is one of the main determinants of people's health. It is essential to find the most effective prevention strategies that professionals can use to encourage behavioral change in their patients. Many theories are available that explain change or adherence to specific health behaviors. In this context, the approach known as Motivational Interviewing has gained increasing relevance. Few well-validated instruments are available for measuring doctors' communication skills, and more specifically Motivational Interviewing. Methods/Design: The hypothesis of this study is that the Scale for Measuring Motivational Interviewing Skills (EVEM questionnaire) is a valid and reliable instrument for measuring primary care professionals' skills in eliciting behavior change in patients. To test the hypothesis we have designed a prospective, observational, multi-center study to validate the measuring instrument. Scope: thirty-two primary care centers in Spain. Sampling and size: (a) face and consensual validity: a group of 15 experts in Motivational Interviewing; (b) assessment of the psychometric properties of the scale: 50 physician-patient encounters will be videoed, a total of 162 interviews will be conducted with six standardized patients, and another 200 interviews will be conducted with 50 real patients (n = 362). Four physicians will be specially trained to assess 30 randomly selected interviews to test the scale's reproducibility. Measurements to test the hypothesis: (a) face validity: development of a draft questionnaire based on a theoretical model, using Delphi-type methodology with experts; (b) scale psychometric properties: raters will evaluate the video-recorded interviews for content and scalability validity (exploratory factor analysis), internal consistency (Cronbach's alpha), intra-/inter-observer reliability (kappa index, intraclass correlation coefficient, Bland-Altman methodology), generalizability, construct validity and sensitivity to change (Pearson product-moment correlation coefficient). Discussion: Verifying the hypothesis that EVEM is a valid and reliable tool for assessing motivational interviewing would be a major breakthrough in current theoretical and practical knowledge, as the scale could be used to assess whether providers practice a patient-centered communication style, for either training or research purposes. Trial registration: Dislip-EM study NCT01282190 (ClinicalTrials.gov). PMID:23173902
Hatori, Tsuyoshi; Takemura, Kazuhisa; Fujii, Satoshi; Ideno, Takashi
2011-06-01
This paper presents a new model of category judgment. The model hypothesizes that, when more attention is focused on a category, the psychological range of the category gets narrower (category-focusing hypothesis). We explain this hypothesis by using the metaphor of a "mental-box" model: the more attention that is focused on a mental box (i.e., a category set), the smaller the size of the box becomes (i.e., a cardinal number of the category set). The hypothesis was tested in an experiment (N = 40), where the focus of attention on prescribed verbal categories was manipulated. The obtained data gave support to the hypothesis: category-focusing effects were found in three experimental tasks (regarding the category of "food", "height", and "income"). The validity of the hypothesis was discussed based on the results.
Strand-seq: a unifying tool for studies of chromosome segregation.
Falconer, Ester; Lansdorp, Peter M
2013-01-01
Non-random segregation of sister chromatids has been proposed to help specify daughter cell fate (the Silent Sister Hypothesis [1]) or to protect the genome of long-lived stem cells (the Immortal Strand Hypothesis [2]). The idea that sister chromatids are non-randomly segregated into specific daughter cells is only marginally supported by data from sporadic and often contradictory studies. As a result, the field has moved forward rather slowly. The advent of being able to directly label and differentiate sister chromatids in vivo using fluorescence in situ hybridization [3] was a significant advance for such studies. However, this approach is limited by the need for large tracts of unidirectional repeats on chromosomes and by its reliance on quantitative imaging of fluorescent probes and rigorous statistical analysis to discern between the two competing hypotheses. A novel method called Strand-seq, which uses next-generation sequencing to assay sister chromatid inheritance patterns independently for each chromosome [4], offers a comprehensive approach to testing for non-random segregation. In addition, Strand-seq enables studies of the deposition of chromatin marks in relation to DNA replication. This method is expected to help unify the field by testing previous claims of non-random segregation in an unbiased way in many model systems in vitro and in vivo.
Assessing and conceptualizing orgasm after a spinal cord injury.
Courtois, Frédérique; Charvier, Kathleen; Vézina, Jean-Guy; Journel, Nicolas Morel; Carrier, Serge; Jacquemin, Géraldine; Côté, Isabelle
2011-11-01
To provide a questionnaire for assessing the sensations characterizing orgasm. To test the hypothesis that orgasm is related to autonomic hyperreflexia (AHR) in individuals with a spinal cord injury (SCI). A total of 97 men with SCI, of whom 50 showed AHR at ejaculation and 39 showed no AHR, were compared. Ejaculation was obtained through natural stimulation, vibrostimulation or vibrostimulation combined with midodrine (5-25 mg). Cardiovascular measures were recorded before, at, and after each test. Responses to the questionnaire were divided into four categories: cardiovascular, muscular, autonomic and dysreflexic sensations. Significantly more sensations were described at ejaculation than with sexual stimulation alone. Men with SCI who experienced AHR at ejaculation reported significantly more cardiovascular, muscular, autonomic and dysreflexic responses than those who did not. There was no difference between men with complete and those with incomplete lesions. The findings show that the questionnaire is a useful tool to assess orgasm and to guide patients in identifying the bodily sensations that accompany or build up to orgasm. The findings also support the hypothesis that orgasm may be related to the presence of AHR in individuals with SCI. Data from able-bodied men also suggest that AHR could be related to orgasm, as increases in blood pressure are observed at ejaculation along with cardiovascular, autonomic and muscular sensations.
WH Craib: a critical account of his work
Naidoo, DP
2009-01-01
Summary: One hundred years after its introduction, the ECG remains the most commonly used cardiovascular laboratory procedure. It fulfils all the requirements of a diagnostic test: it is non-invasive, simple to record, highly reproducible and can be applied serially. It is the first laboratory test to be performed in a patient with chest pain, syncope or cardiac arrhythmias. It is also a prognostic tool that aids in risk stratification and clinical management. Among the many South Africans who have made remarkable contributions in the field of electrocardiography, Don Craib was the first to investigate the changing patterns of the ECG action potential in isolated skeletal muscle strips under varying conditions. It was during his time at Johns Hopkins Hospital in Baltimore and Sir Thomas Lewis's laboratory in London that Craib made singular observations about the fundamental origins of electrical signals in skeletal muscle, and from these developed his hypothesis on the generation of the action potential in the electrocardiogram. His proposals ran contrary to scientific opinion at the time and he was rebuffed by the scientific community. Frank Wilson subsequently developed Craib's doublet hypothesis into the dipole theory, acknowledging Craib's work. Today the dipole theory is fundamental to understanding the spread of electrical activation in the myocardium and the genesis of the action potential. PMID:19287808
Qi, Bing-Bing; Resnick, Barbara
2014-01-01
To assess the psychometric properties of the Chinese-version self-efficacy and outcome expectations for osteoporosis medication adherence (SEOMA-C and OEOMA-C) scales. The back-translated tools were assessed for internal consistency and R2 by structural equation modeling, confirmatory factor analyses, hypothesis testing, and criterion-related validity among 110 (81 female, 29 male) Mandarin-speaking immigrants (mean age = 63.44, SD = 9.63). Cronbach's alpha for the SEOMA-C and OEOMA-C is .904 and .937, respectively. The measurement model showed fair to good fit to the data. Previous bone mineral density (BMD) testing, calcaneus BMD, self-efficacy for exercise, and osteoporosis medication adherence were positively related to SEOMA-C scores. The scales show preliminary validity and reliability. Further refined and culturally sensitive items could be explored and added.
Reducing recurrence in child protective services: impact of a targeted safety protocol.
Fluke, J; Edwards, M; Bussey, M; Wells, S; Johnson, W
2001-08-01
Statewide implementation of a child safety assessment protocol by the Illinois Department of Children and Family Services (DCFS) in 1995 is assessed to determine its impact on near-term recurrence of child maltreatment. Literature on the use of risk and safety assessment as a decision-making tool supports the DCFS's approach, and the literature on the use of recurrence as a summative measure for evaluation is described. Survival analysis is used with an administrative data set of 400,000 children reported to DCFS between October 1994 and November 1997. An ex post facto design tests the hypothesis that the use of the protocol cannot be ruled out as an explanation for the observed decline in recurrence following implementation. Several alternative hypotheses are tested: changes in the use of protective custody, other concurrent changes in state policy, and the concurrent experience of other states. The protocol's impact in reducing recurrence was not ruled out.
The power and benefits of concept mapping: measuring use, usefulness, ease of use, and satisfaction
NASA Astrophysics Data System (ADS)
Freeman, Lee A.; Jessup, Leonard M.
2004-02-01
The power and benefits of concept mapping rest in four arenas: enabling shared understanding, the inclusion of affect, the balance of power, and client involvement. Concept mapping theory and research indicate concept maps (1) are appropriate tools to assist with communication, (2) are easy to use, and (3) are seen as beneficial by their users. An experiment was conducted to test these assertions and analyze the power and benefits of concept mapping using a typical business consulting scenario involving 16 groups of two individuals. The results were analyzed via empirical hypothesis testing and protocol analyses, and indicate an overall support of the theory and prior research and additional support of new measures of usefulness, ease of use, and satisfaction by both parties. A more thorough understanding of concept mapping is gained and available to future practitioners and researchers.
Debates—Hypothesis testing in hydrology: Introduction
NASA Astrophysics Data System (ADS)
Blöschl, Günter
2017-03-01
This paper introduces the papers in the "Debates—Hypothesis testing in hydrology" series. The four articles in the series discuss whether and how the process of testing hypotheses leads to progress in hydrology. Repeated experiments with controlled boundary conditions are rarely feasible in hydrology. Research is therefore not easily aligned with the classical scientific method of testing hypotheses. Hypotheses in hydrology are often enshrined in computer models which are tested against observed data. Testability may be limited due to model complexity and data uncertainty. All four articles suggest that hypothesis testing has contributed to progress in hydrology and is needed in the future. However, the procedure is usually not as systematic as the philosophy of science suggests. A greater emphasis on a creative reasoning process on the basis of clues and explorative analyses is therefore needed.
ERIC Educational Resources Information Center
White, Brian
2004-01-01
This paper presents a generally applicable method for characterizing subjects' hypothesis-testing behaviour based on a synthesis that extends on previous work. Beginning with a transcript of subjects' speech and videotape of their actions, a Reasoning Map is created that depicts the flow of their hypotheses, tests, predictions, results, and…
Why Is Test-Restudy Practice Beneficial for Memory? An Evaluation of the Mediator Shift Hypothesis
ERIC Educational Resources Information Center
Pyc, Mary A.; Rawson, Katherine A.
2012-01-01
Although the memorial benefits of testing are well established empirically, the mechanisms underlying this benefit are not well understood. The authors evaluated the mediator shift hypothesis, which states that test-restudy practice is beneficial for memory because retrieval failures during practice allow individuals to evaluate the effectiveness…
Bayesian Approaches to Imputation, Hypothesis Testing, and Parameter Estimation
ERIC Educational Resources Information Center
Ross, Steven J.; Mackey, Beth
2015-01-01
This chapter introduces three applications of Bayesian inference to common and novel issues in second language research. After a review of the critiques of conventional hypothesis testing, our focus centers on ways Bayesian inference can be used for dealing with missing data, for testing theory-driven substantive hypotheses without a default null…
Tennie, Claudio
2018-01-01
A subspecies of long-tailed macaques (Macaca fascicularis aurea; Mfa) has been reported to use stone tools and a specific technique to process nuts in Southeast Asia, a behaviour known as ‘pound-hammering’. The aim of this study was to examine the development of pound-hammering in long-tailed macaques: whether this behavioural form can be individually learnt or whether it has to rely on some forms of social learning. Given the absence of Mfa from captivity, long-tailed macaques of a highly related subspecies (Macaca fascicularis fascicularis; Mff) were experimentally tested by providing them with the ecological materials necessary to show pound-hammering. A baseline was first carried out to observe whether pound-hammering would emerge spontaneously without social information. As this was not the case, different degrees of social information, culminating in a full demonstration of the behaviour, were provided. None of the subjects (n = 31) showed pound-hammering in any of the individual or social learning conditions. Although these data do not support the hypothesis that individual learning underlies this behaviour, no evidence was found that (at least) Mff learn pound-hammering socially either. We propose that other—potentially interacting—factors may determine whether this behaviour emerges in the various subspecies of long-tailed macaques, and provide a novel methodology to test the role of social and individual learning in the development of animal tool-use. PMID:29892375
Neuroanatomy Predicts Individual Risk Attitudes
Gilaie-Dotan, Sharon; Tymula, Agnieszka; Cooper, Nicole; Kable, Joseph W.; Glimcher, Paul W.
2014-01-01
Over the course of the last decade a multitude of studies have investigated the relationship between neural activations and individual human decision-making. Here we asked whether the anatomical features of individual human brains could be used to predict the fundamental preferences of human choosers. To that end, we quantified the risk attitudes of human decision-makers using standard economic tools and quantified the gray matter cortical volume in all brain areas using standard neurobiological tools. Our whole-brain analysis revealed that the gray matter volume of a region in the right posterior parietal cortex was significantly predictive of individual risk attitudes. Participants with higher gray matter volume in this region exhibited less risk aversion. To test the robustness of this finding we examined a second group of participants and used econometric tools to test the ex ante hypothesis that gray matter volume in this area predicts individual risk attitudes. Our finding was confirmed in this second group. Our results, while being silent about causal relationships, identify what might be considered the first stable biomarker for financial risk-attitude. If these results, gathered in a population of midlife northeast American adults, hold in the general population, they will provide constraints on the possible neural mechanisms underlying risk attitudes. The results will also provide a simple measurement of risk attitudes that could be easily extracted from abundance of existing medical brain scans, and could potentially provide a characteristic distribution of these attitudes for policy makers. PMID:25209279
ERIC Educational Resources Information Center
E. N., Ekesionye; A. N., Okolo
2012-01-01
The objective of the study was to examine women empowerment and participation in economic activities as tools for self-reliance and development of the Nigerian society. Research questions and hypothesis were used to guide the study. Structured questionnaire was used as the major instrument for data collection. Copies of questionnaires were…
ERIC Educational Resources Information Center
Yoon, Susan A.
2011-01-01
This study extends previous research that explores how visualization affordances that computational tools provide and social network analyses that account for individual- and group-level dynamic processes can work in conjunction to improve learning outcomes. The study's main hypothesis is that when social network graphs are used in instruction,…
Mayo, Ruth; Alfasi, Dana; Schwarz, Norbert
2014-06-01
Feelings of distrust alert people not to take information at face value, which may influence their reasoning strategy. Using the Wason (1960) rule identification task, we tested whether chronic and temporary distrust increase the use of negative hypothesis testing strategies suited to falsifying one's own initial hunch. In Study 1, participants who were low in dispositional trust were more likely to engage in negative hypothesis testing than participants high in dispositional trust. In Study 2, trust and distrust were induced through an alleged person-memory task. Paralleling the effects of chronic distrust, participants exposed to a single distrust-eliciting face were three times as likely to engage in negative hypothesis testing as participants exposed to a trust-eliciting face. In both studies, distrust increased negative hypothesis testing, which was associated with better performance on the Wason task. In contrast, participants' initial rule generation was not consistently affected by distrust. These findings provide the first evidence that distrust can influence which reasoning strategy people adopt.
Di Febbraro, Mirko; Lurz, Peter W. W.; Genovesi, Piero; Maiorano, Luigi; Girardello, Marco; Bertolino, Sandro
2013-01-01
Species introduction represents one of the most serious threats to biodiversity. The realized climatic niche of an invasive species can be used to predict its potential distribution in new areas, providing a basis for screening procedures in the compilation of black and white lists to prevent new introductions. We tested this assertion by modeling the realized climatic niche of the Eastern grey squirrel Sciurus carolinensis. Maxent was used to develop three models: one considering only records from the native range (NRM), a second including records from the native and invasive ranges (NIRM), and a third calibrated with invasive occurrences and projected onto the native range (RCM). Niche conservatism was tested with both a niche equivalency and a niche similarity test. NRM failed to predict suitable parts of the currently invaded range in Europe, while RCM underestimated suitability in the native range. NIRM accurately predicted both the native and invasive ranges. The niche equivalency hypothesis was rejected due to a significant difference between the grey squirrel's niches in the native and invasive ranges. The niche similarity test yielded no significant results. Our analyses support the hypothesis of a shift in the species' climatic niche in the areas of introduction. Species distribution models (SDMs) appear to be a useful tool in the compilation of black lists, allowing the identification of areas vulnerable to invasion. We advise caution in the use of SDMs based only on the native range of a species for the compilation of white lists for other geographic areas, due to the significant risk of underestimating its potential invasive range. PMID:23843957
Two hypotheses on the causes of male homosexuality and paedophilia.
James, William H
2006-11-01
This note considers two hypotheses on the causes of homosexuality and paedophilia in men, viz. the hypotheses of maternal immunity and of postnatal learning. According to the maternal immune hypothesis, there is progressive immunization of some mothers to male-specific antigens by each succeeding male fetus, and there are concomitantly increasing effects of anti-male antibodies on the sexual differentiation of the brain in each succeeding male fetus. An attempt is made to assess the status of this hypothesis within immunology. Knowledge of the properties of anti-male antibodies is meagre and there has been little direct experimentation on them, let alone on their effects on the developing male fetal brain. Moreover until the relevant antigens are identified, it will not be possible to test mothers of male homosexuals or paedophiles for the presence of such antibodies. Yet until this experimentation has been done, it would seem premature to regard the hypothesis as more than a very provisional explanatory tool. The evidence in relation to the postnatal learning hypothesis is quite different. There is an abundance of data suggesting that male homosexuals and paedophiles report having experienced more sexual abuse (however defined) in childhood (CSA) than do heterosexual controls. The question revolves round the interpretation of these data. Many (though not all) of these studies are correlational and thus subject to the usual qualifications concerning such data. However, there are grounds for supposing that some of the reports are veridical, and there is support from a longitudinal study reporting a small but significant increase in paedophilia in adulthood following CSA. To summarize: most boys who experience CSA do not later develop into homosexuals or paedophiles. However, the available evidence suggests that a few do so as a result of the abuse.
In Defense of the Play-Creativity Hypothesis
ERIC Educational Resources Information Center
Silverman, Irwin W.
2016-01-01
The hypothesis that pretend play facilitates the creative thought process in children has received a great deal of attention. In a literature review, Lillard et al. (2013, p. 8) concluded that the evidence for this hypothesis was "not convincing." This article focuses on experimental and training studies that have tested this hypothesis.…
The frequentist implications of optional stopping on Bayesian hypothesis tests.
Sanborn, Adam N; Hills, Thomas T
2014-04-01
Null hypothesis significance testing (NHST) is the most commonly used statistical methodology in psychology. The probability of achieving a value as extreme or more extreme than the statistic obtained from the data is evaluated, and if it is low enough, the null hypothesis is rejected. However, because common experimental practice often clashes with the assumptions underlying NHST, these calculated probabilities are often incorrect. Most commonly, experimenters use tests that assume that sample sizes are fixed in advance of data collection but then use the data to determine when to stop; in the limit, experimenters can use data monitoring to guarantee that the null hypothesis will be rejected. Bayesian hypothesis testing (BHT) provides a solution to these ills because the stopping rule used is irrelevant to the calculation of a Bayes factor. In addition, there are strong mathematical guarantees on the frequentist properties of BHT that are comforting for researchers concerned that stopping rules could influence the Bayes factors produced. Here, we show that these guaranteed bounds have limited scope and often do not apply in psychological research. Specifically, we quantitatively demonstrate the impact of optional stopping on the resulting Bayes factors in two common situations: (1) when the truth is a combination of the hypotheses, such as in a heterogeneous population, and (2) when a hypothesis is composite (taking multiple parameter values), such as the alternative hypothesis in a t-test. We found that, for these situations, while the Bayesian interpretation remains correct regardless of the stopping rule used, the choice of stopping rule can, in some situations, greatly increase the chance of experimenters finding evidence in the direction they desire. We suggest ways to control these frequentist implications of stopping rules on BHT.
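A minimal version of the kind of simulation the abstract describes, assuming a simple point alternative (H1: mu = 0.5 vs H0: mu = 0, sigma = 1) rather than the composite hypotheses studied in the paper; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
delta, gamma, max_n, n_sims = 0.5, 10.0, 500, 2000

def stops_early(true_mu):
    """Collect data one point at a time; stop once BF10 exceeds gamma."""
    s = 0.0
    for n in range(1, max_n + 1):
        s += rng.normal(loc=true_mu)
        log_bf = delta * s - n * delta**2 / 2  # running log BF10, sigma = 1
        if log_bf > np.log(gamma):
            return True
    return False

for mu, label in [(0.0, "point null exactly true"),
                  (0.25, "truth between the hypotheses")]:
    rate = np.mean([stops_early(mu) for _ in range(n_sims)])
    print(f"{label}: P(stop with BF10 > {gamma:.0f}) = {rate:.3f}")
```

Under the exact point null the stopping rate respects the familiar 1/gamma bound, but when the truth lies between the hypotheses the rate climbs well above it, which is the abstract's central point.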
Royston, Patrick; Parmar, Mahesh K B
2014-08-07
Most randomized controlled trials with a time-to-event outcome are designed and analysed under the proportional hazards assumption, with a target hazard ratio for the treatment effect in mind. However, the hazards may be non-proportional. We address how to design a trial under such conditions, and how to analyse the results. We propose to extend the usual approach, a logrank test, to also include the Grambsch-Therneau test of proportional hazards. We test the resulting composite null hypothesis using a joint test for the hazard ratio and for time-dependent behaviour of the hazard ratio. We compute the power and sample size for the logrank test under proportional hazards, and from that we compute the power of the joint test. For the estimation of relevant quantities from the trial data, various models could be used; we advocate adopting a pre-specified flexible parametric survival model that supports time-dependent behaviour of the hazard ratio. We present the mathematics for calculating the power and sample size for the joint test. We illustrate the methodology in real data from two randomized trials, one in ovarian cancer and the other in treating cellulitis. We show selected estimates and their uncertainty derived from the advocated flexible parametric model. We demonstrate in a small simulation study that when a treatment effect either increases or decreases over time, the joint test can outperform the logrank test in the presence of both patterns of non-proportional hazards. Those designing and analysing trials in the era of non-proportional hazards need to acknowledge that a more complex type of treatment effect is becoming more common. Our method for the design of the trial retains the tools familiar in the standard methodology based on the logrank test, and extends it to incorporate a joint test of the null hypothesis with power against non-proportional hazards. For the analysis of trial data, we propose the use of a pre-specified flexible parametric model that can represent a time-dependent hazard ratio if one is present.
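A hedged sketch of the joint-test idea using the lifelines and scipy libraries on simulated data: a logrank test of the treatment effect plus the Grambsch-Therneau proportional hazards test, with the two chi-square statistics summed (df = 2) on the simplifying assumption that they are independent under the null. This mirrors the paper's construction only loosely.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test, proportional_hazard_test
from scipy import stats

rng = np.random.default_rng(7)
n = 200
arm = rng.integers(0, 2, size=n)
time = rng.exponential(scale=np.where(arm == 1, 12.0, 8.0))  # treatment helps
event = (rng.random(n) < 0.8).astype(int)                    # ~20% censored
df = pd.DataFrame({"time": time, "event": event, "arm": arm})

# Component 1: logrank test of the treatment effect.
lr = logrank_test(df.time[df.arm == 0], df.time[df.arm == 1],
                  event_observed_A=df.event[df.arm == 0],
                  event_observed_B=df.event[df.arm == 1])

# Component 2: Grambsch-Therneau test of proportional hazards.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
gt = proportional_hazard_test(cph, df, time_transform="rank")

# Joint chi-square (independence of the components is assumed here).
chi2_joint = lr.test_statistic + gt.summary["test_statistic"].iloc[0]
p_joint = stats.chi2.sf(chi2_joint, df=2)
print(f"joint chi2 = {chi2_joint:.2f}, p = {p_joint:.4f}")
```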
TRANSGENIC MOUSE MODELS AND PARTICULATE MATTER (PM)
The hypothesis to be tested is that metal catalyzed oxidative stress can contribute to the biological effects of particulate matter. We acquired several transgenic mouse strains to test this hypothesis. Breeding of the mice was accomplished by Duke University. Particles employed ...
Hypothesis Testing Using the Films of the Three Stooges
ERIC Educational Resources Information Center
Gardner, Robert; Davidson, Robert
2010-01-01
The use of The Three Stooges' films as a source of data in an introductory statistics class is described. The Stooges' films are separated into three populations. Using these populations, students may conduct hypothesis tests with data they collect.
Hovick, Stephen M; Whitney, Kenneth D
2014-01-01
The hypothesis that interspecific hybridisation promotes invasiveness has received much recent attention, but tests of the hypothesis can suffer from important limitations. Here, we provide the first systematic review of studies experimentally testing the hybridisation-invasion (H-I) hypothesis in plants, animals and fungi. We identified 72 hybrid systems for which hybridisation has been putatively associated with invasiveness, weediness or range expansion. Within this group, 15 systems (comprising 34 studies) experimentally tested performance of hybrids vs. their parental species and met our other criteria. Both phylogenetic and non-phylogenetic meta-analyses demonstrated that wild hybrids were significantly more fecund and larger than their parental taxa, but did not differ in survival. Resynthesised hybrids (which typically represent earlier generations than do wild hybrids) did not consistently differ from parental species in fecundity, survival or size. Using meta-regression, we found that fecundity increased (but survival decreased) with generation in resynthesised hybrids, suggesting that natural selection can play an important role in shaping hybrid performance – and thus invasiveness – over time. We conclude that the available evidence supports the H-I hypothesis, with the caveat that our results are clearly driven by tests in plants, which are more numerous than tests in animals and fungi. PMID:25234578
The Harm Done to Reproducibility by the Culture of Null Hypothesis Significance Testing.
Lash, Timothy L
2017-09-15
In the last few years, stakeholders in the scientific community have raised alarms about a perceived lack of reproducibility of scientific results. In reaction, guidelines for journals have been promulgated and grant applicants have been asked to address the rigor and reproducibility of their proposed projects. Neither solution addresses a primary culprit, which is the culture of null hypothesis significance testing that dominates statistical analysis and inference. In an innovative research enterprise, selection of results for further evaluation based on null hypothesis significance testing is doomed to yield a low proportion of reproducible results and a high proportion of effects that are initially overestimated. In addition, the culture of null hypothesis significance testing discourages quantitative adjustments to account for systematic errors and quantitative incorporation of prior information. These strategies would otherwise improve reproducibility and have not been previously proposed in the widely cited literature on this topic. Without discarding the culture of null hypothesis significance testing and implementing these alternative methods for statistical analysis and inference, all other strategies for improving reproducibility will yield marginal gains at best.
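The overestimation the author describes is straightforward to reproduce: in an underpowered design, conditioning on p < .05 inflates the surviving effect estimates. A small simulation with illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(8)
true_effect, n, sims = 0.2, 25, 20000

# Effect estimates from many identical studies (standard error 1/sqrt(n)).
est = rng.normal(loc=true_effect, scale=1 / np.sqrt(n), size=sims)
z = est * np.sqrt(n)
significant = np.abs(z) > 1.96

print("mean estimate, all studies:     ", est.mean().round(3))
print("mean estimate, significant only:", est[significant].mean().round(3))
print("proportion significant (power): ", significant.mean().round(3))
```

With the numbers above, the studies that cross the significance threshold report an effect roughly twice the true one, while the unconditional average is unbiased.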
KinView: A visual comparative sequence analysis tool for integrated kinome research
McSkimming, Daniel Ian; Dastgheib, Shima; Baffi, Timothy R.; Byrne, Dominic P.; Ferries, Samantha; Scott, Steven Thomas; Newton, Alexandra C.; Eyers, Claire E.; Kochut, Krzysztof J.; Eyers, Patrick A.
2017-01-01
Multiple sequence alignments (MSAs) are a fundamental analysis tool used throughout biology to investigate relationships between protein sequence, structure, function, evolutionary history, and patterns of disease-associated variants. However, their widespread application in systems biology research is currently hindered by the lack of user-friendly tools to simultaneously visualize, manipulate and query the information conceptualized in large sequence alignments, and the challenges in integrating MSAs with multiple orthogonal data such as cancer variants and post-translational modifications, which are often stored in heterogeneous data sources and formats. Here, we present the Multiple Sequence Alignment Ontology (MSAOnt), which represents a profile or consensus alignment in an ontological format. Subsets of the alignment are easily selected through the SPARQL Protocol and RDF Query Language for downstream statistical analysis or visualization. We have also created the Kinome Viewer (KinView), an interactive integrative visualization that places eukaryotic protein kinase cancer variants in the context of natural sequence variation and experimentally determined post-translational modifications, which play central roles in the regulation of cellular signaling pathways. Using KinView, we identified differential phosphorylation patterns between tyrosine and serine/threonine kinases in the activation segment, a major kinase regulatory region that is often mutated in proliferative diseases. We discuss cancer variants that disrupt phosphorylation sites in the activation segment, and show how KinView can be used as a comparative tool to identify differences and similarities in natural variation, cancer variants and post-translational modifications between kinase groups, families and subfamilies. Based on KinView comparisons, we identify and experimentally characterize a regulatory tyrosine (Y177 in PLK4) in the PLK4 C-terminal activation segment region termed the P+1 loop. To further demonstrate the application of KinView in hypothesis generation and testing, we formulate and validate a hypothesis explaining a novel predicted loss-of-function variant (D523N in PKCβ) in the regulatory spine of PKCβ, a recently identified tumor suppressor kinase. KinView provides a novel, extensible interface for performing comparative analyses between subsets of kinases and for integrating multiple types of residue-specific annotations in user-friendly formats. PMID:27731453
The Impact of Economic Factors and Acquisition Reforms on the Cost of Defense Weapon Systems
2006-03-01
To test for homoskedasticity, the Breusch-Pagan test is employed. The null hypothesis of the Breusch-Pagan test is that the variance is constant ... Using the Breusch-Pagan test shown in Table 19 below, Prob > chi2 is greater than α = .05, therefore we fail to reject the null hypothesis. [Table 19: Breusch-Pagan test (H0: constant variance) for the overrunpercentfp100 model, reporting estimated variance and standard deviation of overrunpercent100.]
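The test itself is available in statsmodels; a short sketch on simulated data (variable names are placeholders, not those of the source analysis):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(9)
n = 120
X = sm.add_constant(rng.normal(size=(n, 2)))      # predictors + intercept
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=n)

resid = sm.OLS(y, X).fit().resid
lm_stat, lm_pvalue, _, _ = het_breuschpagan(resid, X)

# H0: homoskedastic errors. Fail to reject when lm_pvalue > alpha = .05.
print(f"Breusch-Pagan LM = {lm_stat:.2f}, Prob > chi2 = {lm_pvalue:.3f}")
```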
Learning from graphically integrated 2D and 3D representations improves retention of neuroanatomy
NASA Astrophysics Data System (ADS)
Naaz, Farah
Visualizations in the form of computer-based learning environments are highly encouraged in science education, especially for teaching spatial material. Some spatial material, such as sectional neuroanatomy, is very challenging to learn. It involves learning the two dimensional (2D) representations that are sampled from the three dimensional (3D) object. In this study, a computer-based learning environment was used to explore the hypothesis that learning sectional neuroanatomy from a graphically integrated 2D and 3D representation will lead to better learning outcomes than learning from a sequential presentation. The integrated representation explicitly demonstrates the 2D-3D transformation and should lead to effective learning. This study was conducted using a computer graphical model of the human brain. There were two learning groups:
ERIC Educational Resources Information Center
Tryon, Warren W.; Lewis, Charles
2008-01-01
Evidence of group matching frequently takes the form of a nonsignificant test of statistical difference. Theoretical hypotheses of no difference are also tested in this way. These practices are flawed in that null hypothesis statistical testing provides evidence against the null hypothesis and failing to reject H0 is not evidence…
Effects of Item Exposure for Conventional Examinations in a Continuous Testing Environment.
ERIC Educational Resources Information Center
Hertz, Norman R.; Chinn, Roberta N.
This study explored the effect of item exposure on two conventional examinations administered as computer-based tests. A principal hypothesis was that item exposure would have little or no effect on average difficulty of the items over the course of an administrative cycle. This hypothesis was tested by exploring conventional item statistics and…
ERIC Educational Resources Information Center
McNeil, Keith
The use of directional and nondirectional hypothesis testing was examined from the perspectives of textbooks, journal articles, and members of editorial boards. Three widely used statistical texts were reviewed in terms of how directional and nondirectional tests of significance were presented. Texts reviewed were written by: (1) D. E. Hinkle, W.…
The Feminization of School Hypothesis Called into Question among Junior and High School Students
ERIC Educational Resources Information Center
Verniers, Catherine; Martinot, Delphine; Dompnier, Benoît
2016-01-01
Background: The feminization of school hypothesis suggests that boys underachieve in school compared to girls because school rewards feminine characteristics that are at odds with boys' masculine features. Aims: The feminization of school hypothesis lacks empirical evidence. The aim of this study was to test this hypothesis by examining the extent…
Supporting shared hypothesis testing in the biomedical domain.
Agibetov, Asan; Jiménez-Ruiz, Ernesto; Ondrésik, Marta; Solimando, Alessandro; Banerjee, Imon; Guerrini, Giovanna; Catalano, Chiara E; Oliveira, Joaquim M; Patanè, Giuseppe; Reis, Rui L; Spagnuolo, Michela
2018-02-08
The pathogenesis of inflammatory diseases can be tracked by studying the causality relationships among the factors contributing to their development. We could, for instance, hypothesize on the connections of the pathogenesis outcomes to the observed conditions. To prove such causal hypotheses, we would need a full understanding of the causal relationships, and we would have to provide all the necessary evidence to support our claims. In practice, however, we might not possess all the background knowledge on the causality relationships, and we might be unable to collect all the evidence to prove our hypotheses. In this work we propose a methodology for translating biological knowledge on causality relationships of biological processes, and their effects on conditions, into a computational framework for hypothesis testing. The methodology consists of two main points: hypothesis graph construction from the formalization of the background knowledge on causality relationships, and confidence measurement in a causality hypothesis as a normalized weighted path computation in the hypothesis graph. In this framework, we can simulate the collection of evidence and assess confidence in a causality hypothesis by measuring it proportionally to the amount of available knowledge and collected evidence. We evaluate our methodology on a hypothesis graph that represents both the contributing factors which may cause cartilage degradation and the factors which might be caused by cartilage degradation during osteoarthritis. Hypothesis graph construction has proven to be robust to the addition of potentially contradictory information on simultaneously positive and negative effects. The obtained confidence measures for the specific causality hypotheses have been validated by our domain experts and correspond closely to their subjective assessments of confidence in the investigated hypotheses. Overall, our methodology for a shared hypothesis testing framework exhibits important properties that researchers will find useful in literature review for their experimental studies, in planning and prioritizing evidence collection procedures, and in testing their hypotheses with different depths of knowledge on causal dependencies of biological processes and their effects on the observed conditions.
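As a rough sketch of the core idea, the following code scores a causality hypothesis as a normalized weighted path in a small hypothesis graph. The graph fragment, the evidence weights, and the average-weight normalization are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch: confidence in a causal hypothesis as a normalized weighted
# path. Edge weights stand for collected evidence strength in [0, 1].
import networkx as nx

G = nx.DiGraph()
# Hypothetical fragment: factors -> cartilage degradation -> observed condition
G.add_edge("inflammation", "cartilage_degradation", evidence=0.8)
G.add_edge("mechanical_stress", "cartilage_degradation", evidence=0.4)
G.add_edge("cartilage_degradation", "joint_space_narrowing", evidence=0.9)

def hypothesis_confidence(graph, cause, effect):
    """Average evidence weight along the strongest causal path, in [0, 1]."""
    best = 0.0
    for path in nx.all_simple_paths(graph, cause, effect):
        weights = [graph[u][v]["evidence"] for u, v in zip(path, path[1:])]
        best = max(best, sum(weights) / len(weights))  # normalize by length
    return best

print(hypothesis_confidence(G, "inflammation", "joint_space_narrowing"))  # 0.85
```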
Nutrient availability affects pigment production but not growth in lichens of biological soil crusts
Bowker, M.A.; Koch, G.W.; Belnap, J.; Johnson, N.C.
2008-01-01
Recent research suggests that micronutrients such as Mn may limit growth of slow-growing biological soil crusts (BSCs) in some of the drylands of the world. These soil surface communities contribute strongly to arid ecosystem function and are easily degraded, creating a need for new restoration tools. The possibility that Mn fertilization could be used as a restoration tool for BSCs has not been tested previously. We used microcosms in a controlled greenhouse setting to investigate the hypothesis that Mn may limit photosynthesis and consequently growth in Collema tenax, a dominant N-fixing lichen found in BSCs worldwide. We found no evidence to support our hypothesis; furthermore, addition of other nutrients (primarily P, K, and Zn) had a suppressive effect on gross photosynthesis (P = 0.05). We also monitored the growth and physiological status of our microcosms and found that other nutrients increased the production of scytonemin, an important sunscreen pigment, but only when not added with Mn (P = 0.01). A structural equation model indicated that this effect was independent of any photosynthesis-related variable. We propose two alternative hypotheses to account for this pattern: (1) Mn suppresses processes needed to produce scytonemin; and (2) Mn is required to suppress scytonemin production at low light, when it is an unnecessary photosynthate sink. Although Mn fertilization does not appear likely to increase photosynthesis or growth of Collema, it could have a role in survivorship during environmentally stressful periods due to modification of scytonemin production. Thus, Mn enrichment should be studied further for its potential to facilitate BSC rehabilitation. © 2008 Elsevier Ltd.
The limits to pride: A test of the pro-anorexia hypothesis.
Cornelius, Talea; Blanton, Hart
2016-01-01
Many social psychological models propose that positive self-conceptions promote self-esteem. An extreme version of this hypothesis is advanced in "pro-anorexia" communities: identifying with anorexia, in conjunction with disordered eating, can lead to higher self-esteem. The current study empirically tested this hypothesis. Results challenge the pro-anorexia hypothesis. Although those with higher levels of pro-anorexia identification trended towards higher self-esteem with increased disordered eating, this did not overcome the strong negative main effect of pro-anorexia identification. These data suggest a more effective strategy for promoting self-esteem is to encourage rejection of disordered eating and an anorexic identity.
Does the Slow-Growth, High-Mortality Hypothesis Apply Below Ground?
Hourston, James E; Bennett, Alison E; Johnson, Scott N; Gange, Alan C
2016-01-01
Belowground tri-trophic study systems present a challenging environment in which to study plant-herbivore-natural enemy interactions. For this reason, belowground examples are rarely available for testing general ecological theories. To redress this imbalance, we present, for the first time, data on a belowground tri-trophic system to test the slow-growth, high-mortality hypothesis. We investigated whether the differing performance of entomopathogenic nematodes (EPNs) in controlling the common pest black vine weevil Otiorhynchus sulcatus could be linked to differently resistant cultivars of the red raspberry Rubus idaeus. The O. sulcatus larvae recovered from R. idaeus plants showed significantly slower growth and higher mortality on the Glen Rosa cultivar, relative to the more commercially favored Glen Ample cultivar, creating a convenient system for testing this hypothesis. Heterorhabditis megidis was found to be less effective at controlling O. sulcatus than Steinernema kraussei, but conformed to the hypothesis. However, S. kraussei maintained high levels of O. sulcatus mortality regardless of how larval growth was influenced by R. idaeus cultivar. We link this to direct effects that S. kraussei had on reducing O. sulcatus larval mass, indicating potential sub-lethal effects of S. kraussei, which the slow-growth, high-mortality hypothesis does not account for. Possible origins of these sub-lethal effects of EPN infection, and how they may bear on a hypothesis designed and tested with aboveground predator and parasitoid systems, are discussed.
A critique of statistical hypothesis testing in clinical research
Raha, Somik
2011-01-01
Many have documented the difficulty of using the current paradigm of Randomized Controlled Trials (RCTs) to test and validate the effectiveness of alternative medical systems such as Ayurveda. This paper critiques the applicability of RCTs for all clinical knowledge-seeking endeavors, of which Ayurveda research is a part. This is done by examining statistical hypothesis testing, the underlying foundation of RCTs, from a practical and philosophical perspective. In the philosophical critique, the two main worldviews of probability are that of the Bayesian and the frequentist. The frequentist worldview is a special case of the Bayesian worldview, requiring the unrealistic assumptions of knowing nothing about the universe and believing that all observations are unrelated to each other. Many have claimed that the first belief is necessary for science, and this claim is debunked by comparing variations in learning with different prior beliefs. Moving beyond the Bayesian and frequentist worldviews, the notion of hypothesis testing itself is challenged on the grounds that a hypothesis is an unclear distinction, and assigning a probability to an unclear distinction is an exercise that does not lead to clarity of action. This critique is of the theory itself and not of any particular application of statistical hypothesis testing. A decision-making frame is proposed as a way of both addressing this critique and transcending ideological debates on probability. An example of a Bayesian decision-making approach is shown as an alternative to statistical hypothesis testing, utilizing data from a past clinical trial that studied the effect of Aspirin on heart attacks in a sample population of doctors. Because a major reason for the prevalence of RCTs in academia is legislation requiring their use, the ethics of legislating the use of statistical methods for clinical research is also examined. PMID:22022152
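To make the contrast between prior beliefs concrete, here is a toy beta-binomial updating example in the spirit of the Bayesian approach described above; the event counts and prior parameters are invented for illustration and are not the aspirin trial's data.

```python
# Toy sketch: how different priors update under the same evidence.
from scipy import stats

events, n = 5, 1000          # hypothetical event count among n subjects

flat_prior = (1, 1)          # Beta(1,1): the "knowing nothing" stance
informed_prior = (10, 990)   # prior belief centred near a 1% event rate

for a, b in (flat_prior, informed_prior):
    posterior = stats.beta(a + events, b + n - events)
    print(f"prior Beta({a},{b}): posterior mean = {posterior.mean():.4f}")
# Both posteriors converge as data accumulate, but with modest samples the
# prior visibly shapes the inference -- the point of the Bayesian critique.
```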
Music-therapy analyzed through conceptual mapping
NASA Astrophysics Data System (ADS)
Martinez, Rodolfo; de la Fuente, Rebeca
2002-11-01
Conceptual maps have been employed lately as a learning tool, as a modern study technique, and as a new way to understand intelligence, which allows for the development of a strong theoretical reference, in order to prove the research hypothesis. This paper presents a music-therapy analysis based on this tool to produce a conceptual mapping network, which ranges from magic through the rigor of the hard sciences.
ERIC Educational Resources Information Center
Van Aalsvoort, Joke
2004-01-01
In a previous article, the problem of chemistry's lack of relevance in secondary chemical education was analysed using logical positivism as a tool. This article starts with the hypothesis that the problem can be addressed by means of activity theory, one of the important theories within the sociocultural school. The reason for this expectation is…
NASA Astrophysics Data System (ADS)
Gutowska, Dorota; Piskozub, Jacek
2017-04-01
There is a vast body of literature on climate indices and the processes they represent. A large part of it deals with "teleconnections", or causal relations between them. Until recently, however, time-lagged correlations were the best available tool for studying causation, and no correlation (even a lagged one) proves causation. We use a recently developed method of studying causal relations between short time series, Convergent Cross Mapping (CCM), to search for causation between the atmospheric (AO and NAO) and oceanic (AMO) indices. The version we have chosen (available as an R language package, rEDM) allows for comparing time series with time lags. This work builds on a previous one, which showed with time-lagged correlations that AO/NAO precedes AMO by about 15 years and at the same time is preceded by AMO (but with an inverted sign), also by the same amount of time. This behaviour is identical to the relationship of a sine and cosine with the same period. This may suggest that the multidecadal oscillatory parts of the atmospheric and oceanic indices represent the same global-scale set of processes. In other words, they may be symptoms of the same oscillation. The aim of the present study is to test this hypothesis with a tool created specially for discovering causal relationships in dynamic systems.
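A compact sketch of the cross-mapping computation behind CCM is given below (the study itself used the R package rEDM). The embedding parameters are illustrative; neighbour weighting follows the standard simplex scheme, and the causality criterion, not shown, is cross-map skill rising and converging as the library size grows.

```python
# Minimal CCM sketch: predict y from the shadow manifold of x.
import numpy as np

def shadow_manifold(x, E, tau=1):
    """Time-delay embedding of series x with dimension E and lag tau."""
    n = len(x) - (E - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(E)])

def cross_map_skill(x, y, E=3, tau=1):
    """Correlation between y and its prediction from x's shadow manifold."""
    Mx = shadow_manifold(np.asarray(x, float), E, tau)
    ya = np.asarray(y, float)[(E - 1) * tau :]
    preds = np.empty(len(Mx))
    for t in range(len(Mx)):
        d = np.linalg.norm(Mx - Mx[t], axis=1)
        d[t] = np.inf                        # exclude the point itself
        nn = np.argsort(d)[: E + 1]          # E+1 nearest neighbours
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
        w /= w.sum()
        preds[t] = w @ ya[nn]
    return np.corrcoef(preds, ya)[0, 1]

# Usage: compute skill over growing library sizes, e.g. for AMO vs. NAO;
# skill(x[:L], y[:L]) increasing in L indicates y leaves a causal imprint on x.
```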
Data-Driven Significance Estimation for Precise Spike Correlation
Grün, Sonja
2009-01-01
The mechanisms underlying neuronal coding and, in particular, the role of temporal spike coordination are hotly debated. However, this debate is often confounded by an implicit discussion about the use of appropriate analysis methods. To avoid incorrect interpretation of data, the analysis of simultaneous spike trains for precise spike correlation needs to be properly adjusted to the features of the experimental spike trains. In particular, nonstationarity of the firing of individual neurons in time or across trials, a spike train structure deviating from Poisson, or a co-occurrence of such features in parallel spike trains are potent generators of false positives. Problems can be avoided by including these features in the null hypothesis of the significance test. In this context, the use of surrogate data becomes increasingly important, because the complexity of the data typically prevents analytical solutions. This review provides an overview of the potential obstacles in the correlation analysis of parallel spike data and possible routes to overcome them. The discussion is illustrated at every stage of the argument by referring to a specific analysis tool (the Unitary Events method). The conclusions, however, are of a general nature and hold for other analysis techniques. Thorough testing and calibration of analysis tools and the impact of potentially erroneous preprocessing stages are emphasized. PMID:19129298
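As an illustration of the surrogate-data logic described above, the sketch below assesses the significance of near-coincident spikes by dithering spike times, which destroys fine temporal coordination while preserving coarse rate structure. The coincidence window and dither width are arbitrary illustrative choices; this is not the Unitary Events implementation.

```python
# Hedged sketch: dither surrogates for precise spike-coincidence significance.
import numpy as np

rng = np.random.default_rng(1)

def coincidences(t1, t2, window=0.005):
    """Count spikes in t1 with a partner in t2 within +/- window (seconds)."""
    t2 = np.sort(t2)
    idx = np.searchsorted(t2, t1)
    left = np.abs(t1 - t2[np.clip(idx - 1, 0, len(t2) - 1)])
    right = np.abs(t1 - t2[np.clip(idx, 0, len(t2) - 1)])
    return int(np.sum(np.minimum(left, right) <= window))

def dither_pvalue(t1, t2, n_surr=1000, dither=0.02):
    """One-sided p-value: fraction of dithered surrogates >= observed count."""
    observed = coincidences(t1, t2)
    surr = [coincidences(t1 + rng.uniform(-dither, dither, len(t1)), t2)
            for _ in range(n_surr)]
    return observed, (1 + np.sum(np.array(surr) >= observed)) / (n_surr + 1)
```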
Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie
2013-01-01
Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is small. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
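A minimal sketch of the minimum p-value procedure might look as follows, with a t-test and a Mann-Whitney test standing in for the prespecified statistics; the choice of statistics and the permutation count are illustrative.

```python
# Hedged sketch: calibrate the minimum of several p-values by permutation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def min_pvalue(x, y):
    p_t = stats.ttest_ind(x, y).pvalue          # difference in means
    p_w = stats.mannwhitneyu(x, y).pvalue       # rank-based difference
    return min(p_t, p_w)

def minp_permutation_test(x, y, n_perm=2000):
    observed = min_pvalue(x, y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        count += min_pvalue(perm[:len(x)], perm[len(x):]) <= observed
    return (count + 1) / (n_perm + 1)   # permutation p-value for H0: no effect
```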
Kinematic imprints from the bar and spiral structures in the Galactic disk
NASA Astrophysics Data System (ADS)
Figueras, F.; Antoja, T.; Valenzuela, O.; Romero-Gómez, M.; Pichardo, B.; Moreno, E.
2011-12-01
One hundred and forty years after the discovery of the moving groups, these stellar streams are emerging as powerful tools to constrain models of the spiral arms and the Galactic bar in the Gaia era. From the kinematic-age-metallicity analysis in the solar neighbourhood, it is now well established that some of these kinematic structures have a dynamical origin, different from the classical cluster-disruption hypothesis. Test particle simulations allow us to establish definitively that these local structures can be created by the dynamical resonances of material spiral arms and not exclusively by the Galactic bar. First studies to evaluate the capabilities of the future Gaia data to detect and characterize moving groups at 2-6 kpc from the solar neighborhood are discussed.
Dynamic sensor management of dispersed and disparate sensors for tracking resident space objects
NASA Astrophysics Data System (ADS)
El-Fallah, A.; Zatezalo, A.; Mahler, R.; Mehra, R. K.; Donatelli, D.
2008-04-01
Dynamic sensor management of dispersed and disparate sensors for space situational awareness presents daunting scientific and practical challenges as it requires optimal and accurate maintenance of all Resident Space Objects (RSOs) of interest. We demonstrate an approach to the space-based sensor management problem by extending a previously developed and tested sensor management objective function, the Posterior Expected Number of Targets (PENT), to disparate and dispersed sensors. This PENT extension together with observation models for various sensor platforms, and a Probability Hypothesis Density Particle Filter (PHD-PF) tracker provide a powerful tool for tackling this challenging problem. We demonstrate the approach using simulations for tracking RSOs by a Space Based Visible (SBV) sensor and ground based radars.
NASA Astrophysics Data System (ADS)
Dunstan, Jocelyn; Fallah-Fini, Saeideh; Nau, Claudia; Glass, Thomas; Global Obesity Prevention Center Team
The application of sophisticated mathematical and numerical tools in public health has been demonstrated to be useful in predicting the outcomes of public interventions, as well as in studying, for example, the main causes of obesity without conducting experiments on the population. In this project we aim to understand which kinds of food consumed in different countries over time best predict the rate of obesity in those countries. The use of machine learning is particularly useful because we do not need to create a hypothesis and test it with the data; instead, we learn from the data which groups of food best describe the prevalence of obesity.
Translational Mouse Models of Autism: Advancing Toward Pharmacological Therapeutics
Kazdoba, Tatiana M.; Leach, Prescott T.; Yang, Mu; Silverman, Jill L.; Solomon, Marjorie
2016-01-01
Animal models provide preclinical tools to investigate the causal role of genetic mutations and environmental factors in the etiology of autism spectrum disorder (ASD). Knockout and humanized knock-in mice, and more recently knockout rats, have been generated for many of the de novo single gene mutations and copy number variants (CNVs) detected in ASD and comorbid neurodevelopmental disorders. Mouse models incorporating genetic and environmental manipulations have been employed for preclinical testing of hypothesis-driven pharmacological targets, to begin to develop treatments for the diagnostic and associated symptoms of autism. In this review, we summarize rodent behavioral assays relevant to the core features of autism, preclinical and clinical evaluations of pharmacological interventions, and strategies to improve the translational value of rodent models of autism. PMID:27305922
Interpolity exchange of basalt tools facilitated via elite control in Hawaiian archaic states
Kirch, Patrick V.; Mills, Peter R.; Lundblad, Steven P.; Sinton, John; Kahn, Jennifer G.
2012-01-01
Ethnohistoric accounts of late precontact Hawaiian archaic states emphasize the independence of chiefly controlled territories (ahupua‘a) based on an agricultural, staple economy. However, elite control of unevenly distributed resources, such as high-quality volcanic rock for adze production, may have provided an alternative source of economic power. To test this hypothesis we used nondestructive energy-dispersive X-ray fluorescence (ED-XRF) analysis of 328 lithic artifacts from 36 archaeological features in the Kahikinui district, Maui Island, to geochemically characterize the source groups. This process was followed by a limited sampling using destructive wavelength-dispersive X-ray fluorescence (WD-XRF) analysis to more precisely characterize certain nonlocal source groups. Seventeen geochemical groups were defined, eight of which represent extra-Maui Island sources. Although the majority of stone tools were derived from Maui Island sources (71%), a significant quantity (27%) of tools derived from extraisland sources, including the large Mauna Kea quarry on Hawai‘i Island as well as quarries on O‘ahu, Moloka‘i, and Lāna‘i islands. Importantly, tools quarried from extralocal sources are found in the highest frequency in elite residential features and in ritual contexts. These results suggest a significant role for a wealth economy based on the control and distribution of nonagricultural goods and resources during the rise of the Hawaiian archaic states. PMID:22203984
CytoBayesJ: software tools for Bayesian analysis of cytogenetic radiation dosimetry data.
Ainsbury, Elizabeth A; Vinnikov, Volodymyr; Puig, Pedro; Maznyk, Nataliya; Rothkamm, Kai; Lloyd, David C
2013-08-30
A number of authors have suggested that a Bayesian approach may be most appropriate for analysis of cytogenetic radiation dosimetry data. In the Bayesian framework, probability of an event is described in terms of previous expectations and uncertainty. Previously existing, or prior, information is used in combination with experimental results to infer probabilities or the likelihood that a hypothesis is true. It has been shown that the Bayesian approach increases both the accuracy and quality assurance of radiation dose estimates. New software entitled CytoBayesJ has been developed with the aim of bringing Bayesian analysis to cytogenetic biodosimetry laboratory practice. CytoBayesJ takes a number of Bayesian or 'Bayesian like' methods that have been proposed in the literature and presents them to the user in the form of simple user-friendly tools, including testing for the most appropriate model for distribution of chromosome aberrations and calculations of posterior probability distributions. The individual tools are described in detail and relevant examples of the use of the methods and the corresponding CytoBayesJ software tools are given. In this way, the suitability of the Bayesian approach to biological radiation dosimetry is highlighted and its wider application encouraged by providing a user-friendly software interface and manual in English and Russian. Copyright © 2013 Elsevier B.V. All rights reserved.
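The Bayesian machinery described here can be illustrated with a small grid-posterior sketch: a Poisson likelihood for dicentric counts under a linear-quadratic yield curve, combined with a flat prior over dose. The curve coefficients below are invented for illustration and are not calibration values from CytoBayesJ or any laboratory.

```python
# Hedged sketch: grid posterior over radiation dose from aberration counts.
import numpy as np

def dose_posterior(n_dicentrics, n_cells, c=0.001, alpha=0.03, beta=0.06,
                   doses=np.linspace(0, 6, 601)):
    """Grid posterior over dose (Gy) given dicentric counts; flat prior."""
    yield_per_cell = c + alpha * doses + beta * doses**2
    mu = n_cells * yield_per_cell                  # expected total count
    log_lik = n_dicentrics * np.log(mu) - mu       # Poisson log-likelihood
    post = np.exp(log_lik - log_lik.max())         # flat prior over the grid
    post /= post.sum()
    return doses, post

doses, post = dose_posterior(n_dicentrics=25, n_cells=500)
print("posterior mode:", doses[np.argmax(post)], "Gy")
```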
Population Genetic Structure and Colonisation History of the Tool-Using New Caledonian Crow
Abdelkrim, Jawad; Hunt, Gavin R.; Gray, Russell D.; Gemmell, Neil J.
2012-01-01
New Caledonian crows exhibit considerable variation in tool making between populations. Here, we present the first study of the species’ genetic structure over its geographical distribution. We collected feathers from crows on mainland Grande Terre, the inshore island of Toupéti, and the nearby island of Maré where it is believed birds were introduced after European colonisation. We used nine microsatellite markers to establish the genotypes of 136 crows from these islands and classical population genetic tools as well as Approximate Bayesian Computations to explore the distribution of genetic diversity. We found that New Caledonian crows most likely separate into three main distinct clusters: Grande Terre, Toupéti and Maré. Furthermore, Toupéti and Maré crows represent a subset of the genetic diversity observed on Grande Terre, confirming their mainland origin. The genetic data are compatible with a colonisation of Maré taking place after European colonisation around 1900. Importantly, we observed (1) moderate, but significant, genetic differentiation across Grande Terre, and (2) that the degree of differentiation between populations on the mainland increases with geographic distance. These data indicate that despite individual crows’ potential ability to disperse over large distances, most gene flow occurs over short distances. The temporal and spatial patterns described provide a basis for further hypothesis testing and investigation of the geographical variation observed in the tool skills of these crows. PMID:22590576
Mazerolle, M.J.
2006-01-01
In ecology, researchers frequently use observational studies to explain a given pattern, such as the number of individuals in a habitat patch, with a large number of explanatory (i.e., independent) variables. To elucidate such relationships, ecologists have long relied on hypothesis testing to include or exclude variables in regression models, although the conclusions often depend on the approach used (e.g., forward, backward, stepwise selection). Though better tools surfaced in the mid-1970s, they are still underutilized in certain fields, particularly in herpetology. This is the case for the Akaike information criterion (AIC), which is remarkably superior in model selection (i.e., variable selection) to hypothesis-based approaches. It is simple to compute and easy to understand, but more importantly, for a given data set, it provides a measure of the strength of evidence for each model that represents a plausible biological hypothesis relative to the entire set of models considered. Using this approach, one can then compute a weighted average of the estimate and standard error for any given variable of interest across all the models considered. This procedure, termed model-averaging or multimodel inference, yields precise and robust estimates. In this paper, I illustrate the use of the AIC in model selection and inference, as well as the interpretation of results analysed in this framework, with two real herpetological data sets. The AIC and measures derived from it should be routinely adopted by herpetologists. © Koninklijke Brill NV 2006.
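For concreteness, a minimal sketch of AIC-based selection and model averaging is shown below; the candidate models are assumed to be ordinary least-squares fits that all contain the variable of interest, which is a simplification of the full multimodel-inference workflow.

```python
# Hedged sketch: AIC, Akaike weights, and a model-averaged coefficient.
import numpy as np
import statsmodels.api as sm

def aic_table(y, candidate_X):
    """Fit each candidate OLS model; return fits, AICs, and Akaike weights."""
    fits = [sm.OLS(y, sm.add_constant(X)).fit() for X in candidate_X]
    aics = np.array([f.aic for f in fits])
    delta = aics - aics.min()
    weights = np.exp(-0.5 * delta)
    weights /= weights.sum()
    return fits, aics, weights

def model_averaged_beta(fits, weights, index=1):   # index 0 is the intercept
    """Akaike-weighted average of a coefficient shared by all models."""
    betas = np.array([f.params[index] for f in fits])
    return float(np.sum(weights * betas))
```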
USDA-ARS?s Scientific Manuscript database
This study tests the hypothesis that phylogenetic classification can predict whether A. pullulans strains will produce useful levels of the commercial polysaccharide, pullulan, or the valuable enzyme, xylanase. To test this hypothesis, 19 strains of A. pullulans with previously described phenotypes...
Huang, Peng; Ou, Ai-hua; Piantadosi, Steven; Tan, Ming
2014-11-01
We discuss the problem of properly defining treatment superiority through the specification of hypotheses in clinical trials. The need to precisely define the notion of superiority in a one-sided hypothesis test problem has been well recognized by many authors. Ideally, the designed null and alternative hypotheses should correspond to a partition of all possible scenarios of underlying true probability models P = {P(ω) : ω ∈ Ω}, such that the alternative hypothesis Ha = {P(ω) : ω ∈ Ωa} can be inferred upon rejection of the null hypothesis Ho = {P(ω) : ω ∈ Ωo}. However, in many cases, tests are carried out and recommendations are made without a precise definition of superiority or a specification of the alternative hypothesis. Moreover, in some applications, the union of probability models specified by the chosen null and alternative hypotheses does not constitute the complete model collection P (i.e., Ho ∪ Ha is smaller than P). This not only imposes a strong non-validated assumption about the underlying true models, but also leads to different superiority claims depending on which test is used, instead of on scientific plausibility. Different ways to partition P for testing treatment superiority often have different implications for sample size, power, and significance in both efficacy and comparative effectiveness trial design. Such differences are often overlooked. We provide a theoretical framework for evaluating the statistical properties of different specifications of superiority in typical hypothesis testing. This can help investigators select proper hypotheses for treatment comparison in clinical trial design. Copyright © 2014 Elsevier Inc. All rights reserved.
The potential for increased power from combining P-values testing the same hypothesis.
Ganju, Jitendra; Julie Ma, Guoguang
2017-02-01
The conventional approach to hypothesis testing for formal inference is to prespecify a single test statistic thought to be optimal. However, we usually have more than one test statistic in mind for testing the null hypothesis of no treatment effect but we do not know which one is the most powerful. Rather than relying on a single p-value, combining p-values from prespecified multiple test statistics can be used for inference. Combining functions include Fisher's combination test and the minimum p-value. Using randomization-based tests, the increase in power can be remarkable when compared with a single test and Simes's method. The versatility of the method is that it also applies when the number of covariates exceeds the number of observations. The increase in power is large enough to prefer combined p-values over a single p-value. The limitation is that the method does not provide an unbiased estimator of the treatment effect and does not apply to situations when the model includes treatment by covariate interaction.
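The two combining functions named above can be compared in a few lines; note that SciPy's Fisher combination assumes independent p-values, whereas the paper calibrates combined statistics by randomization.

```python
# Small sketch of the two combining functions; p-values are placeholders
# for prespecified test statistics applied to the same null hypothesis.
from scipy.stats import combine_pvalues

p_values = [0.04, 0.11, 0.37]

fisher_stat, fisher_p = combine_pvalues(p_values, method="fisher")
min_p = min(p_values)   # must be calibrated, e.g. by permutation, before use

print(f"Fisher combined p = {fisher_p:.3f}, minimum p = {min_p:.3f}")
# The raw minimum p-value is anti-conservative on its own; the paper's
# randomization-based calibration supplies its correct reference distribution.
```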
A test of the orthographic recoding hypothesis
NASA Astrophysics Data System (ADS)
Gaygen, Daniel E.
2003-04-01
The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of the words. Listeners have a stable orthographic representation of words, but no phonological representation, when those words have been read frequently but never heard or spoken. Such may be the case for low-frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for words presented visually. In the first experiment, the recognition of auditorily presented nonwords was facilitated when they previously appeared on a visually presented list. The second experiment was similar, but included a concurrent articulation task during the visual word list presentation, thus preventing covert rehearsal of the nonwords. The results were similar to those of the first experiment. The third experiment was an indirect test of memory (auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.
Hamilton, Maryellen; Geraci, Lisa
2006-01-01
According to leading theories, the picture superiority effect is driven by conceptual processing, yet this effect has been difficult to obtain using conceptual implicit memory tests. We hypothesized that the picture superiority effect results from conceptual processing of a picture's distinctive features rather than a picture's semantic features. To test this hypothesis, we used 2 conceptual implicit general knowledge tests; one cued conceptually distinctive features (e.g., "What animal has large eyes?") and the other cued semantic features (e.g., "What animal is the figurehead of Tootsie Roll?"). Results showed a picture superiority effect only on the conceptual test using distinctive cues, supporting our hypothesis that this effect is mediated by conceptual processing of a picture's distinctive features.
Hypothesis testing for band size detection of high-dimensional banded precision matrices.
An, Baiguo; Guo, Jianhua; Liu, Yufeng
2014-06-01
Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is the determination of the band size of the precision matrix. In practice, cross-validation is commonly used; however, we show that cross-validation not only is computationally intensive but can be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.
Why do mothers favor girls and fathers, boys? A hypothesis and a test of investment disparity.
Godoy, Ricardo; Reyes-García, Victoria; McDade, Thomas; Tanner, Susan; Leonard, William R; Huanca, Tomás; Vadez, Vincent; Patel, Karishma
2006-06-01
Growing evidence suggests mothers invest more in girls than boys and fathers more in boys than girls. We develop a hypothesis that predicts preference for girls by the parent facing more resource constraints and preference for boys by the parent facing less constraint. We test the hypothesis with panel data from the Tsimane', a foraging-farming society in the Bolivian Amazon. Tsimane' mothers face more resource constraints than fathers. As predicted, mothers' wealth protected girls' BMI, but fathers' wealth had weak effects on boys' BMI. Numerous tests yielded robust results, including those that controlled for fixed effects of child and household.
Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf
2015-03-01
We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: they inform the researcher about the probability that statistically significant and non-significant test outcomes are actually true. Although commonly misunderstood, a reported p value estimates only the probability of obtaining the results, or more extreme results, if the null hypothesis of no effect were true. Calculations of the more informative PPV and NPV require an a priori estimate of the probability (R). The present document discusses challenges of estimating R.
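Under the standard formulation in terms of the type I error rate, power, and prior odds R = P(H1)/P(H0), the PPV and NPV follow directly from Bayes' rule; the sketch below is one such formulation, not necessarily the authors' exact parameterization.

```python
# Sketch: PPV and NPV from alpha, power, and prior odds R = P(H1)/P(H0).
def ppv_npv(alpha, power, R):
    """Return (PPV, NPV) for a test with given alpha, power, and prior odds."""
    beta = 1 - power
    ppv = (power * R) / (power * R + alpha)         # P(H1 | significant)
    npv = (1 - alpha) / ((1 - alpha) + beta * R)    # P(H0 | not significant)
    return ppv, npv

# Example: alpha = 0.05, power = 0.8, even prior odds (R = 1):
print(ppv_npv(0.05, 0.8, 1.0))   # PPV ~ 0.94, NPV ~ 0.83
```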
RPD-based Hypothesis Reasoning for Cyber Situation Awareness
NASA Astrophysics Data System (ADS)
Yen, John; McNeese, Michael; Mullen, Tracy; Hall, David; Fan, Xiaocong; Liu, Peng
Intelligence workers such as analysts, commanders, and soldiers often need a hypothesis reasoning framework to gain improved situation awareness of the highly dynamic cyber space. The development of such a framework requires the integration of interdisciplinary techniques, including supports for distributed cognition (human-in-the-loop hypothesis generation), supports for team collaboration (identification of information for hypothesis evaluation), and supports for resource-constrained information collection (hypotheses competing for information collection resources). We here describe a cognitively-inspired framework that is built upon Klein’s recognition-primed decision model and integrates the three components of Endsley’s situation awareness model. The framework naturally connects the logic world of tools for cyber situation awareness with the mental world of human analysts, enabling the perception, comprehension, and prediction of cyber situations for better prevention, survival, and response to cyber attacks by adapting missions at the operational, tactical, and strategic levels.
Li, Fuhong; Cao, Bihua; Luo, Yuejia; Lei, Yi; Li, Hong
2013-02-01
Functional magnetic resonance imaging (fMRI) was used to examine differences in brain activation that occur when a person receives the different outcomes of hypothesis testing (HT). Participants were provided with a series of images of batteries and were asked to learn a rule governing what kinds of batteries were charged. Within each trial, the first two charged batteries were sequentially displayed, and participants would generate a preliminary hypothesis based on the perceptual comparison. Next, a third battery that served to strengthen, reject, or was irrelevant to the preliminary hypothesis was displayed. The fMRI results revealed that (1) no significant differences in brain activation were found between the 2 hypothesis-maintain conditions (i.e., strengthen and irrelevant conditions); and (2) compared with the hypothesis-maintain conditions, the hypothesis-reject condition activated the left medial frontal cortex, bilateral putamen, left parietal cortex, and right cerebellum. These findings are discussed in terms of the neural correlates of the subcomponents of HT and working memory manipulation. Copyright © 2012 Elsevier Inc. All rights reserved.
Small sample mediation testing: misplaced confidence in bootstrapped confidence intervals.
Koopman, Joel; Howe, Michael; Hollenbeck, John R; Sin, Hock-Peng
2015-01-01
Bootstrapping is an analytical tool commonly used in psychology to test the statistical significance of the indirect effect in mediation models. Bootstrapping proponents have particularly advocated for its use for samples of 20-80 cases. This advocacy has been heeded, especially in the Journal of Applied Psychology, as researchers are increasingly utilizing bootstrapping to test mediation with samples in this range. We discuss reasons to be concerned with this escalation, and in a simulation study focused specifically on this range of sample sizes, we demonstrate not only that bootstrapping has insufficient statistical power to provide a rigorous hypothesis test in most conditions but also that bootstrapping has a tendency to exhibit an inflated Type I error rate. We then extend our simulations to investigate an alternative empirical resampling method as well as a Bayesian approach and demonstrate that they exhibit comparable statistical power to bootstrapping in small samples without the associated inflated Type I error. Implications for researchers testing mediation hypotheses in small samples are presented. For researchers wishing to use these methods in their own research, we have provided R syntax in the online supplemental materials. (c) 2015 APA, all rights reserved.
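For reference, the bootstrapped indirect-effect test under discussion can be sketched as follows; the two-regression mediation setup and the percentile interval are the common variant, with placeholder data assumed.

```python
# Hedged sketch: percentile bootstrap CI for the mediated (indirect) effect.
import numpy as np

rng = np.random.default_rng(3)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                     # slope of M on X
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]),
                        y, rcond=None)[0][2]       # slope of Y on M, given X
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, level=0.95):
    n = len(x)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                # resample cases
        estimates[i] = indirect_effect(x[idx], m[idx], y[idx])
    lo, hi = np.percentile(estimates, [(1 - level) / 2 * 100,
                                       (1 + level) / 2 * 100])
    return lo, hi   # H0 of no mediation is rejected if the CI excludes 0
```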
Koops, Kathelijne; Furuichi, Takeshi; Hashimoto, Chie; van Schaik, Carel P.
2015-01-01
Sex differences in immatures predict behavioural differences in adulthood in many mammal species. Because most studies have focused on sex differences in social interactions, little is known about possible sex differences in ‘preparation’ for adult life with regards to tool use skills. We investigated sex and age differences in object manipulation in immature apes. Chimpanzees use a variety of tools across numerous contexts, whereas bonobos use few tools and none in foraging. In both species, a female bias in adult tool use has been reported. We studied object manipulation in immature chimpanzees at Kalinzu (Uganda) and bonobos at Wamba (Democratic Republic of Congo). We tested predictions of the ‘preparation for tool use’ hypothesis. We confirmed that chimpanzees showed higher rates and more diverse types of object manipulation than bonobos. Against expectation, male chimpanzees showed higher object manipulation rates than females, whereas in bonobos no sex difference was found. However, object manipulation by male chimpanzees was play-dominated, whereas manipulation types of female chimpanzees were more diverse (e.g., bite, break, carry). Manipulation by young immatures of both species was similarly dominated by play, but only in chimpanzees did it become more diverse with age. Moreover, in chimpanzees, object types became more tool-like (i.e., sticks) with age, further suggesting preparation for tool use in adulthood. The male bias in object manipulation in immature chimpanzees, along with the late onset of tool-like object manipulation, indicates that not all (early) object manipulation (i.e., object play) in immatures prepares for subsistence tool use. Instead, given the similarity with gender differences in human children, object play may also function in motor skill practice for male-specific behaviours (e.g., dominance displays). In conclusion, even though immature behaviours almost certainly reflect preparation for adult roles, more detailed future work is needed to disentangle possible functions of object manipulation during development. PMID:26444011
Benedetti, Manuel; Pontiggia, Daniela; Raggi, Sara; Cheng, Zhenyu; Scaloni, Flavio; Ferrari, Simone; Ausubel, Frederick M; Cervone, Felice; De Lorenzo, Giulia
2015-04-28
Oligogalacturonides (OGs) are fragments of pectin that activate plant innate immunity by functioning as damage-associated molecular patterns (DAMPs). We set out to test the hypothesis that OGs are generated in planta by partial inhibition of pathogen-encoded polygalacturonases (PGs). A gene encoding a fungal PG was fused with a gene encoding a plant polygalacturonase-inhibiting protein (PGIP) and expressed in transgenic Arabidopsis plants. We show that expression of the PGIP-PG chimera results in the in vivo production of OGs that can be detected by mass spectrometric analysis. Transgenic plants expressing the chimera under control of a pathogen-inducible promoter are more resistant to the phytopathogens Botrytis cinerea, Pectobacterium carotovorum, and Pseudomonas syringae. These data provide strong evidence for the hypothesis that OGs released in vivo act as a DAMP signal to trigger plant immunity and suggest that controlled release of these molecules upon infection may be a valuable tool to protect plants against infectious diseases. On the other hand, elevated levels of expression of the chimera cause the accumulation of salicylic acid, reduced growth, and eventually lead to plant death, consistent with the current notion that a trade-off occurs between growth and defense. PMID:25870275
A Sustainable Building Promotes Pro-Environmental Behavior: An Observational Study on Food Disposal
Wu, David W.–L.; DiGiacomo, Alessandra; Kingstone, Alan
2013-01-01
In order to develop a more sustainable society, the wider public will need to increase engagement in pro-environmental behaviors. Psychological research on pro-environmental behaviors has thus far focused on identifying individual factors that promote such behavior, designing interventions based on these factors, and evaluating these interventions. Contextual factors that may also influence behavior at an aggregate level have been largely ignored. In the current study, we test a novel hypothesis – whether simply being in a sustainable building can elicit environmentally sustainable behavior. We find support for our hypothesis: people are significantly more likely to correctly choose the proper disposal bin (garbage, compost, recycling) in a building designed with sustainability in mind compared to a building that was not. Questionnaires reveal that these results are not due to self-selection biases. Our study provides empirical support that one's surroundings can have a profound and positive impact on behavior. It also suggests the opportunity for a new line of research that bridges psychology, design, and policy-making in an attempt to understand how the human environment can be designed and used as a subtle yet powerful tool to encourage and achieve aggregate pro-environmental behavior. PMID:23326521
[Relationships between alexithymia, depression and interpersonal dependency in addictive subjects].
Speranza, Mario; Stéphan, Philippe; Corcos, Maurice; Loas, Gwenolé; Taieb, Olivier; Guilbaud, Olivier; Perez-Diaz, Fernando; Venisse, Jean-Luc; Bizouard, Paul; Halfon, Olivier; Jeammet, Philippe
2003-06-01
In the scientific literature, the term addiction is currently used to describe a whole range of phenomena characterized by an irresistible urge to engage in a series of behaviors carried out in a repetitive and persistent manner, despite accruing adverse somatic, psychological and social consequences for the individual. It has been suggested that subjects presenting such behaviors share specific personality features which support the emergence of, or are associated with, these addictive behaviors. Dimensions such as alexithymia and depression have been particularly well investigated. The aim of this study was to explore the hypothesis of a specific psychopathological model relating alexithymia and depression in different addictive disorders such as alcoholism, drug addiction and eating disorders. Alexithymic and depressive dimensions were explored and analyzed with the statistical tool of path analysis in a large clinical sample of addicted patients and controls. The results of this statistical method, which tests unidirectional causal relationships between a number of observed variables, showed a good fit between the observed data and the proposed model, and support the hypothesis that a depressive dimension can facilitate the development of dependence in vulnerable alexithymic subjects. These results may have clinical implications for the treatment of addictive disorders.
Moss-associated methylobacteria as phytosymbionts: an experimental study
NASA Astrophysics Data System (ADS)
Hornschuh, M.; Grotha, R.; Kutschera, U.
2006-10-01
Methylotrophic bacteria inhabit the surface of plant organs, but the interaction between these microbes and their host cells is largely unknown. Protonemata (gametophytes) of the moss Funaria hygrometrica were cultivated in vitro under axenic conditions and the growth of the protonemal filaments recorded. In the presence of methylobacteria (different strains of Methylobacterium), average cell length and the number of cells per filament were both enhanced. We tested the hypothesis that auxin (indole-3-acetic acid, IAA), secreted by the epiphytic bacteria and taken up by the plant cells, may in part be responsible for this promotion of protonema development. The antiauxin parachlorophenoxyisobutyric acid (PCIB) was used as a tool to analyze the role of IAA and methylobacteria in the regulation of cell growth. In the presence of PCIB, cell elongation and protonema differentiation were both inhibited. This effect was compensated for by the addition of different Methylobacterium strains to the culture medium. Biosynthesis and secretion of IAA by methylobacteria maintained in liquid culture was documented via a colorimetric assay and thin layer chromatography. Our results support the hypothesis that the development of Funaria protonemata is promoted by beneficial phytohormone-producing methylobacteria, which can be classified as phytosymbionts.
Animal Models for Testing the DOHaD Hypothesis
Since the seminal work in human populations by David Barker and colleagues, several species of animals have been used in the laboratory to test the Developmental Origins of Health and Disease (DOHaD) hypothesis. Rats, mice, guinea pigs, sheep, pigs and non-human primates have been...
A "Projective" Test of the Golden Section Hypothesis.
ERIC Educational Resources Information Center
Lee, Chris; Adams-Webber, Jack
1987-01-01
In a projective test of the golden section hypothesis, 24 high school students rated themselves and 10 comic strip characters on the basis of 12 bipolar constructs. The overall proportion of cartoon figures which subjects assigned to the positive poles of the constructs was very close to the golden section. (Author/NB)
Peterson, Chris J; Dosch, Jerald J; Carson, Walter P
2014-08-01
The nucleation hypothesis appears to explain widespread patterns of succession in tropical pastures, specifically the tendency for isolated trees to promote woody species recruitment. Still, the nucleation hypothesis has usually been tested explicitly for only short durations and in some cases isolated trees fail to promote woody recruitment. Moreover, at times, nucleation occurs in other key habitat patches. Thus, we propose an extension, the matrix discontinuity hypothesis: woody colonization will occur in focal patches that function to mitigate the herbaceous vegetation effects, thus providing safe sites or regeneration niches. We tested predictions of the classical nucleation hypothesis, the matrix discontinuity hypothesis, and a distance from forest edge hypothesis, in five abandoned pastures in Costa Rica, across the first 11 years of succession. Our findings confirmed the matrix discontinuity hypothesis: specifically, rotting logs and steep slopes significantly enhanced woody colonization. Surprisingly, isolated trees did not consistently significantly enhance recruitment; only larger trees did so. Finally, woody recruitment consistently decreased with distance from forest. Our results as well as results from others suggest that the nucleation hypothesis needs to be broadened beyond its historical focus on isolated trees or patches; the matrix discontinuity hypothesis focuses attention on a suite of key patch types or microsites that promote woody species recruitment. We argue that any habitat discontinuities that ameliorate the inhibition by dense graminoid layers will be foci for recruitment. Such patches could easily be manipulated to speed the transition of pastures to closed canopy forests.
Humans have evolved specialized skills of social cognition: the cultural intelligence hypothesis.
Herrmann, Esther; Call, Josep; Hernàndez-Lloreda, Maráa Victoria; Hare, Brian; Tomasello, Michael
2007-09-07
Humans have many cognitive skills not possessed by their nearest primate relatives. The cultural intelligence hypothesis argues that this is mainly due to a species-specific set of social-cognitive skills, emerging early in ontogeny, for participating and exchanging knowledge in cultural groups. We tested this hypothesis by giving a comprehensive battery of cognitive tests to large numbers of two of humans' closest primate relatives, chimpanzees and orangutans, as well as to 2.5-year-old human children before literacy and schooling. Supporting the cultural intelligence hypothesis and contradicting the hypothesis that humans simply have more "general intelligence," we found that the children and chimpanzees had very similar cognitive skills for dealing with the physical world but that the children had more sophisticated cognitive skills than either of the ape species for dealing with the social world.
1986-09-01
[Table-of-contents fragment listing: Hypothesis Test; III. Time to Get Rated Two-Factor ANOVA Results; IV-V. Time to Get Rated Tukey's Paired Comparison Test Results A and B; VI. Single-Factor ANOVA Hypothesis Test #1; VII. AT: Time to Get Rated ANOVA Test Results.]
Sensory discrimination and intelligence: testing Spearman's other hypothesis.
Deary, Ian J; Bell, P Joseph; Bell, Andrew J; Campbell, Mary L; Fazal, Nicola D
2004-01-01
At the centenary of Spearman's seminal 1904 article, his general intelligence hypothesis remains one of the most influential in psychology. Less well known is the article's other hypothesis that there is "a correspondence between what may provisionally be called 'General Discrimination' and 'General Intelligence' which works out with great approximation to one or absoluteness" (Spearman, 1904, p. 284). Studies that do not find high correlations between psychometric intelligence and single sensory discrimination tests do not falsify this hypothesis. This study is the first directly to address Spearman's general intelligence-general sensory discrimination hypothesis. It attempts to replicate his findings with a similar sample of schoolchildren. In a well-fitting structural equation model of the data, general intelligence and general discrimination correlated .92. In a reanalysis of data published by Acton and Schroeder (2001), general intelligence and general sensory ability correlated .68 in men and women. One hundred years after its conception, Spearman's other hypothesis achieves some confirmation. The association between general intelligence and general sensory ability remains to be replicated and explained.
Dynamic test input generation for multiple-fault isolation
NASA Technical Reports Server (NTRS)
Schaefer, Phil
1990-01-01
Recent work in Causal Reasoning has provided practical techniques for multiple fault diagnosis. These techniques provide a hypothesis/measurement diagnosis cycle. Using probabilistic methods, they choose the best measurements to make, then update fault hypotheses in response. For many applications such as computers and spacecraft, few measurement points may be accessible, or values may change quickly as the system under diagnosis operates. In these cases, a hypothesis/measurement cycle is insufficient. A technique is presented for a hypothesis/test-input/measurement diagnosis cycle. In contrast to generating tests a priori for determining device functionality, it dynamically generates tests in response to current knowledge about fault probabilities. It is shown how the mathematics previously used for measurement specification can be applied to the test input generation process. An example from an efficient implementation called Multi-Purpose Causal (MPC) is presented.
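A minimal sketch of the probabilistic selection idea described above: pick the test input whose outcome is expected to shrink the entropy of the fault-hypothesis distribution the most. The hypotheses, outcome models, and numbers below are hypothetical, not MPC's.

import math

def entropy(p):
    """Shannon entropy of a discrete distribution (list of probabilities)."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def posterior(prior, likelihood):
    """Bayes update: prior and likelihood are dicts keyed by hypothesis."""
    joint = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(joint.values())
    return {h: v / z for h, v in joint.items()}

def expected_entropy(prior, outcome_model):
    """Expected posterior entropy of a test whose outcome_model maps
    outcome -> {hypothesis: P(outcome | hypothesis)}."""
    total = 0.0
    for like in outcome_model.values():
        p_outcome = sum(prior[h] * like[h] for h in prior)
        if p_outcome > 0:
            post = posterior(prior, like)
            total += p_outcome * entropy(list(post.values()))
    return total

# Two fault hypotheses and two candidate test inputs (toy numbers).
prior = {"adder_stuck": 0.6, "bus_fault": 0.4}
tests = {
    "input_A": {"pass": {"adder_stuck": 0.1, "bus_fault": 0.8},
                "fail": {"adder_stuck": 0.9, "bus_fault": 0.2}},
    "input_B": {"pass": {"adder_stuck": 0.5, "bus_fault": 0.6},
                "fail": {"adder_stuck": 0.5, "bus_fault": 0.4}},
}
# Choose the test input expected to reduce hypothesis entropy the most.
best = min(tests, key=lambda t: expected_entropy(prior, tests[t]))
print(best)  # input_A: its outcomes discriminate the hypotheses better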
Lee, Chai-Jin; Kang, Dongwon; Lee, Sangseon; Lee, Sunwon; Kang, Jaewoo; Kim, Sun
2018-05-25
Determining the functions of a gene requires time-consuming, expensive biological experiments. Scientists can speed up this experimental process if literature information and biological networks are adequately provided. In this paper, we present a web-based information system that can perform in silico experiments to computationally test hypotheses about the function of a gene. A hypothesis specified in English by the user is converted to genes using a literature and knowledge mining system called BEST. Condition-specific TF, miRNA and PPI (protein-protein interaction) networks are automatically generated by projecting gene and miRNA expression data onto template networks. The in silico experiment then tests how well the target genes are connected from the knockout gene through the condition-specific networks. The test result visualizes paths from the knockout gene to the target genes in the three networks. Statistical and information-theoretic scores are provided on the resulting web page to help scientists either accept or reject the hypothesis being tested. Our web-based system was extensively tested using three data sets: E2f1, Lrrk2, and Dicer1 knockout data sets. We were able to reproduce gene functions reported in the original research papers. In addition, we tested comprehensively with all disease names in MalaCards as hypotheses to show the effectiveness of our system. Our in silico experiment system can be very useful in suggesting biological mechanisms that can be further tested in vivo or in vitro. http://biohealth.snu.ac.kr/software/insilico/. Copyright © 2018 Elsevier Inc. All rights reserved.
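A toy sketch of the connectivity test at the core of the in silico experiment, using networkx; the edges here are invented, whereas the real system derives them by projecting expression data onto TF, miRNA, and PPI template networks.

import networkx as nx

# Toy condition-specific network; in the real system the edges would come
# from projecting expression data onto TF/miRNA/PPI template networks.
G = nx.DiGraph()
G.add_edges_from([
    ("E2f1", "Ccne1"), ("Ccne1", "Cdk2"),   # hypothetical regulatory edges
    ("E2f1", "Myc"), ("Myc", "Ccnd1"),
])

knockout = "E2f1"
targets = ["Cdk2", "Ccnd1", "Trp53"]

# The in silico test: is each target reachable from the knockout gene?
for t in targets:
    reachable = G.has_node(t) and nx.has_path(G, knockout, t)
    print(t, "connected" if reachable else "not connected")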
A Task-oriented Approach for Hydrogeological Site Characterization
NASA Astrophysics Data System (ADS)
Rubin, Y.; Nowak, W.; de Barros, F.
2010-12-01
Hydrogeological site characterization is a challenging task for several reasons: (1) the large spatial variability and scarcity of prior information render the outcome of any planned sampling campaign uncertain; (2) there are no simple tools for comparing the many alternative measurement techniques and data acquisition strategies; and (3) data acquisition is subject to physical and budgetary constraints. This paper presents several ideas on how to plan sampling campaigns in a rational manner while addressing these challenges. The first idea is to recognize that different sites and different problems require different characterization strategies. Hence the idea is to plan data acquisition according to its capability for meeting site-specific goals. For example, the characterization needs at a “research problem” site (e.g., a site intended to investigate the transport of uranium in the subsurface, such as Hanford) are different from those of a “problem” site (e.g., a contaminated site associated with a health risk to humans, such as Camp Lejeune, or determining the safe yield of an aquifer). This distinction requires planners to define the characterization goal(s) in a quantitative manner. The second idea is to define metrics that could link specific data types and data acquisition strategies with the site-specific goals in a way that would allow planners to compare strongly different alternative strategies at the design stage (even prior to data acquisition) and to modify the strategies as more data become available. To meet this goal, we developed the concept of the (comparative) information yield curve. Finally, we propose to look at site characterization from the perspective of statistical hypothesis testing, whereby data acquisition strategies could be evaluated in terms of their ability to support or refute various hypotheses made with regard to the characterization goals, and the strategies could be modified once the test is completed. Accept/reject regions for hypothesis testing can be determined based on goals set by regulations or by agreement between the stakeholders. Hypothesis-driven design could help in minimizing the chances of making wrong decisions (false positives or false negatives) with regard to the site-specific goals.
Killeen's (2005) "p[subscript rep]" Coefficient: Logical and Mathematical Problems
ERIC Educational Resources Information Center
Maraun, Michael; Gabriel, Stephanie
2010-01-01
In his article, "An Alternative to Null-Hypothesis Significance Tests," Killeen (2005) urged the discipline to abandon the practice of "p[subscript obs]"-based null hypothesis testing and to quantify the signal-to-noise characteristics of experimental outcomes with replication probabilities. He described the coefficient that he…
Using VITA Service Learning Experiences to Teach Hypothesis Testing and P-Value Analysis
ERIC Educational Resources Information Center
Drougas, Anne; Harrington, Steve
2011-01-01
This paper describes a hypothesis testing project designed to capture student interest and stimulate classroom interaction and communication. Using an online survey instrument, the authors collected student demographic information and data regarding university service learning experiences. Introductory statistics students performed a series of…
A Rational Analysis of the Selection Task as Optimal Data Selection.
ERIC Educational Resources Information Center
Oaksford, Mike; Chater, Nick
1994-01-01
Experimental data on human reasoning in hypothesis-testing tasks is reassessed in light of a Bayesian model of optimal data selection in inductive hypothesis testing. The rational analysis provided by the model suggests that reasoning in such tasks may be rational rather than subject to systematic bias. (SLD)
Random Effects Structure for Confirmatory Hypothesis Testing: Keep It Maximal
ERIC Educational Resources Information Center
Barr, Dale J.; Levy, Roger; Scheepers, Christoph; Tily, Harry J.
2013-01-01
Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the…
USDA-ARS?s Scientific Manuscript database
The effects of bias (over- and underestimates) in estimates of disease severity on hypothesis testing using different assessment methods were explored. Nearest percent estimates (NPE), the Horsfall-Barratt (H-B) scale, and two different linear category scales (10% increments, with and without addition...
A Multivariate Test of the Bott Hypothesis in an Urban Irish Setting
ERIC Educational Resources Information Center
Gordon, Michael; Downing, Helen
1978-01-01
Using a sample of 686 married Irish women in Cork City, the Bott hypothesis was tested, and the results of a multivariate regression analysis revealed that neither network connectedness nor the strength of the respondent's emotional ties to the network had any explanatory power. (Author)
Polarization, Definition, and Selective Media Learning.
ERIC Educational Resources Information Center
Tichenor, P. J.; And Others
The traditional hypothesis that extreme attitudinal positions on controversial issues are likely to produce low understanding of messages on these issues--especially when the messages represent opposing views--is tested. Data for testing the hypothesis come from two field studies, each dealing with reader attitudes and decoding of one news article…
The Lasting Effects of Introductory Economics Courses.
ERIC Educational Resources Information Center
Sanders, Philip
1980-01-01
Reports research which tests the Stigler Hypothesis. The hypothesis suggests that students who have taken introductory economics courses and those who have not show little difference in test performance five years after completing college. Results of the author's research illustrate that economics students do retain some knowledge of economics…
Robust Approach to Verifying the Weak Form of the Efficient Market Hypothesis
NASA Astrophysics Data System (ADS)
Střelec, Luboš
2011-09-01
The weak form of the efficient markets hypothesis states that prices incorporate only past information about the asset. An implication of this form of the efficient markets hypothesis is that one cannot detect mispriced assets and consistently outperform the market through technical analysis of past prices. One possible formulation of the efficient market hypothesis used for weak-form tests is that share prices follow a random walk, meaning that returns are realizations of an IID sequence of random variables. Consequently, for verifying the weak form of the efficient market hypothesis, we can use distribution tests, among others, i.e. some tests of normality and/or some graphical methods. Many procedures for testing the normality of univariate samples have been proposed in the literature [7]. Today the most popular omnibus test of normality for general use is the Shapiro-Wilk test. The Jarque-Bera test is the most widely adopted omnibus test of normality in econometrics and related fields. In particular, the Jarque-Bera test (i.e. a test based on the classical measures of skewness and kurtosis) is frequently used when one is more concerned about heavy-tailed alternatives. As these measures are based on moments of the data, this test has a zero breakdown value [2]. In other words, a single outlier can make the test worthless. The reason so many classical procedures are nonrobust to outliers is that the parameters of the model are expressed in terms of moments, and their classical estimators are expressed in terms of sample moments, which are very sensitive to outliers. Another approach to robustness is to concentrate on the parameters of interest suggested by the problem under study. Consequently, novel robust procedures for testing normality are presented in this paper to overcome the shortcomings of classical normality tests on financial data, which typically exhibit remote data points and other deviations from normality. This study also discusses results of simulation power studies of these tests for normality against selected alternatives. Based on the outcome of the power simulation study, selected normality tests were then used to verify the weak form of efficiency in Central European stock markets.
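A minimal sketch of the classical normality-testing step described above (not the authors' robust procedures, which are not available in standard libraries), run on synthetic log returns with a few injected outliers.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic log returns, plus a few remote data points (outliers)
log_returns = rng.normal(0.0, 0.01, 1000)
log_returns[::200] += 0.08          # inject five outliers

jb_stat, jb_p = stats.jarque_bera(log_returns)
sw_stat, sw_p = stats.shapiro(log_returns)
print(f"Jarque-Bera p={jb_p:.4f}, Shapiro-Wilk p={sw_p:.4f}")
# Small p-values reject normality of returns, i.e. evidence against this
# random-walk formulation of weak-form efficiency. The paper's point: the
# moment-based JB statistic is driven almost entirely by the few outliers.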
Concerns regarding a call for pluralism of information theory and hypothesis testing
Lukacs, P.M.; Thompson, W.L.; Kendall, W.L.; Gould, W.R.; Doherty, P.F.; Burnham, K.P.; Anderson, D.R.
2007-01-01
1. Stephens et al. (2005) argue for 'pluralism' in statistical analysis, combining null hypothesis testing and information-theoretic (I-T) methods. We show that I-T methods are more informative even in single variable problems and we provide an ecological example. 2. I-T methods allow inferences to be made from multiple models simultaneously. We believe multimodel inference is the future of data analysis, which cannot be achieved with null hypothesis-testing approaches. 3. We argue for a stronger emphasis on critical thinking in science in general and less reliance on exploratory data analysis and data dredging. Deriving alternative hypotheses is central to science; deriving a single interesting science hypothesis and then comparing it to a default null hypothesis (e.g. 'no difference') is not an efficient strategy for gaining knowledge. We think this single-hypothesis strategy has been relied upon too often in the past. 4. We clarify misconceptions presented by Stephens et al. (2005). 5. We think inference should be made about models, directly linked to scientific hypotheses, and their parameters conditioned on data, Prob(Hj | data). I-T methods provide a basis for this inference. Null hypothesis testing merely provides a probability statement about the data conditioned on a null model, Prob(data | H0). 6. Synthesis and applications. I-T methods provide a more informative approach to inference. I-T methods provide a direct measure of evidence for or against hypotheses and a means to consider simultaneously multiple hypotheses as a basis for rigorous inference. Progress in our science can be accelerated if modern methods can be used intelligently; this includes various I-T and Bayesian methods.
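For readers unfamiliar with the I-T machinery the authors advocate, a minimal sketch of multimodel inference via Akaike weights; the model names and AIC values are hypothetical.

import math

def akaike_weights(aic_values):
    """Convert AIC values for competing models into Akaike weights,
    interpretable as relative evidence for each model given the data."""
    best = min(aic_values)
    rel = [math.exp(-0.5 * (a - best)) for a in aic_values]
    z = sum(rel)
    return [r / z for r in rel]

# Hypothetical AIC scores for three ecological models of the same data
aics = {"density_only": 412.3, "density+rainfall": 404.1, "full": 405.9}
for name, w in zip(aics, akaike_weights(list(aics.values()))):
    print(f"{name}: weight={w:.3f}")
# All candidate models are weighed simultaneously; no null hypothesis
# and no single p-value is involved.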
Applicability of Taylor's hypothesis in thermally driven turbulence
Kumar, Abhishek; Verma, Mahendra K.
2018-01-01
In this paper, we show that, in the presence of large-scale circulation (LSC), Taylor's hypothesis can be invoked to deduce the energy spectrum in thermal convection using real-space probes, a popular experimental tool. We perform numerical simulation of turbulent convection in a cube and observe that the velocity field follows Kolmogorov's spectrum (k^(-5/3)). We also record the velocity time series using real-space probes near the lateral walls. The corresponding frequency spectrum exhibits Kolmogorov's spectrum (f^(-5/3)), thus validating Taylor's hypothesis with the steady LSC playing the role of a mean velocity field. The aforementioned findings based on real-space probes provide valuable inputs for experimental measurements used for studying the spectrum of convective turbulence. PMID:29765668
Dorny, P; Dermauw, V; Van Hul, A; Trevisan, C; Gabriël, S
2017-10-15
Taenia solium taeniasis/cysticercosis is a zoonosis included in the WHO's list of neglected tropical diseases. Accurate diagnostic tools for humans and pigs are needed to monitor intervention outcomes. Currently used diagnostic tools for porcine cysticercosis all have drawbacks. Serological tests are mainly confronted with problems of specificity. More specifically, circulating-antigen detection tests cross-react with Taenia hydatigena, and transient antigens resulting from aborted infections are suspected. Furthermore, the hypothesis has been raised that hatched ingested eggs of other Taenia species may lead to a transient antibody response or to the presence of circulating antigen detectable by serological tests used for porcine cysticercosis. Here we describe the results of a study that consisted of oral administration of Taenia saginata eggs to five piglets, followed by serological testing during five weeks and necropsy, aimed at studying possible cross-reactions in serological tests used for porcine cysticercosis. The infectivity of the eggs was verified by in vitro hatching and by experimental infection of a calf. One piglet developed acute respiratory disease and died on day 6 post infection. The remaining four piglets did not show any clinical signs until euthanasia. None of the serum samples from the four piglets collected between days 0 and 35 post infection gave a positive reaction in the B158/B60 Ag-ELISA or in a commercial Western blot for antibody detection. In conclusion, this study showed that experimental exposure of four pigs to T. saginata eggs did not result in positive serology for T. solium. These results may help in interpreting serological results when monitoring T. solium control programmes. Copyright © 2017 Elsevier B.V. All rights reserved.
Nicolini, Antonello; Mollar, Elena; Grecchi, Bruna; Landucci, Norma
2014-01-01
Results supporting the use and effectiveness of positive expiratory pressure devices in chronic obstructive pulmonary disease (COPD) patients are still controversial. We tested the hypothesis that adding TPEP or IPPB to standard pharmacological therapy may provide additional clinical benefit over pharmacological therapy alone in patients with severe COPD. Forty-five patients were randomized into three groups: one treated with IPPB, one treated with TPEP, and one receiving pharmacological therapy alone (control group). Primary outcome measures included scales and questionnaires concerning dyspnea (MRC scale), dyspnea, cough, and sputum (BCSS), and quality of life (COPD Assessment Test, CAT). Secondary outcome measures were respiratory function testing, arterial blood gas analysis, and hematological examinations. Patients in both the IPPB group and the TPEP group showed a significant improvement in two of the three tests (MRC, CAT) compared to the control group. However, in the group comparison analysis for the same variables between the IPPB group and the TPEP group, we observed a significant improvement in the IPPB group (P≤.05 for MRC and P≤.01 for CAT). The differences in action of the two techniques are evident in the results of pulmonary function testing: IPPB increases FVC, FEV1, and MIP, reflecting its capacity to increase lung volume. TPEP also increases FVC and FEV1 (less than IPPB) and increases MEP, while decreasing total lung capacity and residual volume. Both techniques (IPPB and TPEP) significantly improve dyspnea, quality-of-life tools, and lung function in patients with severe COPD. IPPB demonstrated greater effectiveness than TPEP in improving dyspnea and quality-of-life tools (MRC, CAT). Copyright © 2013 SEPAR. Published by Elsevier Espana. All rights reserved.
Map LineUps: Effects of spatial structure on graphical inference.
Beecham, Roger; Dykes, Jason; Meulemans, Wouter; Slingsby, Aidan; Turkay, Cagatay; Wood, Jo
2017-01-01
Fundamental to the effective use of visualization as an analytic and descriptive tool is the assurance that presenting data visually provides the capability of making inferences from what we see. This paper explores two related approaches to quantifying the confidence we may have in making visual inferences from mapped geospatial data. We adapt Wickham et al.'s 'Visual Line-up' method as a direct analogy with Null Hypothesis Significance Testing (NHST) and propose a new approach for generating more credible spatial null hypotheses. Rather than using as a spatial null hypothesis the unrealistic assumption of complete spatial randomness, we propose spatially autocorrelated simulations as alternative nulls. We conduct a set of crowdsourced experiments (n=361) to determine the just noticeable difference (JND) between pairs of choropleth maps of geographic units controlling for spatial autocorrelation (Moran's I statistic) and geometric configuration (variance in spatial unit area). Results indicate that people's abilities to perceive differences in spatial autocorrelation vary with baseline autocorrelation structure and the geometric configuration of geographic units. These results allow us, for the first time, to construct a visual equivalent of statistical power for geospatial data. Our JND results add to those provided in recent years by Klippel et al. (2011), Harrison et al. (2014) and Kay & Heer (2015) for correlation visualization. Importantly, they provide an empirical basis for an improved construction of visual line-ups for maps and the development of theory to inform geospatial tests of graphical inference.
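Moran's I, the autocorrelation statistic the experiments control for, can be computed directly; below is a minimal sketch on a toy grid with a rook-adjacency weight matrix and invented values (the paper's maps use real geographic units).

import numpy as np

def morans_i(values, W):
    """Moran's I for a 1-D array of areal values and a spatial weight
    matrix W (w_ij > 0 where units i and j are neighbours)."""
    z = values - values.mean()
    num = (W * np.outer(z, z)).sum()
    den = (z ** 2).sum()
    return len(values) / W.sum() * num / den

# Toy 4x4 grid flattened row-wise, with rook (edge-sharing) adjacency
n = 4
W = np.zeros((n * n, n * n))
for r in range(n):
    for c in range(n):
        i = r * n + c
        if c + 1 < n: W[i, i + 1] = W[i + 1, i] = 1
        if r + 1 < n: W[i, i + n] = W[i + n, i] = 1

smooth = np.repeat([1.0, 2.0, 3.0, 4.0], 4)   # values rise row by row
rng = np.random.default_rng(0)
print(morans_i(smooth, W))                     # strongly positive
print(morans_i(rng.normal(size=16), W))        # near 0: spatial randomness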
Mitochondrial NADH Fluorescence is Enhanced by Complex I Binding
Blinova, Ksenia; Levine, Rodney L.; Boja, Emily S.; Griffiths, Gary L.; Shi, Zhen-Dan; Ruddy, Brian; Balaban, Robert S.
2012-01-01
Mitochondrial NADH fluorescence has been a useful tool in evaluating mitochondrial energetics both in vitro and in vivo. Mitochondrial NADH fluorescence is enhanced several fold in the matrix through extended fluorescence lifetimes (EFL). However, the actual binding sites responsible for NADH EFL are unknown. We tested the hypothesis that NADH binding to Complex I is a significant source of mitochondrial NADH fluorescence enhancement. To test this hypothesis, the effect of Complex I binding on NADH fluorescence efficiency was evaluated in purified protein, and in native gels of the entire porcine heart mitochondria proteome. To avoid the oxidation of NADH in these preparations, we conducted the binding experiments under anoxic conditions in a specially designed apparatus. Purified intact Complex I enhanced NADH fluorescence in native gels approximately 10 fold. However, no enhancement was detected in denatured individual Complex I subunit proteins. In the Clear and Ghost native gels of the entire mitochondrial proteome, NADH fluorescence enhancement was localized to regions where NADH oxidation occurred in the presence of oxygen. Inhibitor and mass spectroscopy studies revealed that the fluorescence enhancement was specific to Complex I proteins. No fluorescence enhancement was detected for MDH or other dehydrogenases in this assay system, at physiological mole fractions of the matrix proteins. These data suggest that NADH associated with Complex I significantly contributes to the overall mitochondrial NADH fluorescence signal and provides an explanation for the well established close correlation of mitochondrial NADH fluorescence and the metabolic state. PMID:18702505
Time-Frequency Learning Machines for Nonstationarity Detection Using Surrogates
NASA Astrophysics Data System (ADS)
Borgnat, Pierre; Flandrin, Patrick; Richard, Cédric; Ferrari, André; Amoud, Hassan; Honeine, Paul
2012-03-01
Time-frequency representations provide a powerful tool for nonstationary signal analysis and classification, supporting a wide range of applications [12]. As opposed to conventional Fourier analysis, these techniques reveal the evolution in time of the spectral content of signals. In Ref. [7,38], time-frequency analysis is used to test the stationarity of any signal. The proposed method consists of a comparison between global and local time-frequency features. The originality is to make use of a family of stationary surrogate signals for defining the null hypothesis of stationarity and, based upon this information, to derive statistical tests. An open question remains, however, about how to choose relevant time-frequency features. Over the last decade, a number of new pattern recognition methods based on reproducing kernels have been introduced. These learning machines have gained popularity due to their conceptual simplicity and their outstanding performance [30]. Initiated by Vapnik's support vector machines (SVM) [35], they now offer a wide class of supervised and unsupervised learning algorithms. In Ref. [17-19], the authors have shown how the most effective and innovative learning machines can be tuned to operate in the time-frequency domain. This chapter follows this line of research by taking advantage of learning machines to test and quantify stationarity. Based on one-class SVM, our approach uses the entire time-frequency representation and does not require arbitrary feature extraction. Applied to a set of surrogates, it provides the domain boundary that includes most of these stationarized signals. This allows us to test the stationarity of the signal under investigation. This chapter is organized as follows. In Section 22.2, we introduce the surrogate data method to generate stationarized signals, namely, the null hypothesis of stationarity. The concept of time-frequency learning machines is presented in Section 22.3, and applied to one-class SVM in order to derive a stationarity test in Section 22.4. The relevance of the latter is illustrated by simulation results in Section 22.5.
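A compressed sketch of the surrogate-plus-one-class-SVM pipeline under simplifying assumptions: phase-randomized surrogates stand in for the stationarized null, and a crude spectrogram-centroid feature replaces the chapter's full time-frequency machinery.

import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import OneClassSVM

def surrogate(x, rng):
    """Phase-randomized surrogate: same power spectrum, stationarized."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def tf_features(x, fs):
    """Crude time-frequency feature: per-frame spectral centroid."""
    f, t, S = spectrogram(x, fs=fs, nperseg=128)
    centroid = (f[:, None] * S).sum(axis=0) / S.sum(axis=0)
    return np.array([centroid.mean(), centroid.std()])

fs = 1000
t = np.arange(4000) / fs
x = np.sin(2 * np.pi * (50 + 30 * t) * t)     # chirp: nonstationary

rng = np.random.default_rng(0)
feats = np.array([tf_features(surrogate(x, rng), fs) for _ in range(100)])
clf = OneClassSVM(nu=0.05, gamma="scale").fit(feats)

# The signal is declared nonstationary if it falls outside the boundary
# learned from its own stationarized surrogates.
print("nonstationary" if clf.predict([tf_features(x, fs)])[0] == -1
      else "stationary")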
InteGO2: a web tool for measuring and visualizing gene semantic similarities using Gene Ontology.
Peng, Jiajie; Li, Hongxiang; Liu, Yongzhuang; Juan, Liran; Jiang, Qinghua; Wang, Yadong; Chen, Jin
2016-08-31
The Gene Ontology (GO) has been used in high-throughput omics research as a major bioinformatics resource. The hierarchical structure of GO provides users a convenient platform for biological information abstraction and hypothesis testing. Computational methods have been developed to identify functionally similar genes. However, none of the existing measurements take into account all the rich information in GO. Similarly, the web-based applications built on these existing methods to compute gene functional similarities provide purely text-based outputs. Without a graphical visualization interface, results are difficult to interpret. We present InteGO2, a web tool that allows researchers to calculate GO-based gene semantic similarities using seven widely used GO-based similarity measurements. Also, we provide an integrative measurement that synergistically integrates all the individual measurements to improve the overall performance. Using HTML5 and cytoscape.js, we provide a graphical interface in InteGO2 to visualize the resulting gene functional association networks. InteGO2 is an easy-to-use HTML5-based web tool. With it, researchers can measure gene or gene product functional similarity conveniently, and visualize the network of functional interactions in a graphical interface. InteGO2 can be accessed via http://mlg.hit.edu.cn:8089/ .
Live Interrogation and Visualization of Earth Systems (LIVES)
NASA Astrophysics Data System (ADS)
Nunn, J. A.; Anderson, L. C.
2007-12-01
Twenty tablet PCs and associated peripherals acquired through a HP Technology for Teaching grant are being used to redesign two freshman laboratory courses as well as a sophomore geobiology course in Geology and Geophysics at Louisiana State University. The two introductory laboratories serve approximately 750 students per academic year including both majors and non-majors; the geobiology course enrolls about 35 students/year and is required for majors in the department's geology concentration. Limited enrollments and 3-hour labs make it possible to incorporate hands-on visualization, animation, GIS, manipulation of data and images, and access to geological data available online. Goals of the course redesigns include: enhancing visualization of earth materials, physical/chemical/biological processes, and biosphere/geosphere history; strengthening students' ability to acquire, manage, and interpret multifaceted geological information; fostering critical thinking, the scientific method, and an earth-system science perspective in ancient and modern environments (such as coastal erosion and restoration in Louisiana or the Snowball Earth hypothesis); improving student communication skills; and increasing the quantity, quality, and diversity of students pursuing Earth Science careers. IT resources available in the laboratory provide students with sophisticated visualization tools, allowing them to switch between 2-D and 3-D reconstructions more seamlessly, and enabling them to manipulate larger integrated data-sets, thus permitting more time for critical thinking and hypothesis testing. IT resources also enable faculty and students to simultaneously work with simulation software to animate earth processes such as plate motions or groundwater flow and immediately test hypotheses formulated in the data analysis. Finally, tablet PCs make it possible for data gathering and analysis outside a formal classroom. As a result, students will achieve fluency in using visualization and technology for informal and formal scientific communication. The equipment and exercises developed also will be used in additional upper level undergraduate classes and two outreach programs: NSF funded Geoscience Alliance for Enhanced Minority Participation and Shell Foundation funded Shell Undergraduate Recruiting and Geoscience Education.
Caricati, Luca
2017-01-01
The status-legitimacy hypothesis was tested by analyzing cross-national data about social inequality. Several indicators were used as indexes of social advantage: social class, personal income, and self-position in the social hierarchy. Moreover, inequality and freedom in nations, as indexed by the Gini coefficient and by the human freedom index, were considered. Results from 36 nations worldwide showed no support for the status-legitimacy hypothesis. The perception that income distribution was fair tended to increase as social advantage increased. Moreover, national context increased the difference between advantaged and disadvantaged people in the perception of social fairness: contrary to the status-legitimacy hypothesis, disadvantaged people were more likely than advantaged people to perceive income disparities as too large, and this difference increased in nations with greater freedom and equality. The implications for the status-legitimacy hypothesis are discussed.
NASA Astrophysics Data System (ADS)
van Aalsvoort, Joke
In a previous article, the problem of chemistry's lack of relevance in secondary chemical education was analysed using logical positivism as a tool. This article starts with the hypothesis that the problem can be addressed by means of activity theory, one of the important theories within the sociocultural school. The reason for this expectation is that, while logical positivism creates a divide between science and society, activity theory offers a model of society in which science and society are related. With the use of this model, a new course for grade nine has been constructed. This results in a confirmation of the hypothesis, at least at a theoretical level. A comparison with the Salters' approach is made in order to demonstrate the relative merits of a mediated way of dealing with the problem of the lack of relevance of chemistry in chemical education.
Tests of the Giant Impact Hypothesis
NASA Technical Reports Server (NTRS)
Jones, J. H.
1998-01-01
The giant impact hypothesis has gained popularity as a means of explaining a volatile-depleted Moon that still has a chemical affinity to the Earth. As Taylor's Axiom decrees, the best models of lunar origin are testable, but this is difficult with the giant impact model. The energy associated with the impact would be sufficient to totally melt and partially vaporize the Earth, which means that there should be no geological vestige of pre-impact times. Accordingly, it is important to devise tests that may be used to evaluate the giant impact hypothesis. Three such tests are discussed here. None of these is supportive of the giant impact model, but neither do they disprove it.
Genetics and recent human evolution.
Templeton, Alan R
2007-07-01
Starting with "mitochondrial Eve" in 1987, genetics has played an increasingly important role in studies of the last two million years of human evolution. It initially appeared that genetic data resolved the basic models of recent human evolution in favor of the "out-of-Africa replacement" hypothesis in which anatomically modern humans evolved in Africa about 150,000 years ago, started to spread throughout the world about 100,000 years ago, and subsequently drove to complete genetic extinction (replacement) all other human populations in Eurasia. Unfortunately, many of the genetic studies on recent human evolution have suffered from scientific flaws, including misrepresenting the models of recent human evolution, focusing upon hypothesis compatibility rather than hypothesis testing, committing the ecological fallacy, and failing to consider a broader array of alternative hypotheses. Once these flaws are corrected, there is actually little genetic support for the out-of-Africa replacement hypothesis. Indeed, when genetic data are used in a hypothesis-testing framework, the out-of-Africa replacement hypothesis is strongly rejected. The model of recent human evolution that emerges from a statistical hypothesis-testing framework does not correspond to any of the traditional models of human evolution, but it is compatible with fossil and archaeological data. These studies also reveal that any one gene or DNA region captures only a small part of human evolutionary history, so multilocus studies are essential. As more and more loci became available, genetics will undoubtedly offer additional insights and resolutions of human evolution.
Age Dedifferentiation Hypothesis: Evidence from the WAIS III.
ERIC Educational Resources Information Center
Juan-Espinosa, Manuel; Garcia, Luis F.; Escorial, Sergio; Rebollo, Irene; Colom, Roberto; Abad, Francisco J.
2002-01-01
Used the Spanish standardization of the Wechsler Adult Intelligence Scale III (WAIS III) (n=1,369) to test the age dedifferentiation hypothesis. Results show no changes in the percentage of variance accounted for by "g" and four group factors when restriction of range is controlled. Discusses an age indifferentiation hypothesis. (SLD)
Hypothesis tests for the detection of constant speed radiation moving sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir
2015-07-01
Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Hypothesis test methods based on empirically estimated mean and variance of the signals delivered by the different channels have shown significant gain in terms of a tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive background, and a vehicle source carrier under the same respectively high and low count rate radioactive background, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm, while guaranteeing the stability of its optimization parameter regardless of signal-to-noise ratio variations between 2 and 0.8. (authors)
Multiple Hypothesis Testing for Experimental Gingivitis Based on Wilcoxon Signed Rank Statistics
Preisser, John S.; Sen, Pranab K.; Offenbacher, Steven
2011-01-01
Dental research often involves repeated multivariate outcomes on a small number of subjects, for which there is interest in identifying outcomes that exhibit change in their levels over time as well as in characterizing the nature of that change. In particular, periodontal research often involves the analysis of molecular mediators of inflammation, for which multivariate parametric methods are highly sensitive to outliers and deviations from Gaussian assumptions. In such settings, nonparametric methods may be favored over parametric ones. Additionally, there is a need for statistical methods that control an overall error rate for multiple hypothesis testing. We review univariate and multivariate nonparametric hypothesis tests and apply them to longitudinal data to assess changes over time in 31 biomarkers measured from the gingival crevicular fluid in 22 subjects in whom gingivitis was induced by temporarily withholding tooth brushing. To identify biomarkers that can be induced to change, multivariate Wilcoxon signed rank tests for a set of four summary measures based upon area under the curve are applied for each biomarker and compared to their univariate counterparts. Multiple hypothesis testing methods with choice of control of the false discovery rate or strong control of the family-wise error rate are examined. PMID:21984957
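A minimal sketch of the univariate counterpart described above: one paired Wilcoxon signed rank test per biomarker with Benjamini-Hochberg FDR control across the family. The data here are simulated, not the study's.

import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(7)
n_subjects, n_biomarkers = 22, 31
baseline = rng.lognormal(0, 1, (n_subjects, n_biomarkers))
induced = baseline.copy()
induced[:, :8] *= rng.lognormal(0.5, 0.2, (n_subjects, 8))  # 8 truly change

# One paired signed-rank test per biomarker (nonparametric, outlier-robust)
pvals = [wilcoxon(induced[:, j], baseline[:, j]).pvalue
         for j in range(n_biomarkers)]

# Benjamini-Hochberg keeps the false discovery rate at 5% over 31 tests
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("biomarkers flagged:", np.flatnonzero(reject))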
Shnabel, Nurit; Bar-Anan, Yoav; Kende, Anna; Bareket, Orly; Lazar, Yael
2016-01-01
Based on theorizing that helping relations may serve as a subtle mechanism to reinforce intergroup inequality, the present research (N = 1,315) examined the relation between benevolent sexism (i.e., a chivalrous yet subtly oppressive view of women) and helping. In cross-gender interactions, the endorsement of (Studies 1, 3, and 4) or exposure to (Study 2) benevolent sexism predicted (a) men's preference to provide women with dependency-oriented help (i.e., direct assistance) rather than tools for autonomous coping, and (b) women's preference to seek dependency-oriented help rather than tools for autonomous coping. Benevolent sexism did not predict men's and women's engagement in dependency-oriented helping relations in same-gender interactions. Studies 1 and 2 examined behavioral intentions in response to a series of hypothetical scenarios; Studies 3 and 4 examined actual behavior in tests of mathematical and logical ability, and pointed to assumed partner's expectations as a potential mediator. The converging evidence supports the hypothesis that benevolent sexism encourages engagement in cross-gender helping relations that perpetuate traditional gender roles. (c) 2016 APA, all rights reserved.
Methodology to assess clinical liver safety data.
Merz, Michael; Lee, Kwan R; Kullak-Ublick, Gerd A; Brueckner, Andreas; Watkins, Paul B
2014-11-01
Analysis of liver safety data has to be multivariate by nature and needs to take into account time dependency of observations. Current standard tools for liver safety assessment such as summary tables, individual data listings, and narratives address these requirements to a limited extent only. Using graphics in the context of a systematic workflow including predefined graph templates is a valuable addition to standard instruments, helping to ensure completeness of evaluation, and supporting both hypothesis generation and testing. Employing graphical workflows interactively allows analysis in a team-based setting and facilitates identification of the most suitable graphics for publishing and regulatory reporting. Another important tool is statistical outlier detection, accounting for the fact that for assessment of Drug-Induced Liver Injury, identification and thorough evaluation of extreme values has much more relevance than measures of central tendency in the data. Taken together, systematical graphical data exploration and statistical outlier detection may have the potential to significantly improve assessment and interpretation of clinical liver safety data. A workshop was convened to discuss best practices for the assessment of drug-induced liver injury (DILI) in clinical trials.
Service-based analysis of biological pathways
Zheng, George; Bouguettaya, Athman
2009-01-01
Background Computer-based pathway discovery is concerned with two important objectives: pathway identification and analysis. Conventional mining and modeling approaches aimed at pathway discovery are often effective at achieving either objective, but not both. Such limitations can be effectively tackled by leveraging a Web service-based modeling and mining approach. Results Inspired by molecular recognition and drug discovery processes, we developed a Web service mining tool, named PathExplorer, to discover potentially interesting biological pathways linking service models of biological processes. The tool uses an innovative approach to identify useful pathways based on graph-based hints and service-based simulation to verify the user's hypotheses. Conclusion Web service modeling of biological processes allows the easy access and invocation of these processes on the Web. Web service mining techniques described in this paper enable the discovery of biological pathways linking these process service models. Algorithms presented in this paper for automatically highlighting interesting subgraphs within an identified pathway network enable the user to formulate hypotheses, which can be tested using our simulation algorithm, also described in this paper. PMID:19796403
Small-sided games in football as a method to improve high school students’ instep passing skills
NASA Astrophysics Data System (ADS)
Ridwan, M.; Darmawan, G.; Fuadi, Z.
2018-01-01
This study analyzed the influence of applying small-sided games on learning outcomes for instep passing in football. The research used a one-group pretest-posttest design. Data were obtained from weekly 135-minute small-sided games sessions held over four weeks, with a final test in the last meeting. The descriptive statistics showed an increase in the mean after the small-sided games intervention, and the t-test confirmed the improvement: its significance was 0.000, which is less than 0.05, so the hypothesis Ha is accepted and Ho rejected. The results showed a 48.15% increase attributable to the small-sided games application. Small-sided games were thus shown to be an appropriate tool for improving the instep passing technique in football. We suggest applying this kind of game in football learning within physical education, especially for pre-university students.
A default Bayesian hypothesis test for mediation.
Nuijten, Michèle B; Wetzels, Ruud; Matzke, Dora; Dolan, Conor V; Wagenmakers, Eric-Jan
2015-03-01
In order to quantify the relationship between multiple variables, researchers often carry out a mediation analysis. In such an analysis, a mediator (e.g., knowledge of a healthy diet) transmits the effect from an independent variable (e.g., classroom instruction on a healthy diet) to a dependent variable (e.g., consumption of fruits and vegetables). Almost all mediation analyses in psychology use frequentist estimation and hypothesis-testing techniques. A recent exception is Yuan and MacKinnon (Psychological Methods, 14, 301-322, 2009), who outlined a Bayesian parameter estimation procedure for mediation analysis. Here we complete the Bayesian alternative to frequentist mediation analysis by specifying a default Bayesian hypothesis test based on the Jeffreys-Zellner-Siow approach. We further extend this default Bayesian test by allowing a comparison to directional or one-sided alternatives, using Markov chain Monte Carlo techniques implemented in JAGS. All Bayesian tests are implemented in the R package BayesMed (Nuijten, Wetzels, Matzke, Dolan, & Wagenmakers, 2014).
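The authors' default JZS test is implemented in the R package BayesMed; as a loose illustration only, the sketch below approximates Bayes factors for the two mediation paths from BIC differences (a rough approximation, not the JZS procedure) on simulated data.

import numpy as np
import statsmodels.api as sm

def bf10(bic_null, bic_alt):
    """Approximate Bayes factor for H1 over H0 from BIC values."""
    return np.exp((bic_null - bic_alt) / 2)

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)                 # e.g., classroom instruction
m = 0.5 * x + rng.normal(size=n)       # mediator: knowledge of healthy diet
y = 0.4 * m + rng.normal(size=n)       # outcome: fruit/veg consumption

ones = np.ones((n, 1))
# a-path: x -> m, against an intercept-only null
bf_a = bf10(sm.OLS(m, ones).fit().bic,
            sm.OLS(m, sm.add_constant(x)).fit().bic)
# b-path: m -> y controlling for x, against the x-only null
bf_b = bf10(sm.OLS(y, sm.add_constant(x)).fit().bic,
            sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().bic)

# Evidence for mediation requires support on both paths; one crude rule
# is to require both Bayes factors to exceed a threshold.
print(f"BF(a-path)={bf_a:.1f}, BF(b-path)={bf_b:.1f}")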
Knowledge Base Refinement as Improving an Incorrect and Incomplete Domain Theory
1990-04-01
Ginsberg et al., 1985), and RL (Fu and Buchanan, 1985), which perform empirical induction over a library of test cases. This chapter describes a new ... state knowledge. Examples of high-level goals are: to test a hypothesis, to differentiate between several plausible hypotheses, to ask a clarifying ... [Diagram residue omitted: strategy metarules attached to the goals Group Hypotheses, Test Hypothesis, Applyrule, and Findout.]
A robust hypothesis test for the sensitive detection of constant speed radiation moving sources
NASA Astrophysics Data System (ADS)
Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir; Boudergui, Karim; Moline, Yoann; Sannié, Guillaume; Gameiro, Jordan; Normand, Stéphane; Méchin, Laurence
2015-09-01
Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Hypothesis test methods based on empirically estimated mean and variance of the signals delivered by the different channels have shown significant gain in terms of a tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive backgrounds, and a vehicle source carrier under the same respectively high and low count rate radioactive backgrounds, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm. It also guarantees that the optimal coverage factor for this compromise remains stable regardless of signal-to-noise ratio variations between 2 and 0.8, therefore allowing the final user to parametrize the test with the sole prior knowledge of the background amplitude.
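A minimal sketch of the core idea of exploiting the Poisson nature of the counting signal: with a known background rate, the alarm threshold for the time-aligned network sum follows directly from Poisson quantiles rather than from empirical mean/variance estimates. The rates, dwell time, and detector count below are hypothetical.

import numpy as np
from scipy.stats import poisson

def poisson_threshold(background_rate, t, n_channels, alpha=1e-3):
    """Smallest total count whose probability under the background-only
    (Poisson) hypothesis is below alpha."""
    mu = background_rate * t * n_channels
    return poisson.isf(alpha, mu)  # inverse survival function

rng = np.random.default_rng(5)
rate_bkg, dwell, n_det = 50.0, 1.0, 4   # counts/s, seconds, detectors

# Time-aligned sum over the detector network as a carrier passes
counts_bkg = rng.poisson(rate_bkg * dwell, n_det).sum()
counts_src = rng.poisson((rate_bkg + 15.0) * dwell, n_det).sum()

thr = poisson_threshold(rate_bkg, dwell, n_det)
for label, c in [("background", counts_bkg), ("source", counts_src)]:
    print(label, c, "ALARM" if c > thr else "clear")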
The [Geo]Scientific Method; Hypothesis Testing and Geoscience Proposal Writing for Students
ERIC Educational Resources Information Center
Markley, Michelle J.
2010-01-01
Most undergraduate-level geoscience texts offer a paltry introduction to the nuanced approach to hypothesis testing that geoscientists use when conducting research and writing proposals. Fortunately, there are a handful of excellent papers that are accessible to geoscience undergraduates. Two historical papers by the eminent American geologists G.…
Mental Abilities and School Achievement: A Test of a Mediation Hypothesis
ERIC Educational Resources Information Center
Vock, Miriam; Preckel, Franzis; Holling, Heinz
2011-01-01
This study analyzes the interplay of four cognitive abilities--reasoning, divergent thinking, mental speed, and short-term memory--and their impact on academic achievement in school in a sample of adolescents in grades seven to 10 (N = 1135). Based on information processing approaches to intelligence, we tested a mediation hypothesis, which states…
The Relation between Parental Values and Parenting Behavior: A Test of the Kohn Hypothesis.
ERIC Educational Resources Information Center
Luster, Tom; Rhoades, Kelly
To investigate how values influence parenting beliefs and practices, a test was made of Kohn's hypothesis that parents valuing self-direction emphasize the supportive function of parenting, while parents valuing conformity emphasize control of unsanctioned behaviors. Participating in the study were 65 mother-infant dyads. Infants ranged in age…
Chromosome Connections: Compelling Clues to Common Ancestry
ERIC Educational Resources Information Center
Flammer, Larry
2013-01-01
Students compare banding patterns on hominid chromosomes and see striking evidence of their common ancestry. To test this, human chromosome no. 2 is matched with two shorter chimpanzee chromosomes, leading to the hypothesis that human chromosome 2 resulted from the fusion of the two shorter chromosomes. Students test that hypothesis by looking for…
The main objective of the feasibility study described here was to test the hypothesis that properly plugged wells are effectively sealed by drilling mud. In the process of testing the hypothesis, evidence about the dynamics of mud cake buildup on the wellbore face was obtained, as ...
A test of the predator satiation hypothesis, acorn predator size, and acorn preference
C.H. Greenberg; S.J. Zarnoch
2018-01-01
Mast seeding is hypothesized to satiate seed predators with heavy production and reduce populations with crop failure, thereby increasing seed survival. Preference for red or white oak acorns could influence recruitment among oak species. We tested the predator satiation hypothesis, acorn preference, and predator size by concurrently...
The Need for Nuance in the Null Hypothesis Significance Testing Debate
ERIC Educational Resources Information Center
Häggström, Olle
2017-01-01
Null hypothesis significance testing (NHST) provides an important statistical toolbox, but there are a number of ways in which it is often abused and misinterpreted, with bad consequences for the reliability and progress of science. Parts of the contemporary NHST debate, especially in the psychological sciences, are reviewed, and a suggestion is made…
Acorn Caching in Tree Squirrels: Teaching Hypothesis Testing in the Park
ERIC Educational Resources Information Center
McEuen, Amy B.; Steele, Michael A.
2012-01-01
We developed an exercise for a university-level ecology class that teaches hypothesis testing by examining acorn preferences and caching behavior of tree squirrels (Sciurus spp.). This exercise is easily modified to teach concepts of behavioral ecology for earlier grades, particularly high school, and provides students with a theoretical basis for…
Shaping Up the Practice of Null Hypothesis Significance Testing.
ERIC Educational Resources Information Center
Wainer, Howard; Robinson, Daniel H.
2003-01-01
Discusses criticisms of null hypothesis significance testing (NHST), suggesting that historical use of NHST was reasonable, and current users should read Sir Ronald Fisher's applied work. Notes that modifications to NHST and interpretations of its outcomes might better suit the needs of modern science. Concludes that NHST is most often useful as…
SOME EFFECTS OF DOGMATISM IN ELEMENTARY SCHOOL PRINCIPALS AND TEACHERS.
ERIC Educational Resources Information Center
BENTZEN, MARY M.
THE HYPOTHESIS THAT RATINGS ON CONGENIALITY AS A COWORKER GIVEN TO TEACHERS WILL BE IN PART A FUNCTION OF THE ORGANIZATIONAL STATUS OF THE RATER WAS TESTED. A SECONDARY PROBLEM WAS TO TEST THE HYPOTHESIS THAT DOGMATIC SUBJECTS MORE THAN NONDOGMATIC SUBJECTS WOULD EXHIBIT COGNITIVE BEHAVIOR WHICH INDICATED (1) GREATER DISTINCTION BETWEEN POSITIVE…
Thou Shalt Not Bear False Witness against Null Hypothesis Significance Testing
ERIC Educational Resources Information Center
García-Pérez, Miguel A.
2017-01-01
Null hypothesis significance testing (NHST) has been the subject of debate for decades and alternative approaches to data analysis have been proposed. This article addresses this debate from the perspective of scientific inquiry and inference. Inference is an inverse problem and application of statistical methods cannot reveal whether effects…
Wissman, Kathryn T; Rawson, Katherine A
2018-04-01
Arnold and McDermott [(2013). Test-potentiated learning: Distinguishing between direct and indirect effects of testing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39, 940-945] isolated the indirect effects of testing and concluded that encoding is enhanced to a greater extent following more versus fewer practice tests, referred to as test-potentiated learning. The current research provided further evidence for test-potentiated learning and evaluated the covert retrieval hypothesis as an alternative explanation for the observed effect. Learners initially studied foreign language word pairs and then completed either one or five practice tests before restudy occurred. Results of greatest interest concern performance on test trials following restudy for items that were not correctly recalled on the test trials that preceded restudy. Results replicate Arnold and McDermott (2013) by demonstrating that more versus fewer tests potentiate learning when trial time is limited. Results also provide strong evidence against the covert retrieval hypothesis concerning why the effect occurs (i.e., it does not reflect differential covert retrieval during pre-restudy trials). In addition, outcomes indicate that the magnitude of the test-potentiated learning effect decreases as trial length increases, revealing an unexpected boundary condition to test-potentiated learning.
Benzo, Roberto P; Chang, Chung-Chou H; Farrell, Max H; Kaplan, Robert; Ries, Andrew; Martinez, Fernando J; Wise, Robert; Make, Barry; Sciurba, Frank
2010-01-01
Chronic obstructive pulmonary disease (COPD) is a leading cause of death and 70% of the cost of COPD is due to hospitalizations. Self-reported daily physical activity and health status have been reported as predictors of a hospitalization in COPD but are not routinely assessed. We tested the hypothesis that self-reported daily physical activity and health status assessed by a simple question were predictors of a hospitalization in a well-characterized cohort of patients with severe emphysema. Investigators gathered daily physical activity and health status data assessed by a simple question in 597 patients with severe emphysema and tested the association of those patient-reported outcomes to the occurrence of a hospitalization in the following year. Multiple logistic regression analyses were used to determine predictors of hospitalization during the first 12 months after randomization. The two variables tested in the hypothesis were significant predictors of a hospitalization after adjusting for all univariable significant predictors: >2 h of physical activity per week had a protective effect [odds ratio (OR) 0.60; 95% confidence interval (95% CI) 0.41-0.88] and self-reported health status as fair or poor had a deleterious effect (OR 1.57; 95% CI 1.10-2.23). In addition, two other variables became significant in the multivariate model: total lung capacity (every 10% increase) had a protective effect (OR 0.88; 95% CI 0.78-0.99) and self-reported anxiety had a deleterious effect (OR 1.75; 95% CI 1.13-2.70). Self-reported daily physical activity and health status are independently associated with COPD hospitalizations. Our findings, assessed by simple questions, suggest the value of patient-reported outcomes in developing risk assessment tools that are easy to use.
Correcting power and p-value calculations for bias in diffusion tensor imaging.
Lauzon, Carolyn B; Landman, Bennett A
2013-07-01
Diffusion tensor imaging (DTI) provides quantitative parametric maps sensitive to tissue microarchitecture (e.g., fractional anisotropy, FA). These maps are estimated through computational processes and subject to random distortions including variance and bias. Traditional statistical procedures commonly used for study planning (including power analyses and p-value/alpha-rate thresholds) specifically model variability, but neglect potential impacts of bias. Herein, we quantitatively investigate the impacts of bias in DTI on hypothesis test properties (power and alpha-rate) using a two-sided hypothesis testing framework. We present theoretical evaluation of bias on hypothesis test properties, evaluate the bias estimation technique SIMEX for DTI hypothesis testing using simulated data, and evaluate the impacts of bias on spatially varying power and alpha rates in an empirical study of 21 subjects. Bias is shown to inflame alpha rates, distort the power curve, and cause significant power loss even in empirical settings where the expected difference in bias between groups is zero. These adverse effects can be attenuated by properly accounting for bias in the calculation of power and p-values. Copyright © 2013 Elsevier Inc. All rights reserved.
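A toy simulation of the paper's central point, with invented numbers: when the two groups' FA estimates carry unequal bias, a nominal 5% two-sided t-test rejects far more often even though the true tissue values are identical.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
n, sigma, trials = 30, 0.05, 5000
bias_a, bias_b = 0.00, 0.02   # unequal estimation bias between groups

false_pos = 0
for _ in range(trials):
    a = 0.45 + bias_a + rng.normal(0, sigma, n)   # same true FA = 0.45
    b = 0.45 + bias_b + rng.normal(0, sigma, n)
    if ttest_ind(a, b).pvalue < 0.05:
        false_pos += 1
print(f"empirical alpha = {false_pos / trials:.3f} (nominal 0.05)")
# With bias_b = 0 the rate returns to ~0.05; correcting for bias (e.g.,
# via SIMEX, as the paper evaluates) restores the intended test properties.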
SOCR Analyses – an Instructional Java Web-based Statistical Analysis Toolkit
Chu, Annie; Cui, Jenny; Dinov, Ivo D.
2011-01-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include models commonly used in undergraduate statistics courses, such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons: the t-test in the parametric category, and the Wilcoxon rank-sum test, Kruskal-Wallis test and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis-test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), with the aim of contributing to the efforts of the statistical computing community. The code includes functionality for each specific analysis model, and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least-squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most up-to-date information and newly added models. PMID:21546994
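For readers working outside the Java toolkit, the parametric and non-parametric comparisons listed above correspond to standard library calls. A sketch with synthetic samples (scipy is used here for illustration; SOCR itself is Java):
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    a, b, c = rng.normal(0, 1, 30), rng.normal(0.3, 1, 30), rng.normal(0.5, 1, 30)

    print(stats.ttest_ind(a, b))             # parametric two-sample t-test
    print(stats.ranksums(a, b))              # Wilcoxon rank-sum test
    print(stats.kruskal(a, b, c))            # Kruskal-Wallis test
    print(stats.friedmanchisquare(a, b, c))  # Friedman's test (related samples)
    print(stats.fisher_exact([[8, 2], [1, 5]]))  # Fisher's exact test, 2x2 table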
Hypothesis testing and earthquake prediction.
Jackson, D D
1996-04-30
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
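The first of the three tests, comparing the observed number of earthquakes to the number predicted, is commonly formalized as a Poisson consistency ("N") test. A minimal sketch, assuming the forecast specifies a total expected count:
    from scipy.stats import poisson

    def n_test(n_observed, n_expected):
        # Two-sided Poisson consistency test: a small p-value means the
        # observed count is inconsistent with the forecast rate.
        lower = poisson.cdf(n_observed, n_expected)       # P(N <= observed)
        upper = poisson.sf(n_observed - 1, n_expected)    # P(N >= observed)
        return min(1.0, 2 * min(lower, upper))

    print(n_test(14, 8.2))  # e.g., 14 events where 8.2 were forecast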
Mismatch or cumulative stress: toward an integrated hypothesis of programming effects.
Nederhof, Esther; Schmidt, Mathias V
2012-07-16
This paper integrates the cumulative stress hypothesis with the mismatch hypothesis, taking into account individual differences in sensitivity to programming. According to the cumulative stress hypothesis, individuals are more likely to suffer from disease as adversity accumulates. According to the mismatch hypothesis, individuals are more likely to suffer from disease if a mismatch occurs between the early programming environment and the later adult environment. These seemingly contradictory hypotheses are integrated into a new model proposing that the cumulative stress hypothesis applies to individuals who were not, or only to a small extent, programmed by their early environment, while the mismatch hypothesis applies to individuals who experienced strong programming effects. Evidence for the main effects of adversity, as well as evidence for the interaction between adversity in early and later life, is presented from human observational studies and animal models. Next, convincing evidence for individual differences in sensitivity to programming is presented. We extensively discuss how our integrated model can be tested empirically in animal models and human studies, inviting researchers to test this model. Furthermore, this integrated model should encourage clinicians and other practitioners to interpret symptoms as possible adaptations from an evolutionary biology perspective. Copyright © 2011 Elsevier Inc. All rights reserved.
The Relation Among the Likelihood Ratio, Wald, and Lagrange Multiplier Tests and Their Applicability to Small Samples
1982-04-01
Fragments recoverable from the scanned record: Berndt, E. R. and Savin, N. E. (1977), "Conflict Among Criteria for Testing Hypotheses in the Multivariate Linear Regression Model," Econometrica, 45, 1263-1278; Breusch, T. S. (1979), "Conflict Among Criteria for Testing Hypotheses: Extension and Comments," Econometrica, 47, 203-207; Breusch, T. S. and Pagan, A. R. (1980)…
NASA Astrophysics Data System (ADS)
Sirenko, M. A.; Tarasenko, P. F.; Pushkarev, M. I.
2017-01-01
One of the most noticeable features of sign-based statistical procedures is the opportunity to build an exact test for simple hypothesis testing of parameters in a regression model. In this article, we extended the sign-based approach to the nonlinear case with dependent noise. The examined model is a multi-quantile regression, which makes it possible to test hypotheses not only about regression parameters but also about noise parameters.
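The exactness of sign-based procedures comes from the binomial null distribution of residual signs. A minimal sketch of the idea for a single slope parameter (a simplification, not the authors' multi-quantile construction), assuming median-zero noise:
    import numpy as np
    from scipy.stats import binomtest

    def sign_test_slope(x, y, beta0):
        # Exact test of H0: slope == beta0 in y = beta0 * x + noise.
        # Under H0 with median-zero noise, residual signs are Bernoulli(1/2).
        resid = y - beta0 * x
        n_pos = int(np.sum(resid > 0))
        n = int(np.sum(resid != 0))   # drop exact zeros
        return binomtest(n_pos, n, 0.5).pvalue

    rng = np.random.default_rng(2)
    x = rng.normal(size=50)
    y = 1.5 * x + rng.standard_t(3, size=50)  # heavy-tailed noise is fine
    print(sign_test_slope(x, y, 1.5), sign_test_slope(x, y, 0.0))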
A practical tool for modeling biospecimen user fees.
Matzke, Lise; Dee, Simon; Bartlett, John; Damaraju, Sambasivarao; Graham, Kathryn; Johnston, Randal; Mes-Masson, Anne-Marie; Murphy, Leigh; Shepherd, Lois; Schacter, Brent; Watson, Peter H
2014-08-01
The question of how best to attribute the unit costs of the annotated biospecimen product that is provided to a research user is a common issue for many biobanks. Some of the factors influencing user fees are capital and operating costs, internal and external demand and market competition, and moral standards that dictate that fees must have an ethical basis. It is therefore important to establish a transparent and accurate costing tool that can be utilized by biobanks and aid them in establishing biospecimen user fees. To address this issue, we built a biospecimen user fee calculator tool, accessible online at www.biobanking.org. The tool was built to allow input of: i) annual operating and capital costs; ii) costs categorized by the major core biobanking operations; iii) specimen products requested by a biobank user; and iv) services provided by the biobank beyond core operations (e.g., histology, tissue microarray); as well as v) several user-defined variables to allow the calculator to be adapted to different biobank operational designs. To establish default values for variables within the calculator, we first surveyed the members of the Canadian Tumour Repository Network (CTRNet) management committee. We then enrolled four different participants from CTRNet biobanks to test the hypothesis that the calculator tool could change approaches to user fees. Participants were first asked to estimate user fee pricing for three hypothetical user scenarios based on their biobanking experience (estimated pricing) and then to calculate fees for the same scenarios using the calculator tool (calculated pricing). Results demonstrated significant variation in estimated pricing that was reduced by calculated pricing, and that higher user fees are consistently derived when using the calculator. We conclude that adoption of this online calculator for user fee determination is an important first step towards harmonization and realistic user fees.
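The calculator's core is transparent cost attribution. A toy sketch of the arithmetic, with hypothetical cost categories and numbers rather than CTRNet defaults:
    def user_fee(annual_costs, annual_output, requested_units,
                 cost_recovery_fraction=0.5, service_charges=0.0):
        # Unit cost = (operating + amortized capital) / annual specimen output;
        # the fee recovers a chosen fraction of that, plus extra services.
        unit_cost = sum(annual_costs.values()) / annual_output
        return requested_units * unit_cost * cost_recovery_fraction + service_charges

    costs = {"collection": 120_000, "processing": 80_000,
             "storage": 40_000, "capital_amortized": 30_000}
    print(user_fee(costs, annual_output=3_000, requested_units=50,
                   service_charges=400))  # e.g., a histology add-on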
Quanbeck, Stephanie M.; Brachova, Libuse; Campbell, Alexis A.; Guan, Xin; Perera, Ann; He, Kun; Rhee, Seung Y.; Bais, Preeti; Dickerson, Julie A.; Dixon, Philip; Wohlgemuth, Gert; Fiehn, Oliver; Barkan, Lenore; Lange, Iris; Lange, B. Markus; Lee, Insuk; Cortes, Diego; Salazar, Carolina; Shuman, Joel; Shulaev, Vladimir; Huhman, David V.; Sumner, Lloyd W.; Roth, Mary R.; Welti, Ruth; Ilarslan, Hilal; Wurtele, Eve S.; Nikolau, Basil J.
2012-01-01
Metabolomics is the methodology that identifies and measures global pools of small molecules (of less than about 1,000 Da) of a biological sample, which are collectively called the metabolome. Metabolomics can therefore reveal the metabolic outcome of a genetic or environmental perturbation of a metabolic regulatory network, and thus provide insights into the structure and regulation of that network. Because of the chemical complexity of the metabolome and limitations associated with individual analytical platforms for determining the metabolome, it is currently difficult to capture the complete metabolome of an organism or tissue, which is in contrast to genomics and transcriptomics. This paper describes the analysis of Arabidopsis metabolomics data sets acquired by a consortium that includes five analytical laboratories, bioinformaticists, and biostatisticians, which aims to develop and validate metabolomics as a hypothesis-generating functional genomics tool. The consortium is determining the metabolomes of Arabidopsis T-DNA mutant stocks, grown in a standardized controlled environment optimized to minimize environmental impacts on the metabolomes. Metabolomics data were generated with seven analytical platforms, and the combined data are being provided to the research community to formulate initial hypotheses about genes of unknown function (GUFs). A public database (www.PlantMetabolomics.org) has been developed to provide the scientific community with access to the data along with tools to allow for its interactive analysis. Exemplary datasets are discussed to validate the approach, which illustrate how initial hypotheses can be generated from the consortium-produced metabolomics data, integrated with prior knowledge to provide a testable hypothesis concerning the functionality of GUFs. PMID:22645570
Clark, Geoffrey R.; Reepmeyer, Christian; Melekiola, Nivaleti; Woodhead, Jon; Dickinson, William R.; Martinsson-Wallin, Helene
2014-01-01
Tonga was unique in the prehistoric Pacific for developing a maritime state that integrated the archipelago under a centralized authority and for undertaking long-distance economic and political exchanges in the second millennium A.D. To establish the extent of Tonga’s maritime polity, we geochemically analyzed stone tools excavated from the central places of the ruling paramounts, particularly lithic artifacts associated with stone-faced chiefly tombs. The lithic networks of the Tongan state focused on Samoa and Fiji, with one adze sourced to the Society Islands 2,500 km from Tongatapu. To test the hypothesis that nonlocal lithics were especially valued by Tongan elites and were an important source of political capital, we analyzed prestate lithics from Tongatapu and stone artifacts from Samoa. In the Tongan state, 66% of worked stone tools were long-distance imports, indicating that interarchipelago connections intensified with the development of the Tongan polity after A.D. 1200. In contrast, stone tools found in Samoa were from local sources, including tools associated with a monumental structure contemporary with the Tongan state. Network analysis of lithics entering the Tongan state and of the distribution of Samoan adzes in the Pacific identified a centralized polity and the products of specialized lithic workshops, respectively. These results indicate that a significant consequence of social complexity was the establishment of new types of specialized sites in distant geographic areas. Specialized sites were loci of long-distance interaction and formed important centers for the transmission of information, people, and materials in prehistoric Oceania. PMID:25002481
[The effects of moderate physical exercise on cognition in adults over 60 years of age].
Sanchez-Gonzalez, J L; Calvo-Arenillas, J I; Sanchez-Rodriguez, J L
2018-04-01
Clinical evidence gathered in recent years indicates that elderly individuals more frequently display cognitive changes. These age-related changes refer, above all, to memory functions and to the speed of thinking and reasoning. A number of studies have shown that physical activity can be used as an important mechanism for protecting the cognitive functions. To test the hypothesis that physical exercise is able to bring about changes in the cognitive functions of healthy elderly adults without cognitive impairment, thereby improving their quality of life. The study population included participants in the University of Salamanca geriatric revitalisation programme. The sample initially consisted of a total of 44 subjects of both sexes, with a mean age of 74.93 years. The neuropsychological evaluation of the subjects included a series of validated neuropsychological tests: Mini-Mental State Examination, Benton Visual Retention Test, Rey Auditory Verbal Learning Test, Stroop Test and Trail Making Test. The results show that more physical activity is related to better performance in the cognitive functions of the subjects included in this study, after applying the geriatric revitalisation programme. The geriatric revitalisation programme can be a valuable tool for improving cognition in adults over 60 years of age, resulting in enhanced well-being in their quality of life.
Warren, D L; Iglesias, T L
2012-06-01
The 'expensive-tissue hypothesis' states that investment in one metabolically costly tissue necessitates decreased investment in other tissues and has been one of the keystone concepts used in studying the evolution of metabolically expensive tissues. The trade-offs expected under this hypothesis have been investigated in comparative studies in a number of clades, yet support for the hypothesis is mixed. Nevertheless, the expensive-tissue hypothesis has been used to explain everything from the evolution of the human brain to patterns of reproductive investment in bats. The ambiguous support for the hypothesis may be due to interspecific differences in selection, which could lead to spurious results both positive and negative. To control for this, we conduct a study of trade-offs within a single species, Thalassoma bifasciatum, a coral reef fish that exhibits more intraspecific variation in a single tissue (testes) than is seen across many of the clades previously analysed in studies of tissue investment. This constitutes a robust test of the constraints posited under the expensive-tissue hypothesis that is not affected by many of the factors that may confound interspecific studies. However, we find no evidence of trade-offs between investment in testes and investment in liver or brain, which are typically considered to be metabolically expensive. Our results demonstrate that the frequent rejection of the expensive-tissue hypothesis may not be an artefact of interspecific differences in selection and suggests that organisms may be capable of compensating for substantial changes in tissue investment without sacrificing mass in other expensive tissues. © 2012 The Authors. Journal of Evolutionary Biology © 2012 European Society For Evolutionary Biology.
Harris, Liam W.; Davies, T. Jonathan
2016-01-01
Explaining the uneven distribution of species richness across the branches of the tree of life has been a major challenge for evolutionary biologists. Advances in phylogenetic reconstruction, allowing the generation of large, well-sampled, phylogenetic trees have provided an opportunity to contrast competing hypotheses. Here, we present a new time-calibrated phylogeny of seed plant families using Bayesian methods and 26 fossil calibrations. While there are various published phylogenetic trees for plants which have a greater density of species sampling, we are still a long way from generating a complete phylogeny for all ~300,000+ plants. Our phylogeny samples all seed plant families and is a useful tool for comparative analyses. We use this new phylogenetic hypothesis to contrast two alternative explanations for differences in species richness among higher taxa: time for speciation versus ecological limits. We calculated net diversification rate for each clade in the phylogeny and assessed the relationship between clade age and species richness. We then fit models of speciation and extinction to individual branches in the tree to identify major rate-shifts. Our data suggest that the majority of lineages are diversifying very slowly while a few lineages, distributed throughout the tree, are diversifying rapidly. Diversification is unrelated to clade age, no matter the age range of the clades being examined, contrary to both the assumption of an unbounded lineage increase through time, and the paradigm of fixed ecological limits. These findings are consistent with the idea that ecology plays a role in diversification, but rather than imposing a fixed limit, it may have variable effects on per lineage diversification rates through time. PMID:27706173
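For the clade-age analysis, net diversification rates are commonly estimated from standing richness and clade age. A sketch using a familiar crown-group estimator under zero relative extinction (an assumption; the paper's exact estimator may differ):
    import numpy as np

    def crown_diversification_rate(n_species, crown_age_ma):
        # Net diversification r = (ln N - ln 2) / t for a crown group,
        # assuming negligible relative extinction.
        return (np.log(n_species) - np.log(2)) / crown_age_ma

    # e.g., a family with 1,200 species and a 60-Ma crown age
    print(crown_diversification_rate(1_200, 60.0))  # lineages per lineage per Myr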
From patterns to causal understanding: Structural equation modeling (SEM) in soil ecology
Eisenhauer, Nico; Powell, Jeff R; Grace, James B.; Bowker, Matthew A.
2015-01-01
In this perspectives paper we highlight a heretofore underused statistical method in soil ecological research, structural equation modeling (SEM). SEM is commonly used in the general ecological literature to develop causal understanding from observational data, but has been more slowly adopted by soil ecologists. We provide some basic information on the many advantages and possibilities associated with using SEM and provide some examples of how SEM can be used by soil ecologists to shift focus from describing patterns to developing causal understanding and inspiring new types of experimental tests. SEM is a promising tool to aid the growth of soil ecology as a discipline, particularly by supporting research that is increasingly hypothesis-driven and interdisciplinary, thus shining light into the black box of interactions belowground.
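A flavor of what SEM adds over isolated regressions: a path model encodes directed hypotheses among variables and estimates them jointly. A minimal sketch of a recursive path model fitted as two least-squares equations (dedicated SEM software such as lavaan or semopy adds latent variables and fit indices); the soil variables are hypothetical:
    import numpy as np

    rng = np.random.default_rng(3)
    n = 200
    plant_cover = rng.normal(size=n)
    soil_carbon = 0.6 * plant_cover + rng.normal(scale=0.5, size=n)
    microbial_biomass = (0.4 * plant_cover + 0.5 * soil_carbon
                         + rng.normal(scale=0.5, size=n))

    def ols(y, *xs):
        # Least-squares slopes, intercept dropped from the output.
        X = np.column_stack([np.ones_like(y), *xs])
        return np.linalg.lstsq(X, y, rcond=None)[0][1:]

    # Recursive path model: cover -> carbon -> biomass, plus cover -> biomass.
    a = ols(soil_carbon, plant_cover)                        # path a
    cb = ols(microbial_biomass, plant_cover, soil_carbon)    # paths c', b
    print(a, cb, a[0] * cb[1])  # indirect effect of cover on biomass = a * b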
Pharmacophore modeling of diverse classes of p38 MAP kinase inhibitors.
Sarma, Rituparna; Sinha, Sharat; Ravikumar, Muttineni; Kishore Kumar, Madala; Mahmood, S K
2008-12-01
Mitogen-activated protein (MAP) p38 kinase is a serine-threonine protein kinase, and its inhibitors are useful in the treatment of inflammatory diseases. Pharmacophore models were developed using the HypoGen program of Catalyst with diverse classes of p38 MAP kinase inhibitors. The best pharmacophore hypothesis (Hypo1), with hydrogen-bond acceptor (HBA), hydrophobic (HY), hydrogen-bond donor (HBD), and ring aromatic (RA) features, has a correlation coefficient of 0.959, a root mean square deviation (RMSD) of 1.069 and a configuration cost of 14.536. The model was validated using a test set containing 119 compounds and had a high correlation coefficient of 0.851. The results demonstrate that the models obtained in this study are useful and reliable tools for identifying structurally diverse compounds with the desired biological activity.
Increased prefrontal cortex neurogranin enhances plasticity and extinction learning.
Zhong, Ling; Brown, Joshua; Kramer, Audra; Kaleka, Kanwardeep; Petersen, Amber; Krueger, Jamie N; Florence, Matthew; Muelbl, Matthew J; Battle, Michelle; Murphy, Geoffrey G; Olsen, Christopher M; Gerges, Nashaat Z
2015-05-13
Increasing plasticity in neurons of the prefrontal cortex (PFC) has been proposed as a possible therapeutic tool to enhance extinction, a process that is impaired in post-traumatic stress disorder, schizophrenia, and addiction. To test this hypothesis, we generated transgenic mice that overexpress neurogranin (a calmodulin-binding protein that facilitates long-term potentiation) in the PFC. Neurogranin overexpression in the PFC enhanced long-term potentiation and increased the rates of extinction learning of both fear conditioning and sucrose self-administration. Our results indicate that elevated neurogranin function within the PFC can enhance local plasticity and increase the rate of extinction learning across different behavioral tasks. Thus, neurogranin can provide a molecular link between enhanced plasticity and enhanced extinction. Copyright © 2015 the authors 0270-6474/15/357503-06$15.00/0.
Mechanisms of eyewitness suggestibility: tests of the explanatory role hypothesis.
Rindal, Eric J; Chrobak, Quin M; Zaragoza, Maria S; Weihing, Caitlin A
2017-10-01
In a recent paper, Chrobak and Zaragoza (Journal of Experimental Psychology: General, 142(3), 827-844, 2013) proposed the explanatory role hypothesis, which posits that the likelihood of developing false memories for post-event suggestions is a function of the explanatory function the suggestion serves. In support of this hypothesis, they provided evidence that participant-witnesses were especially likely to develop false memories for their forced fabrications when their fabrications helped to explain outcomes they had witnessed. In three experiments, we test the generality of the explanatory role hypothesis as a mechanism of eyewitness suggestibility by assessing whether this hypothesis can predict suggestibility errors in (a) situations where the post-event suggestions are provided by the experimenter (as opposed to fabricated by the participant), and (b) across a variety of memory measures and measures of recollective experience. In support of the explanatory role hypothesis, participants were more likely to subsequently freely report (E1) and recollect the suggestions as part of the witnessed event (E2, source test) when the post-event suggestion helped to provide a causal explanation for a witnessed outcome than when it did not serve this explanatory role. Participants were also less likely to recollect the suggestions as part of the witnessed event (on measures of subjective experience) when their explanatory strength had been reduced by the presence of an alternative explanation that could explain the same outcome (E3, source test + warning). Collectively, the results provide strong evidence that the search for explanatory coherence influences people's tendency to misremember witnessing events that were only suggested to them.
A Continuous Threshold Expectile Model.
Zhang, Feipeng; Li, Qunhua
2017-12-01
Expectile regression is a useful tool for exploring the relation between the response and the explanatory variables beyond the conditional mean. A continuous threshold expectile regression is developed for modeling data in which the effect of a covariate on the response variable is linear but varies below and above an unknown threshold in a continuous way. The estimators for the threshold and the regression coefficients are obtained using a grid search approach. The asymptotic properties for all the estimators are derived, and the estimator for the threshold is shown to achieve root-n consistency. A weighted CUSUM-type test statistic is proposed for the existence of a threshold at a given expectile, and its asymptotic properties are derived under both the null and the local alternative models. This test only requires fitting the model under the null hypothesis in the absence of a threshold, so it is computationally more efficient than likelihood-ratio-type tests. Simulation studies show that the proposed estimators and test have desirable finite-sample performance in both homoscedastic and heteroscedastic cases. Application of the proposed method to a Dutch growth dataset and a baseball pitcher salary dataset reveals interesting insights. The proposed method is implemented in the R package cthreshER.
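The estimation strategy combines two standard ingredients: asymmetric least squares for a fixed expectile and a grid search over candidate thresholds. A compact sketch of both for a simplified broken-stick design (not the package's implementation):
    import numpy as np

    def expectile_fit(X, y, tau, iters=50):
        # Asymmetric least squares via iteratively reweighted least squares.
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        for _ in range(iters):
            w = np.where(y - X @ beta >= 0, tau, 1 - tau)
            sw = np.sqrt(w)
            beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        return beta

    def threshold_expectile(x, y, tau, grid):
        # Grid search for the kink t in E_tau[y|x] = b0 + b1*x + b2*(x - t)_+.
        best_loss, best_t, best_beta = np.inf, None, None
        for t in grid:
            X = np.column_stack([np.ones_like(x), x, np.maximum(x - t, 0.0)])
            beta = expectile_fit(X, y, tau)
            w = np.where(y - X @ beta >= 0, tau, 1 - tau)
            loss = float(np.sum(w * (y - X @ beta) ** 2))
            if loss < best_loss:
                best_loss, best_t, best_beta = loss, t, beta
        return best_t, best_beta

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 4, 300)
    y = 1.0 + 0.5 * x + 1.5 * np.maximum(x - 2.0, 0) + rng.normal(scale=0.5, size=300)
    print(threshold_expectile(x, y, tau=0.75, grid=np.linspace(0.5, 3.5, 61)))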
Design and development of a sensorized wireless toy for measuring infants' manual actions.
Serio, Stefano Marco; Cecchi, Francesca; Assaf, Tareq; Laschi, Cecilia; Dario, Paolo
2013-05-01
The development of grasping is an important milestone that infants reach during the first months of life. Novel approaches for measuring infants' manual actions are based on sensorized platforms usable in natural settings, such as instrumented wireless toys that could be exploited for diagnosis and rehabilitation purposes. A new sensorized wireless toy has been designed and developed with embedded pressure sensors and audio-visual feedback. The fulfillment of clinical specifications has been proved through mechanical and electrical characterization. Infants showed a good degree of acceptance of such tools, as confirmed by the results of preliminary tests that involved nine healthy infants: the dimensions fit infants' anthropometrics, the device is robust and safe, the acquired signals are in the expected range and the wireless communication is stable. Although obtained only through preliminary tests, these results support the hypothesis that this type of instrumented toy could be useful for quantitative monitoring and measuring of infants' motor development and is ready to be evaluated for assessing motor skills through appropriate clinical trials.
Effects of dividing attention during encoding on perceptual priming of unfamiliar visual objects.
Soldan, Anja; Mangels, Jennifer A; Cooper, Lynn A
2008-11-01
According to the distractor-selection hypothesis (Mulligan, 2003), dividing attention during encoding reduces perceptual priming when responses to non-critical (i.e., distractor) stimuli are selected frequently and simultaneously with critical stimulus encoding. Because direct support for this hypothesis comes exclusively from studies using familiar word stimuli, the present study tested whether the predictions of the distractor-selection hypothesis extend to perceptual priming of unfamiliar visual objects using the possible/impossible object decision test. Consistent with the distractor-selection hypothesis, Experiments 1 and 2 found no reduction in priming when the non-critical stimuli were presented infrequently and non-synchronously with the critical target stimuli, even though explicit recognition memory was reduced. In Experiment 3, non-critical stimuli were presented frequently and simultaneously during encoding of critical stimuli; however, no decrement in priming was detected, even when encoding time was reduced. These results suggest that priming in the possible/impossible object decision test is relatively immune to reductions in central attention and that not all aspects of the distractor-selection hypothesis generalise to priming of unfamiliar visual objects. Implications for theoretical models of object decision priming are discussed.
Sex and Class Differences in Parent-Child Interaction: A Test of Kohn's Hypothesis
ERIC Educational Resources Information Center
Gecas, Viktor; Nye, F. Ivan
1974-01-01
This paper focuses on Melvin Kohn's suggestive hypothesis that white-collar parents stress the development of internal standards of conduct in their children while blue-collar parents are more likely to react on the basis of the consequences of the child's behavior. This hypothesis was supported. (Author)
Assess the Critical Period Hypothesis in Second Language Acquisition
ERIC Educational Resources Information Center
Du, Lihong
2010-01-01
The Critical Period Hypothesis aims to investigate the reason for significant difference between first language acquisition and second language acquisition. Over the past few decades, researchers carried out a series of studies to test the validity of the hypothesis. Although there were certain limitations in these studies, most of their results…
Further Evidence on the Weak and Strong Versions of the Screening Hypothesis in Greece.
ERIC Educational Resources Information Center
Lambropoulos, Haris S.
1992-01-01
Uses Greek data for 1981 and 1985 to test screening hypothesis by replicating method proposed by Psacharopoulos. Credentialism, or sheepskin effect of education, directly challenges human capital theory, which views education as a productivity augmenting process. Results do not support the strong version of the screening hypothesis and suggest…
A Clinical Evaluation of the Competing Sources of Input Hypothesis
ERIC Educational Resources Information Center
Fey, Marc E.; Leonard, Laurence B.; Bredin-Oja, Shelley L.; Deevy, Patricia
2017-01-01
Purpose: Our purpose was to test the competing sources of input (CSI) hypothesis by evaluating an intervention based on its principles. This hypothesis proposes that children's use of main verbs without tense is the result of their treating certain sentence types in the input (e.g., "Was 'she laughing'?") as models for declaratives…
Experimental comparisons of hypothesis test and moving average based combustion phase controllers.
Gao, Jinwu; Wu, Yuhu; Shen, Tielong
2016-11-01
For engine control, combustion phase is the most effective and direct parameter for improving fuel efficiency. In this paper, a statistical control strategy based on a hypothesis-test criterion is discussed. Taking location of peak pressure (LPP) as the combustion phase indicator, a statistical model of LPP is first proposed, and then the controller design method is discussed on the basis of both Z and T tests. For comparison, a moving-average-based control strategy is also presented and implemented in this study. The experiments on a spark ignition gasoline engine at various operating conditions show that the hypothesis-test-based controller is able to regulate LPP close to the set point while maintaining rapid transient response, and the variance of LPP is also well constrained. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
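The hypothesis-test controller idea can be sketched simply: collect a window of LPP samples, run a Z-test against the set point, and move the actuation only when the deviation is statistically significant. A toy sketch with assumed values, not the paper's calibration:
    import numpy as np
    from scipy.stats import norm

    def lpp_controller_step(lpp_window, set_point, sigma, gain, alpha=0.05):
        # Return a timing correction only if the window mean of LPP deviates
        # significantly from the set point (two-sided Z-test).
        n = len(lpp_window)
        dev = np.mean(lpp_window) - set_point
        z = dev / (sigma / np.sqrt(n))
        if abs(z) > norm.ppf(1 - alpha / 2):
            return -gain * dev   # correct toward the set point
        return 0.0               # deviation is plausibly cycle-to-cycle noise

    rng = np.random.default_rng(4)
    window = rng.normal(9.5, 1.8, size=20)  # LPP samples, degrees ATDC (assumed)
    print(lpp_controller_step(window, set_point=8.0, sigma=1.8, gain=0.5))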
Use of Pearson's Chi-Square for Testing Equality of Percentile Profiles across Multiple Populations.
Johnson, William D; Beyl, Robbie A; Burton, Jeffrey H; Johnson, Callie M; Romer, Jacob E; Zhang, Lei
2015-08-01
In large sample studies where distributions may be skewed and not readily transformed to symmetry, it may be of greater interest to compare different distributions in terms of percentiles rather than means. For example, it may be more informative to compare two or more populations with respect to their within-population distributions by testing the hypothesis that their corresponding respective 10th, 50th, and 90th percentiles are equal. As a generalization of the median test, the proposed test statistic is asymptotically distributed as chi-square with degrees of freedom dependent upon the number of percentiles tested and constraints of the null hypothesis. Results from simulation studies are used to validate the nominal 0.05 significance level under the null hypothesis, and asymptotic power properties that are suitable for testing equality of percentile profiles against selected profile discrepancies for a variety of underlying distributions. A pragmatic example is provided to illustrate the comparison of the percentile profiles for four body mass index distributions.
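The construction generalizes the median test: pool the samples, cut at the pooled percentiles, and test homogeneity of the resulting counts. A sketch of that idea via a contingency-table chi-square (an approximation in the spirit of, not a copy of, the authors' statistic):
    import numpy as np
    from scipy.stats import chi2_contingency

    def percentile_profile_test(samples, probs=(0.10, 0.50, 0.90)):
        # Bin each sample at the pooled percentiles and chi-square the counts;
        # equal percentile profiles imply homogeneous bin proportions.
        pooled = np.concatenate(samples)
        cuts = np.quantile(pooled, probs)
        table = np.array([np.bincount(np.searchsorted(cuts, s),
                                      minlength=len(cuts) + 1) for s in samples])
        chi2, pval, dof, expected = chi2_contingency(table)
        return chi2, pval

    rng = np.random.default_rng(5)
    groups = [rng.lognormal(0, 1, 400), rng.lognormal(0, 1, 400),
              rng.lognormal(0.2, 1.2, 400)]   # skewed data, as in the abstract
    print(percentile_profile_test(groups))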
On selecting evidence to test hypotheses: A theory of selection tasks.
Ragni, Marco; Kola, Ilir; Johnson-Laird, Philip N
2018-05-21
How individuals choose evidence to test hypotheses is a long-standing puzzle. According to an algorithmic theory that we present, it is based on dual processes: individuals' intuitions depending on mental models of the hypothesis yield selections of evidence matching instances of the hypothesis, but their deliberations yield selections of potential counterexamples to the hypothesis. The results of 228 experiments using Wason's selection task corroborated the theory's predictions. Participants made dependent choices of items of evidence: the selections in 99 experiments were significantly more redundant (using Shannon's measure) than those of 10,000 simulations of each experiment based on independent selections. Participants tended to select evidence corresponding to instances of hypotheses, or to its counterexamples, or to both. Given certain contents, instructions, or framings of the task, they were more likely to select potential counterexamples to the hypothesis. When participants received feedback about their selections in the "repeated" selection task, they switched from selections of instances of the hypothesis to selection of potential counterexamples. These results eliminated most of the 15 alternative theories of selecting evidence. In a meta-analysis, the model theory yielded a better fit of the results of 228 experiments than the one remaining theory based on reasoning rather than meaning. We discuss the implications of the model theory for hypothesis testing and for a well-known paradox of confirmation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Does Maltreatment Beget Maltreatment? A Systematic Review of the Intergenerational Literature
Thornberry, Terence P.; Knight, Kelly E.; Lovegrove, Peter J.
2014-01-01
In this paper, we critically review the literature testing the cycle of maltreatment hypothesis which posits continuity in maltreatment across adjacent generations. That is, we examine whether a history of maltreatment victimization is a significant risk factor for the later perpetration of maltreatment. We begin by establishing 11 methodological criteria that studies testing this hypothesis should meet. They include such basic standards as using representative samples, valid and reliable measures, prospective designs, and different reporters for each generation. We identify 47 studies that investigated this issue and then evaluate them with regard to the 11 methodological criteria. Overall, most of these studies report findings consistent with the cycle of maltreatment hypothesis. Unfortunately, at the same time, few of them satisfy the basic methodological criteria that we established; indeed, even the stronger studies in this area only meet about half of them. Moreover, the methodologically stronger studies present mixed support for the hypothesis. As a result, the positive association often reported in the literature appears to be based largely on the methodologically weaker designs. Based on our systematic methodological review, we conclude that this small and methodologically weak body of literature does not provide a definitive test of the cycle of maltreatment hypothesis. We conclude that it is imperative to develop more robust and methodologically adequate assessments of this hypothesis to more accurately inform the development of prevention and treatment programs. PMID:22673145
Optimizing Aircraft Availability: Where to Spend Your Next O&M Dollar
2010-03-01
…patterns of variance are present. In addition, we use the Breusch-Pagan test to statistically determine whether homoscedasticity exists. For this… Breusch-Pagan test, large p-values are preferred so that we may accept the null hypothesis of normality. Failure to meet the fourth assumption is… Next, we show the residual-by-predicted plot and the Breusch-Pagan test for constant variance of the residuals. The null hypothesis is that the…
Estimating Required Contingency Funds for Construction Projects using Multiple Linear Regression
2006-03-01
…Breusch-Pagan test, in which the null hypothesis states that the residuals have constant variance. The alternate hypothesis is that the residuals do not… variance, the Breusch-Pagan test provides statistical evidence that the assumption is justified. For the proposed model, the p-value is 0.173…
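Both reports above lean on the same diagnostic, which is a one-liner in modern tooling. A sketch of the Breusch-Pagan test for constant residual variance after an OLS fit, on synthetic heteroscedastic data:
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import het_breuschpagan

    rng = np.random.default_rng(6)
    x = rng.uniform(0, 10, 200)
    y = 2.0 + 0.8 * x + rng.normal(scale=1 + 0.3 * x)  # variance grows with x

    X = sm.add_constant(x)
    resid = sm.OLS(y, X).fit().resid
    lm_stat, lm_pvalue, _, _ = het_breuschpagan(resid, X)
    print(lm_stat, lm_pvalue)  # a small p-value flags heteroscedasticity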
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Stephen A.; Sigeti, David E.
These are a set of slides about Bayesian hypothesis testing, where many hypotheses are tested. The conclusions are the following: the value of the Bayes factor obtained when using the median of the posterior marginal is almost the minimum value of the Bayes factor, and the value of τ² which minimizes the Bayes factor is a reasonable choice for this parameter. This allows a likelihood ratio to be computed which is the least favorable to H0.
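The role of τ² can be made concrete in the simplest setting: a point null for a normal mean against a N(0, τ²) alternative prior. A minimal sketch (an assumed model, not necessarily the slides' exact setup) showing how the Bayes factor varies with τ²:
    import numpy as np
    from scipy.stats import norm

    def bf01(xbar, se, tau2):
        # BF_01 for H0: theta = 0 vs H1: theta ~ N(0, tau2),
        # given the estimate xbar ~ N(theta, se**2).
        m0 = norm.pdf(xbar, 0, se)                     # marginal under H0
        m1 = norm.pdf(xbar, 0, np.sqrt(se**2 + tau2))  # marginal under H1
        return m0 / m1

    for tau2 in (0.01, 0.1, 1.0, 10.0):
        print(tau2, bf01(xbar=0.4, se=0.15, tau2=tau2))
    # Sweeping tau2 exposes a minimizing value, as the slides discuss.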
Steele, James; Ferrari, Pier Francesco; Fogassi, Leonardo
2012-01-01
The papers in this Special Issue examine tool use and manual gestures in primates as a window on the evolution of the human capacity for language. Neurophysiological research has supported the hypothesis of a close association between some aspects of human action organization and of language representation, in both phonology and semantics. Tool use provides an excellent experimental context to investigate analogies between action organization and linguistic syntax. Contributors report and contextualize experimental evidence from monkeys, great apes, humans and fossil hominins, and consider the nature and the extent of overlaps between the neural representations of tool use, manual gestures and linguistic processes. PMID:22106422
Computer-Assisted Problem Solving in School Mathematics
ERIC Educational Resources Information Center
Hatfield, Larry L.; Kieren, Thomas E.
1972-01-01
A test of the hypothesis that writing and using computer programs related to selected mathematical content positively affects performance on those topics. Results particularly support the hypothesis. (MM)
[Examination of the hypothesis 'the factors and mechanisms of superiority'].
Sierra-Fitzgerald, O; Quevedo-Caicedo, J; López-Calderón, M G
INTRODUCTION. The hypothesis of Geschwind and Galaburda suggests that specific cognitive superiority arises as a result of an alteration in the development of the nervous system. In this article we review the coexistence of superiority and inferiority. PATIENTS AND METHODS. A study was made of six children aged between 6 and 8 years at the Instituto de Bellas Artes Antonio Maria Valencia in Cali, Colombia, with an educational level between second and third grade of primary school and of medium-low socioeconomic status. The children were considered by music experts to have superior musical ability, which is how the concept of superiority was operationalized. The concept of inferiority was operationalized as performance on neuropsychological tests at least 1.5 SD below the norm for the same age. We estimated the perinatal neurological risk in each case. Subsequently, the children's general intelligence and specific cognitive abilities were evaluated; for the former, the WISC-R and MSCA were used. The neuropsychological profiles were obtained by broad evaluation using a verbal fluency test, a test using counters, the Boston vocabulary test, the Wechsler Memory Scale, a sequential verbal memory test, a superimposed figures test, the Piaget-Head battery, the Rey-Osterrieth complex figure and the Wisconsin card classification test. RESULTS. The results showed slight to moderate deficits in practical construction ability and mild defects of memory and conceptual abilities. In general, the results supported the hypothesis tested. The mechanisms of superiority proposed in the classical hypothesis mainly involve the contralateral hemisphere; in this study the ipsilateral mechanism was more important.
Predictability, Force and (Anti-)Resonance in Complex Object Control.
Maurice, Pauline; Hogan, Neville; Sternad, Dagmar
2018-04-18
Manipulation of complex objects as in tool use is ubiquitous and has given humans an evolutionary advantage. This study examined the strategies humans choose when manipulating an object with underactuated internal dynamics, such as a cup of coffee. The object's dynamics renders the temporal evolution complex, possibly even chaotic, and difficult to predict. A cart-and-pendulum model, loosely mimicking coffee sloshing in a cup, was implemented in a virtual environment with a haptic interface. Participants rhythmically manipulated the virtual cup containing a rolling ball; they could choose the oscillation frequency, while the amplitude was prescribed. Three hypotheses were tested: 1) humans decrease interaction forces between hand and object; 2) humans increase the predictability of the object dynamics; 3) humans exploit the resonances of the coupled object-hand system. Analysis revealed that humans chose either a high-frequency strategy with anti-phase cup-and-ball movements or a low-frequency strategy with in-phase cup-and-ball movements. Contrary to Hypothesis 1, they did not decrease interaction force; instead, they increased the predictability of the interaction dynamics, quantified by mutual information, supporting Hypothesis 2. To address Hypothesis 3, frequency analysis of the coupled hand-object system revealed two resonance frequencies separated by an anti-resonance frequency. The low-frequency strategy exploited one resonance, while the high-frequency strategy afforded more choice, consistent with the frequency response of the coupled system; both strategies avoided the anti-resonance. Hence, humans did not prioritize interaction force, but rather strategies that rendered interactions predictable. These findings highlight that physical interactions with complex objects pose control challenges not present in unconstrained movements.
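Predictability here is quantified by mutual information between interaction variables. A small sketch of a generic histogram-based MI estimate (not the authors' exact pipeline), with hypothetical force/state signals:
    import numpy as np

    def mutual_information(x, y, bins=16):
        # Plug-in MI estimate (in bits) from a 2-D histogram.
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])))

    rng = np.random.default_rng(7)
    force = rng.normal(size=5_000)
    state = 0.8 * force + rng.normal(scale=0.6, size=5_000)  # coupled dynamics
    print(mutual_information(force, state))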
Viewpoints: diet and dietary adaptations in early hominins: the hard food perspective.
Strait, David S; Constantino, Paul; Lucas, Peter W; Richmond, Brian G; Spencer, Mark A; Dechow, Paul C; Ross, Callum F; Grosse, Ian R; Wright, Barth W; Wood, Bernard A; Weber, Gerhard W; Wang, Qian; Byron, Craig; Slice, Dennis E; Chalk, Janine; Smith, Amanda L; Smith, Leslie C; Wood, Sarah; Berthaume, Michael; Benazzi, Stefano; Dzialo, Christine; Tamvada, Kelli; Ledogar, Justin A
2013-07-01
Recent biomechanical analyses examining the feeding adaptations of early hominins have yielded results consistent with the hypothesis that hard foods exerted a selection pressure that influenced the evolution of australopith morphology. However, this hypothesis appears inconsistent with recent reconstructions of early hominin diet based on dental microwear and stable isotopes. Thus, it is likely that either the diets of some australopiths included a high proportion of foods these taxa were poorly adapted to consume (i.e., foods that they would not have processed efficiently), or that aspects of what we thought we knew about the functional morphology of teeth must be wrong. Evaluation of these possibilities requires a recognition that analyses based on microwear, isotopes, finite element modeling, and enamel chips and cracks each test different types of hypotheses and allow different types of inferences. Microwear and isotopic analyses are best suited to reconstructing broad dietary patterns, but are limited in their ability to falsify specific hypotheses about morphological adaptation. Conversely, finite element analysis is a tool for evaluating the mechanical basis of form-function relationships, but says little about the frequency with which specific behaviors were performed or the particular types of food that were consumed. Enamel chip and crack analyses are means of both reconstructing diet and examining biomechanics. We suggest that current evidence is consistent with the hypothesis that certain derived australopith traits are adaptations for consuming hard foods, but that australopiths had generalized diets that could include high proportions of foods that were both compliant and tough. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Lalit, Manisha; Gangwal, Rahul P.; Dhoke, Gaurao V.; Damre, Mangesh V.; Khandelwal, Kanchan; Sangamwar, Abhay T.
2013-10-01
A combined pharmacophore modelling, 3D-QSAR and molecular docking approach was employed to reveal structural and chemical features essential for the development of small molecules as LRH-1 agonists. The best HypoGen pharmacophore hypothesis (Hypo1) consists of one hydrogen-bond donor (HBD), two general hydrophobic (H), one hydrophobic aromatic (HYAr) and one hydrophobic aliphatic (HYA) feature. It exhibited a high correlation coefficient of 0.927, a cost difference of 85.178 bits and a low RMS value of 1.411. This pharmacophore hypothesis was cross-validated using a test set, a decoy set and the Cat-Scramble methodology. Subsequently, the validated pharmacophore hypothesis was used in the screening of small chemical databases. Further, 3D-QSAR models were developed based on the alignment obtained using substructure alignment. The best CoMFA and CoMSIA models exhibited excellent non-cross-validated r² values of 0.991 and 0.987, and cross-validated r² values of 0.767 and 0.703, respectively. The CoMFA predicted r² of 0.87 and the CoMSIA predicted r² of 0.78 show that the predicted values were in good agreement with the experimental values. Molecular docking analysis reveals that a π-π interaction with His390 and a hydrogen-bond interaction with His390/Arg393 are essential for LRH-1 agonistic activity. The results from pharmacophore modelling, 3D-QSAR and molecular docking are complementary to each other and could serve as a powerful tool for the discovery of potent small molecules as LRH-1 agonists.
Mothers Who Kill Their Offspring: Testing Evolutionary Hypothesis in a 110-Case Italian Sample
ERIC Educational Resources Information Center
Camperio Ciani, Andrea S.; Fontanesi, Lilybeth
2012-01-01
Objectives: This research aimed to identify incidents of mothers in Italy killing their own children and to test an adaptive evolutionary hypothesis to explain their occurrence. Methods: 110 cases of mothers killing 123 of their own offspring from 1976 to 2010 were analyzed. Each case was classified using 13 dichotomic variables. Descriptive…
ERIC Educational Resources Information Center
Martin, Todd F.; White, James M.; Perlman, Daniel
2003-01-01
This study used a sample of 2,379 seventh through twelfth graders in 5 Protestant denominations to test the hypothesis that parental influences on religious faith are mediated through peer selection and congregation selection. Findings revealed that peer and parental influence remained stable during the adolescent years. Parental influence did not…
Bayesian Hypothesis Testing for Psychologists: A Tutorial on the Savage-Dickey Method
ERIC Educational Resources Information Center
Wagenmakers, Eric-Jan; Lodewyckx, Tom; Kuriyal, Himanshu; Grasman, Raoul
2010-01-01
In the field of cognitive psychology, the "p"-value hypothesis test has established a stranglehold on statistical reporting. This is unfortunate, as the "p"-value provides at best a rough estimate of the evidence that the data provide for the presence of an experimental effect. An alternative and arguably more appropriate measure of evidence is…
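The Savage-Dickey method itself is compact: for a nested point null, the Bayes factor equals the ratio of posterior to prior density at the null value. A minimal sketch for a normal mean with a conjugate normal prior and hypothetical numbers:
    import numpy as np
    from scipy.stats import norm

    def savage_dickey_bf01(xbar, n, sigma, prior_mu=0.0, prior_sd=1.0, theta0=0.0):
        # BF_01 = posterior(theta0) / prior(theta0) for the nested point null.
        prec = 1 / prior_sd**2 + n / sigma**2          # posterior precision
        post_mu = (prior_mu / prior_sd**2 + n * xbar / sigma**2) / prec
        post_sd = np.sqrt(1 / prec)
        return norm.pdf(theta0, post_mu, post_sd) / norm.pdf(theta0, prior_mu, prior_sd)

    print(savage_dickey_bf01(xbar=0.25, n=40, sigma=1.0))  # >1 favors the null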
Parenting as a Dynamic Process: A Test of the Resource Dilution Hypothesis
ERIC Educational Resources Information Center
Strohschein, Lisa; Gauthier, Anne H.; Campbell, Rachel; Kleparchuk, Clayton
2008-01-01
In this paper, we tested the resource dilution hypothesis, which posits that, because parenting resources are finite, the addition of a new sibling depletes parenting resources for other children in the household. We estimated growth curve models on the self-reported parenting practices of mothers using four waves of data collected biennially…
ERIC Educational Resources Information Center
Fan, Weihua; Hancock, Gregory R.
2012-01-01
This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…
USDA-ARS?s Scientific Manuscript database
The impact of rater bias and assessment method on hypothesis testing was studied for different experimental designs for plant disease assessment using balanced and unbalanced data sets. Data sets with the same number of replicate estimates for each of two treatments are termed ‘balanced’, and those ...
ERIC Educational Resources Information Center
Stone, Emily A.; Shackelford, Todd K.; Buss, David M.
2012-01-01
This study tests the hypothesis presented by Penke, Denissen, and Miller (2007a) that condition-dependent traits, including intelligence, attractiveness, and health, are universally and uniformly preferred as characteristics in a mate relative to traits that are less indicative of condition, including personality traits. We analyzed…
ERIC Educational Resources Information Center
Edlin, James M.; Lyle, Keith B.
2013-01-01
The simple act of repeatedly looking left and right can enhance subsequent cognition, including divergent thinking, detection of matching letters from visual arrays, and memory retrieval. One hypothesis is that saccade execution enhances subsequent cognition by altering attentional control. To test this hypothesis, we compared performance…
The Genesis of Pedophilia: Testing the "Abuse-to-Abuser" Hypothesis.
ERIC Educational Resources Information Center
Fedoroff, J. Paul; Pinkus, Shari
1996-01-01
This study tested three versions of the "abuse-to-abuser" hypothesis by comparing men with personal histories of sexual abuse and men without sexual abuse histories. There was a statistically non-significant trend for assaulted offenders to be more likely as adults to commit genital assaults on children. Implications for the abuse-to-abuser…
2004-2006 Puget Sound Traffic Choices Study
Transportation Secure Data Center | NREL
The 2004-2006 Puget Sound Traffic Choices Study tested the hypothesis that time-of-day variable tolling changes traveler behavior, under a Federal Highway Administration pilot project on congestion-based tolling. To test the hypothesis, the study…
Assessment of Theory of Mind in Children with Communication Disorders: Role of Presentation Mode
ERIC Educational Resources Information Center
van Buijsen, Marit; Hendriks, Angelique; Ketelaars, Mieke; Verhoeven, Ludo
2011-01-01
Children with communication disorders have problems with both language and social interaction. The theory-of-mind hypothesis provides an explanation for these problems, and different tests have been developed to test this hypothesis. However, different modes of presentation are used in these tasks, which make the results difficult to compare. In…
Resilience to the contralateral visual field bias as a window into object representations
Garcea, Frank E.; Kristensen, Stephanie; Almeida, Jorge; Mahon, Bradford Z.
2016-01-01
Viewing images of manipulable objects elicits differential blood oxygen level-dependent (BOLD) contrast across parietal and dorsal occipital areas of the human brain that support object-directed reaching, grasping, and complex object manipulation. However, it is unknown which object-selective regions of parietal cortex receive their principal inputs from the ventral object-processing pathway and which receive their inputs from the dorsal object-processing pathway. Parietal areas that receive their inputs from the ventral visual pathway, rather than from the dorsal stream, will have inputs that are already filtered through object categorization and identification processes. This predicts that parietal regions that receive inputs from the ventral visual pathway should exhibit object-selective responses that are resilient to contralateral visual field biases. To test this hypothesis, adult participants viewed images of tools and animals that were presented to the left or right visual fields during functional magnetic resonance imaging (fMRI). We found that the left inferior parietal lobule showed robust tool preferences independently of the visual field in which tool stimuli were presented. In contrast, a region in posterior parietal/dorsal occipital cortex in the right hemisphere exhibited an interaction between visual field and category: tool-preferences were strongest contralateral to the stimulus. These findings suggest that action knowledge accessed in the left inferior parietal lobule operates over inputs that are abstracted from the visual input and contingent on analysis by the ventral visual pathway, consistent with its putative role in supporting object manipulation knowledge. PMID:27160998
Molnos, Sophie; Baumbach, Clemens; Wahl, Simone; Müller-Nurasyid, Martina; Strauch, Konstantin; Wang-Sattler, Rui; Waldenberger, Melanie; Meitinger, Thomas; Adamski, Jerzy; Kastenmüller, Gabi; Suhre, Karsten; Peters, Annette; Grallert, Harald; Theis, Fabian J; Gieger, Christian
2017-09-29
Genome-wide association studies allow us to understand the genetics of complex diseases. Human metabolism provides information about the disease-causing mechanisms, so it is common to investigate the associations between genetic variants and metabolite levels. However, considering only genetic variants and their effects on one trait ignores the possible interplay between different "omics" layers. Existing tools only consider single-nucleotide polymorphism (SNP)-SNP interactions, and no practical tool is available for large-scale investigations of the interactions between pairs of arbitrary quantitative variables. We developed an R package called pulver to compute p-values for the interaction term in a very large number of linear regression models. Comparisons based on simulated data showed that pulver is much faster than the existing tools. This speed is achieved by using the correlation coefficient to test the null hypothesis, which avoids costly matrix inversions. Additional speed-ups come from rearranging the iteration order over the different "omics" layers and from implementing the algorithm in the fast programming language C++. Furthermore, we applied our algorithm to data from the German KORA study to investigate a real-world problem involving the interplay among DNA methylation, genetic variants, and metabolite levels. The pulver package is a convenient and rapid tool for screening huge numbers of linear regression models for significant interaction terms in arbitrary pairs of quantitative variables. pulver is written in R and C++, and can be downloaded freely from CRAN at https://cran.r-project.org/web/packages/pulver/
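The correlation shortcut mentioned in the abstract can be made concrete. Below is a minimal sketch assuming, as the Frisch-Waugh-Lovell theorem guarantees, that the t-test of an interaction coefficient equals a partial-correlation test after residualizing on the main effects; it illustrates the identity pulver exploits, not the package's actual internals.

```python
import numpy as np
from scipy import stats

def interaction_pvalue(y, x, z):
    """p-value for the x*z coefficient in y ~ x + z + x*z, computed via
    the partial correlation of y and x*z given the main effects.
    Sketch of the correlation-based shortcut, not pulver's internals."""
    n = len(y)
    X = np.column_stack([np.ones(n), x, z])              # main-effects design
    ry = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]    # residualized outcome
    w = x * z
    rw = w - X @ np.linalg.lstsq(X, w, rcond=None)[0]    # residualized product term
    r = np.corrcoef(ry, rw)[0, 1]                        # partial correlation
    df = n - 4                                           # parameters in the full model
    t = r * np.sqrt(df / (1.0 - r**2))                   # equals the full-model t-stat
    return 2 * stats.t.sf(abs(t), df)
```

Because the residualization of each variable can be precomputed once and reused across all candidate partners, only the cheap correlation step changes from pair to pair, which is plausibly where the bulk of the reported speed-up comes from.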
Morehead, Kayla; Dunlosky, John; Rawson, Katherine A; Bishop, Melissa; Pyc, Mary A
2018-04-01
When study is spaced across sessions (versus massed within a single session), final performance is greater after spacing. This spacing effect may have multiple causes, and according to the mediator hypothesis, part of the effect can be explained by the use of mediator-based strategies. This hypothesis proposes that when study is spaced across sessions, rather than massed within a session, the mediators generated will be longer lasting, and hence more mediators will be available to support criterion recall. In two experiments, participants were randomly assigned to study paired associates using either a spaced or massed schedule. They reported strategy use for each item during study trials and during the final test. Consistent with the mediator hypothesis, participants who had spaced (as compared to massed) practice reported using more mediators on the final test. This use of effective mediators also statistically accounted for some, but not all, of the spacing effect on final performance.
A more powerful test based on ratio distribution for retention noninferiority hypothesis.
Deng, Ling; Chen, Gang
2013-03-11
Rothmann et al. (2003) proposed a method for the statistical inference of the fraction retention noninferiority (NI) hypothesis. A fraction retention hypothesis is defined as a ratio of the new treatment effect versus the control effect in the context of a time-to-event endpoint. One of the major concerns in using this method to design an NI trial is that, with a limited sample size, the power of the study is usually very low, which can make an NI trial infeasible, particularly with a time-to-event endpoint. To improve power, Wang et al. (2006) proposed a ratio test based on asymptotic normality theory. Under a strong assumption (equal variance of the NI test statistic under the null and alternative hypotheses), the sample size using Wang's test was much smaller than that using Rothmann's test. However, in practice, the assumption of equal variance is generally questionable for an NI trial design. This assumption is removed in the ratio test proposed in this article, which is derived directly from a Cauchy-like ratio distribution. In addition, using this method, the fundamental assumption of Rothmann's test, that the observed control effect is always positive, that is, that the observed hazard ratio for placebo over the control is greater than 1, is no longer necessary. Without assuming equal variance under the null and alternative hypotheses, the sample size required for an NI trial can be significantly reduced by using the proposed ratio test for a fraction retention NI hypothesis.
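For flavour, a Fieller-type interval shows how inference on a ratio of two effect estimates behaves without the equal-variance assumption. This is a hedged sketch assuming independent, approximately normal estimates; it is not the article's Cauchy-like ratio test.

```python
import numpy as np
from scipy import stats

def fieller_ratio_ci(b1, se1, b2, se2, alpha=0.05):
    """Fieller-type CI for the ratio b1/b2 of two independent,
    approximately normal estimates. Limits are the roots of
    (b1 - theta*b2)^2 = z^2 * (se1^2 + theta^2 * se2^2).
    Illustrative only; not the article's proposed test."""
    z = stats.norm.ppf(1 - alpha / 2)
    a = b2**2 - (z * se2) ** 2
    b = -2.0 * b1 * b2
    c = b1**2 - (z * se1) ** 2
    disc = b**2 - 4 * a * c
    if a <= 0 or disc < 0:
        return None                      # denominator too noisy: interval unbounded
    lo = (-b - np.sqrt(disc)) / (2 * a)
    hi = (-b + np.sqrt(disc)) / (2 * a)
    return lo, hi
```

When the denominator effect is not clearly separated from zero (the `a <= 0` branch), the interval becomes unbounded, which echoes the article's concern about having to assume a positive observed control effect.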
The fourth dimension of tool use: temporally enduring artefacts aid primates learning to use tools
Fragaszy, D. M.; Biro, D.; Eshchar, Y.; Humle, T.; Izar, P.; Resende, B.; Visalberghi, E.
2013-01-01
All investigated cases of habitual tool use in wild chimpanzees and capuchin monkeys include youngsters encountering durable artefacts, most often in a supportive social context. We propose that enduring artefacts associated with tool use, such as previously used tools, partly processed food items and residual material from previous activity, aid non-human primates to learn to use tools, and to develop expertise in their use, thus contributing to traditional technologies in non-humans. Therefore, social contributions to tool use can be considered as situated in the three dimensions of Euclidean space, and in the fourth dimension of time. This notion expands the contribution of social context to learning a skill beyond the immediate presence of a model nearby. We provide examples supporting this hypothesis from wild bearded capuchin monkeys and chimpanzees, and suggest avenues for future research. PMID:24101621
Haller, Moira; Chassin, Laurie
2014-09-01
The present study utilized longitudinal data from a community sample (n = 377; 166 trauma-exposed; 54% males; 73% non-Hispanic Caucasian; 22% Hispanic; 5% other ethnicity) to test whether pretrauma substance use problems increase risk for trauma exposure (high-risk hypothesis) or posttraumatic stress disorder (PTSD) symptoms (susceptibility hypothesis), whether PTSD symptoms increase risk for later alcohol/drug problems (self-medication hypothesis), and whether the association between PTSD symptoms and alcohol/drug problems is attributable to shared risk factors (shared vulnerability hypothesis). Logistic and negative binomial regressions were performed in a path analysis framework. Results provided the strongest support for the self-medication hypothesis, such that PTSD symptoms predicted higher levels of later alcohol and drug problems, over and above the influences of pretrauma family risk factors, pretrauma substance use problems, trauma exposure, and demographic variables. Results partially supported the high-risk hypothesis, such that adolescent substance use problems increased risk for assaultive violence exposure but did not influence overall risk for trauma exposure. There was no support for the susceptibility hypothesis. Finally, there was little support for the shared vulnerability hypothesis. Neither trauma exposure nor preexisting family adversity accounted for the link between PTSD symptoms and later substance use problems. Rather, PTSD symptoms mediated the effect of pretrauma family adversity on later alcohol and drug problems, thereby supporting the self-medication hypothesis. These findings make important contributions to better understanding the directions of influence among traumatic stress, PTSD symptoms, and substance use problems.
Test of the Brink-Axel Hypothesis for the Pygmy Dipole Resonance
NASA Astrophysics Data System (ADS)
Martin, D.; von Neumann-Cosel, P.; Tamii, A.; Aoi, N.; Bassauer, S.; Bertulani, C. A.; Carter, J.; Donaldson, L.; Fujita, H.; Fujita, Y.; Hashimoto, T.; Hatanaka, K.; Ito, T.; Krugmann, A.; Liu, B.; Maeda, Y.; Miki, K.; Neveling, R.; Pietralla, N.; Poltoratska, I.; Ponomarev, V. Yu.; Richter, A.; Shima, T.; Yamamoto, T.; Zweidinger, M.
2017-11-01
The gamma strength function and level density of 1⁻ states in 96Mo have been extracted from a high-resolution study of the (p⃗,p⃗′) reaction at 295 MeV and extreme forward angles. By comparison with compound nucleus γ decay experiments, this allows a test of the generalized Brink-Axel hypothesis in the energy region of the pygmy dipole resonance. The Brink-Axel hypothesis is commonly assumed in astrophysical reaction network calculations and states that the gamma strength function in nuclei is independent of the structure of the initial and final state. The present results validate the Brink-Axel hypothesis for 96Mo and provide independent confirmation of the methods used to separate gamma strength function and level density in γ decay experiments.
Transdisciplinary Application of Cross-Scale Resilience
The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are re...
NASA Astrophysics Data System (ADS)
Lehmann, Rüdiger; Lösler, Michael
2017-12-01
Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the use of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution to the test problem, we propose the p-value approach. Another approach arises from information theory: here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that they work but behave differently, sometimes even producing different results. Hypothesis tests are well established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria such as the AIC.
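The information-theoretic route described above reduces to fitting each candidate deformation pattern and ranking the fits by AIC. A minimal sketch follows, with hypothetical design matrices standing in for deformation patterns (not the authors' networks or software).

```python
import numpy as np

def aic_gaussian(rss, n, k):
    """AIC for a least-squares model with Gaussian errors,
    additive constants dropped: n*log(RSS/n) + 2k."""
    return n * np.log(rss / n) + 2 * k

def select_deformation_model(y, designs):
    """Pick the deformation pattern minimising AIC. `designs` maps a
    model name to an (n x k) design matrix; the null model is simply
    the design with no deformation parameters."""
    scores = {}
    for name, X in designs.items():
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        scores[name] = aic_gaussian(rss, len(y), X.shape[1])
    return min(scores, key=scores.get), scores
```

Unlike the multiple hypothesis test, this selection needs no decision error rates and treats any number of competing patterns symmetrically, at the price of offering no formal significance statement.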
2014-01-01
Background In complex large-scale experiments, in addition to simultaneously considering a large number of features, multiple hypotheses are often being tested for each feature. This leads to a problem of multi-dimensional multiple testing. For example, in gene expression studies over ordered categories (such as time-course or dose-response experiments), interest is often in testing differential expression across several categories for each gene. In this paper, we consider a framework for testing multiple sets of hypotheses, which can be applied to a wide range of problems. Results We adopt the concept of the overall false discovery rate (OFDR) for controlling false discoveries on the hypothesis-set level. Based on an existing procedure for identifying differentially expressed gene sets, we discuss a general two-step hierarchical hypothesis-set testing procedure, which controls the overall false discovery rate under independence across hypothesis sets. In addition, we discuss the concept of the mixed-directional false discovery rate (mdFDR), and extend the general procedure to enable directional decisions for two-sided alternatives. We applied the framework to the case of microarray time-course/dose-response experiments, and proposed three procedures for testing differential expression and making multiple directional decisions for each gene. Simulation studies confirm the control of the OFDR and mdFDR by the proposed procedures under independence and positive correlations across genes. Simulation results also show that two of our new procedures achieve higher power than previous methods. Finally, the proposed methodology is applied to a microarray dose-response study, to identify 17β-estradiol-sensitive genes in breast cancer cells that are induced at low concentrations. Conclusions The framework we discuss provides a platform for multiple testing procedures covering situations involving two (or potentially more) sources of multiplicity. The framework is easy to use and adaptable to various practical settings that frequently occur in large-scale experiments. Procedures generated from the framework are shown to maintain control of the OFDR and mdFDR, quantities that are especially relevant in the case of multiple hypothesis set testing. The procedures work well in both simulations and real datasets, and are shown to have better power than existing methods. PMID:24731138
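The two-step hierarchy has a simple generic shape: screen the set-level p-values with a false-discovery-rate procedure, then test within the surviving sets at a level shrunk by the selected fraction. The sketch below uses Benjamini-Hochberg at both steps; it conveys the structure only and is not the authors' exact OFDR procedure.

```python
import numpy as np

def benjamini_hochberg(pvals, q):
    """Boolean mask of rejections under the BH step-up procedure at level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    passed = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.nonzero(passed)[0].max()
        reject[order[: k + 1]] = True
    return reject

def two_step_set_testing(set_pvals, within_pvals, q=0.05):
    """Generic two-step hierarchy: screen hypothesis sets with BH, then
    test only inside selected sets at a level shrunk by the selection
    fraction R/m, the usual device for controlling an overall FDR-type
    rate. `within_pvals` is a list of p-value arrays, one per set."""
    screen = benjamini_hochberg(set_pvals, q)
    R, m = int(screen.sum()), len(set_pvals)
    q_within = q * R / m if m else 0.0
    return {i: benjamini_hochberg(within_pvals[i], q_within)
            for i in np.nonzero(screen)[0]}
```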
Alcohol dependence and opiate dependence: lack of relationship in mice.
Goldstein, A; Judson, B A
1971-04-16
According to a recently proposed hypothesis, physical dependence upon alcohol is due to the formation of an endogenous opiate. We tested the hypothesis by determining whether or not ethanol-dependent mice would show typical opiate-dependent behavior (withdrawal jumping syndrome) when challenged with the opiate antagonist naloxone. Our results do not support the hypothesis.
On the Flexibility of Social Source Memory: A Test of the Emotional Incongruity Hypothesis
ERIC Educational Resources Information Center
Bell, Raoul; Buchner, Axel; Kroneisen, Meike; Giang, Trang
2012-01-01
A popular hypothesis in evolutionary psychology posits that reciprocal altruism is supported by a cognitive module that helps cooperative individuals to detect and remember cheaters. Consistent with this hypothesis, a source memory advantage for faces of cheaters (better memory for the cheating context in which these faces were encountered) was…
ERIC Educational Resources Information Center
Sackett, Paul R.
1982-01-01
Recent findings suggest individuals seek evidence to confirm initial hypotheses about other people, and that seeking confirmatory evidence makes it likely that a hypothesis will be confirmed. Examined the generalizability of these findings to the employment interview. Consistent use of confirmatory hypothesis testing strategies was not found.…
ERIC Educational Resources Information Center
Bakker, Martin P.; Ormel, Johan; Lindenberg, Siegwart; Verhulst, Frank C.; Oldehinkel, Albertine J.
2011-01-01
This study developed two specifications of the social skills deficit stress generation hypothesis: the "gender-incongruence" hypothesis to predict peer victimization and the "need for autonomy" hypothesis to predict conflict with authorities. These hypotheses were tested in a prospective large population cohort of 2,064 Dutch…
Margiotta-Casaluci, Luigi; Owen, Stewart F.; Cumming, Rob I.; de Polo, Anna; Winter, Matthew J.; Panter, Grace H.; Rand-Weaver, Mariann; Sumpter, John P.
2014-01-01
Fish are an important model for the pharmacological and toxicological characterization of human pharmaceuticals in drug discovery, drug safety assessment and environmental toxicology. However, do fish respond to pharmaceuticals as humans do? To address this question, we provide a novel quantitative cross-species extrapolation approach (qCSE) based on the hypothesis that similar plasma concentrations of pharmaceuticals cause comparable target-mediated effects in both humans and fish at a similar level of biological organization (Read-Across Hypothesis). To validate this hypothesis, the behavioural effects of the anti-depressant drug fluoxetine on the fish model fathead minnow (Pimephales promelas) were used as a test case. Fish were exposed for 28 days to a range of measured water concentrations of fluoxetine (0.1, 1.0, 8.0, 16, 32, 64 µg/L) to produce plasma concentrations below, within, and above the range of Human Therapeutic Plasma Concentrations (HTPCs). Fluoxetine and its metabolite, norfluoxetine, were quantified in the plasma of individual fish and linked to behavioural anxiety-related endpoints. The minimum drug plasma concentrations that elicited anxiolytic responses in fish were above the upper value of the HTPC range, whereas no effects were observed at plasma concentrations below the HTPCs. In vivo metabolism of fluoxetine in humans and fish was similar, and displayed bi-phasic concentration-dependent kinetics driven by the auto-inhibitory dynamics and saturation of the enzymes that convert fluoxetine into norfluoxetine. The sensitivity of fish to fluoxetine was not so dissimilar from that of patients affected by general anxiety disorders. These results represent the first direct evidence of a measured internal dose-response effect of a pharmaceutical in fish, hence validating the Read-Across Hypothesis as applied to fluoxetine. Overall, this study demonstrates that the qCSE approach, anchored to internal drug concentrations, is a powerful tool to guide the assessment of the sensitivity of fish to pharmaceuticals, and strengthens the translational power of cross-species extrapolation. PMID:25338069
Visualizing statistical significance of disease clusters using cartograms.
Kronenfeld, Barry J; Wong, David W S
2017-05-15
Health officials and epidemiological researchers often use maps of disease rates to identify potential disease clusters. Because these maps exaggerate the prominence of low-density districts and hide potential clusters in urban (high-density) areas, many researchers have used density-equalizing maps (cartograms) as a basis for epidemiological mapping. However, no existing guidelines support the visual assessment of statistical uncertainty on such maps. To address this shortcoming, we develop techniques for visual determination of the statistical significance of clusters spanning one or more districts on a cartogram. We developed the techniques within a geovisual analytics framework that does not rely on automated significance testing, and can therefore facilitate visual analysis to detect clusters that automated techniques might miss. On a cartogram of the at-risk population, the statistical significance of a disease cluster can be determined from the rate, area and shape of the cluster under standard hypothesis-testing scenarios. We develop formulae to determine, for a given rate, the area required for statistical significance of a priori and a posteriori designated regions under certain test assumptions. Uniquely, our approach enables dynamic inference for aggregate regions formed by combining individual districts. The method is implemented in interactive tools that provide choropleth mapping, automated legend construction and dynamic search tools to facilitate cluster detection and assessment of the validity of tested assumptions. A case study of leukemia incidence analysis in California demonstrates the ability to visually distinguish between statistically significant and insignificant regions. The proposed geovisual analytics approach enables intuitive visual assessment of the statistical significance of arbitrarily defined regions on a cartogram. Our research prompts a broader discussion of the role of geovisual exploratory analyses in disease mapping and the appropriate framework for visually assessing the statistical significance of spatial clusters.
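The intuition behind "area required for significance" can be approximated with a back-of-envelope calculation. Assuming Poisson case counts with a normal approximation and a rate ratio greater than 1 (our assumptions, not the paper's exact formulae), the minimum at-risk population, and hence minimum cartogram area, scales inversely with the squared excess rate.

```python
from scipy import stats

def min_significant_population(rate_ratio, base_rate, alpha=0.05):
    """Smallest at-risk population at which a rate elevated by
    `rate_ratio` over background `base_rate` reaches one-sided
    significance. Normal approximation to Poisson counts; on a
    density-equalising cartogram, area is proportional to population,
    so this translates directly into a minimum area. A sketch, not the
    paper's formulae."""
    z = stats.norm.ppf(1 - alpha)
    # reject when (observed - expected) / sqrt(expected) >= z,
    # with observed = rate_ratio * expected and expected = base_rate * n
    return z**2 / ((rate_ratio - 1.0) ** 2 * base_rate)
```

Doubling the rate elevation cuts the required area roughly fourfold, which is why a small, strongly elevated cluster and a large, mildly elevated one can be equally significant on the cartogram.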
Emerging Concepts of Data Integration in Pathogen Phylodynamics.
Baele, Guy; Suchard, Marc A; Rambaut, Andrew; Lemey, Philippe
2017-01-01
Phylodynamics has become an increasingly popular statistical framework to extract evolutionary and epidemiological information from pathogen genomes. By harnessing such information, epidemiologists aim to shed light on the spatio-temporal patterns of spread and to test hypotheses about the underlying interaction of evolutionary and ecological dynamics in pathogen populations. Although the field has witnessed a rich development of statistical inference tools with increasing levels of sophistication, these tools initially focused on sequences as their sole primary data source. Integrating various sources of information, however, promises to deliver more precise insights in infectious diseases and to increase opportunities for statistical hypothesis testing. Here, we review how the emerging concept of data integration is stimulating new advances in Bayesian evolutionary inference methodology which formalize a marriage of statistical thinking and evolutionary biology. These approaches include connecting sequence to trait evolution, such as for host, phenotypic and geographic sampling information, but also the incorporation of covariates of evolutionary and epidemic processes in the reconstruction procedures. We highlight how a full Bayesian approach to covariate modeling and testing can generate further insights into sequence evolution, trait evolution, and population dynamics in pathogen populations. Specific examples demonstrate how such approaches can be used to test the impact of host on rabies and HIV evolutionary rates, to identify the drivers of influenza dispersal as well as the determinants of rabies cross-species transmissions, and to quantify the evolutionary dynamics of influenza antigenicity. Finally, we briefly discuss how data integration is now also permeating through the inference of transmission dynamics, leading to novel insights into tree-generative processes and detailed reconstructions of transmission trees. [Bayesian inference; birth–death models; coalescent models; continuous trait evolution; covariates; data integration; discrete trait evolution; pathogen phylodynamics.]
Mitchell, Katy; Graff, Megan; Hedt, Corbin; Simmons, James
2016-08-01
Purpose/hypothesis: This study was designed to investigate the test-retest reliability, concurrent validity, and standard error of measurement (SEm) of a pulse rate assessment application (Azumio®'s Instant Heart Rate) on both Android® and iOS® (iPhone operating system) smartphones, as compared to a FT7 Polar® Heart Rate monitor. Number of subjects: 111. Resting (sitting) pulse rate was assessed twice; participants then completed a 1-min standing step test, after which pulse rate was immediately re-assessed. The smartphone assessors were blinded to their measurements. Test-retest reliability (intraclass correlation coefficient [ICC 2,1] and 95% confidence interval) for the three tools at rest (time 1/time 2): iOS® (0.76 [0.67-0.83]); Polar® (0.84 [0.78-0.89]); and Android® (0.82 [0.75-0.88]). Concurrent validity at rest time 2 (ICC 2,1) with the Polar® device: iOS® (0.92 [0.88-0.94]) and Android® (0.95 [0.92-0.96]). Concurrent validity post-exercise (time 3) (ICC) with the Polar® device: iOS® (0.90 [0.86-0.93]) and Android® (0.94 [0.91-0.96]). The SEm values for the three devices at rest: iOS® (5.77 beats per minute [BPM]), Polar® (4.56 BPM) and Android® (4.96 BPM). The Android®, iOS®, and Polar® devices showed acceptable test-retest reliability at rest and post-exercise. Both smartphone platforms demonstrated concurrent validity with the Polar® at rest and post-exercise. The Azumio® Instant Heart Rate application, on either platform, appears to be a reliable and valid tool to assess pulse rate in healthy individuals.
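For readers unfamiliar with the statistic, ICC(2,1) treats subjects and measurement occasions as random effects and measures absolute agreement for a single measurement. A generic sketch following the classic Shrout and Fleiss mean-square formulation (illustrative only; not the study's software):

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measurement, from an (n subjects x k raters/occasions) array."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)                       # per subject
    col_means = scores.mean(axis=0)                       # per occasion
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)  # subjects MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)  # occasions MS
    sse = np.sum((scores - row_means[:, None]
                  - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```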
NASA Astrophysics Data System (ADS)
Panopoulou, A.; Fransen, S.; Gomez Molinero, V.; Kostopoulos, V.
2012-07-01
The objective of this work is to develop a new structural health monitoring system for composite aerospace structures based on dynamic response strain measurements and experimental modal analysis techniques. Fibre Bragg Grating (FBG) optical sensors were used for monitoring the dynamic response of the composite structure. The structural dynamic behaviour was numerically simulated and experimentally verified by means of vibration testing. The hypothesis underlying all vibration tests was that actual damage in composites reduces their stiffness and produces the same effect on the dynamic response as a local mass increase. Thus, damage was simulated by slightly varying the mass of the structure locally at different zones. Experimental modal analysis based on the strain responses was conducted, and the extracted strain mode shapes were the input for the damage detection expert system. A feed-forward back-propagation neural network was the core of the damage detection system. The features input to the neural network consisted of the strain mode shapes extracted from the experimental modal analysis. Dedicated training and validation activities were carried out based on the experimental results. The system showed high reliability, confirmed by the ability of the neural network to recognize the size and the position of damage on the structure. The experiments were performed on a real structure, i.e. a lightweight antenna sub-reflector, manufactured and tested at EADS CASA ESPACIO. An integrated FBG sensor network, exploiting the advantage of multiplexing, was mounted on the structure with optimum topology. Numerical simulation of both structures was used as a support tool at all steps of the work. Potential applications for the proposed system are during extensive ground qualification tests of space structures and, during the mission, as an on-board modal analysis tool able to identify a potential failure via the FBG responses.
Direct-to-Consumer Racial Admixture Tests and Beliefs About Essential Racial Differences
Phelan, Jo C.; Link, Bruce G.; Zelner, Sarah; Yang, Lawrence H.
2015-01-01
Although at first relatively uninterested in race, modern genomic research has increasingly turned attention to racial variations. We examine a prominent example of this focus, direct-to-consumer racial admixture tests, and ask how information about the methods and results of these tests in news media may affect beliefs in racial differences. The reification hypothesis proposes that by emphasizing a genetic basis for race, thereby reifying race as a biological reality, the tests increase beliefs that whites and blacks are essentially different. The challenge hypothesis suggests that by describing differences between racial groups as continua rather than sharp demarcations, the results produced by admixture tests break down racial categories and reduce beliefs in racial differences. A nationally representative survey experiment (N = 526) provided clear support for the reification hypothesis. The results suggest that an unintended consequence of the genomic revolution may be to reinvigorate age-old beliefs in essential racial differences. PMID:25870464
Hypothesis testing in students: Sequences, stages, and instructional strategies
NASA Astrophysics Data System (ADS)
Moshman, David; Thompson, Pat A.
Six sequences in the development of hypothesis-testing conceptions are proposed, involving (a) interpretation of the hypothesis; (b) the distinction between using theories and testing theories; (c) the consideration of multiple possibilities; (d) the relation of theory and data; (e) the nature of verification and falsification; and (f) the relation of truth and falsity. An alternative account is then provided involving three global stages: concrete operations, formal operations, and a postformal metaconstructive stage. Relative advantages and difficulties of the stage and sequence conceptualizations are discussed. Finally, three families of teaching strategy are distinguished, which emphasize, respectively: (a) social transmission of knowledge; (b) carefully sequenced empirical experience by the student; and (c) self-regulated cognitive activity of the student. It is argued on the basis of Piaget's theory that the last of these plays a crucial role in the construction of such logical reasoning strategies as those involved in testing hypotheses.
Castellanos-Morales, Gabriela; Gámez, Niza; Castillo-Gámez, Reyna A; Eguiarte, Luis E
2016-01-01
The hypothesis that endemic species could have originated by the isolation and divergence of peripheral populations of widespread species can be tested through the use of ecological niche models (ENMs) and statistical phylogeography. The joint use of these tools provides complementary perspectives on historical dynamics and allows testing hypotheses regarding the origin of endemic taxa. We used this approach to infer the historical processes that have influenced the origin of a species endemic to the Mexican Plateau (Cynomys mexicanus) and its divergence from a widespread ancestor (Cynomys ludovicianus), and to test whether this endemic species originated through peripatric speciation. We obtained genetic data for 295 individuals for two species of black-tailed prairie dogs (C. ludovicianus and C. mexicanus). Genetic data consisted of mitochondrial DNA sequences (cytochrome b and control region), and 10 nuclear microsatellite loci. We estimated dates of divergence between species and between lineages within each species and performed ecological niche modelling (Present, Last Glacial Maximum and Last Interglacial) to determine changes in the distribution range of both species during the Pleistocene. Finally, we used Bayesian inference methods (DIYABC) to test different hypotheses regarding the divergence and demographic history of these species. Data supported the hypothesis of the origin of C. mexicanus from a peripheral population isolated during the Pleistocene [∼230,000 years ago (0.1-0.43 Ma 95% HPD)], with a Pleistocene-Holocene (∼9,000-11,000 years ago) population expansion (∼10-fold increase in population size). We identified the presence of two possible refugia in the southern area of the distribution range of C. ludovicianus and another, consistent with the distribution range of C. mexicanus. Our analyses suggest that Pleistocene climate change had a strong impact in the distribution of these species, promoting peripatric speciation for the origin of C. mexicanus and lineage divergence within C. ludovicianus. Copyright © 2015 Elsevier Inc. All rights reserved.
A Description of a Blind Student's Science Process Skills through Health Physics
ERIC Educational Resources Information Center
Bülbül, M. Sahin
2013-01-01
This study describes an approach through which blind students taught health physics could set a hypothesis and test it. The participant of the study used health materials designed for high-school blind students and tested her hypothesis with the data she gathered using those materials. She was asked to form a hypothesis, which could…
ERIC Educational Resources Information Center
Bertrams, Alex; Dickhauser, Oliver
2009-01-01
In the present article, we examine the hypothesis that high-school students' motivation to engage in cognitive endeavors (i.e., their need for cognition; NFC) is positively related to their dispositional self-control capacity. Furthermore, we test the prediction that the relation between NFC and school achievement is mediated by self-control…
Visual Working Memory and Number Sense: Testing the Double Deficit Hypothesis in Mathematics
ERIC Educational Resources Information Center
Toll, Sylke W. M.; Kroesbergen, Evelyn H.; Van Luit, Johannes E. H.
2016-01-01
Background: Evidence exists that there are two main underlying cognitive factors in mathematical difficulties: working memory and number sense. It is suggested that real math difficulties appear when both working memory and number sense are weak, here referred to as the double deficit (DD) hypothesis. Aims: The aim of this study was to test the DD…
Richard V. Pouyat; Ian D. Yesilonis; Miklos Dombos; Katalin Szlavecz; Heikki Setala; Sarel Cilliers; Erzsebet Hornung; D. Johan Kotze; Stephanie Yarwood
2015-01-01
As part of the Global Urban Soil Ecology and Education Network and to test the urban ecosystem convergence hypothesis, we report on soil pH, organic carbon (OC), total nitrogen (TN), phosphorus (P), and potassium (K) measured in four soil habitat types (turfgrass, ruderal, remnant, and reference) in five metropolitan areas (Baltimore, Budapest,...
Test of the prey-base hypothesis to explain use of red squirrel midden sites by American martens
Dean E. Pearson; Leonard F. Ruggiero
2001-01-01
We tested the prey-base hypothesis to determine whether selection of red squirrel (Tamiasciurus hudsonicus) midden sites (cone caches) by American martens (Martes americana) for resting and denning could be attributed to greater abundance of small-mammal prey. Five years of livetrapping at 180 sampling stations in 2 drainages showed that small mammals,...
ERIC Educational Resources Information Center
Luo, Li Zhuo; Li, Hong; Lee, Kang
2011-01-01
This study examined adults' evaluations of likeability and attractiveness of children's faces from infancy to early childhood. We tested whether Lorenz's baby schema hypothesis ("Zeitschrift fur Tierpsychologie" (1943), Vol. 5, pp. 235-409) is applicable not only to infant faces but also to faces of children at older ages. Adult participants were…
Emile S. Gardiner; D. Ramsey Russell; John D. Hodges; T. Conner Fristoe
2000-01-01
Two water tupelo (Nyssa aquatica L.) stands in the Mobile Delta of Alabama were selected to test the hypothesis that mechanized felling does not reduce establishment and growth of natural water tupelo regeneration relative to traditional tree felling with chainsaws. To test the hypothesis, we established six 2 acre treatment plots in each of two...
ERIC Educational Resources Information Center
Liu, Lisa L.; Lau, Anna S.; Chen, Angela Chia-Chen; Dinh, Khanh T.; Kim, Su Yeong
2009-01-01
Associations among neighborhood disadvantage, maternal acculturation, parenting and conduct problems were investigated in a sample of 444 Chinese American adolescents. Adolescents (54% female, 46% male) ranged from 12 to 15 years of age (mean age = 13.0 years). Multilevel modeling was employed to test the hypothesis that the association between…
Lozano, José H
2016-02-01
Previous research aimed at testing the situational strength hypothesis suffers from serious limitations regarding the conceptualization of strength. In order to overcome these limitations, the present study attempts to test the situational strength hypothesis based on the operationalization of strength as reinforcement contingencies. One dispositional factor of proven effect on cooperative behavior, social value orientation (SVO), was used as a predictor of behavior in four social dilemmas with varying degree of situational strength. The moderating role of incentive condition (hypothetical vs. real) on the relationship between SVO and behavior was also tested. One hundred undergraduates were presented with the four social dilemmas and the Social Value Orientation Scale. One-half of the sample played the social dilemmas using real incentives, whereas the other half used hypothetical incentives. Results supported the situational strength hypothesis in that no behavioral variability and no effect of SVO on behavior were found in the strongest situation. However, situational strength did not moderate the effect of SVO on behavior in situations where behavior showed variability. No moderating effect was found for incentive condition either. The implications of these results for personality theory and assessment are discussed. © 2014 Wiley Periodicals, Inc.
The strength of great apes and the speed of humans.
Walker, Alan
2009-04-01
Cliff Jolly developed a causal model of human origins in his paper "The Seed-Eaters," published in 1970. He was one of the first to attempt this, and the paper has since become a classic. I do not have such grand goals; instead, I seek to understand a major difference between the living great apes and humans. More than 50 years ago, Maynard Smith and Savage (1956) showed that the musculoskeletal systems of mammals can be adapted for strength at one extreme and speed at the other but not both. Great apes are adapted for strength--chimpanzees have been shown to be about four times as strong as fit young humans when normalized for body size. The corresponding speed that human limb systems gain at the expense of power is critical for effective human activities such as running, throwing, and manipulation, including tool making. The fossil record can shed light on when the change from power to speed occurred. I outline a hypothesis that suggests that the difference in muscular performance between the two species is caused by chimpanzees having many fewer small motor units than humans, which leads them, in turn, to contract more muscle fibers earlier in any particular task. I outline a histological test of this hypothesis.
Mills, Stacia; Xiao, Anna Q; Wolitzky-Taylor, Kate; Lim, Russell; Lu, Francis G
2017-04-01
The objective of this study was to assess whether a 1-hour didactic session on the DSM-5 Cultural Formulation Interview (CFI) improves the cultural competence of general psychiatry residents. The main hypothesis was that teaching adult psychiatry residents a 1-hour session on the CFI would improve cultural competence. The exploratory hypothesis was that trainees with more experience in cultural diversity would have a greater increase in cultural competency scores. Psychiatry residents at a metropolitan, county hospital completed demographics and preintervention questionnaires, were exposed to a 1-hour session on the CFI, and were given a postintervention questionnaire. The questionnaire was an adapted version of the validated Cultural Competence Assessment Tool. Paired samples t tests compared pre- to posttest change. Hierarchical linear regression assessed whether pretraining characteristics predicted posttest scores. The mean change of total pre- and posttest scores was significant (p = .002), as was the mean change in the subscales Nonverbal Communications (p < .001) and Cultural Knowledge (p = .002). Demographic characteristics did not predict higher posttest scores (when covarying for pretest scores). Psychiatry residents' cultural competence scores improved irrespective of previous experience in cultural diversity. More research is needed to further explore the implications of the improved scores in clinical practice.
Applying the scientific method to small catchment studies: A review of the Panola Mountain experience
Hooper, R.P.
2001-01-01
A hallmark of the scientific method is its iterative application to a problem to increase and refine the understanding of the underlying processes controlling it. A successful iterative application of the scientific method to catchment science (including the fields of hillslope hydrology and biogeochemistry) has been hindered by two factors. First, the scale at which controlled experiments can be performed is much smaller than the scale of the phenomenon of interest. Second, computer simulation models generally have not been used as hypothesis-testing tools as rigorously as they might have been. Model evaluation often has gone only so far as evaluation of goodness of fit, rather than a full structural analysis, which is more useful when treating the model as a hypothesis. An iterative application of a simple mixing model to the Panola Mountain Research Watershed is reviewed to illustrate the increase in understanding gained by this approach and to discern general principles that may be applicable to other studies. The lessons learned include the need for an explicitly stated conceptual model of the catchment, the definition of objective measures of its applicability, and a clear linkage between the scale of observations and the scale of predictions. Published in 2001 by John Wiley & Sons, Ltd.
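Mixing models of the kind reviewed here are easy to state as testable hypotheses. A deliberately minimal two-end-member sketch (catchment studies typically use several tracers at once; this is illustrative, not the Panola model itself):

```python
def mixing_fraction(c_stream, c_a, c_b):
    """Two-component end-member mixing: solve
    c_stream = f * c_a + (1 - f) * c_b
    for the fraction f of end member A in streamflow, given tracer
    concentrations of the stream and of both end members."""
    return (c_stream - c_b) / (c_a - c_b)
```

A fitted fraction falling outside [0, 1], or disagreeing across tracers, is evidence against the assumed end members; that falsifiability is what lets such a model serve as a hypothesis rather than a curve fit.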
Correlates of androgens in wild male Barbary macaques: Testing the challenge hypothesis.
Rincon, Alan V; Maréchal, Laëtitia; Semple, Stuart; Majolo, Bonaventura; MacLarnon, Ann
2017-10-01
Investigating causes and consequences of variation in hormonal expression is a key focus in behavioral ecology. Many studies have explored patterns of secretion of the androgen testosterone in male vertebrates, using the challenge hypothesis (Wingfield, Hegner, Dufty, & Ball, 1990; The American Naturalist, 136(6), 829-846) as a theoretical framework. Rather than the classic association of testosterone with male sexual behavior, this hypothesis predicts that high levels of testosterone are associated with male-male reproductive competition but also inhibit paternal care. The hypothesis was originally developed for birds, and subsequently tested in other vertebrate taxa, including primates. Such studies have explored the link between testosterone and reproductive aggression as well as other measures of mating competition, or between testosterone and aspects of male behavior related to the presence of infants. Very few studies have simultaneously investigated the links between testosterone and male aggression, other aspects of mating competition and infant-related behavior. We tested predictions derived from the challenge hypothesis in wild male Barbary macaques (Macaca sylvanus), a species with marked breeding seasonality and high levels of male-infant affiliation, providing a powerful test of this theoretical framework. Over 11 months, 251 hr of behavioral observations and 296 fecal samples were collected from seven adult males in the Middle Atlas Mountains, Morocco. Fecal androgen levels rose before the onset of the mating season, during a period of rank instability, and were positively related to group mating activity across the mating season. Androgen levels were unrelated to rates of male-male aggression in any period, but higher ranked males had higher levels in both the mating season and in the period of rank instability. Lower androgen levels were associated with increased rates of male-infant grooming during the mating and unstable periods. Our results generally support the challenge hypothesis and highlight the importance of considering individual species' behavioral ecology when testing this framework. © 2017 Wiley Periodicals, Inc.
Luigi Ingrassia, Pier; Ragazzoni, Luca; Carenzo, Luca; Colombo, Davide; Ripoll Gallardo, Alba; Della Corte, Francesco
2015-04-01
This study tested the hypothesis that virtual reality simulation is equivalent to live simulation for testing naive medical students' abilities to perform mass casualty triage using the Simple Triage and Rapid Treatment (START) algorithm in a simulated disaster scenario and to detect the improvement in these skills after a teaching session. Fifty-six students in their last year of medical school were randomized into two groups (A and B). The same scenario, a car accident, was developed identically on the two simulation methodologies: virtual reality and live simulation. On day 1, group A was exposed to the live scenario and group B was exposed to the virtual reality scenario, aiming to triage 10 victims. On day 2, all students attended a 2-h lecture on mass casualty triage, specifically the START triage method. On day 3, groups A and B were crossed over. The groups' abilities to perform mass casualty triage in terms of triage accuracy, intervention correctness, and speed in the scenarios were assessed. Triage and lifesaving treatment scores were assessed equally by virtual reality and live simulation on day 1 and on day 3. Both simulation methodologies detected an improvement in triage accuracy and treatment correctness from day 1 to day 3 (P<0.001). The time to complete each scenario and its decrease from day 1 to day 3 were detected equally in the two groups (P<0.05). Virtual reality simulation proved to be a valuable tool, equivalent to live simulation, to test medical students' abilities to perform mass casualty triage and to detect improvement in such skills.
Confirming expectations in asymmetric and symmetric social hypothesis testing.
Rusconi, Patrice; Sacchi, Simona; Toscano, Armando; Cherubini, Paolo
2012-01-01
This article examines individuals' expectations in a social hypothesis testing task. Participants selected questions from a list to investigate the presence of personality traits in a target individual. They also identified the responses that they expected to receive and the likelihood of the expected responses. The results of two studies indicated that when people asked questions inquiring about the hypothesized traits that did not entail strong a priori beliefs, they expected to find evidence confirming the hypothesis under investigation. These confirming expectations were more pronounced for symmetric questions, in which the diagnosticity and frequency of the expected evidence did not conflict. When the search for information was asymmetric, confirming expectations were diminished, likely as a consequence of either the rareness or low diagnosticity of the hypothesis-confirming outcome. We also discuss the implications of these findings for confirmation bias.
Wagner, C; Groene, O; Dersarkissian, M; Thompson, C A; Klazinga, N S; Arah, O A; Suñol, R
2014-04-01
Stakeholders of hospitals often lack standardized tools to assess compliance with quality management strategies and the implementation of clinical quality activities in hospitals. Such assessment tools, if easy to use, could be helpful to hospitals, health-care purchasers and health-care inspectorates. The aim of our study was to determine the psychometric properties of two newly developed tools for measuring compliance with process-oriented quality management strategies and the extent of implementation of clinical quality strategies at the hospital level. We developed and tested two measurement instruments that could be used during on-site visits by trained external surveyors to calculate a Quality Management Compliance Index (QMCI) and a Clinical Quality Implementation Index (CQII). We used psychometric methods and the cross-sectional data to explore the factor structure, reliability and validity of each of these instruments. The sample consisted of 74 acute care hospitals selected at random from each of 7 European countries. The psychometric properties of the two indices (QMCI and CQII). Overall, the indices demonstrated favourable psychometric performance based on factor analysis, item correlations, internal consistency and hypothesis testing. Cronbach's alpha was acceptable for the scales of the QMCI (α: 0.74-0.78) and the CQII (α: 0.82-0.93). Inter-scale correlations revealed that the scales were positively correlated, but distinct. All scales added sufficient new information to each main index to be retained. This study has produced two reliable instruments that can be used during on-site visits to assess compliance with quality management strategies and implementation of quality management activities by hospitals in Europe and perhaps other jurisdictions.
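The internal-consistency figures quoted for the QMCI and CQII scales are Cronbach's alpha values, which can be computed directly from an item-score matrix. A generic sketch with a hypothetical data layout (respondents in rows, items in columns):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n respondents x k items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```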
CLUSTERnGO: a user-defined modelling platform for two-stage clustering of time-series data.
Fidaner, Işık Barış; Cankorur-Cetinkaya, Ayca; Dikicioglu, Duygu; Kirdar, Betul; Cemgil, Ali Taylan; Oliver, Stephen G
2016-02-01
Simple bioinformatic tools are frequently used to analyse time-series datasets regardless of their ability to deal with transient phenomena, limiting the meaningful information that may be extracted from them. This situation requires the development and exploitation of tailor-made, easy-to-use and flexible tools designed specifically for the analysis of time-series datasets. We present a novel statistical application called CLUSTERnGO, which uses a model-based clustering algorithm that fulfils this need. This algorithm involves two components of operation. Component 1 constructs a Bayesian non-parametric model (Infinite Mixture of Piecewise Linear Sequences) and Component 2, which applies a novel clustering methodology (Two-Stage Clustering). The software can also assign biological meaning to the identified clusters using an appropriate ontology. It applies multiple hypothesis testing to report the significance of these enrichments. The algorithm has a four-phase pipeline. The application can be executed using either command-line tools or a user-friendly Graphical User Interface. The latter has been developed to address the needs of both specialist and non-specialist users. We use three diverse test cases to demonstrate the flexibility of the proposed strategy. In all cases, CLUSTERnGO not only outperformed existing algorithms in assigning unique GO term enrichments to the identified clusters, but also revealed novel insights regarding the biological systems examined, which were not uncovered in the original publications. The C++ and QT source codes, the GUI applications for Windows, OS X and Linux operating systems and user manual are freely available for download under the GNU GPL v3 license at http://www.cmpe.boun.edu.tr/content/CnG. sgo24@cam.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
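The enrichment step mentioned in the abstract, assigning biological meaning to clusters and correcting for multiple hypothesis testing, is conventionally a one-sided hypergeometric test per GO term. A generic sketch of that standard calculation (not CLUSTERnGO's own implementation):

```python
from scipy import stats

def go_enrichment_pvalue(cluster_size, hits_in_cluster,
                         genome_size, hits_in_genome):
    """One-sided hypergeometric p-value for over-representation of a
    GO term in a cluster: P(X >= hits_in_cluster) when drawing
    `cluster_size` genes from a genome of `genome_size` genes, of
    which `hits_in_genome` carry the term."""
    return stats.hypergeom.sf(hits_in_cluster - 1, genome_size,
                              hits_in_genome, cluster_size)
```

The p-values across all terms and clusters would then be passed through a multiple-testing correction such as Benjamini-Hochberg before enrichments are reported as significant.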
In the Beginning-There Is the Introduction-and Your Study Hypothesis.
Vetter, Thomas R; Mascha, Edward J
2017-05-01
Writing a manuscript for a medical journal is very akin to writing a newspaper article, albeit a scholarly one. Like any journalist, you have a story to tell. You need to tell your story in a way that is easy to follow and makes a compelling case to the reader. Although recommended since the beginning of the 20th century, the conventional Introduction-Methods-Results-And-Discussion (IMRAD) scientific reporting structure has only been the standard since the 1980s. The Introduction should be focused and succinct in communicating the significance, background, rationale, study aims or objectives, and the primary (and secondary, if appropriate) study hypotheses. Hypothesis testing involves posing both a null and an alternative hypothesis. The null hypothesis proposes that no difference or association exists on the outcome variable of interest between the interventions or groups being compared. The alternative hypothesis is the opposite of the null hypothesis and thus typically proposes that a difference in the population does exist between the groups being compared on the parameter of interest. Most investigators seek to reject the null hypothesis because of their expectation that the studied intervention does result in a difference between the study groups or that the association of interest does exist. Therefore, in most clinical and basic science studies and manuscripts, the alternative hypothesis is stated, not the null hypothesis. Also, in the Introduction, the alternative hypothesis is typically stated in the direction of interest, or the expected direction. However, when assessing the association of interest, researchers typically look in both directions (ie, favoring 1 group or the other) by conducting a 2-tailed statistical test because the true direction of the effect is typically not known, and either direction would be important to report.
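The null/alternative structure and the 2-tailed convention described above can be shown in a few lines. A worked toy example (group labels and numbers invented for illustration):

```python
from scipy import stats

# H0: the two groups do not differ on the outcome variable.
# H1 (two-sided alternative): the groups do differ, in either direction.
control   = [112, 118, 121, 109, 115, 117, 120, 111]
treatment = [105, 110, 108, 113, 104, 109, 112, 107]

t, p = stats.ttest_ind(treatment, control)   # 2-tailed by default
print(f"t = {t:.2f}, two-tailed p = {p:.4f}")
# A small p leads us to reject H0 in favour of the alternative; the test
# looks in both directions even though the Introduction usually states
# the alternative hypothesis in the expected direction.
```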
Beyond the continuum: a multi-dimensional phase space for neutral-niche community assembly.
Latombe, Guillaume; Hui, Cang; McGeoch, Melodie A
2015-12-22
Neutral and niche processes are generally considered to interact in natural communities along a continuum, exhibiting community patterns bounded by pure neutral and pure niche processes. The continuum concept uses niche separation, an attribute of the community, to test the hypothesis that communities are bounded by pure niche or pure neutral conditions. It does not accommodate interactions via feedback between processes and the environment. By contrast, we introduce the Community Assembly Phase Space (CAPS), a multi-dimensional space that uses community processes (such as dispersal and niche selection) to define the limiting neutral and niche conditions and to test the continuum hypothesis. We compare the outputs of modelled communities in a heterogeneous landscape, assembled by pure neutral, pure niche and composite processes. Differences in patterns under different combinations of processes in CAPS reveal hidden complexity in neutral-niche community dynamics. The neutral-niche continuum only holds for strong dispersal limitation and niche separation. For weaker dispersal limitation and niche separation, neutral and niche processes amplify each other via feedback with the environment. This generates patterns that lie well beyond those predicted by a continuum. Inferences drawn from patterns about community assembly processes can therefore be misguided when based on the continuum perspective. CAPS also demonstrates the complementary information value of different patterns for inferring community processes and captures the complexity of community assembly. It provides a general tool for studying the processes structuring communities and can be applied to address a range of questions in community and metacommunity ecology. © 2015 The Author(s).
NASA Astrophysics Data System (ADS)
Hoell, Simon; Omenzetter, Piotr
2017-07-01
Jointly considering damage-sensitive features (DSFs) of signals recorded by multiple sensors, applying advanced transformations to these DSFs, and systematically assessing their contribution to damage detectability and localisation can significantly enhance the performance of structural health monitoring systems. This philosophy is explored here for partial autocorrelation coefficients (PACCs) of acceleration responses. These are interrogated using linear discriminant analysis based on the Fukunaga-Koontz transformation, with datasets from the healthy and selected reference damage states. Then, a simple but efficient fast forward selection procedure is applied to rank the DSF components with respect to statistical distance measures specialised for either damage detection or localisation. For the damage detection task, the optimal feature subsets are identified based on statistical hypothesis testing. For damage localisation, a hierarchical neuro-fuzzy tool is developed that uses the DSF ranking to establish its own optimal architecture. The proposed approaches are evaluated experimentally on data from non-destructively simulated damage in a laboratory-scale wind turbine blade. The results support our claim of being able to enhance damage detectability and localisation performance by transforming and optimally selecting DSFs. It is demonstrated that the optimally selected PACCs from multiple sensors, or their Fukunaga-Koontz transformed versions, can not only improve the detectability of damage via statistical hypothesis testing but also increase the accuracy of damage localisation when used as inputs into a hierarchical neuro-fuzzy network. Furthermore, the computational effort of employing these advanced soft computing models for damage localisation can be significantly reduced by using transformed DSFs.
Boessenkool, Sanne; Star, Bastiaan; Waters, Jonathan M; Seddon, Philip J
2009-06-01
The identification of demographically independent populations and the recognition of management units have been greatly facilitated by the continuing advances in genetic tools. Management units now play a key role in short-term conservation management programmes of declining species, but their importance in expanding populations receives comparatively little attention. The endangered yellow-eyed penguin (Megadyptes antipodes) expanded its range from the subantarctic to New Zealand's South Island a few hundred years ago and this new population now represents almost half of the species' total census size. This dramatic expansion attests to M. antipodes' high dispersal abilities and suggests the species is likely to constitute a single demographic population. Here we test this hypothesis of panmixia by investigating genetic differentiation and levels of gene flow among penguin breeding areas using 12 autosomal microsatellite loci along with mitochondrial control region sequence analyses for 350 individuals. Contrary to our hypothesis, however, the analyses reveal two genetically and geographically distinct assemblages: South Island vs. subantarctic populations. Using assignment tests, we recognize just two first-generation migrants between these populations (corresponding to a migration rate of < 2%), indicating that ongoing levels of long-distance migration are low. Furthermore, the South Island population has low genetic variability compared to the subantarctic population. These results suggest that the South Island population was founded by only a small number of individuals, and that subsequent levels of gene flow have remained low. The demographic independence of the two populations warrants their designation as distinct management units, and conservation efforts should be adjusted accordingly to protect both populations.
Watson, Robert A
2014-08-01
To test the hypothesis that machine learning algorithms increase the predictive power to classify surgical expertise using surgeons' hand motion patterns. In 2012 at the University of North Carolina at Chapel Hill, 14 surgical attendings and 10 first- and second-year surgical residents each performed two bench model venous anastomoses. During the simulated tasks, the participants wore an inertial measurement unit on the dorsum of their dominant (right) hand to capture their hand motion patterns. The pattern from each bench model task performed was preprocessed into a symbolic time series and labeled as expert (attending) or novice (resident). The labeled hand motion patterns were processed and used to train a Support Vector Machine (SVM) classification algorithm. The trained algorithm was then tested for discriminative/predictive power against unlabeled (blinded) hand motion patterns from tasks not used in the training. The Lempel-Ziv (LZ) complexity metric was also measured from each hand motion pattern, with an optimal threshold calculated to separately classify the patterns. The LZ metric classified unlabeled (blinded) hand motion patterns into expert and novice groups with an accuracy of 70% (sensitivity 64%, specificity 80%). The SVM algorithm had an accuracy of 83% (sensitivity 86%, specificity 80%). The results confirmed the hypothesis. The SVM algorithm increased the predictive power to classify blinded surgical hand motion patterns into expert versus novice groups. With further development, the system used in this study could become a viable tool for low-cost, objective assessment of procedural proficiency in a competency-based curriculum.
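The abstract above describes a concrete pipeline (symbolic time series, a complexity metric, and an SVM classifier). As a rough illustration only, and not the study's actual code, the following Python sketch binarizes a motion-like signal, scores it with a simple LZ78-style phrase count (one of several Lempel-Ziv complexity variants), and trains a support-vector classifier on that single feature; the "expert" and "novice" signals are synthetic stand-ins.

```python
import numpy as np
from sklearn.svm import SVC

def symbolize(signal):
    """Binarize a signal around its median (one simple symbolization)."""
    med = np.median(signal)
    return "".join("1" if x > med else "0" for x in signal)

def lz_phrase_count(s):
    """LZ78-style complexity: number of distinct phrases in a greedy parse."""
    seen, phrase, count = set(), "", 0
    for ch in s:
        phrase += ch
        if phrase not in seen:        # a new phrase ends here
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 500)
experts = [np.sin(t) + rng.normal(0, 0.1, t.size) for _ in range(20)]  # regular motion
novices = [rng.normal(0, 1.0, t.size) for _ in range(20)]              # erratic motion
X = [[lz_phrase_count(symbolize(s))] for s in experts + novices]
y = [1] * 20 + [0] * 20                                                # 1 = expert
clf = SVC(kernel="rbf").fit(X, y)
probe = np.sin(t) + rng.normal(0, 0.1, t.size)
print(clf.predict([[lz_phrase_count(symbolize(probe))]]))              # expect [1]
```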
Genetics of pediatric obesity.
Manco, Melania; Dallapiccola, Bruno
2012-07-01
Obesity onset now occurs at earlier ages, and prevalence has increased dramatically worldwide over the past decades. Epidemic obesity is mainly attributable to modern lifestyle, but family studies prove the significant role of genes in the individual's predisposition to obesity. Advances in genotyping technologies have raised great hope and expectations that genetic testing will pave the way to personalized medicine and that complex traits such as obesity will be prevented even before birth. In the presence of the pressing offer of direct-to-consumer genetic testing services from private companies to estimate the individual's risk for complex phenotypes including obesity, the present review offers pediatricians an update of the state of the art on the genomics of obesity in childhood. Discrepancies with respect to the genomics of adult obesity are discussed. After an appraisal of findings from genome-wide association studies in pediatric populations, the rare variant-common disease hypothesis, the theoretical ground for next-generation sequencing techniques, is discussed in opposition to the common disease-common variant hypothesis. Next-generation sequencing techniques are expected to fill the gap of "missing heritability" of obesity, identifying rare variants associated with the trait and clarifying the role of epigenetics in its heritability. Pediatric obesity emerges as a complex phenotype, modulated by unique gene-environment interactions that occur in periods of life that are "permissive" for the programming of adult obesity. With the advent of next-generation sequencing techniques and advances in the field of exposomics, developing sensitive and specific tools to predict obesity risk as early as possible is the challenge for the next decade.
Ji, Hong; Petro, Nathan M; Chen, Badong; Yuan, Zejian; Wang, Jianji; Zheng, Nanning; Keil, Andreas
2018-02-06
Over the past decade, the simultaneous recording of electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) data has garnered growing interest because it may provide an avenue towards combining the strengths of both imaging modalities. Given their pronounced differences in temporal and spatial statistics, however, the combination of EEG and fMRI data is methodologically challenging. Here, we propose a novel screening approach that relies on a Cross Multivariate Correlation Coefficient (xMCC) framework. This approach accomplishes three tasks: (1) it provides a measure for testing multivariate correlation and multivariate uncorrelation of the two modalities; (2) it provides a criterion for the selection of EEG features; (3) it performs a screening of relevant EEG information by grouping the EEG channels into clusters to improve efficiency and to reduce computational load when searching for the best predictors of the BOLD signal. The present report applies this approach to a data set with concurrent recordings of steady-state visual evoked potentials (ssVEPs) and fMRI, recorded while observers viewed phase-reversing Gabor patches. We test the hypothesis that fluctuations in visuo-cortical mass potentials systematically covary with BOLD fluctuations not only in visual cortical, but also in anterior temporal and prefrontal areas. Results supported the hypothesis and showed that the xMCC-based analysis provides straightforward identification of neurophysiologically plausible brain regions with EEG-fMRI covariance. Furthermore, xMCC converged with other extant methods for EEG-fMRI analysis. © 2018 The Authors Journal of Neuroscience Research Published by Wiley Periodicals, Inc.
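The xMCC framework itself is defined in the paper; as a loosely analogous illustration of the screening idea (ranking EEG channel clusters by multivariate association with a BOLD signal), the sketch below uses the first canonical correlation from scikit-learn's CCA instead. All data and cluster names are hypothetical.

```python
# A loosely analogous screening idea (not the paper's xMCC): rank clusters of
# EEG channels by their first canonical correlation with BOLD regressors.
import numpy as np
from sklearn.cross_decomposition import CCA

def first_canonical_corr(X, Y):
    """First canonical correlation between two multivariate time series."""
    cca = CCA(n_components=1).fit(X, Y)
    u, v = cca.transform(X, Y)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

rng = np.random.default_rng(1)
T = 200
bold = rng.normal(size=(T, 3))                     # hypothetical BOLD regressors
clusters = {
    "occipital": bold @ rng.normal(size=(3, 8)) + 0.5 * rng.normal(size=(T, 8)),
    "frontal":   rng.normal(size=(T, 8)),          # unrelated noise cluster
}
ranking = sorted(clusters, key=lambda c: -first_canonical_corr(clusters[c], bold))
print(ranking)  # expected: the BOLD-driven cluster ranked first
```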
Baka, Łukasz
2015-01-01
The aim of the study was to investigate the direct and indirect - mediated by job burnout - effects of job demands on mental and physical health problems. The Job Demands-Resources model was the theoretical framework of the study. Three job demands were taken into account - interpersonal conflicts at work, organizational constraints and workload. Indicators of mental and physical health problems included depression and physical symptoms, respectively. Three hundred and sixteen Polish teachers from 8 schools participated in the study. The hypotheses were tested with the use of tools measuring job demands (Interpersonal Conflicts at Work, Organizational Constraints, Quantitative Workload), job burnout (the Oldenburg Burnout Inventory), depression (the Beck Hopelessness Scale), and physical symptoms (the Physical Symptoms Inventory). Regression analysis with bootstrapping, using the PROCESS macros of Hayes, was applied. The results partially support the hypotheses. The indirect effects, and to some extent the direct effects, of job demands turned out to be statistically significant. The negative impacts of the 3 job demands on mental (hypothesis 1, H1) and physical (hypothesis 2, H2) health were mediated by increasing job burnout. Only organizational constraints were directly associated with mental (and not physical) health. The results partially support the notion of the Job Demands-Resources model and provide further insight into processes leading to the low well-being of teachers in the workplace. This work is available in the Open Access model and licensed under a CC BY-NC 3.0 PL license.
An Assessment of the Impact of Hafting on Paleoindian Point Variability
Buchanan, Briggs; O'Brien, Michael J.; Kilby, J. David; Huckell, Bruce B.; Collard, Mark
2012-01-01
It has long been argued that the form of North American Paleoindian points was affected by hafting. According to this hypothesis, hafting constrained point bases such that they are less variable than point blades. The results of several studies have been claimed to be consistent with this hypothesis. However, there are reasons to be skeptical of these results. None of the studies employed statistical tests, and all of them focused on points recovered from kill and camp sites, which makes it difficult to be certain that the differences in variability are the result of hafting rather than a consequence of resharpening. Here, we report a study in which we tested the predictions of the hafting hypothesis by statistically comparing the variability of different parts of Clovis points. We controlled for the potentially confounding effects of resharpening by analyzing largely unused points from caches as well as points from kill and camp sites. The results of our analyses were not consistent with the predictions of the hypothesis. We found that several blade characters and point thickness were no more variable than the base characters. Our results indicate that the hafting hypothesis does not hold for Clovis points and indicate that there is a need to test its applicability in relation to post-Clovis Paleoindian points. PMID:22666320
2011-03-01
[Fragmentary report text; only statistical-output residue survives. It comprises Levene's-type tables "testing the null hypothesis that the error variance of the dependent variable is equal across groups" for a POP-UP factor (F(1, 22) = 1.179, p = .289; F(1, 22) = .000, p = .991; F(1, 22) = 2.104, p = .161; Design: Intercept + POP-UP), followed by a note that the design also limited the number of intended treatments and that the experimental design originally was supposed to test all three adverse events that threaten …]
Testing the cultural group selection hypothesis in Northern Ghana and Oaxaca.
Acedo-Carmona, Cristina; Gomila, Antoni
2016-01-01
We examine the cultural group selection (CGS) hypothesis in light of our fieldwork in Northern Ghana and Oaxaca, highly multi-ethnic regions. Our evidence fails to corroborate two central predictions of the hypothesis: that the cultural group is the unit of evolution, and that cultural homogenization is to be expected as the outcome of a selective process.
Bayes factor and posterior probability: Complementary statistical evidence to p-value.
Lin, Ruitao; Yin, Guosheng
2015-09-01
As a convention, a p-value is often computed in hypothesis testing and compared with the nominal level of 0.05 to determine whether to reject the null hypothesis. Although the smaller the p-value, the more significant the statistical test, it is difficult to interpret the p-value on a probability scale and to quantify it as the strength of the evidence against the null hypothesis. In contrast, the Bayesian posterior probability of the null hypothesis has an explicit interpretation of how strongly the data support the null. We compare the p-value and the posterior probability by considering a recent clinical trial. The results show that even when we reject the null hypothesis, there is still a substantial probability (around 20%) that the null is true. Not only should we examine whether the data would have rarely occurred under the null hypothesis, but we also need to know whether the data would be rare under the alternative. As a result, the p-value provides only one side of the information, for which the Bayes factor and posterior probability may offer complementary evidence. Copyright © 2015 Elsevier Inc. All rights reserved.
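The contrast the authors draw can be reproduced in miniature. The sketch below compares a one-sided p-value with the posterior probability of a point null under a normal prior for the alternative; the prior scale and the 1:1 prior odds are illustrative assumptions, not the paper's exact trial model. With these inputs the posterior probability of H0 lands near 0.2 even though p is about 0.006, which is exactly the kind of divergence described above.

```python
# Sketch: p-value vs. posterior probability of H0 for a normal mean,
# point null H0: theta = 0 vs. H1: theta ~ N(0, tau^2). Illustrative inputs.
import numpy as np
from scipy.stats import norm

n, sigma, xbar = 100, 1.0, 0.25           # hypothetical trial summary
se = sigma / np.sqrt(n)

p_value = norm.sf(xbar / se)               # one-sided p-value under H0

tau = 0.5                                  # assumed prior sd of theta under H1
m0 = norm.pdf(xbar, 0, se)                 # marginal likelihood under H0
m1 = norm.pdf(xbar, 0, np.sqrt(tau**2 + se**2))  # marginal under H1
bf01 = m0 / m1                             # Bayes factor in favor of H0
post_h0 = bf01 / (bf01 + 1)                # P(H0 | data) with 1:1 prior odds

print(f"p = {p_value:.4f}, BF01 = {bf01:.3f}, P(H0|data) = {post_h0:.3f}")
```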
Earthquake likelihood model testing
Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.
2007-01-01
INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a wide range of possible testing procedures exist. Jolliffe and Stephenson (2003) present different forecast verifications from atmospheric science, among them likelihood testing of probability forecasts and testing the occurrence of binary events. Testing binary events requires that for each forecasted event, the spatial, temporal and magnitude limits be given. Although major earthquakes can be considered binary events, the models within the RELM project express their forecasts on a spatial grid and in 0.1 magnitude units; thus the results are a distribution of rates over space and magnitude. These forecasts can be tested with likelihood tests. In general, likelihood tests assume a valid null hypothesis against which a given hypothesis is tested. The outcome is either a rejection of the null hypothesis in favor of the test hypothesis or a nonrejection, meaning the test hypothesis cannot outperform the null hypothesis at a given significance level. Within RELM, there is no accepted null hypothesis and thus the likelihood test needs to be expanded to allow comparable testing of equipollent hypotheses. To test models against one another, we require that forecasts are expressed in a standard format: the average rate of earthquake occurrence within pre-specified limits of hypocentral latitude, longitude, depth, magnitude, time period, and focal mechanisms. Focal mechanisms should either be described as the inclination of P-axis, declination of P-axis, and inclination of the T-axis, or as strike, dip, and rake angles.
Schorlemmer and Gerstenberger (2007, this issue) designed classes of these parameters such that similar models will be tested against each other. These classes make the forecasts comparable between models. Additionally, we are limited to testing only what is precisely defined and consistently reported in earthquake catalogs. Therefore it is currently not possible to test such information as fault rupture length or area, asperity location, etc. Also, to account for data quality issues, we allow for location and magnitude uncertainties as well as the probability that an event is dependent on another event. As we mentioned above, only models with comparable forecasts can be tested against each other. Our current tests are designed to examine grid-based models. This requires that any fault-based model be adapted to a grid before testing is possible. While this is a limitation of the testing, it is an inherent difficulty in any such comparative testing. Please refer to appendix B for a statistical evaluation of the application of the Poisson hypothesis to fault-based models. The testing suite we present consists of three different tests: L-Test, N-Test, and R-Test. These tests are defined similarly to those of Kagan and Jackson (1995). The first two tests examine the consistency of the hypotheses with the observations while the last test compares the spatial performances of the models.
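To make the L-Test and N-Test logic above concrete, here is a minimal Python sketch of a consistency check for a gridded Poisson forecast. The forecast rates and the stand-in "observed" catalog are synthetic; the actual RELM implementation (Schorlemmer and Gerstenberger 2007) also handles binning conventions, uncertainties, and the R-Test.

```python
# Sketch of a RELM-style consistency check (simplified): compare the observed
# catalog's joint log-likelihood against likelihoods of catalogs simulated
# from the forecast (L-test quantile), plus a total-count N-test.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(42)
rates = rng.gamma(0.5, 0.2, size=200)      # hypothetical forecast: 200 bins
observed = rng.poisson(rates)              # stand-in for the real catalog

def joint_loglik(counts, lam):
    return poisson.logpmf(counts, lam).sum()

# L-test: where does the observed likelihood fall among simulated catalogs?
sims = [joint_loglik(rng.poisson(rates), rates) for _ in range(1000)]
gamma_quantile = np.mean(np.array(sims) <= joint_loglik(observed, rates))

# N-test: is the observed total event count consistent with the forecast?
n_quantile = poisson.cdf(observed.sum(), rates.sum())

print(f"L-test quantile: {gamma_quantile:.3f}, N-test quantile: {n_quantile:.3f}")
# Very small quantiles would flag the forecast as inconsistent with the data.
```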
Timing and proximate causes of mortality in wild bird populations: testing Ashmole’s hypothesis
Barton, Daniel C.; Martin, Thomas E.
2012-01-01
Fecundity in birds is widely recognized to increase with latitude across diverse phylogenetic groups and regions, yet the causes of this variation remain enigmatic. Ashmole’s hypothesis is one of the most broadly accepted explanations for this pattern. This hypothesis suggests that increasing seasonality leads to increasing overwinter mortality due to resource scarcity during the lean season (e.g., winter) in higher latitude climates. This mortality is then thought to yield increased per-capita resources for breeding that allow larger clutch sizes at high latitudes. Support for this hypothesis has been based on indirect tests, whereas the underlying mechanisms and assumptions remain poorly explored. We used a meta-analysis of over 150 published studies to test two underlying and critical assumptions of Ashmole’s hypothesis: first, that adult mortality is greatest during the season of greatest resource scarcity, and second, that most mortality is caused by starvation. We found that the lean season (winter) was generally not the season of greatest mortality. Instead, spring or summer was most frequently the season of greatest mortality. Moreover, monthly survival rates were not explained by monthly productivity, again opposing predictions from Ashmole’s hypothesis. Finally, predation, rather than starvation, was the most frequent proximate cause of mortality. Our results do not support the mechanistic predictions of Ashmole’s hypothesis, and suggest alternative explanations of latitudinal variation in clutch size should remain under consideration. Our meta-analysis also highlights a paucity of data available on the timing and causes of mortality in many bird populations, particularly tropical bird populations, despite the clear theoretical and empirical importance of such data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, P.; Seth, D.L.; Ray, A.K.
A detailed and systematic study of the nature of the discretization error associated with the upwind finite-difference method is presented. A basic model problem has been identified and based upon the results for this problem, a basic hypothesis regarding the accuracy of the computational solution of the Spencer-Lewis equation is formulated. The basic hypothesis is then tested under various systematic single complexifications of the basic model problem. The results of these tests provide the framework of the refined hypothesis presented in the concluding comments. 27 refs., 3 figs., 14 tabs.
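As a generic illustration of the discretization error such studies quantify (the Spencer-Lewis setting itself is more involved), here is a minimal first-order upwind scheme for the model advection problem u_t + a u_x = 0; the O(dx) numerical diffusion shows up as smearing of the square pulse's fronts.

```python
# Model-problem sketch: first-order upwind differencing for u_t + a u_x = 0
# on a periodic domain, compared against the exact translated solution.
import numpy as np

a, L, nx, cfl = 1.0, 1.0, 200, 0.8
dx = L / nx
dt = cfl * dx / a
x = np.linspace(0, L, nx, endpoint=False)
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # square pulse initial data

for _ in range(int(0.5 / dt)):                   # advect until t = 0.5
    u = u - a * dt / dx * (u - np.roll(u, 1))    # upwind (a > 0): backward difference

shift = (x - 0.5 * a) % L
exact = np.where((shift > 0.1) & (shift < 0.3), 1.0, 0.0)
print("max error:", np.abs(u - exact).max())     # fronts smeared by numerical diffusion
```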
Comparing Web, Group and Telehealth Formats of a Military Parenting Program
2017-06-01
…self-directed approaches. Comparative effectiveness will be tested by specifying a non-equivalence hypothesis for group-based and web-facilitated relative to self-directed approaches. … documents for review and approval. 1a. Finalize human subjects protocol and consent documents for pilot group (N=5 families), and randomized controlled …
ERIC Educational Resources Information Center
Paek, Insu
2010-01-01
Conservative bias in rejection of a null hypothesis from using the continuity correction in the Mantel-Haenszel (MH) procedure was examined through simulation in a differential item functioning (DIF) investigation context in which statistical testing uses a prespecified level α for the decision on an item with respect to DIF. The standard MH…
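A minimal sketch of the procedure under discussion: the Mantel-Haenszel chi-square pooled over 2x2 strata, with and without the 0.5 continuity correction, on hypothetical DIF-style data. Running it shows the corrected statistic is systematically smaller (more conservative), which is the bias the study examines.

```python
# Sketch: Mantel-Haenszel chi-square across K strata (2x2 tables), with and
# without the 0.5 continuity correction. Strata counts are hypothetical.
import numpy as np
from scipy.stats import chi2

def mh_test(tables, correction=True):
    """tables: iterable of 2x2 arrays [[a, b], [c, d]], one per stratum."""
    a_sum = e_sum = v_sum = 0.0
    for t in tables:
        a, b, c, d = np.asarray(t, float).ravel()
        n = a + b + c + d
        e = (a + b) * (a + c) / n                          # E[a] under H0
        v = (a + b) * (c + d) * (a + c) * (b + d) / (n**2 * (n - 1))
        a_sum, e_sum, v_sum = a_sum + a, e_sum + e, v_sum + v
    num = abs(a_sum - e_sum) - (0.5 if correction else 0.0)
    stat = max(num, 0.0) ** 2 / v_sum
    return stat, chi2.sf(stat, df=1)

strata = [[[12, 18], [8, 22]], [[15, 15], [9, 21]]]        # hypothetical data
for corr in (True, False):
    stat, p = mh_test(strata, correction=corr)
    print(f"correction={corr}: MH = {stat:.3f}, p = {p:.4f}")
```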
ERIC Educational Resources Information Center
Chukwu, Leo C.; Eze, Thecla A. Y.; Agada, Fidelia Chinyelugo
2016-01-01
The study examined the availability of instructional materials at the basic education level in Enugu Education Zone of Enugu State, Nigeria. One research question and one hypothesis guided the study. The research question was answered using mean and grand mean ratings, while the hypothesis was tested using t-test statistics at 0.05 level of…
Enhancement of Intermittent Androgen Ablation Therapy by Finasteride Administration in Animal Models
2003-02-01
…that intermittent androgen ablation therapy can be enhanced by finasteride, an inhibitor of T to DHT conversion. To test our hypothesis in animal models, it is necessary to deliver exogenous T at physiologic levels and finasteride over a long period of time. We have worked out conditions to deliver T and finasteride in nude mice, which will allow us to test our hypothesis.
ERIC Educational Resources Information Center
LeMire, Steven D.
2010-01-01
This paper proposes an argument framework for the teaching of null hypothesis statistical testing and its application in support of research. Elements of the Toulmin (1958) model of argument are used to illustrate the use of p values and Type I and Type II error rates in support of claims about statistical parameters and subject matter research…
Quantitative analysis of diffusion tensor orientation: theoretical framework.
Wu, Yu-Chien; Field, Aaron S; Chung, Moo K; Badie, Benham; Alexander, Andrew L
2004-11-01
Diffusion-tensor MRI (DT-MRI) yields information about the magnitude, anisotropy, and orientation of water diffusion of brain tissues. Although white matter tractography and eigenvector color maps provide visually appealing displays of white matter tract organization, they do not easily lend themselves to quantitative and statistical analysis. In this study, a set of visual and quantitative tools for the investigation of tensor orientations in the human brain was developed. Visual tools included rose diagrams, which are spherical coordinate histograms of the major eigenvector directions, and 3D scatterplots of the major eigenvector angles. A scatter matrix of major eigenvector directions was used to describe the distribution of major eigenvectors in a defined anatomic region. A measure of eigenvector dispersion was developed to describe the degree of eigenvector coherence in the selected region. These tools were used to evaluate directional organization and the interhemispheric symmetry of DT-MRI data in five healthy human brains and two patients with infiltrative diseases of the white matter tracts. In normal anatomical white matter tracts, a high degree of directional coherence and interhemispheric symmetry was observed. The infiltrative diseases appeared to alter the eigenvector properties of affected white matter tracts, showing decreased eigenvector coherence and interhemispheric symmetry. This novel approach distills the rich, 3D information available from the diffusion tensor into a form that lends itself to quantitative analysis and statistical hypothesis testing. (c) 2004 Wiley-Liss, Inc.
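A small sketch of the scatter-matrix idea described above, on synthetic unit eigenvectors; the paper's exact dispersion measure may differ, so 1 − β₁ (with β₁ the largest eigenvalue of the orientation scatter matrix, an axial statistic immune to eigenvector sign ambiguity) is used here as one natural choice.

```python
# Sketch: scatter matrix of major eigenvectors in a region and a simple
# dispersion summary (0 = perfectly coherent, 2/3 = isotropic directions).
import numpy as np

def eigenvector_dispersion(vecs):
    """vecs: (N, 3) array of unit major eigenvectors (sign-ambiguous)."""
    T = vecs.T @ vecs / len(vecs)           # 3x3 orientation scatter matrix
    beta = np.linalg.eigvalsh(T)            # ascending eigenvalues
    return 1.0 - beta[-1]                   # 1 - largest eigenvalue

rng = np.random.default_rng(3)
coherent = rng.normal([0, 0, 5], 0.2, (500, 3))        # tract-like directions
coherent /= np.linalg.norm(coherent, axis=1, keepdims=True)
random_dirs = rng.normal(size=(500, 3))                # isotropic directions
random_dirs /= np.linalg.norm(random_dirs, axis=1, keepdims=True)
print(eigenvector_dispersion(coherent), eigenvector_dispersion(random_dirs))
# coherent region: near 0; isotropic region: near 2/3
```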
A New Paradigm to Analyze Data Completeness of Patient Data.
Nasir, Ayan; Gurupur, Varadraj; Liu, Xinliang
2016-08-03
Background: There is a need to develop a tool that will measure data completeness of patient records using sophisticated statistical metrics. Patient data integrity is important in providing timely and appropriate care. Completeness is an important step, with an emphasis on understanding the complex relationships between data fields and their relative importance in delivering care. This tool will not only help understand where data problems are but also help uncover the underlying issues behind them. Objective: Develop a tool that can be used alongside a variety of health care database software packages to determine the completeness of individual patient records as well as aggregate patient records across health care centers and subpopulations. Methods: The methodology of this project is encapsulated within the Data Completeness Analysis Package (DCAP) tool, with the major components including concept mapping, CSV parsing, and statistical analysis. Results: The results from testing DCAP with Healthcare Cost and Utilization Project (HCUP) State Inpatient Database (SID) data show that this tool is successful in identifying relative data completeness at the patient, subpopulation, and database levels. These results also solidify a need for further analysis and call for hypothesis-driven research to find underlying causes for data incompleteness. Conclusion: DCAP examines patient records and generates statistics that can be used to determine the completeness of individual patient data as well as the general thoroughness of record keeping in a medical database. DCAP uses a component that is customized to the settings of the software package used for storing patient data as well as a Comma Separated Values (CSV) file parser to determine the appropriate measurements. DCAP itself is assessed through a proof-of-concept exercise using hypothetical data as well as available HCUP SID patient data. PMID:27484918
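DCAP's internal metrics are not spelled out in the abstract, so the following is only a schematic of a weighted completeness score over CSV records; the field names and weights are hypothetical.

```python
# Sketch of a DCAP-like completeness score: each record gets the weighted
# fraction of non-empty fields, and the database score aggregates records.
import csv, io

WEIGHTS = {"patient_id": 3.0, "diagnosis": 2.0, "procedure": 1.5, "payer": 0.5}

def record_completeness(row):
    total = sum(WEIGHTS.values())
    got = sum(w for f, w in WEIGHTS.items() if row.get(f, "").strip())
    return got / total

data = io.StringIO(
    "patient_id,diagnosis,procedure,payer\n"
    "001,I10,99213,medicare\n"
    "002,,99214,\n"
)
scores = [record_completeness(r) for r in csv.DictReader(data)]
print(scores, "database mean:", sum(scores) / len(scores))
# [1.0, ~0.643] -> the second record is flagged for missing weighted fields
```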
Ghandikota, Sudhir; Hershey, Gurjit K Khurana; Mersha, Tesfaye B
2018-03-24
Advances in high-throughput sequencing technologies have made it possible to generate multiple omics data at an unprecedented rate and scale. The accumulation of these omics data far outpaces the rate at which biologists can mine them and generate new hypotheses to test experimentally. There is an urgent need for powerful tools to efficiently and effectively search and filter these resources to address specific post-GWAS functional genomics questions. To date, however, these resources are scattered across several databases and often lack a unified portal for data annotation and analytics. In addition, existing tools to analyze and visualize these databases are highly fragmented, forcing researchers to access multiple applications and perform manual interventions for each gene or variant in an ad hoc fashion until all their questions are answered. In this study, we present GENEASE, a web-based one-stop bioinformatics tool designed not only to query and explore multi-omics and phenotype databases (e.g., GTEx, ClinVar, dbGaP, GWAS Catalog, ENCODE, Roadmap Epigenomics, KEGG, Reactome, Gene and Phenotype Ontology) in a single web interface but also to perform seamless post genome-wide association downstream functional and overlap analysis for non-coding regulatory variants. GENEASE accesses over 50 different databases in the public domain, including model organism-specific databases, to facilitate gene/variant and disease exploration, enrichment and overlap analysis in real time. It is a user-friendly tool with a point-and-click interface containing links to support information, including a user manual and examples. GENEASE can be accessed freely at http://research.cchmc.org/mershalab/genease_new/login.html. Tesfaye.Mersha@cchmc.org, Sudhir.Ghandikota@cchmc.org. Supplementary data are available at Bioinformatics online.
A methodological approach for designing a usable ontology-based GUI in healthcare.
Lasierra, N; Kushniruk, A; Alesanco, A; Borycki, E; García, J
2013-01-01
This paper presents a methodological approach to the design and evaluation of an interface for an ontology-based system used for designing care plans for monitoring patients at home. In order to define the care plans, physicians need a tool for creating instances of the ontology and configuring some rules. Our purpose is to develop an interface that allows clinicians to interact with the ontology. Although ontology-driven applications do not necessarily present the ontology in the user interface, our hypothesis is that showing selected parts of the ontology in a "usable" way could enhance clinicians' understanding and ease the definition of the care plans. Based on prototyping and iterative testing, this methodology combines visualization techniques and usability methods. Preliminary results obtained after a formative evaluation indicate the effectiveness of the suggested combination.
Neurobiological constraints on behavioral models of motivation.
Nader, K; Bechara, A; van der Kooy, D
1997-01-01
The application of neurobiological tools to behavioral questions has produced a number of working models of the mechanisms mediating the rewarding and aversive properties of stimuli. The authors review and compare three models that differ in the nature and number of the processes identified. The dopamine hypothesis, a single system model, posits that the neurotransmitter dopamine plays a fundamental role in mediating the rewarding properties of all classes of stimuli. In contrast, both nondeprived/deprived and saliency attribution models claim that separate systems make independent contributions to reward. The former identifies the psychological boundary defined by the two systems as being between states of nondeprivation (e.g. food sated) and deprivation (e.g. hunger). The latter identifies a boundary between liking and wanting systems. Neurobiological dissociations provide tests of and explanatory power for behavioral theories of goal-directed behavior.
Selection against small males in utero: a test of the Wells hypothesis.
Catalano, R; Goodman, J; Margerison-Zilko, C E; Saxton, K B; Anderson, E; Epstein, M
2012-04-01
The argument that women in stressful environments spontaneously abort their least fit fetuses enjoys wide dissemination despite the fact that several of its most intuitive predictions remain untested. The literature includes no tests, for example, of the hypothesis that these mechanisms select against small for gestational age (SGA) males. We apply time-series modeling to 4.9 million California male term births to test the hypothesis that the rate of SGA infants in 1096 weekly birth cohorts varies inversely with labor market contraction, a known stressor of contemporary populations. We find support for the hypothesis that small size becomes less frequent among term male infants when the labor market contracts. Our findings contribute to the evidence supporting selection in utero. They also suggest that research into the association between maternal stress and adverse birth outcomes should acknowledge the possibility that fetal loss may affect findings and their interpretation. Strengths of our analyses include the large number and size of our birth cohorts and our control for autocorrelation. Weaknesses include that we, like nearly all researchers in the field, have no direct measure of fetal loss.
Reynolds, Matthew R; Scheiber, Caroline; Hajovsky, Daniel B; Schwartz, Bryanna; Kaufman, Alan S
2015-01-01
The gender similarities hypothesis of J. S. Hyde (2005), based on large-scale reviews of studies, holds that boys and girls are more alike than different on most psychological variables, including academic skills such as reading and math. Writing is an academic skill that may be an exception. The authors investigated gender differences in academic achievement using a large, nationally stratified sample of children and adolescents ranging from ages 7-19 years (N = 2,027). Achievement data were from the conormed sample for the Kaufman intelligence and achievement tests. Multiple-indicator, multiple-cause, and multigroup mean and covariance structure models were used to test for mean differences. Girls had higher latent reading ability and higher scores on a test of math computation, but the effect sizes were consistent with the gender similarities hypothesis. Conversely, girls scored higher on spelling and written expression, with effect sizes inconsistent with the gender similarities hypothesis. The findings remained the same after controlling for cognitive ability. Girls outperform boys on tasks of writing.
Sources of Error and the Statistical Formulation of MS:mb Seismic Event Screening Analysis
NASA Astrophysics Data System (ADS)
Anderson, D. N.; Patton, H. J.; Taylor, S. R.; Bonner, J. L.; Selby, N. D.
2014-03-01
The Comprehensive Nuclear-Test-Ban Treaty (CTBT), a global ban on nuclear explosions, is currently in a ratification phase. Under the CTBT, an International Monitoring System (IMS) of seismic, hydroacoustic, infrasonic and radionuclide sensors is operational, and the data from the IMS is analysed by the International Data Centre (IDC). The IDC provides CTBT signatories basic seismic event parameters and a screening analysis indicating whether an event exhibits explosion characteristics (for example, shallow depth). An important component of the screening analysis is a statistical test of the null hypothesis H0: explosion characteristics, using empirical measurements of seismic energy (magnitudes). The established magnitude used for event size is the body-wave magnitude (denoted mb) computed from the initial segment of a seismic waveform. IDC screening analysis is applied to events with mb greater than 3.5. The Rayleigh-wave magnitude (denoted MS) is a measure of later-arriving surface-wave energy. Magnitudes are measurements of seismic energy that include adjustments (physical correction model) for path and distance effects between event and station. Relative to mb, earthquakes generally have a larger MS magnitude than explosions. This article proposes a hypothesis test (screening analysis) using MS and mb that expressly accounts for physical correction model inadequacy in the standard error of the test statistic. With this hypothesis test formulation, the 2009 Democratic People's Republic of Korea announced nuclear weapon test fails to reject the null hypothesis H0: explosion characteristics.
Qu, Long; Guennel, Tobias; Marshall, Scott L
2013-12-01
Following the rapid development of genome-scale genotyping technologies, genetic association mapping has become a popular tool to detect genomic regions responsible for certain (disease) phenotypes, especially in early-phase pharmacogenomic studies with limited sample size. In response to such applications, a good association test needs to be (1) applicable to a wide range of possible genetic models, including, but not limited to, the presence of gene-by-environment or gene-by-gene interactions and non-linearity of a group of marker effects, (2) accurate in small samples, fast to compute on the genomic scale, and amenable to large scale multiple testing corrections, and (3) reasonably powerful to locate causal genomic regions. The kernel machine method represented in linear mixed models provides a viable solution by transforming the problem into testing the nullity of variance components. In this study, we consider score-based tests by choosing a statistic linear in the score function. When the model under the null hypothesis has only one error variance parameter, our test is exact in finite samples. When the null model has more than one variance parameter, we develop a new moment-based approximation that performs well in simulations. Through simulations and analysis of real data, we demonstrate that the new test possesses most of the aforementioned characteristics, especially when compared to existing quadratic score tests or restricted likelihood ratio tests. © 2013, The International Biometric Society.
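In the spirit of the variance-component score tests described above (the paper's exact statistic and moment-based approximation may differ), here is a compact sketch: a score-type quadratic form for H0: τ = 0 with a moment-matched scaled chi-square reference distribution, applied to a linear kernel built from synthetic markers.

```python
# Sketch: moment-matched score-type test of H0: tau = 0 in
# y = X b + h + e, with h ~ N(0, tau * K), e ~ N(0, sigma2 * I).
import numpy as np
from scipy.stats import chi2

def vc_score_test(y, X, K):
    n, p = X.shape
    P0 = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)    # residual projector
    r = P0 @ y
    sigma2 = (r @ r) / (n - p)                            # error variance estimate
    Q = (r @ K @ r) / sigma2                              # score-type statistic
    PK = P0 @ K @ P0
    mean_q = np.trace(PK)                                 # null moments of Q
    var_q = 2.0 * np.trace(PK @ PK)
    scale, df = var_q / (2 * mean_q), 2 * mean_q**2 / var_q
    return Q, chi2.sf(Q / scale, df)                      # scaled chi-square p

rng = np.random.default_rng(7)
n = 120
X = np.column_stack([np.ones(n), rng.normal(size=n)])     # intercept + covariate
G = rng.binomial(2, 0.3, size=(n, 10)).astype(float)      # 10 synthetic markers
K = G @ G.T                                               # linear kernel
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)         # data generated under H0
print(vc_score_test(y, X, K))                             # p should look uniform
```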
THE DYNAMIC LEAP AND BALANCE TEST (DLBT): A TEST-RETEST RELIABILITY STUDY
Newman, Thomas M.; Smith, Brent I.; John Miller, Sayers
2017-01-01
Background: There is a need for new clinical assessment tools to test dynamic balance during typical functional movements. Common methods for assessing dynamic balance, such as the Star Excursion Balance Test, which requires controlled movement of body segments over an unchanged base of support, may not be an adequate measure for testing typical functional movements that involve controlled movement of body segments along with a change in base of support. Purpose/Hypothesis: The purpose of this study was to determine the reliability of the Dynamic Leap and Balance Test (DLBT) by assessing its test-retest reliability. It was hypothesized that there would be no statistically significant differences between testing days in time taken to complete the test. Study Design: Reliability study. Methods: Thirty healthy college-aged individuals participated in this study. Participants performed a series of leaps in a prescribed sequence, unique to the DLBT test. Time required by the participants to complete the 20-leap task was the dependent variable. Subjects leaped back and forth from peripheral to central targets, alternating weight bearing from one leg to the other. Participants landed on the central target with the tested limb and were required to stabilize for two seconds before leaping to the next target. Stability was based upon qualitative measures similar to the Balance Error Scoring System. Each assessment was comprised of three trials and performed on two days separated by at least six days. Results: Two-way mixed ANOVA was used to analyze the differences in time to complete the sequence between the three-trial averages of the two testing sessions. The Intraclass Correlation Coefficient (ICC(3,1)) was used to establish between-session test-retest reliability of the test trial averages. Significance was set a priori at p ≤ 0.05. No significant differences (p > 0.05) were detected between the two testing sessions. The ICC was 0.93 with a 95% confidence interval from 0.84 to 0.96. Conclusion: This test is a cost-effective, easy to administer and clinically relevant novel measure for assessing dynamic balance that has excellent test-retest reliability. Clinical Relevance: As a new measure of dynamic balance, the DLBT has the potential to be a cost-effective, challenging and functional tool for clinicians. Level of Evidence: 2b. PMID:28900556
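For reference, ICC(3,1) as used above can be computed directly from an n_subjects × k_sessions matrix (Shrout and Fleiss two-way mixed, consistency); the data below are simulated, not the study's.

```python
# Sketch: ICC(3,1) from a subjects x sessions matrix of scores.
import numpy as np

def icc_3_1(Y):
    """Two-way mixed, consistency, single-measurement ICC (Shrout & Fleiss)."""
    n, k = Y.shape
    grand = Y.mean()
    ms_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects MS
    resid = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))              # residual MS
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

rng = np.random.default_rng(5)
true_times = rng.normal(30, 5, size=30)                 # 30 simulated participants
sessions = np.column_stack(
    [true_times + rng.normal(0, 1.2, 30) for _ in range(2)]  # 2 test days
)
print(f"ICC(3,1) = {icc_3_1(sessions):.2f}")            # high, around 0.9+
```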
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audenaert, Koenraad M. R.; Mosonyi, Milán
2014-10-01
We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ₁, …, σᵣ. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ₁, …, σᵣ), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences min_{j<k} C(σⱼ, σₖ).
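The binary divergence named above is directly computable for small examples. The sketch below evaluates C(ρ, σ) = −log min_{0≤s≤1} Tr(ρ^s σ^(1−s)) by grid search and takes the pairwise minimum for the multi-hypothesis quantity; the qubit states are arbitrary examples, not from the paper.

```python
# Sketch: binary quantum Chernoff divergence and the pairwise minimum used
# in the multi-hypothesis bound, via a grid search over the exponent s.
import numpy as np
from itertools import combinations
from scipy.linalg import fractional_matrix_power as mpow

def chernoff_divergence(rho, sigma, s_grid=np.linspace(0.01, 0.99, 99)):
    """C(rho, sigma) = -log min_{0<=s<=1} Tr(rho^s sigma^(1-s))."""
    q = min(np.real(np.trace(mpow(rho, s) @ mpow(sigma, 1 - s))) for s in s_grid)
    return -np.log(q)

def multi_chernoff(states):
    """Minimum over all pairs: the conjectured multi-hypothesis rate."""
    return min(chernoff_divergence(a, b) for a, b in combinations(states, 2))

def qubit(p, theta):
    """A mixed qubit state: weight p on a pure state, rest maximally mixed."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return p * np.outer(v, v) + (1 - p) * np.eye(2) / 2

states = [qubit(0.9, 0.0), qubit(0.9, np.pi / 4), qubit(0.9, np.pi / 2)]
print(f"pairwise-minimum Chernoff divergence: {multi_chernoff(states):.4f}")
```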
Pravosudov, Vladimir V; Clayton, Nicola S
2002-08-01
To test the hypothesis that accurate cache recovery is more critical for birds that live in harsh conditions where the food supply is limited and unpredictable, the authors compared food caching, memory, and the hippocampus of black-capped chickadees (Poecile atricapilla) from Alaska and Colorado. Under identical laboratory conditions, Alaska chickadees (a) cached significantly more food; (b) were more efficient at cache recovery; (c) performed more accurately on one-trial associative learning tasks in which birds had to rely on spatial memory, but did not differ when tested on a nonspatial version of this task; and (d) had significantly larger hippocampal volumes containing more neurons compared with Colorado chickadees. The results support the hypothesis that these population differences may reflect adaptations to a harsh environment.
The Failed Feminist Challenge to `Fundamental Epistemology'
NASA Astrophysics Data System (ADS)
Pinnick, Cassandra L.
Despite volumes written in the name of the new and fundamental feminist project in philosophy of science, and conclusions drawn on the strength of the hypothesis that the feminist project will boost progress toward cognitive aims associated with science and rationality (and, one might add, policy decisions enacted in the name of these aims), the whole rationale for the project remains (after 20 years, plus) wholly unsubstantiated. We must remain agnostic about its evidentiary merits or demerits. This is because we are without evidence to test the hypothesis: certainly, we have no data that would test the strength of the hypothesis as asserting a causal relationship between women and cognitive ends. Thus, any self-respecting epistemologist who places a premium on evidence-driven belief and justification ought not to accept the hypothesis. By extension, there is no reasoned basis to draw any definitive conclusion about the project itself. No matter how self-evidently correct.
Geographic profiling applied to testing models of bumble-bee foraging.
Raine, Nigel E; Rossmo, D Kim; Le Comber, Steven C
2009-03-06
Geographic profiling (GP) was originally developed as a statistical tool to help police forces prioritize lists of suspects in investigations of serial crimes. GP uses the location of related crime sites to make inferences about where the offender is most likely to live, and has been extremely successful in criminology. Here, we show how GP is applicable to experimental studies of animal foraging, using the bumble-bee Bombus terrestris. GP techniques enable us to simplify complex patterns of spatial data down to a small number of parameters (2-3) for rigorous hypothesis testing. Combining computer model simulations and experimental observation of foraging bumble-bees, we demonstrate that GP can be used to discriminate between foraging patterns resulting from (i) different hypothetical foraging algorithms and (ii) different food item (flower) densities. We also demonstrate that combining experimental and simulated data can be used to elucidate animal foraging strategies: specifically that the foraging patterns of real bumble-bees can be reliably discriminated from three out of nine hypothetical foraging algorithms. We suggest that experimental systems, like foraging bees, could be used to test and refine GP model predictions, and that GP offers a useful technique to analyse spatial animal behaviour data in both the laboratory and field.
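As one concrete instance of the GP scoring the authors adapt, here is a sketch of a Rossmo-style criminal geographic targeting (CGT) surface over candidate anchor points; the distance-decay parameters, buffer radius, and "sites" are illustrative, not the study's calibration.

```python
# Sketch: Rossmo-style CGT score surface, the kind of distance-decay model
# GP adapts from crime sites to foraging locations.
import numpy as np

def cgt_surface(sites, grid_x, grid_y, B=2.0, f=1.2, g=1.2, phi=0.5):
    """Score each grid point from Manhattan distances to the observed sites."""
    score = np.zeros((len(grid_y), len(grid_x)))
    for sx, sy in sites:
        d = np.abs(grid_x[None, :] - sx) + np.abs(grid_y[:, None] - sy)
        d = np.maximum(d, 1e-9)                    # avoid division by zero
        outside = phi / d**f                       # decay beyond the buffer B
        inside = (1 - phi) * B**(g - f) / (2 * B - d)**g  # buffer-zone term
        score += np.where(d > B, outside, inside)
    return score

sites = [(3.0, 4.0), (6.0, 5.0), (4.5, 8.0)]       # hypothetical flower patches
gx, gy = np.linspace(0, 10, 101), np.linspace(0, 10, 101)
surface = cgt_surface(sites, gx, gy)
iy, ix = np.unravel_index(surface.argmax(), surface.shape)
print("peak of the profile (inferred anchor point):", gx[ix], gy[iy])
```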
Winter feeding of elk in the Greater Yellowstone Ecosystem and its effects on disease dynamics
Cotterill, Gavin G.; Cross, Paul C.; Cole, Eric K.; Fuda, Rebecca K.; Rogerson, Jared D.; Scurlock, Brandon M.; du Toit, Johan T.
2018-01-01
Providing food to wildlife during periods when natural food is limited results in aggregations that may facilitate disease transmission. This is exemplified in western Wyoming where institutional feeding over the past century has aimed to mitigate wildlife–livestock conflict and minimize winter mortality of elk (Cervus canadensis). Here we review research across 23 winter feedgrounds where the most studied disease is brucellosis, caused by the bacterium Brucella abortus. Traditional veterinary practices (vaccination, test-and-slaughter) have thus far been unable to control this disease in elk, which can spill over to cattle. Current disease-reduction efforts are being guided by ecological research on elk movement and density, reproduction, stress, co-infections and scavengers. Given the right tools, feedgrounds could provide opportunities for adaptive management of brucellosis through regular animal testing and population-level manipulations. Our analyses of several such manipulations highlight the value of a research–management partnership guided by hypothesis testing, despite the constraints of the sociopolitical environment. However, brucellosis is now spreading in unfed elk herds, while other diseases (e.g. chronic wasting disease) are of increasing concern at feedgrounds. Therefore experimental closures of feedgrounds, reduced feeding and lower elk populations merit consideration.
Phylogeny predicts future habitat shifts due to climate change.
Kuntner, Matjaž; Năpăruş, Magdalena; Li, Daiqin; Coddington, Jonathan A
2014-01-01
Taxa may respond differently to climatic changes, depending on phylogenetic or ecological effects, but studies that discern among these alternatives are scarce. Here, we use two species pairs from globally distributed spider clades, each pair representing two lifestyles (generalist, specialist), to test the relative importance of phylogeny versus ecology in predicted responses to climate change. We used a recent phylogenetic hypothesis for nephilid spiders to select four species from two genera (Nephilingis and Nephilengys) that match the above criteria and are fully allopatric but combined occupy all subtropical-tropical regions. Based on their records, we modeled each species' niche space and predicted its shifts 20, 40, 60, and 80 years into the future using customized GIS tools and projected climatic changes. Phylogeny better predicts the species' current ecological preferences than do lifestyles. By 2080 all species face dramatic reductions in suitable habitat (54.8-77.1%) and adapt by moving towards higher altitudes and latitudes, although at different tempos. Phylogeny and lifestyle explain simulated habitat shifts in altitude, but phylogeny is the sole best predictor of latitudinal shifts. Models incorporating phylogenetic relatedness are an important additional tool to predict accurately biotic responses to global change.
Cognitive assessment and health education in children from two different cultures.
Sivaramakrishnan, M; Arocha, J F; Patel, V L
1998-09-01
This paper presents research aimed at investigating high-level comprehension and problem-solving processes in children in two different countries, India and Colombia. To this end, we use a series of health-related cognitive tasks as assessment tools. In one study, we also examine children's performance on these cognitive tasks in relation to their nutritional status and parasitic load. The ages of the children tested ranged from 2 through 14 years. The tasks were designed to assess comprehension of sequences, organization of concepts, understanding of health routines (hygiene practices) and evaluation of hypotheses and evidence. The results show that children approach the different tasks with a body of beliefs and local knowledge of the world that determines their reasoning processes, their comprehension and their problem solving. The results are discussed in terms of cognitive assessment approaches, as applied to classroom instruction. Given that children construct their understanding of reality based on what they already know and that education does not take this into account, we recommend that assessment tools be devised that can tap prior knowledge and understanding, such that this can be analyzed and understood in relation to knowledge taught in the classroom. Current educational assessment fails in such an endeavor.
The Pedometer as a Tool to Enrich Science Learning in a Public Health Context
NASA Astrophysics Data System (ADS)
Rye, James A.; Zizzi, Samuel J.; Vitullo, Elizabeth A.; Tompkins, Nancy O'hara
2005-12-01
The United States is experiencing an obesity epidemic: A science-technology-society public health issue tied to our built environment, which is characterized by heavy dependence on automobiles and reduced opportunities to walk and bicycle for transportation. This presents an informal science education opportunity within "science in personal and social perspectives" to use pedometer technology for enhancing students' understandings about human energy balance. An exploratory study was conducted with 29 teachers to investigate how pedometers could be used for providing academic enrichment to secondary students participating in after-school Health Sciences and Technology Academy clubs. Frequency analysis revealed that the pedometer activities often investigated kilocalorie expenditure and/or incorporated hypothesis testing/experimenting. Teachers' perspectives on learning outcomes most frequently conveyed that students increased their awareness of the importance of health habits relative to kilocalorie intake and expenditure. Pedometers have considerable merit for the regular science curriculum as they allow for numerous mathematics applications and inquiry learning and target concepts such as energy and equilibrium that cut across the National Science Education Standards. Pedometers and associated resources on human energy balance are important tools that science teachers can employ in helping schools respond to the national call to prevent childhood obesity.
Fabrikant, Jerry M; Park, Tae Soon
2011-06-01
Ultrasound, well recognized as an effective diagnostic tool, reveals a thickening of the plantar fascia in patients with plantar fasciitis/fasciosis disease. The authors hypothesized that ultrasound would also reveal a decrease in plantar fascia thickness in patients undergoing treatment for the disease, a hypothesis that had previously been tested only on a limited number of subjects. They conducted a larger study with greater statistical power and found that clinical treatment with injection and biomechanical correction does indeed diminish plantar fascia thickness as shown on ultrasound. The study also revealed that patients experience the most heightened plantar fascia tenderness toward the end of the day, and that improvements in their symptomatic complaints were associated with a reduction in plantar fascia thickness. As a result, the authors conclude that office-based ultrasound can help diagnose and confirm plantar fasciitis/fasciosis through measurement of plantar fascia thickness. Because of the advantages of ultrasound--that it is non-invasive with greater patient acceptance, cost-effective and radiation-free--the imaging tool should be considered and implemented early in the diagnosis and treatment of plantar fasciitis/fasciosis. Copyright © 2011 Elsevier Ltd. All rights reserved.
ChemBank: a small-molecule screening and cheminformatics resource database.
Seiler, Kathleen Petri; George, Gregory A; Happ, Mary Pat; Bodycombe, Nicole E; Carrinski, Hyman A; Norton, Stephanie; Brudz, Steve; Sullivan, John P; Muhlich, Jeremy; Serrano, Martin; Ferraiolo, Paul; Tolliday, Nicola J; Schreiber, Stuart L; Clemons, Paul A
2008-01-01
ChemBank (http://chembank.broad.harvard.edu/) is a public, web-based informatics environment developed through a collaboration between the Chemical Biology Program and Platform at the Broad Institute of Harvard and MIT. This knowledge environment includes freely available data derived from small molecules and small-molecule screens and resources for studying these data. ChemBank is unique among small-molecule databases in its dedication to the storage of raw screening data, its rigorous definition of screening experiments in terms of statistical hypothesis testing, and its metadata-based organization of screening experiments into projects involving collections of related assays. ChemBank stores an increasingly varied set of measurements derived from cells and other biological assay systems treated with small molecules. Analysis tools are available and are continuously being developed that allow the relationships between small molecules, cell measurements, and cell states to be studied. Currently, ChemBank stores information on hundreds of thousands of small molecules and hundreds of biomedically relevant assays that have been performed at the Broad Institute by collaborators from the worldwide research community. The goal of ChemBank is to provide life scientists unfettered access to biomedically relevant data and tools heretofore available primarily in the private sector.