A Rational Analysis of the Selection Task as Optimal Data Selection.
ERIC Educational Resources Information Center
Oaksford, Mike; Chater, Nick
1994-01-01
Experimental data on human reasoning in hypothesis-testing tasks is reassessed in light of a Bayesian model of optimal data selection in inductive hypothesis testing. The rational analysis provided by the model suggests that reasoning in such tasks may be rational rather than subject to systematic bias. (SLD)
A test of ecological optimality for semiarid vegetation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Salvucci, Guido D.; Eagleson, Peter S.; Turner, Edmund K.
1992-01-01
Three ecological optimality hypotheses which have utility in parameter reduction and estimation in a climate-soil-vegetation water balance model are reviewed and tested. The first hypothesis involves short-term optimization of vegetative canopy density through equilibrium soil moisture maximization. The second hypothesis involves vegetation type selection, again through soil moisture maximization, and the third involves soil genesis through plant-induced modification of soil hydraulic properties to values which result in a maximum rate of biomass productivity.
A Hypothesis-Driven Approach to Site Investigation
NASA Astrophysics Data System (ADS)
Nowak, W.
2008-12-01
Variability of subsurface formations and the scarcity of data lead to the notion of aquifer parameters as geostatistical random variables. Given an information need and limited resources for field campaigns, site investigation is often put into the context of optimal design. In optimal design, the types, numbers and positions of samples are optimized under case-specific objectives to meet the information needs. Past studies feature optimal data worth (balancing maximum financial profit in an engineering task versus the cost of additional sampling), or aim at a minimum prediction uncertainty of stochastic models for a prescribed investigation budget. Recent studies also account for other sources of uncertainty outside the hydrogeological range, such as uncertain toxicity, ingestion and behavioral parameters of the affected population when predicting the human health risk from groundwater contamination. The current study looks at optimal site investigation from a new angle. Answering a yes/no question under uncertainty directly requires recasting the original question as a hypothesis test; otherwise, the resulting answer would convey false confidence. A straightforward example is whether a recent contaminant spill will cause contaminant concentrations in excess of a legal limit at a nearby drinking water well. This question can only be answered down to a specified chance of error, i.e., based on the significance level used in hypothesis tests. Optimal design is placed into the hypothesis-driven context by using the chance of providing a false yes/no answer as the new criterion to be minimized. Different configurations apply for one-sided and two-sided hypothesis tests. If a false answer entails financial liability, the hypothesis-driven context can be recast in the context of data worth. The remaining difference is that failure is a hard constraint in the data worth context versus a monetary punishment term in the hypothesis-driven context. The basic principle is discussed and illustrated with the case of a hypothetical contaminant spill and the exceedance of critical contaminant levels at a downstream location. A tempting and important side question is whether site investigation could be tweaked towards a yes or no answer in maliciously biased campaigns by unfair formulation of the optimization objective.
Hirsch, Jameson K; Conner, Kenneth R
2006-12-01
To test the hypothesis that higher levels of optimism reduce the association between hopelessness and suicidal ideation, 284 college students completed self-report measures of optimism and Beck scales for hopelessness, suicidal ideation, and depression. A statistically significant interaction between hopelessness and one measure of optimism was obtained, consistent with the hypothesis that optimism moderates the relationship between hopelessness and suicidal ideation. Hopelessness is not inevitably associated with suicidal ideation. Optimism may be an important moderator of the association. The development of treatments to enhance optimism may complement standard treatments to reduce suicidality that target depression and hopelessness.
Mudge, Joseph F; Penny, Faith M; Houlahan, Jeff E
2012-12-01
Setting optimal significance levels that minimize Type I and Type II errors allows for more transparent and well-considered statistical decision making compared to the traditional α = 0.05 significance level. We use the optimal α approach to re-assess conclusions reached by three recently published tests of the pace-of-life syndrome hypothesis, which attempts to unify occurrences of different physiological, behavioral, and life history characteristics under one theory, over different scales of biological organization. While some of the conclusions reached using optimal α were consistent with those previously reported using the traditional α = 0.05 threshold, opposing conclusions were also frequently reached. The optimal α approach reduced probabilities of Type I and Type II errors, and ensured statistical significance was associated with biological relevance. Biologists should seriously consider their choice of α when conducting null hypothesis significance tests, as there are serious disadvantages with consistent reliance on the traditional but arbitrary α = 0.05 significance level.
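The optimal α idea above lends itself to a short illustration. Below is a minimal sketch (not the authors' code) that, assuming a two-sample t-test with a hypothetical effect size, sample size, and equal weighting of the two error types, finds the α that minimizes the average of the Type I and Type II error rates:

```python
# Minimal sketch of the "optimal alpha" idea: choose the significance level
# that minimizes a weighted average of Type I and Type II error rates for a
# given design. Effect size, sample size, and weights here are hypothetical.
from scipy.optimize import minimize_scalar
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()

def average_error(alpha, effect_size=0.5, n_per_group=30, w1=1.0, w2=1.0):
    """Weighted average of alpha (Type I) and beta (Type II) error rates."""
    beta = 1.0 - power_calc.power(effect_size=effect_size,
                                  nobs1=n_per_group, alpha=alpha)
    return (w1 * alpha + w2 * beta) / (w1 + w2)

result = minimize_scalar(average_error, bounds=(1e-4, 0.5), method="bounded")
print(f"optimal alpha = {result.x:.4f}, average error = {result.fun:.4f}")
```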
Phase II design with sequential testing of hypotheses within each stage.
Poulopoulou, Stavroula; Karlis, Dimitris; Yiannoutsos, Constantin T; Dafni, Urania
2014-01-01
The main goal of a Phase II clinical trial is to decide whether a particular therapeutic regimen is effective enough to warrant further study. The hypothesis tested by Fleming's Phase II design (Fleming, 1982) is H₀: p ≤ p₀ versus H₁: p ≥ p₁, with level α and with power 1 − β at p = p₁, where p₀ is chosen to represent the response probability achievable with standard treatment and p₁ is chosen such that the difference p₁ − p₀ represents a targeted improvement with the new treatment. This hypothesis creates a misinterpretation mainly among clinicians that rejection of the null hypothesis is tantamount to accepting the alternative, and vice versa. As mentioned by Storer (1992), this introduces ambiguity in the evaluation of type I and II errors and the choice of the appropriate decision at the end of the study. Instead of testing this hypothesis, an alternative class of designs is proposed in which two hypotheses are tested sequentially. The hypothesis H₀₁: p ≤ p₀ versus H₁₁: p > p₀ is tested first. If this null hypothesis is rejected, the hypothesis H₀₂: p ≥ p₁ versus H₁₂: p < p₁ is tested next, in order to examine whether the therapy is effective enough to consider further testing in a Phase III study. For the derivation of the proposed design the exact binomial distribution is used to calculate the decision cut-points. The optimal design parameters are chosen so as to minimize the average sample number (ASN) under specific upper bounds for error levels. The optimal values for the design were found using a simulated annealing method.
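As an illustration of the exact-binomial cut-point calculation the abstract mentions, here is a hedged sketch for the first of the two sequential tests; the values of n, p₀, p₁, and α are hypothetical, and the paper's actual design additionally minimizes the average sample number via simulated annealing:

```python
# Sketch of an exact-binomial decision cut-point for the first sequential test
# described above (H01: p <= p0 vs p > p0); the second test (p >= p1 vs p < p1)
# follows the same pattern with the roles reversed. All parameters are
# hypothetical placeholders, not the paper's optimized design.
from scipy.stats import binom

def rejection_cutpoint(n, p_null, alpha):
    """Smallest r with P(X >= r | n, p_null) <= alpha (reject H0: p <= p_null)."""
    for r in range(n + 1):
        if binom.sf(r - 1, n, p_null) <= alpha:
            return r
    return None  # no achievable cut-point at this n

n, p0, p1, alpha = 40, 0.20, 0.40, 0.05
r1 = rejection_cutpoint(n, p0, alpha)          # cut-point for H01
power_at_p1 = binom.sf(r1 - 1, n, p1)          # power of the first test at p1
print(f"reject H01 if responses >= {r1}; power at p1 = {power_at_p1:.3f}")
```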
Risk-Based, Hypothesis-Driven Framework for Hydrological Field Campaigns with Case Studies
NASA Astrophysics Data System (ADS)
Harken, B.; Rubin, Y.
2014-12-01
There are several stages in any hydrological modeling campaign, including: formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration or plume travel time. These predictions often have significant bearing on a decision that must be made. Examples include: how to allocate limited remediation resources between contaminated groundwater sites or where to place a waste repository site. Answering such questions depends on predictions of EPMs using forward models as well as levels of uncertainty related to these predictions. Uncertainty in EPM predictions stems from uncertainty in model parameters, which can be reduced by measurements taken in field campaigns. The costly nature of field measurements motivates a rational basis for determining a measurement strategy that is optimal with respect to the uncertainty in the EPM prediction. The tool of hypothesis testing allows this uncertainty to be quantified by computing the significance of the test resulting from a proposed field campaign. The significance of the test gives a rational basis for determining the optimality of a proposed field campaign. This hypothesis testing framework is demonstrated and discussed using various synthetic case studies. This study involves contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a specified location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical amount of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. The optimality of different field campaigns is assessed by computing the significance of the test resulting from each one. Evaluating the level of significance caused by a field campaign involves steps including likelihood-based inverse modeling and semi-analytical conditional particle tracking.
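The decision logic described here can be caricatured in a few lines. The sketch below, with entirely hypothetical numbers, estimates the probability that plume travel time beats a critical time under a lognormal velocity posterior whose spread stands in for the effect of a field campaign:

```python
# Toy version of the hypothesis-driven campaign assessment: given posterior
# samples of a transport parameter (here, a lognormal seepage velocity whose
# spread depends on the field campaign), estimate the probability that the
# plume arrives before a critical time. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
L, t_crit = 200.0, 365.0          # travel distance [m], critical time [d]

def prob_early_arrival(sigma_log_v, n=100_000):
    """P(travel time <= t_crit) under lognormal velocity uncertainty."""
    v = rng.lognormal(mean=np.log(0.5), sigma=sigma_log_v, size=n)  # [m/d]
    return np.mean(L / v <= t_crit)

# A campaign that narrows the posterior (smaller sigma) sharpens the answer:
for sigma in (1.0, 0.5, 0.2):
    print(f"sigma={sigma:.1f}: P(arrival <= t_crit) = {prob_early_arrival(sigma):.3f}")
```

Candidate campaigns can then be ranked by whether this probability clears the chosen significance threshold for the one-sided test.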
NASA Astrophysics Data System (ADS)
Menne, Matthew J.; Williams, Claude N., Jr.
2005-10-01
An evaluation of three hypothesis test statistics that are commonly used in the detection of undocumented changepoints is described. The goal of the evaluation was to determine whether the use of multiple tests could improve undocumented, artificial changepoint detection skill in climate series. The use of successive hypothesis testing is compared to optimal approaches, both of which are designed for situations in which multiple undocumented changepoints may be present. In addition, the importance of the form of the composite climate reference series is evaluated, particularly with regard to the impact of undocumented changepoints in the various component series that are used to calculate the composite. In a comparison of single test changepoint detection skill, the composite reference series formulation is shown to be less important than the choice of the hypothesis test statistic, provided that the composite is calculated from the serially complete and homogeneous component series. However, each of the evaluated composite series is not equally susceptible to the presence of changepoints in its components, which may be erroneously attributed to the target series. Moreover, a reference formulation that is based on the averaging of the first-difference component series is susceptible to random walks when the composition of the component series changes through time (e.g., values are missing), and its use is, therefore, not recommended. When more than one test is required to reject the null hypothesis of no changepoint, the number of detected changepoints is reduced proportionately less than the number of false alarms in a wide variety of Monte Carlo simulations. Consequently, a consensus of hypothesis tests appears to improve undocumented changepoint detection skill, especially when reference series homogeneity is violated. A consensus of successive hypothesis tests using a semihierarchic splitting algorithm also compares favorably to optimal solutions, even when changepoints are not hierarchic.
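A toy version of the consensus idea is easy to sketch. The code below (illustrative only; the critical values are placeholders, not those of the paper) applies two common undocumented-changepoint statistics to a difference series and requires both to exceed their thresholds:

```python
# Sketch of a two-test consensus for a single undocumented changepoint in a
# target-minus-reference difference series: the maximal two-sample t statistic
# and the standard normal homogeneity test (SNHT), both of which must exceed
# their critical values. Critical values here are illustrative only.
import numpy as np
from scipy import stats

def max_t_statistic(x):
    """Max absolute two-sample t over all candidate split points."""
    best = 0.0
    for k in range(5, len(x) - 5):           # keep >= 5 points per segment
        t = abs(stats.ttest_ind(x[:k], x[k:]).statistic)
        best = max(best, t)
    return best

def snht_statistic(x):
    """Max_k of k*z1bar^2 + (n-k)*z2bar^2 on the standardized series."""
    z = (x - x.mean()) / x.std(ddof=1)
    n, best = len(x), 0.0
    for k in range(5, n - 5):
        best = max(best, k * z[:k].mean()**2 + (n - k) * z[k:].mean()**2)
    return best

rng = np.random.default_rng(0)
series = rng.normal(0, 1, 100)
series[60:] += 1.0                            # artificial step change
consensus = (max_t_statistic(series) > 3.2) and (snht_statistic(series) > 8.0)
print("changepoint detected by consensus:", consensus)
```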
NASA Technical Reports Server (NTRS)
Hess, R. A.
1977-01-01
A brief review of some of the more pertinent applications of analytical pilot models to the prediction of aircraft handling qualities is undertaken. The relative ease with which multiloop piloting tasks can be modeled via the optimal control formulation makes the use of optimal pilot models particularly attractive for handling qualities research. To this end, a rating hypothesis is introduced which relates the numerical pilot opinion rating assigned to a particular vehicle and task to the numerical value of the index of performance resulting from an optimal pilot modeling procedure as applied to that vehicle and task. This hypothesis is tested using data from piloted simulations and is shown to be reasonable. An example concerning a helicopter landing approach is introduced to outline the predictive capability of the rating hypothesis in multiaxis piloting tasks.
Quantum chi-squared and goodness of fit testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Temme, Kristan; Verstraete, Frank
2015-01-15
A quantum mechanical hypothesis test is presented for the hypothesis that a certain setup produces a given quantum state. Although the classical and the quantum problems are very much related to each other, the quantum problem is much richer due to the additional optimization over the measurement basis. A goodness of fit test for i.i.d. quantum states is developed and a max-min characterization for the optimal measurement is introduced. We find the quantum measurement which leads both to the maximal Pitman and Bahadur efficiencies, and determine the associated divergence rates. We discuss the relationship of the quantum goodness of fit test to the problem of estimating multiple parameters from a density matrix. These problems are found to be closely related and we show that the largest error of an optimal strategy, determined by the smallest eigenvalue of the Fisher information matrix, is given by the divergence rate of the goodness of fit test.
Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie
2016-01-01
The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges.
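A minimal caricature of the sequential model-comparison engine that ASAP builds on: accumulate the log Bayes factor between two simple Gaussian observation models and stop once the evidence crosses a fixed threshold. The models and the threshold below are hypothetical stand-ins for the richer generative models the paper has in mind:

```python
# Toy sketch of sequential model comparison: accumulate the log Bayes factor
# between two simple Gaussian models as observations arrive and stop at a
# fixed evidence threshold. Models and threshold are illustrative only.
import numpy as np
from scipy.stats import norm

def sequential_bayes_factor(stream, mu_a=0.0, mu_b=1.0, sigma=1.0,
                            log_bf_stop=3.0):
    log_bf = 0.0
    for n, x in enumerate(stream, start=1):
        log_bf += norm.logpdf(x, mu_a, sigma) - norm.logpdf(x, mu_b, sigma)
        if abs(log_bf) >= log_bf_stop:
            return ("model A" if log_bf > 0 else "model B"), n
    return "undecided", n

rng = np.random.default_rng(0)
model, n = sequential_bayes_factor(rng.normal(1.0, 1.0, 500))  # data under B
print(f"selected {model} after {n} observations")
```

ASAP's extra ingredient, beyond this stopping rule, is choosing the next stimulus online to maximize the expected discrimination between models.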
Batzli, George O
2016-11-01
Increased habitat fragmentation leads to smaller size of habitat patches and to greater distance between patches. The ROMPA hypothesis (ratio of optimal to marginal patch area) uniquely links vole population fluctuations to the composition of the landscape. It states that as ROMPA decreases (fragmentation increases), vole population fluctuations will increase (including the tendency to display multi-annual cycles in abundance) because decreased proportions of optimal habitat result in greater population declines and longer recovery time after a harsh season. To date, only comparative observations in the field have supported the hypothesis. This paper reports the results of the first experimental test. I used prairie voles, Microtus ochrogaster, and mowed grassland to create model landscapes with 3 levels of ROMPA (high with 25% mowed, medium with 50% mowed and low with 75% mowed). As ROMPA decreased, distances between patches of favorable habitat (high cover) increased owing to a greater proportion of unfavorable (mowed) habitat. Results from the first year with intensive live trapping indicated that the preconditions for operation of the hypothesis existed (inversely density dependent emigration and, as ROMPA decreased, increased per capita mortality and decreased per capita movement between optimal patches). Nevertheless, contrary to the prediction of the hypothesis that populations in landscapes with high ROMPA should have the lowest variability, 5 years of trapping indicated that variability was lowest with medium ROMPA. The design of field experiments may never be perfect, but these results indicate that the ROMPA hypothesis needs further rigorous testing.
Optimizing Aircraft Availability: Where to Spend Your Next O&M Dollar
2010-03-01
patterns of variance are present. In addition, we use the Breusch-Pagan test to statistically determine whether homoscedasticity exists. For the Breusch-Pagan test, large p-values are preferred so that we may accept the null hypothesis of constant variance of the residuals. Failure to meet the fourth assumption is... Next, we show the residual-by-predicted plot and the Breusch-Pagan test for constant variance of the residuals. The null hypothesis is that the residuals have constant variance.
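For readers who want to reproduce the diagnostic named in these snippets, here is a small self-contained example using statsmodels (synthetic data; not taken from the report):

```python
# Illustration of the Breusch-Pagan check described above, using statsmodels
# on synthetic data. Large p-values are consistent with the null hypothesis
# of homoscedastic (constant-variance) residuals.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 + 0.5 * x + rng.normal(0, 1 + 0.3 * x)   # variance grows with x

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, X)
print(f"Breusch-Pagan LM p-value = {lm_pvalue:.4f}")  # small => heteroscedastic
```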
When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment.
Szucs, Denes; Ioannidis, John P A
2017-01-01
Null hypothesis significance testing (NHST) has several shortcomings that are likely contributing factors behind the widely debated replication crisis of (cognitive) neuroscience, psychology, and biomedical science in general. We review these shortcomings and suggest that, after sustained negative experience, NHST should no longer be the default, dominant statistical practice of all biomedical and psychological research. If theoretical predictions are weak we should not rely on all-or-nothing hypothesis tests. Different inferential methods may be most suitable for different types of research questions. Whenever researchers use NHST they should justify its use, and publish pre-study power calculations and effect sizes, including negative findings. Hypothesis-testing studies should be pre-registered and, optimally, raw data published. The current "statistics lite" educational approach for students that has sustained the widespread, spurious use of NHST should be phased out.
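A pre-study power calculation of the kind the authors ask researchers to publish takes one call in statsmodels; the effect size, α, and target power below are hypothetical placeholders:

```python
# Minimal pre-study power calculation of the kind the authors recommend
# publishing: sample size needed for a two-sample t-test. The effect size,
# alpha, and target power are hypothetical placeholders.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5,  # Cohen's d
                                          alpha=0.05,
                                          power=0.80,
                                          alternative='two-sided')
print(f"required n per group: {n_per_group:.1f}")  # roughly 64 per group
```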
Optimal foraging, not biogenetic law, predicts spider orb web allometry.
Gregorič, Matjaž; Kiesbüy, Heine C; Lebrón, Shakira G Quiñones; Rozman, Alenka; Agnarsson, Ingi; Kuntner, Matjaž
2013-03-01
The biogenetic law posits that the ontogeny of an organism recapitulates the pattern of evolutionary changes. Morphological evidence has offered some support for, but also considerable evidence against, the hypothesis. However, biogenetic law in behavior remains underexplored. As physical manifestation of behavior, spider webs offer an interesting model for the study of ontogenetic behavioral changes. In orb-weaving spiders, web symmetry often gets distorted through ontogeny, and these changes have been interpreted to reflect the biogenetic law. Here, we test the biogenetic law hypothesis against the alternative, the optimal foraging hypothesis, by studying the allometry in Leucauge venusta orb webs. These webs range in inclination from vertical through tilted to horizontal; biogenetic law predicts that allometry relates to ontogenetic stage, whereas optimal foraging predicts that allometry relates to gravity. Specifically, pronounced asymmetry should only be seen in vertical webs under optimal foraging theory. We show that, through ontogeny, vertical webs in L. venusta become more asymmetrical in contrast to tilted and horizontal webs. Biogenetic law thus cannot explain L. venusta web allometry, but our results instead support optimization of foraging area in response to spider size.
HYPOTHESIS TESTING FOR HIGH-DIMENSIONAL SPARSE BINARY REGRESSION
Mukherjee, Rajarshi; Pillai, Natesh S.; Lin, Xihong
2015-01-01
In this paper, we study the detection boundary for minimax hypothesis testing in the context of high-dimensional, sparse binary regression models. Motivated by genetic sequencing association studies for rare variant effects, we investigate the complexity of the hypothesis testing problem when the design matrix is sparse. We observe a new phenomenon in the behavior of detection boundary which does not occur in the case of Gaussian linear regression. We derive the detection boundary as a function of two components: a design matrix sparsity index and signal strength, each of which is a function of the sparsity of the alternative. For any alternative, if the design matrix sparsity index is too high, any test is asymptotically powerless irrespective of the magnitude of signal strength. For binary design matrices with the sparsity index that is not too high, our results are parallel to those in the Gaussian case. In this context, we derive detection boundaries for both dense and sparse regimes. For the dense regime, we show that the generalized likelihood ratio is rate optimal; for the sparse regime, we propose an extended Higher Criticism Test and show it is rate optimal and sharp. We illustrate the finite sample properties of the theoretical results using simulation studies. PMID:26246645
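For orientation, here is a sketch of the classical Higher Criticism statistic (in the spirit of Donoho and Jin) applied to a vector of p-values; the paper's extended version for sparse binary designs differs in detail:

```python
# Sketch of the classical Higher Criticism statistic over the smallest
# alpha0*n p-values; the paper's extended HC test differs in detail.
import numpy as np

def higher_criticism(pvalues, alpha0=0.5):
    """HC = max standardized discrepancy between empirical and uniform
    quantiles, over the smallest alpha0*n order statistics."""
    p = np.clip(np.sort(np.asarray(pvalues)), 1e-12, 1 - 1e-12)
    n = len(p)
    k = max(1, int(alpha0 * n))
    i = np.arange(1, k + 1)
    hc = np.sqrt(n) * (i / n - p[:k]) / np.sqrt(p[:k] * (1 - p[:k]))
    return np.max(hc)

rng = np.random.default_rng(0)
null_p = rng.uniform(size=1000)               # global null: uniform p-values
print(f"HC under null: {higher_criticism(null_p):.2f}")
```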
Self-Efficacy and Interest: Experimental Studies of Optimal Incompetence.
ERIC Educational Resources Information Center
Silvia, Paul J.
2003-01-01
To test the optimal incompetence hypothesis (high self-efficacy lowers task interest), 30 subjects rated interest, perceived difficulty, and confidence of success in different tasks. In study 2, 33 subjects completed a dart-game task in easy, moderate, and difficult conditions. In both, interest was a quadratic function of self-efficacy,…
The potential for increased power from combining P-values testing the same hypothesis.
Ganju, Jitendra; Julie Ma, Guoguang
2017-02-01
The conventional approach to hypothesis testing for formal inference is to prespecify a single test statistic thought to be optimal. However, we usually have more than one test statistic in mind for testing the null hypothesis of no treatment effect but we do not know which one is the most powerful. Rather than relying on a single p-value, one can combine p-values from multiple prespecified test statistics for inference. Combining functions include Fisher's combination test and the minimum p-value. Using randomization-based tests, the increase in power can be remarkable when compared with a single test and Simes's method. The versatility of the method is that it also applies when the number of covariates exceeds the number of observations. The increase in power is large enough to prefer combined p-values over a single p-value. The limitation is that the method does not provide an unbiased estimator of the treatment effect and does not apply to situations when the model includes treatment by covariate interaction.
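Both combining functions are easy to state in code. The sketch below evaluates Fisher's combination (or the minimum p-value) against a permutation reference distribution, in the randomization-based spirit of the paper; the observed p-values and permutation draws are illustrative stand-ins:

```python
# Sketch of the two combining functions mentioned above, evaluated against a
# permutation (randomization) reference distribution. The observed p-values
# and the permutation array are illustrative stand-ins.
import numpy as np

def fisher_combination(pvals):
    return -2.0 * np.sum(np.log(pvals))

def min_p(pvals):
    return np.min(pvals)

def combined_pvalue(observed_pvals, permuted_pvals, combine=fisher_combination):
    """Permutation p-value of the combined statistic; permuted_pvals is an
    (n_perm, k) array of p-values from the same k tests on permuted data."""
    obs = combine(observed_pvals)
    perm = np.apply_along_axis(combine, 1, permuted_pvals)
    if combine is min_p:                       # small min-p is extreme
        return np.mean(perm <= obs)
    return np.mean(perm >= obs)                # large Fisher stat is extreme

rng = np.random.default_rng(0)
obs = np.array([0.03, 0.20, 0.08])             # p-values from 3 test statistics
perm = rng.uniform(size=(5000, 3))             # stand-in for permutation p-values
print(f"combined (Fisher) p = {combined_pvalue(obs, perm):.4f}")
```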
The optimal power puzzle: scrutiny of the monotone likelihood ratio assumption in multiple testing.
Cao, Hongyuan; Sun, Wenguang; Kosorok, Michael R
2013-01-01
In single hypothesis testing, power is a non-decreasing function of type I error rate; hence it is desirable to test at the nominal level exactly to achieve optimal power. The puzzle lies in the fact that for multiple testing, under the false discovery rate paradigm, such a monotonic relationship may not hold. In particular, exact false discovery rate control may lead to a less powerful testing procedure if a test statistic fails to fulfil the monotone likelihood ratio condition. In this article, we identify different scenarios wherein the condition fails and give caveats for conducting multiple testing in practical settings.
Phi Index: A New Metric to Test the Flush Early and Avoid the Rush Hypothesis
Samia, Diogo S. M.; Blumstein, Daniel T.
2014-01-01
Optimal escape theory states that animals should counterbalance the costs and benefits of flight when escaping from a potential predator. However, in apparent contradiction with this well-established optimality model, birds and mammals generally initiate escape soon after beginning to monitor an approaching threat, a phenomenon codified as the “Flush Early and Avoid the Rush” (FEAR) hypothesis. Typically, the FEAR hypothesis is tested using correlational statistics and is supported when there is a strong relationship between the distance at which an individual first responds behaviorally to an approaching predator (alert distance, AD), and its flight initiation distance (the distance at which it flees the approaching predator, FID). However, such correlational statistics are both inadequate to analyze relationships constrained by an envelope (such as that in the AD-FID relationship) and are sensitive to outliers with high leverage, which can lead one to erroneous conclusions. To overcome these statistical concerns we develop the phi index (Φ), a distribution-free metric to evaluate the goodness of fit of a 1:1 relationship in a constraint envelope (the prediction of the FEAR hypothesis). Using both simulation and empirical data, we conclude that Φ is superior to traditional correlational analyses because it explicitly tests the FEAR prediction, is robust to outliers, and it controls for the disproportionate influence of observations from large predictor values (caused by the constrained envelope in AD-FID relationship). Importantly, by analyzing the empirical data we corroborate the strong effect that alertness has on flight as stated by the FEAR hypothesis. PMID:25405872
Radhika, Venkatesan; Kost, Christian; Bartram, Stefan; Heil, Martin
2008-01-01
Many plants respond to herbivory with an increased production of extrafloral nectar (EFN) and/or volatile organic compounds (VOCs) to attract predatory arthropods as an indirect defensive strategy. In this study, we tested whether these two indirect defences fit the optimal defence hypothesis (ODH), which predicts the within-plant allocation of anti-herbivore defences according to trade-offs between growth and defence. Using jasmonic acid-induced plants of Phaseolus lunatus and Ricinus communis, we tested whether the within-plant distribution pattern of these two indirect defences reflects the fitness value of the respective plant parts. Furthermore, we quantified photosynthetic rates and followed the within-plant transport of assimilates with 13C labelling experiments. EFN secretion and VOC emission were highest in younger leaves. Moreover, the photosynthetic rate increased with leaf age, and pulse-labelling experiments suggested transport of carbon to younger leaves. Our results demonstrate that the ODH can explain the within-plant allocation pattern of both indirect defences studied. PMID:18493790
Hypothesis tests for the detection of constant speed radiation moving sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir
2015-07-01
Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, inefficient under too low signal to noise ratios, temporal correlation algorithms have been introduced. Test hypothesis methods based on empirically estimated mean and variance of the signals delivered by the different channels have shown significant gain in terms of a tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive background, and a vehicle source carrier under the same respectively high and low count rate radioactive background, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm, while guaranteeing the stability of its optimization parameter regardless of signal to noise ratio variations between 2 and 0.8.
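The Poisson ingredient of such a test can be illustrated compactly. The sketch below (hypothetical rates and windows, far simpler than the disclosed method) alarms when counts summed over channels are improbable under a background-only Poisson hypothesis:

```python
# Sketch of a Poisson-based decision on counts summed over network channels
# in a time window aligned with the source transit. Background rate, window,
# and false-alarm target are hypothetical.
from scipy.stats import poisson

def poisson_alarm(observed_counts, background_rate, window_s, far=1e-3):
    """Alarm if the summed counts are improbable under the background-only
    (Poisson) hypothesis at false-alarm probability `far`."""
    mu = background_rate * window_s                 # expected background counts
    n = sum(observed_counts)
    p_value = poisson.sf(n - 1, mu)                 # P(N >= n | background)
    return p_value < far, p_value

alarm, p = poisson_alarm(observed_counts=[12, 15, 11, 14],
                         background_rate=10.0, window_s=4.0)
print(f"alarm={alarm}, p-value={p:.2e}")
```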
Predicting Cost and Schedule Growth for Military and Civil Space Systems
2008-03-01
the Shapiro-Wilk test, and testing the residuals for constant variance using the Breusch-Pagan test. For logistic models, diagnostics include... the Breusch-Pagan test. With this test, a p-value below 0.05 rejects the null hypothesis that the residuals have constant variance. Thus, similar to the Shapiro-Wilk test, because the optimal model will have constant variance of its residuals, this requires Breusch-Pagan p-values over 0.05.
Is protection against florivory consistent with the optimal defense hypothesis?
Godschalx, Adrienne L; Stady, Lauren; Watzig, Benjamin; Ballhorn, Daniel J
2016-01-28
Plant defense traits require resources and energy that plants may otherwise use for growth and reproduction. In order to most efficiently protect plant tissues from herbivory, one widely accepted assumption of the optimal defense hypothesis states that plants protect tissues most relevant to fitness. Reproductive organs directly determining plant fitness, including flowers and immature fruit, as well as young, productive leaf tissue thus should be particularly well-defended. To test this hypothesis, we quantified the cyanogenic potential (HCNp), a direct chemical defense, systemically expressed in vegetative and reproductive organs in lima bean (Phaseolus lunatus), and we tested susceptibility of these organs in bioassays with a generalist insect herbivore, the Large Yellow Underwing (Noctuidae: Noctua pronuba). To determine the actual impact of either florivory (herbivory on flowers) or folivory on seed production as a measure of maternal fitness, we removed varying percentages of total flowers or young leaf tissue and quantified developing fruit, seeds, and seed viability. We found extremely low HCNp in flowers (8.66 ± 2.19 μmol CN⁻ g⁻¹ FW in young, white flowers, 6.23 ± 1.25 μmol CN⁻ g⁻¹ FW in mature, yellow flowers) and in pods (ranging from 32.05 ± 7.08 to 0.09 ± 0.08 μmol CN⁻ g⁻¹ FW in young to mature pods, respectively) whereas young leaves showed high levels of defense (67.35 ± 3.15 μmol CN⁻ g⁻¹ FW). Correspondingly, herbivores consumed more flowers than any other tissue, which, when taken alone, appears to contradict the optimal defense hypothesis. However, experimentally removing flowers did not significantly impact fitness, while leaf tissue removal significantly reduced production of viable seeds. Even though flowers were the least defended and most consumed, our results support the optimal defense hypothesis due to (i) the lack of flower removal effects on fitness and (ii) the high defense investment in young leaves, which have high consequences for fitness. These data highlight the importance of considering plant defense interactions from multiple angles; interpreting where empirical data fit within any plant defense hypothesis requires understanding the fitness consequences associated with the observed defense pattern.
Phase Transitions in Living Neural Networks
NASA Astrophysics Data System (ADS)
Williams-Garcia, Rashid Vladimir
Our nervous systems are composed of intricate webs of interconnected neurons interacting in complex ways. These complex interactions result in a wide range of collective behaviors with implications for features of brain function, e.g., information processing. Under certain conditions, such interactions can drive neural network dynamics towards critical phase transitions, where power-law scaling is conjectured to allow optimal behavior. Recent experimental evidence is consistent with this idea and it seems plausible that healthy neural networks would tend towards optimality. This hypothesis, however, is based on two problematic assumptions, which I describe and for which I present alternatives in this thesis. First, critical transitions may vanish due to the influence of an environment, e.g., a sensory stimulus, and so living neural networks may be incapable of achieving "critical" optimality. I develop a framework known as quasicriticality, in which a relative optimality can be achieved depending on the strength of the environmental influence. Second, the power-law scaling supporting this hypothesis is based on statistical analysis of cascades of activity known as neuronal avalanches, which conflate causal and non-causal activity, thus confounding important dynamical information. In this thesis, I present a new method to unveil causal links, known as causal webs, between neuronal activations, thus allowing for experimental tests of the quasicriticality hypothesis and other practical applications.
NASA Astrophysics Data System (ADS)
Audenaert, Koenraad M. R.; Mosonyi, Milán
2014-10-01
We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ1, …, σr. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ1, …, σr), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences, min_{j<k} C(σj, σk).
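Numerically, the pairwise minimization is straightforward. The following sketch computes the binary quantum Chernoff divergence C(ρ, σ) = −log min_{0≤s≤1} Tr(ρˢσ¹⁻ˢ) for random full-rank density matrices and takes the minimum over pairs; it is an illustration, not the paper's analysis:

```python
# Numerical sketch of the multi-hypothesis Chernoff quantity: the minimum
# over pairs of the binary quantum Chernoff divergence
#   C(rho, sigma) = -log min_{0<=s<=1} Tr(rho^s sigma^(1-s)).
# States are small random full-rank density matrices for illustration.
import numpy as np
from itertools import combinations
from scipy.optimize import minimize_scalar

def mpow(rho, s):
    """Fractional power of a positive definite Hermitian matrix."""
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, 1e-12, None)                 # guard tiny negative round-off
    return (v * w**s) @ v.conj().T

def chernoff_divergence(rho, sigma):
    q = lambda s: np.trace(mpow(rho, s) @ mpow(sigma, 1 - s)).real
    res = minimize_scalar(q, bounds=(0.0, 1.0), method="bounded")
    return -np.log(res.fun)

def random_state(d, rng):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(0)
states = [random_state(3, rng) for _ in range(4)]
C = min(chernoff_divergence(p, q) for p, q in combinations(states, 2))
print(f"multi-hypothesis Chernoff bound C = {C:.4f}")
```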
A robust hypothesis test for the sensitive detection of constant speed radiation moving sources
NASA Astrophysics Data System (ADS)
Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir; Boudergui, Karim; Moline, Yoann; Sannié, Guillaume; Gameiro, Jordan; Normand, Stéphane; Méchin, Laurence
2015-09-01
Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Test hypothesis methods based on empirically estimated mean and variance of the signals delivered by the different channels have shown significant gain in terms of a tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive backgrounds, and a vehicle source carrier under the same respectively high and low count rate radioactive backgrounds, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm. It also guarantees that the optimal coverage factor for this compromise remains stable regardless of signal-to-noise ratio variations between 2 and 0.8, therefore allowing the final user to parametrize the test with the sole prior knowledge of background amplitude.
Fisher, Neyman-Pearson or NHST? A tutorial for teaching data testing.
Perezgonzalez, Jose D
2015-01-01
Despite frequent calls for the overhaul of null hypothesis significance testing (NHST), this controversial procedure remains ubiquitous in behavioral, social and biomedical teaching and research. Little change seems possible once the procedure becomes well ingrained in the minds and current practice of researchers; thus, the optimal opportunity for such change is at the time the procedure is taught, be this at undergraduate or at postgraduate levels. This paper presents a tutorial for the teaching of data testing procedures, often referred to as hypothesis testing theories. The first procedure introduced is Fisher's approach to data testing (tests of significance); the second is Neyman-Pearson's approach (tests of acceptance); the final procedure is the incongruent combination of the previous two theories into the current approach, NHST. For those researchers sticking with the latter, two compromise solutions on how to improve NHST conclude the tutorial.
Chiang, Kuo-Szu; Bock, Clive H; Lee, I-Hsuan; El Jarroudi, Moussa; Delfosse, Philippe
2016-12-01
The effect of rater bias and assessment method on hypothesis testing was studied for representative experimental designs for plant disease assessment using balanced and unbalanced data sets. Data sets with the same number of replicate estimates for each of two treatments are termed "balanced" and those with unequal numbers of replicate estimates are termed "unbalanced". The three assessment methods considered were nearest percent estimates (NPEs), an amended 10% incremental scale, and the Horsfall-Barratt (H-B) scale. Estimates of severity of Septoria leaf blotch on leaves of winter wheat were used to develop distributions for a simulation model. The experimental designs are presented here in the context of simulation experiments which consider the optimal design for the number of specimens (individual units sampled) and the number of replicate estimates per specimen for a fixed total number of observations (total sample size for the treatments being compared). The criterion used to gauge each method was the power of the hypothesis test. As expected, at a given fixed number of observations, the balanced experimental designs invariably resulted in a higher power compared with the unbalanced designs at different disease severity means, mean differences, and variances. Based on these results, with unbiased estimates using NPE, the recommended number of replicate estimates taken per specimen is 2 (from a sample of specimens of at least 30), because this conserves resources. Furthermore, for biased estimates, an apparent difference in the power of the hypothesis test was observed between assessment methods and between experimental designs. Results indicated that, regardless of experimental design or rater bias, an amended 10% incremental scale has slightly less power compared with NPEs, and that the H-B scale is more likely than the others to cause a type II error. These results suggest that choice of assessment method, optimizing sample number and number of replicate estimates, and using a balanced experimental design are important criteria to consider to maximize the power of hypothesis tests for comparing treatments using disease severity estimates.
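The design comparison at the heart of this study can be mimicked with a small simulation. The sketch below uses simplified Gaussian stand-ins for severity distributions and rater error (the paper uses empirically derived distributions) to compare the power of balanced and unbalanced allocations of the same total sample:

```python
# Simulation sketch of the design comparison described above: power of a
# two-sample t-test on mean severity estimates for a balanced versus an
# unbalanced allocation of the same total number of observations. The
# severity distributions and rater-error model are simplified stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def power(n1, n2, mean1=20.0, mean2=25.0, sd=8.0, rater_sd=5.0,
          reps=2, n_sim=4000, alpha=0.05):
    hits = 0
    for _ in range(n_sim):
        true1 = rng.normal(mean1, sd, n1)      # true severities, treatment 1
        true2 = rng.normal(mean2, sd, n2)      # true severities, treatment 2
        est1 = true1 + rng.normal(0, rater_sd, (reps, n1)).mean(axis=0)
        est2 = true2 + rng.normal(0, rater_sd, (reps, n2)).mean(axis=0)
        hits += stats.ttest_ind(est1, est2).pvalue < alpha
    return hits / n_sim

print(f"balanced   (30+30): power = {power(30, 30):.3f}")
print(f"unbalanced (15+45): power = {power(15, 45):.3f}")
```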
Thermal physiology, disease, and amphibian declines on the eastern slopes of the Andes.
Catenazzi, Alessandro; Lehr, Edgar; Vredenburg, Vance T
2014-04-01
Rising temperatures, a widespread consequence of climate change, have been implicated in enigmatic amphibian declines from habitats with little apparent human impact. The pathogenic fungus Batrachochytrium dendrobatidis (Bd), now widespread in Neotropical mountains, may act in synergy with climate change causing collapse in thermally stressed hosts. We measured the thermal tolerance of frogs along a wide elevational gradient in the Tropical Andes, where frog populations have collapsed. We used the difference between critical thermal maximum and the temperature a frog experiences in nature as a measure of tolerance to high temperatures. Temperature tolerance increased as elevation increased, suggesting that frogs at higher elevations may be less sensitive to rising temperatures. We tested the alternative pathogen optimal growth hypothesis that prevalence of the pathogen should decrease as temperatures fall outside the optimal range of pathogen growth. Our infection-prevalence data supported the pathogen optimal growth hypothesis because we found that prevalence of Bd increased when host temperatures matched its optimal growth range. These findings suggest that rising temperatures may not be the driver of amphibian declines in the eastern slopes of the Andes. Zoonotic outbreaks of Bd are the most parsimonious hypothesis to explain the collapse of montane amphibian faunas; but our results also reveal that lowland tropical amphibians, despite being shielded from Bd by higher temperatures, are vulnerable to climate-warming stress.
Predicting Short-Term Remembering as Boundedly Optimal Strategy Choice.
Howes, Andrew; Duggan, Geoffrey B; Kalidindi, Kiran; Tseng, Yuan-Chi; Lewis, Richard L
2016-07-01
It is known that, on average, people adapt their choice of memory strategy to the subjective utility of interaction. What is not known is whether an individual's choices are boundedly optimal. Two experiments are reported that test the hypothesis that an individual's decisions about the distribution of remembering between internal and external resources are boundedly optimal where optimality is defined relative to experience, cognitive constraints, and reward. The theory makes predictions that are tested against data, not fitted to it. The experiments use a no-choice/choice utility learning paradigm where the no-choice phase is used to elicit a profile of each participant's performance across the strategy space and the choice phase is used to test predicted choices within this space. They show that the majority of individuals select strategies that are boundedly optimal. Further, individual differences in what people choose to do are successfully predicted by the analysis. Two issues are discussed: (a) the performance of the minority of participants who did not find boundedly optimal adaptations, and (b) the possibility that individuals anticipate what, with practice, will become a bounded optimal strategy, rather than what is boundedly optimal during training.
NASA Astrophysics Data System (ADS)
Renes, Joseph M.
2017-10-01
We extend the recent bounds of Sason and Verdú relating Rényi entropy and Bayesian hypothesis testing (arXiv:1701.01974) to the quantum domain and show that they have a number of different applications. First, we obtain a sharper bound relating the optimal probability of correctly distinguishing elements of an ensemble of states to that of the pretty good measurement, and an analogous bound for optimal and pretty good entanglement recovery. Second, we obtain bounds relating optimal guessing and entanglement recovery to the fidelity of the state with a product state, which then leads to tight tripartite uncertainty and monogamy relations.
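As a concrete reference point for the pretty good measurement (PGM) appearing in these bounds, the sketch below computes the PGM guessing probability for a small ensemble; the two-state example is illustrative, and the bracketing P_opt ≥ P_PGM ≥ P_opt² cited in the comment is the standard Barnum-Knill-type relation rather than the sharper bound derived in the paper:

```python
# Sketch of the pretty good measurement (PGM) guessing probability for an
# ensemble {p_i, rho_i}; the optimal guessing probability P_opt is known to
# satisfy P_opt >= P_pgm >= P_opt^2 (standard Barnum-Knill-type relation).
import numpy as np

def inv_sqrt(rho, tol=1e-12):
    """Inverse square root on the support of a PSD matrix."""
    w, v = np.linalg.eigh(rho)
    w_is = np.where(w > tol, 1.0 / np.sqrt(np.clip(w, tol, None)), 0.0)
    return (v * w_is) @ v.conj().T

def pgm_success(priors, states):
    rho = sum(p * r for p, r in zip(priors, states))
    s = inv_sqrt(rho)
    # PGM elements M_i = rho^{-1/2} (p_i rho_i) rho^{-1/2}
    return sum(p * np.trace(s @ (p * r) @ s @ r).real
               for p, r in zip(priors, states))

# Two pure qubit states with equal priors (illustrative):
psi0 = np.array([1, 0], dtype=complex)
psi1 = np.array([1, 1], dtype=complex) / np.sqrt(2)
states = [np.outer(v, v.conj()) for v in (psi0, psi1)]
print(f"PGM success probability: {pgm_success([0.5, 0.5], states):.4f}")
```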
Biostatistics Series Module 2: Overview of Hypothesis Testing.
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Hypothesis testing (or statistical inference) is one of the major applications of biostatistics. Much of medical research begins with a research question that can be framed as a hypothesis. Inferential statistics begins with a null hypothesis that reflects the conservative position of no change or no difference in comparison to baseline or between groups. Usually, the researcher has reason to believe that there is some effect or some difference which is the alternative hypothesis. The researcher therefore proceeds to study samples and measure outcomes in the hope of generating evidence strong enough for the statistician to be able to reject the null hypothesis. The concept of the P value is almost universally used in hypothesis testing. It denotes the probability of obtaining by chance a result at least as extreme as that observed, even when the null hypothesis is true and no real difference exists. Usually, if P is < 0.05 the null hypothesis is rejected and sample results are deemed statistically significant. With the increasing availability of computers and access to specialized statistical software, the drudgery involved in statistical calculations is now a thing of the past, once the learning curve of the software has been traversed. The life sciences researcher is therefore free to devote oneself to optimally designing the study, carefully selecting the hypothesis tests to be applied, and taking care in conducting the study well. Unfortunately, selecting the right test seems difficult initially. Thinking of the research hypothesis as addressing one of five generic research questions helps in selection of the right hypothesis test. In addition, it is important to be clear about the nature of the variables (e.g., numerical vs. categorical; parametric vs. nonparametric) and the number of groups or data sets being compared (e.g., two or more than two) at a time. The same research question may be explored by more than one type of hypothesis test. While this may be of utility in highlighting different aspects of the problem, merely reapplying different tests to the same issue in the hope of finding a P < 0.05 is a wrong use of statistics. Finally, it is becoming the norm that an estimate of the size of any effect, expressed with its 95% confidence interval, is required for meaningful interpretation of results. A large study is likely to have a small (and therefore "statistically significant") P value, but a "real" estimate of the effect would be provided by the 95% confidence interval. If the intervals overlap between two interventions, then the difference between them is not so clear-cut even if P < 0.05. The two approaches are now considered complementary to one another.
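The module's closing recommendation, reporting the effect estimate with its 95% confidence interval alongside the P value, looks like this in practice (synthetic data; the pooled-variance degrees of freedom are a simplifying choice):

```python
# Worked example of the abstract's closing point: report the effect estimate
# with a 95% confidence interval alongside the P value. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(120, 15, 40)                    # e.g., blood pressure, group A
b = rng.normal(112, 15, 40)                    # group B

t, p = stats.ttest_ind(a, b)
diff = a.mean() - b.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
df = len(a) + len(b) - 2                       # pooled-variance approximation
ci = diff + np.array([-1, 1]) * stats.t.ppf(0.975, df) * se
print(f"P = {p:.4f}; difference = {diff:.1f} (95% CI {ci[0]:.1f} to {ci[1]:.1f})")
```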
Distributed Immune Systems for Wireless Network Information Assurance
2010-04-26
ratio test (SPRT), where the goal is to optimize a hypothesis testing problem given a trade-off between the probability of errors and the... using cumulative sum (CUSUM) and Girshik-Rubin-Shiryaev (GRSh) statistics. In sequential versions of the problem the sequential probability ratio... the more complicated problems, in particular those where no clear mean can be established. We developed algorithms based on the sequential probability
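For context, Wald's SPRT itself fits in a dozen lines. The sketch below tests a Gaussian mean with thresholds set by target error probabilities α and β; parameters are illustrative, not those of the report:

```python
# Sketch of Wald's sequential probability ratio test (SPRT) for the mean of
# Gaussian observations: thresholds are set by the target error probabilities
# alpha and beta, realizing the error/sample-size trade-off described above.
import numpy as np

def sprt(stream, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Return ('H0'|'H1', n_used) for a sequence of observations."""
    upper = np.log((1 - beta) / alpha)         # accept H1 above this
    lower = np.log(beta / (1 - alpha))         # accept H0 below this
    llr = 0.0
    for n, x in enumerate(stream, start=1):
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n

rng = np.random.default_rng(0)
decision, n = sprt(rng.normal(1.0, 1.0, size=1000))   # data drawn under H1
print(f"decision={decision} after {n} samples")
```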
An algorithm for testing the efficient market hypothesis.
Boboc, Ioana-Andreea; Dinică, Mihai-Cristian
2013-01-01
The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH).
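The train/test logic of the study is easy to reproduce on data that satisfy the EMH by construction. In the sketch below (a synthetic random walk and a simple EMA-crossover rule standing in for the paper's GA over EMA/MACD/RSI/Filter rules), parameters tuned in-sample typically fail to carry over:

```python
# Sketch of the in-sample/out-of-sample logic on a synthetic random walk
# (a market consistent with the EMH): pick the best EMA-crossover parameters
# on a training half, then re-evaluate the same rules on the test half.
import numpy as np

def ema(x, span):
    alpha, out = 2.0 / (span + 1), np.empty_like(x)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def strategy_return(prices, fast, slow):
    position = (ema(prices, fast) > ema(prices, slow)).astype(float)
    log_ret = np.diff(np.log(prices))
    return np.sum(position[:-1] * log_ret)     # long when fast EMA above slow

rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 2000)))
train, test = prices[:1000], prices[1000:]

grid = [(f, s) for f in (5, 10, 20) for s in (50, 100, 200)]
best = max(grid, key=lambda fs: strategy_return(train, *fs))
print(f"best on train {best}: {strategy_return(train, *best):+.3f}")
print(f"same rules on test:   {strategy_return(test, *best):+.3f}")
```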
USDA-ARS's Scientific Manuscript database
Background - Optimal nutritional choices are linked with better health but most current interventions to improve diet have limited effect. We tested the hypothesis that providing personalized nutrition (PN) advice based on collected information on individual diet and lifestyle, phenotype or genotype...
Proper Image Subtraction—Optimal Transient Detection, Photometry, and Hypothesis Testing
NASA Astrophysics Data System (ADS)
Zackay, Barak; Ofek, Eran O.; Gal-Yam, Avishay
2016-10-01
Transient detection and flux measurement via image subtraction stand at the base of time domain astronomy. Due to the varying seeing conditions, the image subtraction process is non-trivial, and existing solutions suffer from a variety of problems. Starting from basic statistical principles, we develop the optimal statistic for transient detection, flux measurement, and any image-difference hypothesis testing. We derive a closed-form statistic that: (1) is mathematically proven to be the optimal transient detection statistic in the limit of background-dominated noise, (2) is numerically stable, (3) for accurately registered, adequately sampled images, does not leave subtraction or deconvolution artifacts, (4) allows automatic transient detection to the theoretical sensitivity limit by providing credible detection significance, (5) has uncorrelated white noise, (6) is a sufficient statistic for any further statistical test on the difference image, and, in particular, allows us to distinguish particle hits and other image artifacts from real transients, (7) is symmetric to the exchange of the new and reference images, (8) is at least an order of magnitude faster to compute than some popular methods, and (9) is straightforward to implement. Furthermore, we present extensions of this method that make it resilient to registration errors, color-refraction errors, and any noise source that can be modeled. In addition, we show that the optimal way to prepare a reference image is the proper image coaddition presented in Zackay & Ofek. We demonstrate this method on simulated data and real observations from the PTF data release 2. We provide an implementation of this algorithm in MATLAB and Python.
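The closed-form statistic is Fourier-domain and short enough to sketch. The numpy rendering below is only schematic: it follows the cross-filtering-and-whitening structure of the paper's statistic loosely, omits the flux zero-points, and should not be taken as the paper's exact expression.

```python
# A schematic numpy sketch of the *shape* of a closed-form Fourier-domain
# difference statistic of this kind. Symbols (R, N: reference and new images;
# Pr, Pn: their PSFs; sr, sn: background noise std) follow the paper loosely;
# the exact normalization and zero-point factors should be taken from the
# paper itself, not from this illustration.
import numpy as np

def proper_subtraction(R, N, Pr, Pn, sr, sn):
    R_f, N_f = np.fft.fft2(R), np.fft.fft2(N)
    Pr_f, Pn_f = np.fft.fft2(Pr), np.fft.fft2(Pn)
    # Cross-filter each image with the *other* image's PSF, then whiten by the
    # combined noise power so the difference has (approximately) white noise.
    num = Pr_f * N_f - Pn_f * R_f
    den = np.sqrt(sn**2 * np.abs(Pr_f)**2 + sr**2 * np.abs(Pn_f)**2)
    return np.real(np.fft.ifft2(num / den))
```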
Nearly ideal binary communication in squeezed channels
NASA Astrophysics Data System (ADS)
Paris, Matteo G.
2001-07-01
We analyze the effect of squeezing the channel in binary communication based on Gaussian states. We show that for coding on pure states, squeezing increases the detection probability at fixed size of the strategy, actually saturating the optimal bound already for moderate signal energy. Using the Neyman-Pearson lemma for fuzzy hypothesis testing, we are also able to analyze the case of mixed states and to find the optimal amount of squeezing that can be effectively employed. We find that optimally squeezed channels are robust against signal mixing and substantially improve the power of the strategy compared with coherent ones.
Role of APOE Isoforms in the Pathogenesis of TBI induced Alzheimer’s Disease
2016-10-01
deletion, APOE targeted replacement, complex breeding, CCI model optimization, mRNA library generation, high throughput massive parallel sequencing...demonstrate that the lack of Abca1 increases amyloid plaques and decreased APOE protein levels in AD-model mice. In this proposal we will test the hypothesis...injury, inflammatory reaction, transcriptome, high throughput massive parallel sequencing, mRNA-seq., behavioral testing, memory impairment, recovery 3
Evidence supporting the match/mismatch hypothesis of psychiatric disorders.
Santarelli, Sara; Lesuis, Sylvie L; Wang, Xiao-Dong; Wagner, Klaus V; Hartmann, Jakob; Labermaier, Christiana; Scharf, Sebastian H; Müller, Marianne B; Holsboer, Florian; Schmidt, Mathias V
2014-06-01
Chronic stress is one of the predominant environmental risk factors for a number of psychiatric disorders, particularly for major depression. Different hypotheses have been formulated to address the interaction between early and adult chronic stress in psychiatric disease vulnerability. The match/mismatch hypothesis of psychiatric disease states that the early life environment shapes coping strategies in a manner that enables individuals to optimally face similar environments later in life. We tested this hypothesis in female Balb/c mice that underwent either stress or enrichment early in life and were further subdivided in adulthood into single or group housing, providing aversive or positive adult environments, respectively. We studied the effects of the environmental manipulation on anxiety-like, depressive-like, and sociability behaviors and on gene expression profiles. We show that continuous exposure to adverse environments (matched condition) does not necessarily result in a phenotype opposite to that produced by a continuously supportive environment (also a matched condition). Rather, animals experiencing mismatched environmental conditions behaved differently from animals with matched environments on anxiety-like, social, and depressive-like phenotypes. These results further support the match/mismatch hypothesis and illustrate how mild or moderate aversive conditions during development can shape an individual to be optimally adapted to similar conditions later in life. Copyright © 2014 Elsevier B.V. and ECNP. All rights reserved.
To Build an Ecosystem: An Introductory Lab for Environmental Science & Biology Students
ERIC Educational Resources Information Center
Hudon, Daniel; Finnerty, John R.
2013-01-01
A hypothesis-driven laboratory is described that introduces students to the complexities of ecosystem function. Students work with live algae, brine shrimp, and sea anemones to test hypotheses regarding the trophic interactions among species, the exchange of nutrients and gases, and the optimal ratio of producers to consumers and predators in…
Organic supplemental nitrogen sources for field corn production after a hairy vetch cover crop
USDA-ARS's Scientific Manuscript database
The combined use of legume cover crops and animal byproduct organic amendments could provide agronomic and environmental benefits to organic farmers by increasing corn grain yield while optimizing N and P inputs. To test this hypothesis we conducted a two-year field study and a laboratory soil incu...
USDA-ARS's Scientific Manuscript database
Substitution of fishmeal with alternate proteins in aquafeeds often results in dietary imbalances of first-limiting essential amino acids (EAA) and poorer fish performance. This 12-week growth trial was undertaken to test the hypothesis that ideal protein theory accurately predicts first-limiting am...
Learning in the Laboratory: How Group Assignments Affect Motivation and Performance
ERIC Educational Resources Information Center
Belanger, John R.
2016-01-01
Team projects can optimize educational resources in a laboratory, but also create the potential for social loafing. Allowing students to choose their own groups could increase their motivation to learn and improve academic performance. To test this hypothesis, final grades and feedback from students were compared for the same course in two…
Vitamin D deficiency at birth among military dependants in Hawai'i.
Palmer, Eldon G; Ramirez-Enriquez, Emmanuel; Frioux, Sarah M; Tyree, Melissa M
2013-03-01
Vitamin D has long been known to be essential in bone mineralization as well as calcium and phosphate regulation. An increasing body of literature suggests that vitamin D is also key in many other areas, including immune function, brain development, prevention of autoimmune disease, and prevention of certain types of cancers. Studies also suggest that, with decreased sun exposure due to concern over skin cancer risk, much of the world's population is becoming increasingly deficient in vitamin D. Our hypothesis was that vitamin D deficiency exists, and can be detected, even in sunny climates such as the state of Hawai'i. To test this hypothesis, eighty-six cord blood samples were collected in the course of routine clinical testing. These samples were tested for 25-hydroxy vitamin D by liquid chromatography mass spectrometry. Percent deficiency (<20 ng/mL) and insufficiency (20-31.9 ng/mL) were determined by statistical analysis. Forty-six percent (n=37) of the cord blood samples were deficient in vitamin D; 47 percent (n=38) had insufficient 25-OH vitamin D. Only 7 percent (n=6) of samples showed vitamin D concentrations at the recommended levels. A vast majority of military dependents in Hawai'i thus have less than optimal vitamin D levels at birth. Further investigation of vitamin D supplementation during pregnancy is required to optimize vitamin D status at birth. We conclude that a vast majority of military dependents in Hawai'i have less than optimal vitamin D levels at birth, supporting the recommendation for supplementation in this population.
Mau, Ted; Palaparthi, Anil; Riede, Tobias; Titze, Ingo R.
2015-01-01
Objectives/Hypothesis: To test the hypothesis that subligamental cordectomy produces a superior acoustic outcome to subepithelial cordectomy for early (T1-2) glottic cancer that requires complete removal of the superficial lamina propria but does not involve the vocal ligament. Study Design: Computer simulation. Methods: A computational tool for vocal fold surgical planning and simulation (the National Center for Voice and Speech Phonosurgery Optimizer-Simulator) was used to evaluate the acoustic output of alternative vocal fold morphologies. Four morphologies were simulated: normal, subepithelial cordectomy, subligamental cordectomy, and transligamental cordectomy (partial ligament resection). The primary outcome measure was the range of fundamental frequency (F0) and sound pressure level (SPL). A more restricted F0-SPL range was considered less favorable because of reduced acoustic possibilities given the same range of driving subglottic pressure and identical vocal fold posturing. Results: Subligamental cordectomy generated solutions covering an F0-SPL range 82% of normal for a rectangular vocal fold. In contrast, transligamental and subepithelial cordectomies produced significantly smaller F0-SPL ranges, 57% and 19% of normal, respectively. Conclusion: This study illustrates the use of the Phonosurgery Optimizer-Simulator to test a specific hypothesis regarding the merits of two surgical alternatives. These simulation results provide theoretical support for vocal ligament excision with maximum muscle preservation when superficial lamina propria resection is necessary but the vocal ligament can be spared on oncological grounds. The resection of more tissue may paradoxically allow the eventual recovery of a better speaking voice, assuming glottal width is restored. Application of this conclusion to surgical practice will require confirmatory clinical data. PMID:26010240
2011-01-01
Using an automated shuttlebox system, we conducted patch choice experiments with 32 bluegill sunfish (Lepomis macrochirus; 8–12 g) to test a behavioral energetics hypothesis of habitat choice. When patch temperature and food levels were held constant within patches but different between patches, we expected bluegill to choose patches that maximized growth based on the bioenergetic integration of food and temperature as predicted by a bioenergetics model. Alternative hypotheses were that bluegill may choose patches based only on food (optimal foraging) or temperature (behavioral thermoregulation). The behavioral energetics hypothesis was not a good predictor of short-term (from minutes to weeks) patch choice by bluegill; the behavioral thermoregulation hypothesis was the best predictor. In the short term, food and temperature appeared to affect patch choice hierarchically; temperature was more important, although food can alter temperature preference during feeding periods. Over a 19-d experiment, mean temperatures occupied by fish offered low rations did decline as predicted by the behavioral energetics hypothesis, but the decline was less than 1.0 °C as opposed to a possible 5 °C decline. A short-term, bioenergetic response to food and temperature may be precluded by physiological costs of acclimation not considered explicitly in the behavioral energetics hypothesis.
Scheduling Earth Observing Satellites with Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna
2003-01-01
We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate satellite vs. integrated scheduling of a two satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.
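Of the search techniques compared, simulated annealing is the simplest to outline. The sketch below is a generic annealing loop in Python (the study's software was in Java); the cost and neighbour functions are placeholders for a satellite-scheduling objective.

```python
# A minimal, generic simulated-annealing loop of the kind compared in the
# paper. The `cost` and `neighbour` callables are invented placeholders for a
# scheduling objective and a schedule-perturbation move, respectively.
import math
import random

def anneal(initial, cost, neighbour, t0=1.0, alpha=0.995, steps=10000):
    state, best = initial, initial
    t = t0
    for _ in range(steps):
        cand = neighbour(state)
        d = cost(cand) - cost(state)
        # Always accept improvements; accept worsenings with prob exp(-d/t),
        # which shrinks as the temperature t cools.
        if d < 0 or random.random() < math.exp(-d / t):
            state = cand
            if cost(state) < cost(best):
                best = state
        t *= alpha  # geometric cooling schedule
    return best
```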
Maximum saliency bias in binocular fusion
NASA Astrophysics Data System (ADS)
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
USDA-ARS's Scientific Manuscript database
The use of live oocyst vaccines is becoming increasingly important in the control of avian coccidiosis in broiler chicks. Knowledge of the mechanisms by which chicks take up oocysts and become immune is important for optimizing delivery of live vaccines. The current study tests the hypothesis that chick...
Increasing arousal enhances inhibitory control in calm but not excitable dogs.
Bray, Emily E; MacLean, Evan L; Hare, Brian A
2015-11-01
The emotional-reactivity hypothesis proposes that problem-solving abilities can be constrained by temperament, within and across species. One way to test this hypothesis is with the predictions of the Yerkes-Dodson law. The law posits that arousal level, a component of temperament, affects problem solving in an inverted U-shaped relationship: Optimal performance is reached at intermediate levels of arousal and impeded by high and low levels. Thus, a powerful test of the emotional-reactivity hypothesis is to compare cognitive performance in dog populations that have been bred and trained based in part on their arousal levels. We therefore compared a group of pet dogs to a group of assistance dogs bred and trained for low arousal (N = 106) on a task of inhibitory control involving a detour response. Consistent with the Yerkes-Dodson law, assistance dogs, which began the test with lower levels of baseline arousal, showed improvements when arousal was artificially increased. In contrast, pet dogs, which began the test with higher levels of baseline arousal, were negatively affected when their arousal was increased. Furthermore, the dogs' baseline levels of arousal, as measured in their rate of tail wagging, differed by population in the expected directions. Low-arousal assistance dogs showed the most inhibition in a detour task when humans eagerly encouraged them, while more highly aroused pet dogs performed worst on the same task with strong encouragement. Our findings support the hypothesis that selection on temperament can have important implications for cognitive performance.
Do the Emotional Benefits of Optimism Vary Across Older Adulthood? A Life Span Perspective.
Wrosch, Carsten; Jobin, Joelle; Scheier, Michael F
2017-06-01
This study examined whether the emotional benefits of dispositional optimism for managing stressful encounters decrease across older adulthood. Such an effect might emerge because age-related declines in opportunities for overcoming stressors could reduce the effectiveness of optimism. This hypothesis was tested in a 6-year longitudinal study of 171 community-dwelling older adults (age range = 64-90 years). Hierarchical linear models showed that dispositional optimism protected relatively young participants from exhibiting elevations in depressive symptoms over time, but that these benefits became increasingly reduced among their older counterparts. Moreover, the findings showed that an age-related association between optimism and depressive symptoms was observed particularly during periods of enhanced, as compared to reduced, stress. These results suggest that dispositional optimism protects emotional well-being during the early phases of older adulthood, but that its effects are reduced in advanced old age. © 2016 Wiley Periodicals, Inc.
A common optimization principle for motor execution in healthy subjects and parkinsonian patients.
Baraduc, Pierre; Thobois, Stéphane; Gan, Jing; Broussolle, Emmanuel; Desmurget, Michel
2013-01-09
Recent research on Parkinson's disease (PD) has emphasized that parkinsonian movement, although bradykinetic, shares many attributes with healthy behavior. This observation led to the suggestion that bradykinesia in PD could be due to a reduction in motor motivation. This hypothesis can be tested in the framework of optimal control theory, which accounts for many characteristics of healthy human movement while providing a link between the motor behavior and a cost/benefit trade-off. This approach offers the opportunity to interpret movement deficits of PD patients in the light of a computational theory of normal motor control. We studied 14 PD patients with bilateral subthalamic nucleus (STN) stimulation and 16 age-matched healthy controls, and tested whether reaching movements were governed by similar rules in these two groups. A single optimal control model accounted for the reaching movements of healthy subjects and PD patients, whatever the condition of STN stimulation (on or off). The choice of movement speed was explained in all subjects by the existence of a preset dynamic range for the motor signals. This range was idiosyncratic and applied to all movements regardless of their amplitude. In PD patients this dynamic range was abnormally narrow and correlated with bradykinesia. STN stimulation reduced bradykinesia and widened this range in all patients, but did not restore it to a normal value. These results, consistent with the motor motivation hypothesis, suggest that constrained optimization of motor effort is the main determinant of movement planning (choice of speed) and movement production, in both healthy and PD subjects.
NASA Astrophysics Data System (ADS)
Hoell, Simon; Omenzetter, Piotr
2017-07-01
Considering jointly damage sensitive features (DSFs) of signals recorded by multiple sensors, applying advanced transformations to these DSFs and assessing systematically their contribution to damage detectability and localisation can significantly enhance the performance of structural health monitoring systems. This philosophy is explored here for partial autocorrelation coefficients (PACCs) of acceleration responses. They are interrogated with the help of the linear discriminant analysis based on the Fukunaga-Koontz transformation using datasets of the healthy and selected reference damage states. Then, a simple but efficient fast forward selection procedure is applied to rank the DSF components with respect to statistical distance measures specialised for either damage detection or localisation. For the damage detection task, the optimal feature subsets are identified based on the statistical hypothesis testing. For damage localisation, a hierarchical neuro-fuzzy tool is developed that uses the DSF ranking to establish its own optimal architecture. The proposed approaches are evaluated experimentally on data from non-destructively simulated damage in a laboratory scale wind turbine blade. The results support our claim of being able to enhance damage detectability and localisation performance by transforming and optimally selecting DSFs. It is demonstrated that the optimally selected PACCs from multiple sensors or their Fukunaga-Koontz transformed versions can not only improve the detectability of damage via statistical hypothesis testing but also increase the accuracy of damage localisation when used as inputs into a hierarchical neuro-fuzzy network. Furthermore, the computational effort of employing these advanced soft computing models for damage localisation can be significantly reduced by using transformed DSFs.
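Extracting the damage-sensitive features themselves is straightforward. The sketch below is our illustration with invented sensor data: it computes partial autocorrelation coefficients per sensor with statsmodels, the step that precedes the transformation and ranking described above.

```python
# A minimal sketch of extracting the damage-sensitive features used in the
# paper: partial autocorrelation coefficients (PACCs) of each sensor's
# acceleration response. The sensor data here are invented; the pacf routine
# is a standard statsmodels function.
import numpy as np
from statsmodels.tsa.stattools import pacf

rng = np.random.default_rng(1)
acc = rng.standard_normal((4, 2048))      # 4 sensors x 2048 samples (toy data)
paccs = np.vstack([pacf(x, nlags=20)[1:] for x in acc])  # drop the lag-0 term
# paccs (4 x 20) would then be transformed (e.g., Fukunaga-Koontz) and ranked.
```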
Control of Finite-State, Finite Memory Stochastic Systems
NASA Technical Reports Server (NTRS)
Sandell, Nils R.
1974-01-01
A generalized problem of stochastic control is discussed in which multiple controllers with different data bases are present. The vehicle for the investigation is the finite state, finite memory (FSFM) stochastic control problem. Optimality conditions are obtained by deriving an equivalent deterministic optimal control problem. A FSFM minimum principle is obtained via the equivalent deterministic problem. The minimum principle suggests the development of a numerical optimization algorithm, the min-H algorithm. The relationship between the sufficiency of the minimum principle and the informational properties of the problem are investigated. A problem of hypothesis testing with 1-bit memory is investigated to illustrate the application of control theoretic techniques to information processing problems.
Ecological optimality in water-limited natural soil-vegetation systems. II - Tests and applications
NASA Technical Reports Server (NTRS)
Eagleson, P. S.; Tellers, T. E.
1982-01-01
The long-term optimal climatic climax soil-vegetation system is defined for several climates according to previous hypotheses in terms of two free parameters, effective porosity and plant water use coefficient. The free parameters are chosen by matching the predicted and observed average annual water yield. The resulting climax soil and vegetation properties are tested by comparison with independent observations of canopy density and average annual surface runoff. The climax properties are shown also to satisfy a previous hypothesis for short-term optimization of canopy density and water use coefficient. Using these hypotheses, a relationship between average evapotranspiration and optimum vegetation canopy density is derived and is compared with additional field observations. An algorithm is suggested by which the climax soil and vegetation properties can be calculated given only the climate parameters and the soil effective porosity. Sensitivity of the climax properties to the effective porosity is explored.
NASA Astrophysics Data System (ADS)
Ewers, B. E.; Mackay, D. S.; Samanta, S.; Ahl, D. E.; Burrows, S. S.; Gower, S. T.
2001-12-01
Land use changes over the last century in northern Wisconsin have resulted in a heterogeneous landscape composed of the following four main forest types: northern hardwoods, northern conifer, aspen/fir, and forested wetland. Based on sap flux measurements, aspen/fir has twice the canopy transpiration of northern hardwoods. In addition, daily transpiration was only explained by daily average vapor pressure deficit across the cover types. The objective of this study was to determine whether canopy average stomatal conductance could be used to explain the species effects on tree transpiration. Our first hypothesis is that across all of the species, stomatal conductance will respond to vapor pressure deficit so as to maintain a minimum leaf water potential to prevent catastrophic cavitation. The consequence of this hypothesis is that among species and individuals there is a proportionality between high stomatal conductance and the sensitivity of stomatal conductance to vapor pressure deficit. Our second hypothesis is that species that do not follow the proportionality deviate because the canopies are decoupled from the atmosphere. To test our two hypotheses, we calculated canopy average stomatal conductance from sap flux measurements using an inversion of the Penman-Monteith equation. We estimated the canopy coupling using a leaf energy budget model that requires leaf transpiration and canopy aerodynamic conductance. We optimized the parameters of the aerodynamic conductance model using a Monte Carlo technique across six parameters. We determined the optimal model for each species by selecting parameter sets that resulted in the proportionality of our first hypothesis. We then tested the optimal energy budget models of each species by comparing leaf temperature and leaf width predicted by the models to measurements of each tree species. In red pine, sugar maple, and trembling aspen trees under high canopy coupling conditions, we found the hypothesized proportionality between high stomatal conductance and the sensitivity of stomatal conductance to vapor pressure deficit. In addition, the canopy conductance of trembling aspen was twice as high as that of sugar maple, and the aspen trees showed much more variability.
Is hefting to perceive the affordance for throwing a smart perceptual mechanism?
Zhu, Qin; Bingham, Geoffrey P
2008-08-01
G. P. Bingham, R. C. Schmidt, and L. D. Rosenblum (1989) found that, by hefting objects of different sizes and weights, people could choose the optimal weight in each size for throwing to a maximum distance. In Experiment 1, the authors replicated this result. G. P. Bingham et al. hypothesized that hefting is a smart mechanism that allows objects to be perceived in the context of throwing dynamics. This hypothesis entails 2 assumptions. First, hefting by hand is required for information about throwing by hand. The authors tested and confirmed this in Experiments 2 and 3. Second, optimal objects are determined by the dynamics of throwing. In Experiment 4, the authors tested this by measuring throwing release angles and using them with mean thrown distances from Experiment 1 and object sizes and weights to simulate projectile motion and recover release velocities. The results showed that only weight, not size, affects throwing. This failed to provide evidence supporting the particular smart mechanism hypothesis of G. P. Bingham et al. Because the affordance relation is determined in part by the dynamics of projectile motion, the results imply that the affordance is learned from knowledge of results of throwing.
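The projectile-motion inversion in Experiment 4 rests on the standard range equation. The sketch below (drag neglected and level ground assumed, so simpler than the authors' simulations) recovers release speed from thrown distance and release angle.

```python
# A minimal sketch of the projectile-motion step described in the abstract:
# given a thrown distance d and a measured release angle theta, recover the
# release speed. Drag is neglected and launch/landing heights assumed equal;
# the study's simulations were richer than this.
import math

def release_speed(d, theta_deg, g=9.81):
    """Invert the range equation d = v^2 * sin(2*theta) / g for v."""
    return math.sqrt(g * d / math.sin(2 * math.radians(theta_deg)))

print(release_speed(d=30.0, theta_deg=35.0))  # m/s for a 30 m throw at 35 deg
```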
The hubris hypothesis: The downside of comparative optimism displays.
Hoorens, Vera; Van Damme, Carolien; Helweg-Larsen, Marie; Sedikides, Constantine
2017-04-01
According to the hubris hypothesis, observers respond more unfavorably to individuals who express their positive self-views comparatively than to those who express their positive self-views non-comparatively, because observers infer that the former hold a more disparaging view of others and particularly of observers. Two experiments extended the hubris hypothesis in the domain of optimism. Observers attributed less warmth (but not less competence) to, and showed less interest in affiliating with, an individual displaying comparative optimism (the belief that one's future will be better than others' future) than with an individual displaying absolute optimism (the belief that one's future will be good). Observers responded differently to individuals displaying comparative versus absolute optimism, because they inferred that the former held a gloomier view of the observers' future. Consistent with previous research, observers still attributed more positive traits to a comparative or absolute optimist than to a comparative or absolute pessimist. Copyright © 2016. Published by Elsevier Inc.
Experimental design, power and sample size for animal reproduction experiments.
Chapman, Phillip L; Seidel, George E
2008-01-01
The present paper concerns statistical issues in the design of animal reproduction experiments, with emphasis on the problems of sample size determination and power calculations. We include examples and non-technical discussions aimed at helping researchers avoid serious errors that may invalidate or seriously impair the validity of conclusions from experiments. Screen shots from interactive power calculation programs and basic SAS power calculation programs are presented to aid in understanding statistical power and computing power in some common experimental situations. Practical issues that are common to most statistical design problems are briefly discussed. These include one-sided hypothesis tests, power level criteria, equality of within-group variances, transformations of response variables to achieve variance equality, optimal specification of treatment group sizes, 'post hoc' power analysis and arguments for the increased use of confidence intervals in place of hypothesis tests.
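The same calculations the paper demonstrates with SAS programs can be reproduced with open tools. The sketch below (effect size and targets invented for illustration) solves for the per-group sample size of a two-group comparison using statsmodels.

```python
# A minimal sketch of the kind of power/sample-size calculation the paper
# illustrates with SAS, here with statsmodels. The effect size, power, and
# alpha values are invented for illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.6,   # Cohen's d
                                   power=0.8, alpha=0.05,
                                   ratio=1.0, alternative='two-sided')
print(f"~{n_per_group:.0f} animals per group")        # ~45 for these settings
```

Running the same call with `power` left as the unknown instead inverts the question, giving the 'post hoc' power for a fixed group size.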
Debates—Hypothesis testing in hydrology: Pursuing certainty versus pursuing uberty
NASA Astrophysics Data System (ADS)
Baker, Victor R.
2017-03-01
Modern hydrology places nearly all its emphasis on science-as-knowledge, the hypotheses of which are increasingly expressed as physical models, whose predictions are tested by correspondence to quantitative data sets. Though arguably appropriate for applications of theory to engineering and applied science, the associated emphases on truth and degrees of certainty are not optimal for the productive and creative processes that facilitate the fundamental advancement of science as a process of discovery. The latter requires an investigative approach, where the goal is uberty, a kind of fruitfulness of inquiry, in which the abductive mode of inference adds to the much more commonly acknowledged modes of deduction and induction. The resulting world-directed approach to hydrology provides a valuable complement to the prevailing hypothesis- (theory-) directed paradigm.
Practical scheme for optimal measurement in quantum interferometric devices
NASA Astrophysics Data System (ADS)
Takeoka, Masahiro; Ban, Masashi; Sasaki, Masahide
2003-06-01
We apply a Kennedy-type detection scheme, which was originally proposed for a binary communications system, to interferometric sensing devices. We show that the minimum detectable perturbation of the proposed system reaches the ultimate precision bound which is predicted by quantum Neyman-Pearson hypothesis testing. To provide concrete examples, we apply our interferometric scheme to phase shift detection by using coherent and squeezed probe fields.
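For context, the coherent-state baseline against which such receivers are judged is compactly expressible. The sketch below is our illustration (squeezing itself is not modelled): it compares the quantum-optimal Helstrom error probability with that of an idealized Kennedy-type receiver for BPSK coherent states.

```python
# A minimal numeric sketch of the binary coherent-state discrimination problem
# underlying Kennedy-type detection: the Helstrom (quantum-optimal) error
# probability versus an idealized Kennedy receiver, for states |+a>, |-a>.
# Squeezing, the paper's actual subject, is not modelled here.
import numpy as np

def p_helstrom(n_mean):                        # n_mean = |alpha|^2
    overlap_sq = np.exp(-4 * n_mean)           # |<-a|+a>|^2
    return 0.5 * (1 - np.sqrt(1 - overlap_sq))

def p_kennedy(n_mean):
    # Displace one state to vacuum, then photon-count; errors occur only when
    # the displaced non-vacuum state yields zero photons.
    return 0.5 * np.exp(-4 * n_mean)

for n in (0.1, 0.5, 1.0):
    print(n, p_helstrom(n), p_kennedy(n))
```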
Wong, Jeremy D; O'Connor, Shawn M; Selinger, Jessica C; Donelan, J Maxwell
2017-08-01
People can adapt their gait to minimize energetic cost, indicating that walking's neural control has access to ongoing measurements of the body's energy use. In this study we tested the hypothesis that an important source of energetic cost measurements arises from blood gas receptors that are sensitive to O2 and CO2 concentrations. These receptors are known to play a role in regulating other physiological processes related to energy consumption, such as ventilation rate. Given the role of O2 and CO2 in oxidative metabolism, sensing their levels can provide an accurate estimate of the body's total energy use. To test our hypothesis, we simulated an added energetic cost for blood gas receptors that depended on a subject's step frequency and determined whether subjects changed their behavior in response to this simulated cost. These energetic costs were simulated by controlling inspired gas concentrations to decrease the circulating levels of O2 and increase CO2. We found this blood gas control to be effective at shifting the step frequency that minimized the ventilation rate and perceived exertion away from the normally preferred frequency, indicating that these receptors provide the nervous system with strong physiological and psychological signals. However, rather than adapt their preferred step frequency toward these lower simulated costs, subjects persevered at their normally preferred frequency even after extensive experience with the new simulated costs. These results suggest that blood gas receptors play a negligible role in sensing energetic cost for the purpose of optimizing gait. NEW & NOTEWORTHY Human gait adaptation implies that the nervous system senses energetic cost, yet this signal is unknown. We tested the hypothesis that the blood gas receptors sense cost for gait optimization by controlling blood O2 and CO2 with step frequency as people walked. At the simulated energetic minimum, ventilation and perceived exertion were lowest, yet subjects preferred walking at their original frequency. This suggests that blood gas receptors are not critical for sensing cost during gait. Copyright © 2017 the American Physiological Society.
Sims, David W; Humphries, Nicolas E; Bradford, Russell W; Bruce, Barry D
2012-03-01
1. Search processes play an important role in physical, chemical and biological systems. In animal foraging, the search strategy predators should use to search optimally for prey is an enduring question. Some models demonstrate that when prey is sparsely distributed, an optimal search pattern is a specialised random walk known as a Lévy flight, whereas when prey is abundant, simple Brownian motion is sufficiently efficient. These predictions form part of what has been termed the Lévy flight foraging hypothesis (LFF) which states that as Lévy flights optimise random searches, movements approximated by optimal Lévy flights may have naturally evolved in organisms to enhance encounters with targets (e.g. prey) when knowledge of their locations is incomplete. 2. Whether free-ranging predators exhibit the movement patterns predicted in the LFF hypothesis in response to known prey types and distributions, however, has not been determined. We tested this using vertical and horizontal movement data from electronic tagging of an apex predator, the great white shark Carcharodon carcharias, across widely differing habitats reflecting different prey types. 3. Individual white sharks exhibited movement patterns that predicted well the prey types expected under the LFF hypothesis. Shark movements were best approximated by Brownian motion when hunting near abundant, predictable sources of prey (e.g. seal colonies, fish aggregations), whereas movements approximating truncated Lévy flights were present when searching for sparsely distributed or potentially difficult-to-detect prey in oceanic or shelf environments, respectively. 4. That movement patterns approximated by truncated Lévy flights and Brownian behaviour were present in the predicted prey fields indicates search strategies adopted by white sharks appear to be the most efficient ones for encountering prey in the habitats where such patterns are observed. This suggests that C. carcharias appears capable of exhibiting search patterns that are approximated as optimal in response to encountered changes in prey type and abundance, and across diverse marine habitats, from the surf zone to the deep ocean. 5. Our results provide some support for the LFF hypothesis. However, it is possible that the observed Lévy patterns of white sharks may not arise from an adaptive behaviour but could be an emergent property arising from simple, straight-line movements between complex (e.g. fractal) distributions of prey. Experimental studies are needed in vertebrates to test for the presence of Lévy behaviour patterns in the absence of complex prey distributions. © 2011 The Authors. Journal of Animal Ecology © 2011 British Ecological Society.
Design of clinical trials involving multiple hypothesis tests with a common control.
Schou, I Manjula; Marschner, Ian C
2017-07-01
Randomized clinical trials comparing several treatments to a common control are often reported in the medical literature. For example, multiple experimental treatments may be compared with placebo, or in combination therapy trials, a combination therapy may be compared with each of its constituent monotherapies. Such trials are typically designed using a balanced approach in which equal numbers of individuals are randomized to each arm, however, this can result in an inefficient use of resources. We provide a unified framework and new theoretical results for optimal design of such single-control multiple-comparator studies. We consider variance optimal designs based on D-, A-, and E-optimality criteria, using a general model that allows for heteroscedasticity and a range of effect measures that include both continuous and binary outcomes. We demonstrate the sensitivity of these designs to the type of optimality criterion by showing that the optimal allocation ratios are systematically ordered according to the optimality criterion. Given this sensitivity to the optimality criterion, we argue that power optimality is a more suitable approach when designing clinical trials where testing is the objective. Weighted variance optimal designs are also discussed, which, like power optimal designs, allow the treatment difference to play a major role in determining allocation ratios. We illustrate our methods using two real clinical trial examples taken from the medical literature. Some recommendations on the use of optimal designs in single-control multiple-comparator trials are also provided. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
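A classic special case gives the flavour of such variance-optimal designs: with equal variances, minimizing the summed variances of the k treatment-versus-control contrasts yields the square-root allocation rule. The sketch below illustrates that textbook rule only; it is not the paper's more general heteroscedastic machinery.

```python
# A minimal sketch of an optimal-allocation computation for a k-arm trial with
# a common control. Under equal outcome variances, minimizing the sum of the
# variances of the k treatment-vs-control contrasts (an A-optimality-style
# criterion) gives the classic rule: allocate sqrt(k) times as many subjects
# to the control arm as to each treatment arm.
import math

def allocation(n_total, k):
    r = math.sqrt(k)                 # control : per-treatment-arm ratio
    n_arm = n_total / (k + r)
    return {"control": r * n_arm, "per_treatment_arm": n_arm}

print(allocation(n_total=300, k=4))  # control ~100, each of 4 arms ~50
```

The paper's message is that such ratios shift systematically once the optimality criterion, variance structure, or a power objective changes.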
Koblin, Beryl; Hirshfield, Sabina; Chiasson, Mary Ann; Wilton, Leo; Usher, DaShawn; Nandi, Vijay; Hoover, Donald R; Frye, Victoria
2017-12-19
HIV testing is a critical component of HIV prevention and care. Interventions to increase HIV testing rates among young black men who have sex with men (MSM) and black transgender women (transwomen) are needed. Personalized recommendations for an individual's optimal HIV testing approach may increase testing. This randomized trial tests the hypothesis that a personalized recommendation of an optimal HIV testing approach will increase HIV testing more than standard HIV testing information. A randomized trial among 236 young black men and transwomen who have sex with men or transwomen is being conducted. Participants complete a computerized baseline assessment and are randomized to electronically receive a personalized HIV testing recommendation or standard HIV testing information. Follow-up surveys are conducted online at 3 and 6 months after baseline. The All About Me randomized trial was launched in June 2016. Enrollment is complete, and 3-month retention is 92.4% (218/236), exceeding study target goals. The All About Me intervention is an innovative approach to increase HIV testing by providing a personalized recommendation of a person's optimal HIV testing approach. If successful, optimizing this intervention for mobile devices will widen access to large numbers of individuals. ClinicalTrials.gov NCT02834572; https://clinicaltrials.gov/ct2/show/NCT02834572 (Archived by WebCite at http://www.webcitation.org/6vLJWOS1B). ©Beryl Koblin, Sabina Hirshfield, Mary Ann Chiasson, Leo Wilton, DaShawn Usher, Vijay Nandi, Donald R Hoover, Victoria Frye. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 19.12.2017.
Discerning the role of optimism in persuasion: the valence-enhancement hypothesis.
Geers, Andrew L; Handley, Ian M; McLarney, Amber R
2003-09-01
The valence-enhancement hypothesis argues that because of their active coping strategies, optimists are especially likely to elaborate on valenced information that is of high personal relevance. The hypothesis predicts that as a result, optimists will be more persuaded by personally relevant positive messages and less persuaded by personally relevant negative messages than pessimists. It also predicts that when the message is not personally relevant, optimism and persuasion will not be related in this manner. The results of 3 studies support these predictions and supply evidence against several alternative hypotheses. The possibility that the observed effects are not due to optimism but to the confounding influence of 7 additional variables is also addressed and ruled out. Implications are discussed.
Highly adaptive tests for group differences in brain functional connectivity.
Kim, Junghi; Pan, Wei
2015-01-01
Resting-state functional magnetic resonance imaging (rs-fMRI) and other technologies have been offering evidence and insights showing that altered brain functional networks are associated with neurological illnesses such as Alzheimer's disease. Exploring brain networks of clinical populations compared to those of controls would be a key inquiry to reveal underlying neurological processes related to such illnesses. For such a purpose, group-level inference is a necessary first step in order to establish whether there are any genuinely disrupted brain subnetworks. Such an analysis is also challenging due to the high dimensionality of the parameters in a network model and high noise levels in neuroimaging data. We are still in the early stage of method development as highlighted by Varoquaux and Craddock (2013) that "there is currently no unique solution, but a spectrum of related methods and analytical strategies" to learn and compare brain connectivity. In practice the important issue of how to choose several critical parameters in estimating a network, such as what association measure to use and what is the sparsity of the estimated network, has not been carefully addressed, largely because the answers are unknown yet. For example, even though the choice of tuning parameters in model estimation has been extensively discussed in the literature, as to be shown here, an optimal choice of a parameter for network estimation may not be optimal in the current context of hypothesis testing. Arbitrarily choosing or mis-specifying such parameters may lead to extremely low-powered tests. Here we develop highly adaptive tests to detect group differences in brain connectivity while accounting for unknown optimal choices of some tuning parameters. The proposed tests combine statistical evidence against a null hypothesis from multiple sources across a range of plausible tuning parameter values reflecting uncertainty with the unknown truth. These highly adaptive tests are not only easy to use, but also high-powered robustly across various scenarios. The usage and advantages of these novel tests are demonstrated on an Alzheimer's disease dataset and simulated data.
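The adaptive idea can be sketched generically: scan the tuning parameter, take the best (smallest) P value, and let a permutation scheme pay for the scan. In the sketch below, `stat_at` is a hypothetical user-supplied function returning a per-sparsity P value; this illustrates the general strategy, not the authors' implementation.

```python
# A minimal sketch of a "highly adaptive" min-P test: compute the statistic
# over a grid of tuning-parameter values (e.g., network sparsity levels), take
# the minimum P value, and calibrate by permutation so that adaptivity does
# not inflate type I error. `stat_at` is a hypothetical user-supplied function.
import numpy as np

def adaptive_min_p(groups, data, stat_at, sparsities, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    obs = min(stat_at(groups, data, s) for s in sparsities)  # observed min-P
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(groups)                       # break group link
        if min(stat_at(perm, data, s) for s in sparsities) <= obs:
            count += 1
    return (count + 1) / (n_perm + 1)                        # permutation P
```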
[Dilemma of the null hypothesis in experimental tests of ecological hypotheses].
Li, Ji
2016-06-01
Experimental testing is one of the major ways of testing ecological hypotheses, though there are many arguments over the role of the null hypothesis. Quinn and Dunham (1983) analyzed the hypothesis deduction model of Platt (1964) and concluded that there is no null hypothesis in ecology that can be strictly tested by experiments. Fisher's falsificationism and Neyman-Pearson (N-P) non-decisivity prevent the statistical null hypothesis from being strictly tested. Moreover, since the null hypothesis H0 (α=1, β=0) and the alternative hypothesis H1' (α'=1, β'=0) in ecological processes differ from those of classical physics, the ecological null hypothesis likewise cannot be strictly tested experimentally. These dilemmas of the null hypothesis can be alleviated via the reduction of the P value, careful selection of the null hypothesis, non-centralization of the non-null hypothesis, and two-tailed tests. However, statistical null hypothesis significance testing (NHST) should not be equated with the logical test of causality in ecological hypotheses. Hence, findings and conclusions from methodological studies and experimental tests based on NHST are not always logically reliable.
Optimal Sensor Scheduling for Multiple Hypothesis Testing
1981-09-01
Naval Research, under contract N00014-77-0532 is gratefully acknowledged. Laboratory for Information and Decision Systems, MIT Room 35-213, Cambridge ... treat the more general problem [9,10]. However, two common threads connect these approaches: they obtain feedback laws mapping posterior distributions ... objective of a detection or identification algorithm is to produce correct estimates of the true state of a system. It is also beneficial if these
Monotonicity of fitness landscapes and mutation rate control.
Belavkin, Roman V; Channon, Alastair; Aston, Elizabeth; Aston, John; Krašovec, Rok; Knight, Christopher G
2016-12-01
A common view in evolutionary biology is that mutation rates are minimised. However, studies in combinatorial optimisation and search have shown a clear advantage of using variable mutation rates as a control parameter to optimise the performance of evolutionary algorithms. Much biological theory in this area is based on the work of Ronald Fisher, who used Euclidean geometry to study the relation between mutation size and expected fitness of the offspring in infinite phenotypic spaces. Here we reconsider this theory based on the alternative geometry of discrete and finite spaces of DNA sequences. First, we consider the geometric case of fitness being isomorphic to distance from an optimum, and show how problems of optimal mutation rate control can be solved exactly or approximately depending on additional constraints of the problem. Then we consider the general case of fitness communicating only partial information about the distance. We define weak monotonicity of fitness landscapes and prove that this property holds in all landscapes that are continuous and open at the optimum. This theoretical result motivates our hypothesis that optimal mutation rate functions in such landscapes will increase when fitness decreases in some neighbourhood of an optimum, resembling the control functions derived in the geometric case. We test this hypothesis experimentally by analysing approximately optimal mutation rate control functions in 115 complete landscapes of binding scores between DNA sequences and transcription factors. Our findings support the hypothesis and show that the increase of mutation rate is more rapid in landscapes that are less monotonic (more rugged). We discuss the relevance of these findings to living organisms.
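The hypothesis lends itself to a brute-force check on a toy landscape. The sketch below is our illustration on a OneMax-like landscape, far simpler than the transcription-factor-binding landscapes analysed in the paper: it estimates the per-locus mutation rate that maximizes the probability of strict improvement at a given distance from the optimum.

```python
# A minimal simulation sketch of the paper's hypothesis on a monotone,
# OneMax-like binary landscape: estimate, by brute force, the per-locus
# mutation rate that maximizes the chance an offspring strictly improves,
# as a function of the parent's Hamming distance from the optimum.
import numpy as np

def best_rate(L=50, dist=10, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    rates = np.linspace(0.01, 0.5, 50)
    parent = np.zeros(L, dtype=int)
    parent[:dist] = 1                        # 'dist' loci are wrong
    scores = []
    for mu in rates:
        flips = rng.random((trials, L)) < mu # per-locus mutation mask
        offspring_dist = (parent ^ flips).sum(axis=1)
        scores.append((offspring_dist < dist).mean())  # P(strict improvement)
    return rates[int(np.argmax(scores))]

print(best_rate(dist=5), best_rate(dist=20))  # rate tends to rise with distance
```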
SAR-based change detection using hypothesis testing and Markov random field modelling
NASA Astrophysics Data System (ADS)
Cao, W.; Martinis, S.
2015-04-01
The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: Firstly, an automatic coarse detection step is applied based on a statistical hypothesis test for initializing the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form of the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Secondly, a post-classification step is introduced to optimize the noisy classification result in the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF is related to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms an MRF to an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study, this graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration the parameters of the energy function for the current classification are set by the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed using two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009 using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.
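The computational point about the incomplete beta function is easy to illustrate. In the sketch below, scipy.special.betainc plays the role of the built-in routine; the mapping of the ratio statistic and the shape parameters to looks and sample sizes is paper-specific, so the values shown are placeholders, not the paper's formula.

```python
# A schematic sketch of the computational point: the CDF of a ratio-of-means
# change statistic can be evaluated with the regularized incomplete beta
# function, a built-in routine (scipy.special.betainc here, analogous to the
# MATLAB/IDL routines mentioned). The mapping of the ratio r and the shape
# parameters a, b to looks/sample sizes is paper-specific; values below are
# placeholders only.
from scipy.special import betainc

def ratio_cdf(r, a, b):
    """P(ratio <= r) via the regularized incomplete beta I_x(a, b)."""
    x = r / (1.0 + r)          # map r in (0, inf) to x in (0, 1)
    return betainc(a, b, x)

changed = ratio_cdf(r=2.5, a=16, b=16) > 0.99   # exceeds a CFAR threshold?
```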
Iron and infection: An investigation of the optimal iron hypothesis in Lima, Peru.
Dorsey, Achsah F; Thompson, Amanda L; Kleinman, Ronald E; Duggan, Christopher P; Penny, Mary E
2018-02-19
This article explores the optimal iron hypothesis through secondary data analysis of the association between hemoglobin levels and morbidity among children living in Canto Grande, a peri-urban community located on the outskirts of Lima, Peru. Risk ratios were used to test whether lower iron status, assessed using the HemoCue B-Hemoglobin System, was associated with an increased relative risk of morbidity symptoms compared to iron replete status, controlling for infant age, sex, weight for height z-score, maternal education, and repeated measures in 515 infants aged 6-12 months. Infants with fewer current respiratory and diarrheal morbidity symptoms had a lower risk of low iron deficiency compared to participants who were iron replete (P < .10). Infants with fewer current respiratory infection symptoms had a statistically significant (P < .05) reduction in risk of moderate iron deficiency compared to infants who were iron replete. In this study, morbidity status was not predictive of iron deficient status over a six-month interval period, but nonreplete iron status was shown to be associated with current morbidity symptoms. These results support investigating iron status as an allostatic system that responds to infection adaptively, rather than expecting an optimal preinfection value. © 2018 Wiley Periodicals, Inc.
Parsa, Behnoosh; Terekhov, Alexander; Zatsiorsky, Vladimir M; Latash, Mark L
2017-02-01
We address the nature of unintentional changes in performance in two papers. This first paper tested a hypothesis that unintentional changes in performance variables during continuous tasks without visual feedback are due to two processes. First, there is a drift of the referent coordinate for the salient performance variable toward the actual coordinate of the effector. Second, there is a drift toward minimum of a cost function. We tested this hypothesis in four-finger isometric pressing tasks that required the accurate production of a combination of total moment and total force with natural and modified finger involvement. Subjects performed accurate force-moment production tasks under visual feedback, and then visual feedback was removed for some or all of the salient variables. Analytical inverse optimization was used to compute a cost function. Without visual feedback, both force and moment drifted slowly toward lower absolute magnitudes. Over 15 s, the force drop could reach 20% of its initial magnitude while moment drop could reach 30% of its initial magnitude. Individual finger forces could show drifts toward both higher and lower forces. The cost function estimated using the analytical inverse optimization reduced its value as a consequence of the drift. We interpret the results within the framework of hierarchical control with referent spatial coordinates for salient variables at each level of the hierarchy combined with synergic control of salient variables. The force drift is discussed as a natural relaxation process toward states with lower potential energy in the physical (physiological) system involved in the task.
Environmental context explains Lévy and Brownian movement patterns of marine predators.
Humphries, Nicolas E; Queiroz, Nuno; Dyer, Jennifer R M; Pade, Nicolas G; Musyl, Michael K; Schaefer, Kurt M; Fuller, Daniel W; Brunnschweiler, Juerg M; Doyle, Thomas K; Houghton, Jonathan D R; Hays, Graeme C; Jones, Catherine S; Noble, Leslie R; Wearmouth, Victoria J; Southall, Emily J; Sims, David W
2010-06-24
An optimal search theory, the so-called Lévy-flight foraging hypothesis, predicts that predators should adopt search strategies known as Lévy flights where prey is sparse and distributed unpredictably, but that Brownian movement is sufficiently efficient for locating abundant prey. Empirical studies have generated controversy because the accuracy of statistical methods that have been used to identify Lévy behaviour has recently been questioned. Consequently, whether foragers exhibit Lévy flights in the wild remains unclear. Crucially, moreover, it has not been tested whether observed movement patterns across natural landscapes having different expected resource distributions conform to the theory's central predictions. Here we use maximum-likelihood methods to test for Lévy patterns in relation to environmental gradients in the largest animal movement data set assembled for this purpose. Strong support was found for Lévy search patterns across 14 species of open-ocean predatory fish (sharks, tuna, billfish and ocean sunfish), with some individuals switching between Lévy and Brownian movement as they traversed different habitat types. We tested the spatial occurrence of these two principal patterns and found Lévy behaviour to be associated with less productive waters (sparser prey) and Brownian movements to be associated with productive shelf or convergence-front habitats (abundant prey). These results are consistent with the Lévy-flight foraging hypothesis, supporting the contention that organism search strategies naturally evolved in such a way that they exploit optimal Lévy patterns.
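As an illustration of the maximum-likelihood method this abstract describes, the sketch below fits a power-law (Lévy) and an exponential (Brownian) model to a set of step lengths and compares them by AIC. It is a minimal sketch in Python; the function names and synthetic data are ours, not the study's pipeline, which also handles truncation and track segmentation.

    # Compare Levy (power-law) vs Brownian (exponential) step-length models
    # by maximum likelihood; illustrative only.
    import numpy as np

    def fit_power_law(x, xmin):
        """MLE exponent mu for p(x) ~ x**-mu, x >= xmin (Levy if 1 < mu <= 3)."""
        x = x[x >= xmin]
        mu = 1.0 + len(x) / np.sum(np.log(x / xmin))
        loglik = len(x) * np.log((mu - 1) / xmin) - mu * np.sum(np.log(x / xmin))
        return mu, loglik

    def fit_exponential(x, xmin):
        """MLE rate lam for p(x) ~ exp(-lam * (x - xmin)), x >= xmin."""
        x = x[x >= xmin]
        lam = 1.0 / np.mean(x - xmin)
        loglik = len(x) * np.log(lam) - lam * np.sum(x - xmin)
        return lam, loglik

    rng = np.random.default_rng(0)
    steps = rng.pareto(1.0, 2000) + 1.0      # synthetic heavy-tailed step lengths
    mu, ll_pl = fit_power_law(steps, 1.0)
    lam, ll_ex = fit_exponential(steps, 1.0)
    aic_pl, aic_ex = 2 - 2 * ll_pl, 2 - 2 * ll_ex
    print(f"mu = {mu:.2f}; AIC favours {'Levy' if aic_pl < aic_ex else 'Brownian'}")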
Optimal integration of gravity in trajectory planning of vertical pointing movements.
Crevecoeur, Frédéric; Thonnard, Jean-Louis; Lefèvre, Philippe
2009-08-01
The planning and control of motor actions requires knowledge of the dynamics of the controlled limb to generate the appropriate muscular commands and achieve the desired goal. Such planning and control imply that the CNS must be able to deal with forces and constraints acting on the limb, such as the omnipresent force of gravity. The present study investigates the effect of hypergravity induced by parabolic flights on the trajectory of vertical pointing movements to test the hypothesis that motor commands are optimized with respect to the effect of gravity on the limb. Subjects performed vertical pointing movements in normal gravity and hypergravity. We use a model based on optimal control to identify the role played by gravity in the optimal arm trajectory with minimal motor costs. First, the simulations in normal gravity reproduce the asymmetry in the velocity profiles (the velocity reaches its maximum before half of the movement duration), which typically characterizes the vertical pointing movements performed on Earth, whereas the horizontal movements present symmetrical velocity profiles. Second, according to the simulations, the optimal trajectory in hypergravity should present an increase in the peak acceleration and peak velocity despite the increase in the arm weight. In agreement with these predictions, the subjects performed faster movements in hypergravity with significant increases in the peak acceleration and peak velocity, which were accompanied by a significant decrease in the movement duration. This suggests that movement kinematics change in response to an increase in gravity, which is consistent with the hypothesis that motor commands are optimized and the action of gravity on the limb is taken into account. The results provide evidence for an internal representation of gravity in the central planning process and further suggest that an adaptation to altered dynamics can be understood as a reoptimization process.
Carbon and nutrient use efficiencies optimally balance stoichiometric imbalances
NASA Astrophysics Data System (ADS)
Manzoni, Stefano; Čapek, Petr; Lindahl, Björn; Mooshammer, Maria; Richter, Andreas; Šantrůčková, Hana
2016-04-01
Decomposer organisms face large stoichiometric imbalances because their food is generally poor in nutrients compared to the decomposer cellular composition. The presence of excess carbon (C) requires adaptations to utilize nutrients effectively while disposing of or investing excess C. As food composition changes, these adaptations lead to variable C- and nutrient-use efficiencies (defined as the ratios of C and nutrients used for growth over the amounts consumed). For organisms to be ecologically competitive, these changes in efficiencies with resource stoichiometry have to balance advantages and disadvantages in an optimal way. We hypothesize that efficiencies are varied so that community growth rate is optimized along stoichiometric gradients of their resources. Building from previous theories, we predict that maximum growth is achieved when C and nutrients are co-limiting, so that the maximum C-use efficiency is reached, and nutrient release is minimized. This optimality principle is expected to be applicable across terrestrial-aquatic borders, to various elements, and at different trophic levels. While the growth rate maximization hypothesis has been evaluated for consumers and predators, in this contribution we test it for terrestrial and aquatic decomposers degrading resources across wide stoichiometry gradients. The optimality hypothesis predicts constant efficiencies at low substrate C:N and C:P, whereas above a stoichiometric threshold, C-use efficiency declines and nitrogen- and phosphorus-use efficiencies increase up to one. Thus, high resource C:N and C:P lead to low C-use efficiency, but effective retention of nitrogen and phosphorus. Predictions are broadly consistent with efficiency trends in decomposer communities across terrestrial and aquatic ecosystems.
Evolutionary agroecology: individual fitness and population yield in wheat (Triticum aestivum).
Weiner, Jacob; Du, Yan-Lei; Zhang, Cong; Qin, Xiao-Liang; Li, Feng-Min
2017-09-01
Although the importance of group selection in nature is highly controversial, several researchers have argued that plant breeding for agriculture should be based on group selection, because the goal in agriculture is to optimize population production, not individual fitness. A core hypothesis behind this claim is that crop genotypes with the highest individual fitness in a mixture of genotypes will not produce the highest population yield, because fitness is often increased by "selfish" behaviors, which reduce population performance. We tested this hypothesis by growing 35 cultivars of spring wheat (Triticum aestivum L.) in mixtures and monocultures, and analyzing the relationship between population yield in monoculture and individual yield in mixture. The relationship was unimodal, as predicted. The highest-yielding populations were from cultivars that had intermediate fitness, and these produced, on average, 35% higher yields than cultivars with the highest fitness. It is unlikely that plant breeding or genetic engineering can improve traits that natural selection has been optimizing for millions of years, but there is unutilized potential in traits that increase crop yield by decreasing individual fitness. © 2017 by the Ecological Society of America.
Shifflett, Benjamin; Huang, Rong; Edland, Steven D
2017-01-01
Genotypic association studies are prone to inflated type I error rates if multiple hypothesis testing is performed, e.g., sequentially testing for recessive, multiplicative, and dominant risk. Alternatives to multiple hypothesis testing include the model-independent genotypic χ² test, the efficiency-robust MAX statistic, which corrects for multiple comparisons but with some loss of power, or a single Armitage test for multiplicative trend, which has optimal power when the multiplicative model holds but some loss of power when dominant or recessive models underlie the genetic association. We used Monte Carlo simulations to describe the relative performance of these three approaches under a range of scenarios. All three approaches maintained their nominal type I error rates. The genotypic χ² and MAX statistics were more powerful when testing a strictly recessive genetic effect or when testing a dominant effect when the allele frequency was high. The Armitage test for multiplicative trend was most powerful for the broad range of scenarios where heterozygote risk is intermediate between recessive and dominant risk. Moreover, all tests had limited power to detect recessive genetic risk unless the sample size was large, and conversely all tests were relatively well powered to detect dominant risk. Taken together, these results suggest the general utility of the multiplicative trend test when the underlying genetic model is unknown.
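Since the abstract turns on the Armitage test for multiplicative trend, a compact sketch may help; the counts and the normal approximation below are illustrative assumptions, not the study's simulation protocol.

    # Cochran-Armitage trend test on a 2x3 genotype table (aa, aA, AA),
    # using the conditional (hypergeometric) variance and a normal
    # approximation for the two-sided p-value.
    import numpy as np
    from scipy.stats import norm

    def armitage_trend(cases, controls, w=(0, 1, 2)):
        r = np.asarray(cases, float)
        s = np.asarray(controls, float)
        w = np.asarray(w, float)
        n = r + s                        # per-genotype totals
        N, R = n.sum(), r.sum()          # grand total, number of cases
        U = w @ r - R / N * (w @ n)      # score statistic
        var = R * (N - R) / (N**2 * (N - 1)) * (N * (n @ w**2) - (n @ w)**2)
        z = U / np.sqrt(var)
        return z, 2 * norm.sf(abs(z))

    z, p = armitage_trend(cases=(20, 55, 25), controls=(35, 50, 15))
    print(f"Z = {z:.2f}, p = {p:.4f}")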
Comparison of contraction times of a muscle and its motor units
NASA Technical Reports Server (NTRS)
Eldred, E.; Smith, L.; Edgerton, V. R.
1992-01-01
The twitch contraction time (CT) for each of 13 soleus (SOL) and 13 medial gastrocnemius (MG) muscles was compared with the mean CT from a sample of its motor units (MUs; 356 total) to see if the CT of a whole muscle when tested at its optimal length (Lo) differed systematically from that of its MUs tested at their individual Lo's. The CTs of the whole muscles were significantly longer, by a ratio of 1.13. This is consistent with the hypothesis that electrical-field effects result in a more protracted contraction of the individual muscle fiber.
Statistical considerations in monitoring birds over large areas
Johnson, D.H.
2000-01-01
The proper design of a monitoring effort depends primarily on the objectives desired, constrained by the resources available to conduct the work. Typically, managers have numerous objectives, such as determining abundance of the species, detecting changes in population size, evaluating responses to management activities, and assessing habitat associations. A design that is optimal for one objective will likely not be optimal for others. Careful consideration of the importance of the competing objectives may lead to a design that adequately addresses the priority concerns, although it may not be optimal for any individual objective. Poor design or inadequate sample sizes may result in such weak conclusions that the effort is wasted. Statistical expertise can be used at several stages, such as estimating power of certain hypothesis tests, but is perhaps most useful in fundamental considerations of describing objectives and designing sampling plans.
Mieres, Jennifer H; Shaw, Leslee J; Hendel, Robert C; Heller, Gary V
2009-01-01
Coronary artery disease remains the leading cause of morbidity and mortality in women. The optimal non-invasive test for evaluation of ischemic heart disease in women is unknown. Although current guidelines support the choice of the exercise tolerance test (ETT) as a first-line test for women with a normal baseline ECG and adequate exercise capabilities, supportive data for this recommendation are controversial. The What Is the Optimal Method for Ischemia Evaluation in Women? (WOMEN) study was designed to determine the optimal non-invasive strategy for CAD risk detection in intermediate- and high-risk women presenting with chest pain or equivalent symptoms suggestive of ischemic heart disease. The study will prospectively compare the 2-year event rates in women capable of performing exercise treadmill testing or Tc-99m tetrofosmin SPECT myocardial perfusion imaging (MPI). The study will enroll women presenting for the evaluation of chest pain or anginal equivalent symptoms, capable of performing >5 METs of exercise and at intermediate-high pretest risk for ischemic heart disease, who will be randomized to either ETT testing alone or with Tc-99m tetrofosmin SPECT MPI. The null hypothesis for this project is that the exercise ECG has the same negative predictive value for risk detection as gated myocardial perfusion SPECT in women. The primary aim is to compare 2-year cardiac event rates in women randomized to SPECT MPI to those randomized to ETT. The WOMEN study seeks to provide objective information for guidelines for the evaluation of symptomatic women with an intermediate-high likelihood of CAD.
DeLong, John P; Hanley, Torrance C
2013-01-01
The identification of trade-offs is necessary for understanding the evolution and maintenance of diversity. Here we employ the supply-demand (SD) body size optimization model to predict a trade-off between asymptotic body size and growth rate. We use the SD model to quantitatively predict the slope of the relationship between asymptotic body size and growth rate under high and low food regimes and then test the predictions against observations for Daphnia ambigua. Close quantitative agreement between observed and predicted slopes at both food levels lends support to the model and confirms that a 'rate-size' trade-off structures life history variation in this population. In contrast to classic life history expectations, growth and reproduction were positively correlated after controlling for the rate-size trade-off. We included 12 Daphnia clones in our study, but clone identity explained only some of the variation in life history traits. We also tested the hypothesis that growth rate would be positively related to intergenic spacer length (i.e. the growth rate hypothesis) across clones, but we found that clones with intermediate intergenic spacer lengths had larger asymptotic sizes and slower growth rates. Our results strongly support a resource-based optimization of body size following the SD model. Furthermore, because some resource allocation decisions necessarily precede others, understanding interdependent life history traits may require a more nested approach.
The Dilution Effect and Information Integration in Perceptual Decision Making
Hotaling, Jared M.; Cohen, Andrew L.; Shiffrin, Richard M.; Busemeyer, Jerome R.
2015-01-01
In cognitive science there is a seeming paradox: On the one hand, studies of human judgment and decision making have repeatedly shown that people systematically violate optimal behavior when integrating information from multiple sources. On the other hand, optimal models, often Bayesian, have been successful at accounting for information integration in fields such as categorization, memory, and perception. This apparent conflict could be due, in part, to different materials and designs that lead to differences in the nature of processing. Stimuli that require controlled integration of information, such as quantitative or linguistic information (commonly found in judgment studies), may lead to suboptimal performance. In contrast, perceptual stimuli may lend themselves to automatic processing, resulting in integration that is closer to optimal. We tested this hypothesis with an experiment in which participants categorized faces based on resemblance to a family patriarch. The amount of evidence contained in the top and bottom halves of each test face was independently manipulated. These data allow us to investigate a canonical example of sub-optimal information integration from the judgment and decision making literature, the dilution effect. Splitting the top and bottom halves of a face, a manipulation meant to encourage controlled integration of information, produced behavior that was farther from optimal and larger dilution effects. The Multi-component Information Accumulation model, a hybrid optimal/averaging model of information integration, successfully accounts for key accuracy, response time, and dilution effects. PMID:26406323
Dávid-Barrett, T.; Dunbar, R. I. M.
2013-01-01
Sociality is primarily a coordination problem. However, the social (or communication) complexity hypothesis suggests that the kinds of information that can be acquired and processed may limit the size and/or complexity of social groups that a species can maintain. We use an agent-based model to test the hypothesis that the complexity of information processed influences the computational demands involved. We show that successive increases in the kinds of information processed allow organisms to break through the glass ceilings that otherwise limit the size of social groups: larger groups can only be achieved at the cost of more sophisticated kinds of information processing that are disadvantageous when optimal group size is small. These results simultaneously support both the social brain and the social complexity hypotheses. PMID:23804623
Velásquez, Nelson A; Moreno-Gómez, Felipe N; Brunetti, Enzo; Penna, Mario
2018-05-03
Animal communication occurs in environments that affect the properties of signals as they propagate from senders to receivers. We studied the geographic variation of the advertisement calls of male Pleurodema thaul individuals from eight localities in Chile. Furthermore, by means of signal propagation experiments, we tested the hypothesis that local calls are better transmitted and less degraded than foreign calls (i.e. acoustic adaptation hypothesis). Overall, the advertisement calls varied greatly along the distribution of P. thaul in Chile, and it was possible to discriminate localities grouped into northern, central and southern stocks. Propagation distance affected signal amplitude and spectral degradation in all localities, but temporal degradation was only affected by propagation distance in one out of seven localities. Call origin affected signal amplitude in five out of seven localities and affected spectral and temporal degradation in six out of seven localities. In addition, in northern localities, local calls degraded more than foreign calls, and in southern localities the opposite was observed. The lack of a strict optimal relationship between signal characteristics and environment indicates partial concordance with the acoustic adaptation hypothesis. Inter-population differences in selectivity for call patterns may compensate for such environmental constraints on acoustic communication.
Intra-fraction motion of the prostate is a random walk
NASA Astrophysics Data System (ADS)
Ballhausen, H.; Li, M.; Hegemann, N.-S.; Ganswindt, U.; Belka, C.
2015-01-01
A random walk model for intra-fraction motion has been proposed, where at each step the prostate moves a small amount from its current position in a random direction. Online tracking data from perineal ultrasound is used to validate or reject this model against alternatives. Intra-fraction motion of a prostate was recorded by 4D ultrasound (Elekta Clarity system) during 84 fractions of external beam radiotherapy of six patients. In total, the center of the prostate was tracked for 8 h in intervals of 4 s. Maximum likelihood model parameters were fitted to the data. The null hypothesis of a random walk was tested with the Dickey-Fuller test. The null hypothesis of stationarity was tested by the Kwiatkowski-Phillips-Schmidt-Shin test. The increase of variance in prostate position over time and the variability in motility between fractions were analyzed. Intra-fraction motion of the prostate was best described as a stochastic process with an auto-correlation coefficient of ρ = 0.92 ± 0.13. The random walk hypothesis (ρ = 1) could not be rejected (p = 0.27). The static noise hypothesis (ρ = 0) was rejected (p < 0.001). The Dickey-Fuller test rejected the null hypothesis ρ = 1 in 25% to 32% of cases. On average, the Kwiatkowski-Phillips-Schmidt-Shin test rejected the null hypothesis ρ = 0 with a probability of 93% to 96%. The variance in prostate position increased linearly over time (r² = 0.9 ± 0.1). Variance kept increasing and did not settle at a maximum as would be expected from a stationary process. There was substantial variability in motility between fractions and patients, with maximum aberrations from the isocenter ranging from 0.5 mm to over 10 mm in one patient alone. In conclusion, evidence strongly suggests that intra-fraction motion of the prostate is a random walk and neither static (like inter-fraction setup errors) nor stationary (like a cyclic motion such as breathing, for example). The prostate tends to drift away from the isocenter during a fraction, and this variance increases with time, such that shorter fractions are beneficial with respect to intra-fraction motion. As a consequence, fixed safety margins (which would over-compensate at the beginning and under-compensate at the end of a fraction) cannot optimally account for intra-fraction motion. Instead, online tracking and position correction on-the-fly should be considered as the preferred approach to counter intra-fraction motion.
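The two unit-root tests named in this abstract are available in statsmodels; the sketch below applies them to a synthetic AR(1) drift trace. The sampling interval, autocorrelation, and noise scale are made-up stand-ins for the actual tracking data.

    import numpy as np
    from statsmodels.tsa.stattools import adfuller, kpss

    rng = np.random.default_rng(1)
    rho, steps = 0.92, 900               # ~1 h of positions at 4 s spacing
    x = np.zeros(steps)
    for t in range(1, steps):
        x[t] = rho * x[t - 1] + rng.normal(scale=0.1)   # mm-scale jitter

    adf_p = adfuller(x)[1]               # H0: unit root (random walk)
    kpss_p = kpss(x, regression="c")[1]  # H0: stationarity about a constant
    print(f"Dickey-Fuller p = {adf_p:.3f}  (H0: random walk)")
    print(f"KPSS p = {kpss_p:.3f}  (H0: stationary)")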
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zackay, Barak; Ofek, Eran O.
Image coaddition is one of the most basic operations that astronomers perform. In Paper I, we presented the optimal ways to coadd images in order to detect faint sources and to perform flux measurements under the assumption that the noise is approximately Gaussian. Here, we build on these results and derive from first principles a coaddition technique that is optimal for any hypothesis testing and measurement (e.g., source detection, flux or shape measurements, and star/galaxy separation), in the background-noise-dominated case. This method has several important properties. The pixels of the resulting coadded image are uncorrelated. This image preserves all the information (from the original individual images) on all spatial frequencies. Any hypothesis testing or measurement that can be done on all the individual images simultaneously, can be done on the coadded image without any loss of information. The PSF of this image is typically as narrow, or narrower than the PSF of the best image in the ensemble. Moreover, this image is practically indistinguishable from a regular single image, meaning that any code that measures any property on a regular astronomical image can be applied to it unchanged. In particular, the optimal source detection statistic derived in Paper I is reproduced by matched filtering this image with its own PSF. This coaddition process, which we call proper coaddition, can be understood as the maximum signal-to-noise ratio measurement of the Fourier transform of the image, weighted in such a way that the noise in the entire Fourier domain is of equal variance. This method has important implications for multi-epoch seeing-limited deep surveys, weak lensing galaxy shape measurements, and diffraction-limited imaging via speckle observations. The last topic will be covered in depth in future papers. We provide an implementation of this algorithm in MATLAB.
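A rough numpy rendering of the Fourier-space weighting the abstract describes is sketched below: each image is matched-filtered by its own PSF, inverse-variance weighted, and the sum is normalized so the noise variance is flat across spatial frequencies. This is our paraphrase under stated assumptions, not the authors' MATLAB implementation; the per-image flux scalings F are assumptions.

    import numpy as np

    def proper_coadd(images, psfs, sigmas, fluxes):
        """images, psfs: lists of 2-D arrays; sigmas, fluxes: per-image scalars."""
        num = np.zeros(images[0].shape, dtype=complex)
        den = np.zeros(images[0].shape)
        for M, P, s, F in zip(images, psfs, sigmas, fluxes):
            Phat = np.fft.fft2(np.fft.ifftshift(P))   # PSF assumed centered
            num += F * np.conj(Phat) * np.fft.fft2(M) / s**2
            den += (F * np.abs(Phat))**2 / s**2
        # the normalization flattens the noise spectrum -> uncorrelated pixels
        return np.fft.ifft2(num / np.sqrt(den + 1e-12)).real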
Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan
2017-07-01
Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.
Dietary nucleotides and early growth in formula-fed infants: a randomized controlled trial.
Singhal, Atul; Kennedy, Kathy; Lanigan, J; Clough, Helen; Jenkins, Wendy; Elias-Jones, Alun; Stephenson, Terrence; Dudek, Peter; Lucas, Alan
2010-10-01
Dietary nucleotides are nonprotein nitrogenous compounds that are found in high concentrations in breast milk and are thought to be conditionally essential nutrients in infancy. A high nucleotide intake has been suggested to explain some of the benefits of breastfeeding compared with formula feeding and to promote infant growth. However, relatively few large-scale randomized trials have tested this hypothesis in healthy infants. We tested the hypothesis that nucleotide supplementation of formula benefits early infant growth. Occipitofrontal head circumference, weight, and length were assessed in infants who were randomly assigned to groups fed nucleotide-supplemented (31 mg/L; n=100) or control formula without nucleotide supplementation (n=100) from birth to the age of 20 weeks, and in infants who were breastfed (reference group; n=101). Infants fed with nucleotide-supplemented formula had greater occipitofrontal head circumference at ages 8, 16, and 20 weeks than infants fed control formula (mean difference in z scores at 8 weeks: 0.4 [95% confidence interval: 0.1-0.7]; P=.006) even after adjustment for potential confounding factors (P=.002). Weight at 8 weeks and the increase in both occipitofrontal head circumference and weight from birth to 8 weeks were also greater in infants fed nucleotide-supplemented formula than in those fed control formula. Our data support the hypothesis that nucleotide supplementation leads to increased weight gain and head growth in formula-fed infants. Therefore, nucleotides could be conditionally essential for optimal infant growth in some formula-fed populations. Additional research is needed to test the hypothesis that the benefits of nucleotide supplementation for early head growth, a critical period for brain growth, have advantages for long-term cognitive development.
Explorations in statistics: hypothesis tests and P values.
Curran-Everett, Douglas
2009-06-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of Explorations in Statistics delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what we observe in the experiment to what we expect to see if the null hypothesis is true. The P value associated with the magnitude of that test statistic answers this question: if the null hypothesis is true, what proportion of possible values of the test statistic are at least as extreme as the one I got? Although statisticians continue to stress the limitations of hypothesis tests, there are two realities we must acknowledge: hypothesis tests are ingrained within science, and the simple test of a null hypothesis can be useful. As a result, it behooves us to explore the notions of hypothesis tests, test statistics, and P values.
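The question the P value answers can be made concrete with a short resampling exercise: if the null hypothesis is true, what proportion of test statistics are at least as extreme as the observed one? The two-sample setting below is invented for illustration.

    import numpy as np

    rng = np.random.default_rng(2)
    a = rng.normal(0.0, 1.0, 30)
    b = rng.normal(0.5, 1.0, 30)            # true shift of 0.5
    observed = a.mean() - b.mean()          # the test statistic

    pooled = np.concatenate([a, b])
    null = np.empty(10000)
    for i in range(null.size):              # resample under H0: no difference
        rng.shuffle(pooled)
        null[i] = pooled[:30].mean() - pooled[30:].mean()

    p = np.mean(np.abs(null) >= abs(observed))   # two-sided P value
    print(f"observed = {observed:.2f}, permutation P = {p:.4f}")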
Royzman, Edward B; Landy, Justin F; Leeman, Robert F
2015-03-01
Recent theorizing about the cognitive underpinnings of dilemmatic moral judgment has equated slow, deliberative thinking with the utilitarian disposition and fast, automatic thinking with the deontological disposition. However, evidence for the reflective utilitarian hypothesis, the hypothesized link between utilitarian judgment and individual differences in the capacity for rational reflection (gauged here by the Cognitive Reflection Test [CRT; Frederick, 2005]), has been inconsistent and difficult to interpret in light of several design flaws. In two studies aimed at addressing some of the flaws, we found robust evidence for a reflective minimalist hypothesis: high CRT performers tend to regard utility-optimizing acts as largely a matter of personal prerogative, permissible both to perform and to leave undone. This relationship between CRT and the "minimalist" orientation remained intact after controlling for age, sex, trait affect, social desirability, and educational attainment. No significant association was found between CRT and the strict utilitarian response pattern or between CRT and the strict deontological response pattern, nor did we find any significant association between CRT and willingness to act in the utility-optimizing manner. However, we found an inverse association between empathic concern and willingness to act in the utility-optimizing manner, but no comparable association between empathic concern and the deontological judgment pattern. Theoretical, methodological, and normative implications of the findings are discussed. Copyright © 2014 Cognitive Science Society, Inc.
Meinzer, Caitlyn; Martin, Renee; Suarez, Jose I
2017-09-08
In phase II trials, the most efficacious dose is usually not known. Moreover, given limited resources, it is difficult to robustly identify a dose while also testing for a signal of efficacy that would support a phase III trial. Recent designs have sought to be more efficient by exploring multiple doses through the use of adaptive strategies. However, the added flexibility may potentially increase the risk of making incorrect assumptions and reduce the total amount of information available across the dose range as a function of imbalanced sample size. To balance these challenges, a novel placebo-controlled design is presented in which a restricted Bayesian response adaptive randomization (RAR) is used to allocate a majority of subjects to the optimal dose of active drug, defined as the dose with the lowest probability of poor outcome. However, the allocation between subjects who receive active drug or placebo is held constant to retain the maximum possible power for a hypothesis test of overall efficacy comparing the optimal dose to placebo. The design properties and optimization of the design are presented in the context of a phase II trial for subarachnoid hemorrhage. For a fixed total sample size, a trade-off exists between the ability to select the optimal dose and the probability of rejecting the null hypothesis. This relationship is modified by the allocation ratio between active and control subjects, the choice of RAR algorithm, and the number of subjects allocated to an initial fixed allocation period. While a responsive RAR algorithm improves the ability to select the correct dose, there is an increased risk of assigning more subjects to a worse arm as a function of ephemeral trends in the data. A subarachnoid treatment trial is used to illustrate how this design can be customized for specific objectives and available data. Bayesian adaptive designs are a flexible approach to addressing multiple questions surrounding the optimal dose for treatment efficacy within the context of limited resources. While the design is general enough to apply to many situations, future work is needed to address interim analyses and the incorporation of models for dose response.
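A minimal sketch of the restricted allocation idea is given below, under a Beta-Bernoulli model: the placebo share stays fixed, and the active share is split across doses by the posterior probability that each dose has the lowest rate of poor outcome. The 30% placebo share, the priors, and the counts are illustrative assumptions, not the trial's algorithm.

    import numpy as np

    def allocation_probs(failures, totals, placebo_share=0.3, draws=10000):
        """failures/totals: per-dose counts of poor outcomes, active arms only."""
        rng = np.random.default_rng(3)
        f = np.asarray(failures)
        t = np.asarray(totals)
        # posterior draws of each dose's poor-outcome probability, Beta(1,1) prior
        theta = rng.beta(1 + f, 1 + t - f, size=(draws, len(t)))
        p_best = np.bincount(theta.argmin(axis=1), minlength=len(t)) / draws
        return placebo_share, (1 - placebo_share) * p_best

    placebo, active = allocation_probs(failures=[8, 5, 9], totals=[20, 20, 20])
    print(f"placebo share = {placebo}, dose shares = {np.round(active, 3)}")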
NASA Astrophysics Data System (ADS)
Psaltis, Dimitrios; Özel, Feryal; Chan, Chi-Kwan; Marrone, Daniel P.
2015-12-01
The half opening angle of a Kerr black hole shadow is always equal to (5 ± 0.2) GM/Dc², where M is the mass of the black hole and D is its distance from the Earth. Therefore, measuring the size of a shadow and verifying whether it is within this 4% range constitutes a null hypothesis test of general relativity. We show that the black hole in the center of the Milky Way, Sgr A*, is the optimal target for performing this test with upcoming observations using the Event Horizon Telescope (EHT). We use the results of optical/IR monitoring of stellar orbits to show that the mass-to-distance ratio for Sgr A* is already known to an accuracy of ∼4%. We investigate our prior knowledge of the properties of the scattering screen between Sgr A* and the Earth, the effects of which will need to be corrected for in order for the black hole shadow to appear sharp against the background emission. Finally, we explore an edge detection scheme for interferometric data and a pattern matching algorithm based on the Hough/Radon transform and demonstrate that the shadow of the black hole at 1.3 mm can be localized, in principle, to within ∼9%. All these results suggest that our prior knowledge of the properties of the black hole, of scattering broadening, and of the accretion flow can only limit this general relativistic null hypothesis test with EHT observations of Sgr A* to ≲10%.
Cirrus Cloud Seeding has Potential to Cool Climate
NASA Technical Reports Server (NTRS)
Storelvmo, T.; Kristjansson, J. E.; Muri, H.; Pfeffer, M.; Barahona, D.; Nenes, A.
2013-01-01
Cirrus clouds, thin ice clouds in the upper troposphere, have a net warming effect on Earth's climate. Consequently, a reduction in cirrus cloud amount or optical thickness would cool the climate. Recent research indicates that by seeding cirrus clouds with particles that promote ice nucleation, their lifetimes and coverage could be reduced. We have tested this hypothesis in a global climate model with a state-of-the-art representation of cirrus clouds and find that cirrus cloud seeding has the potential to cancel the entire warming caused by human activity from pre-industrial times to present day. However, the desired effect is only obtained for seeding particle concentrations that lie within an optimal range. With lower than optimal particle concentrations, a seeding exercise would have no effect. Moreover, a higher than optimal concentration results in an over-seeding that could have the deleterious effect of prolonging cirrus lifetime and contributing to global warming.
Offspring fitness and individual optimization of clutch size
Both, C.; Tinbergen, J. M.; Noordwijk, A. J. van
1998-01-01
Within-year variation in clutch size has been claimed to be an adaptation to variation in the individual capacity to raise offspring. We tested this hypothesis by manipulating brood size to one common size, and predicted that if clutch size is individually optimized, then birds with originally large clutches have a higher fitness than birds with originally small clutches. No evidence was found that fitness was related to the original clutch size, and in this population clutch size is thus not related to the parental capacity to raise offspring. However, offspring from larger original clutches recruited better than their nest mates that came from smaller original clutches. This suggests that early maternal or genetic variation in viability is related to clutch size.
Multistate and multihypothesis discrimination with open quantum systems
NASA Astrophysics Data System (ADS)
Kiilerich, Alexander Holm; Mølmer, Klaus
2018-05-01
We show how an upper bound for the ability to discriminate any number N of candidates for the Hamiltonian governing the evolution of an open quantum system may be calculated by numerically efficient means. Our method applies an effective master-equation analysis to evaluate the pairwise overlaps between candidate full states of the system and its environment pertaining to the Hamiltonians. These overlaps are then used to construct an N -dimensional representation of the states. The optimal positive-operator valued measure (POVM) and the corresponding probability of assigning a false hypothesis may subsequently be evaluated by phrasing optimal discrimination of multiple nonorthogonal quantum states as a semidefinite programming problem. We provide three realistic examples of multihypothesis testing with open quantum systems.
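The semidefinite program mentioned at the end of this abstract can be phrased directly in cvxpy: maximize the average success probability over POVM elements that are positive semidefinite and sum to the identity. The two-qubit-state example is a toy check against the Helstrom bound; it assumes cvxpy with an SDP-capable solver is installed.

    import numpy as np
    import cvxpy as cp

    rho = [np.array([[1, 0], [0, 0]], dtype=complex),          # |0><0|
           np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)]  # |+><+|
    p = [0.5, 0.5]
    d, N = 2, len(rho)

    M = [cp.Variable((d, d), hermitian=True) for _ in range(N)]
    constraints = [m >> 0 for m in M] + [sum(M) == np.eye(d)]
    objective = cp.Maximize(cp.real(sum(pi * cp.trace(m @ r)
                                        for pi, m, r in zip(p, M, rho))))
    prob = cp.Problem(objective, constraints)
    prob.solve()
    # Helstrom bound for these states and priors is about 0.8536
    print(f"max success probability = {prob.value:.4f}")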
Schymanski, Stanislaus J; Roderick, Michael L; Sivapalan, Murugesu; Hutley, Lindsay B; Beringer, Jason
2007-12-01
Photosynthesis provides plants with their main building material, carbohydrates, and with the energy necessary to thrive and prosper in their environment. We expect, therefore, that natural vegetation would evolve optimally to maximize its net carbon profit (NCP), the difference between carbon acquired by photosynthesis and carbon spent on maintenance of the organs involved in its uptake. We modelled NCP for an optimal vegetation for a site in the wet-dry tropics of north Australia based on this hypothesis and on an ecophysiological gas exchange and photosynthesis model, and compared the modelled CO2 fluxes and canopy properties with observations from the site. The comparison gives insights into theoretical and real controls on gas exchange and canopy structure, and supports the optimality approach for the modelling of gas exchange of natural vegetation. The main advantage of the optimality approach we adopt is that no assumptions about the particular vegetation of a site are required, making it a very powerful tool for predicting vegetation response to long-term climate or land use change.
Design optimization of a radial functionally graded dental implant.
Ichim, Paul I; Hu, Xiaozhi; Bazen, Jennifer J; Yi, Wei
2016-01-01
In this work, we use FEA to test the hypothesis that a low-modulus coating of a cylindrical zirconia dental implant would reduce the stresses in the peri-implant bone, and we use design optimization and the rule of mixtures to estimate the elastic modulus and the porosity of the coating that provides optimal stress shielding. We show that a low-modulus coating of a dental implant significantly reduces the maximum stresses in the peri-implant bone without affecting the average stresses, thus creating a potentially favorable biomechanical environment. Our results suggest that a resilient coating is capable of reducing the maximum compressive and tensile stresses in the peri-implant bone by up to 50% and the average stresses in the peri-implant bone by up to 15%. We further show that a transitional gradient between the high-modulus core and the low-modulus coating is not necessary, and for the considered zirconia/HA composite the optimal thickness of the coating is 100 µm, with its optimal elastic modulus at the lowest value considered, 45 GPa. © 2015 Wiley Periodicals, Inc.
Cognitive Fatigue Facilitates Procedural Sequence Learning.
Borragán, Guillermo; Slama, Hichem; Destrebecqz, Arnaud; Peigneux, Philippe
2016-01-01
Enhanced procedural learning has been evidenced in conditions where cognitive control is diminished, including hypnosis, disruption of prefrontal activity and non-optimal time of the day. Another condition depleting the availability of controlled resources is cognitive fatigue (CF). We tested the hypothesis that CF, eventually leading to diminished cognitive control, facilitates procedural sequence learning. In a two-day experiment, 23 young healthy adults were administered a serial reaction time task (SRTT) following the induction of high or low levels of CF, in a counterbalanced order. CF was induced using the Time load Dual-back (TloadDback) paradigm, a dual working memory task that allows tailoring cognitive load levels to the individual's optimal performance capacity. In line with our hypothesis, reaction times (RT) in the SRTT were faster in the high- than in the low-level fatigue condition, and performance improvement was higher for the sequential than the motor components. Altogether, our results suggest a paradoxical, facilitating impact of CF on procedural motor sequence learning. We propose that facilitated learning in the high-level fatigue condition stems from a reduction in the cognitive resources devoted to cognitive control processes that normally oppose automatic procedural acquisition mechanisms.
Anderson, D.R.
1974-01-01
Optimal exploitation strategies were studied for an animal population in a stochastic, serially correlated environment. This is a general case and encompasses a number of important cases as simplifications. Data on the mallard (Anas platyrhynchos) were used to explore the exploitation strategies and test several hypotheses because relatively much is known concerning the life history and general ecology of this species and extensive empirical data are available for analysis. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. Desirable properties of an optimal exploitation strategy were defined. A mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. Both the literature and the analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, alternative hypotheses were formulated: (1) exploitation mortality represents a largely additive form of mortality, or (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. Assuming that exploitation is largely an additive force of mortality, optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Optimal exploitation under this hypothesis tends to reduce the variance of the size of the population. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the breeding population. Environmental variables may be somewhat more important than the size of the breeding population to the production of young mallards. In contrast, the size of the breeding population appears to be more important in the exploitation process than is the state of the environment. The form of the exploitation strategy appears to be relatively insensitive to small changes in the production rate. In general, the relative importance of the size of the breeding population may decrease as fecundity increases. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, harvest rate, or designed to maintain a constant breeding population size is inefficient.
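The flavor of such stochastic dynamic programming solutions can be conveyed with a toy value-iteration sketch; the population dynamics, survival rates, and compensation rule below are invented for illustration and are not the mallard model estimated in the study.

    import numpy as np

    pop = np.arange(0, 101)              # population states
    harvests = np.arange(0, 51)          # admissible harvest levels
    V = np.zeros(pop.size)
    additive = True                      # flip to test the compensatory hypothesis

    for _ in range(200):                 # value iteration to a fixed point
        Q = np.full((pop.size, harvests.size), -np.inf)
        for i, n in enumerate(pop):
            for j, h in enumerate(harvests):
                if h > n:
                    break
                if additive:
                    survivors = 0.6 * (n - h)                   # harvest adds mortality
                else:
                    survivors = 0.6 * n - max(0, h - 0.2 * n)   # partly compensated
                nxt = int(np.clip(survivors * 1.8, 0, 100))     # reproduction
                Q[i, j] = h + 0.95 * V[nxt]
        V = Q.max(axis=1)
        # (a stochastic environment would average V over random next states)

    policy = Q.argmax(axis=1)
    print("optimal harvest at N = 80:", harvests[policy[80]])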
Novoseltsev, V N; Arking, R; Novoseltseva, J A; Yashin, A I
2002-06-01
The general purpose of the paper is to test evolutionary optimality theories with experimental data on reproduction, energy consumption, and longevity in a particular Drosophila genotype. We describe the resource allocation in Drosophila females in terms of the oxygen consumption rates devoted to reproduction and to maintenance. The maximum ratio of the component spent on reproduction to the total rate of oxygen consumption, which can be realized by the female reproductive machinery, is called metabolic reproductive efficiency (MRE). We regard MRE as an evolutionary constraint. We demonstrate that MRE may be evaluated for a particular Drosophila phenotype given the fecundity pattern, the age-related pattern of oxygen consumption rate, and the longevity. We use a homeostatic model of aging to simulate a life history of a representative female fly, which describes the control strain in the long-term experiments with the Wayne State Drosophila genotype. We evaluate the theoretically optimal trade-offs in this genotype. Then we apply the Van Noordwijk-de Jong resource acquisition and allocation model, Kirkwood's disposable soma theory, and the Partridge-Barton optimality approach to test if the experimentally observed trade-offs may be regarded as close to the theoretically optimal ones. We demonstrate that the two approaches by Partridge-Barton and Kirkwood allow a positive answer to the question, whereas the Van Noordwijk-de Jong approach may be used to illustrate the optimality. We discuss the prospects of applying the proposed technique to various Drosophila experiments, in particular those including manipulations affecting fecundity.
Testing the arousal hypothesis of neonatal imitation in infant rhesus macaques
Pedersen, Eric J.; Simpson, Elizabeth A.
2017-01-01
Neonatal imitation is the matching of (often facial) gestures by newborn infants. Some studies suggest that performance of facial gestures is due to general arousal, which may produce false positives on neonatal imitation assessments. Here we examine whether arousal is linked to facial gesturing in newborn infant rhesus macaques (Macaca mulatta). We tested 163 infants in a neonatal imitation paradigm in their first postnatal week and analyzed their lipsmacking gestures (a rapid opening and closing of the mouth), tongue protrusion gestures, and yawn responses (a measure of arousal). Arousal increased during dynamic stimulus presentation compared to the static baseline across all conditions, and arousal was higher in the facial gestures conditions than the nonsocial control condition. However, even after controlling for arousal, we found a condition-specific increase in facial gestures in infants who matched lipsmacking and tongue protrusion gestures. Thus, we found no support for the arousal hypothesis. Consistent with reports in human newborns, imitators’ propensity to match facial gestures is based on abilities that go beyond mere arousal. We discuss optimal testing conditions to minimize potentially confounding effects of arousal on measurements of neonatal imitation. PMID:28617816
Seeking health information on the web: positive hypothesis testing.
Kayhan, Varol Onur
2013-04-01
The goal of this study is to investigate positive hypothesis testing among consumers of health information when they search the Web. After demonstrating the extent of positive hypothesis testing using Experiment 1, we conduct Experiment 2 to test the effectiveness of two debiasing techniques. A total of 60 undergraduate students searched a tightly controlled online database developed by the authors to test the validity of a hypothesis. The database had four abstracts that confirmed the hypothesis and three abstracts that disconfirmed it. Findings of Experiment 1 showed that a majority of participants (85%) exhibited positive hypothesis testing. In Experiment 2, we found that the recommendation technique was not effective in reducing positive hypothesis testing, since none of the participants assigned to this server could retrieve disconfirming evidence. Experiment 2 also showed that the incorporation technique successfully reduced positive hypothesis testing, since 75% of the participants could retrieve disconfirming evidence. Positive hypothesis testing on the Web is an understudied topic. More studies are needed to validate the effectiveness of the debiasing techniques discussed in this study and to develop new techniques. Search engine developers should consider developing new options for users so that both confirming and disconfirming evidence can be presented in search results as users test hypotheses using search engines. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Hozo, Iztok; Schell, Michael J; Djulbegovic, Benjamin
2008-07-01
The absolute truth in research is unobtainable, as no evidence or research hypothesis is ever 100% conclusive. Therefore, all data and inferences can in principle be considered as "inconclusive." Scientific inference and decision-making need to take into account errors, which are unavoidable in the research enterprise. The errors can occur at the level of conclusions that aim to discern the truthfulness of research hypothesis based on the accuracy of research evidence and hypothesis, and decisions, the goal of which is to enable optimal decision-making under present and specific circumstances. To optimize the chance of both correct conclusions and correct decisions, the synthesis of all major statistical approaches to clinical research is needed. The integration of these approaches (frequentist, Bayesian, and decision-analytic) can be accomplished through formal risk:benefit (R:B) analysis. This chapter illustrates the rational choice of a research hypothesis using R:B analysis based on decision-theoretic expected utility theory framework and the concept of "acceptable regret" to calculate the threshold probability of the "truth" above which the benefit of accepting a research hypothesis outweighs its risks.
Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing
2016-01-01
A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain, evaluating the similarity between a reference signal (noise signal) and the original signal and removing the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitive evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitiveness of symptom parameters (SPs) for condition diagnosis. In this way, SPs that have high sensitiveness for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment for rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006
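The core filtering idea, suppressing frequency bins whose content is statistically similar to a noise reference, can be sketched briefly; the quantile rule below is a simplistic placeholder for the paper's statistic test, and α is fixed here rather than tuned by PSO.

    import numpy as np

    def spectral_noise_filter(signal, noise_ref, alpha=0.05):
        S = np.fft.rfft(signal)
        mag_s = np.abs(S)
        mag_r = np.abs(np.fft.rfft(noise_ref))
        # reject bins that look like noise: keep only those exceeding the
        # (1 - alpha) quantile of the reference magnitudes (hypothetical rule)
        threshold = np.quantile(mag_r, 1 - alpha)
        S[mag_s <= threshold] = 0.0
        return np.fft.irfft(S, n=len(signal))

    fs = 2000
    t = np.arange(0, 1, 1 / fs)
    rng = np.random.default_rng(4)
    noise = rng.normal(0, 1.0, t.size)
    weak_fault = 0.4 * np.sin(2 * np.pi * 137 * t)   # weak periodic component
    cleaned = spectral_noise_filter(weak_fault + noise, noise)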
Integration of QSAR and in vitro toxicology.
Barratt, M D
1998-01-01
The principles of quantitative structure-activity relationships (QSAR) are based on the premise that the properties of a chemical are implicit in its molecular structure. Therefore, if a mechanistic hypothesis can be proposed linking a group of related chemicals with a particular toxic end point, the hypothesis can be used to define relevant parameters to establish a QSAR. Ways in which QSAR and in vitro toxicology can complement each other in development of alternatives to live animal experiments are described and illustrated by examples from acute toxicological end points. Integration of QSAR and in vitro methods is examined in the context of assessing mechanistic competence and improving the design of in vitro assays and the development of prediction models. The nature of biological variability is explored together with its implications for the selection of sets of chemicals for test development, optimization, and validation. Methods are described to support the use of data from in vivo tests that do not meet today's stringent requirements of acceptability. Integration of QSAR and in vitro methods into strategic approaches for the replacement, reduction, and refinement of the use of animals is described with examples. PMID:9599692
Two-faced property of a market factor in asset pricing and diversification effect
NASA Astrophysics Data System (ADS)
Eom, Cheoljun
2017-04-01
This study empirically investigates the test hypothesis that a market factor acting as a representative common factor in the pricing models has a negative influence on constructing a well-diversified portfolio from the Markowitz mean-variance optimization function (MVOF). We use the comparative correlation matrix (C-CM) method to control a single eigenvalue among all eigenvalues included in the sample correlation matrix (S-CM), through the random matrix theory (RMT). In particular, this study observes the effect of the largest eigenvalue that has the property of the market factor. According to the results, the largest eigenvalue has the highest explanatory power on the stock return changes. The C-CM without the largest eigenvalue in the S-CM constructs a more diversified portfolio capable of improving the practical applicability of the MVOF. Moreover, the more diversified portfolio constructed from this C-CM has better out-of-sample performance in the future period. These results support the test hypothesis for the two-faced property of the market factor, defined by the largest eigenvalue.
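The construction of a C-CM without the market mode can be sketched in a few lines: diagonalize the sample correlation matrix, subtract the largest-eigenvalue component, restore the unit diagonal, and feed the result to the mean-variance optimizer. The synthetic returns and minimum-variance-style weights below are our illustration, not the study's data or exact procedure.

    import numpy as np

    rng = np.random.default_rng(5)
    market = rng.normal(size=500)
    returns = 0.7 * market[:, None] + rng.normal(size=(500, 20))  # 20 stocks
    C = np.corrcoef(returns, rowvar=False)                        # S-CM

    vals, vecs = np.linalg.eigh(C)           # eigenvalues in ascending order
    k = np.argmax(vals)                      # market mode = largest eigenvalue
    C_ccm = C - vals[k] * np.outer(vecs[:, k], vecs[:, k])
    np.fill_diagonal(C_ccm, 1.0)             # restore unit diagonal

    w = np.linalg.solve(C_ccm, np.ones(20))  # minimum-variance-style weights
    w /= w.sum()
    print("largest eigenvalue share:", vals[k] / vals.sum())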
Frequency Spectrum Neutrality Tests: One for All and All for One
Achaz, Guillaume
2009-01-01
Neutrality tests based on the frequency spectrum (e.g., Tajima's D or Fu and Li's F) are commonly used by population geneticists as routine tests to assess the goodness-of-fit of the standard neutral model on their data sets. Here, I show that these neutrality tests are specific instances of a general model that encompasses them all. I illustrate how this general framework can be taken advantage of to devise new more powerful tests that better detect deviations from the standard model. Finally, I exemplify the usefulness of the framework on SNP data by showing how it supports the selection hypothesis in the lactase human gene by overcoming the ascertainment bias. The framework presented here paves the way for constructing novel tests optimized for specific violations of the standard model that ultimately will help to unravel scenarios of evolution. PMID:19546320
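As a concrete instance of the frequency-spectrum family discussed in this abstract, Tajima's D can be computed from a 0/1 matrix of sequences by sites; the constants follow Tajima (1989), while the random data are purely illustrative.

    import numpy as np

    def tajimas_d(snp):
        """snp: 0/1 array, rows = sequences, columns = segregating sites."""
        n, S = snp.shape
        d = snp.sum(axis=0)                              # derived-allele counts
        pi = np.sum(2.0 * d * (n - d) / (n * (n - 1)))   # mean pairwise diffs
        i = np.arange(1, n)
        a1, a2 = np.sum(1.0 / i), np.sum(1.0 / i**2)
        b1 = (n + 1) / (3.0 * (n - 1))
        b2 = 2.0 * (n * n + n + 3) / (9.0 * n * (n - 1))
        c1 = b1 - 1.0 / a1
        c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
        e1, e2 = c1 / a1, c2 / (a1**2 + a2)
        return (pi - S / a1) / np.sqrt(e1 * S + e2 * S * (S - 1))

    rng = np.random.default_rng(6)
    snp = rng.integers(0, 2, size=(12, 60))
    snp = snp[:, (snp.sum(0) > 0) & (snp.sum(0) < 12)]   # keep segregating sites
    print(f"Tajima's D = {tajimas_d(snp):.3f}")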
Koelewijn, Anne D; van den Bogert, Antonie J
2016-09-01
Despite having a fully functional knee and hip in both legs, asymmetries in joint moments of the knee and hip are often seen in gait of persons with a unilateral transtibial amputation (TTA), possibly resulting in excessive joint loading. We hypothesize that persons with a TTA can walk with more symmetric joint moments at the cost of increased effort or abnormal kinematics. The hypothesis was tested using predictive simulations of gait. Open loop controls of one gait cycle were found by solving an optimization problem that minimizes a combination of walking effort and tracking error in joint angles, ground reaction force and gait cycle duration. A second objective was added to penalize joint moment asymmetry, creating a multi-objective optimization problem. A Pareto front was constructed by changing the weights of the objectives and three solutions were analyzed to study the effect of increasing joint moment symmetry. When the optimization placed more weight on moment symmetry, walking effort increased and kinematics became less normal, confirming the hypothesis. TTA gait improved with a moderate increase in joint moment symmetry. At a small cost of effort and abnormal kinematics, the peak hip extension moment in the intact leg was decreased significantly, and so was the joint contact force in the knee and hip. Additional symmetry required a significant increase in walking effort and the joint contact forces in both hips became significantly higher than in able-bodied gait. Copyright © 2016 Elsevier B.V. All rights reserved.
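The Pareto-front construction itself is a generic recipe that can be sketched on toy objectives: scalarize the two costs with a weight, sweep the weight, and record the trade-off. The objective functions below are invented stand-ins for walking effort and joint-moment asymmetry; the actual study minimizes these quantities inside an optimal-control gait simulation.

```python
import numpy as np
from scipy.optimize import minimize

def effort(x):     return (x[0] - 1.0)**2 + 0.5 * x[1]**2       # stand-in objective
def asymmetry(x):  return (x[0] + 1.0)**2 + (x[1] - 2.0)**2     # stand-in objective

front = []
for w in np.linspace(0.0, 1.0, 21):          # sweep the weight on asymmetry
    res = minimize(lambda x: (1 - w) * effort(x) + w * asymmetry(x), x0=[0.0, 0.0])
    front.append((effort(res.x), asymmetry(res.x)))

for e, a in front[::5]:                      # a few points along the Pareto front
    print(f"effort={e:.3f}  asymmetry={a:.3f}")
```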
Biomarker selection for medical diagnosis using the partial area under the ROC curve
2014-01-01
Background: A biomarker is usually used as a diagnostic or assessment tool in medical research. Finding an ideal biomarker is not easy, and combining multiple biomarkers provides a promising alternative. Moreover, some combined biomarkers based on an optimal linear combination do not have enough discriminatory power. The aim of this study was therefore to find significant biomarkers based on the optimal linear combination maximizing the partial area under the receiver operating characteristic curve (pAUC). Methods: Under the binormality assumption, we obtain the optimal linear combination of biomarkers maximizing the pAUC. Related statistical tests are developed for the assessment of a biomarker set and of an individual biomarker. Stepwise biomarker selection procedures are introduced to identify those biomarkers of statistical significance. Results: Results from a simulation study and three real examples (Duchenne muscular dystrophy, heart disease, and breast tissue) show that our methods are most suitable for biomarker selection in data sets with a moderate number of biomarkers. Conclusions: Our proposed biomarker selection approaches can be used to find significant biomarkers based on hypothesis testing. PMID:24410929
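A minimal empirical version of the combination-then-pAUC idea (not the paper's closed-form binormal derivation) can be sketched as follows; the data, the false-positive range, and the optimizer are all illustrative. Because the empirical pAUC is a step function of the weights, a derivative-free optimizer is used.

```python
import numpy as np
from scipy.optimize import minimize

def pauc(scores_pos, scores_neg, fpr_max=0.2):
    """Empirical partial AUC over FPR in [0, fpr_max] (rectangle-rule approximation)."""
    thr = np.sort(scores_neg)[::-1]                    # thresholds from negative scores
    fpr = (np.arange(len(thr)) + 1) / len(thr)
    tpr = np.array([(scores_pos > t).mean() for t in thr])
    m = fpr <= fpr_max
    return tpr[m].mean() * fpr_max

rng = np.random.default_rng(2)
X_neg = rng.normal(0.0, 1.0, size=(200, 3))            # non-diseased, 3 markers
X_pos = rng.normal([0.8, 0.3, 0.1], 1.0, size=(200, 3))  # diseased

def neg_pauc(w):                                       # objective: maximize pAUC
    return -pauc(X_pos @ w, X_neg @ w)

res = minimize(neg_pauc, x0=np.ones(3), method="Nelder-Mead")
print("weights:", res.x / np.linalg.norm(res.x), " pAUC:", -res.fun)
```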
DOE Office of Scientific and Technical Information (OSTI.GOV)
Psaltis, Dimitrios; Özel, Feryal; Chan, Chi-Kwan
2015-12-01
The half opening angle of a Kerr black hole shadow is always equal to (5 ± 0.2)GM/Dc², where M is the mass of the black hole and D is its distance from the Earth. Therefore, measuring the size of a shadow and verifying whether it is within this 4% range constitutes a null hypothesis test of general relativity. We show that the black hole in the center of the Milky Way, Sgr A*, is the optimal target for performing this test with upcoming observations using the Event Horizon Telescope (EHT). We use the results of optical/IR monitoring of stellar orbits to show that the mass-to-distance ratio for Sgr A* is already known to an accuracy of ∼4%. We investigate our prior knowledge of the properties of the scattering screen between Sgr A* and the Earth, the effects of which will need to be corrected for in order for the black hole shadow to appear sharp against the background emission. Finally, we explore an edge detection scheme for interferometric data and a pattern matching algorithm based on the Hough/Radon transform and demonstrate that the shadow of the black hole at 1.3 mm can be localized, in principle, to within ∼9%. All these results suggest that our prior knowledge of the properties of the black hole, of scattering broadening, and of the accretion flow can only limit this general relativistic null hypothesis test with EHT observations of Sgr A* to ≲10%.
The importance of physiological ecology in conservation biology
Tracy, C.R.; Nussear, K.E.; Esque, T.C.; Dean-Bradley, K.; DeFalco, L.A.; Castle, K.T.; Zimmerman, L.C.; Espinoza, R.E.; Barber, A.M.
2006-01-01
Many of the threats to the persistence of populations of sensitive species have physiological or pathological mechanisms, and those mechanisms are best understood through the inherently integrative discipline of physiological ecology. The desert tortoise was listed under the Endangered Species Act largely due to a newly recognized upper respiratory disease thought to cause mortality in individuals and severe declines in populations. Numerous hypotheses about the threats to the persistence of desert tortoise populations involve the acquisition of nutrients and its connection to stress and disease. The nutritional wisdom hypothesis posits that animals should forage not for particular food items, but instead for particular nutrients such as calcium and phosphorus used in building bones. The optimal foraging hypothesis suggests that, in circumstances of resource abundance, tortoises should forage as dietary specialists as a means of maximizing intake of resources. The optimal digestion hypothesis suggests that tortoises should process ingesta in ways that regulate assimilation rate. Finally, the cost-of-switching hypothesis suggests that herbivores, like the desert tortoise, should avoid switching food types to avoid negatively affecting the microbe community responsible for fermenting plants into energy and nutrients. Combining hypotheses into a resource acquisition theory leads to novel predictions that are generally supported by data presented here. Testing hypotheses, and synthesizing test results into a theory, provides a robust scientific alternative to the popular use of untested hypotheses and unanalyzed data to assert the needs of species. The scientific approach should focus on hypotheses concerning anthropogenic modifications of the environment that impact physiological processes ultimately important to population phenomena. We show how such impacts as nutrient starvation can cause physiological stress, and that the endocrine mechanisms involved with stress can result in disease. Finally, our new syntheses evince a new hypothesis. Free molecules of the stress hormone corticosterone can inhibit immunity, and the abundance of "free corticosterone" in the blood (thought to be the active form of the hormone) is regulated when the corticosterone molecules combine with binding globulins. The sex hormone, testosterone, combines with the same binding globulin. High levels of testosterone, naturally occurring in the breeding season, may be further enhanced in populations at high densities, and the resulting excess testosterone may compete with binding globulins, thereby releasing corticosterone and reducing immunity to disease. This sequence could result in physiological and pathological phenomena leading to population cycles with a period that would be essentially impossible to observe in the desert tortoise. Such cycles could obscure population fluctuations of anthropogenic origin. © 2006 The Author(s).
The dependency paradox in close relationships: accepting dependence promotes independence.
Feeney, Brooke C
2007-02-01
Using multiple methods, this investigation tested the hypothesis that a close relationship partner's acceptance of dependence when needed (e.g., sensitive responsiveness to distress cues) is associated with less dependence, more autonomous functioning, and more self-sufficiency (as opposed to more dependence) on the part of the supported individual. In two studies, measures of acceptance of dependency needs and independent functioning were obtained through couple member reports, by observing couple members' behaviors during laboratory interactions, by observing responses to experimentally manipulated partner assistance provided during an individual laboratory task, and by following couples over a period of 6 months to examine independent goal striving as a function of prior assessments of dependency acceptance. Results provided converging evidence in support of the proposed hypothesis. Implications of the importance of close relationships for optimal individual functioning are discussed. (© 2007 APA, all rights reserved).
Effects of strategy on visual working memory capacity
Bengson, Jesse J.; Luck, Steven J.
2015-01-01
Substantial evidence suggests that individual differences in estimates of working memory capacity reflect differences in how effectively people use their intrinsic storage capacity. This suggests that estimated capacity could be increased by instructions that encourage more effective encoding strategies. The present study tested this by giving different participants explicit strategy instructions in a change detection task. Compared to a condition in which participants were simply told to do their best, we found that estimated capacity was increased for participants who were instructed to remember the entire visual display, even at set sizes beyond their capacity. However, no increase in estimated capacity was found for a group that was told to focus on a subset of the items in supracapacity arrays. This finding confirms the hypothesis that encoding strategies may influence visual working memory performance, and it is contrary to the hypothesis that the optimal strategy is to filter out any items beyond the storage capacity. PMID:26139356
Taxonomic triage and the poverty of phylogeny.
Wheeler, Quentin D
2004-01-01
Revisionary taxonomy is frequently dismissed as merely descriptive, which belies its strong intellectual content and hypothesis-driven nature. Funding for taxonomy is inadequate and largely diverted to studies of phylogeny that neither improve classifications nor nomenclature. Phylogenetic classifications are optimal for storing and predicting information, but phylogeny divorced from taxonomy is ephemeral and erodes the accuracy and information content of the language of biology. Taxonomic revisions and monographs are efficient, high-throughput species hypothesis-testing devices that are ideal for the World Wide Web. Taxonomic knowledge remains essential to credible biological research and is made urgent by the biodiversity crisis. Theoretical and technological advances and threats of mass species extinctions indicate that this is the time for a renaissance in taxonomy. Clarity of vision and courage of purpose are needed from individual taxonomists and natural history museums to bring about this evolution of taxonomy into the information age. PMID:15253345
Decision-making in plants under competition.
Gruntman, Michal; Groß, Dorothee; Májeková, Maria; Tielbörger, Katja
2017-12-21
Plants can plastically respond to light competition in three strategies, comprising vertical growth, which promotes competitive dominance; shade tolerance, which maximises performance under shade; or lateral growth, which offers avoidance of competition. Here, we test the hypothesis that plants can 'choose' between these responses, according to their abilities to competitively overcome their neighbours. We study this hypothesis in the clonal plant Potentilla reptans using an experimental setup that simulates both the height and density of neighbours, thus presenting plants with different light-competition scenarios. Potentilla reptans ramets exhibit the highest vertical growth under simulated short-dense neighbours, highest specific leaf area (leaf area/dry mass) under tall-dense neighbours, and tend to increase total stolon length under tall-sparse neighbours. These responses suggest shifts between 'confrontational' vertical growth, shade tolerance and lateral-avoidance, respectively, and provide evidence that plants adopt one of several alternative plastic responses in a way that optimally corresponds to prevailing light-competition scenarios.
New methods of testing nonlinear hypothesis using iterative NLLS estimator
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This research paper discusses methods of testing nonlinear hypotheses using an iterative Nonlinear Least Squares (NLLS) estimator, as explained by Takeshi Amemiya [1]. In the present paper, a modified Wald test statistic due to Engle [6] is proposed for testing a nonlinear hypothesis with the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses using an iterative NLLS estimator based on nonlinear studentized residuals is also proposed, and an innovative method of testing nonlinear hypotheses using an iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses; this paper uses the asymptotic properties of the nonlinear least squares estimator established by Jennrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors, and studied the problem of heteroscedasticity with reference to nonlinear regression models with suitable illustrations. William Greene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
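As a concrete instance of a Wald-type test of a nonlinear restriction after NLLS estimation (in the general spirit of the methods surveyed above, not the paper's specific statistic), one can fit a nonlinear model and apply the delta method to the restriction; the model and the restriction below are illustrative.

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
x = np.linspace(0.0, 2.0, 80)
y = 2.0 * np.exp(0.5 * x) + rng.normal(0.0, 0.2, size=x.size)

f = lambda x, b0, b1: b0 * np.exp(b1 * x)        # nonlinear regression model
beta, cov = curve_fit(f, x, y, p0=[1.0, 1.0])    # NLLS estimate and covariance

g = beta[0] * beta[1] - 1.0                      # nonlinear restriction g(b) = 0
G = np.array([beta[1], beta[0]])                 # gradient of g w.r.t. (b0, b1)
W = g**2 / (G @ cov @ G)                         # Wald statistic, ~ chi^2_1 under H0
print("W =", W, " p =", stats.chi2.sf(W, df=1))
```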
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Brennan T.; Welch, Tim; Witt, Adam M.
The Multi-Year Plan for Research, Development, and Prototype Testing of Standard Modular Hydropower Technology (MYRP) presents a strategy for specifying, designing, testing, and demonstrating the efficacy of standard modular hydropower (SMH) as an environmentally compatible and cost-optimized renewable electricity generation technology. The MYRP provides the context, background, and vision for testing the SMH hypothesis: if standardization, modularity, and preservation of stream functionality become essential and fully realized features of hydropower technology, project design, and regulatory processes, they will enable previously unrealized levels of new project development with increased acceptance, reduced costs, increased predictability of outcomes, and increased value to stakeholders. To achieve success in this effort, the MYRP outlines a framework of stakeholder-validated criteria, models, design tools, testing facilities, and assessment protocols that will facilitate the development of next-generation hydropower technologies.
Lorenz, W; Stinner, B; Bauhofer, A; Rothmund, M; Celik, I; Fingerhut, A; Koller, M; Lorijn, R H; Nyström, P O; Sitter, H; Schein, M; Solomkin, J S; Troidl, H; Wyatt, J; Wittmann, D H
2001-03-01
Presentation of a novel study protocol to evaluate the effectiveness of an immune modifier (rhG-CSF, filgrastim): prevention of postoperative infectious complications and sub-optimal recovery from operation in patients with colorectal cancer and increased preoperative risk (ASA 3 and 4). The rationale and hypothesis are presented in this part of the protocol of the randomised, placebo-controlled, double-blinded, single-centre study performed at a university hospital (n = 40 patients for each group). Part one of this protocol describes the concepts of three major sections of the study: Definition of optimum and sub-optimal recovery after operation. Recovery, as an outcome, is not a simple univariate endpoint, but a complex construction of mechanistic variables (i.e. death, complications and health status assessed by the surgeon), quality of life expressed by the patient, and finally a weighted outcome judgement by both the patient and the surgeon (true endpoint). Its conventional early assessment within 14-28 days is artificial: longer periods (such as 6 months) are needed for the patient to state: "I am now as well as I was before". Identification of suitable target patients: the use of biological response modifiers (immune modulators) in addition to traditional prophylaxes (i.e. antibiotics, heparin, volume substitutes) may improve postoperative outcome in appropriately selected patients with reduced host defence and increased immunological stress response, but these have to be defined. Patients classified as ASA 3 and 4 (American Society of Anesthesiologists) and with colorectal cancer will be studied to test this hypothesis. Choice of biological response modifier: filgrastim has been chosen as an example of a biological response modifier because it was effective in a new study type, clinic-modelling randomised trials in rodents, and has shown promise in some clinical trials for indications other than preoperative prophylaxis. It has also enhanced host defence and has been anti-inflammatory in basic research. The following hypothesis will be tested in patients with operations for colorectal cancer and increased preoperative risk (ASA 3 and 4): is the outcome as evaluated by the hermeneutic endpoint (quality of life expressed by the patient) and mechanistic endpoints (mortality rate, complication rate, relative hospital stay, assessed by the doctor) improved in the group receiving filgrastim prophylaxis in comparison with the placebo group? Quality of life will be the first primary endpoint in the hierarchical, statistical testing of confirmatory analysis.
Tuffaha, Haitham W; Reynolds, Heather; Gordon, Louisa G; Rickard, Claire M; Scuffham, Paul A
2014-12-01
Value of information analysis has been proposed as an alternative to the standard hypothesis testing approach, which is based on type I and type II errors, in determining sample sizes for randomized clinical trials. However, in addition to sample size calculation, value of information analysis can optimize other aspects of research design such as possible comparator arms and alternative follow-up times, by considering trial designs that maximize the expected net benefit of research, which is the difference between the expected cost of the trial and the expected value of additional information. To apply value of information methods to the results of a pilot study on catheter securement devices to determine the optimal design of a future larger clinical trial. An economic evaluation was performed using data from a multi-arm randomized controlled pilot study comparing the efficacy of four types of catheter securement devices: standard polyurethane, tissue adhesive, bordered polyurethane and sutureless securement device. Probabilistic Monte Carlo simulation was used to characterize uncertainty surrounding the study results and to calculate the expected value of additional information. To guide the optimal future trial design, the expected costs and benefits of the alternative trial designs were estimated and compared. Analysis of the value of further information indicated that a randomized controlled trial on catheter securement devices is potentially worthwhile. Among the possible designs for the future trial, a four-arm study with 220 patients/arm would provide the highest expected net benefit corresponding to 130% return-on-investment. The initially considered design of 388 patients/arm, based on hypothesis testing calculations, would provide lower net benefit with return-on-investment of 79%. Cost-effectiveness and value of information analyses were based on the data from a single pilot trial which might affect the accuracy of our uncertainty estimation. Another limitation was that different follow-up durations for the larger trial were not evaluated. The value of information approach allows efficient trial design by maximizing the expected net benefit of additional research. This approach should be considered early in the design of randomized clinical trials. © The Author(s) 2014.
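The core value-of-information calculation is simple to sketch. The toy example below computes the population expected value of perfect information (EVPI) from simulated net-benefit draws for four arms; all numbers are invented, and the paper's analysis computes the expected value of sample information for each candidate trial design rather than EVPI.

```python
import numpy as np

rng = np.random.default_rng(4)
n_sims, population = 10_000, 50_000
# Posterior draws of per-patient net monetary benefit for 4 securement devices
nb = rng.normal(loc=[100.0, 140.0, 120.0, 130.0],
                scale=[60.0, 80.0, 70.0, 70.0],
                size=(n_sims, 4))

best_given_current = nb.mean(axis=0).max()       # choose now, under uncertainty
best_with_perfect_info = nb.max(axis=1).mean()   # choose after uncertainty resolves
evpi = (best_with_perfect_info - best_given_current) * population
print(f"population EVPI = {evpi:,.0f}; a trial costing less than this may be worthwhile")
```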
Influences of landscape heterogeneity on home-range sizes of brown bears
Mangipane, Lindsey S.; Belant, Jerrold L.; Hiller, Tim L.; Colvin, Michael E.; Gustine, David; Mangipane, Buck A.; Hilderbrand, Grant V.
2018-01-01
Animal space use is influenced by many factors and can affect individual survival and fitness. Under optimal foraging theory, individuals use landscapes to optimize high-quality resources while minimizing the amount of energy used to acquire them. The spatial resource variability hypothesis states that as patchiness of resources increases, individuals use larger areas to obtain the resources necessary to meet energetic requirements. Additionally, under the temporal resource variability hypothesis, seasonal variation in available resources can reduce distances moved while providing a variety of food sources. Our objective was to determine if seasonal home ranges of brown bears (Ursus arctos) were influenced by temporal availability and spatial distribution of resources and whether individual reproductive status, sex, or size (i.e., body mass) mediated space use. To test our hypotheses, we radio-collared brown bears (n = 32 [9 male, 23 female]) in 2014–2016 and used 18 a priori-selected linear models to evaluate seasonal utilization distributions (UD) in relation to our hypotheses. Our top-ranked model by AICc supported the spatial resource variability hypothesis and included percentage of like adjacency (PLADJ) of all cover types (P < 0.01), reproductive class (P > 0.17 for males, solitary females, and females with dependent young), and body mass (kg; P = 0.66). Based on this model, for every percentage increase in PLADJ, UD area was predicted to increase 1.16 times for all sex and reproductive classes. Our results suggest that landscape heterogeneity influences brown bear space use; however, we found that bears used larger areas when landscape homogeneity increased, presumably to gain a diversity of food resources. Our results did not support the temporal resource variability hypothesis, suggesting that the spatial distribution of food was more important than seasonal availability in relation to brown bear home range size.
Optimality of the basic colour categories for classification
Griffin, Lewis D
2005-01-01
Categorization of colour has been widely studied as a window into human language and cognition, and quite separately has been used pragmatically in image-database retrieval systems. This suggests the hypothesis that the best category system for pragmatic purposes coincides with human categories (i.e. the basic colours). We have tested this hypothesis by assessing the performance of different category systems in a machine-vision task. The task was the identification of the odd-one-out from triples of images obtained using a web-based image-search service. In each triple, two of the images had been retrieved using the same search term, the other a different term. The terms were simple concrete nouns. The results were as follows: (i) the odd-one-out task can be performed better than chance using colour alone; (ii) basic colour categorization performs better than random systems of categories; (iii) a category system that performs better than the basic colours could not be found; and (iv) it is not just the general layout of the basic colours that is important, but also the detail. We conclude that (i) the results support the plausibility of an explanation for the basic colours as a result of a pressure-to-optimality and (ii) the basic colours are good categories for machine vision image-retrieval systems. PMID:16849219
Willmott, Keith R; Robinson Willmott, Julia C; Elias, Marianne; Jiggins, Chris D
2017-05-31
Mimicry is one of the best-studied examples of adaptation, and recent studies have provided new insights into the role of mimicry in speciation and diversification. Classical Müllerian mimicry theory predicts convergence in warning signal among protected species, yet tropical butterflies are exuberantly diverse in warning colour patterns, even within communities. We tested the hypothesis that microhabitat partitioning in aposematic butterflies and insectivorous birds can lead to selection for different colour patterns in different microhabitats and thus help maintain mimicry diversity. We measured distribution across flight height and topography for 64 species of clearwing butterflies (Ithomiini) and their co-mimics, and 127 species of insectivorous birds, in an Amazon rainforest community. For the majority of bird species, estimated encounter rates were non-random for the two most abundant mimicry rings. Furthermore, most butterfly species in these two mimicry rings displayed the warning colour pattern predicted to be optimal for anti-predator defence in their preferred microhabitats. These conclusions were supported by a field trial using butterfly specimens, which showed significantly different predation rates on colour patterns in two microhabitats. We therefore provide the first direct evidence to support the hypothesis that different mimicry patterns can represent stable, community-level adaptations to differing biotic environments. © 2017 The Author(s).
Hypothesis testing in hydrology: Theory and practice
NASA Astrophysics Data System (ADS)
Kirchner, James; Pfister, Laurent
2017-04-01
Well-posed hypothesis tests have spurred major advances in hydrological theory. However, a random sample of recent research papers suggests that in hydrology, as in other fields, hypothesis formulation and testing rarely correspond to the idealized model of the scientific method. Practices such as "p-hacking" or "HARKing" (Hypothesizing After the Results are Known) are major obstacles to more rigorous hypothesis testing in hydrology, along with the well-known problem of confirmation bias - the tendency to value and trust confirmations more than refutations - among both researchers and reviewers. Hypothesis testing is not the only recipe for scientific progress, however: exploratory research, driven by innovations in measurement and observation, has also underlain many key advances. Further improvements in observation and measurement will be vital to both exploratory research and hypothesis testing, and thus to advancing the science of hydrology.
Cauley, Jane A; Smagula, Stephen F; Hovey, Kathleen M; Wactawski-Wende, Jean; Andrews, Christopher A; Crandall, Carolyn J; LeBoff, Meryl S; Li, Wenjun; Coday, Mace; Sattari, Maryam; Tindle, Hilary A
2017-02-01
Traits of optimism and cynical hostility are features of personality that could influence the risk of falls and fractures by influencing risk-taking behaviors, health behaviors, or inflammation. To test the hypothesis that personality influences falls and fracture risk, we studied 87,342 women enrolled in WHI-OS. Optimism was assessed by the Life Orientation Test-Revised, and cynical hostility by the cynicism subscale of the Cook-Medley questionnaire. Higher scores indicate greater optimism and hostility. Optimism and hostility were correlated at r = -0.31, p < 0.001. Annual self-report of falling ≥2 times in the past year was modeled using repeated measures logistic regression. Cox proportional hazards models were used for the fracture outcomes. We examined the risk of falls and fractures across the quartiles (Q) of optimism and hostility with tests for trends; Q1 formed the referent group. The average follow-up for fractures was 11.4 years and for falls was 7.6 years. In multivariable (MV)-adjusted models, women with the highest optimism scores (Q4) were 11% less likely to report ≥2 falls in the past year (odds ratio [OR] = 0.89; 95% confidence interval [CI] 0.85-0.90). Women in Q4 for hostility had a 12% higher risk of ≥2 falls (OR = 1.12; 95% CI 1.07-1.17). Higher optimism scores were also associated with a 10% lower risk of fractures, but this association was attenuated in MV models. Women with the greatest hostility (Q4) had a modest increased risk of any fracture (MV-adjusted hazard ratio = 1.05; 95% CI 1.01-1.09), but there was no association with specific fracture sites. In conclusion, optimism was independently associated with a decreased risk of ≥2 falls, and hostility with an increased risk of ≥2 falls, independent of traditional risk factors. The magnitude of the association was similar to aging 5 years. Whether interventions aimed at attitudes could reduce fall risks remains to be determined. © 2016 American Society for Bone and Mineral Research.
Schram, Edward; Bierman, Stijn; Teal, Lorna R.; Haenen, Olga; van de Vis, Hans; Rijnsdorp, Adriaan D.
2013-01-01
Dover sole (Solea solea) is an obligate ectotherm with a natural thermal habitat ranging from approximately 5 to 27°C. Thermal optima for growth lie in the range of 20 to 25°C. More precise information on thermal optima for growth is needed for cost-effective Dover sole aquaculture. The main objective of this study was to determine the optimal growth temperature of juvenile Dover sole (Solea solea) and in addition to test the hypothesis that the final preferendum equals the optimal growth temperature. Temperature preference was measured in a circular preference chamber for Dover sole acclimated to 18, 22 and 28°C. Optimal growth temperature was measured by rearing Dover sole at 19, 22, 25 and 28°C. The optimal growth temperature resulting from this growth experiment was 22.7°C for Dover sole with a size between 30 and 50 g. The temperature preferred by juvenile Dover sole increases with acclimation temperature and exceeds the optimal temperature for growth. A final preferendum could not be detected. Although a confounding effect of behavioural fever on temperature preference could not be entirely excluded, thermal preference and thermal optima for physiological processes seem to be unrelated in Dover sole. PMID:23613837
Acquisition of decision making criteria: reward rate ultimately beats accuracy.
Balci, Fuat; Simen, Patrick; Niyogi, Ritwik; Saxe, Andrew; Hughes, Jessica A; Holmes, Philip; Cohen, Jonathan D
2011-02-01
Speed-accuracy trade-offs strongly influence the rate of reward that can be earned in many decision-making tasks. Previous reports suggest that human participants often adopt suboptimal speed-accuracy trade-offs in single session, two-alternative forced-choice tasks. We investigated whether humans acquired optimal speed-accuracy trade-offs when extensively trained with multiple signal qualities. When performance was characterized in terms of decision time and accuracy, our participants eventually performed nearly optimally in the case of higher signal qualities. Rather than adopting decision criteria that were individually optimal for each signal quality, participants adopted a single threshold that was nearly optimal for most signal qualities. However, setting a single threshold for different coherence conditions resulted in only negligible decrements in the maximum possible reward rate. Finally, we tested two hypotheses regarding the possible sources of suboptimal performance: (1) favoring accuracy over reward rate and (2) misestimating the reward rate due to timing uncertainty. Our findings provide support for both hypotheses, but also for the hypothesis that participants can learn to approach optimality. We find specifically that an accuracy bias dominates early performance, but diminishes greatly with practice. The residual discrepancy between optimal and observed performance can be explained by an adaptive response to uncertainty in time estimation.
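The reward-rate analysis has a convenient closed form in the drift-diffusion model. Using the standard expressions for error rate and mean decision time (as in Bogacz et al., 2006), the sketch below finds the reward-maximizing threshold for several signal qualities; parameter values are illustrative.

```python
import numpy as np

def reward_rate(z, drift, noise=1.0, t0=0.3, rsi=1.0):
    """Reward rate of the drift-diffusion model with symmetric thresholds +/- z."""
    er = 1.0 / (1.0 + np.exp(2.0 * drift * z / noise**2))   # error rate
    dt = (z / drift) * np.tanh(drift * z / noise**2)        # mean decision time
    return (1.0 - er) / (dt + t0 + rsi)                     # correct responses per second

z_grid = np.linspace(0.01, 3.0, 500)
for drift in (0.5, 1.0, 2.0):                               # different "signal qualities"
    rr = reward_rate(z_grid, drift)
    print(f"drift={drift}: optimal z = {z_grid[np.argmax(rr)]:.2f}, "
          f"max RR = {rr.max():.3f}/s")
```

Evaluating a single shared threshold across the drift conditions shows why the participants' compromise costs little: the reward-rate curves are flat near their maxima.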
Testing the null hypothesis: the forgotten legacy of Karl Popper?
Wilkinson, Mick
2013-01-01
Testing of the null hypothesis is a fundamental aspect of the scientific method and has its basis in the falsification theory of Karl Popper. Null hypothesis testing makes use of deductive reasoning to ensure that the truth of conclusions is irrefutable. In contrast, attempting to demonstrate the new facts on the basis of testing the experimental or research hypothesis makes use of inductive reasoning and is prone to the problem of the Uniformity of Nature assumption described by David Hume in the eighteenth century. Despite this issue and the well documented solution provided by Popper's falsification theory, the majority of publications are still written such that they suggest the research hypothesis is being tested. This is contrary to accepted scientific convention and possibly highlights a poor understanding of the application of conventional significance-based data analysis approaches. Our work should remain driven by conjecture and attempted falsification such that it is always the null hypothesis that is tested. The write up of our studies should make it clear that we are indeed testing the null hypothesis and conforming to the established and accepted philosophical conventions of the scientific method.
Corti, Daniele; Galbiati, Valentina; Gatti, Nicolò; Marinovich, Marina; Galli, Corrado L; Corsini, Emanuela
2015-10-01
Despite the important impact of systemic hypersensitivity induced by pharmaceuticals, no reliable preclinical approaches are available for this endpoint. We previously established an in vitro test to identify contact and respiratory allergens based on interleukin-8 (IL-8) production in THP-1 cells. Here, we challenged it for the identification of pharmaceuticals associated with systemic hypersensitivity reactions, with the idea that drug sensitizers share common mechanisms of cell activation. Cells were exposed to drugs associated with systemic hypersensitivity reactions (streptozotocin, sulfamethoxazole, neomycin, probenecid, clonidine, procainamide, ofloxacin, methyl salicylate), while metformin was used as a negative drug. Unlike chemicals, the drugs tested were well tolerated, except clonidine and probenecid, with no signs of cytotoxicity up to 1-2 mg/ml. The THP-1 activation assay was adjusted, and conditions that allow identification of all sensitizing drugs tested were established. Next, using streptozotocin and selective inhibitors of PKC-β and p38 MAPK, two pathways involved in chemical allergen-induced cell activation, we tested the hypothesis that similar pathways are also involved in drug-induced IL-8 production and CD86 upregulation. Results indicated that drugs and chemical allergens share similar activation pathways. Finally, we formulated a structure-activity hypothesis related to hypersensitivity reactions, attempting to identify structural requisites that may be involved in immune-mediated adverse reactions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Improved tests reveal that the accelerating moment release hypothesis is statistically insignificant
Hardebeck, J.L.; Felzer, K.R.; Michael, A.J.
2008-01-01
We test the hypothesis that accelerating moment release (AMR) is a precursor to large earthquakes, using data from California, Nevada, and Sumatra. Spurious cases of AMR can arise from data fitting because the time period, area, and sometimes magnitude range analyzed before each main shock are often optimized to produce the strongest AMR signal. Optimizing the search criteria can identify apparent AMR even if no robust signal exists. For both 1950-2006 California-Nevada M ≥ 6.5 earthquakes and the 2004 M9.3 Sumatra earthquake, we can find two contradictory patterns in the pre-main shock earthquakes by data fitting: AMR and decelerating moment release. We compare the apparent AMR found in the real data to the apparent AMR found in four types of synthetic catalogs with no inherent AMR. When spatiotemporal clustering is included in the simulations, similar AMR signals are found by data fitting in both the real and synthetic data sets even though the synthetic data sets contain no real AMR. These tests demonstrate that apparent AMR may arise from a combination of data fitting and normal foreshock and aftershock activity. In principle, data-fitting artifacts could be avoided if the free parameters were determined from scaling relationships between the duration and spatial extent of the AMR pattern and the magnitude of the earthquake that follows it. However, we demonstrate that previously proposed scaling relationships are unstable, statistical artifacts caused by the use of a minimum magnitude for the earthquake catalog that scales with the main shock magnitude. Some recent AMR studies have used spatial regions based on hypothetical stress loading patterns, rather than circles, to select the data. We show that previous tests were biased and that unbiased tests do not find this change to the method to be an improvement. The use of declustered catalogs has also been proposed to eliminate the effect of clustering, but we demonstrate that this does not increase the statistical significance of AMR. Given the ease with which data fitting can find desired patterns in seismicity, future studies of AMR-like observations must include complete tests against synthetic catalogs that include spatiotemporal clustering.
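The data-fitting pitfall is easy to reproduce. The sketch below searches over window start times in a purely random synthetic catalog for the smallest curvature parameter C (rms misfit of a power-law fit divided by rms misfit of a linear fit to cumulative Benioff strain); values of C well below 1, i.e., apparent AMR, can be found even though the catalog contains no acceleration. The catalog construction and search grid are illustrative assumptions, not the paper's test design.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
t_main = 100.0                                               # "main shock" time
times = np.sort(rng.uniform(0.0, t_main, 400))               # Poisson-like catalog
mags = 4.0 + rng.exponential(1.0 / np.log(10), size=400)     # Gutenberg-Richter, b = 1
benioff = np.cumsum(10 ** (0.75 * mags))                     # cumulative Benioff strain

def best_C():
    C_best = np.inf
    for t0 in np.linspace(0.0, 80.0, 41):                    # optimized window start
        m = times >= t0
        t = times[m]
        s = benioff[m] - (benioff[~m][-1] if (~m).any() else 0.0)
        lin = np.polyval(np.polyfit(t, s, 1), t)             # linear reference fit
        pl = lambda t, A, B, mexp: A - B * (t_main - t) ** mexp   # power-law fit
        try:
            p, _ = curve_fit(pl, t, s, p0=[s[-1], 1.0, 0.5], maxfev=5000)
        except RuntimeError:
            continue
        C = np.sqrt(np.mean((s - pl(t, *p)) ** 2) / np.mean((s - lin) ** 2))
        C_best = min(C_best, C)
    return C_best

print("best (smallest) curvature C found by window search:", best_C())
```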
Unscaled Bayes factors for multiple hypothesis testing in microarray experiments.
Bertolino, Francesco; Cabras, Stefano; Castellanos, Maria Eugenia; Racugno, Walter
2015-12-01
Multiple hypothesis testing collects a series of techniques usually based on p-values as a summary of the available evidence from many statistical tests. In hypothesis testing, under a Bayesian perspective, the evidence for a specified hypothesis against an alternative, conditionally on data, is given by the Bayes factor. In this study, we approach multiple hypothesis testing based on both Bayes factors and p-values, regarding multiple hypothesis testing as a multiple model selection problem. To obtain the Bayes factors we assume default priors that are typically improper. In this case, the Bayes factor is usually undetermined due to the ratio of prior pseudo-constants. We show that ignoring prior pseudo-constants leads to unscaled Bayes factors, which do not invalidate the inferential procedure in multiple hypothesis testing, because they are used within a comparative scheme. In fact, using partial information from the p-values, we are able to approximate the sampling null distribution of the unscaled Bayes factor and use it within Efron's multiple testing procedure. The simulation study suggests that, under a normal sampling model and even with small sample sizes, our approach provides false positive and false negative proportions that are lower than those of other common multiple hypothesis testing approaches based only on p-values. The proposed procedure is illustrated in two simulation studies, and the advantages of its use are shown in the analysis of two microarray experiments. © The Author(s) 2011.
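A stripped-down version of the idea, for testing H0: μ = 0 against H1: μ ≠ 0 with a flat (improper) prior on μ: the Bayes factor is defined only up to the prior pseudo-constant, but since that constant is shared by every test, the unscaled factor can still be ranked and calibrated against a simulated null distribution. The model and the calibration below are simplifications of the paper's procedure, not its implementation.

```python
import numpy as np

def log_unscaled_bf10(x):
    """log unscaled BF for H1: mu free (flat prior) vs H0: mu = 0, x ~ N(mu, 1).
    Constants shared by all tests (prior pseudo-constant, 2*pi factors) are dropped."""
    n, xbar = len(x), x.mean()
    log_m0 = -0.5 * np.sum(x**2)
    log_m1 = -0.5 * np.sum((x - xbar)**2) - 0.5 * np.log(n)  # flat prior integrated out
    return log_m1 - log_m0

rng = np.random.default_rng(6)
n, n_tests = 20, 1000
# Simulated null distribution of the unscaled Bayes factor
null_bf = np.array([log_unscaled_bf10(rng.normal(0, 1, n)) for _ in range(n_tests)])
x_obs = rng.normal(0.4, 1.0, n)                              # one "interesting" test
b = log_unscaled_bf10(x_obs)
print("observed log uBF:", b, " null exceedance prob:", (null_bf >= b).mean())
```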
A Critique of One-Tailed Hypothesis Test Procedures in Business and Economics Statistics Textbooks.
ERIC Educational Resources Information Center
Liu, Tung; Stone, Courtenay C.
1999-01-01
Surveys introductory business and economics statistics textbooks and finds that they differ over the best way to explain one-tailed hypothesis tests: the simple null-hypothesis approach or the composite null-hypothesis approach. Argues that the composite null-hypothesis approach contains methodological shortcomings that make it more difficult for…
A sigmoidal model for biosorption of heavy metal cations from aqueous media.
Özen, Rümeysa; Sayar, Nihat Alpagu; Durmaz-Sam, Selcen; Sayar, Ahmet Alp
2015-07-01
A novel multi-input single-output (MISO) black-box sigmoid model is developed to simulate the biosorption of heavy metal cations by the fission yeast from aqueous medium. Validation and verification of the model are done through statistical chi-squared hypothesis tests, and the model is evaluated by uncertainty and sensitivity analyses. The simulated results are in agreement with the data of the studied system, in which Schizosaccharomyces pombe biosorbs Ni(II) cations at various process conditions. Experimental data were obtained originally for this work using dead cells of an adapted variant of S. pombe and represented by Freundlich isotherms. A process optimization scheme is proposed using the present model to build a novel application of a cost-merit objective function, which would be useful for predicting optimal operating conditions. Copyright © 2015. Published by Elsevier Inc.
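The validation step can be sketched generically: fit a sigmoidal uptake curve and check lack of fit with a chi-squared test. The functional form, the data, and the assumed measurement error below are invented stand-ins for the paper's MISO model and experimental isotherms.

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

conc = np.array([5.0, 10, 20, 40, 80, 160, 320])          # mg/L initial Ni(II) (made up)
q_obs = np.array([0.8, 1.9, 4.2, 7.6, 10.8, 12.4, 13.1])  # mg/g biosorbed (made up)
sigma = 0.4 * np.ones_like(q_obs)                         # assumed measurement s.d.

# Log-logistic sigmoid in concentration, an illustrative stand-in model
sigmoid = lambda c, qmax, k, c50: qmax / (1.0 + np.exp(-k * (np.log(c) - np.log(c50))))
p, _ = curve_fit(sigmoid, conc, q_obs, p0=[13.0, 1.0, 30.0])

chi2 = np.sum(((q_obs - sigmoid(conc, *p)) / sigma) ** 2)
dof = len(q_obs) - len(p)
print("chi2 =", chi2, " p =", stats.chi2.sf(chi2, dof))   # large p => no lack of fit
```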
The thermal mismatch hypothesis explains host susceptibility to an emerging infectious disease.
Cohen, Jeremy M; Venesky, Matthew D; Sauer, Erin L; Civitello, David J; McMahon, Taegan A; Roznik, Elizabeth A; Rohr, Jason R
2017-02-01
Parasites typically have broader thermal limits than hosts, so large performance gaps between pathogens and their cold- and warm-adapted hosts should occur at relatively warm and cold temperatures, respectively. We tested this thermal mismatch hypothesis by quantifying the temperature-dependent susceptibility of cold- and warm-adapted amphibian species to the fungal pathogen Batrachochytrium dendrobatidis (Bd) using laboratory experiments and field prevalence estimates from 15 410 individuals in 598 populations. In both the laboratory and field, we found that the greatest susceptibility of cold- and warm-adapted hosts occurred at relatively warm and cool temperatures, respectively, providing support for the thermal mismatch hypothesis. Our results suggest that as climate change shifts hosts away from their optimal temperatures, the probability of increased host susceptibility to infectious disease might increase, but the effect will depend on the host species and the direction of the climate shift. Our findings help explain the tremendous variation in species responses to Bd across climates and spatial, temporal and species-level variation in disease outbreaks associated with extreme weather events that are becoming more common with climate change. © 2017 John Wiley & Sons Ltd/CNRS.
Framework for adaptive multiscale analysis of nonhomogeneous point processes.
Helgason, Hannes; Bartroff, Jay; Abry, Patrice
2011-01-01
We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process's non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and the true partition, respectively. Extensions to general history-dependent point processes are discussed.
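For the simplest possible template, a two-bin piecewise-constant rate, the generalized likelihood ratio statistic reduces to a few lines; the sketch below tests a constant-rate null against that template. The paper's template families, multiscale search, and multiple-testing correction are more elaborate than this toy instance.

```python
import numpy as np
from scipy import stats

def glr_poisson(event_times, T, split):
    """H0: constant rate on [0, T); H1: different rates on [0, split) and [split, T)."""
    n = len(event_times)
    n1 = np.sum(event_times < split)
    counts = np.array([n1, n - n1])
    lens = np.array([split, T - split])
    rate0 = n / T                          # MLE under H0
    rates1 = counts / lens                 # MLEs under H1
    # Poisson log-likelihood per bin: n_i * log(lambda_i) - lambda_i * len_i
    ll0 = np.sum(counts * np.log(rate0) - rate0 * lens)
    ll1 = np.sum(counts * np.log(np.where(rates1 > 0, rates1, 1.0)) - rates1 * lens)
    glr = 2.0 * (ll1 - ll0)
    return glr, stats.chi2.sf(glr, df=1)   # asymptotic reference distribution

rng = np.random.default_rng(7)
t = np.sort(rng.uniform(0, 100, 300))      # homogeneous data: GLR should be small
print(glr_poisson(t, 100.0, split=50.0))
```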
P value and the theory of hypothesis testing: an explanation for new researchers.
Biau, David Jean; Jolles, Brigitte M; Porcher, Raphaël
2010-03-01
In the 1920s, Ronald Fisher developed the theory behind the p value and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers important quantitative tools to confirm or refute their hypotheses. The p value is the probability to obtain an effect equal to or more extreme than the one observed presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true rather than the probability of obtaining the difference observed, or one that is more extreme, considering the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
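The contrast between the two theories can be made concrete with a one-sample z-test: Fisher's approach reports the p value as a measure of evidence, while the Neyman-Pearson approach fixes α in advance, derives a critical region, and quantifies the Type II error against a stated alternative. The numbers below are illustrative.

```python
import numpy as np
from scipy import stats

n, sigma, mu_alt, alpha = 25, 1.0, 0.5, 0.05      # known sigma, one-sided test
rng = np.random.default_rng(8)
x = rng.normal(0.3, sigma, n)                     # observed sample
z = x.mean() / (sigma / np.sqrt(n))

p_value = stats.norm.sf(z)                        # Fisher: strength of evidence vs H0
z_crit = stats.norm.isf(alpha)                    # Neyman-Pearson: critical region
power = stats.norm.sf(z_crit - mu_alt * np.sqrt(n) / sigma)

print(f"z = {z:.2f}, p = {p_value:.4f}")
print(f"reject H0: {z > z_crit} (critical z = {z_crit:.2f})")
print(f"Type II error beta = {1 - power:.3f} at mu = {mu_alt}")
```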
Visibility-Based Hypothesis Testing Using Higher-Order Optical Interference
NASA Astrophysics Data System (ADS)
Jachura, Michał; Jarzyna, Marcin; Lipka, Michał; Wasilewski, Wojciech; Banaszek, Konrad
2018-03-01
Many quantum information protocols rely on optical interference to compare data sets with efficiency or security unattainable by classical means. Standard implementations exploit first-order coherence between signals whose preparation requires a shared phase reference. Here, we analyze and experimentally demonstrate the binary discrimination of visibility hypotheses based on higher-order interference for optical signals with a random relative phase. This provides a robust protocol implementation primitive when a phase lock is unavailable or impractical. With the primitive cost quantified by the total detected optical energy, optimal operation is typically reached in the few-photon regime.
Debates—Hypothesis testing in hydrology: Theory and practice
NASA Astrophysics Data System (ADS)
Pfister, Laurent; Kirchner, James W.
2017-03-01
The basic structure of the scientific method—at least in its idealized form—is widely championed as a recipe for scientific progress, but the day-to-day practice may be different. Here, we explore the spectrum of current practice in hypothesis formulation and testing in hydrology, based on a random sample of recent research papers. This analysis suggests that in hydrology, as in other fields, hypothesis formulation and testing rarely correspond to the idealized model of the scientific method. Practices such as "p-hacking" or "HARKing" (Hypothesizing After the Results are Known) are major obstacles to more rigorous hypothesis testing in hydrology, along with the well-known problem of confirmation bias—the tendency to value and trust confirmations more than refutations—among both researchers and reviewers. Nonetheless, as several examples illustrate, hypothesis tests have played an essential role in spurring major advances in hydrological theory. Hypothesis testing is not the only recipe for scientific progress, however. Exploratory research, driven by innovations in measurement and observation, has also underlain many key advances. Further improvements in observation and measurement will be vital to both exploratory research and hypothesis testing, and thus to advancing the science of hydrology.
Metin, Baris; Wiersema, Jan R; Verguts, Tom; Gasthuys, Roos; van Der Meere, Jacob J; Roeyers, Herbert; Sonuga-Barke, Edmund
2016-01-01
According to the state regulation deficit (SRD) account, ADHD is associated with a problem using effort to maintain an optimal activation state under demanding task settings such as very fast or very slow event rates. This leads to a prediction of disrupted performance at event rate extremes reflected in higher Gaussian response variability that is a putative marker of activation during motor preparation. In the current study, we tested this hypothesis using ex-Gaussian modeling, which distinguishes Gaussian from non-Gaussian variability. Twenty-five children with ADHD and 29 typically developing controls performed a simple Go/No-Go task under four different event-rate conditions. There was an accentuated quadratic relationship between event rate and Gaussian variability in the ADHD group compared to the controls. The children with ADHD had greater Gaussian variability at very fast and very slow event rates but not at moderate event rates. The results provide evidence for the SRD account of ADHD. However, given that this effect did not explain all group differences (some of which were independent of event rate) other cognitive and/or motivational processes are also likely implicated in ADHD performance deficits.
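The ex-Gaussian decomposition itself is a standard fit: scipy's exponnorm distribution is the exponentially modified Gaussian, with the exponential parameter recovered as tau = K * scale. A minimal sketch on simulated reaction times (values are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
mu_true, sigma_true, tau_true = 0.40, 0.05, 0.15           # seconds (made up)
rts = rng.normal(mu_true, sigma_true, 500) + rng.exponential(tau_true, 500)

K, loc, scale = stats.exponnorm.fit(rts)                   # exponnorm: K = tau / sigma
mu_hat, sigma_hat, tau_hat = loc, scale, K * scale
print(f"mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f} (Gaussian part), tau = {tau_hat:.3f}")
```

Here sigma is the Gaussian variability used in the study as the putative marker of activation during motor preparation, separated from the exponential tail tau.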
Bayesian inference for psychology. Part II: Example applications with JASP.
Wagenmakers, Eric-Jan; Love, Jonathon; Marsman, Maarten; Jamil, Tahira; Ly, Alexander; Verhagen, Josine; Selker, Ravi; Gronau, Quentin F; Dropmann, Damian; Boutin, Bruno; Meerhoff, Frans; Knight, Patrick; Raj, Akash; van Kesteren, Erik-Jan; van Doorn, Johnny; Šmíra, Martin; Epskamp, Sacha; Etz, Alexander; Matzke, Dora; de Jong, Tim; van den Bergh, Don; Sarafoglou, Alexandra; Steingroever, Helen; Derks, Koen; Rouder, Jeffrey N; Morey, Richard D
2018-02-01
Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP ( http://www.jasp-stats.org ), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder's BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
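For readers who want the computation behind the default Bayesian t-test rather than the JASP interface, the one-sample JZS Bayes factor of Rouder et al. (2009) can be evaluated by direct numerical integration. The sketch below uses a unit-scale Cauchy prior on effect size (JASP's current default scale is 0.707, so its results will differ slightly); the data are invented.

```python
import numpy as np
from scipy import integrate, stats

def jzs_bf10(t, N):
    """One-sample JZS Bayes factor (Rouder et al., 2009), unit-scale Cauchy prior."""
    nu = N - 1
    def integrand(g):                      # g-prior mixed over inverse-chi^2(1)
        return ((1 + N * g) ** -0.5
                * (1 + t**2 / ((1 + N * g) * nu)) ** (-(nu + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))
    m1, _ = integrate.quad(integrand, 0, np.inf)
    m0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)
    return m1 / m0

x = np.array([0.35, 0.12, 0.51, -0.08, 0.42, 0.30, 0.61, 0.25, 0.18, 0.44])
t, p = stats.ttest_1samp(x, 0.0)
print(f"t = {t:.2f}, p = {p:.4f}, BF10 = {jzs_bf10(t, len(x)):.2f}")
```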
Teaching Hypothesis Testing by Debunking a Demonstration of Telepathy.
ERIC Educational Resources Information Center
Bates, John A.
1991-01-01
Discusses a lesson designed to demonstrate hypothesis testing to introductory college psychology students. Explains that a psychology instructor demonstrated apparent psychic abilities to students. Reports that students attempted to explain the instructor's demonstrations through hypothesis testing and revision. Provides instructions on performing…
Where does the Occluded Artery Trial leave the late open artery hypothesis?
Lamas, Gervasio A; Hochman, Judith S
2007-01-01
As of April 2007 the early open artery hypothesis is alive and well, but the late open artery hypothesis is adrift. For the foreseeable future, stable patients with persistent occlusion of the infarct artery late after myocardial infarction, and without severe ischaemia or uncontrollable angina, should be managed initially with optimal medical treatment alone, and not with percutaneous coronary intervention. Efforts should focus on establishing reperfusion earlier, including reducing the time to patient presentation. PMID:17933981
Lash, Ayhan Aytekin; Plonczynski, Donna J; Sehdev, Amikar
2011-01-01
To compare the inclusion and the influences of selected variables on hypothesis testing during the 1980s and 1990s. In spite of the emphasis on conducting inquiry consistent with the tenets of logical positivism, there have been no studies investigating the frequency and patterns of hypothesis testing in nursing research. The sample was obtained from the journal Nursing Research, which was the research journal with the highest circulation during the period under study. All quantitative studies published during the two decades, including briefs and historical studies, were included in the analyses. A retrospective design was used to select the sample. Five years each from the 1980s and 1990s were randomly selected from the journal Nursing Research. Of the 582 studies, 517 met inclusion criteria. Findings suggest that there has been a decline in the use of hypothesis testing in the last decades of the 20th century. Further research is needed to identify the factors that influence the conduct of research with hypothesis testing. Hypothesis testing in nursing research showed a steady decline from the 1980s to the 1990s. Research purposes of explanation and prediction/control increased the likelihood of hypothesis testing. Hypothesis testing strengthens the quality of quantitative studies, increases the generality of findings, and provides dependable knowledge. This is particularly true for quantitative studies that aim to explore, explain, and predict/control phenomena and/or test theories. The findings also have implications for doctoral programmes, the research preparation of nurse-investigators, and theory testing.
A shift from significance test to hypothesis test through power analysis in medical research.
Singh, G
2006-01-01
Until recently, the medical research literature exhibited substantial dominance of Fisher's significance-test approach to statistical inference, which concentrates on the probability of type I error, over the Neyman-Pearson hypothesis test, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomises results into significant or not significant with a P value. The Neyman-Pearson approach speaks of acceptance or rejection of the null hypothesis. Based on the same underlying theory, these two approaches address the same objective and conclude in their own ways. Advances in computing techniques and the availability of statistical software have resulted in increasing application of power calculations in medical research, and thereby in the reporting of significance-test results in light of the power of the test as well. The significance-test approach, when it incorporates power analysis, contains the essence of the hypothesis-test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to the Neyman-Pearson hypothesis-test procedure.
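The power-calculation step that the author credits with driving this shift takes only a few lines; the following is a minimal sketch using statsmodels, with an illustrative effect size, alpha, and power target.

```python
# Minimal sketch of a power analysis for a two-sided two-sample t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group to detect a medium effect (d = 0.5)
# at alpha = 0.05 with 80% power.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"required n per group ~ {n_per_group:.0f}")

# Conversely, the achieved power of a fixed design with 30 per group.
power = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=30)
print(f"power with n = 30 per group ~ {power:.2f}")
```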
2011-01-01
Background Although many biological databases are applying semantic web technologies, meaningful biological hypothesis testing cannot be easily achieved. Database-driven high throughput genomic hypothesis testing requires both the capability to obtain semantically relevant experimental data and the capability to perform relevant statistical testing on the retrieved data. Tissue Microarray (TMA) data are semantically rich and contain many biologically important hypotheses waiting for high throughput conclusions. Methods An application-specific ontology was developed for managing TMA and DNA microarray databases with semantic web technologies. Data were represented as Resource Description Framework (RDF) according to the framework of the ontology. Applications for hypothesis testing (Xperanto-RDF) for TMA data were designed and implemented by (1) formulating the syntactic and semantic structures of the hypotheses derived from TMA experiments, (2) formulating SPARQL queries to reflect the semantic structures of the hypotheses, and (3) performing statistical tests on the result sets returned by the SPARQL queries. Results When a user designs a hypothesis in Xperanto-RDF and submits it, the hypothesis can be tested against TMA experimental data stored in Xperanto-RDF. When we evaluated four previously validated hypotheses as an illustration, all the hypotheses were supported by Xperanto-RDF. Conclusions We demonstrated the utility of high throughput biological hypothesis testing. We believe that preliminary investigation before performing highly controlled experiments can benefit from this approach. PMID:21342584
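The pipeline described here, retrieving semantically matched records with SPARQL and then testing the hypothesis statistically, can be sketched in Python with rdflib and scipy as below. The graph file, namespace, predicates, and group labels are hypothetical stand-ins rather than the actual Xperanto-RDF schema.

```python
# Minimal sketch: SPARQL retrieval over an RDF store, then a statistical test.
# File name, namespace, and predicates are hypothetical.
from rdflib import Graph
from scipy import stats

g = Graph()
g.parse("tma_experiments.ttl", format="turtle")  # hypothetical TMA dataset

query = """
PREFIX ex: <http://example.org/tma#>
SELECT ?score ?group WHERE {
    ?core ex:stainingScore ?score ;
          ex:diseaseGroup  ?group .
}"""

scores = {"tumor": [], "normal": []}
for row in g.query(query):
    scores[str(row.group)].append(float(row.score))

# Test the hypothesis that staining differs between the two groups.
t, p = stats.ttest_ind(scores["tumor"], scores["normal"], equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.4f}")
```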
Knowledge dimensions in hypothesis test problems
NASA Astrophysics Data System (ADS)
Krishnan, Saras; Idris, Noraini
2012-05-01
The reformation in statistics education over the past two decades has predominantly shifted the focus of statistical teaching and learning from procedural understanding to conceptual understanding. The emphasis of procedural understanding is on formulas and calculation procedures. Meanwhile, conceptual understanding emphasizes students knowing why they are using a particular formula or executing a specific procedure. In addition, the Revised Bloom's Taxonomy offers a two-dimensional framework to describe learning objectives, comprising the six revised cognition levels of the original Bloom's taxonomy and four knowledge dimensions. Depending on the level of complexity, the four knowledge dimensions essentially distinguish basic understanding from more connected understanding. This study identifies the factual, procedural, and conceptual knowledge dimensions in hypothesis test problems. The hypothesis test, an important tool for making inferences about a population from sample information, is taught in many introductory statistics courses. However, researchers find that students in these courses still have difficulty in understanding the underlying concepts of the hypothesis test. Past studies also show that even though students can perform the hypothesis testing procedure, they may not understand the rationale for executing these steps or know how to apply them in novel contexts. Besides knowing the procedural steps in conducting a hypothesis test, students must have fundamental statistical knowledge and a deep understanding of the underlying inferential concepts, such as the sampling distribution and the central limit theorem. By identifying the knowledge dimensions of hypothesis test problems, this study enables suitable instructional and assessment strategies to be developed in future to enhance students' learning of the hypothesis test as a valuable inferential tool.
Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dale; Selby, Neil
2012-08-14
Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event-screening hypothesis test (Fisher's and Tippett's tests). The standard error commonly used in the Ms:mb event-screening hypothesis test is not fully consistent with its physical basis. An improved standard error gives better agreement with the physical basis: it correctly partitions error to include model error as a component of variance, and it correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope (β = 1, Selby et al.), whereas the improved standard error 'fails to reject' H0.
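The combination step referred to above is standard; a minimal sketch with scipy follows, with hypothetical p-values standing in for the single-phenomenology screening tests.

```python
# Minimal sketch: combining independent screening p-values into one
# multi-phenomenology test via Fisher's and Tippett's methods.
from scipy.stats import combine_pvalues

# Hypothetical p-values, e.g. from Ms:mb and event-depth screening tests.
p_values = [0.04, 0.20]

for method in ("fisher", "tippett"):
    stat, p_combined = combine_pvalues(p_values, method=method)
    print(f"{method}: statistic = {stat:.3f}, combined p = {p_combined:.4f}")
```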
The role of optimism in health-promoting behaviors in new primiparous mothers.
Gill, Robyn M; Loh, Jennifer M I
2010-01-01
Perceived stress has been associated with fewer health-promoting behaviors in new primiparous mothers, but less is known about the mechanisms responsible for such effects. The objective of this study was to examine the hypothesis that the relationship between perceived stress and health-promoting behaviors is mediated partially by a primiparous mother's sense of optimism. The transactional model of stress and coping and the model of behavioral self-regulation were used as the theoretical framework for the study. An ex post facto cross-sectional design was used for this study. Participants consisted of 174 primiparous mothers who had given birth within the previous 12 months. Participants completed a self-reported online questionnaire consisting of the Perceived Stress Scale, the Health-Promoting Lifestyle Profile II, and the revised Life Orientation Test. Results indicated that perceived stress predicted fewer health-promoting behaviors in new primiparous mothers (p < .001). Importantly, this relationship was mediated partially by the optimism displayed by the mother (p < .001). The findings indicated that optimism partially mediated the relationship between perceived stress and health-promoting behaviors in new primiparous mothers. The implications for psychological practice are discussed.
NASA Astrophysics Data System (ADS)
Harken, B.; Geiges, A.; Rubin, Y.
2013-12-01
There are several stages in any hydrological modeling campaign, including formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and forward modeling and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration, plume travel time, or aquifer recharge rate. These predictions often have significant bearing on some decision that must be made. Examples include: how to allocate limited remediation resources between multiple contaminated groundwater sites, where to place a waste repository site, and what extraction rates can be considered sustainable in an aquifer. Providing an answer to these questions depends on predictions of EPMs using forward models as well as on the levels of uncertainty attached to these predictions. Uncertainty in model parameters, such as hydraulic conductivity, leads to uncertainty in EPM predictions. Often, field campaigns and inverse modeling efforts are planned and undertaken with reduction of parametric uncertainty as the objective. The tool of hypothesis testing allows this to be taken one step further by considering uncertainty reduction in the ultimate prediction of the EPM as the objective, and gives a rational basis for weighing costs and benefits at each stage. When using the tool of statistical hypothesis testing, the EPM is cast into a binary outcome. This is formulated as null and alternative hypotheses, which can be accepted or rejected with statistical formality. When accounting for all sources of uncertainty at each stage, the level of significance of this test provides a rational basis for planning, optimization, and evaluation of the entire campaign. Case-specific information, such as the consequences of prediction error and site-specific costs, can be used in establishing selection criteria based on what level of risk is deemed acceptable. This framework is demonstrated and discussed using various synthetic case studies. The case studies involve contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a given location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical value of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. Different field campaigns are analyzed based on their effectiveness in reducing the probability of selecting the wrong hypothesis, which in this case corresponds to reducing uncertainty in the prediction of plume arrival time. To examine the role of inverse modeling in this framework, case studies involving both Maximum Likelihood parameter estimation and Bayesian inversion are used.
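A minimal sketch of the core idea, casting an EPM prediction as a binary hypothesis via Monte Carlo propagation of parametric uncertainty, is given below; the site parameters, the uncertainty model for conductivity, and the critical time are all assumed for illustration.

```python
# Minimal sketch: Monte Carlo hypothesis test on plume travel time.
# All numerical values are assumed, not from the study.
import numpy as np

rng = np.random.default_rng(42)
distance = 500.0   # m, plume path length (assumed)
gradient = 0.01    # hydraulic gradient (assumed)
porosity = 0.3     # effective porosity (assumed)
t_crit = 25.0      # years, critical arrival time (assumed)

# Parametric uncertainty in hydraulic conductivity K [m/day] (assumed lognormal).
K = rng.lognormal(mean=np.log(1.0), sigma=0.5, size=100_000)

velocity = K * gradient / porosity          # seepage velocity, m/day
travel_time = distance / velocity / 365.25  # years

# H0: the plume arrives before t_crit. Estimate its probability under the model.
p_early = np.mean(travel_time < t_crit)
print(f"P(arrival before {t_crit} yr) = {p_early:.3f}")
# A decision rule compares this probability with a significance level chosen
# from the site-specific costs of selecting the wrong hypothesis.
```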
ON THE SUBJECT OF HYPOTHESIS TESTING
Ugoni, Antony
1993-01-01
In this paper, the definition of a statistical hypothesis is discussed, along with the considerations that need to be addressed when testing a hypothesis. In particular, the p-value, significance level, and power of a test are reviewed. Finally, the often-quoted confidence interval is given a brief introduction. PMID:17989768
Some consequences of using the Horsfall-Barratt scale for hypothesis testing
USDA-ARS?s Scientific Manuscript database
Comparing treatment effects by hypothesis testing is a common practice in plant pathology. Nearest percent estimates (NPEs) of disease severity were compared to Horsfall-Barratt (H-B) scale data to explore whether there was an effect of assessment method on hypothesis testing. A simulation model ba...
Hypothesis Testing in Task-Based Interaction
ERIC Educational Resources Information Center
Choi, Yujeong; Kilpatrick, Cynthia
2014-01-01
Whereas studies show that comprehensible output facilitates L2 learning, hypothesis testing has received little attention in Second Language Acquisition (SLA). Following Shehadeh (2003), we focus on hypothesis testing episodes (HTEs) in which learners initiate repair of their own speech in interaction. In the context of a one-way information gap…
Classroom-Based Strategies to Incorporate Hypothesis Testing in Functional Behavior Assessments
ERIC Educational Resources Information Center
Lloyd, Blair P.; Weaver, Emily S.; Staubitz, Johanna L.
2017-01-01
When results of descriptive functional behavior assessments are unclear, hypothesis testing can help school teams understand how the classroom environment affects a student's challenging behavior. This article describes two hypothesis testing strategies that can be used in classroom settings: structural analysis and functional analysis. For each…
Hypothesis Testing in the Real World
ERIC Educational Resources Information Center
Miller, Jeff
2017-01-01
Critics of null hypothesis significance testing suggest that (a) its basic logic is invalid and (b) it addresses a question that is of no interest. In contrast to (a), I argue that the underlying logic of hypothesis testing is actually extremely straightforward and compelling. To substantiate that, I present examples showing that hypothesis…
Learning stochastic reward distributions in a speeded pointing task.
Seydell, Anna; McCann, Brian C; Trommershäuser, Julia; Knill, David C
2008-04-23
Recent studies have shown that humans effectively take into account task variance caused by intrinsic motor noise when planning fast hand movements. However, previous evidence suggests that humans have greater difficulty accounting for arbitrary forms of stochasticity in their environment, both in economic decision making and sensorimotor tasks. We hypothesized that humans can learn to optimize movement strategies when environmental randomness can be experienced and thus implicitly learned over several trials, especially if it mimics the kinds of randomness for which subjects might have generative models. We tested the hypothesis using a task in which subjects had to rapidly point at a target region partly covered by three stochastic penalty regions introduced as "defenders." At movement completion, each defender jumped to a new position drawn randomly from fixed probability distributions. Subjects earned points when they hit the target, unblocked by a defender, and lost points otherwise. Results indicate that after approximately 600 trials, subjects approached optimal behavior. We further tested whether subjects simply learned a set of stimulus-contingent motor plans or the statistics of defenders' movements by training subjects with one penalty distribution and then testing them on a new penalty distribution. Subjects immediately changed their strategy to achieve the same average reward as subjects who had trained with the second penalty distribution. These results indicate that subjects learned the parameters of the defenders' jump distributions and used this knowledge to optimally plan their hand movements under conditions involving stochastic rewards and penalties.
Koa-Wing, Michael; Nakagawa, Hiroshi; Luther, Vishal; Jamil-Copley, Shahnaz; Linton, Nick; Sandler, Belinda; Qureshi, Norman; Peters, Nicholas S; Davies, D Wyn; Francis, Darrel P; Jackman, Warren; Kanagaratnam, Prapa
2015-11-15
Ripple Mapping (RM) is designed to overcome the limitations of existing isochronal 3D mapping systems by representing the intracardiac electrogram as a dynamic bar on a surface bipolar voltage map that changes in height according to the electrogram voltage-time relationship, relative to a fiduciary point. We tested the hypothesis that standard approaches to atrial tachycardia CARTO™ activation maps were inadequate for RM creation and interpretation. From the results, we aimed to develop an algorithm to optimize RMs for future prospective testing on a clinical RM platform. CARTO-XP™ activation maps from atrial tachycardia ablations were reviewed by two blinded assessors on an off-line RM workstation. Ripple Maps were graded according to a diagnostic confidence scale (Grade I - high confidence with clear pattern of activation through to Grade IV - non-diagnostic). The RM-based diagnoses were corroborated against the clinical diagnoses. 43 RMs from 14 patients were classified as Grade I (5 [11.5%]); Grade II (17 [39.5%]); Grade III (9 [21%]) and Grade IV (12 [28%]). Causes of low gradings/errors included the following: insufficient chamber point density; window-of-interest<100% of cycle length (CL); <95% tachycardia CL mapped; variability of CL and/or unstable fiducial reference marker; and suboptimal bar height and scar settings. A data collection and map interpretation algorithm has been developed to optimize Ripple Maps in atrial tachycardias. This algorithm requires prospective testing on a real-time clinical platform. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Motor planning flexibly optimizes performance under uncertainty about task goals.
Wong, Aaron L; Haith, Adrian M
2017-03-03
In an environment full of potential goals, how does the brain determine which movement to execute? Existing theories posit that the motor system prepares for all potential goals by generating several motor plans in parallel. One major line of evidence for such theories is that presenting two competing goals often results in a movement intermediate between them. These intermediate movements are thought to reflect an unintentional averaging of the competing plans. However, normative theories suggest instead that intermediate movements might actually be deliberate, generated because they improve task performance over a random guessing strategy. To test this hypothesis, we vary the benefit of making an intermediate movement by changing movement speed. We find that participants generate intermediate movements only at (slower) speeds where they measurably improve performance. Our findings support the normative view that the motor system selects only a single, flexible motor plan, optimized for uncertain goals.
Martinez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles
2016-10-03
When representing the blade aerodynamics with rotating actuator lines, the computed forces have to be projected back to the CFD flow field as a volumetric body force. That has been done in the past with a geometrically simple uniform three-dimensional Gaussian at each point along the blade. Here, we argue that the body force can be shaped in a way that better predicts the blade local flow field, the blade load distribution, and the formation of the tip/root vortices. In previous work, we have determined the optimal scales of circular and elliptical Gaussian kernels that best reproduce the local flow field in two dimensions. In this work we extend the analysis and applications by considering the full three-dimensional blade to test our hypothesis in a highly resolved Large Eddy Simulation.
A wave dynamics criterion for optimization of mammalian cardiovascular system.
Pahlevan, Niema M; Gharib, Morteza
2014-05-07
The cardiovascular system in mammals follows various optimization criteria covering the heart, the vascular network, and the coupling of the two. Through a simple dimensional analysis we arrived at a non-dimensional number (wave condition number) that can predict the optimum wave state in which the left ventricular (LV) pulsatile power (LV workload) is minimized in a mammalian cardiovascular system. This number is also universal among all mammals independent of animal size, maintaining a value of around 0.1. By utilizing a unique in vitro model of the human aorta, we tested our hypothesis against a wide range of aortic compliance (pulse wave velocity). We concluded that the optimum value of the wave condition number remains around 0.1 for the range of aortic compliance that we could simulate in our in vitro system. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hosseini, Hamid Reza; Yunos, Mohd Yazid Mohd; Ismail, Sumarni; Yaman, Maheran
2017-12-01
This paper analyses the effects of indoor air elements on the dissatisfaction of occupants in educational environments. It attempts to find an equation model to improve understanding of these effects and to optimize occupants' satisfaction with the indoor environment, and subsequently to increase the performance of students, lecturers, and staff. As the method, a satisfaction questionnaire (SQ) and measurement of environment elements (MEE) were conducted with 143 respondents across five classrooms, four staff rooms, and five lecture rooms. Temperature, air velocity, and humidity (TVH) were used as independent variables and dissatisfaction as the dependent variable. The hypothesis of a significant relationship between the variables was tested and the analysis applied. Results found that indoor air quality has direct effects on occupant dissatisfaction and indirect effects on performance, with temperature showing the strongest effect. These results may help to optimize efficiency and effectiveness in educational environments.
Posada, German; Lu, Ting; Trumbell, Jill; Kaloustian, Garene; Trudel, Marcel; Plata, Sandra J; Peña, Paola P; Perez, Jennifer; Tereno, Susana; Dugravier, Romain; Coppola, Gabrielle; Constantini, Alessandro; Cassibba, Rosalinda; Kondo-Ikemura, Kiyomi; Nóblega, Magaly; Haya, Ines M; Pedraglio, Claudia; Verissimo, Manuela; Santos, Antonio J; Monteiro, Ligia; Lay, Keng-Ling
2013-01-01
The evolutionary rationale offered by Bowlby implies that secure base relationships are common in child-caregiver dyads and thus, child secure behavior observable across diverse social contexts and cultures. This study offers a test of the universality hypothesis. Trained observers in nine countries used the Attachment Q-set to describe the organization of children's behavior in naturalistic settings. Children (N = 547) were 10-72 months old. Child development experts (N = 81) from all countries provided definitions of optimal child secure base use. Findings indicate that children from all countries use their mother as a secure base. Children's organization of secure base behavior was modestly related to each other both within and across countries. Experts' descriptions of the optimally attached child were highly similar across cultures. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.
Learning transitive verbs from single-word verbs in the input by young children acquiring English.
Ninio, Anat
2016-09-01
The environmental context of verbs addressed by adults to young children is claimed to be uninformative regarding the verbs' meaning, yielding the Syntactic Bootstrapping Hypothesis that, for verb learning, full sentences are needed to demonstrate the semantic arguments of verbs. However, reanalysis of Gleitman's (1990) original data regarding input to a blind child revealed the context of single-word parental verbs to be more transparent than that of sentences. We tested the hypothesis that English-speaking children learn their early verbs from parents' single-word utterances. Distribution of single-word transitive verbs produced by a large sample of young children was strongly predicted by the relative token frequency of verbs in parental single-word utterances, but multiword sentences had no predictive value. Analysis of the interactive context showed that objects of verbs are retrievable by pragmatic inference, as is the meaning of the verbs. Single-word input appears optimal for learning an initial vocabulary of verbs.
ERIC Educational Resources Information Center
Kwon, Yong-Ju; Jeong, Jin-Su; Park, Yun-Bok
2006-01-01
The purpose of the present study was to test the hypothesis that student's abductive reasoning skills play an important role in the generation of hypotheses on pendulum motion tasks. To test the hypothesis, a hypothesis-generating test on pendulum motion, and a prior-belief test about pendulum motion were developed and administered to a sample of…
Magnetic resonance diffusion-perfusion mismatch in acute ischemic stroke: An update
Chen, Feng; Ni, Yi-Cheng
2012-01-01
The concept of magnetic resonance perfusion-diffusion mismatch (PDM) provides a practical and approximate measure of the tissue at risk and has been increasingly applied for the evaluation of hyperacute and acute stroke in animals and patients. Recent studies demonstrated that PDM does not optimally define the ischemic penumbra, because early abnormality on diffusion-weighted imaging overestimates the infarct core by including part of the penumbra, and the abnormality on perfusion-weighted imaging overestimates the penumbra by including regions of benign oligemia. To overcome these limitations, many efforts have been made to optimize conventional PDM. Various alternatives beyond the PDM concept are under investigation in order to better define the penumbra. The PDM theory has been applied in ischemic stroke for at least three purposes: as a practical selection tool for stroke treatment; to test the hypothesis that patients with a PDM pattern will benefit from treatment, while those without a mismatch pattern will not; and as a surrogate measure for stroke outcome. The main patterns of PDM and their relation to clinical outcomes are also briefly reviewed. The conclusion was that patients with PDM showed more reperfusion, reduced infarct growth, and better clinical outcomes compared to patients without PDM, but it is not yet clear that thrombolytic therapy is beneficial when patients are selected on the basis of PDM. Studies based on a larger cohort are currently under way to further validate the PDM hypothesis. PMID:22468186
Functional assessment of the ex vivo vocal folds through biomechanical testing: A review
Dion, Gregory R.; Jeswani, Seema; Roof, Scott; Fritz, Mark; Coelho, Paulo; Sobieraj, Michael; Amin, Milan R.; Branski, Ryan C.
2016-01-01
The human vocal folds are complex structures made up of distinct layers that vary in cellular and extracellular composition. The mechanical properties of vocal fold tissue are fundamental to the study of both the acoustics and biomechanics of voice production. To date, quantitative methods have been applied to characterize the vocal fold tissue in both normal and pathologic conditions. This review describes, summarizes, and discusses the most commonly employed methods for vocal fold biomechanical testing. Force-elongation, torsional parallel plate rheometry, simple-shear parallel plate rheometry, linear skin rheometry, and indentation are the most frequently employed biomechanical tests for vocal fold tissues, and each provides material properties data that can be used to compare native tissue versus diseased or treated tissue. Force-elongation testing is clinically useful, as it allows for functional unit testing, while rheometry provides physiologically relevant shear data, and nanoindentation permits micrometer scale testing across different areas of the vocal fold as well as whole organ testing. Thoughtful selection of the testing technique during experimental design to evaluate a hypothesis is important to optimizing biomechanical testing of vocal fold tissues. PMID:27127075
The importance of functional form in optimal control solutions of problems in population dynamics
Runge, M.C.; Johnson, F.A.
2002-01-01
Optimal control theory is finding increased application in both theoretical and applied ecology, and it is a central element of adaptive resource management. One of the steps in an adaptive management process is to develop alternative models of system dynamics, models that are all reasonable in light of available data, but that differ substantially in their implications for optimal control of the resource. We explored how the form of the recruitment and survival functions in a general population model for ducks affected the patterns in the optimal harvest strategy, using a combination of analytical, numerical, and simulation techniques. We compared three relationships between recruitment and population density (linear, exponential, and hyperbolic) and three relationships between survival during the nonharvest season and population density (constant, logistic, and one related to the compensatory harvest mortality hypothesis). We found that the form of the component functions had a dramatic influence on the optimal harvest strategy and the ultimate equilibrium state of the system. For instance, while it is commonly assumed that a compensatory hypothesis leads to higher optimal harvest rates than an additive hypothesis, we found this to depend on the form of the recruitment function, in part because of differences in the optimal steady-state population density. This work has strong direct consequences for those developing alternative models to describe harvested systems, but it is relevant to a larger class of problems applying optimal control at the population level. Often, different functional forms will not be statistically distinguishable in the range of the data. Nevertheless, differences between the functions outside the range of the data can have an important impact on the optimal harvest strategy. Thus, development of alternative models by identifying a single functional form, then choosing different parameter combinations from extremes on the likelihood profile may end up producing alternatives that do not differ as importantly as if different functional forms had been used. We recommend that biological knowledge be used to bracket a range of possible functional forms, and robustness of conclusions be checked over this range.
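To make the sensitivity to functional form concrete, the following minimal sketch compares optimal constant harvest rates under three recruitment forms in a toy discrete-time model; the functional forms and parameter values are illustrative, not those fitted in the paper.

```python
# Minimal sketch: optimal constant harvest rate under different
# recruitment forms in a toy population model (all parameters assumed).
import numpy as np

def recruit(n, form):
    if form == "linear":      return max(0.0, 0.9 - 0.4 * n) * n
    if form == "exponential": return 0.9 * np.exp(-0.8 * n) * n
    if form == "hyperbolic":  return 0.9 / (1.0 + 1.5 * n) * n
    raise ValueError(form)

def long_run_yield(h, form, survival=0.6, n0=1.0, burn=200, horizon=100):
    """Average yield per step at harvest rate h, after a burn-in period."""
    n, total = n0, 0.0
    for t in range(burn + horizon):
        harvest = h * n
        s = n - harvest                      # post-harvest population
        n = survival * s + recruit(s, form)  # survival plus recruitment
        if t >= burn:
            total += harvest
    return total / horizon

for form in ("linear", "exponential", "hyperbolic"):
    rates = np.linspace(0.0, 0.9, 91)
    best = max(rates, key=lambda h: long_run_yield(h, form))
    print(f"{form:12s} optimal harvest rate ~ {best:.2f}")
```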
Hypothesis test for synchronization: twin surrogates revisited.
Romano, M Carmen; Thiel, Marco; Kurths, Jürgen; Mergenthaler, Konstantin; Engbert, Ralf
2009-03-01
The method of twin surrogates has been introduced to test for phase synchronization of complex systems in the case of passive experiments. In this paper we derive new analytical expressions for the number of twins depending on the size of the neighborhood, as well as on the length of the trajectory. This allows us to determine the optimal parameters for the generation of twin surrogates. Furthermore, we determine the quality of the twin surrogates with respect to several linear and nonlinear statistics depending on the parameters of the method. In the second part of the paper we perform a hypothesis test for phase synchronization in the case of experimental data from fixational eye movements. These miniature eye movements have been shown to play a central role in neural information processing underlying the perception of static visual scenes. The high number of data sets (21 subjects and 30 trials per person) allows us to compare the generated twin surrogates with the "natural" surrogates that correspond to the different trials. We show that the generated twin surrogates reproduce very well all linear and nonlinear characteristics of the underlying experimental system. The synchronization analysis of fixational eye movements by means of twin surrogates reveals that the synchronization between the left and right eye is significant, indicating that either the centers in the brain stem generating fixational eye movements are closely linked, or, alternatively that there is only one center controlling both eyes.
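The underlying test logic, comparing an observed synchronization statistic against its distribution under surrogates, can be sketched in a few lines. For brevity, the sketch below uses simple circular-shift surrogates and linear correlation on toy data, not the twin-surrogate construction and phase-synchronization measures of the paper.

```python
# Minimal sketch of a surrogate-based hypothesis test for dependence
# between two signals (toy data; circular-shift surrogates as stand-ins).
import numpy as np

rng = np.random.default_rng(1)
n = 2000
common = np.cumsum(rng.normal(size=n))          # shared driving signal
left = common + rng.normal(scale=2.0, size=n)   # "left eye" signal
right = common + rng.normal(scale=2.0, size=n)  # "right eye" signal

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

observed = corr(left, right)
surrogates = np.array([
    corr(left, np.roll(right, rng.integers(100, n - 100)))
    for _ in range(999)
])
# One-sided test: fraction of surrogates at least as extreme as observed.
p = (1 + np.sum(surrogates >= observed)) / (1 + surrogates.size)
print(f"observed r = {observed:.3f}, surrogate p = {p:.4f}")
```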
Making Knowledge Delivery Failsafe: Adding Step Zero in Hypothesis Testing
ERIC Educational Resources Information Center
Pan, Xia; Zhou, Qiang
2010-01-01
Knowledge of statistical analysis is increasingly important for professionals in modern business. For example, hypothesis testing is one of the critical topics for quality managers and team workers in Six Sigma training programs. Delivering the knowledge of hypothesis testing effectively can be an important step for the incapable learners or…
Testing of Hypothesis in Equivalence and Non-Inferiority Trials: A Concept.
Juneja, Atul; Aggarwal, Abha R; Adhikari, Tulsi; Pandey, Arvind
2016-04-01
Establishing the appropriate hypothesis is one of the important steps in carrying out statistical tests/analysis, and understanding it is important for interpreting the results of statistical analysis. The current communication attempts to present the concept of testing of hypothesis in non-inferiority and equivalence trials, where the null hypothesis is just the reverse of what is set up for conventional superiority trials. As in superiority trials, rejection of the null hypothesis is sought in order to establish the fact the researcher is intending to prove. It is important to mention that equivalence or non-inferiority cannot be proved by accepting the null hypothesis of no difference. Hence, establishing the appropriate statistical hypothesis is extremely important to arrive at a meaningful conclusion for the set objectives in research.
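A minimal sketch of the reversed null in practice is the two one-sided tests (TOST) procedure, shown here with statsmodels; the equivalence margins and data are assumed for illustration.

```python
# Minimal sketch: equivalence testing via two one-sided tests (TOST).
# Margins and data are assumed.
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(7)
new_treatment = rng.normal(10.0, 2.0, 80)
standard = rng.normal(10.2, 2.0, 80)

# H0: |mean difference| >= 1.0 (not equivalent);
# H1: the difference lies within (-1.0, +1.0).
p, lower, upper = ttost_ind(new_treatment, standard, low=-1.0, upp=1.0)
print(f"TOST p = {p:.4f}")  # a small p rejects H0, i.e. supports equivalence
```

Note that the null hypothesis here is non-equivalence, so rejecting it is what establishes equivalence, exactly the reversal the authors describe.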
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.; Scheeres, Daniel J.
2018-06-01
The observation to observation measurement association problem for dynamical systems can be addressed by determining if the uncertain admissible regions produced from each observation have one or more points of intersection in state space. An observation association method is developed which uses an optimization based approach to identify local Mahalanobis distance minima in state space between two uncertain admissible regions. A binary hypothesis test with a selected false alarm rate is used to assess the probability that an intersection exists at the point(s) of minimum distance. The systemic uncertainties, such as measurement uncertainties, timing errors, and other parameter errors, define a distribution about a state estimate located at the local Mahalanobis distance minima. If local minima do not exist, then the observations are not associated. The proposed method utilizes an optimization approach defined on a reduced dimension state space to reduce the computational load of the algorithm. The efficacy and efficiency of the proposed method are demonstrated on observation data collected from the Georgia Tech Space Object Research Telescope.
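The core computation, locating a Mahalanobis distance minimum between two uncertain estimates and thresholding it at a chosen false-alarm rate, can be sketched as below. The two-dimensional states and covariances are toy stand-ins for the admissible regions produced from real observations; for Gaussian uncertainties the minimized sum reduces exactly to the statistic (x1 - x2)^T (P1 + P2)^{-1} (x1 - x2).

```python
# Minimal sketch: minimum Mahalanobis distance between two uncertain
# state estimates, thresholded by a chi-square test (toy 2D example).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

x1, P1 = np.array([1.0, 0.5]), np.diag([0.20, 0.10])  # estimate from obs 1
x2, P2 = np.array([1.3, 0.7]), np.diag([0.15, 0.25])  # estimate from obs 2

def mahalanobis_sq(x):
    d1, d2 = x - x1, x - x2
    return d1 @ np.linalg.solve(P1, d1) + d2 @ np.linalg.solve(P2, d2)

res = minimize(mahalanobis_sq, x0=(x1 + x2) / 2, method="BFGS")

# Binary hypothesis test: compare the minimum with a chi-square threshold
# at the selected false-alarm rate (df = state dimension).
threshold = chi2.ppf(0.99, df=2)
print(f"min distance^2 = {res.fun:.3f}, associated = {res.fun <= threshold}")
```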
Population-wide folic acid fortification and preterm birth: testing the folate depletion hypothesis.
Naimi, Ashley I; Auger, Nathalie
2015-04-01
We assess whether population-wide folic acid fortification policies were followed by a reduction of preterm and early-term birth rates in Québec among women with short and optimal interpregnancy intervals. We extracted birth certificate data for 1.3 million births between 1981 and 2010 to compute age-adjusted preterm and early-term birth rates stratified by short and optimal interpregnancy intervals. We used Joinpoint regression to detect changes in the preterm and early-term birth rates and assess whether these changes coincide with the implementation of population-wide folic acid fortification. A change in the preterm birth rate occurred in 2000 among women with short (95% confidence interval [CI] = 1994, 2005) and optimal (95% CI = 1995, 2008) interpregnancy intervals. Changes in early-term birth rates did not coincide with the implementation of folic acid fortification. Our results do not indicate a link between folic acid fortification and early-term birth but suggest an improvement in preterm birth rates after implementation of a nationwide folic acid fortification program.
The impact of chief executive officer optimism on hospital strategic decision making.
Langabeer, James R; Yao, Emery
2012-01-01
Previous strategic decision making research has focused mostly on the analytical positioning approach, which broadly emphasizes an alignment between rationality and the external environment. In this study, we propose that hospital chief executive optimism (or the general tendency to expect positive future outcomes) will moderate the relationship between a comprehensively rational decision-making process and organizational performance. The purpose of this study was to explore the impact that dispositional optimism has on the well-established relationship between rational decision-making processes and organizational performance. Specifically, we hypothesized that optimism will moderate the relationship between the level of rationality and the organization's performance. We further suggest that this relationship will be more negative for those with high, as opposed to low, optimism. We surveyed 168 hospital CEOs and used moderated hierarchical regression methods to statistically test our hypothesis. On the basis of this survey, we found evidence of a complex interplay of optimism in the rationality-organizational performance relationship. More specifically, we found that the two-way interactions between optimism and rational decision making were negatively associated with performance and that where optimism was the highest, the rationality-performance relationship was the most negative. Executive optimism was positively associated with organizational performance. We also found that greater perceived environmental turbulence, when interacting with optimism, did not have a significant interaction effect on the rationality-performance relationship. These findings suggest potential for broader participation in strategic processes and the use of organizational development techniques that assess executive disposition and traits in recruitment processes, because CEO optimism influences hospital-level processes.
Jiang, Hong; Chess, Leonard
2008-11-01
By discriminating self from nonself and controlling the magnitude and class of immune responses, the immune system mounts effective immunity against virtually any foreign antigens but avoids harmful immune responses to self. These are two equally important and related but distinct processes, which function in concert to ensure an optimal function of the immune system. Immunologically relevant clinical problems often occur because of failure of either process, especially the former. Currently, there is no unified conceptual framework to characterize the precise relationship between thymic negative selection and peripheral immune regulation, which is the basis for understanding self-non-self discrimination versus control of magnitude and class of immune responses. In this article, we explore a novel hypothesis of how the immune system discriminates self from nonself in the periphery during adaptive immunity. This hypothesis permits rational analysis of various seemingly unrelated biomedical problems inherent in immunologic disorders that cannot be uniformly interpreted by any currently existing paradigms. The proposed hypothesis is based on a unified conceptual framework of the "avidity model of peripheral T-cell regulation" that we originally proposed and tested, in both basic and clinical immunology, to understand how the immune system achieves self-nonself discrimination in the periphery.
Facio, Flavia M; Sapp, Julie C; Linn, Amy; Biesecker, Leslie G
2012-10-10
Massively-parallel sequencing (MPS) technologies create challenges for informed consent of research participants given the enormous scale of the data and the wide range of potential results. We propose that the consent process in these studies be based on whether they use MPS to test a hypothesis or to generate hypotheses. To demonstrate the differences in these approaches to informed consent, we describe the consent processes for two MPS studies. The purpose of our hypothesis-testing study is to elucidate the etiology of rare phenotypes using MPS. The purpose of our hypothesis-generating study is to test the feasibility of using MPS to generate clinical hypotheses, and to approach the return of results as an experimental manipulation. Issues to consider in both designs include: volume and nature of the potential results, primary versus secondary results, return of individual results, duty to warn, length of interaction, target population, and privacy and confidentiality. The categorization of MPS studies as hypothesis-testing versus hypothesis-generating can help to clarify the issue of so-called incidental or secondary results for the consent process, and aid the communication of the research goals to study participants.
An Exercise for Illustrating the Logic of Hypothesis Testing
ERIC Educational Resources Information Center
Lawton, Leigh
2009-01-01
Hypothesis testing is one of the more difficult concepts for students to master in a basic, undergraduate statistics course. Students often are puzzled as to why statisticians simply don't calculate the probability that a hypothesis is true. This article presents an exercise that forces students to lay out on their own a procedure for testing a…
ERIC Educational Resources Information Center
Wilcox, Rand R.; Serang, Sarfaraz
2017-01-01
The article provides perspectives on p values, null hypothesis testing, and alternative techniques in light of modern robust statistical methods. Null hypothesis testing and "p" values can provide useful information provided they are interpreted in a sound manner, which includes taking into account insights and advances that have…
Hypothesis Testing Using Spatially Dependent Heavy Tailed Multisensor Data
2014-12-01
Report fragment: "…consistent with the null hypothesis of linearity and can be used to estimate the distribution of a test statistic that can discriminate between the null…" [Figure caption: Test for nonlinearity. The histogram is generated using the surrogate data; the statistic of the original time series is represented by the solid line.]
The role of responsibility and fear of guilt in hypothesis-testing.
Mancini, Francesco; Gangemi, Amelia
2006-12-01
Recent theories argue that both perceived responsibility and fear of guilt increase obsessive-like behaviours. We propose that hypothesis-testing might account for this effect. Both perceived responsibility and fear of guilt would influence subjects' hypothesis-testing by inducing a prudential style. This style implies focusing on and confirming the worst hypothesis, and reiterating the testing process. In our experiment, we manipulated the responsibility and fear of guilt of 236 normal volunteers who executed a deductive task. The results show that perceived responsibility is the main factor that influenced individuals' hypothesis-testing. Fear of guilt, however, has a significant additive effect. Guilt-fearing participants preferred to carry on with the diagnostic process even when faced with initially favourable evidence, whereas participants in the responsibility condition only did so when confronted with unfavourable evidence. Implications for the understanding of obsessive-compulsive disorder (OCD) are discussed.
Chiba, Yasutaka
2017-09-01
Fisher's exact test is commonly used to compare two groups when the outcome is binary in randomized trials. In the context of causal inference, this test explores the sharp causal null hypothesis (i.e. the causal effect of treatment is the same for all subjects), but not the weak causal null hypothesis (i.e. the causal risks are the same in the two groups). Therefore, in general, rejection of the null hypothesis by Fisher's exact test does not mean that the causal risk difference is not zero. Recently, Chiba (Journal of Biometrics and Biostatistics 2015; 6: 244) developed a new exact test for the weak causal null hypothesis when the outcome is binary in randomized trials; the new test is not based on any large sample theory and does not require any assumption. In this paper, we extend the new test; we create a version of the test applicable to a stratified analysis. The stratified exact test that we propose is general in nature and can be used in several approaches toward the estimation of treatment effects after adjusting for stratification factors. The stratified Fisher's exact test of Jung (Biometrical Journal 2014; 56: 129-140) tests the sharp causal null hypothesis. This test applies a crude estimator of the treatment effect and can be regarded as a special case of our proposed exact test. Our proposed stratified exact test can be straightforwardly extended to the analysis of noninferiority trials and to the construction of the associated confidence interval. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
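The stratified exact test proposed here is not available in standard libraries. As a rough stand-in, the following sketch contrasts a crude (pooled) Fisher's exact test with the Cochran-Mantel-Haenszel test from statsmodels, which likewise adjusts for stratification factors; the 2x2 tables are hypothetical.

```python
# Minimal sketch: pooled Fisher's exact test vs. a stratified analysis
# using the Cochran-Mantel-Haenszel test (stand-in, not the paper's test).
import numpy as np
from scipy.stats import fisher_exact
from statsmodels.stats.contingency_tables import StratifiedTable

# Hypothetical 2x2 tables per stratum:
# [[treated events, treated non-events], [control events, control non-events]]
stratum_a = np.array([[12, 38], [20, 30]])
stratum_b = np.array([[5, 45], [9, 41]])

_, p_crude = fisher_exact(stratum_a + stratum_b)   # ignores stratification
print(f"pooled Fisher p = {p_crude:.4f}")

cmh = StratifiedTable([stratum_a, stratum_b]).test_null_odds()
print(f"CMH statistic = {cmh.statistic:.3f}, p = {cmh.pvalue:.4f}")
```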
Yang, Tianzhi; Martin, Paige; Fogarty, Brittany; Brown, Alison; Schurman, Kayla; Phipps, Roger; Yin, Viravuth P; Lockman, Paul; Bai, Shuhua
2015-06-01
The blood-brain barrier (BBB) essentially restricts therapeutic drugs from entering into the brain. This study tests the hypothesis that brain endothelial cell derived exosomes can deliver anticancer drug across the BBB for the treatment of brain cancer in a zebrafish (Danio rerio) model. Four types of exosomes were isolated from brain cell culture media and characterized by particle size, morphology, total protein, and transmembrane protein markers. The transport mechanism, cell uptake, and cytotoxicity of the optimized exosome delivery system were tested. Brain distribution of exosome-delivered anticancer drugs was evaluated using transgenic zebrafish TG (fli1: GFP) embryos, and efficacies of the optimized formulations were examined in a xenotransplanted zebrafish model of brain cancer. The four types of exosomes, 30-100 nm in diameter, showed different morphologies, and exosomes derived from brain endothelial cells expressed more of the tetraspanin transmembrane protein CD63. Optimized exosomes increased the uptake of a fluorescent marker via receptor-mediated endocytosis and the cytotoxicity of anticancer drugs in cancer cells. Images of the zebrafish showed that exosome-delivered anticancer drugs crossed the BBB and entered into the brain. In the brain cancer model, exosome-delivered anticancer drugs significantly decreased the fluorescent intensity of xenotransplanted cancer cells and a tumor growth marker. Brain endothelial cell derived exosomes could potentially be used as a carrier for brain delivery of anticancer drugs for the treatment of brain cancer.
A statistical test to show negligible trend
Philip M. Dixon; Joseph H.K. Pechmann
2005-01-01
The usual statistical tests of trend are inappropriate for demonstrating the absence of trend. This is because failure to reject the null hypothesis of no trend does not prove that null hypothesis. The appropriate statistical method is based on an equivalence test. The null hypothesis is that the trend is not zero, i.e., outside an a priori specified equivalence region...
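A minimal sketch of such an equivalence test for trend, framed as two one-sided tests of the regression slope against an a priori margin, is shown below; the data and the margin are assumed.

```python
# Minimal sketch: equivalence (negligible-trend) test for a regression slope.
# Data and equivalence margin are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
years = np.arange(20)
counts = 50 + rng.normal(0, 3, size=years.size)  # toy population index
delta = 1.0                                      # equivalence margin on slope

fit = stats.linregress(years, counts)
df = years.size - 2
# One-sided tests that the slope exceeds -delta and falls below +delta.
t_low = (fit.slope + delta) / fit.stderr
t_upp = (fit.slope - delta) / fit.stderr
p_low = 1 - stats.t.cdf(t_low, df)  # H0: slope <= -delta
p_upp = stats.t.cdf(t_upp, df)      # H0: slope >= +delta
p_tost = max(p_low, p_upp)
print(f"slope = {fit.slope:.3f}, TOST p = {p_tost:.4f}")
# A small p rejects the null of a non-negligible trend.
```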
Niesman, Peter J; Wei, Jiahui; LaPorte, Megan J; Carlson, Lauren J; Nassau, Kileigh L; Bao, Gina C; Cheng, Jeffrey P; de la Tremblaye, Patricia; Lajud, Naima; Bondi, Corina O; Kline, Anthony E
2018-02-05
Behavioral assessments in rats are overwhelmingly conducted during the day, albeit that is when they are least active. This incongruity may preclude optimal performance. Hence, the goal of this study was to determine if differences in neurobehavior exist in traumatic brain injured (TBI) rats when assessed during the day vs. night. The hypothesis was that the night group would perform better than the day group on all behavioral tasks. Anesthetized adult male rats received either a cortical impact or sham injury and then were randomly assigned to either Day (1:00-3:00 p.m.) or Night (7:30-9:30 p.m.) testing. Motor function (beam-balance/walk) was assessed on post-operative days 1-5 and cognitive performance (spatial learning) was assessed on days 14-18. Corticosterone (CORT) levels were quantified at 24 h and 21 days after TBI. No significant differences were revealed between the TBI rats tested during the Day vs. Night for motor or cognition (p's > 0.05). CORT levels were higher in the Night-tested TBI and sham groups at 24 h (p < 0.05), but returned to baseline and were no longer different by day 21 (p > 0.05), suggesting an initial, but transient, stress response that did not affect neurobehavioral outcome. These data suggest that the time rats are tested has no noticeable impact on their performance, which does not support the hypothesis. The finding validates the interpretations from numerous studies conducted when rats were tested during the day vs. their natural active period. Copyright © 2017 Elsevier B.V. All rights reserved.
Unadjusted Bivariate Two-Group Comparisons: When Simpler is Better.
Vetter, Thomas R; Mascha, Edward J
2018-01-01
Hypothesis testing involves posing both a null hypothesis and an alternative hypothesis. This basic statistical tutorial discusses the appropriate use, including their so-called assumptions, of the common unadjusted bivariate tests for hypothesis testing and thus comparing study sample data for a difference or association. The appropriate choice of a statistical test is predicated on the type of data being analyzed and compared. The unpaired or independent samples t test is used to test the null hypothesis that the 2 population means are equal; rejecting it supports the alternative hypothesis that the 2 population means are not equal. The unpaired t test is intended for comparing independent continuous (interval or ratio) data from 2 study groups. A common mistake is to apply several unpaired t tests when comparing data from 3 or more study groups. In this situation, an analysis of variance with post hoc (posttest) intragroup comparisons should instead be applied. Another common mistake is to apply a series of unpaired t tests when comparing sequentially collected data from 2 study groups. In this situation, a repeated-measures analysis of variance, with tests for group-by-time interaction, and post hoc comparisons, as appropriate, should instead be applied in analyzing data from sequential collection points. The paired t test is used to assess the difference in the means of 2 study groups when the sample observations have been obtained in pairs, often before and after an intervention in each study subject. The Pearson chi-square test is widely used to test the null hypothesis that 2 unpaired categorical variables, each with 2 or more nominal levels (values), are independent of each other. When the null hypothesis is rejected, one concludes that there is a probable association between the 2 unpaired categorical variables. When comparing 2 groups on an ordinal or nonnormally distributed continuous outcome variable, the 2-sample t test is usually not appropriate. The Wilcoxon-Mann-Whitney test is instead preferred. When making paired comparisons on data that are ordinal, or continuous but nonnormally distributed, the Wilcoxon signed-rank test can be used. In analyzing their data, researchers should consider the continued merits of these simple yet equally valid unadjusted bivariate statistical tests. However, the appropriate use of an unadjusted bivariate test still requires a solid understanding of its utility, assumptions (requirements), and limitations. This understanding will mitigate the risk of misleading findings, interpretations, and conclusions.
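All of the unadjusted bivariate tests reviewed above are available in scipy; this minimal sketch runs each on toy data to show how the choice maps to data type and pairing.

```python
# Minimal sketch: the common unadjusted bivariate tests on toy data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
group_a = rng.normal(100, 15, 40)   # continuous, approximately normal
group_b = rng.normal(108, 15, 40)
before = rng.normal(80, 10, 25)     # paired measurements
after = before + rng.normal(3, 5, 25)

print(stats.ttest_ind(group_a, group_b))     # unpaired t test
print(stats.ttest_rel(before, after))        # paired t test
print(stats.mannwhitneyu(group_a, group_b))  # ordinal/nonnormal, unpaired
print(stats.wilcoxon(before, after))         # ordinal/nonnormal, paired

# Pearson chi-square for two unpaired categorical variables.
table = np.array([[20, 15], [10, 30]])
chi2_stat, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2_stat:.2f}, p = {p:.4f}")
```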
Maidment, Susannah C R; Barrett, Paul M
2012-09-22
Convergent morphologies are thought to indicate functional similarity, arising because of a limited number of evolutionary or developmental pathways. Extant taxa displaying convergent morphologies are used as analogues to assess function in extinct taxa with similar characteristics. However, functional studies of extant taxa have shown that functional similarity can arise from differing morphologies, calling into question the paradigm that form and function are closely related. We test the hypothesis that convergent skeletal morphology indicates functional similarity in the fossil record using ornithischian dinosaurs. The rare transition from bipedality to quadrupedality occurred at least three times independently in this clade, resulting in a suite of convergent osteological characteristics. We use homology rather than analogy to provide an independent line of evidence about function, reconstructing soft tissues using the extant phylogenetic bracket and applying biomechanical concepts to produce qualitative assessments of muscle leverage. We also optimize character changes to investigate the sequence of character acquisition. Different lineages of quadrupedal ornithischian dinosaur stood and walked differently from each other, falsifying the hypothesis that osteological convergence indicates functional similarity. The acquisition of features correlated with quadrupedalism generally occurs in the same order in each clade, suggesting underlying developmental mechanisms that act as evolutionary constraints.
Inference for High-dimensional Differential Correlation Matrices.
Cai, T Tony; Zhang, Anru
2016-01-01
Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. Minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with the breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.
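A minimal sketch of the estimation step, a differential correlation matrix with thresholding toward a sparse difference, follows; the global threshold used here is a simple stand-in for the adaptive entry-wise thresholding analyzed in the paper.

```python
# Minimal sketch: thresholded differential correlation matrix D = R1 - R2.
# The fixed threshold is illustrative, not the paper's adaptive procedure.
import numpy as np

rng = np.random.default_rng(9)
n, p = 200, 10
x1 = rng.normal(size=(n, p))
x2 = rng.normal(size=(n, p))
x2[:, 1] += 0.8 * x2[:, 0]  # inject one differentially co-expressed pair

r1 = np.corrcoef(x1, rowvar=False)
r2 = np.corrcoef(x2, rowvar=False)
diff = r1 - r2

tau = 2.0 * np.sqrt(np.log(p) / n)  # threshold scale (assumed)
diff_sparse = np.where(np.abs(diff) >= tau, diff, 0.0)
print(f"nonzero differential entries: {np.count_nonzero(diff_sparse) // 2}")
```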
Longitudinal Dimensionality of Adolescent Psychopathology: Testing the Differentiation Hypothesis
ERIC Educational Resources Information Center
Sterba, Sonya K.; Copeland, William; Egger, Helen L.; Costello, E. Jane; Erkanli, Alaattin; Angold, Adrian
2010-01-01
Background: The differentiation hypothesis posits that the underlying liability distribution for psychopathology is of low dimensionality in young children, inflating diagnostic comorbidity rates, but increases in dimensionality with age as latent syndromes become less correlated. This hypothesis has not been adequately tested with longitudinal…
A large scale test of the gaming-enhancement hypothesis.
Przybylski, Andrew K; Wang, John C
2016-01-01
A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people's gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses was compared. Results provided no substantive evidence supporting the idea that having a preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support of the null hypothesis over the predicted gaming-enhancement effect. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work.
Null but not void: considerations for hypothesis testing.
Shaw, Pamela A; Proschan, Michael A
2013-01-30
Standard statistical theory teaches us that once the null and alternative hypotheses have been defined for a parameter, the choice of the statistical test is clear. Standard theory does not teach us how to choose the null or alternative hypothesis appropriate to the scientific question of interest. Neither does it tell us that in some cases, depending on which alternatives are realistic, we may want to define our null hypothesis differently. Problems in statistical practice are frequently not as pristinely summarized as the classic theory in our textbooks. In this article, we present examples in statistical hypothesis testing in which seemingly simple choices are in fact rich with nuance that, when given full consideration, make the choice of the right hypothesis test much less straightforward. Published 2012. This article is a US Government work and is in the public domain in the USA.
Effect of climate-related mass extinctions on escalation in molluscs
NASA Astrophysics Data System (ADS)
Hansen, Thor A.; Kelley, Patricia H.; Melland, Vicky D.; Graham, Scott E.
1999-12-01
We test the hypothesis that escalated species (e.g., those with antipredatory adaptations such as heavy armor) are more vulnerable to extinctions caused by changes in climate. If this hypothesis is valid, recovery faunas after climate-related extinctions should include significantly fewer species with escalated shell characteristics, and escalated species should undergo greater rates of extinction than nonescalated species. This hypothesis is tested for the Cretaceous-Paleocene, Eocene-Oligocene, middle Miocene, and Pliocene-Pleistocene mass extinctions. Gastropod and bivalve molluscs from the U.S. coastal plain were evaluated for 10 shell characters that confer resistance to predators. Of 40 tests, one supported the hypothesis; highly ornamented gastropods underwent greater levels of Pliocene-Pleistocene extinction than did nonescalated species. All remaining tests were nonsignificant. The hypothesis that escalated species are more vulnerable to climate-related mass extinctions is not supported.
Proficiency Testing for Evaluating Aerospace Materials Test Anomalies
NASA Technical Reports Server (NTRS)
Hirsch, D.; Motto, S.; Peyton, S.; Beeson, H.
2006-01-01
ASTM G 86 and ASTM G 74 are commonly used to evaluate the susceptibility of materials to ignition in liquid and gaseous oxygen systems. However, the methods have been known for their lack of repeatability. The problems inherent in the test logic do not allow precise identification of either the presence or the magnitude of problems related to running the tests, such as inconsistent system performance, lack of adherence to procedures, etc. Excessive variability leads to increasing instances of accepting the null hypothesis erroneously, and so to the false logical deduction that problems are nonexistent when they really do exist. This paper attempts to develop and recommend an approach that could lead to increased accuracy in problem diagnostics by using the 50% reactivity point, which has been shown to be more repeatable. The initial tests conducted indicate that PTFE and Viton A (for pneumatic impact) and Buna S (for mechanical impact) would be good choices for additional testing and consideration for inter-laboratory evaluations. The approach presented could also be used to evaluate variable effects with increased confidence and tolerance optimization.
On Restructurable Control System Theory
NASA Technical Reports Server (NTRS)
Athans, M.
1983-01-01
The state of stochastic system and control theory as it impacts restructurable control issues is addressed. The multivariable characteristics of the control problem are addressed. The failure detection/identification problem is discussed as a multi-hypothesis testing problem. Control strategy reconfiguration, static multivariable controls, static failure hypothesis testing, dynamic multivariable controls, fault-tolerant control theory, dynamic hypothesis testing, generalized likelihood ratio (GLR) methods, and adaptive control are discussed.
ERIC Educational Resources Information Center
Marmolejo-Ramos, Fernando; Cousineau, Denis
2017-01-01
The number of articles showing dissatisfaction with the null hypothesis statistical testing (NHST) framework has been progressively increasing over the years. Alternatives to NHST have been proposed and the Bayesian approach seems to have achieved the highest amount of visibility. In this last part of the special issue, a few alternative…
A model of optimal voluntary muscular control.
FitzHugh, R
1977-07-19
In the absence of detailed knowledge of how the CNS controls a muscle through its motor fibers, a reasonable hypothesis is that of optimal control. This hypothesis is studied using a simplified mathematical model of a single muscle, based on A.V. Hill's equations, with series elastic element omitted, and with the motor signal represented by a single input variable. Two cost functions were used. The first was total energy expended by the muscle (work plus heat). If the load is a constant force, with no inertia, Hill's optimal velocity of shortening results. If the load includes a mass, analysis by optimal control theory shows that the motor signal to the muscle consists of three phases: (1) maximal stimulation to accelerate the mass to the optimal velocity as quickly as possible, (2) an intermediate level of stimulation to hold the velocity at its optimal value, once reached, and (3) zero stimulation, to permit the mass to slow down, as quickly as possible, to zero velocity at the specified distance shortened. If the latter distance is too small, or the mass too large, the optimal velocity is not reached, and phase (2) is absent. For lengthening, there is no optimal velocity; there are only two phases, zero stimulation followed by maximal stimulation. The second cost function was total time. The optimal control for shortening consists of only phases (1) and (3) above, and is identical to the minimal energy control whenever phase (2) is absent from the latter. Generalization of this model to include viscous loads and a series elastic element is discussed.
Kawasaki disease and toxic shock syndrome--at last the etiology is clear?
Curtis, Nigel
2004-01-01
A decade after the superantigen hypothesis for Kawasaki disease (KD) was first suggested, it has still not been either proven or refuted conclusively. Although initial optimism for the hypothesis was quashed by a series of published papers apparently refuting the idea, in the last few years there have been a number of good studies providing evidence in support of the superantigen hypothesis. Whether this renewed enthusiasm is justified will hopefully become clear in the near future. Ultimately, accurate diagnosis, more targeted treatment, and preventative strategies depend on the unraveling of the immunopathogenesis of this disease.
Revised standards for statistical evidence.
Johnson, Valen E
2013-11-26
Recent advances in Bayesian hypothesis testing have led to the development of uniformly most powerful Bayesian tests, which represent an objective, default class of Bayesian hypothesis tests that have the same rejection regions as classical significance tests. Based on the correspondence between these two classes of tests, it is possible to equate the size of classical hypothesis tests with evidence thresholds in Bayesian tests, and to equate P values with Bayes factors. An examination of these connections suggests that recent concerns over the lack of reproducibility of scientific studies can be attributed largely to the conduct of significance tests at unjustifiably high levels of significance. To correct this problem, evidence thresholds required for the declaration of a significant finding should be increased to 25-50:1, and to 100-200:1 for the declaration of a highly significant finding. In terms of classical hypothesis tests, these evidence standards mandate the conduct of tests at the 0.005 or 0.001 level of significance.
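The reported correspondence can be reproduced numerically. The sketch below assumes the one-sided normal-mean case, in which a uniformly most powerful Bayesian test with evidence threshold gamma rejects when z > sqrt(2 ln gamma); mapping gamma to a significance level then follows directly.

    from math import log, sqrt
    from scipy.stats import norm

    def significance_from_evidence(gamma):
        """One-sided significance level matched to a UMPBT evidence threshold,
        assuming the normal-mean correspondence z_crit = sqrt(2 * ln(gamma))."""
        return norm.sf(sqrt(2 * log(gamma)))

    for gamma in (25, 50, 100, 200):
        print(gamma, round(significance_from_evidence(gamma), 4))
    # prints roughly 0.0056, 0.0026, 0.0012, 0.0006 -- in line with the
    # suggested 0.005 and 0.001 standards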
Stochastic gradient ascent outperforms gamers in the Quantum Moves game
NASA Astrophysics Data System (ADS)
Sels, Dries
2018-04-01
In a recent work on quantum state preparation, Sørensen and co-workers [Nature (London) 532, 210 (2016), 10.1038/nature17620] explore the possibility of using video games to help design quantum control protocols. The authors present a game called "Quantum Moves" (https://www.scienceathome.org/games/quantum-moves/) in which gamers have to move an atom from A to B by means of optical tweezers. They report that, "players succeed where purely numerical optimization fails." Moreover, by harnessing the player strategies, they can "outperform the most prominent established numerical methods." The aim of this Rapid Communication is to analyze the problem in detail and show that those claims are untenable. In fact, without any prior knowledge and starting from a random initial seed, a simple stochastic local optimization method finds near-optimal solutions which outperform all players. Counterdiabatic driving can even be used to generate protocols without resorting to numeric optimization. The analysis results in an accurate analytic estimate of the quantum speed limit which, apart from zero-point motion, is shown to be entirely classical in nature. The latter might explain why gamers are reasonably good at the game. A simple modification of the BringHomeWater challenge is proposed to test this hypothesis.
Optimization behavior of brainstem respiratory neurons. A cerebral neural network model.
Poon, C S
1991-01-01
A recent model of respiratory control suggested that the steady-state respiratory responses to CO2 and exercise may be governed by an optimal control law in the brainstem respiratory neurons. It was not certain, however, whether such complex optimization behavior could be accomplished by a realistic biological neural network. To test this hypothesis, we developed a hybrid computer-neural model in which the dynamics of the lung, brain and other tissue compartments were simulated on a digital computer. Mimicking the "controller" was a human subject who pedalled on a bicycle with varying speed (analog of ventilatory output) with a view to minimize an analog signal of the total cost of breathing (chemical and mechanical) which was computed interactively and displayed on an oscilloscope. In this manner, the visuomotor cortex served as a proxy (homolog) of the brainstem respiratory neurons in the model. Results in 4 subjects showed a linear steady-state ventilatory CO2 response to arterial PCO2 during simulated CO2 inhalation and a nearly isocapnic steady-state response during simulated exercise. Thus, neural optimization is a plausible mechanism for respiratory control during exercise and can be achieved by a neural network with cognitive computational ability without the need for an exercise stimulus.
Identification of combinatorial drug regimens for treatment of Huntington's disease using Drosophila
NASA Astrophysics Data System (ADS)
Agrawal, Namita; Pallos, Judit; Slepko, Natalia; Apostol, Barbara L.; Bodai, Laszlo; Chang, Ling-Wen; Chiang, Ann-Shyn; Michels Thompson, Leslie; Marsh, J. Lawrence
2005-03-01
We explore the hypothesis that pathology of Huntington's disease involves multiple cellular mechanisms whose contributions to disease are incrementally additive or synergistic. We provide evidence that the photoreceptor neuron degeneration seen in flies expressing mutant human huntingtin correlates with widespread degenerative events in the Drosophila CNS. We use a Drosophila Huntington's disease model to establish dose regimens and protocols to assess the effectiveness of drug combinations used at low threshold concentrations. These proof of principle studies identify at least two potential combinatorial treatment options and illustrate a rapid and cost-effective paradigm for testing and optimizing combinatorial drug therapies while reducing side effects for patients with neurodegenerative disease. The potential for using prescreening in Drosophila to inform combinatorial therapies that are most likely to be effective for testing in mammals is discussed.
NASA Astrophysics Data System (ADS)
Thompson, E. M.; Hewlett, J. B.; Baise, L. G.; Vogel, R. M.
2011-01-01
Annual maximum (AM) time series are incomplete (i.e., censored) when no events are included above the assumed censoring threshold (i.e., magnitude of completeness). We introduce a distributional hypothesis test for left-censored Gumbel observations based on the probability plot correlation coefficient (PPCC). Critical values of the PPCC hypothesis test statistic are computed from Monte Carlo simulations and are a function of sample size, censoring level, and significance level. When applied to a global catalog of earthquake observations, the left-censored Gumbel PPCC tests are unable to reject the Gumbel hypothesis for 45 of 46 seismic regions. We apply four different field significance tests for combining individual tests into a collective hypothesis test. None of the field significance tests are able to reject the global hypothesis that AM earthquake magnitudes arise from a Gumbel distribution. Because the field significance levels are not conclusive, we also compute the likelihood that these field significance tests are unable to reject the Gumbel model when the samples arise from a more complex distributional alternative. A power study documents that the censored Gumbel PPCC test is unable to reject some important and viable Generalized Extreme Value (GEV) alternatives. Thus, we cannot rule out the possibility that the global AM earthquake time series could arise from a GEV distribution with a finite upper bound, also known as a reverse Weibull distribution. Our power study also indicates that the binomial and uniform field significance tests are substantially more powerful than the more commonly used Bonferroni and false discovery rate multiple comparison procedures.
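For readers unfamiliar with PPCC tests, the sketch below shows the core computation for the uncensored case: the statistic is the correlation between the ordered sample and Gumbel quantiles at plotting positions, and critical values come from Monte Carlo simulation. Handling of left-censoring, as in the study, is omitted, and the plotting positions i/(n+1) are one common convention among several.

    import numpy as np
    from scipy.stats import gumbel_r

    def ppcc_gumbel(sample):
        """Correlation between the ordered sample and Gumbel quantiles."""
        x = np.sort(sample)
        n = len(x)
        q = gumbel_r.ppf(np.arange(1, n + 1) / (n + 1))  # plotting positions
        return np.corrcoef(x, q)[0, 1]

    def ppcc_critical_value(n, alpha=0.05, reps=10_000, seed=0):
        """Monte Carlo critical value; reject Gumbel if PPCC falls below it."""
        rng = np.random.default_rng(seed)
        stats = [ppcc_gumbel(gumbel_r.rvs(size=n, random_state=rng))
                 for _ in range(reps)]
        return np.quantile(stats, alpha)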
NASA Technical Reports Server (NTRS)
Huebner, W. P.; Leigh, R. J.; Seidman, S. H.; Thomas, C. W.; Billian, C.; DiScenna, A. O.; Dell'Osso, L. F.
1992-01-01
1. We used a modeling approach to test the hypothesis that, in humans, the smooth pursuit (SP) system provides the primary signal for cancelling the vestibuloocular reflex (VOR) during combined eye-head tracking (CEHT) of a target moving smoothly in the horizontal plane. Separate models for SP and the VOR were developed. The optimal values of parameters of the two models were calculated using measured responses of four subjects to trials of SP and the visually enhanced VOR. After optimal parameter values were specified, each model generated waveforms that accurately reflected the subjects' responses to SP and vestibular stimuli. The models were then combined into a CEHT model wherein the final eye movement command signal was generated as the linear summation of the signals from the SP and VOR pathways. 2. The SP-VOR superposition hypothesis was tested using two types of CEHT stimuli, both of which involved passive rotation of subjects in a vestibular chair. The first stimulus consisted of a "chair brake" or sudden stop of the subject's head during CEHT; the visual target continued to move. The second stimulus consisted of a sudden change from the visually enhanced VOR to CEHT ("delayed target onset" paradigm); as the vestibular chair rotated past the angular position of the stationary visual stimulus, the latter started to move in synchrony with the chair. Data collected during experiments that employed these stimuli were compared quantitatively with predictions made by the CEHT model. 3. During CEHT, when the chair was suddenly and unexpectedly stopped, the eye promptly began to move in the orbit to track the moving target. Initially, gaze velocity did not completely match target velocity, however; this finally occurred approximately 100 ms after the brake onset. The model did predict the prompt onset of eye-in-orbit motion after the brake, but it did not predict that gaze velocity would initially be only approximately 70% of target velocity. One possible explanation for this discrepancy is that VOR gain can be dynamically modulated and, during sustained CEHT, it may assume a lower value. Consequently, during CEHT, a smaller-amplitude SP signal would be needed to cancel the lower-gain VOR. This reduction of the SP signal could account for the attenuated tracking response observed immediately after the brake. We found evidence for the dynamic modulation of VOR gain by noting differences in responses to the onset and offset of head rotation in trials of the visually enhanced VOR.(ABSTRACT TRUNCATED AT 400 WORDS).
Saraf, Sanatan; Mathew, Thomas; Roy, Anindya
2015-01-01
For the statistical validation of surrogate endpoints, an alternative formulation is proposed for testing Prentice's fourth criterion, under a bivariate normal model. In such a setup, the criterion involves inference concerning an appropriate regression parameter, and the criterion holds if the regression parameter is zero. Testing such a null hypothesis has been criticized in the literature since it can only be used to reject a poor surrogate, and not to validate a good surrogate. In order to circumvent this, an equivalence hypothesis is formulated for the regression parameter, namely the hypothesis that the parameter is equivalent to zero. Such an equivalence hypothesis is formulated as an alternative hypothesis, so that the surrogate endpoint is statistically validated when the null hypothesis is rejected. Confidence intervals for the regression parameter and tests for the equivalence hypothesis are proposed using bootstrap methods and small sample asymptotics, and their performances are numerically evaluated and recommendations are made. The choice of the equivalence margin is a regulatory issue that needs to be addressed. The proposed equivalence testing formulation is also adopted for other parameters that have been proposed in the literature on surrogate endpoint validation, namely, the relative effect and proportion explained.
Test of association: which one is the most appropriate for my study?
Gonzalez-Chica, David Alejandro; Bastos, João Luiz; Duquia, Rodrigo Pereira; Bonamigo, Renan Rangel; Martínez-Mesa, Jeovany
2015-01-01
Hypothesis tests are statistical tools widely used for assessing whether or not there is an association between two or more variables. These tests provide a probability of the type 1 error (p-value), which is used to accept or reject the null study hypothesis. We aim to provide a practical guide to help researchers carefully select the most appropriate procedure to answer their research question. We discuss the logic of hypothesis testing and present the prerequisites of each procedure based on practical examples.
Improving the Crossing-SIBTEST Statistic for Detecting Non-uniform DIF.
Chalmers, R Philip
2018-06-01
This paper demonstrates that, after applying a simple modification to Li and Stout's (Psychometrika 61(4):647-677, 1996) CSIBTEST statistic, an improved variant of the statistic could be realized. It is shown that this modified version of CSIBTEST has a more direct association with the SIBTEST statistic presented by Shealy and Stout (Psychometrika 58(2):159-194, 1993). In particular, the asymptotic sampling distributions and general interpretation of the effect size estimates are the same for SIBTEST and the new CSIBTEST. Given the more natural connection to SIBTEST, it is shown that Li and Stout's hypothesis testing approach is insufficient for CSIBTEST; thus, an improved hypothesis testing procedure is required. Based on the presented arguments, a new chi-squared-based hypothesis testing approach is proposed for the modified CSIBTEST statistic. Positive results from a modest Monte Carlo simulation study strongly suggest the original CSIBTEST procedure and randomization hypothesis testing approach should be replaced by the modified statistic and hypothesis testing method.
Statistical evaluation of synchronous spike patterns extracted by frequent item set mining
Torre, Emiliano; Picado-Muiño, David; Denker, Michael; Borgelt, Christian; Grün, Sonja
2013-01-01
We recently proposed frequent itemset mining (FIM) as a method to perform an optimized search for patterns of synchronous spikes (item sets) in massively parallel spike trains. This search outputs the occurrence count (support) of individual patterns that are not trivially explained by the counts of any superset (closed frequent item sets). The number of patterns found by FIM makes direct statistical tests infeasible due to severe multiple testing. To overcome this issue, we proposed to test the significance not of individual patterns, but instead of their signatures, defined as the pairs of pattern size z and support c. Here, we derive in detail a statistical test for the significance of the signatures under the null hypothesis of full independence (pattern spectrum filtering, PSF) by means of surrogate data. As a result, injected spike patterns that mimic assembly activity are well detected, yielding a low false negative rate. However, this approach is prone to additionally classify patterns resulting from chance overlap of real assembly activity and background spiking as significant. These patterns represent false positives with respect to the null hypothesis of having one assembly of given signature embedded in otherwise independent spiking activity. We propose the additional method of pattern set reduction (PSR) to remove these false positives by conditional filtering. By employing stochastic simulations of parallel spike trains with correlated activity in form of injected spike synchrony in subsets of the neurons, we demonstrate for a range of parameter settings that the analysis scheme composed of FIM, PSF and PSR allows reliable detection of active assemblies in massively parallel spike trains. PMID:24167487
The Quiet Eye and Motor Expertise: Explaining the “Efficiency Paradox”
Klostermann, André; Hossner, Ernst-Joachim
2018-01-01
It has been consistently reported that experts show longer quiet eye (QE) durations when compared to near-experts and novices. However, this finding is rather paradoxical as motor expertise is characterized by an economization of motor-control processes rather than by a prolongation in response programming, a suggested explanatory mechanism of the QE phenomenon. Therefore, an inhibition hypothesis was proposed that suggests an inhibition of non-optimal task solutions over movement parametrization, which is particularly necessary in experts due to the great extent and high density of their experienced task-solution space. In the current study, the effect of extending the task-solution space was tested by comparing the QE-duration gains in groups that trained a far-aiming task with a small number (low-extent) vs. a large number (high-extent) of task variants. After an extensive training period of more than 750 trials, both groups showed superior performance in post-test and retention test when compared to pretest and longer QE durations in post-test when compared to pretest. However, the QE durations dropped to baseline values at retention. Finally, the expected additional gain in QE duration for the high-extent group was not found and thus, the assumption of long QE durations due to an extended task-solution space was not confirmed. The findings were (by tendency) more in line with the density explanation of the inhibition hypothesis. This density argument suits research revealing a high specificity of motor skills in experts thus providing worthwhile options for future research on the paradoxical relation between the QE and motor expertise. PMID:29472882
Shi, Yue; Queener, Hope M.; Marsack, Jason D.; Ravikumar, Ayeswarya; Bedell, Harold E.; Applegate, Raymond A.
2013-01-01
Dynamic registration uncertainty of a wavefront-guided correction with respect to underlying wavefront error (WFE) inevitably decreases retinal image quality. A partial correction may improve average retinal image quality and visual acuity in the presence of registration uncertainties. The purpose of this paper is to (a) develop an algorithm to optimize wavefront-guided correction that improves visual acuity given registration uncertainty and (b) test the hypothesis that these corrections provide improved visual performance in the presence of these uncertainties as compared to a full-magnitude correction or a correction by Guirao, Cox, and Williams (2002). A stochastic parallel gradient descent (SPGD) algorithm was used to optimize the partial-magnitude correction for three keratoconic eyes based on measured scleral contact lens movement. Given its high correlation with logMAR acuity, the retinal image quality metric log visual Strehl was used as a predictor of visual acuity. Predicted values of visual acuity with the optimized corrections were validated by regressing measured acuity loss against predicted loss. Measured loss was obtained from normal subjects viewing acuity charts that were degraded by the residual aberrations generated by the movement of the full-magnitude correction, the correction by Guirao, and optimized SPGD correction. Partial-magnitude corrections optimized with an SPGD algorithm provide at least one line improvement of average visual acuity over the full magnitude and the correction by Guirao given the registration uncertainty. This study demonstrates that it is possible to improve the average visual acuity by optimizing wavefront-guided correction in the presence of registration uncertainty. PMID:23757512
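A generic SPGD update is compact enough to sketch. The version below minimizes an arbitrary cost over a coefficient vector by probing random parallel perturbations; the gain, perturbation size, and quadratic toy cost are illustrative assumptions, not the study's settings (where the cost derived from a retinal image quality metric over correction coefficients).

    import numpy as np

    def spgd(cost, x0, gain=0.5, sigma=0.05, iters=2000, seed=1):
        """Stochastic parallel gradient descent on a coefficient vector.

        Each iteration perturbs all coefficients in parallel with random
        +/- sigma steps and moves against the estimated cost gradient."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(iters):
            delta = sigma * rng.choice([-1.0, 1.0], size=x.shape)
            dJ = cost(x + delta) - cost(x - delta)  # two-sided cost probe
            x -= gain * dJ * delta                  # step down the stochastic gradient
        return x

    # toy usage: recover a target coefficient vector under a quadratic cost
    target = np.array([0.3, -0.1, 0.05])
    best = spgd(lambda c: np.sum((c - target) ** 2), np.zeros(3))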
Rethinking exchange market models as optimization algorithms
NASA Astrophysics Data System (ADS)
Luquini, Evandro; Omar, Nizam
2018-02-01
The exchange market model has mainly been used to study the inequality problem. Although the human society inequality problem is very important, the exchange market model's dynamics up to the stationary state and its capability of ranking individuals are interesting in themselves. This study considers the hypothesis that the exchange market model could be understood as an optimization procedure. We present herein the implications for algorithmic optimization and also the possibility of a new family of exchange market models.
Sexual orientation in men and avuncularity in Japan: implications for the kin selection hypothesis.
Vasey, Paul L; VanderLaan, Doug P
2012-02-01
The kin selection hypothesis for male androphilia posits that genes for male androphilia can be maintained in the population if the fitness costs of not reproducing directly are offset by enhancing inclusive fitness. In theory, androphilic males can increase their inclusive fitness by directing altruistic behavior toward kin, which, in turn, allows kin to increase their reproductive success. Previous research conducted in Western countries (U.S., UK) has failed to find any support for this hypothesis. In contrast, research conducted in Samoa has provided repeated support for it. In light of these cross-cultural differences, we hypothesized that the development of elevated avuncular (i.e., altruistic uncle-like) tendencies in androphilic males may be contingent on a relatively collectivistic cultural context. To test this hypothesis, we compared data on the avuncular tendencies and altruistic tendencies toward non-kin children of childless androphilic and gynephilic men in Japan, a culture that is known to be relatively collectivistic. The results of this study furnished no evidence that androphilic Japanese men exhibited elevated avuncular tendencies compared to their gynephilic counterparts. Moreover, there was no evidence that androphilic men's avuncular tendencies were more optimally designed (i.e., were more dissociated from their altruistic tendencies toward non-kin children) compared to gynephilic men. If an adaptively designed avuncular male androphilic phenotype exists and its development is contingent on a particular social environment, then the research presented here suggests that a collectivistic cultural context is insufficient, in and of itself, for the expression of such a phenotype.
How organisms do the right thing: The attractor hypothesis
Emlen, J.M.; Freeman, D.C.; Mills, A.; Graham, J.H.
1998-01-01
Neo-Darwinian theory is highly successful at explaining the emergence of adaptive traits over successive generations. However, there are reasons to doubt its efficacy in explaining the observed, impressively detailed adaptive responses of organisms to day-to-day changes in their surroundings. Also, the theory lacks a clear mechanism to account for both plasticity and canalization. In effect, there is a growing sentiment that the neo-Darwinian paradigm is incomplete, that something more than genetic structure, mutation, genetic drift, and the action of natural selection is required to explain organismal behavior. In this paper we extend the view of organisms as complex self-organizing entities by arguing that basic physical laws, coupled with the acquisitive nature of organisms, make adaptation all but tautological. That is, much adaptation is an unavoidable emergent property of organisms' complexity and, to a significant degree, occurs quite independently of genomic changes wrought by natural selection. For reasons that will become obvious, we refer to this assertion as the attractor hypothesis. The arguments also clarify the concept of "adaptation." Adaptation across generations, by natural selection, equates to the (game theoretic) maximization of fitness (the success with which one individual produces more individuals), while self-organizing based adaptation, within generations, equates to energetic efficiency and the matching of intake and biosynthesis to need. Finally, we discuss implications of the attractor hypothesis for a wide variety of genetical and physiological phenomena, including genetic architecture, directed mutation, genetic imprinting, paramutation, hormesis, plasticity, optimality theory, genotype-phenotype linkage and punctuated equilibrium, and present suggestions for tests of the hypothesis. © 1998 American Institute of Physics.
Scaling of angiosperm xylem structure with safety and efficiency.
Hacke, Uwe G; Sperry, John S; Wheeler, James K; Castro, Laura
2006-06-01
We tested the hypothesis that greater cavitation resistance correlates with less total inter-vessel pit area per vessel (the pit area hypothesis) and evaluated a trade-off between cavitation safety and transport efficiency. Fourteen species of diverse growth form (vine, ring- and diffuse-porous tree, shrub) and family affinity were added to published data predominantly from the Rosaceae (29 species total). Two types of vulnerability-to-cavitation curves were found. Ring-porous trees and vines showed an abrupt drop in hydraulic conductivity with increasing negative pressure, whereas hydraulic conductivity in diffuse-porous species generally decreased gradually. The ring-porous type curve was not an artifact of the centrifuge method because it was obtained also with the air-injection technique. A safety versus efficiency trade-off was evident when curves were compared across species: for a given pressure, there was a limited range of optimal vulnerability curves. The pit area hypothesis was supported by a strong relationship (r^2 = 0.77) between increasing cavitation resistance and diminishing pit membrane area per vessel (A_P). Small A_P was associated with small vessel surface area and hence narrow vessel diameter (D) and short vessel length (L)--consistent with an increase in vessel flow resistance with cavitation resistance. This trade-off was amplified at the tissue level by an increase in xylem/vessel area ratio with cavitation resistance. Ring-porous species were more efficient than diffuse-porous species on a vessel basis but not on a xylem basis owing to higher xylem/vessel area ratios in ring-porous anatomy. Across four orders of magnitude, lumen and end-wall resistivities maintained a relatively tight proportionality with a near-optimal mean of 56% of the total vessel resistivity residing in the end-wall. This was consistent with an underlying scaling of L to D^(3/2) across species. Pit flow resistance did not increase with cavitation safety, suggesting that cavitation pressure was not related to mean pit membrane porosity.
2012-01-01
Background Oviposition-site choice is an essential component of the life history of all mosquito species. According to the oviposition-preference offspring-performance (P-P) hypothesis, if optimizing offspring performance and fitness ensures high overall reproductive fitness for a given species, the female should accurately assess details of the heterogeneous environment and lay her eggs preferentially in sites with conditions more suitable to offspring. Methods We empirically tested the P-P hypothesis using the mosquito species Aedes albopictus by artificially manipulating two habitat conditions: diet (measured as mg of food added to a container) and conspecific density (CD; number of pre-existing larvae of the same species). Immature development (larval mortality, development time to pupation and time to emergence) and fitness (measured as wing length) were monitored from first instar through adult emergence using a factorial experimental design over two ascending gradients of diet (2.0, 3.6, 7.2 and 20 mg food/300 ml water) and CD (0, 20, 40 and 80 larvae/300 ml water). Treatments that exerted the most contrasting values of larval performance were recreated in a second experiment consisting of a single-female oviposition site selection assay. Results Development time decreased as food concentration increased, except from 7.2 mg to 20.0 mg (Two-Way CR ANOVA Post-Hoc test, P > 0.1). Development time also decreased as conspecific density increased from zero to 80 larvae (Two-Way CR ANOVA Post-Hoc test, P < 0.5). Combined, these results support the role of density-dependent competition for resources as a limiting factor for mosquito larval performance. Oviposition assays indicated that female mosquitoes select for larval habitats with conspecifics and that larval density was more important than diet in driving selection for oviposition sites. Conclusions This study supports predictions of the P-P hypothesis and provides a mechanistic understanding of the underlying factors driving mosquito oviposition site selection. PMID:23044004
Color inference in visual communication: the meaning of colors in recycling.
Schloss, Karen B; Lessard, Laurent; Walmsley, Charlotte S; Foley, Kathleen
2018-01-01
People interpret abstract meanings from colors, which makes color a useful perceptual feature for visual communication. This process is complicated, however, because there is seldom a one-to-one correspondence between colors and meanings. One color can be associated with many different concepts (one-to-many mapping) and many colors can be associated with the same concept (many-to-one mapping). We propose that to interpret color-coding systems, people perform assignment inference to determine how colors map onto concepts. We studied assignment inference in the domain of recycling. Participants saw images of colored but unlabeled bins and were asked to indicate which bins they would use to discard different kinds of recyclables and trash. In Experiment 1, we tested two hypotheses for how people perform assignment inference. The local assignment hypothesis predicts that people simply match objects with their most strongly associated color. The global assignment hypothesis predicts that people also account for the association strengths between all other objects and colors within the scope of the color-coding system. Participants discarded objects in bins that optimized the color-object associations of the entire set, which is consistent with the global assignment hypothesis. This sometimes resulted in discarding objects in bins whose colors were weakly associated with the object, even when there was a stronger associated option available. In Experiment 2, we tested different methods for encoding color-coding systems and found that people were better at assignment inference when color sets simultaneously maximized the association strength between assigned color-object pairings while minimizing associations between unassigned pairings. Our study provides an approach for designing intuitive color-coding systems that facilitate communication through visual media such as graphs, maps, signs, and artifacts.
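The contrast between local and global assignment can be made concrete with a toy association matrix. In the sketch below (numbers hypothetical), the Hungarian algorithm stands in for global assignment: it maximizes total association under a one-to-one constraint, so object 1 cedes its locally preferred color to object 0.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # association[i, j]: hypothetical strength between object i and color j
    association = np.array([[0.9, 0.4, 0.1],
                            [0.8, 0.7, 0.2],
                            [0.3, 0.6, 0.5]])

    local = association.argmax(axis=1)                # [0, 0, 1]: colors collide
    rows, cols = linear_sum_assignment(-association)  # negate to maximize
    # cols -> [0, 1, 2]: object 1 takes color 1, resolving the collision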
The Importance of Teaching Power in Statistical Hypothesis Testing
ERIC Educational Resources Information Center
Olinsky, Alan; Schumacher, Phyllis; Quinn, John
2012-01-01
In this paper, we discuss the importance of teaching power considerations in statistical hypothesis testing. Statistical power analysis determines the ability of a study to detect a meaningful effect size, where the effect size is the difference between the hypothesized value of the population parameter under the null hypothesis and the true value…
The Relation between Parental Values and Parenting Behavior: A Test of the Kohn Hypothesis.
ERIC Educational Resources Information Center
Luster, Tom; And Others
1989-01-01
Used data on 65 mother-infant dyads to test Kohn's hypothesis concerning the relation between values and parenting behavior. Findings support Kohn's hypothesis that parents who value self-direction would emphasize supportive function of parenting and parents who value conformity would emphasize their obligations to impose restraints. (Author/NB)
Cognitive Biases in the Interpretation of Autonomic Arousal: A Test of the Construal Bias Hypothesis
ERIC Educational Resources Information Center
Ciani, Keith D.; Easter, Matthew A.; Summers, Jessica J.; Posada, Maria L.
2009-01-01
According to Bandura's construal bias hypothesis, derived from social cognitive theory, persons with the same heightened state of autonomic arousal may experience either pleasant or deleterious emotions depending on the strength of perceived self-efficacy. The current study tested this hypothesis by proposing that college students' preexisting…
Overgaard, Morten; Lindeløv, Jonas; Svejstrup, Stinna; Døssing, Marianne; Hvid, Tanja; Kauffmann, Oliver; Mouridsen, Kim
2013-01-01
This paper reports an experiment intended to test a particular hypothesis derived from blindsight research, which we name the “source misidentification hypothesis.” According to this hypothesis, a subject may be correct about a stimulus without being correct about how she had access to this knowledge (whether the stimulus was visual, auditory, or something else). We test this hypothesis in healthy subjects, asking them to report whether a masked stimulus was presented auditorily or visually, what the stimulus was, and how clearly they experienced the stimulus using the Perceptual Awareness Scale (PAS). We suggest that knowledge about perceptual modality may be a necessary precondition in order to issue correct reports of which stimulus was presented. Furthermore, we find that PAS ratings correlate with correctness, and that subjects are at chance level when reporting no conscious experience of the stimulus. To demonstrate that particular levels of reporting accuracy are obtained, we employ a statistical strategy, which operationally tests the hypothesis of non-equality, such that the usual rejection of the null-hypothesis admits the conclusion of equivalence. PMID:23508677
Objects predict fixations better than early saliency.
Einhäuser, Wolfgang; Spain, Merrielle; Perona, Pietro
2008-11-20
Humans move their eyes while looking at scenes and pictures. Eye movements correlate with shifts in attention and are thought to be a consequence of optimal resource allocation for high-level tasks such as visual recognition. Models of attention, such as "saliency maps," are often built on the assumption that "early" features (color, contrast, orientation, motion, and so forth) drive attention directly. We explore an alternative hypothesis: Observers attend to "interesting" objects. To test this hypothesis, we measure the eye position of human observers while they inspect photographs of common natural scenes. Our observers perform different tasks: artistic evaluation, analysis of content, and search. Immediately after each presentation, our observers are asked to name objects they saw. Weighted with recall frequency, these objects predict fixations in individual images better than early saliency, irrespective of task. Also, saliency combined with object positions predicts which objects are frequently named. This suggests that early saliency has only an indirect effect on attention, acting through recognized objects. Consequently, rather than treating attention as a mere preprocessing step for object recognition, models of both need to be integrated.
Optimum Heart Rate to Minimize Pulsatile External Cardiac Power
NASA Astrophysics Data System (ADS)
Pahlevan, Niema; Gharib, Morteza
2011-11-01
The workload on the left ventricle is composed of steady and pulsatile components. Clinical investigations have confirmed that an abnormal pulsatile load plays an important role in the pathogenesis of left ventricular hypertrophy (LVH) and progression of LVH to congestive heart failure (CHF). The pulsatile load is the result of the complex dynamics of wave propagation and reflection in the compliant arterial vasculature. We hypothesize that aortic waves can be optimized to reduce the left ventricular (LV) pulsatile load. We used an in-vitro experimental approach to investigate our hypothesis. A unique hydraulic model was used for in-vitro experiments. This model has physical and dynamical properties similar to the heart-aorta system. Different compliant models of the artificial aorta were used to test the hypothesis under various aortic rigidities. Our results indicate that: i) there is an optimum heart rate that minimizes LV pulsatile power (this is in agreement with our previous computational study); ii) introducing an extra reflection site at the specific location along the aorta creates constructive wave conditions that reduce the LV pulsatile power.
Does Contextual Cueing Guide the Deployment of Attention?
Kunar, Melina A.; Flusberg, Stephen; Horowitz, Todd S.; Wolfe, Jeremy M.
2008-01-01
Contextual cueing experiments show that when displays are repeated, reaction times (RTs) to find a target decrease over time even when observers are not aware of the repetition. It has been thought that the context of the display guides attention to the target. We tested this hypothesis by comparing the effects of guidance in a standard search task to the effects of contextual cueing. Firstly, in standard search, an improvement in guidance causes search slopes (derived from RT × Set Size functions) to decrease. In contrast, we found that search slopes in contextual cueing did not become more efficient over time (Experiment 1). Secondly, when guidance is optimal (e.g. in easy feature search) we still found a small, but reliable contextual cueing effect (Experiments 2a and 2b), suggesting that other factors, such as response selection, contribute to the effect. Experiment 3 supported this hypothesis by showing that the contextual cueing effect disappeared when we added interference to the response selection process. Overall, our data suggest that the relationship between guidance and contextual cueing is weak and that response selection can account for part of the effect. PMID:17683230
Empirical evidence for resource-rational anchoring and adjustment.
Lieder, Falk; Griffiths, Thomas L; M Huys, Quentin J; Goodman, Noah D
2018-04-01
People's estimates of numerical quantities are systematically biased towards their initial guess. This anchoring bias is usually interpreted as a sign of human irrationality, but it has recently been suggested that the anchoring bias instead results from people's rational use of their finite time and limited cognitive resources. If this were true, then adjustment should decrease with the relative cost of time. To test this hypothesis, we designed a new numerical estimation paradigm that controls people's knowledge and varies the cost of time and error independently while allowing people to invest as much or as little time and effort into refining their estimate as they wish. Two experiments confirmed the prediction that adjustment decreases with time cost but increases with error cost regardless of whether the anchor was self-generated or provided. These results support the hypothesis that people rationally adapt their number of adjustments to achieve a near-optimal speed-accuracy tradeoff. This suggests that the anchoring bias might be a signature of the rational use of finite time and limited cognitive resources rather than a sign of human irrationality.
ERIC Educational Resources Information Center
SAW, J.G.
This paper deals with some tests of hypothesis frequently encountered in the analysis of multivariate data. The type of hypothesis considered is that which the statistician can answer in the negative or affirmative. The Doolittle method makes it possible to evaluate the determinant of a matrix of high order, to solve a matrix equation, or to…
Kruschke, John K; Liddell, Torrin M
2018-02-01
In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.
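As a minimal illustration of estimation with quantified uncertainty in both frameworks (data hypothetical): a Beta-Binomial credible interval alongside a Wald confidence interval for a proportion.

    import numpy as np
    from scipy import stats

    successes, trials = 7, 50                    # hypothetical data
    posterior = stats.beta(1 + successes, 1 + trials - successes)  # Beta(1,1) prior

    credible = posterior.ppf([0.025, 0.975])     # 95% Bayesian credible interval

    p_hat = successes / trials                   # frequentist counterpart
    se = np.sqrt(p_hat * (1 - p_hat) / trials)
    confidence = (p_hat - 1.96 * se, p_hat + 1.96 * se)  # Wald 95% CI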
NASA Astrophysics Data System (ADS)
Miloi, Mădălina Mihaela; Goryunov, Semyon; Kulin, German
2018-04-01
A wide range of problems in neutron optics is well described by a theory based on application of the effective potential model. It was assumed that the concept of the effective potential in neutron optics has a limited region of validity and ceases to be correct in the case of giant acceleration of matter. To test this hypothesis, a new ultracold neutron experiment was proposed for observing the interaction of neutrons with a potential structure oscillating in space. The report focuses on model calculations of the topography of the sample surface that oscillates in space. These calculations are necessary to find the optimal parameters and geometry of the planned experiment.
Comparative effectiveness research methodology using secondary data: A starting user's guide.
Sun, Maxine; Lipsitz, Stuart R
2018-04-01
The use of secondary data, such as claims or administrative data, in comparative effectiveness research has grown tremendously in recent years. We believe that the current review can help investigators relying on secondary data to (1) gain insight into both the methodologies and statistical methods, (2) better understand the necessity of rigorous planning before initiating a comparative effectiveness investigation, and (3) optimize the quality of their investigations. Specifically, we review concepts of adjusted analyses and confounders, methods of propensity score analyses and instrumental variable analyses, risk prediction models (logistic and time-to-event), decision-curve analysis, as well as the interpretation of the P value and hypothesis testing. Copyright © 2017 Elsevier Inc. All rights reserved.
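Of the methods the review covers, propensity score analysis is perhaps the easiest to sketch. The following is a minimal inverse-probability-of-treatment-weighting (IPTW) example under simplifying assumptions (binary treatment, complete data); the variable names are illustrative.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def iptw_effect(X, treated, outcome):
        """Inverse-probability-of-treatment-weighted mean difference.

        Fits a propensity model P(treated | X), then reweights each group
        by the inverse of its estimated treatment probability."""
        ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
        w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))
        t, c = treated == 1, treated == 0
        return (np.average(outcome[t], weights=w[t])
                - np.average(outcome[c], weights=w[c]))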
How quantitative measures unravel design principles in multi-stage phosphorylation cascades.
Frey, Simone; Millat, Thomas; Hohmann, Stefan; Wolkenhauer, Olaf
2008-09-07
We investigate design principles of linear multi-stage phosphorylation cascades by using quantitative measures for signaling time, signal duration and signal amplitude. We compare alternative pathway structures by varying the number of phosphorylations and the length of the cascade. We show that a model for a weakly activated pathway does not reflect the biological context well, unless it is restricted to certain parameter combinations. Focusing therefore on a more general model, we compare alternative structures with respect to a multivariate optimization criterion. We test the hypothesis that the structure of a linear multi-stage phosphorylation cascade is the result of an optimization process aiming for a fast response, defined by the minimum of the product of signaling time and signal duration. It is then shown that certain pathway structures minimize this criterion. Several popular models of MAPK cascades form the basis of our study. These models represent different levels of approximation, which we compare and discuss with respect to the quantitative measures.
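The quantitative measures named here are commonly formalized with the moment-based definitions of Heinrich et al. (2002), which this sketch assumes: with T_n the integral of t^n * x(t) dt, signaling time is tau = T1/T0, signal duration is theta = sqrt(T2/T0 - tau^2), and signal amplitude is S = T0/(2*theta).

    import numpy as np
    from scipy.integrate import trapezoid

    def signaling_measures(t, x):
        """Signaling time, duration, and amplitude of a time course x(t),
        via the moment integrals T_n = int t^n x(t) dt."""
        T0 = trapezoid(x, t)
        T1 = trapezoid(t * x, t)
        T2 = trapezoid(t ** 2 * x, t)
        tau = T1 / T0                        # signaling time
        theta = np.sqrt(T2 / T0 - tau ** 2)  # signal duration
        return tau, theta, T0 / (2 * theta)  # ..., signal amplitude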
Mendenhall, Jeffrey; Meiler, Jens
2016-02-01
Dropout is an Artificial Neural Network (ANN) training technique that has been shown to improve ANN performance across canonical machine learning (ML) datasets. Quantitative Structure Activity Relationship (QSAR) datasets used to relate chemical structure to biological activity in Ligand-Based Computer-Aided Drug Discovery pose unique challenges for ML techniques, such as heavily biased dataset composition, and relatively large number of descriptors relative to the number of actives. To test the hypothesis that dropout also improves QSAR ANNs, we conduct a benchmark on nine large QSAR datasets. Use of dropout improved both enrichment false positive rate and log-scaled area under the receiver-operating characteristic curve (logAUC) by 22-46% over conventional ANN implementations. Optimal dropout rates are found to be a function of the signal-to-noise ratio of the descriptor set, and relatively independent of the dataset. Dropout ANNs with 2D and 3D autocorrelation descriptors outperform conventional ANNs as well as optimized fingerprint similarity search methods.
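Dropout itself is a one-line addition to a network definition. A minimal PyTorch sketch follows; the layer sizes and dropout rate are illustrative assumptions, not the benchmarked configuration.

    import torch
    from torch import nn

    # hypothetical QSAR-style setup: 512 descriptors in, one activity score out
    model = nn.Sequential(
        nn.Linear(512, 128),
        nn.ReLU(),
        nn.Dropout(p=0.25),  # randomly zeroes hidden units during training
        nn.Linear(128, 1),
    )

    model.train()  # dropout active while fitting
    model.eval()   # dropout disabled (identity) at prediction time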
NASA Astrophysics Data System (ADS)
Ueno, Akira; Ikegami, Kiyoshi; Kondo, Yasuhiro
2004-05-01
A Cs-free negative hydrogen (H-) ion source driven by pulsed arc plasma with a LaB6 filament is being operated for the beam tests of the Japan Proton Accelerator Research Complex (J-PARC) linac. A peak H- current of 38 mA, which exceeds the requirement of the J-PARC first stage, is stably extracted from the ion source with a beam duty factor of 0.9% (360 μs×25 Hz) by principally optimizing the surface condition and shape of the plasma electrode. The sufficiently small emittance of the beam was confirmed by high transmission efficiency (around 90%) through the following 324 MHz 3 MeV J-PARC radio frequency quadrupole linac (M. Ikegami et al., Proc. 2003 Part. Accel. Conf. 2003, p. 1509). The process of the optimization, which confirms the validity of the hypothesis that H- ions are produced dominantly by surface reaction on a Mo plasma electrode in the ion source, is presented.
Young, Anna M.; Cordier, Breanne; Mundry, Roger; Wright, Timothy F.
2014-01-01
In many social species, group members share acoustically similar calls. Functional hypotheses have been proposed for call sharing, but previous studies have been limited by an inability to distinguish among these hypotheses. We examined the function of vocal sharing in female budgerigars with a two-part experimental design that allowed us to distinguish between two functional hypotheses. The social association hypothesis proposes that shared calls help animals mediate affiliative and aggressive interactions, while the password hypothesis proposes that shared calls allow animals to distinguish group identity and exclude nonmembers. We also tested the labeling hypothesis, a mechanistic explanation which proposes that shared calls are used to address specific individuals within the sender–receiver relationship. We tested the social association hypothesis by creating four-member flocks of unfamiliar female budgerigars (Melopsittacus undulatus) and then monitoring the birds' calls, social behaviors, and stress levels via fecal glucocorticoid metabolites. We tested the password hypothesis by moving immigrants into established social groups. To test the labeling hypothesis, we conducted additional recording sessions in which individuals were paired with different group members. The social association hypothesis was supported by the development of multiple shared call types in each cage and a correlation between the number of shared call types and the number of aggressive interactions between pairs of birds. We also found support for calls serving as a labeling mechanism using discriminant function analysis with a permutation procedure. Our results did not support the password hypothesis, as there was no difference in stress or directed behaviors between immigrant and control birds. PMID:24860236
A suggestion for computing objective function in model calibration
Wu, Yiping; Liu, Shuguang
2014-01-01
A parameter-optimization process (model calibration) is usually required for numerical model applications, and it involves the use of an objective function to determine the model cost (model-data errors). The sum of squared errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, the 'square error' calculation is more sensitive to extreme or high values. We therefore proposed that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies, a hydrological model calibration and a biogeochemical model calibration, to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviation (SSRD), and sum of absolute relative deviation (SARD). Mathematical evaluation of model performance demonstrates that the 'absolute error' measures (SAR and SARD) are superior to the 'square error' measures (SSR and SSRD) as objective functions for model calibration, and SAR behaved best (with the least error and highest efficiency). This study suggests that SSR may be overused in real applications, and that SAR is a reasonable choice in common optimization implementations that do not emphasize either high or low values (e.g., modeling for supporting resources management).
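The core argument is easy to reproduce: squaring residuals makes the objective dominated by the largest deviation. A minimal sketch with made-up observations (not the paper's hydrological or biogeochemical data):

```python
import numpy as np

def ssr(obs, sim):
    """Sum of squared errors: penalizes large residuals quadratically."""
    return np.sum((obs - sim) ** 2)

def sar(obs, sim):
    """Sum of absolute errors: weights all residuals linearly."""
    return np.sum(np.abs(obs - sim))

obs  = np.array([1.0, 1.0, 1.0, 8.0])   # series with one extreme (peak) value
simA = np.array([1.0, 1.0, 1.0, 6.0])   # exact at low values, misses the peak by 2
simB = np.array([1.7, 0.3, 1.7, 7.3])   # off by 0.7 everywhere

print(ssr(obs, simA), ssr(obs, simB))   # 4.00 vs 1.96 -> SSR prefers simB
print(sar(obs, simA), sar(obs, simB))   # 2.00 vs 2.80 -> SAR prefers simA
```

A calibration driven by SSR would pull parameters toward matching the single extreme value, which is exactly the sensitivity the authors caution against.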
Optimal predictions in everyday cognition: the wisdom of individuals or crowds?
Mozer, Michael C; Pashler, Harold; Homaei, Hadjar
2008-10-01
Griffiths and Tenenbaum (2006) asked individuals to make predictions about the duration or extent of everyday events (e.g., cake baking times), and reported that predictions were optimal, employing Bayesian inference based on veridical prior distributions. Although the predictions conformed strikingly to statistics of the world, they reflect averages over many individuals. On the conjecture that the accuracy of the group response is chiefly a consequence of aggregating across individuals, we constructed simple, heuristic approximations to the Bayesian model premised on the hypothesis that individuals have access merely to a sample of k instances drawn from the relevant distribution. The accuracy of the group response reported by Griffiths and Tenenbaum could be accounted for by supposing that individuals each utilize only two instances. Moreover, the variability of the group data is more consistent with this small-sample hypothesis than with the hypothesis that people utilize veridical or nearly veridical representations of the underlying prior distributions. Our analyses lead to a qualitatively different view of how individuals reason from past experience than the view espoused by Griffiths and Tenenbaum. 2008 Cognitive Science Society, Inc.
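The small-sample account lends itself to a compact simulation. The sketch below is one plausible instantiation (our assumption, not Mozer et al.'s exact heuristic): each simulated individual recalls only k = 2 instances from the true distribution, and the group answer is the median over individuals, compared against a full-knowledge Bayesian rule.

```python
import numpy as np

rng = np.random.default_rng(1)
prior = rng.gamma(shape=4.0, scale=20.0, size=100_000)  # stand-in 'world' distribution

def individual_prediction(t, k=2):
    """Recall k past durations; predict the smallest recalled duration that
    exceeds the elapsed time t (fall back to t if neither is that large)."""
    sample = rng.choice(prior, size=k)
    bigger = sample[sample > t]
    return bigger.min() if bigger.size else t

def group_prediction(t, n_people=500, k=2):
    # 'Wisdom of crowds': aggregate many two-instance individuals.
    return np.median([individual_prediction(t, k) for _ in range(n_people)])

def bayes_prediction(t):
    # Full-knowledge benchmark: posterior median of total duration given it exceeds t.
    return np.median(prior[prior > t])

for t in (30, 60, 90):
    print(t, round(group_prediction(t), 1), round(bayes_prediction(t), 1))
```

Even with k = 2, the aggregated predictions typically track the Bayesian benchmark closely, illustrating the paper's central point that group-level optimality need not imply individual-level Bayesian computation.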
The phase shift hypothesis for the circadian component of winter depression
Lewy, Alfred J.; Rough, Jennifer N.; Songer, Jeannine B.; Mishra, Neelam; Yuhas, Krista; Emens, Jonathan S.
2007-01-01
The finding that bright light can suppress melatonin production led to the study of two situations, indeed models, of light deprivation: totally blind people and winter depressives. The leading hypothesis for winter depression (seasonal affective disorder, or SAD) is the phase shift hypothesis (PSH). The PSH was recently established in a study in which SAD patients were given low-dose melatonin in the afternoon/evening to cause phase advances, in the morning to cause phase delays, or placebo. The prototypical phase-delayed patients, as well as the smaller subgroup of phase-advanced patients, responded optimally to melatonin given at the correct time. Symptom severity improved as circadian misalignment was corrected. Circadian misalignment is best measured as the time interval between the dim light melatonin onset (DLMO) and mid-sleep. Using the operational definition of the plasma DLMO as the interpolated time when melatonin levels continuously rise above the threshold of 10 pg/mL, the average interval between DLMO and mid-sleep in healthy controls is 6 hours, which is associated with optimal mood in SAD patients. PMID:17969866
Decision Environment and Heuristics in Individual and Collective Hypothesis Generation
2017-11-01
number of images viewed: When the high-value cue appeared late in the trial, time pressure yielded hypothesis changes sooner than did no time pressure... However, when the high-value cue appeared early in the trial, time pressure had no effect on images viewed. Overall, participants' timing was... scenarios. Cue order influenced the quality of timing. Participants were more likely to change their hypotheses at an optimal time when the high
In Search of the Optimal Path: How Learners at Task Use an Online Dictionary
ERIC Educational Resources Information Center
Hamel, Marie-Josee
2012-01-01
We have analyzed circa 180 navigation paths followed by six learners while they performed three language encoding tasks at the computer using an online dictionary prototype. Our hypothesis was that learners who follow an "optimal path" while navigating within the dictionary, using its search and look-up functions, would have a high chance of…
An Extension of RSS-based Model Comparison Tests for Weighted Least Squares
2012-08-22
use the model comparison test statistic to analyze the null hypothesis. Under the null hypothesis, the weighted least squares cost functional is J_WLS(q̂_WLS,H) = 10.3040 × 10^6. Under the alternative hypothesis, the weighted least squares cost functional is J_WLS(q̂_WLS) = 8.8394 × 10^6. Thus the model
Hypothesis testing of scientific Monte Carlo calculations.
Wallerberger, Markus; Gull, Emanuel
2017-11-01
The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
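As a concrete illustration of the idea (a minimal sketch, not the authors' test suite), a stochastic routine with a known expectation can be checked with an ordinary z-test on the Monte Carlo mean:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Monte Carlo estimate of E[X^2] for X ~ N(0, 1); the exact answer is 1.
n = 100_000
draws = rng.normal(size=n) ** 2
estimate = draws.mean()
stderr = draws.std(ddof=1) / np.sqrt(n)

# Two-sided z-test of H0: the simulation reproduces the reference value.
z = (estimate - 1.0) / stderr
p = 2 * stats.norm.sf(abs(z))
assert p > 0.01, f"simulation deviates from reference (z={z:.2f}, p={p:.3g})"
```

Run as part of a test suite, such an assertion flags bugs that shift the estimator's mean, while the chosen significance level controls how often correct code fails the check by chance.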
Sex ratios in the two Germanies: a test of the economic stress hypothesis.
Catalano, Ralph A
2003-09-01
Literature describing temporal variation in the secondary sex ratio among humans reports an association between population stressors and declines in the odds of male birth. Explanations of this phenomenon draw on reports that stressed females spontaneously abort male fetuses more often than female fetuses, and that stressed males exhibit reduced sperm motility. This work has led to the argument that population stress induced by a declining economy reduces the human sex ratio. No direct test of this hypothesis appears in the literature. Here, a test is offered based on a comparison of the sex ratio in East and West Germany for the years 1946 to 1999. The theory suggests that the East German sex ratio should be lower in 1991, when East Germany's economy collapsed, than expected from its own history and from the sex ratio in West Germany. The hypothesis is tested using time-series modelling methods. The data support the hypothesis. The sex ratio in East Germany was at its lowest in 1991. This first direct test supports the hypothesis that economic decline reduces the human sex ratio.
Understanding suicide terrorism: premature dismissal of the religious-belief hypothesis.
Liddle, James R; Machluf, Karin; Shackelford, Todd K
2010-07-06
We comment on work by Ginges, Hansen, and Norenzayan (2009), in which they compare two hypotheses for predicting individual support for suicide terrorism: the religious-belief hypothesis and the coalitional-commitment hypothesis. Although we appreciate the evidence provided in support of the coalitional-commitment hypothesis, we argue that their method of testing the religious-belief hypothesis is conceptually flawed, thus calling into question their conclusion that the religious-belief hypothesis has been disconfirmed. In addition to critiquing the methodology implemented by Ginges et al., we provide suggestions on how the religious-belief hypothesis may be properly tested. It is possible that the premature and unwarranted conclusions reached by Ginges et al. may deter researchers from examining the effect of specific religious beliefs on support for terrorism, and we hope that our comments can mitigate this possibility.
Feldman, Anatol G; Latash, Mark L
2005-02-01
Criticisms of the equilibrium point (EP) hypothesis have recently appeared that are based on misunderstandings of some of its central notions. Starting from such interpretations of the hypothesis, incorrect predictions are made and tested. When the incorrect predictions prove false, the hypothesis is claimed to be falsified. In particular, the hypothesis has been rejected based on the wrong assumptions that it conflicts with empirically defined joint stiffness values or that it is incompatible with violations of equifinality under certain velocity-dependent perturbations. Typically, such attempts use notions describing the control of movements of artificial systems in place of physiologically relevant ones. While appreciating constructive criticisms of the EP hypothesis, we feel that incorrect interpretations have to be clarified by reiterating what the EP hypothesis does and does not predict. We conclude that the recent claims of falsifying the EP hypothesis and the calls for its replacement by EMG-force control hypothesis are unsubstantiated. The EP hypothesis goes far beyond the EMG-force control view. In particular, the former offers a resolution for the famous posture-movement paradox while the latter fails to resolve it.
How much is too much? The effects of information quantity on crowdfunding performance.
Moy, Naomi; Chan, Ho Fai; Torgler, Benno
2018-01-01
We explore the effects of the quantity of information on the tendency to contribute to crowdfunding campaigns. Using the crowdfunding platform Kickstarter, we analyze the campaign descriptions and the performance of over 70,000 projects. We look empirically at the effect of information quantity (word count) on funding success (as measured by amount raised and number of backers). Within this empirical approach, we test whether an excessive amount of information will affect funding success. To do so, we test for a non-linear (quadratic) effect of our independent variable (word count) using regression analysis. Consistent with the hypothesis that excess information will negatively affect funds raised and number of contributors, we observe a consistent inverted U-shaped relationship between campaign text length and overall success, which suggests that an optimal number of words exists within crowdfunding texts and that going over this point will reduce a project's chance of fundraising success.
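The quadratic specification described here is straightforward to set up. A minimal sketch on synthetic data (the variable names and effect sizes are illustrative, not estimates from the Kickstarter sample):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Synthetic stand-in: funding peaks at an interior word count (inverted U).
words = rng.uniform(100, 2000, size=1000)
funds = 5 + 0.01 * words - 5e-6 * words**2 + rng.normal(scale=1.0, size=1000)

X = sm.add_constant(np.column_stack([words, words**2]))
fit = sm.OLS(funds, X).fit()
print(fit.pvalues[2])   # significance of the quadratic (non-linearity) term

# Turning point: the word count at which predicted funding is maximized.
b1, b2 = fit.params[1], fit.params[2]
print(-b1 / (2 * b2))   # ~1000 words for these synthetic data
```

A significantly negative quadratic coefficient together with an interior turning point is the signature of "too much information" that the study reports.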
Accounting for informatively missing data in logistic regression by means of reassessment sampling.
Lin, Ji; Lyles, Robert H
2015-05-20
We explore the 'reassessment' design in a logistic regression setting, where a second wave of sampling is applied to recover a portion of the missing data on a binary exposure and/or outcome variable. We construct a joint likelihood function based on the original model of interest and a model for the missing data mechanism, with emphasis on non-ignorable missingness. The estimation is carried out by numerical maximization of the joint likelihood function with close approximation of the accompanying Hessian matrix, using sharable programs that take advantage of general optimization routines in standard software. We show how likelihood ratio tests can be used for model selection and how they facilitate direct hypothesis testing for whether missingness is at random. Examples and simulations are presented to demonstrate the performance of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.
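The estimation machinery described (writing down a likelihood and handing it to a general-purpose optimizer) can be sketched in a few lines. This complete-data version is a simplification we supply for illustration; the paper's joint likelihood adds terms for the missing-data mechanism and the reassessment wave:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Simulated complete data for a logistic model of a binary outcome.
n = 500
x = rng.normal(size=n)
beta_true = np.array([-0.5, 1.2])
p = 1 / (1 + np.exp(-(beta_true[0] + beta_true[1] * x)))
y = rng.binomial(1, p)

def neg_loglik(beta):
    eta = beta[0] + beta[1] * x
    # log-likelihood: sum_i [ y_i * eta_i - log(1 + exp(eta_i)) ], written stably
    return -(y @ eta - np.logaddexp(0.0, eta).sum())

res = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print(res.x)          # maximum likelihood estimate of (intercept, slope)
print(res.hess_inv)   # approximate inverse Hessian, the basis for standard errors
```

Likelihood ratio tests, such as the paper's test for whether missingness is at random, compare twice the difference in maximized log-likelihoods between nested versions of this kind of objective.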
A cognitive prosthesis for complex decision-making.
Tremblay, Sébastien; Gagnon, Jean-François; Lafond, Daniel; Hodgetts, Helen M; Doiron, Maxime; Jeuniaux, Patrick P J M H
2017-01-01
While simple heuristics can be ecologically rational and effective in naturalistic decision-making contexts, complex situations require analytical decision-making strategies, hypothesis testing, and learning. Sub-optimal decision strategies - using simplified as opposed to analytic decision rules - have been reported in domains such as healthcare, military operational planning, and government policy making. We investigate the potential of a computational toolkit called "IMAGE" to improve decision-making by developing structural knowledge and increasing understanding of complex situations. IMAGE is tested within the context of a complex military convoy management task through (a) interactive simulations and (b) visualization and knowledge representation capabilities. We assess the usefulness of two versions of IMAGE (desktop and immersive) compared to a baseline. Results suggest that the prosthesis helped analysts make better decisions but failed to increase their structural knowledge about the situation once the cognitive prosthesis was removed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Cerebral morphology and functional sparing after prenatal frontal cortex lesions in rats.
Kolb, B; Cioe, J; Muirhead, D
1998-03-01
Rats were given suction lesions of the presumptive frontal cortex on embryonic day 18 (E18) and subsequently tested, as adults, on tests of spatial navigation (Morris water task, radial arm maze), motor tasks (Whishaw reaching task, beam walking), and locomotor activity. Frontal cortical lesions at E18 affected cerebral morphogenesis, producing unusual morphological structures including abnormal patches of neurons in the cortex and white matter as well as neuronal bridges between the hemispheres. A small sample of E18 operates also had hydrocephaly. The animals with E18 lesions without hydrocephalus were behaviorally indistinguishable from littermate controls. The results demonstrate that animals with focal lesions of the presumptive frontal cortex have gross abnormalities in cerebral morphology but the lesions leave the functions normally subserved by the frontal cortex in adult rats unaffected. The results are discussed in the context of a hypothesis regarding the optimal times for functional recovery from cortical injury.
Action perception as hypothesis testing.
Donnarumma, Francesco; Costantini, Marcello; Ambrosini, Ettore; Friston, Karl; Pezzulo, Giovanni
2017-04-01
We present a novel computational model that describes action perception as an active inferential process that combines motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions - and underlying hypotheses. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction; but uncertainty induces a more reactive gaze strategy, via tracking the observed movements. Our model offers a novel perspective on action observation that highlights its active nature based on prediction dynamics and hypothesis testing. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
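The information-seeking logic of the model can be illustrated with a stripped-down stand-in for its generative model (our toy numbers, two candidate actions and three gaze locations, not the authors' arm-and-hand model): fixate wherever the expected reduction in uncertainty about the hypotheses is largest.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# likelihood[h, loc, obs] = P(observation | hypothesis h, fixating location loc)
likelihood = np.array([
    [[0.9, 0.1], [0.5, 0.5], [0.6, 0.4]],   # hypothesis 1, e.g. "grasp to drink"
    [[0.1, 0.9], [0.5, 0.5], [0.4, 0.6]],   # hypothesis 2, e.g. "grasp to move"
])
posterior = np.array([0.5, 0.5])

def best_saccade(posterior):
    """Choose the fixation with maximal expected reduction in hypothesis entropy."""
    gains = []
    for loc in range(likelihood.shape[1]):
        p_obs = posterior @ likelihood[:, loc, :]            # predictive distribution
        post_h = likelihood[:, loc, :] * posterior[:, None]  # joint, then normalize
        post_h /= post_h.sum(axis=0, keepdims=True)
        exp_H = sum(p_obs[o] * entropy(post_h[:, o]) for o in range(p_obs.size))
        gains.append(entropy(posterior) - exp_H)
    return int(np.argmax(gains))

print(best_saccade(posterior))   # location 0: the only one that discriminates
```

When the posterior is already sharp, every expected gain is near zero and gaze can switch to tracking, mirroring the proactive-versus-reactive pattern reported above.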
Confidence intervals for single-case effect size measures based on randomization test inversion.
Michiels, Bart; Heyvaert, Mieke; Meulders, Ann; Onghena, Patrick
2017-02-01
In the current paper, we present a method to construct nonparametric confidence intervals (CIs) for single-case effect size measures in the context of various single-case designs. We use the relationship between a two-sided statistical hypothesis test at significance level α and a 100(1 − α)% two-sided CI to construct CIs for any effect size measure θ that contain all point null hypothesis θ values that cannot be rejected by the hypothesis test at significance level α. This method of hypothesis test inversion (HTI) can be employed using a randomization test as the statistical hypothesis test in order to construct a nonparametric CI for θ. We will refer to this procedure as randomization test inversion (RTI). We illustrate RTI in a situation in which θ is the unstandardized and the standardized difference in means between two treatments in a completely randomized single-case design. Additionally, we demonstrate how RTI can be extended to other types of single-case designs. Finally, we discuss a few challenges for RTI as well as possibilities when using the method with other effect size measures, such as rank-based nonoverlap indices. Supplementary to this paper, we provide easy-to-use R code, which allows the user to construct nonparametric CIs according to the proposed method.
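The authors supply R code; the Python sketch below is our own minimal rendering of the same inversion idea for a shift in means, assuming a small completely randomized two-condition design so that all assignments can be enumerated:

```python
import numpy as np
from itertools import combinations

def randomization_pvalue(a, b):
    """Two-sided randomization test for a difference in means,
    enumerating all reassignments of the pooled scores."""
    pooled = np.concatenate([a, b])
    observed = abs(a.mean() - b.mean())
    n, k = len(pooled), len(a)
    idx, hits, total = set(range(n)), 0, 0
    for comb in combinations(range(n), k):
        ga = pooled[list(comb)]
        gb = pooled[list(idx - set(comb))]
        total += 1
        if abs(ga.mean() - gb.mean()) >= observed - 1e-12:
            hits += 1
    return hits / total

def rti_confidence_interval(a, b, alpha=0.05):
    """Invert the test: keep every shift theta that is not rejected
    after subtracting theta from condition b."""
    center = b.mean() - a.mean()
    grid = np.linspace(center - 10, center + 10, 201)
    kept = [t for t in grid if randomization_pvalue(a, b - t) > alpha]
    return min(kept), max(kept)

a = np.array([3.0, 5.0, 4.0, 6.0])    # phase-A scores
b = np.array([8.0, 9.0, 7.0, 10.0])   # phase-B scores
print(rti_confidence_interval(a, b))
```

The returned interval contains exactly those effect sizes that the level-α randomization test cannot reject, which is the defining property of test inversion.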
ERIC Educational Resources Information Center
Besken, Miri
2016-01-01
The perceptual fluency hypothesis claims that items that are easy to perceive at encoding induce an illusion that they will be easier to remember, despite the finding that perception does not generally affect recall. The current set of studies tested the predictions of the perceptual fluency hypothesis with a picture generation manipulation.…
Adolescents' Body Image Trajectories: A Further Test of the Self-Equilibrium Hypothesis
ERIC Educational Resources Information Center
Morin, Alexandre J. S.; Maïano, Christophe; Scalas, L. Francesca; Janosz, Michel; Litalien, David
2017-01-01
The self-equilibrium hypothesis underlines the importance of having a strong core self, which is defined as a high and developmentally stable self-concept. This study tested this hypothesis in relation to body image (BI) trajectories in a sample of 1,006 adolescents (M_age = 12.6, including 541 males and 465 females) across a 4-year…
ERIC Educational Resources Information Center
Trafimow, David
2017-01-01
There has been much controversy over the null hypothesis significance testing procedure, with much of the criticism centered on the problem of inverse inference. Specifically, p gives the probability of the finding (or one more extreme) given the null hypothesis, whereas the null hypothesis significance testing procedure involves drawing a…
ERIC Educational Resources Information Center
Lee, Jungmin
2016-01-01
This study tested the Bennett hypothesis by examining whether four-year colleges changed listed tuition and fees, the amount of institutional grants per student, and room and board charges after their states implemented statewide merit-based aid programs. According to the Bennett hypothesis, increases in government financial aid make it easier for…
Silber, Sherman; Geisler, Jonathan H.; Bolortsetseg, Minjin
2011-01-01
It has been suggested that climate change at the Cretaceous–Palaeogene (K–Pg) boundary, initiated by a bolide impact or volcanic eruptions, caused species with temperature-dependent sex determination (TSD), including dinosaurs, to go extinct because of a sex ratio skewed towards all males. To test this hypothesis, the sex-determining mechanisms (SDMs) of Cretaceous tetrapods of the Hell Creek Formation (Montana, USA) were inferred using parsimony optimizations of SDMs on a tree including Hell Creek species and their extant relatives. Although the SDMs of non-avian dinosaurs could not be inferred, we were able to determine the SDMs of 62 species; 46 had genotypic sex determination (GSD) and 16 had TSD. The TSD hypothesis for extinctions performed poorly, predicting between 32 and 34 per cent of survivals and extinctions. Most surprisingly, of the 16 species with TSD, 14 survived into the Early Palaeocene. In contrast, 61 per cent of species with GSD went extinct. Possible explanations include minimal climate change at the K–Pg, or, if climate change did occur, TSD species that survived had egg-laying behaviour that prevented the skewing of sex ratios, or had a sex ratio skewed towards female rather than male preponderance. Application of molecular clocks may allow the SDMs of non-avian dinosaurs to be inferred, which would be an important test of the pattern discovered here. PMID:20980293
Ni, W; Song, X; Cui, J
2014-03-01
The purpose of this study was to test the mutant selection window (MSW) hypothesis with Escherichia coli exposed to levofloxacin in a rabbit model and to compare in vivo and in vitro exposure thresholds that restrict the selection of fluoroquinolone-resistant mutants. Local infection with E. coli was established in rabbits, and the infected animals were treated orally with various doses of levofloxacin once a day for five consecutive days. Changes in levofloxacin concentration and levofloxacin susceptibility were monitored at the site of infection. The MICs of E. coli increased when levofloxacin concentrations at the site of infection fluctuated between the lower and upper boundaries of the MSW, defined in vitro as the minimum inhibitory concentration (MIC99) and the mutant prevention concentration (MPC), respectively. The pharmacodynamic threshold at which resistant mutants are not selected in vivo was estimated as AUC24/MPC > 20 h or AUC24/MIC > 60 h, where AUC24 is the area under the drug concentration-time curve over a 24-h interval. Our findings demonstrate that the MSW exists in vivo. The AUC24/MPC ratio that prevents resistant mutants from being selected, as estimated in vivo, is consistent with that observed in vitro, indicating it might be a reliable index for guiding the optimization of antimicrobial treatment regimens to suppress the selection of antimicrobial resistance.
Human female orgasm as evolved signal: a test of two hypotheses.
Ellsworth, Ryan M; Bailey, Drew H
2013-11-01
We present the results of a study designed to empirically test predictions derived from two hypotheses regarding human female orgasm behavior as an evolved communicative trait or signal. One hypothesis tested was the female fidelity hypothesis, which posits that human female orgasm signals a woman's sexual satisfaction and therefore her likelihood of future fidelity to a partner. The other was the sire choice hypothesis, which posits that women's orgasm behavior signals increased chances of fertilization. To test the two hypotheses of human female orgasm, we administered a questionnaire to 138 females and 121 males who reported that they were currently in a romantic relationship. Key predictions of the female fidelity hypothesis were not supported. In particular, orgasm was not associated with female sexual fidelity, nor was orgasm associated with male perceptions of partner sexual fidelity. However, faked orgasm was associated with female sexual infidelity and lower male relationship satisfaction. Overall, results were in greater support of the sire choice signaling hypothesis than the female fidelity hypothesis. Results also suggest that male satisfaction with, investment in, and sexual fidelity to a mate are benefits that favored the selection of orgasmic signaling in ancestral females.
Luo, Liqun; Zhao, Wei; Weng, Tangmei
2016-01-01
The Trivers-Willard hypothesis predicts that high-status parents will bias their investment to sons, whereas low-status parents will bias their investment to daughters. Among humans, tests of this hypothesis have yielded mixed results. This study tests the hypothesis using data collected among contemporary peasants in Central South China. We use current family status (rated by our informants) and father's former class identity (assigned by the Chinese Communist Party in the early 1950s) as measures of parental status, and proportion of sons in offspring and offspring's years of education as measures of parental investment. Results show that (i) those families with a higher former class identity such as landlord and rich peasant tend to have a higher socioeconomic status currently, (ii) high-status parents are more likely to have sons than daughters among their biological offspring, and (iii) in higher-status families, the years of education obtained by sons exceed that obtained by daughters to a larger extent than in lower-status families. Thus, the first assumption and the two predictions of the hypothesis are supported by this study. This article contributes a contemporary Chinese case to the testing of the Trivers-Willard hypothesis.
Hypothesis testing of a change point during cognitive decline among Alzheimer's disease patients.
Ji, Ming; Xiong, Chengjie; Grundman, Michael
2003-10-01
In this paper, we present a statistical hypothesis test for detecting a change point over the course of cognitive decline among Alzheimer's disease patients. The model under the null hypothesis assumes a constant rate of cognitive decline over time, and the model under the alternative hypothesis is a general bilinear model with an unknown change point. When the change point is unknown, however, the null distribution of the test statistic is not analytically tractable and has to be simulated by parametric bootstrap. When the alternative hypothesis that a change point exists is accepted, we propose an estimate of its location based on Akaike's Information Criterion. We applied our method to a data set from the Neuropsychological Database Initiative by implementing our hypothesis testing method to analyze Mini-Mental State Exam (MMSE) scores based on a random-slope and random-intercept model with a bilinear fixed effect. Our results show that, despite a large amount of missing data, accelerated decline did occur in MMSE scores among AD patients. Our finding supports the clinical belief that a change point exists during cognitive decline among AD patients and suggests the use of change-point models for the longitudinal modeling of cognitive decline in AD research.
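A fixed-effects simplification of this testing scheme (our illustration; the paper's model adds random slopes and intercepts) fits a straight line and a profiled bilinear model, then calibrates the fit improvement by simulating from the fitted null:

```python
import numpy as np

rng = np.random.default_rng(5)

def rss_linear(t, y):
    X = np.column_stack([np.ones_like(t), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r), X @ beta

def rss_bilinear(t, y, taus):
    """Continuous piecewise-linear fit, profiling over candidate change points."""
    best = np.inf
    for tau in taus:
        X = np.column_stack([np.ones_like(t), t, np.clip(t - tau, 0, None)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        best = min(best, float(r @ r))
    return best

def test_stat(t, y, taus):
    return rss_linear(t, y)[0] - rss_bilinear(t, y, taus)

t = np.linspace(0, 10, 60)                      # years of follow-up
y = 28 - 0.5 * t - 1.5 * np.clip(t - 6, 0, None) + rng.normal(0, 1, t.size)
taus = np.linspace(2, 8, 25)
T_obs = test_stat(t, y, taus)

# Parametric bootstrap of the null: constant decline, no change point.
rss0, fitted0 = rss_linear(t, y)
sigma0 = np.sqrt(rss0 / (t.size - 2))
T_null = [test_stat(t, fitted0 + rng.normal(0, sigma0, t.size), taus)
          for _ in range(500)]
print("p =", np.mean(np.asarray(T_null) >= T_obs))
```

Because the change point parameter vanishes under the null, the usual chi-squared calibration fails, which is exactly why the bootstrap is needed.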
Merino, M P; Andrews, B A; Parada, P; Asenjo, J A
2016-11-01
Biomining is defined as biotechnology for metal recovery from minerals, and is promoted by the concerted effort of a consortium of acidophile prokaryotes comprising members of the Bacteria and Archaea domains. Ferroplasma acidiphilum and Leptospirillum ferriphilum are the dominant species in extremely acid environments and have great use in bioleaching applications; however, the role of each species in these consortia is still a subject of research. The hypothesis of this work is that F. acidiphilum uses the organic matter secreted by L. ferriphilum for growth, maintaining low levels of organic compounds in the culture medium and preventing their toxic effects on L. ferriphilum. To test this hypothesis, a characterization of Ferroplasma acidiphilum strain BRL-115 was made with the objective of determining its optimal growth conditions. Subsequently, under the optimal conditions, L. ferriphilum and F. acidiphilum were tested growing in each other's supernatant, in order to determine whether there was exchange of metabolites between the species. With these results, a mixed culture in cyclic batch operation was performed to obtain the main specific growth rates, which were used to evaluate a mixed metabolic model previously developed by our group. It was observed that F. acidiphilum strain BRL-115 is a chemomixotrophic organism, and its growth is maximized with yeast extract at a concentration of 0.04% wt/vol. From the experiments of L. ferriphilum growing on F. acidiphilum supernatant and vice versa, it was observed that in both cases cell growth is favorably affected by the presence of the filtered medium of the other microorganism, demonstrating a synergistic interaction between these species. Specific growth rates obtained in cyclic batch operation of the mixed culture were used as input data for a Flux Balance Analysis of the mixed metabolic model, yielding reasonable behavior of the metabolic fluxes and of the system as a whole, thereby consolidating the previously developed model. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1390-1396, 2016. © 2016 American Institute of Chemical Engineers.
Bouchard, Mathieu; Garet, Jérôme
The decreasing abundance of mature forests and their fragmentation have been identified as major threats to the preservation of biodiversity in managed landscapes. In this study, we developed a multi-level framework to coordinate forest harvests so as to optimize the retention or restoration of large mature forest tracts in managed forests. We used mixed-integer programming for this optimization and integrated realistic management assumptions regarding stand yield and operational harvest constraints. The model was parameterized for eastern Canadian boreal forests, where clear-cutting is the main silvicultural system, and used to examine two hypotheses. First, we tested whether mature forest tract targets had more negative impacts on wood supplies when implemented in landscapes that are very different from targeted conditions. Second, we tested the hypothesis that using more partial cuts can attenuate the negative impacts of mature forest targets on wood supplies. The results indicate that without the integration of an explicit mature forest tract target, the optimization leads to relatively high fragmentation levels. Forcing the retention or restoration of large mature forest tracts on 40% of the landscapes had negative impacts on wood supplies in all types of landscapes, but these impacts were less important in landscapes that were initially fragmented. This counter-intuitive result is explained by the presence in the models of an operational constraint that forbids diffuse patterns of harvest, which are more costly. Once this constraint is applied, the residual impact of the mature forest tract target is low. The results also indicate that partial cuts are of very limited use for attenuating the impacts of mature forest tract targets on wood supplies in highly fragmented landscapes. Partial cuts are somewhat more useful in landscapes that are less fragmented, but they have to be well coordinated with clearcut schedules in order to contribute efficiently to conservation objectives. This modeling framework could easily be adapted and parameterized to test hypotheses or to optimize restoration schedules in landscapes where issues such as forest fragmentation and the abundance of mature or old-growth forests are a concern.
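To make the optimization setup concrete, here is a deliberately tiny mixed-integer sketch in the same spirit (illustrative stand data and a single retention constraint; the paper's model additionally handles spatial tract contiguity, yields over time, and operational harvest rules):

```python
import pulp

# Toy harvest scheduling: maximize harvested volume while retaining
# at least 40% of the landscape area as uncut mature forest.
stands = range(8)
area   = [12, 9, 15, 7, 11, 14, 8, 10]           # hectares, all currently mature
volume = [150, 90, 200, 60, 130, 180, 70, 110]   # merchantable m3 if clear-cut

prob = pulp.LpProblem("harvest_scheduling", pulp.LpMaximize)
cut = [pulp.LpVariable(f"cut_{i}", cat="Binary") for i in stands]

prob += pulp.lpSum(volume[i] * cut[i] for i in stands)   # wood supply objective
prob += pulp.lpSum(area[i] * (1 - cut[i]) for i in stands) >= 0.4 * sum(area)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("harvested stands:", [i for i in stands if cut[i].value() == 1])
```

The binding retention constraint is the toy analogue of the 40% mature-tract target; tightening it trades wood supply for conservation, the same trade-off quantified above.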
Bayesian Methods for Determining the Importance of Effects
USDA-ARS?s Scientific Manuscript database
Criticisms have plagued the frequentist null-hypothesis significance testing (NHST) procedure since the day it was created from the Fisher Significance Test and Hypothesis Test of Jerzy Neyman and Egon Pearson. Alternatives to NHST exist in frequentist statistics, but competing methods are also avai...
Testing for purchasing power parity in the long-run for ASEAN-5
NASA Astrophysics Data System (ADS)
Choji, Niri Martha; Sek, Siok Kun
2017-04-01
For more than a decade, there has been substantial interest in empirically testing the validity of the purchasing power parity (PPP) hypothesis. This paper tests for long-run relative purchasing power parity in a group of ASEAN-5 countries over the period 1996-2016 using monthly data. For this purpose, we used the Pedroni co-integration method to test the long-run hypothesis of purchasing power parity. We first tested for the stationarity of the variables and found that the variables are non-stationary in levels but stationary at first difference. Results of the Pedroni test rejected the null hypothesis of no co-integration, meaning that there is enough evidence to support PPP in the long run for the ASEAN-5 countries over the period 1996-2016. In other words, the rejection of the null hypothesis implies a long-run relation between nominal exchange rates and relative prices.
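The two-step logic (unit-root tests in levels and differences, then a co-integration test) can be sketched for a single country; Pedroni's test is the panel generalization, which statsmodels does not implement, so the sketch below substitutes the Engle-Granger test on synthetic data:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(9)

# Synthetic PPP-style pair: log exchange rate s_t tied to I(1) relative prices.
T = 240                                            # 20 years of monthly data
rel_price = np.cumsum(rng.normal(size=T))          # random-walk relative price level
s = rel_price + rng.normal(scale=0.5, size=T)      # exchange rate, stationary gap

# Step 1: unit-root checks (non-stationary in levels, stationary in differences).
print(adfuller(s)[1], adfuller(np.diff(s))[1])     # ADF p-values

# Step 2: co-integration test, the single-series analogue of the panel Pedroni test.
t_stat, p_value, _ = coint(s, rel_price)
print(p_value)   # small p -> reject 'no co-integration' -> long-run PPP holds
```

Rejecting "no co-integration" while the series are individually I(1) is precisely the evidence pattern the abstract reports for the ASEAN-5 panel.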
UNIFORMLY MOST POWERFUL BAYESIAN TESTS
Johnson, Valen E.
2014-01-01
Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerful Bayesian tests to be tests that maximize the probability that the Bayes factor, in favor of the alternative hypothesis, exceeds a specified threshold. Like their classical counterpart, uniformly most powerful Bayesian tests are most easily defined in one-parameter exponential family models, although extensions outside of this class are possible. The connection between uniformly most powerful tests and uniformly most powerful Bayesian tests can be used to provide an approximate calibration between p-values and Bayes factors. Finally, issues regarding the strong dependence of resulting Bayes factors and p-values on sample size are discussed. PMID:24659829
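For the one-sample normal test with known variance, the defining optimization can be carried out numerically in a few lines (our sketch of the exponential-family case; n, sigma, and the evidence threshold gamma are arbitrary choices):

```python
import numpy as np

# x̄ ~ N(mu, sigma^2 / n); test H0: mu = 0 against a point alternative mu1 > 0.
n, sigma, gamma = 20, 1.0, 10.0    # gamma: evidence threshold for the Bayes factor

def xbar_cutoff(mu1):
    """The Bayes factor BF(mu1 vs 0) exceeds gamma iff x̄ exceeds this cutoff."""
    return sigma**2 * np.log(gamma) / (n * mu1) + mu1 / 2

mus = np.linspace(0.01, 2.0, 2000)
mu_umpbt = mus[np.argmin(xbar_cutoff(mus))]

# Closed form from minimizing the cutoff: mu1* = sigma * sqrt(2 log(gamma) / n).
print(mu_umpbt, sigma * np.sqrt(2 * np.log(gamma) / n))   # numerically equal
```

Because the alternative mu1* minimizes the x̄ cutoff needed to clear the evidence bar, it maximizes P(BF > gamma) regardless of the true positive mean, which is the "uniformly" in uniformly most powerful Bayesian test; the cutoff itself links the Bayesian rejection region to a classical one and hence to a p-value.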
The Effects of Two Study Methods on Memory
1987-07-01
hypothesis that optimal processing of individual words, qua individual words, is sufficient to support good recall" (Craik & Tulving, 1975, p. 27)... Likewise, mnemonic strategies, such as creating a story (Bellezza, Cheesman, & Reddy, 1977) or categories (Mandler, Pearlstone, & Koopmans, 1969) for... that optimal retrieval performance is produced by extensive semantic processing of the individual item (Craik & Tulving, 1975; Hyde & Jenkins, 1973
Inference for High-dimensional Differential Correlation Matrices *
Cai, T. Tony; Zhang, Anru
2015-01-01
Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. Minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed. PMID:26500380
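A simplified entrywise-thresholding scheme conveys the flavor of the procedure (our sketch; the paper's estimator uses theoretically calibrated, data-adaptive thresholds rather than the bootstrap standard errors assumed here):

```python
import numpy as np

rng = np.random.default_rng(6)

def corr(X):
    return np.corrcoef(X, rowvar=False)

def differential_corr(X1, X2, n_boot=200, lam=2.0):
    """Threshold D = corr(X1) - corr(X2) entrywise: keep an entry only if
    it exceeds lam times its bootstrap standard error."""
    D = corr(X1) - corr(X2)
    boots = np.empty((n_boot,) + D.shape)
    for b in range(n_boot):
        i1 = rng.integers(0, len(X1), len(X1))
        i2 = rng.integers(0, len(X2), len(X2))
        boots[b] = corr(X1[i1]) - corr(X2[i2])
    se = boots.std(axis=0)
    return np.where(np.abs(D) > lam * se, D, 0.0)

p, n = 10, 200
X1 = rng.normal(size=(n, p))
X2 = rng.normal(size=(n, p))
X2[:, 1] += 0.9 * X2[:, 0]              # one genuinely altered correlation
D_hat = differential_corr(X1, X2)
print(np.transpose(np.nonzero(np.triu(D_hat, 1))))   # ideally only entry (0, 1)
```

Thresholding the difference directly, rather than each correlation matrix separately, is what exploits the approximate sparsity of the difference, the property the minimax theory above is built on.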
Nieznański, Marek
2014-10-01
According to many theoretical accounts, reinstating the study context at the time of test creates optimal circumstances for item retrieval. The role of context reinstatement was tested in reference to context memory in several experiments. In the encoding phase, participants were presented with words printed in two different font colors (intrinsic context) or on two different sides of the computer screen (extrinsic context). At test, the context was reinstated or changed, and participants were asked to recognize words and recollect their study context. Moreover, a read-generate manipulation was introduced at encoding and retrieval, which was intended to influence the relative salience of item and context information. The results showed that context reinstatement had no effect on memory for extrinsic context but affected memory for intrinsic context when the item was generated at encoding and read at test. These results support the hypothesis that context information is reconstructed at retrieval only when context was poorly encoded at study. © 2014 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
[Experimental testing of Pflüger's reflex hypothesis of menstruation in late 19th century].
Simmer, H H
1980-07-01
Pflüger's hypothesis of a nerve reflex as the cause of menstruation, published in 1865 and accepted by many, nonetheless did not lead to experimental investigations for 25 years. According to this hypothesis, the nerve reflex starts in the ovary with an increase in intraovarian pressure caused by the growing follicles. In 1884, Adolph Kehrer proposed a program to test the nerve reflex, but only in 1890 did Cohnstein artificially increase the intraovarian pressure in women, by bimanual compression from the outside and through the vagina. His results were not convincing. Six years later, Strassmann injected fluids into the ovaries of animals and obtained changes in the uterus resembling those of oestrus. His results seemed to verify a prognosis derived from Pflüger's hypothesis. Thus, after a long interval, that hypothesis had become a paradigm. Though reasons can be given for the delay, it is little understood why experimental testing started so late.
NASA Astrophysics Data System (ADS)
Borghesani, P.; Pennacchi, P.; Ricci, R.; Chatterton, S.
2013-10-01
Cyclostationary models for the diagnostic signals measured on faulty rotating machinery have proved successful in many laboratory tests and industrial applications. The squared envelope spectrum has been pointed out as the most efficient indicator for the assessment of second-order cyclostationary symptoms of damage, which are typical, for instance, of rolling element bearing faults. In an attempt to foster the spread of rotating machinery diagnostics, the current trend in the field is to reach higher levels of automation of condition monitoring systems. For this purpose, statistical tests for the presence of cyclostationarity have been proposed in recent years. The statistical thresholds proposed in the past for the identification of cyclostationary components were obtained under the hypothesis that the signal is white noise when the component is healthy. This assumption, coupled with the non-white nature of real signals, implies the necessity of pre-whitening the signal or filtering it in optimal narrow bands, increasing the complexity of the algorithm and the risk of losing diagnostic information or introducing biases in the result. In this paper, the authors introduce an original analytical derivation of the statistical tests for cyclostationarity in the squared envelope spectrum, dropping the hypothesis of white noise from the beginning. The effect of first-order and second-order cyclostationary components on the distribution of the squared envelope spectrum is quantified, and the effectiveness of the newly proposed threshold is verified, providing a sound theoretical basis and a practical starting point for efficient automated diagnostics of machine components such as rolling element bearings. The analytical results are verified by means of numerical simulations and by using experimental vibration data of rolling element bearings.
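For reference, the squared envelope spectrum itself is simple to compute; the sketch below builds a toy bearing-like signal (all rates and decay constants invented for illustration) and recovers the fault rate from the envelope:

```python
import numpy as np
from scipy.signal import hilbert

fs = 20_000                          # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(7)

# Toy fault signal: a 3 kHz resonance excited ~97 times per second, plus noise.
impulses = (np.sin(2 * np.pi * 97.0 * t) > 0.999).astype(float)
ringdown = np.exp(-2000 * t[:200]) * np.sin(2 * np.pi * 3000.0 * t[:200])
x = np.convolve(impulses, ringdown, mode="same") + 0.1 * rng.normal(size=t.size)

# Squared envelope spectrum: spectrum of the squared magnitude of the analytic signal.
env2 = np.abs(hilbert(x)) ** 2
ses = np.abs(np.fft.rfft(env2 - env2.mean()))
freqs = np.fft.rfftfreq(env2.size, 1 / fs)
print(freqs[np.argmax(ses[1:]) + 1])   # ≈ 97 Hz: the second-order cyclostationary symptom
```

The paper's contribution is the distribution theory for these spectrum values under a healthy, non-white signal, which yields thresholds for flagging such peaks automatically.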
Praveen, Vijayakumar; Praveen, Shama
2016-01-01
Sudden infant death syndrome (SIDS) continues to be a major public health issue. Following its major decline since the "Back to Sleep" campaign, the incidence of SIDS has plateaued, with an annual incidence of about 1,500 SIDS-related deaths in the United States and thousands more throughout the world. The etiology of SIDS, the major cause of postneonatal mortality in the western world, is still poorly understood. Although sleeping in prone position is a major risk factor, SIDS continues to occur even in the supine sleeping position. The triple-risk model of Filiano and Kinney emphasizes the interaction between a susceptible infant during a critical developmental period and stressor/s in the pathogenesis of SIDS. Recent evidence ranges from dysregulated autonomic control to findings of altered neurochemistry, especially the serotonergic system that plays an important role in brainstem cardiorespiratory/thermoregulatory centers. Brainstem serotonin (5-HT) and tryptophan hydroxylase-2 (TPH-2) levels have been shown to be lower in SIDS, supporting the evidence that defects in the medullary serotonergic system play a significant role in SIDS. Pathogenic bacteria and their enterotoxins have been associated with SIDS, although no direct evidence has been established. We present a new hypothesis that the infant's gut microbiome, and/or its metabolites, by its direct effects on the gut enterochromaffin cells, stimulates the afferent gut vagal endings by releasing serotonin (paracrine effect), optimizing autoresuscitation by modulating brainstem 5-HT levels through the microbiome-gut-brain axis, thus playing a significant role in SIDS during the critical period of gut flora development and vulnerability to SIDS. The shared similarities between various risk factors for SIDS and their relationship with the infant gut microbiome support our hypothesis. Comprehensive gut-microbiome studies are required to test our hypothesis.
Assessment of statistical significance and clinical relevance.
Kieser, Meinhard; Friede, Tim; Gondan, Matthias
2013-05-10
In drug development, it is well accepted that a successful study will demonstrate not only a statistically significant result but also a clinically relevant effect size. Whereas standard hypothesis tests are used to demonstrate the former, it is less clear how the latter should be established. In the first part of this paper, we consider the responder analysis approach and study the performance of locally optimal rank tests when the outcome distribution is a mixture of responder and non-responder distributions. We find that these tests are quite sensitive to their planning assumptions and have therefore not really any advantage over standard tests such as the t-test and the Wilcoxon-Mann-Whitney test, which perform overall well and can be recommended for applications. In the second part, we present a new approach to the assessment of clinical relevance based on the so-called relative effect (or probabilistic index) and derive appropriate sample size formulae for the design of studies aiming at demonstrating both a statistically significant and clinically relevant effect. Referring to recent studies in multiple sclerosis, we discuss potential issues in the application of this approach. Copyright © 2012 John Wiley & Sons, Ltd.
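The relative effect (probabilistic index) mentioned here has a one-line nonparametric estimator, closely related to the Wilcoxon-Mann-Whitney statistic; the data below are invented for illustration:

```python
import numpy as np

def relative_effect(x, y):
    """Estimate P(X < Y) + 0.5 * P(X = Y): the probability that a randomly
    chosen outcome under treatment y beats one under control x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    less = (x[:, None] < y[None, :]).mean()
    ties = (x[:, None] == y[None, :]).mean()
    return less + 0.5 * ties

control = [3, 5, 4, 6, 5]
treated = [6, 7, 5, 8, 9]
print(relative_effect(control, treated))   # 0.5 means no effect; here 0.90
```

A clinically relevant effect can then be declared when the estimate (with its confidence interval) clears a pre-specified margin above 0.5, which is the kind of assessment the paper's sample size formulae are designed for.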
Testing fundamental ecological concepts with a Pythium-Prunus pathosystem
USDA-ARS?s Scientific Manuscript database
The study of plant-pathogen interactions has enabled tests of basic ecological concepts on plant community assembly (Janzen-Connell Hypothesis) and plant invasion (Enemy Release Hypothesis). We used a field experiment to (#1) test whether Pythium effects depended on host (seedling) density and/or d...
A checklist to facilitate objective hypothesis testing in social psychology research.
Washburn, Anthony N; Morgan, G Scott; Skitka, Linda J
2015-01-01
Social psychology is not a very politically diverse area of inquiry, something that could negatively affect the objectivity of social psychological theory and research, as Duarte et al. argue in the target article. This commentary offers a number of checks to help researchers uncover possible biases and identify when they are engaging in hypothesis confirmation and advocacy instead of hypothesis testing.
Nan Liu; Hai Ren; Sufen Yuan; Qinfeng Guo; Long Yang
2013-01-01
The relative importance of facilitation and competition between pairwise plants across abiotic stress gradients as predicted by the stress-gradient hypothesis has been confirmed in arid and temperate ecosystems, but the hypothesis has rarely been tested in tropical systems, particularly across nutrient gradients. The current research examines the interactions between a...
Phase II Clinical Trials: D-methionine to Reduce Noise-Induced Hearing Loss
2012-03-01
loss (NIHL) and tinnitus in our troops. Hypotheses: Primary Hypothesis: Administration of oral D-methionine prior to and during weapons... reduce or prevent noise-induced tinnitus. Primary outcome to test the primary hypothesis: Pure tone air-conduction thresholds. Primary outcome to... test the secondary hypothesis: Tinnitus questionnaires. Specific Aims: 1. To determine whether administering oral D-methionine (D-met) can
NASA Astrophysics Data System (ADS)
Chung, Hye Won; Guha, Saikat; Zheng, Lizhong
2017-07-01
We study the problem of designing optical receivers to discriminate between multiple coherent states using coherent processing receivers—i.e., one that uses arbitrary coherent feedback control and quantum-noise-limited direct detection—which was shown by Dolinar to achieve the minimum error probability in discriminating any two coherent states. We first derive and reinterpret Dolinar's binary-hypothesis minimum-probability-of-error receiver as the one that optimizes the information efficiency at each time instant, based on recursive Bayesian updates within the receiver. Using this viewpoint, we propose a natural generalization of Dolinar's receiver design to discriminate M coherent states, each of which could now be a codeword, i.e., a sequence of N coherent states, each drawn from a modulation alphabet. We analyze the channel capacity of the pure-loss optical channel with a general coherent-processing receiver in the low-photon number regime and compare it with the capacity achievable with direct detection and the Holevo limit (achieving the latter would require a quantum joint-detection receiver). We show compelling evidence that despite the optimal performance of Dolinar's receiver for the binary coherent-state hypothesis test (either in error probability or mutual information), the asymptotic communication rate achievable by such a coherent-processing receiver is only as good as direct detection. This suggests that in the infinitely long codeword limit, all potential benefits of coherent processing at the receiver can be obtained by designing a good code and direct detection, with no feedback within the receiver.
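The binary benchmark that anchors this discussion is easy to tabulate from standard closed forms: the Helstrom bound, which Dolinar's feedback receiver attains, versus the conventional homodyne limit for the same antipodal coherent states (textbook formulas; the grid of photon numbers is an arbitrary choice):

```python
import numpy as np
from scipy.special import erfc

nbar = np.array([0.05, 0.2, 0.5, 1.0])   # mean photon number |alpha|^2 per symbol

# Helstrom (minimum) error probability for |alpha> vs |-alpha>,
# attained exactly by Dolinar's coherent-feedback receiver:
p_dolinar = 0.5 * (1.0 - np.sqrt(1.0 - np.exp(-4.0 * nbar)))

# Conventional homodyne-detection benchmark for the same BPSK alphabet:
p_homodyne = 0.5 * erfc(np.sqrt(2.0 * nbar))

for n, pd, ph in zip(nbar, p_dolinar, p_homodyne):
    print(f"nbar={n:.2f}  Dolinar={pd:.3e}  homodyne={ph:.3e}")
```

The per-symbol advantage is largest at low photon number, which makes the paper's conclusion, that this advantage washes out at the level of asymptotic communication rate, all the more striking.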
Distributed resource allocation under communication constraints
NASA Astrophysics Data System (ADS)
Dodin, Pierre; Nimier, Vincent
2001-03-01
This paper presents a study of the multi-sensor management problem for multi-target tracking. Collaboration between sensors observing the same target means that they are able to fuse their data during processing, and this possibility must be taken into account when computing the optimal sensor-to-target assignment at each time step. To solve this problem for a real large-scale system, one must consider both the information aspect and the control aspect of the problem. To unify these problems, one possibility is to use a decentralized filtering algorithm locally driven by an assignment algorithm. The decentralized filtering algorithm we use in our model is that of Grime, which relaxes the usual fully connected assumption, under which information is distributed everywhere in the system at the same moment, a requirement that is unrealistic for a real large-scale system. We model the distributed assignment decision with a greedy algorithm: each sensor performs a global optimization in order to estimate the other sensors' information sets. A consequence of relaxing the fully connected assumption is that the sensors' information sets are not identical at each time step, producing an information asymmetry in the system. The assignment algorithm uses local knowledge of this asymmetry. By testing the reactions and coherence of the local assignment decisions of our system against maneuvering targets, we show that decentralized assignment control remains possible even though the system is not fully connected.
Is effective force application in handrim wheelchair propulsion also efficient?
Bregman, D J J; van Drongelen, S; Veeger, H E J
2009-01-01
Efficiency in manual wheelchair propulsion is low, as is the fraction of the propulsion force that contributes to the moment of propulsion of the wheelchair. In this study we tested the hypothesis that a tangential propulsion force direction leads to an increase in physiological cost, due to (1) the sub-optimal use of elbow flexors and extensors, and/or (2) the necessity of preventing glenohumeral subluxation. Five able-bodied individuals and 11 individuals with a spinal cord injury propelled a wheelchair while kinematics and kinetics were collected. The results were used to perform inverse dynamical simulations with input of (1) the experimentally obtained propulsion force, and (2) only the tangential component of that force. In the tangential force condition the physiological cost was over 30% higher, while the tangential propulsion force was only 75% of the total experimental force. According to model estimations, the tangential force condition led to more co-contraction around the elbow and higher power production around the shoulder joint. The tangential propulsion force led to a significant but small 4% increase in the necessity for the model to compensate for glenohumeral subluxation, which indicates that this is not a likely cause of the decrease in efficiency. The present findings support the hypothesis that the observed force direction in wheelchair propulsion is a compromise between efficiency and the constraints imposed by the wheelchair-user system. This implies that training should not be aimed at optimization of the propulsion force, because this may be less efficient and more straining for the musculoskeletal system.
An omnibus test for the global null hypothesis.
Futschik, Andreas; Taus, Thomas; Zehetmayer, Sonja
2018-01-01
Global hypothesis tests are a useful tool in the context of clinical trials, genetic studies, or meta-analyses, when researchers are not interested in testing individual hypotheses but in testing whether none of the hypotheses is false. There are several possibilities for testing the global null hypothesis when the individual null hypotheses are independent. If it is assumed that many of the individual null hypotheses are false, combination tests have been recommended to maximize power. If, however, it is assumed that only one or a few null hypotheses are false, global tests based on individual test statistics are more powerful (e.g., the Bonferroni or Simes test). Usually, however, there is no a priori knowledge of the number of false individual null hypotheses. We therefore propose an omnibus test based on cumulative sums of the transformed p-values. We show that this test yields an impressive overall performance. The proposed method is implemented in an R package called omnibus.
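The reference implementation is the authors' R package; the Python sketch below is our own illustrative variant of a cumulative-sum omnibus statistic (the exact transformation and calibration in the paper may differ), made valid by simulating its null distribution:

```python
import numpy as np

rng = np.random.default_rng(8)

def omnibus_stat(pvals):
    """Max roughly-standardized cumulative sum of -log(p) over sorted p-values:
    sensitive both to one tiny p-value and to many moderately small ones."""
    z = -np.log(np.sort(pvals))
    cum = np.cumsum(z)
    k = np.arange(1, len(pvals) + 1)
    # -log(U) has unit mean/variance for U ~ Uniform(0,1); this centering is
    # only a convenience, since the simulation below calibrates the test exactly.
    return np.max((cum - k) / np.sqrt(k))

def omnibus_test(pvals, n_sim=20_000):
    obs = omnibus_stat(pvals)
    null = np.array([omnibus_stat(rng.uniform(size=len(pvals)))
                     for _ in range(n_sim)])
    return np.mean(null >= obs)

# 19 true nulls plus one false null:
p = np.concatenate([rng.uniform(size=19), [1e-4]])
print(omnibus_test(p))
```

Scanning all partial sums is what buys robustness: the statistic behaves like a Bonferroni-type test when one hypothesis is false and like a combination test when many are.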
Explorations in Statistics: Hypothesis Tests and P Values
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2009-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of "Explorations in Statistics" delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what…
Planned Hypothesis Tests Are Not Necessarily Exempt from Multiplicity Adjustment
ERIC Educational Resources Information Center
Frane, Andrew V.
2015-01-01
Scientific research often involves testing more than one hypothesis at a time, which can inflate the probability that a Type I error (false discovery) will occur. To prevent this Type I error inflation, adjustments can be made to the testing procedure that compensate for the number of tests. Yet many researchers believe that such adjustments are…
ERIC Educational Resources Information Center
Malda, Maike; van de Vijver, Fons J. R.; Temane, Q. Michael
2010-01-01
In this study, cross-cultural differences in cognitive test scores are hypothesized to depend on a test's cultural complexity (Cultural Complexity Hypothesis: CCH), here conceptualized as its content familiarity, rather than on its cognitive complexity (Spearman's Hypothesis: SH). The content familiarity of tests assessing short-term memory,…
Hopkins, Kathryn; King, Andrew; Moore, Brian C J
2012-09-01
Hearing aids use amplitude compression to compensate for the effects of loudness recruitment. The compression speed that gives the best speech intelligibility varies among individuals. Moore [(2008). Trends Amplif. 12, 300-315] suggested that an individual's sensitivity to temporal fine structure (TFS) information may affect which compression speed gives most benefit. This hypothesis was tested using normal-hearing listeners with a simulated hearing loss. Sentences in a competing talker background were processed using multi-channel fast or slow compression followed by a simulation of threshold elevation and loudness recruitment. Signals were either tone vocoded with 1-ERB(N)-wide channels (where ERB(N) is the bandwidth of normal auditory filters) to remove the original TFS information, or not processed further. In a second experiment, signals were vocoded with either 1- or 2-ERB(N)-wide channels, to test whether the available spectral detail affects the optimal compression speed. Intelligibility was significantly better for fast than slow compression regardless of vocoder channel bandwidth. The results suggest that the availability of original TFS or detailed spectral information does not affect the optimal compression speed. This conclusion is tentative, since while the vocoder processing removed the original TFS information, listeners may have used the altered TFS in the vocoded signals.
Tang, Dalin; Yang, Chun; Geva, Tal; del Nido, Pedro J.
2010-01-01
Recent advances in medical imaging technology and computational modeling techniques are making it possible to construct patient-specific computational ventricle models and use them to test surgical hypotheses, replacing empirical and often risky clinical experimentation in examining the efficiency and suitability of various reconstructive procedures in diseased hearts. In this paper, we provide a brief review of recent developments in ventricle modeling and its potential application in surgical planning and management of tetralogy of Fallot (ToF) patients. Aspects of data acquisition, model selection and construction, tissue material properties, ventricle layer structure and tissue fiber orientations, pressure conditions, model validation, and virtual surgery procedures (changing patient-specific ventricle data and performing computer simulations) are reviewed. Results from a case study using patient-specific cardiac magnetic resonance (CMR) imaging and a right/left ventricle and patch (RV/LV/Patch) combination model with fluid-structure interactions (FSI) are reported. The models were used to evaluate and optimize the human pulmonary valve replacement/insertion (PVR) surgical procedure and patch design, and to test the surgical hypothesis that PVR with a small patch and aggressive scar tissue trimming may lead to improved recovery of RV function and reduced stress/strain conditions in the patch area. PMID:21344066
Stock price change rate prediction by utilizing social network activities.
Deng, Shangkun; Mitsubuchi, Takashi; Sakurai, Akito
2014-01-01
Predicting stock price change rates to provide valuable information to investors is a challenging task. Individual participants may express their opinions in a social network service (SNS) before or after their transactions in the market; we hypothesize that the stock price change rate is better predicted by a function of social network service activities and technical indicators than by a function of stock market activities alone. The hypothesis is tested by prediction accuracy as well as by the performance of simulated trading, because the success or failure of a prediction is better measured by the profits or losses investors gain or suffer. In this paper, we propose a hybrid model that combines multiple kernel learning (MKL) and a genetic algorithm (GA). MKL is adopted to optimize the stock price change rate prediction models, which are expressed as a multiple-kernel linear function of different types of features extracted from different sources. GA is used to optimize the trading rules used in the simulated trading by fusing the return predictions and the values of three well-known overbought and oversold technical indicators. Accumulated return and the Sharpe ratio were used to assess the performance of the simulated trading. Experimental results show that our proposed model performed better than other models, including ones using state-of-the-art techniques.
Is it better to select or to receive? Learning via active and passive hypothesis testing.
Markant, Douglas B; Gureckis, Todd M
2014-02-01
People can test hypotheses through either selection or reception. In a selection task, the learner actively chooses observations to test his or her beliefs, whereas in reception tasks data are passively encountered. People routinely use both forms of testing in everyday life, but the critical psychological differences between selection and reception learning remain poorly understood. One hypothesis is that selection learning improves learning performance by enhancing generic cognitive processes related to motivation, attention, and engagement. Alternatively, we suggest that differences between these 2 learning modes derive from a hypothesis-dependent sampling bias that is introduced when a person collects data to test his or her own individual hypothesis. Drawing on influential models of sequential hypothesis-testing behavior, we show that such a bias (a) can lead to the collection of data that facilitates learning compared with reception learning and (b) can be more effective than observing the selections of another person. We then report a novel experiment based on a popular category learning paradigm that compares reception and selection learning. We additionally compare selection learners to a set of "yoked" participants who viewed the exact same sequence of observations under reception conditions. The results revealed systematic differences in performance that depended on the learner's role in collecting information and the abstract structure of the problem.
A systematic review of the measurement properties of the Body Image Scale (BIS) in cancer patients.
Melissant, Heleen C; Neijenhuijs, Koen I; Jansen, Femke; Aaronson, Neil K; Groenvold, Mogens; Holzner, Bernhard; Terwee, Caroline B; van Uden-Kraan, Cornelia F; Cuijpers, Pim; Verdonck-de Leeuw, Irma M
2018-06-01
Body image is acknowledged as an important aspect of health-related quality of life in cancer patients. The Body Image Scale (BIS) is a patient-reported outcome measure (PROM) to evaluate body image in cancer patients. The aim of this study was to systematically review measurement properties of the BIS among cancer patients. A search in Embase, MEDLINE, PsycINFO, and Web of Science was performed to identify studies that investigated measurement properties of the BIS (Prospero ID 42017057237). Study quality was assessed (excellent, good, fair, poor), and data were extracted and analyzed according to the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) methodology on structural validity, internal consistency, reliability, measurement error, hypothesis testing for construct validity, and responsiveness. Evidence was categorized into sufficient, insufficient, inconsistent, or indeterminate. Nine studies were included. Evidence was sufficient for structural validity (one factor solution), internal consistency (α = 0.86-0.96), and reliability (r > 0.70); indeterminate for measurement error (information on minimal important change was lacking) and responsiveness (increasing body image disturbance in only one study); and inconsistent for hypothesis testing (conflicting results). Quality of the evidence was moderate to low. No studies reported on cross-cultural validity. The BIS is a PROM with good structural validity, internal consistency, and test-retest reliability, but good quality studies on the other measurement properties are needed to optimize evidence. It is recommended to include a wider variety of cancer diagnoses and treatment modalities in these future studies.
Testing for purchasing power parity in 21 African countries using several unit root tests
NASA Astrophysics Data System (ADS)
Choji, Niri Martha; Sek, Siok Kun
2017-04-01
Purchasing power parity (PPP) is used as a basis for international income and expenditure comparisons through exchange rate theory. However, empirical studies disagree on the validity of PPP. In this paper, we test the validity of PPP using a panel data approach. We apply seven different panel unit root tests to the purchasing power parity hypothesis based on quarterly data on the real effective exchange rate for 21 African countries over the period 1971:Q1-2012:Q4. All seven tests rejected the hypothesis of stationarity, meaning that absolute PPP does not hold in those African countries. This result confirms the claim from previous studies that standard panel unit root tests fail to support the PPP hypothesis.
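As an illustration of the kind of test involved (a simplified univariate stand-in, not the authors' code: the paper's seven panel tests pool information across countries, and the series below are simulated random walks rather than real exchange rate data):

    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(0)
    # Simulated stand-ins: one random-walk "real exchange rate" per country,
    # 168 quarters (1971:Q1-2012:Q4).
    reer = {f"country_{i:02d}": np.cumsum(rng.normal(size=168)) for i in range(21)}

    for name, series in reer.items():
        stat, pvalue, *_ = adfuller(series)
        # ADF null: the series has a unit root. Failing to reject (p > 0.05)
        # is evidence against PPP, which requires mean reversion.
        print(name, round(pvalue, 3))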
Does Testing Increase Spontaneous Mediation in Learning Semantically Related Paired Associates?
ERIC Educational Resources Information Center
Cho, Kit W.; Neely, James H.; Brennan, Michael K.; Vitrano, Deana; Crocco, Stephanie
2017-01-01
Carpenter (2011) argued that the testing effect she observed for semantically related but associatively unrelated paired associates supports the mediator effectiveness hypothesis. This hypothesis asserts that after the cue-target pair "mother-child" is learned, relative to restudying mother-child, a review test in which…
Dynamic sensor management of dispersed and disparate sensors for tracking resident space objects
NASA Astrophysics Data System (ADS)
El-Fallah, A.; Zatezalo, A.; Mahler, R.; Mehra, R. K.; Donatelli, D.
2008-04-01
Dynamic sensor management of dispersed and disparate sensors for space situational awareness presents daunting scientific and practical challenges as it requires optimal and accurate maintenance of all Resident Space Objects (RSOs) of interest. We demonstrate an approach to the space-based sensor management problem by extending a previously developed and tested sensor management objective function, the Posterior Expected Number of Targets (PENT), to disparate and dispersed sensors. This PENT extension together with observation models for various sensor platforms, and a Probability Hypothesis Density Particle Filter (PHD-PF) tracker provide a powerful tool for tackling this challenging problem. We demonstrate the approach using simulations for tracking RSOs by a Space Based Visible (SBV) sensor and ground based radars.
True Color Image Analysis For Determination Of Bone Growth In Fluorochromic Biopsies
NASA Astrophysics Data System (ADS)
Madachy, Raymond J.; Chotivichit, Lee; Huang, H. K.; Johnson, Eric E.
1989-05-01
A true color imaging technique has been developed for analysis of microscopic fluorochromic bone biopsy images to quantify new bone growth. The technique searches for specified colors in a medical image to quantify areas of interest. Based on a user-supplied training set, a multispectral classification of pixel values is performed and used to segment the image. Good results were obtained when compared to manual tracings of new bone growth performed by an orthopedic surgeon. At a 95% confidence level, the hypothesis that there is no difference between the two methods could not be rejected. Work is in progress to test bone biopsies with different colored stains and to further optimize the analysis process using three-dimensional spectral ordering techniques.
Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic.
Yokoyama, Jun'ichi
2014-01-01
After reviewing the standard hypothesis test and the matched filter technique to identify gravitational waves under Gaussian noises, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion, and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, where the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student's t-distribution, which has larger tails than a Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter works well in the highly non-Gaussian case.
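For reference, the Gaussian-noise matched filter that the paper reviews is commonly written as follows (a standard textbook form; the notation here is assumed rather than quoted from the paper):

    \ln \Lambda(x) = (x \mid h) - \frac{1}{2} (h \mid h),
    \qquad
    (a \mid b) \equiv 4 \, \mathrm{Re} \int_0^\infty
        \frac{\tilde{a}(f) \, \tilde{b}^{*}(f)}{S_n(f)} \, df

where h is the template waveform, tildes denote Fourier transforms, and S_n(f) is the one-sided noise power spectral density. The Edgeworth and Gaussian-mapping methods generalize this likelihood ratio to non-Gaussian noise.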
Optimal Irrigation and Debridement of Infected Joint Implants
Schwechter, Evan M.; Folk, David; Varshney, Avanish K.; Fries, Bettina C.; Kim, Sun Jin; Hirsh, David M.
2014-01-01
Acute postoperative and acute, late hematogenous prosthetic joint infections have been treated with 1-stage irrigation and debridement with polyethylene exchange. Success rates, however, are highly variable. Reported studies demonstrate that detergents are effective at decreasing bacterial colony counts on orthopedic implants. Our hypothesis was that the combination of a detergent and an antiseptic would be more effective than a detergent alone at decreasing colony counts from a methicillin-resistant Staphylococcus aureus biofilm-coated titanium alloy disk simulating an orthopedic implant. Of the various agents tested, chlorhexidine gluconate scrub (antiseptic and detergent) was the most effective at decreasing bacterial colony counts both before and after reincubation of the disks; pulse lavage and scrubbing were not more effective than pulse lavage alone. PMID:21641757
Hatori, Tsuyoshi; Takemura, Kazuhisa; Fujii, Satoshi; Ideno, Takashi
2011-06-01
This paper presents a new model of category judgment. The model hypothesizes that, when more attention is focused on a category, the psychological range of the category becomes narrower (the category-focusing hypothesis). We explain this hypothesis using the metaphor of a "mental-box" model: the more attention that is focused on a mental box (i.e., a category set), the smaller the box becomes (i.e., the cardinality of the category set). The hypothesis was tested in an experiment (N = 40) in which the focus of attention on prescribed verbal categories was manipulated. The data supported the hypothesis: category-focusing effects were found in three experimental tasks (regarding the categories of "food", "height", and "income"). The validity of the hypothesis is discussed based on the results.
Debates—Hypothesis testing in hydrology: Introduction
NASA Astrophysics Data System (ADS)
Blöschl, Günter
2017-03-01
This paper introduces the papers in the "Debates—Hypothesis testing in hydrology" series. The four articles in the series discuss whether and how the process of testing hypotheses leads to progress in hydrology. Repeated experiments with controlled boundary conditions are rarely feasible in hydrology. Research is therefore not easily aligned with the classical scientific method of testing hypotheses. Hypotheses in hydrology are often enshrined in computer models which are tested against observed data. Testability may be limited due to model complexity and data uncertainty. All four articles suggest that hypothesis testing has contributed to progress in hydrology and is needed in the future. However, the procedure is usually not as systematic as the philosophy of science suggests. A greater emphasis on a creative reasoning process on the basis of clues and explorative analyses is therefore needed.
ERIC Educational Resources Information Center
White, Brian
2004-01-01
This paper presents a generally applicable method for characterizing subjects' hypothesis-testing behaviour based on a synthesis that extends previous work. Beginning with a transcript of subjects' speech and videotape of their actions, a Reasoning Map is created that depicts the flow of their hypotheses, tests, predictions, results, and…
Why Is Test-Restudy Practice Beneficial for Memory? An Evaluation of the Mediator Shift Hypothesis
ERIC Educational Resources Information Center
Pyc, Mary A.; Rawson, Katherine A.
2012-01-01
Although the memorial benefits of testing are well established empirically, the mechanisms underlying this benefit are not well understood. The authors evaluated the mediator shift hypothesis, which states that test-restudy practice is beneficial for memory because retrieval failures during practice allow individuals to evaluate the effectiveness…
Bayesian Approaches to Imputation, Hypothesis Testing, and Parameter Estimation
ERIC Educational Resources Information Center
Ross, Steven J.; Mackey, Beth
2015-01-01
This chapter introduces three applications of Bayesian inference to common and novel issues in second language research. After a review of the critiques of conventional hypothesis testing, our focus centers on ways Bayesian inference can be used for dealing with missing data, for testing theory-driven substantive hypotheses without a default null…
Mayo, Ruth; Alfasi, Dana; Schwarz, Norbert
2014-06-01
Feelings of distrust alert people not to take information at face value, which may influence their reasoning strategy. Using the Wason (1960) rule identification task, we tested whether chronic and temporary distrust increase the use of negative hypothesis testing strategies suited to falsify one's own initial hunch. In Study 1, participants who were low in dispositional trust were more likely to engage in negative hypothesis testing than participants high in dispositional trust. In Study 2, trust and distrust were induced through an alleged person-memory task. Paralleling the effects of chronic distrust, participants exposed to a single distrust-eliciting face were 3 times as likely to engage in negative hypothesis testing as participants exposed to a trust-eliciting face. In both studies, distrust increased negative hypothesis testing, which was associated with better performance on the Wason task. In contrast, participants' initial rule generation was not consistently affected by distrust. These findings provide the first evidence that distrust can influence which reasoning strategy people adopt. PsycINFO Database Record (c) 2014 APA, all rights reserved.
In Defense of the Play-Creativity Hypothesis
ERIC Educational Resources Information Center
Silverman, Irwin W.
2016-01-01
The hypothesis that pretend play facilitates the creative thought process in children has received a great deal of attention. In a literature review, Lillard et al. (2013, p. 8) concluded that the evidence for this hypothesis was "not convincing." This article focuses on experimental and training studies that have tested this hypothesis.…
The frequentist implications of optional stopping on Bayesian hypothesis tests.
Sanborn, Adam N; Hills, Thomas T
2014-04-01
Null hypothesis significance testing (NHST) is the most commonly used statistical methodology in psychology. The probability of achieving a value as extreme or more extreme than the statistic obtained from the data is evaluated, and if it is low enough, the null hypothesis is rejected. However, because common experimental practice often clashes with the assumptions underlying NHST, these calculated probabilities are often incorrect. Most commonly, experimenters use tests that assume that sample sizes are fixed in advance of data collection but then use the data to determine when to stop; in the limit, experimenters can use data monitoring to guarantee that the null hypothesis will be rejected. Bayesian hypothesis testing (BHT) provides a solution to these ills because the stopping rule used is irrelevant to the calculation of a Bayes factor. In addition, there are strong mathematical guarantees on the frequentist properties of BHT that are comforting for researchers concerned that stopping rules could influence the Bayes factors produced. Here, we show that these guaranteed bounds have limited scope and often do not apply in psychological research. Specifically, we quantitatively demonstrate the impact of optional stopping on the resulting Bayes factors in two common situations: (1) when the truth is a combination of the hypotheses, such as in a heterogeneous population, and (2) when a hypothesis is composite (taking multiple parameter values), such as the alternative hypothesis in a t-test. We found that, in these situations, while the Bayesian interpretation remains correct regardless of the stopping rule used, the choice of stopping rule can greatly increase the chance of experimenters finding evidence in the direction they desire. We suggest ways to control these frequentist implications of stopping rules on BHT.
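The optional-stopping effect is straightforward to simulate. The sketch below (an illustration under assumed settings, not the authors' code) monitors the Bayes factor for a point null against a normal-prior alternative as observations arrive and stops as soon as it crosses a threshold; when the point null is exactly true, as here, the martingale bound caps the hit rate at 1/threshold, and the paper's point is that such guarantees weaken further for mixture or composite truths:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    sigma_mu, n_max, threshold = 1.0, 500, 3.0   # assumed settings
    ns = np.arange(1, n_max + 1)

    hits = 0
    for _ in range(1000):                        # the point null is true: mu = 0
        x = rng.normal(0.0, 1.0, n_max)
        means = np.cumsum(x) / ns
        # Bayes factor for H1: mu ~ N(0, sigma_mu^2) vs H0: mu = 0, with
        # x_i ~ N(mu, 1); the running sample mean is a sufficient statistic.
        bf10 = (norm.pdf(means, 0, np.sqrt(sigma_mu**2 + 1 / ns)) /
                norm.pdf(means, 0, np.sqrt(1 / ns)))
        if (bf10 > threshold).any():             # optional stopping on the BF
            hits += 1
    # Under a true point null the hit rate is bounded by 1/threshold (~0.33),
    # still far above the fixed-n rate; the bound loosens for composite truths.
    print("proportion of runs ever reaching BF10 > 3:", hits / 1000)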
Does contextual cuing guide the deployment of attention?
Kunar, Melina A; Flusberg, Stephen; Horowitz, Todd S; Wolfe, Jeremy M
2007-08-01
Contextual cuing experiments show that when displays are repeated, reaction times to find a target decrease over time even when observers are not aware of the repetition. It has been thought that the context of the display guides attention to the target. The authors tested this hypothesis by comparing the effects of guidance in a standard search task with the effects of contextual cuing. First, in standard search, an improvement in guidance causes search slopes (derived from Reaction Time x Set Size functions) to decrease. In contrast, the authors found that search slopes in contextual cuing did not decrease over time (Experiment 1). Second, when guidance was optimal (e.g., in easy feature search), they still found a small but reliable contextual cuing effect (Experiments 2a and 2b), suggesting that other factors, such as response selection, contribute to the effect. Experiment 3 supported this hypothesis by showing that the contextual cuing effect disappeared when the authors added interference to the response selection process. Overall, the data suggest that the relationship between guidance and contextual cuing is weak and that response selection can account for part of the effect. (c) 2007 APA, all rights reserved
Optimum swimming pathways of fish spawning migrations in rivers
McElroy, Brandon; DeLonay, Aaron; Jacobson, Robert
2012-01-01
Fishes that swim upstream in rivers to spawn must navigate complex fluvial velocity fields to arrive at their ultimate locations. One hypothesis with substantial implications is that fish traverse pathways that minimize their energy expenditure during migration. Here we present the methodological and theoretical developments necessary to test this and similar hypotheses. First, a cost function is derived for upstream migration that relates work done by a fish to swimming drag. The energetic cost scales with the cube of a fish's relative velocity integrated along its path. By normalizing to the energy requirements of holding a position in the slowest waters at the path's origin, a cost function is derived that depends only on the physical environment and not on specifics of individual fish. Then, as an example, we demonstrate the analysis of a migration pathway of a telemetrically tracked pallid sturgeon (Scaphirhynchus albus) in the Missouri River (USA). The actual pathway's cost is lower than that of 105 random paths through the surveyed reach and is consistent with the optimization hypothesis. The implication—subject to more extensive validation—is that reproductive success in managed rivers could be increased through manipulation of reservoir releases or channel morphology to increase the abundance of lower-cost migration pathways.
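In the notation assumed here (a sketch consistent with the abstract, not quoted from the paper), the normalized cost function takes the form:

    C = \frac{\int_{0}^{T} \lvert \mathbf{v}_{\mathrm{fish}}(t) - \mathbf{v}_{\mathrm{water}}(t) \rvert^{3} \, dt}{v_{0}^{3} \, T}

where the numerator integrates the cube of the fish's velocity relative to the water along its path, and the denominator is the equivalent cost of holding position for the same duration T against the slowest water speed v_0 at the path's origin. C therefore depends only on the flow field and the chosen path, not on the drag coefficient or body size of the individual fish.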
Dilworth-Bart, Janean E; Miller, Kyle E; Hane, Amanda
2012-04-01
We examined the joint roles of child negative emotionality and parenting in the visual-spatial development of toddlers born preterm or with low birthweights (PTLBW). Neonatal risk data were collected at hospital discharge, observer- and parent-rated child negative emotionality was assessed at 9-months postterm, and mother-initiated task changes and flexibility during play were observed during a dyadic play interaction at 16-months postterm. Abbreviated IQ scores, and verbal/nonverbal and visual-spatial processing data were collected at 24-months postterm. Hierarchical regression analyses did not support our hypothesis that the visual-spatial processing of PTLBW toddlers with higher negative emotionality would be differentially susceptible to parenting behaviors during play. Instead, observer-rated distress and a negativity composite score were associated with less optimal visual-spatial processing when mothers were more flexible during the 16-month play interaction. Mother-initiated task changes did not interact with any of the negative emotionality variables to predict any of the 24-month neurocognitive outcomes, nor did maternal flexibility interact with mother-rated difficult temperament to predict the visual-spatial processing outcomes. Copyright © 2011 Elsevier Inc. All rights reserved.
Gautestad, Arild O; Mysterud, Atle
2013-01-01
The Lévy flight foraging hypothesis predicts a transition from scale-free Lévy walk (LW) to scale-specific Brownian motion (BM) as an animal moves from resource-poor towards resource-rich environment. However, the LW-BM continuum implies a premise of memory-less search, which contradicts the cognitive capacity of vertebrates. We describe methods to test if apparent support for LW-BM transitions may rather be a statistical artifact from movement under varying intensity of site fidelity. A higher frequency of returns to previously visited patches (stronger site fidelity) may erroneously be interpreted as a switch from LW towards BM. Simulations of scale-free, memory-enhanced space use illustrate how the ratio between return events and scale-free exploratory movement translates to varying strength of site fidelity. An expanded analysis of GPS data of 18 female red deer, Cervus elaphus, strengthens previous empirical support of memory-enhanced and scale-free space use in a northern forest ecosystem. A statistical mechanical model architecture that describes foraging under environment-dependent variation of site fidelity may allow for higher realism of optimal search models and movement ecology in general, in particular for vertebrates with high cognitive capacity.
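A minimal simulation of the confound the authors describe (illustrative only; the step distribution, return rule, and parameter values are assumptions):

    import numpy as np

    rng = np.random.default_rng(5)

    def walk(n_steps, mu=2.0, p_return=0.0):
        # Scale-free exploratory steps with tail exponent mu, mixed with
        # occasional returns to previously visited sites; p_return sets the
        # strength of site fidelity.
        pos = np.zeros(2)
        visited = [pos.copy()]
        for _ in range(n_steps):
            if len(visited) > 1 and rng.random() < p_return:
                pos = visited[rng.integers(len(visited) - 1)].copy()
            else:
                step = 1.0 + rng.pareto(mu - 1)          # P(l) ~ l^(-mu)
                angle = rng.uniform(0.0, 2.0 * np.pi)
                pos = pos + step * np.array([np.cos(angle), np.sin(angle)])
            visited.append(pos.copy())
        return np.array(visited)

    pure_levy = walk(5000, p_return=0.0)
    with_fidelity = walk(5000, p_return=0.2)
    # An analysis blind to memory sees shorter net displacements in the
    # second track and may misread site fidelity as a LW-to-BM transition.
    print(pure_levy.shape, with_fidelity.shape)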
Simulation of talking faces in the human brain improves auditory speech recognition
von Kriegstein, Katharina; Dogan, Özgür; Grüter, Martina; Giraud, Anne-Lise; Kell, Christian A.; Grüter, Thomas; Kleinschmidt, Andreas; Kiebel, Stefan J.
2008-01-01
Human face-to-face communication is essentially audiovisual. Typically, people talk to us face-to-face, providing concurrent auditory and visual input. Understanding someone is easier when there is visual input, because visual cues like mouth and tongue movements provide complementary information about speech content. Here, we hypothesized that, even in the absence of visual input, the brain optimizes both auditory-only speech and speaker recognition by harvesting speaker-specific predictions and constraints from distinct visual face-processing areas. To test this hypothesis, we performed behavioral and neuroimaging experiments in two groups: subjects with a face recognition deficit (prosopagnosia) and matched controls. The results show that observing a specific person talking for 2 min improves subsequent auditory-only speech and speaker recognition for this person. In both prosopagnosics and controls, behavioral improvement in auditory-only speech recognition was based on an area typically involved in face-movement processing. Improvement in speaker recognition was only present in controls and was based on an area involved in face-identity processing. These findings challenge current unisensory models of speech processing, because they show that, in auditory-only speech, the brain exploits previously encoded audiovisual correlations to optimize communication. We suggest that this optimization is based on speaker-specific audiovisual internal models, which are used to simulate a talking face. PMID:18436648
Optimal allocation of leaf epidermal area for gas exchange.
de Boer, Hugo J; Price, Charles A; Wagner-Cremer, Friederike; Dekker, Stefan C; Franks, Peter J; Veneklaas, Erik J
2016-06-01
A long-standing research focus in phytology has been to understand how plants allocate leaf epidermal space to stomata in order to achieve an economic balance between the plant's carbon needs and water use. Here, we present a quantitative theoretical framework to predict allometric relationships between morphological stomatal traits in relation to leaf gas exchange and the required allocation of epidermal area to stomata. Our theoretical framework was derived from first principles of diffusion and geometry based on the hypothesis that selection for higher anatomical maximum stomatal conductance (gsmax ) involves a trade-off to minimize the fraction of the epidermis that is allocated to stomata. Predicted allometric relationships between stomatal traits were tested with a comprehensive compilation of published and unpublished data on 1057 species from all major clades. In support of our theoretical framework, stomatal traits of this phylogenetically diverse sample reflect spatially optimal allometry that minimizes investment in the allocation of epidermal area when plants evolve towards higher gsmax . Our results specifically highlight that the stomatal morphology of angiosperms evolved along spatially optimal allometric relationships. We propose that the resulting wide range of viable stomatal trait combinations equips angiosperms with developmental and evolutionary flexibility in leaf gas exchange unrivalled by gymnosperms and pteridophytes. © 2016 The Authors New Phytologist © 2016 New Phytologist Trust.
Classification of Focal and Non Focal Epileptic Seizures Using Multi-Features and SVM Classifier.
Sriraam, N; Raghu, S
2017-09-02
Identifying epileptogenic zones prior to surgery is an essential and crucial step in treating patients with pharmacoresistant focal epilepsy. The electroencephalogram (EEG) is a significant measurement benchmark for assessing patients suffering from epilepsy. This paper investigates the application of multi-features derived from different domains to recognize focal and non focal epileptic seizures in EEG obtained from pharmacoresistant focal epilepsy patients in the Bern Barcelona database. From the dataset, five different classification tasks were formed. In total, 26 features were extracted from focal and non focal EEG. Significant features were selected using the Wilcoxon rank sum test, requiring p < 0.05 and |z| > 1.96 at the 95% significance level. We hypothesized that removing outliers would improve classification accuracy. Tukey's range test was adopted for pruning outliers from the feature set. Finally, 21 features were classified using an optimized support vector machine (SVM) classifier with 10-fold cross validation. A Bayesian optimization technique was adopted to minimize the cross-validation loss. From the simulation results, it was inferred that the highest sensitivity, specificity, and classification accuracy of 94.56%, 89.74%, and 92.15%, respectively, were achieved, and these were found to be better than state-of-the-art approaches. Further, it was observed that the classification accuracy improved from 80.2% with outliers to 92.15% without outliers. The classifier performance metrics confirm the suitability of the proposed multi-features with the optimized SVM classifier. It can be concluded that the proposed approach can be applied for recognition of focal EEG signals to localize epileptogenic zones.
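A simplified sketch of the screening-plus-classification pipeline described (illustrative only: the features are synthetic stand-ins for the 26 EEG features, and the Tukey outlier pruning and Bayesian hyperparameter optimization steps are omitted):

    import numpy as np
    from scipy.stats import ranksums
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 26))               # stand-ins for the 26 features
    y = rng.integers(0, 2, size=200)             # focal (1) vs non focal (0)

    # Keep features whose rank-sum test separates the classes at p < 0.05.
    keep = [j for j in range(X.shape[1])
            if ranksums(X[y == 0, j], X[y == 1, j]).pvalue < 0.05]
    if not keep:                                 # random data may select none
        keep = list(range(X.shape[1]))

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    print("10-fold CV accuracy:", cross_val_score(clf, X[:, keep], y, cv=10).mean())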
Yang, Qi; Franco, Christopher M M; Zhang, Wei
2015-10-01
Experiments were designed to validate the two common DNA extraction protocols (CTAB-based method and DNeasy Blood & Tissue Kit) used to effectively recover actinobacterial DNA from sponge samples in order to study the sponge-associated actinobacterial diversity. This was done by artificially spiking sponge samples with actinobacteria (spores, mycelia and a combination of the two). Our results demonstrated that both DNA extraction methods were effective in obtaining DNA from the sponge samples as well as the sponge samples spiked with different amounts of actinobacteria. However, it was noted that in the presence of the sponge, the bacterial 16S rRNA gene could not be amplified unless the combined DNA template was diluted. To test the hypothesis that the extracted sponge DNA contained inhibitors, dilutions of the DNA extracts were tested for six sponge species representing five orders. The results suggested that the inhibitors were co-extracted with the sponge DNA, and a high dilution of this DNA was required for the successful PCR amplification for most of the samples. The optimized PCR conditions, including primer selection, PCR reaction system and program optimization, further improved the PCR performance. However, no single PCR condition was found to be suitable for the diverse sponge samples using various primer sets. These results highlight for the first time that the DNA extraction methods used are effective in obtaining actinobacterial DNA and that the presence of inhibitors in the sponge DNA requires high dilution coupled with fine tuning of the PCR conditions to achieve success in the study of sponge-associated actinobacterial diversity.
Optimization of constellation jettisoning regards to short term collision risks
NASA Astrophysics Data System (ADS)
Handschuh, D.-A.; Bourgeois, E.
2018-04-01
The space debris problem is directly linked to the risk of in-orbit collisions between artificial satellites. With the increase in space constellation projects, multi-payload launches are expected to multiply. In the specific case where many satellites are injected into orbit by the same launcher upper stage, all these objects will be placed on similar orbits, very close to one another, at the very moment when their control capabilities are most limited. Under this hypothesis, it is up to the launcher operator to ensure that the simultaneous in-orbit injection is safe enough to guarantee non-collision between all the objects under a ballistic hypothesis, possibly considering appropriate uncertainties. The purpose of the present study is to find optimized safe separation conditions that limit the in-orbit collision risk following the injection of many objects on very close orbits within a short time span.
TRANSGENIC MOUSE MODELS AND PARTICULATE MATTER (PM)
The hypothesis to be tested is that metal catalyzed oxidative stress can contribute to the biological effects of particulate matter. We acquired several transgenic mouse strains to test this hypothesis. Breeding of the mice was accomplished by Duke University. Particles employed ...
Hypothesis Testing Using the Films of the Three Stooges
ERIC Educational Resources Information Center
Gardner, Robert; Davidson, Robert
2010-01-01
The use of The Three Stooges' films as a source of data in an introductory statistics class is described. The Stooges' films are separated into three populations. Using these populations, students may conduct hypothesis tests with data they collect.
Hovick, Stephen M; Whitney, Kenneth D
2014-01-01
The hypothesis that interspecific hybridisation promotes invasiveness has received much recent attention, but tests of the hypothesis can suffer from important limitations. Here, we provide the first systematic review of studies experimentally testing the hybridisation-invasion (H-I) hypothesis in plants, animals and fungi. We identified 72 hybrid systems for which hybridisation has been putatively associated with invasiveness, weediness or range expansion. Within this group, 15 systems (comprising 34 studies) experimentally tested performance of hybrids vs. their parental species and met our other criteria. Both phylogenetic and non-phylogenetic meta-analyses demonstrated that wild hybrids were significantly more fecund and larger than their parental taxa, but did not differ in survival. Resynthesised hybrids (which typically represent earlier generations than do wild hybrids) did not consistently differ from parental species in fecundity, survival or size. Using meta-regression, we found that fecundity increased (but survival decreased) with generation in resynthesised hybrids, suggesting that natural selection can play an important role in shaping hybrid performance – and thus invasiveness – over time. We conclude that the available evidence supports the H-I hypothesis, with the caveat that our results are clearly driven by tests in plants, which are more numerous than tests in animals and fungi. PMID:25234578
The Harm Done to Reproducibility by the Culture of Null Hypothesis Significance Testing.
Lash, Timothy L
2017-09-15
In the last few years, stakeholders in the scientific community have raised alarms about a perceived lack of reproducibility of scientific results. In reaction, guidelines for journals have been promulgated and grant applicants have been asked to address the rigor and reproducibility of their proposed projects. Neither solution addresses a primary culprit, which is the culture of null hypothesis significance testing that dominates statistical analysis and inference. In an innovative research enterprise, selection of results for further evaluation based on null hypothesis significance testing is doomed to yield a low proportion of reproducible results and a high proportion of effects that are initially overestimated. In addition, the culture of null hypothesis significance testing discourages quantitative adjustments to account for systematic errors and quantitative incorporation of prior information. These strategies would otherwise improve reproducibility and have not been previously proposed in the widely cited literature on this topic. Without discarding the culture of null hypothesis significance testing and implementing these alternative methods for statistical analysis and inference, all other strategies for improving reproducibility will yield marginal gains at best. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Liberating Lévy walk research from the shackles of optimal foraging
NASA Astrophysics Data System (ADS)
Reynolds, Andy
2015-09-01
There is now compelling evidence that many organisms have movement patterns that can be described as Lévy walks, or Lévy flights. Lévy movement patterns have been identified in cells, microorganisms, molluscs, insects, reptiles, fish, birds and even human hunter-gatherers. Most research into Lévy walks as models of organism movement patterns has been shaped by the 'Lévy flight foraging hypothesis'. This states that, since Lévy walks can optimize search efficiencies, natural selection should lead to adaptations that select for Lévy walk foraging. However, a growing body of research on generative mechanisms suggests that Lévy walks can arise freely as by-products of otherwise innocuous behaviours; consequently their advantageous properties are purely coincidental. This suggests that the Lévy flight foraging hypothesis should be amended, or even replaced, by a simpler and more general hypothesis. This new hypothesis would state that 'Lévy walks emerge spontaneously and naturally from innate behaviours and innocuous responses to the environment but, if advantageous, then there could be selection against losing them'. The new hypothesis has the virtue of making fewer assumptions and being broader than the original hypothesis; it also encompasses the many examples of suboptimal Lévy patterns that challenge the prevailing paradigm. This does not detract from the Lévy flight foraging hypothesis; in fact, it adds to the theory by providing a stronger and more compelling case for the occurrence of Lévy walks. It dispenses with concerns about the theoretical arguments in support of the Lévy flight foraging hypothesis and so may lead to a wider acceptance of Lévy walks as models of movement pattern data. Furthermore, organisms can approximate Lévy walks by adapting intrinsic behaviour in simple ways; this occurs when Lévy movement patterns are advantageous but come with an associated cost. These new developments represent a major change in perspective and provide the broadest picture yet of Lévy movement patterns. However, the process of understanding and identifying Lévy movement patterns still has a long way to go, and further reinterpretations and shifts in understanding will occur. In conclusion, Lévy walk research remains exciting precisely because so much remains to be understood, and because even relatively small studies are interesting discoveries in their own right.
The Impact of Economic Factors and Acquisition Reforms on the Cost of Defense Weapon Systems
2006-03-01
To test for homoskedasticity, the Breusch-Pagan test is employed. The null hypothesis of the Breusch-Pagan test is that the variance is constant. Using the Breusch-Pagan test shown in Table 19, the prob > chi2 value is greater than α = 0.05; therefore, we fail to reject the null hypothesis of constant variance.
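For readers wishing to reproduce this diagnostic, the Breusch-Pagan test is available in statsmodels; the sketch below uses simulated data, not the thesis's cost-overrun dataset:

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import het_breuschpagan

    rng = np.random.default_rng(3)
    # Simulated stand-in for a cost-overrun regression.
    X = sm.add_constant(rng.normal(size=(120, 3)))
    y = X @ np.array([1.0, 0.5, -0.3, 0.2]) + rng.normal(size=120)

    resid = sm.OLS(y, X).fit().resid
    lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(resid, X)
    # Null hypothesis: constant variance (homoskedasticity). A p-value
    # above 0.05 means we fail to reject it.
    print(lm_pvalue, f_pvalue)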
Competition strength influences individual preferences in an auction game
Toelch, Ulf; Jubera-Garcia, Esperanza; Kurth-Nelson, Zeb; Dolan, Raymond J.
2014-01-01
Competitive interactions between individuals are ubiquitous in human societies. Auctions represent an institutionalized context for these interactions, a context where individuals frequently make non-optimal decisions. In particular, competition in auctions can lead to overbidding, resulting in the so-called winner’s curse, often explained by invoking emotional arousal. In this study, we investigated an alternative possibility, namely that competitors’ bids are construed as a source of information about the good’s common value thereby influencing an individuals’ private value estimate. We tested this hypothesis by asking participants to bid in a repeated all-pay auction game for five different real items. Crucially, participants had to rank the auction items for their preference before and after the experiment. We observed a clear relation between auction dynamics and preference change. We found that low competition reduced preference while high competition increased preference. Our findings support a view that competitors’ bids in auction games are perceived as valid social signal for the common value of an item. We suggest that this influence of social information constitutes a major cause for the frequently observed deviations from optimality in auctions. PMID:25168161
Scale-dependent feedbacks between patch size and plant reproduction in desert grassland
Svejcar, Lauren N.; Bestelmeyer, Brandon T.; Duniway, Michael C.; James, Darren K.
2015-01-01
Theoretical models suggest that scale-dependent feedbacks between plant reproductive success and plant patch size govern transitions from highly to sparsely vegetated states in drylands, yet there is scant empirical evidence for these mechanisms. Scale-dependent feedback models suggest that an optimal patch size exists for growth and reproduction of plants and that a threshold patch organization exists below which positive feedbacks between vegetation and resources can break down, leading to critical transitions. We examined the relationship between patch size and plant reproduction using an experiment in a Chihuahuan Desert grassland. We tested the hypothesis that reproductive effort and success of a dominant grass (Bouteloua eriopoda) would vary predictably with patch size. We found that focal plants in medium-sized patches featured higher rates of grass reproductive success than when plants occupied either large patch interiors or small patches. These patterns support the existence of scale-dependent feedbacks in Chihuahuan Desert grasslands and indicate an optimal patch size for reproductive effort and success in B. eriopoda. We discuss the implications of these results for detecting ecological thresholds in desert grasslands.
Identifying functionally informative evolutionary sequence profiles.
Gil, Nelson; Fiser, Andras
2018-04-15
Multiple sequence alignments (MSAs) can provide essential input to many bioinformatics applications, including protein structure prediction and functional annotation. However, the optimal selection of sequences to obtain biologically informative MSAs for such purposes is poorly explored and has traditionally been performed manually. We present Selection of Alignment by Maximal Mutual Information (SAMMI), an automated, sequence-based approach to objectively select an optimal MSA from a large set of alternatives sampled from a general sequence database search. The hypothesis of this approach is that the mutual information among MSA columns will be maximal for those MSAs that contain the most diverse set possible of the most structurally and functionally homogeneous protein sequences. SAMMI was tested by selecting MSAs for functional site residue prediction via analysis of conservation patterns on a set of 435 proteins obtained from protein-ligand (peptides, nucleic acids and small substrates) and protein-protein interaction databases. Availability and implementation: A freely accessible program implementing SAMMI, including source code, is available at https://github.com/nelsongil92/SAMMI.git. Contact: andras.fiser@einstein.yu.edu. Supplementary data are available at Bioinformatics online.
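The core quantity SAMMI maximizes, the mutual information among MSA columns, can be estimated directly from column frequencies; the sketch below illustrates only that quantity, not the tool's actual selection procedure:

    import math
    from collections import Counter

    def column_mutual_information(col_i, col_j):
        # Mutual information between two alignment columns (sequences of
        # residues), estimated from joint and marginal frequencies.
        n = len(col_i)
        pi = Counter(col_i)
        pj = Counter(col_j)
        pij = Counter(zip(col_i, col_j))
        return sum((c / n) * math.log((c / n) / ((pi[a] / n) * (pj[b] / n)))
                   for (a, b), c in pij.items())

    # Toy 4-sequence alignment: columns 0 and 1 co-vary perfectly.
    msa = ["AC", "AC", "GT", "GT"]
    cols = list(zip(*msa))
    print(column_mutual_information(cols[0], cols[1]))  # ln 2 ~ 0.693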
Janssen, Christian P; Brumby, Duncan P; Dowell, John; Chater, Nick; Howes, Andrew
2011-01-01
We report the results of a dual-task study in which participants performed a tracking and typing task under various experimental conditions. An objective payoff function was used to provide explicit feedback on how participants should trade off performance between the tasks. Results show that participants' dual-task interleaving strategy was sensitive to changes in the difficulty of the tracking task and resulted in differences in overall task performance. To test the hypothesis that people select strategies that maximize payoff, a Cognitively Bounded Rational Analysis model was developed. This analysis evaluated a variety of dual-task interleaving strategies to identify the optimal strategy for maximizing payoff in each condition. The model predicts that the region of optimum performance is different between experimental conditions. The correspondence between human data and the prediction of the optimal strategy is found to be remarkably high across a number of performance measures. This suggests that participants were honing their behavior to maximize payoff. Limitations are discussed. Copyright © 2011 Cognitive Science Society, Inc.
ERIC Educational Resources Information Center
Tryon, Warren W.; Lewis, Charles
2008-01-01
Evidence of group matching frequently takes the form of a nonsignificant test of statistical difference. Theoretical hypotheses of no difference are also tested in this way. These practices are flawed in that null hypothesis statistical testing provides evidence against the null hypothesis and failing to reject H[subscript 0] is not evidence…
Effects of Item Exposure for Conventional Examinations in a Continuous Testing Environment.
ERIC Educational Resources Information Center
Hertz, Norman R.; Chinn, Roberta N.
This study explored the effect of item exposure on two conventional examinations administered as computer-based tests. A principal hypothesis was that item exposure would have little or no effect on average difficulty of the items over the course of an administrative cycle. This hypothesis was tested by exploring conventional item statistics and…
ERIC Educational Resources Information Center
McNeil, Keith
The use of directional and nondirectional hypothesis testing was examined from the perspectives of textbooks, journal articles, and members of editorial boards. Three widely used statistical texts were reviewed in terms of how directional and nondirectional tests of significance were presented. Texts reviewed were written by: (1) D. E. Hinkle, W.…
The Feminization of School Hypothesis Called into Question among Junior and High School Students
ERIC Educational Resources Information Center
Verniers, Catherine; Martinot, Delphine; Dompnier, Benoît
2016-01-01
Background: The feminization of school hypothesis suggests that boys underachieve in school compared to girls because school rewards feminine characteristics that are at odds with boys' masculine features. Aims: The feminization of school hypothesis lacks empirical evidence. The aim of this study was to test this hypothesis by examining the extent…
Supporting shared hypothesis testing in the biomedical domain.
Agibetov, Asan; Jiménez-Ruiz, Ernesto; Ondrésik, Marta; Solimando, Alessandro; Banerjee, Imon; Guerrini, Giovanna; Catalano, Chiara E; Oliveira, Joaquim M; Patanè, Giuseppe; Reis, Rui L; Spagnuolo, Michela
2018-02-08
Pathogenesis of inflammatory diseases can be tracked by studying the causality relationships among the factors contributing to their development. We could, for instance, hypothesize on the connections of the pathogenesis outcomes to the observed conditions. To prove such causal hypotheses we would need a full understanding of the causal relationships, and we would have to provide all the necessary evidence to support our claims. In practice, however, we might not possess all the background knowledge on the causality relationships, and we might be unable to collect all the evidence to prove our hypotheses. In this work we propose a methodology for translating biological knowledge on causality relationships of biological processes, and their effects on conditions, into a computational framework for hypothesis testing. The methodology consists of two main points: hypothesis graph construction from the formalization of the background knowledge on causality relationships, and confidence measurement in a causality hypothesis as a normalized weighted path computation in the hypothesis graph. In this framework, we can simulate the collection of evidence and assess confidence in a causality hypothesis proportionally to the amount of available knowledge and collected evidence. We evaluate our methodology on a hypothesis graph that represents both the contributing factors which may cause cartilage degradation and the factors which might be caused by cartilage degradation during osteoarthritis. Hypothesis graph construction proved robust to the addition of potentially contradictory information on simultaneously positive and negative effects. The obtained confidence measures for specific causality hypotheses were validated by our domain experts and correspond closely to their subjective assessments of confidence in the investigated hypotheses. Overall, our methodology for a shared hypothesis testing framework exhibits important properties that researchers will find useful in literature review for their experimental studies, in planning and prioritizing evidence collection procedures, and in testing their hypotheses at different depths of knowledge of the causal dependencies of biological processes and their effects on the observed conditions.
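A minimal sketch of the scoring idea (the edge names, weights, and geometric-mean normalization below are assumptions for illustration, not the paper's implementation):

    # Confidence in a causal hypothesis scored as a normalized weighted path
    # through a graph of causal links; each edge weight stands for background
    # knowledge plus collected evidence for that link.
    edges = {
        ("inflammation", "enzyme_activity"): 0.9,
        ("enzyme_activity", "cartilage_degradation"): 0.7,
        ("cartilage_degradation", "joint_pain"): 0.5,
    }

    def path_confidence(path):
        weights = [edges[(a, b)] for a, b in zip(path, path[1:])]
        # Normalize by path length (geometric mean) so long causal chains
        # are not unfairly penalized relative to short ones.
        product = 1.0
        for w in weights:
            product *= w
        return product ** (1.0 / len(weights))

    print(path_confidence(["inflammation", "enzyme_activity",
                           "cartilage_degradation", "joint_pain"]))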
The limits to pride: A test of the pro-anorexia hypothesis.
Cornelius, Talea; Blanton, Hart
2016-01-01
Many social psychological models propose that positive self-conceptions promote self-esteem. An extreme version of this hypothesis is advanced in "pro-anorexia" communities: identifying with anorexia, in conjunction with disordered eating, can lead to higher self-esteem. The current study empirically tested this hypothesis. Results challenge the pro-anorexia hypothesis. Although those with higher levels of pro-anorexia identification trended towards higher self-esteem with increased disordered eating, this did not overcome the strong negative main effect of pro-anorexia identification. These data suggest a more effective strategy for promoting self-esteem is to encourage rejection of disordered eating and an anorexic identity.
Does the Slow-Growth, High-Mortality Hypothesis Apply Below Ground?
Hourston, James E; Bennett, Alison E; Johnson, Scott N; Gange, Alan C
2016-01-01
Belowground tri-trophic study systems present a challenging environment in which to study plant-herbivore-natural enemy interactions. For this reason, belowground examples are rarely available for testing general ecological theories. To redress this imbalance, we present, for the first time, data from a belowground tri-trophic system to test the slow-growth, high-mortality hypothesis. We investigated whether the differing performance of entomopathogenic nematodes (EPNs) in controlling the common pest black vine weevil Otiorhynchus sulcatus could be linked to differently resistant cultivars of the red raspberry Rubus idaeus. The O. sulcatus larvae recovered from R. idaeus plants showed significantly slower growth and higher mortality on the Glen Rosa cultivar relative to the more commercially favored Glen Ample cultivar, creating a convenient system for testing this hypothesis. Heterorhabditis megidis was found to be less effective at controlling O. sulcatus than Steinernema kraussei, but conformed to the hypothesis. However, S. kraussei maintained high levels of O. sulcatus mortality regardless of how larval growth was influenced by R. idaeus cultivar. We link this to direct effects of S. kraussei on reducing O. sulcatus larval mass, indicating potential sub-lethal effects of S. kraussei, which the slow-growth, high-mortality hypothesis does not account for. Possible origins of these sub-lethal effects of EPN infection, and how they may bear on a hypothesis designed and tested with aboveground predator and parasitoid systems, are discussed.
Optimal multisensory decision-making in a reaction-time task.
Drugowitsch, Jan; DeAngelis, Gregory C; Klier, Eliana M; Angelaki, Dora E; Pouget, Alexandre
2014-06-14
Humans and animals can integrate sensory evidence from various sources to make decisions in a statistically near-optimal manner, provided that the stimulus presentation time is fixed across trials. Little is known about whether optimality is preserved when subjects can choose when to make a decision (a reaction-time task), or when sensory inputs have time-varying reliability. Using a reaction-time version of a visual/vestibular heading discrimination task, we show that behavior is clearly sub-optimal when quantified with traditional optimality metrics that ignore reaction times. We created a computational model that accumulates evidence optimally across both cues and time, and trades off accuracy with decision speed. This model quantitatively explains subjects' choices and reaction times, supporting the hypothesis that subjects do, in fact, accumulate evidence optimally over time and across sensory modalities, even when the reaction time is under the subject's control.
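The described model can be caricatured in a few lines: an accumulator that weights each momentary sample from each modality by its reliability (inverse variance) and responds when a bound is reached (an illustration under assumed parameters, not the authors' model code):

    import numpy as np

    rng = np.random.default_rng(4)
    dt, bound, n_steps, drift = 0.01, 1.0, 2000, 0.5   # assumed parameters

    # Two sensory streams whose reliability varies over time; the optimal
    # accumulator weights each momentary sample by its inverse variance,
    # here assumed known to the observer.
    sigma_vis = 1.0 + 0.5 * np.sin(np.linspace(0.0, 3.0, n_steps))
    sigma_vest = 1.2 * np.ones(n_steps)

    evidence, rt = 0.0, None
    for i in range(n_steps):
        dx_vis = drift * dt + sigma_vis[i] * np.sqrt(dt) * rng.normal()
        dx_vest = drift * dt + sigma_vest[i] * np.sqrt(dt) * rng.normal()
        w_vis, w_vest = sigma_vis[i] ** -2, sigma_vest[i] ** -2
        evidence += w_vis * dx_vis + w_vest * dx_vest
        if abs(evidence) >= bound:      # commit as soon as the bound is hit
            rt = (i + 1) * dt
            break
    print("choice:", np.sign(evidence), "reaction time:", rt)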
On the functional optimization of a certain class of nonstationary spatial functions
Christakos, G.; Paraskevopoulos, P.N.
1987-01-01
Procedures are developed to obtain optimal estimates of linear functionals for a wide class of nonstationary spatial functions. These procedures rely on well-established constrained minimum-norm criteria and are applicable to multidimensional phenomena characterized by the so-called hypothesis of inherentity. The latter requires elimination of the polynomial, trend-related components of the spatial function, leading to stationary quantities; it also generates some interesting mathematics within the context of modelling and optimization in several dimensions. The arguments are illustrated using various examples, and a case study is computed in detail. © 1987 Plenum Publishing Corporation.
Growth factor transgenes interactively regulate articular chondrocytes.
Shi, Shuiliang; Mercer, Scott; Eckert, George J; Trippel, Stephen B
2013-04-01
Adult articular chondrocytes lack an effective repair response to correct damage from injury or osteoarthritis. Polypeptide growth factors that stimulate articular chondrocyte proliferation and cartilage matrix synthesis may augment this response. Gene transfer is a promising approach to delivering such factors. Multiple growth factor genes regulate these cell functions, but multiple growth factor gene transfer remains unexplored. We tested the hypothesis that multiple growth factor gene transfer selectively modulates articular chondrocyte proliferation and matrix synthesis. We tested the hypothesis by delivering combinations of the transgenes encoding insulin-like growth factor I (IGF-I), fibroblast growth factor-2 (FGF-2), transforming growth factor beta1 (TGF-β1), bone morphogenetic protein-2 (BMP-2), and bone morphogenetic protein-7 (BMP-7) to articular chondrocytes and measured changes in the production of DNA, glycosaminoglycan, and collagen. The transgenes differentially regulated all these chondrocyte activities. In concert, the transgenes interacted to generate widely divergent responses from the cells. These interactions ranged from inhibitory to synergistic. The transgene pair encoding IGF-I and FGF-2 maximized cell proliferation. The three-transgene group encoding IGF-I, BMP-2, and BMP-7 maximized matrix production and also optimized the balance between cell proliferation and matrix production. These data demonstrate an approach to articular chondrocyte regulation that may be tailored to stimulate specific cell functions, and suggest that certain growth factor gene combinations have potential value for cell-based articular cartilage repair. Copyright © 2012 Wiley Periodicals, Inc.
Financial literacy is associated with white matter integrity in old age.
Han, S Duke; Boyle, Patricia A; Arfanakis, Konstantinos; Fleischman, Debra; Yu, Lei; James, Bryan D; Bennett, David A
2016-04-15
Financial literacy, the ability to understand, access, and utilize information in ways that contribute to optimal financial outcomes, is important for independence and wellbeing in old age. We previously reported that financial literacy is associated with greater functional connectivity between brain regions in old age. Here, we tested the hypothesis that higher financial literacy would be associated with greater white matter integrity in old age. Participants included 346 persons without dementia (mean age=81.36, mean education=15.39, male/female=79/267, mean MMSE=28.52) from the Rush Memory and Aging Project. Financial literacy was assessed using a series of questions embedded as part of an ongoing decision-making study. White matter integrity was assessed with diffusion anisotropy measured with diffusion tensor magnetic resonance imaging (DTI). We tested the hypothesis that higher financial literacy is associated with higher diffusion anisotropy in white matter, adjusting for the effects of age, education, sex, and white matter hyperintense lesions. We then repeated the analysis also adjusting for cognitive function. Analyses revealed regions with significant positive associations between financial literacy and diffusion anisotropy, and many remained significant after accounting for cognitive function. White matter tracts connecting right hemisphere temporal-parietal brain regions were particularly implicated. Greater financial literacy is associated with higher diffusion anisotropy in white matter of nondemented older adults after adjusting for important covariates. These results suggest that financial literacy is positively associated with white matter integrity in old age. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
MachineProse: an Ontological Framework for Scientific Assertions
Dinakarpandian, Deendayal; Lee, Yugyung; Vishwanath, Kartik; Lingambhotla, Rohini
2006-01-01
Objective: The idea of testing a hypothesis is central to the practice of biomedical research. However, the results of testing a hypothesis are published mainly in the form of prose articles. Encoding the results as scientific assertions that are both human and machine readable would greatly enhance the synergistic growth and dissemination of knowledge. Design: We have developed MachineProse (MP), an ontological framework for the concise specification of scientific assertions. MP is based on the idea of an assertion constituting a fundamental unit of knowledge. This is in contrast to current approaches that use discrete concept terms from domain ontologies for annotation, with assertions only inferred heuristically. Measurements: We use illustrative examples to highlight the advantages of MP over the use of the Medical Subject Headings (MeSH) system and keywords in indexing scientific articles. Results: We show how MP makes it possible to carry out semantic annotation of publications that is machine readable and allows for precise search capabilities. In addition, when used by itself, MP serves as a knowledge repository for emerging discoveries. A prototype for proof of concept has been developed that demonstrates the feasibility and novel benefits of MP. As part of the MP framework, we have created an ontology of relationship types with about 100 terms optimized for the representation of scientific assertions. Conclusion: MachineProse is a novel semantic framework that we believe may be used to summarize research findings, annotate biomedical publications, and support sophisticated searches. PMID:16357355
Length dependence of staircase potentiation: interactions with caffeine and dantrolene sodium.
Rassier, D E; MacIntosh, B R
2000-04-01
In skeletal muscle, there is a length dependence of staircase potentiation for which the mechanism is unclear. In this study we tested the hypothesis that abolition of this length dependence by caffeine is effected by a mechanism independent of enhanced Ca2+ release. To test this hypothesis we have used caffeine, which abolishes length dependence of potentiation, and dantrolene sodium, which inhibits Ca2+ release. In situ isometric twitch contractions of rat gastrocnemius muscle before and after 20 s of repetitive stimulation at 5 Hz were analyzed at optimal length (Lo), Lo - 10%, and Lo + 10%. Potentiation was observed to be length dependent, with an increase in developed tension (DT) of 78 ± 12, 51 ± 5, and 34 ± 9% (mean ± SEM), at Lo - 10%, Lo, and Lo + 10%, respectively. Caffeine diminished the length dependence of activation and suppressed the length dependence of staircase potentiation, giving increases in DT of 65 ± 13, 53 ± 11, and 45 ± 12% for Lo - 10%, Lo, and Lo + 10%, respectively. Dantrolene administered after caffeine did not reverse this effect. Dantrolene alone depressed the potentiation response, but did not affect the length dependence of staircase potentiation, with increases in DT of 58 ± 17, 26 ± 8, and 18 ± 7%, respectively. This study confirms that there is a length dependence of staircase potentiation in mammalian skeletal muscle which is suppressed by caffeine. Since dantrolene did not alter this suppression of the length dependence of potentiation by caffeine, it is apparently not directly modulated by Ca2+ availability in the myoplasm.
A critique of statistical hypothesis testing in clinical research
Raha, Somik
2011-01-01
Many have documented the difficulty of using the current paradigm of Randomized Controlled Trials (RCTs) to test and validate the effectiveness of alternative medical systems such as Ayurveda. This paper critiques the applicability of RCTs for all clinical knowledge-seeking endeavors, of which Ayurveda research is a part. This is done by examining statistical hypothesis testing, the underlying foundation of RCTs, from a practical and philosophical perspective. In the philosophical critique, the two main worldviews of probability are the Bayesian and the frequentist. The frequentist worldview is a special case of the Bayesian worldview, requiring the unrealistic assumptions of knowing nothing about the universe and believing that all observations are unrelated to each other. Many have claimed that the first belief is necessary for science, and this claim is debunked by comparing variations in learning with different prior beliefs. Moving beyond the Bayesian and frequentist worldviews, the notion of hypothesis testing itself is challenged on the grounds that a hypothesis is an unclear distinction, and assigning a probability to an unclear distinction is an exercise that does not lead to clarity of action. This critique is of the theory itself and not any particular application of statistical hypothesis testing. A decision-making frame is proposed as a way of both addressing this critique and transcending ideological debates on probability. An example of a Bayesian decision-making approach is shown as an alternative to statistical hypothesis testing, utilizing data from a past clinical trial that studied the effect of aspirin on heart attacks in a sample population of doctors. Because a major reason for the prevalence of RCTs in academia is legislation requiring their use, the ethics of legislating the use of statistical methods for clinical research is also examined. PMID:22022152
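For orientation, here is a minimal sketch of the kind of Bayesian decision-oriented analysis the paper advocates, applied to aspirin-trial-style data. The counts below only approximate the commonly cited figures from the aspirin study in doctors and should be treated as placeholders rather than the trial's exact data:

```python
import numpy as np
from scipy import stats

# Placeholder counts approximating the commonly cited aspirin trial
# in doctors (heart attacks / participants per arm); not exact data.
events_aspirin, n_aspirin = 104, 11037
events_placebo, n_placebo = 189, 11034

# Conjugate Beta(1, 1) priors: each arm's event-rate posterior is
# Beta(1 + events, 1 + non-events).
post_a = stats.beta(1 + events_aspirin, 1 + n_aspirin - events_aspirin)
post_p = stats.beta(1 + events_placebo, 1 + n_placebo - events_placebo)

# Posterior probability that aspirin lowers the event rate, estimated
# by Monte Carlo -- a decision-relevant quantity that a p value is not.
rng = np.random.default_rng(0)
draws_a = post_a.rvs(size=100_000, random_state=rng)
draws_p = post_p.rvs(size=100_000, random_state=rng)
print("Pr(rate_aspirin < rate_placebo | data) =",
      (draws_a < draws_p).mean())
```

The output is a direct statement about the quantity a decision maker cares about, which can then be combined with costs and benefits of action, rather than a statement about data under a null model.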
Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie
2013-01-01
Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is modest. Its use is illustrated with examples. We conclude that, given the cost and complexity of many survival trials, it is better to rely on the minimum p-value than on a single statistic, particularly when that single statistic is the logrank test. Copyright © 2013 John Wiley & Sons, Ltd.
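A minimal sketch of the proposal follows, assuming two-sample data and three off-the-shelf candidate statistics (t, Wilcoxon rank-sum, Kolmogorov-Smirnov); the statistics actually pre-specified in a trial would differ:

```python
import numpy as np
from scipy import stats

def min_p_permutation_test(x, y, n_perm=5000, seed=0):
    """Permutation test based on the minimum p-value over several
    candidate statistics -- a sketch of the general idea, not the
    authors' exact procedure."""
    rng = np.random.default_rng(seed)

    def min_p(a, b):
        return min(stats.ttest_ind(a, b).pvalue,
                   stats.ranksums(a, b).pvalue,
                   stats.ks_2samp(a, b).pvalue)

    observed = min_p(x, y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if min_p(perm[:len(x)], perm[len(x):]) <= observed:
            count += 1
    # Permutation p-value for the min-p statistic itself.
    return (count + 1) / (n_perm + 1)
```

Because the permutation reference distribution is computed for the minimum p-value itself, the multiplicity of (correlated) candidate statistics is automatically accounted for and the type I error rate stays at its designated value.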
Schaarup, Clara; Hartvigsen, Gunnar; Larsen, Lars Bo; Tan, Zheng-Hua; Årsand, Eirik; Hejlesen, Ole Kristian
2015-01-01
The Online Diabetes Exercise System was developed to motivate people with Type 2 diabetes to do a 25-minute low-volume high-intensity interval training program. In a previous multi-method evaluation of the system, several usability issues were identified and corrected. Despite the thorough testing, it was unclear whether all usability problems had been identified using the multi-method evaluation. Our hypothesis was that adding eye-tracking triangulation to the multi-method evaluation would increase the accuracy and completeness of usability testing of the system. The study design was an eye-tracking triangulation: conventional eye-tracking with predefined tasks followed by the Post-Experience Eye-Tracked Protocol (PEEP). Six Areas of Interest were the basis for the PEEP session. The eye-tracking triangulation gave objective and subjective results, which are believed to be highly relevant for designing, implementing, evaluating and optimizing systems in the field of health informatics. Future work should include testing the method on a larger and more representative group of users and applying the method to different system types.
The predatory mite Phytoseiulus persimilis adjusts patch-leaving to own and progeny prey needs.
Vanas, V; Enigl, M; Walzer, A; Schausberger, P
2006-01-01
Integration of optimal foraging and optimal oviposition theories suggests that predator females should adjust patch leaving to own and progeny prey needs to maximize current and future reproductive success. We tested this hypothesis in the predatory mite Phytoseiulus persimilis and its patchily distributed prey, the two-spotted spider mite Tetranychus urticae. In three separate experiments we assessed (1) the minimum number of prey needed to complete juvenile development, (2) the minimum number of prey needed to produce an egg, and (3) the ratio between eggs laid and spider mites left when a gravid P. persimilis female leaves a patch. Experiments (1) and (2) were the prerequisites for assessing the fitness costs associated with staying in or leaving a prey patch. Immature P. persimilis needed at least 7 and on average 14 ± 3.6 (SD) T. urticae eggs to reach adulthood. Gravid females needed at least 5 and on average 8.5 ± 3.1 (SD) T. urticae eggs to produce an egg. Most females left the initial patch before spider mite extinction, leaving prey for progeny to develop to adulthood. Females placed in a low-density patch left 5.6 ± 6.1 (SD) eggs per egg laid, whereas those placed in a high-density patch left 15.8 ± 13.7 (SD) eggs per egg laid. The three experiments in concert suggest that gravid P. persimilis females are able to balance the trade-off between optimal foraging and optimal oviposition and adjust patch-leaving to own and progeny prey needs.
USDA-ARS?s Scientific Manuscript database
This study tests the hypothesis that phylogenetic classification can predict whether A. pullulans strains will produce useful levels of the commercial polysaccharide, pullulan, or the valuable enzyme, xylanase. To test this hypothesis, 19 strains of A. pullulans with previously described phenotypes...
Huang, Peng; Ou, Ai-hua; Piantadosi, Steven; Tan, Ming
2014-11-01
We discuss the problem of properly defining treatment superiority through the specification of hypotheses in clinical trials. The need to precisely define the notion of superiority in a one-sided hypothesis test problem has been well recognized by many authors. Ideally, the null and alternative hypotheses should be designed to correspond to a partition of all possible scenarios of underlying true probability models P = {P(ω) : ω ∈ Ω}, such that the alternative hypothesis H_a = {P(ω) : ω ∈ Ω_a} can be inferred upon rejection of the null hypothesis H_o = {P(ω) : ω ∈ Ω_o}. However, in many cases, tests are carried out and recommendations are made without a precise definition of superiority or a specification of the alternative hypothesis. Moreover, in some applications, the union of probability models specified by the chosen null and alternative hypotheses does not constitute the complete model collection P (i.e., H_o ∪ H_a is smaller than P). This not only imposes a strong non-validated assumption on the underlying true models, but also leads to different superiority claims depending on which test is used, instead of scientific plausibility. Different ways to partition P for testing treatment superiority often have different implications for sample size, power, and significance in both efficacy and comparative effectiveness trial design. Such differences are often overlooked. We provide a theoretical framework for evaluating the statistical properties of different specifications of superiority in typical hypothesis testing. This can help investigators to select proper hypotheses for treatment comparison in clinical trial design. Copyright © 2014 Elsevier Inc. All rights reserved.
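In the paper's notation, the requirement can be stated compactly: the index sets of the two hypotheses should partition the full parameter space,

\[
\Omega_o \cup \Omega_a = \Omega, \qquad \Omega_o \cap \Omega_a = \varnothing,
\]

so that rejecting \(H_o=\{P(\omega):\omega\in\Omega_o\}\) licenses the inference to \(H_a=\{P(\omega):\omega\in\Omega_a\}\). When \(\Omega_o \cup \Omega_a \subsetneq \Omega\), rejection of \(H_o\) does not by itself justify a superiority claim, because the true model may lie outside both hypotheses.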
Akimov, Alexander G; Egorova, Marina A; Ehret, Günter
2017-02-01
Selectivity for processing of species-specific vocalizations and communication sounds has often been associated with the auditory cortex. The midbrain inferior colliculus, however, is the first center in the auditory pathways of mammals integrating acoustic information processed in separate nuclei and channels in the brainstem and, therefore, could significantly contribute to enhance the perception of species' communication sounds. Here, we used natural wriggling calls of mouse pups, which communicate need for maternal care to adult females, and a further 15 synthesized sounds to test the hypothesis that neurons in the central nucleus of the inferior colliculus of adult females optimize their response rates for reproduction of the three main harmonics (formants) of wriggling calls. The results confirmed the hypothesis, showing that average response rates, as recorded extracellularly from single units, were highest and spectral facilitation most effective for both onset and offset responses to the call and call models with three resolved frequencies according to critical bands in perception. In addition, the general on- and/or off-response enhancement in almost half the investigated 122 neurons favors the perception not only of single calls but also of vocalization rhythm. In summary, our study provides strong evidence that critical-band resolved frequency components within a communication sound increase the probability of its perception by boosting the signal-to-noise ratio of neural response rates within the inferior colliculus by at least 20% (our criterion for facilitation). These mechanisms, including enhancement of rhythm coding, are generally favorable to processing of other animal and human vocalizations, including formants of speech sounds. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
A test of the orthographic recoding hypothesis
NASA Astrophysics Data System (ADS)
Gaygen, Daniel E.
2003-04-01
The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of the words. Listeners have a stable orthographic representation of words, but no phonological representation, when those words have been read frequently but never heard or spoken. Such may be the case for low-frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for words presented visually. In the first experiment, the recognition of auditorily presented nonwords was facilitated when they had previously appeared on a visually presented list. The second experiment was similar, but included a concurrent articulation task during the visual word list presentation, thus preventing covert rehearsal of the nonwords. The results were similar to those of the first experiment. The third experiment was an indirect test of memory (auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.
Hamilton, Maryellen; Geraci, Lisa
2006-01-01
According to leading theories, the picture superiority effect is driven by conceptual processing, yet this effect has been difficult to obtain using conceptual implicit memory tests. We hypothesized that the picture superiority effect results from conceptual processing of a picture's distinctive features rather than its semantic features. To test this hypothesis, we used two conceptual implicit general knowledge tests: one cued conceptually distinctive features (e.g., "What animal has large eyes?") and the other cued semantic features (e.g., "What animal is the figurehead of Tootsie Roll?"). Results showed a picture superiority effect only on the conceptual test using distinctive cues, supporting our hypothesis that this effect is mediated by conceptual processing of a picture's distinctive features.
Hypothesis testing for band size detection of high-dimensional banded precision matrices.
An, Baiguo; Guo, Jianhua; Liu, Yufeng
2014-06-01
Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, cross-validation is commonly used; however, we show that cross-validation not only is computationally intensive but can be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.
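As a toy illustration of testing-based band selection (not the high-dimensional test statistic developed in the paper), one can exploit the modified-Cholesky view of a banded precision matrix: each variable then regresses exactly on its k immediate predecessors, so band size k can be tested against k + 1 with a nested-model likelihood ratio:

```python
import numpy as np
from scipy import stats

def row_rss(X, j, k):
    """RSS of regressing X[:, j] on its k immediate predecessors
    (the modified-Cholesky regressions; a band-k precision matrix
    makes these regressions exact). Assumes centered data."""
    lo = max(0, j - k)
    if lo == j:
        return float(np.sum(X[:, j] ** 2))
    Z = X[:, lo:j]
    beta, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
    return float(np.sum((X[:, j] - Z @ beta) ** 2))

def band_size_lr_test(X, k):
    """Toy Gaussian likelihood-ratio test of band size k versus k+1
    (requires k + 1 < p). An illustration of the idea only."""
    n, p = X.shape
    lr, df = 0.0, 0
    for j in range(k + 1, p):      # rows that gain a coefficient
        lr += n * np.log(row_rss(X, j, k) / row_rss(X, j, k + 1))
        df += 1
    return stats.chi2.sf(lr, df)   # small p-value => band exceeds k
```

Scanning k upward and stopping at the first non-rejection yields a band-size estimate without the instability and cost of cross-validation; the paper's contribution is making this logic rigorous when the dimension is large.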
Why do mothers favor girls and fathers, boys? : A hypothesis and a test of investment disparity.
Godoy, Ricardo; Reyes-García, Victoria; McDade, Thomas; Tanner, Susan; Leonard, William R; Huanca, Tomás; Vadez, Vincent; Patel, Karishma
2006-06-01
Growing evidence suggests mothers invest more in girls than boys and fathers more in boys than girls. We develop a hypothesis that predicts preference for girls by the parent facing more resource constraints and preference for boys by the parent facing fewer constraints. We test the hypothesis with panel data from the Tsimane', a foraging-farming society in the Bolivian Amazon. Tsimane' mothers face more resource constraints than fathers. As predicted, mothers' wealth protected girls' BMI, but fathers' wealth had weak effects on boys' BMI. Numerous tests yielded robust results, including those that controlled for fixed effects of child and household.
Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf
2015-03-01
We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: they inform the researcher about the probability of statistically significant and non-significant test outcomes actually being true. Although commonly misunderstood, a reported p value estimates only the probability of obtaining the results or more extreme results if the null hypothesis of no effect were true. Calculations of the more informative PPV and NPV require an a priori estimate of the probability (R) that the alternative hypothesis is true. The present document discusses challenges of estimating R.
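Assuming R denotes the a priori probability that the alternative hypothesis is true, with significance level \(\alpha\) and power \(1-\beta\), both metrics follow from Bayes' rule:

\[
\mathrm{PPV}=\frac{R\,(1-\beta)}{R\,(1-\beta)+(1-R)\,\alpha},
\qquad
\mathrm{NPV}=\frac{(1-R)\,(1-\alpha)}{(1-R)\,(1-\alpha)+R\,\beta}.
\]

Both quantities degrade as R becomes small, which is why the a priori estimate of R is the crux of the calculation.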
Li, Fuhong; Cao, Bihua; Luo, Yuejia; Lei, Yi; Li, Hong
2013-02-01
Functional magnetic resonance imaging (fMRI) was used to examine differences in brain activation that occur when a person receives the different outcomes of hypothesis testing (HT). Participants were provided with a series of images of batteries and were asked to learn a rule governing what kinds of batteries were charged. Within each trial, the first two charged batteries were sequentially displayed, and participants would generate a preliminary hypothesis based on the perceptual comparison. Next, a third battery that served to strengthen, reject, or was irrelevant to the preliminary hypothesis was displayed. The fMRI results revealed that (1) no significant differences in brain activation were found between the 2 hypothesis-maintain conditions (i.e., strengthen and irrelevant conditions); and (2) compared with the hypothesis-maintain conditions, the hypothesis-reject condition activated the left medial frontal cortex, bilateral putamen, left parietal cortex, and right cerebellum. These findings are discussed in terms of the neural correlates of the subcomponents of HT and working memory manipulation. Copyright © 2012 Elsevier Inc. All rights reserved.
Lin, Cheng Yu; Kikuchi, Noboru; Hollister, Scott J
2004-05-01
An often-proposed tissue engineering design hypothesis is that the scaffold should provide a biomimetic mechanical environment for initial function and appropriate remodeling of regenerating tissue while concurrently providing sufficient porosity for cell migration and cell/gene delivery. To provide a systematic study of this hypothesis, the ability to precisely design and manufacture biomaterial scaffolds is needed. Traditional methods for scaffold design and fabrication cannot provide the control over scaffold architecture design to achieve specified properties within fixed limits on porosity. The purpose of this paper was to develop a general design optimization scheme for 3D internal scaffold architecture to match desired elastic properties and porosity simultaneously, by introducing the homogenization-based topology optimization algorithm (also known as general layout optimization). With an initial target for bone tissue engineering, we demonstrate that the method can produce highly porous structures that match human trabecular bone anisotropic stiffness using accepted biomaterials. In addition, we show that anisotropic bone stiffness may be matched with scaffolds of widely different porosity. Finally, we also demonstrate that prototypes of the designed structures can be fabricated using solid free-form fabrication (SFF) techniques.
Protein-energy nutrition in the ICU is the power couple: A hypothesis forming analysis.
Oshima, Taku; Deutz, Nicolaas E; Doig, Gordon; Wischmeyer, Paul E; Pichard, Claude
2016-08-01
We hypothesize that an optimal and simultaneous provision of energy and protein is favorable to the clinical outcome of critically ill patients. We conducted a review of the literature, obtained via electronic databases, focused on the metabolic alterations during critical illness, the estimation of energy and protein requirements, and the impact of their administration. Critically ill patients undergo severe metabolic stress, during which a great amount of energy and protein is utilized in a variety of reactions essential for survival. Energy provision for critically ill patients has drawn attention given its association with morbidity, survival and long-term recovery, but protein provision is not sufficiently taken into account as a critical component of nutrition support that influences clinical outcome. Energy expenditure can be measured by indirect calorimetry, but protein status cannot at present be measured with a bedside technology. Recent studies suggest the importance of optimal and combined provision of energy and protein to optimize clinical outcome. Clinical randomized controlled studies measuring energy and protein targets should confirm this hypothesis and thereby establish energy and protein as a power couple. Copyright © 2015 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic
YOKOYAMA, Jun’ichi
2014-01-01
After reviewing the standard hypothesis test and the matched filter technique to identify gravitational waves under Gaussian noises, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion, and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, where the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student's t-distribution, which has heavier tails than a Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter works well in the highly non-Gaussian case. PMID:25504231
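For reference, a standard first-order Edgeworth form of a zero-mean, weakly non-Gaussian noise density with variance \(\sigma^2\) and third cumulant \(\kappa_3\) (the paper's expansion may carry further terms) is

\[
p(x)\;\approx\;\frac{1}{\sigma}\,\phi\!\left(\frac{x}{\sigma}\right)\left[1+\frac{\kappa_{3}}{6\sigma^{3}}\,H_{3}\!\left(\frac{x}{\sigma}\right)\right],
\qquad H_{3}(u)=u^{3}-3u,
\]

where \(\phi\) is the standard normal density; substituting this density for the Gaussian in the likelihood ratio yields the weakly non-Gaussian (locally optimal) detection statistic.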
Attention and Conscious Perception in the Hypothesis Testing Brain
Hohwy, Jakob
2012-01-01
Conscious perception and attention are difficult to study, partly because their relation to each other is not fully understood. Rather than conceiving and studying them in isolation from each other, it may be useful to locate them in an independently motivated, general framework, from which a principled account of how they relate can then emerge. Accordingly, these mental phenomena are here reviewed through the prism of the increasingly influential predictive coding framework. On this framework, conscious perception can be seen as the upshot of prediction error minimization, and attention as the optimization of precision expectations during such perceptual inference. This approach maps well onto a range of standard characteristics of conscious perception and attention, and can be used to interpret a range of empirical findings on their relation to each other. PMID:22485102
Fluid cognitive ability is a resource for successful emotion regulation in older and younger adults
Opitz, Philipp C.; Lee, Ihno A.; Gross, James J.; Urry, Heather L.
2014-01-01
The Selection, Optimization, and Compensation with Emotion Regulation (SOC-ER) framework suggests that (1) emotion regulation (ER) strategies require resources and that (2) higher levels of relevant resources may increase ER success. In the current experiment, we tested the specific hypothesis that individual differences in one internal class of resources, namely cognitive ability, would contribute to greater success using cognitive reappraisal (CR), a form of ER in which one reinterprets the meaning of emotion-eliciting situations. To test this hypothesis, 60 participants (30 younger and 30 older adults) completed standardized neuropsychological tests that assess fluid and crystallized cognitive ability, as well as a CR task in which participants reinterpreted the meaning of sad pictures in order to alter (increase or decrease) their emotions. In a control condition, they viewed the pictures without trying to change how they felt. Throughout the task, we indexed subjective emotional experience (self-reported ratings of emotional intensity), expressive behavior (corrugator muscle activity), and autonomic physiology (heart rate and electrodermal activity) as measures of emotional responding. Multilevel models were constructed to explain within-subjects variation in emotional responding as a function of ER contrasts comparing increase or decrease conditions with the view control condition and between-subjects variation as a function of cognitive ability and/or age group (older, younger). As predicted, higher fluid cognitive ability—indexed by perceptual reasoning, processing speed, and working memory—was associated with greater success using reappraisal to alter emotional responding. Reappraisal success did not vary as a function of crystallized cognitive ability or age group. Collectively, our results provide support for a key tenet of the SOC-ER framework that higher levels of relevant resources may confer greater success at emotion regulation. PMID:24987387
Newton-Bishop, Julia A.; Beswick, Samantha; Randerson-Moor, Juliette; Chang, Yu-Mei; Affleck, Paul; Elliott, Faye; Chan, May; Leake, Susan; Karpavicius, Birute; Haynes, Sue; Kukalizch, Kairen; Whitaker, Linda; Jackson, Sharon; Gerry, Edwina; Nolan, Clarissa; Bertram, Chandra; Marsden, Jerry; Elder, David E.; Barrett, Jennifer H.; Bishop, D. Timothy
2009-01-01
Purpose: A cohort study was carried out to test the hypothesis that higher vitamin D levels reduce the risk of relapse from melanoma. Methods: A pilot retrospective study of 271 patients with melanoma suggested that vitamin D may protect against recurrence of melanoma. We tested these findings in a survival analysis in a cohort of 872 patients recruited to the Leeds Melanoma Cohort (median follow-up, 4.7 years). Results: In the retrospective study, self-reports of taking vitamin D supplements were nonsignificantly correlated with a reduced risk of melanoma relapse (odds ratio = 0.6; 95% CI, 0.4 to 1.1; P = .09). Nonrelapsers had higher mean 25-hydroxyvitamin D3 levels than relapsers (49 v 46 nmol/L; P = .3; not statistically significant). In the cohort (prospective) study, higher 25-hydroxyvitamin D3 levels were associated with lower Breslow thickness at diagnosis (P = .002) and were independently protective of relapse and death: the hazard ratio for relapse-free survival (RFS) was 0.79 (95% CI, 0.64 to 0.96; P = .01) for a 20 nmol/L increase in serum level. There was evidence of interaction between the vitamin D receptor (VDR) BsmI genotype and serum 25-hydroxyvitamin D3 levels on RFS. Conclusion: Results from the retrospective study were consistent with a role for vitamin D in melanoma outcome. The cohort study tests this hypothesis, providing evidence that higher 25-hydroxyvitamin D3 levels, at diagnosis, are associated with both thinner tumors and better survival from melanoma, independent of Breslow thickness. Patients with melanoma, and those at high risk of melanoma, should seek to ensure vitamin D sufficiency. Additional studies are needed to establish optimal serum levels for patients with melanoma. PMID:19770375
The effect of spatial auditory landmarks on ambulation.
Karim, Adham M; Rumalla, Kavelin; King, Laurie A; Hullar, Timothy E
2018-02-01
The maintenance of balance and posture is a result of the collaborative efforts of vestibular, proprioceptive, and visual sensory inputs, but a fourth neural input, audition, may also improve balance. Here, we tested the hypothesis that auditory inputs function as environmental spatial landmarks whose effectiveness depends on sound localization ability during ambulation. Eight blindfolded normal young subjects performed the Fukuda-Unterberger test in three auditory conditions: silence, white noise played through headphones (head-referenced condition), and white noise played through a loudspeaker placed directly in front, 135 cm from the ear at ear height (earth-referenced condition). For the earth-referenced condition, an additional experiment tested the effect of moving the speaker's azimuthal position to 45, 90, 135, and 180°. Subjects performed significantly better in the earth-referenced condition than in the head-referenced or silent conditions. Performance progressively decreased over the range from 0° to 135°, but all subjects then improved slightly at 180° compared to 135°. These results suggest that the presence of sound dramatically improves the ability to ambulate when vision is limited, but that sound sources must be located in the external environment in order to improve balance. This supports the hypothesis that they act by providing spatial landmarks against which head and body movement and orientation may be compared and corrected. Balance improvement in the azimuthal plane mirrors sensitivity to sound movement at similar positions, indicating that similar auditory mechanisms may underlie both processes. These results may help optimize the use of auditory cues to improve balance in particular patient populations. Copyright © 2017 Elsevier B.V. All rights reserved.
Muscle fibre conduction velocity during a 30-s Wingate anaerobic test.
Stewart, David; Farina, Dario; Shen, Chao; Macaluso, Andrea
2011-06-01
Ten male volunteers (age 29.2 ± 5.2 years, mean ± SD) were recruited to test the hypothesis that muscle fibre conduction velocity (MFCV) would decrease with power output during a 30-s Wingate test on a mechanically braked cycle ergometer. Prior to the main test, the optimal pre-fixed load corresponding to the highest power output was selected following a random series of six 10-s sprints. Surface electromyographic (EMG) signals were detected from the right vastus lateralis with linear adhesive arrays of eight electrodes. Power output decreased significantly from 6 s until the end of the test (860.9 ± 207.8 vs. 360.9 ± 11.4 W, respectively) and was correlated with MFCV (R=0.543, P<0.01), which also declined significantly, by 26.8 ± 11% (P<0.05). There was a tendency for the mean frequency of the EMG power spectrum (MNF) to decrease, but average rectified values (ARV) remained unchanged throughout the test. The parallel decline of MFCV with power output suggests changes in fibre membrane properties. The unaltered ARV, together with the declined MFCV, would indicate either a decrease in discharge rate, de-recruitment of fatigued motor units, or elongation of the action potentials of still-active motor units. Copyright © 2011 Elsevier Ltd. All rights reserved.
Animal Models for Testing the DOHaD Hypothesis
Since the seminal work in human populations by David Barker and colleagues, several species of animals have been used in the laboratory to test the Developmental Origins of Health and Disease (DOHaD) hypothesis. Rats, mice, guinea pigs, sheep, pigs and non-human primates have been...
A "Projective" Test of the Golden Section Hypothesis.
ERIC Educational Resources Information Center
Lee, Chris; Adams-Webber, Jack
1987-01-01
In a projective test of the golden section hypothesis, 24 high school students rated themselves and 10 comic strip characters on the basis of 12 bipolar constructs. The overall proportion of cartoon figures which subjects assigned to the positive poles of constructs was very close to the golden section. (Author/NB)
Peterson, Chris J; Dosch, Jerald J; Carson, Walter P
2014-08-01
The nucleation hypothesis appears to explain widespread patterns of succession in tropical pastures, specifically the tendency for isolated trees to promote woody species recruitment. Still, the nucleation hypothesis has usually been tested explicitly for only short durations, and in some cases isolated trees fail to promote woody recruitment. Moreover, at times, nucleation occurs in other key habitat patches. Thus, we propose an extension, the matrix discontinuity hypothesis: woody colonization will occur in focal patches that mitigate the effects of herbaceous vegetation, thereby providing safe sites or regeneration niches. We tested predictions of the classical nucleation hypothesis, the matrix discontinuity hypothesis, and a distance-from-forest-edge hypothesis in five abandoned pastures in Costa Rica, across the first 11 years of succession. Our findings confirmed the matrix discontinuity hypothesis: specifically, rotting logs and steep slopes significantly enhanced woody colonization. Surprisingly, isolated trees did not consistently enhance recruitment significantly; only larger trees did so. Finally, woody recruitment consistently decreased with distance from forest. Our results, as well as results from others, suggest that the nucleation hypothesis needs to be broadened beyond its historical focus on isolated trees or patches; the matrix discontinuity hypothesis focuses attention on a suite of key patch types or microsites that promote woody species recruitment. We argue that any habitat discontinuities that ameliorate the inhibition by dense graminoid layers will be foci for recruitment. Such patches could easily be manipulated to speed the transition of pastures to closed-canopy forests.
Humans have evolved specialized skills of social cognition: the cultural intelligence hypothesis.
Herrmann, Esther; Call, Josep; Hernández-Lloreda, María Victoria; Hare, Brian; Tomasello, Michael
2007-09-07
Humans have many cognitive skills not possessed by their nearest primate relatives. The cultural intelligence hypothesis argues that this is mainly due to a species-specific set of social-cognitive skills, emerging early in ontogeny, for participating and exchanging knowledge in cultural groups. We tested this hypothesis by giving a comprehensive battery of cognitive tests to large numbers of two of humans' closest primate relatives, chimpanzees and orangutans, as well as to 2.5-year-old human children before literacy and schooling. Supporting the cultural intelligence hypothesis and contradicting the hypothesis that humans simply have more "general intelligence," we found that the children and chimpanzees had very similar cognitive skills for dealing with the physical world but that the children had more sophisticated cognitive skills than either of the ape species for dealing with the social world.
Clairvoyant fusion: a new methodology for designing robust detection algorithms
NASA Astrophysics Data System (ADS)
Schaum, Alan
2016-10-01
Many realistic detection problems cannot be solved with simple statistical tests for known alternative probability models. Uncontrollable environmental conditions, imperfect sensors, and other uncertainties transform simple detection problems with likelihood ratio solutions into composite hypothesis (CH) testing problems. Recently many multi- and hyperspectral sensing CH problems have been addressed with a new approach. Clairvoyant fusion (CF) integrates the optimal detectors ("clairvoyants") associated with every unspecified value of the parameters appearing in a detection model. For problems with discrete parameter values, logical rules emerge for combining the decisions of the associated clairvoyants. For many problems with continuous parameters, analytic methods of CF have been found that produce closed-form solutions, or approximations for intractable problems. Here the principles of CF are reviewed and mathematical insights are described that have proven useful in the derivation of solutions. It is also shown how a second-stage fusion procedure can be used to create theoretically superior detection algorithms for all discrete-parameter problems.
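For the discrete-parameter case, a minimal sketch of the idea: one whitened matched filter ("clairvoyant") per candidate target signature, fused with the max/OR rule. This illustrates the concept under an assumed known noise covariance; it is not one of the closed-form fusions derived in the paper:

```python
import numpy as np

def clairvoyant_fusion_max(x, signatures, noise_cov, threshold):
    """Max/OR fusion of clairvoyant detectors for a discrete parameter:
    one matched filter per candidate signature, each normalized to unit
    variance under the noise-only hypothesis."""
    S = np.asarray(signatures, dtype=float)   # shape (m, d): m candidates
    W = np.linalg.solve(noise_cov, S.T)       # whitened filters, shape (d, m)
    # score_i = x' C^{-1} s_i / sqrt(s_i' C^{-1} s_i), unit-variance under H0
    scores = (x @ W) / np.sqrt(np.einsum('ij,ji->i', S, W))
    return scores.max() > threshold, scores
```

Thresholding the maximum score is equivalent to OR-ing the decisions of the individual clairvoyants at a common threshold, one of the logical fusion rules the abstract mentions for discrete parameters.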
Testing the adaptive radiation hypothesis for the lemurs of Madagascar.
Herrera, James P
2017-01-01
Lemurs, the diverse, endemic primates of Madagascar, are thought to represent a classic example of adaptive radiation. Based on the most complete phylogeny of living and extinct lemurs yet assembled, I tested predictions of adaptive radiation theory by estimating rates of speciation, extinction and adaptive phenotypic evolution. As predicted, lemur speciation rate exceeded that of their sister clade by nearly twofold, indicating the diversification dynamics of lemurs and mainland relatives may have been decoupled. Lemur diversification rates did not decline over time, however, as predicted by adaptive radiation theory. Optimal body masses diverged among dietary and activity pattern niches as lineages diversified into unique multidimensional ecospace. Based on these results, lemurs only partially fulfil the predictions of adaptive radiation theory, with phenotypic evolution corresponding to an 'early burst' of adaptive differentiation. The results must be interpreted with caution, however, because over the long evolutionary history of lemurs (approx. 50 million years), the 'early burst' signal of adaptive radiation may have been eroded by extinction.
Effects of high-color-discrimination capability spectra on color-deficient vision.
Perales, Esther; Linhares, João Manuel Maciel; Masuda, Osamu; Martínez-Verdú, Francisco M; Nascimento, Sérgio Miguel Cardoso
2013-09-01
Light sources with three spectral bands in specific spectral positions are known to have high-color-discrimination capability. W. A. Thornton hypothesized that they may also enhance color discrimination for color-deficient observers. This hypothesis was tested here by comparing the Rösch-MacAdam color volume for color-deficient observers rendered by three of these singular spectra, two reported previously and one derived in this paper by maximization of the Rösch-MacAdam color solid. It was found that all illuminants tested enhance discriminability for deuteranomalous observers, but their impact on other congenital deficiencies was variable. The best illuminant was the one derived here, as it was clearly advantageous for the two red-green anomalies and for tritanopes and almost neutral for red-green dichromats. We conclude that three-band spectra with high-color-discrimination capability for normal observers do not necessarily produce comparable enhancements for color-deficient observers, but suitable spectral optimization clearly enhances the vision of the color deficient.
The missing biology in land carbon models (Invited)
NASA Astrophysics Data System (ADS)
Prentice, I. C.; Cornwell, W.; Dong, N.; Maire, V.; Wang, H.; Wright, I.
2013-12-01
Models of terrestrial carbon cycling give divergent results, and recent developments - notably the inclusion of nitrogen-carbon cycle coupling - have apparently made matters worse. More extensive benchmarking of models would be highly desirable, but is not a panacea. Problems with current models include overparameterization (assigning separate sets of parameter values for each plant functional type can easily obscure more fundamental model limitations) and the widespread persistence of incorrect paradigms to describe plant responses to environment. Next-generation models require a more sound basis in observations and theory. A possible way forward will be outlined. It will be shown how the principle of optimization by natural selection can yield testable, general hypotheses about plant function. A specific optimality hypothesis about the control of CO2 drawdown versus water loss by leaves will be shown to yield global and quantitatively verifiable predictions of plant behaviour, as demonstrated in field gas-exchange measurements across species from different environments and in the global pattern of stable carbon isotope discrimination by plants. Combined with the co-limitation hypothesis for the control of photosynthetic capacity and an economic approach to the costs of nutrient acquisition, this hypothesis provides a potential foundation for a comprehensive predictive understanding of the controls of primary production on land.
Sensory discrimination and intelligence: testing Spearman's other hypothesis.
Deary, Ian J; Bell, P Joseph; Bell, Andrew J; Campbell, Mary L; Fazal, Nicola D
2004-01-01
At the centenary of Spearman's seminal 1904 article, his general intelligence hypothesis remains one of the most influential in psychology. Less well known is the article's other hypothesis that there is "a correspondence between what may provisionally be called 'General Discrimination' and 'General Intelligence' which works out with great approximation to one or absoluteness" (Spearman, 1904, p. 284). Studies that do not find high correlations between psychometric intelligence and single sensory discrimination tests do not falsify this hypothesis. This study is the first to address Spearman's general intelligence-general sensory discrimination hypothesis directly. It attempts to replicate his findings with a similar sample of schoolchildren. In a well-fitting structural equation model of the data, general intelligence and general discrimination correlated .92. In a reanalysis of data published by Acton and Schroeder (2001), general intelligence and general sensory ability correlated .68 in men and women. One hundred years after its conception, Spearman's other hypothesis achieves some confirmation. The association between general intelligence and general sensory ability remains to be replicated and explained.
Dynamic test input generation for multiple-fault isolation
NASA Technical Reports Server (NTRS)
Schaefer, Phil
1990-01-01
Recent work in Causal Reasoning has provided practical techniques for multiple fault diagnosis. These techniques provide a hypothesis/measurement diagnosis cycle. Using probabilistic methods, they choose the best measurements to make, then update fault hypotheses in response. For many applications such as computers and spacecraft, few measurement points may be accessible, or values may change quickly as the system under diagnosis operates. In these cases, a hypothesis/measurement cycle is insufficient. A technique is presented for a hypothesis/test-input/measurement diagnosis cycle. In contrast to generating tests a priori for determining device functionality, it dynamically generates tests in response to current knowledge about fault probabilities. It is shown how the mathematics previously used for measurement specification can be applied to the test input generation process. An example from an efficient implementation called Multi-Purpose Causal (MPC) is presented.
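A generic sketch of the test-input selection step follows, assuming a Bayesian fault posterior and a finite measurement alphabet per input; the function names and data layout are illustrative, not MPC's actual interface:

```python
import numpy as np

def expected_posterior_entropy(prior, likelihood):
    """Expected Shannon entropy (bits) of the fault posterior after
    applying one test input. prior[h] = current P(hypothesis h);
    likelihood[h][m] = P(measurement m | h) under this input."""
    prior = np.asarray(prior, dtype=float)
    L = np.asarray(likelihood, dtype=float)
    p_m = prior @ L                      # predictive P(measurement m)
    h = 0.0
    for m, pm in enumerate(p_m):
        if pm == 0.0:
            continue
        post = prior * L[:, m] / pm      # Bayes update for outcome m
        post = post[post > 0]
        h -= pm * np.sum(post * np.log2(post))
    return h

def best_test_input(prior, candidate_inputs):
    """Choose the input with lowest expected posterior entropy,
    i.e., maximal expected information gain about the fault."""
    return min(candidate_inputs,
               key=lambda name: expected_posterior_entropy(
                   prior, candidate_inputs[name]))
```

The same machinery that scores candidate measurements can thus score candidate test inputs, which is the essence of moving from a hypothesis/measurement cycle to a hypothesis/test-input/measurement cycle.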
Lee, Chai-Jin; Kang, Dongwon; Lee, Sangseon; Lee, Sunwon; Kang, Jaewoo; Kim, Sun
2018-05-25
Determining the functions of a gene requires time-consuming, expensive biological experiments. Scientists can speed up this experimental process if literature information and biological networks are adequately provided. In this paper, we present a web-based information system that can perform in silico experiments that computationally test hypotheses about the function of a gene. A hypothesis that is specified in English by the user is converted to genes using a literature and knowledge mining system called BEST. Condition-specific TF, miRNA and PPI (protein-protein interaction) networks are automatically generated by projecting gene and miRNA expression data to template networks. An in silico experiment then tests how well the target genes are connected from the knockout gene through the condition-specific networks. The test result visualizes the paths from the knockout gene to the target genes in the three networks. Statistical and information-theoretic scores are provided on the resulting web page to help scientists either accept or reject the hypothesis being tested. Our web-based system was extensively tested using three data sets, namely the E2f1, Lrrk2, and Dicer1 knockout data sets. We were able to reproduce gene functions reported in the original research papers. In addition, we comprehensively tested with all disease names in MalaCards as hypotheses to show the effectiveness of our system. Our in silico experiment system can be very useful in suggesting biological mechanisms which can be further tested in vivo or in vitro. http://biohealth.snu.ac.kr/software/insilico/. Copyright © 2018 Elsevier Inc. All rights reserved.
Killeen's (2005) "p[subscript rep]" Coefficient: Logical and Mathematical Problems
ERIC Educational Resources Information Center
Maraun, Michael; Gabriel, Stephanie
2010-01-01
In his article, "An Alternative to Null-Hypothesis Significance Tests," Killeen (2005) urged the discipline to abandon the practice of "p[subscript obs]"-based null hypothesis testing and to quantify the signal-to-noise characteristics of experimental outcomes with replication probabilities. He described the coefficient that he…
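For orientation (the abstract is truncated), the coefficient in question is commonly presented as the estimated probability that an exact replication yields an effect in the same direction, computed from the observed one-tailed p value as

\[
p_{\mathrm{rep}}=\Phi\!\left(\frac{\Phi^{-1}(1-p)}{\sqrt{2}}\right)\;\approx\;\left[1+\left(\frac{p}{1-p}\right)^{2/3}\right]^{-1},
\]

where \(\Phi\) is the standard normal CDF; the article's critique concerns the logical and mathematical footing of this construction.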
Using VITA Service Learning Experiences to Teach Hypothesis Testing and P-Value Analysis
ERIC Educational Resources Information Center
Drougas, Anne; Harrington, Steve
2011-01-01
This paper describes a hypothesis testing project designed to capture student interest and stimulate classroom interaction and communication. Using an online survey instrument, the authors collected student demographic information and data regarding university service learning experiences. Introductory statistics students performed a series of…
Random Effects Structure for Confirmatory Hypothesis Testing: Keep It Maximal
ERIC Educational Resources Information Center
Barr, Dale J.; Levy, Roger; Scheepers, Christoph; Tily, Harry J.
2013-01-01
Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the…
USDA-ARS's Scientific Manuscript database
The effects of bias (over- and underestimates) in estimates of disease severity on hypothesis testing using different assessment methods were explored. Nearest percent estimates (NPE), the Horsfall-Barratt (H-B) scale, and two different linear category scales (10% increments, with and without addition...
A Multivariate Test of the Bott Hypothesis in an Urban Irish Setting
ERIC Educational Resources Information Center
Gordon, Michael; Downing, Helen
1978-01-01
Using a sample of 686 married Irish women in Cork City, the Bott hypothesis was tested, and the results of a multivariate regression analysis revealed that neither network connectedness nor the strength of the respondent's emotional ties to the network had any explanatory power. (Author)
Polarization, Definition, and Selective Media Learning.
ERIC Educational Resources Information Center
Tichenor, P. J.; And Others
The traditional hypothesis that extreme attitudinal positions on controversial issues are likely to produce low understanding of messages on these issues--especially when the messages represent opposing views--is tested. Data for the test of the hypothesis are from two field studies, each dealing with reader attitudes and decoding of one news article…
The Lasting Effects of Introductory Economics Courses.
ERIC Educational Resources Information Center
Sanders, Philip
1980-01-01
Reports research which tests the Stigler Hypothesis. The hypothesis suggests that students who have taken introductory economics courses and those who have not show little difference in test performance five years after completing college. Results of the author's research illustrate that economics students do retain some knowledge of economics…
Robust Approach to Verifying the Weak Form of the Efficient Market Hypothesis
NASA Astrophysics Data System (ADS)
Střelec, Luboš
2011-09-01
The weak form of the efficient markets hypothesis states that prices incorporate only past information about the asset. An implication of this form of the efficient markets hypothesis is that one cannot detect mispriced assets and consistently outperform the market through technical analysis of past prices. One possible formulation of the efficient market hypothesis used for weak-form tests is that share prices follow a random walk, i.e., that returns are realizations of an IID sequence of random variables. Consequently, for verifying the weak form of the efficient market hypothesis, we can use distribution tests, among others, i.e., tests of normality and/or graphical methods. Many procedures for testing the normality of univariate samples have been proposed in the literature [7]. Today the most popular omnibus test of normality for general use is the Shapiro-Wilk test. The Jarque-Bera test is the most widely adopted omnibus test of normality in econometrics and related fields. In particular, the Jarque-Bera test (i.e., a test based on the classical measures of skewness and kurtosis) is frequently used when one is more concerned about heavy-tailed alternatives. As these measures are based on moments of the data, this test has a zero breakdown value [2]; in other words, a single outlier can make the test worthless. The reason so many classical procedures are nonrobust to outliers is that the parameters of the model are expressed in terms of moments, and their classical estimators are expressed in terms of sample moments, which are very sensitive to outliers. Another approach to robustness is to concentrate on the parameters of interest suggested by the problem under study. Consequently, novel robust procedures for testing normality are presented in this paper to overcome the shortcomings of classical normality tests on financial data, which typically exhibit remote data points and other deviations from normality. This study also discusses results of simulation studies of the power of these tests for normality against selected alternatives. Based on the outcome of the power simulation study, selected normality tests were then used to verify the weak form of efficiency in Central European stock markets.
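For concreteness, a small sketch of the moment-based test the paper criticizes, using SciPy's implementation of the Jarque-Bera statistic; the simulated "returns" and the single injected outlier are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=1000)  # heavy-tailed stand-in for returns

jb, p = stats.jarque_bera(returns)
print(f"JB = {jb:.1f}, p = {p:.3g}")  # small p rejects the IID-normal random walk

# Zero breakdown value in action: one outlier dominates the moment estimates.
clean = rng.standard_normal(999)
jb_c, p_c = stats.jarque_bera(np.append(clean, 8.0))
print(f"JB = {jb_c:.1f}, p = {p_c:.3g}")
```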
Concerns regarding a call for pluralism of information theory and hypothesis testing
Lukacs, P.M.; Thompson, W.L.; Kendall, W.L.; Gould, W.R.; Doherty, P.F.; Burnham, K.P.; Anderson, D.R.
2007-01-01
1. Stephens et al. (2005) argue for 'pluralism' in statistical analysis, combining null hypothesis testing and information-theoretic (I-T) methods. We show that I-T methods are more informative even in single-variable problems and we provide an ecological example. 2. I-T methods allow inferences to be made from multiple models simultaneously. We believe multimodel inference is the future of data analysis, which cannot be achieved with null hypothesis-testing approaches. 3. We argue for a stronger emphasis on critical thinking in science in general and less reliance on exploratory data analysis and data dredging. Deriving alternative hypotheses is central to science; deriving a single interesting science hypothesis and then comparing it to a default null hypothesis (e.g. 'no difference') is not an efficient strategy for gaining knowledge. We think this single-hypothesis strategy has been relied upon too often in the past. 4. We clarify misconceptions presented by Stephens et al. (2005). 5. We think inference should be made about models, directly linked to scientific hypotheses, and their parameters conditioned on data, Prob(Hj | data). I-T methods provide a basis for this inference. Null hypothesis testing merely provides a probability statement about the data conditioned on a null model, Prob(data | H0). 6. Synthesis and applications. I-T methods provide a more informative approach to inference. I-T methods provide a direct measure of evidence for or against hypotheses and a means to consider simultaneously multiple hypotheses as a basis for rigorous inference. Progress in our science can be accelerated if modern methods can be used intelligently; this includes various I-T and Bayesian methods.
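The multimodel machinery the authors advocate is easy to illustrate: Akaike differences and Akaike weights turn a set of AIC scores into relative evidence for each candidate model. A minimal sketch with made-up AIC values:

```python
import numpy as np

# Hypothetical AIC scores for three candidate models fit to the same data.
aic = np.array([210.3, 212.1, 218.9])
delta = aic - aic.min()        # Akaike differences
w = np.exp(-0.5 * delta)
weights = w / w.sum()          # Akaike weights, summing to 1
print(weights.round(3))        # relative evidence per model
```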
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jennings, W.; Green, J.
2001-01-01
The purpose of this research was to determine the optimal configuration of home power systems relevant to different regions in the United States. The hypothesis was that, regardless of region, the optimal system would be a hybrid incorporating wind technology, versus a photovoltaic hybrid system without the use of wind technology. The method used in this research was HOMER, the Hybrid Optimization Model for Electric Renewables. HOMER is a computer program that optimizes electrical configurations under user-defined circumstances. According to HOMER, the optimal system for the four regions studied (Kansas, Massachusetts, Oregon, and Arizona) was a hybrid incorporating wind technology. The cost differences between these regions, however, were dependent upon regional renewable resources. Future studies will be necessary, as it is difficult to estimate meteorological impacts for other regions.
Martín H., José Antonio
2013-01-01
Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing, global alignment of multiple genomes, identifying siblings, or discovery of dysregulated pathways. In almost all of these problems, there is the need to prove a hypothesis about a certain property of an object, a property that is present if and only if the object adopts some particular admissible structure (an NP-certificate) and absent otherwise (no admissible structure). However, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to "efficiently" solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), though parametric. The only requirement is sufficient computational power, which is controlled by a parameter. Nevertheless, it is proved here that the probability of requiring a large value of the parameter to obtain a solution for a random graph decreases exponentially, making almost all problem instances tractable. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The obtained experimental results are in accordance with the theoretical expected results. PMID:23349711
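For orientation, here is what a certificate for 3-colorability looks like. This brute-force checker is emphatically not the paper's polynomial-time parametric method; it only illustrates the two answer types the paper guarantees: a coloring as a certificate of presence, or a verdict of absence (here certified only by exhaustion).

```python
from itertools import product

def three_coloring(n, edges):
    """Exhaustively search for a proper 3-coloring of an n-vertex graph.

    Returns a coloring (certificate of presence) or None (absence,
    established here by exhaustive search, not by the paper's method).
    """
    for coloring in product(range(3), repeat=n):
        if all(coloring[u] != coloring[v] for u, v in edges):
            return coloring
    return None

print(three_coloring(3, [(0, 1), (1, 2), (0, 2)]))   # triangle: e.g. (0, 1, 2)
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(three_coloring(4, k4))                          # K4: None, not 3-colorable
```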
Zhang, Yanfei; Kouril, Theresa; Snoep, Jacky L; Siebers, Bettina; Barberis, Matteo; Westerhoff, Hans V
2017-04-20
Mathematical models are key to systems biology, where they typically describe the topology and dynamics of biological networks, listing biochemical entities and their relationships with one another. Some (hyper)thermophilic Archaea contain an enzyme, called non-phosphorylating glyceraldehyde-3-phosphate dehydrogenase (GAPN), which catalyzes the direct oxidation of glyceraldehyde-3-phosphate to 3-phosphoglycerate, omitting adenosine 5'-triphosphate (ATP) formation by substrate-level phosphorylation via phosphoglycerate kinase. In this study we formulate three hypotheses that could explain functionally why GAPN exists in these Archaea, and then construct and use mathematical models to test these three hypotheses. We used kinetic parameters of enzymes of Sulfolobus solfataricus (S. solfataricus), a thermo-acidophilic archaeon that grows optimally between 60 and 90 °C and between pH 2 and 4. For comparison, we used a model of Saccharomyces cerevisiae (S. cerevisiae), an organism that can live at moderate temperatures. We find that both the first hypothesis, i.e., that the glyceraldehyde-3-phosphate dehydrogenase (GAPDH) plus phosphoglycerate kinase (PGK) route (the alternative to GAPN) is thermodynamically too far uphill, and the third hypothesis, i.e., that GAPDH plus PGK are required to carry the flux in the gluconeogenic direction, are correct. The second hypothesis, i.e., that the GAPDH plus PGK route delivers less than the 1 ATP per pyruvate that is delivered by the GAPN route, is only correct when the GAPDH reaction has a high rate and 1,3-bisphosphoglycerate (BPG) spontaneously degrades to 3PG at a high rate.
Hyaluronan Oligosaccharides for the Promotion of Remyelination (Revised)
2012-10-01
treated with StrepH or with chondroitinase ABC (which degrades chondroitin sulfate into unsaturated disaccharides) at concentrations that were...optimal for the digestion of HA and chondroitin sulfate, respectively (see Fig 1H). This finding is consistent with the hypothesis that specific HA
Hilderson, Deborah; Moons, Philip; Van der Elst, Kristien; Luyckx, Koen; Wouters, Carine; Westhovens, René
2016-01-01
To investigate the clinical impact of a brief transition programme for young people with JIA. The Devices for Optimization of Transfer and Transition of Adolescents with Rheumatic Disorders (DON'T RETARD) project is a mixed-method project in which we first conducted a quasi-experimental study employing a one-group pre-test-post-test design with a non-equivalent, post-test-only comparison group. In this quantitative study, we investigated clinical outcomes in patients with JIA and their parents who participated in the transition programme (longitudinal analyses). The post-test scores of this intervention group were compared with those of patients who received usual care (comparative analyses). Second, a qualitative study was conducted to explore the experiences of adolescents with JIA and their parents regarding their participation in the transition programme. The primary hypothesis of improved physical (effect size 0.11), psychosocial (effect size 0.46) and rheumatic-specific health status (effect sizes ranging from 0.21 to 0.33) was confirmed. With respect to the secondary outcomes, improved quality of life (effect size 0.51) and an optimized parenting climate (effect sizes ranging from 0.21 to 0.28) were observed. No effect was measured in medication adherence (odds ratio 1.46). Implementation of a transition programme as a brief intervention can improve the perceived health and quality of life of adolescents with JIA during the transition process, as well as the parenting behaviours of their parents. Based on the present study, a randomized controlled trial can be designed to evaluate the effectiveness of the transition programme. © The Author 2015. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Parents’ Optimism, Positive Parenting, and Child Peer Competence in Mexican-Origin Families
Castro-Schilo, Laura; Ferrer, Emilio; Taylor, Zoe E.; Robins, Richard W.; Conger, Rand D.; Widaman, Keith F.
2012-01-01
Objective: This study examined how parents' optimism influences positive parenting and child peer competence in Mexican-origin families. Design: A sample of 521 families (521 mothers, 438 fathers, and 521 11-year-olds) participated in the cross-sectional study. We used structural equation modeling to assess whether effective parenting would mediate the effect of parents' optimism on child peer competence and whether mothers' and fathers' optimism would moderate the relation between positive parenting and child social competence. Results: Mothers' and fathers' optimism were associated with effective parenting, which in turn was related to children's peer competence. Mothers' and fathers' optimism also moderated the effect of parenting on child peer competence. High levels of parental optimism buffered children against poor parenting; at low levels of parental optimism, positive parenting was more strongly related to child peer competence. Conclusions: Results are consistent with the hypothesis that positive parenting is promoted by parents' optimism and is a proximal driver of child social competence. Parental optimism moderates effects of parenting on child outcomes. PMID:23526877
Caricati, Luca
2017-01-01
The status-legitimacy hypothesis was tested by analyzing cross-national data about social inequality. Several indicators were used as indexes of social advantage: social class, personal income, and self-position in the social hierarchy. Moreover, inequality and freedom in nations, as indexed by Gini and by the human freedom index, were considered. Results from 36 nations worldwide showed no support for the status-legitimacy hypothesis. The perception that income distribution was fair tended to increase as social advantage increased. Moreover, national context increased the difference between advantaged and disadvantaged people in the perception of social fairness: Contrary to the status-legitimacy hypothesis, disadvantaged people were more likely than advantaged people to perceive income distribution as too large, and this difference increased in nations with greater freedom and equality. The implications for the status-legitimacy hypothesis are discussed.
Tests of the Giant Impact Hypothesis
NASA Technical Reports Server (NTRS)
Jones, J. H.
1998-01-01
The giant impact hypothesis has gained popularity as a means of explaining a volatile-depleted Moon that still has a chemical affinity to the Earth. As Taylor's Axiom decrees, the best models of lunar origin are testable, but this is difficult with the giant impact model. The energy associated with the impact would be sufficient to totally melt and partially vaporize the Earth, and this means that there should be no geological vestige of earlier times. Accordingly, it is important to devise tests that may be used to evaluate the giant impact hypothesis. Three such tests are discussed here. None of these is supportive of the giant impact model, but neither do they disprove it.
1985-02-01
In particular, the optimal control is characterized in terms of the dual system, and conditions are given under which the optimal control is...solution in general and has to be replaced by a differential inclusion. It is important to note that the operator is boundedly invertible on...above way in terms of operators B, C, T which satisfy the hypotheses (S2-4).
Genetics and recent human evolution.
Templeton, Alan R
2007-07-01
Starting with "mitochondrial Eve" in 1987, genetics has played an increasingly important role in studies of the last two million years of human evolution. It initially appeared that genetic data resolved the basic models of recent human evolution in favor of the "out-of-Africa replacement" hypothesis in which anatomically modern humans evolved in Africa about 150,000 years ago, started to spread throughout the world about 100,000 years ago, and subsequently drove to complete genetic extinction (replacement) all other human populations in Eurasia. Unfortunately, many of the genetic studies on recent human evolution have suffered from scientific flaws, including misrepresenting the models of recent human evolution, focusing upon hypothesis compatibility rather than hypothesis testing, committing the ecological fallacy, and failing to consider a broader array of alternative hypotheses. Once these flaws are corrected, there is actually little genetic support for the out-of-Africa replacement hypothesis. Indeed, when genetic data are used in a hypothesis-testing framework, the out-of-Africa replacement hypothesis is strongly rejected. The model of recent human evolution that emerges from a statistical hypothesis-testing framework does not correspond to any of the traditional models of human evolution, but it is compatible with fossil and archaeological data. These studies also reveal that any one gene or DNA region captures only a small part of human evolutionary history, so multilocus studies are essential. As more and more loci became available, genetics will undoubtedly offer additional insights and resolutions of human evolution.
NASA Astrophysics Data System (ADS)
Paasche, Hendrik
2018-01-01
Site characterization requires detailed and ideally spatially continuous information about the subsurface. Geophysical tomographic experiments allow for spatially continuous imaging of physical parameter variations, e.g., seismic wave propagation velocities. Such physical parameters are often related to typical geotechnical or hydrological target parameters, e.g., as obtained from 1D direct push or borehole logging. Here, the probabilistic inference of 2D tip resistance, sleeve friction, and relative dielectric permittivity distributions in near-surface sediments is constrained by ill-posed cross-borehole seismic P- and S-wave and radar wave traveltime tomography. In doing so, we follow a discovery-science strategy employing a fully data-driven approach capable of accounting for tomographic ambiguity and differences in spatial resolution between the geophysical tomograms and the geotechnical logging data used for calibration. We compare the outcome to results achieved employing classical hypothesis-driven approaches, i.e., deterministic transfer functions derived empirically for the inference of 2D sleeve friction from S-wave velocity tomograms and theoretically for the inference of 2D dielectric permittivity from radar wave velocity tomograms. The data-driven approach offers maximal flexibility in combination with very relaxed assumptions about the character of the expected links. This makes it a versatile tool applicable to almost any combination of data sets. However, error propagation may be critical and may justify a hypothesis-driven pre-selection of an optimal database, which goes along with the risk of excluding relevant information from the analyses. Results achieved by transfer functions rely on information about the nature of the link and on optimal calibration settings drawn as retrospective hypotheses by other authors. Applying such transfer functions at other sites turns them into a priori valid hypotheses, which can, particularly for empirically derived transfer functions, result in poor predictions. However, a mindful utilization and critical evaluation of the consequences of turning a retrospectively drawn hypothesis into an a priori valid one can still yield good results for inference and prediction problems when using classical transfer function concepts.
NASA Astrophysics Data System (ADS)
Cachera, M.; Ernande, B.; Villanueva, M. C.; Lefebvre, S.
2017-02-01
Individual diet variation (i.e. diet variation among individuals) impacts intra- and inter-specific interactions. Investigating its sources and relationship with species trophic niche organization is important for understanding community structure and dynamics. Individual diet variation may increase with intra-specific phenotypic (or "individual state") variation and habitat variability, according to Optimal Foraging Theory (OFT), and with species trophic niche width, according to the Niche Variation Hypothesis (NVH). OFT proposes "proximate sources" of individual diet variation such as variations in habitat or size whereas NVH relies on "ultimate sources" related to the competitive balance between intra- and inter-specific competitions. The latter implies as a corollary that species trophic niche overlap, taken as inter-specific competition measure, decreases as species niche width and individual niche variation increase. We tested the complementary predictions of OFT and NVH in a marine fish assemblage using stomach content data and associated trophic niche metrics. The NVH predictions were tested between species of the assemblage and decomposed into a between- and a within-functional group component to assess the potential influence of species' ecological function. For most species, individual diet variation and niche overlap were consistently larger than expected. Individual diet variation increased with intra-specific variability in individual state and habitat, as expected from OFT. It also increased with species niche width but in compliance with the null expectation, thus not supporting the NVH. In contrast, species niche overlap increased significantly less than null expectation with both species niche width and individual diet variation, supporting NVH corollary. The between- and within-functional group components of the NVH relationships were consistent with those between species at the assemblage level. Changing the number of prey categories used to describe diet (from 16 to 41) did not change the results qualitatively. These results suggest that, besides proximate sources, intra-specific competition favors higher individual diet variation than expected while inter-specific competition limits the increase of individual diet variation and of species niche overlap with species niche expansion. This reveals partial trophic resource partitioning between species. Various niche metrics used in combination allow inferring competition effects on trophic niches' organization within communities.
Age Dedifferentiation Hypothesis: Evidence from the WAIS III.
ERIC Educational Resources Information Center
Juan-Espinosa, Manuel; Garcia, Luis F.; Escorial, Sergio; Rebollo, Irene; Colom, Roberto; Abad, Francisco J.
2002-01-01
Used the Spanish standardization of the Wechsler Adult Intelligence Scale III (WAIS III) (n=1,369) to test the age dedifferentiation hypothesis. Results show no changes in the percentage of variance accounted for by "g" and four group factors when restriction of range is controlled. Discusses an age indifferentiation hypothesis. (SLD)
Rashid, Jahidur; Alobaida, Ahmad; Al-Hilal, Taslim A; Hammouda, Samia; McMurtry, Ivan F; Nozik-Grayck, Eva; Stenmark, Kurt R; Ahsan, Fakhrul
2018-06-28
Peroxisome-proliferator-activated-receptor-gamma (PPAR-γ) is implicated, in some capacity, in the pathogenesis of pulmonary arterial hypertension (PAH). Rosiglitazone, an oral antidiabetic and PPAR-γ agonist, has the potential to dilate pulmonary arteries and to attenuate arterial remodeling in PAH. Here, we sought to test the hypothesis that rosiglitazone can be repurposed as an inhaled formulation for the treatment of PAH. We have tested this conjecture by preparing and optimizing poly(lactic-co-glycolic) acid (PLGA) based particles of rosiglitazone, assessing the drug particles for pulmonary absorption, investigating the efficacy of the plain versus particulate drug formulation in improving the respiratory hemodynamics in PAH animals, and finally studying the effect of the drug in regulating the molecular markers associated with PAH pathogenesis. The optimized particles were slightly porous and spherical, and released 87.9% ± 6.7% of the drug in 24 h. The elimination half-life of the drug formulated in PLGA particles was 2.5-fold greater than that of the plain drug administered via the same route at the same dose. The optimized formulation, given via the pulmonary route, produced pulmonary selective vasodilation in PAH animals, but oral rosiglitazone had no effect on pulmonary hemodynamics. Rosiglitazone ameliorates the pathogenesis of PAH by balancing the molecular regulators involved in the vasoconstriction and vasodilation of human pulmonary arterial smooth muscle cells. All in all, data generated using intact animal and cellular models point to the conclusion that PLGA particles of an antidiabetic drug can be used for the treatment of a different disease, PAH. Copyright © 2018 Elsevier B.V. All rights reserved.
Optimality, stochasticity, and variability in motor behavior
Guigon, Emmanuel; Baraduc, Pierre; Desmurget, Michel
2008-01-01
Recent theories of motor control have proposed that the nervous system acts as a stochastically optimal controller, i.e. it plans and executes motor behaviors taking into account the nature and statistics of noise. Detrimental effects of noise are converted into a principled way of controlling movements. Attractive aspects of such theories are their ability to explain not only characteristic features of single motor acts, but also statistical properties of repeated actions. Here, we present a critical analysis of stochastic optimality in motor control which reveals several difficulties with this hypothesis. We show that stochastic control may not be necessary to explain the stochastic nature of motor behavior, and we propose an alternative framework, based on the action of a deterministic controller coupled with an optimal state estimator, which relieves drawbacks of stochastic optimality and appropriately explains movement variability. PMID:18202922
Multiple Hypothesis Testing for Experimental Gingivitis Based on Wilcoxon Signed Rank Statistics
Preisser, John S.; Sen, Pranab K.; Offenbacher, Steven
2011-01-01
Dental research often involves repeated multivariate outcomes on a small number of subjects for which there is interest in identifying outcomes that exhibit change in their levels over time as well as to characterize the nature of that change. In particular, periodontal research often involves the analysis of molecular mediators of inflammation for which multivariate parametric methods are highly sensitive to outliers and deviations from Gaussian assumptions. In such settings, nonparametric methods may be favored over parametric ones. Additionally, there is a need for statistical methods that control an overall error rate for multiple hypothesis testing. We review univariate and multivariate nonparametric hypothesis tests and apply them to longitudinal data to assess changes over time in 31 biomarkers measured from the gingival crevicular fluid in 22 subjects whereby gingivitis was induced by temporarily withholding tooth brushing. To identify biomarkers that can be induced to change, multivariate Wilcoxon signed rank tests for a set of four summary measures based upon area under the curve are applied for each biomarker and compared to their univariate counterparts. Multiple hypothesis testing methods with choice of control of the false discovery rate or strong control of the family-wise error rate are examined. PMID:21984957
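A small sketch of the univariate version of this workflow, assuming simulated data in place of the gingival crevicular fluid measurements: paired Wilcoxon signed rank tests across 31 biomarkers, followed by Benjamini-Hochberg control of the false discovery rate.

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
# Hypothetical paired data: 31 biomarkers x 22 subjects, before and after
# gingivitis induction (log-normal, as biomarker levels often are).
baseline = rng.lognormal(size=(31, 22))
induced = baseline * rng.lognormal(mean=0.15, sigma=0.3, size=(31, 22))

pvals = [wilcoxon(b, i).pvalue for b, i in zip(baseline, induced)]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(int(reject.sum()), "of 31 biomarkers significant after FDR control")
```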
Dynamic motion planning of 3D human locomotion using gradient-based optimization.
Kim, Hyung Joo; Wang, Qian; Rahmatalla, Salam; Swan, Colby C; Arora, Jasbir S; Abdel-Malek, Karim; Assouline, Jose G
2008-06-01
Since humans can walk with an infinite variety of postures and limb movements, there is no unique solution to the modeling problem to predict human gait motions. Accordingly, we test herein the hypothesis that the redundancy of human walking mechanisms makes solving for human joint profiles and force time histories an indeterminate problem best solved by inverse dynamics and optimization methods. A new optimization-based human-modeling framework is thus described for predicting three-dimensional human gait motions on level and inclined planes. The basic unknowns in the framework are the joint motion time histories of a 25-degree-of-freedom human model and its six global degrees of freedom. The joint motion histories are calculated by minimizing an objective function such as deviation of the trunk from upright posture that relates to the human model's performance. A variety of important constraints are imposed on the optimization problem, including (1) satisfaction of dynamic equilibrium equations by requiring the model's zero moment point (ZMP) to lie within the instantaneous geometrical base of support, (2) foot collision avoidance, (3) limits on ground-foot friction, and (4) vanishing yawing moment. Analytical forms of objective and constraint functions are presented and discussed for the proposed human-modeling framework in which the resulting optimization problems are solved using gradient-based mathematical programming techniques. When the framework is applied to the modeling of bipedal locomotion on level and inclined planes, acyclic human walking motions that are smooth and realistic as opposed to less natural robotic motions are obtained. The aspects of the modeling framework requiring further investigation and refinement, as well as potential applications of the framework in biomechanics, are discussed.
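The optimization structure described here, minimizing a posture-related objective subject to inequality constraints such as keeping the ZMP inside the base of support, maps directly onto standard gradient-based solvers. A toy sketch with SciPy's SLSQP; the two-variable "joint" vector and the linearized ZMP expression are hypothetical stand-ins for the paper's 25-degree-of-freedom model.

```python
import numpy as np
from scipy.optimize import minimize

def objective(q):
    # Penalize deviation of (hypothetical) joint angles from upright posture.
    return float(np.sum(q ** 2))

def zmp_margin(q):
    # Hypothetical linearized ZMP location; SLSQP requires this to be >= 0,
    # i.e., the ZMP stays within +/- 0.1 m of the support center.
    return 0.1 - abs(0.05 + 0.2 * q[0] - 0.1 * q[1])

res = minimize(objective, x0=np.array([0.3, -0.2]),
               constraints=[{"type": "ineq", "fun": zmp_margin}],
               method="SLSQP")
print(res.x, round(res.fun, 4))  # closest-to-upright posture meeting the constraint
```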
A default Bayesian hypothesis test for mediation.
Nuijten, Michèle B; Wetzels, Ruud; Matzke, Dora; Dolan, Conor V; Wagenmakers, Eric-Jan
2015-03-01
In order to quantify the relationship between multiple variables, researchers often carry out a mediation analysis. In such an analysis, a mediator (e.g., knowledge of a healthy diet) transmits the effect from an independent variable (e.g., classroom instruction on a healthy diet) to a dependent variable (e.g., consumption of fruits and vegetables). Almost all mediation analyses in psychology use frequentist estimation and hypothesis-testing techniques. A recent exception is Yuan and MacKinnon (Psychological Methods, 14, 301-322, 2009), who outlined a Bayesian parameter estimation procedure for mediation analysis. Here we complete the Bayesian alternative to frequentist mediation analysis by specifying a default Bayesian hypothesis test based on the Jeffreys-Zellner-Siow approach. We further extend this default Bayesian test by allowing a comparison to directional or one-sided alternatives, using Markov chain Monte Carlo techniques implemented in JAGS. All Bayesian tests are implemented in the R package BayesMed (Nuijten, Wetzels, Matzke, Dolan, & Wagenmakers, 2014).
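For readers unfamiliar with the mediation setup, here is the frequentist baseline the Bayesian test is meant to replace: fit the a-path (X to M) and b-path (M to Y given X) regressions and form the Sobel z statistic for the indirect effect. The simulated data and effect sizes are arbitrary, and this is a sketch of the classical test, not the BayesMed procedure itself.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(3)
x = rng.standard_normal(200)                       # e.g., classroom instruction
m = 0.5 * x + rng.standard_normal(200)             # mediator: diet knowledge
y = 0.4 * m + 0.1 * x + rng.standard_normal(200)   # outcome: consumption

fit_a = sm.OLS(m, sm.add_constant(x)).fit()
fit_b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit()
a, se_a = fit_a.params[1], fit_a.bse[1]
b, se_b = fit_b.params[1], fit_b.bse[1]

# Sobel test of the indirect effect a*b:
z = a * b / np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
print(f"indirect effect = {a * b:.3f}, p = {2 * norm.sf(abs(z)):.4f}")
```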
Knowledge Base Refinement as Improving an Incorrect and Incomplete Domain Theory
1990-04-01
Ginsberg et al., 1985), and RL (Fu and Buchanan, 1985), which perform empirical induction over a library of test cases. This chapter describes a new...state knowledge. Examples of high-level goals are: to test a hypothesis, to differentiate between several plausible hypotheses, to ask a clarifying...
The [Geo]Scientific Method; Hypothesis Testing and Geoscience Proposal Writing for Students
ERIC Educational Resources Information Center
Markley, Michelle J.
2010-01-01
Most undergraduate-level geoscience texts offer a paltry introduction to the nuanced approach to hypothesis testing that geoscientists use when conducting research and writing proposals. Fortunately, there are a handful of excellent papers that are accessible to geoscience undergraduates. Two historical papers by the eminent American geologists G.…
Mental Abilities and School Achievement: A Test of a Mediation Hypothesis
ERIC Educational Resources Information Center
Vock, Miriam; Preckel, Franzis; Holling, Heinz
2011-01-01
This study analyzes the interplay of four cognitive abilities--reasoning, divergent thinking, mental speed, and short-term memory--and their impact on academic achievement in school in a sample of adolescents in grades seven to 10 (N = 1135). Based on information processing approaches to intelligence, we tested a mediation hypothesis, which states…
The Relation between Parental Values and Parenting Behavior: A Test of the Kohn Hypothesis.
ERIC Educational Resources Information Center
Luster, Tom; Rhoades, Kelly
To investigate how values influence parenting beliefs and practices, a test was made of Kohn's hypothesis that parents valuing self-direction emphasize the supportive function of parenting, while parents valuing conformity emphasize control of unsanctioned behaviors. Participating in the study were 65 mother-infant dyads. Infants ranged in age…
Chromosome Connections: Compelling Clues to Common Ancestry
ERIC Educational Resources Information Center
Flammer, Larry
2013-01-01
Students compare banding patterns on hominid chromosomes and see striking evidence of their common ancestry. To test this, human chromosome no. 2 is matched with two shorter chimpanzee chromosomes, leading to the hypothesis that human chromosome 2 resulted from the fusion of the two shorter chromosomes. Students test that hypothesis by looking for…
The main objective of the feasibility study described here was to test the hypothesis that properly plugged wells are effectively sealed by drilling mud. In the process of testing the hypothesis, evidence about the dynamics of mud-cake buildup on the wellbore face was obtained, as ...
A test of the predator satiation hypothesis, acorn predator size, and acorn preference
C.H. Greenberg; S.J. Zarnoch
2018-01-01
Mast seeding is hypothesized to satiate seed predators with heavy production and reduce populations with crop failure, thereby increasing seed survival. Preference for red or white oak acorns could influence recruitment among oak species. We tested the predator satiation hypothesis, acorn preference, and predator size by concurrently...
The Need for Nuance in the Null Hypothesis Significance Testing Debate
ERIC Educational Resources Information Center
Häggström, Olle
2017-01-01
Null hypothesis significance testing (NHST) provides an important statistical toolbox, but there are a number of ways in which it is often abused and misinterpreted, with bad consequences for the reliability and progress of science. Parts of contemporary NHST debate, especially in the psychological sciences, is reviewed, and a suggestion is made…
Acorn Caching in Tree Squirrels: Teaching Hypothesis Testing in the Park
ERIC Educational Resources Information Center
McEuen, Amy B.; Steele, Michael A.
2012-01-01
We developed an exercise for a university-level ecology class that teaches hypothesis testing by examining acorn preferences and caching behavior of tree squirrels (Sciurus spp.). This exercise is easily modified to teach concepts of behavioral ecology for earlier grades, particularly high school, and provides students with a theoretical basis for…
Shaping Up the Practice of Null Hypothesis Significance Testing.
ERIC Educational Resources Information Center
Wainer, Howard; Robinson, Daniel H.
2003-01-01
Discusses criticisms of null hypothesis significance testing (NHST), suggesting that historical use of NHST was reasonable, and current users should read Sir Ronald Fisher's applied work. Notes that modifications to NHST and interpretations of its outcomes might better suit the needs of modern science. Concludes that NHST is most often useful as…
Some Effects of Dogmatism in Elementary School Principals and Teachers.
ERIC Educational Resources Information Center
Bentzen, Mary M.
The hypothesis that ratings on congeniality as a coworker given to teachers would be, in part, a function of the organizational status of the rater was tested. A secondary problem was to test the hypothesis that dogmatic subjects, more than nondogmatic subjects, would exhibit cognitive behavior which indicated (1) greater distinction between positive…
Thou Shalt Not Bear False Witness against Null Hypothesis Significance Testing
ERIC Educational Resources Information Center
García-Pérez, Miguel A.
2017-01-01
Null hypothesis significance testing (NHST) has been the subject of debate for decades and alternative approaches to data analysis have been proposed. This article addresses this debate from the perspective of scientific inquiry and inference. Inference is an inverse problem and application of statistical methods cannot reveal whether effects…
Wissman, Kathryn T; Rawson, Katherine A
2018-04-01
Arnold and McDermott [(2013). Test-potentiated learning: Distinguishing between direct and indirect effects of testing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39, 940-945] isolated the indirect effects of testing and concluded that encoding is enhanced to a greater extent following more versus fewer practice tests, referred to as test-potentiated learning. The current research provided further evidence for test-potentiated learning and evaluated the covert retrieval hypothesis as an alternative explanation for the observed effect. Learners initially studied foreign language word pairs and then completed either one or five practice tests before restudy occurred. Results of greatest interest concern performance on test trials following restudy for items that were not correctly recalled on the test trials that preceded restudy. Results replicate Arnold and McDermott (2013) by demonstrating that more versus fewer tests potentiate learning when trial time is limited. Results also provide strong evidence against the covert retrieval hypothesis concerning why the effect occurs (i.e., it does not reflect differential covert retrieval during pre-restudy trials). In addition, outcomes indicate that the magnitude of the test-potentiated learning effect decreases as trial length increases, revealing an unexpected boundary condition to test-potentiated learning.
Correcting power and p-value calculations for bias in diffusion tensor imaging.
Lauzon, Carolyn B; Landman, Bennett A
2013-07-01
Diffusion tensor imaging (DTI) provides quantitative parametric maps sensitive to tissue microarchitecture (e.g., fractional anisotropy, FA). These maps are estimated through computational processes and subject to random distortions including variance and bias. Traditional statistical procedures commonly used for study planning (including power analyses and p-value/alpha-rate thresholds) specifically model variability, but neglect potential impacts of bias. Herein, we quantitatively investigate the impacts of bias in DTI on hypothesis test properties (power and alpha-rate) using a two-sided hypothesis testing framework. We present a theoretical evaluation of bias on hypothesis test properties, evaluate the bias estimation technique SIMEX for DTI hypothesis testing using simulated data, and evaluate the impacts of bias on spatially varying power and alpha rates in an empirical study of 21 subjects. Bias is shown to inflate alpha rates, distort the power curve, and cause significant power loss even in empirical settings where the expected difference in bias between groups is zero. These adverse effects can be attenuated by properly accounting for bias in the calculation of power and p-values. Copyright © 2013 Elsevier Inc. All rights reserved.
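The central point, that bias inflates the alpha rate of a two-sided test even when variance is modeled correctly, can be seen in a few lines. A sketch under simplifying assumptions (a z-test with a known standardized bias), not the paper's SIMEX-based correction:

```python
from scipy.stats import norm

def actual_alpha(bias, alpha=0.05):
    """Rejection rate of a nominal-alpha two-sided z-test when the test
    statistic is shifted by a standardized bias under the null."""
    z = norm.ppf(1 - alpha / 2)
    return norm.sf(z - bias) + norm.cdf(-z - bias)

for b in (0.0, 0.5, 1.0):
    print(f"bias = {b:.1f} SD -> alpha = {actual_alpha(b):.3f}")
# The nominal 0.05 rate inflates as the bias grows.
```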
Tailoring biocontrol to maximize top-down effects: on the importance of underlying site fertility.
Hovick, Stephen M; Carson, Walter P
2015-01-01
The degree to which biocontrol agents impact invasive plants varies widely across landscapes, often for unknown reasons. Understanding this variability can help optimize invasive species management while also informing our understanding of trophic linkages. To address these issues, we tested three hypotheses with contrasting predictions regarding the likelihood of biocontrol success. (1) The biocontrol effort hypothesis: invasive populations are regulated primarily by top-down effects, predicting that increased biocontrol efforts alone (e.g., more individuals of a given biocontrol agent or more time since agent release) will enhance biocontrol success. (2) The relative fertility hypothesis: invasive populations are regulated primarily by bottom-up effects, predicting that nutrient enrichment will increase dominance by invasives and thus reduce biocontrol success, regardless of biocontrol efforts. (3) The fertility-dependent biocontrol effort hypothesis: top-down effects will only regulate invasive populations if bottom-up effects are weak. It predicts that greater biocontrol efforts will increase biocontrol success, but only in low-nutrient sites. To test these hypotheses, we surveyed 46 sites across three states with prior releases of Galerucella beetles, the most common biocontrol agents used against invasive purple loosestrife (Lythrum salicaria). We found strong support for the fertility-dependent biocontrol effort hypothesis, as biocontrol success occurred most often with greater biocontrol efforts, but only in low-fertility sites. This result held for early stage metrics of biocontrol success (higher Galerucella abundance) and ultimate biocontrol outcomes (decreased loosestrife plant size and abundance). Presence of the invasive grass Phalaris arundinacea was also inversely related to loosestrife abundance, suggesting that biocontrol-based reductions in loosestrife made secondary invasion by P. arundinacea more likely. Our data suggest that low-nutrient sites be prioritized for loosestrife biocontrol and that future monitoring account for variation in site fertility or work to mitigate it. We introduce a new framework that integrates our findings with conflicting patterns previously reported from other biocontrol systems, proposing a unimodal relationship whereby nutrient availability enhances biocontrol success in low-nutrient sites but hampers it in high-nutrient sites. Our results represent one of the first examples of biocontrol success depending on site fertility, which has the potential to inform biocontrol-based management decisions across entire regions and among contrasting systems.
Hypothesis testing and earthquake prediction.
Jackson, D D
1996-04-30
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
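Test (i) above, comparing the actual to the predicted number of earthquakes, reduces to a tail probability under a Poisson null when the forecast specifies a rate. A minimal sketch with hypothetical numbers:

```python
from scipy.stats import poisson

predicted_rate = 8.2   # hypothetical forecast: expected events in the window
observed = 14          # hypothetical actual count

# Probability of seeing at least this many events if the forecast is right:
p_upper = poisson.sf(observed - 1, predicted_rate)
print(f"P(N >= {observed} | rate = {predicted_rate}) = {p_upper:.3f}")
# A small tail probability counts against the forecast hypothesis.
```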
Mismatch or cumulative stress: toward an integrated hypothesis of programming effects.
Nederhof, Esther; Schmidt, Mathias V
2012-07-16
This paper integrates the cumulative stress hypothesis with the mismatch hypothesis, taking into account individual differences in sensitivity to programming. According to the cumulative stress hypothesis, individuals are more likely to suffer from disease as adversity accumulates. According to the mismatch hypothesis, individuals are more likely to suffer from disease if a mismatch occurs between the early programming environment and the later adult environment. These seemingly contradicting hypotheses are integrated into a new model proposing that the cumulative stress hypothesis applies to individuals who were not or only to a small extent programmed by their early environment, while the mismatch hypothesis applies to individuals who experienced strong programming effects. Evidence for the main effects of adversity as well as evidence for the interaction between adversity in early and later life is presented from human observational studies and animal models. Next, convincing evidence for individual differences in sensitivity to programming is presented. We extensively discuss how our integrated model can be tested empirically in animal models and human studies, inviting researchers to test this model. Furthermore, this integrated model should tempt clinicians and other intervenors to interpret symptoms as possible adaptations from an evolutionary biology perspective. Copyright © 2011 Elsevier Inc. All rights reserved.
Explaining sex differences in lifespan in terms of optimal energy allocation in the baboon.
King, Annette M; Kirkwood, Thomas B L; Shanley, Daryl P
2017-10-01
We provide a quantitative test of the hypothesis that sex role specialization may account for sex differences in lifespan in baboons if such specialization causes the dependency of fitness upon longevity, and consequently the optimal resolution to an energetic trade-off between somatic maintenance and other physiological functions, to differ between males and females. We present a model in which females provide all offspring care and males compete for access to reproductive females and in which the partitioning of available energy between the competing fitness-enhancing functions of growth, maintenance, and reproduction is modeled as a dynamic behavioral game, with the optimal decision for each individual depending upon his/her state and the behavior of other members of the population. Our model replicates the sexual dimorphism in body size and sex differences in longevity and reproductive scheduling seen in natural populations of baboons. We show that this outcome is generally robust to perturbations in model parameters, an important finding given that the same behavior is seen across multiple populations and species in the wild. This supports the idea that sex differences in longevity result from differences in the value of somatic maintenance relative to other fitness-enhancing functions in keeping with the disposable soma theory. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
Learning optimal features for visual pattern recognition
NASA Astrophysics Data System (ADS)
Labusch, Kai; Siewert, Udo; Martinetz, Thomas; Barth, Erhardt
2007-02-01
The optimal coding hypothesis proposes that the human visual system has adapted to the statistical properties of the environment by the use of relatively simple optimality criteria. We here (i) discuss how the properties of different models of image coding, i.e., sparseness, decorrelation, and statistical independence, are related to each other; (ii) propose to evaluate the different models by verifiable performance measures; and (iii) analyse the classification performance on images of handwritten digits (MNIST database). We first employ the SPARSENET algorithm (Olshausen, 1998) to derive a local filter basis (on 13 × 13 pixel windows). We then filter the images in the database (28 × 28 pixel images of digits) and reduce the dimensionality of the resulting feature space by selecting the locally maximal filter responses. We then train a support vector machine on a training set to classify the digits and report results obtained on a separate test set. Currently, the best state-of-the-art result on the MNIST database has an error rate of 0.4%. This result, however, has been obtained by using explicit knowledge that is specific to the data (an elastic distortion model for digits). We here obtain an error rate of 0.55%, which is second best but does not use explicit data-specific knowledge. In particular, it outperforms by far all methods that do not use data-specific knowledge.
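The filter-then-classify pipeline is easy to reproduce in outline. A sketch using scikit-learn's small digits set rather than MNIST, and SparsePCA as a stand-in for the SPARSENET sparse-coding step; both choices are substitutions for illustration, not the paper's setup.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import SparsePCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# 8x8 digits stand in for MNIST; SparsePCA stands in for SPARSENET.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = make_pipeline(SparsePCA(n_components=30, random_state=0), SVC())
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```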
Optimized nanostructured TiO2 photocatalysts
NASA Astrophysics Data System (ADS)
Topcu, Selda; Jodhani, Gagan; Gouma, Pelagia
2016-07-01
Titania is the most widely studied photocatalyst. In its mixed-phase configuration (anatase-rutile form), as manifested in the commercially available Degussa P25 material, titania was previously found to exhibit the best photocatalytic properties reported for the pure system. A great deal of published research by various workers in the field has not fully explained the underlying mechanism for the observed behavior of mixed-phase titania photocatalysts. One prevalent hypothesis in the literature, tested in this work, involves the presence of small, active clusters of interwoven anatase and rutile crystallites, or catalytic "hot spots". Therefore, non-woven nanofibrous mats of titania were produced, and upon calcination the mats consisted of nanostructured fibers with different anatase-rutile ratios. By assessing the photocatalytic and photoelectrochemical properties of these samples, the optimized photocatalyst was determined. This consisted of TiO2 nanostructures annealed at 500 °C with an anatase/rutile content of 90/10. Since the performance of this material exceeded that of P25, complete structural characterization was employed to understand the catalytic mechanism involved. It was determined that the dominant factors controlling the photocatalytic behavior of the titania system are the relative particle size of the different phases of titania and the growth of rutile laths on anatase grains, which allow for rapid electron transfer between the two phases. This explains how to optimize the response of the pure system.
The Relation Among the Likelihood Ratio, Wald, and Lagrange Multiplier Tests and Their Applicability to Small Samples
1982-04-01
References cited include: Breusch, T. S. (1979), "Conflict Among Criteria for Testing Hypotheses: Extension and Comments," Econometrica, 47, 203-207; Breusch, T. S. and Pagan, A. R. (1980)...Savin, N. E. (1977), "Conflict Among Criteria for Testing Hypotheses in the Multivariate Linear Regression Model," Econometrica, 45, 1263-1278...
NASA Astrophysics Data System (ADS)
Sirenko, M. A.; Tarasenko, P. F.; Pushkarev, M. I.
2017-01-01
One of the most noticeable features of sign-based statistical procedures is the opportunity to build an exact test for simple hypothesis testing of parameters in a regression model. In this article, we extend the sign-based approach to the nonlinear case with dependent noise. The examined model is a multi-quantile regression, which makes it possible to test hypotheses not only about regression parameters but about noise parameters as well.
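To make the "exact test" idea concrete: under the null hypothesis, with median-zero noise, the signs of the residuals are iid Bernoulli(1/2), so a hypothesis about a parameter reduces to an exact binomial test. A sketch for a simple linear model; the paper's multi-quantile, dependent-noise setting is more general.

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(2)
x = rng.uniform(1.0, 2.0, size=40)
y = 1.5 * x + 0.3 * rng.standard_cauchy(40)   # heavy-tailed noise is fine here

beta0 = 1.0                                   # hypothesized slope, H0: beta = beta0
n_pos = int(np.sum(y - beta0 * x > 0))        # residual signs under H0
print(binomtest(n_pos, n=40, p=0.5).pvalue)   # exact, distribution-free p-value
```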
Praveen, Vijayakumar; Praveen, Shama
2017-01-01
Sudden infant death syndrome (SIDS) continues to be a major public health issue. Following its major decline since the “Back to Sleep” campaign, the incidence of SIDS has plateaued, with an annual incidence of about 1,500 SIDS-related deaths in the United States and thousands more throughout the world. The etiology of SIDS, the major cause of postneonatal mortality in the western world, is still poorly understood. Although sleeping in prone position is a major risk factor, SIDS continues to occur even in the supine sleeping position. The triple-risk model of Filiano and Kinney emphasizes the interaction between a susceptible infant during a critical developmental period and stressor(s) in the pathogenesis of SIDS. Recent evidence ranges from dysregulated autonomic control to findings of altered neurochemistry, especially the serotonergic system that plays an important role in brainstem cardiorespiratory/thermoregulatory centers. Brainstem serotonin (5-HT) and tryptophan hydroxylase-2 (TPH-2) levels have been shown to be lower in SIDS, supporting the evidence that defects in the medullary serotonergic system play a significant role in SIDS. Pathogenic bacteria and their enterotoxins have been associated with SIDS, although no direct evidence has been established. We present a new hypothesis that the infant’s gut microbiome, and/or its metabolites, by its direct effects on the gut enterochromaffin cells, stimulates the afferent gut vagal endings by releasing serotonin (paracrine effect), optimizing autoresuscitation by modulating brainstem 5-HT levels through the microbiome–gut–brain axis, thus playing a significant role in SIDS during the critical period of gut flora development and vulnerability to SIDS. The shared similarities between various risk factors for SIDS and their relationship with the infant gut microbiome support our hypothesis. Comprehensive gut-microbiome studies are required to test our hypothesis. PMID:28111624
Dathe, A; Postma, J A; Postma-Blaauw, M B; Lynch, J P
2016-09-01
Crops with reduced requirement for nitrogen (N) fertilizer would have substantial benefits in developed nations, while improving food security in developing nations. This study employs the functional structural plant model SimRoot to test the hypothesis that variation in the growth angles of axial roots of maize (Zea mays L.) is an important determinant of N capture. Six phenotypes contrasting in axial root growth angles were modelled for 42 d at seven soil nitrate levels from 10 to 250 kg ha⁻¹ in a sand and a silt loam, and five precipitation regimes ranging from 0.5× to 1.5× of an ambient rainfall pattern. Model results were compared with soil N measurements of field sites with silt loam and loamy sand textures. For optimal nitrate uptake, root foraging must coincide with nitrate availability in the soil profile, which depends on soil type and precipitation regime. The benefit of specific root architectures for efficient N uptake increases with decreasing soil N content, while the effect of soil type increases with increasing soil N level. Extreme root architectures are beneficial under extreme environmental conditions. Extremely shallow root systems perform well under reduced precipitation, but perform poorly with ambient and greater precipitation. Dimorphic phenotypes with normal or shallow seminal and very steep nodal roots performed well in all scenarios, and consistently outperformed the steep phenotypes. Nitrate uptake increased under reduced leaching conditions in the silt loam and with low precipitation. Results support the hypothesis that root growth angles are primary determinants of N acquisition in maize. With decreasing soil N status, optimal angles resulted in 15-50% greater N acquisition over 42 d. Optimal root phenotypes for N capture varied with soil and precipitation regimes, suggesting that genetic selection for root phenotypes could be tailored to specific environments. © The Author 2016. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
DOT National Transportation Integrated Search
2016-04-01
The objective of the Dynamic Interrogative Data Capture (DIDC) algorithms and software is to optimize the capture and transmission of vehicle-based data under a range of dynamically configurable messaging strategies. The key hypothesis of DIDC is tha...
Hypothesis-driven methods to augment human cognition by optimizing cortical oscillations
Horschig, Jörn M.; Zumer, Johanna M.; Bahramisharif, Ali
2014-01-01
Cortical oscillations have been shown to represent fundamental functions of a working brain, e.g., communication, stimulus binding, error monitoring, and inhibition, and are directly linked to behavior. Recent studies intervening with these oscillations have demonstrated effective modulation of both the oscillations and behavior. In this review, we collect evidence in favor of how hypothesis-driven methods can be used to augment cognition by optimizing cortical oscillations. We elaborate their potential usefulness for three target groups: healthy elderly, patients with attention deficit/hyperactivity disorder, and healthy young adults. We discuss the relevance of neuronal oscillations in each group and show how each of them can benefit from the manipulation of functionally-related oscillations. Further, we describe methods for manipulation of neuronal oscillations including direct brain stimulation as well as indirect task alterations. We also discuss practical considerations about the proposed techniques. In conclusion, we propose that insights from neuroscience should guide techniques to augment human cognition, which in turn can provide a better understanding of how the human brain works. PMID:25018706
Warren, D L; Iglesias, T L
2012-06-01
The 'expensive-tissue hypothesis' states that investment in one metabolically costly tissue necessitates decreased investment in other tissues and has been one of the keystone concepts used in studying the evolution of metabolically expensive tissues. The trade-offs expected under this hypothesis have been investigated in comparative studies in a number of clades, yet support for the hypothesis is mixed. Nevertheless, the expensive-tissue hypothesis has been used to explain everything from the evolution of the human brain to patterns of reproductive investment in bats. The ambiguous support for the hypothesis may be due to interspecific differences in selection, which could lead to spurious results both positive and negative. To control for this, we conduct a study of trade-offs within a single species, Thalassoma bifasciatum, a coral reef fish that exhibits more intraspecific variation in a single tissue (testes) than is seen across many of the clades previously analysed in studies of tissue investment. This constitutes a robust test of the constraints posited under the expensive-tissue hypothesis that is not affected by many of the factors that may confound interspecific studies. However, we find no evidence of trade-offs between investment in testes and investment in liver or brain, which are typically considered to be metabolically expensive. Our results demonstrate that the frequent rejection of the expensive-tissue hypothesis may not be an artefact of interspecific differences in selection and suggest that organisms may be capable of compensating for substantial changes in tissue investment without sacrificing mass in other expensive tissues. © 2012 The Authors. Journal of Evolutionary Biology © 2012 European Society For Evolutionary Biology.
Multi-arm group sequential designs with a simultaneous stopping rule.
Urach, S; Posch, M
2016-12-30
Multi-arm group sequential clinical trials are efficient designs for comparing multiple treatments to a control. They allow treatment effects to be tested already at interim analyses and can have a lower average sample number than fixed sample designs. Their operating characteristics depend on the stopping rule: we consider simultaneous stopping, where the whole trial is stopped as soon as the null hypothesis of no treatment effect can be rejected for any of the arms, and separate stopping, where recruitment is stopped only for arms in which a significant treatment effect has been demonstrated, while the other arms continue. For both stopping rules, the family-wise error rate can be controlled by the closed testing procedure applied to group sequential tests of intersection and elementary hypotheses. The group sequential boundaries for the separate stopping rule also control the family-wise error rate if the simultaneous stopping rule is applied. However, we show that for the simultaneous stopping rule, one can apply improved, less conservative stopping boundaries for local tests of elementary hypotheses. We derive corresponding improved Pocock and O'Brien type boundaries as well as boundaries optimized to maximize power or minimize the average sample number, and investigate the operating characteristics and small sample properties of the resulting designs. For a given power to reject at least one null hypothesis, the simultaneous stopping rule requires a lower average sample number than the separate stopping rule. This comes at the cost of a lower power to reject all null hypotheses. Some of this loss in power can be regained by applying the improved stopping boundaries for the simultaneous stopping rule. The procedures are illustrated with clinical trials in systemic sclerosis and narcolepsy. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
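The error-rate claims above are easy to probe by simulation. Below is a minimal Monte Carlo sketch of a two-arms-versus-control group sequential design with simultaneous stopping; the critical value 2.41 is a Pocock-type constant Bonferroni-adjusted for two comparisons, an illustrative stand-in for the paper's optimized closed-testing boundaries, and all sample sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def trial_rejects(n_per_stage=50, n_stages=2, crit=2.41):
    """One two-arms-vs-control group sequential trial under the global null.

    Returns True if any elementary null is (falsely) rejected. With
    simultaneous stopping, the first rejection ends the whole trial.
    crit is an illustrative Pocock-type, Bonferroni-adjusted constant,
    not the paper's optimized boundary.
    """
    ctrl, arms = [], ([], [])
    for _ in range(n_stages):
        ctrl.extend(rng.normal(0.0, 1.0, n_per_stage))
        for arm in arms:
            arm.extend(rng.normal(0.0, 1.0, n_per_stage))
        for arm in arms:
            n = len(arm)
            z = (np.mean(arm) - np.mean(ctrl)) / np.sqrt(2.0 / n)
            if z > crit:
                return True        # simultaneous stopping: trial over
    return False

fwer = np.mean([trial_rejects() for _ in range(20_000)])
print(f"estimated family-wise error rate: {fwer:.4f}")  # stays below 0.025
```

That the estimated family-wise error rate lands below the nominal one-sided 0.025 illustrates why less conservative boundaries are possible under simultaneous stopping.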
Precision oncology: origins, optimism, and potential.
Prasad, Vinay; Fojo, Tito; Brada, Michael
2016-02-01
Imatinib, the first and arguably the best targeted therapy, became the springboard for developing drugs aimed at molecular targets deemed crucial to tumours. As this development unfolded, a revolution in the speed and cost of genetic sequencing occurred. The result--an armamentarium of drugs and an array of molecular targets--set the stage for precision oncology, a hypothesis that cancer treatment could be markedly improved if therapies were guided by a tumour's genomic alterations. Drawing lessons from the biological basis of cancer and recent empirical investigations, we take a more measured view of precision oncology's promise. Ultimately, the promise is not our concern, but the threshold at which we declare success. We review reports of precision oncology alongside those of precision diagnostics and novel radiotherapy approaches. Although confirmatory evidence is scarce, these interventions have been widely endorsed. We conclude that the current path will probably not be successful or, at a minimum, will have to undergo substantive adjustments before it can be successful. For the sake of patients with cancer, we hope one form of precision oncology will deliver on its promise. However, until confirmatory studies are completed, precision oncology remains unproven, and as such, a hypothesis in need of rigorous testing. Copyright © 2016 Elsevier Ltd. All rights reserved.
Micó, Victor; Martín, Roberto; Lasunción, Miguel A; Ordovás, Jose M; Daimiel, Lidia
2016-03-01
The recent description of the presence of exogenous plant microRNAs from rice in human plasma had profound implications for the interpretation of microRNAs function in human health. If validated, these results suggest that food should not be considered only as a macronutrient and micronutrient supplier but it could also be a way of genomic interchange between kingdoms. Subsequently, several studies have tried to replicate these results in rice and other plant foods and most of them have failed to find plant microRNAs in human plasma. In this scenario, we aimed to detect plant microRNAs in beer and extra virgin olive oil (EVOO)--two plant-derived liquid products frequently consumed in Spain--as well as in human plasma after an acute ingestion of EVOO. Our hypothesis was that microRNAs present in beer and EVOO raw material could survive manufacturing processes, be part of these liquid products, be absorbed by human gut and circulate in human plasma. To test this hypothesis, we first optimized the microRNA extraction protocol to extract microRNAs from beer and EVOO, and then tried to detect microRNAs in those samples and in plasma samples of healthy volunteers after an acute ingestion of EVOO.
Synchrony and the binding problem in macaque visual cortex
Dong, Yi; Mihalas, Stefan; Qiu, Fangtu; von der Heydt, Rüdiger; Niebur, Ernst
2009-01-01
We tested the binding-by-synchrony hypothesis which proposes that object representations are formed by synchronizing spike activity between neurons that code features of the same object. We studied responses of 32 pairs of neurons recorded with microelectrodes 3 mm apart in the visual cortex of macaques performing a fixation task. Upon mapping the receptive fields of the neurons, a quadrilateral was generated so that two of its sides were centered in the receptive fields at the optimal orientations. This one-figure condition was compared with a two-figure condition in which the neurons were stimulated by two separate figures, keeping the local edges in the receptive fields identical. For each neuron, we also determined its border ownership selectivity (H. Zhou, H. S. Friedman, & R. von der Heydt, 2000). We examined both synchronization and correlation at nonzero time lag. After correcting for effects of the firing rate, we found that synchrony did not depend on the binding condition. However, finding synchrony in a pair of neurons was correlated with finding border-ownership selectivity in both members of the pair. This suggests that the synchrony reflected the connectivity in the network that generates border ownership assignment. Thus, we have not found evidence to support the binding-by-synchrony hypothesis. PMID:19146262
A long-term earthquake rate model for the central and eastern United States from smoothed seismicity
Moschetti, Morgan P.
2015-01-01
I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimize the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2×10⁻³. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
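For readers who want the mechanics of adaptive smoothing, here is a minimal Python sketch in the spirit of the model described: each event's Gaussian kernel bandwidth is its distance to the n-th nearest neighboring epicenter. The bandwidth floor d_min is a hypothetical stand-in for the paper's stabilization of smoothing distances in low-seismicity regions, and all coordinates and constants are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def adaptive_smoothed_rate(epicenters, grid, n_neighbor=3, d_min=5.0):
    """Adaptive smoothed seismicity on a grid (Helmstetter-style sketch).

    Each event gets a 2-D Gaussian kernel whose bandwidth is its
    distance to the n-th nearest neighboring event; the floor d_min (km)
    stands in for the paper's low-seismicity stabilization. Coordinates
    are assumed already projected to km.
    """
    tree = cKDTree(epicenters)
    # k includes the event itself at distance zero, hence n_neighbor + 1
    dists, _ = tree.query(epicenters, k=n_neighbor + 1)
    bandwidths = np.maximum(dists[:, -1], d_min)
    rate = np.zeros(len(grid))
    for (x, y), h in zip(epicenters, bandwidths):
        r2 = (grid[:, 0] - x) ** 2 + (grid[:, 1] - y) ** 2
        rate += np.exp(-r2 / (2.0 * h * h)) / (2.0 * np.pi * h * h)
    return rate   # relative event density per km^2

# toy usage: a tight cluster plus diffuse background seismicity
rng = np.random.default_rng(1)
events = np.vstack([rng.normal(0, 10, (150, 2)), rng.normal(80, 25, (50, 2))])
gx, gy = np.meshgrid(np.linspace(-50, 150, 41), np.linspace(-50, 150, 41))
grid = np.column_stack([gx.ravel(), gy.ravel()])
print(f"peak smoothed density: {adaptive_smoothed_rate(events, grid).max():.4f}")
```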
Mechanisms of eyewitness suggestibility: tests of the explanatory role hypothesis.
Rindal, Eric J; Chrobak, Quin M; Zaragoza, Maria S; Weihing, Caitlin A
2017-10-01
In a recent paper, Chrobak and Zaragoza (Journal of Experimental Psychology: General, 142(3), 827-844, 2013) proposed the explanatory role hypothesis, which posits that the likelihood of developing false memories for post-event suggestions is a function of the explanatory function the suggestion serves. In support of this hypothesis, they provided evidence that participant-witnesses were especially likely to develop false memories for their forced fabrications when their fabrications helped to explain outcomes they had witnessed. In three experiments, we test the generality of the explanatory role hypothesis as a mechanism of eyewitness suggestibility by assessing whether this hypothesis can predict suggestibility errors in (a) situations where the post-event suggestions are provided by the experimenter (as opposed to fabricated by the participant), and (b) across a variety of memory measures and measures of recollective experience. In support of the explanatory role hypothesis, participants were more likely to subsequently freely report (E1) and recollect the suggestions as part of the witnessed event (E2, source test) when the post-event suggestion helped to provide a causal explanation for a witnessed outcome than when it did not serve this explanatory role. Participants were also less likely to recollect the suggestions as part of the witnessed event (on measures of subjective experience) when their explanatory strength had been reduced by the presence of an alternative explanation that could explain the same outcome (E3, source test + warning). Collectively, the results provide strong evidence that the search for explanatory coherence influences people's tendency to misremember witnessing events that were only suggested to them.
Effects of dividing attention during encoding on perceptual priming of unfamiliar visual objects.
Soldan, Anja; Mangels, Jennifer A; Cooper, Lynn A
2008-11-01
According to the distractor-selection hypothesis (Mulligan, 2003), dividing attention during encoding reduces perceptual priming when responses to non-critical (i.e., distractor) stimuli are selected frequently and simultaneously with critical stimulus encoding. Because direct support for this hypothesis comes exclusively from studies using familiar word stimuli, the present study tested whether the predictions of the distractor-selection hypothesis extend to perceptual priming of unfamiliar visual objects using the possible/impossible object decision test. Consistent with the distractor-selection hypothesis, Experiments 1 and 2 found no reduction in priming when the non-critical stimuli were presented infrequently and non-synchronously with the critical target stimuli, even though explicit recognition memory was reduced. In Experiment 3, non-critical stimuli were presented frequently and simultaneously during encoding of critical stimuli; however, no decrement in priming was detected, even when encoding time was reduced. These results suggest that priming in the possible/impossible object decision test is relatively immune to reductions in central attention and that not all aspects of the distractor-selection hypothesis generalise to priming of unfamiliar visual objects. Implications for theoretical models of object decision priming are discussed.
Sex and Class Differences in Parent-Child Interaction: A Test of Kohn's Hypothesis
ERIC Educational Resources Information Center
Gecas, Viktor; Nye, F. Ivan
1974-01-01
This paper focuses on Melvin Kohn's suggestive hypothesis that white-collar parents stress the development of internal standards of conduct in their children while blue-collar parents are more likely to react on the basis of the consequences of the child's behavior. This hypothesis was supported. (Author)
Assess the Critical Period Hypothesis in Second Language Acquisition
ERIC Educational Resources Information Center
Du, Lihong
2010-01-01
The Critical Period Hypothesis aims to investigate the reason for significant difference between first language acquisition and second language acquisition. Over the past few decades, researchers carried out a series of studies to test the validity of the hypothesis. Although there were certain limitations in these studies, most of their results…
Further Evidence on the Weak and Strong Versions of the Screening Hypothesis in Greece.
ERIC Educational Resources Information Center
Lambropoulos, Haris S.
1992-01-01
Uses Greek data for 1981 and 1985 to test screening hypothesis by replicating method proposed by Psacharopoulos. Credentialism, or sheepskin effect of education, directly challenges human capital theory, which views education as a productivity augmenting process. Results do not support the strong version of the screening hypothesis and suggest…
A Clinical Evaluation of the Competing Sources of Input Hypothesis
ERIC Educational Resources Information Center
Fey, Marc E.; Leonard, Laurence B.; Bredin-Oja, Shelley L.; Deevy, Patricia
2017-01-01
Purpose: Our purpose was to test the competing sources of input (CSI) hypothesis by evaluating an intervention based on its principles. This hypothesis proposes that children's use of main verbs without tense is the result of their treating certain sentence types in the input (e.g., "Was 'she laughing'?") as models for declaratives…
Experimental comparisons of hypothesis test and moving average based combustion phase controllers.
Gao, Jinwu; Wu, Yuhu; Shen, Tielong
2016-11-01
For engine control, combustion phase is the most effective and direct parameter for improving fuel efficiency. In this paper, a statistical control strategy based on a hypothesis-test criterion is discussed. Taking the location of peak pressure (LPP) as the combustion-phase indicator, a statistical model of LPP is first proposed, and the controller design method is then developed on the basis of both Z and T tests. For comparison, a moving-average-based control strategy is also presented and implemented in this study. Experiments on a spark ignition gasoline engine at various operating conditions show that the hypothesis-test-based controller is able to regulate LPP close to the set point while maintaining rapid transient response, and the variance of LPP is also well constrained. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
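A minimal sketch of the Z-test branch of such a controller, assuming a known cycle-to-cycle standard deviation (a one-sample t test, e.g. scipy's stats.ttest_1samp, would replace the Z test when it must be estimated); the set point, gain, and window length are illustrative, not the paper's calibration:

```python
import numpy as np
from scipy import stats

def lpp_controller_step(lpp_window, setpoint_deg=8.0, sigma=2.0,
                        alpha=0.05, gain=0.5):
    """One update of a hypothesis-test-based combustion-phase controller.

    lpp_window: recent cycle-resolved LPP samples (deg after TDC).
    sigma is treated as known, so a Z test applies. All constants are
    illustrative values, not an engine calibration.
    """
    n = len(lpp_window)
    mean_lpp = np.mean(lpp_window)
    z = (mean_lpp - setpoint_deg) / (sigma / np.sqrt(n))
    if abs(z) > stats.norm.ppf(1 - alpha / 2):    # reject H0: LPP on target
        return -gain * (mean_lpp - setpoint_deg)  # correct spark timing
    return 0.0                                    # cannot reject: hold timing

rng = np.random.default_rng(3)
window = rng.normal(10.0, 2.0, 30)   # combustion running ~2 deg late
print(f"spark timing correction: {lpp_controller_step(window):+.2f} deg")
```

The design choice the abstract highlights is visible here: the controller only moves the actuator when the deviation is statistically significant, which suppresses reactions to cycle-to-cycle noise while keeping the variance of LPP constrained.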
Use of Pearson's Chi-Square for Testing Equality of Percentile Profiles across Multiple Populations.
Johnson, William D; Beyl, Robbie A; Burton, Jeffrey H; Johnson, Callie M; Romer, Jacob E; Zhang, Lei
2015-08-01
In large sample studies where distributions may be skewed and not readily transformed to symmetry, it may be of greater interest to compare different distributions in terms of percentiles rather than means. For example, it may be more informative to compare two or more populations with respect to their within-population distributions by testing the hypothesis that their respective 10th, 50th, and 90th percentiles are equal. As a generalization of the median test, the proposed test statistic is asymptotically distributed as chi-square with degrees of freedom dependent upon the number of percentiles tested and the constraints of the null hypothesis. Results from simulation studies are used to validate the nominal 0.05 significance level under the null hypothesis, and asymptotic power properties that are suitable for testing equality of percentile profiles against selected profile discrepancies for a variety of underlying distributions. A pragmatic example is provided to illustrate the comparison of the percentile profiles for four body mass index distributions.
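A minimal Python sketch of the idea, using a median-test-style construction: pooled-data percentiles define the cutpoints, and a Pearson chi-square on the resulting contingency table tests homogeneity of the percentile profiles. The paper's exact statistic and degrees of freedom depend on the hypothesized constraints, so this is an approximation, not the authors' procedure.

```python
import numpy as np
from scipy import stats

def percentile_profile_test(samples, probs=(0.10, 0.50, 0.90)):
    """Chi-square test that several populations share a percentile profile.

    Under H0 the groups share common percentiles, so pooled-data
    quantiles define cutpoints and a Pearson chi-square compares how
    each group's observations fall between them.
    """
    pooled = np.concatenate(samples)
    cuts = np.quantile(pooled, probs)
    table = np.array([np.histogram(s, bins=[-np.inf, *cuts, np.inf])[0]
                      for s in samples])
    chi2, p, dof, _ = stats.chi2_contingency(table)
    return chi2, p, dof

rng = np.random.default_rng(2)
groups = [rng.normal(0, 1, 400), rng.normal(0, 1, 400),
          rng.lognormal(0, 0.6, 400) - 1.2]   # third group is skewed
chi2, p, dof = percentile_profile_test(groups)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```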
On selecting evidence to test hypotheses: A theory of selection tasks.
Ragni, Marco; Kola, Ilir; Johnson-Laird, Philip N
2018-05-21
How individuals choose evidence to test hypotheses is a long-standing puzzle. According to an algorithmic theory that we present, it is based on dual processes: individuals' intuitions depending on mental models of the hypothesis yield selections of evidence matching instances of the hypothesis, but their deliberations yield selections of potential counterexamples to the hypothesis. The results of 228 experiments using Wason's selection task corroborated the theory's predictions. Participants made dependent choices of items of evidence: the selections in 99 experiments were significantly more redundant (using Shannon's measure) than those of 10,000 simulations of each experiment based on independent selections. Participants tended to select evidence corresponding to instances of hypotheses, or to its counterexamples, or to both. Given certain contents, instructions, or framings of the task, they were more likely to select potential counterexamples to the hypothesis. When participants received feedback about their selections in the "repeated" selection task, they switched from selections of instances of the hypothesis to selection of potential counterexamples. These results eliminated most of the 15 alternative theories of selecting evidence. In a meta-analysis, the model theory yielded a better fit of the results of 228 experiments than the one remaining theory based on reasoning rather than meaning. We discuss the implications of the model theory for hypothesis testing and for a well-known paradox of confirmation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
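The redundancy comparison can be made concrete. Assuming selection data coded as booleans for the four cards, the sketch below compares the Shannon entropy of the observed distribution over the 16 possible selection sets against simulations that keep each card's marginal selection rate but select independently; lower entropy than the simulations indicates the dependent (more redundant) choices the theory predicts. This mirrors the paper's Monte Carlo logic only loosely.

```python
import numpy as np

def shannon_entropy(counts):
    """Shannon entropy (bits) of a frequency table."""
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def redundancy_test(patterns, n_sims=10_000, seed=0):
    """patterns: (n, 4) booleans per participant for the P, not-P, Q,
    not-Q cards. Returns observed entropy and the fraction of
    independent-selection simulations at least as redundant."""
    rng = np.random.default_rng(seed)
    weights = 1 << np.arange(4)                 # encode subsets as 0..15
    n = len(patterns)
    codes = patterns.astype(int) @ weights
    h_obs = shannon_entropy(np.bincount(codes, minlength=16))
    marginals = patterns.mean(axis=0)           # per-card selection rates
    h_sim = np.empty(n_sims)
    for i in range(n_sims):
        sim = (rng.random((n, 4)) < marginals).astype(int) @ weights
        h_sim[i] = shannon_entropy(np.bincount(sim, minlength=16))
    return h_obs, np.mean(h_sim <= h_obs)

# toy data: 80% of participants choose exactly {P, Q}, the rest at random
rng = np.random.default_rng(1)
obs = np.where(rng.random(100)[:, None] < 0.8,
               np.array([True, False, True, False]),
               rng.random((100, 4)) < 0.5)
h, frac = redundancy_test(obs)
print(f"observed entropy {h:.2f} bits; fraction of sims as redundant: {frac:.4f}")
```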
Does Maltreatment Beget Maltreatment? A Systematic Review of the Intergenerational Literature
Thornberry, Terence P.; Knight, Kelly E.; Lovegrove, Peter J.
2014-01-01
In this paper, we critically review the literature testing the cycle of maltreatment hypothesis which posits continuity in maltreatment across adjacent generations. That is, we examine whether a history of maltreatment victimization is a significant risk factor for the later perpetration of maltreatment. We begin by establishing 11 methodological criteria that studies testing this hypothesis should meet. They include such basic standards as using representative samples, valid and reliable measures, prospective designs, and different reporters for each generation. We identify 47 studies that investigated this issue and then evaluate them with regard to the 11 methodological criteria. Overall, most of these studies report findings consistent with the cycle of maltreatment hypothesis. Unfortunately, at the same time, few of them satisfy the basic methodological criteria that we established; indeed, even the stronger studies in this area only meet about half of them. Moreover, the methodologically stronger studies present mixed support for the hypothesis. As a result, the positive association often reported in the literature appears to be based largely on the methodologically weaker designs. Based on our systematic methodological review, we conclude that this small and methodologically weak body of literature does not provide a definitive test of the cycle of maltreatment hypothesis. We conclude that it is imperative to develop more robust and methodologically adequate assessments of this hypothesis to more accurately inform the development of prevention and treatment programs. PMID:22673145
Estimating Required Contingency Funds for Construction Projects using Multiple Linear Regression
2006-03-01
Breusch-Pagan test, in which the null hypothesis states that the residuals have constant variance. The alternate hypothesis is that the residuals do not … variance; the Breusch-Pagan test provides statistical evidence that the assumption is justified. For the proposed model, the p-value is 0.173 … entire test sample.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Stephen A.; Sigeti, David E.
This is a set of slides about Bayesian hypothesis testing in settings where many hypotheses are tested. The conclusions are the following: the value of the Bayes factor obtained when using the median of the posterior marginal is almost the minimum value of the Bayes factor; the value of τ² which minimizes the Bayes factor is a reasonable choice for this parameter; and this allows a likelihood ratio to be computed which is least favorable to H₀.
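The τ²-minimization point can be illustrated in the standard normal-mean setup (an assumed model, not necessarily the one in the slides): for H₀: μ = 0 versus H₁: μ ~ N(0, τ²) and an estimate x̄ with standard error s, the Bayes factor BF₀₁ is minimized at τ² = x̄² − s², giving the likelihood ratio least favorable to H₀.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def bf01(tau2, xbar, se):
    """BF01 for H0: mu = 0 vs H1: mu ~ N(0, tau2), given xbar ~ N(mu, se^2)."""
    return norm.pdf(xbar, 0.0, se) / norm.pdf(xbar, 0.0, np.sqrt(se**2 + tau2))

xbar, se = 2.2, 1.0   # illustrative estimate and standard error
res = minimize_scalar(bf01, bounds=(1e-8, 1e4), args=(xbar, se), method="bounded")
z = xbar / se
print(f"minimizing tau^2 = {res.x:.3f} (closed form xbar^2 - se^2 = {xbar**2 - se**2:.3f})")
print(f"minimum BF01 = {res.fun:.4f} (closed form |z|*exp((1-z^2)/2) = {abs(z)*np.exp((1-z*z)/2):.4f})")
```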
Karl, Zachary J; Scharf, Michael E
2015-10-01
Termites have recently drawn much attention as models for biomass processing, mainly due to their lignocellulose digestion capabilities and mutualisms with cellulolytic gut symbionts. This research used the lower termite Reticulitermes flavipes to investigate gut enzyme activity changes in response to feeding on five diverse lignocellulosic diets (cellulose filter paper [FP], pine wood [PW], beech wood xylan [X], corn stover [CS], and soybean residue [SB]). Our objectives were to compare whole-gut digestive enzyme activity and host versus symbiont contributions to enzyme activity after feeding on these diets. Our hypothesis was that enzyme activities would vary among diets as an adaptive mechanism enabling termites and symbiota to optimally utilize variable resources. Results support our "diet-adaptation" hypothesis and further indicate that, in most cases, host contributions are greater than those of symbionts with respect to the enzymes and activities studied. The results obtained thus provide indications as to which types of transcriptomic resources, termite or symbiont, are most relevant for developing recombinant enzyme cocktails tailored to specific feedstocks. With regard to the agricultural feedstocks tested (CS and SB), our results suggest endoglucanase and exoglucanase (cellobiohydrolase) activities are most relevant for CS breakdown; whereas endoglucanase and xylosidase activities are relevant for SB breakdown. However, other unexplored activities than those tested may also be important for breakdown of these two feedstocks. These findings provide new protein-level insights into diet adaptation by termites, and also complement host-symbiont metatranscriptomic studies that have been completed for R. flavipes after FP, PW, CS, and SB feeding. © 2015 Wiley Periodicals, Inc.
Kimberly, David A; Salice, Christopher J
2014-07-01
The Intergovernmental Panel on Climate Change projects that global climate change will have significant impacts on environmental conditions including potential effects on sensitivity of organisms to environmental contaminants. The objective of this study was to test the climate-induced toxicant sensitivity (CITS) hypothesis in which acclimation to altered climate parameters increases toxicant sensitivity. Adult Physa pomilia snails were acclimated to a near optimal 22 °C or a high-normal 28 °C for 28 days. After 28 days, snails from each temperature group were challenged with either low (150 μg/L) or high (300 μg/L) cadmium at each temperature (28 or 22 °C). In contrast to the CITS hypothesis, we found that acclimation temperature did not have a strong influence on cadmium sensitivity except at the high cadmium test concentration where snails acclimated to 28 °C were more cadmium tolerant. However, snails that experienced a switch in temperature for the cadmium challenge, regardless of the switch direction, were the most sensitive to cadmium. Within the snails that were switched between temperatures, snails acclimated at 28 °C and then exposed to high cadmium at 22 °C exhibited significantly greater mortality than those snails acclimated to 22 °C and then exposed to cadmium at 28 °C. Our results point to the importance of temperature variability in increasing toxicant sensitivity but also suggest a potentially complex cost of temperature acclimation. Broadly, the type of temporal stressor exposures we simulated may reduce overall plasticity in responses to stress ultimately rendering populations more vulnerable to adverse effects.
Watson, Robert A
2014-08-01
To test the hypothesis that machine learning algorithms increase the predictive power to classify surgical expertise using surgeons' hand motion patterns. In 2012 at the University of North Carolina at Chapel Hill, 14 surgical attendings and 10 first- and second-year surgical residents each performed two bench model venous anastomoses. During the simulated tasks, the participants wore an inertial measurement unit on the dorsum of their dominant (right) hand to capture their hand motion patterns. The pattern from each bench model task performed was preprocessed into a symbolic time series and labeled as expert (attending) or novice (resident). The labeled hand motion patterns were processed and used to train a Support Vector Machine (SVM) classification algorithm. The trained algorithm was then tested for discriminative/predictive power against unlabeled (blinded) hand motion patterns from tasks not used in the training. The Lempel-Ziv (LZ) complexity metric was also measured from each hand motion pattern, with an optimal threshold calculated to separately classify the patterns. The LZ metric classified unlabeled (blinded) hand motion patterns into expert and novice groups with an accuracy of 70% (sensitivity 64%, specificity 80%). The SVM algorithm had an accuracy of 83% (sensitivity 86%, specificity 80%). The results confirmed the hypothesis. The SVM algorithm increased the predictive power to classify blinded surgical hand motion patterns into expert versus novice groups. With further development, the system used in this study could become a viable tool for low-cost, objective assessment of procedural proficiency in a competency-based curriculum.
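To illustrate the two classifiers, here is a minimal sketch with synthetic symbolized motion streams (the "experts" mostly repeat a motif, the "novices" are closer to random); the data generator, alphabet, and SVM settings are hypothetical stand-ins for the study's inertial-measurement-unit pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def lz_complexity(s):
    """Lempel-Ziv (LZ76-style) complexity: number of new phrases found
    while scanning the symbol string s from left to right."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the phrase while it still occurs earlier in the string
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

rng = np.random.default_rng(4)

def synth_stream(expert, n=500):
    """Hypothetical symbolized hand-motion stream: experts mostly repeat
    a 4-symbol motif, novices are random over the same alphabet."""
    if expert:
        base = np.tile([0, 1, 2, 3], n // 4 + 1)[:n]
        noisy = rng.random(n) < 0.1
        return np.where(noisy, rng.integers(0, 4, n), base)
    return rng.integers(0, 4, n)

streams = [synth_stream(e) for e in [True] * 24 + [False] * 24]
X = np.array([[lz_complexity("".join(map(str, s)))] for s in streams])
y = np.array([1] * 24 + [0] * 24)        # 1 = expert, 0 = novice
acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
print(f"cross-validated accuracy on the LZ-complexity feature: {acc:.2f}")
```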
Hare, Todd; Rangel, Antonio
2013-01-01
Optimal decision-making often requires exercising self-control. A growing fMRI literature has implicated the dorsolateral prefrontal cortex (dlPFC) in successful self-control, but due to the limitations inherent in BOLD measures of brain activity, the neurocomputational role of this region has not been resolved. Here we exploit the high temporal resolution and whole-brain coverage of event-related potentials (ERPs) to test the hypothesis that dlPFC affects dietary self-control through two different mechanisms: attentional filtering and value modulation. Whereas attentional filtering of sensory input should occur early in the decision process, value modulation should occur later on, after the computation of stimulus values begins. Hungry human subjects were asked to make food choices while we measured neural activity using ERP in a natural condition, in which they responded freely and did not exhibit a tendency to regulate their diet, and in a self-control condition, in which they were given a financial incentive to lose weight. We then measured various neural markers associated with the attentional filtering and value modulation mechanisms across the decision period to test for changes in neural activity during the exercise of self-control. Consistent with the hypothesis, we found evidence for top-down attentional filtering early on in the decision period (150–200 ms poststimulus onset) as well as evidence for value modulation later in the process (450–650 ms poststimulus onset). We also found evidence that dlPFC plays a role in the deployment of both mechanisms. PMID:24285897
Computer-Assisted Problem Solving in School Mathematics
ERIC Educational Resources Information Center
Hatfield, Larry L.; Kieren, Thomas E.
1972-01-01
A test of the hypothesis that writing and using computer programs related to selected mathematical content positively affects performance on those topics. Results particularly support the hypothesis. (MM)
[Examination of the hypothesis 'the factors and mechanisms of superiority'].
Sierra-Fitzgerald, O; Quevedo-Caicedo, J; López-Calderón, M G
INTRODUCTION. The hypothesis of Geschwind and Galaburda suggests that specific cognitive superiority arises as a result of an alteration in the development of the nervous system. In this article we review the coexistence of superiority and inferiority. PATIENTS AND METHODS. We studied six children aged between 6 and 8 years at the Instituto de Bellas Artes Antonio Maria Valencia in Cali, Colombia, with an educational level between second and third grade of primary school and of medium-low socioeconomic status. The children were judged by music experts to have superior musical ability, which is how the concept of superiority was operationalized; inferiority was operationalized as performance on neuropsychological tests 1.5 SD or more below the norm for the same age. We estimated the perinatal neurological risk in each case. Subsequently, the children's general intelligence and specific cognitive abilities were evaluated; for the former, the WISC-R and the MSCA were used. Neuropsychological profiles were obtained by broad evaluation using a verbal fluency test, a test using counters, the Boston vocabulary test, the Wechsler memory scale, a sequential verbal memory test, a superimposed-figures test, the Piaget-Head battery, the Rey-Osterrieth complex figure and the Wisconsin card sorting test. RESULTS. The results showed slight to moderate deficits in practical construction ability and mild defects of memory and conceptual abilities. In general, the results supported the hypothesis tested. The mechanisms of superiority proposed in the classical hypothesis mainly involve the contralateral hemisphere; in this study the ipsilateral mechanism was more important.
Mothers Who Kill Their Offspring: Testing Evolutionary Hypothesis in a 110-Case Italian Sample
ERIC Educational Resources Information Center
Camperio Ciani, Andrea S.; Fontanesi, Lilybeth
2012-01-01
Objectives: This research aimed to identify incidents of mothers in Italy killing their own children and to test an adaptive evolutionary hypothesis to explain their occurrence. Methods: 110 cases of mothers killing 123 of their own offspring from 1976 to 2010 were analyzed. Each case was classified using 13 dichotomic variables. Descriptive…
ERIC Educational Resources Information Center
Martin, Todd F.; White, James M.; Perlman, Daniel
2003-01-01
This study used a sample of 2,379 seventh through twelfth graders in 5 Protestant denominations to test the hypothesis that parental influences on religious faith are mediated through peer selection and congregation selection. Findings revealed that peer and parental influence remained stable during the adolescent years. Parental influence did not…
Bayesian Hypothesis Testing for Psychologists: A Tutorial on the Savage-Dickey Method
ERIC Educational Resources Information Center
Wagenmakers, Eric-Jan; Lodewyckx, Tom; Kuriyal, Himanshu; Grasman, Raoul
2010-01-01
In the field of cognitive psychology, the "p"-value hypothesis test has established a stranglehold on statistical reporting. This is unfortunate, as the "p"-value provides at best a rough estimate of the evidence that the data provide for the presence of an experimental effect. An alternative and arguably more appropriate measure of evidence is…
Parenting as a Dynamic Process: A Test of the Resource Dilution Hypothesis
ERIC Educational Resources Information Center
Strohschein, Lisa; Gauthier, Anne H.; Campbell, Rachel; Kleparchuk, Clayton
2008-01-01
In this paper, we tested the resource dilution hypothesis, which posits that, because parenting resources are finite, the addition of a new sibling depletes parenting resources for other children in the household. We estimated growth curve models on the self-reported parenting practices of mothers using four waves of data collected biennially…
ERIC Educational Resources Information Center
Fan, Weihua; Hancock, Gregory R.
2012-01-01
This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…
USDA-ARS?s Scientific Manuscript database
The impact of rater bias and assessment method on hypothesis testing was studied for different experimental designs for plant disease assessment using balanced and unbalanced data sets. Data sets with the same number of replicate estimates for each of two treatments are termed ‘balanced’, and those ...
ERIC Educational Resources Information Center
Stone, Emily A.; Shackelford, Todd K.; Buss, David M.
2012-01-01
This study tests the hypothesis presented by Penke, Denissen, and Miller (2007a) that condition-dependent traits, including intelligence, attractiveness, and health, are universally and uniformly preferred as characteristics in a mate relative to traits that are less indicative of condition, including personality traits. We analyzed…
ERIC Educational Resources Information Center
Edlin, James M.; Lyle, Keith B.
2013-01-01
The simple act of repeatedly looking left and right can enhance subsequent cognition, including divergent thinking, detection of matching letters from visual arrays, and memory retrieval. One hypothesis is that saccade execution enhances subsequent cognition by altering attentional control. To test this hypothesis, we compared performance…
The Genesis of Pedophilia: Testing the "Abuse-to-Abuser" Hypothesis.
ERIC Educational Resources Information Center
Fedoroff, J. Paul; Pinkus, Shari
1996-01-01
This study tested three versions of the "abuse-to-abuser" hypothesis by comparing men with personal histories of sexual abuse and men without sexual abuse histories. There was a statistically non-significant trend for assaulted offenders to be more likely as adults to commit genital assaults on children. Implications for the abuse-to-abuser…
2004-2006 Puget Sound Traffic Choices Study
Transportation Secure Data Center | NREL
The 2004-2006 Puget Sound Traffic Choices Study tested the hypothesis that time-of-day variable … Administration for a pilot project on congestion-based tolling. Methodology: To test the hypothesis, the study …
Assessment of Theory of Mind in Children with Communication Disorders: Role of Presentation Mode
ERIC Educational Resources Information Center
van Buijsen, Marit; Hendriks, Angelique; Ketelaars, Mieke; Verhoeven, Ludo
2011-01-01
Children with communication disorders have problems with both language and social interaction. The theory-of-mind hypothesis provides an explanation for these problems, and different tests have been developed to test this hypothesis. However, different modes of presentation are used in these tasks, which make the results difficult to compare. In…
Wiring Economy of Pyramidal Cells in the Juvenile Rat Somatosensory Cortex
Anton-Sanchez, Laura; Bielza, Concha; Larrañaga, Pedro; DeFelipe, Javier
2016-01-01
Ever since Cajal hypothesized that the structure of neurons is designed in such a way as to save space, time and matter, numerous researchers have analyzed wiring properties at different scales of brain organization. Here we test the hypothesis that individual pyramidal cells, the most abundant type of neuron in the cerebral cortex, optimize brain connectivity in terms of wiring length. In this study, we analyze the neuronal wiring of complete basal arborizations of pyramidal neurons in layer II, III, IV, Va, Vb and VI of the hindlimb somatosensory cortical region of postnatal day 14 rats. For each cell, we search for the optimal basal arborization and compare its length with the length of the real dendritic structure. Here the optimal arborization is defined as the arborization that has the shortest total wiring length provided that all neuron bifurcations are respected and the extent of the dendritic arborizations remain unchanged. We use graph theory and evolutionary computation techniques to search for the minimal wiring arborizations. Despite morphological differences between pyramidal neurons located in different cortical layers, we found that the neuronal wiring is near-optimal in all cases (the biggest difference between the shortest synthetic wiring found for a dendritic arborization and the length of its real wiring was less than 5%). We found, however, that the real neuronal wiring was significantly closer to the best solution found in layers II, III and IV. Our studies show that the wiring economy of cortical neurons is related not to the type of neurons or their morphological complexities but to general wiring economy principles. PMID:27832100
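A crude way to probe wiring near-optimality, assuming only a point cloud of soma, bifurcation, and terminal positions, is to compare the real wiring length with a Euclidean minimum spanning tree over the same points. This is far weaker than the paper's graph-theoretic and evolutionary search, which preserves the bifurcation structure, but it gives a quick comparator of the same flavor.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def wiring_ratio(points, real_length):
    """Ratio of a dendritic arbor's real wiring length to the length of a
    Euclidean minimum spanning tree over the same 3-D points (soma,
    bifurcations, terminals). Values near 1 suggest near-optimal wiring;
    the MST ignores the paper's constraint of preserving bifurcations,
    so it is only a rough proxy for the published optimization."""
    mst = minimum_spanning_tree(squareform(pdist(points)))
    return real_length / mst.sum()

rng = np.random.default_rng(5)
pts = rng.normal(0.0, 50.0, (40, 3))   # toy arbor points (micrometres)
print(f"real / MST wiring-length ratio: {wiring_ratio(pts, 2400.0):.2f}")
```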
Positive affect and psychosocial processes related to health.
Steptoe, Andrew; O'Donnell, Katie; Marmot, Michael; Wardle, Jane
2008-05-01
Positive affect is associated with longevity and favourable physiological function. We tested the hypothesis that positive affect is related to health-protective psychosocial characteristics independently of negative affect and socio-economic status. Both positive and negative affect were measured by aggregating momentary samples collected repeatedly over 1 day, and health-related psychosocial factors were assessed by questionnaire in a sample of 716 men and women aged 58-72 years. Positive affect was associated with greater social connectedness, emotional and practical support, optimism and adaptive coping responses, and lower depression, independently of age, gender, household income, paid employment, smoking status, and negative affect. Negative affect was independently associated with negative relationships, greater exposure to chronic stress, depressed mood, pessimism, and avoidant coping. Positive affect may be beneficial for health outcomes in part because it is a component of a profile of protective psychosocial characteristics.
A dual-task investigation of automaticity in visual word processing
NASA Technical Reports Server (NTRS)
McCann, R. S.; Remington, R. W.; Van Selst, M.
2000-01-01
An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hawes, M.C.
1995-03-01
The objective of this research was to develop a model system to study border cell separation in transgenic pea roots. In addition, the hypothesis that genes encoding pectolytic enzymes in the root cap play a role in the programmed separation of root border cells from the root tip was tested. The following objectives have been accomplished: (1) the use of transgenic hairy roots to study border cell separation has been optimized for Pisum sativum; (2) a cDNA encoding a root cap pectinmethylesterase (PME) has been cloned; (3) PME and polygalacturonase activities in cell walls of the root cap have been characterized and shown to be correlated with border cell separation. A fusion gene encoding pectate lyase has also been transformed into pea hairy root cells.
Phonological universals in early childhood: Evidence from sonority restrictions
Berent, Iris; Harder, Katherine; Lennertz, Tracy
2012-01-01
Across languages, onsets with large sonority distances are preferred to those with smaller distances (e.g., bw>bd>lb; Greenberg, 1978). Optimality theory (Prince & Smolensky, 2004) attributes such facts to grammatical restrictions that are universally active in all grammars. To test this hypothesis, here, we examine whether children extend putatively universal sonority restrictions to onsets unattested in their language. Participants (M=4;04 years) were presented with pairs of auditory words—either identical (e.g., lbif→lbif) or epenthetically related (e.g., lbif→lebif)—and asked to judge their identity. Results showed that, like adults, children’s ability to detect epenthetic distortions was monotonically related to sonority distance (bw>bd>lb), and their performance was inexplicable by several statistical and phonetic factors. These findings suggest that sonority restrictions are active in early childhood and their scope is broad. PMID:22328807
Morehead, Kayla; Dunlosky, John; Rawson, Katherine A; Bishop, Melissa; Pyc, Mary A
2018-04-01
When study is spaced across sessions (versus massed within a single session), final performance is greater after spacing. This spacing effect may have multiple causes, and according to the mediator hypothesis, part of the effect can be explained by the use of mediator-based strategies. This hypothesis proposes that when study is spaced across sessions, rather than massed within a session, more mediators will be generated that are longer lasting and hence more mediators will be available to support criterion recall. In two experiments, participants were randomly assigned to study paired associates using either a spaced or massed schedule. They reported strategy use for each item during study trials and during the final test. Consistent with the mediator hypothesis, participants who had spaced (as compared to massed) practice reported using more mediators on the final test. This use of effective mediators also statistically accounted for some, but not all, of the spacing effect on final performance.
A more powerful test based on ratio distribution for retention noninferiority hypothesis.
Deng, Ling; Chen, Gang
2013-03-11
Rothmann et al. (2003) proposed a method for the statistical inference of the fraction retention noninferiority (NI) hypothesis. A fraction retention hypothesis is defined as a ratio of the new treatment effect versus the control effect in the context of a time-to-event endpoint. One of the major concerns in using this method in the design of an NI trial is that, with a limited sample size, the power of the study is usually very low. This can make an NI trial infeasible, particularly with a time-to-event endpoint. To improve power, Wang et al. (2006) proposed a ratio test based on asymptotic normality theory. Under a strong assumption (equal variance of the NI test statistic under the null and alternative hypotheses), the sample size using Wang's test was much smaller than that using Rothmann's test. However, in practice, the assumption of equal variance is generally questionable for an NI trial design. This assumption is removed in the ratio test proposed in this article, which is derived directly from a Cauchy-like ratio distribution. In addition, using this method, the fundamental assumption of Rothmann's test, that the observed control effect is always positive, that is, that the observed hazard ratio of placebo over the control is greater than 1, is no longer necessary. Without assuming equal variance under the null and alternative hypotheses, the sample size required for an NI trial can be significantly reduced by using the proposed ratio test for a fraction retention NI hypothesis.
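The core idea, a null distribution for a ratio of effect estimates with no equal-variance assumption, can be sketched by Monte Carlo (this is not the article's closed-form Cauchy-like derivation, and all effect sizes below are illustrative):

```python
import numpy as np

def ratio_critical_value(beta_c, se_t, se_c, retention=0.5,
                         alpha=0.025, n_sim=200_000, seed=6):
    """Monte Carlo critical value for a fraction-retention ratio test.

    Simulates the ratio of two independent normal effect estimates at
    the null boundary (treatment effect = retention * control effect),
    allowing unequal variances, and returns the (1 - alpha) quantile.
    beta_c is on a log hazard ratio scale; numbers are illustrative.
    """
    rng = np.random.default_rng(seed)
    beta_t_hat = rng.normal(retention * beta_c, se_t, n_sim)
    beta_c_hat = rng.normal(beta_c, se_c, n_sim)
    return np.quantile(beta_t_hat / beta_c_hat, 1 - alpha)

crit = ratio_critical_value(beta_c=0.4, se_t=0.10, se_c=0.08)
print(f"reject H0 (claim >50% retention) if the observed ratio exceeds {crit:.3f}")
```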
Proposal and validation of a clinical trunk control test in individuals with spinal cord injury.
Quinzaños, J; Villa, A R; Flores, A A; Pérez, R
2014-06-01
One of the problems that arise in spinal cord injury (SCI) is alteration in trunk control. Despite the need for standardized scales, none exist for evaluating trunk control in SCI. To propose and validate a trunk control test in individuals with SCI. National Institute of Rehabilitation, Mexico. The test was developed and later evaluated for reliability and for criterion, content, and construct validity. We carried out 531 tests on 177 patients and found high inter- and intra-rater reliability. In terms of criterion validity, analysis of variance demonstrated a statistically significant difference in the test score of patients with adequate or inadequate trunk control according to the assessment of a group of experts. A receiver operating characteristic curve was plotted for optimizing the instrument's cutoff point, which was determined at 13 points, with a sensitivity of 98% and a specificity of 92.2%. With regard to construct validity, the correlation between the proposed test and the spinal cord independence measure (SCIM) was 0.873 (P=0.001) and that with the evolution time was 0.437 (P=0.001). For testing the hypothesis with qualitative variables, the Kruskal-Wallis test was performed, which resulted in a statistically significant difference between the scores in the proposed scale of each group defined by these variables. It was shown experimentally that the proposed trunk control test is valid and reliable. Furthermore, the test can be used for all patients with SCI regardless of the type and level of injury.
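Cutoff optimization of this kind is commonly done by maximizing Youden's J along the ROC curve. A minimal sketch on simulated scores follows; the score distributions are invented, whereas the study's actual data yielded a cutoff of 13 points with 98% sensitivity and 92.2% specificity.

```python
import numpy as np
from sklearn.metrics import roc_curve

def optimal_cutoff(labels, scores):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1,
    a standard way to optimize a ROC-derived cutpoint."""
    fpr, tpr, thresholds = roc_curve(labels, scores)
    best = np.argmax(tpr - fpr)
    return thresholds[best], tpr[best], 1.0 - fpr[best]

rng = np.random.default_rng(7)
scores = np.concatenate([rng.normal(9, 3, 80),     # inadequate trunk control
                         rng.normal(18, 3, 97)])   # adequate trunk control
labels = np.array([0] * 80 + [1] * 97)
cut, sens, spec = optimal_cutoff(labels, scores)
print(f"cutoff {cut:.1f}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```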
Optimizing multiple-choice tests as tools for learning.
Little, Jeri L; Bjork, Elizabeth Ligon
2015-01-01
Answering multiple-choice questions with competitive alternatives can enhance performance on a later test, not only on questions about the information previously tested, but also on questions about related information not previously tested--in particular, on questions about information pertaining to the previously incorrect alternatives. In the present research, we assessed a possible explanation for this pattern: When multiple-choice questions contain competitive incorrect alternatives, test-takers are led to retrieve previously studied information pertaining to all of the alternatives in order to discriminate among them and select an answer, with such processing strengthening later access to information associated with both the correct and incorrect alternatives. Supporting this hypothesis, we found enhanced performance on a later cued-recall test for previously nontested questions when their answers had previously appeared as competitive incorrect alternatives in the initial multiple-choice test, but not when they had previously appeared as noncompetitive alternatives. Importantly, however, competitive alternatives were not more likely than noncompetitive alternatives to be intruded as incorrect responses, indicating that a general increased accessibility for previously presented incorrect alternatives could not be the explanation for these results. The present findings, replicated across two experiments (one in which corrective feedback was provided during the initial multiple-choice testing, and one in which it was not), thus strongly suggest that competitive multiple-choice questions can trigger beneficial retrieval processes for both tested and related information, and the results have implications for the effective use of multiple-choice tests as tools for learning.
A Matched Filter Hypothesis for Cognitive Control
Thompson-Schill, Sharon L.
2013-01-01
The prefrontal cortex exerts top-down influences on several aspects of higher-order cognition by functioning as a filtering mechanism that biases bottom-up sensory information toward a response that is optimal in context. However, research also indicates that not all aspects of complex cognition benefit from prefrontal regulation. Here we review and synthesize this research with an emphasis on the domains of learning and creative cognition, and outline how the appropriate level of cognitive control in a given situation can vary depending on the organism's goals and the characteristics of the given task. We offer a Matched Filter Hypothesis for cognitive control, which proposes that the optimal level of cognitive control is task-dependent, with high levels of cognitive control best suited to tasks that are explicit, rule-based, verbal or abstract, and can be accomplished given the capacity limits of working memory and with low levels of cognitive control best suited to tasks that are implicit, reward-based, non-verbal or intuitive, and which can be accomplished irrespective of working memory limitations. Our approach promotes a view of cognitive control as a tool adapted to a subset of common challenges, rather than an all-purpose optimization system suited to every problem the organism might encounter. PMID:24200920
Haller, Moira; Chassin, Laurie
2014-09-01
The present study utilized longitudinal data from a community sample (n = 377; 166 trauma-exposed; 54% males; 73% non-Hispanic Caucasian; 22% Hispanic; 5% other ethnicity) to test whether pretrauma substance use problems increase risk for trauma exposure (high-risk hypothesis) or posttraumatic stress disorder (PTSD) symptoms (susceptibility hypothesis), whether PTSD symptoms increase risk for later alcohol/drug problems (self-medication hypothesis), and whether the association between PTSD symptoms and alcohol/drug problems is attributable to shared risk factors (shared vulnerability hypothesis). Logistic and negative binomial regressions were performed in a path analysis framework. Results provided the strongest support for the self-medication hypothesis, such that PTSD symptoms predicted higher levels of later alcohol and drug problems, over and above the influences of pretrauma family risk factors, pretrauma substance use problems, trauma exposure, and demographic variables. Results partially supported the high-risk hypothesis, such that adolescent substance use problems increased risk for assaultive violence exposure but did not influence overall risk for trauma exposure. There was no support for the susceptibility hypothesis. Finally, there was little support for the shared vulnerability hypothesis. Neither trauma exposure nor preexisting family adversity accounted for the link between PTSD symptoms and later substance use problems. Rather, PTSD symptoms mediated the effect of pretrauma family adversity on later alcohol and drug problems, thereby supporting the self-medication hypothesis. These findings make important contributions to better understanding the directions of influence among traumatic stress, PTSD symptoms, and substance use problems.
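One piece of such an analysis, the negative binomial regression of later substance use problems on PTSD symptoms adjusting for pretrauma risk, can be sketched as follows; the variable names and simulated data are hypothetical, and the study's full path analysis involves more equations than this single regression.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 377                                  # sample size matching the abstract
family_risk = rng.normal(size=n)         # pretrauma family adversity (toy)
pre_sub = rng.poisson(np.exp(0.3 * family_risk))       # pretrauma problems
ptsd = rng.normal(0.4 * family_risk, 1.0)              # PTSD symptom level
# gamma mixing adds overdispersion, so a negative binomial model fits
lam = np.exp(0.2 + 0.5 * ptsd + 0.2 * pre_sub) * rng.gamma(2.0, 0.5, n)
later_problems = rng.poisson(lam)

X = sm.add_constant(np.column_stack([ptsd, pre_sub, family_risk]))
fit = sm.NegativeBinomial(later_problems, X).fit(disp=0)
print(fit.params)  # a positive PTSD coefficient is the self-medication signal
```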
Test of the Brink-Axel Hypothesis for the Pygmy Dipole Resonance
NASA Astrophysics Data System (ADS)
Martin, D.; von Neumann-Cosel, P.; Tamii, A.; Aoi, N.; Bassauer, S.; Bertulani, C. A.; Carter, J.; Donaldson, L.; Fujita, H.; Fujita, Y.; Hashimoto, T.; Hatanaka, K.; Ito, T.; Krugmann, A.; Liu, B.; Maeda, Y.; Miki, K.; Neveling, R.; Pietralla, N.; Poltoratska, I.; Ponomarev, V. Yu.; Richter, A.; Shima, T.; Yamamoto, T.; Zweidinger, M.
2017-11-01
The gamma strength function and level density of 1⁻ states in 96Mo have been extracted from a high-resolution study of the (p⃗,p⃗′) reaction at 295 MeV and extreme forward angles. By comparison with compound nucleus γ decay experiments, this allows a test of the generalized Brink-Axel hypothesis in the energy region of the pygmy dipole resonance. The Brink-Axel hypothesis is commonly assumed in astrophysical reaction network calculations and states that the gamma strength function in nuclei is independent of the structure of the initial and final state. The present results validate the Brink-Axel hypothesis for 96Mo and provide independent confirmation of the methods used to separate gamma strength function and level density in γ decay experiments.
Baral, Subhasish; Roy, Rahul; Dixit, Narendra M
2018-05-09
A fraction of chronic hepatitis C patients treated with direct-acting antivirals (DAAs) achieved sustained virological responses (SVR), or cure, despite having detectable viremia at the end of treatment (EOT). This observation, termed EOT+/SVR, remains puzzling and precludes rational optimization of treatment durations. One hypothesis to explain EOT+/SVR, the immunologic hypothesis, argues that the viral decline induced by DAAs during treatment reverses the exhaustion of cytotoxic T lymphocytes (CTLs), which then clear the infection after treatment. Whether the hypothesis is consistent with data on viral load changes in patients who experienced EOT+/SVR is unknown. Here, we constructed a mathematical model of viral kinetics incorporating the immunologic hypothesis and compared its predictions with patient data. We found the predictions to be in quantitative agreement with patient data. Using the model, we unraveled an underlying bistability that gives rise to EOT+/SVR and presents a new avenue to optimize treatment durations. Infected cells trigger both activation and exhaustion of CTLs. CTLs in turn kill infected cells. Because of these competing interactions, two stable steady states, chronic infection and viral clearance, emerge, separated by an unstable steady state with intermediate viremia. When treatment during chronic infection drives viremia sufficiently below the unstable state, spontaneous viral clearance results post-treatment, marking EOT+/SVR. The time required to achieve this reduction in viremia defines the minimum treatment duration needed to ensure SVR, which our model can quantify. Estimating the parameters defining an individual's CTL response to HCV infection would enable the application of our model to personalize treatment durations. © 2018 The Authors Immunology & Cell Biology published by John Wiley & Sons Australia, Ltd on behalf of Australasian Society for Immunology Inc.
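To make the bistability argument concrete, below is a minimal two-variable sketch of an infected-cell/CTL model in the spirit described: CTL activation saturates at low antigen load while exhaustion dominates at high load, so chronic infection and clearance coexist as stable states. The equations, parameter values, and DAA efficacy term are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Infected cells I and CTLs E (scaled units). Activation saturates early
# (half-saturation Kp), exhaustion dominates at high antigen load (Km),
# so the untreated system has two stable states: chronic and cleared.
r, K, k = 0.8, 10.0, 1.0   # infected-cell growth rate, capacity, CTL killing
s, d = 0.1, 0.1            # CTL source and death rates
p, Kp = 1.0, 0.1           # CTL activation rate and half-saturation
m, Km = 2.0, 1.0           # CTL exhaustion rate and half-saturation

def rhs(t, y, eps):
    I, E = y
    dI = (1.0 - eps) * r * I * (1.0 - I / K) - k * I * E
    dE = s + p * E * I / (I + Kp) - m * E * I / (I + Km) - d * E
    return [dI, dE]

def simulate(eot, eps=0.99, t_end=400.0):
    """DAA with efficacy eps until t = eot, then follow off treatment."""
    on = solve_ivp(rhs, (0.0, eot), [8.6, 0.11], args=(eps,))  # start at chronic state
    off = solve_ivp(rhs, (eot, t_end), on.y[:, -1], args=(0.0,))
    return on.y[0, -1], off.y[0, -1]

for eot in (5, 15, 40):
    i_eot, i_final = simulate(eot)
    print(f"EOT t={eot:3d}: I(EOT)={i_eot:.2e}, I(late)={i_final:.2e}")
```

With these toy parameters, a short course leaves viremia above the unstable state and the infection rebounds, while a long enough course pushes it below, after which clearance continues off treatment despite detectable infected cells at EOT, i.e., EOT+/SVR.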
Energy efficiency drives the global seasonal distribution of birds.
Somveille, Marius; Rodrigues, Ana S L; Manica, Andrea
2018-06-01
The uneven distribution of biodiversity on Earth is one of the most general and puzzling patterns in ecology. Many hypotheses have been proposed to explain it, based on evolutionary processes or on constraints related to geography and energy. However, previous studies investigating these hypotheses have been largely descriptive due to the logistical difficulties of conducting controlled experiments on such large geographical scales. Here, we use bird migration (the seasonal redistribution of approximately 15% of bird species across the world) as a natural experiment for testing the species-energy relationship, the hypothesis that animal diversity is driven by energetic constraints. We develop a mechanistic model of bird distributions across the world, and across seasons, based on simple ecological and energetic principles. Using this model, we show that bird species distributions optimize the balance between energy acquisition and energy expenditure while taking into account competition with other species. These findings support, and provide a mechanistic explanation for, the species-energy relationship. The findings also provide a general explanation of migration as a mechanism that allows birds to optimize their energy budget in the face of seasonality and competition. Finally, our mechanistic model provides a tool for predicting how ecosystems will respond to global anthropogenic change.
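As a cartoon of such an energy-balance mechanism, the sketch below assigns a species a summer and a winter site by maximizing year-round energy acquired minus thermoregulation and migration costs. The latitudinal supply curves, cost scales, and exhaustive pair search are invented for illustration, and competition (which the actual model includes) is ignored.

```python
import numpy as np

# Toy seasonal site choice: pick the (summer, winter) site pair with the
# highest net energy balance over the year. All numbers are illustrative.
rng = np.random.default_rng(7)
n_sites = 50
lat = np.linspace(-60, 60, n_sites)
supply = {"summer": 5 + 4 * np.cos(np.radians(lat - 20)),   # energy supply shifts
          "winter": 5 + 4 * np.cos(np.radians(lat + 20))}   # with the seasons
thermo_cost = {"summer": 0.02 * np.abs(lat),                # cost of keeping warm
               "winter": 0.05 * np.abs(lat)}

def best_pair(mig_cost_per_deg=0.02):
    # Net balance for every summer/winter pair: seasonal gains minus a
    # migration cost proportional to the latitudinal distance travelled.
    net = (supply["summer"] - thermo_cost["summer"])[:, None] \
        + (supply["winter"] - thermo_cost["winter"])[None, :] \
        - mig_cost_per_deg * np.abs(lat[:, None] - lat[None, :])
    i, j = np.unravel_index(np.argmax(net), net.shape)
    return lat[i], lat[j]

s, w = best_pair()
print(f"breeding (summer) latitude {s:.0f}, wintering latitude {w:.0f}")
```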
NASA Astrophysics Data System (ADS)
Fathololoumi, S.; Dupont, E.; Wasilewski, Z. R.; Chan, C. W. I.; Razavipour, S. G.; Laframboise, S. R.; Huang, Shengxi; Hu, Q.; Ban, D.; Liu, H. C.
2013-03-01
We experimentally investigated the effect of oscillator strength (radiative transition diagonality) on the performance of resonant phonon-based terahertz quantum cascade lasers that have been optimized using a simplified density matrix formalism. Our results show that the maximum lasing temperature (Tmax) is roughly independent of laser transition diagonality within the lasing frequency range of the devices under test (3.2-3.7 THz) when cavity loss is kept low. Furthermore, the threshold current can be lowered by employing more diagonal transition designs, which can effectively suppress parasitic leakage caused by intermediate resonance between the injection and the downstream extraction levels. Nevertheless, more diagonal designs may sacrifice even more of the current-carrying capacity of the designed lasing channel, leading to electrical instability and, potentially, complete inhibition of the device's lasing operation. We propose a hypothesis based on electric-field domain formation and competition/switching among different current-carrying channels to explain the electrical instability observed in devices with lower oscillator strengths. The study indicates that designers should not only maximize Tmax during device optimization but also consider the risk of electrical instability in device operation.
Seebacher, Frank; James, Rob S
2008-03-01
Thermoregulation and thermal sensitivity of performance are thought to have coevolved so that performance is optimized within the selected body temperature range. However, locomotor performance in thermoregulating crocodiles (Crocodylus porosus) is plastic and maxima shift to different selected body temperatures in different thermal environments. Here we test the hypothesis that muscle metabolic and biomechanical parameters are optimized at the body temperatures selected in different thermal environments. Hence, we related indices of anaerobic (lactate dehydrogenase) and aerobic (cytochrome c oxidase) metabolic capacities and myofibrillar ATPase activity to the biomechanics of isometric and work loop caudofemoralis muscle function. Maximal isometric stress (force per muscle cross-sectional area) did not change with thermal acclimation, but muscle work loop power output increased with cold acclimation as a result of shorter activation and relaxation times. The thermal sensitivity of myofibrillar ATPase activity decreased with cold acclimation in caudofemoralis muscle. Neither aerobic nor anaerobic metabolic capacities were directly linked to changes in muscle performance during thermal acclimation, although there was a negative relationship between anaerobic capacity and isometric twitch stress in cold-acclimated animals. We conclude that by combining thermoregulation with plasticity in biomechanical function, crocodiles maximize performance in environments with highly variable thermal properties.
Optimization of biomass composition explains microbial growth-stoichiometry relationships
Franklin, O.; Hall, E.K.; Kaiser, C.; Battin, T.J.; Richter, A.
2011-01-01
Integrating microbial physiology and biomass stoichiometry opens far-reaching possibilities for linking microbial dynamics to ecosystem processes. For example, the growth-rate hypothesis (GRH) predicts positive correlations among growth rate, RNA content, and biomass phosphorus (P) content. Such relationships have been used to infer patterns of microbial activity, resource availability, and nutrient recycling in ecosystems. However, for microorganisms it is unclear under which resource conditions the GRH applies. We developed a model to test whether the response of microbial biomass stoichiometry to variable resource stoichiometry can be explained by a trade-off among cellular components that maximizes growth. The results show mechanistically why the GRH is valid under P limitation but not under N limitation. We also show why variability of growth rate-biomass stoichiometry relationships is lower under P limitation than under N or C limitation. These theoretical results are supported by experimental data on macromolecular composition (RNA, DNA, and protein) and biomass stoichiometry from two different bacteria. In addition, compared to a model with strictly homeostatic biomass, the optimization mechanism we suggest results in increased microbial N and P mineralization during organic-matter decomposition. Therefore, this mechanism may also have important implications for our understanding of nutrient cycling in ecosystems.
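The trade-off that the model formalizes can be caricatured in a few lines: choose the RNA fraction maximizing growth when realized growth is the minimum of translation capacity and nutrient-limited synthesis. All coefficients below are illustrative assumptions, not the paper's fitted values, yet the toy reproduces the qualitative asymmetry: under P limitation the optimum is interior (RNA tracks growth, as the GRH predicts), whereas under N limitation the optimum is capped by machinery rather than by nutrient demand.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy growth maximization over biomass composition. Growth is either limited
# by translation machinery (proportional to RNA fraction) or by the nutrient
# supply divided by the per-biomass nutrient demand. Illustrative numbers.
c = 5.0                         # growth per unit RNA fraction (ribosome efficiency)
p_rna, p_other = 0.09, 0.005    # P content of RNA vs rest of biomass (g P / g)
n_prot, n_rna = 0.16, 0.15      # N content of protein and RNA (g N / g)

def growth(f_rna, supply, nutrient):
    f_prot = 0.6                               # fixed protein fraction, for simplicity
    if nutrient == "P":
        demand = p_rna * f_rna + p_other * (1 - f_rna)
    else:                                      # N limitation
        demand = n_prot * f_prot + n_rna * f_rna
    return min(c * f_rna, supply / demand)     # machinery- vs nutrient-limited

for nutrient, supply in [("P", 0.02), ("N", 0.5)]:
    res = minimize_scalar(lambda f: -growth(f, supply, nutrient),
                          bounds=(0.01, 0.4), method="bounded")
    print("%s-limited: optimal RNA fraction = %.3f, growth = %.3f"
          % (nutrient, res.x, -res.fun))
```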
NASA Astrophysics Data System (ADS)
Lehmann, Rüdiger; Lösler, Michael
2017-12-01
Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the use of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution to the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or one of its alternatives is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that both work but behave differently, sometimes even producing different results. Hypothesis tests are well established in geodesy but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria such as the AIC.
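For a flavor of the information-theoretic route, the sketch below compares a null no-deformation model against a single-block-shift alternative on synthetic levelling-style data, using least squares and the AIC. The data, the candidate models, and the Gaussian AIC form (constants dropped) are assumptions of this example, not the paper's test scenarios.

```python
import numpy as np

# Synthetic two-epoch levelling-style data: points 10..19 are shifted by 5 mm
# between epochs; measurement noise is 2 mm in each epoch.
rng = np.random.default_rng(1)
n = 20
epoch1 = rng.normal(0.0, 0.002, n)                       # heights, epoch 1 (m)
epoch2 = epoch1 + np.r_[np.zeros(10), np.full(10, 0.005)]
epoch2 += rng.normal(0.0, 0.002, n)
diff = epoch2 - epoch1                                   # observed deformation signal

def aic(rss, n_obs, k_params):
    # Gaussian log-likelihood up to additive constants: n*log(RSS/n) + 2k.
    return n_obs * np.log(rss / n_obs) + 2 * k_params

# Null model: no deformation (all differences are pure noise).
rss0 = np.sum(diff ** 2)

# Alternative model: one unknown common shift for the suspected block.
shift_hat = diff[10:].mean()
resid = diff.copy()
resid[10:] -= shift_hat
rss1 = np.sum(resid ** 2)

print("AIC null       :", aic(rss0, n, 0))
print("AIC deformation:", aic(rss1, n, 1))   # the lower AIC wins
```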
2014-01-01
Background In complex large-scale experiments, in addition to simultaneously considering a large number of features, multiple hypotheses are often being tested for each feature. This leads to a problem of multi-dimensional multiple testing. For example, in gene expression studies over ordered categories (such as time-course or dose-response experiments), interest is often in testing differential expression across several categories for each gene. In this paper, we consider a framework for testing multiple sets of hypotheses, which can be applied to a wide range of problems. Results We adopt the concept of the overall false discovery rate (OFDR) for controlling false discoveries at the hypothesis-set level. Based on an existing procedure for identifying differentially expressed gene sets, we discuss a general two-step hierarchical hypothesis-set testing procedure, which controls the overall false discovery rate under independence across hypothesis sets. In addition, we discuss the concept of the mixed-directional false discovery rate (mdFDR), and extend the general procedure to enable directional decisions for two-sided alternatives. We applied the framework to the case of microarray time-course/dose-response experiments, and proposed three procedures for testing differential expression and making multiple directional decisions for each gene. Simulation studies confirm the control of the OFDR and mdFDR by the proposed procedures under independence and positive correlations across genes. Simulation results also show that two of our new procedures achieve higher power than previous methods. Finally, the proposed methodology is applied to a microarray dose-response study, to identify 17β-estradiol-sensitive genes in breast cancer cells that are induced at low concentrations. Conclusions The framework we discuss provides a platform for multiple testing procedures covering situations involving two (or potentially more) sources of multiplicity. The framework is easy to use and adaptable to various practical settings that frequently occur in large-scale experiments. Procedures generated from the framework are shown to maintain control of the OFDR and mdFDR, quantities that are especially relevant in the case of multiple hypothesis-set testing. The procedures work well in both simulations and real datasets, and are shown to have better power than existing methods. PMID:24731138
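A minimal sketch of a two-step hierarchical procedure of this general shape (screen hypothesis sets with a combined p-value, then test within the selected sets) might look as follows. Fisher's combination and within-set Benjamini-Hochberg are illustrative choices here; the OFDR/mdFDR guarantees of the actual procedures depend on the specific screening and within-set rules derived in the paper.

```python
import numpy as np
from scipy import stats

def bh_reject(pvals, q):
    """Benjamini-Hochberg: boolean mask of rejected hypotheses at level q."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    thresh = q * np.arange(1, p.size + 1) / p.size
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(p.size, dtype=bool)
    mask[order[:k]] = True
    return mask

def hierarchical_test(pval_sets, q=0.05):
    """pval_sets: list of arrays, one array of p-values per set (gene)."""
    # Step 1: one screening p-value per set via Fisher's combination,
    # then BH across the set-level p-values.
    chi2 = [-2.0 * np.log(p).sum() for p in pval_sets]
    screen_p = [stats.chi2.sf(c, df=2 * len(p)) for c, p in zip(chi2, pval_sets)]
    selected = bh_reject(screen_p, q)
    # Step 2: within each selected set, test the individual hypotheses
    # (within-set BH at level q is one simple choice among several).
    return [bh_reject(p, q) if sel else np.zeros(len(p), dtype=bool)
            for p, sel in zip(pval_sets, selected)]

# Example: 3 genes, each tested across 4 ordered dose contrasts.
rng = np.random.default_rng(0)
sets = [rng.uniform(size=4), np.array([1e-4, 0.003, 0.2, 0.6]), rng.uniform(size=4)]
print(hierarchical_test(sets))
```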
Alcohol dependence and opiate dependence: lack of relationship in mice.
Goldstein, A; Judson, B A
1971-04-16
According to a recently proposed hypothesis, physical dependence upon alcohol is due to the formation of an endogenous opiate. We tested the hypothesis by determining whether or not ethanol-dependent mice would show typical opiate-dependent behavior (withdrawal jumping syndrome) when challenged with the opiate antagonist naloxone. Our results do not support the hypothesis.
On the Flexibility of Social Source Memory: A Test of the Emotional Incongruity Hypothesis
ERIC Educational Resources Information Center
Bell, Raoul; Buchner, Axel; Kroneisen, Meike; Giang, Trang
2012-01-01
A popular hypothesis in evolutionary psychology posits that reciprocal altruism is supported by a cognitive module that helps cooperative individuals to detect and remember cheaters. Consistent with this hypothesis, a source memory advantage for faces of cheaters (better memory for the cheating context in which these faces were encountered) was…
ERIC Educational Resources Information Center
Sackett, Paul R.
1982-01-01
Recent findings suggest individuals seek evidence to confirm initial hypotheses about other people, and that seeking confirmatory evidence makes it likely that a hypothesis will be confirmed. Examined the generalizability of these findings to the employment interview. Consistent use of confirmatory hypothesis testing strategies was not found.…