An empirical comparison of a dynamic software testability metric to static cyclomatic complexity
NASA Technical Reports Server (NTRS)
Voas, Jeffrey M.; Miller, Keith W.; Payne, Jeffrey E.
1993-01-01
This paper compares the dynamic testability prediction technique termed 'sensitivity analysis' to the static testability technique termed cyclomatic complexity. The application that we chose in this empirical study is a CASE generated version of a B-737 autoland system. For the B-737 system we analyzed, we isolated those functions that we predict are more prone to hide errors during system/reliability testing. We also analyzed the code with several other well-known static metrics. This paper compares and contrasts the results of sensitivity analysis to the results of the static metrics.
Authors’ response: mirror neurons: tests and testability.
Catmur, Caroline; Press, Clare; Cook, Richard; Bird, Geoffrey; Heyes, Cecilia
2014-04-01
Commentators have tended to focus on the conceptual framework of our article, the contrast between genetic and associative accounts of mirror neurons, and to challenge it with additional possibilities rather than empirical data. This makes the empirically focused comments especially valuable. The mirror neuron debate is replete with ideas; what it needs now are system-level theories and careful experiments – tests and testability.
“Feature Detection” vs. “Predictive Coding” Models of Plant Behavior
Calvo, Paco; Baluška, František; Sims, Andrew
2016-01-01
In this article we consider the possibility that plants exhibit anticipatory behavior, a mark of intelligence. If plants are able to anticipate and respond accordingly to varying states of their surroundings, as opposed to merely responding online to environmental contingencies, then such capacity may be in principle testable, and subject to empirical scrutiny. Our main thesis is that adaptive behavior can only take place by way of a mechanism that predicts the environmental sources of sensory stimulation. We propose to test for anticipation in plants experimentally by contrasting two empirical hypotheses: “feature detection” and “predictive coding.” We spell out what these contrasting hypotheses consist of by way of illustration from the animal literature, and consider how to transfer the rationale involved to the plant literature. PMID:27757094
Lattice of quantum predictions
NASA Astrophysics Data System (ADS)
Drieschner, Michael
1993-10-01
What is the structure of reality? Physics is supposed to answer this question, but a purely empiristic view is not sufficient to explain its ability to do so. Quantum mechanics has forced us to think more deeply about what a physical theory is. There are preconditions every physical theory must fulfill. It has to contain, e.g., rules for empirically testable predictions. Those preconditions give physics a structure that is “a priori” in the Kantian sense. An example is given how the lattice structure of quantum mechanics can be understood along these lines.
Perceptual Decision-Making as Probabilistic Inference by Neural Sampling.
Haefner, Ralf M; Berkes, Pietro; Fiser, József
2016-05-04
We address two main challenges facing systems neuroscience today: understanding the nature and function of cortical feedback between sensory areas and of correlated variability. Starting from the old idea of perception as probabilistic inference, we show how to use knowledge of the psychophysical task to make testable predictions for the influence of feedback signals on early sensory representations. Applying our framework to a two-alternative forced choice task paradigm, we can explain multiple empirical findings that have been hard to account for by the traditional feedforward model of sensory processing, including the task dependence of neural response correlations and the diverging time courses of choice probabilities and psychophysical kernels. Our model makes new predictions and characterizes a component of correlated variability that represents task-related information rather than performance-degrading noise. It demonstrates a normative way to integrate sensory and cognitive components into physiologically testable models of perceptual decision-making. Copyright © 2016 Elsevier Inc. All rights reserved.
2017-01-01
A central feature of Darwin's theory of natural selection is that it explains the purpose of biological adaptation. Here, I: emphasize the scientific importance of understanding what adaptations are for, in terms of facilitating the derivation of empirically testable predictions; discuss the population genetical basis for Darwin's theory of the purpose of adaptation, with reference to Fisher's ‘fundamental theorem of natural selection'; and show that a deeper understanding of the purpose of adaptation is achieved in the context of social evolution, with reference to inclusive fitness and superorganisms. PMID:28839927
Gardner, Andy
2017-10-06
A central feature of Darwin's theory of natural selection is that it explains the purpose of biological adaptation. Here, I: emphasize the scientific importance of understanding what adaptations are for, in terms of facilitating the derivation of empirically testable predictions; discuss the population genetical basis for Darwin's theory of the purpose of adaptation, with reference to Fisher's 'fundamental theorem of natural selection'; and show that a deeper understanding of the purpose of adaptation is achieved in the context of social evolution, with reference to inclusive fitness and superorganisms.
Empirical approaches to the study of language evolution.
Fitch, W Tecumseh
2017-02-01
The study of language evolution, and human cognitive evolution more generally, has often been ridiculed as unscientific, but in fact it differs little from many other disciplines that investigate past events, such as geology or cosmology. Well-crafted models of language evolution make numerous testable hypotheses, and if the principles of strong inference (simultaneous testing of multiple plausible hypotheses) are adopted, there is an increasing amount of relevant data allowing empirical evaluation of such models. The articles in this special issue provide a concise overview of current models of language evolution, emphasizing the testable predictions that they make, along with overviews of the many sources of data available to test them (emphasizing comparative, neural, and genetic data). The key challenge facing the study of language evolution is not a lack of data, but rather a weak commitment to hypothesis-testing approaches and strong inference, exacerbated by the broad and highly interdisciplinary nature of the relevant data. This introduction offers an overview of the field, and a summary of what needed to evolve to provide our species with language-ready brains. It then briefly discusses different contemporary models of language evolution, followed by an overview of different sources of data to test these models. I conclude with my own multistage model of how different components of language could have evolved.
Covariations in ecological scaling laws fostered by community dynamics.
Zaoli, Silvia; Giometto, Andrea; Maritan, Amos; Rinaldo, Andrea
2017-10-03
Scaling laws in ecology, intended both as functional relationships among ecologically relevant quantities and the probability distributions that characterize their occurrence, have long attracted the interest of empiricists and theoreticians. Empirical evidence exists of power laws associated with the number of species inhabiting an ecosystem, their abundances, and traits. Although their functional form appears to be ubiquitous, empirical scaling exponents vary with ecosystem type and resource supply rate. The idea that ecological scaling laws are linked has been entertained before, but the full extent of macroecological pattern covariations, the role of the constraints imposed by finite resource supply, and a comprehensive empirical verification are still unexplored. Here, we propose a theoretical scaling framework that predicts the linkages of several macroecological patterns related to species' abundances and body sizes. We show that such a framework is consistent with the stationary-state statistics of a broad class of resource-limited community dynamics models, regardless of parameterization and model assumptions. We verify predicted theoretical covariations by contrasting empirical data and provide testable hypotheses for yet unexplored patterns. We thus place the observed variability of ecological scaling exponents into a coherent statistical framework where patterns in ecology embed constrained fluctuations.
Heavy use of equations impedes communication among biologists.
Fawcett, Tim W; Higginson, Andrew D
2012-07-17
Most research in biology is empirical, yet empirical studies rely fundamentally on theoretical work for generating testable predictions and interpreting observations. Despite this interdependence, many empirical studies build largely on other empirical studies with little direct reference to relevant theory, suggesting a failure of communication that may hinder scientific progress. To investigate the extent of this problem, we analyzed how the use of mathematical equations affects the scientific impact of studies in ecology and evolution. The density of equations in an article has a significant negative impact on citation rates, with papers receiving 28% fewer citations overall for each additional equation per page in the main text. Long, equation-dense papers tend to be more frequently cited by other theoretical papers, but this increase is outweighed by a sharp drop in citations from nontheoretical papers (35% fewer citations for each additional equation per page in the main text). In contrast, equations presented in an accompanying appendix do not lessen a paper's impact. Our analysis suggests possible strategies for enhancing the presentation of mathematical models to facilitate progress in disciplines that rely on the tight integration of theoretical and empirical work.
Genetic models of homosexuality: generating testable predictions
Gavrilets, Sergey; Rice, William R
2006-01-01
Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism. PMID:17015344
The evolution of social and semantic networks in epistemic communities
NASA Astrophysics Data System (ADS)
Margolin, Drew Berkley
This study describes and tests a model of scientific inquiry as an evolving, organizational phenomenon. Arguments are derived from organizational ecology and evolutionary theory. The empirical subject of study is an epistemic community of scientists publishing on a research topic in physics: the string theoretic concept of "D-branes." The study uses evolutionary theory as a means of predicting change in the way members of the community choose concepts to communicate acceptable knowledge claims. It is argued that the pursuit of new knowledge is risky, because the reliability of a novel knowledge claim cannot be verified until after substantial resources have been invested. Using arguments from both philosophy of science and organizational ecology, it is suggested that scientists can mitigate and sensibly share the risks of knowledge discovery within the community by articulating their claims in legitimate forms, i.e., forms that are testable within and relevant to the community. Evidence from empirical studies of semantic usage suggests that the legitimacy of a knowledge claim is influenced by the characteristics of the concepts in which it is articulated. A model of conceptual retention, variation, and selection is then proposed for predicting the usage of concepts and conceptual co-occurrences in the future publications of the community, based on its past. Results substantially supported hypothesized retention and selection mechanisms. Future concept usage was predictable from previous concept usage, but was limited by conceptual carrying capacity as predicted by density dependence theory. Also as predicted, retention was stronger when the community showed a more cohesive social structure. Similarly, concepts that showed structural signatures of high testability and relevance were more likely to be selected after previous usage frequency was controlled for. By contrast, hypotheses for variation mechanisms were not supported. Surprisingly, concepts whose structural position suggested they would be easiest to discover through search processes were used less frequently, once previous usage frequency was controlled for. The study also makes a theoretical contribution by suggesting ways that evolutionary theory can be used to integrate findings from the study of science with insights from organizational communication. A variety of concrete directions for future studies of social and semantic network evolution are also proposed.
John S. Bell's concept of local causality
NASA Astrophysics Data System (ADS)
Norsen, Travis
2011-12-01
John Stewart Bell's famous theorem is widely regarded as one of the most important developments in the foundations of physics. Yet even as we approach the 50th anniversary of Bell's discovery, its meaning and implications remain controversial. Many workers assert that Bell's theorem refutes the possibility suggested by Einstein, Podolsky, and Rosen (EPR) of supplementing ordinary quantum theory with ``hidden'' variables that might restore determinism and/or some notion of an observer-independent reality. But Bell himself interpreted the theorem very differently--as establishing an ``essential conflict'' between the well-tested empirical predictions of quantum theory and relativistic local causality. Our goal is to make Bell's own views more widely known and to explain Bell's little-known formulation of the concept of relativistic local causality on which his theorem rests. We also show precisely how Bell's formulation of local causality can be used to derive an empirically testable Bell-type inequality and to recapitulate the EPR argument.
John S. Bell's concept of local causality
NASA Astrophysics Data System (ADS)
Norsen, Travis
2011-12-01
John Stewart Bell's famous theorem is widely regarded as one of the most important developments in the foundations of physics. Yet even as we approach the 50th anniversary of Bell's discovery, its meaning and implications remain controversial. Many workers assert that Bell's theorem refutes the possibility suggested by Einstein, Podolsky, and Rosen (EPR) of supplementing ordinary quantum theory with "hidden" variables that might restore determinism and/or some notion of an observer-independent reality. But Bell himself interpreted the theorem very differently—as establishing an "essential conflict" between the well-tested empirical predictions of quantum theory and relativistic local causality. Our goal is to make Bell's own views more widely known and to explain Bell's little-known formulation of the concept of relativistic local causality on which his theorem rests. We also show precisely how Bell's formulation of local causality can be used to derive an empirically testable Bell-type inequality and to recapitulate the EPR argument.
Perception as a closed-loop convergence process.
Ahissar, Ehud; Assa, Eldad
2016-05-09
Perception of external objects involves sensory acquisition via the relevant sensory organs. A widely-accepted assumption is that the sensory organ is the first station in a serial chain of processing circuits leading to an internal circuit in which a percept emerges. This open-loop scheme, in which the interaction between the sensory organ and the environment is not affected by its concurrent downstream neuronal processing, is strongly challenged by behavioral and anatomical data. We present here a hypothesis in which the perception of external objects is a closed-loop dynamical process encompassing loops that integrate the organism and its environment and converging towards organism-environment steady-states. We discuss the consistency of closed-loop perception (CLP) with empirical data and show that it can be synthesized in a robotic setup. Testable predictions are proposed for empirical distinction between open and closed loop schemes of perception.
Perception as a closed-loop convergence process
Ahissar, Ehud; Assa, Eldad
2016-01-01
Perception of external objects involves sensory acquisition via the relevant sensory organs. A widely-accepted assumption is that the sensory organ is the first station in a serial chain of processing circuits leading to an internal circuit in which a percept emerges. This open-loop scheme, in which the interaction between the sensory organ and the environment is not affected by its concurrent downstream neuronal processing, is strongly challenged by behavioral and anatomical data. We present here a hypothesis in which the perception of external objects is a closed-loop dynamical process encompassing loops that integrate the organism and its environment and converging towards organism-environment steady-states. We discuss the consistency of closed-loop perception (CLP) with empirical data and show that it can be synthesized in a robotic setup. Testable predictions are proposed for empirical distinction between open and closed loop schemes of perception. DOI: http://dx.doi.org/10.7554/eLife.12830.001 PMID:27159238
Mand, Cara; Gillam, Lynn; Delatycki, Martin B; Duncan, Rony E
2012-09-01
Predictive genetic testing is now routinely offered to asymptomatic adults at risk for genetic disease. However, testing of minors at risk for adult-onset conditions, where no treatment or preventive intervention exists, has evoked greater controversy and inspired a debate spanning two decades. This review aims to provide a detailed longitudinal analysis and concludes by examining the debate's current status and prospects for the future. Fifty-three relevant theoretical papers published between 1990 and December 2010 were identified, and interpretative content analysis was employed to catalogue discrete arguments within these papers. Novel conclusions were drawn from this review. While the debate's first voices were raised in opposition of testing and their arguments have retained currency over many years, arguments in favour of testing, which appeared sporadically at first, have gained momentum more recently. Most arguments on both sides are testable empirical claims, so far untested, rather than abstract ethical or philosophical positions. The dispute, therein, lies not so much in whether minors should be permitted to access predictive genetic testing but whether these empirical claims on the relative benefits or harms of testing should be assessed.
A test of the hypothesis that correlational selection generates genetic correlations.
Roff, Derek A; Fairbairn, Daphne J
2012-09-01
Theory predicts that correlational selection on two traits will cause the major axis of the bivariate G matrix to orient itself in the same direction as the correlational selection gradient. Two testable predictions follow from this: for a given pair of traits, (1) the sign of correlational selection gradient should be the same as that of the genetic correlation, and (2) the correlational selection gradient should be positively correlated with the value of the genetic correlation. We test this hypothesis with a meta-analysis utilizing empirical estimates of correlational selection gradients and measures of the correlation between the two focal traits. Our results are consistent with both predictions and hence support the underlying hypothesis that correlational selection generates a genetic correlation between the two traits and hence orients the bivariate G matrix. © 2012 The Author(s). Evolution© 2012 The Society for the Study of Evolution.
Cognitive architectures and language acquisition: a case study in pronoun comprehension.
VAN Rij, Jacolien; VAN Rijn, Hedderik; Hendriks, Petra
2010-06-01
In this paper we discuss a computational cognitive model of children's poor performance on pronoun interpretation (the so-called Delay of Principle B Effect, or DPBE). This cognitive model is based on a theoretical account that attributes the DPBE to children's inability as hearers to also take into account the speaker's perspective. The cognitive model predicts that child hearers are unable to do so because their speed of linguistic processing is too limited to perform this second step in interpretation. We tested this hypothesis empirically in a psycholinguistic study, in which we slowed down the speech rate to give children more time for interpretation, and in a computational simulation study. The results of the two studies confirm the predictions of our model. Moreover, these studies show that embedding a theory of linguistic competence in a cognitive architecture allows for the generation of detailed and testable predictions with respect to linguistic performance.
Factors That Affect Software Testability
NASA Technical Reports Server (NTRS)
Voas, Jeffrey M.
1991-01-01
Software faults that infrequently affect software's output are dangerous. When a software fault causes frequent software failures, testing is likely to reveal the fault before the software is releases; when the fault remains undetected during testing, it can cause disaster after the software is installed. A technique for predicting whether a particular piece of software is likely to reveal faults within itself during testing is found in [Voas91b]. A piece of software that is likely to reveal faults within itself during testing is said to have high testability. A piece of software that is not likely to reveal faults within itself during testing is said to have low testability. It is preferable to design software with higher testabilities from the outset, i.e., create software with as high of a degree of testability as possible to avoid the problems of having undetected faults that are associated with low testability. Information loss is a phenomenon that occurs during program execution that increases the likelihood that a fault will remain undetected. In this paper, I identify two brad classes of information loss, define them, and suggest ways of predicting the potential for information loss to occur. We do this in order to decrease the likelihood that faults will remain undetected during testing.
Nauta, Margaret M
2010-01-01
This article celebrates the 50th anniversary of the introduction of John L. Holland's (1959) theory of vocational personalities and work environments by describing the theory's development and evolution, its instrumentation, and its current status. Hallmarks of Holland's theory are its empirical testability and its user-friendliness. By constructing measures for operationalizing the theory's constructs, Holland and his colleagues helped ensure that the theory could be implemented in practice on a widespread basis. Empirical data offer considerable support for the existence of Holland's RIASEC types and their ordering among persons and environments. Although Holland's congruence hypotheses have received empirical support, congruence appears to have modest predictive power. Mixed support exists for Holland's hypotheses involving the secondary constructs of differentiation, consistency, and vocational identity. Evidence of the continued impact of Holland's theory on the field of counseling psychology, particularly in the area of interest assessment, can be seen from its frequent implementation in practice and its use by scholars. Ideas for future research and practice using Holland's theory are suggested.
Creativity, information, and consciousness: The information dynamics of thinking.
Wiggins, Geraint A
2018-05-07
This paper presents a theory of the basic operation of mind, Information Dynamics of Thinking, which is intended for computational implementation and thence empirical testing. It is based on the information theory of Shannon, and treats the mind/brain as an information processing organ that aims to be information-efficient, in that it predicts its world, so as to use information efficiently, and regularly re-represents it, so as to store information efficiently. The theory is presented in context of a background review of various research areas that impinge upon its development. Consequences of the theory and testable hypotheses arising from it are discussed. Copyright © 2018. Published by Elsevier B.V.
ERIC Educational Resources Information Center
Niaz, Mansoor
1991-01-01
Discusses differences between the epistemic and the psychological subject, the relationship between the epistemic subject and the ideal gas law, the development of general cognitive operations, and the empirical testability of Piaget's epistemic subject. (PR)
The spatial scaling of species interaction networks.
Galiana, Nuria; Lurgi, Miguel; Claramunt-López, Bernat; Fortin, Marie-Josée; Leroux, Shawn; Cazelles, Kevin; Gravel, Dominique; Montoya, José M
2018-05-01
Species-area relationships (SARs) are pivotal to understand the distribution of biodiversity across spatial scales. We know little, however, about how the network of biotic interactions in which biodiversity is embedded changes with spatial extent. Here we develop a new theoretical framework that enables us to explore how different assembly mechanisms and theoretical models affect multiple properties of ecological networks across space. We present a number of testable predictions on network-area relationships (NARs) for multi-trophic communities. Network structure changes as area increases because of the existence of different SARs across trophic levels, the preferential selection of generalist species at small spatial extents and the effect of dispersal limitation promoting beta-diversity. Developing an understanding of NARs will complement the growing body of knowledge on SARs with potential applications in conservation ecology. Specifically, combined with further empirical evidence, NARs can generate predictions of potential effects on ecological communities of habitat loss and fragmentation in a changing world.
Gaillard, Jean-Michel; Lemaître, Jean-François
2017-12-01
Williams' evolutionary theory of senescence based on antagonistic pleiotropy has become a landmark in evolutionary biology, and more recently in biogerontology and evolutionary medicine. In his original article, Williams launched a set of nine "testable deductions" from his theory. Although some of these predictions have been repeatedly discussed, most have been overlooked and no systematic evaluation of the whole set of Williams' original predictions has been performed. For the sixtieth anniversary of the publication of the Williams' article, we provide an updated evaluation of all these predictions. We present the pros and cons of each prediction based on recent accumulation of both theoretical and empirical studies performed in the laboratory and in the wild. From our viewpoint, six predictions are mostly supported by our current knowledge at least under some conditions (although Williams' theory cannot thoroughly explain why for some of them). Three predictions, all involving the timing of senescence, are not supported. Our critical review of Williams' predictions highlights the importance of William's contribution and clearly demonstrates that, 60 years after its publication, his article does not show any sign of senescence. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
Changing Perspectives on Basic Research in Adult Learning and Memory
ERIC Educational Resources Information Center
Hultsch, David F.
1977-01-01
It is argued that wheather the course of cognitive development is characterized by growth, stability, or decline is less a matter of the metamodel on which the theories and data are based. Such metamodels are representations of reality that are not empirically testable. (Author)
The Process of Mentoring Pregnant Adolescents: An Exploratory Study.
ERIC Educational Resources Information Center
Blinn-Pike, Lynn; Kuschel, Diane; McDaniel, Annette; Mingus, Suzanne; Mutti, Megan Poole
1998-01-01
The process that occurs in relationships between volunteer adult mentors and pregnant adolescent "mentees" is described empirically; testable hypotheses based on findings concerning the mentor role are proposed. Case records from 20 mentors are analyzed; findings regarding mentors' roles are discussed. Criteria for conceptualizing quasi-parenting…
Martin, Leigh J; Murray, Brad R
2011-05-01
The invasive spread of exotic plants in native vegetation can pose serious threats to native faunal assemblages. This is of particular concern for reptiles and amphibians because they form a significant component of the world's vertebrate fauna, play a pivotal role in ecosystem functioning and are often neglected in biodiversity research. A framework to predict how exotic plant invasion will affect reptile and amphibian assemblages is imperative for conservation, management and the identification of research priorities. Here, we present a new predictive framework that integrates three mechanistic models. These models are based on exotic plant invasion altering: (1) habitat structure; (2) herbivory and predator-prey interactions; (3) the reproductive success of reptile and amphibian species and assemblages. We present a series of testable predictions from these models that arise from the interplay over time among three exotic plant traits (growth form, area of coverage, taxonomic distinctiveness) and six traits of reptiles and amphibians (body size, lifespan, home range size, habitat specialisation, diet, reproductive strategy). A literature review provided robust empirical evidence of exotic plant impacts on reptiles and amphibians from each of the three model mechanisms. Evidence relating to the role of body size and diet was less clear-cut, indicating the need for further research. The literature provided limited empirical support for many of the other model predictions. This was not, however, because findings contradicted our model predictions but because research in this area is sparse. In particular, the small number of studies specifically examining the effects of exotic plants on amphibians highlights the pressing need for quantitative research in this area. There is enormous scope for detailed empirical investigation of interactions between exotic plants and reptile and amphibian species and assemblages. The framework presented here and further testing of predictions will provide a basis for informing and prioritising environmental management and exotic plant control efforts. © 2010 The Authors. Biological Reviews © 2010 Cambridge Philosophical Society.
The Role of Metaphysical Naturalism in Science
NASA Astrophysics Data System (ADS)
Mahner, Martin
2012-10-01
This paper defends the view that metaphysical naturalism is a constitutive ontological principle of science in that the general empirical methods of science, such as observation, measurement and experiment, and thus the very production of empirical evidence, presuppose a no-supernature principle. It examines the consequences of metaphysical naturalism for the testability of supernatural claims, and it argues that explanations involving supernatural entities are pseudo-explanatory due to the many semantic and ontological problems of supernatural concepts. The paper also addresses the controversy about metaphysical versus methodological naturalism.
Effects of temperature on consumer-resource interactions.
Amarasekare, Priyanga
2015-05-01
Understanding how temperature variation influences the negative (e.g. self-limitation) and positive (e.g. saturating functional responses) feedback processes that characterize consumer-resource interactions is an important research priority. Previous work on this topic has yielded conflicting outcomes with some studies predicting that warming should increase consumer-resource oscillations and others predicting that warming should decrease consumer-resource oscillations. Here, I develop a consumer-resource model that both synthesizes previous findings in a common framework and yields novel insights about temperature effects on consumer-resource dynamics. I report three key findings. First, when the resource species' birth rate exhibits a unimodal temperature response, as demonstrated by a large number of empirical studies, the temperature range over which the consumer-resource interaction can persist is determined by the lower and upper temperature limits to the resource species' reproduction. This contrasts with the predictions of previous studies, which assume that the birth rate exhibits a monotonic temperature response, that consumer extinction is determined by temperature effects on consumer species' traits, rather than the resource species' traits. Secondly, the comparative analysis I have conducted shows that whether warming leads to an increase or decrease in consumer-resource oscillations depends on the manner in which temperature affects intraspecific competition. When the strength of self-limitation increases monotonically with temperature, warming causes a decrease in consumer-resource oscillations. However, if self-limitation is strongest at temperatures physiologically optimal for reproduction, a scenario previously unanalysed by theory but amply substantiated by empirical data, warming can cause an increase in consumer-resource oscillations. Thirdly, the model yields testable comparative predictions about consumer-resource dynamics under alternative hypotheses for how temperature affects competitive and resource acquisition traits. Importantly, it does so through empirically quantifiable metrics for predicting temperature effects on consumer viability and consumer-resource oscillations, which obviates the need for parameterizing complex dynamical models. Tests of these metrics with empirical data on a host-parasitoid interaction yield realistic estimates of temperature limits for consumer persistence and the propensity for consumer-resource oscillations, highlighting their utility in predicting temperature effects, particularly warming, on consumer-resource interactions in both natural and agricultural settings. © 2014 The Author. Journal of Animal Ecology © 2014 British Ecological Society.
Phases in the Adoption of Educational Innovations in Teacher Training Institutions.
ERIC Educational Resources Information Center
Hall, Gene E.
An attempt has been made to categorize phenomena observed as 20 teacher training institutions have adopted innovations and to extrapolate from these findings key concepts and principles that could form the basis for developing empirically testable hypotheses and could be of some immediate utility to those involved in innovation adoption. The…
Forensic Impact of the Child Sexual Abuse Medical Examination.
ERIC Educational Resources Information Center
Myers, John E. B.
1998-01-01
This commentary on an article (EC 619 279) about research issues at the interface of medicine and law concerning medical evaluation for child sexual abuse focuses on empirically testable questions: (1) the medical history--its accuracy, interviewing issues, and elicitation and preservation of verbal evidence of abuse; and, (2) expert testimony.…
Two New Empirically Derived Reasons To Use the Assessment of Basic Learning Abilities.
ERIC Educational Resources Information Center
Richards, David F.; Williams, W. Larry; Follette, William C.
2002-01-01
Scores on the Assessment of Basic Learning Abilities (ABLA), Vineland Adaptive Behavior Scales, and the Wechsler Intelligences Scale-Revised (WAIS-R) were obtained for 30 adults with mental retardation. Correlations between the Vineland domains and ABLA were all significant. No participants performing below ABLA Level 6 were testable on the…
Ingram, T; Harmon, L J; Shurin, J B
2012-09-01
Conceptual models of adaptive radiation predict that competitive interactions among species will result in an early burst of speciation and trait evolution followed by a slowdown in diversification rates. Empirical studies often show early accumulation of lineages in phylogenetic trees, but usually fail to detect early bursts of phenotypic evolution. We use an evolutionary simulation model to assemble food webs through adaptive radiation, and examine patterns in the resulting phylogenetic trees and species' traits (body size and trophic position). We find that when foraging trade-offs result in food webs where all species occupy integer trophic levels, lineage diversity and trait disparity are concentrated early in the tree, consistent with the early burst model. In contrast, in food webs in which many omnivorous species feed at multiple trophic levels, high levels of turnover of species' identities and traits tend to eliminate the early burst signal. These results suggest testable predictions about how the niche structure of ecological communities may be reflected by macroevolutionary patterns. © 2012 The Authors. Journal of Evolutionary Biology © 2012 European Society For Evolutionary Biology.
Lightning Scaling Laws Revisited
NASA Technical Reports Server (NTRS)
Boccippio, D. J.; Arnold, James E. (Technical Monitor)
2000-01-01
Scaling laws relating storm electrical generator power (and hence lightning flash rate) to charge transport velocity and storm geometry were originally posed by Vonnegut (1963). These laws were later simplified to yield simple parameterizations for lightning based upon cloud top height, with separate parameterizations derived over land and ocean. It is demonstrated that the most recent ocean parameterization: (1) yields predictions of storm updraft velocity which appear inconsistent with observation, and (2) is formally inconsistent with the theory from which it purports to derive. Revised formulations consistent with Vonnegut's original framework are presented. These demonstrate that Vonnegut's theory is, to first order, consistent with observation. The implications of assuming that flash rate is set by the electrical generator power, rather than the electrical generator current, are examined. The two approaches yield significantly different predictions about the dependence of charge transfer per flash on storm dimensions, which should be empirically testable. The two approaches also differ significantly in their explanation of regional variability in lightning observations.
Bicultural identity conflict in second-generation Asian Canadians.
Stroink, Mirella L; Lalonde, Richard N
2009-02-01
Researchers have shown that bicultural individuals, including 2nd-generation immigrants, face a potential conflict between 2 cultural identities. The present authors extended this primarily qualitative research on the bicultural experience by adopting the social identity perspective (H. Tajfel & J. C. Turner, 1986). They developed and tested an empirically testable model of the role of cultural construals, in-group prototypicality, and identity in bicultural conflict in 2 studies with 2nd-generation Asian Canadians. In both studies, the authors expected and found that participants' construals of their 2 cultures as different predicted lower levels of simultaneous identification with both cultures. Furthermore, the authors found this relation was mediated by participants' feelings of prototypicality as members of both groups. Although the perception of cultural difference did not predict well-being as consistently and directly as the authors expected, levels of simultaneous identification did show these relations. The authors discuss results in the context of social identity theory (H. Tajfel & J. C. Turner) as a framework for understanding bicultural conflict.
ERIC Educational Resources Information Center
Nauta, Margaret M.
2010-01-01
This article celebrates the 50th anniversary of the introduction of John L. Holland's (1959) theory of vocational personalities and work environments by describing the theory's development and evolution, its instrumentation, and its current status. Hallmarks of Holland's theory are its empirical testability and its user-friendliness. By…
ERIC Educational Resources Information Center
Hunter, Lora Rose; Schmidt, Norman B.
2010-01-01
In this review, the extant literature concerning anxiety psychopathology in African American adults is summarized to develop a testable, explanatory framework with implications for future research. The model was designed to account for purported lower rates of anxiety disorders in African Americans compared to European Americans, along with other…
A General, Synthetic Model for Predicting Biodiversity Gradients from Environmental Geometry.
Gross, Kevin; Snyder-Beattie, Andrew
2016-10-01
Latitudinal and elevational biodiversity gradients fascinate ecologists, and have inspired dozens of explanations. The geometry of the abiotic environment is sometimes thought to contribute to these gradients, yet evaluations of geometric explanations are limited by a fragmented understanding of the diversity patterns they predict. This article presents a mathematical model that synthesizes multiple pathways by which environmental geometry can drive diversity gradients. The model characterizes species ranges by their environmental niches and limits on range sizes and places those ranges onto the simplified geometries of a sphere or cone. The model predicts nuanced and realistic species-richness gradients, including latitudinal diversity gradients with tropical plateaus and mid-latitude inflection points and elevational diversity gradients with low-elevation diversity maxima. The model also illustrates the importance of a mid-environment effect that augments species richness at locations with intermediate environments. Model predictions match multiple empirical biodiversity gradients, depend on ecological traits in a testable fashion, and formally synthesize elements of several geometric models. Together, these results suggest that previous assessments of geometric hypotheses should be reconsidered and that environmental geometry may play a deeper role in driving biodiversity gradients than is currently appreciated.
Testability Design Rating System: Testability Handbook. Volume 1
1992-02-01
4-10 4.7.5 Summary of False BIT Alarms (FBA) ............................. 4-10 4.7.6 Smart BIT Technique...Circuit Board PGA Pin Grid Array PLA Programmable Logic Array PLD Programmable Logic Device PN Pseudo-Random Number PREDICT Probabilistic Estimation of...11 4.7.6 Smart BIT ( reference: RADC-TR-85-198). " Smart " BIT is a term given to BIT circuitry in a system LRU which includes dedicated processor/memory
Piaget's epistemic subject and science education: Epistemological vs. psychological issues
NASA Astrophysics Data System (ADS)
Kitchener, Richard F.
1993-06-01
Many individuals claim that Piaget's theory of cognitive development is empirically false or substantially disconfirmed by empirical research. Although there is substance to such a claim, any such conclusion must address three increasingly problematic issues about the possibility of providing an empirical test of Piaget's genetic epistemology: (1) the empirical underdetermination of theory by empirical evidence, (2) the empirical difficulty of testing competence-type explanations, and (3) the difficulty of empirically testing epistemic norms. This is especially true of a central epistemic construct in Piaget's theory — the epistemic subject. To illustrate how similar problems of empirical testability arise in the physical sciences, I briefly examine the case of Galileo and the correlative difficulty of empirically testing Galileo's laws. I then point out some important epistemological similarities between Galileo and Piaget together with correlative changes needed in science studies methodology. I conclude that many psychologists and science educators have failed to appreciate the difficulty of falsifying Piaget's theory because they have tacitly adopted a philosophy of science at odds with the paradigm-case of Galileo.
Rapid Communication: Quasi-gedanken experiment challenging the no-signalling theorem
NASA Astrophysics Data System (ADS)
Kalamidas, Demetrios A.
2018-01-01
Kennedy ( Philos. Sci. 62, 4 (1995)) has argued that the various quantum mechanical no-signalling proofs formulated thus far share a common mathematical framework, are circular in nature, and do not preclude the construction of empirically testable schemes wherein superluminal exchange of information can occur. In light of this thesis, we present a potentially feasible quantum-optical scheme that purports to enable superluminal signalling.
ERIC Educational Resources Information Center
Maestripieri, Dario
2005-01-01
Comparative behavioral research is important for a number of reasons and can contribute to the understanding of human behavior and development in many different ways. Research with animal models of human behavior and development can be a source not only of general principles and testable hypotheses but also of empirical information that may be…
Hayes, Brett K; Heit, Evan; Swendsen, Haruka
2010-03-01
Inductive reasoning entails using existing knowledge or observations to make predictions about novel cases. We review recent findings in research on category-based induction as well as theoretical models of these results, including similarity-based models, connectionist networks, an account based on relevance theory, Bayesian models, and other mathematical models. A number of touchstone empirical phenomena that involve taxonomic similarity are described. We also examine phenomena involving more complex background knowledge about premises and conclusions of inductive arguments and the properties referenced. Earlier models are shown to give a good account of similarity-based phenomena but not knowledge-based phenomena. Recent models that aim to account for both similarity-based and knowledge-based phenomena are reviewed and evaluated. Among the most important new directions in induction research are a focus on induction with uncertain premise categories, the modeling of the relationship between inductive and deductive reasoning, and examination of the neural substrates of induction. A common theme in both the well-established and emerging lines of induction research is the need to develop well-articulated and empirically testable formal models of induction. Copyright © 2010 John Wiley & Sons, Ltd. For further resources related to this article, please visit the WIREs website. Copyright © 2010 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Cavalcanti, Eric G.; Wiseman, Howard M.
2012-10-01
The 1964 theorem of John Bell shows that no model that reproduces the predictions of quantum mechanics can simultaneously satisfy the assumptions of locality and determinism. On the other hand, the assumptions of signal locality plus predictability are also sufficient to derive Bell inequalities. This simple theorem, previously noted but published only relatively recently by Masanes, Acin and Gisin, has fundamental implications not entirely appreciated. Firstly, nothing can be concluded about the ontological assumptions of locality or determinism independently of each other—it is possible to reproduce quantum mechanics with deterministic models that violate locality as well as indeterministic models that satisfy locality. On the other hand, the operational assumption of signal locality is an empirically testable (and well-tested) consequence of relativity. Thus Bell inequality violations imply that we can trust that some events are fundamentally unpredictable, even if we cannot trust that they are indeterministic. This result grounds the quantum-mechanical prohibition of arbitrarily accurate predictions on the assumption of no superluminal signalling, regardless of any postulates of quantum mechanics. It also sheds a new light on an early stage of the historical debate between Einstein and Bohr.
Development of a dynamic computational model of social cognitive theory.
Riley, William T; Martin, Cesar A; Rivera, Daniel E; Hekler, Eric B; Adams, Marc A; Buman, Matthew P; Pavel, Misha; King, Abby C
2016-12-01
Social cognitive theory (SCT) is among the most influential theories of behavior change and has been used as the conceptual basis of health behavior interventions for smoking cessation, weight management, and other health behaviors. SCT and other behavior theories were developed primarily to explain differences between individuals, but explanatory theories of within-person behavioral variability are increasingly needed as new technologies allow for intensive longitudinal measures and interventions adapted from these inputs. These within-person explanatory theoretical applications can be modeled as dynamical systems. SCT constructs, such as reciprocal determinism, are inherently dynamical in nature, but SCT has not been modeled as a dynamical system. This paper describes the development of a dynamical system model of SCT using fluid analogies and control systems principles drawn from engineering. Simulations of this model were performed to assess if the model performed as predicted based on theory and empirical studies of SCT. This initial model generates precise and testable quantitative predictions for future intensive longitudinal research. Dynamic modeling approaches provide a rigorous method for advancing health behavior theory development and refinement and for guiding the development of more potent and efficient interventions.
Liou, Shwu-Ru
2009-01-01
To systematically analyse the Organizational Commitment model and Theory of Reasoned Action and determine concepts that can better explain nurses' intention to leave their job. The Organizational Commitment model and Theory of Reasoned Action have been proposed and applied to understand intention to leave and turnover behaviour, which are major contributors to nursing shortage. However, the appropriateness of applying these two models in nursing was not analysed. Three main criteria of a useful model were used for the analysis: consistency in the use of concepts, testability and predictability. Both theories use concepts consistently. Concepts in the Theory of Reasoned Action are defined broadly whereas they are operationally defined in the Organizational Commitment model. Predictability of the Theory of Reasoned Action is questionable whereas the Organizational Commitment model can be applied to predict intention to leave. A model was proposed based on this analysis. Organizational commitment, intention to leave, work experiences, job characteristics and personal characteristics can be concepts for predicting nurses' intention to leave. Nursing managers may consider nurses' personal characteristics and experiences to increase their organizational commitment and enhance their intention to stay. Empirical studies are needed to test and cross-validate the re-synthesized model for nurses' intention to leave their job.
Econometrics of exhaustible resource supply: a theory and an application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epple, D.
1983-01-01
This report takes a major step toward developing a fruitful approach to empirical analysis of resource supply. It is the first empirical application of resource theory that has successfully integrated the effects of depletion of nonrenewable resources with the effects of uncertainty about future costs and prices on supply behavior. Thus, the model is a major improvement over traditional engineering-optimization models that assume complete certainty, and over traditional econometrics models that are only implicitly related to the theory of resource supply. The model is used to test hypotheses about interdependence of oil and natural gas discoveries, depletion, ultimate recovery, andmore » the role of price expectations. This paper demonstrates the feasibility of using exhaustible resource theory in the development of empirically testable models. 19 refs., 1 fig., 5 tabs.« less
Resolving the observer reference class problem in cosmology
NASA Astrophysics Data System (ADS)
Friederich, Simon
2017-06-01
The assumption that we are typical observers plays a core role in attempts to make multiverse theories empirically testable. A widely shared worry about this assumption is that it suffers from systematic ambiguity concerning the reference class of observers with respect to which typicality is assumed. As a way out, Srednicki and Hartle recommend that we empirically test typicality with respect to different candidate reference classes in analogy to how we test physical theories. Unfortunately, as this paper argues, this idea fails because typicality is not the kind of assumption that can be subjected to empirical tests. As an alternative, a background information constraint on observer reference class choice is suggested according to which the observer reference class should be chosen such that it includes precisely those observers who one could possibly be, given one's assumed background information.
Froissart, R.; Doumayrou, J.; Vuillaume, F.; Alizon, S.; Michalakis, Y.
2010-01-01
The adaptive hypothesis invoked to explain why parasites harm their hosts is known as the trade-off hypothesis, which states that increased parasite transmission comes at the cost of shorter infection duration. This correlation arises because both transmission and disease-induced mortality (i.e. virulence) are increasing functions of parasite within-host density. There is, however, a glaring lack of empirical data to support this hypothesis. Here, we review empirical investigations reporting to what extent within-host viral accumulation determines the transmission rate and the virulence of vector-borne plant viruses. Studies suggest that the correlation between within-plant viral accumulation and transmission rate of natural isolates is positive. Unfortunately, results on the correlation between viral accumulation and virulence are very scarce. We found only very few appropriate studies testing such a correlation, themselves limited by the fact that they use symptoms as a proxy for virulence and are based on very few viral genotypes. Overall, the available evidence does not allow us to confirm or refute the existence of a transmission–virulence trade-off for vector-borne plant viruses. We discuss the type of data that should be collected and how theoretical models can help us refine testable predictions of virulence evolution. PMID:20478886
Linking short-term responses to ecologically-relevant outcomes
Opportunity to participate in the conduct of collaborative integrative lab, field and modelling efforts to characterize molecular-to-organismal level responses and make quantitative testable predictions of population level outcomes
Not all emotions are created equal: The negativity bias in social-emotional development
Vaish, Amrisha; Grossmann, Tobias; Woodward, Amanda
2013-01-01
There is ample empirical evidence for an asymmetry in the way that adults use positive versus negative information to make sense of their world; specifically, across an array of psychological situations and tasks, adults display a negativity bias, or the propensity to attend to, learn from, and use negative information far more than positive information. This bias is argued to serve critical evolutionarily adaptive functions, but its developmental presence and ontogenetic emergence have never seriously been considered. Here, we argue for the existence of the negativity bias in early development, evident especially in research on infant social referencing but also in other developmental domains. We discuss ontogenetic mechanisms underlying the emergence of this bias, and explore not only its evolutionary but also its developmental functions and consequences. Throughout, we suggest ways to further examine the negativity bias in infants and older children, and we make testable predictions that would help clarify the nature of the negativity bias during early development. PMID:18444702
Mechanisms of mindfulness training: Monitor and Acceptance Theory (MAT).
Lindsay, Emily K; Creswell, J David
2017-02-01
Despite evidence linking trait mindfulness and mindfulness training with a broad range of effects, still little is known about its underlying active mechanisms. Mindfulness is commonly defined as (1) the ongoing monitoring of present-moment experience (2) with an orientation of acceptance. Building on conceptual, clinical, and empirical work, we describe a testable theoretical account to help explain mindfulness effects on cognition, affect, stress, and health outcomes. Specifically, Monitor and Acceptance Theory (MAT) posits that (1), by enhancing awareness of one's experiences, the skill of attention monitoring explains how mindfulness improves cognitive functioning outcomes, yet this same skill can increase affective reactivity. Second (2), by modifying one's relation to monitored experience, acceptance is necessary for reducing affective reactivity, such that attention monitoring and acceptance skills together explain how mindfulness improves negative affectivity, stress, and stress-related health outcomes. We discuss how MAT contributes to mindfulness science, suggest plausible alternatives to the account, and offer specific predictions for future research. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mechanisms of Mindfulness Training: Monitor and Acceptance Theory (MAT)1
Lindsay, Emily K.; Creswell, J. David
2016-01-01
Despite evidence linking trait mindfulness and mindfulness training with a broad range of effects, still little is known about its underlying active mechanisms. Mindfulness is commonly defined as (1) the ongoing monitoring of present-moment experience (2) with an orientation of acceptance. Building on conceptual, clinical, and empirical work, we describe a testable theoretical account to help explain mindfulness effects on cognition, affect, stress, and health outcomes. Specifically, Monitor and Acceptance Theory (MAT) posits that (1), by enhancing awareness of one’s experiences, the skill of attention monitoring explains how mindfulness improves cognitive functioning outcomes, yet this same skill can increase affective reactivity. Second (2), by modifying one’s relation to monitored experience, acceptance is necessary for reducing affective reactivity, such that attention monitoring and acceptance skills together explain how mindfulness improves negative affectivity, stress, and stress-related health outcomes. We discuss how MAT contributes to mindfulness science, suggest plausible alternatives to the account, and offer specific predictions for future research. PMID:27835764
Fischer, Barbara; Telser, Harry; Zweifel, Peter
2018-06-07
Healthcare expenditure (HCE) spent during an individual's last year of life accounts for a high share of lifetime HCE. This finding is puzzling because an investment in health is unlikely to have a sufficiently long payback period. However, Becker et al. (2007) and Philipson et al. (2010) have advanced a theory designed to explain high willingness to pay (WTP) for an extension of life close to its end. Their testable implications are complemented by the concept of 'pain of risk bearing' introduced by Eeckhoudt and Schlesinger (2006). They are tested using a discrete choice experiment performed in 2014, involving 1,529 Swiss adults. An individual setting where the price attribute is substantial out-of-pocket payment for a novel drug for treatment of terminal cancer is distinguished from a societal one, where it is an increase in contributions to social health insurance. Most of the economic predictions receive empirical support. Copyright © 2018. Published by Elsevier B.V.
Simple Model for Identifying Critical Regions in Atrial Fibrillation
NASA Astrophysics Data System (ADS)
Christensen, Kim; Manani, Kishan A.; Peters, Nicholas S.
2015-01-01
Atrial fibrillation (AF) is the most common abnormal heart rhythm and the single biggest cause of stroke. Ablation, destroying regions of the atria, is applied largely empirically and can be curative but with a disappointing clinical success rate. We design a simple model of activation wave front propagation on an anisotropic structure mimicking the branching network of heart muscle cells. This integration of phenomenological dynamics and pertinent structure shows how AF emerges spontaneously when the transverse cell-to-cell coupling decreases, as occurs with age, beyond a threshold value. We identify critical regions responsible for the initiation and maintenance of AF, the ablation of which terminates AF. The simplicity of the model allows us to calculate analytically the risk of arrhythmia and express the threshold value of transversal cell-to-cell coupling as a function of the model parameters. This threshold value decreases with increasing refractory period by reducing the number of critical regions which can initiate and sustain microreentrant circuits. These biologically testable predictions might inform ablation therapies and arrhythmic risk assessment.
Barker, Jessica L.; Bronstein, Judith L.
2016-01-01
Exploitation in cooperative interactions both within and between species is widespread. Although it is assumed to be costly to be exploited, mechanisms to control exploitation are surprisingly rare, making the persistence of cooperation a fundamental paradox in evolutionary biology and ecology. Focusing on between-species cooperation (mutualism), we hypothesize that the temporal sequence in which exploitation occurs relative to cooperation affects its net costs and argue that this can help explain when and where control mechanisms are observed in nature. Our principal prediction is that when exploitation occurs late relative to cooperation, there should be little selection to limit its effects (analogous to “tolerated theft” in human cooperative groups). Although we focus on cases in which mutualists and exploiters are different individuals (of the same or different species), our inferences can readily be extended to cases in which individuals exhibit mixed cooperative-exploitative strategies. We demonstrate that temporal structure should be considered alongside spatial structure as an important process affecting the evolution of cooperation. We also provide testable predictions to guide future empirical research on interspecific as well as intraspecific cooperation. PMID:26841169
Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.
2016-11-08
We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses on ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time coursesmore » in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.
We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses on ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time coursesmore » in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.« less
Beyond the theoretical rhetoric: a proposal to study the consequences of drug legalization.
Yacoubian, G S
2001-01-01
Drug legalization is a frequently-debated drug control policy alternative. It should come as little surprise, therefore, that the arguments in favor of both legalization and prohibition have resulted in a conceptual stalemate. While theoretical deliberations are unquestionably valuable, they seem to have propelled this particular issue to its limit. To date, no works have suggested any empirical studies that might test the framework and potential consequences of drug legalization. In the current study, the arguments surrounding the drug legalization debate are synthesized into a proposal for future research. Such a proposal illustrates that the core elements surrounding drug legalization are not only testable, but that the time may be right to consider such an empirical effort.
The Demographic Transition: Causes and Consequences
Galor, Oded
2013-01-01
This paper develops the theoretical foundations and the testable implications of the various mechanisms that have been proposed as possible triggers for the demographic transition. Moreover, it examines the empirical validity of each of the theories and their significance for the understanding of the transition from stagnation to growth. The analysis suggests that the rise in the demand for human capital in the process of development was the main trigger for the decline in fertility and the transition to modern growth PMID:25089157
Astrobiological Phase Transition: Towards Resolution of Fermi's Paradox
NASA Astrophysics Data System (ADS)
Ćirković, Milan M.; Vukotić, Branislav
2008-12-01
Can astrophysics explain Fermi’s paradox or the “Great Silence” problem? If available, such explanation would be advantageous over most of those suggested in literature which rely on unverifiable cultural and/or sociological assumptions. We suggest, instead, a general astrobiological paradigm which might offer a physical and empirically testable paradox resolution. Based on the idea of James Annis, we develop a model of an astrobiological phase transition of the Milky Way, based on the concept of the global regulation mechanism(s). The dominant regulation mechanisms, arguably, are γ-ray bursts, whose properties and cosmological evolution are becoming well-understood. Secular evolution of regulation mechanisms leads to the brief epoch of phase transition: from an essentially dead place, with pockets of low-complexity life restricted to planetary surfaces, it will, on a short (Fermi-Hart) timescale, become filled with high-complexity life. An observation selection effect explains why we are not, in spite of the very small prior probability, to be surprised at being located in that brief phase of disequilibrium. In addition, we show that, although the phase-transition model may explain the “Great Silence”, it is not supportive of the “contact pessimist” position. To the contrary, the phase-transition model offers a rational motivation for continuation and extension of our present-day Search for ExtraTerrestrial Intelligence (SETI) endeavours. Some of the unequivocal and testable predictions of our model include the decrease of extinction risk in the history of terrestrial life, the absence of any traces of Galactic societies significantly older than human society, complete lack of any extragalactic intelligent signals or phenomena, and the presence of ubiquitous low-complexity life in the Milky Way.
Astrobiological phase transition: towards resolution of Fermi's paradox.
Cirković, Milan M; Vukotić, Branislav
2008-12-01
Can astrophysics explain Fermi's paradox or the "Great Silence" problem? If available, such explanation would be advantageous over most of those suggested in literature which rely on unverifiable cultural and/or sociological assumptions. We suggest, instead, a general astrobiological paradigm which might offer a physical and empirically testable paradox resolution. Based on the idea of James Annis, we develop a model of an astrobiological phase transition of the Milky Way, based on the concept of the global regulation mechanism(s). The dominant regulation mechanisms, arguably, are gamma-ray bursts, whose properties and cosmological evolution are becoming well-understood. Secular evolution of regulation mechanisms leads to the brief epoch of phase transition: from an essentially dead place, with pockets of low-complexity life restricted to planetary surfaces, it will, on a short (Fermi-Hart) timescale, become filled with high-complexity life. An observation selection effect explains why we are not, in spite of the very small prior probability, to be surprised at being located in that brief phase of disequilibrium. In addition, we show that, although the phase-transition model may explain the "Great Silence", it is not supportive of the "contact pessimist" position. To the contrary, the phase-transition model offers a rational motivation for continuation and extension of our present-day Search for ExtraTerrestrial Intelligence (SETI) endeavours. Some of the unequivocal and testable predictions of our model include the decrease of extinction risk in the history of terrestrial life, the absence of any traces of Galactic societies significantly older than human society, complete lack of any extragalactic intelligent signals or phenomena, and the presence of ubiquitous low-complexity life in the Milky Way.
NASA Technical Reports Server (NTRS)
Chen, Chung-Hsing
1992-01-01
In this thesis, a behavioral-level testability analysis approach is presented. This approach is based on analyzing the circuit behavioral description (similar to a C program) to estimate its testability by identifying controllable and observable circuit nodes. This information can be used by a test generator to gain better access to internal circuit nodes and to reduce its search space. The results of the testability analyzer can also be used to select test points or partial scan flip-flops in the early design phase. Based on selection criteria, a novel Synthesis for Testability approach call Test Statement Insertion (TSI) is proposed, which modifies the circuit behavioral description directly. Test Statement Insertion can also be used to modify circuit structural description to improve its testability. As a result, Synthesis for Testability methodology can be combined with an existing behavioral synthesis tool to produce more testable circuits.
2008-12-01
1979; Wasserman and Faust, 1994). SNA thus relies heavily on graph theory to make predictions about network structure and thus social behavior...becomes a tool for increasing the specificity of theory , thinking through the theoretical implications, and generating testable predictions. In...to summarize Construct and its roots in constructural sociological theory . We discover that the (LPM) provides a mathematical bridge between
Modeling Physiological Processes That Relate Toxicant Exposure and Bacterial Population Dynamics
Klanjscek, Tin; Nisbet, Roger M.; Priester, John H.; Holden, Patricia A.
2012-01-01
Quantifying effects of toxicant exposure on metabolic processes is crucial to predicting microbial growth patterns in different environments. Mechanistic models, such as those based on Dynamic Energy Budget (DEB) theory, can link physiological processes to microbial growth. Here we expand the DEB framework to include explicit consideration of the role of reactive oxygen species (ROS). Extensions considered are: (i) additional terms in the equation for the “hazard rate” that quantifies mortality risk; (ii) a variable representing environmental degradation; (iii) a mechanistic description of toxic effects linked to increase in ROS production and aging acceleration, and to non-competitive inhibition of transport channels; (iv) a new representation of the “lag time” based on energy required for acclimation. We estimate model parameters using calibrated Pseudomonas aeruginosa optical density growth data for seven levels of cadmium exposure. The model reproduces growth patterns for all treatments with a single common parameter set, and bacterial growth for treatments of up to 150 mg(Cd)/L can be predicted reasonably well using parameters estimated from cadmium treatments of 20 mg(Cd)/L and lower. Our approach is an important step towards connecting levels of biological organization in ecotoxicology. The presented model reveals possible connections between processes that are not obvious from purely empirical considerations, enables validation and hypothesis testing by creating testable predictions, and identifies research required to further develop the theory. PMID:22328915
Rao, Naren; Menon, Sangeetha
2016-06-01
Preliminary evidence suggests efficacy of yoga as add-on treatment for schizophrenia, but the underlying mechanism by which yoga improves the symptoms of schizophrenia is not completely understood. Yoga improves self-reflection in healthy individuals, and self-reflection abnormalities are typically seen in schizophrenia. However, whether yoga treatment improves impairments in self-reflection typically seen in patients with schizophrenia is not examined. This paper discusses the potential mechanism of yoga in the treatment of schizophrenia and proposes a testable hypothesis for further empirical studies. It is proposed that self-reflection abnormalities in schizophrenia improve with yoga and the neurobiological changes associated with this can be examined using empirical behavioural measures and neuroimaging measures such as magnetic resonance imaging.
Soviet Economic Policy Towards Eastern Europe
1988-11-01
high. Without specifying the determinants of Soviet demand for "allegiance" in more detail, the model is not testable; we cannot predict how subsidy...trade inside (Czechoslovakia, Bulgaria). These countries are behaving as predicted by the model . If this hypothesis is true, the pattern of subsidies...also compares the sum of per capita subsidies by country between 1970 and 1982 with the sum of subsidies predicted by the model . Because of the poor
What can we learn from a two-brain approach to verbal interaction?
Schoot, Lotte; Hagoort, Peter; Segaert, Katrien
2016-09-01
Verbal interaction is one of the most frequent social interactions humans encounter on a daily basis. In the current paper, we zoom in on what the multi-brain approach has contributed, and can contribute in the future, to our understanding of the neural mechanisms supporting verbal interaction. Indeed, since verbal interaction can only exist between individuals, it seems intuitive to focus analyses on inter-individual neural markers, i.e. between-brain neural coupling. To date, however, there is a severe lack of theoretically-driven, testable hypotheses about what between-brain neural coupling actually reflects. In this paper, we develop a testable hypothesis in which between-pair variation in between-brain neural coupling is of key importance. Based on theoretical frameworks and empirical data, we argue that the level of between-brain neural coupling reflects speaker-listener alignment at different levels of linguistic and extra-linguistic representation. We discuss the possibility that between-brain neural coupling could inform us about the highest level of inter-speaker alignment: mutual understanding. Copyright © 2016 Elsevier Ltd. All rights reserved.
Drivers and mechanisms of tree mortality in moist tropical forests.
McDowell, Nate; Allen, Craig D; Anderson-Teixeira, Kristina; Brando, Paulo; Brienen, Roel; Chambers, Jeff; Christoffersen, Brad; Davies, Stuart; Doughty, Chris; Duque, Alvaro; Espirito-Santo, Fernando; Fisher, Rosie; Fontes, Clarissa G; Galbraith, David; Goodsman, Devin; Grossiord, Charlotte; Hartmann, Henrik; Holm, Jennifer; Johnson, Daniel J; Kassim, Abd Rahman; Keller, Michael; Koven, Charlie; Kueppers, Lara; Kumagai, Tomo'omi; Malhi, Yadvinder; McMahon, Sean M; Mencuccini, Maurizio; Meir, Patrick; Moorcroft, Paul; Muller-Landau, Helene C; Phillips, Oliver L; Powell, Thomas; Sierra, Carlos A; Sperry, John; Warren, Jeff; Xu, Chonggang; Xu, Xiangtao
2018-02-16
Tree mortality rates appear to be increasing in moist tropical forests (MTFs) with significant carbon cycle consequences. Here, we review the state of knowledge regarding MTF tree mortality, create a conceptual framework with testable hypotheses regarding the drivers, mechanisms and interactions that may underlie increasing MTF mortality rates, and identify the next steps for improved understanding and reduced prediction. Increasing mortality rates are associated with rising temperature and vapor pressure deficit, liana abundance, drought, wind events, fire and, possibly, CO 2 fertilization-induced increases in stand thinning or acceleration of trees reaching larger, more vulnerable heights. The majority of these mortality drivers may kill trees in part through carbon starvation and hydraulic failure. The relative importance of each driver is unknown. High species diversity may buffer MTFs against large-scale mortality events, but recent and expected trends in mortality drivers give reason for concern regarding increasing mortality within MTFs. Models of tropical tree mortality are advancing the representation of hydraulics, carbon and demography, but require more empirical knowledge regarding the most common drivers and their subsequent mechanisms. We outline critical datasets and model developments required to test hypotheses regarding the underlying causes of increasing MTF mortality rates, and improve prediction of future mortality under climate change. No claim to original US government works New Phytologist © 2018 New Phytologist Trust.
Modelling nutrition across organizational levels: from individuals to superorganisms.
Lihoreau, Mathieu; Buhl, Jerome; Charleston, Michael A; Sword, Gregory A; Raubenheimer, David; Simpson, Stephen J
2014-10-01
The Geometric Framework for nutrition has been increasingly used to describe how individual animals regulate their intake of multiple nutrients to maintain target physiological states maximizing growth and reproduction. However, only a few studies have considered the potential influences of the social context in which these nutritional decisions are made. Social insects, for instance, have evolved extreme levels of nutritional interdependence in which food collection, processing, storage and disposal are performed by different individuals with different nutritional needs. These social interactions considerably complicate nutrition and raise the question of how nutrient regulation is achieved at multiple organizational levels, by individuals and groups. Here, we explore the connections between individual- and collective-level nutrition by developing a modelling framework integrating concepts of nutritional geometry into individual-based models. Using this approach, we investigate how simple nutritional interactions between individuals can mediate a range of emergent collective-level phenomena in social arthropods (insects and spiders) and provide examples of novel and empirically testable predictions. We discuss how our approach could be expanded to a wider range of species and social systems. Copyright © 2014 Elsevier Ltd. All rights reserved.
Effect of marital status on death rates. Part 2: Transient mortality spikes
NASA Astrophysics Data System (ADS)
Richmond, Peter; Roehner, Bertrand M.
2016-05-01
We examine what happens in a population when it experiences an abrupt change in surrounding conditions. Several cases of such ;abrupt transitions; for both physical and living social systems are analyzed from which it can be seen that all share a common pattern. First, a steep rising death rate followed by a much slower relaxation process during which the death rate decreases as a power law. This leads us to propose a general principle which can be summarized as follows: ;Any abrupt change in living conditions generates a mortality spike which acts as a kind of selection process;. This we term the Transient Shock conjecture. It provides a qualitative model which leads to testable predictions. For example, marriage certainly brings about a major change in personal and social conditions and according to our conjecture one would expect a mortality spike in the months following marriage. At first sight this may seem an unlikely proposition but we demonstrate (by three different methods) that even here the existence of mortality spikes is supported by solid empirical evidence.
Bachelot, Benedicte; Lee, Charlotte T
2018-02-01
Evidence accumulates about the role of arbuscular mycorrhizal (AM) fungi in shaping plant communities, but little is known about the factors determining the biomass and coexistence of several types of AM fungi in a plant community. Here, using a consumer-resource framework that treats the relationship between plants and fungi as simultaneous, reciprocal exploitation, we investigated what patterns of dynamic preferential plant carbon allocation to empirically-defined fungal types (on-going partner choice) would be optimal for plants, and how these patterns depend on successional dynamics. We found that ruderal AM fungi can dominate under low steady-state nutrient availability, and competitor AM fungi can dominate at higher steady-state nutrient availability; these are conditions characteristic of early and late succession, respectively. We also found that dynamic preferential allocation alone can maintain a diversity of mutualists, suggesting that on-going partner choice is a new coexistence mechanism for mutualists. Our model can therefore explain both mutualist coexistence and successional strategy, providing a powerful tool to derive testable predictions. © 2017 by the Ecological Society of America.
Eco-genetic modeling of contemporary life-history evolution.
Dunlop, Erin S; Heino, Mikko; Dieckmann, Ulf
2009-10-01
We present eco-genetic modeling as a flexible tool for exploring the course and rates of multi-trait life-history evolution in natural populations. We build on existing modeling approaches by combining features that facilitate studying the ecological and evolutionary dynamics of realistically structured populations. In particular, the joint consideration of age and size structure enables the analysis of phenotypically plastic populations with more than a single growth trajectory, and ecological feedback is readily included in the form of density dependence and frequency dependence. Stochasticity and life-history trade-offs can also be implemented. Critically, eco-genetic models permit the incorporation of salient genetic detail such as a population's genetic variances and covariances and the corresponding heritabilities, as well as the probabilistic inheritance and phenotypic expression of quantitative traits. These inclusions are crucial for predicting rates of evolutionary change on both contemporary and longer timescales. An eco-genetic model can be tightly coupled with empirical data and therefore may have considerable practical relevance, in terms of generating testable predictions and evaluating alternative management measures. To illustrate the utility of these models, we present as an example an eco-genetic model used to study harvest-induced evolution of multiple traits in Atlantic cod. The predictions of our model (most notably that harvesting induces a genetic reduction in age and size at maturation, an increase or decrease in growth capacity depending on the minimum-length limit, and an increase in reproductive investment) are corroborated by patterns observed in wild populations. The predicted genetic changes occur together with plastic changes that could phenotypically mask the former. Importantly, our analysis predicts that evolutionary changes show little signs of reversal following a harvest moratorium. This illustrates how predictions offered by eco-genetic models can enable and guide evolutionarily sustainable resource management.
The Simple Theory of Public Library Services.
ERIC Educational Resources Information Center
Newhouse, Joseph P.
A simple normative theory applicable to public library services was developed as a tool to aid libraries in answering the question: which books should be bought by the library? Although developed for normative purposes, the theory generates testable predictions. It is relevant to measuring benefits from services which are provided publicly because…
Fisher's geometrical model emerges as a property of complex integrated phenotypic networks.
Martin, Guillaume
2014-05-01
Models relating phenotype space to fitness (phenotype-fitness landscapes) have seen important developments recently. They can roughly be divided into mechanistic models (e.g., metabolic networks) and more heuristic models like Fisher's geometrical model. Each has its own drawbacks, but both yield testable predictions on how the context (genomic background or environment) affects the distribution of mutation effects on fitness and thus adaptation. Both have received some empirical validation. This article aims at bridging the gap between these approaches. A derivation of the Fisher model "from first principles" is proposed, where the basic assumptions emerge from a more general model, inspired by mechanistic networks. I start from a general phenotypic network relating unspecified phenotypic traits and fitness. A limited set of qualitative assumptions is then imposed, mostly corresponding to known features of phenotypic networks: a large set of traits is pleiotropically affected by mutations and determines a much smaller set of traits under optimizing selection. Otherwise, the model remains fairly general regarding the phenotypic processes involved or the distribution of mutation effects affecting the network. A statistical treatment and a local approximation close to a fitness optimum yield a landscape that is effectively the isotropic Fisher model or its extension with a single dominant phenotypic direction. The fit of the resulting alternative distributions is illustrated in an empirical data set. These results bear implications on the validity of Fisher's model's assumptions and on which features of mutation fitness effects may vary (or not) across genomic or environmental contexts.
Reply to ``Comment on `Quantum time-of-flight distribution for cold trapped atoms' ''
NASA Astrophysics Data System (ADS)
Ali, Md. Manirul; Home, Dipankar; Majumdar, A. S.; Pan, Alok K.
2008-02-01
In their comment Gomes [Phys. Rev. A 77, 026101 (2008)] have questioned the possibility of empirically testable differences existing between the semiclassical time of flight distribution for cold trapped atoms and a quantum distribution discussed by us recently [Ali , Phys. Rev. A 75, 042110 (2007).]. We argue that their criticism is based on a semiclassical treatment having restricted applicability for a particular trapping potential. Their claim does not preclude, in general, the possibility of differences between the semiclassical calculations and fully quantum results for the arrival time distribution of freely falling atoms.
Reply to 'Comment on 'Quantum time-of-flight distribution for cold trapped atoms''
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, Md. Manirul; Home, Dipankar; Pan, Alok K.
2008-02-15
In their comment Gomes et al. [Phys. Rev. A 77, 026101 (2008)] have questioned the possibility of empirically testable differences existing between the semiclassical time of flight distribution for cold trapped atoms and a quantum distribution discussed by us recently [Ali et al., Phys. Rev. A 75, 042110 (2007).]. We argue that their criticism is based on a semiclassical treatment having restricted applicability for a particular trapping potential. Their claim does not preclude, in general, the possibility of differences between the semiclassical calculations and fully quantum results for the arrival time distribution of freely falling atoms.
The evolutionary psychology of hunger.
Al-Shawaf, Laith
2016-10-01
An evolutionary psychological perspective suggests that emotions can be understood as coordinating mechanisms whose job is to regulate various psychological and physiological programs in the service of solving an adaptive problem. This paper suggests that it may also be fruitful to approach hunger from this coordinating mechanism perspective. To this end, I put forward an evolutionary task analysis of hunger, generating novel a priori hypotheses about the coordinating effects of hunger on psychological processes such as perception, attention, categorization, and memory. This approach appears empirically fruitful in that it yields a bounty of testable new hypotheses. Copyright © 2016 Elsevier Ltd. All rights reserved.
Two fundamental questions about protein evolution.
Penny, David; Zhong, Bojian
2015-12-01
Two basic questions are considered that approach protein evolution from different directions; the problems arising from using Markov models for the deeper divergences, and then the origin of proteins themselves. The real problem for the first question (going backwards in time) is that at deeper phylogenies the Markov models of sequence evolution must lose information exponentially at deeper divergences, and several testable methods are suggested that should help resolve these deeper divergences. For the second question (coming forwards in time) a problem is that most models for the origin of protein synthesis do not give a role for the very earliest stages of the process. From our knowledge of the importance of replication accuracy in limiting the length of a coding molecule, a testable hypothesis is proposed. The length of the code, the code itself, and tRNAs would all have prior roles in increasing the accuracy of RNA replication; thus proteins would have been formed only after the tRNAs and the length of the triplet code are already formed. Both questions lead to testable predictions. Copyright © 2014 Elsevier B.V. and Société Française de Biochimie et Biologie Moléculaire (SFBBM). All rights reserved.
Morris, Melody K.; Saez-Rodriguez, Julio; Clarke, David C.; Sorger, Peter K.; Lauffenburger, Douglas A.
2011-01-01
Predictive understanding of cell signaling network operation based on general prior knowledge but consistent with empirical data in a specific environmental context is a current challenge in computational biology. Recent work has demonstrated that Boolean logic can be used to create context-specific network models by training proteomic pathway maps to dedicated biochemical data; however, the Boolean formalism is restricted to characterizing protein species as either fully active or inactive. To advance beyond this limitation, we propose a novel form of fuzzy logic sufficiently flexible to model quantitative data but also sufficiently simple to efficiently construct models by training pathway maps on dedicated experimental measurements. Our new approach, termed constrained fuzzy logic (cFL), converts a prior knowledge network (obtained from literature or interactome databases) into a computable model that describes graded values of protein activation across multiple pathways. We train a cFL-converted network to experimental data describing hepatocytic protein activation by inflammatory cytokines and demonstrate the application of the resultant trained models for three important purposes: (a) generating experimentally testable biological hypotheses concerning pathway crosstalk, (b) establishing capability for quantitative prediction of protein activity, and (c) prediction and understanding of the cytokine release phenotypic response. Our methodology systematically and quantitatively trains a protein pathway map summarizing curated literature to context-specific biochemical data. This process generates a computable model yielding successful prediction of new test data and offering biological insight into complex datasets that are difficult to fully analyze by intuition alone. PMID:21408212
Objections to routine clinical outcomes measurement in mental health services: any evidence so far?
MacDonald, Alastair J D; Trauer, Tom
2010-12-01
Routine clinical outcomes measurement (RCOM) is gaining importance in mental health services. To examine whether criticisms published in advance of the development of RCOM have been borne out by data now available from such a programme. This was an observational study of routine ratings using HoNOS65+ at inception/admission and again at discharge in an old age psychiatry service from 1997 to 2008. Testable hypotheses were generated from each criticism amenable to empirical examination. Inter-rater reliability estimates were applied to observed differences between scores between community and ward patients using resampling. Five thousand one hundred eighty community inceptions and 862 admissions had HoNOS65+ ratings at referral/admission and discharge. We could find no evidence of gaming (artificially worse scores at inception and better at discharge), selection, attrition or detection bias, and ratings were consistent with diagnosis and level of service. Anticipated low levels of inter-rater reliability did not vitiate differences between levels of service. Although only hypotheses testable from within RCOM data were examined, and only 46% of eligible episodes had complete outcomes data, no evidence of the alleged biases were found. RCOM seems valid and practical in mental health services.
NASA Astrophysics Data System (ADS)
Derakhshani, Maaneli
In this thesis, we consider the implications of solving the quantum measurement problem for the Newtonian description of semiclassical gravity. First we review the formalism of the Newtonian description of semiclassical gravity based on standard quantum mechanics---the Schroedinger-Newton theory---and two well-established predictions that come out of it, namely, gravitational 'cat states' and gravitationally-induced wavepacket collapse. Then we review three quantum theories with 'primitive ontologies' that are well-known known to solve the measurement problem---Schroedinger's many worlds theory, the GRW collapse theory with matter density ontology, and Nelson's stochastic mechanics. We extend the formalisms of these three quantum theories to Newtonian models of semiclassical gravity and evaluate their implications for gravitational cat states and gravitational wavepacket collapse. We find that (1) Newtonian semiclassical gravity based on Schroedinger's many worlds theory is mathematically equivalent to the Schroedinger-Newton theory and makes the same predictions; (2) Newtonian semiclassical gravity based on the GRW theory differs from Schroedinger-Newton only in the use of a stochastic collapse law, but this law allows it to suppress gravitational cat states so as not to be in contradiction with experiment, while allowing for gravitational wavepacket collapse to happen as well; (3) Newtonian semiclassical gravity based on Nelson's stochastic mechanics differs significantly from Schroedinger-Newton, and does not predict gravitational cat states nor gravitational wavepacket collapse. Considering that gravitational cat states are experimentally ruled out, but gravitational wavepacket collapse is testable in the near future, this implies that only the latter two are viable theories of Newtonian semiclassical gravity and that they can be experimentally tested against each other in future molecular interferometry experiments that are anticipated to be capable of testing the gravitational wavepacket collapse prediction.
Wynne-Edwards, K E
2001-01-01
Hormone disruption is a major, underappreciated component of the plant chemical arsenal, and the historical coevolution between hormone-disrupting plants and herbivores will have both increased the susceptibility of carnivores and diversified the sensitivities of herbivores to man-made endocrine disruptors. Here I review diverse evidence of the influence of plant secondary compounds on vertebrate reproduction, including human reproduction. Three of the testable hypotheses about the evolutionary responses of vertebrate herbivores to hormone-disrupting challenges from their diet are developed. Specifically, the hypotheses are that a) vertebrate herbivores will express steroid hormone receptors in the buccal cavity and/or the vomeronasal organ; b) absolute sex steroid concentrations will be lower in carnivores than in herbivores; and c) herbivore steroid receptors should be more diverse in their binding affinities than carnivore lineages. The argument developed in this review, if empirically validated by support for the specific hypotheses, suggests that a) carnivores will be more susceptible than herbivores to endocrine-disrupting compounds of anthropogenic origin entering their bodies, and b) diverse herbivore lineages will be variably susceptible to any given natural or synthetic contaminant. As screening methods for hormone-disrupting potential are compared and adopted, comparative endocrine physiology research is urgently needed to develop models that predict the broad applicability of those screening results in diverse vertebrate species. PMID:11401754
Heeger, David J.
2017-01-01
Most models of sensory processing in the brain have a feedforward architecture in which each stage comprises simple linear filtering operations and nonlinearities. Models of this form have been used to explain a wide range of neurophysiological and psychophysical data, and many recent successes in artificial intelligence (with deep convolutional neural nets) are based on this architecture. However, neocortex is not a feedforward architecture. This paper proposes a first step toward an alternative computational framework in which neural activity in each brain area depends on a combination of feedforward drive (bottom-up from the previous processing stage), feedback drive (top-down context from the next stage), and prior drive (expectation). The relative contributions of feedforward drive, feedback drive, and prior drive are controlled by a handful of state parameters, which I hypothesize correspond to neuromodulators and oscillatory activity. In some states, neural responses are dominated by the feedforward drive and the theory is identical to a conventional feedforward model, thereby preserving all of the desirable features of those models. In other states, the theory is a generative model that constructs a sensory representation from an abstract representation, like memory recall. In still other states, the theory combines prior expectation with sensory input, explores different possible perceptual interpretations of ambiguous sensory inputs, and predicts forward in time. The theory, therefore, offers an empirically testable framework for understanding how the cortex accomplishes inference, exploration, and prediction. PMID:28167793
There's No Such Thing as Value-Free Science.
ERIC Educational Resources Information Center
Makosky, Vivian Parker
This paper is based on the view that, although scientists rely on research values such as predictive accuracy and testability, scientific research is still subject to the unscientific values, attitudes, and emotions of the scientists. It is noted that undergraduate students are likely not to think critically about the science they encounter. A…
Active processes make mixed lipid membranes either flat or crumpled
NASA Astrophysics Data System (ADS)
Banerjee, Tirthankar; Basu, Abhik
2018-01-01
Whether live cell membranes show miscibility phase transitions (MPTs), and if so, how they fluctuate near the transitions remain outstanding unresolved issues in physics and biology alike. Motivated by these questions we construct a generic hydrodynamic theory for lipid membranes that are active, due for instance, to the molecular motors in the surrounding cytoskeleton, or active protein components in the membrane itself. We use this to uncover a direct correspondence between membrane fluctuations and MPTs. Several testable predictions are made: (i) generic active stiffening with orientational long range order (flat membrane) or softening with crumpling of the membrane, controlled by the active tension and (ii) for mixed lipid membranes, capturing the nature of putative MPTs by measuring the membrane conformation fluctuations. Possibilities of both first and second order MPTs in mixed active membranes are argued for. Near second order MPTs, active stiffening (softening) manifests as a super-stiff (super-soft) membrane. Our predictions are testable in a variety of in vitro systems, e.g. live cytoskeletal extracts deposited on liposomes and lipid membranes containing active proteins embedded in a passive fluid.
Temporal cognition: Connecting subjective time to perception, attention, and memory.
Matthews, William J; Meck, Warren H
2016-08-01
Time is a universal psychological dimension, but time perception has often been studied and discussed in relative isolation. Increasingly, researchers are searching for unifying principles and integrated models that link time perception to other domains. In this review, we survey the links between temporal cognition and other psychological processes. Specifically, we describe how subjective duration is affected by nontemporal stimulus properties (perception), the allocation of processing resources (attention), and past experience with the stimulus (memory). We show that many of these connections instantiate a "processing principle," according to which perceived time is positively related to perceptual vividity and the ease of extracting information from the stimulus. This empirical generalization generates testable predictions and provides a starting-point for integrated theoretical frameworks. By outlining some of the links between temporal cognition and other domains, and by providing a unifying principle for understanding these effects, we hope to encourage time-perception researchers to situate their work within broader theoretical frameworks, and that researchers from other fields will be inspired to apply their insights, techniques, and theorizing to improve our understanding of the representation and judgment of time. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Rationality, practice variation and person‐centred health policy: a threshold hypothesis
Hamm, Robert M.; Mayrhofer, Thomas; Hozo, Iztok; Van den Ende, Jef
2015-01-01
Abstract Variation in practice of medicine is one of the major health policy issues of today. Ultimately, it is related to physicians' decision making. Similar patients with similar likelihood of having disease are often managed by different doctors differently: some doctors may elect to observe the patient, others decide to act based on diagnostic testing and yet others may elect to treat without testing. We explain these differences in practice by differences in disease probability thresholds at which physicians decide to act: contextual social and clinical factors and emotions such as regret affect the threshold by influencing the way doctors integrate objective data related to treatment and testing. However, depending on a theoretical construct each of the physician's behaviour can be considered rational. In fact, we showed that the current regulatory policies lead to predictably low thresholds for most decisions in contemporary practice. As a result, we may expect continuing motivation for overuse of treatment and diagnostic tests. We argue that rationality should take into account both formal principles of rationality and human intuitions about good decisions along the lines of Rawls' ‘reflective equilibrium/considered judgment’. In turn, this can help define a threshold model that is empirically testable. PMID:26639018
Developing Cognitive Models for Social Simulation from Survey Data
NASA Astrophysics Data System (ADS)
Alt, Jonathan K.; Lieberman, Stephen
The representation of human behavior and cognition continues to challenge the modeling and simulation community. The use of survey and polling instruments to inform belief states, issue stances and action choice models provides a compelling means of developing models and simulations with empirical data. Using these types of data to population social simulations can greatly enhance the feasibility of validation efforts, the reusability of social and behavioral modeling frameworks, and the testable reliability of simulations. We provide a case study demonstrating these effects, document the use of survey data to develop cognitive models, and suggest future paths forward for social and behavioral modeling.
Expert knowledge as a foundation for the management of secretive species and their habitat
Drew, C. Ashton; Collazo, Jaime
2012-01-01
In this chapter, we share lessons learned during the elicitation and application of expert knowledge in the form of a belief network model for the habitat of a waterbird, the King Rail (Rallus elegans). A belief network is a statistical framework used to graphically represent and evaluate hypothesized cause and effect relationships among variables. Our model was a pilot project to explore the value of such a model as a tool to help the US Fish and Wildlife Service (USFWS) conserve species that lack sufficient empirical data to guide management decisions. Many factors limit the availability of empirical data that can support landscape-scale conservation planning. Globally, most species simply have not yet been subject to empirical study (Wilson 2000). Even for well-studied species, data are often restricted to specific geographic extents, to particular seasons, or to specific segments of a species’ life history. The USFWS mandates that the agency’s conservation actions (1) be coordinated across regional landscapes, (2) be founded on the best available science (with testable assumptions), and (3) support adaptive management through monitoring and assessment of action outcomes. Given limits on the available data, the concept of “best available science” in the context of conservation planning generally includes a mix of empirical data and expert knowledge (Sullivan et al. 2006).
NASA Astrophysics Data System (ADS)
Hargrove, W. W.; Hoffman, F. M.; Kumar, J.; Spruce, J.; Norman, S. P.
2013-12-01
Here we present diverse examples where empirical mining and statistical analysis of large data sets have already been shown to be useful for a wide variety of practical decision-making problems within the realm of large-scale ecology. Because a full understanding and appreciation of particular ecological phenomena are possible only after hypothesis-directed research regarding the existence and nature of that process, some ecologists may feel that purely empirical data harvesting may represent a less-than-satisfactory approach. Restricting ourselves exclusively to process-driven approaches, however, may actually slow progress, particularly for more complex or subtle ecological processes. We may not be able to afford the delays caused by such directed approaches. Rather than attempting to formulate and ask every relevant question correctly, empirical methods allow trends, relationships and associations to emerge freely from the data themselves, unencumbered by a priori theories, ideas and prejudices that have been imposed upon them. Although they cannot directly demonstrate causality, empirical methods can be extremely efficient at uncovering strong correlations with intermediate "linking" variables. In practice, these correlative structures and linking variables, once identified, may provide sufficient predictive power to be useful themselves. Such correlation "shadows" of causation can be harnessed by, e.g., Bayesian Belief Nets, which bias ecological management decisions, made with incomplete information, toward favorable outcomes. Empirical data-harvesting also generates a myriad of testable hypotheses regarding processes, some of which may even be correct. Quantitative statistical regionalizations based on quantitative multivariate similarity have lended insights into carbon eddy-flux direction and magnitude, wildfire biophysical conditions, phenological ecoregions useful for vegetation type mapping and monitoring, forest disease risk maps (e.g., sudden oak death), global aquatic ecoregion risk maps for aquatic invasives, and forest vertical structure ecoregions (e.g., using extensive LiDAR data sets). Multivariate Spatio-Temporal Clustering, which quantitatively places alternative future conditions on a common footing with present conditions, allows prediction of present and future shifts in tree species ranges, given alternative climatic change forecasts. ForWarn, a forest disturbance detection and monitoring system mining 12 years of national 8-day MODIS phenology data, has been operating since 2010, producing national maps every 8 days showing many kinds of potential forest disturbances. Forest resource managers can view disturbance maps via a web-based viewer, and alerts are issued when particular forest disturbances are seen. Regression-based decadal trend analysis showing long-term forest thrive and decline areas, and individual-based, brute-force supercomputing to map potential movement corridors and migration routes across landscapes will also be discussed. As significant ecological changes occur with increasing rapidity, such empirical data-mining approaches may be the most efficient means to help land managers find the best, most-actionable policies and decision strategies.
Coe, Jason B.
2018-01-01
Concerns over cat homelessness, over-taxed animal shelters, public health risks, and environmental impacts has raised attention on urban-cat populations. To truly understand cat population dynamics, the collective population of owned cats, unowned cats, and cats in the shelter system must be considered simultaneously because each subpopulation contributes differently to the overall population of cats in a community (e.g., differences in neuter rates, differences in impacts on wildlife) and cats move among categories through human interventions (e.g., adoption, abandonment). To assess this complex socio-ecological system, we developed a multistate matrix model of cats in urban areas that include owned cats, unowned cats (free-roaming and feral), and cats that move through the shelter system. Our model requires three inputs—location, number of human dwellings, and urban area—to provide testable predictions of cat abundance for any city in North America. Model-predicted population size of unowned cats in seven Canadian cities were not significantly different than published estimates (p = 0.23). Model-predicted proportions of sterile feral cats did not match observed sterile cat proportions for six USA cities (p = 0.001). Using a case study from Guelph, Ontario, Canada, we compared model-predicted to empirical estimates of cat abundance in each subpopulation and used perturbation analysis to calculate relative sensitivity of vital rates to cat abundance to demonstrate how management or mismanagement in one portion of the population could have repercussions across all portions of the network. Our study provides a general framework to consider cat population abundance in urban areas and, with refinement that includes city-specific parameter estimates and modeling, could provide a better understanding of population dynamics of cats in our communities. PMID:29489854
Flockhart, D T Tyler; Coe, Jason B
2018-01-01
Concerns over cat homelessness, over-taxed animal shelters, public health risks, and environmental impacts has raised attention on urban-cat populations. To truly understand cat population dynamics, the collective population of owned cats, unowned cats, and cats in the shelter system must be considered simultaneously because each subpopulation contributes differently to the overall population of cats in a community (e.g., differences in neuter rates, differences in impacts on wildlife) and cats move among categories through human interventions (e.g., adoption, abandonment). To assess this complex socio-ecological system, we developed a multistate matrix model of cats in urban areas that include owned cats, unowned cats (free-roaming and feral), and cats that move through the shelter system. Our model requires three inputs-location, number of human dwellings, and urban area-to provide testable predictions of cat abundance for any city in North America. Model-predicted population size of unowned cats in seven Canadian cities were not significantly different than published estimates (p = 0.23). Model-predicted proportions of sterile feral cats did not match observed sterile cat proportions for six USA cities (p = 0.001). Using a case study from Guelph, Ontario, Canada, we compared model-predicted to empirical estimates of cat abundance in each subpopulation and used perturbation analysis to calculate relative sensitivity of vital rates to cat abundance to demonstrate how management or mismanagement in one portion of the population could have repercussions across all portions of the network. Our study provides a general framework to consider cat population abundance in urban areas and, with refinement that includes city-specific parameter estimates and modeling, could provide a better understanding of population dynamics of cats in our communities.
Towards a universal trait-based model of terrestrial primary production
NASA Astrophysics Data System (ADS)
Wang, H.; Prentice, I. C.; Cornwell, W.; Keenan, T. F.; Davis, T.; Wright, I. J.; Evans, B. J.; Peng, C.
2015-12-01
Systematic variations of plant traits along environmental gradients have been observed for decades. For example, the tendencies of leaf nitrogen per unit area to increase, and of the leaf-internal to ambient CO2 concentration ratio (ci:ca) to decrease, with aridity are well established. But ecosystem models typically represent trait variation based purely on empirical relationships, or on untested conjectures, or not at all. Neglect of quantitative trait variation and its adapative significance probably contributes to the persistent large uncertainties among models in predicting the response of the carbon cycle to environmental change. However, advances in ecological theory and the accumulation of extensive data sets during recent decades suggest that theoretically based and testable predictions of trait variation could be achieved. Based on well-established ecophysiological principles and consideration of the adaptive significance of traits, we propose universal relationships between photosynthetic traits (ci:ca, carbon fixation capacity, and the ratio of electron transport capacity to carbon fixation capacity) and primary environmental variables, which capture observed trait variations both within and between plant functional types. Moreover, incorporating these traits into the standard model of C3photosynthesis allows gross primary production (GPP) of natural vegetation to be predicted by a single equation with just two free parameters, which can be estimated from independent observations. The resulting model performs as well as much more complex models. Our results provide a fresh perspective with potentially high reward: the possibility of a deeper understanding of the relationships between plant traits and environment, simpler and more robust and reliable representation of land processes in Earth system models, and thus improved predictability for biosphere-atmosphere interactions and climate feedbacks.
Mechanics of undulatory swimming in a frictional fluid.
Ding, Yang; Sharpe, Sarah S; Masse, Andrew; Goldman, Daniel I
2012-01-01
The sandfish lizard (Scincus scincus) swims within granular media (sand) using axial body undulations to propel itself without the use of limbs. In previous work we predicted average swimming speed by developing a numerical simulation that incorporated experimentally measured biological kinematics into a multibody sandfish model. The model was coupled to an experimentally validated soft sphere discrete element method simulation of the granular medium. In this paper, we use the simulation to study the detailed mechanics of undulatory swimming in a "granular frictional fluid" and compare the predictions to our previously developed resistive force theory (RFT) which models sand-swimming using empirically determined granular drag laws. The simulation reveals that the forward speed of the center of mass (CoM) oscillates about its average speed in antiphase with head drag. The coupling between overall body motion and body deformation results in a non-trivial pattern in the magnitude of lateral displacement of the segments along the body. The actuator torque and segment power are maximal near the center of the body and decrease to zero toward the head and the tail. Approximately 30% of the net swimming power is dissipated in head drag. The power consumption is proportional to the frequency in the biologically relevant range, which confirms that frictional forces dominate during sand-swimming by the sandfish. Comparison of the segmental forces measured in simulation with the force on a laterally oscillating rod reveals that a granular hysteresis effect causes the overestimation of the body thrust forces in the RFT. Our models provide detailed testable predictions for biological locomotion in a granular environment.
NASA Astrophysics Data System (ADS)
Galic, Nika; Forbes, Valery E.
2017-03-01
Human activities have been modifying ecosystems for centuries, from pressures on wild populations we harvest to modifying habitats through urbanization and agricultural activities. Changes in global climate patterns are adding another layer of, often unpredictable, perturbations to ecosystems on which we rely for life support [1,2]. To ensure the sustainability of ecosystem services, especially at this point in time when the human population is estimated to grow by another 2 billion by 2050 [3], we need to predict possible consequences of our actions and suggest relevant solutions [4,5]. We face several challenges when estimating adverse impacts of our actions on ecosystems. We describe these in the context of ecological risk assessment of chemicals. Firstly, when attempting to assess risk from exposure to chemicals, we base our decisions on a very limited number of species that are easily cultured and kept in the lab. We assume that preventing risk to these species will also protect all of the untested species present in natural ecosystems [6]. Secondly, although we know that chemicals interact with other stressors in the field, the number of stressors that we can test is limited due to logistical and ethical reasons. Similarly, empirical approaches are limited in both spatial and temporal scale due to logistical, financial and ethical reasons [7,8]. To bypass these challenges, we can develop ecological models that integrate relevant life history and other information and make testable predictions across relevant spatial and temporal scales [8-10].
Mechanics of Undulatory Swimming in a Frictional Fluid
Ding, Yang; Sharpe, Sarah S.; Masse, Andrew; Goldman, Daniel I.
2012-01-01
The sandfish lizard (Scincus scincus) swims within granular media (sand) using axial body undulations to propel itself without the use of limbs. In previous work we predicted average swimming speed by developing a numerical simulation that incorporated experimentally measured biological kinematics into a multibody sandfish model. The model was coupled to an experimentally validated soft sphere discrete element method simulation of the granular medium. In this paper, we use the simulation to study the detailed mechanics of undulatory swimming in a “granular frictional fluid” and compare the predictions to our previously developed resistive force theory (RFT) which models sand-swimming using empirically determined granular drag laws. The simulation reveals that the forward speed of the center of mass (CoM) oscillates about its average speed in antiphase with head drag. The coupling between overall body motion and body deformation results in a non-trivial pattern in the magnitude of lateral displacement of the segments along the body. The actuator torque and segment power are maximal near the center of the body and decrease to zero toward the head and the tail. Approximately 30% of the net swimming power is dissipated in head drag. The power consumption is proportional to the frequency in the biologically relevant range, which confirms that frictional forces dominate during sand-swimming by the sandfish. Comparison of the segmental forces measured in simulation with the force on a laterally oscillating rod reveals that a granular hysteresis effect causes the overestimation of the body thrust forces in the RFT. Our models provide detailed testable predictions for biological locomotion in a granular environment. PMID:23300407
Fundamental insights into ontogenetic growth from theory and fish.
Sibly, Richard M; Baker, Joanna; Grady, John M; Luna, Susan M; Kodric-Brown, Astrid; Venditti, Chris; Brown, James H
2015-11-10
The fundamental features of growth may be universal, because growth trajectories of most animals are very similar, but a unified mechanistic theory of growth remains elusive. Still needed is a synthetic explanation for how and why growth rates vary as body size changes, both within individuals over their ontogeny and between populations and species over their evolution. Here, we use Bertalanffy growth equations to characterize growth of ray-finned fishes in terms of two parameters, the growth rate coefficient, K, and final body mass, m∞. We derive two alternative empirically testable hypotheses and test them by analyzing data from FishBase. Across 576 species, which vary in size at maturity by almost nine orders of magnitude, K scaled as [Formula: see text]. This supports our first hypothesis that growth rate scales as [Formula: see text] as predicted by metabolic scaling theory; it implies that species that grow to larger mature sizes grow faster as juveniles. Within fish species, however, K scaled as [Formula: see text]. This supports our second hypothesis, which predicts that growth rate scales as [Formula: see text] when all juveniles grow at the same rate. The unexpected disparity between across- and within-species scaling challenges existing theoretical interpretations. We suggest that the similar ontogenetic programs of closely related populations constrain growth to [Formula: see text] scaling, but as species diverge over evolutionary time they evolve the near-optimal [Formula: see text] scaling predicted by metabolic scaling theory. Our findings have important practical implications because fish supply essential protein in human diets, and sustainable yields from wild harvests and aquaculture depend on growth rates.
Feldstein Ewing, Sarah W.; Filbey, Francesca M.; Hendershot, Christian S.; McEachern, Amber D.; Hutchison, Kent E.
2011-01-01
Objective: Despite the prevalence and profound consequences of alcohol use disorders, psychosocial alcohol interventions have widely varying outcomes. The range of behavior following psychosocial alcohol treatment indicates the need to gain a better understanding of active ingredients and how they may operate. Although this is an area of great interest, at this time there is a limited understanding of how in-session behaviors may catalyze changes in the brain and subsequent alcohol use behavior. Thus, in this review, we aim to identify the neurobiological routes through which psychosocial alcohol interventions may lead to post-session behavior change as well as offer an approach to conceptualize and evaluate these translational relationships. Method: PubMed and PsycINFO searches identified studies that successfully integrated functional magnetic resonance imaging and psychosocial interventions. Results: Based on this research, we identified potential neurobiological substrates through which behavioral alcohol interventions may initiate and sustain behavior change. In addition, we proposed a testable model linking within-session active ingredients to outside-of-session behavior change. Conclusions: Through this review, we present a testable translational model. Additionally, we illustrate how the proposed model can help facilitate empirical evaluations of psychotherapeutic factors and their underlying neural mechanisms, both in the context of motivational interviewing and in the treatment of alcohol use disorders. PMID:22051204
NASA Astrophysics Data System (ADS)
Hirata, N.; Tsuruoka, H.; Yokoi, S.
2011-12-01
The current Japanese national earthquake prediction program emphasizes the importance of modeling as well as monitoring for a sound scientific development of earthquake prediction research. One major focus of the current program is to move toward creating testable earthquake forecast models. For this purpose, in 2009 we joined the Collaboratory for the Study of Earthquake Predictability (CSEP) and installed, through an international collaboration, the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan. We started Japanese earthquake predictability experiment on November 1, 2009. The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year and 3 years) and 3 testing regions called 'All Japan,' 'Mainland,' and 'Kanto.' A total of 160 models, as of August 2013, were submitted, and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. We will present results of prospective forecast and testing for periods before and after the 2011 Tohoku-oki earthquake. Because a seismic activity has changed dramatically since the 2011 event, performances of models have been affected very much. In addition, as there is the problem of authorized catalogue related to the completeness magnitude, most models did not pass the CSEP consistency tests. Also, we will discuss the retrospective earthquake forecast experiments for aftershocks of the 2011 Tohoku-oki earthquake. Our aim is to describe what has turned out to be the first occasion for setting up a research environment for rigorous earthquake forecasting in Japan.
NASA Astrophysics Data System (ADS)
Hirata, N.; Tsuruoka, H.; Yokoi, S.
2013-12-01
The current Japanese national earthquake prediction program emphasizes the importance of modeling as well as monitoring for a sound scientific development of earthquake prediction research. One major focus of the current program is to move toward creating testable earthquake forecast models. For this purpose, in 2009 we joined the Collaboratory for the Study of Earthquake Predictability (CSEP) and installed, through an international collaboration, the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan. We started Japanese earthquake predictability experiment on November 1, 2009. The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year and 3 years) and 3 testing regions called 'All Japan,' 'Mainland,' and 'Kanto.' A total of 160 models, as of August 2013, were submitted, and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. We will present results of prospective forecast and testing for periods before and after the 2011 Tohoku-oki earthquake. Because a seismic activity has changed dramatically since the 2011 event, performances of models have been affected very much. In addition, as there is the problem of authorized catalogue related to the completeness magnitude, most models did not pass the CSEP consistency tests. Also, we will discuss the retrospective earthquake forecast experiments for aftershocks of the 2011 Tohoku-oki earthquake. Our aim is to describe what has turned out to be the first occasion for setting up a research environment for rigorous earthquake forecasting in Japan.
Evolution beyond neo-Darwinism: a new conceptual framework.
Noble, Denis
2015-01-01
Experimental results in epigenetics and related fields of biological research show that the Modern Synthesis (neo-Darwinist) theory of evolution requires either extension or replacement. This article examines the conceptual framework of neo-Darwinism, including the concepts of 'gene', 'selfish', 'code', 'program', 'blueprint', 'book of life', 'replicator' and 'vehicle'. This form of representation is a barrier to extending or replacing existing theory as it confuses conceptual and empirical matters. These need to be clearly distinguished. In the case of the central concept of 'gene', the definition has moved all the way from describing a necessary cause (defined in terms of the inheritable phenotype itself) to an empirically testable hypothesis (in terms of causation by DNA sequences). Neo-Darwinism also privileges 'genes' in causation, whereas in multi-way networks of interactions there can be no privileged cause. An alternative conceptual framework is proposed that avoids these problems, and which is more favourable to an integrated systems view of evolution. © 2015. Published by The Company of Biologists Ltd.
Electrical test prediction using hybrid metrology and machine learning
NASA Astrophysics Data System (ADS)
Breton, Mary; Chao, Robin; Muthinti, Gangadhara Raja; de la Peña, Abraham A.; Simon, Jacques; Cepler, Aron J.; Sendelbach, Matthew; Gaudiello, John; Emans, Susan; Shifrin, Michael; Etzioni, Yoav; Urenski, Ronen; Lee, Wei Ti
2017-03-01
Electrical test measurement in the back-end of line (BEOL) is crucial for wafer and die sorting as well as comparing intended process splits. Any in-line, nondestructive technique in the process flow to accurately predict these measurements can significantly improve mean-time-to-detect (MTTD) of defects and improve cycle times for yield and process learning. Measuring after BEOL metallization is commonly done for process control and learning, particularly with scatterometry (also called OCD (Optical Critical Dimension)), which can solve for multiple profile parameters such as metal line height or sidewall angle and does so within patterned regions. This gives scatterometry an advantage over inline microscopy-based techniques, which provide top-down information, since such techniques can be insensitive to sidewall variations hidden under the metal fill of the trench. But when faced with correlation to electrical test measurements that are specific to the BEOL processing, both techniques face the additional challenge of sampling. Microscopy-based techniques are sampling-limited by their small probe size, while scatterometry is traditionally limited (for microprocessors) to scribe targets that mimic device ground rules but are not necessarily designed to be electrically testable. A solution to this sampling challenge lies in a fast reference-based machine learning capability that allows for OCD measurement directly of the electrically-testable structures, even when they are not OCD-compatible. By incorporating such direct OCD measurements, correlation to, and therefore prediction of, resistance of BEOL electrical test structures is significantly improved. Improvements in prediction capability for multiple types of in-die electrically-testable device structures is demonstrated. To further improve the quality of the prediction of the electrical resistance measurements, hybrid metrology using the OCD measurements as well as X-ray metrology (XRF) is used. Hybrid metrology is the practice of combining information from multiple sources in order to enable or improve the measurement of one or more critical parameters. Here, the XRF measurements are used to detect subtle changes in barrier layer composition and thickness that can have second-order effects on the electrical resistance of the test structures. By accounting for such effects with the aid of the X-ray-based measurements, further improvement in the OCD correlation to electrical test measurements is achieved. Using both types of solution incorporation of fast reference-based machine learning on nonOCD-compatible test structures, and hybrid metrology combining OCD with XRF technology improvement in BEOL cycle time learning could be accomplished through improved prediction capability.
Hinzen, Wolfram; Rosselló, Joana
2015-01-01
We hypothesize that linguistic (dis-)organization in the schizophrenic brain plays a more central role in the pathogenesis of this disease than commonly supposed. Against the standard view, that schizophrenia is a disturbance of thought or selfhood, we argue that the origins of the relevant forms of thought and selfhood at least partially depend on language. The view that they do not is premised by a theoretical conception of language that we here identify as 'Cartesian' and contrast with a recent 'un-Cartesian' model. This linguistic model empirically argues for both (i) a one-to-one correlation between human-specific thought or meaning and forms of grammatical organization, and (ii) an integrative and co-dependent view of linguistic cognition and its sensory-motor dimensions. Core dimensions of meaning mediated by grammar on this model specifically concern forms of referential and propositional meaning. A breakdown of these is virtually definitional of core symptoms. Within this model the three main positive symptoms of schizophrenia fall into place as failures in language-mediated forms of meaning, manifest either as a disorder of speech perception (Auditory Verbal Hallucinations), abnormal speech production running without feedback control (Formal Thought Disorder), or production of abnormal linguistic content (Delusions). Our hypothesis makes testable predictions for the language profile of schizophrenia across symptoms; it simplifies the cognitive neuropsychology of schizophrenia while not being inconsistent with a pattern of neurocognitive deficits and their correlations with symptoms; and it predicts persistent findings on disturbances of language-related circuitry in the schizophrenic brain.
Architectural Analysis of Dynamically Reconfigurable Systems
NASA Technical Reports Server (NTRS)
Lindvall, Mikael; Godfrey, Sally; Ackermann, Chris; Ray, Arnab; Yonkwa, Lyly
2010-01-01
oTpics include: the problem (increased flexibility of architectural styles decrease analyzability, behavior emerges and varies depending on the configuration, does the resulting system run according to the intended design, and architectural decisions can impede or facilitate testing); top down approach to architecture analysis, detection of defects and deviations, and architecture and its testability; currently targeted projects GMSEC and CFS; analyzing software architectures; analyzing runtime events; actual architecture recognition; GMPUB in Dynamic SAVE; sample output from new approach; taking message timing delays into account; CFS examples of architecture and testability; some recommendations for improved testablity; and CFS examples of abstract interfaces and testability; CFS example of opening some internal details.
Retrieval as a Fast Route to Memory Consolidation.
Antony, James W; Ferreira, Catarina S; Norman, Kenneth A; Wimber, Maria
2017-08-01
Retrieval-mediated learning is a powerful way to make memories last, but its neurocognitive mechanisms remain unclear. We propose that retrieval acts as a rapid consolidation event, supporting the creation of adaptive hippocampal-neocortical representations via the 'online' reactivation of associative information. We describe parallels between online retrieval and offline consolidation and offer testable predictions for future research. Copyright © 2017 Elsevier Ltd. All rights reserved.
Models of cooperative dynamics from biomolecules to magnets
NASA Astrophysics Data System (ADS)
Mobley, David Lowell
This work details application of computer models to several biological systems (prion diseases and Alzheimer's disease) and a magnetic system. These share some common themes, which are discussed. Here, simple lattice-based models are applied to aggregation of misfolded protein in prion diseases like Mad Cow disease. These can explain key features of the diseases. The modeling is based on aggregation being essential in establishing the time-course of infectivity. Growth of initial aggregates is assumed to dominate the experimentally observed lag phase. Subsequent fission, regrowth, and fission set apart the exponential doubling phase in disease progression. We explore several possible modes of growth for 2-D aggregates and suggest the model providing the best explanation for the experimental data. We develop testable predictions from this model. Like prion disease, Alzheimer's disease (AD) is an amyloid disease characterized by large aggregates in the brain. However, evidence increasingly points away from these as the toxic agent and towards oligomers of the Abeta peptide. We explore one possible toxicity mechanism---insertion of Abeta into cell membranes and formation of harmful ion channels. We find that mutations in this peptide which cause familial Alzheimer's disease (FAD) also affect the insertion of this peptide into membranes in a fairly consistent way, suggesting that this toxicity mechanism may be relevant biologically. We find a particular inserted configuration which may be especially harmful and develop testable predictions to verify whether or not this is the case. Nucleation is an essential feature of our models for prion disease, in that it protects normal, healthy individuals from getting prion disease. Nucleation is important in many other areas, and we modify our lattice-based nucleation model to apply to a hysteretic magnetic system where nucleation has been suggested to be important. From a simple model, we find qualitative agreement with experiment, and make testable experimental predictions concerning time-dependence and temperature-dependence of the major hysteresis loop and reversal curves which have been experimentally verified. We argue why this model may be suitable for systems like these and explain implications for Ising-like models. We suggest implications for future modeling work. Finally, we present suggestions for future work in all three areas.
Developing Tools to Test the Thermo-Mechanical Models, Examples at Crustal and Upper Mantle Scale
NASA Astrophysics Data System (ADS)
Le Pourhiet, L.; Yamato, P.; Burov, E.; Gurnis, M.
2005-12-01
Testing geodynamical model is never an easy task. Depending on the spatio-temporal scale of the model, different testable predictions are needed and no magic reciepe exist. This contribution first presents different methods that have been used to test themo-mechanical modeling results at upper crustal, lithospheric and upper mantle scale using three geodynamical examples : the Gulf of Corinth (Greece), the Western Alps, and the Sierra Nevada. At short spatio-temporal scale (e.g. Gulf of Corinth). The resolution of the numerical models is usually sufficient to catch the timing and kinematics of the faults precisely enough to be tested by tectono-stratigraphic arguments. In active deforming area, microseismicity can be compared to the effective rheology and P and T axes of the focal mechanism can be compared with local orientation of the major component of the stress tensor. At lithospheric scale the resolution of the models doesn't permit anymore to constrain the models by direct observations (i.e. structural data from field or seismic reflection). Instead, synthetic P-T-t path may be computed and compared to natural ones in term of rate of exhumation for ancient orogens. Topography may also help but on continent it mainly depends on erosion laws that are complicated to constrain. Deeper in the mantle, the only available constrain are long wave length topographic data and tomographic "data". The major problem to overcome now at lithospheric and upper mantle scale, is that the so called "data" results actually from inverse models of the real data and that those inverse model are based on synthetic models. Post processing P and S wave velocities is not sufficient to be able to make testable prediction at upper mantle scale. Instead of that, direct wave propagations model must be computed. This allows checking if the differences between two models constitute a testable prediction or not. On longer term, we may be able to use those synthetic models to reduce the residue in the inversion of elastic wave arrival time
Individual differences in behavioural plasticities.
Stamps, Judy A
2016-05-01
Interest in individual differences in animal behavioural plasticities has surged in recent years, but research in this area has been hampered by semantic confusion as different investigators use the same terms (e.g. plasticity, flexibility, responsiveness) to refer to different phenomena. The first goal of this review is to suggest a framework for categorizing the many different types of behavioural plasticities, describe examples of each, and indicate why using reversibility as a criterion for categorizing behavioural plasticities is problematic. This framework is then used to address a number of timely questions about individual differences in behavioural plasticities. One set of questions concerns the experimental designs that can be used to study individual differences in various types of behavioural plasticities. Although within-individual designs are the default option for empirical studies of many types of behavioural plasticities, in some situations (e.g. when experience at an early age affects the behaviour expressed at subsequent ages), 'replicate individual' designs can provide useful insights into individual differences in behavioural plasticities. To date, researchers using within-individual and replicate individual designs have documented individual differences in all of the major categories of behavioural plasticities described herein. Another important question is whether and how different types of behavioural plasticities are related to one another. Currently there is empirical evidence that many behavioural plasticities [e.g. contextual plasticity, learning rates, IIV (intra-individual variability), endogenous plasticities, ontogenetic plasticities) can themselves vary as a function of experiences earlier in life, that is, many types of behavioural plasticity are themselves developmentally plastic. These findings support the assumption that differences among individuals in prior experiences may contribute to individual differences in behavioural plasticities observed at a given age. Several authors have predicted correlations across individuals between different types of behavioural plasticities, i.e. that some individuals will be generally more plastic than others. However, empirical support for most of these predictions, including indirect evidence from studies of relationships between personality traits and plasticities, is currently sparse and equivocal. The final section of this review suggests how an appreciation of the similarities and differences between different types of behavioural plasticities may help theoreticians formulate testable models to explain the evolution of individual differences in behavioural plasticities and the evolutionary and ecological consequences of individual differences in behavioural plasticities. © 2015 Cambridge Philosophical Society.
Boolean Minimization and Algebraic Factorization Procedures for Fully Testable Sequential Machines
1989-09-01
Boolean Minimization and Algebraic Factorization Procedures for Fully Testable Sequential Machines Srinivas Devadas and Kurt Keutzer F ( Abstract In this...Projects Agency under contract number N00014-87-K-0825. Author Information Devadas : Department of Electrical Engineering and Computer Science, Room 36...MA 02139; (617) 253-0292. 0 * Boolean Minimization and Algebraic Factorization Procedures for Fully Testable Sequential Machines Siivas Devadas
Advanced Diagnostic and Prognostic Testbed (ADAPT) Testability Analysis Report
NASA Technical Reports Server (NTRS)
Ossenfort, John
2008-01-01
As system designs become more complex, determining the best locations to add sensors and test points for the purpose of testing and monitoring these designs becomes more difficult. Not only must the designer take into consideration all real and potential faults of the system, he or she must also find efficient ways of detecting and isolating those faults. Because sensors and cabling take up valuable space and weight on a system, and given constraints on bandwidth and power, it is even more difficult to add sensors into these complex designs after the design has been completed. As a result, a number of software tools have been developed to assist the system designer in proper placement of these sensors during the system design phase of a project. One of the key functions provided by many of these software programs is a testability analysis of the system essentially an evaluation of how observable the system behavior is using available tests. During the design phase, testability metrics can help guide the designer in improving the inherent testability of the design. This may include adding, removing, or modifying tests; breaking up feedback loops, or changing the system to reduce fault propagation. Given a set of test requirements, the analysis can also help to verify that the system will meet those requirements. Of course, a testability analysis requires that a software model of the physical system is available. For the analysis to be most effective in guiding system design, this model should ideally be constructed in parallel with these efforts. The purpose of this paper is to present the final testability results of the Advanced Diagnostic and Prognostic Testbed (ADAPT) after the system model was completed. The tool chosen to build the model and to perform the testability analysis with is the Testability Engineering and Maintenance System Designer (TEAMS-Designer). The TEAMS toolset is intended to be a solution to span all phases of the system, from design and development through health management and maintenance. TEAMS-Designer is the model-building and testability analysis software in that suite.
Henderson, Peter A; Magurran, Anne E
2010-05-22
Species abundance distributions (SADs) are widely used as a tool for summarizing ecological communities but may have different shapes, depending on the currency used to measure species importance. We develop a simple plotting method that links SADs in the alternative currencies of numerical abundance and biomass and is underpinned by testable predictions about how organisms occupy physical space. When log numerical abundance is plotted against log biomass, the species lie within an approximately triangular region. Simple energetic and sampling constraints explain the triangular form. The dispersion of species within this triangle is the key to understanding why SADs of numerical abundance and biomass can differ. Given regular or random species dispersion, we can predict the shape of the SAD for both currencies under a variety of sampling regimes. We argue that this dispersion pattern will lie between regular and random for the following reasons. First, regular dispersion patterns will result if communities are comprised groups of organisms that use different components of the physical space (e.g. open water, the sea bed surface or rock crevices in a marine fish assemblage), and if the abundance of species in each of these spatial guilds is linked to the way individuals of varying size use the habitat. Second, temporal variation in abundance and sampling error will tend to randomize this regular pattern. Data from two intensively studied marine ecosystems offer empirical support for these predictions. Our approach also has application in environmental monitoring and the recognition of anthropogenic disturbance, which may change the shape of the triangular region by, for example, the loss of large body size top predators that occur at low abundance.
Fundamental insights into ontogenetic growth from theory and fish
Sibly, Richard M.; Baker, Joanna; Grady, John M.; Luna, Susan M.; Kodric-Brown, Astrid; Venditti, Chris; Brown, James H.
2015-01-01
The fundamental features of growth may be universal, because growth trajectories of most animals are very similar, but a unified mechanistic theory of growth remains elusive. Still needed is a synthetic explanation for how and why growth rates vary as body size changes, both within individuals over their ontogeny and between populations and species over their evolution. Here, we use Bertalanffy growth equations to characterize growth of ray-finned fishes in terms of two parameters, the growth rate coefficient, K, and final body mass, m∞. We derive two alternative empirically testable hypotheses and test them by analyzing data from FishBase. Across 576 species, which vary in size at maturity by almost nine orders of magnitude, K scaled as m∞−0.23. This supports our first hypothesis that growth rate scales as m∞−0.25 as predicted by metabolic scaling theory; it implies that species that grow to larger mature sizes grow faster as juveniles. Within fish species, however, K scaled as m∞−0.35. This supports our second hypothesis, which predicts that growth rate scales as m∞−0.33 when all juveniles grow at the same rate. The unexpected disparity between across- and within-species scaling challenges existing theoretical interpretations. We suggest that the similar ontogenetic programs of closely related populations constrain growth to m∞−0.33 scaling, but as species diverge over evolutionary time they evolve the near-optimal m∞−0.25 scaling predicted by metabolic scaling theory. Our findings have important practical implications because fish supply essential protein in human diets, and sustainable yields from wild harvests and aquaculture depend on growth rates. PMID:26508641
Henderson, Peter A.; Magurran, Anne E.
2010-01-01
Species abundance distributions (SADs) are widely used as a tool for summarizing ecological communities but may have different shapes, depending on the currency used to measure species importance. We develop a simple plotting method that links SADs in the alternative currencies of numerical abundance and biomass and is underpinned by testable predictions about how organisms occupy physical space. When log numerical abundance is plotted against log biomass, the species lie within an approximately triangular region. Simple energetic and sampling constraints explain the triangular form. The dispersion of species within this triangle is the key to understanding why SADs of numerical abundance and biomass can differ. Given regular or random species dispersion, we can predict the shape of the SAD for both currencies under a variety of sampling regimes. We argue that this dispersion pattern will lie between regular and random for the following reasons. First, regular dispersion patterns will result if communities are comprised groups of organisms that use different components of the physical space (e.g. open water, the sea bed surface or rock crevices in a marine fish assemblage), and if the abundance of species in each of these spatial guilds is linked to the way individuals of varying size use the habitat. Second, temporal variation in abundance and sampling error will tend to randomize this regular pattern. Data from two intensively studied marine ecosystems offer empirical support for these predictions. Our approach also has application in environmental monitoring and the recognition of anthropogenic disturbance, which may change the shape of the triangular region by, for example, the loss of large body size top predators that occur at low abundance. PMID:20071388
Escobar, Luis E.; Qiao, Huijie; Phelps, Nicholas B. D.; Wagner, Carli K.; Larkin, Daniel J.
2016-01-01
Nitellopsis obtusa (starry stonewort) is a dioecious green alga native to Europe and Asia that has emerged as an aquatic invasive species in North America. Nitellopsis obtusa is rare across large portions of its native range, but has spread rapidly in northern-tier lakes in the United States, where it can interfere with recreation and may displace native species. Little is known about the invasion ecology of N. obtusa, making it difficult to forecast future expansion. Using ecological niche modeling we investigated environmental variables associated with invasion risk. We used species records, climate data, and remotely sensed environmental variables to characterize the species’ multidimensional distribution. We found that N. obtusa is exploiting novel ecological niche space in its introduced range, which may help explain its invasiveness. While the fundamental niche of N. obtusa may be stable, there appears to have been a shift in its realized niche associated with invasion in North America. Large portions of the United States are predicted to constitute highly suitable habitat for N. obtusa. Our results can inform early detection and rapid response efforts targeting N. obtusa and provide testable estimates of the physiological tolerances of this species as a baseline for future empirical research. PMID:27363541
Rationality, practice variation and person-centred health policy: a threshold hypothesis.
Djulbegovic, Benjamin; Hamm, Robert M; Mayrhofer, Thomas; Hozo, Iztok; Van den Ende, Jef
2015-12-01
Variation in practice of medicine is one of the major health policy issues of today. Ultimately, it is related to physicians' decision making. Similar patients with similar likelihood of having disease are often managed by different doctors differently: some doctors may elect to observe the patient, others decide to act based on diagnostic testing and yet others may elect to treat without testing. We explain these differences in practice by differences in disease probability thresholds at which physicians decide to act: contextual social and clinical factors and emotions such as regret affect the threshold by influencing the way doctors integrate objective data related to treatment and testing. However, depending on a theoretical construct each of the physician's behaviour can be considered rational. In fact, we showed that the current regulatory policies lead to predictably low thresholds for most decisions in contemporary practice. As a result, we may expect continuing motivation for overuse of treatment and diagnostic tests. We argue that rationality should take into account both formal principles of rationality and human intuitions about good decisions along the lines of Rawls' 'reflective equilibrium/considered judgment'. In turn, this can help define a threshold model that is empirically testable. © 2015 The Authors. Journal of Evaluation in Clinical Practice published by John Wiley & Sons, Ltd.
Intrinsic and extrinsic motivators of attachment under active inference.
Cittern, David; Nolte, Tobias; Friston, Karl; Edalat, Abbas
2018-01-01
This paper addresses the formation of infant attachment types within the context of active inference: a holistic account of action, perception and learning in the brain. We show how the organised forms of attachment (secure, avoidant and ambivalent) might arise in (Bayesian) infants. Specifically, we show that these distinct forms of attachment emerge from a minimisation of free energy-over interoceptive states relating to internal stress levels-when seeking proximity to caregivers who have a varying impact on these interoceptive states. In line with empirical findings in disrupted patterns of affective communication, we then demonstrate how exteroceptive cues (in the form of caregiver-mediated AMBIANCE affective communication errors, ACE) can result in disorganised forms of attachment in infants of caregivers who consistently increase stress when the infant seeks proximity, but can have an organising (towards ambivalence) effect in infants of inconsistent caregivers. In particular, we differentiate disorganised attachment from avoidance in terms of the high epistemic value of proximity seeking behaviours (resulting from the caregiver's misleading exteroceptive cues) that preclude the emergence of coherent and organised behavioural policies. Our work, the first to formulate infant attachment in terms of active inference, makes a new testable prediction with regards to the types of affective communication errors that engender ambivalent attachment.
Loizzo, Joseph
2014-01-01
This article offers an overview of meditation research: its history, recent developments, and future directions. As the number and scope of studies grow, the field has converged with cognitive and affective neuroscience, and spawned many clinical applications. Recent work has shed light on the mechanisms and effects of diverse practices, and is entering a new phase where consensus and coherent paradigms are within reach. This article suggests an unusual path for future advancement: complementing conventional research with rigorous dialogue with the contemplative traditions that train expert meditators and best know the techniques. It explores the Nalanda tradition developed in India and preserved in Tibet, because its cumulative approach to contemplative methods produced a comprehensive framework that may help interpret data and guide research, and because its naturalistic theories and empirical methods may help bridge the gulf between science and other contemplative traditions. Examining recent findings and models in light of this framework, the article introduces the Indic map of the central nervous system and presents three testable predictions based on it. Finally, it reviews two studies that suggest that the multimodal Nalanda approach to contemplative learning is as well received as more familiar approaches, while showing promise of being more effective. PMID:24673149
Model averaging, optimal inference, and habit formation
FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.
2014-01-01
Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724
Intrinsic and extrinsic motivators of attachment under active inference
Nolte, Tobias; Friston, Karl; Edalat, Abbas
2018-01-01
This paper addresses the formation of infant attachment types within the context of active inference: a holistic account of action, perception and learning in the brain. We show how the organised forms of attachment (secure, avoidant and ambivalent) might arise in (Bayesian) infants. Specifically, we show that these distinct forms of attachment emerge from a minimisation of free energy—over interoceptive states relating to internal stress levels—when seeking proximity to caregivers who have a varying impact on these interoceptive states. In line with empirical findings in disrupted patterns of affective communication, we then demonstrate how exteroceptive cues (in the form of caregiver-mediated AMBIANCE affective communication errors, ACE) can result in disorganised forms of attachment in infants of caregivers who consistently increase stress when the infant seeks proximity, but can have an organising (towards ambivalence) effect in infants of inconsistent caregivers. In particular, we differentiate disorganised attachment from avoidance in terms of the high epistemic value of proximity seeking behaviours (resulting from the caregiver’s misleading exteroceptive cues) that preclude the emergence of coherent and organised behavioural policies. Our work, the first to formulate infant attachment in terms of active inference, makes a new testable prediction with regards to the types of affective communication errors that engender ambivalent attachment. PMID:29621266
Pinter-Wollman, Noa; Keiser, Carl N.; Wollman, Roy; Pruitt, Jonathan N.
2017-01-01
Collective behavior emerges from interactions among group members who often vary in their behavior. The presence of just one or a few keystone individuals, such as leaders or tutors, may have a large effect on collective outcomes. These individuals can catalyze behavioral changes in other group members, thus altering group composition and collective behavior. The influence of keystone individuals on group function may lead to trade-offs between ecological situations, because the behavioral composition they facilitate may be suitable in one situation but not another. We use computer simulations to examine various mechanisms that allow keystone individuals to exert their influence on group members. We further discuss a trade-off between two potentially conflicting collective outcomes, cooperative prey attack and disease dynamics. Our simulations match empirical data from a social spider system and produce testable predictions for the causes and consequences of the influence of keystone individuals on group composition and collective outcomes. We find that a group’s behavioral composition can be impacted by the keystone individual through changes to interaction patterns or behavioral persistence over time. Group behavioral composition and the mechanisms that drive the distribution of phenotypes influence collective outcomes and lead to trade-offs between disease dynamics and cooperative prey attack. PMID:27420788
A Unified Approach to the Synthesis of Fully Testable Sequential Machines
1989-10-01
N A Unified Approach to the Synthesis of Fully Testable Sequential Machines Srinivas Devadas and Kurt Keutzer Abstract • In this paper we attempt to...research was supported in part by the Defense Advanced Research Projects Agency under contract N00014-87-K-0825. Author Information Devadas : Department...Fully Testable Sequential Maine(S P Sritiivas Devadas Departinent of Electrical Engineerinig anid Comivi Sciec Massachusetts Institute of Technology
Cowden, Tracy L; Cummings, Greta G
2012-07-01
We describe a theoretical model of staff nurses' intentions to stay in their current positions. The global nursing shortage and high nursing turnover rate demand evidence-based retention strategies. Inconsistent study outcomes indicate a need for testable theoretical models of intent to stay that build on previously published models, are reflective of current empirical research and identify causal relationships between model concepts. Two systematic reviews of electronic databases of English language published articles between 1985-2011. This complex, testable model expands on previous models and includes nurses' affective and cognitive responses to work and their effects on nurses' intent to stay. The concepts of desire to stay, job satisfaction, joy at work, and moral distress are included in the model to capture the emotional response of nurses to their work environments. The influence of leadership is integrated within the model. A causal understanding of clinical nurses' intent to stay and the effects of leadership on the development of that intention will facilitate the development of effective retention strategies internationally. Testing theoretical models is necessary to confirm previous research outcomes and to identify plausible sequences of the development of behavioral intentions. Increased understanding of the causal influences on nurses' intent to stay should lead to strategies that may result in higher retention rates and numbers of nurses willing to work in the health sector. © 2012 Blackwell Publishing Ltd.
Social response to technological disaster: the accident at Three Mile Island
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richardson, B.B.
1984-01-01
Until recently the sociological study of man environment relations under extreme circumstances has been restricted to natural hazards (e.g., floods, hurricanes, tornadoes). Technological disasters are becoming more commonplace (e.g., Times Beach, MO, Love Canal, TMI-2) and are growing as potential sources of impact upon human populations. However, theory regarding the social impact of such disasters has not been developed. While research on natural disasters is in part applicable to technological disasters, theory adapted from environmental sociology and psychology are also utilized to develop a theory of social response to extreme environmental events produced by technology. Hypotheses are developed in themore » form of an empirically testable model based on the literature reviewed.« less
Reframing landscape fragmentation's effects on ecosystem services.
Mitchell, Matthew G E; Suarez-Castro, Andrés F; Martinez-Harms, Maria; Maron, Martine; McAlpine, Clive; Gaston, Kevin J; Johansen, Kasper; Rhodes, Jonathan R
2015-04-01
Landscape structure and fragmentation have important effects on ecosystem services, with a common assumption being that fragmentation reduces service provision. This is based on fragmentation's expected effects on ecosystem service supply, but ignores how fragmentation influences the flow of services to people. Here we develop a new conceptual framework that explicitly considers the links between landscape fragmentation, the supply of services, and the flow of services to people. We argue that fragmentation's effects on ecosystem service flow can be positive or negative, and use our framework to construct testable hypotheses about the effects of fragmentation on final ecosystem service provision. Empirical efforts to apply and test this framework are critical to improving landscape management for multiple ecosystem services. Copyright © 2015 Elsevier Ltd. All rights reserved.
Stellivore extraterrestrials? Binary stars as living systems
NASA Astrophysics Data System (ADS)
Vidal, Clément
2016-11-01
We lack signs of extraterrestrial intelligence (ETI) despite decades of observation in the whole electromagnetic spectrum. Could evidence be buried in existing data? To recognize ETI, we first propose criteria discerning life from non-life based on thermodynamics and living systems theory. Then we extrapolate civilizational development to both external and internal growth. Taken together, these two trends lead to an argument that some existing binary stars might actually be ETI. Since these hypothetical beings feed actively on stars, we call them "stellivores". I present an independent thermodynamic argument for their existence, with a metabolic interpretation of interacting binary stars. The jury is still out, but the hypothesis is empirically testable with existing astrophysical data.
Crystal study and econometric model
NASA Technical Reports Server (NTRS)
1975-01-01
An econometric model was developed that can be used to predict demand and supply figures for crystals over a time horizon roughly concurrent with that of NASA's Space Shuttle Program - that is, 1975 through 1990. The model includes an equation to predict the impact on investment in the crystal-growing industry. Actually, two models are presented. The first is a theoretical model which follows rather strictly the standard theoretical economic concepts involved in supply and demand analysis, and a modified version of the model was developed which, though not quite as theoretically sound, was testable utilizing existing data sources.
Extended Testability Analysis Tool
NASA Technical Reports Server (NTRS)
Melcher, Kevin; Maul, William A.; Fulton, Christopher
2012-01-01
The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system s ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system s ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.
Software Users Manual (SUM): Extended Testability Analysis (ETA) Tool
NASA Technical Reports Server (NTRS)
Maul, William A.; Fulton, Christopher E.
2011-01-01
This software user manual describes the implementation and use the Extended Testability Analysis (ETA) Tool. The ETA Tool is a software program that augments the analysis and reporting capabilities of a commercial-off-the-shelf (COTS) testability analysis software package called the Testability Engineering And Maintenance System (TEAMS) Designer. An initial diagnostic assessment is performed by the TEAMS Designer software using a qualitative, directed-graph model of the system being analyzed. The ETA Tool utilizes system design information captured within the diagnostic model and testability analysis output from the TEAMS Designer software to create a series of six reports for various system engineering needs. The ETA Tool allows the user to perform additional studies on the testability analysis results by determining the detection sensitivity to the loss of certain sensors or tests. The ETA Tool was developed to support design and development of the NASA Ares I Crew Launch Vehicle. The diagnostic analysis provided by the ETA Tool was proven to be valuable system engineering output that provided consistency in the verification of system engineering requirements. This software user manual provides a description of each output report generated by the ETA Tool. The manual also describes the example diagnostic model and supporting documentation - also provided with the ETA Tool software release package - that were used to generate the reports presented in the manual
The linguistics of schizophrenia: thought disturbance as language pathology across positive symptoms
Hinzen, Wolfram; Rosselló, Joana
2015-01-01
We hypothesize that linguistic (dis-)organization in the schizophrenic brain plays a more central role in the pathogenesis of this disease than commonly supposed. Against the standard view, that schizophrenia is a disturbance of thought or selfhood, we argue that the origins of the relevant forms of thought and selfhood at least partially depend on language. The view that they do not is premised by a theoretical conception of language that we here identify as ‘Cartesian’ and contrast with a recent ‘un-Cartesian’ model. This linguistic model empirically argues for both (i) a one-to-one correlation between human-specific thought or meaning and forms of grammatical organization, and (ii) an integrative and co-dependent view of linguistic cognition and its sensory-motor dimensions. Core dimensions of meaning mediated by grammar on this model specifically concern forms of referential and propositional meaning. A breakdown of these is virtually definitional of core symptoms. Within this model the three main positive symptoms of schizophrenia fall into place as failures in language-mediated forms of meaning, manifest either as a disorder of speech perception (Auditory Verbal Hallucinations), abnormal speech production running without feedback control (Formal Thought Disorder), or production of abnormal linguistic content (Delusions). Our hypothesis makes testable predictions for the language profile of schizophrenia across symptoms; it simplifies the cognitive neuropsychology of schizophrenia while not being inconsistent with a pattern of neurocognitive deficits and their correlations with symptoms; and it predicts persistent findings on disturbances of language-related circuitry in the schizophrenic brain. PMID:26236257
Mass and Environment as Drivers of Galaxy Evolution: Simplicity and its Consequences
NASA Astrophysics Data System (ADS)
Peng, Yingjie
2012-01-01
The galaxy population appears to be composed of infinitely complex different types and properties at first sight, however, when large samples of galaxies are studied, it appears that the vast majority of galaxies just follow simple scaling relations and similar evolutional modes while the outliers represent some minority. The underlying simplicities of the interrelationships among stellar mass, star formation rate and environment are seen in SDSS and zCOSMOS. We demonstrate that the differential effects of mass and environment are completely separable to z 1, indicating that two distinct physical processes are operating, namely the "mass quenching" and "environment quenching". These two simple quenching processes, plus some additional quenching due to merging, then naturally produce the Schechter form of the galaxy stellar mass functions and make quantitative predictions for the inter-relationships between the Schechter parameters of star-forming and passive galaxies in different environments. All of these detailed quantitative relationships are indeed seen, to very high precision, in SDSS, lending strong support to our simple empirically-based model. The model also offers qualitative explanations for the "anti-hierarchical" age-mass relation and the alpha-enrichment patterns for passive galaxies and makes some other testable predictions such as the mass function of the population of transitory objects that are in the process of being quenched, the galaxy major- and minor-merger rates, the galaxy stellar mass assembly history, star formation history and etc. Although still purely phenomenological, the model makes clear what the evolutionary characteristics of the relevant physical processes must in fact be.
NASA Astrophysics Data System (ADS)
Park, Hyeran; Nielsen, Wendy; Woodruff, Earl
2014-05-01
This study examined and compared students' understanding of nature of science (NOS) with 521 Grade 8 Canadian and Korean students using a mixed methods approach. The concepts of NOS were measured using a survey that had both quantitative and qualitative elements. Descriptive statistics and one-way multivariate analysis of variances examined the quantitative data while a conceptually clustered matrix classified the open-ended responses. The country effect could explain 3-12 % of the variances of subjectivity, empirical testability and diverse methods, but it was not significant for the concepts of tentativeness and socio-cultural embeddedness of science. The open-ended responses showed that students believed scientific theories change due to errors or discoveries. Students regarded empirical evidence as undeniable and objective although they acknowledged experiments depend on theories or scientists' knowledge. The open responses revealed that national situations and curriculum content affected their views. For our future democratic citizens to gain scientific literacy, science curricula should include currently acknowledged NOS concepts and should be situated within societal and cultural perspectives.
Zee-Babu type model with U (1 )Lμ-Lτ gauge symmetry
NASA Astrophysics Data System (ADS)
Nomura, Takaaki; Okada, Hiroshi
2018-05-01
We extend the Zee-Babu model, introducing local U (1 )Lμ-Lτ symmetry with several singly charged bosons. We find a predictive neutrino mass texture in a simple hypothesis in which mixings among singly charged bosons are negligible. Also, lepton-flavor violations are less constrained compared with the original model. Then, we explore the testability of the model, focusing on doubly charged boson physics at the LHC and the International Linear Collider.
A collider observable QCD axion
Dimopoulos, Savas; Hook, Anson; Huang, Junwu; ...
2016-11-09
Here, we present a model where the QCD axion is at the TeV scale and visible at a collider via its decays. Conformal dynamics and strong CP considerations account for the axion coupling strongly enough to the standard model to be produced as well as the coincidence between the weak scale and the axion mass. The model predicts additional pseudoscalar color octets whose properties are completely determined by the axion properties rendering the theory testable.
Predicting Predator Recognition in a Changing World.
Carthey, Alexandra J R; Blumstein, Daniel T
2018-02-01
Through natural as well as anthropogenic processes, prey can lose historically important predators and gain novel ones. Both predator gain and loss frequently have deleterious consequences. While numerous hypotheses explain the response of individuals to novel and familiar predators, we lack a unifying conceptual model that predicts the fate of prey following the introduction of a novel or a familiar (reintroduced) predator. Using the concept of eco-evolutionary experience, we create a new framework that allows us to predict whether prey will recognize and be able to discriminate predator cues from non-predator cues and, moreover, the likely persistence outcomes for 11 different predator-prey interaction scenarios. This framework generates useful and testable predictions for ecologists, conservation scientists, and decision-makers. Copyright © 2017 Elsevier Ltd. All rights reserved.
Friesen, Justin P; Campbell, Troy H; Kay, Aaron C
2015-03-01
We propose that people may gain certain "offensive" and "defensive" advantages for their cherished belief systems (e.g., religious and political views) by including aspects of unfalsifiability in those belief systems, such that some aspects of the beliefs cannot be tested empirically and conclusively refuted. This may seem peculiar, irrational, or at least undesirable to many people because it is assumed that the primary purpose of a belief is to know objective truth. However, past research suggests that accuracy is only one psychological motivation among many, and falsifiability or testability may be less important when the purpose of a belief serves other psychological motives (e.g., to maintain one's worldviews, serve an identity). In Experiments 1 and 2 we demonstrate the "offensive" function of unfalsifiability: that it allows religious adherents to hold their beliefs with more conviction and political partisans to polarize and criticize their opponents more extremely. Next we demonstrate unfalsifiability's "defensive" function: When facts threaten their worldviews, religious participants frame specific reasons for their beliefs in more unfalsifiable terms (Experiment 3) and political partisans construe political issues as more unfalsifiable ("moral opinion") instead of falsifiable ("a matter of facts"; Experiment 4). We conclude by discussing how in a world where beliefs and ideas are becoming more easily testable by data, unfalsifiability might be an attractive aspect to include in one's belief systems, and how unfalsifiability may contribute to polarization, intractability, and the marginalization of science in public discourse. PsycINFO Database Record (c) 2015 APA, all rights reserved.
LSI/VLSI design for testability analysis and general approach
NASA Technical Reports Server (NTRS)
Lam, A. Y.
1982-01-01
The incorporation of testability characteristics into large scale digital design is not only necessary for, but also pertinent to effective device testing and enhancement of device reliability. There are at least three major DFT techniques, namely, the self checking, the LSSD, and the partitioning techniques, each of which can be incorporated into a logic design to achieve a specific set of testability and reliability requirements. Detailed analysis of the design theory, implementation, fault coverage, hardware requirements, application limitations, etc., of each of these techniques are also presented.
Larval transport modeling of deep-sea invertebrates can aid the search for undiscovered populations.
Yearsley, Jon M; Sigwart, Julia D
2011-01-01
Many deep-sea benthic animals occur in patchy distributions separated by thousands of kilometres, yet because deep-sea habitats are remote, little is known about their larval dispersal. Our novel method simulates dispersal by combining data from the Argo array of autonomous oceanographic probes, deep-sea ecological surveys, and comparative invertebrate physiology. The predicted particle tracks allow quantitative, testable predictions about the dispersal of benthic invertebrate larvae in the south-west Pacific. In a test case presented here, using non-feeding, non-swimming (lecithotrophic trochophore) larvae of polyplacophoran molluscs (chitons), we show that the likely dispersal pathways in a single generation are significantly shorter than the distances between the three known population centres in our study region. The large-scale density of chiton populations throughout our study region is potentially much greater than present survey data suggest, with intermediate 'stepping stone' populations yet to be discovered. We present a new method that is broadly applicable to studies of the dispersal of deep-sea organisms. This test case demonstrates the power and potential applications of our new method, in generating quantitative, testable hypotheses at multiple levels to solve the mismatch between observed and expected distributions: probabilistic predictions of locations of intermediate populations, potential alternative dispersal mechanisms, and expected population genetic structure. The global Argo data have never previously been used to address benthic biology, and our method can be applied to any non-swimming larvae of the deep-sea, giving information upon dispersal corridors and population densities in habitats that remain intrinsically difficult to assess.
Larval Transport Modeling of Deep-Sea Invertebrates Can Aid the Search for Undiscovered Populations
Yearsley, Jon M.; Sigwart, Julia D.
2011-01-01
Background Many deep-sea benthic animals occur in patchy distributions separated by thousands of kilometres, yet because deep-sea habitats are remote, little is known about their larval dispersal. Our novel method simulates dispersal by combining data from the Argo array of autonomous oceanographic probes, deep-sea ecological surveys, and comparative invertebrate physiology. The predicted particle tracks allow quantitative, testable predictions about the dispersal of benthic invertebrate larvae in the south-west Pacific. Principal Findings In a test case presented here, using non-feeding, non-swimming (lecithotrophic trochophore) larvae of polyplacophoran molluscs (chitons), we show that the likely dispersal pathways in a single generation are significantly shorter than the distances between the three known population centres in our study region. The large-scale density of chiton populations throughout our study region is potentially much greater than present survey data suggest, with intermediate ‘stepping stone’ populations yet to be discovered. Conclusions/Significance We present a new method that is broadly applicable to studies of the dispersal of deep-sea organisms. This test case demonstrates the power and potential applications of our new method, in generating quantitative, testable hypotheses at multiple levels to solve the mismatch between observed and expected distributions: probabilistic predictions of locations of intermediate populations, potential alternative dispersal mechanisms, and expected population genetic structure. The global Argo data have never previously been used to address benthic biology, and our method can be applied to any non-swimming larvae of the deep-sea, giving information upon dispersal corridors and population densities in habitats that remain intrinsically difficult to assess. PMID:21857992
Earthquake Forecasting System in Italy
NASA Astrophysics Data System (ADS)
Falcone, G.; Marzocchi, W.; Murru, M.; Taroni, M.; Faenza, L.
2017-12-01
In Italy, after the 2009 L'Aquila earthquake, a procedure was developed for gathering and disseminating authoritative information about the time dependence of seismic hazard to help communities prepare for a potentially destructive earthquake. The most striking time dependency of the earthquake occurrence process is the time clustering, which is particularly pronounced in time windows of days and weeks. The Operational Earthquake Forecasting (OEF) system that is developed at the Seismic Hazard Center (Centro di Pericolosità Sismica, CPS) of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) is the authoritative source of seismic hazard information for Italian Civil Protection. The philosophy of the system rests on a few basic concepts: transparency, reproducibility, and testability. In particular, the transparent, reproducible, and testable earthquake forecasting system developed at CPS is based on ensemble modeling and on a rigorous testing phase. Such phase is carried out according to the guidance proposed by the Collaboratory for the Study of Earthquake Predictability (CSEP, international infrastructure aimed at evaluating quantitatively earthquake prediction and forecast models through purely prospective and reproducible experiments). In the OEF system, the two most popular short-term models were used: the Epidemic-Type Aftershock Sequences (ETAS) and the Short-Term Earthquake Probabilities (STEP). Here, we report the results from OEF's 24hour earthquake forecasting during the main phases of the 2016-2017 sequence occurred in Central Apennines (Italy).
Emotional intervention strategies for dementia-related behavior: a theory synthesis.
Yao, Lan; Algase, Donna
2008-04-01
Behavioral disturbances of elders with dementia are prevalent. Yet the science guiding development and testing of effective intervention strategies is limited by rudimentary and often-conflicting theories. Using a theory-synthesis approach conducted within the perspective of the need-driven dementia-compromised behavior model, this article presents the locomoting responses to environment in elders with dementia (LRE-EWD) model. This new model, based on empirical and theoretical evidence, integrates the role of emotion with that of cognition in explicating a person-environment dynamic supporting wandering and other dementia-related disturbances. Included is evidence of the theory's testability and elaboration of its implications. The LRE-EWD model resolves conflicting views and evidence from current research on environmental interventions for behavior disturbances and opens new avenues to advance this field of study and practice.
Loss Aversion and Time-Differentiated Electricity Pricing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spurlock, C. Anna
2015-06-01
I develop a model of loss aversion over electricity expenditure, from which I derive testable predictions for household electricity consumption while on combination time-of-use (TOU) and critical peak pricing (CPP) plans. Testing these predictions results in evidence consistent with loss aversion: (1) spillover effects - positive expenditure shocks resulted in significantly more peak consumption reduction for several weeks thereafter; and (2) clustering - disproportionate probability of consuming such that expenditure would be equal between the TOUCPP or standard flat-rate pricing structures. This behavior is inconsistent with a purely neoclassical utility model, and has important implications for application of time-differentiated electricitymore » pricing.« less
NASA Astrophysics Data System (ADS)
Treuer, Galen; Koebele, Elizabeth; Deslatte, Aaron; Ernst, Kathleen; Garcia, Margaret; Manago, Kim
2017-01-01
Although the water management sector is often characterized as resistant to risk and change, urban areas across the United States are increasingly interested in creating opportunities to transition toward more sustainable water management practices. These transitions are complex and difficult to predict - the product of water managers acting in response to numerous biophysical, regulatory, political, and financial factors within institutional constraints. Gaining a better understanding of how these transitions occur is crucial for continuing to improve water management. This paper presents a replicable methodology for analyzing how urban water utilities transition toward sustainability. The method combines standardized quantitative measures of variables that influence transitions with contextual qualitative information about a utility's unique decision making context to produce structured, data-driven narratives. Data-narratives document the broader context, the utility's pretransition history, key events during an accelerated period of change, and the consequences of transition. Eventually, these narratives should be compared across cases to develop empirically-testable hypotheses about the drivers of and barriers to utility-level urban water management transition. The methodology is illustrated through the case of the Miami-Dade Water and Sewer Department (WASD) in Miami-Dade County, Florida, and its transition toward more sustainable water management in the 2000s, during which per capita water use declined, conservation measures were enacted, water rates increased, and climate adaptive planning became the new norm.
Quantitative and empirical demonstration of the Matthew effect in a study of career longevity
Petersen, Alexander M.; Jung, Woo-Sung; Yang, Jae-Suk; Stanley, H. Eugene
2011-01-01
The Matthew effect refers to the adage written some two-thousand years ago in the Gospel of St. Matthew: “For to all those who have, more will be given.” Even two millennia later, this idiom is used by sociologists to qualitatively describe the dynamics of individual progress and the interplay between status and reward. Quantitative studies of professional careers are traditionally limited by the difficulty in measuring progress and the lack of data on individual careers. However, in some professions, there are well-defined metrics that quantify career longevity, success, and prowess, which together contribute to the overall success rating for an individual employee. Here we demonstrate testable evidence of the age-old Matthew “rich get richer” effect, wherein the longevity and past success of an individual lead to a cumulative advantage in further developing his or her career. We develop an exactly solvable stochastic career progress model that quantitatively incorporates the Matthew effect and validate our model predictions for several competitive professions. We test our model on the careers of 400,000 scientists using data from six high-impact journals and further confirm our findings by testing the model on the careers of more than 20,000 athletes in four sports leagues. Our model highlights the importance of early career development, showing that many careers are stunted by the relative disadvantage associated with inexperience. PMID:21173276
Castellano, Sergio; Cermelli, Paolo
2011-04-07
Mate choice depends on mating preferences and on the manner in which mate-quality information is acquired and used to make decisions. We present a model that describes how these two components of mating decision interact with each other during a comparative evaluation of prospective mates. The model, with its well-explored precedents in psychology and neurophysiology, assumes that decisions are made by the integration over time of noisy information until a stopping-rule criterion is reached. Due to this informational approach, the model builds a coherent theoretical framework for developing an integrated view of functions and mechanisms of mating decisions. From a functional point of view, the model allows us to investigate speed-accuracy tradeoffs in mating decision at both population and individual levels. It shows that, under strong time constraints, decision makers are expected to make fast and frugal decisions and to optimally trade off population-sampling accuracy (i.e. the number of sampled males) against individual-assessment accuracy (i.e. the time spent for evaluating each mate). From the proximate-mechanism point of view, the model makes testable predictions on the interactions of mating preferences and choosiness in different contexts and it might be of compelling empirical utility for a context-independent description of mating preference strength. Copyright © 2011 Elsevier Ltd. All rights reserved.
Crises and Collective Socio-Economic Phenomena: Simple Models and Challenges
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe
2013-05-01
Financial and economic history is strewn with bubbles and crashes, booms and busts, crises and upheavals of all sorts. Understanding the origin of these events is arguably one of the most important problems in economic theory. In this paper, we review recent efforts to include heterogeneities and interactions in models of decision. We argue that the so-called Random Field Ising model ( rfim) provides a unifying framework to account for many collective socio-economic phenomena that lead to sudden ruptures and crises. We discuss different models that can capture potentially destabilizing self-referential feedback loops, induced either by herding, i.e. reference to peers, or trending, i.e. reference to the past, and that account for some of the phenomenology missing in the standard models. We discuss some empirically testable predictions of these models, for example robust signatures of rfim-like herding effects, or the logarithmic decay of spatial correlations of voting patterns. One of the most striking result, inspired by statistical physics methods, is that Adam Smith's invisible hand can fail badly at solving simple coordination problems. We also insist on the issue of time-scales, that can be extremely long in some cases, and prevent socially optimal equilibria from being reached. As a theoretical challenge, the study of so-called "detailed-balance" violating decision rules is needed to decide whether conclusions based on current models (that all assume detailed-balance) are indeed robust and generic.
The use of models to predict potential contamination aboard orbital vehicles
NASA Technical Reports Server (NTRS)
Boraas, Martin E.; Seale, Dianne B.
1989-01-01
A model of fungal growth on air-exposed, nonnutritive solid surfaces, developed for utilization aboard orbital vehicles is presented. A unique feature of this testable model is that the development of a fungal mycelium can facilitate its own growth by condensation of water vapor from its environment directly onto fungal hyphae. The fungal growth rate is limited by the rate of supply of volatile nutrients and fungal biomass is limited by either the supply of nonvolatile nutrients or by metabolic loss processes. The model discussed is structurally simple, but its dynamics can be quite complex. Biofilm accumulation can vary from a simple linear increase to sustained exponential growth, depending on the values of the environmental variable and model parameters. The results of the model are consistent with data from aquatic biofilm studies, insofar as the two types of systems are comparable. It is shown that the model presented is experimentally testable and provides a platform for the interpretation of observational data that may be directly relevant to the question of growth of organisms aboard the proposed Space Station.
Module generation for self-testing integrated systems
NASA Astrophysics Data System (ADS)
Vanriessen, Ronald Pieter
Hardware used for self test in VLSI (Very Large Scale Integrated) systems is reviewed, and an architecture to control the test hardware in an integrated system is presented. Because of the increase of test times, the use of self test techniques has become practically and economically viable for VLSI systems. Beside the reduction in test times and costs, self test also provides testing at operational speeds. Therefore, a suitable combination of scan path and macrospecific (self) tests is required to reduce test times and costs. An expert system that can be used in a silicon compilation environment is presented. The approach requires a minimum of testability knowledge from a system designer. A user friendly interface was described for specifying and modifying testability requirements by a testability expert. A reason directed backtracking mechanism is used to solve selection failures. Both the hierarchical testable architecture and the design for testability expert system are used in a self test compiler. The definition of a self test compiler was given. A self test compiler is a software tool that selects an appropriate test method for every macro in a design. The hardware to control a macro test will be included in the design automatically. As an example, the integration of the self-test compiler in a silicon compilation system PIRAMID was described. The design of a demonstrator circuit by self test compiler is described. This circuit consists of two self testable macros. Control of the self test hardware is carried out via the test access port of the boundary scan standard.
Refinement of Representation Theorems for Context-Free Languages
NASA Astrophysics Data System (ADS)
Fujioka, Kaoru
In this paper, we obtain some refinement of representation theorems for context-free languages by using Dyck languages, insertion systems, strictly locally testable languages, and morphisms. For instance, we improved the Chomsky-Schützenberger representation theorem and show that each context-free language L can be represented in the form L = h (D ∩ R), where D is a Dyck language, R is a strictly 3-testable language, and h is a morphism. A similar representation for context-free languages can be obtained, using insertion systems of weight (3, 0) and strictly 4-testable languages.
Brain Organization and Psychodynamics
Peled, Avi; Geva, Amir B.
1999-01-01
Any attempt to link brain neural activity and psychodynamic concepts requires a tremendous conceptual leap. Such a leap may be facilitated if a common language between brain and mind can be devised. System theory proposes formulations that may aid in reconceptualizing psychodynamic descriptions in terms of neural organizations in the brain. Once adopted, these formulations can help to generate testable predictions about brain–psychodynamic relations and thus significantly affect the future of psychotherapy. (The Journal of Psychotherapy Practice and Research 1999; 8:24–39) PMID:9888105
Causal Reasoning on Biological Networks: Interpreting Transcriptional Changes
NASA Astrophysics Data System (ADS)
Chindelevitch, Leonid; Ziemek, Daniel; Enayetallah, Ahmed; Randhawa, Ranjit; Sidders, Ben; Brockel, Christoph; Huang, Enoch
Over the past decade gene expression data sets have been generated at an increasing pace. In addition to ever increasing data generation, the biomedical literature is growing exponentially. The PubMed database (Sayers et al., 2010) comprises more than 20 million citations as of October 2010. The goal of our method is the prediction of putative upstream regulators of observed expression changes based on a set of over 400,000 causal relationships. The resulting putative regulators constitute directly testable hypotheses for follow-up.
Scaling properties of multitension domain wall networks
NASA Astrophysics Data System (ADS)
Oliveira, M. F.; Martins, C. J. A. P.
2015-02-01
We study the asymptotic scaling properties of domain wall networks with three different tensions in various cosmological epochs. We discuss the conditions under which a scale-invariant evolution of the network (which is well established for simpler walls) still applies and also consider the limiting case where defects are locally planar and the curvature is concentrated in the junctions. We present detailed quantitative predictions for scaling densities in various contexts, which should be testable by means of future high-resolution numerical simulations.
Beyond Critical Exponents in Neuronal Avalanches
NASA Astrophysics Data System (ADS)
Friedman, Nir; Butler, Tom; Deville, Robert; Beggs, John; Dahmen, Karin
2011-03-01
Neurons form a complex network in the brain, where they interact with one another by firing electrical signals. Neurons firing can trigger other neurons to fire, potentially causing avalanches of activity in the network. In many cases these avalanches have been found to be scale independent, similar to critical phenomena in diverse systems such as magnets and earthquakes. We discuss models for neuronal activity that allow for the extraction of testable, statistical predictions. We compare these models to experimental results, and go beyond critical exponents.
Testability analysis on a hydraulic system in a certain equipment based on simulation model
NASA Astrophysics Data System (ADS)
Zhang, Rui; Cong, Hua; Liu, Yuanhong; Feng, Fuzhou
2018-03-01
Aiming at the problem that the complicated structure and the shortage of fault statistics information in hydraulic systems, a multi value testability analysis method based on simulation model is proposed. Based on the simulation model of AMESim, this method injects the simulated faults and records variation of test parameters ,such as pressure, flow rate, at each test point compared with those under normal conditions .Thus a multi-value fault-test dependency matrix is established. Then the fault detection rate (FDR) and fault isolation rate (FIR) are calculated based on the dependency matrix. Finally the system of testability and fault diagnosis capability are analyzed and evaluated, which can only reach a lower 54%(FDR) and 23%(FIR). In order to improve testability performance of the system,. number and position of the test points are optimized on the system. Results show the proposed test placement scheme can be used to solve the problems that difficulty, inefficiency and high cost in the system maintenance.
Guastello, Stephen J
2009-07-01
The landmarks in the use of chaos and related constructs in psychology were entwined with the growing use of other nonlinear dynamical constructs, especially catastrophes and self-organization. The growth in substantive applications of chaos in psychology is partially related to the development of methodologies that work within the constraints of psychological data. The psychological literature includes rigorous theory with testable propositions, lighter-weight metaphorical uses of the construct, and colloquial uses of "chaos" with no particular theoretical intent. The current state of the chaos construct and supporting empirical research in psychological theory is summarized in neuroscience, psychophysics, psychomotor skill and other learning phenomena, clinical and abnormal psychology, and group dynamics and organizational behavior. Trends indicate that human systems do not remain chaotic indefinitely; they eventually self-organize, and the concept of the complex adaptive system has become prominent. Chaotic turbulence is generally higher in healthy systems compared to unhealthy systems, although opposite appears true in mood disorders. Group dynamics research shows trends consistent with the complex adaptive system, whereas organizational behavior lags behind in empirical studies relative to the quantity of its theory. Future directions for research involving the chaos construct and other nonlinear dynamics are outlined.
Implementation of Instrumental Variable Bounds for Data Missing Not at Random.
Marden, Jessica R; Wang, Linbo; Tchetgen, Eric J Tchetgen; Walter, Stefan; Glymour, M Maria; Wirth, Kathleen E
2018-05-01
Instrumental variables are routinely used to recover a consistent estimator of an exposure causal effect in the presence of unmeasured confounding. Instrumental variable approaches to account for nonignorable missing data also exist but are less familiar to epidemiologists. Like instrumental variables for exposure causal effects, instrumental variables for missing data rely on exclusion restriction and instrumental variable relevance assumptions. Yet these two conditions alone are insufficient for point identification. For estimation, researchers have invoked a third assumption, typically involving fairly restrictive parametric constraints. Inferences can be sensitive to these parametric assumptions, which are typically not empirically testable. The purpose of our article is to discuss another approach for leveraging a valid instrumental variable. Although the approach is insufficient for nonparametric identification, it can nonetheless provide informative inferences about the presence, direction, and magnitude of selection bias, without invoking a third untestable parametric assumption. An important contribution of this article is an Excel spreadsheet tool that can be used to obtain empirical evidence of selection bias and calculate bounds and corresponding Bayesian 95% credible intervals for a nonidentifiable population proportion. For illustrative purposes, we used the spreadsheet tool to analyze HIV prevalence data collected by the 2007 Zambia Demographic and Health Survey (DHS).
Language, music, syntax and the brain.
Patel, Aniruddh D
2003-07-01
The comparative study of music and language is drawing an increasing amount of research interest. Like language, music is a human universal involving perceptually discrete elements organized into hierarchically structured sequences. Music and language can thus serve as foils for each other in the study of brain mechanisms underlying complex sound processing, and comparative research can provide novel insights into the functional and neural architecture of both domains. This review focuses on syntax, using recent neuroimaging data and cognitive theory to propose a specific point of convergence between syntactic processing in language and music. This leads to testable predictions, including the prediction that that syntactic comprehension problems in Broca's aphasia are not selective to language but influence music perception as well.
Family nonuniversal Z' models with protected flavor-changing interactions
NASA Astrophysics Data System (ADS)
Celis, Alejandro; Fuentes-Martín, Javier; Jung, Martin; Serôdio, Hugo
2015-07-01
We define a new class of Z' models with neutral flavor-changing interactions at tree level in the down-quark sector. They are related in an exact way to elements of the quark mixing matrix due to an underlying flavored U(1)' gauge symmetry, rendering these models particularly predictive. The same symmetry implies lepton-flavor nonuniversal couplings, fully determined by the gauge structure of the model. Our models allow us to address presently observed deviations from the standard model and specific correlations among the new physics contributions to the Wilson coefficients C9,10' ℓ can be tested in b →s ℓ+ℓ- transitions. We furthermore predict lepton-universality violations in Z' decays, testable at the LHC.
Mäs, Michael; Flache, Andreas
2013-01-01
Explanations of opinion bi-polarization hinge on the assumption of negative influence, individuals' striving to amplify differences to disliked others. However, empirical evidence for negative influence is inconclusive, which motivated us to search for an alternative explanation. Here, we demonstrate that bi-polarization can be explained without negative influence, drawing on theories that emphasize the communication of arguments as central mechanism of influence. Due to homophily, actors interact mainly with others whose arguments will intensify existing tendencies for or against the issue at stake. We develop an agent-based model of this theory and compare its implications to those of existing social-influence models, deriving testable hypotheses about the conditions of bi-polarization. Hypotheses were tested with a group-discussion experiment (N = 96). Results demonstrate that argument exchange can entail bi-polarization even when there is no negative influence.
The diffusion decision model: theory and data for two-choice decision tasks.
Ratcliff, Roger; McKoon, Gail
2008-04-01
The diffusion decision model allows detailed explanations of behavior in two-choice discrimination tasks. In this article, the model is reviewed to show how it translates behavioral data-accuracy, mean response times, and response time distributions-into components of cognitive processing. Three experiments are used to illustrate experimental manipulations of three components: stimulus difficulty affects the quality of information on which a decision is based; instructions emphasizing either speed or accuracy affect the criterial amounts of information that a subject requires before initiating a response; and the relative proportions of the two stimuli affect biases in drift rate and starting point. The experiments also illustrate the strong constraints that ensure the model is empirically testable and potentially falsifiable. The broad range of applications of the model is also reviewed, including research in the domains of aging and neurophysiology.
Muscle MRI findings in facioscapulohumeral muscular dystrophy.
Gerevini, Simonetta; Scarlato, Marina; Maggi, Lorenzo; Cava, Mariangela; Caliendo, Giandomenico; Pasanisi, Barbara; Falini, Andrea; Previtali, Stefano Carlo; Morandi, Lucia
2016-03-01
Facioscapulohumeral muscular dystrophy (FSHD) is characterized by extremely variable degrees of facial, scapular and lower limb muscle involvement. Clinical and genetic determination can be difficult, as molecular analysis is not always definitive, and other similar muscle disorders may have overlapping clinical manifestations. Whole-body muscle MRI examination for fat infiltration, atrophy and oedema was performed to identify specific patterns of muscle involvement in FSHD patients (30 subjects), and compared to a group of control patients (23) affected by other myopathies (NFSHD). In FSHD patients, we detected a specific pattern of muscle fatty replacement and atrophy, particularly in upper girdle muscles. The most frequently affected muscles, including paucisymptomatic and severely affected FSHD patients, were trapezius, teres major and serratus anterior. Moreover, asymmetric muscle involvement was significantly higher in FSHD as compared to NFSHD patients. In conclusion, muscle MRI is very sensitive for identifying a specific pattern of involvement in FSHD patients and in detecting selective muscle involvement of non-clinically testable muscles. Muscle MRI constitutes a reliable tool for differentiating FSHD from other muscular dystrophies to direct diagnostic molecular analysis, as well as to investigate FSHD natural history and follow-up of the disease. Muscle MRI identifies a specific pattern of muscle involvement in FSHD patients. Muscle MRI may predict FSHD in asymptomatic and severely affected patients. Muscle MRI of upper girdle better predicts FSHD. Muscle MRI may differentiate FSHD from other forms of muscular dystrophy. Muscle MRI may show the involvement of non-clinical testable muscles.
Reheating predictions in gravity theories with derivative coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalianis, Ioannis; Koutsoumbas, George; Ntrekis, Konstantinos
2017-02-01
We investigate the inflationary predictions of a simple Horndeski theory where the inflaton scalar field has a non-minimal derivative coupling (NMDC) to the Einstein tensor. The NMDC is very motivated for the construction of successful models for inflation, nevertheless its inflationary predictions are not observationally distinct. We show that it is possible to probe the effects of the NMDC on the CMB observables by taking into account both the dynamics of the inflationary slow-roll phase and the subsequent reheating. We perform a comparative study between representative inflationary models with canonical fields minimally coupled to gravity and models with NMDC. Wemore » find that the inflation models with dominant NMDC generically predict a higher reheating temperature and a different range for the tilt of the scalar perturbation spectrum n {sub s} and scalar-to-tensor ratio r , potentially testable by current and future CMB experiments.« less
Phenemenological vs. biophysical models of thermal stress in aquatic eggs
NASA Astrophysics Data System (ADS)
Martin, B.
2016-12-01
Predicting species responses to climate change is a central challenge in ecology, with most efforts relying on lab derived phenomenological relationships between temperature and fitness metrics. We tested one of these models using the embryonic stage of a Chinook salmon population. We parameterized the model with laboratory data, applied it to predict survival in the field, and found that it significantly underestimated field-derived estimates of thermal mortality. We used a biophysical model based on mass-transfer theory to show that the discrepancy was due to the differences in water flow velocities between the lab and the field. This mechanistic approach provides testable predictions for how the thermal tolerance of embryos depends on egg size and flow velocity of the surrounding water. We found support for these predictions across more than 180 fish species, suggesting that flow and temperature mediated oxygen limitation is a general mechanism underlying the thermal tolerance of embryos.
Generating Testable Questions in the Science Classroom: The BDC Model
ERIC Educational Resources Information Center
Tseng, ChingMei; Chen, Shu-Bi Shu-Bi; Chang, Wen-Hua
2015-01-01
Guiding students to generate testable scientific questions is essential in the inquiry classroom, but it is not easy. The purpose of the BDC ("Big Idea, Divergent Thinking, and Convergent Thinking") instructional model is to to scaffold students' inquiry learning. We illustrate the use of this model with an example lesson, designed…
Easily Testable PLA-Based Finite State Machines
1989-03-01
PLATYPUS (20]. Then, justifi- type 1, 4 and 5 can be guaranteed to be testable via cation paths are obtained from the STG using simple logic...next state lines is found, if such a vector par that is gnrt d y the trupt eexists, using PLATYPUS [20]. pair that is generated by the first corrupted
1983-11-01
compound operations, with status. (h) Pre-programmed CRC and double-precision multiply/divide algo- rithms. (i) Double length accumulator with full...IH1.25 _ - MICROCOP ’ RESOLUTION TEST CHART NATIONAL BUREAU OF STANDARDS-1963-A .4 ’* • • . - . .. •. . . . . . . . . . . . . . • - -. .• ,. o. . . .- "o
Eye Examination Testability in Children with Autism and in Typical Peers
Coulter, Rachel Anastasia; Bade, Annette; Tea, Yin; Fecho, Gregory; Amster, Deborah; Jenewein, Erin; Rodena, Jacqueline; Lyons, Kara Kelley; Mitchell, G. Lynn; Quint, Nicole; Dunbar, Sandra; Ricamato, Michele; Trocchio, Jennie; Kabat, Bonnie; Garcia, Chantel; Radik, Irina
2015-01-01
ABSTRACT Purpose To compare testability of vision and eye tests in an examination protocol of 9- to 17-year-old patients with autism spectrum disorder (ASD) to typically developing (TD) peers. Methods In a prospective pilot study, 61 children and adolescents (34 with ASD and 27 who were TD) aged 9 to 17 years completed an eye examination protocol including tests of visual acuity, refraction, convergence (eye teaming), stereoacuity (depth perception), ocular motility, and ocular health. Patients who required new refractive correction were retested after wearing their updated spectacle prescription for 1 month. The specialized protocol incorporated visual, sensory, and communication supports. A psychologist determined group status/eligibility using DSM-IV-TR (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision) criteria by review of previous evaluations and parent responses on the Social Communication Questionnaire. Before the examination, parents provided information regarding patients’ sex, race, ethnicity, and, for ASD patients, verbal communication level (nonverbal, uses short words, verbal). Parents indicated whether the patient wore a refractive correction, whether the patient had ever had an eye examination, and the age at the last examination. Chi-square tests compared testability results for TD and ASD groups. Results Typically developing and ASD groups did not differ by age (p = 0.54), sex (p = 0.53), or ethnicity (p = 0.22). Testability was high on most tests (TD, 100%; ASD, 88 to 100%), except for intraocular pressure (IOP), which was reduced for both the ASD (71%) and the TD (89%) patients. Among ASD patients, IOP testability varied greatly with verbal communication level (p < 0.001). Although IOP measurements were completed on all verbal patients, only 37.5% of nonverbal and 44.4% of ASD patients who used short words were successful. Conclusions Patients with ASD can complete most vision and eye tests within an examination protocol. Testability of IOPs is reduced, particularly for nonverbal patients and patients who use short words to communicate. PMID:25415280
The proper treatment of language acquisition and change in a population setting.
Niyogi, Partha; Berwick, Robert C
2009-06-23
Language acquisition maps linguistic experience, primary linguistic data (PLD), onto linguistic knowledge, a grammar. Classically, computational models of language acquisition assume a single target grammar and one PLD source, the central question being whether the target grammar can be acquired from the PLD. However, real-world learners confront populations with variation, i.e., multiple target grammars and PLDs. Removing this idealization has inspired a new class of population-based language acquisition models. This paper contrasts 2 such models. In the first, iterated learning (IL), each learner receives PLD from one target grammar but different learners can have different targets. In the second, social learning (SL), each learner receives PLD from possibly multiple targets, e.g., from 2 parents. We demonstrate that these 2 models have radically different evolutionary consequences. The IL model is dynamically deficient in 2 key respects. First, the IL model admits only linear dynamics and so cannot describe phase transitions, attested rapid changes in languages over time. Second, the IL model cannot properly describe the stability of languages over time. In contrast, the SL model leads to nonlinear dynamics, bifurcations, and possibly multiple equilibria and so suffices to model both the case of stable language populations, mixtures of more than 1 language, as well as rapid language change. The 2 models also make distinct, empirically testable predictions about language change. Using historical data, we show that the SL model more faithfully replicates the dynamics of the evolution of Middle English.
Towards a theory of individual differences in statistical learning
Bogaerts, Louisa; Christiansen, Morten H.; Frost, Ram
2017-01-01
In recent years, statistical learning (SL) research has seen a growing interest in tracking individual performance in SL tasks, mainly as a predictor of linguistic abilities. We review studies from this line of research and outline three presuppositions underlying the experimental approach they employ: (i) that SL is a unified theoretical construct; (ii) that current SL tasks are interchangeable, and equally valid for assessing SL ability; and (iii) that performance in the standard forced-choice test in the task is a good proxy of SL ability. We argue that these three critical presuppositions are subject to a number of theoretical and empirical issues. First, SL shows patterns of modality- and informational-specificity, suggesting that SL cannot be treated as a unified construct. Second, different SL tasks may tap into separate sub-components of SL that are not necessarily interchangeable. Third, the commonly used forced-choice tests in most SL tasks are subject to inherent limitations and confounds. As a first step, we offer a methodological approach that explicitly spells out a potential set of different SL dimensions, allowing for better transparency in choosing a specific SL task as a predictor of a given linguistic outcome. We then offer possible methodological solutions for better tracking and measuring SL ability. Taken together, these discussions provide a novel theoretical and methodological approach for assessing individual differences in SL, with clear testable predictions. This article is part of the themed issue ‘New frontiers for statistical learning in the cognitive sciences’. PMID:27872377
Julius Edgar Lilienfeld Prize Lecture: The Higgs Boson, String Theory, and the Real World
NASA Astrophysics Data System (ADS)
Kane, Gordon
2012-03-01
In this talk I'll describe how string theory is exciting because it can address most, perhaps all, of the questions we hope to understand about our world: why quarks and leptons make up our world, what forces form our world, cosmology, parity violation, and much more. I'll explain why string theory is testable in basically the same ways as the rest of physics, and why much of what is written about that is misleading. String theory is already or soon being tested in several ways, including correctly predicting the recently observed Higgs boson properties and mass, and predictions for dark matter, LHC physics, cosmological history, and more, from work in the increasingly active subfield ``string phenomenology.''
A conceptual model for generating and validating in-session clinical judgments
Jacinto, Sofia B.; Lewis, Cara C.; Braga, João N.; Scott, Kelli
2016-01-01
Objective Little attention has been paid to the nuanced and complex decisions made in the clinical session context and how these decisions influence therapy effectiveness. Despite decades of research on the dual-processing systems, it remains unclear when and how intuitive and analytical reasoning influence the direction of the clinical session. Method This paper puts forth a testable conceptual model, guided by an interdisciplinary integration of the literature, that posits that the clinical session context moderates the use of intuitive versus analytical reasoning. Results A synthesis of studies examining professional best practices in clinical decision-making, empirical evidence from clinical judgment research, and the application of decision science theories indicate that intuitive and analytical reasoning may have profoundly different impacts on clinical practice and outcomes. Conclusions The proposed model is discussed with respect to its implications for clinical practice and future research. PMID:27088962
The Psychology of Working Theory.
Duffy, Ryan D; Blustein, David L; Diemer, Matthew A; Autin, Kelsey L
2016-03-01
In the current article, we build on research from vocational psychology, multicultural psychology, intersectionality, and the sociology of work to construct an empirically testable Psychology of Working Theory (PWT). Our central aim is to explain the work experiences of all individuals, but particularly people near or in poverty, people who face discrimination and marginalization in their lives, and people facing challenging work-based transitions for which contextual factors are often the primary drivers of the ability to secure decent work. The concept of decent work is defined and positioned as the central variable within the theory. A series of propositions is offered concerning (a) contextual predictors of securing decent work, (b) psychological and economic mediators and moderators of these relations, and (c) outcomes of securing decent work. Recommendations are suggested for researchers seeking to use the theory and practical implications are offered concerning counseling, advocacy, and public policy. (c) 2016 APA, all rights reserved).
Wolves in sheep's clothing: Is non-profit status used to signal quality?
Jones, Daniel B; Propper, Carol; Smith, Sarah
2017-09-01
Why do many firms in the healthcare sector adopt non-profit status? One argument is that non-profit status serves as a signal of quality when consumers are not well informed. A testable implication is that an increase in consumer information may lead to a reduction in the number of non-profits in a market. We test this idea empirically by exploiting an exogenous increase in consumer information in the US nursing home industry. We find that the information shock led to a reduction in the share of non-profit homes, driven by a combination of home closure and sector switching. The lowest quality non-profits were the most likely to exit. Our results have important implications for the effects of reforms to increase consumer provision in a number of public services. Copyright © 2017. Published by Elsevier B.V.
Mäs, Michael; Flache, Andreas
2013-01-01
Explanations of opinion bi-polarization hinge on the assumption of negative influence, individuals’ striving to amplify differences to disliked others. However, empirical evidence for negative influence is inconclusive, which motivated us to search for an alternative explanation. Here, we demonstrate that bi-polarization can be explained without negative influence, drawing on theories that emphasize the communication of arguments as central mechanism of influence. Due to homophily, actors interact mainly with others whose arguments will intensify existing tendencies for or against the issue at stake. We develop an agent-based model of this theory and compare its implications to those of existing social-influence models, deriving testable hypotheses about the conditions of bi-polarization. Hypotheses were tested with a group-discussion experiment (N = 96). Results demonstrate that argument exchange can entail bi-polarization even when there is no negative influence. PMID:24312164
A discrete control model of PLANT
NASA Technical Reports Server (NTRS)
Mitchell, C. M.
1985-01-01
A model of the PLANT system using the discrete control modeling techniques developed by Miller is described. Discrete control models attempt to represent in a mathematical form how a human operator might decompose a complex system into simpler parts and how the control actions and system configuration are coordinated so that acceptable overall system performance is achieved. Basic questions include knowledge representation, information flow, and decision making in complex systems. The structure of the model is a general hierarchical/heterarchical scheme which structurally accounts for coordination and dynamic focus of attention. Mathematically, the discrete control model is defined in terms of a network of finite state systems. Specifically, the discrete control model accounts for how specific control actions are selected from information about the controlled system, the environment, and the context of the situation. The objective is to provide a plausible and empirically testable accounting and, if possible, explanation of control behavior.
A Framework for Evidence-Based Licensure of Adaptive Autonomous Systems
2016-03-01
insights gleaned to DoD. The autonomy community has identified significant challenges associated with test, evaluation verification and validation of...licensure as a test, evaluation, verification , and validation (TEVV) framework that can address these challenges. IDA found that traditional...language requirements to testable (preferably machine testable) specifications • Design of architectures that treat development and verification of
Testable solution of the cosmological constant and coincidence problems
NASA Astrophysics Data System (ADS)
Shaw, Douglas J.; Barrow, John D.
2011-02-01
We present a new solution to the cosmological constant (CC) and coincidence problems in which the observed value of the CC, Λ, is linked to other observable properties of the Universe. This is achieved by promoting the CC from a parameter that must be specified, to a field that can take many possible values. The observed value of Λ≈(9.3Gyrs)-2 [≈10-120 in Planck units] is determined by a new constraint equation which follows from the application of a causally restricted variation principle. When applied to our visible Universe, the model makes a testable prediction for the dimensionless spatial curvature of Ωk0=-0.0056(ζb/0.5), where ζb˜1/2 is a QCD parameter. Requiring that a classical history exist, our model determines the probability of observing a given Λ. The observed CC value, which we successfully predict, is typical within our model even before the effects of anthropic selection are included. When anthropic selection effects are accounted for, we find that the observed coincidence between tΛ=Λ-1/2 and the age of the Universe, tU, is a typical occurrence in our model. In contrast to multiverse explanations of the CC problems, our solution is independent of the choice of a prior weighting of different Λ values and does not rely on anthropic selection effects. Our model includes no unnatural small parameters and does not require the introduction of new dynamical scalar fields or modifications to general relativity, and it can be tested by astronomical observations in the near future.
A parsimonious modular approach to building a mechanistic belowground carbon and nitrogen model
NASA Astrophysics Data System (ADS)
Abramoff, Rose Z.; Davidson, Eric A.; Finzi, Adrien C.
2017-09-01
Soil decomposition models range from simple empirical functions to those that represent physical, chemical, and biological processes. Here we develop a parsimonious, modular C and N cycle model, the Dual Arrhenius Michaelis-Menten-Microbial Carbon and Nitrogen Phyisology (DAMM-MCNiP), that generates testable hypotheses regarding the effect of temperature, moisture, and substrate supply on C and N cycling. We compared this model to DAMM alone and an empirical model of heterotrophic respiration based on Harvard Forest data. We show that while different model structures explain similar amounts of variation in respiration, they differ in their ability to infer processes that affect C flux. We applied DAMM-MCNiP to explain an observed seasonal hysteresis in the relationship between respiration and temperature and show using an exudation simulation that the strength of the priming effect depended on the stoichiometry of the inputs. Low C:N inputs stimulated priming of soil organic matter decomposition, but high C:N inputs were preferentially utilized by microbes as a C source with limited priming. The simplicity of DAMM-MCNiP's simultaneous representations of temperature, moisture, substrate supply, enzyme activity, and microbial growth processes is unique among microbial physiology models and is sufficiently parsimonious that it could be incorporated into larger-scale models of C and N cycling.
Benchmarking test of empirical root water uptake models
NASA Astrophysics Data System (ADS)
dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman
2017-01-01
Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation
. For medium and high R, the Feddes RWU model cannot mimic properly the root uptake dynamics as predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios. For high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model
. The proposed models are better in predicting RWU patterns similar to the physical model. The statistical indices point to them as the best alternatives for mimicking RWU predictions of the physical model.
Hypothesis testing and earthquake prediction.
Jackson, D D
1996-04-30
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earth-quakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
Hypothesis testing and earthquake prediction.
Jackson, D D
1996-01-01
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earth-quakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions. PMID:11607663
Artificial Intelligence Applications to Testability.
1984-10-01
general software assistant; examining testability utilization of it should wait a few years until the software assistant is a well defined product ...ago. It provides a single host which satisfies the needs of developers, product developers, and end users . As shown in table 5.10-2, it also provides...follows a trend towards more user -oriented design approaches to interactive computer systems. The implicit goal in this trend is the
The need for theory to guide concussion research.
Molfese, Dennis L
2015-01-01
Although research into concussion has greatly expanded over the past decade, progress in identifying the mechanisms and consequences of head injury and recovery are largely absent. Instead, data are accumulated without the guidance of a systematic theory to direct research questions or generate testable hypotheses. As part of this special issue on sports concussion, I advance a theory that emphasizes changes in spatial and temporal distributions of the brain's neural networks during normal learning and the disruptions of these networks following injury. Specific predictions are made regarding both the development of the network as well as its breakdown following injury.
Predicting rates of interspecific interaction from phylogenetic trees.
Nuismer, Scott L; Harmon, Luke J
2015-01-01
Integrating phylogenetic information can potentially improve our ability to explain species' traits, patterns of community assembly, the network structure of communities, and ecosystem function. In this study, we use mathematical models to explore the ecological and evolutionary factors that modulate the explanatory power of phylogenetic information for communities of species that interact within a single trophic level. We find that phylogenetic relationships among species can influence trait evolution and rates of interaction among species, but only under particular models of species interaction. For example, when interactions within communities are mediated by a mechanism of phenotype matching, phylogenetic trees make specific predictions about trait evolution and rates of interaction. In contrast, if interactions within a community depend on a mechanism of phenotype differences, phylogenetic information has little, if any, predictive power for trait evolution and interaction rate. Together, these results make clear and testable predictions for when and how evolutionary history is expected to influence contemporary rates of species interaction. © 2014 John Wiley & Sons Ltd/CNRS.
Simple neural substrate predicts complex rhythmic structure in duetting birds
NASA Astrophysics Data System (ADS)
Amador, Ana; Trevisan, M. A.; Mindlin, G. B.
2005-09-01
Horneros (Furnarius Rufus) are South American birds well known for their oven-looking nests and their ability to sing in couples. Previous work has analyzed the rhythmic organization of the duets, unveiling a mathematical structure behind the songs. In this work we analyze in detail an extended database of duets. The rhythms of the songs are compatible with the dynamics presented by a wide class of dynamical systems: forced excitable systems. Compatible with this nonlinear rule, we build a biologically inspired model for how the neural and the anatomical elements may interact to produce the observed rhythmic patterns. This model allows us to synthesize songs presenting the acoustic and rhythmic features observed in real songs. We also make testable predictions in order to support our hypothesis.
Are there two processes in reasoning? The dimensionality of inductive and deductive inferences.
Stephens, Rachel G; Dunn, John C; Hayes, Brett K
2018-03-01
Single-process accounts of reasoning propose that the same cognitive mechanisms underlie inductive and deductive inferences. In contrast, dual-process accounts propose that these inferences depend upon 2 qualitatively different mechanisms. To distinguish between these accounts, we derived a set of single-process and dual-process models based on an overarching signal detection framework. We then used signed difference analysis to test each model against data from an argument evaluation task, in which induction and deduction judgments are elicited for sets of valid and invalid arguments. Three data sets were analyzed: data from Singmann and Klauer (2011), a database of argument evaluation studies, and the results of an experiment designed to test model predictions. Of the large set of testable models, we found that almost all could be rejected, including all 2-dimensional models. The only testable model able to account for all 3 data sets was a model with 1 dimension of argument strength and independent decision criteria for induction and deduction judgments. We conclude that despite the popularity of dual-process accounts, current results from the argument evaluation task are best explained by a single-process account that incorporates separate decision thresholds for inductive and deductive inferences. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Harris, Jenine K; Erwin, Paul C; Smith, Carson; Brownson, Ross C
2015-01-01
Evidence-based decision making (EBDM) is the process, in local health departments (LHDs) and other settings, of translating the best available scientific evidence into practice. Local health departments are more likely to be successful if they use evidence-based strategies. However, EBDM and use of evidence-based strategies by LHDs are not widespread. Drawing on diffusion of innovations theory, we sought to understand how LHD directors and program managers perceive the relative advantage, compatibility, simplicity, and testability of EBDM. Directors and managers of programs in chronic disease, environmental health, and infectious disease from LHDs nationwide completed a survey including demographic information and questions about diffusion attributes (advantage, compatibility, simplicity, and testability) related to EBDM. Bivariate inferential tests were used to compare responses between directors and managers and to examine associations between participant characteristics and diffusion attributes. Relative advantage and compatibility scores were high for directors and managers, whereas simplicity and testability scores were lower. Although health department directors and managers of programs in chronic disease generally had higher scores than other groups, there were few significant or large differences between directors and managers across the diffusion attributes. Larger jurisdiction population size was associated with higher relative advantage and compatibility scores for both directors and managers. Overall, directors and managers were in strong agreement on the relative advantage of an LHD using EBDM, with directors in stronger agreement than managers. Perceived relative advantage has been demonstrated to be the most important factor in the rate of innovation adoption, suggesting an opportunity for directors to speed EBDM adoption. However, lower average scores across all groups for simplicity and testability may be hindering EBDM adoption. Recommended strategies for increasing perceived EBDM simplicity and testability are provided.
A neuro-computational model of economic decisions.
Rustichini, Aldo; Padoa-Schioppa, Camillo
2015-09-01
Neuronal recordings and lesion studies indicate that key aspects of economic decisions take place in the orbitofrontal cortex (OFC). Previous work identified in this area three groups of neurons encoding the offer value, the chosen value, and the identity of the chosen good. An important and open question is whether and how decisions could emerge from a neural circuit formed by these three populations. Here we adapted a biophysically realistic neural network previously proposed for perceptual decisions (Wang XJ. Neuron 36: 955-968, 2002; Wong KF, Wang XJ. J Neurosci 26: 1314-1328, 2006). The domain of economic decisions is significantly broader than that for which the model was originally designed, yet the model performed remarkably well. The input and output nodes of the network were naturally mapped onto two groups of cells in OFC. Surprisingly, the activity of interneurons in the network closely resembled that of the third group of cells, namely, chosen value cells. The model reproduced several phenomena related to the neuronal origins of choice variability. It also generated testable predictions on the excitatory/inhibitory nature of different neuronal populations and on their connectivity. Some aspects of the empirical data were not reproduced, but simple extensions of the model could overcome these limitations. These results render a biologically credible model for the neuronal mechanisms of economic decisions. They demonstrate that choices could emerge from the activity of cells in the OFC, suggesting that chosen value cells directly participate in the decision process. Importantly, Wang's model provides a platform to investigate the implications of neuroscience results for economic theory. Copyright © 2015 the American Physiological Society.
Testable solution of the cosmological constant and coincidence problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaw, Douglas J.; Barrow, John D.
2011-02-15
We present a new solution to the cosmological constant (CC) and coincidence problems in which the observed value of the CC, {Lambda}, is linked to other observable properties of the Universe. This is achieved by promoting the CC from a parameter that must be specified, to a field that can take many possible values. The observed value of {Lambda}{approx_equal}(9.3 Gyrs){sup -2}[{approx_equal}10{sup -120} in Planck units] is determined by a new constraint equation which follows from the application of a causally restricted variation principle. When applied to our visible Universe, the model makes a testable prediction for the dimensionless spatial curvaturemore » of {Omega}{sub k0}=-0.0056({zeta}{sub b}/0.5), where {zeta}{sub b}{approx}1/2 is a QCD parameter. Requiring that a classical history exist, our model determines the probability of observing a given {Lambda}. The observed CC value, which we successfully predict, is typical within our model even before the effects of anthropic selection are included. When anthropic selection effects are accounted for, we find that the observed coincidence between t{sub {Lambda}={Lambda}}{sup -1/2} and the age of the Universe, t{sub U}, is a typical occurrence in our model. In contrast to multiverse explanations of the CC problems, our solution is independent of the choice of a prior weighting of different {Lambda} values and does not rely on anthropic selection effects. Our model includes no unnatural small parameters and does not require the introduction of new dynamical scalar fields or modifications to general relativity, and it can be tested by astronomical observations in the near future.« less
Framing Effects: Dynamics and Task Domains
Wang
1996-11-01
The author examines the mechanisms and dynamics of framing effects in risky choices across three distinct task domains (i.e., life-death, public property, and personal money). The choice outcomes of the problems presented in each of the three task domains had a binary structure of a sure thing vs a gamble of equal expected value; the outcomes differed in their framing conditions and the expected values, raging from 6000, 600, 60, to 6, numerically. It was hypothesized that subjects would become more risk seeking, if the sure outcome was below their aspiration level (the minimum requirement). As predicted, more subjects preferred the gamble when facing the life-death choice problems than facing the counterpart problems presented in the other two task domains. Subjects' risk preference varied categorically along the group size dimension in the life-death domain but changed more linearly over the expected value dimension in the monetary domain. Framing effects were observed in 7 of 13 pairs of problems, showing a positive frame-risk aversion and negative frame-risk seeking relationship. In addition, two types of framing effects were theoretically defined and empirically identified. A bidirectional framing effect involves a reversal in risk preference, and occurs when a decision maker's risk preference is ambiguous or weak. Four bidirectional effects were observed; in each case a majority of subjects preferred the sure outcome under a positive frame but the gamble under a negative frame. In contrast, a unidirectional framing effect refers to a preference shift due to the framing of choice outcomes: A majority of subjects preferred one choice outcome (either the sure thing or the gamble) under both framing conditions, with positive frame augmented the preference for the sure thing and negative frame augmented the preference for the gamble. These findings revealed some dynamic regularities of framing effects and posed implications for developing predictive and testable models of human decision making.
Schmickl, Thomas; Karsai, Istvan
2014-01-01
We develop a model to produce plausible patterns of task partitioning in the ponerine ant Ectatomma ruidum based on the availability of living prey and prey corpses. The model is based on the organizational capabilities of a “common stomach” through which the colony utilizes the availability of a natural (food) substance as a major communication channel to regulate the income and expenditure of the very same substance. This communication channel has also a central role in regulating task partitioning of collective hunting behavior in a supply&demand-driven manner. Our model shows that task partitioning of the collective hunting behavior in E. ruidum can be explained by regulation due to a common stomach system. The saturation of the common stomach provides accessible information to individual ants so that they can adjust their hunting behavior accordingly by engaging in or by abandoning from stinging or transporting tasks. The common stomach is able to establish and to keep stabilized an effective mix of workforce to exploit the prey population and to transport food into the nest. This system is also able to react to external perturbations in a de-centralized homeostatic way, such as to changes in the prey density or to accumulation of food in the nest. In case of stable conditions the system develops towards an equilibrium concerning colony size and prey density. Our model shows that organization of work through a common stomach system can allow Ectatomma ruidum to collectively forage for food in a robust, reactive and reliable way. The model is compared to previously published models that followed a different modeling approach. Based on our model analysis we also suggest a series of experiments for which our model gives plausible predictions. These predictions are used to formulate a set of testable hypotheses that should be investigated empirically in future experimentation. PMID:25493558
Smart substrates: Making multi-chip modules smarter
NASA Astrophysics Data System (ADS)
Wunsch, T. F.; Treece, R. K.
1995-05-01
A novel multi-chip module (MCM) design and manufacturing methodology which utilizes active CMOS circuits in what is normally a passive substrate realizes the 'smart substrate' for use in highly testable, high reliability MCMS. The active devices are used to test the bare substrate, diagnose assembly errors or integrated circuit (IC) failures that require rework, and improve the testability of the final MCM assembly. A static random access memory (SRAM) MCM has been designed and fabricated in Sandia Microelectronics Development Laboratory in order to demonstrate the technical feasibility of this concept and to examine design and manufacturing issues which will ultimately determine the economic viability of this approach. The smart substrate memory MCM represents a first in MCM packaging. At the time the first modules were fabricated, no other company or MCM vendor had incorporated active devices in the substrate to improve manufacturability and testability, and thereby improve MCM reliability and reduce cost.
Constant-roll (quasi-)linear inflation
NASA Astrophysics Data System (ADS)
Karam, A.; Marzola, L.; Pappas, T.; Racioppi, A.; Tamvakis, K.
2018-05-01
In constant-roll inflation, the scalar field that drives the accelerated expansion of the Universe is rolling down its potential at a constant rate. Within this framework, we highlight the relations between the Hubble slow-roll parameters and the potential ones, studying in detail the case of a single-field Coleman-Weinberg model characterised by a non-minimal coupling of the inflaton to gravity. With respect to the exact constant-roll predictions, we find that assuming an approximate slow-roll behaviour yields a difference of Δ r = 0.001 in the tensor-to-scalar ratio prediction. Such a discrepancy is in principle testable by future satellite missions. As for the scalar spectral index ns, we find that the existing 2-σ bound constrains the value of the non-minimal coupling to ξphi ~ 0.29–0.31 in the model under consideration.
Computational model for living nematic
NASA Astrophysics Data System (ADS)
Genkin, Mikhail; Sokolov, Andrey; Lavrentovich, Oleg; Aranson, Igor
A realization of an active system has been conceived by combining swimming bacteria and a lyotropic nematic liquid crystal. Here, by coupling the well-established and validated model of nematic liquid crystals with the bacterial dynamics we developed a computational model describing intricate properties of such a living nematic. In faithful agreement with the experiment, the model reproduces the onset of periodic undulation of the nematic director and consequent proliferation of topological defects with the increase in bacterial concentration. It yields testable prediction on the accumulation and transport of bacteria in the cores of +1/2 topological defects and depletion of bacteria in the cores of -1/2 defects. Our new experiment on motile bacteria suspended in a free-standing liquid crystalline film fully confirmed this prediction. This effect can be used to capture and manipulation of small amounts of bacteria.
Food web complexity and stability across habitat connectivity gradients.
LeCraw, Robin M; Kratina, Pavel; Srivastava, Diane S
2014-12-01
The effects of habitat connectivity on food webs have been studied both empirically and theoretically, yet the question of whether empirical results support theoretical predictions for any food web metric other than species richness has received little attention. Our synthesis brings together theory and empirical evidence for how habitat connectivity affects both food web stability and complexity. Food web stability is often predicted to be greatest at intermediate levels of connectivity, representing a compromise between the stabilizing effects of dispersal via rescue effects and prey switching, and the destabilizing effects of dispersal via regional synchronization of population dynamics. Empirical studies of food web stability generally support both this pattern and underlying mechanisms. Food chain length has been predicted to have both increasing and unimodal relationships with connectivity as a result of predators being constrained by the patch occupancy of their prey. Although both patterns have been documented empirically, the underlying mechanisms may differ from those predicted by models. In terms of other measures of food web complexity, habitat connectivity has been empirically found to generally increase link density but either reduce or have no effect on connectance, whereas a unimodal relationship is expected. In general, there is growing concordance between empirical patterns and theoretical predictions for some effects of habitat connectivity on food webs, but many predictions remain to be tested over a full connectivity gradient, and empirical metrics of complexity are rarely modeled. Closing these gaps will allow a deeper understanding of how natural and anthropogenic changes in connectivity can affect real food webs.
The social architecture of capitalism
NASA Astrophysics Data System (ADS)
Wright, Ian
2005-02-01
A dynamic model of the social relations between workers and capitalists is introduced. The model self-organises into a dynamic equilibrium with statistical properties that are in close qualitative and in many cases quantitative agreement with a broad range of known empirical distributions of developed capitalism, including the power-law firm size distribution, the Laplace firm and GDP growth distribution, the lognormal firm demises distribution, the exponential recession duration distribution, the lognormal-Pareto income distribution, and the gamma-like firm rate-of-profit distribution. Normally these distributions are studied in isolation, but this model unifies and connects them within a single causal framework. The model also generates business cycle phenomena, including fluctuating wage and profit shares in national income about values consistent with empirical studies. The generation of an approximately lognormal-Pareto income distribution and an exponential-Pareto wealth distribution demonstrates that the power-law regime of the income distribution can be explained by an additive process on a power-law network that models the social relation between employers and employees organised in firms, rather than a multiplicative process that models returns to investment in financial markets. A testable consequence of the model is the conjecture that the rate-of-profit distribution is consistent with a parameter-mix of a ratio of normal variates with means and variances that depend on a firm size parameter that is distributed according to a power-law.
Stoltenberg, Scott F.; Nag, Parthasarathi
2010-01-01
Despite more than a decade of empirical work on the role of genetic polymorphisms in the serotonin system on behavior, the details across levels of analysis are not well understood. We describe a mathematical model of the genetic control of presynaptic serotonergic function that is based on control theory, implemented using systems of differential equations, and focused on better characterizing pathways from genes to behavior. We present the results of model validation tests that include the comparison of simulation outcomes with empirical data on genetic effects on brain response to affective stimuli and on impulsivity. Patterns of simulated neural firing were consistent with recent findings of additive effects of serotonin transporter and tryptophan hydroxylase-2 polymorphisms on brain activation. In addition, simulated levels of cerebral spinal fluid 5-hydroxyindoleacetic acid (CSF 5-HIAA) were negatively correlated with Barratt Impulsiveness Scale (Version 11) Total scores in college students (r = −.22, p = .002, N = 187), which is consistent with the well-established negative correlation between CSF 5-HIAA and impulsivity. The results of the validation tests suggest that the model captures important aspects of the genetic control of presynaptic serotonergic function and behavior via brain activation. The proposed model can be: (1) extended to include other system components, neurotransmitter systems, behaviors and environmental influences; (2) used to generate testable hypotheses. PMID:20111992
Current challenges in fundamental physics
NASA Astrophysics Data System (ADS)
Egana Ugrinovic, Daniel
The discovery of the Higgs boson at the Large Hadron Collider completed the Standard Model of particle physics. The Standard Model is a remarkably successful theory of fundamental physics, but it suffers from severe problems. It does not provide an explanation for the origin or stability of the electroweak scale nor for the origin and structure of flavor and CP violation. It predicts vanishing neutrino masses, in disagreement with experimental observations. It also fails to explain the matter-antimatter asymmetry of the universe, and it does not provide a particle candidate for dark matter. In this thesis we provide experimentally testable solutions for most of these problems and we study their phenomenology.
Mapping the landscape of metabolic goals of a cell
Zhao, Qi; Stettner, Arion I.; Reznik, Ed; ...
2016-05-23
Here, genome-scale flux balance models of metabolism provide testable predictions of all metabolic rates in an organism, by assuming that the cell is optimizing a metabolic goal known as the objective function. We introduce an efficient inverse flux balance analysis (invFBA) approach, based on linear programming duality, to characterize the space of possible objective functions compatible with measured fluxes. After testing our algorithm on simulated E. coli data and time-dependent S. oneidensis fluxes inferred from gene expression, we apply our inverse approach to flux measurements in long-term evolved E. coli strains, revealing objective functions that provide insight into metabolic adaptationmore » trajectories.« less
Advanced Deployable Structural Systems for Small Satellites
NASA Technical Reports Server (NTRS)
Belvin, W. Keith; Straubel, Marco; Wilkie, W. Keats; Zander, Martin E.; Fernandez, Juan M.; Hillebrandt, Martin F.
2016-01-01
One of the key challenges for small satellites is packaging and reliable deployment of structural booms and arrays used for power, communication, and scientific instruments. The lack of reliable and efficient boom and membrane deployment concepts for small satellites is addressed in this work through a collaborative project between NASA and DLR. The paper provides a state of the art overview on existing spacecraft deployable appendages, the special requirements for small satellites, and initial concepts for deployable booms and arrays needed for various small satellite applications. The goal is to enhance deployable boom predictability and ground testability, develop designs that are tolerant of manufacturing imperfections, and incorporate simple and reliable deployment systems.
The Central Role of Tether-Cutting Reconnection in the Production of CMEs
NASA Technical Reports Server (NTRS)
Moore, Ron; Sterling, Alphonse; Suess, Steve
2007-01-01
This viewgraph presentation describes tether-cutting reconnection in the production of Coronal Mass Ejections (CMEs). The topics include: 1) Birth and Release of the CME Plasmoid; 2) Resulting CME in Outer Corona; 3) Governing Role of Surrounding Field; 4) Testable Prediction of the Standard Scenario Magnetic Bubble CME Model; 5) Lateral Pressure in Outer Corona; 6) Measured Angular Widths of 3 CMEs; 7) LASCO Image of each CME at Final Width; 8) Source of the CME of 2002 May 20; 9) Source of the CME of 1999 Feb 9; 10) Source of the CME of 2003 Nov 4; and 11) Test Results.
Field-aligned currents and ion convection at high altitudes
NASA Technical Reports Server (NTRS)
Burch, J. L.; Reiff, P. H.
1985-01-01
Hot plasma observations from Dynamics Explorer 1 have been used to investigate solar-wind ion injection, Birkeland currents, and plasma convection at altitudes above 2 earth-radii in the morning sector. The results of the study, along with the antiparallel merging hypothesis, have been used to construct a By-dependent global convection model. A significant element of the model is the coexistence of three types of convection cells (merging cells, viscous cells, and lobe cells). As the IMF direction varies, the model accounts for the changing roles of viscous and merging processes and makes testable predictions about several magnetospheric phenomena, including the newly-observed theta aurora in the polar cap.
Self-organization in the limb: a Turing mechanism for digit development.
Cooper, Kimberly L
2015-06-01
The statistician George E. P. Box stated, 'Essentially all models are wrong, but some are useful.' (Box GEP, Draper NR: Empirical Model-Building and Response Surfaces. Wiley; 1987). Modeling biological processes is challenging for many of the reasons classically trained developmental biologists often resist the idea that black and white equations can explain the grayscale subtleties of living things. Although a simplified mathematical model of development will undoubtedly fall short of precision, a good model is exceedingly useful if it raises at least as many testable questions as it answers. Self-organizing Turing models that simulate the pattern of digits in the hand replicate events that have not yet been explained by classical approaches. The union of theory and experimentation has recently identified and validated the minimal components of a Turing network for digit pattern and triggered a cascade of questions that will undoubtedly be well-served by the continued merging of disciplines. Copyright © 2015 Elsevier Ltd. All rights reserved.
Greenwood, Pamela M; Blumberg, Eric J; Scheldrup, Melissa R
2018-03-01
A comprehensive explanation is lacking for the broad array of cognitive effects modulated by transcranial direct current stimulation (tDCS). We advanced the testable hypothesis that tDCS to the default mode network (DMN) increases processing of goals and stored information at the expense of external events. We further hypothesized that tDCS to the dorsal attention network (DAN) increases processing of external events at the expense of goals and stored information. A literature search (PsychINFO) identified 42 empirical studies and 3 meta-analyses examining effects of prefrontal and/or parietal tDCS on tasks that selectively required external and/or internal processing. Most, though not all, of the studies that met our search criteria supported our hypothesis. Three meta-analyses supported our hypothesis. The hypothesis we advanced provides a framework for the design and interpretation of results in light of the role of large-scale intrinsic networks that govern attention. Copyright © 2017 Elsevier Ltd. All rights reserved.
The evolution of speech: a comparative review.
Fitch
2000-07-01
The evolution of speech can be studied independently of the evolution of language, with the advantage that most aspects of speech acoustics, physiology and neural control are shared with animals, and thus open to empirical investigation. At least two changes were necessary prerequisites for modern human speech abilities: (1) modification of vocal tract morphology, and (2) development of vocal imitative ability. Despite an extensive literature, attempts to pinpoint the timing of these changes using fossil data have proven inconclusive. However, recent comparative data from nonhuman primates have shed light on the ancestral use of formants (a crucial cue in human speech) to identify individuals and gauge body size. Second, comparative analysis of the diverse vertebrates that have evolved vocal imitation (humans, cetaceans, seals and birds) provides several distinct, testable hypotheses about the adaptive function of vocal mimicry. These developments suggest that, for understanding the evolution of speech, comparative analysis of living species provides a viable alternative to fossil data. However, the neural basis for vocal mimicry and for mimesis in general remains unknown.
Hunter, Lora Rose; Schmidt, Norman B
2010-03-01
In this review, the extant literature concerning anxiety psychopathology in African American adults is summarized to develop a testable, explanatory framework with implications for future research. The model was designed to account for purported lower rates of anxiety disorders in African Americans compared to European Americans, along with other ethnoracial differences reported in the literature. Three specific beliefs or attitudes related to the sociocultural experience of African Americans are identified: awareness of racism, stigma of mental illness, and salience of physical illnesses. In our model, we propose that these psychological processes influence interpretations and behaviors relevant to the expression of nonpathological anxiety as well as features of diagnosable anxiety conditions. Moreover, differences in these processes may explain the differential assessed rates of anxiety disorders in African Americans. The model is discussed in the context of existing models of anxiety etiology. Specific follow-up research is also suggested, along with implications for clinical assessment, diagnosis, and treatment.
Russians in treatment: the evidence base supporting cultural adaptations.
Jurcik, Tomas; Chentsova-Dutton, Yulia E; Solopieieva-Jurcikova, Ielyzaveta; Ryder, Andrew G
2013-07-01
Despite large waves of westward migration, little is known about how to adapt services to assist Russian-speaking immigrants. In an attempt to bridge the scientist-practitioner gap, the current review synthesizes diverse literatures regarding what is known about immigrants from the Former Soviet Union. Relevant empirical studies and reviews from cross-cultural and cultural psychology, sociology, psychiatric epidemiology, mental health, management, linguistics, history, and anthropology literature were synthesized into three broad topics: culture of origin issues, common psychosocial challenges, and clinical recommendations. Russian speakers probably differ in their form of collectivism, gender relations, emotion norms, social support, and parenting styles from what many clinicians are familiar with and exhibit an apparent paradoxical mix of modern and traditional values. While some immigrant groups from the Former Soviet Union are adjusting well, others have shown elevated levels of depression, somatization, and alcoholism, which can inform cultural adaptations. Testable assessment and therapy adaptations for Russians were outlined based on integrating clinical and cultural psychology perspectives. © 2013 Wiley Periodicals, Inc.
Gender and Physics: a Theoretical Analysis
NASA Astrophysics Data System (ADS)
Rolin, Kristina
This article argues that the objections raised by Koertge (1998), Gross and Levitt (1994), and Weinberg (1996) against feminist scholarship on gender and physics are unwarranted. The objections are that feminist science studies perpetuate gender stereotypes, are irrelevant to the content of physics, or promote epistemic relativism. In the first part of this article I argue that the concept of gender, as it has been developed in feminist theory, is a key to understanding why the first objection is misguided. Instead of reinforcing gender stereotypes, feminist science studies scholars can formulate empirically testable hypotheses regarding local and contested beliefs about gender. In the second part of this article I argue that a social analysis of scientific knowledge is a key to understanding why the second and the third objections are misguided. The concept of gender is relevant for understanding the social practice of physics, and the social practice of physics can be of epistemic importance. Instead of advancing epistemic relativism, feminist science studies scholars can make important contributions to a subfield of philosophy called social epistemology.
Prediction and typicality in multiverse cosmology
NASA Astrophysics Data System (ADS)
Azhar, Feraz
2014-02-01
In the absence of a fundamental theory that precisely predicts values for observable parameters, anthropic reasoning attempts to constrain probability distributions over those parameters in order to facilitate the extraction of testable predictions. The utility of this approach has been vigorously debated of late, particularly in light of theories that claim we live in a multiverse, where parameters may take differing values in regions lying outside our observable horizon. Within this cosmological framework, we investigate the efficacy of top-down anthropic reasoning based on the weak anthropic principle. We argue contrary to recent claims that it is not clear one can either dispense with notions of typicality altogether or presume typicality, in comparing resulting probability distributions with observations. We show in a concrete, top-down setting related to dark matter, that assumptions about typicality can dramatically affect predictions, thereby providing a guide to how errors in reasoning regarding typicality translate to errors in the assessment of predictive power. We conjecture that this dependence on typicality is an integral feature of anthropic reasoning in broader cosmological contexts, and argue in favour of the explicit inclusion of measures of typicality in schemes invoking anthropic reasoning, with a view to extracting predictions from multiverse scenarios.
Lee, Joy L; DeCamp, Matthew; Dredze, Mark; Chisolm, Margaret S; Berger, Zackary D
2014-10-15
Twitter is home to many health professionals who send messages about a variety of health-related topics. Amid concerns about physicians posting inappropriate content online, more in-depth knowledge about these messages is needed to understand health professionals' behavior on Twitter. Our goal was to characterize the content of Twitter messages, specifically focusing on health professionals and their tweets relating to health. We performed an in-depth content analysis of 700 tweets. Qualitative content analysis was conducted on tweets by health users on Twitter. The primary objective was to describe the general type of content (ie, health-related versus non-health related) on Twitter authored by health professionals and further to describe health-related tweets on the basis of the type of statement made. Specific attention was given to whether a tweet was personal (as opposed to professional) or made a claim that users would expect to be supported by some level of medical evidence (ie, a "testable" claim). A secondary objective was to compare content types among different users, including patients, physicians, nurses, health care organizations, and others. Health-related users are posting a wide range of content on Twitter. Among health-related tweets, 53.2% (184/346) contained a testable claim. Of health-related tweets by providers, 17.6% (61/346) were personal in nature; 61% (59/96) made testable statements. While organizations and businesses use Twitter to promote their services and products, patient advocates are using this tool to share their personal experiences with health. Twitter users in health-related fields tweet about both testable claims and personal experiences. Future work should assess the relationship between testable tweets and the actual level of evidence supporting them, including how Twitter users-especially patients-interpret the content of tweets posted by health providers.
NASA Technical Reports Server (NTRS)
Donnelly, R. E. (Editor)
1980-01-01
Papers about prediction of ionospheric and radio propagation conditions based primarily on empirical or statistical relations is discussed. Predictions of sporadic E, spread F, and scintillations generally involve statistical or empirical predictions. The correlation between solar-activity and terrestrial seismic activity and the possible relation between solar activity and biological effects is discussed.
Cremer, Jonas; Arnoldini, Markus; Hwa, Terence
2017-06-20
The human gut harbors a dynamic microbial community whose composition bears great importance for the health of the host. Here, we investigate how colonic physiology impacts bacterial growth, which ultimately dictates microbiota composition. Combining measurements of bacterial physiology with analysis of published data on human physiology into a quantitative, comprehensive modeling framework, we show how water flow in the colon, in concert with other physiological factors, determine the abundances of the major bacterial phyla. Mechanistically, our model shows that local pH values in the lumen, which differentially affect the growth of different bacteria, drive changes in microbiota composition. It identifies key factors influencing the delicate regulation of colonic pH, including epithelial water absorption, nutrient inflow, and luminal buffering capacity, and generates testable predictions on their effects. Our findings show that a predictive and mechanistic understanding of microbial ecology in the gut is possible. Such predictive understanding is needed for the rational design of intervention strategies to actively control the microbiota.
Cremer, Jonas; Arnoldini, Markus; Hwa, Terence
2017-01-01
The human gut harbors a dynamic microbial community whose composition bears great importance for the health of the host. Here, we investigate how colonic physiology impacts bacterial growth, which ultimately dictates microbiota composition. Combining measurements of bacterial physiology with analysis of published data on human physiology into a quantitative, comprehensive modeling framework, we show how water flow in the colon, in concert with other physiological factors, determine the abundances of the major bacterial phyla. Mechanistically, our model shows that local pH values in the lumen, which differentially affect the growth of different bacteria, drive changes in microbiota composition. It identifies key factors influencing the delicate regulation of colonic pH, including epithelial water absorption, nutrient inflow, and luminal buffering capacity, and generates testable predictions on their effects. Our findings show that a predictive and mechanistic understanding of microbial ecology in the gut is possible. Such predictive understanding is needed for the rational design of intervention strategies to actively control the microbiota. PMID:28588144
The effect of analytic and experiential modes of thought on moral judgment.
Kvaran, Trevor; Nichols, Shaun; Sanfey, Alan
2013-01-01
According to dual-process theories, moral judgments are the result of two competing processes: a fast, automatic, affect-driven process and a slow, deliberative, reason-based process. Accordingly, these models make clear and testable predictions about the influence of each system. Although a small number of studies have attempted to examine each process independently in the context of moral judgment, no study has yet tried to experimentally manipulate both processes within a single study. In this chapter, a well-established "mode-of-thought" priming technique was used to place participants in either an experiential/emotional or analytic mode while completing a task in which participants provide judgments about a series of moral dilemmas. We predicted that individuals primed analytically would make more utilitarian responses than control participants, while emotional priming would lead to less utilitarian responses. Support was found for both of these predictions. Implications of these findings for dual-process theories of moral judgment will be discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
Gillison, Andrew N; Asner, Gregory P; Fernandes, Erick C M; Mafalacusser, Jacinto; Banze, Aurélio; Izidine, Samira; da Fonseca, Ambrósio R; Pacate, Hermenegildo
2016-07-15
Sustainable biodiversity and land management require a cost-effective means of forecasting landscape response to environmental change. Conventional species-based, regional biodiversity assessments are rarely adequate for policy planning and decision making. We show how new ground and remotely-sensed survey methods can be coordinated to help elucidate and predict relationships between biodiversity, land use and soil properties along complex biophysical gradients that typify many similar landscapes worldwide. In the lower Zambezi valley, Mozambique we used environmental, gradient-directed transects (gradsects) to sample vascular plant species, plant functional types, vegetation structure, soil properties and land-use characteristics. Soil fertility indices were derived using novel multidimensional scaling of soil properties. To facilitate spatial analysis, we applied a probabilistic remote sensing approach, analyzing Landsat 7 satellite imagery to map photosynthetically active and inactive vegetation and bare soil along each gradsect. Despite the relatively low sample number, we found highly significant correlations between single and combined sets of specific plant, soil and remotely sensed variables that permitted testable spatial projections of biodiversity and soil fertility across the regional land-use mosaic. This integrative and rapid approach provides a low-cost, high-return and readily transferable methodology that permits the ready identification of testable biodiversity indicators for adaptive management of biodiversity and potential agricultural productivity. Copyright © 2016 Elsevier Ltd. All rights reserved.
Broadening conceptions of learning in medical education: the message from teamworking.
Bleakley, Alan
2006-02-01
There is a mismatch between the broad range of learning theories offered in the wider education literature and a relatively narrow range of theories privileged in the medical education literature. The latter are usually described under the heading of 'adult learning theory'. This paper critically addresses the limitations of the current dominant learning theories informing medical education. An argument is made that such theories, which address how an individual learns, fail to explain how learning occurs in dynamic, complex and unstable systems such as fluid clinical teams. Models of learning that take into account distributed knowing, learning through time as well as space, and the complexity of a learning environment including relationships between persons and artefacts, are more powerful in explaining and predicting how learning occurs in clinical teams. Learning theories may be privileged for ideological reasons, such as medicine's concern with autonomy. Where an increasing amount of medical education occurs in workplace contexts, sociocultural learning theories offer a best-fit exploration and explanation of such learning. We need to continue to develop testable models of learning that inform safe work practice. One type of learning theory will not inform all practice contexts and we need to think about a range of fit-for-purpose theories that are testable in practice. Exciting current developments include dynamicist models of learning drawing on complexity theory.
Schädler, Marc R; Warzybok, Anna; Kollmeier, Birger
2018-01-01
The simulation framework for auditory discrimination experiments (FADE) was adopted and validated to predict the individual speech-in-noise recognition performance of listeners with normal and impaired hearing with and without a given hearing-aid algorithm. FADE uses a simple automatic speech recognizer (ASR) to estimate the lowest achievable speech reception thresholds (SRTs) from simulated speech recognition experiments in an objective way, independent from any empirical reference data. Empirical data from the literature were used to evaluate the model in terms of predicted SRTs and benefits in SRT with the German matrix sentence recognition test when using eight single- and multichannel binaural noise-reduction algorithms. To allow individual predictions of SRTs in binaural conditions, the model was extended with a simple better ear approach and individualized by taking audiograms into account. In a realistic binaural cafeteria condition, FADE explained about 90% of the variance of the empirical SRTs for a group of normal-hearing listeners and predicted the corresponding benefits with a root-mean-square prediction error of 0.6 dB. This highlights the potential of the approach for the objective assessment of benefits in SRT without prior knowledge about the empirical data. The predictions for the group of listeners with impaired hearing explained 75% of the empirical variance, while the individual predictions explained less than 25%. Possibly, additional individual factors should be considered for more accurate predictions with impaired hearing. A competing talker condition clearly showed one limitation of current ASR technology, as the empirical performance with SRTs lower than -20 dB could not be predicted.
Schädler, Marc R.; Warzybok, Anna; Kollmeier, Birger
2018-01-01
The simulation framework for auditory discrimination experiments (FADE) was adopted and validated to predict the individual speech-in-noise recognition performance of listeners with normal and impaired hearing with and without a given hearing-aid algorithm. FADE uses a simple automatic speech recognizer (ASR) to estimate the lowest achievable speech reception thresholds (SRTs) from simulated speech recognition experiments in an objective way, independent from any empirical reference data. Empirical data from the literature were used to evaluate the model in terms of predicted SRTs and benefits in SRT with the German matrix sentence recognition test when using eight single- and multichannel binaural noise-reduction algorithms. To allow individual predictions of SRTs in binaural conditions, the model was extended with a simple better ear approach and individualized by taking audiograms into account. In a realistic binaural cafeteria condition, FADE explained about 90% of the variance of the empirical SRTs for a group of normal-hearing listeners and predicted the corresponding benefits with a root-mean-square prediction error of 0.6 dB. This highlights the potential of the approach for the objective assessment of benefits in SRT without prior knowledge about the empirical data. The predictions for the group of listeners with impaired hearing explained 75% of the empirical variance, while the individual predictions explained less than 25%. Possibly, additional individual factors should be considered for more accurate predictions with impaired hearing. A competing talker condition clearly showed one limitation of current ASR technology, as the empirical performance with SRTs lower than −20 dB could not be predicted. PMID:29692200
Flight control system design factors for applying automated testing techniques
NASA Technical Reports Server (NTRS)
Sitz, Joel R.; Vernon, Todd H.
1990-01-01
The principal design features and operational experiences of the X-29 forward-swept-wing aircraft and F-18 high alpha research vehicle (HARV) automated test systems are discussed. It is noted that operational experiences in developing and using these automated testing techniques have highlighted the need for incorporating target system features to improve testability. Improved target system testability can be accomplished with the addition of nonreal-time and real-time features. Online access to target system implementation details, unobtrusive real-time access to internal user-selectable variables, and proper software instrumentation are all desirable features of the target system. Also, test system and target system design issues must be addressed during the early stages of the target system development. Processing speeds of up to 20 million instructions/s and the development of high-bandwidth reflective memory systems have improved the ability to integrate the target system and test system for the application of automated testing techniques. It is concluded that new methods of designing testability into the target systems are required.
NASA Astrophysics Data System (ADS)
Jaber, Abobaker M.
2014-12-01
Two nonparametric methods for prediction and modeling of financial time series signals are proposed. The proposed techniques are designed to handle non-stationary and non-linearity behave and to extract meaningful signals for reliable prediction. Due to Fourier Transform (FT), the methods select significant decomposed signals that will be employed for signal prediction. The proposed techniques developed by coupling Holt-winter method with Empirical Mode Decomposition (EMD) and it is Extending the scope of empirical mode decomposition by smoothing (SEMD). To show performance of proposed techniques, we analyze daily closed price of Kuala Lumpur stock market index.
Abu Bakar, Nurul Farhana; Chen, Ai-Hong
2014-02-01
Children with learning disabilities might have difficulties to communicate effectively and give reliable responses as required in various visual function testing procedures. The purpose of this study was to compare the testability of visual acuity using the modified Early Treatment Diabetic Retinopathy Study (ETDRS) and Cambridge Crowding Cards, stereo acuity using Lang Stereo test II and Butterfly stereo tests and colour perception using Colour Vision Test Made Easy (CVTME) and Ishihara's Test for Colour Deficiency (Ishihara Test) between children in mainstream classes and children with learning disabilities in special education classes in government primary schools. A total of 100 primary school children (50 children from mainstream classes and 50 children from special education classes) matched in age were recruited in this cross-sectional comparative study. The testability was determined by the percentage of children who were able to give reliable respond as required by the respective tests. 'Unable to test' was defined as inappropriate response or uncooperative despite best efforts of the screener. The testability of the modified ETDRS, Butterfly stereo test and Ishihara test for respective visual function tests were found lower among children in special education classes ( P < 0.001) but not in Cambridge Crowding Cards, Lang Stereo test II and CVTME. Non verbal or "matching" approaches were found to be more superior in testing visual functions in children with learning disabilities. Modifications of vision testing procedures are essential for children with learning disabilities.
Integrated PK-PD and agent-based modeling in oncology.
Wang, Zhihui; Butner, Joseph D; Cristini, Vittorio; Deisboeck, Thomas S
2015-04-01
Mathematical modeling has become a valuable tool that strives to complement conventional biomedical research modalities in order to predict experimental outcome, generate new medical hypotheses, and optimize clinical therapies. Two specific approaches, pharmacokinetic-pharmacodynamic (PK-PD) modeling, and agent-based modeling (ABM), have been widely applied in cancer research. While they have made important contributions on their own (e.g., PK-PD in examining chemotherapy drug efficacy and resistance, and ABM in describing and predicting tumor growth and metastasis), only a few groups have started to combine both approaches together in an effort to gain more insights into the details of drug dynamics and the resulting impact on tumor growth. In this review, we focus our discussion on some of the most recent modeling studies building on a combined PK-PD and ABM approach that have generated experimentally testable hypotheses. Some future directions are also discussed.
Integrated PK-PD and Agent-Based Modeling in Oncology
Wang, Zhihui; Butner, Joseph D.; Cristini, Vittorio
2016-01-01
Mathematical modeling has become a valuable tool that strives to complement conventional biomedical research modalities in order to predict experimental outcome, generate new medical hypotheses, and optimize clinical therapies. Two specific approaches, pharmacokinetic-pharmacodynamic (PK-PD) modeling, and agent-based modeling (ABM), have been widely applied in cancer research. While they have made important contributions on their own (e.g., PK-PD in examining chemotherapy drug efficacy and resistance, and ABM in describing and predicting tumor growth and metastasis), only a few groups have started to combine both approaches together in an effort to gain more insights into the details of drug dynamics and the resulting impact on tumor growth. In this review, we focus our discussion on some of the most recent modeling studies building on a combined PK-PD and ABM approach that have generated experimentally testable hypotheses. Some future directions are also discussed. PMID:25588379
Solving puzzles of GW150914 by primordial black holes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blinnikov, S.; Dolgov, A.; Porayko, N.K.
The black hole binary properties inferred from the LIGO gravitational wave signal GW150914 posed several serious problems. The high masses and low effective spin of black hole binary can be explained if they are primordial (PBH) rather than the products of the stellar binary evolution. Such PBH properties are postulated ad hoc but not derived from fundamental theory. We show that the necessary features of PBHs naturally follow from the slightly modified Affleck-Dine (AD) mechanism of baryogenesis. The log-normal distribution of PBHs, predicted within the AD paradigm, is adjusted to provide an abundant population of low-spin stellar mass black holes.more » The same distribution gives a sufficient number of quickly growing seeds of supermassive black holes observed at high redshifts and may comprise an appreciable fraction of Dark Matter which does not contradict any existing observational limits. Testable predictions of this scenario are discussed.« less
Predicting the dynamics of bacterial growth inhibition by ribosome-targeting antibiotics
NASA Astrophysics Data System (ADS)
Greulich, Philip; Doležal, Jakub; Scott, Matthew; Evans, Martin R.; Allen, Rosalind J.
2017-12-01
Understanding how antibiotics inhibit bacteria can help to reduce antibiotic use and hence avoid antimicrobial resistance—yet few theoretical models exist for bacterial growth inhibition by a clinically relevant antibiotic treatment regimen. In particular, in the clinic, antibiotic treatment is time-dependent. Here, we use a theoretical model, previously applied to steady-state bacterial growth, to predict the dynamical response of a bacterial cell to a time-dependent dose of ribosome-targeting antibiotic. Our results depend strongly on whether the antibiotic shows reversible transport and/or low-affinity ribosome binding (‘low-affinity antibiotic’) or, in contrast, irreversible transport and/or high affinity ribosome binding (‘high-affinity antibiotic’). For low-affinity antibiotics, our model predicts that growth inhibition depends on the duration of the antibiotic pulse, and can show a transient period of very fast growth following removal of the antibiotic. For high-affinity antibiotics, growth inhibition depends on peak dosage rather than dose duration, and the model predicts a pronounced post-antibiotic effect, due to hysteresis, in which growth can be suppressed for long times after the antibiotic dose has ended. These predictions are experimentally testable and may be of clinical significance.
Predicting the dynamics of bacterial growth inhibition by ribosome-targeting antibiotics
Greulich, Philip; Doležal, Jakub; Scott, Matthew; Evans, Martin R; Allen, Rosalind J
2017-01-01
Understanding how antibiotics inhibit bacteria can help to reduce antibiotic use and hence avoid antimicrobial resistance—yet few theoretical models exist for bacterial growth inhibition by a clinically relevant antibiotic treatment regimen. In particular, in the clinic, antibiotic treatment is time-dependent. Here, we use a theoretical model, previously applied to steady-state bacterial growth, to predict the dynamical response of a bacterial cell to a time-dependent dose of ribosome-targeting antibiotic. Our results depend strongly on whether the antibiotic shows reversible transport and/or low-affinity ribosome binding (‘low-affinity antibiotic’) or, in contrast, irreversible transport and/or high affinity ribosome binding (‘high-affinity antibiotic’). For low-affinity antibiotics, our model predicts that growth inhibition depends on the duration of the antibiotic pulse, and can show a transient period of very fast growth following removal of the antibiotic. For high-affinity antibiotics, growth inhibition depends on peak dosage rather than dose duration, and the model predicts a pronounced post-antibiotic effect, due to hysteresis, in which growth can be suppressed for long times after the antibiotic dose has ended. These predictions are experimentally testable and may be of clinical significance. PMID:28714461
Predicting the High Redshift Galaxy Population for JWST
NASA Astrophysics Data System (ADS)
Flynn, Zoey; Benson, Andrew
2017-01-01
The James Webb Space Telescope will be launched in Oct 2018 with the goal of observing galaxies in the redshift range of z = 10 - 15. As redshift increases, the age of the Universe decreases, allowing us to study objects formed only a few hundred million years after the Big Bang. This will provide a valuable opportunity to test and improve current galaxy formation theory by comparing predictions for mass, luminosity, and number density to the observed data. We have made testable predictions with the semi-analytical galaxy formation model Galacticus. The code uses Markov Chain Monte Carlo methods to determine viable sets of model parameters that match current astronomical data. The resulting constrained model was then set to match the specifications of the JWST Ultra Deep Field Imaging Survey. Predictions utilizing up to 100 viable parameter sets were calculated, allowing us to assess the uncertainty in current theoretical expectations. We predict that the planned UDF will be able to observe a significant number of objects past redshift z > 9 but nothing at redshift z > 11. In order to detect these faint objects at redshifts z = 11-15 we need to increase exposure time by at least a factor of 1.66.
Empirical Observations on the Sensitivity of Hot Cathode Ionization Type Vacuum Gages
NASA Technical Reports Server (NTRS)
Summers, R. L.
1969-01-01
A study of empirical methods of predicting tile relative sensitivities of hot cathode ionization gages is presented. Using previously published gage sensitivities, several rules for predicting relative sensitivity are tested. The relative sensitivity to different gases is shown to be invariant with gage type, in the linear range of gage operation. The total ionization cross section, molecular and molar polarizability, and refractive index are demonstrated to be useful parameters for predicting relative gage sensitivity. Using data from the literature, the probable error of predictions of relative gage sensitivity based on these molecular properties is found to be about 10 percent. A comprehensive table of predicted relative sensitivities, based on empirical methods, is presented.
Sherman, Deborah Witt; Rosedale, Mary; Haber, Judith
2012-05-01
To develop a substantive theory of the process of breast cancer survivorship. Grounded theory. A LISTSERV announcement posted on the SHARE Web site and purposeful recruitment of women known to be diagnosed and treated for breast cancer. 15 women diagnosed with early-stage breast cancer. Constant comparative analysis. Breast cancer survivorship. The core variable identified was Reclaiming Life on One's Own Terms. The perceptions and experiences of the participants revealed overall that the diagnosis of breast cancer was a turning point in life and the stimulus for change. That was followed by the recognition of breast cancer as now being a part of life, leading to the necessity of learning to live with breast cancer, and finally, creating a new life after breast cancer. Participants revealed that breast cancer survivorship is a process marked and shaped by time, the perception of support, and coming to terms with the trauma of a cancer diagnosis and the aftermath of treatment. The process of survivorship continues by assuming an active role in self-healing, gaining a new perspective and reconciling paradoxes, creating a new mindset and moving to a new normal, developing a new way of being in the world on one's own terms, and experiencing growth through adversity beyond survivorship. The process of survivorship for women with breast cancer is an evolutionary journey with short- and long-term challenges. This study shows the development of an empirically testable theory of survivorship that describes and predicts women's experiences following breast cancer treatment from the initial phase of recovery and beyond. The theory also informs interventions that not only reduce negative outcomes, but promote ongoing healing, adjustment, and resilience over time.
Maslo, Brooke; Fefferman, Nina H
2015-08-01
Ecological factors generally affect population viability on rapid time scales. Traditional population viability analyses (PVA) therefore focus on alleviating ecological pressures, discounting potential evolutionary impacts on individual phenotypes. Recent studies of evolutionary rescue (ER) focus on cases in which severe, environmentally induced population bottlenecks trigger a rapid evolutionary response that can potentially reverse demographic threats. ER models have focused on shifting genetics and resulting population recovery, but no one has explored how to incorporate those findings into PVA. We integrated ER into PVA to identify the critical decision interval for evolutionary rescue (DIER) under which targeted conservation action should be applied to buffer populations undergoing ER against extinction from stochastic events and to determine the most appropriate vital rate to target to promote population recovery. We applied this model to little brown bats (Myotis lucifugus) affected by white-nose syndrome (WNS), a fungal disease causing massive declines in several North American bat populations. Under the ER scenario, the model predicted that the DIER period for little brown bats was within 11 years of initial WNS emergence, after which they stabilized at a positive growth rate (λ = 1.05). By comparing our model results with population trajectories of multiple infected hibernacula across the WNS range, we concluded that ER is a potential explanation of observed little brown bat population trajectories across multiple hibernacula within the affected range. Our approach provides a tool that can be used by all managers to provide testable hypotheses regarding the occurrence of ER in declining populations, suggest empirical studies to better parameterize the population genetics and conservation-relevant vital rates, and identify the DIER period during which management strategies will be most effective for species conservation. © 2015 Society for Conservation Biology.
Testability, Test Automation and Test Driven Development for the Trick Simulation Toolkit
NASA Technical Reports Server (NTRS)
Penn, John
2014-01-01
This paper describes the adoption of a Test Driven Development approach and a Continuous Integration System in the development of the Trick Simulation Toolkit, a generic simulation development environment for creating high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. It describes the approach, and the significant benefits seen, such as fast, thorough and clear test feedback every time code is checked into the code repository. It also describes an approach that encourages development of code that is testable and adaptable.
Stochastic recruitment leads to symmetry breaking in foraging populations
NASA Astrophysics Data System (ADS)
Biancalani, Tommaso; Dyson, Louise; McKane, Alan
2014-03-01
When an ant colony is faced with two identical equidistant food sources, the foraging ants are found to concentrate more on one source than the other. Analogous symmetry-breaking behaviours have been reported in various population systems, (such as queueing or stock market trading) suggesting the existence of a simple universal mechanism. Past studies have neglected the effect of demographic noise and required rather complicated models to qualitatively reproduce this behaviour. I will show how including the effects of demographic noise leads to a radically different conclusion. The symmetry-breaking arises solely due to the process of recruitment and ceases to occur for large population sizes. The latter fact provides a testable prediction for a real system.
The polyadenylation code: a unified model for the regulation of mRNA alternative polyadenylation*
Davis, Ryan; Shi, Yongsheng
2014-01-01
The majority of eukaryotic genes produce multiple mRNA isoforms with distinct 3′ ends through a process called mRNA alternative polyadenylation (APA). Recent studies have demonstrated that APA is dynamically regulated during development and in response to environmental stimuli. A number of mechanisms have been described for APA regulation. In this review, we attempt to integrate all the known mechanisms into a unified model. This model not only explains most of previous results, but also provides testable predictions that will improve our understanding of the mechanistic details of APA regulation. Finally, we briefly discuss the known and putative functions of APA regulation. PMID:24793760
Modelling the spread of innovation in wild birds.
Shultz, Thomas R; Montrey, Marcel; Aplin, Lucy M
2017-06-01
We apply three plausible algorithms in agent-based computer simulations to recent experiments on social learning in wild birds. Although some of the phenomena are simulated by all three learning algorithms, several manifestations of social conformity bias are simulated by only the approximate majority (AM) algorithm, which has roots in chemistry, molecular biology and theoretical computer science. The simulations generate testable predictions and provide several explanatory insights into the diffusion of innovation through a population. The AM algorithm's success raises the possibility of its usefulness in studying group dynamics more generally, in several different scientific domains. Our differential-equation model matches simulation results and provides mathematical insights into the dynamics of these algorithms. © 2017 The Author(s).
Daniels, Marcus G; Farmer, J Doyne; Gillemot, László; Iori, Giulia; Smith, Eric
2003-03-14
We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.
NASA Astrophysics Data System (ADS)
Daniels, Marcus G.; Farmer, J. Doyne; Gillemot, László; Iori, Giulia; Smith, Eric
2003-03-01
We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.
Are perytons signatures of ball lightning?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dodin, I. Y.; Fisch, N. J.
2014-10-20
The enigmatic downchirped signals, called 'perytons', that are detected by radio telescopes in the GHz frequency range may be produced by an atmospheric phenomenon known as ball lightning (BL). If BLs act as nonstationary radio frequency cavities, their characteristic emission frequencies and evolution timescales are consistent with peryton observations, and so are general patterns in which BLs are known to occur. Based on this evidence, testable predictions are made that can confirm or rule out a causal connection between perytons and BLs. In either case, how perytons are searched for in observational data may warrant reconsideration because existing procedures maymore » be discarding events that have the same nature as known perytons.« less
Neutrino mass, dark matter, and Baryon asymmetry via TeV-scale physics without fine-tuning.
Aoki, Mayumi; Kanemura, Shinya; Seto, Osamu
2009-02-06
We propose an extended version of the standard model, in which neutrino oscillation, dark matter, and the baryon asymmetry of the Universe can be simultaneously explained by the TeV-scale physics without assuming a large hierarchy among the mass scales. Tiny neutrino masses are generated at the three-loop level due to the exact Z2 symmetry, by which the stability of the dark matter candidate is guaranteed. The extra Higgs doublet is required not only for the tiny neutrino masses but also for successful electroweak baryogenesis. The model provides discriminative predictions especially in Higgs phenomenology, so that it is testable at current and future collider experiments.
NASA Astrophysics Data System (ADS)
Hirata, N.; Yokoi, S.; Nanjo, K. Z.; Tsuruoka, H.
2012-04-01
One major focus of the current Japanese earthquake prediction research program (2009-2013), which is now integrated with the research program for prediction of volcanic eruptions, is to move toward creating testable earthquake forecast models. For this purpose we started an experiment of forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan, and to conduct verifiable prospective tests of their model performance. We started the 1st earthquake forecast testing experiment in Japan within the CSEP framework. We use the earthquake catalogue maintained and provided by the Japan Meteorological Agency (JMA). The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called "All Japan," "Mainland," and "Kanto." A total of 105 models were submitted, and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. The experiments were completed for 92 rounds for 1-day, 6 rounds for 3-month, and 3 rounds for 1-year classes. For 1-day testing class all models passed all the CSEP's evaluation tests at more than 90% rounds. The results of the 3-month testing class also gave us new knowledge concerning statistical forecasting models. All models showed a good performance for magnitude forecasting. On the other hand, observation is hardly consistent in space distribution with most models when many earthquakes occurred at a spot. Now we prepare the 3-D forecasting experiment with a depth range of 0 to 100 km in Kanto region. The testing center is improving an evaluation system for 1-day class experiment to finish forecasting and testing results within one day. The special issue of 1st part titled Earthquake Forecast Testing Experiment in Japan was published on the Earth, Planets and Space Vol. 63, No.3, 2011 on March, 2011. The 2nd part of this issue, which is now on line, will be published soon. An outline of the experiment and activities of the Japanese Testing Center are published on our WEB site; http://wwweic.eri.u-tokyo.ac.jp/ZISINyosoku/wiki.en/wiki.cgi
Climate Prediction for Brazil's Nordeste: Performance of Empirical and Numerical Modeling Methods.
NASA Astrophysics Data System (ADS)
Moura, Antonio Divino; Hastenrath, Stefan
2004-07-01
Comparisons of performance of climate forecast methods require consistency in the predictand and a long common reference period. For Brazil's Nordeste, empirical methods developed at the University of Wisconsin use preseason (October January) rainfall and January indices of the fields of meridional wind component and sea surface temperature (SST) in the tropical Atlantic and the equatorial Pacific as input to stepwise multiple regression and neural networking. These are used to predict the March June rainfall at a network of 27 stations. An experiment at the International Research Institute for Climate Prediction, Columbia University, with a numerical model (ECHAM4.5) used global SST information through February to predict the March June rainfall at three grid points in the Nordeste. The predictands for the empirical and numerical model forecasts are correlated at +0.96, and the period common to the independent portion of record of the empirical prediction and the numerical modeling is 1968 99. Over this period, predicted versus observed rainfall are evaluated in terms of correlation, root-mean-square error, absolute error, and bias. Performance is high for both approaches. Numerical modeling produces a correlation of +0.68, moderate errors, and strong negative bias. For the empirical methods, errors and bias are small, and correlations of +0.73 and +0.82 are reached between predicted and observed rainfall.
Galindo-Romero, Marta; Lippert, Tristan; Gavrilov, Alexander
2015-12-01
This paper presents an empirical linear equation to predict peak pressure level of anthropogenic impulsive signals based on its correlation with the sound exposure level. The regression coefficients are shown to be weakly dependent on the environmental characteristics but governed by the source type and parameters. The equation can be applied to values of the sound exposure level predicted with a numerical model, which provides a significant improvement in the prediction of the peak pressure level. Part I presents the analysis for airgun arrays signals, and Part II considers the application of the empirical equation to offshore impact piling noise.
Hall, E.K.; Maixner, F.; Franklin, O.; Daims, H.; Richter, A.; Battin, T.
2011-01-01
Currently, one of the biggest challenges in microbial and ecosystem ecology is to develop conceptual models that organize the growing body of information on environmental microbiology into a clear mechanistic framework with a direct link to ecosystem processes. Doing so will enable development of testable hypotheses to better direct future research and increase understanding of key constraints on biogeochemical networks. Although the understanding of phenotypic and genotypic diversity of microorganisms in the environment is rapidly accumulating, how controls on microbial physiology ultimately affect biogeochemical fluxes remains poorly understood. We propose that insight into constraints on biogeochemical cycles can be achieved by a more rigorous evaluation of microbial community biomass composition within the context of ecological stoichiometry. Multiple recent studies have pointed to microbial biomass stoichiometry as an important determinant of when microorganisms retain or recycle mineral nutrients. We identify the relevant cellular components that most likely drive changes in microbial biomass stoichiometry by defining a conceptual model rooted in ecological stoichiometry. More importantly, we show how X-ray microanalysis (XRMA), nanoscale secondary ion mass spectroscopy (NanoSIMS), Raman microspectroscopy, and in situ hybridization techniques (for example, FISH) can be applied in concert to allow for direct empirical evaluation of the proposed conceptual framework. This approach links an important piece of the ecological literature, ecological stoichiometry, with the molecular front of the microbial revolution, in an attempt to provide new insight into how microbial physiology could constrain ecosystem processes.
Abu Bakar, Nurul Farhana; Chen, Ai-Hong
2014-01-01
Context: Children with learning disabilities might have difficulties to communicate effectively and give reliable responses as required in various visual function testing procedures. Aims: The purpose of this study was to compare the testability of visual acuity using the modified Early Treatment Diabetic Retinopathy Study (ETDRS) and Cambridge Crowding Cards, stereo acuity using Lang Stereo test II and Butterfly stereo tests and colour perception using Colour Vision Test Made Easy (CVTME) and Ishihara's Test for Colour Deficiency (Ishihara Test) between children in mainstream classes and children with learning disabilities in special education classes in government primary schools. Materials and Methods: A total of 100 primary school children (50 children from mainstream classes and 50 children from special education classes) matched in age were recruited in this cross-sectional comparative study. The testability was determined by the percentage of children who were able to give reliable respond as required by the respective tests. ‘Unable to test’ was defined as inappropriate response or uncooperative despite best efforts of the screener. Results: The testability of the modified ETDRS, Butterfly stereo test and Ishihara test for respective visual function tests were found lower among children in special education classes (P < 0.001) but not in Cambridge Crowding Cards, Lang Stereo test II and CVTME. Conclusion: Non verbal or “matching” approaches were found to be more superior in testing visual functions in children with learning disabilities. Modifications of vision testing procedures are essential for children with learning disabilities. PMID:24008790
Kabore, Achille; Biritwum, Nana-Kwadwo; Downs, Philip W.; Soares Magalhaes, Ricardo J.; Zhang, Yaobi; Ottesen, Eric A.
2013-01-01
Background Mapping the distribution of schistosomiasis is essential to determine where control programs should operate, but because it is impractical to assess infection prevalence in every potentially endemic community, model-based geostatistics (MBG) is increasingly being used to predict prevalence and determine intervention strategies. Methodology/Principal Findings To assess the accuracy of MBG predictions for Schistosoma haematobium infection in Ghana, school surveys were evaluated at 79 sites to yield empiric prevalence values that could be compared with values derived from recently published MBG predictions. Based on these findings schools were categorized according to WHO guidelines so that practical implications of any differences could be determined. Using the mean predicted values alone, 21 of the 25 empirically determined ‘high-risk’ schools requiring yearly praziquantel would have been undertreated and almost 20% of the remaining schools would have been treated despite empirically-determined absence of infection – translating into 28% of the children in the 79 schools being undertreated and 12% receiving treatment in the absence of any demonstrated need. Conclusions/Significance Using the current predictive map for Ghana as a spatial decision support tool by aggregating prevalence estimates to the district level was clearly not adequate for guiding the national program, but the alternative of assessing each school in potentially endemic areas of Ghana or elsewhere is not at all feasible; modelling must be a tool complementary to empiric assessments. Thus for practical usefulness, predictive risk mapping should not be thought of as a one-time exercise but must, as in the current study, be an iterative process that incorporates empiric testing and model refining to create updated versions that meet the needs of disease control operational managers. PMID:23505584
A simple theoretical framework for understanding heterogeneous differentiation of CD4+ T cells
2012-01-01
Background CD4+ T cells have several subsets of functional phenotypes, which play critical yet diverse roles in the immune system. Pathogen-driven differentiation of these subsets of cells is often heterogeneous in terms of the induced phenotypic diversity. In vitro recapitulation of heterogeneous differentiation under homogeneous experimental conditions indicates some highly regulated mechanisms by which multiple phenotypes of CD4+ T cells can be generated from a single population of naïve CD4+ T cells. Therefore, conceptual understanding of induced heterogeneous differentiation will shed light on the mechanisms controlling the response of populations of CD4+ T cells under physiological conditions. Results We present a simple theoretical framework to show how heterogeneous differentiation in a two-master-regulator paradigm can be governed by a signaling network motif common to all subsets of CD4+ T cells. With this motif, a population of naïve CD4+ T cells can integrate the signals from their environment to generate a functionally diverse population with robust commitment of individual cells. Notably, two positive feedback loops in this network motif govern three bistable switches, which in turn, give rise to three types of heterogeneous differentiated states, depending upon particular combinations of input signals. We provide three prototype models illustrating how to use this framework to explain experimental observations and make specific testable predictions. Conclusions The process in which several types of T helper cells are generated simultaneously to mount complex immune responses upon pathogenic challenges can be highly regulated, and a simple signaling network motif can be responsible for generating all possible types of heterogeneous populations with respect to a pair of master regulators controlling CD4+ T cell differentiation. The framework provides a mathematical basis for understanding the decision-making mechanisms of CD4+ T cells, and it can be helpful for interpreting experimental results. Mathematical models based on the framework make specific testable predictions that may improve our understanding of this differentiation system. PMID:22697466
The attention schema theory: a mechanistic account of subjective awareness
Graziano, Michael S. A.; Webb, Taylor W.
2015-01-01
We recently proposed the attention schema theory, a novel way to explain the brain basis of subjective awareness in a mechanistic and scientifically testable manner. The theory begins with attention, the process by which signals compete for the brain’s limited computing resources. This internal signal competition is partly under a bottom–up influence and partly under top–down control. We propose that the top–down control of attention is improved when the brain has access to a simplified model of attention itself. The brain therefore constructs a schematic model of the process of attention, the ‘attention schema,’ in much the same way that it constructs a schematic model of the body, the ‘body schema.’ The content of this internal model leads a brain to conclude that it has a subjective experience. One advantage of this theory is that it explains how awareness and attention can sometimes become dissociated; the brain’s internal models are never perfect, and sometimes a model becomes dissociated from the object being modeled. A second advantage of this theory is that it explains how we can be aware of both internal and external events. The brain can apply attention to many types of information including external sensory information and internal information about emotions and cognitive states. If awareness is a model of attention, then this model should pertain to the same domains of information to which attention pertains. A third advantage of this theory is that it provides testable predictions. If awareness is the internal model of attention, used to help control attention, then without awareness, attention should still be possible but should suffer deficits in control. In this article, we review the existing literature on the relationship between attention and awareness, and suggest that at least some of the predictions of the theory are borne out by the evidence. PMID:25954242
Rozier, Kelvin; Bondarenko, Vladimir E
2017-05-01
The β 1 - and β 2 -adrenergic signaling systems play different roles in the functioning of cardiac cells. Experimental data show that the activation of the β 1 -adrenergic signaling system produces significant inotropic, lusitropic, and chronotropic effects in the heart, whereas the effects of the β 2 -adrenergic signaling system is less apparent. In this paper, a comprehensive compartmentalized experimentally based mathematical model of the combined β 1 - and β 2 -adrenergic signaling systems in mouse ventricular myocytes is developed to simulate the experimental findings and make testable predictions of the behavior of the cardiac cells under different physiological conditions. Simulations describe the dynamics of major signaling molecules in different subcellular compartments; kinetics and magnitudes of phosphorylation of ion channels, transporters, and Ca 2+ handling proteins; modifications of action potential shape and duration; and [Ca 2+ ] i and [Na + ] i dynamics upon stimulation of β 1 - and β 2 -adrenergic receptors (β 1 - and β 2 -ARs). The model reveals physiological conditions when β 2 -ARs do not produce significant physiological effects and when their effects can be measured experimentally. Simulations demonstrated that stimulation of β 2 -ARs with isoproterenol caused a marked increase in the magnitude of the L-type Ca 2+ current, [Ca 2+ ] i transient, and phosphorylation of phospholamban only upon additional application of pertussis toxin or inhibition of phosphodiesterases of type 3 and 4. The model also made testable predictions of the changes in magnitudes of [Ca 2+ ] i and [Na + ] i fluxes, the rate of decay of [Na + ] i concentration upon both combined and separate stimulation of β 1 - and β 2 -ARs, and the contribution of phosphorylation of PKA targets to the changes in the action potential and [Ca 2+ ] i transient. Copyright © 2017 the American Physiological Society.
High Speed Jet Noise Prediction Using Large Eddy Simulation
NASA Technical Reports Server (NTRS)
Lele, Sanjiva K.
2002-01-01
Current methods for predicting the noise of high speed jets are largely empirical. These empirical methods are based on the jet noise data gathered by varying primarily the jet flow speed, and jet temperature for a fixed nozzle geometry. Efforts have been made to correlate the noise data of co-annular (multi-stream) jets and for the changes associated with the forward flight within these empirical correlations. But ultimately these emipirical methods fail to provide suitable guidance in the selection of new, low-noise nozzle designs. This motivates the development of a new class of prediction methods which are based on computational simulations, in an attempt to remove the empiricism of the present day noise predictions.
Symmetry in locomotor central pattern generators and animal gaits
NASA Astrophysics Data System (ADS)
Golubitsky, Martin; Stewart, Ian; Buono, Pietro-Luciano; Collins, J. J.
1999-10-01
Animal locomotion is controlled, in part, by a central pattern generator (CPG), which is an intraspinal network of neurons capable of generating a rhythmic output. The spatio-temporal symmetries of the quadrupedal gaits walk, trot and pace lead to plausible assumptions about the symmetries of locomotor CPGs. These assumptions imply that the CPG of a quadruped should consist of eight nominally identical subcircuits, arranged in an essentially unique matter. Here we apply analogous arguments to myriapod CPGs. Analyses based on symmetry applied to these networks lead to testable predictions, including a distinction between primary and secondary gaits, the existence of a new primary gait called `jump', and the occurrence of half-integer wave numbers in myriapod gaits. For bipeds, our analysis also predicts two gaits with the out-of-phase symmetry of the walk and two gaits with the in-phase symmetry of the hop. We present data that support each of these predictions. This work suggests that symmetry can be used to infer a plausible class of CPG network architectures from observed patterns of animal gaits.
NASA Astrophysics Data System (ADS)
Harrington, David M.; Sueoka, Stacey R.
2017-01-01
We outline polarization performance calculations and predictions for the Daniel K. Inouye Solar Telescope (DKIST) optics and show Mueller matrices for two of the first light instruments. Telescope polarization is due to polarization-dependent mirror reflectivity and rotations between groups of mirrors as the telescope moves in altitude and azimuth. The Zemax optical modeling software has polarization ray-trace capabilities and predicts system performance given a coating prescription. We develop a model coating formula that approximates measured witness sample polarization properties. Estimates show the DKIST telescope Mueller matrix as functions of wavelength, azimuth, elevation, and field angle for the cryogenic near infra-red spectro-polarimeter (CryoNIRSP) and visible spectro-polarimeter. Footprint variation is substantial and shows vignetted field points will have strong polarization effects. We estimate 2% variation of some Mueller matrix elements over the 5-arc min CryoNIRSP field. We validate the Zemax model by showing limiting cases for flat mirrors in collimated and powered designs that compare well with theoretical approximations and are testable with lab ellipsometers.
Describing Myxococcus xanthus Aggregation Using Ostwald Ripening Equations for Thin Liquid Films
Bahar, Fatmagül; Pratt-Szeliga, Philip C.; Angus, Stuart; Guo, Jiaye; Welch, Roy D.
2014-01-01
When starved, a swarm of millions of Myxococcus xanthus cells coordinate their movement from outward swarming to inward coalescence. The cells then execute a synchronous program of multicellular development, arranging themselves into dome shaped aggregates. Over the course of development, about half of the initial aggregates disappear, while others persist and mature into fruiting bodies. This work seeks to develop a quantitative model for aggregation that accurately simulates which will disappear and which will persist. We analyzed time-lapse movies of M. xanthus development, modeled aggregation using the equations that describe Ostwald ripening of droplets in thin liquid films, and predicted the disappearance and persistence of aggregates with an average accuracy of 85%. We then experimentally validated a prediction that is fundamental to this model by tracking individual fluorescent cells as they moved between aggregates and demonstrating that cell movement towards and away from aggregates correlates with aggregate disappearance. Describing development through this model may limit the number and type of molecular genetic signals needed to complete M. xanthus development, and it provides numerous additional testable predictions. PMID:25231319
Density-dependent recruitment rates in great tits: the importance of being heavier
Both, C.; Visser, M. E.; Verboven, N.
1999-01-01
In birds, individuals with a higher mass at fledging have a higher probability of recruiting into the breeding population. This can be because mass is an indicator of general condition and thereby of the ability to survive adverse circumstances and/or because fledging mass is positively related to competitive strength in interactions with other fledglings. This latter explanation leads to two testable predictions: (i) there is stronger selection for fledging mass when there is more severe competition (i.e. at higher densities); and (ii) that besides absolute fledging mass, relative mass of fledglings within a cohort is important. We test these two predictions in two great tit (Parus major) populations. The first prediction was met for one of the populations, showing that competition affects the importance of mass-dependent recruitment. The second prediction, that fledglings recruit relatively well if they are heavy compared to the other fledglings, is met for both populations. The consequence of the importance of relative rather than absolute fledging mass is that the fitness consequences of reproductive decisions affecting fledging mass, such as clutch size, depend on the decisions of the other individuals in the population.
Dissociative identity disorder and the process of couple therapy.
Macintosh, Heather B
2013-01-01
Couple therapy in the context of dissociative identity disorder (DID) has been neglected as an area of exploration and development in the couple therapy and trauma literature. What little discussion exists focuses primarily on couple therapy as an adjunct to individual therapy rather than as a primary treatment for couple distress and trauma. Couple therapy researchers have begun to develop adaptations to provide effective support to couples dealing with the impact of childhood trauma in their relationships, but little attention has been paid to the specific and complex needs of DID patients in couple therapy (H. B. MacIntosh & S. Johnson, 2008 ). This review and case presentation explores the case of "Lisa," a woman diagnosed with DID, and "Don," her partner, and illustrates the themes of learning to communicate, handling conflicting needs, responding to child alters, and addressing sexuality and education through their therapy process. It is the hope of the author that this discussion will renew interest in the field of couple therapy in the context of DID, with the eventual goal of developing an empirically testable model of treatment for couples.
Sexual and Emotional Infidelity: Evolved Gender Differences in Jealousy Prove Robust and Replicable.
Buss, David M
2018-03-01
Infidelity poses threats to high-investment mating relationships. Because of gender differences in some aspects of reproductive biology, such as internal female fertilization, the nature of these threats differs for men and women. Men, but not women, for example, have recurrently faced the problem of uncertainty in their genetic parenthood. Jealousy is an emotion hypothesized to have evolved to combat these threats. The 1992 article Sex Differences in Jealousy: Evolution, Physiology, and Psychology reported three empirical studies using two different methods, forced-choice and physiological experiments. Results supported the evolution-based hypotheses. The article became highly cited for several reasons. It elevated the status of jealousy as an important emotion to be explained by any comprehensive theory of human emotions. Subsequent meta-analyses robustly supported the evolutionary hypotheses. Moreover, the work supported the evolutionary meta-theory of gender differences, which posits differences only in domains in which the sexes have recurrently faced distinct adaptive problems. It also heralded the newly emerging field of evolutionary psychology as a useful perspective that possesses the scientific virtues of testability, falsifiability, and heuristic value in discovering previously unknown psychological phenomena.
Crowell, Sheila E.; Beauchaine, Theodore P.; Linehan, Marsha M.
2009-01-01
Over the past several decades, research has focused increasingly on developmental precursors to psychological disorders that were previously assumed to emerge only in adulthood. This change in focus follows from the recognition that complex transactions between biological vulnerabilities and psychosocial risk factors shape emotional and behavioral development beginning at conception. To date, however, empirical research on the development of borderline personality is extremely limited. Indeed, in the decade since M. M. Linehan initially proposed a biosocial model of the development of borderline personality disorder, there have been few attempts to test the model among at-risk youth. In this review, diverse literatures are reviewed that can inform understanding of the ontogenesis of borderline pathology, and testable hypotheses are proposed to guide future research with at-risk children and adolescents. One probable pathway is identified that leads to borderline personality disorder; it begins with early vulnerability, expressed initially as impulsivity and followed by heightened emotional sensitivity. These vulnerabilities are potentiated across development by environmental risk factors that give rise to more extreme emotional, behavioral, and cognitive dysregulation. PMID:19379027
Crowell, Sheila E; Beauchaine, Theodore P; Linehan, Marsha M
2009-05-01
Over the past several decades, research has focused increasingly on developmental precursors to psychological disorders that were previously assumed to emerge only in adulthood. This change in focus follows from the recognition that complex transactions between biological vulnerabilities and psychosocial risk factors shape emotional and behavioral development beginning at conception. To date, however, empirical research on the development of borderline personality is extremely limited. Indeed, in the decade since M. M. Linehan initially proposed a biosocial model of the development of borderline personality disorder, there have been few attempts to test the model among at-risk youth. In this review, diverse literatures are reviewed that can inform understanding of the ontogenesis of borderline pathology, and testable hypotheses are proposed to guide future research with at-risk children and adolescents. One probable pathway is identified that leads to borderline personality disorder; it begins with early vulnerability, expressed initially as impulsivity and followed by heightened emotional sensitivity. These vulnerabilities are potentiated across development by environmental risk factors that give rise to more extreme emotional, behavioral, and cognitive dysregulation. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
Oyster reefs can outpace sea-level rise
NASA Astrophysics Data System (ADS)
Rodriguez, Antonio B.; Fodrie, F. Joel; Ridge, Justin T.; Lindquist, Niels L.; Theuerkauf, Ethan J.; Coleman, Sara E.; Grabowski, Jonathan H.; Brodeur, Michelle C.; Gittman, Rachel K.; Keller, Danielle A.; Kenworthy, Matthew D.
2014-06-01
In the high-salinity seaward portions of estuaries, oysters seek refuge from predation, competition and disease in intertidal areas, but this sanctuary will be lost if vertical reef accretion cannot keep pace with sea-level rise (SLR). Oyster-reef abundance has already declined ~85% globally over the past 100 years, mainly from over harvesting, making any additional losses due to SLR cause for concern. Before any assessment of reef response to accelerated SLR can be made, direct measures of reef growth are necessary. Here, we present direct measurements of intertidal oyster-reef growth from cores and terrestrial lidar-derived digital elevation models. On the basis of our measurements collected within a mid-Atlantic estuary over a 15-year period, we developed a globally testable empirical model of intertidal oyster-reef accretion. We show that previous estimates of vertical reef growth, based on radiocarbon dates and bathymetric maps, may be greater than one order of magnitude too slow. The intertidal reefs we studied should be able to keep up with any future accelerated rate of SLR (ref. ) and may even benefit from the additional subaqueous space allowing extended vertical accretion.
Modal Interpretation of Quantum Mechanics and Classical Physical Theories
NASA Astrophysics Data System (ADS)
Ingarden, R. S.
In 1990, Bas C. van Fraassen defined the modal interpretation of quantum mechanics as the consideration of it as ``a pure theory of the possible, with testable, empirical implications for what actually happens". This is a narrow, traditional understanding of modality, only in the sense of the concept of possibility (usually denoted in logic by the C. I. Lewis's symbol 3) and the concept of necessity 2 defined by means of 3. In modern logic, however, modality is understood in a much wider sense as any intensional functor (i.e. non-extensional or determined not only by the truth value of a sentence). In the recent (independent of van Fraassen) publications of the author (1997), an attempt was made to apply this wider understanding of modality to interpretation of classical and quantum physics. In the present lecture, these problems are discussed on the background of a brief review of the logical approch to quantum mechanics in the recent 7 decades. In this discussion, the new concepts of sub-modality and super-modality of many orders are used.
Earthquake prediction evaluation standards applied to the VAN Method
NASA Astrophysics Data System (ADS)
Jackson, David D.
Earthquake prediction research must meet certain standards before it can be suitably evaluated for potential application in decision making. For methods that result in a binary (on or off) alarm condition, requirements include (1) a quantitative description of observables that trigger an alarm, (2) a quantitative description, including ranges of time, location, and magnitude, of the predicted earthquakes, (3) documented evidence of all previous alarms, (4) a complete list of predicted earthquakes, (5) a complete list of unpredicted earthquakes. The VAN technique [Varotsos and Lazaridou, 1991; Varotsos et al., 1996] has not yet been stated as a testable hypothesis. It fails criteria (1) and (2) so it is not ready to be evaluated properly. Although telegrams were transmitted in advance of claimed successes, these telegrams did not fully specify the predicted events, and all of the published statistical evaluations involve many subjective ex post facto decisions. Lacking a statistically demonstrated relationship to earthquakes, a candidate prediction technique should satisfy several plausibility criteria, including: (1) a reasonable relationship between the location of the candidate precursor and that of the predicted earthquake, (2) some demonstration that the candidate precursory observations are related to stress, strain, or other quantities related to earthquakes, and (3) the existence of co-seismic as well as pre-seismic variations of the candidate precursor. The VAN technique meets none of these criteria.
NASA Astrophysics Data System (ADS)
Augustine, Starrlight; Rosa, Sara; Kooijman, Sebastiaan A. L. M.; Carlotti, François; Poggiale, Jean-Christophe
2014-11-01
Parameters for the standard Dynamic Energy Budget (DEB) model were estimated for the purple mauve stinger, Pelagia noctiluca, using literature data. Overall, the model predictions are in good agreement with data covering the full life-cycle. The parameter set we obtain suggests that P. noctiluca is well adapted to survive long periods of starvation since the predicted maximum reserve capacity is extremely high. Moreover we predict that the reproductive output of larger individuals is relatively insensitive to changes in food level while wet mass and length are. Furthermore, the parameters imply that even if food were scarce (ingestion levels only 14% of the maximum for a given size) an individual would still mature and be able to reproduce. We present detailed model predictions for embryo development and discuss the developmental energetics of the species such as the fact that the metabolism of ephyrae accelerates for several days after birth. Finally we explore a number of concrete testable model predictions which will help to guide future research. The application of DEB theory to the collected data allowed us to conclude that P. noctiluca combines maximizing allocation to reproduction with rather extreme capabilities to survive starvation. The combination of these properties might explain why P. noctiluca is a rapidly growing concern to fisheries and tourism.
Emotional foundations of cognitive control.
Inzlicht, Michael; Bartholow, Bruce D; Hirsh, Jacob B
2015-03-01
Often seen as the paragon of higher cognition, here we suggest that cognitive control is dependent on emotion. Rather than asking whether control is influenced by emotion, we ask whether control itself can be understood as an emotional process. Reviewing converging evidence from cybernetics, animal research, cognitive neuroscience, and social and personality psychology, we suggest that cognitive control is initiated when goal conflicts evoke phasic changes to emotional primitives that both focus attention on the presence of goal conflicts and energize conflict resolution to support goal-directed behavior. Critically, we propose that emotion is not an inert byproduct of conflict but is instrumental in recruiting control. Appreciating the emotional foundations of control leads to testable predictions that can spur future research. Copyright © 2015 Elsevier Ltd. All rights reserved.
Modeling the attenuation and failure of action potentials in the dendrites of hippocampal neurons.
Migliore, M
1996-01-01
We modeled two different mechanisms, a shunting conductance and a slow sodium inactivation, to test whether they could modulate the active propagation of a train of action potentials in a dendritic tree. Computer simulations, using a compartmental model of a pyramidal neuron, suggest that each of these two mechanisms could account for the activity-dependent attenuation and failure of the action potentials in the dendrites during the train. Each mechanism is shown to be in good qualitative agreement with experimental findings on somatic or dendritic stimulation and on the effects of hyperpolarization. The conditions under which branch point failures can be observed, and a few experimentally testable predictions, are presented and discussed. PMID:8913580
Initial eccentricity fluctuations and their relation to higher-order flow harmonics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lacey, R.; Wei,R.; Jia,J.
2011-06-01
Monte Carlo simulations are used to compute the centrality dependence of the participant eccentricities ({var_epsilon}{sub n}) in Au+Au collisions for the two primary models currently employed for eccentricity estimates - the Glauber and the factorized Kharzeev-Levin-Nardi (fKLN) models. They suggest specific testable predictions for the magnitude and centrality dependence of the flow coefficients v{sub n}, respectively measured relative to the event planes {Psi}{sub n}. They also indicate that the ratios of several of these coefficients may provide an additional constraint for distinguishing between the models. Such a constraint could be important for a more precise determination of the specific viscositymore » of the matter produced in heavy ion collisions.« less
Modelling toehold-mediated RNA strand displacement.
Šulc, Petr; Ouldridge, Thomas E; Romano, Flavio; Doye, Jonathan P K; Louis, Ard A
2015-03-10
We study the thermodynamics and kinetics of an RNA toehold-mediated strand displacement reaction with a recently developed coarse-grained model of RNA. Strand displacement, during which a single strand displaces a different strand previously bound to a complementary substrate strand, is an essential mechanism in active nucleic acid nanotechnology and has also been hypothesized to occur in vivo. We study the rate of displacement reactions as a function of the length of the toehold and temperature and make two experimentally testable predictions: that the displacement is faster if the toehold is placed at the 5' end of the substrate; and that the displacement slows down with increasing temperature for longer toeholds. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Tianjun; Nanopoulos, Dimitri V.; Walker, Joel W.
2010-10-01
We consider proton decay in the testable flipped SU(5)×U(1)X models with TeV-scale vector-like particles which can be realized in free fermionic string constructions and F-theory model building. We significantly improve upon the determination of light threshold effects from prior studies, and perform a fresh calculation of the second loop for the process p→eπ from the heavy gauge boson exchange. The cumulative result is comparatively fast proton decay, with a majority of the most plausible parameter space within reach of the future Hyper-Kamiokande and DUSEL experiments. Because the TeV-scale vector-like particles can be produced at the LHC, we predict a strong correlation between the most exciting particle physics experiments of the coming decade.
Emotional foundations of cognitive control
Inzlicht, Michael; Bartholow, Bruce D.; Hirsh, Jacob B.
2015-01-01
Often seen as the paragon of higher cognition, here we suggest that cognitive control is dependent on emotion. Rather than asking whether control is influenced by emotion, we ask whether control itself can be understood as an emotional process. Reviewing converging evidence from cybernetics, animal research, cognitive neuroscience, and social and personality psychology, we suggest that cognitive control is initiated when goal conflicts evoke phasic changes to emotional primitives that both focus attention on the presence of goal conflicts and energize conflict resolution to support goal-directed behavior. Critically, we propose that emotion is not an inert byproduct of conflict but is instrumental in recruiting control. Appreciating the emotional foundations of control leads to testable predictions that can spur future research. PMID:25659515
Metabolic network flux analysis for engineering plant systems.
Shachar-Hill, Yair
2013-04-01
Metabolic network flux analysis (NFA) tools have proven themselves to be powerful aids to metabolic engineering of microbes by providing quantitative insights into the flows of material and energy through cellular systems. The development and application of NFA tools to plant systems has advanced in recent years and are yielding significant insights and testable predictions. Plants present substantial opportunities for the practical application of NFA but they also pose serious challenges related to the complexity of plant metabolic networks and to deficiencies in our knowledge of their structure and regulation. By considering the tools available and selected examples, this article attempts to assess where and how NFA is most likely to have a real impact on plant biotechnology. Copyright © 2013 Elsevier Ltd. All rights reserved.
Supermassive Black Holes and Galaxy Evolution
NASA Technical Reports Server (NTRS)
Merritt, D.
2004-01-01
Supermassive black holes appear to be generic components of galactic nuclei. The formation and growth of black holes is intimately connected with the evolution of galaxies on a wide range of scales. For instance, mergers between galaxies containing nuclear black holes would produce supermassive binaries which eventually coalesce via the emission of gravitational radiation. The formation and decay of these binaries is expected to produce a number of observable signatures in the stellar distribution. Black holes can also affect the large-scale structure of galaxies by perturbing the orbits of stars that pass through the nucleus. Large-scale N-body simulations are beginning to generate testable predictions about these processes which will allow us to draw inferences about the formation history of supermassive black holes.
Majorana dark matter with B+L gauge symmetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, Wei; Guo, Huai-Ke; Zhang, Yongchao
Here, we present a new model that extends the Standard Model (SM) with the local B + L symmetry, and point out that the lightest new fermion, introduced to cancel anomalies and stabilized automatically by the B + L symmetry, can serve as the cold dark matter candidate. We also study constraints on the model from Higgs measurements, electroweak precision measurements as well as the relic density and direct detections of the dark matter. Our numerical results reveal that the pseudo-vector coupling of with Z and the Yukawa coupling with the SM Higgs are highly constrained by the latest resultsmore » of LUX, while there are viable parameter space that could satisfy all the constraints and give testable predictions.« less
Majorana dark matter with B+L gauge symmetry
Chao, Wei; Guo, Huai-Ke; Zhang, Yongchao
2017-04-07
Here, we present a new model that extends the Standard Model (SM) with the local B + L symmetry, and point out that the lightest new fermion, introduced to cancel anomalies and stabilized automatically by the B + L symmetry, can serve as the cold dark matter candidate. We also study constraints on the model from Higgs measurements, electroweak precision measurements as well as the relic density and direct detections of the dark matter. Our numerical results reveal that the pseudo-vector coupling of with Z and the Yukawa coupling with the SM Higgs are highly constrained by the latest resultsmore » of LUX, while there are viable parameter space that could satisfy all the constraints and give testable predictions.« less
Domain generality vs. modality specificity: The paradox of statistical learning
Frost, Ram; Armstrong, Blair C.; Siegelman, Noam; Christiansen, Morten H.
2015-01-01
Statistical learning is typically considered to be a domain-general mechanism by which cognitive systems discover the underlying distributional properties of the input. Recent studies examining whether there are commonalities in the learning of distributional information across different domains or modalities consistently reveal, however, modality and stimulus specificity. An important question is, therefore, how and why a hypothesized domain-general learning mechanism systematically produces such effects. We offer a theoretical framework according to which statistical learning is not a unitary mechanism, but a set of domain-general computational principles, that operate in different modalities and therefore are subject to the specific constraints characteristic of their respective brain regions. This framework offers testable predictions and we discuss its computational and neurobiological plausibility. PMID:25631249
Tailor, Vijay; Glaze, Selina; Unwin, Hilary; Bowman, Richard; Thompson, Graham; Dahlmann-Noor, Annegret
2016-10-01
Children and adults with neurological impairments are often not able to access conventional perimetry; however, information about the visual field is valuable. A new technology, saccadic vector optokinetic perimetry (SVOP), may have improved accessibility, but its accuracy has not been evaluated. We aimed to explore accessibility, testability and accuracy of SVOP in children with neurodisability or isolated visual pathway deficits. Cohort study; recruitment October 2013-May 2014, at children's eye clinics at a tertiary referral centre and a regional Child Development Centre; full orthoptic assessment, SVOP (central 30° of the visual field) and confrontation visual fields (CVF). Group 1: age 1-16 years, neurodisability (n=16), group 2: age 10-16 years, confirmed or suspected visual field defect (n=21); group 2 also completed Goldmann visual field testing (GVFT). Group 1: testability with a full 40-point test protocol is 12.5%; with reduced test protocols, testability is 100%, but plots may be clinically meaningless. Children (44%) and parents/carers (62.5%) find the test easy. SVOP and CVF agree in 50%. Group 2: testability is 62% for the 40-point protocol, and 90.5% for reduced protocols. Corneal changes in childhood glaucoma interfere with SVOP testing. All children and parents/carers find SVOP easy. Overall agreement with GVFT is 64.7%. While SVOP is highly accessible to children, many cannot complete a full 40-point test. Agreement with current standard tests is moderate to poor. Abnormal saccades cause an apparent non-specific visual field defect. In children with glaucoma or nystagmus SVOP calibration often fails. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
The Study of Rain Specific Attenuation for the Prediction of Satellite Propagation in Malaysia
NASA Astrophysics Data System (ADS)
Mandeep, J. S.; Ng, Y. Y.; Abdullah, H.; Abdullah, M.
2010-06-01
Specific attenuation is the fundamental quantity in the calculation of rain attenuation for terrestrial path and slant paths representing as rain attenuation per unit distance (dB/km). Specific attenuation is an important element in developing the predicted rain attenuation model. This paper deals with the empirical determination of the power law coefficients which allow calculating the specific attenuation in dB/km from the knowledge of the rain rate in mm/h. The main purpose of the paper is to obtain the coefficients of k and α of power law relationship between specific attenuation. Three years (from 1st January 2006 until 31st December 2008) rain gauge and beacon data taken from USM, Nibong Tebal have been used to do the empirical procedure analysis of rain specific attenuation. The data presented are semi-empirical in nature. A year-to-year variation of the coefficients has been indicated and the empirical measured data was compared with ITU-R provided regression coefficient. The result indicated that the USM empirical measured data was significantly vary from ITU-R predicted value. Hence, ITU-R recommendation for regression coefficients of rain specific attenuation is not suitable for predicting rain attenuation at Malaysia.
An empirical potential for simulating vacancy clusters in tungsten.
Mason, D R; Nguyen-Manh, D; Becquart, C S
2017-12-20
We present an empirical interatomic potential for tungsten, particularly well suited for simulations of vacancy-type defects. We compare energies and structures of vacancy clusters generated with the empirical potential with an extensive new database of values computed using density functional theory, and show that the new potential predicts low-energy defect structures and formation energies with high accuracy. A significant difference to other popular embedded-atom empirical potentials for tungsten is the correct prediction of surface energies. Interstitial properties and short-range pairwise behaviour remain similar to the Ackford-Thetford potential on which it is based, making this potential well-suited to simulations of microstructural evolution following irradiation damage cascades. Using atomistic kinetic Monte Carlo simulations, we predict vacancy cluster dissociation in the range 1100-1300 K, the temperature range generally associated with stage IV recovery.
On Burst Detection and Prediction in Retweeting Sequence
2015-05-22
We conduct a comprehensive empirical analysis of a large microblogging dataset collected from the Sina Weibo and report our observations of burst...whether and how accurate we can predict bursts using classifiers based on the extracted features. Our empirical study of the Sina Weibo data shows the...feasibility of burst prediction using appropriately extracted features and classic classifiers. 1 Introduction Microblogging, such as Twitter and Sina
"Don׳t" versus "won׳t": principles, mechanisms, and intention in action inhibition.
Ridderinkhof, K Richard; van den Wildenberg, Wery P M; Brass, Marcel
2014-12-01
The aim of the present review is to provide a theoretical analysis of the role of intentions in inhibition. We will first outline four dimensions along which inhibition can be categorized: intentionality, timing, specificity, and the nature of the to-be-inhibited action. Next, we relate the concept of inhibition to theories of intentional action. In particular, we integrate ideomotor theory with motor control theories that involve predictive forward modeling of the consequences of one׳s action, and evaluate how the dimensional classification of inhibition fits into such an integrative approach. Furthermore, we will outline testable predictions that derive from this novel hypothesis of ideomotor inhibition. We then discuss the viability of the ideomotor inhibition hypothesis and our classification in view of the available evidence on the neural mechanisms of action inhibition, indicating that sensorimotor and ideomotor inhibition engages largely overlapping networks with additional recruitment of dFMC for ideomotor inhibition. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ao, Ping
2011-03-01
There has been a tremendous progress in cancer research. However, it appears the current dominant cancer research framework of regarding cancer as diseases of genome leads impasse. Naturally questions have been asked that whether it is possible to develop alternative frameworks such that they can connect both to mutations and other genetic/genomic effects and to environmental factors. Furthermore, such framework can be made quantitative and with predictions experimentally testable. In this talk, I will present a positive answer to this calling. I will explain on our construction of endogenous network theory based on molecular-cellular agencies as dynamical variable. Such cancer theory explicitly demonstrates a profound connection to many fundamental concepts in physics, as such stochastic non-equilibrium processes, ``energy'' landscape, metastability, etc. It suggests that neneath cancer's daunting complexity may lie a simplicity that gives grounds for hope. The rationales behind such theory, its predictions, and its initial experimental verifications will be presented. Supported by USA NIH and China NSF.
Is ``the Theory of Everything'' Merely the Ultimate Ensemble Theory?
NASA Astrophysics Data System (ADS)
Tegmark, Max
1998-11-01
We discuss some physical consequences of what might be called "the ultimate ensemble theory,", where not only worlds corresponding to say different sets of initial data or different physical constants are considered equally real, but also worlds ruled by altogether different equations. The only postulate in this theory is that all structures that exist mathematically exist also physically, by which we mean that in those complex enough to contain self-aware substructures (SASs), these SASs will subjectively perceive themselves as existing in a physically "real" world. We find that it is far from clear that this simple theory, which has no free parameters whatsoever, is observationally ruled out. The predictions of the theory take the form of probability distributions for the outcome of experiments, which makes it testable. In addition, it may be possible to rule it out by comparing its a priori predictions for the observable attributes of nature (the particle masses, the dimensionality of spacetime, etc.) with what is observed.
Wollenberg Valero, Katharina C.; Garcia-Porta, Joan; Rodríguez, Ariel; Arias, Mónica; Shah, Abhijeet; Randrianiaina, Roger Daniel; Brown, Jason L.; Glaw, Frank; Amat, Felix; Künzel, Sven; Metzler, Dirk; Isokpehi, Raphael D.; Vences, Miguel
2017-01-01
Anuran amphibians undergo major morphological transitions during development, but the contribution of their markedly different life-history phases to macroevolution has rarely been analysed. Here we generate testable predictions for coupling versus uncoupling of phenotypic evolution of tadpole and adult life-history phases, and for the underlying expression of genes related to morphological feature formation. We test these predictions by combining evidence from gene expression in two distantly related frogs, Xenopus laevis and Mantidactylus betsileanus, with patterns of morphological evolution in the entire radiation of Madagascan mantellid frogs. Genes linked to morphological structure formation are expressed in a highly phase-specific pattern, suggesting uncoupling of phenotypic evolution across life-history phases. This gene expression pattern agrees with uncoupled rates of trait evolution among life-history phases in the mantellids, which we show to have undergone an adaptive radiation. Our results validate a prevalence of uncoupling in the evolution of tadpole and adult phenotypes of frogs. PMID:28504275
Dynamic allostery of protein alpha helical coiled-coils
Hawkins, Rhoda J; McLeish, Tom C.B
2005-01-01
Alpha helical coiled-coils appear in many important allosteric proteins such as the dynein molecular motor and bacteria chemotaxis transmembrane receptors. As a mechanism for transmitting the information of ligand binding to a distant site across an allosteric protein, an alternative to conformational change in the mean static structure is an induced change in the pattern of the internal dynamics of the protein. We explore how ligand binding may change the intramolecular vibrational free energy of a coiled-coil, using parameterized coarse-grained models, treating the case of dynein in detail. The models predict that coupling of slide, bend and twist modes of the coiled-coil transmits an allosteric free energy of ∼2kBT, consistent with experimental results. A further prediction is a quantitative increase in the effective stiffness of the coiled-coil without any change in inherent flexibility of the individual helices. The model provides a possible and experimentally testable mechanism for transmission of information through the alpha helical coiled-coil of dynein. PMID:16849225
Jackson, Chris J; Izadikah, Zahra; Oei, Tian P S
2012-06-01
Jackson's (2005, 2008a) hybrid model of learning identifies a number of learning mechanisms that lead to the emergence and maintenance of the balance between rationality and irrationality. We test a general hypothesis that Jackson's model will predict depressive symptoms, such that poor learning is related to depression. We draw comparisons between Jackson's model and Ellis' (2004) Rational Emotive Behavior Therapy and Theory (REBT) and thereby provide a set of testable learning mechanisms potentially underlying REBT. Results from 80 patients diagnosed with depression completed the learning styles profiler (LSP; Jackson, 2005) and two measures of depression. Results provide support for the proposed model of learning and further evidence that low rationality is a key predictor of depression. We conclude that the hybrid model of learning has the potential to explain some of the learning and cognitive processes related to the development and maintenance of irrational beliefs and depression. Copyright © 2011. Published by Elsevier B.V.
The ORF1 Protein Encoded by LINE-1: Structure and Function During L1 Retrotransposition
Martin, Sandra L.
2006-01-01
LINE-1, or L1 is an autonomous non-LTR retrotransposon in mammals. Retrotransposition requires the function of the two, L1-encoded polypeptides, ORF1p and ORF2p. Early recognition of regions of homology between the predicted amino acid sequence of ORF2 and known endonuclease and reverse transcriptase enzymes led to testable hypotheses regarding the function of ORF2p in retrotransposition. As predicted, ORF2p has been demonstrated to have both endonuclease and reverse transcriptase activities. In contrast, no homologs of known function have contributed to our understanding of the function of ORF1p during retrotransposition. Nevertheless, significant advances have been made such that we now know that ORF1p is a high affinity RNA binding protein that forms a ribonucleoprotein particle together with L1 RNA. Furthermore, ORF1p is a nucleic acid chaperone and this nucleic acid chaperone activity is required for L1 retrotransposition. PMID:16877816
Five potential consequences of climate change for invasive species.
Hellmann, Jessica J; Byers, James E; Bierwagen, Britta G; Dukes, Jeffrey S
2008-06-01
Scientific and societal unknowns make it difficult to predict how global environmental changes such as climate change and biological invasions will affect ecological systems. In the long term, these changes may have interacting effects and compound the uncertainty associated with each individual driver. Nonetheless, invasive species are likely to respond in ways that should be qualitatively predictable, and some of these responses will be distinct from those of native counterparts. We used the stages of invasion known as the "invasion pathway" to identify 5 nonexclusive consequences of climate change for invasive species: (1) altered transport and introduction mechanisms, (2) establishment of new invasive species, (3) altered impact of existing invasive species, (4) altered distribution of existing invasive species, and (5) altered effectiveness of control strategies. We then used these consequences to identify testable hypotheses about the responses of invasive species to climate change and provide suggestions for invasive-species management plans. The 5 consequences also emphasize the need for enhanced environmental monitoring and expanded coordination among entities involved in invasive-species management.
Multidisciplinary analysis and design of printed wiring boards
NASA Astrophysics Data System (ADS)
Fulton, Robert E.; Hughes, Joseph L.; Scott, Waymond R., Jr.; Umeagukwu, Charles; Yeh, Chao-Pin
1991-04-01
Modern printed wiring board design depends on electronic prototyping using computer-based simulation and design tools. Existing electrical computer-aided design (ECAD) tools emphasize circuit connectivity with only rudimentary analysis capabilities. This paper describes a prototype integrated PWB design environment denoted Thermal Structural Electromagnetic Testability (TSET) being developed at Georgia Tech in collaboration with companies in the electronics industry. TSET provides design guidance based on enhanced electrical and mechanical CAD capabilities including electromagnetic modeling testability analysis thermal management and solid mechanics analysis. TSET development is based on a strong analytical and theoretical science base and incorporates an integrated information framework and a common database design based on a systematic structured methodology.
Soy-Based Therapeutic Baby Formulas: Testable Hypotheses Regarding the Pros and Cons.
Westmark, Cara J
2016-01-01
Soy-based infant formulas have been consumed in the United States since 1909, and currently constitute a significant portion of the infant formula market. There are efforts underway to generate genetically modified soybeans that produce therapeutic agents of interest with the intent to deliver those agents in a soy-based infant formula platform. The threefold purpose of this review article is to first discuss the pros and cons of soy-based infant formulas, then present testable hypotheses to discern the suitability of a soy platform for drug delivery in babies, and finally start a discussion to inform public policy on this important area of infant nutrition.
Bayesian naturalness, simplicity, and testability applied to the B ‑ L MSSM GUT
NASA Astrophysics Data System (ADS)
Fundira, Panashe; Purves, Austin
2018-04-01
Recent years have seen increased use of Bayesian model comparison to quantify notions such as naturalness, simplicity, and testability, especially in the area of supersymmetric model building. After demonstrating that Bayesian model comparison can resolve a paradox that has been raised in the literature concerning the naturalness of the proton mass, we apply Bayesian model comparison to GUTs, an area to which it has not been applied before. We find that the GUTs are substantially favored over the nonunifying puzzle model. Of the GUTs we consider, the B ‑ L MSSM GUT is the most favored, but the MSSM GUT is almost equally favored.
Guimaraes, Sandra; Fernandes, Tiago; Costa, Patrício; Silva, Eduardo
2018-06-01
To determine a normative of tumbling E optotype and its feasibility for visual acuity (VA) assessment in children aged 3-4 years. A cross-sectional study of 1756 children who were invited to participate in a comprehensive non-invasive eye exam. Uncorrected monocular VA with crowded tumbling E with a comprehensive ophthalmological examination were assessed. Testability rates of the whole population and VA of the healthy children for different age subgroups, gender, school type and the order of testing in which the ophthalmological examination was performed were evaluated. The overall testability rate was 95% (92% and 98% for children aged 3 and 4 years, respectively). The mean VA of the first-day assessment (first-VA) and best-VA over 2 days' assessments was 0.14 logMAR (95% CI 0.14 to 0.15) (decimal=0.72, 95% CI 0.71 to 0.73) and 0.13 logMAR (95% CI 0.13 to 0.14) (decimal=0.74, 95% CI 0.73 to 0.74). Analysis with age showed differences between groups in first-VA (F(3,1146)=10.0; p<0.001; η2=0.026) and best-VA (F(3,1155)=8.8; p<0.001; η2=0.022). Our normative was very highly correlated with previous reported HOTV-Amblyopia-Treatment-Study (HOTV-ATS) (first-VA, r=0.97; best-VA, r=0.99), with 0.8 to 0.7 lines consistent overestimation for HOTV-ATS as described in literature. Overall false-positive referral was 1.3%, being specially low regarding anisometropias of ≥2 logMAR lines (0.17%). Interocular difference ≥1 line VA logMAR was not associated with age (p=0.195). This is the first normative for European Caucasian children with single crowded tumbling E in healthy eyes and the largest study comparing 3 and 4 years old testability. Testability rates are higher than found in literature with other optotypes, especially in children aged 3 years, where we found 5%-11% better testability rates. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
NASA Technical Reports Server (NTRS)
English, Robert E; Cavicchi, Richard H
1951-01-01
Empirical methods of Ainley and Kochendorfer and Nettles were used to predict performances of nine turbine designs. Measured and predicted performances were compared. Appropriate values of blade-loss parameter were determined for the method of Kochendorfer and Nettles. The measured design-point efficiencies were lower than predicted by as much as 0.09 (Ainley and 0.07 (Kochendorfer and Nettles). For the method of Kochendorfer and Nettles, appropriate values of blade-loss parameter ranged from 0.63 to 0.87 and the off-design performance was accurately predicted.
Automated System Checkout to Support Predictive Maintenance for the Reusable Launch Vehicle
NASA Technical Reports Server (NTRS)
Patterson-Hine, Ann; Deb, Somnath; Kulkarni, Deepak; Wang, Yao; Lau, Sonie (Technical Monitor)
1998-01-01
The Propulsion Checkout and Control System (PCCS) is a predictive maintenance software system. The real-time checkout procedures and diagnostics are designed to detect components that need maintenance based on their condition, rather than using more conventional approaches such as scheduled or reliability centered maintenance. Predictive maintenance can reduce turn-around time and cost and increase safety as compared to conventional maintenance approaches. Real-time sensor validation, limit checking, statistical anomaly detection, and failure prediction based on simulation models are employed. Multi-signal models, useful for testability analysis during system design, are used during the operational phase to detect and isolate degraded or failed components. The TEAMS-RT real-time diagnostic engine was developed to utilize the multi-signal models by Qualtech Systems, Inc. Capability of predicting the maintenance condition was successfully demonstrated with a variety of data, from simulation to actual operation on the Integrated Propulsion Technology Demonstrator (IPTD) at Marshall Space Flight Center (MSFC). Playback of IPTD valve actuations for feature recognition updates identified an otherwise undetectable Main Propulsion System 12 inch prevalve degradation. The algorithms were loaded into the Propulsion Checkout and Control System for further development and are the first known application of predictive Integrated Vehicle Health Management to an operational cryogenic testbed. The software performed successfully in real-time, meeting the required performance goal of 1 second cycle time.
Interest is increasing in using biological community data to provide information on the specific types of anthropogenic influences impacting streams. We built empirical models that predict the level of six different types of stress with fish and benthic macroinvertebrate data as...
MERGANSER - An Empirical Model to Predict Fish and Loon Mercury in New England Lakes
MERGANSER (MERcury Geo-spatial AssessmeNtS for the New England Region) is an empirical least-squares multiple regression model using mercury (Hg) deposition and readily obtainable lake and watershed features to predict fish (fillet) and common loon (blood) Hg in New England lakes...
Koopmeiners, Joseph S; Feng, Ziding
2011-01-01
The receiver operating characteristic (ROC) curve, the positive predictive value (PPV) curve and the negative predictive value (NPV) curve are three measures of performance for a continuous diagnostic biomarker. The ROC, PPV and NPV curves are often estimated empirically to avoid assumptions about the distributional form of the biomarkers. Recently, there has been a push to incorporate group sequential methods into the design of diagnostic biomarker studies. A thorough understanding of the asymptotic properties of the sequential empirical ROC, PPV and NPV curves will provide more flexibility when designing group sequential diagnostic biomarker studies. In this paper we derive asymptotic theory for the sequential empirical ROC, PPV and NPV curves under case-control sampling using sequential empirical process theory. We show that the sequential empirical ROC, PPV and NPV curves converge to the sum of independent Kiefer processes and show how these results can be used to derive asymptotic results for summaries of the sequential empirical ROC, PPV and NPV curves.
Koopmeiners, Joseph S.; Feng, Ziding
2013-01-01
The receiver operating characteristic (ROC) curve, the positive predictive value (PPV) curve and the negative predictive value (NPV) curve are three measures of performance for a continuous diagnostic biomarker. The ROC, PPV and NPV curves are often estimated empirically to avoid assumptions about the distributional form of the biomarkers. Recently, there has been a push to incorporate group sequential methods into the design of diagnostic biomarker studies. A thorough understanding of the asymptotic properties of the sequential empirical ROC, PPV and NPV curves will provide more flexibility when designing group sequential diagnostic biomarker studies. In this paper we derive asymptotic theory for the sequential empirical ROC, PPV and NPV curves under case-control sampling using sequential empirical process theory. We show that the sequential empirical ROC, PPV and NPV curves converge to the sum of independent Kiefer processes and show how these results can be used to derive asymptotic results for summaries of the sequential empirical ROC, PPV and NPV curves. PMID:24039313
A genetic programming approach for Burkholderia Pseudomallei diagnostic pattern discovery
Yang, Zheng Rong; Lertmemongkolchai, Ganjana; Tan, Gladys; Felgner, Philip L.; Titball, Richard
2009-01-01
Motivation: Finding diagnostic patterns for fighting diseases like Burkholderia pseudomallei using biomarkers involves two key issues. First, exhausting all subsets of testable biomarkers (antigens in this context) to find a best one is computationally infeasible. Therefore, a proper optimization approach like evolutionary computation should be investigated. Second, a properly selected function of the antigens as the diagnostic pattern which is commonly unknown is a key to the diagnostic accuracy and the diagnostic effectiveness in clinical use. Results: A conversion function is proposed to convert serum tests of antigens on patients to binary values based on which Boolean functions as the diagnostic patterns are developed. A genetic programming approach is designed for optimizing the diagnostic patterns in terms of their accuracy and effectiveness. During optimization, it is aimed to maximize the coverage (the rate of positive response to antigens) in the infected patients and minimize the coverage in the non-infected patients while maintaining the fewest number of testable antigens used in the Boolean functions as possible. The final coverage in the infected patients is 96.55% using 17 of 215 (7.4%) antigens with zero coverage in the non-infected patients. Among these 17 antigens, BPSL2697 is the most frequently selected one for the diagnosis of Burkholderia Pseudomallei. The approach has been evaluated using both the cross-validation and the Jack–knife simulation methods with the prediction accuracy as 93% and 92%, respectively. A novel approach is also proposed in this study to evaluate a model with binary data using ROC analysis. Contact: z.r.yang@ex.ac.uk PMID:19561021
Evolutionary Perspectives on Genetic and Environmental Risk Factors for Psychiatric Disorders.
Keller, Matthew C
2018-05-07
Evolutionary medicine uses evolutionary theory to help elucidate why humans are vulnerable to disease and disorders. I discuss two different types of evolutionary explanations that have been used to help understand human psychiatric disorders. First, a consistent finding is that psychiatric disorders are moderately to highly heritable, and many, such as schizophrenia, are also highly disabling and appear to decrease Darwinian fitness. Models used in evolutionary genetics to understand why genetic variation exists in fitness-related traits can be used to understand why risk alleles for psychiatric disorders persist in the population. The usual explanation for species-typical adaptations-natural selection-is less useful for understanding individual differences in genetic risk to disorders. Rather, two other types of models, mutation-selection-drift and balancing selection, offer frameworks for understanding why genetic variation in risk to psychiatric (and other) disorders exists, and each makes predictions that are now testable using whole-genome data. Second, species-typical capacities to mount reactions to negative events are likely to have been crafted by natural selection to minimize fitness loss. The pain reaction to tissue damage is almost certainly such an example, but it has been argued that the capacity to experience depressive symptoms such as sadness, anhedonia, crying, and fatigue in the face of adverse life situations may have been crafted by natural selection as well. I review the rationale and strength of evidence for this hypothesis. Evolutionary hypotheses of psychiatric disorders are important not only for offering explanations for why psychiatric disorders exist, but also for generating new, testable hypotheses and understanding how best to design studies and analyze data.
The Diffusion Model Is Not a Deterministic Growth Model: Comment on Jones and Dzhafarov (2014)
Smith, Philip L.; Ratcliff, Roger; McKoon, Gail
2015-01-01
Jones and Dzhafarov (2014) claim that several current models of speeded decision making in cognitive tasks, including the diffusion model, can be viewed as special cases of other general models or model classes. The general models can be made to match any set of response time (RT) distribution and accuracy data exactly by a suitable choice of parameters and so are unfalsifiable. The implication of their claim is that models like the diffusion model are empirically testable only by artificially restricting them to exclude unfalsifiable instances of the general model. We show that Jones and Dzhafarov’s argument depends on enlarging the class of “diffusion” models to include models in which there is little or no diffusion. The unfalsifiable models are deterministic or near-deterministic growth models, from which the effects of within-trial variability have been removed or in which they are constrained to be negligible. These models attribute most or all of the variability in RT and accuracy to across-trial variability in the rate of evidence growth, which is permitted to be distributed arbitrarily and to vary freely across experimental conditions. In contrast, in the standard diffusion model, within-trial variability in evidence is the primary determinant of variability in RT. Across-trial variability, which determines the relative speed of correct responses and errors, is theoretically and empirically constrained. Jones and Dzhafarov’s attempt to include the diffusion model in a class of models that also includes deterministic growth models misrepresents and trivializes it and conveys a misleading picture of cognitive decision-making research. PMID:25347314
Ilukor, John; Birner, Regina; Nielsen, Thea
2015-11-01
Providing adequate animal health services to smallholder farmers in developing countries has remained a challenge, in spite of various reform efforts during the past decades. The focuses of the past reforms were on market failures to decide what the public sector, the private sector, and the "third sector" (the community-based sector) should do with regard to providing animal health services. However, such frameworks have paid limited attention to the governance challenges inherent in the provision of animal health services. This paper presents a framework for analyzing institutional arrangements for providing animal health services that focus not only on market failures, but also on governance challenges, such as elite capture, and absenteeism of staff. As an analytical basis, Williamson's discriminating alignment hypothesis is applied to assess the cost-effectiveness of different institutional arrangements for animal health services in view of both market failures and governance challenges. This framework is used to generate testable hypotheses on the appropriateness of different institutional arrangements for providing animal health services, depending on context-specific circumstances. Data from Uganda and Kenya on clinical veterinary services is used to provide an empirical test of these hypotheses and to demonstrate application of Williamson's transaction cost theory to veterinary service delivery. The paper concludes that strong public sector involvement, especially in building and strengthening a synergistic relation-based referral arrangement between paraprofessionals and veterinarians is imperative in improving animal health service delivery in developing countries. Copyright © 2015 Elsevier B.V. All rights reserved.
Reviews of theoretical frameworks: Challenges and judging the quality of theory application.
Hean, Sarah; Anderson, Liz; Green, Chris; John, Carol; Pitt, Richard; O'Halloran, Cath
2016-06-01
Rigorous reviews of available information, from a range of resources, are required to support medical and health educators in their decision making. The aim of this article is to highlight the importance of a review of theoretical frameworks specifically as a supplement to reviews that focus on a synthesis of the empirical evidence alone. Establishing a shared understanding of theory as a concept is highlighted as a challenge and some practical strategies to achieving this are presented. This article also introduces the concept of theoretical quality, arguing that a critique of how theory is applied should complement the methodological appraisal of the literature in a review. We illustrate the challenge of establishing a shared meaning of theory through reference to experiences of an on-going review of this kind conducted in the field of interprofessional education (IPE) and use a high scoring paper selected in this review to illustrate how theoretical quality can be assessed. In reaching a shared understanding of theory as a concept, practical strategies that promote experiential and practical ways of knowing are required in addition to more propositional ways of sharing knowledge. Concepts of parsimony, testability, operational adequacy and empirical adequacy are explored as concepts that establish theoretical quality. Reviews of theoretical frameworks used in medical education are required to inform educational practice. Review teams should make time and effort to reach a shared understanding of the term theory. Theory reviews, and reviews more widely, should add an assessment of theory application to the protocol of their review method.
Expanding our Lens: Female Pathways to Antisocial Behavior in Adolescence and Adulthood
Javdani, Shabnam; Sadeh, Naomi; Verona, Edelyn
2012-01-01
Women and girls’ engagement in antisocial behavior represents a psychological issue of great concern given the radiating impact that women’s antisociality can have on individuals, families, and communities. Despite its importance and relevance for psychological science, this topic has received limited attention to date and no systematic review of risk factors exists. The present paper aims to systematically review the empirical literature informing risk factors relevant to women’s antisocial behavior, with a focus on adolescence and adulthood. Primary aims are to 1) review empirical literatures on risk factors for female antisocial behavior across multiple levels of influence (e.g., person-level characteristics, risky family factors, and gender-salient contexts) and fields of study (e.g., psychology, sociology); 2) evaluate the relevance of each factor for female antisocial behavior; and 3) incorporate an analysis of how gender at both the individual and ecological level shapes pathways to antisocial behavior in women and girls. We conclude that women’s antisocial behavior is best-understood as being influenced by person-level or individual vulnerabilities, risky family factors, and exposure to gender-salient interpersonal contexts, and underscore the importance of examining women’s antisocial behavior through an expanded lens that views gender as an individual level attribute as well as a social category that organizes the social context in ways that may promote engagement in antisocial behavior. Based on the present systematic review, an integrative pathway model is proposed toward the goal of synthesizing current knowledge and generating testable hypotheses for future research. PMID:22001339
Recent ecological responses to climate change support predictions of high extinction risk
Maclean, Ilya M. D.; Wilson, Robert J.
2011-01-01
Predicted effects of climate change include high extinction risk for many species, but confidence in these predictions is undermined by a perceived lack of empirical support. Many studies have now documented ecological responses to recent climate change, providing the opportunity to test whether the magnitude and nature of recent responses match predictions. Here, we perform a global and multitaxon metaanalysis to show that empirical evidence for the realized effects of climate change supports predictions of future extinction risk. We use International Union for Conservation of Nature (IUCN) Red List criteria as a common scale to estimate extinction risks from a wide range of climate impacts, ecological responses, and methods of analysis, and we compare predictions with observations. Mean extinction probability across studies making predictions of the future effects of climate change was 7% by 2100 compared with 15% based on observed responses. After taking account of possible bias in the type of climate change impact analyzed and the parts of the world and taxa studied, there was less discrepancy between the two approaches: predictions suggested a mean extinction probability of 10% across taxa and regions, whereas empirical evidence gave a mean probability of 14%. As well as mean overall extinction probability, observations also supported predictions in terms of variability in extinction risk and the relative risk associated with broad taxonomic groups and geographic regions. These results suggest that predictions are robust to methodological assumptions and provide strong empirical support for the assertion that anthropogenic climate change is now a major threat to global biodiversity. PMID:21746924
Recent ecological responses to climate change support predictions of high extinction risk.
Maclean, Ilya M D; Wilson, Robert J
2011-07-26
Predicted effects of climate change include high extinction risk for many species, but confidence in these predictions is undermined by a perceived lack of empirical support. Many studies have now documented ecological responses to recent climate change, providing the opportunity to test whether the magnitude and nature of recent responses match predictions. Here, we perform a global and multitaxon metaanalysis to show that empirical evidence for the realized effects of climate change supports predictions of future extinction risk. We use International Union for Conservation of Nature (IUCN) Red List criteria as a common scale to estimate extinction risks from a wide range of climate impacts, ecological responses, and methods of analysis, and we compare predictions with observations. Mean extinction probability across studies making predictions of the future effects of climate change was 7% by 2100 compared with 15% based on observed responses. After taking account of possible bias in the type of climate change impact analyzed and the parts of the world and taxa studied, there was less discrepancy between the two approaches: predictions suggested a mean extinction probability of 10% across taxa and regions, whereas empirical evidence gave a mean probability of 14%. As well as mean overall extinction probability, observations also supported predictions in terms of variability in extinction risk and the relative risk associated with broad taxonomic groups and geographic regions. These results suggest that predictions are robust to methodological assumptions and provide strong empirical support for the assertion that anthropogenic climate change is now a major threat to global biodiversity.
Component-based model to predict aerodynamic noise from high-speed train pantographs
NASA Astrophysics Data System (ADS)
Latorre Iglesias, E.; Thompson, D. J.; Smith, M. G.
2017-04-01
At typical speeds of modern high-speed trains the aerodynamic noise produced by the airflow over the pantograph is a significant source of noise. Although numerical models can be used to predict this they are still very computationally intensive. A semi-empirical component-based prediction model is proposed to predict the aerodynamic noise from train pantographs. The pantograph is approximated as an assembly of cylinders and bars with particular cross-sections. An empirical database is used to obtain the coefficients of the model to account for various factors: incident flow speed, diameter, cross-sectional shape, yaw angle, rounded edges, length-to-width ratio, incoming turbulence and directivity. The overall noise from the pantograph is obtained as the incoherent sum of the predicted noise from the different pantograph struts. The model is validated using available wind tunnel noise measurements of two full-size pantographs. The results show the potential of the semi-empirical model to be used as a rapid tool to predict aerodynamic noise from train pantographs.
A prospective earthquake forecast experiment for Japan
NASA Astrophysics Data System (ADS)
Yokoi, Sayoko; Nanjo, Kazuyoshi; Tsuruoka, Hiroshi; Hirata, Naoshi
2013-04-01
One major focus of the current Japanese earthquake prediction research program (2009-2013) is to move toward creating testable earthquake forecast models. For this purpose we started an experiment of forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan, and to conduct verifiable prospective tests of their model performance. On 1 November in 2009, we started the 1st earthquake forecast testing experiment for the Japan area. We use the unified JMA catalogue compiled by the Japan Meteorological Agency as authorized catalogue. The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called All Japan, Mainland, and Kanto. A total of 91 models were submitted to CSEP-Japan, and are evaluated with the CSEP official suite of tests about forecast performance. In this presentation, we show the results of the experiment of the 3-month testing class for 5 rounds. HIST-ETAS7pa, MARFS and RI10K models corresponding to the All Japan, Mainland and Kanto regions showed the best score based on the total log-likelihood. It is also clarified that time dependency of model parameters is no effective factor to pass the CSEP consistency tests for the 3-month testing class in all regions. Especially, spatial distribution in the All Japan region was too difficult to pass consistency test due to multiple events at a bin. Number of target events for a round in the Mainland region tended to be smaller than model's expectation during all rounds, which resulted in rejections of consistency test because of overestimation. In the Kanto region, pass ratios of consistency tests in each model showed more than 80%, which was associated with good balanced forecasting of event number and spatial distribution. Due to the multiple rounds of the experiment, we are now understanding the stability of models, robustness of model selection and earthquake predictability in each region beyond stochastic fluctuations of seismicity. We plan to use the results for design of 3 dimensional earthquake forecasting model in Kanto region, which is supported by the special project for reducing vulnerability for urban mega earthquake disasters from Ministy of Education, Culture, Sports and Technology of Japan.
Ecological Forecasting in Chesapeake Bay: Using a Mechanistic-Empirical Modelling Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C. W.; Hood, Raleigh R.; Long, Wen
The Chesapeake Bay Ecological Prediction System (CBEPS) automatically generates daily nowcasts and three-day forecasts of several environmental variables, such as sea-surface temperature and salinity, the concentrations of chlorophyll, nitrate, and dissolved oxygen, and the likelihood of encountering several noxious species, including harmful algal blooms and water-borne pathogens, for the purpose of monitoring the Bay's ecosystem. While the physical and biogeochemical variables are forecast mechanistically using the Regional Ocean Modeling System configured for the Chesapeake Bay, the species predictions are generated using a novel mechanistic empirical approach, whereby real-time output from the coupled physical biogeochemical model drives multivariate empirical habitat modelsmore » of the target species. The predictions, in the form of digital images, are available via the World Wide Web to interested groups to guide recreational, management, and research activities. Though full validation of the integrated forecasts for all species is still a work in progress, we argue that the mechanistic–empirical approach can be used to generate a wide variety of short-term ecological forecasts, and that it can be applied in any marine system where sufficient data exist to develop empirical habitat models. This paper provides an overview of this system, its predictions, and the approach taken.« less
Do We Know the Actual Magnetopause Position for Typical Solar Wind Conditions?
NASA Technical Reports Server (NTRS)
Samsonov, A. A.; Gordeev, E.; Tsyganenko, N. A.; Safrankova, J.; Nemecek, Z.; Simunek, J.; Sibeck, D. G.; Toth, G.; Merkin, V. G.; Raeder, J.
2016-01-01
We compare predicted magnetopause positions at the subsolar point and four reference points in the terminator plane obtained from several empirical and numerical MHD (magnetohydrodynamics) models. Empirical models using various sets of magnetopause crossings and making different assumptions about the magnetopause shape predict significantly different magnetopause positions (with a scatter greater than 1 Earth radius (R (sub E)) even at the subsolar point. Axisymmetric magnetopause models cannot reproduce the cusp indentations or the changes related to the dipole tilt effect, and most of them predict the magnetopause closer to the Earth than non axisymmetric models for typical solar wind conditions and zero tilt angle. Predictions of two global non axisymmetric models do not match each other, and the models need additional verification. MHD models often predict the magnetopause closer to the Earth than the non axisymmetric empirical models, but the predictions of MHD simulations may need corrections for the ring current effect and decreases of the solar wind pressure that occur in the foreshock. Comparing MHD models in which the ring current magnetic field is taken into account with the empirical Lin et al. model, we find that the differences in the reference point positions predicted by these models are relatively small for B (sub z) equals 0 (note: B (sub z) is when the Earth's magnetic field points north versus Sun's magnetic field pointing south). Therefore, we assume that these predictions indicate the actual magnetopause position, but future investigations are still needed.
On the predictability of land surface fluxes from meteorological variables
NASA Astrophysics Data System (ADS)
Haughton, Ned; Abramowitz, Gab; Pitman, Andy J.
2018-01-01
Previous research has shown that land surface models (LSMs) are performing poorly when compared with relatively simple empirical models over a wide range of metrics and environments. Atmospheric driving data appear to provide information about land surface fluxes that LSMs are not fully utilising. Here, we further quantify the information available in the meteorological forcing data that are used by LSMs for predicting land surface fluxes, by interrogating FLUXNET data, and extending the benchmarking methodology used in previous experiments. We show that substantial performance improvement is possible for empirical models using meteorological data alone, with no explicit vegetation or soil properties, thus setting lower bounds on a priori expectations on LSM performance. The process also identifies key meteorological variables that provide predictive power. We provide an ensemble of empirical benchmarks that are simple to reproduce and provide a range of behaviours and predictive performance, acting as a baseline benchmark set for future studies. We reanalyse previously published LSM simulations and show that there is more diversity between LSMs than previously indicated, although it remains unclear why LSMs are broadly performing so much worse than simple empirical models.
NASA Astrophysics Data System (ADS)
Phillips, C. B.; Jerolmack, D. J.
2017-12-01
Understanding when coarse sediment begins to move in a river is essential for linking rivers to the evolution of mountainous landscapes. Unfortunately, the threshold of surface particle motion is notoriously difficult to measure in the field. However, recent studies have shown that the threshold of surface motion is empirically correlated with channel slope, a property that is easy to measure and readily available from the literature. These studies have thoroughly examined the mechanistic underpinnings behind the observed correlation and produced suitably complex models. These models are difficult to implement for natural rivers using widely available data, and thus others have treated the empirical regression between slope and the threshold of motion as a predictive model. We note that none of the authors of the original studies exploring this correlation suggested their empirical regressions be used in a predictive fashion, nevertheless these regressions between slope and the threshold of motion have found their way into numerous recent studies engendering potentially spurious conclusions. We demonstrate that there are two significant problems with using these empirical equations for prediction: (1) the empirical regressions are based on a limited sampling of the phase space of bed-load rivers and (2) the empirical measurements of bankfull and critical shear stresses are paired. The upshot of these problems limits the empirical relations predictive capacity to field sites drawn from the same region of the bed-load river phase space and that the paired nature of the data introduces a spurious correlation when considering the ratio of bankfull to critical shear stress. Using a large compilation of bed-load river hydraulic geometry data, we demonstrate that the variation within independently measured values of the threshold of motion changes systematically with bankfull shields stress and not channel slope. Additionally, we highlight using several recent datasets the potential pitfalls that one can encounter when using simplistic empirical regressions to predict the threshold of motion showing that while these concerns could be construed as subtle the resulting implications can be substantial.
The U.S. Earthquake Prediction Program
Wesson, R.L.; Filson, J.R.
1981-01-01
There are two distinct motivations for earthquake prediction. The mechanistic approach aims to understand the processes leading to a large earthquake. The empirical approach is governed by the immediate need to protect lives and property. With our current lack of knowledge about the earthquake process, future progress cannot be made without gathering a large body of measurements. These are required not only for the empirical prediction of earthquakes, but also for the testing and development of hypotheses that further our understanding of the processes at work. The earthquake prediction program is basically a program of scientific inquiry, but one which is motivated by social, political, economic, and scientific reasons. It is a pursuit that cannot rely on empirical observations alone nor can it carried out solely on a blackboard or in a laboratory. Experiments must be carried out in the real Earth.
Life histories of hosts and pathogens predict patterns in tropical fungal plant diseases.
García-Guzmán, Graciela; Heil, Martin
2014-03-01
Plant pathogens affect the fitness of their hosts and maintain biodiversity. However, we lack theories to predict the type and intensity of infections in wild plants. Here we demonstrate using fungal pathogens of tropical plants that an examination of the life histories of hosts and pathogens can reveal general patterns in their interactions. Fungal infections were more commonly reported for light-demanding than for shade-tolerant species and for evergreen rather than for deciduous hosts. Both patterns are consistent with classical defence theory, which predicts lower resistance in fast-growing species and suggests that the deciduous habit can reduce enemy populations. In our literature survey, necrotrophs were found mainly to infect shade-tolerant woody species whereas biotrophs dominated in light-demanding herbaceous hosts. Far-red signalling and its inhibitory effects on jasmonic acid signalling are likely to explain this phenomenon. Multiple changes between the necrotrophic and the symptomless endophytic lifestyle at the ecological and evolutionary scale indicate that endophytes should be considered when trying to understand large-scale patterns in the fungal infections of plants. Combining knowledge about the molecular mechanisms of pathogen resistance with classical defence theory enables the formulation of testable predictions concerning general patterns in the infections of wild plants by fungal pathogens. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.
Multi-omics approach identifies molecular mechanisms of plant-fungus mycorrhizal interaction
Larsen, Peter E.; Sreedasyam, Avinash; Trivedi, Geetika; ...
2016-01-19
In mycorrhizal symbiosis, plant roots form close, mutually beneficial interactions with soil fungi. Before this mycorrhizal interaction can be established however, plant roots must be capable of detecting potential beneficial fungal partners and initiating the gene expression patterns necessary to begin symbiosis. To predict a plant root – mycorrhizal fungi sensor systems, we analyzed in vitro experiments of Populus tremuloides (aspen tree) and Laccaria bicolor (mycorrhizal fungi) interaction and leveraged over 200 previously published transcriptomic experimental data sets, 159 experimentally validated plant transcription factor binding motifs, and more than 120-thousand experimentally validated protein-protein interactions to generate models of pre-mycorrhizal sensormore » systems in aspen root. These sensor mechanisms link extracellular signaling molecules with gene regulation through a network comprised of membrane receptors, signal cascade proteins, transcription factors, and transcription factor biding DNA motifs. Modeling predicted four pre-mycorrhizal sensor complexes in aspen that interact with fifteen transcription factors to regulate the expression of 1184 genes in response to extracellular signals synthesized by Laccaria. Predicted extracellular signaling molecules include common signaling molecules such as phenylpropanoids, salicylate, and, jasmonic acid. Lastly, this multi-omic computational modeling approach for predicting the complex sensory networks yielded specific, testable biological hypotheses for mycorrhizal interaction signaling compounds, sensor complexes, and mechanisms of gene regulation.« less
Multi-omics approach identifies molecular mechanisms of plant-fungus mycorrhizal interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, Peter E.; Sreedasyam, Avinash; Trivedi, Geetika
In mycorrhizal symbiosis, plant roots form close, mutually beneficial interactions with soil fungi. Before this mycorrhizal interaction can be established however, plant roots must be capable of detecting potential beneficial fungal partners and initiating the gene expression patterns necessary to begin symbiosis. To predict a plant root – mycorrhizal fungi sensor systems, we analyzed in vitro experiments of Populus tremuloides (aspen tree) and Laccaria bicolor (mycorrhizal fungi) interaction and leveraged over 200 previously published transcriptomic experimental data sets, 159 experimentally validated plant transcription factor binding motifs, and more than 120-thousand experimentally validated protein-protein interactions to generate models of pre-mycorrhizal sensormore » systems in aspen root. These sensor mechanisms link extracellular signaling molecules with gene regulation through a network comprised of membrane receptors, signal cascade proteins, transcription factors, and transcription factor biding DNA motifs. Modeling predicted four pre-mycorrhizal sensor complexes in aspen that interact with fifteen transcription factors to regulate the expression of 1184 genes in response to extracellular signals synthesized by Laccaria. Predicted extracellular signaling molecules include common signaling molecules such as phenylpropanoids, salicylate, and, jasmonic acid. Lastly, this multi-omic computational modeling approach for predicting the complex sensory networks yielded specific, testable biological hypotheses for mycorrhizal interaction signaling compounds, sensor complexes, and mechanisms of gene regulation.« less
Why is there a dearth of close-in planets around fast-rotating stars?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teitler, Seth; Königl, Arieh, E-mail: satelite@gmail.com, E-mail: akonigl@uchicago.edu
2014-05-10
We propose that the reported dearth of Kepler objects of interest (KOIs) with orbital periods P {sub orb} ≲ 2-3 days around stars with rotation periods P {sub rot} ≲ 5-10 days can be attributed to tidal ingestion of close-in planets by their host stars. We show that the planet distribution in this region of the log P {sub orb}-log P {sub rot} plane is qualitatively reproduced with a model that incorporates tidal interaction and magnetic braking as well as the dependence on the stellar core-envelope coupling timescale. We demonstrate the consistency of this scenario with the inferred break inmore » the P {sub orb} distribution of close-in KOIs and point out a potentially testable prediction of this interpretation.« less
The Synchrotron Shock Model Confronts a "Line of Death" in the BATSE Gamma-Ray Burst Data
NASA Technical Reports Server (NTRS)
Preece, Robert D.; Briggs, Michael S.; Mallozzi, Robert S.; Pendleton, Geoffrey N.; Paciesas, W. S.; Band, David L.
1998-01-01
The synchrotron shock model (SSM) for gamma-ray burst emission makes a testable prediction: that the observed low-energy power-law photon number spectral index cannot exceed -2/3 (where the photon model is defined with a positive index: $dN/dE \\propto E{alpha}$). We have collected time-resolved spectral fit parameters for over 100 bright bursts observed by the Burst And Transient Source Experiment on board the {\\it Compton Gamma Ray Observatory}. Using this database, we find 23 bursts in which the spectral index limit of the SSM is violated, We discuss elements of the analysis methodology that affect the robustness of this result, as well as some of the escape hatches left for the SSM by theory.
Fault Management Technology Maturation for NASA's Constellation Program
NASA Technical Reports Server (NTRS)
Waterman, Robert D.
2010-01-01
This slide presentation reviews the maturation of fault management technology in preparation for the Constellation Program. There is a review of the Space Shuttle Main Engine (SSME) and a discussion of a couple of incidents with the shuttle main engine and tanking that indicated the necessity for predictive maintenance. Included is a review of the planned Ares I-X Ground Diagnostic Prototype (GDP) and further information about detection and isolation of faults using Testability Engineering and Maintenance System (TEAMS). Another system that being readied for use that detects anomalies, the Inductive Monitoring System (IMS). The IMS automatically learns how the system behaves and alerts operations it the current behavior is anomalous. The comparison of STS-83 and STS-107 (i.e., the Columbia accident) is shown as an example of the anomaly detection capabilities.
Tong, Xiuli; McBride, Catherine
2017-07-01
Following a review of contemporary models of word-level processing for reading and their limitations, we propose a new hypothetical model of Chinese character reading, namely, the graded lexical space mapping model that characterizes how sublexical radicals and lexical information are involved in Chinese character reading development. The underlying assumption of this model is that Chinese character recognition is a process of competitive mappings of phonology, semantics, and orthography in both lexical and sublexical systems, operating as functions of statistical properties of print input based on the individual's specific level of reading. This model leads to several testable predictions concerning how the quasiregularity and continuity of Chinese-specific radicals are organized in memory for both child and adult readers at different developmental stages of reading.
Observational exclusion of a consistent loop quantum cosmology scenario
NASA Astrophysics Data System (ADS)
Bolliet, Boris; Barrau, Aurélien; Grain, Julien; Schander, Susanne
2016-06-01
It is often argued that inflation erases all the information about what took place before it started. Quantum gravity, relevant in the Planck era, seems therefore mostly impossible to probe with cosmological observations. In general, only very ad hoc scenarios or hyper fine-tuned initial conditions can lead to observationally testable theories. Here we consider a well-defined and well-motivated candidate quantum cosmology model that predicts inflation. Using the most recent observational constraints on the cosmic microwave background B-modes, we show that the model is excluded for all its parameter space, without any tuning. Some important consequences are drawn for the deformed algebra approach to loop quantum cosmology. We emphasize that neither loop quantum cosmology in general nor loop quantum gravity are disfavored by this study but their falsifiability is established.
Tripp, Gail; Wickens, Jeff R
2008-07-01
This review considers the hypothesis that changes in dopamine signalling might account for altered sensitivity to positive reinforcement in children with ADHD. The existing evidence regarding dopamine cell activity in relation to positive reinforcement is reviewed. We focus on the anticipatory firing of dopamine cells brought about by a transfer of dopamine cell responses to cues that precede reinforcers. It is proposed that in children with ADHD there is diminished anticipatory dopamine cell firing, which we call the dopamine transfer deficit (DTD). The DTD theory leads to specific and testable predictions for human and animal research. The extent to which DTD explains symptoms of ADHD and effects of pharmacological interventions is discussed. We conclude by considering the neural changes underlying the etiology of DTD.
Causes and consequences of reduced blood volume in space flight - A multi-discipline modeling study
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1983-01-01
A group of mathematical models of various physiological systems have been developed and applied to studying problems associated with adaptation to weightlessness. One biomedical issue which could be addressed by at least three of these models from varying perspectives was the reduction in blood volume that universally occurs in astronauts. Accordingly, models of fluid-electrolyte, erythropoiesis, and cardiovascular regulation were employed to study the causes and consequences of blood volume loss during space flight. This analysis confirms the notion that alterations of blood volume are central to an understanding of adaptation to prolonged space flight. More importantly, the modeling studies resulted in specific hypotheses accounting for plasma volume and red cell mass losses and testable predictions concerning the behavior of the circulatory system.
Computational Psychiatry of ADHD: Neural Gain Impairments across Marrian Levels of Analysis
Hauser, Tobias U.; Fiore, Vincenzo G.; Moutoussis, Michael; Dolan, Raymond J.
2016-01-01
Attention-deficit hyperactivity disorder (ADHD), one of the most common psychiatric disorders, is characterised by unstable response patterns across multiple cognitive domains. However, the neural mechanisms that explain these characteristic features remain unclear. Using a computational multilevel approach, we propose that ADHD is caused by impaired gain modulation in systems that generate this phenotypic increased behavioural variability. Using Marr's three levels of analysis as a heuristic framework, we focus on this variable behaviour, detail how it can be explained algorithmically, and how it might be implemented at a neural level through catecholamine influences on corticostriatal loops. This computational, multilevel, approach to ADHD provides a framework for bridging gaps between descriptions of neuronal activity and behaviour, and provides testable predictions about impaired mechanisms. PMID:26787097
Thalamocortical mechanisms for integrating musical tone and rhythm
Musacchia, Gabriella; Large, Edward
2014-01-01
Studies over several decades have identified many of the neuronal substrates of music perception by pursuing pitch and rhythm perception separately. Here, we address the question of how these mechanisms interact, starting with the observation that the peripheral pathways of the so-called “Core” and “Matrix” thalamocortical system provide the anatomical bases for tone and rhythm channels. We then examine the hypothesis that these specialized inputs integrate tonal content within rhythm context in auditory cortex using classical types of “driving” and “modulatory” mechanisms. This hypothesis provides a framework for deriving testable predictions about the early stages of music processing. Furthermore, because thalamocortical circuits are shared by speech and music processing, such a model provides concrete implications for how music experience contributes to the development of robust speech encoding mechanisms. PMID:24103509
Complex Causal Process Diagrams for Analyzing the Health Impacts of Policy Interventions
Joffe, Michael; Mindell, Jennifer
2006-01-01
Causal diagrams are rigorous tools for controlling confounding. They also can be used to describe complex causal systems, which is done routinely in communicable disease epidemiology. The use of change diagrams has advantages over static diagrams, because change diagrams are more tractable, relate better to interventions, and have clearer interpretations. Causal diagrams are a useful basis for modeling. They make assumptions explicit, provide a framework for analysis, generate testable predictions, explore the effects of interventions, and identify data gaps. Causal diagrams can be used to integrate different types of information and to facilitate communication both among public health experts and between public health experts and experts in other fields. Causal diagrams allow the use of instrumental variables, which can help control confounding and reverse causation. PMID:16449586
Sneutrino dark matter in gauged inverse seesaw models for neutrinos.
An, Haipeng; Dev, P S Bhupal; Cai, Yi; Mohapatra, R N
2012-02-24
Extending the minimal supersymmetric standard model to explain small neutrino masses via the inverse seesaw mechanism can lead to a new light supersymmetric scalar partner which can play the role of inelastic dark matter (IDM). It is a linear combination of the superpartners of the neutral fermions in the theory (the light left-handed neutrino and two heavy standard model singlet neutrinos) which can be very light with mass in ~5-20 GeV range, as suggested by some current direct detection experiments. The IDM in this class of models has keV-scale mass splitting, which is intimately connected to the small Majorana masses of neutrinos. We predict the differential scattering rate and annual modulation of the IDM signal which can be testable at future germanium- and xenon-based detectors.
Constraining the loop quantum gravity parameter space from phenomenology
NASA Astrophysics Data System (ADS)
Brahma, Suddhasattwa; Ronco, Michele
2018-03-01
Development of quantum gravity theories rarely takes inputs from experimental physics. In this letter, we take a small step towards correcting this by establishing a paradigm for incorporating putative quantum corrections, arising from canonical quantum gravity (QG) theories, in deriving falsifiable modified dispersion relations (MDRs) for particles on a deformed Minkowski space-time. This allows us to differentiate and, hopefully, pick between several quantization choices via testable, state-of-the-art phenomenological predictions. Although a few explicit examples from loop quantum gravity (LQG) (such as the regularization scheme used or the representation of the gauge group) are shown here to establish the claim, our framework is more general and is capable of addressing other quantization ambiguities within LQG and also those arising from other similar QG approaches.
Soy-Based Therapeutic Baby Formulas: Testable Hypotheses Regarding the Pros and Cons
Westmark, Cara J.
2017-01-01
Soy-based infant formulas have been consumed in the United States since 1909, and currently constitute a significant portion of the infant formula market. There are efforts underway to generate genetically modified soybeans that produce therapeutic agents of interest with the intent to deliver those agents in a soy-based infant formula platform. The threefold purpose of this review article is to first discuss the pros and cons of soy-based infant formulas, then present testable hypotheses to discern the suitability of a soy platform for drug delivery in babies, and finally start a discussion to inform public policy on this important area of infant nutrition. PMID:28149839
Domain fusion analysis by applying relational algebra to protein sequence and domain databases
Truong, Kevin; Ikura, Mitsuhiko
2003-01-01
Background Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages on these efforts will become increasingly powerful. Results This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypothesis. Results can be viewed at . Conclusion As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time. PMID:12734020
Optimal flight initiation distance.
Cooper, William E; Frederick, William G
2007-01-07
Decisions regarding flight initiation distance have received scant theoretical attention. A graphical model by Ydenberg and Dill (1986. The economics of fleeing from predators. Adv. Stud. Behav. 16, 229-249) that has guided research for the past 20 years specifies when escape begins. In the model, a prey detects a predator, monitors its approach until costs of escape and of remaining are equal, and then flees. The distance between predator and prey when escape is initiated (approach distance = flight initiation distance) occurs where decreasing cost of remaining and increasing cost of fleeing intersect. We argue that prey fleeing as predicted cannot maximize fitness because the best prey can do is break even during an encounter. We develop two optimality models, one applying when all expected future contribution to fitness (residual reproductive value) is lost if the prey dies, the other when any fitness gained (increase in expected RRV) during the encounter is retained after death. Both models predict optimal flight initiation distance from initial expected fitness, benefits obtainable during encounters, costs of escaping, and probability of being killed. Predictions match extensively verified predictions of Ydenberg and Dill's (1986) model. Our main conclusion is that optimality models are preferable to break-even models because they permit fitness maximization, offer many new testable predictions, and allow assessment of prey decisions in many naturally occurring situations through modification of benefit, escape cost, and risk functions.
Reliability/maintainability/testability design for dormancy
NASA Astrophysics Data System (ADS)
Seman, Robert M.; Etzl, Julius M.; Purnell, Arthur W.
1988-05-01
This document has been prepared as a tool for designers of dormant military equipment and systems. The purpose of this handbook is to provide design engineers with Reliability/Maintainability/Testability design guidelines for systems which spend significant portions of their life cycle in a dormant state. The dormant state is defined as a nonoperating mode where a system experiences very little or no electrical stress. The guidelines in this report present design criteria in the following categories: (1) Part Selection and Control; (2) Derating Practices; (3) Equipment/System Packaging; (4) Transportation and Handling; (5) Maintainability Design; (6) Testability Design; (7) Evaluation Methods for In-Plant and Field Evaluation; and (8) Product Performance Agreements. Whereever applicable, design guidelines for operating systems were included with the dormant design guidelines. This was done in an effort to produce design guidelines for a more complete life cycle. Although dormant systems spend significant portions of their life cycle in a nonoperating mode, the designer must design the system for the complete life cycle, including nonoperating as well as operating modes. The guidelines are primarily intended for use in the design of equipment composed of electronic parts and components. However, they can also be used for the design of systems which encompass both electronic and nonelectronic parts, as well as for the modification of existing systems.
Delay test generation for synchronous sequential circuits
NASA Astrophysics Data System (ADS)
Devadas, Srinivas
1989-05-01
We address the problem of generating tests for delay faults in non-scan synchronous sequential circuits. Delay test generation for sequential circuits is a considerably more difficult problem than delay testing of combinational circuits and has received much less attention. In this paper, we present a method for generating test sequences to detect delay faults in sequential circuits using the stuck-at fault sequential test generator STALLION. The method is complete in that it will generate a delay test sequence for a targeted fault given sufficient CPU time, if such a sequence exists. We term faults for which no delay test sequence exists, under out test methodology, sequentially delay redundant. We describe means of eliminating sequential delay redundancies in logic circuits. We present a partial-scan methodology for enhancing the testability of difficult-to-test of untestable sequential circuits, wherein a small number of flip-flops are selected and made controllable/observable. The selection process guarantees the elimination of all sequential delay redundancies. We show that an intimate relationship exists between state assignment and delay testability of a sequential machine. We describe a state assignment algorithm for the synthesis of sequential machines with maximal delay fault testability. Preliminary experimental results using the test generation, partial-scan and synthesis algorithm are presented.
Huang, Dan; Chen, Xuejuan; Gong, Qi; Yuan, Chaoqun; Ding, Hui; Bai, Jing; Zhu, Hui; Fu, Zhujun; Yu, Rongbin; Liu, Hu
2016-01-01
This survey was conducted to determine the testability, distribution and associations of ocular biometric parameters in Chinese preschool children. Ocular biometric examinations, including the axial length (AL) and corneal radius of curvature (CR), were conducted on 1,688 3-year-old subjects by using an IOLMaster in August 2015. Anthropometric parameters, including height and weight, were measured according to a standardized protocol, and body mass index (BMI) was calculated. The testability was 93.7% for the AL and 78.6% for the CR overall, and both measures improved with age. Girls performed slightly better in AL measurements (P = 0.08), and the difference in CR was statistically significant (P < 0.05). The AL distribution was normal in girls (P = 0.12), whereas it was not in boys (P < 0.05). For CR1, all subgroups presented normal distributions (P = 0.16 for boys; P = 0.20 for girls), but the distribution varied when the subgroups were combined (P < 0.05). CR2 presented a normal distribution (P = 0.11), whereas the AL/CR ratio was abnormal (P < 0.001). Boys exhibited a significantly longer AL, a greater CR and a greater AL/CR ratio than girls (all P < 0.001). PMID:27384307
Selection of fire spread model for Russian fire behavior prediction system
Alexandra V. Volokitina; Kevin C. Ryan; Tatiana M. Sofronova; Mark A. Sofronov
2010-01-01
Mathematical modeling of fire behavior prediction is only possible if the models are supplied with an information database that provides spatially explicit input parameters for modeled area. Mathematical models can be of three kinds: 1) physical; 2) empirical; and 3) quasi-empirical (Sullivan, 2009). Physical models (Grishin, 1992) are of academic interest only because...
Questioning the Faith - Models and Prediction in Stream Restoration (Invited)
NASA Astrophysics Data System (ADS)
Wilcock, P.
2013-12-01
River management and restoration demand prediction at and beyond our present ability. Management questions, framed appropriately, can motivate fundamental advances in science, although the connection between research and application is not always easy, useful, or robust. Why is that? This presentation considers the connection between models and management, a connection that requires critical and creative thought on both sides. Essential challenges for managers include clearly defining project objectives and accommodating uncertainty in any model prediction. Essential challenges for the research community include matching the appropriate model to project duration, space, funding, information, and social constraints and clearly presenting answers that are actually useful to managers. Better models do not lead to better management decisions or better designs if the predictions are not relevant to and accepted by managers. In fact, any prediction may be irrelevant if the need for prediction is not recognized. The predictive target must be developed in an active dialog between managers and modelers. This relationship, like any other, can take time to develop. For example, large segments of stream restoration practice have remained resistant to models and prediction because the foundational tenet - that channels built to a certain template will be able to transport the supplied sediment with the available flow - has no essential physical connection between cause and effect. Stream restoration practice can be steered in a predictive direction in which project objectives are defined as predictable attributes and testable hypotheses. If stream restoration design is defined in terms of the desired performance of the channel (static or dynamic, sediment surplus or deficit), then channel properties that provide these attributes can be predicted and a basis exists for testing approximations, models, and predictions.
List, Jeffrey; Benedet, Lindino; Hanes, Daniel M.; Ruggiero, Peter
2009-01-01
Predictions of alongshore transport gradients are critical for forecasting shoreline change. At the previous ICCE conference, it was demonstrated that alongshore transport gradients predicted by the empirical CERC equation can differ substantially from predictions made by the hydrodynamics-based model Delft3D in the case of a simulated borrow pit on the shoreface. Here we use the Delft3D momentum balance to examine the reason for this difference. Alongshore advective flow accelerations in our Delft3D simulation are mainly driven by pressure gradients resulting from alongshore variations in wave height and setup, and Delft3D transport gradients are controlled by these flow accelerations. The CERC equation does not take this process into account, and for this reason a second empirical transport term is sometimes added when alongshore gradients in wave height are thought to be significant. However, our test case indicates that this second term does not properly predict alongshore transport gradients.
Stereoacuity of preschool children with and without vision disorders.
Ciner, Elise B; Ying, Gui-Shuang; Kulp, Marjean Taylor; Maguire, Maureen G; Quinn, Graham E; Orel-Bixler, Deborah; Cyert, Lynn A; Moore, Bruce; Huang, Jiayan
2014-03-01
To evaluate associations between stereoacuity and presence, type, and severity of vision disorders in Head Start preschool children and determine testability and levels of stereoacuity by age in children without vision disorders. Stereoacuity of children aged 3 to 5 years (n = 2898) participating in the Vision in Preschoolers (VIP) Study was evaluated using the Stereo Smile II test during a comprehensive vision examination. This test uses a two-alternative forced-choice paradigm with four stereoacuity levels (480 to 60 seconds of arc). Children were classified by the presence (n = 871) or absence (n = 2027) of VIP Study-targeted vision disorders (amblyopia, strabismus, significant refractive error, or unexplained reduced visual acuity), including type and severity. Median stereoacuity between groups and among severity levels of vision disorders was compared using Wilcoxon rank sum and Kruskal-Wallis tests. Testability and stereoacuity levels were determined for children without VIP Study-targeted disorders overall and by age. Children with VIP Study-targeted vision disorders had significantly worse median stereoacuity than that of children without vision disorders (120 vs. 60 seconds of arc, p < 0.001). Children with the most severe vision disorders had worse stereoacuity than that of children with milder disorders (median 480 vs. 120 seconds of arc, p < 0.001). Among children without vision disorders, testability was 99.6% overall, increasing with age to 100% for 5-year-olds (p = 0.002). Most of the children without vision disorders (88%) had stereoacuity at the two best disparities (60 or 120 seconds of arc); the percentage increasing with age (82% for 3-, 89% for 4-, and 92% for 5-year-olds; p < 0.001). The presence of any VIP Study-targeted vision disorder was associated with significantly worse stereoacuity in preschool children. Severe vision disorders were more likely associated with poorer stereopsis than milder or no vision disorders. Testability was excellent at all ages. These results support the validity of the Stereo Smile II for assessing random-dot stereoacuity in preschool children.
DeCamp, Matthew; Dredze, Mark; Chisolm, Margaret S; Berger, Zackary D
2014-01-01
Background Twitter is home to many health professionals who send messages about a variety of health-related topics. Amid concerns about physicians posting inappropriate content online, more in-depth knowledge about these messages is needed to understand health professionals’ behavior on Twitter. Objective Our goal was to characterize the content of Twitter messages, specifically focusing on health professionals and their tweets relating to health. Methods We performed an in-depth content analysis of 700 tweets. Qualitative content analysis was conducted on tweets by health users on Twitter. The primary objective was to describe the general type of content (ie, health-related versus non-health related) on Twitter authored by health professionals and further to describe health-related tweets on the basis of the type of statement made. Specific attention was given to whether a tweet was personal (as opposed to professional) or made a claim that users would expect to be supported by some level of medical evidence (ie, a “testable” claim). A secondary objective was to compare content types among different users, including patients, physicians, nurses, health care organizations, and others. Results Health-related users are posting a wide range of content on Twitter. Among health-related tweets, 53.2% (184/346) contained a testable claim. Of health-related tweets by providers, 17.6% (61/346) were personal in nature; 61% (59/96) made testable statements. While organizations and businesses use Twitter to promote their services and products, patient advocates are using this tool to share their personal experiences with health. Conclusions Twitter users in health-related fields tweet about both testable claims and personal experiences. Future work should assess the relationship between testable tweets and the actual level of evidence supporting them, including how Twitter users—especially patients—interpret the content of tweets posted by health providers. PMID:25591063
Prediction of Very High Reynolds Number Compressible Skin Friction
NASA Technical Reports Server (NTRS)
Carlson, John R.
1998-01-01
Flat plate skin friction calculations over a range of Mach numbers from 0.4 to 3.5 at Reynolds numbers from 16 million to 492 million using a Navier Stokes method with advanced turbulence modeling are compared with incompressible skin friction coefficient correlations. The semi-empirical correlation theories of van Driest; Cope; Winkler and Cha; and Sommer and Short T' are used to transform the predicted skin friction coefficients of solutions using two algebraic Reynolds stress turbulence models in the Navier-Stokes method PAB3D. In general, the predicted skin friction coefficients scaled well with each reference temperature theory though, overall the theory by Sommer and Short appeared to best collapse the predicted coefficients. At the lower Reynolds number 3 to 30 million, both the Girimaji and Shih, Zhu and Lumley turbulence models predicted skin-friction coefficients within 2% of the semi-empirical correlation skin friction coefficients. At the higher Reynolds numbers of 100 to 500 million, the turbulence models by Shih, Zhu and Lumley and Girimaji predicted coefficients that were 6% less and 10% greater, respectively, than the semi-empirical coefficients.
Novak, Mark; Wootton, J. Timothy; Doak, Daniel F.; Emmerson, Mark; Estes, James A.; Tinker, M. Timothy
2011-01-01
How best to predict the effects of perturbations to ecological communities has been a long-standing goal for both applied and basic ecology. This quest has recently been revived by new empirical data, new analysis methods, and increased computing speed, with the promise that ecologically important insights may be obtainable from a limited knowledge of community interactions. We use empirically based and simulated networks of varying size and connectance to assess two limitations to predicting perturbation responses in multispecies communities: (1) the inaccuracy by which species interaction strengths are empirically quantified and (2) the indeterminacy of species responses due to indirect effects associated with network size and structure. We find that even modest levels of species richness and connectance (∼25 pairwise interactions) impose high requirements for interaction strength estimates because system indeterminacy rapidly overwhelms predictive insights. Nevertheless, even poorly estimated interaction strengths provide greater average predictive certainty than an approach that uses only the sign of each interaction. Our simulations provide guidance in dealing with the trade-offs involved in maximizing the utility of network approaches for predicting dynamics in multispecies communities.
Is Ecosystem-Atmosphere Observation in Long-Term Networks actually Science?
NASA Astrophysics Data System (ADS)
Schmid, H. P. E.
2015-12-01
Science uses observations to build knowledge by testable explanations and predictions. The "scientific method" requires controlled systematic observation to examine questions, hypotheses and predictions. Thus, enquiry along the scientific method responds to questions of the type "what if …?" In contrast, long-term observation programs follow a different strategy: we commonly take great care to minimize our influence on the environment of our measurements, with the aim to maximize their external validity. We observe what we think are key variables for ecosystem-atmosphere exchange and ask questions such as "what happens next?" or "how did this happen?" This apparent deviation from the scientific method begs the question whether any explanations we come up with for the phenomena we observe are actually contributing to testable knowledge, or whether their value remains purely anecdotal. Here, we present examples to argue that, under certain conditions, data from long-term observations and observation networks can have equivalent or even higher scientific validity than controlled experiments. Internal validity is particularly enhanced if observations are combined with modeling. Long-term observations of ecosystem-atmosphere fluxes identify trends and temporal scales of variability. Observation networks reveal spatial patterns and variations, and long-term observation networks combine both aspects. A necessary condition for such observations to gain validity beyond the anecdotal is the requirement that the data are comparable: a comparison of two measured values, separated in time or space, must inform us objectively whether (e.g.) one value is larger than the other. In turn, a necessary condition for the comparability of data is the compatibility of the sensors and procedures used to generate them. Compatibility ensures that we compare "apples to apples": that measurements conducted in identical conditions give the same values (within suitable uncertainty intervals). In principle, a useful tool to achieve comparability and compatibility is the standardization of sensors and methods. However, due to the diversity of ecosystems and settings, standardization in ecosystem-atmosphere exchange is difficult. We discuss some of the challenges and pitfalls of standardization across networks.
Sexual imprinting: what strategies should we expect to see in nature?
Chaffee, Dalton W; Griffin, Hayes; Gilman, R Tucker
2013-12-01
Sexual imprinting occurs when juveniles learn mate preferences by observing the phenotypes of other members of their populations, and it is ubiquitous in nature. Imprinting strategies, that is which individuals and phenotypes are observed and how strong preferences become, vary among species. Imprinting can affect trait evolution and the probability of speciation, and different imprinting strategies are expected to have different effects. However, little is known about how and why different imprinting strategies evolve, or which strategies we should expect to see in nature. We used a mathematical model to study how the evolution of sexual imprinting depends on (1) imprinting costs and (2) the sex-specific fitness effects of the phenotype on which individuals imprint. We found that even small fixed costs prevent the evolution of sexual imprinting, but small relative costs do not. When imprinting does evolve, we identified the conditions under which females should evolve to imprint on their fathers, their mothers, or on other members of their populations. Our results provide testable hypotheses for empirical work and help to explain the conditions under which sexual imprinting might evolve to promote speciation. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.
Young, Jacob T N
2014-01-01
Recent scholarship has focused on the role of social status in peer groups to explain the fact that delinquency is disproportionately committed during adolescence. Yet, the precise mechanism linking adolescence, social status, and antisocial behavior is not well understood. Dual-taxonomy postulates a testable mechanism that links the sudden increase in risky behavior among adolescents to the social magnetism of a small group of persistently antisocial individuals, referred to here as the "role magnet" hypothesis. Using semi-parametric group-based trajectory modeling and growth-curve modeling, this study provides the first test of this hypothesis by examining physical violence and popularity trajectories for 1,845 male respondents age 11-32 from a nationally representative sample (54 % non-Hispanic White; 21 % non-Hispanic African American; 17 % Hispanic; 8 % Asian). Individuals assigned to a "chronic violence" trajectory group showed consistently lower average levels of popularity from 11 to 19. But, these same individuals experienced increases in popularity during early adolescence and subsequent declines in late adolescence. These findings are linked to current research examining social status as a mechanism generating antisocial behavior during adolescence and the consequences of delayed entry into adult roles.
Perfetto, Ralph; Woodside, Arch G
2009-09-01
The present study informs understanding of customer segmentation strategies by extending Twedt's heavy-half propositions to include a segment of users that represent less than 2% of all households-consumers demonstrating extremely frequent behavior (EFB). Extremely frequent behavior (EFB) theory provides testable propositions relating to the observation that few (2%) consumers in many product and service categories constitute more than 25% of the frequency of product or service use. Using casino gambling as an example for testing EFB theory, an analysis of national survey data shows that extremely frequent casino gamblers do exist and that less than 2% of all casino gamblers are responsible for nearly 25% of all casino gambling usage. Approximately 14% of extremely frequent casino users have very low-household income, suggesting somewhat paradoxical consumption patterns (where do very low-income users find the money to gamble so frequently?). Understanding the differences light, heavy, and extreme users and non-users can help marketers and policymakers identify and exploit "blue ocean" opportunities (Kim and Mauborgne, Blue ocean strategy, Harvard Business School Press, Boston, 2005), for example, creating effective strategies to convert extreme users into non-users or non-users into new users.
Emotional competence as antecedent to performance: a contingency framework.
Abraham, Rebecca
2004-05-01
Emotional intelligence is the ability to monitor one's own and others' thinking and actions. In this integrative review, the author seeks to determine the causes of the weak relationship between emotional intelligence and performance by positing that certain emotional competencies, rather than emotional intelligence, are the true predictors of performance. The author theorizes that emotional competencies (including self-control, resilience, social skills, conscientiousness, reliability, integrity, and motivation) interact with organizational climate and job demands or job autonomy to influence performance, as represented in the form of 5 empirically testable propositions. Self-control and emotional resilience are considered to delay the onset of a decline in performance from excessive job demands. Social skills, conscientiousness, reliability, and integrity assist to promote trust, which in turn may build cohesiveness among the members of work groups. Motivation may fuel job involvement in environments that promise psychological safety and psychological meaningfulness. A combination of superior social skills and conscientiousness may enhance the self-sacrifice of benevolent employees to heightened levels of dependability and consideration. Finally, emotional honesty, self-confidence, and emotional resilience can promote superior performance, if positive feedback is delivered in an informative manner, and can mitigate the adverse effects of negative feedback.
Carpiano, Richard M
2006-01-01
Within the past several years, a considerable body of research on social capital has emerged in public health. Although offering the potential for new insights into how community factors impact health and well being, this research has received criticism for being undertheorized and methodologically flawed. In an effort to address some of these limitations, this paper applies Pierre Bourdieu's (1986) [Bourdieu, P. (1986). Handbook of theory and research for the sociology of education (pp. 241-258). New York: Greenwood] social capital theory to create a conceptual model of neighborhood socioeconomic processes, social capital (resources inhered within social networks), and health. After briefly reviewing the social capital conceptualizations of Bourdieu and Putnam, I attempt to integrate these authors' theories to better understand how social capital might operate within neighborhoods or local areas. Next, I describe a conceptual model that incorporates this theoretical integration of social capital into a framework of neighborhood social processes as health determinants. Discussion focuses on the utility of this Bourdieu-based neighborhood social capital theory and model for examining several under-addressed issues of social capital in the neighborhood effects literature and generating specific, empirically testable hypotheses for future research.
Testability of evolutionary game dynamics based on experimental economics data
NASA Astrophysics Data System (ADS)
Wang, Yijia; Chen, Xiaojie; Wang, Zhijian
In order to better understand the dynamic processes of a real game system, we need an appropriate dynamics model, so to evaluate the validity of a model is not a trivial task. Here, we demonstrate an approach, considering the dynamical macroscope patterns of angular momentum and speed as the measurement variables, to evaluate the validity of various dynamics models. Using the data in real time Rock-Paper-Scissors (RPS) games experiments, we obtain the experimental dynamic patterns, and then derive the related theoretical dynamic patterns from a series of typical dynamics models respectively. By testing the goodness-of-fit between the experimental and theoretical patterns, the validity of the models can be evaluated. One of the results in our study case is that, among all the nonparametric models tested, the best-known Replicator dynamics model performs almost worst, while the Projection dynamics model performs best. Besides providing new empirical macroscope patterns of social dynamics, we demonstrate that the approach can be an effective and rigorous tool to test game dynamics models. Fundamental Research Funds for the Central Universities (SSEYI2014Z) and the National Natural Science Foundation of China (Grants No. 61503062).
a Bayesian Synthesis of Predictions from Different Models for Setting Water Quality Criteria
NASA Astrophysics Data System (ADS)
Arhonditsis, G. B.; Ecological Modelling Laboratory
2011-12-01
Skeptical views of the scientific value of modelling argue that there is no true model of an ecological system, but rather several adequate descriptions of different conceptual basis and structure. In this regard, rather than picking the single "best-fit" model to predict future system responses, we can use Bayesian model averaging to synthesize the forecasts from different models. Hence, by acknowledging that models from different areas of the complexity spectrum have different strengths and weaknesses, the Bayesian model averaging is an appealing approach to improve the predictive capacity and to overcome the ambiguity surrounding the model selection or the risk of basing ecological forecasts on a single model. Our study addresses this question using a complex ecological model, developed by Ramin et al. (2011; Environ Modell Softw 26, 337-353) to guide the water quality criteria setting process in the Hamilton Harbour (Ontario, Canada), along with a simpler plankton model that considers the interplay among phosphate, detritus, and generic phytoplankton and zooplankton state variables. This simple approach is more easily subjected to detailed sensitivity analysis and also has the advantage of fewer unconstrained parameters. Using Markov Chain Monte Carlo simulations, we calculate the relative mean standard error to assess the posterior support of the two models from the existing data. Predictions from the two models are then combined using the respective standard error estimates as weights in a weighted model average. The model averaging approach is used to examine the robustness of predictive statements made from our earlier work regarding the response of Hamilton Harbour to the different nutrient loading reduction strategies. The two eutrophication models are then used in conjunction with the SPAtially Referenced Regressions On Watershed attributes (SPARROW) watershed model. The Bayesian nature of our work is used: (i) to alleviate problems of spatiotemporal resolution mismatch between watershed and receiving waterbody models; and (ii) to overcome the conceptual or scale misalignment between processes of interest and supporting information. The proposed Bayesian approach provides an effective means of empirically estimating the relation between in-stream measurements of nutrient fluxes and the sources/sinks of nutrients within the watershed, while explicitly accounting for the uncertainty associated with the existing knowledge from the system along with the different types of spatial correlation typically underlying the parameter estimation of watershed models. Our modelling exercise offers the first estimates of the export coefficients and the delivery rates from the different subcatchments and thus generates testable hypotheses regarding the nutrient export "hot spots" in the studied watershed. Finally, we conduct modeling experiments that evaluate the potential improvement of the model parameter estimates and the decrease of the predictive uncertainty, if the uncertainty associated with the contemporary nutrient loading estimates is reduced. The lessons learned from this study will contribute towards the development of integrated modelling frameworks.
A study of fault prediction and reliability assessment in the SEL environment
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Patnaik, Debabrata
1986-01-01
An empirical study on estimation and prediction of faults, prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory environment (SEL) is presented. Fault estimation using empirical relationships and fault prediction using curve fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided, in order to make an early estimate of future debugging effort. This study concludes with the fault analysis, application of a reliability model, and analysis of a normalized metric for reliability assessment and reliability monitoring during development of software.
Yurek, Simeon; DeAngelis, Donald L.; Trexler, Joel C.; Klassen, Stephen; Larsen, Laurel G.
2016-01-01
In flood-pulsed ecosystems, hydrology and landscape structure mediate transfers of energy up the food chain by expanding and contracting in area, enabling spatial expansion and growth of fish populations during rising water levels, and subsequent concentration during the drying phase. Connectivity of flooded areas is dynamic as waters rise and fall, and is largely determined by landscape geomorphology and anisotropy. We developed a methodology for simulating fish dispersal and concentration on spatially-explicit, dynamic floodplain wetlands with pulsed food web dynamics, to evaluate how changes in connectivity through time contribute to the concentration of fish biomass that is essential for higher trophic levels. The model also tracks a connectivity index (DCI) over different compass directions to see if fish biomass dynamics can be related in a simple way to topographic pattern. We demonstrate the model for a seasonally flood-pulsed, oligotrophic system, the Everglades, where flow regimes have been greatly altered. Three dispersing populations of functional fish groups were simulated with empirically-based dispersal rules on two landscapes, and two twelve-year time series of managed water levels for those areas were applied. The topographies of the simulations represented intact and degraded ridge-and-slough landscapes (RSL). Simulation results showed large pulses of biomass concentration forming during the onset of the drying phase, when water levels were falling and fish began to converge into the sloughs. As water levels fell below the ridges, DCI declined over different directions, closing down dispersal lanes, and fish density spiked. Persistence of intermediate levels of connectivity on the intact RSL enabled persistent concentration events throughout the drying phase. The intact landscape also buffered effects of wet season population growth. Water level reversals on both landscapes negatively affected fish densities by depleting fish populations without allowing enough time for them to regenerate. Testable, spatiotemporal predictions of the timing, location, duration, and magnitude of fish concentration pulses were produced by the model, and can be applied to restoration planning.
MLIBlast: A program to empirically predict hypervelocity impact damage to the Space Station
NASA Technical Reports Server (NTRS)
Rule, William K.
1991-01-01
MLIBlast is described, which consists of a number of DOC PC based MIcrosoft BASIC program modules written to provide spacecraft designers with empirical predictions of space debris damage to orbiting spacecraft. The Spacecraft wall configuration is assumed to consist of multilayer insulation (MLI) placed between a Whipple style bumper and a pressure wall. Predictions are based on data sets of experimental results obtained from simulating debris impact on spacecraft. One module of MLIBlast facilitates creation of the data base of experimental results that is used by the damage prediction modules of the code. The user has a choice of three different prediction modules to predict damage to the bumper, the MLI, and the pressure wall.
Anger and its control in Graeco-Roman and modern psychology.
Schimmel, S
1979-11-01
Modern psychologists have studied the phenomena of anger and hostility with diverse methodologies and from a variety of theoretical orientations. The close relationships between anger and aggression, psychosomatic disorder and personal unhappiness, make the understanding and control of anger an important individual and social goal. For all of its sophistication and accomplishment, however, most of the modern research demonstrates, to its disadvantage, a lack of historical perspective with respect to the analysis and treatment of anger, whether normal or pathological. This attitude has deprived psychology of a rich source of empirical observations, intriguing, testable hypotheses, and ingenious techniques of treatment. Of the literature that has been neglected, the analyses of the emotion of anger in the writings of Greek and Roman moral philosophers, particularly Aristotle (4th century B.C.), Seneca (1st century A.D.) and Plutarch (early 2nd century A.D.) are of particular interest. Although modern analyses and methods of treatment are in some ways more refined and more quantitatively precise, and are often subjected to validation and modification by empirical-experimental tests, scientific psychology has, to date, contributed relatively little to the understanding and control of anger that is novel except for research on its physiological dimensions. We can still benefit from the insight, prescriptions and procedures of the classicists, who in some respects offer more powerful methods of control than the most recently published works. Naturally, the modern psychotherapist or behavior therapist can and must go beyond the ancients, as is inherent in all scientific and intellectual progress, but there are no scientific or rational grounds for ignoring them as has been done for 75 years.
Empirical models for the prediction of ground motion duration for intraplate earthquakes
NASA Astrophysics Data System (ADS)
Anbazhagan, P.; Neaz Sheikh, M.; Bajaj, Ketan; Mariya Dayana, P. J.; Madhura, H.; Reddy, G. R.
2017-07-01
Many empirical relationships for the earthquake ground motion duration were developed for interplate region, whereas only a very limited number of empirical relationships exist for intraplate region. Also, the existing relationships were developed based mostly on the scaled recorded interplate earthquakes to represent intraplate earthquakes. To the author's knowledge, none of the existing relationships for the intraplate regions were developed using only the data from intraplate regions. Therefore, an attempt is made in this study to develop empirical predictive relationships of earthquake ground motion duration (i.e., significant and bracketed) with earthquake magnitude, hypocentral distance, and site conditions (i.e., rock and soil sites) using the data compiled from intraplate regions of Canada, Australia, Peninsular India, and the central and southern parts of the USA. The compiled earthquake ground motion data consists of 600 records with moment magnitudes ranging from 3.0 to 6.5 and hypocentral distances ranging from 4 to 1000 km. The non-linear mixed-effect (NLMEs) and logistic regression techniques (to account for zero duration) were used to fit predictive models to the duration data. The bracketed duration was found to be decreased with an increase in the hypocentral distance and increased with an increase in the magnitude of the earthquake. The significant duration was found to be increased with the increase in the magnitude and hypocentral distance of the earthquake. Both significant and bracketed durations were predicted higher in rock sites than in soil sites. The predictive relationships developed herein are compared with the existing relationships for interplate and intraplate regions. The developed relationship for bracketed duration predicts lower durations for rock and soil sites. However, the developed relationship for a significant duration predicts lower durations up to a certain distance and thereafter predicts higher durations compared to the existing relationships.
Friction law and hysteresis in granular materials
Wyart, M.
2017-01-01
The macroscopic friction of particulate materials often weakens as the flow rate is increased, leading to potentially disastrous intermittent phenomena including earthquakes and landslides. We theoretically and numerically study this phenomenon in simple granular materials. We show that velocity weakening, corresponding to a nonmonotonic behavior in the friction law, μ(I), is present even if the dynamic and static microscopic friction coefficients are identical, but disappears for softer particles. We argue that this instability is induced by endogenous acoustic noise, which tends to make contacts slide, leading to faster flow and increased noise. We show that soft spots, or excitable regions in the materials, correspond to rolling contacts that are about to slide, whose density is described by a nontrivial exponent θs. We build a microscopic theory for the nonmonotonicity of μ(I), which also predicts the scaling behavior of acoustic noise, the fraction of sliding contacts χ, and the sliding velocity, in terms of θs. Surprisingly, these quantities have no limit when particles become infinitely hard, as confirmed numerically. Our analysis rationalizes previously unexplained observations and makes experimentally testable predictions. PMID:28811373
Friction law and hysteresis in granular materials
NASA Astrophysics Data System (ADS)
DeGiuli, E.; Wyart, M.
2017-08-01
The macroscopic friction of particulate materials often weakens as the flow rate is increased, leading to potentially disastrous intermittent phenomena including earthquakes and landslides. We theoretically and numerically study this phenomenon in simple granular materials. We show that velocity weakening, corresponding to a nonmonotonic behavior in the friction law, μ(I), is present even if the dynamic and static microscopic friction coefficients are identical, but disappears for softer particles. We argue that this instability is induced by endogenous acoustic noise, which tends to make contacts slide, leading to faster flow and increased noise. We show that soft spots, or excitable regions in the materials, correspond to rolling contacts that are about to slide, whose density is described by a nontrivial exponent θs. We build a microscopic theory for the nonmonotonicity of μ(I), which also predicts the scaling behavior of acoustic noise, the fraction of sliding contacts χ, and the sliding velocity, in terms of θs. Surprisingly, these quantities have no limit when particles become infinitely hard, as confirmed numerically. Our analysis rationalizes previously unexplained observations and makes experimentally testable predictions.
Conroy, M.J.; Runge, M.C.; Nichols, J.D.; Stodola, K.W.; Cooper, R.J.
2011-01-01
The broad physical and biological principles behind climate change and its potential large scale ecological impacts on biota are fairly well understood, although likely responses of biotic communities at fine spatio-temporal scales are not, limiting the ability of conservation programs to respond effectively to climate change outside the range of human experience. Much of the climate debate has focused on attempts to resolve key uncertainties in a hypothesis-testing framework. However, conservation decisions cannot await resolution of these scientific issues and instead must proceed in the face of uncertainty. We suggest that conservation should precede in an adaptive management framework, in which decisions are guided by predictions under multiple, plausible hypotheses about climate impacts. Under this plan, monitoring is used to evaluate the response of the system to climate drivers, and management actions (perhaps experimental) are used to confront testable predictions with data, in turn providing feedback for future decision making. We illustrate these principles with the problem of mitigating the effects of climate change on terrestrial bird communities in the southern Appalachian Mountains, USA. ?? 2010 Elsevier Ltd.
Steps in the bacterial flagellar motor.
Mora, Thierry; Yu, Howard; Sowa, Yoshiyuki; Wingreen, Ned S
2009-10-01
The bacterial flagellar motor is a highly efficient rotary machine used by many bacteria to propel themselves. It has recently been shown that at low speeds its rotation proceeds in steps. Here we propose a simple physical model, based on the storage of energy in protein springs, that accounts for this stepping behavior as a random walk in a tilted corrugated potential that combines torque and contact forces. We argue that the absolute angular position of the rotor is crucial for understanding step properties and show this hypothesis to be consistent with the available data, in particular the observation that backward steps are smaller on average than forward steps. We also predict a sublinear speed versus torque relationship for fixed load at low torque, and a peak in rotor diffusion as a function of torque. Our model provides a comprehensive framework for understanding and analyzing stepping behavior in the bacterial flagellar motor and proposes novel, testable predictions. More broadly, the storage of energy in protein springs by the flagellar motor may provide useful general insights into the design of highly efficient molecular machines.
On Geomagnetism and Paleomagnetism I
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
2000-01-01
A partial description of Earth's broad scale, core-source magnetic field has been developed and tested three ways. The description features an expected, or mean, spatial magnetic power spectrum that is approximately inversely proportional to horizontal wavenumber atop Earth's core. This multipole spectrum describes a magnetic energy range; it is not steep enough for Gubbins' magnetic dissipation range. Temporal variations of core multipole powers about mean values are to be expected and are described statistically, via trial probability distribution functions, instead of deterministically, via trial solution of closed transport equations. The distributions considered here are closed and neither require nor prohibit magnetic isotropy. The description is therefore applicable to, and tested against, both dipole and low degree non-dipole fields. In Part 1, a physical basis for an expectation spectrum is developed and checked. The description is then combined with main field models of twentieth century satellite and surface geomagnetic field measurements to make testable predictions of the radius of Earth's core. The predicted core radius is 0.7% above the 3480 km seismological value. Partial descriptions of other planetary dipole fields are noted.
Optimal assessment of multiple cues.
Fawcett, Tim W; Johnstone, Rufus A
2003-01-01
In a wide range of contexts from mate choice to foraging, animals are required to discriminate between alternative options on the basis of multiple cues. How should they best assess such complex multicomponent stimuli? Here, we construct a model to investigate this problem, focusing on a simple case where a 'chooser' faces a discrimination task involving two cues. These cues vary in their accuracy and in how costly they are to assess. As an example, we consider a mate-choice situation where females choose between males of differing quality. Our model predicts the following: (i) females should become less choosy as the cost of finding new males increases; (ii) females should prioritize cues differently depending on how choosy they are; (iii) females may sometimes prioritize less accurate cues; and (iv) which cues are most important depends on the abundance of desirable mates. These predictions are testable in mate-choice experiments where the costs of choice can be manipulated. Our findings are applicable to other discrimination tasks besides mate choice, for example a predator's choice between palatable and unpalatable prey, or an altruist's choice between kin and non-kin. PMID:12908986
Mental workload prediction based on attentional resource allocation and information processing.
Xiao, Xu; Wanyan, Xiaoru; Zhuang, Damin
2015-01-01
Mental workload is an important component in complex human-machine systems. The limited applicability of empirical workload measures produces the need for workload modeling and prediction methods. In the present study, a mental workload prediction model is built on the basis of attentional resource allocation and information processing to ensure pilots' accuracy and speed in understanding large amounts of flight information on the cockpit display interface. Validation with an empirical study of an abnormal attitude recovery task showed that this model's prediction of mental workload highly correlated with experimental results. This mental workload prediction model provides a new tool for optimizing human factors interface design and reducing human errors.
Symbiotic immuno-suppression: is disease susceptibility the price of bleaching resistance?
Merselis, Daniel G; Lirman, Diego; Rodriguez-Lanetty, Mauricio
2018-01-01
Accelerating anthropogenic climate change threatens to destroy coral reefs worldwide through the processes of bleaching and disease. These major contributors to coral mortality are both closely linked with thermal stress intensified by anthropogenic climate change. Disease outbreaks typically follow bleaching events, but a direct positive linkage between bleaching and disease has been debated. By tracking 152 individual coral ramets through the 2014 mass bleaching in a South Florida coral restoration nursery, we revealed a highly significant negative correlation between bleaching and disease in the Caribbean staghorn coral, Acropora cervicornis . To explain these results, we propose a mechanism for transient immunological protection through coral bleaching: removal of Symbiodinium during bleaching may also temporarily eliminate suppressive symbiont modulation of host immunological function. We contextualize this hypothesis within an ecological perspective in order to generate testable predictions for future investigation.
Transport dynamics of molecular motors that switch between an active and inactive state
NASA Astrophysics Data System (ADS)
Pinkoviezky, I.; Gov, N. S.
2013-08-01
Molecular motors are involved in key transport processes in the cell. Many of these motors can switch from an active to a nonactive state, either spontaneously or depending on their interaction with other molecules. When active, the motors move processively along the filaments, while when inactive they are stationary. We treat here the simple case of spontaneously switching motors, between the active and inactive states, along an open linear track. We use our recent analogy with vehicular traffic, where we go beyond the mean-field description. We map the phase diagram of this system, and find that it clearly breaks the symmetry between the different phases, as compared to the standard total asymmetric exclusion process. We make several predictions that may be testable using molecular motors in vitro and in living cells.
Wagenmakers, Eric-Jan; Farrell, Simon; Ratcliff, Roger
2005-01-01
Recently, G. C. Van Orden, J. G. Holden, and M. T. Turvey (2003) proposed to abandon the conventional framework of cognitive psychology in favor of the framework of nonlinear dynamical systems theory. Van Orden et al. presented evidence that “purposive behavior originates in self-organized criticality” (p. 333). Here, the authors show that Van Orden et al.’s analyses do not test their hypotheses. Further, the authors argue that a confirmation of Van Orden et al.’s hypotheses would not have constituted firm evidence in support of their framework. Finally, the absence of a specific model for how self-organized criticality produces the observed behavior makes it very difficult to derive testable predictions. The authors conclude that the proposed paradigm shift is presently unwarranted. PMID:15702966
NASA Astrophysics Data System (ADS)
Allahverdi, Rouzbeh; Dev, P. S. Bhupal; Dutta, Bhaskar
2018-04-01
We study a simple TeV-scale model of baryon number violation which explains the observed proximity of the dark matter and baryon abundances. The model has constraints arising from both low and high-energy processes, and in particular, predicts a sizable rate for the neutron-antineutron (n - n bar) oscillation at low energy and the monojet signal at the LHC. We find an interesting complementarity among the constraints arising from the observed baryon asymmetry, ratio of dark matter and baryon abundances, n - n bar oscillation lifetime and the LHC monojet signal. There are regions in the parameter space where the n - n bar oscillation lifetime is found to be more constraining than the LHC constraints, which illustrates the importance of the next-generation n - n bar oscillation experiments.
Percolation mechanism drives actin gels to the critically connected state
NASA Astrophysics Data System (ADS)
Lee, Chiu Fan; Pruessner, Gunnar
2016-05-01
Cell motility and tissue morphogenesis depend crucially on the dynamic remodeling of actomyosin networks. An actomyosin network consists of an actin polymer network connected by cross-linker proteins and motor protein myosins that generate internal stresses on the network. A recent discovery shows that for a range of experimental parameters, actomyosin networks contract to clusters with a power-law size distribution [J. Alvarado, Nat. Phys. 9, 591 (2013), 10.1038/nphys2715]. Here, we argue that actomyosin networks can exhibit a robust critical signature without fine-tuning because the dynamics of the system can be mapped onto a modified version of percolation with trapping (PT), which is known to show critical behavior belonging to the static percolation universality class without the need for fine-tuning of a control parameter. We further employ our PT model to generate experimentally testable predictions.
Brain Evolution and Human Neuropsychology: The Inferential Brain Hypothesis
Koscik, Timothy R.; Tranel, Daniel
2013-01-01
Collaboration between human neuropsychology and comparative neuroscience has generated invaluable contributions to our understanding of human brain evolution and function. Further cross-talk between these disciplines has the potential to continue to revolutionize these fields. Modern neuroimaging methods could be applied in a comparative context, yielding exciting new data with the potential of providing insight into brain evolution. Conversely, incorporating an evolutionary base into the theoretical perspectives from which we approach human neuropsychology could lead to novel hypotheses and testable predictions. In the spirit of these objectives, we present here a new theoretical proposal, the Inferential Brain Hypothesis, whereby the human brain is thought to be characterized by a shift from perceptual processing to inferential computation, particularly within the social realm. This shift is believed to be a driving force for the evolution of the large human cortex. PMID:22459075
How hierarchical is language use?
Frank, Stefan L.; Bod, Rens; Christiansen, Morten H.
2012-01-01
It is generally assumed that hierarchical phrase structure plays a central role in human language. However, considerations of simplicity and evolutionary continuity suggest that hierarchical structure should not be invoked too hastily. Indeed, recent neurophysiological, behavioural and computational studies show that sequential sentence structure has considerable explanatory power and that hierarchical processing is often not involved. In this paper, we review evidence from the recent literature supporting the hypothesis that sequential structure may be fundamental to the comprehension, production and acquisition of human language. Moreover, we provide a preliminary sketch outlining a non-hierarchical model of language use and discuss its implications and testable predictions. If linguistic phenomena can be explained by sequential rather than hierarchical structure, this will have considerable impact in a wide range of fields, such as linguistics, ethology, cognitive neuroscience, psychology and computer science. PMID:22977157
Origin and Proliferation of Multiple-Drug Resistance in Bacterial Pathogens
Chang, Hsiao-Han; Cohen, Ted; Grad, Yonatan H.; Hanage, William P.; O'Brien, Thomas F.
2015-01-01
SUMMARY Many studies report the high prevalence of multiply drug-resistant (MDR) strains. Because MDR infections are often significantly harder and more expensive to treat, they represent a growing public health threat. However, for different pathogens, different underlying mechanisms are traditionally used to explain these observations, and it is unclear whether each bacterial taxon has its own mechanism(s) for multidrug resistance or whether there are common mechanisms between distantly related pathogens. In this review, we provide a systematic overview of the causes of the excess of MDR infections and define testable predictions made by each hypothetical mechanism, including experimental, epidemiological, population genomic, and other tests of these hypotheses. Better understanding the cause(s) of the excess of MDR is the first step to rational design of more effective interventions to prevent the origin and/or proliferation of MDR. PMID:25652543
The Law of Self-Acting Machines and Irreversible Processes with Reversible Replicas
NASA Astrophysics Data System (ADS)
Valev, Pentcho
2002-11-01
Clausius and Kelvin saved Carnot theorem and developed the second law by assuming that Carnot machines can work in the absence of an operator and that all the irreversible processes have reversible replicas. The former assumption restored Carnot theorem as an experience of mankind whereas the latter generated "the law of ever increasing entropy". Both assumptions are wrong so it makes sense to return to Carnot theorem (or some equivalent) and test it experimentally. Two testable paradigms - the system performing two types of reversible work and the system in dynamical equilibrium - suggest that perpetuum mobile of the second kind in the presence of an operator is possible. The deviation from the second law prediction, expressed as difference between partial derivatives in a Maxwell relation, measures the degree of structural-functional evolution for the respective system.
Prioritizing Information during Working Memory: Beyond Sustained Internal Attention.
Myers, Nicholas E; Stokes, Mark G; Nobre, Anna C
2017-06-01
Working memory (WM) has limited capacity. This leaves attention with the important role of allowing into storage only the most relevant information. It is increasingly evident that attention is equally crucial for prioritizing representations within WM as the importance of individual items changes. Retrospective prioritization has been proposed to result from a focus of internal attention highlighting one of several representations. Here, we suggest an updated model, in which prioritization acts in multiple steps: first orienting towards and selecting a memory, and then reconfiguring its representational state in the service of upcoming task demands. Reconfiguration sets up an optimized perception-action mapping, obviating the need for sustained attention. This view is consistent with recent literature, makes testable predictions, and links WM with task switching and action preparation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Neuromechanics of crawling in D. melanogaster larvae
NASA Astrophysics Data System (ADS)
Pehlevan, Cengiz; Paoletti, Paolo; Mahadevan, L.
2015-03-01
Nervous system, body and environment interact in non-trivial ways to generate locomotion and thence behavior in an organism. Here we present a minimal integrative mathematical model to describe the simple behavior of forward crawling in Drosophila larvae. Our model couples the excitation-inhibition circuits in the nervous system to force production in the muscles and body movement in a frictional environment, which in turn leads to a proprioceptive signal that feeds back to the nervous system. Our results explain the basic observed phenomenology of crawling with or without proprioception, and elucidate the stabilizing role of proprioception in crawling with respect to external and internal perturbations. Our integrated approach allows us to make testable predictions on the effect of changing body-environment interactions on crawling, and serves as a substrate for the development of hierarchical models linking cellular processes to behavior.
Towards Understanding The Origin And Evolution Of Ultra-Diffuse Galaxies
NASA Astrophysics Data System (ADS)
van der Burg, Remco F. J.; Sifón, Cristóbal; Muzzin, Adam; Hoekstra, Henk; KiDS Collaboration; GAMA Collaboration
2017-06-01
Recent observations have shown that Ultra-Diffuse Galaxies (UDGs, which have the luminosities of dwarfs but sizes of giant galaxies) are surprisingly abundant in clusters of galaxies. The origin of these galaxies remains unclear, since one would naively expect them to be easily disrupted by tidal interactions in the cluster environment. Several formation scenarios have been proposed for UDGs, but these make a wide range of different testable observational predictions. I'll summarise recent results on two key observables that have the potential to differentiate between the proposed models, namely 1) a measurement of their (sub)halo masses using weak gravitational lensing, and 2) their abundance in lower-mass haloes using data from the GAMA and KiDS surveys. I'll discuss implications and future prospects to learn more about the properties and formation histories of these elusive galaxies.
The Transition to Minimal Consciousness through the Evolution of Associative Learning
Bronfman, Zohar Z.; Ginsburg, Simona; Jablonka, Eva
2016-01-01
The minimal state of consciousness is sentience. This includes any phenomenal sensory experience – exteroceptive, such as vision and olfaction; interoceptive, such as pain and hunger; or proprioceptive, such as the sense of bodily position and movement. We propose unlimited associative learning (UAL) as the marker of the evolutionary transition to minimal consciousness (or sentience), its phylogenetically earliest sustainable manifestation and the driver of its evolution. We define and describe UAL at the behavioral and functional level and argue that the structural-anatomical implementations of this mode of learning in different taxa entail subjective feelings (sentience). We end with a discussion of the implications of our proposal for the distribution of consciousness in the animal kingdom, suggesting testable predictions, and revisiting the ongoing debate about the function of minimal consciousness in light of our approach. PMID:28066282
Symbiotic immuno-suppression: is disease susceptibility the price of bleaching resistance?
Merselis, Daniel G.; Lirman, Diego
2018-01-01
Accelerating anthropogenic climate change threatens to destroy coral reefs worldwide through the processes of bleaching and disease. These major contributors to coral mortality are both closely linked with thermal stress intensified by anthropogenic climate change. Disease outbreaks typically follow bleaching events, but a direct positive linkage between bleaching and disease has been debated. By tracking 152 individual coral ramets through the 2014 mass bleaching in a South Florida coral restoration nursery, we revealed a highly significant negative correlation between bleaching and disease in the Caribbean staghorn coral, Acropora cervicornis. To explain these results, we propose a mechanism for transient immunological protection through coral bleaching: removal of Symbiodinium during bleaching may also temporarily eliminate suppressive symbiont modulation of host immunological function. We contextualize this hypothesis within an ecological perspective in order to generate testable predictions for future investigation. PMID:29682405
NASA Astrophysics Data System (ADS)
Wang, Jun-Wei; Zhou, Tian-Shou
2009-12-01
In this paper, we develop a new mathematical model for the mammalian circadian clock, which incorporates both transcriptional/translational feedback loops (TTFLs) and a cAMP-mediated feedback loop. The model shows that TTFLs and cAMP signalling cooperatively drive the circadian rhythms. It reproduces typical experimental observations with qualitative similarities, e.g. circadian oscillations in constant darkness and entrainment to light-dark cycles. In addition, it can explain the phenotypes of cAMP-mutant and Rev-erbα-/--mutant mice, and help us make an experimentally-testable prediction: oscillations may be rescued when arrhythmic mice with constitutively low concentrations of cAMP are crossed with Rev-erbα-/- mutant mice. The model enhances our understanding of the mammalian circadian clockwork from the viewpoint of the entire cell.
Using Empirical Models for Communication Prediction of Spacecraft
NASA Technical Reports Server (NTRS)
Quasny, Todd
2015-01-01
A viable communication path to a spacecraft is vital for its successful operation. For human spaceflight, a reliable and predictable communication link between the spacecraft and the ground is essential not only for the safety of the vehicle and the success of the mission, but for the safety of the humans on board as well. However, analytical models of these communication links are challenged by unique characteristics of space and the vehicle itself. For example, effects of radio frequency during high energy solar events while traveling through a solar array of a spacecraft can be difficult to model, and thus to predict. This presentation covers the use of empirical methods of communication link predictions, using the International Space Station (ISS) and its associated historical data as the verification platform and test bed. These empirical methods can then be incorporated into communication prediction and automation tools for the ISS in order to better understand the quality of the communication path given a myriad of variables, including solar array positions, line of site to satellites, position of the sun, and other dynamic structures on the outside of the ISS. The image on the left below show the current analytical model of one of the communication systems on the ISS. The image on the right shows a rudimentary empirical model of the same system based on historical archived data from the ISS.
Wavelet modeling and prediction of the stability of states: the Roman Empire and the European Union
NASA Astrophysics Data System (ADS)
Yaroshenko, Tatyana Y.; Krysko, Dmitri V.; Dobriyan, Vitalii; Zhigalov, Maksim V.; Vos, Hendrik; Vandenabeele, Peter; Krysko, Vadim A.
2015-09-01
How can the stability of a state be quantitatively determined and its future stability predicted? The rise and collapse of empires and states is very complex, and it is exceedingly difficult to understand and predict it. Existing theories are usually formulated as verbal models and, consequently, do not yield sharply defined, quantitative prediction that can be unambiguously validated with data. Here we describe a model that determines whether the state is in a stable or chaotic condition and predicts its future condition. The central model, which we test, is that growth and collapse of states is reflected by the changes of their territories, populations and budgets. The model was simulated within the historical societies of the Roman Empire (400 BC to 400 AD) and the European Union (1957-2007) by using wavelets and analysis of the sign change of the spectrum of Lyapunov exponents. The model matches well with the historical events. During wars and crises, the state becomes unstable; this is reflected in the wavelet analysis by a significant increase in the frequency ω (t) and wavelet coefficients W (ω, t) and the sign of the largest Lyapunov exponent becomes positive, indicating chaos. We successfully reconstructed and forecasted time series in the Roman Empire and the European Union by applying artificial neural network. The proposed model helps to quantitatively determine and forecast the stability of a state.
Kim, Soo-Jeong; Cheong, June-Won; Min, Yoo Hong; Choi, Young Jin; Lee, Dong-Gun; Lee, Je-Hwan; Yang, Deok-Hwan; Lee, Sang Min; Kim, Sung-Hyun; Kim, Yang Soo; Kwak, Jae-Yong; Park, Jinny; Kim, Jin Young; Kim, Hoon-Gu; Kim, Byung Soo; Ryoo, Hun-Mo; Jang, Jun Ho; Kim, Min Kyoung; Kang, Hye Jin; Cho, In Sung; Mun, Yeung Chul; Jo, Deog-Yeon; Kim, Ho Young; Park, Byeong-Bae; Kim, Jin Seok
2014-01-01
We assessed the success rate of empirical antifungal therapy with itraconazole and evaluated risk factors for predicting the failure of empirical antifungal therapy. A multicenter, prospective, observational study was performed in patients with hematological malignancies who had neutropenic fever and received empirical antifungal therapy with itraconazole at 22 centers. A total of 391 patients who had abnormal findings on chest imaging tests (31.0%) or a positive result of enzyme immunoassay for serum galactomannan (17.6%) showed a 56.5% overall success rate. Positive galactomannan tests before the initiation of the empirical antifungal therapy (P=0.026, hazard ratio [HR], 2.28; 95% confidence interval [CI], 1.10-4.69) and abnormal findings on the chest imaging tests before initiation of the empirical antifungal therapy (P=0.022, HR, 2.03; 95% CI, 1.11-3.71) were significantly associated with poor outcomes for the empirical antifungal therapy. Eight patients (2.0%) had premature discontinuation of itraconazole therapy due to toxicity. It is suggested that positive galactomannan tests and abnormal findings on the chest imaging tests at the time of initiation of the empirical antifungal therapy are risk factors for predicting the failure of the empirical antifungal therapy with itraconazole. (Clinical Trial Registration on National Cancer Institute website, NCT01060462).
NASA Astrophysics Data System (ADS)
Kim, Taeyoun; Hwang, Seho; Jang, Seonghyung
2017-01-01
When finding the "sweet spot" of a shale gas reservoir, it is essential to estimate the brittleness index (BI) and total organic carbon (TOC) of the formation. Particularly, the BI is one of the key factors in determining the crack propagation and crushing efficiency for hydraulic fracturing. There are several methods for estimating the BI of a formation, but most of them are empirical equations that are specific to particular rock types. We estimated the mineralogical BI based on elemental capture spectroscopy (ECS) log and elastic BI based on well log data, and we propose a new method for predicting S-wave velocity (VS) using mineralogical BI and elastic BI. The TOC is related to the gas content of shale gas reservoirs. Since it is difficult to perform core analysis for all intervals of shale gas reservoirs, we make empirical equations for the Horn River Basin, Canada, as well as TOC log using a linear relation between core-tested TOC and well log data. In addition, two empirical equations have been suggested for VS prediction based on density and gamma ray log used for TOC analysis. By applying the empirical equations proposed from the perspective of BI and TOC to another well log data and then comparing predicted VS log with real VS log, the validity of empirical equations suggested in this paper has been tested.
The Factor Content of Bilateral Trade: An Empirical Test.
ERIC Educational Resources Information Center
Choi, Yong-Seok; Krishna, Pravin
2004-01-01
The factor proportions model of international trade is one of the most influential theories in international economics. Its central standing in this field has appropriately prompted, particularly recently, intense empirical scrutiny. A substantial and growing body of empirical work has tested the predictions of the theory on the net factor content…
The role of prediction in social neuroscience
Brown, Elliot C.; Brüne, Martin
2012-01-01
Research has shown that the brain is constantly making predictions about future events. Theories of prediction in perception, action and learning suggest that the brain serves to reduce the discrepancies between expectation and actual experience, i.e., by reducing the prediction error. Forward models of action and perception propose the generation of a predictive internal representation of the expected sensory outcome, which is matched to the actual sensory feedback. Shared neural representations have been found when experiencing one's own and observing other's actions, rewards, errors, and emotions such as fear and pain. These general principles of the “predictive brain” are well established and have already begun to be applied to social aspects of cognition. The application and relevance of these predictive principles to social cognition are discussed in this article. Evidence is presented to argue that simple non-social cognitive processes can be extended to explain complex cognitive processes required for social interaction, with common neural activity seen for both social and non-social cognitions. A number of studies are included which demonstrate that bottom-up sensory input and top-down expectancies can be modulated by social information. The concept of competing social forward models and a partially distinct category of social prediction errors are introduced. The evolutionary implications of a “social predictive brain” are also mentioned, along with the implications on psychopathology. The review presents a number of testable hypotheses and novel comparisons that aim to stimulate further discussion and integration between currently disparate fields of research, with regard to computational models, behavioral and neurophysiological data. This promotes a relatively new platform for inquiry in social neuroscience with implications in social learning, theory of mind, empathy, the evolution of the social brain, and potential strategies for treating social cognitive deficits. PMID:22654749
Plant water potential improves prediction of empirical stomatal models.
Anderegg, William R L; Wolf, Adam; Arango-Velez, Adriana; Choat, Brendan; Chmura, Daniel J; Jansen, Steven; Kolb, Thomas; Li, Shan; Meinzer, Frederick; Pita, Pilar; Resco de Dios, Víctor; Sperry, John S; Wolfe, Brett T; Pacala, Stephen
2017-01-01
Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well-tested during drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models have consistent biases of over-prediction of stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increases in predictive capability compared to current models, and with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and consequent impairment of plant water transport will improve predictions during drought conditions and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.
Novak, M.; Wootton, J.T.; Doak, D.F.; Emmerson, M.; Estes, J.A.; Tinker, M.T.
2011-01-01
How best to predict the effects of perturbations to ecological communities has been a long-standing goal for both applied and basic ecology. This quest has recently been revived by new empirical data, new analysis methods, and increased computing speed, with the promise that ecologically important insights may be obtainable from a limited knowledge of community interactions. We use empirically based and simulated networks of varying size and connectance to assess two limitations to predicting perturbation responses in multispecies communities: (1) the inaccuracy by which species interaction strengths are empirically quantified and (2) the indeterminacy of species responses due to indirect effects associated with network size and structure. We find that even modest levels of species richness and connectance (??25 pairwise interactions) impose high requirements for interaction strength estimates because system indeterminacy rapidly overwhelms predictive insights. Nevertheless, even poorly estimated interaction strengths provide greater average predictive certainty than an approach that uses only the sign of each interaction. Our simulations provide guidance in dealing with the trade-offs involved in maximizing the utility of network approaches for predicting dynamics in multispecies communities. ?? 2011 by the Ecological Society of America.
Evolution of language: An empirical study at eBay Big Data Lab
Bodoff, David; Dai, Julie
2017-01-01
The evolutionary theory of language predicts that a language will tend towards fewer synonyms for a given object. We subject this and related predictions to empirical tests, using data from the eBay Big Data Lab which let us access all records of the words used by eBay vendors in their item titles, and by consumers in their searches. We find support for the predictions of the evolutionary theory of language. In particular, the mapping from object to words sharpens over time on both sides of the market, i.e. among consumers and among vendors. In addition, the word mappings used on the two sides of the market become more similar over time. Our research contributes to the literature on language evolution by reporting results of a truly unique large-scale empirical study. PMID:29261686
Evolution of language: An empirical study at eBay Big Data Lab.
Bodoff, David; Bekkerman, Ron; Dai, Julie
2017-01-01
The evolutionary theory of language predicts that a language will tend towards fewer synonyms for a given object. We subject this and related predictions to empirical tests, using data from the eBay Big Data Lab which let us access all records of the words used by eBay vendors in their item titles, and by consumers in their searches. We find support for the predictions of the evolutionary theory of language. In particular, the mapping from object to words sharpens over time on both sides of the market, i.e. among consumers and among vendors. In addition, the word mappings used on the two sides of the market become more similar over time. Our research contributes to the literature on language evolution by reporting results of a truly unique large-scale empirical study.
Ali, Ashehad A.; Medlyn, Belinda E.; Aubier, Thomas G.; ...
2015-10-06
Differential species responses to atmospheric CO 2 concentration (C a) could lead to quantitative changes in competition among species and community composition, with flow-on effects for ecosystem function. However, there has been little theoretical analysis of how elevated C a (eC a) will affect plant competition, or how composition of plant communities might change. Such theoretical analysis is needed for developing testable hypotheses to frame experimental research. Here, we investigated theoretically how plant competition might change under eC a by implementing two alternative competition theories, resource use theory and resource capture theory, in a plant carbon and nitrogen cycling model.more » The model makes several novel predictions for the impact of eC a on plant community composition. Using resource use theory, the model predicts that eC a is unlikely to change species dominance in competition, but is likely to increase coexistence among species. Using resource capture theory, the model predicts that eC a may increase community evenness. Collectively, both theories suggest that eC a will favor coexistence and hence that species diversity should increase with eC a. Our theoretical analysis leads to a novel hypothesis for the impact of eC a on plant community composition. In this study, the hypothesis has potential to help guide the design and interpretation of eC a experiments.« less
Empirical fitness landscapes and the predictability of evolution.
de Visser, J Arjan G M; Krug, Joachim
2014-07-01
The genotype-fitness map (that is, the fitness landscape) is a key determinant of evolution, yet it has mostly been used as a superficial metaphor because we know little about its structure. This is now changing, as real fitness landscapes are being analysed by constructing genotypes with all possible combinations of small sets of mutations observed in phylogenies or in evolution experiments. In turn, these first glimpses of empirical fitness landscapes inspire theoretical analyses of the predictability of evolution. Here, we review these recent empirical and theoretical developments, identify methodological issues and organizing principles, and discuss possibilities to develop more realistic fitness landscape models.
Sumowski, Chris Vanessa; Hanni, Matti; Schweizer, Sabine; Ochsenfeld, Christian
2014-01-14
The structural sensitivity of NMR chemical shifts as computed by quantum chemical methods is compared to a variety of empirical approaches for the example of a prototypical peptide, the 38-residue kaliotoxin KTX comprising 573 atoms. Despite the simplicity of empirical chemical shift prediction programs, the agreement with experimental results is rather good, underlining their usefulness. However, we show in our present work that they are highly insensitive to structural changes, which renders their use for validating predicted structures questionable. In contrast, quantum chemical methods show the expected high sensitivity to structural and electronic changes. This appears to be independent of the quantum chemical approach or the inclusion of solvent effects. For the latter, explicit solvent simulations with increasing number of snapshots were performed for two conformers of an eight amino acid sequence. In conclusion, the empirical approaches neither provide the expected magnitude nor the patterns of NMR chemical shifts determined by the clearly more costly ab initio methods upon structural changes. This restricts the use of empirical prediction programs in studies where peptide and protein structures are utilized for the NMR chemical shift evaluation such as in NMR refinement processes, structural model verifications, or calculations of NMR nuclear spin relaxation rates.
On testing VLSI chips for the big Viterbi decoder
NASA Technical Reports Server (NTRS)
Hsu, I. S.
1989-01-01
A general technique that can be used in testing very large scale integrated (VLSI) chips for the Big Viterbi Decoder (BVD) system is described. The test technique is divided into functional testing and fault-coverage testing. The purpose of functional testing is to verify that the design works functionally. Functional test vectors are converted from outputs of software simulations which simulate the BVD functionally. Fault-coverage testing is used to detect and, in some cases, to locate faulty components caused by bad fabrication. This type of testing is useful in screening out bad chips. Finally, design for testability, which is included in the BVD VLSI chip design, is described in considerable detail. Both the observability and controllability of a VLSI chip are greatly enhanced by including the design for the testability feature.
Model-Based Testability Assessment and Directed Troubleshooting of Shuttle Wiring Systems
NASA Technical Reports Server (NTRS)
Deb, Somnath; Domagala, Chuck; Shrestha, Roshan; Malepati, Venkatesh; Cavanaugh, Kevin; Patterson-Hine, Ann; Sanderfer, Dwight; Cockrell, Jim; Norvig, Peter (Technical Monitor)
2000-01-01
We have recently completed a pilot study on the Space shuttle wiring system commissioned by the Wiring Integrity Research (WIRe) team at NASA Ames Research Center, As the space shuttle ages, it is experiencing wiring degradation problems including arcing, chaffing insulation breakdown and broken conductors. A systematic and comprehensive test process is required to thoroughly test and quality assure (QA) the wiring systems. The NASA WIRe team recognized the value of a formal model based analysis for risk-assessment and fault coverage analysis. However. wiring systems are complex and involve over 50,000 wire segments. Therefore, NASA commissioned this pilot study with Qualtech Systems. Inc. (QSI) to explore means of automatically extracting high fidelity multi-signal models from wiring information database for use with QSI's Testability Engineering and Maintenance System (TEAMS) tool.
QUANTITATIVE TESTS OF ELMS AS INTERMEDIATE N PEELING-BALOONING MODES
DOE Office of Scientific and Technical Information (OSTI.GOV)
LAO,LL; SNYDER,PB; LEONARD,AW
2003-03-01
A271 QUANTITATIVE TESTS OF ELMS AS INTERMEDIATE N PEELING-BALOONING MODES. Several testable features of the working model of edge localized modes (ELMs) as intermediate toroidal mode number peeling-ballooning modes are evaluated quantitatively using DIII-D and JT-60U experimental data and the ELITE MHD stability code. These include the hypothesis that ELM sizes are related to the radial widths of the unstable MHD modes, the unstable modes have a strong ballooning character localized in the outboard bad curvature region, and ELM size generally becomes smaller at high edge collisionality. ELMs are triggered when the growth rates of the unstable MHD modes becomemore » significantly large. These testable features are consistent with many ELM observations in DIII-D and JT-60U discharges.« less
NASA Technical Reports Server (NTRS)
Campbell, J. W. (Editor)
1981-01-01
The detection of anthropogenic disturbances in the Earth's ozone layer was studied. Two topics were addressed: (1) the level at which a trend in total ozoning is detected by existing data sources; and (2) empirical evidence in the prediction of the depletion in total ozone. Error sources are identified. The predictability of climatological series, whether empirical models can be trusted, and how errors in the Dobson total ozone data impact trend detectability, are discussed.
Analysis of optimality in natural and perturbed metabolic networks
Segrè, Daniel; Vitkup, Dennis; Church, George M.
2002-01-01
An important goal of whole-cell computational modeling is to integrate detailed biochemical information with biological intuition to produce testable predictions. Based on the premise that prokaryotes such as Escherichia coli have maximized their growth performance along evolution, flux balance analysis (FBA) predicts metabolic flux distributions at steady state by using linear programming. Corroborating earlier results, we show that recent intracellular flux data for wild-type E. coli JM101 display excellent agreement with FBA predictions. Although the assumption of optimality for a wild-type bacterium is justifiable, the same argument may not be valid for genetically engineered knockouts or other bacterial strains that were not exposed to long-term evolutionary pressure. We address this point by introducing the method of minimization of metabolic adjustment (MOMA), whereby we test the hypothesis that knockout metabolic fluxes undergo a minimal redistribution with respect to the flux configuration of the wild type. MOMA employs quadratic programming to identify a point in flux space, which is closest to the wild-type point, compatibly with the gene deletion constraint. Comparing MOMA and FBA predictions to experimental flux data for E. coli pyruvate kinase mutant PB25, we find that MOMA displays a significantly higher correlation than FBA. Our method is further supported by experimental data for E. coli knockout growth rates. It can therefore be used for predicting the behavior of perturbed metabolic networks, whose growth performance is in general suboptimal. MOMA and its possible future extensions may be useful in understanding the evolutionary optimization of metabolism. PMID:12415116
Higher-order Fourier analysis over finite fields and applications
NASA Astrophysics Data System (ADS)
Hatami, Pooya
Higher-order Fourier analysis is a powerful tool in the study of problems in additive and extremal combinatorics, for instance the study of arithmetic progressions in primes, where the traditional Fourier analysis comes short. In recent years, higher-order Fourier analysis has found multiple applications in computer science in fields such as property testing and coding theory. In this thesis, we develop new tools within this theory with several new applications such as a characterization theorem in algebraic property testing. One of our main contributions is a strong near-equidistribution result for regular collections of polynomials. The densities of small linear structures in subsets of Abelian groups can be expressed as certain analytic averages involving linear forms. Higher-order Fourier analysis examines such averages by approximating the indicator function of a subset by a function of bounded number of polynomials. Then, to approximate the average, it suffices to know the joint distribution of the polynomials applied to the linear forms. We prove a near-equidistribution theorem that describes these distributions for the group F(n/p) when p is a fixed prime. This fundamental fact was previously known only under various extra assumptions about the linear forms or the field size. We use this near-equidistribution theorem to settle a conjecture of Gowers and Wolf on the true complexity of systems of linear forms. Our next application is towards a characterization of testable algebraic properties. We prove that every locally characterized affine-invariant property of functions f : F(n/p) → R with n∈ N, is testable. In fact, we prove that any such property P is proximity-obliviously testable. More generally, we show that any affine-invariant property that is closed under subspace restrictions and has "bounded complexity" is testable. We also prove that any property that can be described as the property of decomposing into a known structure of low-degree polynomials is locally characterized and is, hence, testable. We discuss several notions of regularity which allow us to deduce algorithmic versions of various regularity lemmas for polynomials by Green and Tao and by Kaufman and Lovett. We show that our algorithmic regularity lemmas for polynomials imply algorithmic versions of several results relying on regularity, such as decoding Reed-Muller codes beyond the list decoding radius (for certain structured errors), and prescribed polynomial decompositions. Finally, motivated by the definition of Gowers norms, we investigate norms defined by different systems of linear forms. We give necessary conditions on the structure of systems of linear forms that define norms. We prove that such norms can be one of only two types, and assuming that |F p| is sufficiently large, they essentially are equivalent to either a Gowers norm or Lp norms.
Domain fusion analysis by applying relational algebra to protein sequence and domain databases.
Truong, Kevin; Ikura, Mitsuhiko
2003-05-06
Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages on these efforts will become increasingly powerful. This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypothesis. Results can be viewed at http://calcium.uhnres.utoronto.ca/pi. As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time.
Is health care infected by Baumol's cost disease? Test of a new model.
Atanda, Akinwande; Menclova, Andrea Kutinova; Reed, W Robert
2018-05-01
Rising health care costs are a policy concern across the Organisation for Economic Co-operation and Development, and relatively little consensus exists concerning their causes. One explanation that has received revived attention is Baumol's cost disease (BCD). However, developing a theoretically appropriate test of BCD has been a challenge. In this paper, we construct a 2-sector model firmly based on Baumol's axioms. We then derive several testable propositions. In particular, the model predicts that (a) the share of total labor employed in the health care sector and (b) the relative price index of the health and non-health care sectors should both be positively related to economy-wide productivity. The model also predicts that (c) the share of labor in the health sector will be negatively related and (d) the ratio of prices in the health and non-health sectors unrelated, to the demand for non-health services. Using annual data from 28 Organisation for Economic Co-operation and Development countries over the years 1995-2016 and from 14 U.S. industry groups over the years 1947-2015, we find little evidence to support the predictions of BCD once we address spurious correlation due to coincident trending and other econometric issues. Copyright © 2018 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arcadi, Giorgio; Institute for Theoretical Physics, Georg-August University Göttingen, Friedrich-Hund-Platz 1, Göttingen, D-37077; Mambrini, Yann
2015-03-11
We propose to generalize the extensions of the Standard Model where the Z boson serves as a mediator between the Standard Model sector and the dark sector χ. We show that, like in the Higgs portal case, the combined constraints from the recent direct searches restrict severely the nature of the coupling of the dark matter to the Z boson and set a limit m{sub χ}≳200 GeV (except in a very narrow region around the Z-pole region). Using complementarity between spin dependent, spin independent and FERMI limits, we predict the nature of this coupling, more specifically the axial/vectorial ratio thatmore » respects a thermal dark matter coupled through a Z-portal while not being excluded by the current observations. We also show that the next generation of experiments of the type LZ or XENON1T will test Z-portal scenario for dark matter mass up to 2 TeV. The condition of a thermal dark matter naturally predicts the spin-dependent scattering cross section on the neutron to be σ{sub χn}{sup SD}≃10{sup −40} cm{sup 2}, which then becomes a clear prediction of the model and a signature testable in the near future experiments.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arcadi, Giorgio; Mambrini, Yann; Richard, Francois, E-mail: giorgio.arcadi@th.u-psud.fr, E-mail: yann.mambrini@th.u-psud.fr, E-mail: richard@lal.in2p3.fr
2015-03-01
We propose to generalize the extensions of the Standard Model where the Z boson serves as a mediator between the Standard Model sector and the dark sector χ. We show that, like in the Higgs portal case, the combined constraints from the recent direct searches restrict severely the nature of the coupling of the dark matter to the Z boson and set a limit m{sub χ} ∼> 200 GeV (except in a very narrow region around the Z-pole region). Using complementarity between spin dependent, spin independent and FERMI limits, we predict the nature of this coupling, more specifically the axial/vectorial ratio thatmore » respects a thermal dark matter coupled through a Z-portal while not being excluded by the current observations. We also show that the next generation of experiments of the type LZ or XENON1T will test Z-portal scenario for dark matter mass up to 2 TeV . The condition of a thermal dark matter naturally predicts the spin-dependent scattering cross section on the neutron to be σ{sup SD}{sub χn} ≅ 10{sup −40} cm{sup 2}, which then becomes a clear prediction of the model and a signature testable in the near future experiments.« less
NASA Astrophysics Data System (ADS)
Bunn, Henry T.; Pickering, Travis Rayne
2010-11-01
The world's first archaeological traces from 2.6 million years ago (Ma) at Gona, in Ethiopia, include sharp-edged cutting tools and cut-marked animal bones, which indicate consumption of skeletal muscle by early hominin butchers. From that point, evidence of hominin meat-eating becomes increasingly more common throughout the Pleistocene archaeological record. Thus, the substantive debate about hominin meat-eating now centers on mode(s) of carcass resource acquisition. Two prominent hypotheses suggest, alternatively, (1) that early Homo hunted ungulate prey by running them to physiological failure and then dispatching them, or (2) that early Homo was relegated to passively scavenging carcass residues abandoned by carnivore predators. Various paleontologically testable predictions can be formulated for both hypotheses. Here we test four predictions concerning age-frequency distributions for bovids that contributed carcass remains to the 1.8 Ma. old FLK 22 Zinjanthropus (FLK Zinj, Olduvai Gorge, Tanzania) fauna, which zooarchaeological and taphonomic data indicate was formed predominantly by early Homo. In all but one case, the bovid mortality data from FLK Zinj violate test predictions of the endurance running-hunting and passive scavenging hypotheses. When combined with other taphonomic data, these results falsify both hypotheses, and lead to the hypothesis that early Homo operated successfully as an ambush predator.
Stalk model of membrane fusion: solution of energy crisis.
Kozlovsky, Yonathan; Kozlov, Michael M
2002-01-01
Membrane fusion proceeds via formation of intermediate nonbilayer structures. The stalk model of fusion intermediate is commonly recognized to account for the major phenomenology of the fusion process. However, in its current form, the stalk model poses a challenge. On one hand, it is able to describe qualitatively the modulation of the fusion reaction by the lipid composition of the membranes. On the other, it predicts very large values of the stalk energy, so that the related energy barrier for fusion cannot be overcome by membranes within a biologically reasonable span of time. We suggest a new structure for the fusion stalk, which resolves the energy crisis of the model. Our approach is based on a combined deformation of the stalk membrane including bending of the membrane surface and tilt of the hydrocarbon chains of lipid molecules. We demonstrate that the energy of the fusion stalk is a few times smaller than those predicted previously and the stalks are feasible in real systems. We account quantitatively for the experimental results on dependence of the fusion reaction on the lipid composition of different membrane monolayers. We analyze the dependence of the stalk energy on the distance between the fusing membranes and provide the experimentally testable predictions for the structural features of the stalk intermediates. PMID:11806930
The experience of agency: an interplay between prediction and postdiction
Synofzik, Matthis; Vosgerau, Gottfried; Voss, Martin
2013-01-01
The experience of agency, i.e., the registration that I am the initiator of my actions, is a basic and constant underpinning of our interaction with the world. Whereas several accounts have underlined predictive processes as the central mechanism (e.g., the comparator model by C. Frith), others emphasized postdictive inferences (e.g., post-hoc inference account by D. Wegner). Based on increasing evidence that both predictive and postdictive processes contribute to the experience of agency, we here present a unifying but at the same time parsimonious approach that reconciles these accounts: predictive and postdictive processes are both integrated by the brain according to the principles of optimal cue integration. According to this framework, predictive and postdictive processes each serve as authorship cues that are continuously integrated and weighted depending on their availability and reliability in a given situation. Both sensorimotor and cognitive signals can serve as predictive cues (e.g., internal predictions based on an efferency copy of the motor command or cognitive anticipations based on priming). Similarly, other sensorimotor and cognitive cues can each serve as post-hoc cues (e.g., visual feedback of the action or the affective valence of the action outcome). Integration and weighting of these cues might not only differ between contexts and individuals, but also between different subject and disease groups. For example, schizophrenia patients with delusions of influence seem to rely less on (probably imprecise) predictive motor signals of the action and more on post-hoc action cues like e.g., visual feedback and, possibly, the affective valence of the action outcome. Thus, the framework of optimal cue integration offers a promising approach that directly stimulates a wide range of experimentally testable hypotheses on agency processing in different subject groups. PMID:23508565
Prediction of early summer rainfall over South China by a physical-empirical model
NASA Astrophysics Data System (ADS)
Yim, So-Young; Wang, Bin; Xing, Wen
2014-10-01
In early summer (May-June, MJ) the strongest rainfall belt of the northern hemisphere occurs over the East Asian (EA) subtropical front. During this period the South China (SC) rainfall reaches its annual peak and represents the maximum rainfall variability over EA. Hence we establish an SC rainfall index, which is the MJ mean precipitation averaged over 72 stations over SC (south of 28°N and east of 110°E) and represents superbly the leading empirical orthogonal function mode of MJ precipitation variability over EA. In order to predict SC rainfall, we established a physical-empirical model. Analysis of 34-year observations (1979-2012) reveals three physically consequential predictors. A plentiful SC rainfall is preceded in the previous winter by (a) a dipole sea surface temperature (SST) tendency in the Indo-Pacific warm pool, (b) a tripolar SST tendency in North Atlantic Ocean, and (c) a warming tendency in northern Asia. These precursors foreshadow enhanced Philippine Sea subtropical High and Okhotsk High in early summer, which are controlling factors for enhanced subtropical frontal rainfall. The physical empirical model built on these predictors achieves a cross-validated forecast correlation skill of 0.75 for 1979-2012. Surprisingly, this skill is substantially higher than four-dynamical models' ensemble prediction for 1979-2010 period (0.15). The results here suggest that the low prediction skill of current dynamical models is largely due to models' deficiency and the dynamical prediction has large room to improve.
Testing for ontological errors in probabilistic forecasting models of natural systems
Marzocchi, Warner; Jordan, Thomas H.
2014-01-01
Probabilistic forecasting models describe the aleatory variability of natural systems as well as our epistemic uncertainty about how the systems work. Testing a model against observations exposes ontological errors in the representation of a system and its uncertainties. We clarify several conceptual issues regarding the testing of probabilistic forecasting models for ontological errors: the ambiguity of the aleatory/epistemic dichotomy, the quantification of uncertainties as degrees of belief, the interplay between Bayesian and frequentist methods, and the scientific pathway for capturing predictability. We show that testability of the ontological null hypothesis derives from an experimental concept, external to the model, that identifies collections of data, observed and not yet observed, that are judged to be exchangeable when conditioned on a set of explanatory variables. These conditional exchangeability judgments specify observations with well-defined frequencies. Any model predicting these behaviors can thus be tested for ontological error by frequentist methods; e.g., using P values. In the forecasting problem, prior predictive model checking, rather than posterior predictive checking, is desirable because it provides more severe tests. We illustrate experimental concepts using examples from probabilistic seismic hazard analysis. Severe testing of a model under an appropriate set of experimental concepts is the key to model validation, in which we seek to know whether a model replicates the data-generating process well enough to be sufficiently reliable for some useful purpose, such as long-term seismic forecasting. Pessimistic views of system predictability fail to recognize the power of this methodology in separating predictable behaviors from those that are not. PMID:25097265
Testing Feedback Models with Nearby Star Forming Regions
NASA Astrophysics Data System (ADS)
Doran, E.; Crowther, P.
2012-12-01
The feedback from massive stars plays a crucial role in the evolution of galaxies. Accurate modelling of this feedback is essential in understanding distant star forming regions. Young nearby, high mass (> 104 M⊙) clusters such as R136 (in the 30 Doradus region) are ideal test beds for population synthesis since they host large numbers of spatially resolved massive stars at a pre-supernovae stage. We present a quantitative comparison of empirical calibrations of radiative and mechanical feedback from individual stars in R136, with instantaneous burst predictions from the popular Starburst99 evolution synthesis code. We find that empirical results exceed predictions by factors of ˜3-9, as a result of limiting simulations to an upper limit of 100 M⊙. 100-300 M⊙ stars should to be incorporated in population synthesis models for high mass clusters to bring predictions into close agreement with empirical results.
NASA Astrophysics Data System (ADS)
Sergeeva, Tatiana F.; Moshkova, Albina N.; Erlykina, Elena I.; Khvatova, Elena M.
2016-04-01
Creatine kinase is a key enzyme of energy metabolism in the brain. There are known cytoplasmic and mitochondrial creatine kinase isoenzymes. Mitochondrial creatine kinase exists as a mixture of two oligomeric forms - dimer and octamer. The aim of investigation was to study catalytic properties of cytoplasmic and mitochondrial creatine kinase and using of the method of empirical dependences for the possible prediction of the activity of these enzymes in cerebral ischemia. Ischemia was revealed to be accompanied with the changes of the activity of creatine kinase isoenzymes and oligomeric state of mitochondrial isoform. There were made the models of multiple regression that permit to study the activity of creatine kinase system in cerebral ischemia using a calculating method. Therefore, the mathematical method of empirical dependences can be applied for estimation and prediction of the functional state of the brain by the activity of creatine kinase isoenzymes in cerebral ischemia.
Forecasting stochastic neural network based on financial empirical mode decomposition.
Wang, Jie; Wang, Jun
2017-06-01
In an attempt to improve the forecasting accuracy of stock price fluctuations, a new one-step-ahead model is developed in this paper which combines empirical mode decomposition (EMD) with stochastic time strength neural network (STNN). The EMD is a processing technique introduced to extract all the oscillatory modes embedded in a series, and the STNN model is established for considering the weight of occurrence time of the historical data. The linear regression performs the predictive availability of the proposed model, and the effectiveness of EMD-STNN is revealed clearly through comparing the predicted results with the traditional models. Moreover, a new evaluated method (q-order multiscale complexity invariant distance) is applied to measure the predicted results of real stock index series, and the empirical results show that the proposed model indeed displays a good performance in forecasting stock market fluctuations. Copyright © 2017 Elsevier Ltd. All rights reserved.
Predicting Operator Execution Times Using CogTool
NASA Technical Reports Server (NTRS)
Santiago-Espada, Yamira; Latorella, Kara A.
2013-01-01
Researchers and developers of NextGen systems can use predictive human performance modeling tools as an initial approach to obtain skilled user performance times analytically, before system testing with users. This paper describes the CogTool models for a two pilot crew executing two different types of a datalink clearance acceptance tasks, and on two different simulation platforms. The CogTool time estimates for accepting and executing Required Time of Arrival and Interval Management clearances were compared to empirical data observed in video tapes and registered in simulation files. Results indicate no statistically significant difference between empirical data and the CogTool predictions. A population comparison test found no significant differences between the CogTool estimates and the empirical execution times for any of the four test conditions. We discuss modeling caveats and considerations for applying CogTool to crew performance modeling in advanced cockpit environments.
Prediction of light aircraft interior noise
NASA Technical Reports Server (NTRS)
Howlett, J. T.; Morales, D. A.
1976-01-01
At the present time, predictions of aircraft interior noise depend heavily on empirical correction factors derived from previous flight measurements. However, to design for acceptable interior noise levels and to optimize acoustic treatments, analytical techniques which do not depend on empirical data are needed. This paper describes a computerized interior noise prediction method for light aircraft. An existing analytical program (developed for commercial jets by Cockburn and Jolly in 1968) forms the basis of some modal analysis work which is described. The accuracy of this modal analysis technique for predicting low-frequency coupled acoustic-structural natural frequencies is discussed along with trends indicating the effects of varying parameters such as fuselage length and diameter, structural stiffness, and interior acoustic absorption.
An empirical approach to improving tidal predictions using recent real-time tide gauge data
NASA Astrophysics Data System (ADS)
Hibbert, Angela; Royston, Samantha; Horsburgh, Kevin J.; Leach, Harry
2014-05-01
Classical harmonic methods of tidal prediction are often problematic in estuarine environments due to the distortion of tidal fluctuations in shallow water, which results in a disparity between predicted and observed sea levels. This is of particular concern in the Bristol Channel, where the error associated with tidal predictions is potentially greater due to an unusually large tidal range of around 12m. As such predictions are fundamental to the short-term forecasting of High Water (HW) extremes, it is vital that alternative solutions are found. In a pilot study, using a year-long observational sea level record from the Port of Avonmouth in the Bristol Channel, the UK National Tidal and Sea Level Facility (NTSLF) tested the potential for reducing tidal prediction errors, using three alternatives to the Harmonic Method of tidal prediction. The three methods evaluated were (1) the use of Artificial Neural Network (ANN) models, (2) the Species Concordance technique and (3) a simple empirical procedure for correcting Harmonic Method High Water predictions based upon a few recent observations (referred to as the Empirical Correction Method). This latter method was then successfully applied to sea level records from an additional 42 of the 45 tide gauges that comprise the UK Tide Gauge Network. Consequently, it is to be incorporated into the operational systems of the UK Coastal Monitoring and Forecasting Partnership in order to improve short-term sea level predictions for the UK and in particular, the accurate estimation of HW extremes.
Draft user's guide for UDOT mechanistic-empirical pavement design.
DOT National Transportation Integrated Search
2009-10-01
Validation of the new AASHTO Mechanistic-Empirical Pavement Design Guides (MEPDG) nationally calibrated pavement distress and smoothness prediction models when applied under Utah conditions, and local calibration of the new hot-mix asphalt (HMA) p...
A methodology for reduced order modeling and calibration of the upper atmosphere
NASA Astrophysics Data System (ADS)
Mehta, Piyush M.; Linares, Richard
2017-10-01
Atmospheric drag is the largest source of uncertainty in accurately predicting the orbit of satellites in low Earth orbit (LEO). Accurately predicting drag for objects that traverse LEO is critical to space situational awareness. Atmospheric models used for orbital drag calculations can be characterized either as empirical or physics-based (first principles based). Empirical models are fast to evaluate but offer limited real-time predictive/forecasting ability, while physics based models offer greater predictive/forecasting ability but require dedicated parallel computational resources. Also, calibration with accurate data is required for either type of models. This paper presents a new methodology based on proper orthogonal decomposition toward development of a quasi-physical, predictive, reduced order model that combines the speed of empirical and the predictive/forecasting capabilities of physics-based models. The methodology is developed to reduce the high dimensionality of physics-based models while maintaining its capabilities. We develop the methodology using the Naval Research Lab's Mass Spectrometer Incoherent Scatter model and show that the diurnal and seasonal variations can be captured using a small number of modes and parameters. We also present calibration of the reduced order model using the CHAMP and GRACE accelerometer-derived densities. Results show that the method performs well for modeling and calibration of the upper atmosphere.
Possible seasonality in large deep-focus earthquakes
NASA Astrophysics Data System (ADS)
Zhan, Zhongwen; Shearer, Peter M.
2015-09-01
Large deep-focus earthquakes (magnitude > 7.0, depth > 500 km) have exhibited strong seasonality in their occurrence times since the beginning of global earthquake catalogs. Of 60 such events from 1900 to the present, 42 have occurred in the middle half of each year. The seasonality appears strongest in the northwest Pacific subduction zones and weakest in the Tonga region. Taken at face value, the surplus of northern hemisphere summer events is statistically significant, but due to the ex post facto hypothesis testing, the absence of seasonality in smaller deep earthquakes, and the lack of a known physical triggering mechanism, we cannot rule out that the observed seasonality is just random chance. However, we can make a testable prediction of seasonality in future large deep-focus earthquakes, which, given likely earthquake occurrence rates, should be verified or falsified within a few decades. If confirmed, deep earthquake seasonality would challenge our current understanding of deep earthquakes.
Strength and Vulnerability Integration (SAVI): A Model of Emotional Well-Being Across Adulthood
Charles, Susan Turk
2010-01-01
The following paper presents the theoretical model of Strength and Vulnerability Integration (SAVI) to explain factors that influence emotion regulation and emotional well-being across adulthood. The model posits that trajectories of adult development are marked by age-related enhancement in the use of strategies that serve to avoid or limit exposure to negative stimuli, but age-related vulnerabilities in situations that elicit high levels of sustained emotional arousal. When older adults avoid or reduce exposure to emotional distress, they often respond better than younger adults; when they experience high levels of sustained emotional arousal, however, age-related advantages in emotional well-being are attenuated, and older adults are hypothesized to have greater difficulties returning to homeostasis. SAVI provides a testable model to understand the literature on emotion and aging and to predict trajectories of emotional experience across the adult life span. PMID:21038939
Strength and vulnerability integration: a model of emotional well-being across adulthood.
Charles, Susan Turk
2010-11-01
The following article presents the theoretical model of strength and vulnerability integration (SAVI) to explain factors that influence emotion regulation and emotional well-being across adulthood. The model posits that trajectories of adult development are marked by age-related enhancement in the use of strategies that serve to avoid or limit exposure to negative stimuli but by age-related vulnerabilities in situations that elicit high levels of sustained emotional arousal. When older adults avoid or reduce exposure to emotional distress, they often respond better than younger adults; when they experience high levels of sustained emotional arousal, however, age-related advantages in emotional well-being are attenuated, and older adults are hypothesized to have greater difficulties returning to homeostasis. SAVI provides a testable model to understand the literature on emotion and aging and to predict trajectories of emotional experience across the adult life span.
Modelling the molecular mechanisms of aging
Mc Auley, Mark T.; Guimera, Alvaro Martinez; Hodgson, David; Mcdonald, Neil; Mooney, Kathleen M.; Morgan, Amy E.
2017-01-01
The aging process is driven at the cellular level by random molecular damage that slowly accumulates with age. Although cells possess mechanisms to repair or remove damage, they are not 100% efficient and their efficiency declines with age. There are many molecular mechanisms involved and exogenous factors such as stress also contribute to the aging process. The complexity of the aging process has stimulated the use of computational modelling in order to increase our understanding of the system, test hypotheses and make testable predictions. As many different mechanisms are involved, a wide range of models have been developed. This paper gives an overview of the types of models that have been developed, the range of tools used, modelling standards and discusses many specific examples of models that have been grouped according to the main mechanisms that they address. We conclude by discussing the opportunities and challenges for future modelling in this field. PMID:28096317
NASA Astrophysics Data System (ADS)
Xu, Jin-Shi; Li, Chuan-Feng; Guo, Guang-Can
2016-11-01
In 1935, Einstein, Podolsky and Rosen published their influential paper proposing a now famous paradox (the EPR paradox) that threw doubt on the completeness of quantum mechanics. Two fundamental concepts: entanglement and steering, were given in the response to the EPR paper by Schrodinger, which both reflect the nonlocal nature of quantum mechanics. In 1964, John Bell obtained an experimentally testable inequality, in which its violation contradicts the prediction of local hidden variable models and agrees with that of quantum mechanics. Since then, great efforts have been made to experimentally investigate the nonlocal feature of quantum mechanics and many distinguished quantum properties were observed. In this work, along with the discussion of the development of quantum nonlocality, we would focus on our recent experimental efforts in investigating quantum correlations and their applications with optical systems, including the study of entanglement-assisted entropic uncertainty principle, Einstein-Podolsky-Rosen steering and the dynamics of quantum correlations.
An application of statistics to comparative metagenomics
Rodriguez-Brito, Beltran; Rohwer, Forest; Edwards, Robert A
2006-01-01
Background Metagenomics, sequence analyses of genomic DNA isolated directly from the environments, can be used to identify organisms and model community dynamics of a particular ecosystem. Metagenomics also has the potential to identify significantly different metabolic potential in different environments. Results Here we use a statistical method to compare curated subsystems, to predict the physiology, metabolism, and ecology from metagenomes. This approach can be used to identify those subsystems that are significantly different between metagenome sequences. Subsystems that were overrepresented in the Sargasso Sea and Acid Mine Drainage metagenome when compared to non-redundant databases were identified. Conclusion The methodology described herein applies statistics to the comparisons of metabolic potential in metagenomes. This analysis reveals those subsystems that are more, or less, represented in the different environments that are compared. These differences in metabolic potential lead to several testable hypotheses about physiology and metabolism of microbes from these ecosystems. PMID:16549025
An application of statistics to comparative metagenomics.
Rodriguez-Brito, Beltran; Rohwer, Forest; Edwards, Robert A
2006-03-20
Metagenomics, sequence analyses of genomic DNA isolated directly from the environments, can be used to identify organisms and model community dynamics of a particular ecosystem. Metagenomics also has the potential to identify significantly different metabolic potential in different environments. Here we use a statistical method to compare curated subsystems, to predict the physiology, metabolism, and ecology from metagenomes. This approach can be used to identify those subsystems that are significantly different between metagenome sequences. Subsystems that were overrepresented in the Sargasso Sea and Acid Mine Drainage metagenome when compared to non-redundant databases were identified. The methodology described herein applies statistics to the comparisons of metabolic potential in metagenomes. This analysis reveals those subsystems that are more, or less, represented in the different environments that are compared. These differences in metabolic potential lead to several testable hypotheses about physiology and metabolism of microbes from these ecosystems.
p p →A →Z h and the wrong-sign limit of the two-Higgs-doublet model
NASA Astrophysics Data System (ADS)
Ferreira, Pedro M.; Liebler, Stefan; Wittbrodt, Jonas
2018-03-01
We point out the importance of the decay channels A →Z h and H →V V in the wrong-sign limit of the two-Higgs-doublet model (2HDM) of type II. They can be the dominant decay modes at moderate values of tan β , even if the (pseudo)scalar mass is above the threshold where the decay into a pair of top quarks is kinematically open. Accordingly, large cross sections p p →A →Z h and p p →H →V V are obtained and currently probed by the LHC experiments, yielding conclusive statements about the remaining parameter space of the wrong-sign limit. In addition, mild excesses—as recently found in the ATLAS analysis b b ¯→A →Z h —could be explained. The wrong-sign limit makes other important testable predictions for the light Higgs boson couplings.
Bosdriesz, Evert; Magnúsdóttir, Stefanía; Bruggeman, Frank J; Teusink, Bas; Molenaar, Douwe
2015-06-01
Microorganisms rely on binding-protein assisted, active transport systems to scavenge for scarce nutrients. Several advantages of using binding proteins in such uptake systems have been proposed. However, a systematic, rigorous and quantitative analysis of the function of binding proteins is lacking. By combining knowledge of selection pressure and physiochemical constraints, we derive kinetic, thermodynamic, and stoichiometric properties of binding-protein dependent transport systems that enable a maximal import activity per amount of transporter. Under the hypothesis that this maximal specific activity of the transport complex is the selection objective, binding protein concentrations should exceed the concentration of both the scarce nutrient and the transporter. This increases the encounter rate of transporter with loaded binding protein at low substrate concentrations, thereby enhancing the affinity and specific uptake rate. These predictions are experimentally testable, and a number of observations confirm them. © 2015 FEBS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenyon, Scott J.; Bromley, Benjamin C., E-mail: skenyon@cfa.harvard.edu, E-mail: bromley@physics.utah.edu
2012-03-15
We investigate whether coagulation models of planet formation can explain the observed size distributions of trans-Neptunian objects (TNOs). Analyzing published and new calculations, we demonstrate robust relations between the size of the largest object and the slope of the size distribution for sizes 0.1 km and larger. These relations yield clear, testable predictions for TNOs and other icy objects throughout the solar system. Applying our results to existing observations, we show that a broad range of initial disk masses, planetesimal sizes, and fragmentation parameters can explain the data. Adding dynamical constraints on the initial semimajor axis of 'hot' Kuiper Beltmore » objects along with probable TNO formation times of 10-700 Myr restricts the viable models to those with a massive disk composed of relatively small (1-10 km) planetesimals.« less
The evolution of mimicry under constraints.
Holen, Øistein Haugsten; Johnstone, Rufus A
2004-11-01
The resemblance between mimetic organisms and their models varies from near perfect to very crude. One possible explanation, which has received surprisingly little attention, is that evolution can improve mimicry only at some cost to the mimetic organism. In this article, an evolutionary game theory model of mimicry is presented that incorporates such constraints. The model generates novel and testable predictions. First, Batesian mimics that are very common and/or mimic very weakly defended models should evolve either inaccurate mimicry (by stabilizing selection) or mimetic polymorphism. Second, Batesian mimics that are very common and/or mimic very weakly defended models are more likely to evolve mimetic polymorphism if they encounter predators at high rates and/or are bad at evading predator attacks. The model also examines how cognitive constraints acting on signal receivers may help determine evolutionarily stable levels of mimicry. Surprisingly, improved discrimination abilities among signal receivers may sometimes select for less accurate mimicry.
Beyond Λ CDM: Problems, solutions, and the road ahead
NASA Astrophysics Data System (ADS)
Bull, Philip; Akrami, Yashar; Adamek, Julian; Baker, Tessa; Bellini, Emilio; Beltrán Jiménez, Jose; Bentivegna, Eloisa; Camera, Stefano; Clesse, Sébastien; Davis, Jonathan H.; Di Dio, Enea; Enander, Jonas; Heavens, Alan; Heisenberg, Lavinia; Hu, Bin; Llinares, Claudio; Maartens, Roy; Mörtsell, Edvard; Nadathur, Seshadri; Noller, Johannes; Pasechnik, Roman; Pawlowski, Marcel S.; Pereira, Thiago S.; Quartin, Miguel; Ricciardone, Angelo; Riemer-Sørensen, Signe; Rinaldi, Massimiliano; Sakstein, Jeremy; Saltas, Ippocratis D.; Salzano, Vincenzo; Sawicki, Ignacy; Solomon, Adam R.; Spolyar, Douglas; Starkman, Glenn D.; Steer, Danièle; Tereno, Ismael; Verde, Licia; Villaescusa-Navarro, Francisco; von Strauss, Mikael; Winther, Hans A.
2016-06-01
Despite its continued observational successes, there is a persistent (and growing) interest in extending cosmology beyond the standard model, Λ CDM. This is motivated by a range of apparently serious theoretical issues, involving such questions as the cosmological constant problem, the particle nature of dark matter, the validity of general relativity on large scales, the existence of anomalies in the CMB and on small scales, and the predictivity and testability of the inflationary paradigm. In this paper, we summarize the current status of Λ CDM as a physical theory, and review investigations into possible alternatives along a number of different lines, with a particular focus on highlighting the most promising directions. While the fundamental problems are proving reluctant to yield, the study of alternative cosmologies has led to considerable progress, with much more to come if hopes about forthcoming high-precision observations and new theoretical ideas are fulfilled.
Unified TeV scale picture of baryogenesis and dark matter.
Babu, K S; Mohapatra, R N; Nasri, Salah
2007-04-20
We present a simple extension of the minimal supersymmetric standard model which provides a unified picture of cosmological baryon asymmetry and dark matter. Our model introduces a gauge singlet field N and a color triplet field X which couple to the right-handed quark fields. The out-of-equilibrium decay of the Majorana fermion N mediated by the exchange of the scalar field X generates adequate baryon asymmetry for MN approximately 100 GeV and MX approximately TeV. The scalar partner of N (denoted N1) is naturally the lightest SUSY particle as it has no gauge interactions and plays the role of dark matter. The model is experimentally testable in (i) neutron-antineutron oscillations with a transition time estimated to be around 10(10)sec, (ii) discovery of colored particles X at LHC with mass of order TeV, and (iii) direct dark matter detection with a predicted cross section in the observable range.
The evolution of dispersal in a Levins' type metapopulation model.
Jansen, Vincent A A; Vitalis, Renaud
2007-10-01
We study the evolution of the dispersal rate in a metapopulation model with extinction and colonization dynamics, akin to the model as originally described by Levins. To do so we extend the metapopulation model with a description of the within patch dynamics. By means of a separation of time scales we analytically derive a fitness expression from first principles for this model. The fitness function can be written as an inclusive fitness equation (Hamilton's rule). By recasting this equation in a form that emphasizes the effects of competition we show the effect of the local competition and the local population size on the evolution of dispersal. We find that the evolution of dispersal cannot be easily interpreted in terms of avoidance of kin competition, but rather that increased dispersal reduces the competitive ability. Our model also yields a testable prediction in term of relatedness and life-history parameters.
Detecting Rotational Superradiance in Fluid Laboratories
NASA Astrophysics Data System (ADS)
Cardoso, Vitor; Coutant, Antonin; Richartz, Mauricio; Weinfurtner, Silke
2016-12-01
Rotational superradiance was predicted theoretically decades ago, and is chiefly responsible for a number of important effects and phenomenology in black-hole physics. However, rotational superradiance has never been observed experimentally. Here, with the aim of probing superradiance in the lab, we investigate the behavior of sound and surface waves in fluids resting in a circular basin at the center of which a rotating cylinder is placed. We show that with a suitable choice for the material of the cylinder, surface and sound waves are amplified. Two types of instabilities are studied: one sets in whenever superradiant modes are confined near the rotating cylinder and the other, which does not rely on confinement, corresponds to a local excitation of the cylinder. Our findings are experimentally testable in existing fluid laboratories and, hence, offer experimental exploration and comparison of dynamical instabilities arising from rapidly rotating boundary layers in astrophysical as well as in fluid dynamical systems.
Minimal model linking two great mysteries: Neutrino mass and dark matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farzan, Yasaman
2009-10-01
We present an economic model that establishes a link between neutrino masses and properties of the dark matter candidate. The particle content of the model can be divided into two groups: light particles with masses lighter than the electroweak scale and heavy particles. The light particles, which also include the dark matter candidate, are predicted to show up in the low energy experiments such as (K{yields}l+missing energy), making the model testable. The heavy sector can show up at the LHC and may give rise to Br(l{sub i}{yields}l{sub j}{gamma}) close to the present bounds. In principle, the new couplings of themore » model can independently be derived from the data from the LHC and from the information on neutrino masses and lepton flavor violating rare decays, providing the possibility of an intensive cross-check of the model.« less
Brains studying brains: look before you think in vision
NASA Astrophysics Data System (ADS)
Zhaoping, Li
2016-06-01
Using our own brains to study our brains is extraordinary. For example, in vision this makes us naturally blind to our own blindness, since our impression of seeing our world clearly is consistent with our ignorance of what we do not see. Our brain employs its ‘conscious’ part to reason and make logical deductions using familiar rules and past experience. However, human vision employs many ‘subconscious’ brain parts that follow rules alien to our intuition. Our blindness to our unknown unknowns and our presumptive intuitions easily lead us astray in asking and formulating theoretical questions, as witnessed in many unexpected and counter-intuitive difficulties and failures encountered by generations of scientists. We should therefore pay a more than usual amount of attention and respect to experimental data when studying our brain. I show that this can be productive by reviewing two vision theories that have provided testable predictions and surprising insights.
Sequential pattern formation governed by signaling gradients
NASA Astrophysics Data System (ADS)
Jörg, David J.; Oates, Andrew C.; Jülicher, Frank
2016-10-01
Rhythmic and sequential segmentation of the embryonic body plan is a vital developmental patterning process in all vertebrate species. However, a theoretical framework capturing the emergence of dynamic patterns of gene expression from the interplay of cell oscillations with tissue elongation and shortening and with signaling gradients, is still missing. Here we show that a set of coupled genetic oscillators in an elongating tissue that is regulated by diffusing and advected signaling molecules can account for segmentation as a self-organized patterning process. This system can form a finite number of segments and the dynamics of segmentation and the total number of segments formed depend strongly on kinetic parameters describing tissue elongation and signaling molecules. The model accounts for existing experimental perturbations to signaling gradients, and makes testable predictions about novel perturbations. The variety of different patterns formed in our model can account for the variability of segmentation between different animal species.
Resolving Microzooplankton Functional Groups In A Size-Structured Planktonic Model
NASA Astrophysics Data System (ADS)
Taniguchi, D.; Dutkiewicz, S.; Follows, M. J.; Jahn, O.; Menden-Deuer, S.
2016-02-01
Microzooplankton are important marine grazers, often consuming a large fraction of primary productivity. They consist of a great diversity of organisms with different behaviors, characteristics, and rates. This functional diversity, and its consequences, are not currently reflected in large-scale ocean ecological simulations. How should these organisms be represented, and what are the implications for their biogeography? We develop a size-structured, trait-based model to characterize a diversity of microzooplankton functional groups. We compile and examine size-based laboratory data on the traits, revealing some patterns with size and functional group that we interpret with mechanistic theory. Fitting the model to the data provides parameterizations of key rates and properties, which we employ in a numerical ocean model. The diversity of grazing preference, rates, and trophic strategies enables the coexistence of different functional groups of micro-grazers under various environmental conditions, and the model produces testable predictions of the biogeography.
Brains studying brains: look before you think in vision.
Zhaoping, Li
2016-05-11
Using our own brains to study our brains is extraordinary. For example, in vision this makes us naturally blind to our own blindness, since our impression of seeing our world clearly is consistent with our ignorance of what we do not see. Our brain employs its 'conscious' part to reason and make logical deductions using familiar rules and past experience. However, human vision employs many 'subconscious' brain parts that follow rules alien to our intuition. Our blindness to our unknown unknowns and our presumptive intuitions easily lead us astray in asking and formulating theoretical questions, as witnessed in many unexpected and counter-intuitive difficulties and failures encountered by generations of scientists. We should therefore pay a more than usual amount of attention and respect to experimental data when studying our brain. I show that this can be productive by reviewing two vision theories that have provided testable predictions and surprising insights.
Examining the nature of retrocausal effects in biology and psychology
NASA Astrophysics Data System (ADS)
Mossbridge, Julia
2017-05-01
Multiple laboratories have reported physiological and psychological changes associated with future events that are designed to be unpredictable by normal sensory means. Such phenomena seem to be examples of retrocausality at the macroscopic level. Here I will discuss the characteristics of seemingly retrocausal effects in biology and psychology, specifically examining a biological and a psychological form of precognition, predictive anticipatory activity (PAA) and implicit precognition. The aim of this examination is to offer an analysis of the constraints posed by the characteristics of macroscopic retrocausal effects. Such constraints are critical to assessing any physical theory that purports to explain these effects. Following a brief introduction to recent research on PAA and implicit precognition, I will describe what I believe we have learned so far about the nature of these effects, and conclude with a testable, yet embryonic, model of macroscopic retrocausal phenomena.
New streams and springs after the 2014 Mw6.0 South Napa earthquake.
Wang, Chi-Yuen; Manga, Michael
2015-07-09
Many streams and springs, which were dry or nearly dry before the 2014 Mw6.0 South Napa earthquake, started to flow after the earthquake. A United States Geological Survey stream gauge also registered a coseismic increase in discharge. Public interest was heightened by a state of extreme drought in California. Since the new flows were not contaminated by pre-existing surface water, their composition allowed unambiguous identification of their origin. Following the earthquake we repeatedly surveyed the new flows, collecting data to test hypotheses about their origin. We show that the new flows originated from groundwater in nearby mountains released by the earthquake. The estimated total amount of new water is ∼ 10(6) m(3), about 1/40 of the annual water use in the Napa-Sonoma area. Our model also makes a testable prediction of a post-seismic decrease of seismic velocity in the shallow crust of the affected region.
Singlet-triplet fermionic dark matter and LHC phenomenology
NASA Astrophysics Data System (ADS)
Choubey, Sandhya; Khan, Sarif; Mitra, Manimala; Mondal, Subhadeep
2018-04-01
It is well known that for the pure standard model triplet fermionic WIMP-type dark matter (DM), the relic density is satisfied around 2 TeV. For such a heavy mass particle, the production cross-section at 13 TeV run of LHC will be very small. Extending the model further with a singlet fermion and a triplet scalar, DM relic density can be satisfied for even much lower masses. The lower mass DM can be copiously produced at LHC and hence the model can be tested at collider. For the present model we have studied the multi jet (≥ 2 j) + missing energy ([InlineEquation not available: see fulltext.]) signal and show that this can be detected in the near future of the LHC 13 TeV run. We also predict that the present model is testable by the earth based DM direct detection experiments like Xenon-1T and in future by Darwin.
Conceptual frameworks and methods for advancing invasion ecology.
Heger, Tina; Pahl, Anna T; Botta-Dukát, Zoltan; Gherardi, Francesca; Hoppe, Christina; Hoste, Ivan; Jax, Kurt; Lindström, Leena; Boets, Pieter; Haider, Sylvia; Kollmann, Johannes; Wittmann, Meike J; Jeschke, Jonathan M
2013-09-01
Invasion ecology has much advanced since its early beginnings. Nevertheless, explanation, prediction, and management of biological invasions remain difficult. We argue that progress in invasion research can be accelerated by, first, pointing out difficulties this field is currently facing and, second, looking for measures to overcome them. We see basic and applied research in invasion ecology confronted with difficulties arising from (A) societal issues, e.g., disparate perceptions of invasive species; (B) the peculiarity of the invasion process, e.g., its complexity and context dependency; and (C) the scientific methodology, e.g., imprecise hypotheses. To overcome these difficulties, we propose three key measures: (1) a checklist for definitions to encourage explicit definitions; (2) implementation of a hierarchy of hypotheses (HoH), where general hypotheses branch into specific and precisely testable hypotheses; and (3) platforms for improved communication. These measures may significantly increase conceptual clarity and enhance communication, thus advancing invasion ecology.
Mercury's magnetic field - A thermoelectric dynamo?
NASA Technical Reports Server (NTRS)
Stevenson, D. J.
1987-01-01
Permanent magnetism and conventional dynamo theory are possible but problematic explanations for the magnitude of the Mercurian magnetic field. A new model is proposed in which thermoelectric currents driven by temperature differences at a bumpy core-mantle boundary are responsible for the (unobserved) toroidal field, and the helicity of convective motions in a thin outer core (thickness of about 100 km) induces the observed poloidal field from the toroidal field. The observed field of about 3 x 10 to the -7th T can be reproduced provided the electrical conductivity of Mercury's semiconducting mantle approaches 1000/ohm per m. This model may be testable by future missions to Mercury because it predicts a more complicated field geometry than conventional dynamo theories. However, it is argued that polar wander may cause the core-mantle topography to migrate so that some aspects of the rotational symmetry may be reflected in the observed field.
NASA Technical Reports Server (NTRS)
Schlegel, R. G.
1982-01-01
It is important for industry and NASA to assess the status of acoustic design technology for predicting and controlling helicopter external noise in order for a meaningful research program to be formulated which will address this problem. The prediction methodologies available to the designer and the acoustic engineer are three-fold. First is what has been described as a first principle analysis. This analysis approach attempts to remove any empiricism from the analysis process and deals with a theoretical mechanism approach to predicting the noise. The second approach attempts to combine first principle methodology (when available) with empirical data to formulate source predictors which can be combined to predict vehicle levels. The third is an empirical analysis, which attempts to generalize measured trends into a vehicle noise prediction method. This paper will briefly address each.
Empirical Prediction of Aircraft Landing Gear Noise
NASA Technical Reports Server (NTRS)
Golub, Robert A. (Technical Monitor); Guo, Yue-Ping
2005-01-01
This report documents a semi-empirical/semi-analytical method for landing gear noise prediction. The method is based on scaling laws of the theory of aerodynamic noise generation and correlation of these scaling laws with current available test data. The former gives the method a sound theoretical foundation and the latter quantitatively determines the relations between the parameters of the landing gear assembly and the far field noise, enabling practical predictions of aircraft landing gear noise, both for parametric trends and for absolute noise levels. The prediction model is validated by wind tunnel test data for an isolated Boeing 737 landing gear and by flight data for the Boeing 777 airplane. In both cases, the predictions agree well with data, both in parametric trends and in absolute noise levels.
A SEU-Hard Flip-Flop for Antifuse FPGAs
NASA Technical Reports Server (NTRS)
Katz, R.; Wang, J. J.; McCollum, J.; Cronquist, B.; Chan, R.; Yu, D.; Kleyner, I.; Day, John H. (Technical Monitor)
2001-01-01
A single event upset (SEU)-hardened flip-flop has been designed and developed for antifuse Field Programmable Gate Array (FPGA) application. Design and application issues, testability, test methods, simulation, and results are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, B.C.
This study is an assessment of the ground shock which may be generated in the event of an accidental explosion at J5 or the Proposed Large Altitude Rocket Cell (LARC) at the Arnold Engineering Development Center (AEDC). The assessment is accomplished by reviewing existing empirical relationships for predicting ground motion from ground shock. These relationships are compared with data for surface explosions at sites with similar geology and with yields similar to expected conditions at AEDC. Empirical relationships are developed from these data and a judgment made whether to use existing empirical relationships or the relationships developed in this study.more » An existing relationship (Lipner et al.) is used to predict velocity; the empirical relationships developed in the course of this study are used to predict acceleration and displacement. The ground motions are presented in table form and as contour plots. Included also is a discussion of damage criteria from blast and earthquake studies. This report recommends using velocity rather than acceleration as an indicator of structural blast damage. It is recommended that v = 2 ips (v = .167 fps) be used as the damage threshold value (no major damage for v less than or equal to 2 ips). 13 references, 25 figures, 6 tables.« less
The changing features of the body-mind problem.
Agassi, Joseph
2007-01-01
The body-mind problem invites scientific study, since mental events are repeated and repeatable and invite testable explanations. They seemed troublesome because of the classical theory of substance that failed to solve its own central problems. These are soluble with the aid of the theory of the laws of nature, particularly in its emergentist version [Bunge, M., 1980. The Body-mind Problem, Pergamon, Oxford] that invites refutable explanations [Popper, K.R., 1959. The Logic of Scientific Discovery, Hutchinson, London]. The view of mental properties as emergent is a modification of the two chief classical views, materialism and dualism. As this view invites testable explanations of events of the inner world, it is better than the quasi-behaviorist view of self-awareness as computer-style self-monitoring [Minsky, M., Laske, O., 1992. A conversation with Marvin Minsky. AI Magazine 13 (3), 31-45].
Testability of evolutionary game dynamics based on experimental economics data
NASA Astrophysics Data System (ADS)
Wang, Yijia; Chen, Xiaojie; Wang, Zhijian
2017-11-01
Understanding the dynamic processes of a real game system requires an appropriate dynamics model, and rigorously testing a dynamics model is nontrivial. In our methodological research, we develop an approach to testing the validity of game dynamics models that considers the dynamic patterns of angular momentum and speed as measurement variables. Using Rock-Paper-Scissors (RPS) games as an example, we illustrate the geometric patterns in the experiment data. We then derive the related theoretical patterns from a series of typical dynamics models. By testing the goodness-of-fit between the experimental and theoretical patterns, we show that the validity of these models can be evaluated quantitatively. Our approach establishes a link between dynamics models and experimental systems, which is, to the best of our knowledge, the most effective and rigorous strategy for ascertaining the testability of evolutionary game dynamics models.
Design for testability and diagnosis at the system-level
NASA Technical Reports Server (NTRS)
Simpson, William R.; Sheppard, John W.
1993-01-01
The growing complexity of full-scale systems has surpassed the capabilities of most simulation software to provide detailed models or gate-level failure analyses. The process of system-level diagnosis approaches the fault-isolation problem in a manner that differs significantly from the traditional and exhaustive failure mode search. System-level diagnosis is based on a functional representation of the system. For example, one can exercise one portion of a radar algorithm (the Fast Fourier Transform (FFT) function) by injecting several standard input patterns and comparing the results to standardized output results. An anomalous output would point to one of several items (including the FFT circuit) without specifying the gate or failure mode. For system-level repair, identifying an anomalous chip is sufficient. We describe here an information theoretic and dependency modeling approach that discards much of the detailed physical knowledge about the system and analyzes its information flow and functional interrelationships. The approach relies on group and flow associations and, as such, is hierarchical. Its hierarchical nature allows the approach to be applicable to any level of complexity and to any repair level. This approach has been incorporated in a product called STAMP (System Testability and Maintenance Program) which was developed and refined through more than 10 years of field-level applications to complex system diagnosis. The results have been outstanding, even spectacular in some cases. In this paper we describe system-level testability, system-level diagnoses, and the STAMP analysis approach, as well as a few STAMP applications.
Empirical predictions of hypervelocity impact damage to the space station
NASA Technical Reports Server (NTRS)
Rule, W. K.; Hayashida, K. B.
1991-01-01
A family of user-friendly, DOS PC based, Microsoft BASIC programs written to provide spacecraft designers with empirical predictions of space debris damage to orbiting spacecraft is described. The spacecraft wall configuration is assumed to consist of multilayer insulation (MLI) placed between a Whipple style bumper and the pressure wall. Predictions are based on data sets of experimental results obtained from simulating debris impacts on spacecraft using light gas guns on Earth. A module of the program facilitates the creation of the data base of experimental results that are used by the damage prediction modules of the code. The user has the choice of three different prediction modules to predict damage to the bumper, the MLI, and the pressure wall. One prediction module is based on fitting low order polynomials through subsets of the experimental data. Another prediction module fits functions based on nondimensional parameters through the data. The last prediction technique is a unique approach that is based on weighting the experimental data according to the distance from the design point.
Simple, empirical approach to predict neutron capture cross sections from nuclear masses
NASA Astrophysics Data System (ADS)
Couture, A.; Casten, R. F.; Cakirli, R. B.
2017-12-01
Background: Neutron capture cross sections are essential to understanding the astrophysical s and r processes, the modeling of nuclear reactor design and performance, and for a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40 % , and has limited predictive power, with predictions from different models rapidly differing by an order of magnitude a few nucleons from the last measurement. Purpose: To develop a new approach to predicting neutron capture cross sections over broad ranges of nuclei that accounts for their values where known and which has reliable predictive power with small uncertainties for many nuclei where they are unknown. Methods: Experimental neutron capture cross sections were compared to empirical mass observables in regions of similar structure. Results: We present an extremely simple method, based solely on empirical mass observables, that correlates neutron capture cross sections in the critical energy range from a few keV to a couple hundred keV. We show that regional cross sections are compactly correlated in medium and heavy mass nuclei with the two-neutron separation energy. These correlations are easily amenable to predict unknown cross sections, often converting the usual extrapolations to more reliable interpolations. It almost always reproduces existing data to within 25 % and estimated uncertainties are below about 40 % up to 10 nucleons beyond known data. Conclusions: Neutron capture cross sections display a surprisingly strong connection to the two-neutron separation energy, a nuclear structure property. The simple, empirical correlations uncovered provide model-independent predictions of neutron capture cross sections, extending far from stability, including for nuclei of the highest sensitivity to r -process nucleosynthesis.
NASA Astrophysics Data System (ADS)
Emami Niri, Mohammad; Amiri Kolajoobi, Rasool; Khodaiy Arbat, Mohammad; Shahbazi Raz, Mahdi
2018-06-01
Seismic wave velocities, along with petrophysical data, provide valuable information during the exploration and development stages of oil and gas fields. The compressional-wave velocity (VP ) is acquired using conventional acoustic logging tools in many drilled wells. But the shear-wave velocity (VS ) is recorded using advanced logging tools only in a limited number of wells, mainly because of the high operational costs. In addition, laboratory measurements of seismic velocities on core samples are expensive and time consuming. So, alternative methods are often used to estimate VS . Heretofore, several empirical correlations that predict VS by using well logging measurements and petrophysical data such as VP , porosity and density are proposed. However, these empirical relations can only be used in limited cases. The use of intelligent systems and optimization algorithms are inexpensive, fast and efficient approaches for predicting VS. In this study, in addition to the widely used Greenberg–Castagna empirical method, we implement three relatively recently developed metaheuristic algorithms to construct linear and nonlinear models for predicting VS : teaching–learning based optimization, imperialist competitive and artificial bee colony algorithms. We demonstrate the applicability and performance of these algorithms to predict Vs using conventional well logs in two field data examples, a sandstone formation from an offshore oil field and a carbonate formation from an onshore oil field. We compared the estimated VS using each of the employed metaheuristic approaches with observed VS and also with those predicted by Greenberg–Castagna relations. The results indicate that, for both sandstone and carbonate case studies, all three implemented metaheuristic algorithms are more efficient and reliable than the empirical correlation to predict VS . The results also demonstrate that in both sandstone and carbonate case studies, the performance of an artificial bee colony algorithm in VS prediction is slightly higher than two other alternative employed approaches.
Assessment of Current Jet Noise Prediction Capabilities
NASA Technical Reports Server (NTRS)
Hunter, Craid A.; Bridges, James E.; Khavaran, Abbas
2008-01-01
An assessment was made of the capability of jet noise prediction codes over a broad range of jet flows, with the objective of quantifying current capabilities and identifying areas requiring future research investment. Three separate codes in NASA s possession, representative of two classes of jet noise prediction codes, were evaluated, one empirical and two statistical. The empirical code is the Stone Jet Noise Module (ST2JET) contained within the ANOPP aircraft noise prediction code. It is well documented, and represents the state of the art in semi-empirical acoustic prediction codes where virtual sources are attributed to various aspects of noise generation in each jet. These sources, in combination, predict the spectral directivity of a jet plume. A total of 258 jet noise cases were examined on the ST2JET code, each run requiring only fractions of a second to complete. Two statistical jet noise prediction codes were also evaluated, JeNo v1, and Jet3D. Fewer cases were run for the statistical prediction methods because they require substantially more resources, typically a Reynolds-Averaged Navier-Stokes solution of the jet, volume integration of the source statistical models over the entire plume, and a numerical solution of the governing propagation equation within the jet. In the evaluation process, substantial justification of experimental datasets used in the evaluations was made. In the end, none of the current codes can predict jet noise within experimental uncertainty. The empirical code came within 2dB on a 1/3 octave spectral basis for a wide range of flows. The statistical code Jet3D was within experimental uncertainty at broadside angles for hot supersonic jets, but errors in peak frequency and amplitude put it out of experimental uncertainty at cooler, lower speed conditions. Jet3D did not predict changes in directivity in the downstream angles. The statistical code JeNo,v1 was within experimental uncertainty predicting noise from cold subsonic jets at all angles, but did not predict changes with heating of the jet and did not account for directivity changes at supersonic conditions. Shortcomings addressed here give direction for future work relevant to the statistical-based prediction methods. A full report will be released as a chapter in a NASA publication assessing the state of the art in aircraft noise prediction.
Schulz, Stefan; Woestmann, Barbara; Huenges, Bert; Schweikardt, Christoph; Schäfer, Thorsten
2012-01-01
Objectives: It was investigated how students judge the teaching of medical ethics and the history of medicine at the start and during their studies, and the influence which subject-specific teaching of the history, theory and ethics of medicine (GTE) - or the lack thereof - has on the judgement of these subjects. Methods: From a total of 533 students who were in their first and 5th semester of the Bochum Model curriculum (GTE teaching from the first semester onwards) or followed the traditional curriculum (GTE teaching in the 5th/6th semester), questionnaires were requested in the winter semester 2005/06 and in the summer semester 2006. They were asked both before and after the 1st and 5th (model curriculum) or 6th semester (traditional curriculum). We asked students to judge the importance of teaching medical ethics and the history of medicine, the significance of these subjects for physicians and about teachability and testability (Likert scale from -2 (do not agree at all) to +2 (agree completely)). Results: 331 questionnaire pairs were included in the study. There were no significant differences between the students of the two curricula at the start of the 1st semester. The views on medical ethics and the history of medicine, in contrast, were significantly different at the start of undergraduate studies: The importance of medical ethics for the individual and the physician was considered very high but their teachability and testability were rated considerably worse. For the history of medicine, the results were exactly opposite. GTE teaching led to a more positive assessment of items previously ranked less favourably in both curricula. A lack of teaching led to a drop in the assessment of both subjects which had previously been rated well. Conclusion: Consistent with the literature, our results support the hypothesis that the teaching of GTE has a positive impact on the views towards the history and ethics of medicine, with a lack of teaching having a negative impact. Therefore the teaching of GTE should already begin in the 1st semester. The teaching of GTE must take into account that even right at the start of their studies, students judge medical ethics and the history of medicine differently. PMID:22403593
Schulz, Stefan; Woestmann, Barbara; Huenges, Bert; Schweikardt, Christoph; Schäfer, Thorsten
2012-01-01
It was investigated how students judge the teaching of medical ethics and the history of medicine at the start and during their studies, and the influence which subject-specific teaching of the history, theory and ethics of medicine (GTE)--or the lack thereof--has on the judgement of these subjects. From a total of 533 students who were in their first and 5th semester of the Bochum Model curriculum (GTE teaching from the first semester onwards) or followed the traditional curriculum (GTE teaching in the 5th/6th semester), questionnaires were requested in the winter semester 2005/06 and in the summer semester 2006. They were asked both before and after the 1st and 5th (model curriculum) or 6th semester (traditional curriculum). We asked students to judge the importance of teaching medical ethics and the history of medicine, the significance of these subjects for physicians and about teachability and testability (Likert scale from -2 (do not agree at all) to +2 (agree completely)). 331 questionnaire pairs were included in the study. There were no significant differences between the students of the two curricula at the start of the 1st semester. The views on medical ethics and the history of medicine, in contrast, were significantly different at the start of undergraduate studies: The importance of medical ethics for the individual and the physician was considered very high but their teachability and testability were rated considerably worse. For the history of medicine, the results were exactly opposite. GTE teaching led to a more positive assessment of items previously ranked less favourably in both curricula. A lack of teaching led to a drop in the assessment of both subjects which had previously been rated well. Consistent with the literature, our results support the hypothesis that the teaching of GTE has a positive impact on the views towards the history and ethics of medicine, with a lack of teaching having a negative impact. Therefore the teaching of GTE should already begin in the 1st semester. The teaching of GTE must take into account that even right at the start of their studies, students judge medical ethics and the history of medicine differently.
Iowa calibration of MEPDG performance prediction models.
DOT National Transportation Integrated Search
2013-06-01
This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement : performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 : representative p...
Probabilistic empirical prediction of seasonal climate: evaluation and potential applications
NASA Astrophysics Data System (ADS)
Dieppois, B.; Eden, J.; van Oldenborgh, G. J.
2017-12-01
Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a new evaluation of an established empirical system used to predict seasonal climate across the globe. Forecasts for surface air temperature, precipitation and sea level pressure are produced by the KNMI Probabilistic Empirical Prediction (K-PREP) system every month and disseminated via the KNMI Climate Explorer (climexp.knmi.nl). K-PREP is based on multiple linear regression and built on physical principles to the fullest extent with predictive information taken from the global CO2-equivalent concentration, large-scale modes of variability in the climate system and regional-scale information. K-PREP seasonal forecasts for the period 1981-2016 will be compared with corresponding dynamically generated forecasts produced by operational forecast systems. While there are many regions of the world where empirical forecast skill is extremely limited, several areas are identified where K-PREP offers comparable skill to dynamical systems. We discuss two key points in the future development and application of the K-PREP system: (a) the potential for K-PREP to provide a more useful basis for reference forecasts than those based on persistence or climatology, and (b) the added value of including K-PREP forecast information in multi-model forecast products, at least for known regions of good skill. We also discuss the potential development of stakeholder-driven applications of the K-PREP system, including empirical forecasts for circumboreal fire activity.
Marto, Aminaton; Jahed Armaghani, Danial; Tonnizam Mohamad, Edy; Makhtar, Ahmad Mahir
2014-01-01
Flyrock is one of the major disturbances induced by blasting which may cause severe damage to nearby structures. This phenomenon has to be precisely predicted and subsequently controlled through the changing in the blast design to minimize potential risk of blasting. The scope of this study is to predict flyrock induced by blasting through a novel approach based on the combination of imperialist competitive algorithm (ICA) and artificial neural network (ANN). For this purpose, the parameters of 113 blasting operations were accurately recorded and flyrock distances were measured for each operation. By applying the sensitivity analysis, maximum charge per delay and powder factor were determined as the most influential parameters on flyrock. In the light of this analysis, two new empirical predictors were developed to predict flyrock distance. For a comparison purpose, a predeveloped backpropagation (BP) ANN was developed and the results were compared with those of the proposed ICA-ANN model and empirical predictors. The results clearly showed the superiority of the proposed ICA-ANN model in comparison with the proposed BP-ANN model and empirical approaches. PMID:25147856
A Formal Approach to Empirical Dynamic Model Optimization and Validation
NASA Technical Reports Server (NTRS)
Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
Marto, Aminaton; Hajihassani, Mohsen; Armaghani, Danial Jahed; Mohamad, Edy Tonnizam; Makhtar, Ahmad Mahir
2014-01-01
Flyrock is one of the major disturbances induced by blasting which may cause severe damage to nearby structures. This phenomenon has to be precisely predicted and subsequently controlled through the changing in the blast design to minimize potential risk of blasting. The scope of this study is to predict flyrock induced by blasting through a novel approach based on the combination of imperialist competitive algorithm (ICA) and artificial neural network (ANN). For this purpose, the parameters of 113 blasting operations were accurately recorded and flyrock distances were measured for each operation. By applying the sensitivity analysis, maximum charge per delay and powder factor were determined as the most influential parameters on flyrock. In the light of this analysis, two new empirical predictors were developed to predict flyrock distance. For a comparison purpose, a predeveloped backpropagation (BP) ANN was developed and the results were compared with those of the proposed ICA-ANN model and empirical predictors. The results clearly showed the superiority of the proposed ICA-ANN model in comparison with the proposed BP-ANN model and empirical approaches.
Empirical and semi-analytical models for predicting peak outflows caused by embankment dam failures
NASA Astrophysics Data System (ADS)
Wang, Bo; Chen, Yunliang; Wu, Chao; Peng, Yong; Song, Jiajun; Liu, Wenjun; Liu, Xin
2018-07-01
Prediction of peak discharge of floods has attracted great attention for researchers and engineers. In present study, nine typical nonlinear mathematical models are established based on database of 40 historical dam failures. The first eight models that were developed with a series of regression analyses are purely empirical, while the last one is a semi-analytical approach that was derived from an analytical solution of dam-break floods in a trapezoidal channel. Water depth above breach invert (Hw), volume of water stored above breach invert (Vw), embankment length (El), and average embankment width (Ew) are used as independent variables to develop empirical formulas of estimating the peak outflow from breached embankment dams. It is indicated from the multiple regression analysis that a function using the former two variables (i.e., Hw and Vw) produce considerably more accurate results than that using latter two variables (i.e., El and Ew). It is shown that the semi-analytical approach works best in terms of both prediction accuracy and uncertainty, and the established empirical models produce considerably reasonable results except the model only using El. Moreover, present models have been compared with other models available in literature for estimating peak discharge.
NASA Astrophysics Data System (ADS)
Park, Joonam; Appiah, Williams Agyei; Byun, Seoungwoo; Jin, Dahee; Ryou, Myung-Hyun; Lee, Yong Min
2017-10-01
To overcome the limitation of simple empirical cycle life models based on only equivalent circuits, we attempt to couple a conventional empirical capacity loss model with Newman's porous composite electrode model, which contains both electrochemical reaction kinetics and material/charge balances. In addition, an electrolyte depletion function is newly introduced to simulate a sudden capacity drop at the end of cycling, which is frequently observed in real lithium-ion batteries (LIBs). When simulated electrochemical properties are compared with experimental data obtained with 20 Ah-level graphite/LiFePO4 LIB cells, our semi-empirical model is sufficiently accurate to predict a voltage profile having a low standard deviation of 0.0035 V, even at 5C. Additionally, our model can provide broad cycle life color maps under different c-rate and depth-of-discharge operating conditions. Thus, this semi-empirical model with an electrolyte depletion function will be a promising platform to predict long-term cycle lives of large-format LIB cells under various operating conditions.
A Programmable Cellular-Automata Polarized Dirac Vacuum
NASA Astrophysics Data System (ADS)
Osoroma, Drahcir S.
2013-09-01
We explore properties of a `Least Cosmological Unit' (LCU) as an inherent spacetime raster tiling or tessellating the unique backcloth of Holographic Anthropic Multiverse (HAM) cosmology as an array of programmable cellular automata. The HAM vacuum is a scale-invariant HD extension of a covariant polarized Dirac vacuum with `bumps' and `holes' typically described by extended electromagnetic theory corresponding to an Einstein energy-dependent spacetime metric admitting a periodic photon mass. The new cosmology incorporates a unique form of M-Theoretic Calabi-Yau-Poincaré Dodecadedral-AdS5-DS5space (PDS) with mirror symmetry best described by an HD extension of Cramer's Transactional Interpretation when integrated also with an HD extension of the de Broglie-Bohm-Vigier causal interpretation of quantum theory. We incorporate a unique form of large-scale additional dimensionality (LSXD) bearing some similarity to that conceived by Randall and Sundrum; and extend the fundamental basis of our model to the Unified Field, UF. A Sagnac Effect rf-pulsed incursive resonance hierarchy is utilized to manipulate and ballistically program the geometric-topological properties of this putative LSXD space-spacetime network. The model is empirically testable; and it is proposed that a variety of new technologies will arise from ballistic programming of tessellated LCU vacuum cellular automata.
A testable theory of problem solving courts: Avoiding past empirical and legal failures.
Wiener, Richard L; Winick, Bruce J; Georges, Leah Skovran; Castro, Anthony
2010-01-01
Recent years have seen a proliferation of problem solving courts designed to rehabilitate certain classes of offenders and thereby resolve the underlying problems that led to their court involvement in the first place. Some commentators have reacted positively to these courts, considering them an extension of the philosophy and logic of Therapeutic Jurisprudence, but others show concern that the discourse surrounding these specialty courts has not examined their process or outcomes critically enough. This paper examines that criticism from historical and social scientific perspectives. The analysis culminates in a model that describes how offenders are likely to respond to the process as they engage in problem solving court programs and the ways in which those courts might impact subsequent offender conduct. This Therapeutic Jurisprudence model of problem solving courts draws heavily on social cognitive psychology and more specifically on theories of procedural justice, motivation, and anticipated emotion to offer an explanation of how offenders respond to these programs. We offer this model as a lens through which social scientists can begin to address the concern that there is not enough critical analysis of the process and outcome of these courts. Applying this model to specialty courts constitutes an important step in critically examining the contribution of problem solving courts. Copyright © 2010 Elsevier Ltd. All rights reserved.
Decision-support models for empiric antibiotic selection in Gram-negative bloodstream infections.
MacFadden, D R; Coburn, B; Shah, N; Robicsek, A; Savage, R; Elligsen, M; Daneman, N
2018-04-25
Early empiric antibiotic therapy in patients can improve clinical outcomes in Gram-negative bacteraemia. However, the widespread prevalence of antibiotic-resistant pathogens compromises our ability to provide adequate therapy while minimizing use of broad antibiotics. We sought to determine whether readily available electronic medical record data could be used to develop predictive models for decision support in Gram-negative bacteraemia. We performed a multi-centre cohort study, in Canada and the USA, of hospitalized patients with Gram-negative bloodstream infection from April 2010 to March 2015. We analysed multivariable models for prediction of antibiotic susceptibility at two empiric windows: Gram-stain-guided and pathogen-guided treatment. Decision-support models for empiric antibiotic selection were developed based on three clinical decision thresholds of acceptable adequate coverage (80%, 90% and 95%). A total of 1832 patients with Gram-negative bacteraemia were evaluated. Multivariable models showed good discrimination across countries and at both Gram-stain-guided (12 models, areas under the curve (AUCs) 0.68-0.89, optimism-corrected AUCs 0.63-0.85) and pathogen-guided (12 models, AUCs 0.75-0.98, optimism-corrected AUCs 0.64-0.95) windows. Compared to antibiogram-guided therapy, decision-support models of antibiotic selection incorporating individual patient characteristics and prior culture results have the potential to increase use of narrower-spectrum antibiotics (in up to 78% of patients) while reducing inadequate therapy. Multivariable models using readily available epidemiologic factors can be used to predict antimicrobial susceptibility in infecting pathogens with reasonable discriminatory ability. Implementation of sequential predictive models for real-time individualized empiric antibiotic decision-making has the potential to both optimize adequate coverage for patients while minimizing overuse of broad-spectrum antibiotics, and therefore requires further prospective evaluation. Readily available epidemiologic risk factors can be used to predict susceptibility of Gram-negative organisms among patients with bacteraemia, using automated decision-making models. Copyright © 2018 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
Bobovská, Adela; Tvaroška, Igor; Kóňa, Juraj
2016-05-01
Human Golgi α-mannosidase II (GMII), a zinc ion co-factor dependent glycoside hydrolase (E.C.3.2.1.114), is a pharmaceutical target for the design of inhibitors with anti-cancer activity. The discovery of an effective inhibitor is complicated by the fact that all known potent inhibitors of GMII are involved in unwanted co-inhibition with lysosomal α-mannosidase (LMan, E.C.3.2.1.24), a relative to GMII. Routine empirical QSAR models for both GMII and LMan did not work with a required accuracy. Therefore, we have developed a fast computational protocol to build predictive models combining interaction energy descriptors from an empirical docking scoring function (Glide-Schrödinger), Linear Interaction Energy (LIE) method, and quantum mechanical density functional theory (QM-DFT) calculations. The QSAR models were built and validated with a library of structurally diverse GMII and LMan inhibitors and non-active compounds. A critical role of QM-DFT descriptors for the more accurate prediction abilities of the models is demonstrated. The predictive ability of the models was significantly improved when going from the empirical docking scoring function to mixed empirical-QM-DFT QSAR models (Q(2)=0.78-0.86 when cross-validation procedures were carried out; and R(2)=0.81-0.83 for a testing set). The average error for the predicted ΔGbind decreased to 0.8-1.1kcalmol(-1). Also, 76-80% of non-active compounds were successfully filtered out from GMII and LMan inhibitors. The QSAR models with the fragmented QM-DFT descriptors may find a useful application in structure-based drug design where pure empirical and force field methods reached their limits and where quantum mechanics effects are critical for ligand-receptor interactions. The optimized models will apply in lead optimization processes for GMII drug developments. Copyright © 2016 Elsevier Inc. All rights reserved.
Sources, Sinks, and Model Accuracy
Spatial demographic models are a necessary tool for understanding how to manage landscapes sustainably for animal populations. These models, therefore, must offer precise and testable predications about animal population dynamics and how animal demographic parameters respond to ...
PREDICTING CHEMICAL RESIDUES IN AQUATIC FOOD CHAINS
The need to accurately predict chemical accumulation in aquatic organisms is critical for a variety of environmental applications including the assessment of contaminated sediments. Approaches for predicting chemical residues can be divided into two general classes, empirical an...
Learning linear transformations between counting-based and prediction-based word embeddings
Hayashi, Kohei; Kawarabayashi, Ken-ichi
2017-01-01
Despite the growing interest in prediction-based word embedding learning methods, it remains unclear as to how the vector spaces learnt by the prediction-based methods differ from that of the counting-based methods, or whether one can be transformed into the other. To study the relationship between counting-based and prediction-based embeddings, we propose a method for learning a linear transformation between two given sets of word embeddings. Our proposal contributes to the word embedding learning research in three ways: (a) we propose an efficient method to learn a linear transformation between two sets of word embeddings, (b) using the transformation learnt in (a), we empirically show that it is possible to predict distributed word embeddings for novel unseen words, and (c) empirically it is possible to linearly transform counting-based embeddings to prediction-based embeddings, for frequent words, different POS categories, and varying degrees of ambiguities. PMID:28926629
NASA Astrophysics Data System (ADS)
Chen, Dar-Hsin; Chou, Heng-Chih; Wang, David; Zaabar, Rim
2011-06-01
Most empirical research of the path-dependent, exotic-option credit risk model focuses on developed markets. Taking Taiwan as an example, this study investigates the bankruptcy prediction performance of the path-dependent, barrier option model in the emerging market. We adopt Duan's (1994) [11], (2000) [12] transformed-data maximum likelihood estimation (MLE) method to directly estimate the unobserved model parameters, and compare the predictive ability of the barrier option model to the commonly adopted credit risk model, Merton's model. Our empirical findings show that the barrier option model is more powerful than Merton's model in predicting bankruptcy in the emerging market. Moreover, we find that the barrier option model predicts bankruptcy much better for highly-leveraged firms. Finally, our findings indicate that the prediction accuracy of the credit risk model can be improved by higher asset liquidity and greater financial transparency.
Hurst exponent and prediction based on weak-form efficient market hypothesis of stock markets
NASA Astrophysics Data System (ADS)
Eom, Cheoljun; Choi, Sunghoon; Oh, Gabjin; Jung, Woo-Sung
2008-07-01
We empirically investigated the relationships between the degree of efficiency and the predictability in financial time-series data. The Hurst exponent was used as the measurement of the degree of efficiency, and the hit rate calculated from the nearest-neighbor prediction method was used for the prediction of the directions of future price changes. We used 60 market indexes of various countries. We empirically discovered that the relationship between the degree of efficiency (the Hurst exponent) and the predictability (the hit rate) is strongly positive. That is, a market index with a higher Hurst exponent tends to have a higher hit rate. These results suggested that the Hurst exponent is useful for predicting future price changes. Furthermore, we also discovered that the Hurst exponent and the hit rate are useful as standards that can distinguish emerging capital markets from mature capital markets.
Gas Generator Feedline Orifice Sizing Methodology: Effects of Unsteadiness and Non-Axisymmetric Flow
NASA Technical Reports Server (NTRS)
Rothermel, Jeffry; West, Jeffrey S.
2011-01-01
Engine LH2 and LO2 gas generator feed assemblies were modeled with computational fluid dynamics (CFD) methods at 100% rated power level, using on-center square- and round-edge orifices. The purpose of the orifices is to regulate the flow of fuel and oxidizer to the gas generator, enabling optimal power supply to the turbine and pump assemblies. The unsteady Reynolds-Averaged Navier-Stokes equations were solved on unstructured grids at second-order spatial and temporal accuracy. The LO2 model was validated against published experimental data and semi-empirical relationships for thin-plate orifices over a range of Reynolds numbers. Predictions for the LO2 square- and round-edge orifices precisely match experiment and semi-empirical formulas, despite complex feedline geometry whereby a portion of the flow from the engine main feedlines travels at a right-angle through a smaller-diameter pipe containing the orifice. Predictions for LH2 square- and round-edge orifice designs match experiment and semi-empirical formulas to varying degrees depending on the semi-empirical formula being evaluated. LO2 mass flow rate through the square-edge orifice is predicted to be 25 percent less than the flow rate budgeted in the original engine balance, which was subsequently modified. LH2 mass flow rate through the square-edge orifice is predicted to be 5 percent greater than the flow rate budgeted in the engine balance. Since CFD predictions for LO2 and LH2 square-edge orifice pressure loss coefficients, K, both agree with published data, the equation for K has been used to define a procedure for orifice sizing.
Modeling the risk of water pollution by pesticides from imbalanced data.
Trajanov, Aneta; Kuzmanovski, Vladimir; Real, Benoit; Perreau, Jonathan Marks; Džeroski, Sašo; Debeljak, Marko
2018-04-30
The pollution of ground and surface waters with pesticides is a serious ecological issue that requires adequate treatment. Most of the existing water pollution models are mechanistic mathematical models. While they have made a significant contribution to understanding the transfer processes, they face the problem of validation because of their complexity, the user subjectivity in their parameterization, and the lack of empirical data for validation. In addition, the data describing water pollution with pesticides are, in most cases, very imbalanced. This is due to strict regulations for pesticide applications, which lead to only a few pollution events. In this study, we propose the use of data mining to build models for assessing the risk of water pollution by pesticides in field-drained outflow water. Unlike the mechanistic models, the models generated by data mining are based on easily obtainable empirical data, while the parameterization of the models is not influenced by the subjectivity of ecological modelers. We used empirical data from field trials at the La Jaillière experimental site in France and applied the random forests algorithm to build predictive models that predict "risky" and "not-risky" pesticide application events. To address the problems of the imbalanced classes in the data, cost-sensitive learning and different measures of predictive performance were used. Despite the high imbalance between risky and not-risky application events, we managed to build predictive models that make reliable predictions. The proposed modeling approach can be easily applied to other ecological modeling problems where we encounter empirical data with highly imbalanced classes.
DOT National Transportation Integrated Search
2015-08-01
A mechanistic-empirical (ME) pavement design procedure allows for analyzing and selecting pavement structures based : on predicted distress progression resulting from stresses and strains within the pavement over its design life. The Virginia : Depar...
Validation of pavement performance curves for the mechanistic-empirical pavement design guide.
DOT National Transportation Integrated Search
2009-02-01
The objective of this research is to determine whether the nationally calibrated performance models used in the Mechanistic-Empirical : Pavement Design Guide (MEPDG) provide a reasonable prediction of actual field performance, and if the desired accu...
An empirically-based model for the lift coefficients of twisted airfoils with leading-edge tubercles
NASA Astrophysics Data System (ADS)
Ni, Zao; Su, Tsung-chow; Dhanak, Manhar
2018-04-01
Experimental data for untwisted airfoils are utilized to propose a model for predicting the lift coefficients of twisted airfoils with leading-edge tubercles. The effectiveness of the empirical model is verified through comparison with results of a corresponding computational fluid-dynamic (CFD) study. The CFD study is carried out for both twisted and untwisted airfoils with tubercles, the latter shown to compare well with available experimental data. Lift coefficients of twisted airfoils predicted from the proposed empirically-based model match well with the corresponding coefficients determined using the verified CFD study. Flow details obtained from the latter provide better insight into the underlying mechanism and behavior at stall of twisted airfoils with leading edge tubercles.
Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface
NASA Technical Reports Server (NTRS)
Brown, Cliff
2015-01-01
Empirical models for the shielding and refection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear function of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients are determined at each observer angle in the dataset and linear interpolation is used to for the intermediate observer angles. The shielding and rejection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.
Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface
NASA Technical Reports Server (NTRS)
Brown, Clifford A.
2016-01-01
Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear function of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients are determined at each observer angle in the dataset and linear interpolation is used to for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.
Semi-empirical model for prediction of unsteady forces on an airfoil with application to flutter
NASA Technical Reports Server (NTRS)
Mahajan, Aparajit J.; Kaza, Krishna Rao V.
1992-01-01
A semi-empirical model is described for predicting unsteady aerodynamic forces on arbitrary airfoils under mildly stalled and unstalled conditions. Aerodynamic forces are modeled using second order ordinary differential equations for lift and moment with airfoil motion as the input. This model is simultaneously integrated with structural dynamics equations to determine flutter characteristics for a two degrees-of-freedom system. Results for a number of cases are presented to demonstrate the suitability of this model to predict flutter. Comparison is made to the flutter characteristics determined by a Navier-Stokes solver and also the classical incompressible potential flow theory.
Semi-empirical model for prediction of unsteady forces on an airfoil with application to flutter
NASA Technical Reports Server (NTRS)
Mahajan, A. J.; Kaza, K. R. V.; Dowell, E. H.
1993-01-01
A semi-empirical model is described for predicting unsteady aerodynamic forces on arbitrary airfoils under mildly stalled and unstalled conditions. Aerodynamic forces are modeled using second order ordinary differential equations for lift and moment with airfoil motion as the input. This model is simultaneously integrated with structural dynamics equations to determine flutter characteristics for a two degrees-of-freedom system. Results for a number of cases are presented to demonstrate the suitability of this model to predict flutter. Comparison is made to the flutter characteristics determined by a Navier-Stokes solver and also the classical incompressible potential flow theory.
Styron, J D; Cooper, G W; Ruiz, C L; Hahn, K D; Chandler, G A; Nelson, A J; Torres, J A; McWatters, B R; Carpenter, Ken; Bonura, M A
2014-11-01
A methodology for obtaining empirical curves relating absolute measured scintillation light output to beta energy deposited is presented. Output signals were measured from thin plastic scintillator using NIST traceable beta and gamma sources and MCNP5 was used to model the energy deposition from each source. Combining the experimental and calculated results gives the desired empirical relationships. To validate, the sensitivity of a beryllium/scintillator-layer neutron activation detector was predicted and then exposed to a known neutron fluence from a Deuterium-Deuterium fusion plasma (DD). The predicted and the measured sensitivity were in statistical agreement.
Predicting Thermal Conductivity
NASA Technical Reports Server (NTRS)
Penn, B.; Ledbetter, F. E., III; Clemons, J.
1984-01-01
Empirical equation predicts thermal conductivity of composite insulators consisting of cellular, granular or fibrous material embedded in matrix of solid viscoelastic material. Application in designing custom insulators for particular environments.
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Hock-Eam, Lim
2012-09-01
This paper investigates the forecasting ability of Mallows Model Averaging (MMA) by conducting an empirical analysis of five Asia countries, Malaysia, Thailand, Philippines, Indonesia and China's GDP growth rate. Results reveal that MMA has no noticeable differences in predictive ability compared to the general autoregressive fractional integrated moving average model (ARFIMA) and its predictive ability is sensitive to the effect of financial crisis. MMA could be an alternative forecasting method for samples without recent outliers such as financial crisis.
Analysis methods for Kevlar shield response to rotor fragments
NASA Technical Reports Server (NTRS)
Gerstle, J. H.
1977-01-01
Several empirical and analytical approaches to rotor burst shield sizing are compared and principal differences in metal and fabric dynamic behavior are discussed. The application of transient structural response computer programs to predict Kevlar containment limits is described. For preliminary shield sizing, present analytical methods are useful if insufficient test data for empirical modeling are available. To provide other information useful for engineering design, analytical methods require further developments in material characterization, failure criteria, loads definition, and post-impact fragment trajectory prediction.
Global Change And Water Availability And Quality: Challenges Ahead
NASA Astrophysics Data System (ADS)
Larsen, M. C.; Ryker, S. J.
2012-12-01
The United States is in the midst of a continental-scale, multi-year water-resources experiment, in which society has not defined testable hypotheses or set the duration and scope of the experiment. What are we doing? We are expanding population at two to three times the national growth rate in our most water-scarce states, in the southwest, where water stress is already great and modeling predicts decreased streamflow by the middle of this century. We are expanding irrigated agriculture from the west into the east, particularly to the southeastern states, where increased competition for ground and surface water has urban, agricultural, and environmental interests at odds, and increasingly, in court. We are expanding our consumption of pharmaceutical and personal care products to historic high levels and disposing of them in surface and groundwater, through sewage treatment plants and individual septic systems that were not designed to treat them. These and other examples of our national-scale experiment are likely to continue well into the 21st century. This experiment and related challenges will continue and likely intensify as non-climatic and climatic factors, such as predicted rising temperature and changes in the distribution of precipitation in time and space, continue to develop.
NASA Astrophysics Data System (ADS)
Kane, Gordon
2015-12-01
String/M-theory is an exciting framework within which we try to understand our universe and its properties. Compactified string/M-theories address and offer solutions to almost every important question and issue in particle physics and particle cosmology. But earlier goals of finding a top-down “vacuum selection” principle and deriving the 4D theory have not yet been realized. Does that mean we should stop trying, as nearly all string theorists have? Or can we proceed in the historical way to make a few generic, robust assumptions not closely related to observables, and follow where they lead to testable predictions and explanations? Making only very generic assumptions is a significant issue. I discuss how to try to proceed with this approach, particularly in M-theory compactified on a 7D manifold of G2 holonomy. One goal is to understand our universe as a string/M-theory vacuum for its own sake, in the long tradition of trying to understand our world, and what that implies. In addition, understanding our vacuum may be a prelude to understanding its connection to the multiverse.
Computational Approaches to Drug Repurposing and Pharmacology
Hodos, Rachel A; Kidd, Brian A; Khader, Shameer; Readhead, Ben P; Dudley, Joel T
2016-01-01
Data in the biological, chemical, and clinical domains are accumulating at ever-increasing rates and have the potential to accelerate and inform drug development in new ways. Challenges and opportunities now lie in developing analytic tools to transform these often complex and heterogeneous data into testable hypotheses and actionable insights. This is the aim of computational pharmacology, which uses in silico techniques to better understand and predict how drugs affect biological systems, which can in turn improve clinical use, avoid unwanted side effects, and guide selection and development of better treatments. One exciting application of computational pharmacology is drug repurposing- finding new uses for existing drugs. Already yielding many promising candidates, this strategy has the potential to improve the efficiency of the drug development process and reach patient populations with previously unmet needs such as those with rare diseases. While current techniques in computational pharmacology and drug repurposing often focus on just a single data modality such as gene expression or drug-target interactions, we rationalize that methods such as matrix factorization that can integrate data within and across diverse data types have the potential to improve predictive performance and provide a fuller picture of a drug's pharmacological action. PMID:27080087
Asymmetric patch size distribution leads to disruptive selection on dispersal.
Massol, François; Duputié, Anne; David, Patrice; Jarne, Philippe
2011-02-01
Numerous models have been designed to understand how dispersal ability evolves when organisms live in a fragmented landscape. Most of them predict a single dispersal rate at evolutionary equilibrium, and when diversification of dispersal rates has been predicted, it occurs as a response to perturbation or environmental fluctuation regimes. Yet abundant variation in dispersal ability is observed in natural populations and communities, even in relatively stable environments. We show that this diversification can operate in a simple island model without temporal variability: disruptive selection on dispersal occurs when the environment consists of many small and few large patches, a common feature in natural spatial systems. This heterogeneity in patch size results in a high variability in the number of related patch mates by individual, which, in turn, triggers disruptive selection through a high per capita variance of inclusive fitness. Our study provides a likely, parsimonious and testable explanation for the diversity of dispersal rates encountered in nature. It also suggests that biological conservation policies aiming at preserving ecological communities should strive to keep the distribution of patch size sufficiently asymmetric and variable. © 2010 The Author(s). Evolution© 2010 The Society for the Study of Evolution.
Dark matter, proton decay and other phenomenological constraints in F-SU(5)
NASA Astrophysics Data System (ADS)
Li, Tianjun; Maxin, James A.; Nanopoulos, Dimitri V.; Walker, Joel W.
2011-07-01
We study gravity mediated supersymmetry breaking in F-SU(5) and its low-energy supersymmetric phenomenology. The gaugino masses are not unified at the traditional grand unification scale, but we nonetheless have the same one-loop gaugino mass relation at the electroweak scale as minimal supergravity (mSUGRA). We introduce parameters testable at the colliders to measure the small second loop deviation from the mSUGRA gaugino mass relation at the electroweak scale. In the minimal SU(5) model with gravity mediated supersymmetry breaking, we show that the deviations from the mSUGRA gaugino mass relations are within 5%. However, in F-SU(5), we predict the deviations from the mSUGRA gaugino mass relations to be larger due to the presence of vector-like particles, which can be tested at the colliders. We determine the viable parameter space that satisfies all the latest experimental constraints and find it is consistent with the CDMS II experiment. Further, we compute the cross-sections of neutralino annihilations into gamma-rays and compare to the first published Fermi-LAT measurement. Finally, the corresponding range of proton lifetime predictions is calculated and found to be within reach of the future Hyper-Kamiokande and DUSEL experiments.
Tramacere, Antonella; Pievani, Telmo; Ferrari, Pier F
2017-08-01
Considering the properties of mirror neurons (MNs) in terms of development and phylogeny, we offer a novel, unifying, and testable account of their evolution according to the available data and try to unify apparently discordant research, including the plasticity of MNs during development, their adaptive value and their phylogenetic relationships and continuity. We hypothesize that the MN system reflects a set of interrelated traits, each with an independent natural history due to unique selective pressures, and propose that there are at least three evolutionarily significant trends that gave raise to three subtypes: hand visuomotor, mouth visuomotor, and audio-vocal. Specifically, we put forward a mosaic evolution hypothesis, which posits that different types of MNs may have evolved at different rates within and among species. This evolutionary hypothesis represents an alternative to both adaptationist and associative models. Finally, the review offers a strong heuristic potential in predicting the circumstances under which specific variations and properties of MNs are expected. Such predictive value is critical to test new hypotheses about MN activity and its plastic changes, depending on the species, the neuroanatomical substrates, and the ecological niche. © 2016 Cambridge Philosophical Society.
Taking Bioinformatics to Systems Medicine.
van Kampen, Antoine H C; Moerland, Perry D
2016-01-01
Systems medicine promotes a range of approaches and strategies to study human health and disease at a systems level with the aim of improving the overall well-being of (healthy) individuals, and preventing, diagnosing, or curing disease. In this chapter we discuss how bioinformatics critically contributes to systems medicine. First, we explain the role of bioinformatics in the management and analysis of data. In particular we show the importance of publicly available biological and clinical repositories to support systems medicine studies. Second, we discuss how the integration and analysis of multiple types of omics data through integrative bioinformatics may facilitate the determination of more predictive and robust disease signatures, lead to a better understanding of (patho)physiological molecular mechanisms, and facilitate personalized medicine. Third, we focus on network analysis and discuss how gene networks can be constructed from omics data and how these networks can be decomposed into smaller modules. We discuss how the resulting modules can be used to generate experimentally testable hypotheses, provide insight into disease mechanisms, and lead to predictive models. Throughout, we provide several examples demonstrating how bioinformatics contributes to systems medicine and discuss future challenges in bioinformatics that need to be addressed to enable the advancement of systems medicine.
The Long and Viscous Road: Uncovering Nuclear Diffusion Barriers in Closed Mitosis
Zavala, Eder; Marquez-Lago, Tatiana T.
2014-01-01
Diffusion barriers are effective means for constraining protein lateral exchange in cellular membranes. In Saccharomyces cerevisiae, they have been shown to sustain parental identity through asymmetric segregation of ageing factors during closed mitosis. Even though barriers have been extensively studied in the plasma membrane, their identity and organization within the nucleus remains poorly understood. Based on different lines of experimental evidence, we present a model of the composition and structural organization of a nuclear diffusion barrier during anaphase. By means of spatial stochastic simulations, we propose how specialised lipid domains, protein rings, and morphological changes of the nucleus may coordinate to restrict protein exchange between mother and daughter nuclear lobes. We explore distinct, plausible configurations of these diffusion barriers and offer testable predictions regarding their protein exclusion properties and the diffusion regimes they generate. Our model predicts that, while a specialised lipid domain and an immobile protein ring at the bud neck can compartmentalize the nucleus during early anaphase; a specialised lipid domain spanning the elongated bridge between lobes would be entirely sufficient during late anaphase. Our work shows how complex nuclear diffusion barriers in closed mitosis may arise from simple nanoscale biophysical interactions. PMID:25032937
Gönner, Lorenz; Vitay, Julien; Hamker, Fred H.
2017-01-01
Hippocampal place-cell sequences observed during awake immobility often represent previous experience, suggesting a role in memory processes. However, recent reports of goals being overrepresented in sequential activity suggest a role in short-term planning, although a detailed understanding of the origins of hippocampal sequential activity and of its functional role is still lacking. In particular, it is unknown which mechanism could support efficient planning by generating place-cell sequences biased toward known goal locations, in an adaptive and constructive fashion. To address these questions, we propose a model of spatial learning and sequence generation as interdependent processes, integrating cortical contextual coding, synaptic plasticity and neuromodulatory mechanisms into a map-based approach. Following goal learning, sequential activity emerges from continuous attractor network dynamics biased by goal memory inputs. We apply Bayesian decoding on the resulting spike trains, allowing a direct comparison with experimental data. Simulations show that this model (1) explains the generation of never-experienced sequence trajectories in familiar environments, without requiring virtual self-motion signals, (2) accounts for the bias in place-cell sequences toward goal locations, (3) highlights their utility in flexible route planning, and (4) provides specific testable predictions. PMID:29075187
Rajeev, Lara; Luning, Eric G; Dehal, Paramvir S; Price, Morgan N; Arkin, Adam P; Mukhopadhyay, Aindrila
2011-10-12
Two component regulatory systems are the primary form of signal transduction in bacteria. Although genomic binding sites have been determined for several eukaryotic and bacterial transcription factors, comprehensive identification of gene targets of two component response regulators remains challenging due to the lack of knowledge of the signals required for their activation. We focused our study on Desulfovibrio vulgaris Hildenborough, a sulfate reducing bacterium that encodes unusually diverse and largely uncharacterized two component signal transduction systems. We report the first systematic mapping of the genes regulated by all transcriptionally acting response regulators in a single bacterium. Our results enabled functional predictions for several response regulators and include key processes of carbon, nitrogen and energy metabolism, cell motility and biofilm formation, and responses to stresses such as nitrite, low potassium and phosphate starvation. Our study also led to the prediction of new genes and regulatory networks, which found corroboration in a compendium of transcriptome data available for D. vulgaris. For several regulators we predicted and experimentally verified the binding site motifs, most of which were discovered as part of this study. The gene targets identified for the response regulators allowed strong functional predictions to be made for the corresponding two component systems. By tracking the D. vulgaris regulators and their motifs outside the Desulfovibrio spp. we provide testable hypotheses regarding the functions of orthologous regulators in other organisms. The in vitro array based method optimized here is generally applicable for the study of such systems in all organisms.
Inferior olive mirrors joint dynamics to implement an inverse controller.
Alvarez-Icaza, Rodrigo; Boahen, Kwabena
2012-10-01
To produce smooth and coordinated motion, our nervous systems need to generate precisely timed muscle activation patterns that, due to axonal conduction delay, must be generated in a predictive and feedforward manner. Kawato proposed that the cerebellum accomplishes this by acting as an inverse controller that modulates descending motor commands to predictively drive the spinal cord such that the musculoskeletal dynamics are canceled out. This and other cerebellar theories do not, however, account for the rich biophysical properties expressed by the olivocerebellar complex's various cell types, making these theories difficult to verify experimentally. Here we propose that a multizonal microcomplex's (MZMC) inferior olivary neurons use their subthreshold oscillations to mirror a musculoskeletal joint's underdamped dynamics, thereby achieving inverse control. We used control theory to map a joint's inverse model onto an MZMC's biophysics, and we used biophysical modeling to confirm that inferior olivary neurons can express the dynamics required to mirror biomechanical joints. We then combined both techniques to predict how experimentally injecting current into the inferior olive would affect overall motor output performance. We found that this experimental manipulation unmasked a joint's natural dynamics, as observed by motor output ringing at the joint's natural frequency, with amplitude proportional to the amount of current. These results support the proposal that the cerebellum-in particular an MZMC-is an inverse controller; the results also provide a biophysical implementation for this controller and allow one to make an experimentally testable prediction.
Timing of birth: Parsimony favors strategic over dysregulated parturition.
Catalano, Ralph; Goodman, Julia; Margerison-Zilko, Claire; Falconi, April; Gemmill, Alison; Karasek, Deborah; Anderson, Elizabeth
2016-01-01
The "dysregulated parturition" narrative posits that the human stress response includes a cascade of hormones that "dysregulates" and accelerates parturition but provides questionable utility as a guide to understand or prevent preterm birth. We offer and test a "strategic parturition" narrative that not only predicts the excess preterm births that dysregulated parturition predicts but also makes testable, sex-specific predictions of the effect of stressful environments on the timing of birth among term pregnancies. We use interrupted time-series modeling of cohorts conceived over 101 months to test for lengthening of early term male gestations in stressed population. We use an event widely reported to have stressed Americans and to have increased the incidence of low birth weight and fetal death across the country-the terrorist attacks of September 2001. We tested the hypothesis that the odds of male infants conceived in December 2000 (i.e., at term in September 2001) being born early as opposed to full term fell below the value expected from those conceived in the 50 prior and 50 following months. We found that term male gestations exposed to the terrorist attacks exhibited 4% lower likelihood of early, as opposed to full or late, term birth. Strategic parturition explains observed data for which the dysregulated parturition narrative offers no prediction-the timing of birth among gestations stressed at term. Our narrative may help explain why findings from studies examining associations between population- and/or individual-level stressors and preterm birth are generally mixed. © 2015 Wiley Periodicals, Inc.
Volatility in financial markets: stochastic models and empirical results
NASA Astrophysics Data System (ADS)
Miccichè, Salvatore; Bonanno, Giovanni; Lillo, Fabrizio; Mantegna, Rosario N.
2002-11-01
We investigate the historical volatility of the 100 most capitalized stocks traded in US equity markets. An empirical probability density function (pdf) of volatility is obtained and compared with the theoretical predictions of a lognormal model and of the Hull and White model. The lognormal model well describes the pdf in the region of low values of volatility whereas the Hull and White model better approximates the empirical pdf for large values of volatility. Both models fail in describing the empirical pdf over a moderately large volatility range.
Work-Centered Technology Development (WTD)
2005-03-01
theoretical, testable, inductive, and repeatable foundations of science. o Theoretical foundations include notions such as statistical versus analytical...Human Factors and Ergonomics Society, 263-267. 179 Eggleston, R. G. (2005). Coursebook : Work-Centered Design (WCD). AFRL/HECS WCD course training
Writing testable software requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knirk, D.
1997-11-01
This tutorial identifies common problems in analyzing requirements in the problem and constructing a written specification of what the software is to do. It deals with two main problem areas: identifying and describing problem requirements, and analyzing and describing behavior specifications.
All pure bipartite entangled states can be self-tested
Coladangelo, Andrea; Goh, Koon Tong; Scarani, Valerio
2017-01-01
Quantum technologies promise advantages over their classical counterparts in the fields of computation, security and sensing. It is thus desirable that classical users are able to obtain guarantees on quantum devices, even without any knowledge of their inner workings. That such classical certification is possible at all is remarkable: it is a consequence of the violation of Bell inequalities by entangled quantum systems. Device-independent self-testing refers to the most complete such certification: it enables a classical user to uniquely identify the quantum state shared by uncharacterized devices by simply inspecting the correlations of measurement outcomes. Self-testing was first demonstrated for the singlet state and a few other examples of self-testable states were reported in recent years. Here, we address the long-standing open question of whether every pure bipartite entangled state is self-testable. We answer it affirmatively by providing explicit self-testing correlations for all such states. PMID:28548093
All pure bipartite entangled states can be self-tested
NASA Astrophysics Data System (ADS)
Coladangelo, Andrea; Goh, Koon Tong; Scarani, Valerio
2017-05-01
Quantum technologies promise advantages over their classical counterparts in the fields of computation, security and sensing. It is thus desirable that classical users are able to obtain guarantees on quantum devices, even without any knowledge of their inner workings. That such classical certification is possible at all is remarkable: it is a consequence of the violation of Bell inequalities by entangled quantum systems. Device-independent self-testing refers to the most complete such certification: it enables a classical user to uniquely identify the quantum state shared by uncharacterized devices by simply inspecting the correlations of measurement outcomes. Self-testing was first demonstrated for the singlet state and a few other examples of self-testable states were reported in recent years. Here, we address the long-standing open question of whether every pure bipartite entangled state is self-testable. We answer it affirmatively by providing explicit self-testing correlations for all such states.
All pure bipartite entangled states can be self-tested.
Coladangelo, Andrea; Goh, Koon Tong; Scarani, Valerio
2017-05-26
Quantum technologies promise advantages over their classical counterparts in the fields of computation, security and sensing. It is thus desirable that classical users are able to obtain guarantees on quantum devices, even without any knowledge of their inner workings. That such classical certification is possible at all is remarkable: it is a consequence of the violation of Bell inequalities by entangled quantum systems. Device-independent self-testing refers to the most complete such certification: it enables a classical user to uniquely identify the quantum state shared by uncharacterized devices by simply inspecting the correlations of measurement outcomes. Self-testing was first demonstrated for the singlet state and a few other examples of self-testable states were reported in recent years. Here, we address the long-standing open question of whether every pure bipartite entangled state is self-testable. We answer it affirmatively by providing explicit self-testing correlations for all such states.
Prediction of Partition Coefficients of Organic Compounds between SPME/PDMS and Aqueous Solution
Chao, Keh-Ping; Lu, Yu-Ting; Yang, Hsiu-Wen
2014-01-01
Polydimethylsiloxane (PDMS) is commonly used as the coated polymer in the solid phase microextraction (SPME) technique. In this study, the partition coefficients of organic compounds between SPME/PDMS and the aqueous solution were compiled from the literature sources. The correlation analysis for partition coefficients was conducted to interpret the effect of their physicochemical properties and descriptors on the partitioning process. The PDMS-water partition coefficients were significantly correlated to the polarizability of organic compounds (r = 0.977, p < 0.05). An empirical model, consisting of the polarizability, the molecular connectivity index, and an indicator variable, was developed to appropriately predict the partition coefficients of 61 organic compounds for the training set. The predictive ability of the empirical model was demonstrated by using it on a test set of 26 chemicals not included in the training set. The empirical model, applying the straightforward calculated molecular descriptors, for estimating the PDMS-water partition coefficient will contribute to the practical applications of the SPME technique. PMID:24534804
Rethinking Indian monsoon rainfall prediction in the context of recent global warming
NASA Astrophysics Data System (ADS)
Wang, Bin; Xiang, Baoqiang; Li, Juan; Webster, Peter J.; Rajeevan, Madhavan N.; Liu, Jian; Ha, Kyung-Ja
2015-05-01
Prediction of Indian summer monsoon rainfall (ISMR) is at the heart of tropical climate prediction. Despite enormous progress having been made in predicting ISMR since 1886, the operational forecasts during recent decades (1989-2012) have little skill. Here we show, with both dynamical and physical-empirical models, that this recent failure is largely due to the models' inability to capture new predictability sources emerging during recent global warming, that is, the development of the central-Pacific El Nino-Southern Oscillation (CP-ENSO), the rapid deepening of the Asian Low and the strengthening of North and South Pacific Highs during boreal spring. A physical-empirical model that captures these new predictors can produce an independent forecast skill of 0.51 for 1989-2012 and a 92-year retrospective forecast skill of 0.64 for 1921-2012. The recent low skills of the dynamical models are attributed to deficiencies in capturing the developing CP-ENSO and anomalous Asian Low. The results reveal a considerable gap between ISMR prediction skill and predictability.
Stadler, Tanja; Degnan, James H.; Rosenberg, Noah A.
2016-01-01
Classic null models for speciation and extinction give rise to phylogenies that differ in distribution from empirical phylogenies. In particular, empirical phylogenies are less balanced and have branching times closer to the root compared to phylogenies predicted by common null models. This difference might be due to null models of the speciation and extinction process being too simplistic, or due to the empirical datasets not being representative of random phylogenies. A third possibility arises because phylogenetic reconstruction methods often infer gene trees rather than species trees, producing an incongruity between models that predict species tree patterns and empirical analyses that consider gene trees. We investigate the extent to which the difference between gene trees and species trees under a combined birth–death and multispecies coalescent model can explain the difference in empirical trees and birth–death species trees. We simulate gene trees embedded in simulated species trees and investigate their difference with respect to tree balance and branching times. We observe that the gene trees are less balanced and typically have branching times closer to the root than the species trees. Empirical trees from TreeBase are also less balanced than our simulated species trees, and model gene trees can explain an imbalance increase of up to 8% compared to species trees. However, we see a much larger imbalance increase in empirical trees, about 100%, meaning that additional features must also be causing imbalance in empirical trees. This simulation study highlights the necessity of revisiting the assumptions made in phylogenetic analyses, as these assumptions, such as equating the gene tree with the species tree, might lead to a biased conclusion. PMID:26968785
Using biological markets principles to examine patterns of grooming exchange in Macaca thibetana.
Balasubramaniam, K N; Berman, C M; Ogawa, H; Li, J
2011-12-01
Biological markets principles offer testable hypotheses to explain variation in grooming exchange patterns among nonhuman primates. They predict that when within-group contest competition (WGC) is high and dominance hierarchies steep, grooming interchange with other "commodity" behaviors (such as agonistic support) should prevail. In contrast, when WGC is low and gradients shallow, market theory predicts that grooming reciprocity should prevail. We tested these predictions in a wild, provisioned Tibetan macaque (Macaca thibetana) group across six time periods during which the group had been subjected to varying degrees of range restriction. Data on female-female aggression, grooming, and support were collected using all-occurrences and focal animal sampling techniques, and analyzed using ANCOVA methods and correlation analyses. We found that hierarchical steepness varied significantly across periods, but did not correlate with two indirect indicators of WGC (group size and range restriction) in predicted directions. Contrary to expectations, we found a negative correlation between steepness and group size, perhaps because the responses of group members to external risks (i.e. prolonged and unavoidable exposure to humans) may have overshadowed the effects of WGC. As predicted, grooming reciprocity was significant in each period and negatively correlated with steepness, even after we controlled group size, kinship, rank differences, and proximity. In contrast, there was no evidence for grooming interchange with agonistic support or for a positive relationship between interchange and steepness. We hypothesize that stressful conditions and/or the presence of stable hierarchies during each period may have led to a greater market demand for grooming than support. We suggest that future studies testing these predictions consider more direct measures of WGC and commodities in addition to support, such as feeding tolerance and access to infants. © 2011 Wiley Periodicals, Inc.
Prediction of gene-phenotype associations in humans, mice, and plants using phenologs.
Woods, John O; Singh-Blom, Ulf Martin; Laurent, Jon M; McGary, Kriston L; Marcotte, Edward M
2013-06-21
Phenotypes and diseases may be related to seemingly dissimilar phenotypes in other species by means of the orthology of underlying genes. Such "orthologous phenotypes," or "phenologs," are examples of deep homology, and may be used to predict additional candidate disease genes. In this work, we develop an unsupervised algorithm for ranking phenolog-based candidate disease genes through the integration of predictions from the k nearest neighbor phenologs, comparing classifiers and weighting functions by cross-validation. We also improve upon the original method by extending the theory to paralogous phenotypes. Our algorithm makes use of additional phenotype data--from chicken, zebrafish, and E. coli, as well as new datasets for C. elegans--establishing that several types of annotations may be treated as phenotypes. We demonstrate the use of our algorithm to predict novel candidate genes for human atrial fibrillation (such as HRH2, ATP4A, ATP4B, and HOPX) and epilepsy (e.g., PAX6 and NKX2-1). We suggest gene candidates for pharmacologically-induced seizures in mouse, solely based on orthologous phenotypes from E. coli. We also explore the prediction of plant gene-phenotype associations, as for the Arabidopsis response to vernalization phenotype. We are able to rank gene predictions for a significant portion of the diseases in the Online Mendelian Inheritance in Man database. Additionally, our method suggests candidate genes for mammalian seizures based only on bacterial phenotypes and gene orthology. We demonstrate that phenotype information may come from diverse sources, including drug sensitivities, gene ontology biological processes, and in situ hybridization annotations. Finally, we offer testable candidates for a variety of human diseases, plant traits, and other classes of phenotypes across a wide array of species.
Development of an Empirical Methods for Predicting Jet Mixing Noise of Cold Flow Rectangular Jets
NASA Technical Reports Server (NTRS)
Russell, James W.
1999-01-01
This report presents an empirical method for predicting the jet mixing noise levels of cold flow rectangular jets. The report presents a detailed analysis of the methodology used in development of the prediction method. The empirical correlations used are based on narrow band acoustic data for cold flow rectangular model nozzle tests conducted in the NASA Langley Jet Noise Laboratory. There were 20 separate nozzle test operating conditions. For each operating condition 60 Hz bandwidth microphone measurements were made over a frequency range from 0 to 60,000 Hz. Measurements were performed at 16 polar directivity angles ranging from 45 degrees to 157.5 degrees. At each polar directivity angle, measurements were made at 9 azimuth directivity angles. The report shows the methods employed to remove screech tones and shock noise from the data in order to obtain the jet mixing noise component. The jet mixing noise was defined in terms of one third octave band spectral content, polar and azimuth directivity, and overall power level. Empirical correlations were performed over the range of test conditions to define each of these jet mixing noise parameters as a function of aspect ratio, jet velocity, and polar and azimuth directivity angles. The report presents the method for predicting the overall power level, the average polar directivity, the azimuth directivity and the location and shape of the spectra for jet mixing noise of cold flow rectangular jets.
NASA Technical Reports Server (NTRS)
Desai, S.; Wahr, J.
1998-01-01
Empirical models of the two largest constituents of the long-period ocean tides, the monthly and the fortnightly constituents, are estimated from repeat cycles 10 to 210 of the TOPEX/POSEIDON (T/P) mission.