Sample records for experimentally testable predictions

  1. Dynamical Model of Drug Accumulation in Bacteria: Sensitivity Analysis and Experimentally Testable Predictions

    DOE PAGES

    Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.

    2016-11-08

    We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses of ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time courses in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.
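
    The core of such a model can be captured in a few lines. Below is a minimal sketch, assuming a single internal-drug pool, hypothetical rate constants, and a simple step drop in the import rate at the phase-entry time; the authors' fitted model and parameter values are not reproduced here:

      # Two-phase drug accumulation sketch (hypothetical parameters).
      # Internal drug A obeys dA/dt = k_in(t) - k_out * A, where the
      # import rate drops at t_switch, mimicking the predicted decrease
      # in membrane permeability at the start of the second phase.
      import numpy as np
      from scipy.integrate import solve_ivp

      K_IN_PHASE1, K_IN_PHASE2 = 1.0, 0.3   # import rates (arbitrary units)
      K_OUT, T_SWITCH = 0.5, 5.0            # efflux rate, phase-entry time

      def dA_dt(t, A):
          k_in = K_IN_PHASE1 if t < T_SWITCH else K_IN_PHASE2
          return [k_in - K_OUT * A[0]]

      sol = solve_ivp(dA_dt, (0.0, 30.0), [0.0], dense_output=True)
      for ti in np.linspace(0.0, 30.0, 7):
          print(f"t = {ti:5.1f}  internal drug = {sol.sol(ti)[0]:.3f}")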

  2. Dynamical Model of Drug Accumulation in Bacteria: Sensitivity Analysis and Experimentally Testable Predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.

    We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses of ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time courses in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.

  3. Models of cooperative dynamics from biomolecules to magnets

    NASA Astrophysics Data System (ADS)

    Mobley, David Lowell

    This work details application of computer models to several biological systems (prion diseases and Alzheimer's disease) and a magnetic system. These share some common themes, which are discussed. Here, simple lattice-based models are applied to aggregation of misfolded protein in prion diseases like Mad Cow disease. These can explain key features of the diseases. The modeling is based on aggregation being essential in establishing the time-course of infectivity. Growth of initial aggregates is assumed to dominate the experimentally observed lag phase. Subsequent fission, regrowth, and fission set apart the exponential doubling phase in disease progression. We explore several possible modes of growth for 2-D aggregates and suggest the model providing the best explanation for the experimental data. We develop testable predictions from this model. Like prion disease, Alzheimer's disease (AD) is an amyloid disease characterized by large aggregates in the brain. However, evidence increasingly points away from these as the toxic agent and towards oligomers of the Abeta peptide. We explore one possible toxicity mechanism---insertion of Abeta into cell membranes and formation of harmful ion channels. We find that mutations in this peptide which cause familial Alzheimer's disease (FAD) also affect the insertion of this peptide into membranes in a fairly consistent way, suggesting that this toxicity mechanism may be relevant biologically. We find a particular inserted configuration which may be especially harmful and develop testable predictions to verify whether or not this is the case. Nucleation is an essential feature of our models for prion disease, in that it protects normal, healthy individuals from getting prion disease. Nucleation is important in many other areas, and we modify our lattice-based nucleation model to apply to a hysteretic magnetic system where nucleation has been suggested to be important. From a simple model, we find qualitative agreement with experiment, and make testable experimental predictions concerning time-dependence and temperature-dependence of the major hysteresis loop and reversal curves which have been experimentally verified. We argue why this model may be suitable for systems like these and explain implications for Ising-like models. We suggest implications for future modeling work. Finally, we present suggestions for future work in all three areas.

  4. An empirical comparison of a dynamic software testability metric to static cyclomatic complexity

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey M.; Miller, Keith W.; Payne, Jeffrey E.

    1993-01-01

    This paper compares the dynamic testability prediction technique termed 'sensitivity analysis' to the static testability technique termed cyclomatic complexity. The application that we chose in this empirical study is a CASE generated version of a B-737 autoland system. For the B-737 system we analyzed, we isolated those functions that we predict are more prone to hide errors during system/reliability testing. We also analyzed the code with several other well-known static metrics. This paper compares and contrasts the results of sensitivity analysis to the results of the static metrics.
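
    For reference, cyclomatic complexity is computed from a control-flow graph as V(G) = E - N + 2P (edges, nodes, connected components). The sketch below applies the formula to a toy graph, not to the B-737 code analyzed in the paper:

      # Cyclomatic complexity V(G) = E - N + 2P for a toy control-flow
      # graph given as an edge list.
      from collections import defaultdict

      def cyclomatic_complexity(edges):
          nodes = {n for e in edges for n in e}
          adj = defaultdict(set)
          for a, b in edges:            # undirected view for connectivity
              adj[a].add(b); adj[b].add(a)
          seen, components = set(), 0
          for start in nodes:
              if start in seen:
                  continue
              components += 1
              stack = [start]
              while stack:
                  n = stack.pop()
                  if n not in seen:
                      seen.add(n)
                      stack.extend(adj[n] - seen)
          return len(edges) - len(nodes) + 2 * components

      # if/else followed by a loop: two decision points, so V(G) = 3
      edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (5, 4), (5, 6)]
      print(cyclomatic_complexity(edges))  # -> 3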

  5. Factors That Affect Software Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey M.

    1991-01-01

    Software faults that infrequently affect software's output are dangerous. When a software fault causes frequent software failures, testing is likely to reveal the fault before the software is released; when the fault remains undetected during testing, it can cause disaster after the software is installed. A technique for predicting whether a particular piece of software is likely to reveal faults within itself during testing is found in [Voas91b]. A piece of software that is likely to reveal faults within itself during testing is said to have high testability. A piece of software that is not likely to reveal faults within itself during testing is said to have low testability. It is preferable to design software with higher testabilities from the outset, i.e., create software with as high a degree of testability as possible to avoid the problems of undetected faults associated with low testability. Information loss is a phenomenon that occurs during program execution that increases the likelihood that a fault will remain undetected. In this paper, we identify two broad classes of information loss, define them, and suggest ways of predicting the potential for information loss to occur. We do this in order to decrease the likelihood that faults will remain undetected during testing.

  6. Beyond Critical Exponents in Neuronal Avalanches

    NASA Astrophysics Data System (ADS)

    Friedman, Nir; Butler, Tom; Deville, Robert; Beggs, John; Dahmen, Karin

    2011-03-01

    Neurons form a complex network in the brain, where they interact with one another by firing electrical signals. Neurons firing can trigger other neurons to fire, potentially causing avalanches of activity in the network. In many cases these avalanches have been found to be scale independent, similar to critical phenomena in diverse systems such as magnets and earthquakes. We discuss models for neuronal activity that allow for the extraction of testable, statistical predictions. We compare these models to experimental results, and go beyond critical exponents.
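
    A common minimal model in this literature is a critical branching process, in which each firing neuron triggers on average sigma others; at sigma = 1 avalanche sizes approach the power law P(S) ~ S^(-3/2). The sketch below is illustrative only and is not the specific model discussed in the talk:

      # Branching-process sketch of neuronal avalanches: each active
      # neuron triggers Poisson(sigma) others; sigma = 1 is critical.
      import numpy as np

      rng = np.random.default_rng(0)

      def avalanche_size(sigma, cap=10_000):
          active, size = 1, 0
          while active and size < cap:
              size += active
              active = rng.poisson(sigma * active)
          return size

      sizes = np.array([avalanche_size(1.0) for _ in range(20_000)])
      for s in (1, 10, 100):
          # tail of P(S) ~ S^(-3/2) decays roughly as s^(-1/2)
          print(f"P(S >= {s:3d}) = {np.mean(sizes >= s):.4f}")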

  7. Current challenges in fundamental physics

    NASA Astrophysics Data System (ADS)

    Egana Ugrinovic, Daniel

    The discovery of the Higgs boson at the Large Hadron Collider completed the Standard Model of particle physics. The Standard Model is a remarkably successful theory of fundamental physics, but it suffers from severe problems. It does not provide an explanation for the origin or stability of the electroweak scale nor for the origin and structure of flavor and CP violation. It predicts vanishing neutrino masses, in disagreement with experimental observations. It also fails to explain the matter-antimatter asymmetry of the universe, and it does not provide a particle candidate for dark matter. In this thesis we provide experimentally testable solutions for most of these problems and we study their phenomenology.

  8. Integrated PK-PD and agent-based modeling in oncology.

    PubMed

    Wang, Zhihui; Butner, Joseph D; Cristini, Vittorio; Deisboeck, Thomas S

    2015-04-01

    Mathematical modeling has become a valuable tool that strives to complement conventional biomedical research modalities in order to predict experimental outcome, generate new medical hypotheses, and optimize clinical therapies. Two specific approaches, pharmacokinetic-pharmacodynamic (PK-PD) modeling, and agent-based modeling (ABM), have been widely applied in cancer research. While they have made important contributions on their own (e.g., PK-PD in examining chemotherapy drug efficacy and resistance, and ABM in describing and predicting tumor growth and metastasis), only a few groups have started to combine both approaches together in an effort to gain more insights into the details of drug dynamics and the resulting impact on tumor growth. In this review, we focus our discussion on some of the most recent modeling studies building on a combined PK-PD and ABM approach that have generated experimentally testable hypotheses. Some future directions are also discussed.
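
    A minimal sketch of what such a coupling can look like, with made-up parameters and deliberately simplistic dynamics (one-compartment exponential PK, a Hill-function kill probability for PD, and independent cell agents for the ABM):

      # Toy PK-PD + agent-based coupling (hypothetical parameters).
      import numpy as np

      rng = np.random.default_rng(1)
      K_ELIM, DOSE = 0.3, 10.0          # PK: elimination rate, bolus dose
      EMAX, EC50, HILL = 0.5, 2.0, 2.0  # PD: max kill prob., potency, slope
      P_DIV = 0.1                       # ABM: per-step division probability

      def kill_prob(conc):
          return EMAX * conc**HILL / (EC50**HILL + conc**HILL)

      cells = 1_000
      for step in range(20):
          conc = DOSE * np.exp(-K_ELIM * step)       # PK
          survivors = rng.binomial(cells, 1.0 - kill_prob(conc))  # PD
          cells = survivors + rng.binomial(survivors, P_DIV)      # ABM
          print(f"step {step:2d}  conc {conc:6.3f}  cells {cells}")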

  9. Integrated PK-PD and Agent-Based Modeling in Oncology

    PubMed Central

    Wang, Zhihui; Butner, Joseph D.; Cristini, Vittorio

    2016-01-01

    Mathematical modeling has become a valuable tool that strives to complement conventional biomedical research modalities in order to predict experimental outcome, generate new medical hypotheses, and optimize clinical therapies. Two specific approaches, pharmacokinetic-pharmacodynamic (PK-PD) modeling, and agent-based modeling (ABM), have been widely applied in cancer research. While they have made important contributions on their own (e.g., PK-PD in examining chemotherapy drug efficacy and resistance, and ABM in describing and predicting tumor growth and metastasis), only a few groups have started to combine both approaches together in an effort to gain more insights into the details of drug dynamics and the resulting impact on tumor growth. In this review, we focus our discussion on some of the most recent modeling studies building on a combined PK-PD and ABM approach that have generated experimentally testable hypotheses. Some future directions are also discussed. PMID:25588379

  10. “Feature Detection” vs. “Predictive Coding” Models of Plant Behavior

    PubMed Central

    Calvo, Paco; Baluška, František; Sims, Andrew

    2016-01-01

    In this article we consider the possibility that plants exhibit anticipatory behavior, a mark of intelligence. If plants are able to anticipate and respond accordingly to varying states of their surroundings, as opposed to merely responding online to environmental contingencies, then such capacity may be in principle testable, and subject to empirical scrutiny. Our main thesis is that adaptive behavior can only take place by way of a mechanism that predicts the environmental sources of sensory stimulation. We propose to test for anticipation in plants experimentally by contrasting two empirical hypotheses: “feature detection” and “predictive coding.” We spell out what these contrasting hypotheses consist of by way of illustration from the animal literature, and consider how to transfer the rationale involved to the plant literature. PMID:27757094

  11. Distinct physiological effects of β1- and β2-adrenoceptors in mouse ventricular myocytes: insights from a compartmentalized mathematical model.

    PubMed

    Rozier, Kelvin; Bondarenko, Vladimir E

    2017-05-01

    The β1- and β2-adrenergic signaling systems play different roles in the functioning of cardiac cells. Experimental data show that the activation of the β1-adrenergic signaling system produces significant inotropic, lusitropic, and chronotropic effects in the heart, whereas the effects of the β2-adrenergic signaling system are less apparent. In this paper, a comprehensive compartmentalized experimentally based mathematical model of the combined β1- and β2-adrenergic signaling systems in mouse ventricular myocytes is developed to simulate the experimental findings and make testable predictions of the behavior of the cardiac cells under different physiological conditions. Simulations describe the dynamics of major signaling molecules in different subcellular compartments; kinetics and magnitudes of phosphorylation of ion channels, transporters, and Ca2+ handling proteins; modifications of action potential shape and duration; and [Ca2+]i and [Na+]i dynamics upon stimulation of β1- and β2-adrenergic receptors (β1- and β2-ARs). The model reveals physiological conditions when β2-ARs do not produce significant physiological effects and when their effects can be measured experimentally. Simulations demonstrated that stimulation of β2-ARs with isoproterenol caused a marked increase in the magnitude of the L-type Ca2+ current, [Ca2+]i transient, and phosphorylation of phospholamban only upon additional application of pertussis toxin or inhibition of phosphodiesterases of type 3 and 4. The model also made testable predictions of the changes in magnitudes of [Ca2+]i and [Na+]i fluxes, the rate of decay of the [Na+]i concentration upon both combined and separate stimulation of β1- and β2-ARs, and the contribution of phosphorylation of PKA targets to the changes in the action potential and [Ca2+]i transient. Copyright © 2017 the American Physiological Society.

  12. The use of models to predict potential contamination aboard orbital vehicles

    NASA Technical Reports Server (NTRS)

    Boraas, Martin E.; Seale, Dianne B.

    1989-01-01

    A model of fungal growth on air-exposed, nonnutritive solid surfaces, developed for utilization aboard orbital vehicles, is presented. A unique feature of this testable model is that the development of a fungal mycelium can facilitate its own growth by condensation of water vapor from its environment directly onto fungal hyphae. The fungal growth rate is limited by the rate of supply of volatile nutrients, and fungal biomass is limited by either the supply of nonvolatile nutrients or by metabolic loss processes. The model discussed is structurally simple, but its dynamics can be quite complex. Biofilm accumulation can vary from a simple linear increase to sustained exponential growth, depending on the values of the environmental variables and model parameters. The results of the model are consistent with data from aquatic biofilm studies, insofar as the two types of systems are comparable. It is shown that the model presented is experimentally testable and provides a platform for the interpretation of observational data that may be directly relevant to the question of growth of organisms aboard the proposed Space Station.

  13. Modeling the attenuation and failure of action potentials in the dendrites of hippocampal neurons.

    PubMed Central

    Migliore, M

    1996-01-01

    We modeled two different mechanisms, a shunting conductance and a slow sodium inactivation, to test whether they could modulate the active propagation of a train of action potentials in a dendritic tree. Computer simulations, using a compartmental model of a pyramidal neuron, suggest that each of these two mechanisms could account for the activity-dependent attenuation and failure of the action potentials in the dendrites during the train. Each mechanism is shown to be in good qualitative agreement with experimental findings on somatic or dendritic stimulation and on the effects of hyperpolarization. The conditions under which branch point failures can be observed, and a few experimentally testable predictions, are presented and discussed. PMID:8913580

  14. From Einstein-Podolsky-Rosen paradox to quantum nonlocality: experimental investigation of quantum correlations

    NASA Astrophysics Data System (ADS)

    Xu, Jin-Shi; Li, Chuan-Feng; Guo, Guang-Can

    2016-11-01

    In 1935, Einstein, Podolsky and Rosen published their influential paper proposing a now famous paradox (the EPR paradox) that threw doubt on the completeness of quantum mechanics. Two fundamental concepts, entanglement and steering, were introduced in Schrödinger's response to the EPR paper; both reflect the nonlocal nature of quantum mechanics. In 1964, John Bell obtained an experimentally testable inequality whose violation contradicts the prediction of local hidden variable models and agrees with that of quantum mechanics. Since then, great efforts have been made to experimentally investigate the nonlocal features of quantum mechanics, and many distinctive quantum properties have been observed. In this work, along with a discussion of the development of quantum nonlocality, we focus on our recent experimental efforts in investigating quantum correlations and their applications with optical systems, including the study of the entanglement-assisted entropic uncertainty principle, Einstein-Podolsky-Rosen steering, and the dynamics of quantum correlations.
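
    Bell's inequality in its CHSH form can be checked in a few lines: local hidden-variable models bound S <= 2, while the singlet-state correlation E(a, b) = -cos(a - b) reaches 2*sqrt(2) at the standard measurement angles:

      # CHSH form of Bell's inequality: classical bound S <= 2; the
      # singlet state reaches the Tsirelson bound 2*sqrt(2) at angles
      # 0 and 90 degrees (Alice), 45 and 135 degrees (Bob).
      import numpy as np

      def E(a, b):
          return -np.cos(a - b)   # singlet-state correlation

      a, a2 = 0.0, np.pi / 2
      b, b2 = np.pi / 4, 3 * np.pi / 4
      S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
      print(S, "classical bound: 2, Tsirelson bound:", 2 * np.sqrt(2))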

  15. Testability Design Rating System: Testability Handbook. Volume 1

    DTIC Science & Technology

    1992-02-01

    [Record excerpt; table-of-contents and glossary fragments.] 4.7.5 Summary of False BIT Alarms (FBA); 4.7.6 Smart BIT Technique (reference: RADC-TR-85-198). Glossary: ... Circuit Board; PGA, Pin Grid Array; PLA, Programmable Logic Array; PLD, Programmable Logic Device; PN, Pseudo-Random Number; PREDICT, Probabilistic Estimation of ... "Smart" BIT is a term given to BIT circuitry in a system LRU which includes dedicated processor/memory

  16. Multi-omics approach identifies molecular mechanisms of plant-fungus mycorrhizal interaction

    DOE PAGES

    Larsen, Peter E.; Sreedasyam, Avinash; Trivedi, Geetika; ...

    2016-01-19

    In mycorrhizal symbiosis, plant roots form close, mutually beneficial interactions with soil fungi. Before this mycorrhizal interaction can be established, however, plant roots must be capable of detecting potential beneficial fungal partners and initiating the gene expression patterns necessary to begin symbiosis. To predict plant root-mycorrhizal fungus sensor systems, we analyzed in vitro experiments of Populus tremuloides (aspen tree) and Laccaria bicolor (mycorrhizal fungus) interaction and leveraged over 200 previously published transcriptomic experimental data sets, 159 experimentally validated plant transcription factor binding motifs, and more than 120,000 experimentally validated protein-protein interactions to generate models of pre-mycorrhizal sensor systems in aspen root. These sensor mechanisms link extracellular signaling molecules with gene regulation through a network comprised of membrane receptors, signal cascade proteins, transcription factors, and transcription factor binding DNA motifs. Modeling predicted four pre-mycorrhizal sensor complexes in aspen that interact with fifteen transcription factors to regulate the expression of 1184 genes in response to extracellular signals synthesized by Laccaria. Predicted extracellular signaling molecules include common signaling molecules such as phenylpropanoids, salicylate, and jasmonic acid. Lastly, this multi-omic computational modeling approach for predicting complex sensory networks yielded specific, testable biological hypotheses for mycorrhizal interaction signaling compounds, sensor complexes, and mechanisms of gene regulation.

  17. Multi-omics approach identifies molecular mechanisms of plant-fungus mycorrhizal interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, Peter E.; Sreedasyam, Avinash; Trivedi, Geetika

    In mycorrhizal symbiosis, plant roots form close, mutually beneficial interactions with soil fungi. Before this mycorrhizal interaction can be established, however, plant roots must be capable of detecting potential beneficial fungal partners and initiating the gene expression patterns necessary to begin symbiosis. To predict plant root-mycorrhizal fungus sensor systems, we analyzed in vitro experiments of Populus tremuloides (aspen tree) and Laccaria bicolor (mycorrhizal fungus) interaction and leveraged over 200 previously published transcriptomic experimental data sets, 159 experimentally validated plant transcription factor binding motifs, and more than 120,000 experimentally validated protein-protein interactions to generate models of pre-mycorrhizal sensor systems in aspen root. These sensor mechanisms link extracellular signaling molecules with gene regulation through a network comprised of membrane receptors, signal cascade proteins, transcription factors, and transcription factor binding DNA motifs. Modeling predicted four pre-mycorrhizal sensor complexes in aspen that interact with fifteen transcription factors to regulate the expression of 1184 genes in response to extracellular signals synthesized by Laccaria. Predicted extracellular signaling molecules include common signaling molecules such as phenylpropanoids, salicylate, and jasmonic acid. Lastly, this multi-omic computational modeling approach for predicting complex sensory networks yielded specific, testable biological hypotheses for mycorrhizal interaction signaling compounds, sensor complexes, and mechanisms of gene regulation.

  18. A simple theoretical framework for understanding heterogeneous differentiation of CD4+ T cells

    PubMed Central

    2012-01-01

    Background CD4+ T cells have several subsets of functional phenotypes, which play critical yet diverse roles in the immune system. Pathogen-driven differentiation of these subsets of cells is often heterogeneous in terms of the induced phenotypic diversity. In vitro recapitulation of heterogeneous differentiation under homogeneous experimental conditions indicates some highly regulated mechanisms by which multiple phenotypes of CD4+ T cells can be generated from a single population of naïve CD4+ T cells. Therefore, conceptual understanding of induced heterogeneous differentiation will shed light on the mechanisms controlling the response of populations of CD4+ T cells under physiological conditions. Results We present a simple theoretical framework to show how heterogeneous differentiation in a two-master-regulator paradigm can be governed by a signaling network motif common to all subsets of CD4+ T cells. With this motif, a population of naïve CD4+ T cells can integrate the signals from their environment to generate a functionally diverse population with robust commitment of individual cells. Notably, two positive feedback loops in this network motif govern three bistable switches, which in turn, give rise to three types of heterogeneous differentiated states, depending upon particular combinations of input signals. We provide three prototype models illustrating how to use this framework to explain experimental observations and make specific testable predictions. Conclusions The process in which several types of T helper cells are generated simultaneously to mount complex immune responses upon pathogenic challenges can be highly regulated, and a simple signaling network motif can be responsible for generating all possible types of heterogeneous populations with respect to a pair of master regulators controlling CD4+ T cell differentiation. The framework provides a mathematical basis for understanding the decision-making mechanisms of CD4+ T cells, and it can be helpful for interpreting experimental results. Mathematical models based on the framework make specific testable predictions that may improve our understanding of this differentiation system. PMID:22697466
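
    The bistability at the heart of this framework can be illustrated with the classic two-gene toggle switch, in which mutual inhibition of two master regulators forms a positive feedback loop; the sketch below uses hypothetical parameters and is not the paper's specific model:

      # Toggle-switch sketch of two mutually inhibiting master regulators.
      # The same equations drive cells to an X-high or a Y-high state
      # depending on initial conditions: bistable commitment.
      from scipy.integrate import solve_ivp

      ALPHA, N = 4.0, 2.0  # production strength, Hill coefficient

      def toggle(t, z):
          x, y = z
          return [ALPHA / (1 + y**N) - x, ALPHA / (1 + x**N) - y]

      for x0, y0 in [(2.0, 0.5), (0.5, 2.0)]:
          end = solve_ivp(toggle, (0, 50), [x0, y0]).y[:, -1]
          print(f"start ({x0}, {y0}) -> X = {end[0]:.2f}, Y = {end[1]:.2f}")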

  19. Testability of evolutionary game dynamics based on experimental economics data

    NASA Astrophysics Data System (ADS)

    Wang, Yijia; Chen, Xiaojie; Wang, Zhijian

    2017-11-01

    Understanding the dynamic processes of a real game system requires an appropriate dynamics model, and rigorously testing a dynamics model is nontrivial. In our methodological research, we develop an approach to testing the validity of game dynamics models that considers the dynamic patterns of angular momentum and speed as measurement variables. Using Rock-Paper-Scissors (RPS) games as an example, we illustrate the geometric patterns in the experiment data. We then derive the related theoretical patterns from a series of typical dynamics models. By testing the goodness-of-fit between the experimental and theoretical patterns, we show that the validity of these models can be evaluated quantitatively. Our approach establishes a link between dynamics models and experimental systems, which is, to the best of our knowledge, the most effective and rigorous strategy for ascertaining the testability of evolutionary game dynamics models.
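
    The measurement idea can be sketched as follows: integrate a deterministic RPS dynamics model (here the replicator equation, one typical candidate model) and accumulate the angular momentum of the state around the simplex centroid; persistent cycling yields a nonzero mean. Payoffs, initial condition, and step size are illustrative:

      # Replicator dynamics for Rock-Paper-Scissors with an accumulated
      # angular-momentum observable (z-component of r x dr in the first
      # two simplex coordinates, used here as a proxy for cyclic motion).
      import numpy as np

      A = np.array([[0, -1, 1],
                    [1, 0, -1],
                    [-1, 1, 0]])        # standard zero-sum RPS payoffs
      CENTROID = np.ones(3) / 3

      x = np.array([0.5, 0.3, 0.2])     # initial mixed population
      dt, L = 0.01, 0.0
      for _ in range(20_000):
          f = A @ x                     # fitnesses
          dx = x * (f - x @ f)          # replicator equation
          r = x - CENTROID
          L += (r[0] * dx[1] - r[1] * dx[0]) * dt
          x = x + dx * dt
      print(f"accumulated angular momentum: {L:.4f}")  # nonzero -> cycling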

  20. Detecting Rotational Superradiance in Fluid Laboratories

    NASA Astrophysics Data System (ADS)

    Cardoso, Vitor; Coutant, Antonin; Richartz, Mauricio; Weinfurtner, Silke

    2016-12-01

    Rotational superradiance was predicted theoretically decades ago, and is chiefly responsible for a number of important effects and phenomenology in black-hole physics. However, rotational superradiance has never been observed experimentally. Here, with the aim of probing superradiance in the lab, we investigate the behavior of sound and surface waves in fluids resting in a circular basin at the center of which a rotating cylinder is placed. We show that with a suitable choice for the material of the cylinder, surface and sound waves are amplified. Two types of instabilities are studied: one sets in whenever superradiant modes are confined near the rotating cylinder and the other, which does not rely on confinement, corresponds to a local excitation of the cylinder. Our findings are experimentally testable in existing fluid laboratories and, hence, offer experimental exploration and comparison of dynamical instabilities arising from rapidly rotating boundary layers in astrophysical as well as in fluid dynamical systems.
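
    The basic amplification criterion is compact: a wave of frequency omega and azimuthal number m is superradiantly amplified by a cylinder rotating at angular velocity Omega when omega < m*Omega, given suitable absorption at the surface. A toy check with illustrative numbers:

      # Rotational superradiance condition omega < m * Omega
      # (illustrative numbers, not an experimental design).
      OMEGA_CYL = 40.0          # cylinder rotation rate (rad/s)
      for m, omega in [(1, 30.0), (1, 50.0), (2, 70.0)]:
          amplified = omega < m * OMEGA_CYL
          print(f"m={m}, omega={omega:5.1f} rad/s -> superradiant: {amplified}")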

  1. Towards a Quantitative Endogenous Network Theory of Cancer Genesis and Progression: beyond "cancer as diseases of genome"

    NASA Astrophysics Data System (ADS)

    Ao, Ping

    2011-03-01

    There has been tremendous progress in cancer research. However, it appears that the current dominant cancer research framework of regarding cancer as a disease of the genome leads to an impasse. Naturally, questions have been asked as to whether it is possible to develop alternative frameworks that connect both to mutations and other genetic/genomic effects and to environmental factors. Furthermore, such a framework could be made quantitative, with predictions that are experimentally testable. In this talk, I will present a positive answer to this call. I will explain our construction of an endogenous network theory based on molecular-cellular agencies as dynamical variables. Such a cancer theory explicitly demonstrates a profound connection to many fundamental concepts in physics, such as stochastic non-equilibrium processes, "energy" landscapes, metastability, etc. It suggests that beneath cancer's daunting complexity may lie a simplicity that gives grounds for hope. The rationale behind such a theory, its predictions, and its initial experimental verifications will be presented. Supported by USA NIH and China NSF.

  2. Dynamic allostery of protein alpha helical coiled-coils

    PubMed Central

    Hawkins, Rhoda J; McLeish, Tom C.B

    2005-01-01

    Alpha helical coiled-coils appear in many important allosteric proteins such as the dynein molecular motor and bacteria chemotaxis transmembrane receptors. As a mechanism for transmitting the information of ligand binding to a distant site across an allosteric protein, an alternative to conformational change in the mean static structure is an induced change in the pattern of the internal dynamics of the protein. We explore how ligand binding may change the intramolecular vibrational free energy of a coiled-coil, using parameterized coarse-grained models, treating the case of dynein in detail. The models predict that coupling of slide, bend and twist modes of the coiled-coil transmits an allosteric free energy of ∼2kBT, consistent with experimental results. A further prediction is a quantitative increase in the effective stiffness of the coiled-coil without any change in inherent flexibility of the individual helices. The model provides a possible and experimentally testable mechanism for transmission of information through the alpha helical coiled-coil of dynein. PMID:16849225

  3. Mathematical models of the fate of lymphoma B cells after antigen receptor ligation with specific antibodies.

    PubMed

    Alarcón, Tomás; Marches, Radu; Page, Karen M

    2006-05-07

    We formulate models of the mechanism(s) by which B cell lymphoma cells stimulated with an antibody specific to the B cell receptor (IgM) become quiescent or apoptotic. In particular, we aim to reproduce experimental results by Marches et al. according to which the fate of the targeted cells (Daudi) depends on the levels of expression of p21(Waf1) (p21) cell-cycle inhibitor. A simple model is formulated in which the basic ingredients are p21 and caspase activity, and their mutual inhibition. We show that this model does not reproduce the experimental results and that further refinement is needed. A second model successfully reproduces the experimental observations, for a given set of parameter values, indicating a critical role for Myc in the fate decision process. We use bifurcation analysis and objective sensitivity analysis to assess the robustness of our results. Importantly, this analysis yields experimentally testable predictions on the role of Myc, which could have therapeutic implications.

  4. Inferior olive mirrors joint dynamics to implement an inverse controller.

    PubMed

    Alvarez-Icaza, Rodrigo; Boahen, Kwabena

    2012-10-01

    To produce smooth and coordinated motion, our nervous systems need to generate precisely timed muscle activation patterns that, due to axonal conduction delay, must be generated in a predictive and feedforward manner. Kawato proposed that the cerebellum accomplishes this by acting as an inverse controller that modulates descending motor commands to predictively drive the spinal cord such that the musculoskeletal dynamics are canceled out. This and other cerebellar theories do not, however, account for the rich biophysical properties expressed by the olivocerebellar complex's various cell types, making these theories difficult to verify experimentally. Here we propose that a multizonal microcomplex's (MZMC) inferior olivary neurons use their subthreshold oscillations to mirror a musculoskeletal joint's underdamped dynamics, thereby achieving inverse control. We used control theory to map a joint's inverse model onto an MZMC's biophysics, and we used biophysical modeling to confirm that inferior olivary neurons can express the dynamics required to mirror biomechanical joints. We then combined both techniques to predict how experimentally injecting current into the inferior olive would affect overall motor output performance. We found that this experimental manipulation unmasked a joint's natural dynamics, as observed by motor output ringing at the joint's natural frequency, with amplitude proportional to the amount of current. These results support the proposal that the cerebellum, in particular an MZMC, is an inverse controller; the results also provide a biophysical implementation for this controller and allow one to make an experimentally testable prediction.

  5. Percolation mechanism drives actin gels to the critically connected state

    NASA Astrophysics Data System (ADS)

    Lee, Chiu Fan; Pruessner, Gunnar

    2016-05-01

    Cell motility and tissue morphogenesis depend crucially on the dynamic remodeling of actomyosin networks. An actomyosin network consists of an actin polymer network connected by cross-linker proteins and motor protein myosins that generate internal stresses on the network. A recent discovery shows that for a range of experimental parameters, actomyosin networks contract to clusters with a power-law size distribution [J. Alvarado, Nat. Phys. 9, 591 (2013), 10.1038/nphys2715]. Here, we argue that actomyosin networks can exhibit a robust critical signature without fine-tuning because the dynamics of the system can be mapped onto a modified version of percolation with trapping (PT), which is known to show critical behavior belonging to the static percolation universality class without the need for fine-tuning of a control parameter. We further employ our PT model to generate experimentally testable predictions.
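
    The static percolation universality class the authors map onto can be illustrated directly: at the critical occupation probability of 2D site percolation (p_c ~ 0.5927 on the square lattice), cluster sizes follow a power law. A small sketch, not the authors' PT model itself:

      # 2D site percolation at criticality: power-law cluster sizes.
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(2)
      P_C, L = 0.5927, 256

      grid = rng.random((L, L)) < P_C
      labels, _ = ndimage.label(grid)                 # 4-connected clusters
      sizes = np.bincount(labels.ravel())[1:]         # drop background 0
      for s in (1, 10, 100, 1000):
          print(f"fraction of clusters with size >= {s:4d}: "
                f"{np.mean(sizes >= s):.4f}")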

  6. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: A new mammalian circadian oscillator model including the cAMP module

    NASA Astrophysics Data System (ADS)

    Wang, Jun-Wei; Zhou, Tian-Shou

    2009-12-01

    In this paper, we develop a new mathematical model for the mammalian circadian clock, which incorporates both transcriptional/translational feedback loops (TTFLs) and a cAMP-mediated feedback loop. The model shows that TTFLs and cAMP signalling cooperatively drive the circadian rhythms. It reproduces typical experimental observations with qualitative similarities, e.g. circadian oscillations in constant darkness and entrainment to light-dark cycles. In addition, it can explain the phenotypes of cAMP-mutant and Rev-erbα−/− mutant mice, and help us make an experimentally testable prediction: oscillations may be rescued when arrhythmic mice with constitutively low concentrations of cAMP are crossed with Rev-erbα−/− mutant mice. The model enhances our understanding of the mammalian circadian clockwork from the viewpoint of the entire cell.

  7. Testing for ontological errors in probabilistic forecasting models of natural systems

    PubMed Central

    Marzocchi, Warner; Jordan, Thomas H.

    2014-01-01

    Probabilistic forecasting models describe the aleatory variability of natural systems as well as our epistemic uncertainty about how the systems work. Testing a model against observations exposes ontological errors in the representation of a system and its uncertainties. We clarify several conceptual issues regarding the testing of probabilistic forecasting models for ontological errors: the ambiguity of the aleatory/epistemic dichotomy, the quantification of uncertainties as degrees of belief, the interplay between Bayesian and frequentist methods, and the scientific pathway for capturing predictability. We show that testability of the ontological null hypothesis derives from an experimental concept, external to the model, that identifies collections of data, observed and not yet observed, that are judged to be exchangeable when conditioned on a set of explanatory variables. These conditional exchangeability judgments specify observations with well-defined frequencies. Any model predicting these behaviors can thus be tested for ontological error by frequentist methods; e.g., using P values. In the forecasting problem, prior predictive model checking, rather than posterior predictive checking, is desirable because it provides more severe tests. We illustrate experimental concepts using examples from probabilistic seismic hazard analysis. Severe testing of a model under an appropriate set of experimental concepts is the key to model validation, in which we seek to know whether a model replicates the data-generating process well enough to be sufficiently reliable for some useful purpose, such as long-term seismic forecasting. Pessimistic views of system predictability fail to recognize the power of this methodology in separating predictable behaviors from those that are not. PMID:25097265
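
    A minimal sketch of prior predictive checking in this spirit, with a made-up prior and observation: draw rates from the prior, simulate counts, and score the observed count with a tail P value; a very small value would flag an ontological error:

      # Prior predictive check sketch (illustrative prior and data).
      import numpy as np

      rng = np.random.default_rng(3)
      N_SIM, T_YEARS, OBSERVED = 100_000, 10.0, 21

      rates = rng.gamma(shape=2.0, scale=0.5, size=N_SIM)  # prior, events/yr
      sim_counts = rng.poisson(rates * T_YEARS)            # prior predictive
      p_value = np.mean(sim_counts >= OBSERVED)
      print(f"prior predictive P(count >= {OBSERVED}) = {p_value:.4f}")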

  8. Broken SU(3) antidecuplet for Θ+ and Ξ3/2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pakvasa, Sandip; Suzuki, Mahiko

    2004-05-05

    If the narrow exotic baryon resonances Θ+(1540) and Ξ3/2 are members of the J^P = 1/2^+ antidecuplet with N*(1710), the octet-antidecuplet mixing is required not only by the mass spectrum but also by the decay pattern of N*(1710). This casts doubt on the validity of the Θ+ mass prediction by the chiral soliton model. While all pieces of the existing experimental information point to a small octet-decuplet mixing, the magnitude of mixing required by the mass spectrum is not consistent with the value needed to account for the hadronic decay rates. The discrepancy is not resolved even after the large experimental uncertainty is taken into consideration. We fail to find an alternative SU(3) assignment even with a different spin-parity assignment. When we extend the analysis to mixing with a higher SU(3) multiplet, we find one experimentally testable scenario in the case of mixing with a 27-plet.

  9. Linking short-term responses to ecologically-relevant outcomes

    EPA Pesticide Factsheets

    Opportunity to participate in the conduct of collaborative integrative lab, field and modelling efforts to characterize molecular-to-organismal level responses and make quantitative testable predictions of population level outcomes.

  10. The effect of analytic and experiential modes of thought on moral judgment.

    PubMed

    Kvaran, Trevor; Nichols, Shaun; Sanfey, Alan

    2013-01-01

    According to dual-process theories, moral judgments are the result of two competing processes: a fast, automatic, affect-driven process and a slow, deliberative, reason-based process. Accordingly, these models make clear and testable predictions about the influence of each system. Although a small number of studies have attempted to examine each process independently in the context of moral judgment, no study has yet tried to experimentally manipulate both processes within a single study. In this chapter, a well-established "mode-of-thought" priming technique was used to place participants in either an experiential/emotional or analytic mode while completing a task in which participants provide judgments about a series of moral dilemmas. We predicted that individuals primed analytically would make more utilitarian responses than control participants, while emotional priming would lead to less utilitarian responses. Support was found for both of these predictions. Implications of these findings for dual-process theories of moral judgment will be discussed. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. cisMEP: an integrated repository of genomic epigenetic profiles and cis-regulatory modules in Drosophila

    PubMed Central

    2014-01-01

    Background Cis-regulatory modules (CRMs), or the DNA sequences required for regulating gene expression, play the central role in biological researches on transcriptional regulation in metazoan species. Nowadays, the systematic understanding of CRMs still mainly resorts to computational methods due to the time-consuming and small-scale nature of experimental methods. But the accuracy and reliability of different CRM prediction tools are still unclear. Without comparative cross-analysis of the results and combinatorial consideration with extra experimental information, there is no easy way to assess the confidence of the predicted CRMs. This limits the genome-wide understanding of CRMs. Description It is known that transcription factor binding and epigenetic profiles tend to determine functions of CRMs in gene transcriptional regulation. Thus integration of the genome-wide epigenetic profiles with systematically predicted CRMs can greatly help researchers evaluate and decipher the prediction confidence and possible transcriptional regulatory functions of these potential CRMs. However, these data are still fragmentary in the literatures. Here we performed the computational genome-wide screening for potential CRMs using different prediction tools and constructed the pioneer database, cisMEP (cis-regulatory module epigenetic profile database), to integrate these computationally identified CRMs with genomic epigenetic profile data. cisMEP collects the literature-curated TFBS location data and nine genres of epigenetic data for assessing the confidence of these potential CRMs and deciphering the possible CRM functionality. Conclusions cisMEP aims to provide a user-friendly interface for researchers to assess the confidence of different potential CRMs and to understand the functions of CRMs through experimentally-identified epigenetic profiles. The deposited potential CRMs and experimental epigenetic profiles for confidence assessment provide experimentally testable hypotheses for the molecular mechanisms of metazoan gene regulation. We believe that the information deposited in cisMEP will greatly facilitate the comparative usage of different CRM prediction tools and will help biologists to study the modular regulatory mechanisms between different TFs and their target genes. PMID:25521507

  12. Modelling toehold-mediated RNA strand displacement.

    PubMed

    Šulc, Petr; Ouldridge, Thomas E; Romano, Flavio; Doye, Jonathan P K; Louis, Ard A

    2015-03-10

    We study the thermodynamics and kinetics of an RNA toehold-mediated strand displacement reaction with a recently developed coarse-grained model of RNA. Strand displacement, during which a single strand displaces a different strand previously bound to a complementary substrate strand, is an essential mechanism in active nucleic acid nanotechnology and has also been hypothesized to occur in vivo. We study the rate of displacement reactions as a function of the length of the toehold and temperature and make two experimentally testable predictions: that the displacement is faster if the toehold is placed at the 5' end of the substrate; and that the displacement slows down with increasing temperature for longer toeholds. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  13. Stalk model of membrane fusion: solution of energy crisis.

    PubMed Central

    Kozlovsky, Yonathan; Kozlov, Michael M

    2002-01-01

    Membrane fusion proceeds via formation of intermediate nonbilayer structures. The stalk model of fusion intermediate is commonly recognized to account for the major phenomenology of the fusion process. However, in its current form, the stalk model poses a challenge. On one hand, it is able to describe qualitatively the modulation of the fusion reaction by the lipid composition of the membranes. On the other, it predicts very large values of the stalk energy, so that the related energy barrier for fusion cannot be overcome by membranes within a biologically reasonable span of time. We suggest a new structure for the fusion stalk, which resolves the energy crisis of the model. Our approach is based on a combined deformation of the stalk membrane including bending of the membrane surface and tilt of the hydrocarbon chains of lipid molecules. We demonstrate that the energy of the fusion stalk is a few times smaller than those predicted previously and the stalks are feasible in real systems. We account quantitatively for the experimental results on dependence of the fusion reaction on the lipid composition of different membrane monolayers. We analyze the dependence of the stalk energy on the distance between the fusing membranes and provide the experimentally testable predictions for the structural features of the stalk intermediates. PMID:11806930

  14. BETA: Behavioral testability analyzer and its application to high-level test generation and synthesis for testability. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, Chung-Hsing

    1992-01-01

    In this thesis, a behavioral-level testability analysis approach is presented. This approach is based on analyzing the circuit behavioral description (similar to a C program) to estimate its testability by identifying controllable and observable circuit nodes. This information can be used by a test generator to gain better access to internal circuit nodes and to reduce its search space. The results of the testability analyzer can also be used to select test points or partial scan flip-flops in the early design phase. Based on selection criteria, a novel Synthesis for Testability approach called Test Statement Insertion (TSI) is proposed, which modifies the circuit behavioral description directly. Test Statement Insertion can also be used to modify the circuit structural description to improve its testability. As a result, the Synthesis for Testability methodology can be combined with an existing behavioral synthesis tool to produce more testable circuits.

  15. Network Simulation Models

    DTIC Science & Technology

    2008-12-01

    1979; Wasserman and Faust, 1994). SNA thus relies heavily on graph theory to make predictions about network structure and thus social behavior...becomes a tool for increasing the specificity of theory, thinking through the theoretical implications, and generating testable predictions. In...to summarize Construct and its roots in constructural sociological theory. We discover that the (LPM) provides a mathematical bridge between

  16. Predicting the dynamics of bacterial growth inhibition by ribosome-targeting antibiotics

    NASA Astrophysics Data System (ADS)

    Greulich, Philip; Doležal, Jakub; Scott, Matthew; Evans, Martin R.; Allen, Rosalind J.

    2017-12-01

    Understanding how antibiotics inhibit bacteria can help to reduce antibiotic use and hence avoid antimicrobial resistance—yet few theoretical models exist for bacterial growth inhibition by a clinically relevant antibiotic treatment regimen. In particular, in the clinic, antibiotic treatment is time-dependent. Here, we use a theoretical model, previously applied to steady-state bacterial growth, to predict the dynamical response of a bacterial cell to a time-dependent dose of ribosome-targeting antibiotic. Our results depend strongly on whether the antibiotic shows reversible transport and/or low-affinity ribosome binding (‘low-affinity antibiotic’) or, in contrast, irreversible transport and/or high affinity ribosome binding (‘high-affinity antibiotic’). For low-affinity antibiotics, our model predicts that growth inhibition depends on the duration of the antibiotic pulse, and can show a transient period of very fast growth following removal of the antibiotic. For high-affinity antibiotics, growth inhibition depends on peak dosage rather than dose duration, and the model predicts a pronounced post-antibiotic effect, due to hysteresis, in which growth can be suppressed for long times after the antibiotic dose has ended. These predictions are experimentally testable and may be of clinical significance.
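
    The qualitative contrast can be reproduced with a deliberately stripped-down sketch (hypothetical rates, not the paper's full transport-plus-binding model): a square antibiotic pulse binds ribosomes, growth is proportional to the unbound fraction, and setting the unbinding rate to zero mimics the high-affinity case:

      # Minimal pulse-response sketch: with irreversible binding
      # (k_off = 0) growth stays suppressed after the pulse ends,
      # a toy version of the predicted post-antibiotic effect.
      from scipy.integrate import solve_ivp

      K_ON, PULSE = 2.0, (1.0, 4.0)      # binding rate; pulse on/off times

      def dose(t):
          return 1.0 if PULSE[0] <= t <= PULSE[1] else 0.0

      def model(t, y, k_off):
          bound = y[0]
          return [K_ON * dose(t) * (1 - bound) - k_off * bound]

      for label, k_off in [("low-affinity (reversible)", 1.0),
                           ("high-affinity (irreversible)", 0.0)]:
          sol = solve_ivp(model, (0, 12), [0.0], args=(k_off,), max_step=0.05)
          growth_end = 1 - sol.y[0, -1]   # unbound fraction at t = 12
          print(f"{label}: growth fraction after pulse = {growth_end:.2f}")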

  17. Predicting the dynamics of bacterial growth inhibition by ribosome-targeting antibiotics

    PubMed Central

    Greulich, Philip; Doležal, Jakub; Scott, Matthew; Evans, Martin R; Allen, Rosalind J

    2017-01-01

    Understanding how antibiotics inhibit bacteria can help to reduce antibiotic use and hence avoid antimicrobial resistance—yet few theoretical models exist for bacterial growth inhibition by a clinically relevant antibiotic treatment regimen. In particular, in the clinic, antibiotic treatment is time-dependent. Here, we use a theoretical model, previously applied to steady-state bacterial growth, to predict the dynamical response of a bacterial cell to a time-dependent dose of ribosome-targeting antibiotic. Our results depend strongly on whether the antibiotic shows reversible transport and/or low-affinity ribosome binding (‘low-affinity antibiotic’) or, in contrast, irreversible transport and/or high affinity ribosome binding (‘high-affinity antibiotic’). For low-affinity antibiotics, our model predicts that growth inhibition depends on the duration of the antibiotic pulse, and can show a transient period of very fast growth following removal of the antibiotic. For high-affinity antibiotics, growth inhibition depends on peak dosage rather than dose duration, and the model predicts a pronounced post-antibiotic effect, due to hysteresis, in which growth can be suppressed for long times after the antibiotic dose has ended. These predictions are experimentally testable and may be of clinical significance. PMID:28714461

  18. Training Signaling Pathway Maps to Biochemical Data with Constrained Fuzzy Logic: Quantitative Analysis of Liver Cell Responses to Inflammatory Stimuli

    PubMed Central

    Morris, Melody K.; Saez-Rodriguez, Julio; Clarke, David C.; Sorger, Peter K.; Lauffenburger, Douglas A.

    2011-01-01

    Predictive understanding of cell signaling network operation based on general prior knowledge but consistent with empirical data in a specific environmental context is a current challenge in computational biology. Recent work has demonstrated that Boolean logic can be used to create context-specific network models by training proteomic pathway maps to dedicated biochemical data; however, the Boolean formalism is restricted to characterizing protein species as either fully active or inactive. To advance beyond this limitation, we propose a novel form of fuzzy logic sufficiently flexible to model quantitative data but also sufficiently simple to efficiently construct models by training pathway maps on dedicated experimental measurements. Our new approach, termed constrained fuzzy logic (cFL), converts a prior knowledge network (obtained from literature or interactome databases) into a computable model that describes graded values of protein activation across multiple pathways. We train a cFL-converted network to experimental data describing hepatocytic protein activation by inflammatory cytokines and demonstrate the application of the resultant trained models for three important purposes: (a) generating experimentally testable biological hypotheses concerning pathway crosstalk, (b) establishing capability for quantitative prediction of protein activity, and (c) prediction and understanding of the cytokine release phenotypic response. Our methodology systematically and quantitatively trains a protein pathway map summarizing curated literature to context-specific biochemical data. This process generates a computable model yielding successful prediction of new test data and offering biological insight into complex datasets that are difficult to fully analyze by intuition alone. PMID:21408212

  19. Analysis of optimality in natural and perturbed metabolic networks

    PubMed Central

    Segrè, Daniel; Vitkup, Dennis; Church, George M.

    2002-01-01

    An important goal of whole-cell computational modeling is to integrate detailed biochemical information with biological intuition to produce testable predictions. Based on the premise that prokaryotes such as Escherichia coli have maximized their growth performance along evolution, flux balance analysis (FBA) predicts metabolic flux distributions at steady state by using linear programming. Corroborating earlier results, we show that recent intracellular flux data for wild-type E. coli JM101 display excellent agreement with FBA predictions. Although the assumption of optimality for a wild-type bacterium is justifiable, the same argument may not be valid for genetically engineered knockouts or other bacterial strains that were not exposed to long-term evolutionary pressure. We address this point by introducing the method of minimization of metabolic adjustment (MOMA), whereby we test the hypothesis that knockout metabolic fluxes undergo a minimal redistribution with respect to the flux configuration of the wild type. MOMA employs quadratic programming to identify a point in flux space, which is closest to the wild-type point, compatibly with the gene deletion constraint. Comparing MOMA and FBA predictions to experimental flux data for E. coli pyruvate kinase mutant PB25, we find that MOMA displays a significantly higher correlation than FBA. Our method is further supported by experimental data for E. coli knockout growth rates. It can therefore be used for predicting the behavior of perturbed metabolic networks, whose growth performance is in general suboptimal. MOMA and its possible future extensions may be useful in understanding the evolutionary optimization of metabolism. PMID:12415116
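
    A toy illustration of the FBA/MOMA contrast on a made-up three-metabolite network (not E. coli): FBA re-optimizes biomass after a knockout, while MOMA picks the feasible flux vector closest, in least squares, to the wild type, and therefore predicts suboptimal growth:

      # Fluxes v = [uptake, A->B, A->0.5B bypass, B->biomass]; S v = 0.
      import numpy as np
      from scipy.optimize import linprog, minimize

      S = np.array([[1, -1, -1, 0],      # metabolite A balance
                    [0, 1, 0.5, -1]])    # metabolite B balance
      bounds = [(0, 10), (0, None), (0, None), (0, None)]

      # Wild-type FBA: maximize biomass v4 (linprog minimizes -v4).
      wt = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds).x
      print("wild-type FBA fluxes:", np.round(wt, 2))

      # Knock out reaction 2 (A->B): FBA re-optimizes, MOMA stays close.
      ko_bounds = list(bounds); ko_bounds[1] = (0, 0)
      fba_ko = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=[0, 0],
                       bounds=ko_bounds).x
      moma_ko = minimize(lambda v: np.sum((v - wt) ** 2), x0=np.zeros(4),
                         constraints={"type": "eq", "fun": lambda v: S @ v},
                         bounds=ko_bounds, method="SLSQP").x
      print("knockout FBA  biomass:", round(fba_ko[3], 2))  # optimal reroute
      print("knockout MOMA biomass:", round(moma_ko[3], 2)) # suboptimal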

  20. Multidisciplinary approaches to understanding collective cell migration in developmental biology.

    PubMed

    Schumacher, Linus J; Kulesa, Paul M; McLennan, Rebecca; Baker, Ruth E; Maini, Philip K

    2016-06-01

    Mathematical models are becoming increasingly integrated with experimental efforts in the study of biological systems. Collective cell migration in developmental biology is a particularly fruitful application area for the development of theoretical models to predict the behaviour of complex multicellular systems with many interacting parts. In this context, mathematical models provide a tool to assess the consistency of experimental observations with testable mechanistic hypotheses. In this review, we showcase examples from recent years of multidisciplinary investigations of neural crest cell migration. The neural crest model system has been used to study how collective migration of cell populations is shaped by cell-cell interactions, cell-environmental interactions and heterogeneity between cells. The wide range of emergent behaviours exhibited by neural crest cells in different embryonal locations and in different organisms helps us chart out the spectrum of collective cell migration. At the same time, this diversity in migratory characteristics highlights the need to reconcile or unify the array of currently hypothesized mechanisms through the next generation of experimental data and generalized theoretical descriptions. © 2016 The Authors.

  1. Constraining the loop quantum gravity parameter space from phenomenology

    NASA Astrophysics Data System (ADS)

    Brahma, Suddhasattwa; Ronco, Michele

    2018-03-01

    Development of quantum gravity theories rarely takes inputs from experimental physics. In this letter, we take a small step towards correcting this by establishing a paradigm for incorporating putative quantum corrections, arising from canonical quantum gravity (QG) theories, in deriving falsifiable modified dispersion relations (MDRs) for particles on a deformed Minkowski space-time. This allows us to differentiate and, hopefully, pick between several quantization choices via testable, state-of-the-art phenomenological predictions. Although a few explicit examples from loop quantum gravity (LQG) (such as the regularization scheme used or the representation of the gauge group) are shown here to establish the claim, our framework is more general and is capable of addressing other quantization ambiguities within LQG and also those arising from other similar QG approaches.
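
    The phenomenological handle is typically a modified dispersion relation. As an order-of-magnitude sketch, a linear-in-energy correction v(E) ~ c(1 - xi*E/E_Pl) delays a high-energy photon over a distance D by dt ~ xi*(E/E_Pl)*(D/c); xi is an assumed O(1) coefficient whose value the quantization choices would fix:

      # MDR time-of-flight delay estimate (illustrative numbers).
      E_PLANCK_GEV = 1.22e19
      C = 3.0e8                   # m/s
      D = 1.0e26                  # rough gamma-ray-burst distance in metres
      XI = 1.0                    # assumed O(1) coefficient

      for e_gev in (1.0, 100.0, 1e4):
          dt = XI * (e_gev / E_PLANCK_GEV) * D / C
          print(f"E = {e_gev:8.1f} GeV -> time delay ~ {dt:.3e} s")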

  2. Describing Myxococcus xanthus Aggregation Using Ostwald Ripening Equations for Thin Liquid Films

    PubMed Central

    Bahar, Fatmagül; Pratt-Szeliga, Philip C.; Angus, Stuart; Guo, Jiaye; Welch, Roy D.

    2014-01-01

    When starved, a swarm of millions of Myxococcus xanthus cells coordinate their movement from outward swarming to inward coalescence. The cells then execute a synchronous program of multicellular development, arranging themselves into dome shaped aggregates. Over the course of development, about half of the initial aggregates disappear, while others persist and mature into fruiting bodies. This work seeks to develop a quantitative model of aggregation that accurately predicts which aggregates will disappear and which will persist. We analyzed time-lapse movies of M. xanthus development, modeled aggregation using the equations that describe Ostwald ripening of droplets in thin liquid films, and predicted the disappearance and persistence of aggregates with an average accuracy of 85%. We then experimentally validated a prediction that is fundamental to this model by tracking individual fluorescent cells as they moved between aggregates and demonstrating that cell movement towards and away from aggregates correlates with aggregate disappearance. Describing development through this model may limit the number and type of molecular genetic signals needed to complete M. xanthus development, and it provides numerous additional testable predictions. PMID:25231319
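
    The flavor of the ripening dynamics can be sketched with LSW-type equations, in which each aggregate grows or shrinks according to its radius relative to a critical radius; the constants, the mean-field choice of critical radius, and the disappearance cutoff below are illustrative, not the paper's calibrated model:

      # Ostwald-ripening sketch: dR/dt = (K/R) * (1/Rc - 1/R), with the
      # critical radius Rc set each step by a mean-field closure. Small
      # aggregates disappear while large ones persist.
      import numpy as np

      rng = np.random.default_rng(4)
      K, DT = 1.0, 1e-4
      R = rng.uniform(0.5, 1.5, size=200)    # initial aggregate radii

      for step in range(20_000):
          Rc = np.mean(R)                    # mean-field critical radius
          R = R + DT * (K / R) * (1.0 / Rc - 1.0 / R)
          R = R[R > 0.05]                    # below cutoff: disappeared
      print(f"aggregates remaining: {len(R)} of 200")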

  3. Soviet Economic Policy Towards Eastern Europe

    DTIC Science & Technology

    1988-11-01

    high. Without specifying the determinants of Soviet demand for "allegiance" in more detail, the model is not testable; we cannot predict how subsidy...trade inside (Czechoslovakia, Bulgaria). These countries are behaving as predicted by the model. If this hypothesis is true, the pattern of subsidies...also compares the sum of per capita subsidies by country between 1970 and 1982 with the sum of subsidies predicted by the model. Because of the poor

  4. The Experimental Study of Bacterial Evolution and Its Implications for the Modern Synthesis of Evolutionary Biology.

    PubMed

    O'Malley, Maureen A

    2018-06-01

    Since the 1940s, microbiologists, biochemists and population geneticists have experimented with the genetic mechanisms of microorganisms in order to investigate evolutionary processes. These evolutionary studies of bacteria and other microorganisms gained some recognition from the standard-bearers of the modern synthesis of evolutionary biology, especially Theodosius Dobzhansky and Ledyard Stebbins. A further period of post-synthesis bacterial evolutionary research occurred between the 1950s and 1980s. These experimental analyses focused on the evolution of population and genetic structure, the adaptive gain of new functions, and the evolutionary consequences of competition dynamics. This large body of research aimed to make evolutionary theory testable and predictive, by giving it mechanistic underpinnings. Although evolutionary microbiologists promoted bacterial experiments as methodologically advantageous and a source of general insight into evolution, they also acknowledged the biological differences of bacteria. My historical overview concludes with reflections on what bacterial evolutionary research achieved in this period, and its implications for the still-developing modern synthesis.

  5. Regulation of multispanning membrane protein topology via post-translational annealing.

    PubMed

    Van Lehn, Reid C; Zhang, Bin; Miller, Thomas F

    2015-09-26

    The canonical mechanism for multispanning membrane protein topogenesis suggests that protein topology is established during cotranslational membrane integration. However, this mechanism is inconsistent with the behavior of EmrE, a dual-topology protein for which the mutation of positively charged loop residues, even close to the C-terminus, leads to dramatic shifts in its topology. We use coarse-grained simulations to investigate the Sec-facilitated membrane integration of EmrE and its mutants on realistic biological timescales. This work reveals a mechanism for regulating membrane-protein topogenesis, in which initially misintegrated configurations of the proteins undergo post-translational annealing to reach fully integrated multispanning topologies. The energetic barriers associated with this post-translational annealing process enforce kinetic pathways that dictate the topology of the fully integrated proteins. The proposed mechanism agrees well with the experimentally observed features of EmrE topogenesis and provides a range of experimentally testable predictions regarding the effect of translocon mutations on membrane protein topogenesis.

  6. Experimental Test of Compatibility-Loophole-Free Contextuality with Spatially Separated Entangled Qutrits.

    PubMed

    Hu, Xiao-Min; Chen, Jiang-Shan; Liu, Bi-Heng; Guo, Yu; Huang, Yun-Feng; Zhou, Zong-Quan; Han, Yong-Jian; Li, Chuan-Feng; Guo, Guang-Can

    2016-10-21

    The physical impact and the testability of the Kochen-Specker (KS) theorem are debated because perfect compatibility in a single quantum system cannot be achieved in practical experiments with finite precision. Here, we follow the proposal of A. Cabello and M. T. Cunha [Phys. Rev. Lett. 106, 190401 (2011)] and present a compatibility-loophole-free experimental violation of an inequality of noncontextual theories by two spatially separated entangled qutrits. A maximally entangled qutrit-qutrit state with a fidelity as high as 0.975±0.001 is prepared and distributed to separated spaces, and the two photons are then measured locally, satisfying the compatibility requirement. The results show that the inequality for noncontextual theories is violated by 31 standard deviations. Our experiment paves the way to closing the debate about the testability of the KS theorem. In addition, the method used to generate high-fidelity, high-dimensional entangled states will provide significant advantages in high-dimensional quantum encoding and quantum communication.

  7. Perceptual Decision-Making as Probabilistic Inference by Neural Sampling.

    PubMed

    Haefner, Ralf M; Berkes, Pietro; Fiser, József

    2016-05-04

    We address two main challenges facing systems neuroscience today: understanding the nature and function of cortical feedback between sensory areas and of correlated variability. Starting from the old idea of perception as probabilistic inference, we show how to use knowledge of the psychophysical task to make testable predictions for the influence of feedback signals on early sensory representations. Applying our framework to a two-alternative forced choice task paradigm, we can explain multiple empirical findings that have been hard to account for by the traditional feedforward model of sensory processing, including the task dependence of neural response correlations and the diverging time courses of choice probabilities and psychophysical kernels. Our model makes new predictions and characterizes a component of correlated variability that represents task-related information rather than performance-degrading noise. It demonstrates a normative way to integrate sensory and cognitive components into physiologically testable models of perceptual decision-making. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. From Cookbook to Experimental Design

    ERIC Educational Resources Information Center

    Flannagan, Jenny Sue; McMillan, Rachel

    2009-01-01

    Developing expertise, whether from cook to chef or from student to scientist, occurs over time and requires encouragement, guidance, and support. One key goal of an elementary science program should be to move students toward expertise in their ability to design investigative questions. The ability to design a testable question is difficult for…

  9. Links between Parents' Epistemological Stance and Children's Evidence Talk

    ERIC Educational Resources Information Center

    Luce, Megan R.; Callanan, Maureen A.; Smilovic, Sarah

    2013-01-01

    Recent experimental research highlights young children's selectivity in learning from others. Little is known, however, about the patterns of information that children actually encounter in conversations with adults. This study investigated variation in parents' tendency to focus on testable evidence as a way to answer science-related questions…

  10. Origin and Proliferation of Multiple-Drug Resistance in Bacterial Pathogens

    PubMed Central

    Chang, Hsiao-Han; Cohen, Ted; Grad, Yonatan H.; Hanage, William P.; O'Brien, Thomas F.

    2015-01-01

    Many studies report the high prevalence of multiply drug-resistant (MDR) strains. Because MDR infections are often significantly harder and more expensive to treat, they represent a growing public health threat. However, for different pathogens, different underlying mechanisms are traditionally used to explain these observations, and it is unclear whether each bacterial taxon has its own mechanism(s) for multidrug resistance or whether there are common mechanisms between distantly related pathogens. In this review, we provide a systematic overview of the causes of the excess of MDR infections and define testable predictions made by each hypothetical mechanism, including experimental, epidemiological, population genomic, and other tests of these hypotheses. Better understanding the cause(s) of the excess of MDR is the first step to rational design of more effective interventions to prevent the origin and/or proliferation of MDR. PMID:25652543

  11. The Law of Self-Acting Machines and Irreversible Processes with Reversible Replicas

    NASA Astrophysics Data System (ADS)

    Valev, Pentcho

    2002-11-01

    Clausius and Kelvin saved Carnot's theorem and developed the second law by assuming that Carnot machines can work in the absence of an operator and that all irreversible processes have reversible replicas. The former assumption restored Carnot's theorem as an experience of mankind, whereas the latter generated "the law of ever increasing entropy". Both assumptions are wrong, so it makes sense to return to Carnot's theorem (or some equivalent) and test it experimentally. Two testable paradigms - the system performing two types of reversible work and the system in dynamical equilibrium - suggest that a perpetuum mobile of the second kind in the presence of an operator is possible. The deviation from the second-law prediction, expressed as a difference between partial derivatives in a Maxwell relation, measures the degree of structural-functional evolution of the respective system.

  12. QUANTITATIVE TESTS OF ELMS AS INTERMEDIATE N PEELING-BALLOONING MODES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LAO,LL; SNYDER,PB; LEONARD,AW

    2003-03-01

    Several testable features of the working model of edge localized modes (ELMs) as intermediate toroidal mode number peeling-ballooning modes are evaluated quantitatively using DIII-D and JT-60U experimental data and the ELITE MHD stability code. These include the hypotheses that ELM sizes are related to the radial widths of the unstable MHD modes, that the unstable modes have a strong ballooning character localized in the outboard bad-curvature region, and that ELM size generally becomes smaller at high edge collisionality. ELMs are triggered when the growth rates of the unstable MHD modes become significantly large. These testable features are consistent with many ELM observations in DIII-D and JT-60U discharges.

  13. Heating of trapped ultracold atoms by collapse dynamics

    NASA Astrophysics Data System (ADS)

    Laloë, Franck; Mullin, William J.; Pearle, Philip

    2014-11-01

    The continuous spontaneous localization (CSL) theory alters the Schrödinger equation. It describes wave-function collapse as a dynamical process instead of an ill-defined postulate, thereby providing macroscopic uniqueness and solving the so-called measurement problem of standard quantum theory. CSL contains a parameter λ giving the collapse rate of an isolated nucleon in a superposition of two spatially separated states and, more generally, characterizing the collapse time for any physical situation. CSL is experimentally testable, since it predicts some behavior different from that predicted by standard quantum theory. One example is the narrowing of wave functions, which results in energy imparted to particles. Here we consider energy given to trapped ultracold atoms. Since these are the coldest samples under experimental investigation, it is worth inquiring how they are affected by the CSL heating mechanism. We examine the CSL heating of a Bose-Einstein condensate (BEC) in contact with its thermal cloud. Of course, other mechanisms also provide heat and also particle loss. From varied data on optically trapped cesium BECs, we present an energy audit for known heating and loss mechanisms. The result provides an upper limit on CSL heating and thereby an upper limit on the parameter λ. We obtain λ ≲ 1(±1) × 10^-7 s^-1.
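
    For context, the CSL heating invoked here is usually quoted, for a free particle of mass m, as a steady mean energy gain of the commonly cited form (an assumption in this sketch; the paper's energy audit for a trapped BEC plus thermal cloud is more involved)

        \frac{d\langle E\rangle}{dt} = \frac{3\lambda\hbar^{2}}{4\,m\,r_{C}^{2}},

    where r_{C} is the CSL correlation length (conventionally of order 10^{-7} m). An experimental upper limit on unexplained heating therefore converts directly into an upper limit on \lambda, which is the logic behind the bound reported above.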

  14. Computational Examination of Orientation-Dependent Morphological Evolution during the Electrodeposition and Electrodissolution of Magnesium

    DOE PAGES

    DeWitt, S.; Hahn, N.; Zavadil, K.; ...

    2015-12-30

    Here a new model of electrodeposition and electrodissolution is developed and applied to the evolution of Mg deposits during anode cycling. The model captures Butler-Volmer kinetics, facet evolution, the spatially varying potential in the electrolyte, and the time-dependent electrolyte concentration. The model utilizes a diffuse interface approach, employing the phase field and smoothed boundary methods. Scanning electron microscope (SEM) images of magnesium deposited on a gold substrate show the formation of faceted deposits, often in the form of hexagonal prisms. Orientation-dependent reaction rate coefficients were parameterized using the experimental SEM images. Three-dimensional simulations of the growth of magnesium deposits yield deposit morphologies consistent with the experimental results. The simulations predict that the deposits become narrower and taller as the current density increases due to the depletion of the electrolyte concentration near the sides of the deposits. Increasing the distance between the deposits leads to increased depletion of the electrolyte surrounding the deposit. Two models relating the orientation-dependence of the deposition and dissolution reactions are presented. Finally, the morphology of the Mg deposit after one deposition-dissolution cycle is significantly different between the two orientation-dependence models, providing testable predictions that suggest the underlying physical mechanisms governing morphology evolution during deposition and dissolution.
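
    The Butler-Volmer kinetics mentioned above take the standard form

        i = i_{0}\left[\exp\!\left(\frac{\alpha_{a}F\eta}{RT}\right) - \exp\!\left(-\frac{\alpha_{c}F\eta}{RT}\right)\right],

    where i_{0} is the exchange current density, \eta the overpotential, \alpha_{a} and \alpha_{c} the anodic and cathodic transfer coefficients, F the Faraday constant, R the gas constant, and T the temperature. In the model described above, the orientation dependence enters through the reaction rate coefficients, which this generic expression does not show; that dependence is what produces the faceted, hexagonal-prism morphologies.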

  15. Propagating Cell-Membrane Waves Driven by Curved Activators of Actin Polymerization

    PubMed Central

    Peleg, Barak; Disanza, Andrea; Scita, Giorgio; Gov, Nir

    2011-01-01

    Cells exhibit propagating membrane waves which involve the actin cytoskeleton. One type of such membranal waves are Circular Dorsal Ruffles (CDR) which are related to endocytosis and receptor internalization. Experimentally, CDRs have been associated with membrane bound activators of actin polymerization of concave shape. We present experimental evidence for the localization of convex membrane proteins in these structures, and their insensitivity to inhibition of myosin II contractility in immortalized mouse embryo fibroblasts cell cultures. These observations lead us to propose a theoretical model which explains the formation of these waves due to the interplay between complexes that contain activators of actin polymerization and membrane-bound curved proteins of both types of curvature (concave and convex). Our model predicts that the activity of both types of curved proteins is essential for sustaining propagating waves, which are abolished when one type of curved activator is removed. Within this model waves are initiated when the level of actin polymerization induced by the curved activators is higher than some threshold value, which allows the cell to control CDR formation. We demonstrate that the model can explain many features of CDRs, and give several testable predictions. This work demonstrates the importance of curved membrane proteins in organizing the actin cytoskeleton and cell shape. PMID:21533032

  16. The Simple Theory of Public Library Services.

    ERIC Educational Resources Information Center

    Newhouse, Joseph P.

    A simple normative theory applicable to public library services was developed as a tool to aid libraries in answering the question: which books should be bought by the library? Although developed for normative purposes, the theory generates testable predictions. It is relevant to measuring benefits from services which are provided publicly because…

  17. Two fundamental questions about protein evolution.

    PubMed

    Penny, David; Zhong, Bojian

    2015-12-01

    Two basic questions are considered that approach protein evolution from different directions: the problems arising from using Markov models for the deeper divergences, and the origin of proteins themselves. The real problem for the first question (going backwards in time) is that Markov models of sequence evolution necessarily lose information exponentially at deeper divergences; several testable methods are suggested that should help resolve these deeper divergences. For the second question (coming forwards in time) a problem is that most models for the origin of protein synthesis do not give a role to the very earliest stages of the process. From our knowledge of the importance of replication accuracy in limiting the length of a coding molecule, a testable hypothesis is proposed: the length of the code, the code itself, and tRNAs would all have had prior roles in increasing the accuracy of RNA replication; thus proteins would have formed only after the tRNAs and the length of the triplet code were already in place. Both questions lead to testable predictions. Copyright © 2014 Elsevier B.V. and Société Française de Biochimie et Biologie Moléculaire (SFBBM). All rights reserved.

  18. A computational approach to negative priming

    NASA Astrophysics Data System (ADS)

    Schrobsdorff, H.; Ihrke, M.; Kabisch, B.; Behrendt, J.; Hasselhorn, M.; Herrmann, J. Michael

    2007-09-01

    Priming is characterized by a sensitivity of reaction times to the sequence of stimuli in psychophysical experiments. The reduction of reaction time observed in positive priming is well known and experimentally understood (Scarborough et al., J. Exp. Psychol.: Hum. Percept. Perform., 3, pp. 1-17, 1977). Negative priming, the opposite effect, is experimentally less tangible (Fox, Psychonom. Bull. Rev., 2, pp. 145-173, 1995), and depends sensitively on subtle parameter changes such as the response-stimulus interval. The sensitivity of the negative priming effect bears great potential for applications in research fields such as memory, selective attention, and ageing. We develop and analyse a computational realization, CISAM, of a recent psychological model for action decision making, the ISAM (Kabisch, PhD thesis, Friedrich-Schiller-Universitat, 2003), which is sensitive to priming conditions. With the dynamical systems approach of the CISAM, we show that a single adaptive threshold mechanism is sufficient to explain both positive and negative priming effects. This is achieved by comparing results obtained by the computational modelling with experimental data from our laboratory. The implementation provides a rich base from which testable predictions can be derived, e.g. with respect to hitherto untested stimulus combinations (e.g. single-object trials).
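
    To make the single-adaptive-threshold idea concrete, here is a deliberately crude toy model (not the CISAM itself; all numbers are invented) in which attending an item lowers its decision threshold on the next trial and ignoring it raises the threshold, yielding faster responses for positively primed items and slower responses for negatively primed ones:

      # Toy adaptive-threshold account of priming: a saturating evidence
      # trace rises toward 1; reaction time is the number of steps needed
      # to cross an item-specific threshold set by the previous trial.
      def reaction_time(threshold, gain=0.05):
          a, t = 0.0, 0
          while a < threshold:
              a += gain * (1.0 - a)   # evidence accumulates toward 1
              t += 1
          return t

      rt_baseline = reaction_time(0.80)   # unprimed control trial
      rt_positive = reaction_time(0.75)   # item attended on prior trial
      rt_negative = reaction_time(0.85)   # item ignored on prior trial
      print(rt_positive, rt_baseline, rt_negative)   # 28 < 32 < 37 steps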

  19. Statistical mechanics of monatomic liquids

    NASA Astrophysics Data System (ADS)

    Wallace, Duane C.

    1997-10-01

    Two key experimental properties of elemental liquids, together with an analysis of the condensed-system potential-energy surface, lead us logically to the dynamical theory of monatomic liquids. Experimentally, the ion motional specific heat is approximately 3Nk for N ions, implying the normal modes of motion are approximately 3N independent harmonic oscillators. This implies the potential surface contains nearly harmonic valleys. The equilibrium configuration at the bottom of each valley is a ``structure.'' Structures are crystalline or amorphous, and amorphous structures can have a remnant of local crystal symmetry, or can be random. The random structures are by far the most numerous, and hence dominate the statistical mechanics of the liquid state, and their macroscopic properties are uniform over the structure class for large-N systems. The Hamiltonian for any structural valley is the static structure potential, a sum of harmonic normal modes, and an anharmonic correction. Again from experiment, the constant-density entropy of melting contains a universal disordering contribution of NkΔ, suggesting the random structural valleys are of universal number w^N, where ln w = Δ. Our experimental estimate for Δ is 0.80. In quasiharmonic approximation, the liquid theory for entropy agrees with experiment, for all currently analyzable experimental data at elevated temperatures, to within 1-2% of the total entropy. Further testable predictions of the theory are mentioned.
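
    In symbols, the counting argument of this abstract is that the number of random structural valleys scales as w^N, so their contribution to the liquid entropy is

        S_{\mathrm{liq}} \approx S_{\mathrm{vib}} + Nk\ln w = S_{\mathrm{vib}} + Nk\,\Delta, \qquad \Delta \approx 0.80,

    where S_{\mathrm{vib}} is the (quasi)harmonic vibrational entropy of a single valley; this simply restates the relation ln w = Δ together with the universal NkΔ disordering contribution to the entropy of melting.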

  20. Domain fusion analysis by applying relational algebra to protein sequence and domain databases

    PubMed Central

    Truong, Kevin; Ikura, Mitsuhiko

    2003-01-01

    Background Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages on these efforts will become increasingly powerful. Results This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at http://calcium.uhnres.utoronto.ca/pi. Conclusion As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time. PMID:12734020

  1. Phenoscape: Identifying Candidate Genes for Evolutionary Phenotypes

    PubMed Central

    Edmunds, Richard C.; Su, Baofeng; Balhoff, James P.; Eames, B. Frank; Dahdul, Wasila M.; Lapp, Hilmar; Lundberg, John G.; Vision, Todd J.; Dunham, Rex A.; Mabee, Paula M.; Westerfield, Monte

    2016-01-01

    Phenotypes resulting from mutations in genetic model organisms can help reveal candidate genes for evolutionarily important phenotypic changes in related taxa. Although testing candidate gene hypotheses experimentally in nonmodel organisms is typically difficult, ontology-driven information systems can help generate testable hypotheses about developmental processes in experimentally tractable organisms. Here, we tested candidate gene hypotheses suggested by expert use of the Phenoscape Knowledgebase, specifically looking for genes that are candidates responsible for evolutionarily interesting phenotypes in the ostariophysan fishes that bear resemblance to mutant phenotypes in zebrafish. For this, we searched ZFIN for genetic perturbations that result in either loss of basihyal element or loss of scales phenotypes, because these are the ancestral phenotypes observed in catfishes (Siluriformes). We tested the identified candidate genes by examining their endogenous expression patterns in the channel catfish, Ictalurus punctatus. The experimental results were consistent with the hypotheses that these features evolved through disruption in developmental pathways at, or upstream of, brpf1 and eda/edar for the ancestral losses of basihyal element and scales, respectively. These results demonstrate that ontological annotations of the phenotypic effects of genetic alterations in model organisms, when aggregated within a knowledgebase, can be used effectively to generate testable, and useful, hypotheses about evolutionary changes in morphology. PMID:26500251

  2. A bioinformatics expert system linking functional data to anatomical outcomes in limb regeneration

    PubMed Central

    Lobo, Daniel; Feldman, Erica B.; Shah, Michelle; Malone, Taylor J.

    2014-01-01

    Amphibians and molting arthropods have the remarkable capacity to regenerate amputated limbs, as described by an extensive literature of experimental cuts, amputations, grafts, and molecular techniques. Despite a rich history of experimental effort, no comprehensive mechanistic model exists that can account for the pattern regulation observed in these experiments. While bioinformatics algorithms have revolutionized the study of signaling pathways, no such tools have heretofore been available to assist scientists in formulating testable models of large‐scale morphogenesis that match published data in the limb regeneration field. Major barriers to an algorithmic approach are the lack of formal descriptions for experimental regenerative information and of a repository to centralize storage and mining of functional data on limb regeneration. Establishing a new bioinformatics of shape would significantly accelerate the discovery of key insights into the mechanisms that implement complex regeneration. Here, we describe a novel mathematical ontology for limb regeneration to unambiguously encode phenotype, manipulation, and experiment data. Based on this formalism, we present the first centralized formal database of published limb regeneration experiments together with a user‐friendly expert system tool to facilitate its access and mining. These resources are freely available for the community and will assist both human biologists and artificial intelligence systems to discover testable, mechanistic models of limb regeneration. PMID:25729585

  3. Assessment of Scientific Reasoning: the Effects of Task Context, Data, and Design on Student Reasoning in Control of Variables.

    PubMed

    Zhou, Shaona; Han, Jing; Koenig, Kathleen; Raplinger, Amy; Pi, Yuan; Li, Dan; Xiao, Hua; Fu, Zhao; Bao, Lei

    2016-03-01

    Scientific reasoning is an important component under the cognitive strand of the 21st century skills and is highly emphasized in the new science education standards. This study focuses on the assessment of student reasoning in control of variables (COV), which is a core sub-skill of scientific reasoning. The main research question is to investigate the extent to which the existence of experimental data in questions impacts student reasoning and performance. This study also explores the effects of task contexts on student reasoning as well as students' abilities to distinguish between testability and causal influences of variables in COV experiments. Data were collected with students from both USA and China. Students received randomly one of two test versions, one with experimental data and one without. The results show that students from both populations (1) perform better when experimental data are not provided, (2) perform better in physics contexts than in real-life contexts, and (3) students have a tendency to equate non-influential variables to non-testable variables. In addition, based on the analysis of both quantitative and qualitative data, a possible progression of developmental levels of student reasoning in control of variables is proposed, which can be used to inform future development of assessment and instruction.

  4. Assessment of Scientific Reasoning: the Effects of Task Context, Data, and Design on Student Reasoning in Control of Variables

    PubMed Central

    Zhou, Shaona; Han, Jing; Koenig, Kathleen; Raplinger, Amy; Pi, Yuan; Li, Dan; Xiao, Hua; Fu, Zhao

    2015-01-01

    Scientific reasoning is an important component under the cognitive strand of the 21st century skills and is highly emphasized in the new science education standards. This study focuses on the assessment of student reasoning in control of variables (COV), which is a core sub-skill of scientific reasoning. The main research question is to investigate the extent to which the existence of experimental data in questions impacts student reasoning and performance. This study also explores the effects of task contexts on student reasoning as well as students’ abilities to distinguish between testability and causal influences of variables in COV experiments. Data were collected with students from both USA and China. Students received randomly one of two test versions, one with experimental data and one without. The results show that students from both populations (1) perform better when experimental data are not provided, (2) perform better in physics contexts than in real-life contexts, and (3) students have a tendency to equate non-influential variables to non-testable variables. In addition, based on the analysis of both quantitative and qualitative data, a possible progression of developmental levels of student reasoning in control of variables is proposed, which can be used to inform future development of assessment and instruction. PMID:26949425

  5. Antenna Mechanism of Length Control of Actin Cables

    PubMed Central

    Mohapatra, Lishibanya; Goode, Bruce L.; Kondev, Jane

    2015-01-01

    Actin cables are linear cytoskeletal structures that serve as tracks for myosin-based intracellular transport of vesicles and organelles in both yeast and mammalian cells. In a yeast cell undergoing budding, cables are in constant dynamic turnover yet some cables grow from the bud neck toward the back of the mother cell until their length roughly equals the diameter of the mother cell. This raises the question: how is the length of these cables controlled? Here we describe a novel molecular mechanism for cable length control inspired by recent experimental observations in cells. This “antenna mechanism” involves three key proteins: formins, which polymerize actin, Smy1 proteins, which bind formins and inhibit actin polymerization, and myosin motors, which deliver Smy1 to formins, leading to a length-dependent actin polymerization rate. We compute the probability distribution of cable lengths as a function of several experimentally tuneable parameters such as the formin-binding affinity of Smy1 and the concentration of myosin motors delivering Smy1. These results provide testable predictions of the antenna mechanism of actin-cable length control. PMID:26107518

  6. Antenna Mechanism of Length Control of Actin Cables.

    PubMed

    Mohapatra, Lishibanya; Goode, Bruce L; Kondev, Jane

    2015-06-01

    Actin cables are linear cytoskeletal structures that serve as tracks for myosin-based intracellular transport of vesicles and organelles in both yeast and mammalian cells. In a yeast cell undergoing budding, cables are in constant dynamic turnover yet some cables grow from the bud neck toward the back of the mother cell until their length roughly equals the diameter of the mother cell. This raises the question: how is the length of these cables controlled? Here we describe a novel molecular mechanism for cable length control inspired by recent experimental observations in cells. This "antenna mechanism" involves three key proteins: formins, which polymerize actin, Smy1 proteins, which bind formins and inhibit actin polymerization, and myosin motors, which deliver Smy1 to formins, leading to a length-dependent actin polymerization rate. We compute the probability distribution of cable lengths as a function of several experimentally tuneable parameters such as the formin-binding affinity of Smy1 and the concentration of myosin motors delivering Smy1. These results provide testable predictions of the antenna mechanism of actin-cable length control.
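
    A minimal stochastic sketch of such an antenna mechanism (the rates, functional form, and parameter values below are invented for illustration, not the paper's fitted model): assembly slows with cable length because the flux of motor-delivered inhibitor grows with length, while disassembly is length-independent, which pins a steady-state length where the two rates balance:

      # Gillespie-style simulation of length-dependent assembly vs constant
      # disassembly; with these assumed rates the mean length settles near
      # L* = (k0/kminus - 1)/c = 2 (arbitrary units).
      import numpy as np

      rng = np.random.default_rng(1)
      k0, c, kminus, delta = 10.0, 0.5, 5.0, 0.1   # assumed rate constants

      def simulate(T=200.0):
          L, t = 0.0, 0.0
          while t < T:
              kplus = k0 / (1.0 + c * L)           # antenna: slows with L
              total = kplus + (kminus if L > 0 else 0.0)
              t += rng.exponential(1.0 / total)
              if rng.random() < kplus / total:
                  L += delta
              else:
                  L = max(L - delta, 0.0)
          return L

      lengths = np.array([simulate() for _ in range(500)])
      print(f"mean length {lengths.mean():.2f}, std {lengths.std():.2f}")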

  7. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    PubMed Central

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187

  8. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems.

    PubMed

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-12

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  9. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    NASA Astrophysics Data System (ADS)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  10. How and why does the immunological synapse form? Physical chemistry meets cell biology.

    PubMed

    Chakraborty, Arup K

    2002-03-05

    During T lymphocyte (T cell) recognition of an antigen, a highly organized and specific pattern of membrane proteins forms in the junction between the T cell and the antigen-presenting cell (APC). This specialized cell-cell junction is called the immunological synapse. It is several micrometers large and forms over many minutes. A plethora of experiments are being performed to study the mechanisms that underlie synapse formation and the way in which information transfer occurs across the synapse. The wealth of experimental data that is beginning to emerge must be understood within a mechanistic framework if it is to prove useful in developing modalities to control the immune response. Quantitative models can complement experiments in the quest for such a mechanistic understanding by suggesting experimentally testable hypotheses. Here, a quantitative synapse assembly model is described. The model uses concepts developed in physical chemistry and cell biology and is able to predict the spatiotemporal evolution of cell shape and receptor protein patterns observed during synapse formation. Attention is directed to how the juxtaposition of model predictions and experimental data has led to intriguing hypotheses regarding the role of null and self peptides during synapse assembly, as well as correlations between T cell effector functions and the robustness of synapse assembly. We remark on some ways in which synergistic experiments and modeling studies can improve current models, and we take steps toward a better understanding of information transfer across the T cell-APC junction.

  11. There's No Such Thing as Value-Free Science.

    ERIC Educational Resources Information Center

    Makosky, Vivian Parker

    This paper is based on the view that, although scientists rely on research values such as predictive accuracy and testability, scientific research is still subject to the unscientific values, attitudes, and emotions of the scientists. It is noted that undergraduate students are likely not to think critically about the science they encounter. A…

  12. Active processes make mixed lipid membranes either flat or crumpled

    NASA Astrophysics Data System (ADS)

    Banerjee, Tirthankar; Basu, Abhik

    2018-01-01

    Whether live cell membranes show miscibility phase transitions (MPTs), and if so, how they fluctuate near the transitions, remain outstanding unresolved issues in physics and biology alike. Motivated by these questions we construct a generic hydrodynamic theory for lipid membranes that are active, due, for instance, to molecular motors in the surrounding cytoskeleton or to active protein components in the membrane itself. We use this to uncover a direct correspondence between membrane fluctuations and MPTs. Several testable predictions are made: (i) generic active stiffening with orientational long-range order (flat membrane) or softening with crumpling of the membrane, controlled by the active tension; and (ii) for mixed lipid membranes, capturing the nature of putative MPTs by measuring the membrane conformation fluctuations. Possibilities of both first- and second-order MPTs in mixed active membranes are argued for. Near second-order MPTs, active stiffening (softening) manifests as a super-stiff (super-soft) membrane. Our predictions are testable in a variety of in vitro systems, e.g. live cytoskeletal extracts deposited on liposomes and lipid membranes containing active proteins embedded in a passive fluid.

  13. Friction law and hysteresis in granular materials

    PubMed Central

    Wyart, M.

    2017-01-01

    The macroscopic friction of particulate materials often weakens as the flow rate is increased, leading to potentially disastrous intermittent phenomena including earthquakes and landslides. We theoretically and numerically study this phenomenon in simple granular materials. We show that velocity weakening, corresponding to a nonmonotonic behavior in the friction law, μ(I), is present even if the dynamic and static microscopic friction coefficients are identical, but disappears for softer particles. We argue that this instability is induced by endogenous acoustic noise, which tends to make contacts slide, leading to faster flow and increased noise. We show that soft spots, or excitable regions in the materials, correspond to rolling contacts that are about to slide, whose density is described by a nontrivial exponent θ_s. We build a microscopic theory for the nonmonotonicity of μ(I), which also predicts the scaling behavior of acoustic noise, the fraction of sliding contacts χ, and the sliding velocity, in terms of θ_s. Surprisingly, these quantities have no limit when particles become infinitely hard, as confirmed numerically. Our analysis rationalizes previously unexplained observations and makes experimentally testable predictions. PMID:28811373

  14. Friction law and hysteresis in granular materials

    NASA Astrophysics Data System (ADS)

    DeGiuli, E.; Wyart, M.

    2017-08-01

    The macroscopic friction of particulate materials often weakens as the flow rate is increased, leading to potentially disastrous intermittent phenomena including earthquakes and landslides. We theoretically and numerically study this phenomenon in simple granular materials. We show that velocity weakening, corresponding to a nonmonotonic behavior in the friction law, μ(I), is present even if the dynamic and static microscopic friction coefficients are identical, but disappears for softer particles. We argue that this instability is induced by endogenous acoustic noise, which tends to make contacts slide, leading to faster flow and increased noise. We show that soft spots, or excitable regions in the materials, correspond to rolling contacts that are about to slide, whose density is described by a nontrivial exponent θ_s. We build a microscopic theory for the nonmonotonicity of μ(I), which also predicts the scaling behavior of acoustic noise, the fraction of sliding contacts χ, and the sliding velocity, in terms of θ_s. Surprisingly, these quantities have no limit when particles become infinitely hard, as confirmed numerically. Our analysis rationalizes previously unexplained observations and makes experimentally testable predictions.
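
    For reference, the standard monotonic friction law whose nonmonotonic modification is at issue here is commonly written

        \mu(I) = \mu_{s} + \frac{\mu_{2} - \mu_{s}}{1 + I_{0}/I}, \qquad I = \dot{\gamma}\,d\,\sqrt{\rho/P},

    with the inertial number I built from the shear rate \dot{\gamma}, grain diameter d, grain density \rho, and confining pressure P. Velocity weakening corresponds to a range of I in which d\mu/dI < 0, which is the instability analyzed above.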

  15. Conservation in the face of climate change: The roles of alternative models, monitoring, and adaptation in confronting and reducing uncertainty

    USGS Publications Warehouse

    Conroy, M.J.; Runge, M.C.; Nichols, J.D.; Stodola, K.W.; Cooper, R.J.

    2011-01-01

    The broad physical and biological principles behind climate change and its potential large-scale ecological impacts on biota are fairly well understood, but the likely responses of biotic communities at fine spatio-temporal scales are not, limiting the ability of conservation programs to respond effectively to climate change outside the range of human experience. Much of the climate debate has focused on attempts to resolve key uncertainties in a hypothesis-testing framework. However, conservation decisions cannot await resolution of these scientific issues and instead must proceed in the face of uncertainty. We suggest that conservation should proceed in an adaptive management framework, in which decisions are guided by predictions under multiple, plausible hypotheses about climate impacts. Under this plan, monitoring is used to evaluate the response of the system to climate drivers, and management actions (perhaps experimental) are used to confront testable predictions with data, in turn providing feedback for future decision making. We illustrate these principles with the problem of mitigating the effects of climate change on terrestrial bird communities in the southern Appalachian Mountains, USA. © 2010 Elsevier Ltd.

  16. Binding proteins enhance specific uptake rate by increasing the substrate-transporter encounter rate.

    PubMed

    Bosdriesz, Evert; Magnúsdóttir, Stefanía; Bruggeman, Frank J; Teusink, Bas; Molenaar, Douwe

    2015-06-01

    Microorganisms rely on binding-protein-assisted, active transport systems to scavenge for scarce nutrients. Several advantages of using binding proteins in such uptake systems have been proposed. However, a systematic, rigorous and quantitative analysis of the function of binding proteins has been lacking. By combining knowledge of selection pressures and physicochemical constraints, we derive kinetic, thermodynamic, and stoichiometric properties of binding-protein-dependent transport systems that enable a maximal import activity per amount of transporter. Under the hypothesis that this maximal specific activity of the transport complex is the selection objective, binding protein concentrations should exceed the concentration of both the scarce nutrient and the transporter. This increases the encounter rate of the transporter with loaded binding protein at low substrate concentrations, thereby enhancing the affinity and specific uptake rate. These predictions are experimentally testable, and a number of observations confirm them. © 2015 FEBS.
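
    A back-of-the-envelope version of the encounter-rate argument (a sketch under simple equilibrium-binding assumptions, not the authors' full derivation): if the scarce substrate S equilibrates rapidly with binding protein B, the loaded-binding-protein concentration is

        [SB] = \frac{[B]_{\mathrm{tot}}\,[S]}{K_{D} + [S]},

    and the import rate is roughly v \approx k_{\mathrm{enc}}\,[T]\,[SB] for transporter concentration [T]. At low [S] this reduces to v \approx (k_{\mathrm{enc}}[B]_{\mathrm{tot}}/K_{D})\,[T]\,[S], so raising [B]_{\mathrm{tot}} above both [T] and [S] raises the encounter rate of transporter with loaded binding protein and hence the apparent affinity of the uptake system, as stated above.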

  17. Unified TeV scale picture of baryogenesis and dark matter.

    PubMed

    Babu, K S; Mohapatra, R N; Nasri, Salah

    2007-04-20

    We present a simple extension of the minimal supersymmetric standard model which provides a unified picture of the cosmological baryon asymmetry and dark matter. Our model introduces a gauge singlet field N and a color triplet field X which couple to the right-handed quark fields. The out-of-equilibrium decay of the Majorana fermion N mediated by the exchange of the scalar field X generates adequate baryon asymmetry for M_N ≈ 100 GeV and M_X of order a TeV. The scalar partner of N (denoted N_1) is naturally the lightest SUSY particle as it has no gauge interactions and plays the role of dark matter. The model is experimentally testable in (i) neutron-antineutron oscillations with a transition time estimated to be around 10^10 s, (ii) discovery of colored particles X at the LHC with mass of order a TeV, and (iii) direct dark matter detection with a predicted cross section in the observable range.

  18. Brains studying brains: look before you think in vision

    NASA Astrophysics Data System (ADS)

    Zhaoping, Li

    2016-06-01

    Using our own brains to study our brains is extraordinary. For example, in vision this makes us naturally blind to our own blindness, since our impression of seeing our world clearly is consistent with our ignorance of what we do not see. Our brain employs its ‘conscious’ part to reason and make logical deductions using familiar rules and past experience. However, human vision employs many ‘subconscious’ brain parts that follow rules alien to our intuition. Our blindness to our unknown unknowns and our presumptive intuitions easily lead us astray in asking and formulating theoretical questions, as witnessed in many unexpected and counter-intuitive difficulties and failures encountered by generations of scientists. We should therefore pay a more than usual amount of attention and respect to experimental data when studying our brain. I show that this can be productive by reviewing two vision theories that have provided testable predictions and surprising insights.

  19. Sequential pattern formation governed by signaling gradients

    NASA Astrophysics Data System (ADS)

    Jörg, David J.; Oates, Andrew C.; Jülicher, Frank

    2016-10-01

    Rhythmic and sequential segmentation of the embryonic body plan is a vital developmental patterning process in all vertebrate species. However, a theoretical framework capturing the emergence of dynamic patterns of gene expression from the interplay of cell oscillations with tissue elongation and shortening and with signaling gradients is still missing. Here we show that a set of coupled genetic oscillators in an elongating tissue that is regulated by diffusing and advected signaling molecules can account for segmentation as a self-organized patterning process. This system can form a finite number of segments, and the dynamics of segmentation and the total number of segments formed depend strongly on kinetic parameters describing tissue elongation and signaling molecules. The model accounts for existing experimental perturbations to signaling gradients and makes testable predictions about novel perturbations. The variety of different patterns formed in our model can account for the variability of segmentation between different animal species.
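
    As a minimal caricature of the clock-and-wavefront picture behind such models (this toy omits the oscillator coupling, advection, and signaling-gradient dynamics of the full theory, and all parameters are invented), a row of cellular oscillators is arrested front-to-back as a wavefront passes, and the frozen phase records a striped, segment-like pattern:

      # Sequential segmentation toy: cells oscillate until a front moving at
      # speed v freezes them; the arrested phase alternates in space with
      # wavelength v * (2*pi/omega0), producing segment boundaries.
      import numpy as np

      N, omega0, v, dt = 100, 2.0 * np.pi, 10.0, 0.01   # assumed parameters
      x = np.arange(N)
      theta = np.zeros(N)

      t = 0.0
      while t < (N + 20) / v:
          active = x > v * t            # cells behind the front are frozen
          theta[active] += dt * omega0
          t += dt

      segments = np.cos(theta) > 0      # frozen phase -> alternating stripes
      boundaries = int(np.abs(np.diff(segments.astype(int))).sum())
      print(f"segment boundaries formed: {boundaries}")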

  20. Brains studying brains: look before you think in vision.

    PubMed

    Zhaoping, Li

    2016-05-11

    Using our own brains to study our brains is extraordinary. For example, in vision this makes us naturally blind to our own blindness, since our impression of seeing our world clearly is consistent with our ignorance of what we do not see. Our brain employs its 'conscious' part to reason and make logical deductions using familiar rules and past experience. However, human vision employs many 'subconscious' brain parts that follow rules alien to our intuition. Our blindness to our unknown unknowns and our presumptive intuitions easily lead us astray in asking and formulating theoretical questions, as witnessed in many unexpected and counter-intuitive difficulties and failures encountered by generations of scientists. We should therefore pay a more than usual amount of attention and respect to experimental data when studying our brain. I show that this can be productive by reviewing two vision theories that have provided testable predictions and surprising insights.

  1. A Systems Biology Approach Reveals Converging Molecular Mechanisms that Link Different POPs to Common Metabolic Diseases.

    PubMed

    Ruiz, Patricia; Perlina, Ally; Mumtaz, Moiz; Fowler, Bruce A

    2016-07-01

    A number of epidemiological studies have identified statistical associations between persistent organic pollutants (POPs) and metabolic diseases, but testable hypotheses regarding underlying molecular mechanisms to explain these linkages have not been published. We assessed the underlying mechanisms of POPs that have been associated with metabolic diseases; three well-known POPs [2,3,7,8-tetrachlorodibenzodioxin (TCDD), 2,2´,4,4´,5,5´-hexachlorobiphenyl (PCB 153), and 4,4´-dichlorodiphenyldichloroethylene (p,p´-DDE)] were studied. We used advanced database search tools to delineate testable hypotheses and to guide laboratory-based research studies into underlying mechanisms by which this POP mixture could produce or exacerbate metabolic diseases. For our searches, we used proprietary systems biology software (MetaCore™/MetaDrug™) to conduct advanced search queries for the underlying interactions database, followed by directional network construction to identify common mechanisms for these POPs within two or fewer interaction steps downstream of their primary targets. These common downstream pathways belong to various cytokine and chemokine families with experimentally well-documented causal associations with type 2 diabetes. Our systems biology approach allowed identification of converging pathways leading to activation of common downstream targets. To our knowledge, this is the first study to propose an integrated global set of step-by-step molecular mechanisms for a combination of three common POPs using a systems biology approach, which may link POP exposure to diseases. Experimental evaluation of the proposed pathways may lead to development of predictive biomarkers of the effects of POPs, which could translate into disease prevention and effective clinical treatment strategies. Ruiz P, Perlina A, Mumtaz M, Fowler BA. 2016. A systems biology approach reveals converging molecular mechanisms that link different POPs to common metabolic diseases. Environ Health Perspect 124:1034-1041; http://dx.doi.org/10.1289/ehp.1510308.

  2. Restructurable VLSI Program

    DTIC Science & Technology

    1981-03-31

    logic testing element and a concomitant testability criterion ideally suited to dynamic circuit applications and appropriate for automatic computer...making connections automatically. PF is an experimental feature which provides users with only four different chip sizes (full, half, quarter, and eighth...initial solution is found constructively which is improved by pair-wise swapping. Results show, however, that the constructive initial sorter, which

  3. Delay test generation for synchronous sequential circuits

    NASA Astrophysics Data System (ADS)

    Devadas, Srinivas

    1989-05-01

    We address the problem of generating tests for delay faults in non-scan synchronous sequential circuits. Delay test generation for sequential circuits is a considerably more difficult problem than delay testing of combinational circuits and has received much less attention. In this paper, we present a method for generating test sequences to detect delay faults in sequential circuits using the stuck-at fault sequential test generator STALLION. The method is complete in that it will generate a delay test sequence for a targeted fault given sufficient CPU time, if such a sequence exists. We term faults for which no delay test sequence exists, under our test methodology, sequentially delay redundant. We describe means of eliminating sequential delay redundancies in logic circuits. We present a partial-scan methodology for enhancing the testability of difficult-to-test or untestable sequential circuits, wherein a small number of flip-flops are selected and made controllable/observable. The selection process guarantees the elimination of all sequential delay redundancies. We show that an intimate relationship exists between state assignment and delay testability of a sequential machine. We describe a state assignment algorithm for the synthesis of sequential machines with maximal delay fault testability. Preliminary experimental results using the test generation, partial-scan and synthesis algorithms are presented.

  4. Newtonian semiclassical gravity in three ontological quantum theories that solve the measurement problem: Formalisms and empirical predictions

    NASA Astrophysics Data System (ADS)

    Derakhshani, Maaneli

    In this thesis, we consider the implications of solving the quantum measurement problem for the Newtonian description of semiclassical gravity. First, we review the formalism of the Newtonian description of semiclassical gravity based on standard quantum mechanics, the Schroedinger-Newton theory, and two well-established predictions that come out of it, namely, gravitational 'cat states' and gravitationally-induced wavepacket collapse. Then we review three quantum theories with 'primitive ontologies' that are well known to solve the measurement problem: Schroedinger's many-worlds theory, the GRW collapse theory with matter density ontology, and Nelson's stochastic mechanics. We extend the formalisms of these three quantum theories to Newtonian models of semiclassical gravity and evaluate their implications for gravitational cat states and gravitational wavepacket collapse. We find that (1) Newtonian semiclassical gravity based on Schroedinger's many-worlds theory is mathematically equivalent to the Schroedinger-Newton theory and makes the same predictions; (2) Newtonian semiclassical gravity based on the GRW theory differs from Schroedinger-Newton only in the use of a stochastic collapse law, but this law allows it to suppress gravitational cat states so as not to be in contradiction with experiment, while still allowing gravitational wavepacket collapse; (3) Newtonian semiclassical gravity based on Nelson's stochastic mechanics differs significantly from Schroedinger-Newton and predicts neither gravitational cat states nor gravitational wavepacket collapse. Considering that gravitational cat states are experimentally ruled out but gravitational wavepacket collapse is testable in the near future, only the latter two are viable theories of Newtonian semiclassical gravity, and they can be tested against each other in future molecular interferometry experiments that are anticipated to be capable of probing the gravitational wavepacket collapse prediction.
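
    For reference, the single-particle Schroedinger-Newton equations reviewed in the thesis take the standard form

        i\hbar\,\partial_{t}\psi = \left[-\frac{\hbar^{2}}{2m}\nabla^{2} + V + m\Phi\right]\psi, \qquad \nabla^{2}\Phi = 4\pi G\,m\,|\psi|^{2},

    in which the wave function sources its own Newtonian potential \Phi. Sourcing gravity by |\psi|^{2} is what generates the two signature predictions discussed above: gravitational self-attraction (wavepacket narrowing and collapse) and gravitational interaction between the branches of a superposition ('cat states').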

  5. A Novel Computational Model Predicts Key Regulators of Chemokine Gradient Formation in Lymph Nodes and Site-Specific Roles for CCL19 and ACKR4

    PubMed Central

    Brook, Bindi S.

    2017-01-01

    The chemokine receptor CCR7 drives leukocyte migration into and within lymph nodes (LNs). It is activated by chemokines CCL19 and CCL21, which are scavenged by the atypical chemokine receptor ACKR4. CCR7-dependent navigation is determined by the distribution of extracellular CCL19 and CCL21, which form concentration gradients at specific microanatomical locations. The mechanisms underpinning the establishment and regulation of these gradients are poorly understood. In this article, we have incorporated multiple biochemical processes describing the CCL19–CCL21–CCR7–ACKR4 network into our model of LN fluid flow to establish a computational model to investigate intranodal chemokine gradients. Importantly, the model recapitulates CCL21 gradients observed experimentally in B cell follicles and interfollicular regions, building confidence in its ability to accurately predict intranodal chemokine distribution. Parameter variation analysis indicates that the directionality of these gradients is robust, but their magnitude is sensitive to these key parameters: chemokine production, diffusivity, matrix binding site availability, and CCR7 abundance. The model indicates that lymph flow shapes intranodal CCL21 gradients, and that CCL19 is functionally important at the boundary between B cell follicles and the T cell area. It also predicts that ACKR4 in LNs prevents CCL19/CCL21 accumulation in efferent lymph, but does not control intranodal gradients. Instead, it attributes the disrupted interfollicular CCL21 gradients observed in Ackr4-deficient LNs to ACKR4 loss upstream. Our novel approach has therefore generated new testable hypotheses and alternative interpretations of experimental data. Moreover, it acts as a framework to investigate gradients at other locations, including those that cannot be visualized experimentally or involve other chemokines. PMID:28807994
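
    A one-dimensional toy of the gradient-forming ingredients in such models (secretion, diffusion, and first-order scavenging only; the parameter values are invented, and the model above additionally couples chemokine transport to lymph flow, matrix binding, and receptor scavenging):

      # Secretion at x = 0, diffusion, and uniform first-order scavenging
      # give a steady exponential gradient C(x) ~ exp(-x/l), l = sqrt(D/k).
      import numpy as np

      D, k, q = 100.0, 0.01, 1.0        # um^2/s, 1/s, source flux (assumed)
      Lx, nx = 500.0, 200               # domain length (um), grid points
      dx = Lx / nx
      dt = 0.2 * dx * dx / D            # stable explicit time step
      C = np.zeros(nx)

      for _ in range(50000):            # run well past the ~1/k relaxation time
          lap = np.zeros(nx)
          lap[1:-1] = (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2
          C += dt * (D * lap - k * C)
          C[0] = C[1] + q * dx / D      # constant secretion flux at x = 0
          C[-1] = C[-2]                 # no-flux far boundary

      ell = np.sqrt(D / k)              # predicted decay length, 100 um here
      print(f"C(0)/C(l) = {C[0] / C[int(ell / dx)]:.2f}  (expect ~ 2.7)")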

  6. Identifying interactions in the time and frequency domains in local and global networks - A Granger Causality Approach.

    PubMed

    Zou, Cunlu; Ladroue, Christophe; Guo, Shuixia; Feng, Jianfeng

    2010-06-21

    Reverse-engineering approaches such as Bayesian network inference, ordinary differential equations (ODEs), information theory, and Granger causality are widely applied to derive causal relationships among elements such as genes, proteins, metabolites, neurons, and brain areas from multi-dimensional spatial and temporal data. Here we focused on Granger causality, both in the time and frequency domains and in local and global networks, and applied our approach to experimental data (genes and proteins). For a small gene network, Granger causality outperformed the other three approaches. A global protein network of 812 proteins was reconstructed using a novel approach. The obtained results fitted well with known experimental findings and predicted many experimentally testable results. In addition to interactions in the time domain, interactions in the frequency domain were also recovered. The results on the proteomic and gene data confirm that Granger causality is a simple and accurate approach for recovering network structure. Our approach is general and can easily be applied to other types of temporal data.
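
    The core time-domain test can be sketched compactly: y is said to Granger-cause x if adding lagged values of y significantly reduces the residual variance of an autoregressive model of x. A minimal illustration using plain least squares and an F-statistic (the toy data are invented, and the paper's frequency-domain and network extensions are not shown):

      import numpy as np

      def granger_F(x, y, p=2):
          """F-statistic for 'lagged y helps predict x' with AR order p."""
          T = len(x)
          lags = lambda s: np.column_stack([s[p - k - 1:T - k - 1] for k in range(p)])
          Xr = np.column_stack([np.ones(T - p), lags(x)])   # restricted model: x lags only
          Xf = np.column_stack([Xr, lags(y)])               # full model: x and y lags
          t = x[p:]
          rss = lambda X: np.sum((t - X @ np.linalg.lstsq(X, t, rcond=None)[0])**2)
          df = (T - p) - Xf.shape[1]                        # residual degrees of freedom
          return ((rss(Xr) - rss(Xf)) / p) / (rss(Xf) / df)

      rng = np.random.default_rng(0)
      y = rng.standard_normal(500)
      x = np.zeros(500)
      for t in range(1, 500):                               # y drives x with lag 1
          x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.standard_normal()
      print(granger_F(x, y), granger_F(y, x))               # large F one way, near 1 the other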

  7. Domain fusion analysis by applying relational algebra to protein sequence and domain databases.

    PubMed

    Truong, Kevin; Ikura, Mitsuhiko

    2003-05-06

    Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, and TIGRFAMs, and amalgamated domain databases like InterPro, continue to grow in size and quality, a computational method for domain fusion analysis that leverages these efforts will become increasingly powerful. This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From the scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at http://calcium.uhnres.utoronto.ca/pi. As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time.
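
    The relational formulation can be sketched with a self-join: a protein carrying two domains fused together predicts a functional link between proteins that carry those domains separately in the same organism. A toy illustration in SQL via Python's sqlite3 (the schema, table names, and data are invented, not those of the paper):

      import sqlite3

      con = sqlite3.connect(":memory:")
      con.executescript("""
      CREATE TABLE domains (protein TEXT, domain TEXT, organism TEXT);
      -- toy data: protein AB fuses domains that occur separately in A1 and B1
      INSERT INTO domains VALUES
       ('AB','X','E. coli'), ('AB','Y','E. coli'),
       ('A1','X','S. cerevisiae'), ('B1','Y','S. cerevisiae');
      """)

      query = """
      SELECT DISTINCT s1.protein AS partner1, s2.protein AS partner2
      FROM domains f1
      JOIN domains f2 ON f1.protein = f2.protein AND f1.domain < f2.domain
      JOIN domains s1 ON s1.domain = f1.domain AND s1.protein <> f1.protein
      JOIN domains s2 ON s2.domain = f2.domain AND s2.protein <> f2.protein
                      AND s1.organism = s2.organism AND s1.protein <> s2.protein
      """
      for row in con.execute(query):
          print(row)   # ('A1', 'B1'): predicted functional linkage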

  8. Japanese earthquake predictability experiment with multiple runs before and after the 2011 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Tsuruoka, H.; Yokoi, S.

    2011-12-01

    The current Japanese national earthquake prediction program emphasizes the importance of modeling as well as monitoring for a sound scientific development of earthquake prediction research. One major focus of the current program is to move toward creating testable earthquake forecast models. For this purpose, in 2009 we joined the Collaboratory for the Study of Earthquake Predictability (CSEP) and installed, through an international collaboration, the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan. We started the Japanese earthquake predictability experiment on November 1, 2009. The experiment consists of 12 categories: 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called 'All Japan,' 'Mainland,' and 'Kanto.' A total of 160 models, as of August 2013, had been submitted and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. We will present results of prospective forecasting and testing for periods before and after the 2011 Tohoku-oki earthquake. Because seismic activity changed dramatically after the 2011 event, model performance was strongly affected. In addition, because of issues with the completeness magnitude of the authorized catalogue, most models did not pass the CSEP consistency tests. We will also discuss retrospective earthquake forecast experiments for aftershocks of the 2011 Tohoku-oki earthquake. Our aim is to describe what has turned out to be the first occasion for setting up a research environment for rigorous earthquake forecasting in Japan.

  9. Japanese earthquake predictability experiment with multiple runs before and after the 2011 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Tsuruoka, H.; Yokoi, S.

    2013-12-01

    The current Japanese national earthquake prediction program emphasizes the importance of modeling as well as monitoring for a sound scientific development of earthquake prediction research. One major focus of the current program is to move toward creating testable earthquake forecast models. For this purpose, in 2009 we joined the Collaboratory for the Study of Earthquake Predictability (CSEP) and installed, through an international collaboration, the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan. We started the Japanese earthquake predictability experiment on November 1, 2009. The experiment consists of 12 categories: 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called 'All Japan,' 'Mainland,' and 'Kanto.' A total of 160 models, as of August 2013, had been submitted and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. We will present results of prospective forecasting and testing for periods before and after the 2011 Tohoku-oki earthquake. Because seismic activity changed dramatically after the 2011 event, model performance was strongly affected. In addition, because of issues with the completeness magnitude of the authorized catalogue, most models did not pass the CSEP consistency tests. We will also discuss retrospective earthquake forecast experiments for aftershocks of the 2011 Tohoku-oki earthquake. Our aim is to describe what has turned out to be the first occasion for setting up a research environment for rigorous earthquake forecasting in Japan.
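
    One of the CSEP consistency checks mentioned here, the number test (N-test), can be sketched in a few lines: it asks whether the observed count of target earthquakes is plausible under the forecast's expected count, assuming a Poisson distribution. The rates below are invented for illustration:

      from math import exp, factorial

      def poisson_cdf(k, mu):
          return sum(mu**i * exp(-mu) / factorial(i) for i in range(k + 1))

      forecast_rate = 12.0   # expected events in the testing period (assumed)
      observed = 21          # events actually recorded (assumed)

      # Two one-sided tail probabilities; a very small value in either tail
      # means the model under- or over-predicts the event count.
      delta1 = 1.0 - poisson_cdf(observed - 1, forecast_rate)  # P(N >= observed)
      delta2 = poisson_cdf(observed, forecast_rate)            # P(N <= observed)
      print(delta1, delta2)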

  10. Mechanics of undulatory swimming in a frictional fluid.

    PubMed

    Ding, Yang; Sharpe, Sarah S; Masse, Andrew; Goldman, Daniel I

    2012-01-01

    The sandfish lizard (Scincus scincus) swims within granular media (sand) using axial body undulations to propel itself without the use of limbs. In previous work we predicted average swimming speed by developing a numerical simulation that incorporated experimentally measured biological kinematics into a multibody sandfish model. The model was coupled to an experimentally validated soft sphere discrete element method simulation of the granular medium. In this paper, we use the simulation to study the detailed mechanics of undulatory swimming in a "granular frictional fluid" and compare the predictions to our previously developed resistive force theory (RFT) which models sand-swimming using empirically determined granular drag laws. The simulation reveals that the forward speed of the center of mass (CoM) oscillates about its average speed in antiphase with head drag. The coupling between overall body motion and body deformation results in a non-trivial pattern in the magnitude of lateral displacement of the segments along the body. The actuator torque and segment power are maximal near the center of the body and decrease to zero toward the head and the tail. Approximately 30% of the net swimming power is dissipated in head drag. The power consumption is proportional to the frequency in the biologically relevant range, which confirms that frictional forces dominate during sand-swimming by the sandfish. Comparison of the segmental forces measured in simulation with the force on a laterally oscillating rod reveals that a granular hysteresis effect causes the overestimation of the body thrust forces in the RFT. Our models provide detailed testable predictions for biological locomotion in a granular environment.

  11. Mechanics of Undulatory Swimming in a Frictional Fluid

    PubMed Central

    Ding, Yang; Sharpe, Sarah S.; Masse, Andrew; Goldman, Daniel I.

    2012-01-01

    The sandfish lizard (Scincus scincus) swims within granular media (sand) using axial body undulations to propel itself without the use of limbs. In previous work we predicted average swimming speed by developing a numerical simulation that incorporated experimentally measured biological kinematics into a multibody sandfish model. The model was coupled to an experimentally validated soft sphere discrete element method simulation of the granular medium. In this paper, we use the simulation to study the detailed mechanics of undulatory swimming in a “granular frictional fluid” and compare the predictions to our previously developed resistive force theory (RFT) which models sand-swimming using empirically determined granular drag laws. The simulation reveals that the forward speed of the center of mass (CoM) oscillates about its average speed in antiphase with head drag. The coupling between overall body motion and body deformation results in a non-trivial pattern in the magnitude of lateral displacement of the segments along the body. The actuator torque and segment power are maximal near the center of the body and decrease to zero toward the head and the tail. Approximately 30% of the net swimming power is dissipated in head drag. The power consumption is proportional to the frequency in the biologically relevant range, which confirms that frictional forces dominate during sand-swimming by the sandfish. Comparison of the segmental forces measured in simulation with the force on a laterally oscillating rod reveals that a granular hysteresis effect causes the overestimation of the body thrust forces in the RFT. Our models provide detailed testable predictions for biological locomotion in a granular environment. PMID:23300407

  12. Elevated carbon dioxide is predicted to promote coexistence among competing species in a trait-based model

    DOE PAGES

    Ali, Ashehad A.; Medlyn, Belinda E.; Aubier, Thomas G.; ...

    2015-10-06

    Differential species responses to atmospheric CO2 concentration (Ca) could lead to quantitative changes in competition among species and community composition, with flow-on effects for ecosystem function. However, there has been little theoretical analysis of how elevated Ca (eCa) will affect plant competition, or how the composition of plant communities might change. Such theoretical analysis is needed for developing testable hypotheses to frame experimental research. Here, we investigated theoretically how plant competition might change under eCa by implementing two alternative competition theories, resource use theory and resource capture theory, in a plant carbon and nitrogen cycling model. The model makes several novel predictions for the impact of eCa on plant community composition. Using resource use theory, the model predicts that eCa is unlikely to change species dominance in competition, but is likely to increase coexistence among species. Using resource capture theory, the model predicts that eCa may increase community evenness. Collectively, both theories suggest that eCa will favor coexistence and hence that species diversity should increase with eCa. Our theoretical analysis leads to a novel hypothesis for the impact of eCa on plant community composition. In this study, the hypothesis has potential to help guide the design and interpretation of eCa experiments.
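
    Resource use theory rests on the R* rule: at equilibrium, the species that can persist at the lowest resource level excludes its competitors. A minimal sketch of that rule for two species on one shared resource with Monod growth (all parameter values invented; the paper's full carbon-nitrogen model is far richer):

      import numpy as np

      mu = np.array([1.0, 0.8])    # maximum growth rates
      K  = np.array([0.5, 0.2])    # half-saturation constants
      m  = np.array([0.1, 0.1])    # mortality rates
      c  = np.array([1.0, 1.0])    # resource consumption per unit growth
      S, a = 10.0, 0.5             # resource supply level and turnover

      N, R, dt = np.array([0.1, 0.1]), S, 0.01
      for _ in range(200_000):                  # integrate to equilibrium
          f = mu * R / (K + R)                  # per-capita growth
          N += dt * N * (f - m)
          R += dt * (a * (S - R) - np.sum(c * f * N))

      print("R* =", m * K / (mu - m))           # lower R* predicts the winner
      print("final densities:", N, "resource:", R)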

  13. Electrical test prediction using hybrid metrology and machine learning

    NASA Astrophysics Data System (ADS)

    Breton, Mary; Chao, Robin; Muthinti, Gangadhara Raja; de la Peña, Abraham A.; Simon, Jacques; Cepler, Aron J.; Sendelbach, Matthew; Gaudiello, John; Emans, Susan; Shifrin, Michael; Etzioni, Yoav; Urenski, Ronen; Lee, Wei Ti

    2017-03-01

    Electrical test measurement in the back end of line (BEOL) is crucial for wafer and die sorting as well as for comparing intended process splits. Any in-line, nondestructive technique in the process flow that can accurately predict these measurements can significantly improve the mean time to detect (MTTD) defects and improve cycle times for yield and process learning. Measuring after BEOL metallization is commonly done for process control and learning, particularly with scatterometry (also called optical critical dimension, or OCD), which can solve for multiple profile parameters such as metal line height or sidewall angle, and does so within patterned regions. This gives scatterometry an advantage over inline microscopy-based techniques, which provide top-down information and can be insensitive to sidewall variations hidden under the metal fill of the trench. But when faced with correlation to electrical test measurements that are specific to the BEOL processing, both techniques face the additional challenge of sampling. Microscopy-based techniques are sampling-limited by their small probe size, while scatterometry is traditionally limited (for microprocessors) to scribe targets that mimic device ground rules but are not necessarily designed to be electrically testable. A solution to this sampling challenge lies in a fast reference-based machine learning capability that allows OCD measurement directly of the electrically testable structures, even when they are not OCD-compatible. By incorporating such direct OCD measurements, correlation to, and therefore prediction of, the resistance of BEOL electrical test structures is significantly improved. Improvements in prediction capability for multiple types of in-die electrically testable device structures are demonstrated. To further improve the quality of the prediction of the electrical resistance measurements, hybrid metrology using the OCD measurements together with X-ray metrology (XRF) is used. Hybrid metrology is the practice of combining information from multiple sources in order to enable or improve the measurement of one or more critical parameters. Here, the XRF measurements are used to detect subtle changes in barrier layer composition and thickness that can have second-order effects on the electrical resistance of the test structures. By accounting for such effects with the aid of the X-ray-based measurements, further improvement in the OCD correlation to electrical test measurements is achieved. Using both types of solution (fast reference-based machine learning on non-OCD-compatible test structures, and hybrid metrology combining OCD with XRF), improvement in BEOL cycle time learning can be accomplished through improved prediction capability.
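
    The prediction step can be pictured as a regression of measured resistance on the combined OCD and XRF features; the sketch below uses ordinary least squares on synthetic data (all feature names, values, and the resistance relation are invented; the paper's machine-learning model is more sophisticated):

      import numpy as np

      rng = np.random.default_rng(1)
      n = 120
      cd = rng.normal(20, 1, n)                 # OCD line width (nm), assumed
      height = rng.normal(80, 3, n)             # OCD line height (nm), assumed
      barrier = rng.normal(2.5, 0.1, n)         # XRF barrier thickness (nm), assumed

      # Synthetic resistance: shrinks with conductor cross-section, grows with the
      # barrier's share of the trench (the second-order effect noted above).
      R = 1e3 / (cd * height) + 5.0 * barrier + rng.normal(0, 0.05, n)

      X = np.column_stack([np.ones(n), 1.0 / (cd * height), barrier])
      coef, *_ = np.linalg.lstsq(X, R, rcond=None)
      pred = X @ coef
      print("R^2 =", 1 - np.sum((R - pred)**2) / np.sum((R - R.mean())**2))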

  14. Perspective: researching the transition from non-living to the first microorganisms: methods and experiments are major challenges.

    PubMed

    Trevors, J T

    2010-06-01

    Methods to research the origin of microbial life are limited. However, microorganisms were the first organisms on the Earth capable of cell growth and division, and of interactions with their environment, other microbial cells, and eventually diverse eukaryotic organisms. The origin of microbial life and the supporting scientific evidence are both an enigma and a scientific priority. Numerous hypotheses have been proposed, scenarios imagined, speculations presented in papers, insights shared, and assumptions made without supporting experimentation, which has led to limited progress in understanding the origin of microbial life. The use of the human imagination to envision origin-of-life events, without the supporting experimentation, observation, and independently replicated experiments required for science, is a significant constraint. The challenge remains how to better understand the origin of microbial life using observations and experimental methods as opposed to speculation, assumptions, scenarios, envisioned events, and untestable hypotheses. This is not an easy challenge, as experimental design and plausible hypothesis testing are difficult. Since past approaches have been inconclusive in providing evidence for the mechanisms of the origin of microbial life and the manner in which genetic instructions were encoded into DNA/RNA, it is reasonable and logical to propose that progress will be made when testable, plausible hypotheses and methods are used in origin-of-microbial-life research, and the experimental observations are, or are not, reproduced in independent laboratories. These perspectives are discussed in this article, as well as the possibility that a pre-biotic film preceded a microbial biofilm as a possible micro-location for the origin of microbial cells capable of growth and division.

  15. Dark matter, proton decay and other phenomenological constraints in F-SU(5)

    NASA Astrophysics Data System (ADS)

    Li, Tianjun; Maxin, James A.; Nanopoulos, Dimitri V.; Walker, Joel W.

    2011-07-01

    We study gravity mediated supersymmetry breaking in F-SU(5) and its low-energy supersymmetric phenomenology. The gaugino masses are not unified at the traditional grand unification scale, but we nonetheless have the same one-loop gaugino mass relation at the electroweak scale as minimal supergravity (mSUGRA). We introduce parameters testable at the colliders to measure the small second loop deviation from the mSUGRA gaugino mass relation at the electroweak scale. In the minimal SU(5) model with gravity mediated supersymmetry breaking, we show that the deviations from the mSUGRA gaugino mass relations are within 5%. However, in F-SU(5), we predict the deviations from the mSUGRA gaugino mass relations to be larger due to the presence of vector-like particles, which can be tested at the colliders. We determine the viable parameter space that satisfies all the latest experimental constraints and find it is consistent with the CDMS II experiment. Further, we compute the cross-sections of neutralino annihilations into gamma-rays and compare to the first published Fermi-LAT measurement. Finally, the corresponding range of proton lifetime predictions is calculated and found to be within reach of the future Hyper-Kamiokande and DUSEL experiments.
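
    The mSUGRA gaugino mass relation referred to here is, at one loop, a standard result (quoted for orientation rather than from the paper): the ratio of each gaugino mass to its gauge coupling is renormalization-group invariant, so unified gaugino masses at the GUT scale imply

      \frac{M_1(\mu)}{\alpha_1(\mu)} = \frac{M_2(\mu)}{\alpha_2(\mu)} = \frac{M_3(\mu)}{\alpha_3(\mu)}
      \quad\Longrightarrow\quad
      M_1 : M_2 : M_3 \simeq \alpha_1 : \alpha_2 : \alpha_3 \;(\text{roughly } 1 : 2 : 6 \text{ at the electroweak scale})

    The parameters introduced in the paper quantify small departures from these ratios, which are enhanced in F-SU(5) by the vector-like particles.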

  16. Taking Bioinformatics to Systems Medicine.

    PubMed

    van Kampen, Antoine H C; Moerland, Perry D

    2016-01-01

    Systems medicine promotes a range of approaches and strategies to study human health and disease at a systems level with the aim of improving the overall well-being of (healthy) individuals, and preventing, diagnosing, or curing disease. In this chapter we discuss how bioinformatics critically contributes to systems medicine. First, we explain the role of bioinformatics in the management and analysis of data. In particular we show the importance of publicly available biological and clinical repositories to support systems medicine studies. Second, we discuss how the integration and analysis of multiple types of omics data through integrative bioinformatics may facilitate the determination of more predictive and robust disease signatures, lead to a better understanding of (patho)physiological molecular mechanisms, and facilitate personalized medicine. Third, we focus on network analysis and discuss how gene networks can be constructed from omics data and how these networks can be decomposed into smaller modules. We discuss how the resulting modules can be used to generate experimentally testable hypotheses, provide insight into disease mechanisms, and lead to predictive models. Throughout, we provide several examples demonstrating how bioinformatics contributes to systems medicine and discuss future challenges in bioinformatics that need to be addressed to enable the advancement of systems medicine.
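
    The module-decomposition step described here can be sketched with standard community detection on a toy gene network (gene names and edges are invented; greedy modularity optimization stands in for whatever decomposition method a given study uses):

      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      G = nx.Graph()
      G.add_edges_from([
          ("TP53", "MDM2"), ("TP53", "CDKN1A"), ("MDM2", "CDKN1A"),   # module 1
          ("IL6", "STAT3"), ("STAT3", "SOCS3"), ("IL6", "SOCS3"),     # module 2
          ("CDKN1A", "STAT3"),                                        # weak bridge
      ])

      for i, module in enumerate(greedy_modularity_communities(G)):
          print(f"module {i}: {sorted(module)}")   # candidate units for hypothesis generation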

  17. The Long and Viscous Road: Uncovering Nuclear Diffusion Barriers in Closed Mitosis

    PubMed Central

    Zavala, Eder; Marquez-Lago, Tatiana T.

    2014-01-01

    Diffusion barriers are effective means of constraining protein lateral exchange in cellular membranes. In Saccharomyces cerevisiae, they have been shown to sustain parental identity through asymmetric segregation of ageing factors during closed mitosis. Even though barriers have been extensively studied in the plasma membrane, their identity and organization within the nucleus remain poorly understood. Based on different lines of experimental evidence, we present a model of the composition and structural organization of a nuclear diffusion barrier during anaphase. By means of spatial stochastic simulations, we propose how specialised lipid domains, protein rings, and morphological changes of the nucleus may coordinate to restrict protein exchange between mother and daughter nuclear lobes. We explore distinct, plausible configurations of these diffusion barriers and offer testable predictions regarding their protein exclusion properties and the diffusion regimes they generate. Our model predicts that, while a specialised lipid domain and an immobile protein ring at the bud neck can compartmentalize the nucleus during early anaphase, a specialised lipid domain spanning the elongated bridge between lobes would be entirely sufficient during late anaphase. Our work shows how complex nuclear diffusion barriers in closed mitosis may arise from simple nanoscale biophysical interactions. PMID:25032937

  18. Predictive Place-Cell Sequences for Goal-Finding Emerge from Goal Memory and the Cognitive Map: A Computational Model

    PubMed Central

    Gönner, Lorenz; Vitay, Julien; Hamker, Fred H.

    2017-01-01

    Hippocampal place-cell sequences observed during awake immobility often represent previous experience, suggesting a role in memory processes. However, recent reports of goals being overrepresented in sequential activity suggest a role in short-term planning, although a detailed understanding of the origins of hippocampal sequential activity and of its functional role is still lacking. In particular, it is unknown which mechanism could support efficient planning by generating place-cell sequences biased toward known goal locations, in an adaptive and constructive fashion. To address these questions, we propose a model of spatial learning and sequence generation as interdependent processes, integrating cortical contextual coding, synaptic plasticity and neuromodulatory mechanisms into a map-based approach. Following goal learning, sequential activity emerges from continuous attractor network dynamics biased by goal memory inputs. We apply Bayesian decoding on the resulting spike trains, allowing a direct comparison with experimental data. Simulations show that this model (1) explains the generation of never-experienced sequence trajectories in familiar environments, without requiring virtual self-motion signals, (2) accounts for the bias in place-cell sequences toward goal locations, (3) highlights their utility in flexible route planning, and (4) provides specific testable predictions. PMID:29075187
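
    The Bayesian decoding step used for model-experiment comparison can be sketched directly: given Gaussian place-field tuning curves and spike counts in a short window, the posterior over position follows from a Poisson likelihood and a flat prior. Everything below (tuning curves, counts, window length) is invented for illustration:

      import numpy as np

      bins = np.linspace(0, 100, 50)                       # position bins (cm)
      centers = np.array([20.0, 50.0, 80.0])               # place-field centers (cm)
      rates = 10.0 * np.exp(-(bins[None] - centers[:, None])**2 / (2 * 8.0**2))
      rates += 0.1                                         # baseline firing rate (Hz)

      tau = 0.02                                           # decoding window (s)
      counts = np.array([0, 3, 1])                         # observed spikes per cell

      # log P(pos | counts) ~ sum_i [ n_i * log(r_i(x)*tau) - r_i(x)*tau ]
      logpost = np.sum(counts[:, None] * np.log(rates * tau) - rates * tau, axis=0)
      print("decoded position:", bins[np.argmax(logpost)])  # near the most active cell's field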

  19. Lattice of quantum predictions

    NASA Astrophysics Data System (ADS)

    Drieschner, Michael

    1993-10-01

    What is the structure of reality? Physics is supposed to answer this question, but a purely empiricist view is not sufficient to explain its ability to do so. Quantum mechanics has forced us to think more deeply about what a physical theory is. There are preconditions every physical theory must fulfill. It has to contain, e.g., rules for empirically testable predictions. Those preconditions give physics a structure that is “a priori” in the Kantian sense. An example is given of how the lattice structure of quantum mechanics can be understood along these lines.

  20. Architectural Analysis of Dynamically Reconfigurable Systems

    NASA Technical Reports Server (NTRS)

    Lindvall, Mikael; Godfrey, Sally; Ackermann, Chris; Ray, Arnab; Yonkwa, Lyly

    2010-01-01

    Topics include: the problem (increased flexibility of architectural styles decreases analyzability, behavior emerges and varies depending on the configuration, does the resulting system run according to the intended design, and architectural decisions can impede or facilitate testing); a top-down approach to architecture analysis, detection of defects and deviations, and architecture and its testability; currently targeted projects GMSEC and CFS; analyzing software architectures; analyzing runtime events; actual architecture recognition; GMPUB in Dynamic SAVE; sample output from the new approach; taking message timing delays into account; CFS examples of architecture and testability; some recommendations for improved testability; CFS examples of abstract interfaces and testability; and a CFS example of opening some internal details.

  1. Retrieval as a Fast Route to Memory Consolidation.

    PubMed

    Antony, James W; Ferreira, Catarina S; Norman, Kenneth A; Wimber, Maria

    2017-08-01

    Retrieval-mediated learning is a powerful way to make memories last, but its neurocognitive mechanisms remain unclear. We propose that retrieval acts as a rapid consolidation event, supporting the creation of adaptive hippocampal-neocortical representations via the 'online' reactivation of associative information. We describe parallels between online retrieval and offline consolidation and offer testable predictions for future research.

  2. Developing Tools to Test the Thermo-Mechanical Models, Examples at Crustal and Upper Mantle Scale

    NASA Astrophysics Data System (ADS)

    Le Pourhiet, L.; Yamato, P.; Burov, E.; Gurnis, M.

    2005-12-01

    Testing geodynamical models is never an easy task. Depending on the spatio-temporal scale of the model, different testable predictions are needed, and no magic recipe exists. This contribution first presents different methods that have been used to test thermo-mechanical modeling results at upper crustal, lithospheric, and upper mantle scales, using three geodynamical examples: the Gulf of Corinth (Greece), the Western Alps, and the Sierra Nevada. At short spatio-temporal scales (e.g., the Gulf of Corinth), the resolution of the numerical models is usually sufficient to capture the timing and kinematics of the faults precisely enough to be tested by tectono-stratigraphic arguments. In actively deforming areas, microseismicity can be compared to the effective rheology, and the P and T axes of focal mechanisms can be compared with the local orientation of the major components of the stress tensor. At lithospheric scale, the resolution of the models no longer permits constraining them by direct observations (i.e., structural data from the field or seismic reflection). Instead, synthetic P-T-t paths may be computed and compared to natural ones in terms of exhumation rates for ancient orogens. Topography may also help, but on continents it mainly depends on erosion laws that are difficult to constrain. Deeper in the mantle, the only available constraints are long-wavelength topographic data and tomographic "data". The major problem to overcome at lithospheric and upper mantle scales is that the so-called "data" actually result from inverse models of the real data, and those inverse models are themselves based on synthetic models. Post-processing P and S wave velocities is not sufficient to make testable predictions at upper mantle scale. Instead, direct wave propagation models must be computed. This allows checking whether the differences between two models constitute a testable prediction or not. In the longer term, we may be able to use these synthetic models to reduce the residual in the inversion of elastic wave arrival times.

  3. Boolean Minimization and Algebraic Factorization Procedures for Fully Testable Sequential Machines

    DTIC Science & Technology

    1989-09-01

    Boolean Minimization and Algebraic Factorization Procedures for Fully Testable Sequential Machines, by Srinivas Devadas and Kurt Keutzer. This research was supported in part by the Defense Advanced Research Projects Agency under contract number N00014-87-K-0825. Author information: Devadas, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139; (617) 253-0292.

  4. Advanced Diagnostic and Prognostic Testbed (ADAPT) Testability Analysis Report

    NASA Technical Reports Server (NTRS)

    Ossenfort, John

    2008-01-01

    As system designs become more complex, determining the best locations to add sensors and test points for the purpose of testing and monitoring these designs becomes more difficult. Not only must the designer take into consideration all real and potential faults of the system, he or she must also find efficient ways of detecting and isolating those faults. Because sensors and cabling take up valuable space and weight on a system, and given constraints on bandwidth and power, it is even more difficult to add sensors into these complex designs after the design has been completed. As a result, a number of software tools have been developed to assist the system designer in proper placement of these sensors during the system design phase of a project. One of the key functions provided by many of these software programs is a testability analysis of the system: essentially, an evaluation of how observable the system's behavior is using the available tests. During the design phase, testability metrics can help guide the designer in improving the inherent testability of the design. This may include adding, removing, or modifying tests; breaking up feedback loops; or changing the system to reduce fault propagation. Given a set of test requirements, the analysis can also help to verify that the system will meet those requirements. Of course, a testability analysis requires that a software model of the physical system is available. For the analysis to be most effective in guiding system design, this model should ideally be constructed in parallel with these efforts. The purpose of this paper is to present the final testability results of the Advanced Diagnostic and Prognostic Testbed (ADAPT) after the system model was completed. The tool chosen to build the model and to perform the testability analysis is the Testability Engineering and Maintenance System Designer (TEAMS-Designer). The TEAMS toolset is intended to be a solution spanning all phases of the system, from design and development through health management and maintenance. TEAMS-Designer is the model-building and testability analysis software in that suite.

  5. Inconclusive quantum measurements and decisions under uncertainty

    NASA Astrophysics Data System (ADS)

    Yukalov, Vyacheslav; Sornette, Didier

    2016-04-01

    We give a mathematical definition for the notion of inconclusive quantum measurements. In physics, such measurements occur at intermediate stages of a complex measurement procedure, with the final measurement result being operationally testable. Since the mathematical structure of Quantum Decision Theory has been developed in analogy with the theory of quantum measurements, the inconclusive quantum measurements correspond, in Quantum Decision Theory, to intermediate stages of decision making in the process of taking decisions under uncertainty. The general form of the quantum probability for a composite event is the sum of a utility factor, describing a rational evaluation of the considered prospect, and of an attraction factor, characterizing irrational, subconscious attitudes of the decision maker. Despite the involved irrationality, the probability of prospects can be evaluated. This is equivalent to the possibility of calculating quantum probabilities without specifying hidden variables. We formulate a general way of evaluation, based on the use of non-informative priors. As an example, we suggest the explanation of the decoy effect. Our quantitative predictions are in very good agreement with experimental data.
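
    In symbols, the composite-prospect probability described here decomposes as follows (notation follows the authors' Quantum Decision Theory papers; quoted for orientation):

      p(\pi_n) = f(\pi_n) + q(\pi_n), \qquad \sum_n p(\pi_n) = 1, \qquad \sum_n q(\pi_n) = 0

    The non-informative-prior evaluation referred to in the abstract amounts to estimating the attraction factor by its typical magnitude, |q| ≈ 1/4 (the "quarter law").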

  6. Autism as the Low-Fitness Extreme of a Parentally Selected Fitness Indicator.

    PubMed

    Shaner, Andrew; Miller, Geoffrey; Mintz, Jim

    2008-12-01

    Siblings compete for parental care and feeding, while parents must allocate scarce resources to those offspring most likely to survive and reproduce. This could cause offspring to evolve traits that advertise health, and thereby attract parental resources. For example, experimental evidence suggests that bright orange filaments covering the heads of North American coot chicks may have evolved for this fitness-advertising purpose. Could any human mental disorders be the equivalent of dull filaments in coot chicks: low-fitness extremes of mental abilities that evolved as fitness indicators? One possibility is autism. Suppose that the ability of very young children to charm their parents evolved as a parentally selected fitness indicator. Young children would vary greatly in their ability to charm parents, that variation would correlate with underlying fitness, and autism could be the low-fitness extreme of this variation. This view explains many seemingly disparate facts about autism and leads to some surprising and testable predictions.

  7. Yukawa unification in an SO(10) SUSY GUT: SUSY on the edge

    NASA Astrophysics Data System (ADS)

    Poh, Zijie; Raby, Stuart

    2015-07-01

    In this paper we analyze Yukawa unification in a three-family SO(10) SUSY GUT. We perform a global χ2 analysis and show that supersymmetry (SUSY) effects do not decouple even though the universal scalar mass parameter at the grand unified theory (GUT) scale, m16, is found to lie between 15 and 30 TeV, with the best fit given for m16 ≈ 25 TeV. Note, SUSY effects do not decouple since stops and sbottoms have masses of order 5 TeV, due to renormalization group running from M_GUT. The model has many testable predictions. Gauginos are the lightest sparticles and the light Higgs boson is very much standard-model-like. The model is consistent with flavor and CP observables, with BR(μ → eγ) close to the experimental upper bound. With such a large value of m16 the model clearly cannot be considered "natural" SUSY, nor is it "split" SUSY. It is thus in the region in between, or "SUSY on the edge."

  8. Exploring the role of internal friction in the dynamics of unfolded proteins using simple polymer models.

    PubMed

    Cheng, Ryan R; Hawk, Alexander T; Makarov, Dmitrii E

    2013-02-21

    Recent experiments showed that the reconfiguration dynamics of unfolded proteins are often adequately described by simple polymer models. In particular, the Rouse model with internal friction (RIF) captures internal friction effects as observed in single-molecule fluorescence correlation spectroscopy (FCS) studies of a number of proteins. Here we use RIF, and its non-free draining analog, Zimm model with internal friction, to explore the effect of internal friction on the rate with which intramolecular contacts can be formed within the unfolded chain. Unlike the reconfiguration times inferred from FCS experiments, which depend linearly on the solvent viscosity, the first passage times to form intramolecular contacts are shown to display a more complex viscosity dependence. We further describe scaling relationships obeyed by contact formation times in the limits of high and low internal friction. Our findings provide experimentally testable predictions that can serve as a framework for the analysis of future studies of contact formation in proteins.
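
    A compact way to see the linear-in-viscosity behavior noted here is the standard RIF prescription: every Rouse mode relaxation time is shifted by one internal-friction time. The sketch below uses one common definition of the reconfiguration time (an amplitude-weighted average of the odd mode times, which carry the end-to-end vector); all parameter values are illustrative:

      import numpy as np

      N, tau_i = 64, 20.0                        # modes; internal-friction time (ns), assumed
      p = np.arange(1, N)
      w = np.where(p % 2 == 1, 1.0 / p**2, 0.0)  # odd-mode amplitudes of the end-to-end vector
      for eta_rel in [0.5, 1.0, 2.0]:            # solvent viscosity relative to water
          tau_1 = 100.0 * eta_rel                # slowest Rouse time scales with viscosity (ns)
          tau_p = tau_1 / p**2 + tau_i           # RIF: constant shift of every mode time
          tau_rec = np.sum(w * tau_p) / np.sum(w)
          print(eta_rel, round(tau_rec, 1))      # linear in viscosity, intercept set by tau_i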

  9. A Unified Approach to the Synthesis of Fully Testable Sequential Machines

    DTIC Science & Technology

    1989-10-01

    A Unified Approach to the Synthesis of Fully Testable Sequential Machines, by Srinivas Devadas and Kurt Keutzer. This research was supported in part by the Defense Advanced Research Projects Agency under contract N00014-87-K-0825. Author information: Devadas, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology.

  10. The experience of agency: an interplay between prediction and postdiction

    PubMed Central

    Synofzik, Matthis; Vosgerau, Gottfried; Voss, Martin

    2013-01-01

    The experience of agency, i.e., the registration that I am the initiator of my actions, is a basic and constant underpinning of our interaction with the world. Whereas several accounts have underlined predictive processes as the central mechanism (e.g., the comparator model by C. Frith), others have emphasized postdictive inferences (e.g., the post-hoc inference account by D. Wegner). Based on increasing evidence that both predictive and postdictive processes contribute to the experience of agency, we here present a unifying but at the same time parsimonious approach that reconciles these accounts: predictive and postdictive processes are both integrated by the brain according to the principles of optimal cue integration. According to this framework, predictive and postdictive processes each serve as authorship cues that are continuously integrated and weighted depending on their availability and reliability in a given situation. Both sensorimotor and cognitive signals can serve as predictive cues (e.g., internal predictions based on an efference copy of the motor command, or cognitive anticipations based on priming). Similarly, other sensorimotor and cognitive cues can each serve as post-hoc cues (e.g., visual feedback of the action or the affective valence of the action outcome). Integration and weighting of these cues might differ not only between contexts and individuals, but also between different subject and disease groups. For example, schizophrenia patients with delusions of influence seem to rely less on (probably imprecise) predictive motor signals of the action and more on post-hoc action cues, e.g., visual feedback and, possibly, the affective valence of the action outcome. Thus, the framework of optimal cue integration offers a promising approach that directly stimulates a wide range of experimentally testable hypotheses on agency processing in different subject groups. PMID:23508565
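
    Optimal cue integration has a simple quantitative core: each cue's estimate is weighted by its inverse variance (its reliability). A minimal sketch, with cue values and variances invented:

      import numpy as np

      def integrate(estimates, variances):
          """Reliability-weighted combination: weights proportional to 1/variance."""
          w = 1.0 / np.asarray(variances, dtype=float)
          w /= w.sum()
          return float(np.dot(w, estimates)), w

      # a precise motor prediction vs. a noisier visual post-hoc cue
      agency, weights = integrate([0.9, 0.4], [0.05, 0.3])
      print(agency, weights)          # dominated by the reliable predictive cue

      # an imprecise prediction (larger variance) shifts weight to post-hoc cues,
      # as in the schizophrenia example above
      agency2, _ = integrate([0.9, 0.4], [0.5, 0.3])
      print(agency2)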

  11. Elementary signaling modes predict the essentiality of signal transduction network components

    PubMed Central

    2011-01-01

    Background: Understanding how signals propagate through signaling pathways and networks is a central goal in systems biology. Quantitative dynamic models help to achieve this understanding, but are difficult to construct and validate because of the scarcity of known mechanistic details and kinetic parameters. Structural and qualitative analysis is emerging as a feasible and useful alternative for interpreting signal transduction. Results: In this work, we present an integrative computational method for evaluating the essentiality of components in signaling networks. This approach expands an existing signaling network to a richer representation that incorporates the positive or negative nature of interactions and the synergistic behaviors among multiple components. Our method simulates both knockout and constitutive activation of components as node disruptions, and takes into account the possible cascading effects of a node's disruption. We introduce the concept of the elementary signaling mode (ESM), the minimal set of nodes that can perform signal transduction independently. Our method ranks the importance of signaling components by the effects of their perturbation on the ESMs of the network. Validation on several signaling networks describing the immune response of mammals to bacteria, guard cell abscisic acid signaling in plants, and T cell receptor signaling shows that this method can effectively uncover the essentiality of components mediating a signal transduction process, and results in strong agreement with the results of Boolean (logical) dynamic models and experimental observations. Conclusions: This integrative method is an efficient procedure for exploratory analysis of large signaling and regulatory networks where dynamic modeling or experimental tests are impractical. Its results serve as testable predictions, provide insights into signal transduction and regulatory mechanisms, and can guide targeted computational or experimental follow-up studies. The source codes for the algorithms developed in this study can be found at http://www.phys.psu.edu/~ralbert/ESM. PMID:21426566
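
    As a loose structural analogue of this ranking (ignoring the interaction signs and synergy that the full ESM method handles), one can score a node by how many independent input-to-output routes survive its removal. A toy network, with node names invented:

      import networkx as nx

      G = nx.DiGraph([("input", "A"), ("input", "B"), ("A", "C"),
                      ("B", "C"), ("C", "output"), ("B", "output")])

      def n_routes(g):
          return len(list(nx.all_simple_paths(g, "input", "output")))

      base = n_routes(G)                       # 3 routes in the intact network
      for node in ["A", "B", "C"]:
          g = G.copy()
          g.remove_node(node)
          print(node, "->", n_routes(g), "of", base, "routes remain")
      # Removing B or C destroys more routes than removing A, so B and C
      # rank as more essential in this crude path-counting scheme.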

  12. Linking Ecology and Epidemiology to Understand Predictors of Multi-Host Responses to an Emerging Pathogen, the Amphibian Chytrid Fungus

    PubMed Central

    Stephens, Patrick R.; Hua, Jessica; Searle, Catherine L.; Xie, Gisselle Yang; Urbina, Jenny; Olson, Deanna H.; Bancroft, Betsy A.; Weis, Virginia; Hammond, John I.; Relyea, Rick A.; Blaustein, Andrew R.

    2017-01-01

    Variation in host responses to pathogens can have cascading effects on populations and communities when some individuals or groups of individuals display disproportionate vulnerability to infection or differ in their competence to transmit infection. The fungal pathogen, Batrachochytrium dendrobatidis (Bd) has been detected in almost 700 different amphibian species and is implicated in numerous global amphibian population declines. Identifying key hosts in the amphibian-Bd system–those who are at greatest risk or who pose the greatest risk for others–is challenging due in part to many extrinsic environmental factors driving spatiotemporal Bd distribution and context-dependent host responses to Bd in the wild. One way to improve predictive risk models and generate testable mechanistic hypotheses about vulnerability is to complement what we know about the spatial epidemiology of Bd with data collected through comparative experimental studies. We used standardized pathogen challenges to quantify amphibian survival and infection trajectories across 20 post-metamorphic North American species raised from eggs. We then incorporated trait-based models to investigate the predictive power of phylogenetic history, habitat use, and ecological and life history traits in explaining responses to Bd. True frogs (Ranidae) displayed the lowest infection intensities, whereas toads (Bufonidae) generally displayed the greatest levels of mortality after Bd exposure. Affiliation with ephemeral aquatic habitat and breadth of habitat use were strong predictors of vulnerability to and intensity of infection and several other traits including body size, lifespan, age at sexual maturity, and geographic range also appeared in top models explaining host responses to Bd. Several of the species examined are highly understudied with respect to Bd such that this study represents the first experimental susceptibility data. Combining insights gained from experimental studies with observations of landscape-level disease prevalence may help explain current and predict future pathogen dynamics in the Bd system. PMID:28095428

  13. Linking Ecology and Epidemiology to Understand Predictors of Multi-Host Responses to an Emerging Pathogen, the Amphibian Chytrid Fungus.

    PubMed

    Gervasi, Stephanie S; Stephens, Patrick R; Hua, Jessica; Searle, Catherine L; Xie, Gisselle Yang; Urbina, Jenny; Olson, Deanna H; Bancroft, Betsy A; Weis, Virginia; Hammond, John I; Relyea, Rick A; Blaustein, Andrew R

    2017-01-01

    Variation in host responses to pathogens can have cascading effects on populations and communities when some individuals or groups of individuals display disproportionate vulnerability to infection or differ in their competence to transmit infection. The fungal pathogen, Batrachochytrium dendrobatidis (Bd) has been detected in almost 700 different amphibian species and is implicated in numerous global amphibian population declines. Identifying key hosts in the amphibian-Bd system (those who are at greatest risk or who pose the greatest risk for others) is challenging due in part to many extrinsic environmental factors driving spatiotemporal Bd distribution and context-dependent host responses to Bd in the wild. One way to improve predictive risk models and generate testable mechanistic hypotheses about vulnerability is to complement what we know about the spatial epidemiology of Bd with data collected through comparative experimental studies. We used standardized pathogen challenges to quantify amphibian survival and infection trajectories across 20 post-metamorphic North American species raised from eggs. We then incorporated trait-based models to investigate the predictive power of phylogenetic history, habitat use, and ecological and life history traits in explaining responses to Bd. True frogs (Ranidae) displayed the lowest infection intensities, whereas toads (Bufonidae) generally displayed the greatest levels of mortality after Bd exposure. Affiliation with ephemeral aquatic habitat and breadth of habitat use were strong predictors of vulnerability to and intensity of infection and several other traits including body size, lifespan, age at sexual maturity, and geographic range also appeared in top models explaining host responses to Bd. Several of the species examined are highly understudied with respect to Bd such that this study represents the first experimental susceptibility data. Combining insights gained from experimental studies with observations of landscape-level disease prevalence may help explain current and predict future pathogen dynamics in the Bd system.

  14. Improving accuracy and power with transfer learning using a meta-analytic database.

    PubMed

    Schwartz, Yannick; Varoquaux, Gaël; Pallier, Christophe; Pinel, Philippe; Poline, Jean-Baptiste; Thirion, Bertrand

    2012-01-01

    Typical cohorts in brain imaging studies are not large enough for systematic testing of all the information contained in the images. To build testable working hypotheses, investigators thus rely on analysis of previous work, sometimes formalized in a so-called meta-analysis. In brain imaging, this approach underlies the specification of regions of interest (ROIs) that are usually selected on the basis of the coordinates of previously detected effects. In this paper, we propose to use a database of images, rather than coordinates, and frame the problem as transfer learning: learning a discriminant model on a reference task to apply it to a different but related new task. To facilitate statistical analysis of small cohorts, we use a sparse discriminant model that selects predictive voxels on the reference task and thus provides a principled procedure to define ROIs. The benefits of our approach are twofold. First it uses the reference database for prediction, i.e., to provide potential biomarkers in a clinical setting. Second it increases statistical power on the new task. We demonstrate on a set of 18 pairs of functional MRI experimental conditions that our approach gives good prediction. In addition, on a specific transfer situation involving different scanners at different locations, we show that voxel selection based on transfer learning leads to higher detection power on small cohorts.
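
    The voxel-selection-by-transfer idea can be sketched with a sparse linear model: fit an L1-penalized classifier on the large reference task, keep its nonzero voxels as the ROI, and train the small new cohort only on that ROI. The data below are synthetic, and (unlike a real transfer setting) the two tasks share the same generative model by construction:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n_vox = 500
      w_true = np.zeros(n_vox); w_true[:10] = 1.5          # few informative voxels

      def make_task(n):
          X = rng.standard_normal((n, n_vox))
          y = (X @ w_true + rng.standard_normal(n) > 0).astype(int)
          return X, y

      X_ref, y_ref = make_task(400)                         # large reference cohort
      X_new, y_new = make_task(40)                          # small new cohort

      ref = LogisticRegression(penalty="l1", C=0.5, solver="liblinear")
      ref.fit(X_ref, y_ref)                                 # sparsity level chosen by hand
      roi = np.flatnonzero(ref.coef_[0])                    # data-driven ROI
      print(len(roi), "voxels selected")

      new = LogisticRegression().fit(X_new[:, roi], y_new)
      print("small-cohort accuracy on ROI:", new.score(X_new[:, roi], y_new))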

  15. Systematic mapping of two component response regulators to gene targets in a model sulfate reducing bacterium.

    PubMed

    Rajeev, Lara; Luning, Eric G; Dehal, Paramvir S; Price, Morgan N; Arkin, Adam P; Mukhopadhyay, Aindrila

    2011-10-12

    Two-component regulatory systems are the primary form of signal transduction in bacteria. Although genomic binding sites have been determined for several eukaryotic and bacterial transcription factors, comprehensive identification of the gene targets of two-component response regulators remains challenging due to the lack of knowledge of the signals required for their activation. We focused our study on Desulfovibrio vulgaris Hildenborough, a sulfate reducing bacterium that encodes unusually diverse and largely uncharacterized two-component signal transduction systems. We report the first systematic mapping of the genes regulated by all transcriptionally acting response regulators in a single bacterium. Our results enabled functional predictions for several response regulators and include key processes of carbon, nitrogen and energy metabolism, cell motility and biofilm formation, and responses to stresses such as nitrite, low potassium, and phosphate starvation. Our study also led to the prediction of new genes and regulatory networks, which found corroboration in a compendium of transcriptome data available for D. vulgaris. For several regulators we predicted and experimentally verified the binding site motifs, most of which were discovered as part of this study. The gene targets identified for the response regulators allowed strong functional predictions to be made for the corresponding two-component systems. By tracking the D. vulgaris regulators and their motifs outside the Desulfovibrio spp., we provide testable hypotheses regarding the functions of orthologous regulators in other organisms. The in vitro array-based method optimized here is generally applicable for the study of such systems in all organisms.

  16. Crystal study and econometric model

    NASA Technical Reports Server (NTRS)

    1975-01-01

    An econometric model was developed that can be used to predict demand and supply figures for crystals over a time horizon roughly concurrent with that of NASA's Space Shuttle Program - that is, 1975 through 1990. The model includes an equation to predict the impact on investment in the crystal-growing industry. Two models are actually presented. The first is a theoretical model that rather strictly follows the standard economic concepts involved in supply and demand analysis; the second is a modified version which, though not quite as theoretically sound, was testable using existing data sources.

  17. Extended Testability Analysis Tool

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin; Maul, William A.; Fulton, Christopher

    2012-01-01

    The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, as well as the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.

  18. Software Users Manual (SUM): Extended Testability Analysis (ETA) Tool

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Fulton, Christopher E.

    2011-01-01

    This software user manual describes the implementation and use of the Extended Testability Analysis (ETA) Tool. The ETA Tool is a software program that augments the analysis and reporting capabilities of a commercial-off-the-shelf (COTS) testability analysis software package called the Testability Engineering And Maintenance System (TEAMS) Designer. An initial diagnostic assessment is performed by the TEAMS Designer software using a qualitative, directed-graph model of the system being analyzed. The ETA Tool utilizes system design information captured within the diagnostic model and testability analysis output from the TEAMS Designer software to create a series of six reports for various system engineering needs. The ETA Tool allows the user to perform additional studies on the testability analysis results by determining the detection sensitivity to the loss of certain sensors or tests. The ETA Tool was developed to support design and development of the NASA Ares I Crew Launch Vehicle. The diagnostic analysis provided by the ETA Tool was proven to be valuable system engineering output that provided consistency in the verification of system engineering requirements. This software user manual provides a description of each output report generated by the ETA Tool. The manual also describes the example diagnostic model and supporting documentation, also provided with the ETA Tool software release package, that were used to generate the reports presented in the manual.

  19. Zee-Babu type model with U(1)Lμ-Lτ gauge symmetry

    NASA Astrophysics Data System (ADS)

    Nomura, Takaaki; Okada, Hiroshi

    2018-05-01

    We extend the Zee-Babu model, introducing a local U(1)Lμ-Lτ symmetry with several singly charged bosons. We find a predictive neutrino mass texture under a simple hypothesis in which mixings among the singly charged bosons are negligible. Lepton-flavor violations are also less constrained than in the original model. We then explore the testability of the model, focusing on doubly charged boson physics at the LHC and the International Linear Collider.

  20. A collider observable QCD axion

    DOE PAGES

    Dimopoulos, Savas; Hook, Anson; Huang, Junwu; ...

    2016-11-09

    Here, we present a model where the QCD axion is at the TeV scale and visible at a collider via its decays. Conformal dynamics and strong CP considerations account both for the axion coupling to the standard model strongly enough to be produced, and for the coincidence between the weak scale and the axion mass. The model predicts additional pseudoscalar color octets whose properties are completely determined by the axion properties, rendering the theory testable.

  1. Effects of flow on the dynamics of a ferromagnetic nematic liquid crystal

    NASA Astrophysics Data System (ADS)

    Potisk, Tilen; Pleiner, Harald; Svenšek, Daniel; Brand, Helmut R.

    2018-04-01

    We investigate the effects of flow on the dynamics of ferromagnetic nematic liquid crystals. As a model, we study the coupled dynamics of the magnetization, M , the director field, n , associated with the liquid crystalline orientational order, and the velocity field, v . We evaluate how simple shear flow in a ferromagnetic nematic is modified in the presence of small external magnetic fields, and we make experimentally testable predictions for the resulting effective shear viscosity: an increase by a factor of 2 in a magnetic field of about 20 mT. Flow alignment, a characteristic feature of classical uniaxial nematic liquid crystals, is analyzed for ferromagnetic nematics for the two cases of magnetization in or perpendicular to the shear plane. In the former case, we find that small in-plane magnetic fields are sufficient to suppress tumbling and thus that the boundary between flow alignment and tumbling can be controlled easily. In the latter case, we furthermore find a possibility of flow alignment in a regime for which one obtains tumbling for the pure nematic component. We derive the analogs of the three Miesowicz viscosities well-known from usual nematic liquid crystals, corresponding to nine different configurations. Combinations of these can be used to determine several dynamic coefficients experimentally.

  2. A genome-wide longitudinal transcriptome analysis of the aging model Podospora anserina.

    PubMed

    Philipp, Oliver; Hamann, Andrea; Servos, Jörg; Werner, Alexandra; Koch, Ina; Osiewacz, Heinz D

    2013-01-01

    Aging of biological systems is controlled by various processes which have a potential impact on gene expression. Here we report a genome-wide transcriptome analysis of the fungal aging model Podospora anserina. Total RNA of three individuals of defined age was pooled and analyzed by SuperSAGE (serial analysis of gene expression). A bioinformatics analysis identified different molecular pathways affected during aging. While the abundance of transcripts linked to ribosomes and to the proteasome quality control system was found to decrease during aging, the abundance of those associated with autophagy increased, suggesting that autophagy may act as a compensatory quality control pathway. Transcript profiles associated with energy metabolism, including mitochondrial functions, were found to fluctuate during aging. Comparison of wild-type transcripts, which are continuously down-regulated during aging, with those down-regulated in the long-lived, copper-uptake mutant grisea validated the relevance of age-related changes in cellular copper metabolism. Overall, we (i) present a unique age-related data set of a longitudinal study of the experimental aging model P. anserina which represents a reference resource for future investigations in a variety of organisms, (ii) suggest autophagy to be a key quality control pathway that becomes active once other pathways fail, and (iii) present testable predictions for subsequent experimental investigations.

  3. Elastic strain and twist analysis of protein structural data and allostery of the transmembrane channel KcsA

    NASA Astrophysics Data System (ADS)

    Mitchell, Michael R.; Leibler, Stanislas

    2018-05-01

    The abundance of available static protein structural data makes the more effective analysis and interpretation of this data a valuable tool to supplement the experimental study of protein mechanics. Structural displacements can be difficult to analyze and interpret. Previously, we showed that strains provide a more natural and interpretable representation of protein deformations, revealing mechanical coupling between spatially distinct sites of allosteric proteins. Here, we demonstrate that other transformations of displacements yield additional insights. We calculate the divergence and curl of deformations of the transmembrane channel KcsA. Additionally, we introduce quantities analogous to bend, splay, and twist deformation energies of nematic liquid crystals. These transformations enable the decomposition of displacements into different modes of deformation, helping to characterize the type of deformation a protein undergoes. We apply these calculations to study the filter and gating regions of KcsA. We observe a continuous path of rotational deformations physically coupling these two regions, and, we propose, underlying the allosteric interaction between these regions. Bend, splay, and twist distinguish KcsA gate opening, filter opening, and filter-gate coupling, respectively. In general, physically meaningful representations of deformations (like strain, curl, bend, splay, and twist) can make testable predictions and yield insights into protein mechanics, augmenting experimental methods and more fully exploiting available structural data.

  4. A simple mechanistic explanation for original antigenic sin and its alleviation by adjuvants.

    PubMed

    Ndifon, Wilfred

    2015-11-06

    A large number of published studies have shown that adaptive immunity to a particular antigen, including pathogen-derived antigens, can be boosted by another, cross-reacting antigen while inducing suboptimal immunity to the latter. Although this phenomenon, called original antigenic sin (OAS), was first reported approximately 70 years ago (Francis et al. 1947 Am. J. Public Health 37, 1013-1016 (doi:10.2105/AJPH.37.8.1013)), its underlying biological mechanisms are still inadequately understood (Kim et al. Proc. Natl Acad. Sci. USA 109, 13751-13756 (doi:10.1073/pnas.0912458109)). Here, focusing on the humoral aspects of adaptive immunity, I propose a simple and testable mechanism: that OAS occurs when T regulatory cells induced by the first antigen decrease the dose of the second antigen that is loaded by dendritic cells and available to activate naive lymphocytes. I use both a parsimonious mathematical model and experimental data to confirm the deductive validity of this proposal. This model also explains the puzzling experimental observation that administering certain dendritic cell-activating adjuvants during antigen exposure alleviates OAS. Specifically, the model predicts that such adjuvants will attenuate T regulatory suppression of naive lymphocyte activation. Together, these results suggest additional strategies for redeeming adaptive immunity from the destructive consequences of antigenic 'sin'. © 2015 The Author(s).
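
    A toy illustration of the proposed mechanism (all parameter names, values, and functional forms below are hypothetical, chosen only to mimic the qualitative claim, not the paper's fitted model):

    def activation_vs_second_antigen(treg, adjuvant=0.0, dose=1.0,
                                     k_suppress=2.0, k_act=1.5):
        """Naive-lymphocyte activation against the second antigen.

        treg     : regulatory T cells induced by the first antigen
        adjuvant : dendritic-cell-activating adjuvant strength, assumed
                   to attenuate Treg suppression as the model predicts
        """
        suppression = k_suppress * treg / (1.0 + adjuvant)
        effective_dose = dose / (1.0 + suppression)   # antigen loaded by DCs
        return k_act * effective_dose / (1.0 + effective_dose)

    for treg in (0.0, 1.0):
        for adj in (0.0, 5.0):
            print(f"Treg={treg:.0f}, adjuvant={adj:.0f} -> "
                  f"activation={activation_vs_second_antigen(treg, adj):.3f}")

    With these stand-in numbers, prior Treg induction lowers activation against the second antigen, and a strong adjuvant largely restores it, reproducing the qualitative OAS-and-alleviation pattern.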

  5. Interrogating selectivity in catalysis using molecular vibrations

    NASA Astrophysics Data System (ADS)

    Milo, Anat; Bess, Elizabeth N.; Sigman, Matthew S.

    2014-03-01

    The delineation of molecular properties that underlie reactivity and selectivity is at the core of physical organic chemistry, and this knowledge can be used to inform the design of improved synthetic methods or identify new chemical transformations. For this reason, the mathematical representation of properties affecting reactivity and selectivity trends, that is, molecular parameters, is paramount. Correlations produced by equating these molecular parameters with experimental outcomes are often defined as free-energy relationships and can be used to evaluate the origin of selectivity and to generate new, experimentally testable hypotheses. The premise behind successful correlations of this type is that a systematically perturbed molecular property affects a transition-state interaction between the catalyst, substrate and any reaction components involved in the determination of selectivity. Classic physical organic molecular descriptors, such as Hammett, Taft or Charton parameters, seek to independently probe isolated electronic or steric effects. However, these parameters cannot address simultaneous, non-additive variations to more than one molecular property, which limits their utility. Here we report a parameter system based on the vibrational response of a molecule to infrared radiation that can be used to mathematically model and predict selectivity trends for reactions with interlinked steric and electronic effects at positions of interest. The disclosed parameter system is mechanistically derived and should find broad use in the study of chemical and biological systems.

  6. A Predictive Model of the Oxygen and Heme Regulatory Network in Yeast

    PubMed Central

    Kundaje, Anshul; Xin, Xiantong; Lan, Changgui; Lianoglou, Steve; Zhou, Mei; Zhang, Li; Leslie, Christina

    2008-01-01

    Deciphering gene regulatory mechanisms through the analysis of high-throughput expression data is a challenging computational problem. Previous computational studies have used large expression datasets in order to resolve fine patterns of coexpression, producing clusters or modules of potentially coregulated genes. These methods typically examine promoter sequence information, such as DNA motifs or transcription factor occupancy data, in a separate step after clustering. We needed an alternative and more integrative approach to study the oxygen regulatory network in Saccharomyces cerevisiae using a small dataset of perturbation experiments. Mechanisms of oxygen sensing and regulation underlie many physiological and pathological processes, and only a handful of oxygen regulators have been identified in previous studies. We used a new machine learning algorithm called MEDUSA to uncover detailed information about the oxygen regulatory network using genome-wide expression changes in response to perturbations in the levels of oxygen, heme, Hap1, and Co2+. MEDUSA integrates mRNA expression, promoter sequence, and ChIP-chip occupancy data to learn a model that accurately predicts the differential expression of target genes in held-out data. We used a novel margin-based score to extract significant condition-specific regulators and assemble a global map of the oxygen sensing and regulatory network. This network includes both known oxygen and heme regulators, such as Hap1, Mga2, Hap4, and Upc2, as well as many new candidate regulators. MEDUSA also identified many DNA motifs that are consistent with previous experimentally identified transcription factor binding sites. Because MEDUSA's regulatory program associates regulators to target genes through their promoter sequences, we directly tested the predicted regulators for OLE1, a gene specifically induced under hypoxia, by experimental analysis of the activity of its promoter. In each case, deletion of the candidate regulator resulted in the predicted effect on promoter activity, confirming that several novel regulators identified by MEDUSA are indeed involved in oxygen regulation. MEDUSA can reveal important information from a small dataset and generate testable hypotheses for further experimental analysis. Supplemental data are included. PMID:19008939

  7. Combinatorial DNA Damage Pairing Model Based on X-Ray-Induced Foci Predicts the Dose and LET Dependence of Cell Death in Human Breast Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vadhavkar, Nikhil; Pham, Christopher; Georgescu, Walter

    In contrast to the classic view of static DNA double-strand breaks (DSBs) being repaired at the site of damage, we hypothesize that DSBs move and merge with each other over large distances (µm). As X-ray dose increases, the probability of having DSB clusters increases, as does the probability of misrepair and cell death. Experimental work characterizing the X-ray dose dependence of radiation-induced foci (RIF) in nonmalignant human mammary epithelial cells (MCF10A) is used here to validate a DSB clustering model. We then use the principles of the local effect model (LEM) to predict the yield of DSBs at the submicron level. Two mechanisms for DSB clustering, namely random coalescence of DSBs versus active movement of DSBs into repair domains, are compared and tested. Simulations that best predicted both RIF dose dependence and cell survival after X-ray irradiation favored the repair domain hypothesis, suggesting the nucleus is divided into an array of regularly spaced repair domains of ~1.55 µm sides. Applying the same approach to high-linear energy transfer (LET) ion tracks, we are able to predict experimental RIF/µm along tracks with an overall relative error of 12 percent, for LET ranging between 30 and 350 keV/µm and for three different ions. Finally, cell death was predicted by assuming an exponential dependence on the total number of DSBs and of all possible combinations of paired DSBs within each simulated RIF. Relative biological effectiveness (RBE) predictions for cell survival of MCF10A exposed to high-LET radiation showed an LET dependence that matches previous experimental results for similar cell types. Overall, this work suggests that microdosimetric properties of ion tracks at the submicron level are sufficient to explain both RIF data and survival curves for any LET, similarly to the LEM assumption. Conversely, the high-LET death mechanism does not have to invoke the linear-quadratic dose formalism as done in the LEM. In addition, the size of the repair domains derived in our model is based on experimental RIF and is three times larger than the hypothetical LEM voxel used to fit survival curves. Our model is therefore an alternative to previous approaches that provides a testable biological mechanism (i.e., RIF). In addition, we propose that DSB pairing will help develop more accurate alternatives to the linear no-threshold (LNT) cancer risk model currently used for regulating exposure to very low levels of ionizing radiation.
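
    A hedged Monte Carlo sketch of the survival assumption stated above (exponential dependence on the total DSB count and on pairwise DSB combinations within foci); the DSB yield, domain count, and alpha/beta coefficients are hypothetical stand-ins, not the paper's fitted values:

    import numpy as np
    from math import comb

    rng = np.random.default_rng(0)

    def survival(dose_gy, dsb_per_gy=35.0, n_domains=600,
                 alpha=1e-2, beta=2e-3, n_cells=2000):
        alive = 0
        for _ in range(n_cells):
            n_dsb = rng.poisson(dsb_per_gy * dose_gy)
            # DSBs land in regularly spaced repair domains; those sharing
            # a domain form one RIF and can pair combinatorially.
            counts = np.bincount(rng.integers(0, n_domains, n_dsb),
                                 minlength=n_domains)
            n_pairs = sum(comb(int(k), 2) for k in counts if k >= 2)
            alive += rng.random() < np.exp(-(alpha * n_dsb + beta * n_pairs))
        return alive / n_cells

    for d in (0, 2, 4, 8):
        print(f"{d} Gy -> surviving fraction ~ {survival(d):.3f}")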

  8. Within-group behavioural consequences of between-group conflict: a prospective review.

    PubMed

    Radford, Andrew N; Majolo, Bonaventura; Aureli, Filippo

    2016-11-30

    Conflict is rife in group-living species and exerts a powerful selective force. Group members face a variety of threats from extra-group conspecifics, from individuals looking for reproductive opportunities to rival groups seeking resources. Theory predicts that such between-group conflict should influence within-group behaviour. However, compared with the extensive literature on the consequences of within-group conflict, relatively little research has considered the behavioural impacts of between-group conflict. We give an overview of why between-group conflict is expected to influence subsequent behaviour among group members. We then use what is known about the consequences of within-group conflict to generate testable predictions about how between-group conflict might affect within-group behaviour in the aftermath. We consider the types of behaviour that could change and how the role of different group members in the conflict can exert an influence. Furthermore, we discuss how conflict characteristics and outcome, group size, social structure and within-group relationship quality might modulate post-conflict behavioural changes. Finally, we propose the need for consistent definitions, a broader range of examined behaviours and taxa, individual-focused data collection, complementary observational and experimental approaches, and a consideration of lasting effects if we are to understand fully the significant influence of between-group conflict on social behaviour. © 2016 The Author(s).

  9. Within-group behavioural consequences of between-group conflict: a prospective review

    PubMed Central

    Aureli, Filippo

    2016-01-01

    Conflict is rife in group-living species and exerts a powerful selective force. Group members face a variety of threats from extra-group conspecifics, from individuals looking for reproductive opportunities to rival groups seeking resources. Theory predicts that such between-group conflict should influence within-group behaviour. However, compared with the extensive literature on the consequences of within-group conflict, relatively little research has considered the behavioural impacts of between-group conflict. We give an overview of why between-group conflict is expected to influence subsequent behaviour among group members. We then use what is known about the consequences of within-group conflict to generate testable predictions about how between-group conflict might affect within-group behaviour in the aftermath. We consider the types of behaviour that could change and how the role of different group members in the conflict can exert an influence. Furthermore, we discuss how conflict characteristics and outcome, group size, social structure and within-group relationship quality might modulate post-conflict behavioural changes. Finally, we propose the need for consistent definitions, a broader range of examined behaviours and taxa, individual-focused data collection, complementary observational and experimental approaches, and a consideration of lasting effects if we are to understand fully the significant influence of between-group conflict on social behaviour. PMID:27903869

  10. Theoretical and computational validation of the Kuhn barrier friction mechanism in unfolded proteins.

    PubMed

    Avdoshenko, Stanislav M; Das, Atanu; Satija, Rohit; Papoian, Garegin A; Makarov, Dmitrii E

    2017-03-21

    A long time ago, Kuhn predicted that long polymers should approach a limit where their global motion is controlled by solvent friction alone, with ruggedness of their energy landscapes having no consequences for their dynamics. In contrast, internal friction effects are important for polymers of modest length. Internal friction in proteins, in particular, affects how fast they fold or find their binding targets and, as such, has attracted much recent attention. Here we explore the molecular origins of internal friction in unfolded proteins using atomistic simulations, coarse-grained models and analytic theory. We show that the characteristic internal friction timescale is directly proportional to the timescale of hindered dihedral rotations within the polypeptide chain, with a proportionality coefficient b that is independent of the chain length. Such chain length independence of b provides experimentally testable evidence that internal friction arises from concerted, crankshaft-like dihedral rearrangements. In accord with phenomenological models of internal friction, we find the global reconfiguration timescale of a polypeptide to be the sum of solvent friction and internal friction timescales. At the same time, the time evolution of inter-monomer distances within polypeptides deviates both from the predictions of those models and from a simple, one-dimensional diffusion model.
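
    In symbols, the two central relations reported here are

        \tau_{\mathrm{int}} = b\,\tau_{\mathrm{dihedral}},
        \qquad
        \tau_{\mathrm{reconfig}} = \tau_{\mathrm{solvent}} + \tau_{\mathrm{int}},

    with b independent of chain length; this chain-length independence is the signature the authors attribute to concerted, crankshaft-like dihedral rearrangements.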

  11. Predicting Predator Recognition in a Changing World.

    PubMed

    Carthey, Alexandra J R; Blumstein, Daniel T

    2018-02-01

    Through natural as well as anthropogenic processes, prey can lose historically important predators and gain novel ones. Both predator gain and loss frequently have deleterious consequences. While numerous hypotheses explain the response of individuals to novel and familiar predators, we lack a unifying conceptual model that predicts the fate of prey following the introduction of a novel or a familiar (reintroduced) predator. Using the concept of eco-evolutionary experience, we create a new framework that allows us to predict whether prey will recognize and be able to discriminate predator cues from non-predator cues and, moreover, the likely persistence outcomes for 11 different predator-prey interaction scenarios. This framework generates useful and testable predictions for ecologists, conservation scientists, and decision-makers. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. A plausible radiobiological model of cardiovascular disease at low or fractionated doses

    NASA Astrophysics Data System (ADS)

    Little, Mark; Vandoolaeghe, Wendy; Gola, Anna; Tzoulaki, Ioanna

    Atherosclerosis is the main cause of coronary heart disease and stroke, the two major causes of death in developed society. There is emerging evidence of excess risk of cardiovascular disease at low radiation doses in various occupationally-exposed groups receiving small daily radiation doses. Assuming that they are causal, the mechanisms for effects of chronic fractionated radiation exposures on cardiovascular disease are unclear. We outline a spatial reaction-diffusion model for atherosclerosis and perform stability analysis, based wherever possible on human data. We show that a predicted consequence of multiple small radiation doses is to cause mean chemoattractant (MCP-1) concentration to increase linearly with cumulative dose. The main driver for the increase in MCP-1 is monocyte death, and consequent reduction in MCP-1 degradation. The radiation-induced risks predicted by the model are quantitatively consistent with those observed in a number of occupationally-exposed groups. The changes in equilibrium MCP-1 concentrations with low density lipoprotein cholesterol concentration are also consistent with experimental and epidemiologic data. This proposed mechanism would be experimentally testable. If true, it also has substantive implications for radiological protection, which at present does not take cardiovascular disease into account. The Japanese A-bomb survivor data imply that cardiovascular disease and cancer mortality contribute similarly to radiogenic risk. The major uncertainty in assessing the low-dose risk of cardiovascular disease is the shape of the dose response relationship, which is unclear in the Japanese data. The analysis of the present paper suggests that linear extrapolation would be appropriate for this endpoint.

  13. LSI/VLSI design for testability analysis and general approach

    NASA Technical Reports Server (NTRS)

    Lam, A. Y.

    1982-01-01

    The incorporation of testability characteristics into large scale digital design is necessary for effective device testing and pertinent to the enhancement of device reliability. There are at least three major DFT techniques, namely self checking, LSSD, and partitioning, each of which can be incorporated into a logic design to achieve a specific set of testability and reliability requirements. A detailed analysis of the design theory, implementation, fault coverage, hardware requirements, application limitations, etc., of each of these techniques is also presented.

  14. Larval transport modeling of deep-sea invertebrates can aid the search for undiscovered populations.

    PubMed

    Yearsley, Jon M; Sigwart, Julia D

    2011-01-01

    Many deep-sea benthic animals occur in patchy distributions separated by thousands of kilometres, yet because deep-sea habitats are remote, little is known about their larval dispersal. Our novel method simulates dispersal by combining data from the Argo array of autonomous oceanographic probes, deep-sea ecological surveys, and comparative invertebrate physiology. The predicted particle tracks allow quantitative, testable predictions about the dispersal of benthic invertebrate larvae in the south-west Pacific. In a test case presented here, using non-feeding, non-swimming (lecithotrophic trochophore) larvae of polyplacophoran molluscs (chitons), we show that the likely dispersal pathways in a single generation are significantly shorter than the distances between the three known population centres in our study region. The large-scale density of chiton populations throughout our study region is potentially much greater than present survey data suggest, with intermediate 'stepping stone' populations yet to be discovered. We present a new method that is broadly applicable to studies of the dispersal of deep-sea organisms. This test case demonstrates the power and potential applications of our new method, in generating quantitative, testable hypotheses at multiple levels to solve the mismatch between observed and expected distributions: probabilistic predictions of locations of intermediate populations, potential alternative dispersal mechanisms, and expected population genetic structure. The global Argo data have never previously been used to address benthic biology, and our method can be applied to any non-swimming larvae of the deep-sea, giving information upon dispersal corridors and population densities in habitats that remain intrinsically difficult to assess.

  15. Larval Transport Modeling of Deep-Sea Invertebrates Can Aid the Search for Undiscovered Populations

    PubMed Central

    Yearsley, Jon M.; Sigwart, Julia D.

    2011-01-01

    Background Many deep-sea benthic animals occur in patchy distributions separated by thousands of kilometres, yet because deep-sea habitats are remote, little is known about their larval dispersal. Our novel method simulates dispersal by combining data from the Argo array of autonomous oceanographic probes, deep-sea ecological surveys, and comparative invertebrate physiology. The predicted particle tracks allow quantitative, testable predictions about the dispersal of benthic invertebrate larvae in the south-west Pacific. Principal Findings In a test case presented here, using non-feeding, non-swimming (lecithotrophic trochophore) larvae of polyplacophoran molluscs (chitons), we show that the likely dispersal pathways in a single generation are significantly shorter than the distances between the three known population centres in our study region. The large-scale density of chiton populations throughout our study region is potentially much greater than present survey data suggest, with intermediate ‘stepping stone’ populations yet to be discovered. Conclusions/Significance We present a new method that is broadly applicable to studies of the dispersal of deep-sea organisms. This test case demonstrates the power and potential applications of our new method, in generating quantitative, testable hypotheses at multiple levels to solve the mismatch between observed and expected distributions: probabilistic predictions of locations of intermediate populations, potential alternative dispersal mechanisms, and expected population genetic structure. The global Argo data have never previously been used to address benthic biology, and our method can be applied to any non-swimming larvae of the deep-sea, giving information upon dispersal corridors and population densities in habitats that remain intrinsically difficult to assess. PMID:21857992
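
    A generic Lagrangian particle-tracking sketch of the approach described in these two records (a synthetic velocity field and hypothetical parameters stand in for the Argo-derived flow; illustrative only):

    import numpy as np

    rng = np.random.default_rng(7)

    def velocity(x, y, t):
        """Synthetic eddying surface current in m/s (placeholder for Argo data)."""
        u = 0.10 + 0.05 * np.sin(y / 5e4)
        v = 0.05 * np.cos(x / 5e4 + t / 8.64e4)
        return u, v

    def disperse(days=30, dt=3600.0, n=200):
        """Advect n passive larvae for a fixed competency window."""
        xs = rng.normal(0.0, 1e3, n)
        ys = rng.normal(0.0, 1e3, n)
        for step in range(int(days * 86400 / dt)):
            u, v = velocity(xs, ys, step * dt)
            xs += u * dt
            ys += v * dt
        return np.hypot(xs, ys) / 1e3   # distance from release site, km

    print(f"median dispersal after 30 d: {np.median(disperse()):.0f} km")

    Comparing such single-generation dispersal distances against the spacing of known populations is what motivates the 'stepping stone' prediction above.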

  16. Earthquake Forecasting System in Italy

    NASA Astrophysics Data System (ADS)

    Falcone, G.; Marzocchi, W.; Murru, M.; Taroni, M.; Faenza, L.

    2017-12-01

    In Italy, after the 2009 L'Aquila earthquake, a procedure was developed for gathering and disseminating authoritative information about the time dependence of seismic hazard to help communities prepare for a potentially destructive earthquake. The most striking time dependency of the earthquake occurrence process is time clustering, which is particularly pronounced in time windows of days and weeks. The Operational Earthquake Forecasting (OEF) system developed at the Seismic Hazard Center (Centro di Pericolosità Sismica, CPS) of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) is the authoritative source of seismic hazard information for Italian Civil Protection. The philosophy of the system rests on a few basic concepts: transparency, reproducibility, and testability. In particular, the transparent, reproducible, and testable earthquake forecasting system developed at CPS is based on ensemble modeling and on a rigorous testing phase. This phase is carried out according to the guidance proposed by the Collaboratory for the Study of Earthquake Predictability (CSEP, an international infrastructure aimed at quantitatively evaluating earthquake prediction and forecast models through purely prospective and reproducible experiments). The OEF system uses the two most popular short-term models: the Epidemic-Type Aftershock Sequences (ETAS) model and the Short-Term Earthquake Probabilities (STEP) model. Here, we report the results from OEF's 24-hour earthquake forecasting during the main phases of the 2016-2017 sequence that occurred in the Central Apennines (Italy).
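
    For reference, the temporal ETAS model named above superposes a background rate and Omori-law aftershock triggering; a standard form of its conditional intensity (notation varies across implementations) is

        \lambda(t \mid H_t) = \mu + \sum_{i:\,t_i<t} K\, e^{\alpha (M_i - M_0)}\,(t - t_i + c)^{-p},

    where \mu is the background rate, M_0 the magnitude threshold, t_i and M_i the past event times and magnitudes, and K, \alpha, c, p are fitted parameters.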

  17. Loss Aversion and Time-Differentiated Electricity Pricing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spurlock, C. Anna

    2015-06-01

    I develop a model of loss aversion over electricity expenditure, from which I derive testable predictions for household electricity consumption while on combination time-of-use (TOU) and critical peak pricing (CPP) plans. Testing these predictions yields evidence consistent with loss aversion: (1) spillover effects - positive expenditure shocks resulted in significantly more peak consumption reduction for several weeks thereafter; and (2) clustering - a disproportionate probability of consuming such that expenditure would be equal between the TOU-CPP and standard flat-rate pricing structures. This behavior is inconsistent with a purely neoclassical utility model and has important implications for the application of time-differentiated electricity pricing.
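
    One standard way to write the loss-aversion ingredient (a generic Kahneman-Tversky-style value function, not necessarily the paper's exact specification) is

        v(x) = \begin{cases} x, & x \ge 0 \\ \lambda x, & x < 0 \end{cases},
        \qquad \lambda > 1,

    where x is the gain relative to a reference bill (reference expenditure minus actual expenditure) and \lambda is the loss-aversion coefficient. The kink at x = 0 is what can produce clustering of consumption where expenditure is equal across the two pricing structures.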

  18. The dynamics of hurricane balls

    NASA Astrophysics Data System (ADS)

    Andersen, W. L.; Werner, Steven

    2015-09-01

    We examine the theory of the hurricane balls toy. This toy consists of two steel balls welded together that are sent spinning on a horizontal surface somewhat like a top. Unlike a top, at high frequency the symmetry axis approaches a limiting inclination that is not perpendicular to the surface. We calculate (and experimentally verify) the limiting inclinations for three toy geometries. We find that at high frequencies, hurricane balls provide an easily realized and testable example of the Poinsot theory of freely rotating symmetrical bodies.

  19. Module generation for self-testing integrated systems

    NASA Astrophysics Data System (ADS)

    Vanriessen, Ronald Pieter

    Hardware used for self test in VLSI (Very Large Scale Integrated) systems is reviewed, and an architecture to control the test hardware in an integrated system is presented. Because of the increase in test times, the use of self test techniques has become practically and economically viable for VLSI systems. Besides the reduction in test times and costs, self test also provides testing at operational speeds. Therefore, a suitable combination of scan path and macro-specific (self) tests is required to reduce test times and costs. An expert system that can be used in a silicon compilation environment is presented. The approach requires a minimum of testability knowledge from a system designer. A user friendly interface is described for specifying and modifying testability requirements by a testability expert. A reason-directed backtracking mechanism is used to solve selection failures. Both the hierarchical testable architecture and the design-for-testability expert system are used in a self test compiler. A self test compiler is a software tool that selects an appropriate test method for every macro in a design; the hardware to control a macro test is then included in the design automatically. As an example, the integration of the self test compiler in the silicon compilation system PIRAMID is described, along with the design of a demonstrator circuit by the self test compiler. This circuit consists of two self testable macros. Control of the self test hardware is carried out via the test access port of the boundary scan standard.

  20. A quantitative systems physiology model of renal function and blood pressure regulation: Model description.

    PubMed

    Hallow, K M; Gebremichael, Y

    2017-06-01

    Renal function plays a central role in cardiovascular, kidney, and multiple other diseases, and many existing and novel therapies act through renal mechanisms. Even with decades of accumulated knowledge of renal physiology, pathophysiology, and pharmacology, the dynamics of renal function remain difficult to understand and predict, often resulting in unexpected or counterintuitive therapy responses. Quantitative systems pharmacology modeling of renal function integrates this accumulated knowledge into a quantitative framework, allowing evaluation of competing hypotheses, identification of knowledge gaps, and generation of new experimentally testable hypotheses. Here we present a model of renal physiology and control mechanisms involved in maintaining sodium and water homeostasis. This model represents the core renal physiological processes involved in many research questions in drug development. The model runs in R and the code is made available. In a companion article, we present a case study using the model to explore mechanisms and pharmacology of salt-sensitive hypertension. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
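
    A deliberately tiny pressure-natriuresis sketch in the spirit of such models (the published model runs in R and is far more detailed; every number below is a hypothetical placeholder):

    import numpy as np

    def steady_map(days=60.0, dt=0.01, intake=100.0, sensitivity=5.0):
        """Sodium excretion rises with mean arterial pressure (MAP), so
        MAP settles where excretion balances intake (mmol/day)."""
        na = 2000.0   # total body sodium, mmol (hypothetical)
        for _ in np.arange(0.0, days, dt):
            mean_ap = 50.0 + 0.025 * na                      # MAP rises with Na
            excretion = sensitivity * max(mean_ap - 80.0, 0.0)
            na += (intake - excretion) * dt
        return 50.0 + 0.025 * na

    print(f"normal intake : MAP ~ {steady_map():.0f} mmHg")
    print(f"doubled intake: MAP ~ {steady_map(intake=200.0):.0f} mmHg")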

  1. Higgs mass corrections in the SUSY B - L model with inverse seesaw

    NASA Astrophysics Data System (ADS)

    Elsayed, A.; Khalil, S.; Moretti, S.

    2012-08-01

    In the context of the Supersymmetric (SUSY) B - L (Baryon minus Lepton number) model with inverse seesaw mechanism, we calculate the one-loop radiative corrections due to right-handed (s)neutrinos to the mass of the lightest Higgs boson when the latter is Standard Model (SM)-like. We show that such effects can be as large as O(100) GeV, thereby giving an absolute upper limit on such a mass around 180 GeV. The importance of this result from a phenomenological point of view is twofold. On the one hand, this enhancement greatly reconciles theory and experiment, by alleviating the so-called 'little hierarchy problem' of the minimal SUSY realization, whereby the current experimental limit on the SM-like Higgs mass is very near its absolute upper limit predicted theoretically, of 130 GeV. On the other hand, a SM-like Higgs boson with mass below 180 GeV is still well within the reach of the Large Hadron Collider (LHC), so that the SUSY realization discussed here is just as testable as the minimal version.

  2. Multimodal transport and dispersion of organelles in narrow tubular cells

    NASA Astrophysics Data System (ADS)

    Mogre, Saurabh S.; Koslover, Elena F.

    2018-04-01

    Intracellular components explore the cytoplasm via active motor-driven transport in conjunction with passive diffusion. We model the motion of organelles in narrow tubular cells using analytical techniques and numerical simulations to study the efficiency of different transport modes in achieving various cellular objectives. Our model describes length and time scales over which each transport mode dominates organelle motion, along with various metrics to quantify exploration of intracellular space. For organelles that search for a specific target, we obtain the average capture time for given transport parameters and show that diffusion and active motion contribute to target capture in the biologically relevant regime. Because many organelles have been found to tether to microtubules when not engaged in active motion, we study the interplay between immobilization due to tethering and increased probability of active transport. We derive parameter-dependent conditions under which tethering enhances long-range transport and improves the target capture time. These results shed light on the optimization of intracellular transport machinery and provide experimentally testable predictions for the effects of transport regulation mechanisms such as tethering.
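
    The competing length scales can be summarized with a standard back-of-envelope comparison (a rough scaling estimate, not the paper's full calculation): diffusive search over a distance L takes

        t_{\mathrm{diff}} \sim \frac{L^2}{2D},
        \qquad
        t_{\mathrm{active}} \sim \frac{L}{v},

    so motor-driven transport at speed v wins beyond a crossover length L^{*} \sim 2D/v, while diffusion dominates exploration below it.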

  3. Behavioral Stochastic Resonance

    NASA Astrophysics Data System (ADS)

    Freund, Jan A.; Schimansky-Geier, Lutz; Beisner, Beatrix; Neiman, Alexander; Russell, David F.; Yakusheva, Tatyana; Moss, Frank

    2001-03-01

    Zooplankton emit weak electric fields into the surrounding water that originate from their own muscular activities associated with swimming and feeding. Juvenile paddlefish prey upon single zooplankton by detecting and tracking these weak electric signatures. The passive electric sense in the fish is provided by an elaborate array of electroreceptors, ampullae of Lorenzini, spread over the surface of an elongated rostrum. We have previously shown that the fish use stochastic resonance to enhance prey capture near the detection threshold of their sensory system. But stochastic resonance requires an external source of electrical noise in order to function. The required noise can be provided by a swarm of plankton, for example Daphnia. Thus juvenile paddlefish can detect and attack single Daphnia as outliers in the vicinity of the swarm by making use of noise from the swarm itself. From the power spectral density of the noise plus the weak signal from a single Daphnia we calculate the signal-to-noise ratio and the Fisher information at the surface of the paddlefish's rostrum. The results predict a specific attack pattern for the paddlefish that appears to be experimentally testable.

  4. Exploring the Conformational Transitions of Biomolecular Systems Using a Simple Two-State Anisotropic Network Model

    PubMed Central

    Jo, Sunhwan; Bahar, Ivet; Roux, Benoît

    2014-01-01

    Biomolecular conformational transitions are essential to biological functions. Most experimental methods report on the long-lived functional states of biomolecules, but information about the transition pathways between these stable states is generally scarce. Such transitions involve short-lived conformational states that are difficult to detect experimentally. For this reason, computational methods are needed to produce plausible hypothetical transition pathways that can then be probed experimentally. Here we propose a simple and computationally efficient method, called ANMPathway, for constructing a physically reasonable pathway between two endpoints of a conformational transition. We adopt a coarse-grained representation of the protein and construct a two-state potential by combining two elastic network models (ENMs) representative of the experimental structures resolved for the endpoints. The two-state potential has a cusp hypersurface in the configuration space where the energies from both the ENMs are equal. We first search for the minimum energy structure on the cusp hypersurface and then treat it as the transition state. The continuous pathway is subsequently constructed by following the steepest descent energy minimization trajectories starting from the transition state on each side of the cusp hypersurface. Application to several systems of broad biological interest such as adenylate kinase, ATP-driven calcium pump SERCA, leucine transporter and glutamate transporter shows that ANMPathway yields results in good agreement with those from other similar methods and with data obtained from all-atom molecular dynamics simulations, in support of the utility of this simple and efficient approach. Notably the method provides experimentally testable predictions, including the formation of non-native contacts during the transition which we were able to detect in two of the systems we studied. An open-access web server has been created to deliver ANMPathway results. PMID:24699246
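
    A minimal two-dimensional cartoon of the two-state construction (hypothetical coordinates and stiffnesses, not protein data): two quadratic elastic-network energies, the cusp where they are equal, and the transition state taken as the lowest-energy point on that cusp.

    import numpy as np

    x1, k1 = np.array([0.0, 0.0]), 1.0   # endpoint structure 1
    x2, k2 = np.array([3.0, 1.0]), 2.0   # endpoint structure 2

    def E1(x): return 0.5 * k1 * np.sum((x - x1) ** 2, axis=-1)
    def E2(x): return 0.5 * k2 * np.sum((x - x2) ** 2, axis=-1)

    # Sample configuration space, keep points near the cusp E1 == E2,
    # and take the lowest-energy one as the approximate transition state.
    gx, gy = np.meshgrid(np.linspace(-1, 4, 500), np.linspace(-1, 3, 500))
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)
    e1, e2 = E1(pts), E2(pts)
    near_cusp = np.abs(e1 - e2) < 0.05
    ts = pts[near_cusp][np.argmin(e1[near_cusp])]
    print("transition state ~", ts.round(2), "energy ~", round(float(E1(ts)), 3))

    The steepest-descent pathways on each side of the cusp would then be traced from this point, as the abstract describes.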

  5. Systematic mapping of two component response regulators to gene targets in a model sulfate reducing bacterium

    PubMed Central

    2011-01-01

    Background Two component regulatory systems are the primary form of signal transduction in bacteria. Although genomic binding sites have been determined for several eukaryotic and bacterial transcription factors, comprehensive identification of gene targets of two component response regulators remains challenging due to the lack of knowledge of the signals required for their activation. We focused our study on Desulfovibrio vulgaris Hildenborough, a sulfate reducing bacterium that encodes unusually diverse and largely uncharacterized two component signal transduction systems. Results We report the first systematic mapping of the genes regulated by all transcriptionally acting response regulators in a single bacterium. Our results enabled functional predictions for several response regulators and include key processes of carbon, nitrogen and energy metabolism, cell motility and biofilm formation, and responses to stresses such as nitrite, low potassium and phosphate starvation. Our study also led to the prediction of new genes and regulatory networks, which found corroboration in a compendium of transcriptome data available for D. vulgaris. For several regulators we predicted and experimentally verified the binding site motifs, most of which were discovered as part of this study. Conclusions The gene targets identified for the response regulators allowed strong functional predictions to be made for the corresponding two component systems. By tracking the D. vulgaris regulators and their motifs outside the Desulfovibrio spp. we provide testable hypotheses regarding the functions of orthologous regulators in other organisms. The in vitro array based method optimized here is generally applicable for the study of such systems in all organisms. PMID:21992415

  6. Using next generation transcriptome sequencing to predict an ectomycorrhizal metabolome.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, P. E.; Sreedasyam, A.; Trivedi, G

    Mycorrhizae, symbiotic interactions between soil fungi and tree roots, are ubiquitous in terrestrial ecosystems. The fungi contribute phosphorus, nitrogen, and nutrients mobilized from organic matter in the soil, and in return the fungus receives photosynthetically-derived carbohydrates. This union of plant and fungal metabolisms is the mycorrhizal metabolome. Understanding this symbiotic relationship at a molecular level provides important contributions to the understanding of forest ecosystems and global carbon cycling. We generated next generation short-read transcriptomic sequencing data from fully-formed ectomycorrhizae between Laccaria bicolor and aspen (Populus tremuloides) roots. The transcriptomic data were used to identify statistically significantly expressed gene models using a bootstrap-style approach, and these expressed genes were mapped to specific metabolic pathways. Integration of expressed genes that code for metabolic enzymes with the set of expressed membrane transporters generates a predictive model of the ectomycorrhizal metabolome. The generated model of the mycorrhizal metabolome predicts that the specific compounds glycine, glutamate, and allantoin are synthesized by L. bicolor and that these compounds or their metabolites may be used for the benefit of aspen in exchange for the photosynthetically-derived sugars fructose and glucose. The analysis illustrates an approach to generate testable biological hypotheses to investigate the complex molecular interactions that drive ectomycorrhizal symbiosis. These models are consistent with experimental environmental data and provide insight into the molecular exchange processes for organisms in this complex ecosystem. The method used here for predicting metabolomic models of mycorrhizal systems from deep RNA sequencing data can be generalized and is broadly applicable to transcriptomic data derived from complex systems.

  7. Refinement of Representation Theorems for Context-Free Languages

    NASA Astrophysics Data System (ADS)

    Fujioka, Kaoru

    In this paper, we obtain refinements of representation theorems for context-free languages by using Dyck languages, insertion systems, strictly locally testable languages, and morphisms. For instance, we improve the Chomsky-Schützenberger representation theorem and show that each context-free language L can be represented in the form L = h(D ∩ R), where D is a Dyck language, R is a strictly 3-testable language, and h is a morphism. A similar representation for context-free languages can be obtained using insertion systems of weight (3, 0) and strictly 4-testable languages.
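
    In the notation of the abstract, the improved representation states that for every context-free language L there exist a Dyck language D, a strictly 3-testable language R, and a morphism h with

        L = h(D \cap R),

    strengthening the classical Chomsky-Schützenberger theorem, in which R is only required to be regular.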

  8. Brain Organization and Psychodynamics

    PubMed Central

    Peled, Avi; Geva, Amir B.

    1999-01-01

    Any attempt to link brain neural activity and psychodynamic concepts requires a tremendous conceptual leap. Such a leap may be facilitated if a common language between brain and mind can be devised. System theory proposes formulations that may aid in reconceptualizing psychodynamic descriptions in terms of neural organizations in the brain. Once adopted, these formulations can help to generate testable predictions about brain–psychodynamic relations and thus significantly affect the future of psychotherapy. (The Journal of Psychotherapy Practice and Research 1999; 8:24–39) PMID:9888105

  9. The purpose of adaptation

    PubMed Central

    2017-01-01

    A central feature of Darwin's theory of natural selection is that it explains the purpose of biological adaptation. Here, I: emphasize the scientific importance of understanding what adaptations are for, in terms of facilitating the derivation of empirically testable predictions; discuss the population genetical basis for Darwin's theory of the purpose of adaptation, with reference to Fisher's ‘fundamental theorem of natural selection'; and show that a deeper understanding of the purpose of adaptation is achieved in the context of social evolution, with reference to inclusive fitness and superorganisms. PMID:28839927

  10. The purpose of adaptation.

    PubMed

    Gardner, Andy

    2017-10-06

    A central feature of Darwin's theory of natural selection is that it explains the purpose of biological adaptation. Here, I: emphasize the scientific importance of understanding what adaptations are for, in terms of facilitating the derivation of empirically testable predictions; discuss the population genetical basis for Darwin's theory of the purpose of adaptation, with reference to Fisher's 'fundamental theorem of natural selection'; and show that a deeper understanding of the purpose of adaptation is achieved in the context of social evolution, with reference to inclusive fitness and superorganisms.

  11. Causal Reasoning on Biological Networks: Interpreting Transcriptional Changes

    NASA Astrophysics Data System (ADS)

    Chindelevitch, Leonid; Ziemek, Daniel; Enayetallah, Ahmed; Randhawa, Ranjit; Sidders, Ben; Brockel, Christoph; Huang, Enoch

    Over the past decade gene expression data sets have been generated at an increasing pace. In addition to ever increasing data generation, the biomedical literature is growing exponentially. The PubMed database (Sayers et al., 2010) comprises more than 20 million citations as of October 2010. The goal of our method is the prediction of putative upstream regulators of observed expression changes based on a set of over 400,000 causal relationships. The resulting putative regulators constitute directly testable hypotheses for follow-up.

  12. Scaling properties of multitension domain wall networks

    NASA Astrophysics Data System (ADS)

    Oliveira, M. F.; Martins, C. J. A. P.

    2015-02-01

    We study the asymptotic scaling properties of domain wall networks with three different tensions in various cosmological epochs. We discuss the conditions under which a scale-invariant evolution of the network (which is well established for simpler walls) still applies and also consider the limiting case where defects are locally planar and the curvature is concentrated in the junctions. We present detailed quantitative predictions for scaling densities in various contexts, which should be testable by means of future high-resolution numerical simulations.

  13. Testability analysis on a hydraulic system in a certain equipment based on simulation model

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Cong, Hua; Liu, Yuanhong; Feng, Fuzhou

    2018-03-01

    To address the complicated structure of hydraulic systems and the shortage of fault statistics for them, a multi-valued testability analysis method based on a simulation model is proposed. Using an AMESim simulation model of the system, the method injects simulated faults and records the variation of test parameters, such as pressure and flow rate, at each test point relative to normal operating conditions. A multi-valued fault-test dependency matrix is thus established, from which the fault detection rate (FDR) and fault isolation rate (FIR) are calculated. The testability and fault diagnosis capability of the system are then analyzed and evaluated; the baseline design reaches only 54% FDR and 23% FIR. To improve the testability of the system, the number and position of the test points are optimized. Results show the proposed test-point placement scheme addresses the difficulty, inefficiency, and high cost of maintaining the system.
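
    A hedged sketch of the FDR/FIR computation described above, on a hypothetical multi-valued dependency matrix (0 = no deviation at a test point; +1/-1 = the monitored pressure or flow rises/falls under the injected fault; fault names and values are illustrative):

    import numpy as np

    faults = ["pump_wear", "relief_valve_leak", "filter_clog", "pipe_burst"]
    D = np.array([[+1, -1,  0],    # rows: faults, columns: test points
                  [-1,  0,  0],
                  [ 0,  0,  0],    # this fault never deviates any test
                  [-1,  0,  0]])   # same signature as relief_valve_leak

    detected = D.any(axis=1)
    fdr = detected.mean()                       # fault detection rate

    signatures = [tuple(row) for row in D]
    isolable = [d and signatures.count(s) == 1  # a unique signature is needed
                for d, s in zip(detected, signatures)]
    fir = sum(isolable) / max(int(detected.sum()), 1)   # fault isolation rate

    print(f"FDR = {fdr:.0%}, FIR = {fir:.0%}")

    Deleting matrix columns and recomputing these rates is then the natural way to evaluate candidate test-point placements.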

  14. Language, music, syntax and the brain.

    PubMed

    Patel, Aniruddh D

    2003-07-01

    The comparative study of music and language is drawing an increasing amount of research interest. Like language, music is a human universal involving perceptually discrete elements organized into hierarchically structured sequences. Music and language can thus serve as foils for each other in the study of brain mechanisms underlying complex sound processing, and comparative research can provide novel insights into the functional and neural architecture of both domains. This review focuses on syntax, using recent neuroimaging data and cognitive theory to propose a specific point of convergence between syntactic processing in language and music. This leads to testable predictions, including the prediction that syntactic comprehension problems in Broca's aphasia are not selective to language but influence music perception as well.

  15. Family nonuniversal Z' models with protected flavor-changing interactions

    NASA Astrophysics Data System (ADS)

    Celis, Alejandro; Fuentes-Martín, Javier; Jung, Martin; Serôdio, Hugo

    2015-07-01

    We define a new class of Z' models with neutral flavor-changing interactions at tree level in the down-quark sector. They are related in an exact way to elements of the quark mixing matrix due to an underlying flavored U(1)' gauge symmetry, rendering these models particularly predictive. The same symmetry implies lepton-flavor nonuniversal couplings, fully determined by the gauge structure of the model. Our models allow us to address presently observed deviations from the standard model, and the specific correlations they imply among the new physics contributions to the Wilson coefficients C_{9,10}^{(')ℓ} can be tested in b → s ℓ+ ℓ- transitions. We furthermore predict lepton-universality violations in Z' decays, testable at the LHC.

  16. MoCha: Molecular Characterization of Unknown Pathways.

    PubMed

    Lobo, Daniel; Hammelman, Jennifer; Levin, Michael

    2016-04-01

    Automated methods for the reverse-engineering of complex regulatory networks are paving the way for the inference of mechanistic comprehensive models directly from experimental data. These novel methods can infer not only the relations and parameters of the known molecules defined in their input datasets, but also unknown components and pathways identified as necessary by the automated algorithms. Identifying the molecular nature of these unknown components is a crucial step for making testable predictions and experimentally validating the models, yet no specific and efficient tools exist to aid in this process. To this end, we present here MoCha (Molecular Characterization), a tool optimized for the search of unknown proteins and their pathways from a given set of known interacting proteins. MoCha uses the comprehensive dataset of protein-protein interactions provided by the STRING database, which currently includes more than a billion interactions from over 2,000 organisms. MoCha is highly optimized, performing typical searches within seconds. We demonstrate the use of MoCha with the characterization of unknown components from reverse-engineered models from the literature. MoCha is useful for working on network models by hand or as a downstream step of a model inference engine workflow and represents a valuable and efficient tool for the characterization of unknown pathways using known data from thousands of organisms. MoCha and its source code are freely available online under the GPLv3 license.

  17. An Improved, Bias-Reduced Probabilistic Functional Gene Network of Baker's Yeast, Saccharomyces cerevisiae

    PubMed Central

    Lee, Insuk; Li, Zhihua; Marcotte, Edward M.

    2007-01-01

    Background Probabilistic functional gene networks are powerful theoretical frameworks for integrating heterogeneous functional genomics and proteomics data into objective models of cellular systems. Such networks provide syntheses of millions of discrete experimental observations, spanning DNA microarray experiments, physical protein interactions, genetic interactions, and comparative genomics; the resulting networks can then be easily applied to generate testable hypotheses regarding specific gene functions and associations. Methodology/Principal Findings We report a significantly improved version (v. 2) of a probabilistic functional gene network [1] of the baker's yeast, Saccharomyces cerevisiae. We describe our optimization methods and illustrate their effects in three major areas: the reduction of functional bias in network training reference sets, the application of a probabilistic model for calculating confidences in pair-wise protein physical or genetic interactions, and the introduction of simple thresholds that eliminate many false positive mRNA co-expression relationships. Using the network, we predict and experimentally verify the function of the yeast RNA binding protein Puf6 in 60S ribosomal subunit biogenesis. Conclusions/Significance YeastNet v. 2, constructed using these optimizations together with additional data, shows significant reduction in bias and improvements in precision and recall, in total covering 102,803 linkages among 5,483 yeast proteins (95% of the validated proteome). YeastNet is available from http://www.yeastnet.org. PMID:17912365

  18. Muscle MRI findings in facioscapulohumeral muscular dystrophy.

    PubMed

    Gerevini, Simonetta; Scarlato, Marina; Maggi, Lorenzo; Cava, Mariangela; Caliendo, Giandomenico; Pasanisi, Barbara; Falini, Andrea; Previtali, Stefano Carlo; Morandi, Lucia

    2016-03-01

    Facioscapulohumeral muscular dystrophy (FSHD) is characterized by extremely variable degrees of facial, scapular and lower limb muscle involvement. Clinical and genetic determination can be difficult, as molecular analysis is not always definitive, and other similar muscle disorders may have overlapping clinical manifestations. Whole-body muscle MRI examination for fat infiltration, atrophy and oedema was performed to identify specific patterns of muscle involvement in FSHD patients (30 subjects), compared to a group of control patients (23) affected by other myopathies (NFSHD). In FSHD patients, we detected a specific pattern of muscle fatty replacement and atrophy, particularly in upper girdle muscles. The most frequently affected muscles, including in paucisymptomatic and severely affected FSHD patients, were trapezius, teres major and serratus anterior. Moreover, asymmetric muscle involvement was significantly higher in FSHD as compared to NFSHD patients. In conclusion, muscle MRI is very sensitive for identifying a specific pattern of involvement in FSHD patients and for detecting selective involvement of non-clinically testable muscles. Muscle MRI constitutes a reliable tool for differentiating FSHD from other muscular dystrophies, for directing diagnostic molecular analysis, and for investigating FSHD natural history and follow-up of the disease. Muscle MRI identifies a specific pattern of muscle involvement in FSHD patients. Muscle MRI may predict FSHD in asymptomatic and severely affected patients. Muscle MRI of the upper girdle better predicts FSHD. Muscle MRI may differentiate FSHD from other forms of muscular dystrophy. Muscle MRI may show the involvement of non-clinically testable muscles.

  19. A Motor-Gradient and Clustering Model of the Centripetal Motility of MTOCs in Meiosis I of Mouse Oocytes

    PubMed Central

    2016-01-01

    Asters nucleated by Microtubule (MT) organizing centers (MTOCs) converge on chromosomes during spindle assembly in mouse oocytes undergoing meiosis I. Time-lapse imaging suggests that this centripetal motion is driven by a biased ‘search-and-capture’ mechanism. Here, we develop a model of a random walk in a drift field to test the nature of the bias and the spatio-temporal dynamics of the search process. The model is used to optimize the spatial field of drift in simulations, by comparison to experimental motility statistics. In a second step, this optimized gradient is used to determine the location of immobilized dynein motors and MT polymerization parameters, since these are hypothesized to generate the gradient of forces needed to move MTOCs. We compare these scenarios to self-organized mechanisms by which asters have been hypothesized to find the cell center: MT pushing at the cell boundary and clustering motor complexes. By minimizing the error between simulation outputs and experiments, we find that a model of “pulling” by a gradient of dynein motors alone can drive the centripetal motility. Interestingly, models of passive MT-based “pushing” at the cortex, clustering by cross-linking motors, and MT dynamic-instability gradients, by themselves, do not result in the observed motility. The model predicts the sensitivity of the results to motor density and stall force, but not to MTs per aster. A hybrid model combining a chromatin-centered immobilized dynein gradient, diffusible minus-end directed clustering motors, and pushing at the cell cortex is required to comprehensively explain the available data. The model makes experimentally testable predictions of a spatial bias and self-organized mechanisms by which MT asters can find the center of a large cell. PMID:27706163
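
    An illustrative random walk in a centripetal drift field in the spirit of this model (the diffusion coefficient, drift speeds, and decay length are hypothetical, not the fitted values):

    import numpy as np

    rng = np.random.default_rng(1)

    def capture_time(r0=35.0, r_capture=2.0, D=0.02,
                     v_max=0.05, decay=40.0, dt=1.0, t_max=7200.0):
        """Seconds until an MTOC starting r0 um out reaches the chromatin.

        The inward drift speed v_max * exp(-r / decay) stands in for a
        chromatin-centered gradient of immobilized dynein."""
        pos = np.array([r0, 0.0])
        t = 0.0
        while t < t_max:
            r = np.linalg.norm(pos)
            if r < r_capture:
                return t
            drift = -v_max * np.exp(-r / decay) * pos / r
            pos = pos + drift * dt + rng.normal(0.0, np.sqrt(2 * D * dt), 2)
            t += dt
        return np.inf

    times = np.array([capture_time() for _ in range(50)])
    print(f"captured {np.isfinite(times).sum()}/50, "
          f"median ~ {np.median(times[np.isfinite(times)]) / 60:.0f} min")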

  20. Interaction of E. coli outer-membrane protein A with sugars on the receptors of the brain microvascular endothelial cells.

    PubMed

    Datta, Deepshikha; Vaidehi, Nagarajan; Floriano, Wely B; Kim, Kwang S; Prasadarao, Nemani V; Goddard, William A

    2003-02-01

    Escherichia coli, the most common gram-negative bacterium, can penetrate the brain microvascular endothelial cells (BMECs) during the neonatal period to cause meningitis with significant morbidity and mortality. Experimental studies have shown that outer-membrane protein A (OmpA) of E. coli plays a key role in the initial steps of the invasion process by binding to specific sugar moieties present on the glycoproteins of BMEC. These experiments also show that polymers of chitobiose (GlcNAcβ1-4GlcNAc) block the invasion, while epitopes substituted with the L-fucosyl group do not. We used the HierDock computational technique, which combines a hierarchy of coarse-grain docking methods with molecular dynamics (MD), to predict the binding sites and energies of interactions of GlcNAcβ1-4GlcNAc and other sugars with OmpA. The results suggest two important binding sites for the interaction of carbohydrate epitopes of BMEC glycoproteins with OmpA. We identify one site as the binding pocket for chitobiose (GlcNAcβ1-4GlcNAc) in OmpA, while the second region (including loops 1 and 2) may be important for recognition of specific sugars. We find that the site involving loops 1 and 2 has relative binding energies that correlate well with experimental observations. This theoretical study elucidates the interaction sites of chitobiose with OmpA, and the binding site predictions made in this article are testable either by mutation studies or invasion assays. These results can be extended to suggest possible peptide antagonists and to inform drug design for therapeutic strategies. Copyright 2002 Wiley-Liss, Inc.

  1. A Motor-Gradient and Clustering Model of the Centripetal Motility of MTOCs in Meiosis I of Mouse Oocytes.

    PubMed

    Khetan, Neha; Athale, Chaitanya A

    2016-10-01

    Asters nucleated by microtubule (MT) organizing centers (MTOCs) converge on chromosomes during spindle assembly in mouse oocytes undergoing meiosis I. Time-lapse imaging suggests that this centripetal motion is driven by a biased 'search-and-capture' mechanism. Here, we develop a model of a random walk in a drift field to test the nature of the bias and the spatio-temporal dynamics of the search process. The model is used to optimize the spatial field of drift in simulations, by comparison to experimental motility statistics. In a second step, this optimized gradient is used to determine the location of immobilized dynein motors and MT polymerization parameters, since these are hypothesized to generate the gradient of forces needed to move MTOCs. We compare these scenarios to self-organized mechanisms by which asters have been hypothesized to find the cell center: MT pushing at the cell boundary and clustering motor complexes. By minimizing the error between simulation outputs and experiments, we find that a model of "pulling" by a gradient of dynein motors alone can drive the centripetal motility. Interestingly, models of passive MT-based "pushing" at the cortex, clustering by cross-linking motors and MT dynamic-instability gradients, by themselves, do not result in the observed motility. The model predicts sensitivity of the results to motor density and stall force, but not to the number of MTs per aster. A hybrid model combining a chromatin-centered immobilized dynein gradient, diffusible minus-end-directed clustering motors and pushing at the cell cortex is required to comprehensively explain the available data. The model makes experimentally testable predictions of a spatial bias and self-organized mechanisms by which MT asters can find the center of a large cell.

  2. Lift and drag in three-dimensional steady viscous and compressible flow

    NASA Astrophysics Data System (ADS)

    Liu, L. Q.; Wu, J. Z.; Su, W. D.; Kang, L. L.

    2017-11-01

    In a recent paper, Liu, Zhu, and Wu ["Lift and drag in two-dimensional steady viscous and compressible flow," J. Fluid Mech. 784, 304-341 (2015)] present a force theory for a body in a two-dimensional, viscous, compressible, and steady flow. In this companion paper, we do the same for three-dimensional flows. Using the fundamental solution of the linearized Navier-Stokes equations, we improve the force formula for incompressible flows originally derived by Goldstein in 1931 and summarized by Milne-Thomson in 1968, both of which were incomplete, to its final form, which is further proved to hold universally from subsonic to supersonic flows. We call this result the unified force theorem, which states that the forces are always determined by the vector circulation Γϕ of longitudinal velocity and the scalar inflow Qψ of transverse velocity. Since these quantities are not directly observable either experimentally or computationally, a testable version is also derived, which, however, holds only in the linear far field. We name this version the testable unified force formula. Finally, a general principle for increasing the lift-drag ratio is proposed.
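
    The 2-D companion result cited above has a compact schematic form; the restatement below is an assumption-laden paraphrase (signs and density prefactors follow the paper's far-field conventions and are indicated only up to proportionality), not a quotation of the 3-D theorem.

```latex
% Schematic 2-D form (Liu, Zhu & Wu 2015): lift from the circulation of the
% longitudinal velocity, drag from the inflow of the transverse velocity,
\[
  L \;\sim\; \rho\, U\, \Gamma_{\phi}, \qquad D \;\sim\; \rho\, U\, Q_{\psi},
\]
% with the 3-D unified force theorem promoting \Gamma_{\phi} to a vector
% circulation, so the force involves \mathbf{U} \times \boldsymbol{\Gamma}_{\phi}.
```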

  3. Comparison of Three Ionic Liquid-Tolerant Cellulases by Molecular Dynamics

    PubMed Central

    Jaeger, Vance; Burney, Patrick; Pfaendtner, Jim

    2015-01-01

    We have employed molecular dynamics to investigate the differences in ionic liquid tolerance among three distinct family 5 cellulases from Trichoderma viride, Thermotoga maritima, and Pyrococcus horikoshii. Simulations of the three cellulases were conducted at a range of temperatures in various binary mixtures of the ionic liquid 1-ethyl-3-methyl-imidazolium acetate with water. Our analysis demonstrates that the effects of ionic liquids vary in each individual case, from local structural disturbances to loss of much of one enzyme’s secondary structure. Enzymes with more negatively charged surfaces tend to resist destabilization by ionic liquids. Specific and unique structural changes in the enzymes are induced by the presence of ionic liquids. Disruption of secondary structure, changes in dynamical motion, and local changes in the binding pocket are observed in less tolerant enzymes. Ionic-liquid-induced denaturation of one of the enzymes is indicated over the 500 ns timescale. In contrast, the most tolerant cellulase behaves similarly in water and in ionic-liquid-containing mixtures. Unlike heuristic approaches that attempt to predict enzyme stability from macroscopic properties, molecular dynamics allows us to predict specific atomic-level structural and dynamical changes in an enzyme’s behavior induced by ionic liquids and other mixed solvents. Using these insights, we propose specific experimentally testable hypotheses regarding the origin of activity loss for each of the systems investigated in this study. PMID:25692593

  4. Reheating predictions in gravity theories with derivative coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalianis, Ioannis; Koutsoumbas, George; Ntrekis, Konstantinos

    2017-02-01

    We investigate the inflationary predictions of a simple Horndeski theory where the inflaton scalar field has a non-minimal derivative coupling (NMDC) to the Einstein tensor. The NMDC is well motivated for the construction of successful models of inflation; nevertheless, its inflationary predictions are not observationally distinct. We show that it is possible to probe the effects of the NMDC on the CMB observables by taking into account both the dynamics of the inflationary slow-roll phase and the subsequent reheating. We perform a comparative study between representative inflationary models with canonical fields minimally coupled to gravity and models with NMDC. We find that inflation models with dominant NMDC generically predict a higher reheating temperature and a different range for the tilt of the scalar perturbation spectrum n_s and the tensor-to-scalar ratio r, potentially testable by current and future CMB experiments.

  5. Phenomenological vs. biophysical models of thermal stress in aquatic eggs

    NASA Astrophysics Data System (ADS)

    Martin, B.

    2016-12-01

    Predicting species responses to climate change is a central challenge in ecology, with most efforts relying on lab derived phenomenological relationships between temperature and fitness metrics. We tested one of these models using the embryonic stage of a Chinook salmon population. We parameterized the model with laboratory data, applied it to predict survival in the field, and found that it significantly underestimated field-derived estimates of thermal mortality. We used a biophysical model based on mass-transfer theory to show that the discrepancy was due to the differences in water flow velocities between the lab and the field. This mechanistic approach provides testable predictions for how the thermal tolerance of embryos depends on egg size and flow velocity of the surrounding water. We found support for these predictions across more than 180 fish species, suggesting that flow and temperature mediated oxygen limitation is a general mechanism underlying the thermal tolerance of embryos.
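
    The mass-transfer reasoning in this abstract can be illustrated with a textbook film-model calculation: oxygen flux to the egg surface scales with a flow-dependent mass-transfer coefficient, so supply rises with water velocity and falls with egg size, which is the direction of the prediction tested across species. The Ranz-Marshall correlation and all parameter values below are conventional stand-ins, not the paper's calibrated model.

```python
import math

def oxygen_flux(u, d_egg=0.006, D_O2=2.0e-9, nu=1.3e-6, dC=8e-3):
    """Oxygen flux (kg m^-2 s^-1) to a spherical egg in flowing water.

    u: flow velocity (m/s); d_egg: egg diameter (m); D_O2: O2 diffusivity
    in water (m^2/s); nu: kinematic viscosity (m^2/s); dC: O2 concentration
    difference between bulk water and egg surface (kg/m^3).
    """
    Re = u * d_egg / nu                               # Reynolds number
    Sc = nu / D_O2                                    # Schmidt number
    Sh = 2.0 + 0.6 * math.sqrt(Re) * Sc ** (1 / 3)    # Ranz-Marshall correlation
    h_m = Sh * D_O2 / d_egg                           # mass-transfer coefficient (m/s)
    return h_m * dC

for u in (0.001, 0.01, 0.1):
    print(f"u = {u:5.3f} m/s -> O2 flux = {oxygen_flux(u):.2e} kg/m^2/s")
```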

  6. The CRISP theory of hippocampal function in episodic memory

    PubMed Central

    Cheng, Sen

    2013-01-01

    Over the past four decades, a “standard framework” has emerged to explain the neural mechanisms of episodic memory storage. This framework has been instrumental in driving hippocampal research forward and now dominates the design and interpretation of experimental and theoretical studies. It postulates that cortical inputs drive plasticity in the recurrent cornu ammonis 3 (CA3) synapses to rapidly imprint memories as attractor states in CA3. Here we review a range of experimental studies and argue that the evidence against the standard framework is mounting, notwithstanding the considerable evidence in its support. We propose CRISP as an alternative theory to the standard framework. CRISP is based on Context Reset by dentate gyrus (DG), Intrinsic Sequences in CA3, and Pattern completion in cornu ammonis 1 (CA1). Compared to previous models, CRISP uses a radically different mechanism for storing episodic memories in the hippocampus. Neural sequences are intrinsic to CA3, and inputs are mapped onto these intrinsic sequences through synaptic plasticity in the feedforward projections of the hippocampus. Hence, CRISP does not require plasticity in the recurrent CA3 synapses during the storage process. As in other theories, DG and CA1 play supporting roles; however, their functions in CRISP have distinct implications. For instance, CA1 performs pattern completion in the absence of CA3, and DG contributes to episodic memory retrieval, increasing the speed, precision, and robustness of retrieval. We propose the conceptual theory, discuss its implications for experimental results, and suggest testable predictions. It appears that CRISP not only accounts for those experimental results that are consistent with the standard framework, but also for results that are at odds with the standard framework. We therefore suggest that CRISP is a viable, and perhaps superior, theory of hippocampal function in episodic memory. PMID:23653597

  7. Induction and modulation of persistent activity in a layer V PFC microcircuit model

    PubMed Central

    Papoutsi, Athanasia; Sidiropoulou, Kyriaki; Cutsuridis, Vassilis; Poirazi, Panayiota

    2013-01-01

    Working memory refers to the temporary storage of information and is strongly associated with the prefrontal cortex (PFC). Persistent activity of cortical neurons, namely activity that persists beyond the stimulus presentation, is considered the cellular correlate of working memory. Although past studies suggested that this type of activity is characteristic of large-scale networks, recent experimental evidence implies that small, tightly interconnected clusters of neurons in the cortex may support similar functionalities. However, very little is known about the biophysical mechanisms giving rise to persistent activity in small-sized microcircuits in the PFC. Here, we present a biophysically detailed, yet morphologically simplified, microcircuit model of layer V PFC neurons that incorporates connectivity constraints and is validated against a multitude of experimental data. We show that (a) a small-sized network can exhibit persistent activity under realistic stimulus conditions. (b) Its emergence depends strongly on the interplay of dADP, NMDA, and GABAB currents. (c) Although increases in stimulus duration increase the probability of persistent activity induction, variability in the stimulus firing frequency does not consistently influence it. (d) Modulation of ionic conductances (Ih, ID, IsAHP, IcaL, IcaN, IcaR) differentially controls persistent activity properties in a location-dependent manner. These findings suggest that modulation of the microcircuit's firing characteristics is achieved primarily through changes in its intrinsic mechanism makeup, supporting the hypothesis of multiple bi-stable units in the PFC. Overall, the model generates a number of experimentally testable predictions that may lead to a better understanding of the biophysical mechanisms of persistent activity induction and modulation in the PFC. PMID:24130519

  8. Evolution of the CRISPR-Cas adaptive immunity systems in prokaryotes: models and observations on virus-host coevolution.

    PubMed

    Koonin, Eugene V; Wolf, Yuri I

    2015-01-01

    CRISPR-Cas is an adaptive immunity system in prokaryotes that functions via a unique mechanism which involves incorporation of foreign DNA fragments into CRISPR arrays and subsequent utilization of transcripts of these inserts (known as spacers) as guide RNAs to cleave the cognate selfish element genome. Multiple attempts have been undertaken to explore the coevolution of viruses and microbial hosts carrying CRISPR-Cas using mathematical models that employ either systems of differential equations or an agent-based approach, or combinations thereof. Analysis of these models reveals highly complex co-evolutionary dynamics that ensues from the combination of the heritability of the CRISPR-mediated adaptive immunity with the existence of different degrees of immunity depending on the number of cognate spacers and the cost of carrying a CRISPR-Cas locus. Depending on the details of the models, a variety of testable, sometimes conflicting predictions have been made on the dependence of the degree of immunity and the benefit of maintaining CRISPR-Cas on the abundance and diversity of hosts and viruses. Some of these predictions have already been directly validated experimentally. In particular, both the reality of the virus-host arms race, with viruses escaping resistance and hosts reacquiring it through the capture of new spacers, and the fitness cost of CRISPR-Cas due to the curtailment of beneficial HGT have been reproduced in the laboratory. However, to test the predictions of the models more specifically, detailed studies of coevolving populations of microbes and viruses both in nature and in the laboratory are essential. Such analyses are expected to yield disagreements with the predictions of the current, oversimplified models and to trigger a new round of theoretical developments.
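
    A toy version of the differential-equation models surveyed here helps fix ideas: sensitive hosts, spacer-carrying immune hosts (paying a fitness cost for CRISPR-Cas), and free virus. The functional forms and parameters below are illustrative inventions in the spirit of these models, not any specific published system.

```python
import numpy as np
from scipy.integrate import solve_ivp

# S: sensitive hosts, I: immune (spacer-carrying) hosts, V: free virus.
def rhs(t, y, r=1.0, K=1e6, a=1e-6, q=0.01, c=0.05, m=1.0, b=20.0):
    S, I, V = y
    infect = a * S * V                                       # adsorption to sensitive hosts
    dS = r * S * (1 - (S + I) / K) - infect
    dI = r * (1 - c) * I * (1 - (S + I) / K) + q * infect    # c: cost of CRISPR-Cas
    dV = b * (1 - q) * infect - m * V - a * I * V            # immune hosts adsorb but resist
    return [dS, dI, dV]

sol = solve_ivp(rhs, (0, 200), [1e5, 0.0, 1e3], rtol=1e-8)
S, I, V = sol.y[:, -1]
print(f"t=200: sensitive={S:.3g}, immune={I:.3g}, virus={V:.3g}")
```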

  9. Generating Testable Questions in the Science Classroom: The BDC Model

    ERIC Educational Resources Information Center

    Tseng, ChingMei; Chen, Shu-Bi; Chang, Wen-Hua

    2015-01-01

    Guiding students to generate testable scientific questions is essential in the inquiry classroom, but it is not easy. The purpose of the BDC ("Big Idea, Divergent Thinking, and Convergent Thinking") instructional model is to scaffold students' inquiry learning. We illustrate the use of this model with an example lesson, designed…

  10. Easily Testable PLA-Based Finite State Machines

    DTIC Science & Technology

    1989-03-01

    PLATYPUS [20]. Then, justification paths are obtained from the STG using simple logic... faults of type 1, 4 and 5 can be guaranteed to be testable... a vector pair that is generated by the first corrupted next-state lines is found, if such a vector pair exists, using PLATYPUS [20].

  11. LSI (Large Scale Integrated) Design for Testability. Final Report of Design, Demonstration, and Testability Analysis.

    DTIC Science & Technology

    1983-11-01

    compound operations, with status. (h) Pre-programmed CRC and double-precision multiply/divide algorithms. (i) Double-length accumulator with full...

  12. Genetic models of homosexuality: generating testable predictions

    PubMed Central

    Gavrilets, Sergey; Rice, William R

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism. PMID:17015344
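
    For the overdominance branch of such models, classical one-locus theory already supplies a concrete testable quantity; the textbook result below is stated for orientation and is not quoted from the paper.

```latex
% One-locus overdominance: fitnesses w_{AA} = 1 - s_1, w_{Aa} = 1, w_{aa} = 1 - s_2.
% Heterozygote advantage maintains the polymorphism at the stable equilibrium
\[
  \hat{p} \;=\; \frac{s_2}{s_1 + s_2},
\]
% so QTL effect sizes and allele frequencies can help discriminate
% overdominance from the sexual-antagonism alternative.
```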

  13. New physics at the TeV scale

    NASA Astrophysics Data System (ADS)

    Chakdar, Shreyashi

    The Standard Model of particle physics is assumed to be a low-energy effective theory, with new physics theoretically motivated to be around the TeV scale. The thesis presents theories with new physics beyond the Standard Model at the TeV scale testable in colliders. Work done in chapters 2, 3 and 5 of this thesis presents models incorporating different approaches to enlarging the Standard Model gauge group to a grand unified symmetry, with each model presenting its unique signatures in colliders. The study of leptoquark gauge bosons in the TopSU(5) model in chapter 2 showed that their discovery mass range extends up to 1.5 TeV at the 14 TeV LHC with a luminosity of 100 fb^-1. On the other hand, in chapter 3 we studied the collider phenomenology of TeV-scale mirror fermions in the Left-Right Mirror model, finding that the reach for mirror quarks goes up to 750 GeV at the 14 TeV LHC with 300 fb^-1 luminosity. In chapter 4 we have enlarged the bosonic symmetry to a fermi-bose symmetry, i.e., supersymmetry, and have shown that SUSY with non-universalities in gaugino or scalar masses within a high-scale SUGRA setup can still be accessible at the LHC at 14 TeV. In chapter 5, we performed a study with respect to the e+e- collider and find that precise measurements of the Higgs boson mass splittings up to ~100 MeV may be possible with high luminosity at the International Linear Collider (ILC). In chapter 6 we have shown that the experimental data on neutrino masses and mixings are consistent with the proposed 4/5-parameter Dirac neutrino models, yielding a solution for the neutrino masses with inverted mass hierarchy and a large CP-violating phase δ, which can thus be tested experimentally. Chapter 7 of the thesis incorporates a warm dark matter candidate in the context of a two-Higgs-doublet model. The model has several testable consequences at colliders, with the charged scalar and pseudoscalar being in the few-hundred-GeV mass range. This thesis presents an endeavor to study beyond-Standard-Model physics at the TeV scale with testable signals at colliders.

  14. Eye Examination Testability in Children with Autism and in Typical Peers

    PubMed Central

    Coulter, Rachel Anastasia; Bade, Annette; Tea, Yin; Fecho, Gregory; Amster, Deborah; Jenewein, Erin; Rodena, Jacqueline; Lyons, Kara Kelley; Mitchell, G. Lynn; Quint, Nicole; Dunbar, Sandra; Ricamato, Michele; Trocchio, Jennie; Kabat, Bonnie; Garcia, Chantel; Radik, Irina

    2015-01-01

    Purpose To compare testability of vision and eye tests in an examination protocol of 9- to 17-year-old patients with autism spectrum disorder (ASD) to typically developing (TD) peers. Methods In a prospective pilot study, 61 children and adolescents (34 with ASD and 27 who were TD) aged 9 to 17 years completed an eye examination protocol including tests of visual acuity, refraction, convergence (eye teaming), stereoacuity (depth perception), ocular motility, and ocular health. Patients who required new refractive correction were retested after wearing their updated spectacle prescription for 1 month. The specialized protocol incorporated visual, sensory, and communication supports. A psychologist determined group status/eligibility using DSM-IV-TR (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision) criteria by review of previous evaluations and parent responses on the Social Communication Questionnaire. Before the examination, parents provided information regarding patients’ sex, race, ethnicity, and, for ASD patients, verbal communication level (nonverbal, uses short words, verbal). Parents indicated whether the patient wore a refractive correction, whether the patient had ever had an eye examination, and the age at the last examination. Chi-square tests compared testability results for TD and ASD groups. Results Typically developing and ASD groups did not differ by age (p = 0.54), sex (p = 0.53), or ethnicity (p = 0.22). Testability was high on most tests (TD, 100%; ASD, 88 to 100%), except for intraocular pressure (IOP), which was reduced for both the ASD (71%) and the TD (89%) patients. Among ASD patients, IOP testability varied greatly with verbal communication level (p < 0.001). Although IOP measurements were completed on all verbal patients, only 37.5% of nonverbal and 44.4% of ASD patients who used short words were successful. Conclusions Patients with ASD can complete most vision and eye tests within an examination protocol. Testability of IOPs is reduced, particularly for nonverbal patients and patients who use short words to communicate. PMID:25415280

  15. Estimating outflow facility through pressure dependent pathways of the human eye

    PubMed Central

    Gardiner, Bruce S.

    2017-01-01

    We develop and test a new theory for pressure dependent outflow from the eye. The theory comprises three main parameters: (i) a constant hydraulic conductivity, (ii) an exponential decay constant and (iii) a no-flow intraocular pressure, from which the total pressure dependent outflow, average outflow facilities and local outflow facilities for the whole eye may be evaluated. We use a new notation to specify precisely the meaning of model parameters and so model outputs. Drawing on a range of published data, we apply the theory to animal eyes, enucleated eyes and in vivo human eyes, and demonstrate how to evaluate model parameters. It is shown that the theory can fit high quality experimental data remarkably well. The new theory predicts that outflow facilities and total pressure dependent outflow for the whole eye are more than twice as large as estimates based on the Goldman equation and fluorometric analysis of anterior aqueous outflow. It appears likely that this discrepancy can be largely explained by pseudofacility and aqueous flow through the retinal pigmented epithelium, while any residual discrepancy may be due to pathological processes in aged eyes. The model predicts that if the hydraulic conductivity is too small, or the exponential decay constant is too large, then intraocular eye pressure may become unstable when subjected to normal circadian changes in aqueous production. The model also predicts relationships between variables that may be helpful when planning future experiments, and the model generates many novel testable hypotheses. With additional research, the analysis described here may find application in the differential diagnosis, prognosis and monitoring of glaucoma. PMID:29261696
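
    One plausible closed form consistent with the abstract's three parameters (a hydraulic conductivity C, an exponential decay constant beta, and a no-flow pressure P_0) is sketched below; the paper's exact notation and functional form may differ.

```latex
% Assume the local outflow facility decays exponentially with pressure:
%   c(P) = C\, e^{-\beta (P - P_0)}, \quad c(P_0) = C, \quad Q(P_0) = 0.
% The total pressure-dependent outflow is then
\[
  Q(P) \;=\; \int_{P_0}^{P} C\, e^{-\beta (p - P_0)}\, dp
        \;=\; \frac{C}{\beta}\Bigl(1 - e^{-\beta (P - P_0)}\Bigr),
\]
% and the average outflow facility over the interval is Q(P) / (P - P_0).
```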

  16. Julius Edgar Lilienfeld Prize Lecture: The Higgs Boson, String Theory, and the Real World

    NASA Astrophysics Data System (ADS)

    Kane, Gordon

    2012-03-01

    In this talk I'll describe how string theory is exciting because it can address most, perhaps all, of the questions we hope to understand about our world: why quarks and leptons make up our world, what forces form our world, cosmology, parity violation, and much more. I'll explain why string theory is testable in basically the same ways as the rest of physics, and why much of what is written about that is misleading. String theory is already or soon being tested in several ways, including correctly predicting the recently observed Higgs boson properties and mass, and predictions for dark matter, LHC physics, cosmological history, and more, from work in the increasingly active subfield ``string phenomenology.''

  17. A Framework for Evidence-Based Licensure of Adaptive Autonomous Systems

    DTIC Science & Technology

    2016-03-01

    insights gleaned to DoD. The autonomy community has identified significant challenges associated with test, evaluation verification and validation of...licensure as a test, evaluation, verification , and validation (TEVV) framework that can address these challenges. IDA found that traditional...language requirements to testable (preferably machine testable) specifications • Design of architectures that treat development and verification of

  18. Monte Carlo modeling of single-molecule cytoplasmic dynein.

    PubMed

    Singh, Manoranjan P; Mallik, Roop; Gross, Steven P; Yu, Clare C

    2005-08-23

    Molecular motors are responsible for active transport and organization in the cell, underlying an enormous number of crucial biological processes. Dynein is more complicated in its structure and function than other motors. Recent experiments have found that, unlike other motors, dynein can take different size steps along microtubules depending on load and ATP concentration. We use Monte Carlo simulations to model the molecular motor function of cytoplasmic dynein at the single-molecule level. The theory relates dynein's enzymatic properties to its mechanical force production. Our simulations reproduce the main features of recent single-molecule experiments that found a discrete distribution of dynein step sizes, depending on load and ATP concentration. The model reproduces the large steps found experimentally under high ATP and no load by assuming that the ATP binding affinities at the secondary sites decrease as the number of ATP bound to these sites increases. Additionally, to capture the essential features of the step-size distribution at very low ATP concentration and no load, the ATP hydrolysis of the primary site must be dramatically reduced when none of the secondary sites have ATP bound to them. We make testable predictions that should guide future experiments related to dynein function.
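
    A Monte Carlo cartoon of the load- and ATP-dependent step-size distribution conveys the flavor of such a model. The weighting function below is a toy chosen only to reproduce the qualitative trends the abstract reports (large steps at high ATP and no load, small steps under load or ATP scarcity); it does not implement the paper's site-by-site ATP bookkeeping, and every parameter is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def step_size_probs(atp_mM, load_pN, k_half=0.3):
    """Toy probabilities over dynein step sizes (8-32 nm)."""
    occupancy = atp_mM / (atp_mM + k_half)   # crude secondary-site occupancy
    sizes = np.array([8.0, 16.0, 24.0, 32.0])
    # Positive bias favors large steps; ATP raises it, load lowers it
    bias = 2.0 * (occupancy - 0.4 - 0.6 * load_pN)
    w = np.exp(bias * sizes / 8.0)
    return sizes, w / w.sum()

def mean_step(atp_mM, load_pN, n=10000):
    sizes, p = step_size_probs(atp_mM, load_pN)
    return rng.choice(sizes, size=n, p=p).mean()

for atp, load in [(1.0, 0.0), (0.005, 0.0), (1.0, 1.0)]:
    print(f"[ATP]={atp:5.3f} mM, load={load:.1f} pN -> "
          f"mean step {mean_step(atp, load):.1f} nm")
```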

  19. Potential benefits of plant diversity on vegetated roofs: a literature review.

    PubMed

    Cook-Patton, Susan C; Bauerle, Taryn L

    2012-09-15

    Although vegetated green roofs can be difficult to establish and maintain, they are an increasingly popular method for mitigating the negative environmental impacts of urbanization. Most green roof development has focused on maximizing green roof performance by planting one or a few drought-tolerant species. We present an alternative approach, which recognizes green roofs as dynamic ecosystems and employs a diversity of species. We draw links between the ecological and green roof literature to generate testable predictions about how increasing plant diversity could improve short- and long-term green roof functioning. Although we found few papers that experimentally manipulated diversity on green roofs, those that did revealed ecological dynamics similar to those in more natural systems. However, there are many unresolved issues. To improve overall green roof performance, we should (1) elucidate the links among plant diversity, structural complexity, and green roof performance, (2) describe feedback mechanisms between plant and animal diversity on green roofs, (3) identify species with complementary traits, and (4) determine whether diverse green roof communities are more resilient to disturbance and environmental change than less diverse green roofs. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Context-Dependent Encoding of Fear and Extinction Memories in a Large-Scale Network Model of the Basal Amygdala

    PubMed Central

    Vlachos, Ioannis; Herry, Cyril; Lüthi, Andreas; Aertsen, Ad; Kumar, Arvind

    2011-01-01

    The basal nucleus of the amygdala (BA) is involved in the formation of context-dependent conditioned fear and extinction memories. To understand the underlying neural mechanisms we developed a large-scale neuron network model of the BA, composed of excitatory and inhibitory leaky-integrate-and-fire neurons. Excitatory BA neurons received conditioned stimulus (CS)-related input from the adjacent lateral nucleus (LA) and contextual input from the hippocampus or medial prefrontal cortex (mPFC). We implemented a plasticity mechanism according to which CS and contextual synapses were potentiated if CS and contextual inputs temporally coincided on the afferents of the excitatory neurons. Our simulations revealed a differential recruitment of two distinct subpopulations of BA neurons during conditioning and extinction, mimicking the activation of experimentally observed cell populations. We propose that these two subgroups encode contextual specificity of fear and extinction memories, respectively. Mutual competition between them, mediated by feedback inhibition and driven by contextual inputs, regulates the activity in the central amygdala (CEA) thereby controlling amygdala output and fear behavior. The model makes multiple testable predictions that may advance our understanding of fear and extinction memories. PMID:21437238

  1. The evolution of social and semantic networks in epistemic communities

    NASA Astrophysics Data System (ADS)

    Margolin, Drew Berkley

    This study describes and tests a model of scientific inquiry as an evolving, organizational phenomenon. Arguments are derived from organizational ecology and evolutionary theory. The empirical subject of study is an epistemic community of scientists publishing on a research topic in physics: the string theoretic concept of "D-branes." The study uses evolutionary theory as a means of predicting change in the way members of the community choose concepts to communicate acceptable knowledge claims. It is argued that the pursuit of new knowledge is risky, because the reliability of a novel knowledge claim cannot be verified until after substantial resources have been invested. Using arguments from both philosophy of science and organizational ecology, it is suggested that scientists can mitigate and sensibly share the risks of knowledge discovery within the community by articulating their claims in legitimate forms, i.e., forms that are testable within and relevant to the community. Evidence from empirical studies of semantic usage suggests that the legitimacy of a knowledge claim is influenced by the characteristics of the concepts in which it is articulated. A model of conceptual retention, variation, and selection is then proposed for predicting the usage of concepts and conceptual co-occurrences in the future publications of the community, based on its past. Results substantially supported hypothesized retention and selection mechanisms. Future concept usage was predictable from previous concept usage, but was limited by conceptual carrying capacity as predicted by density dependence theory. Also as predicted, retention was stronger when the community showed a more cohesive social structure. Similarly, concepts that showed structural signatures of high testability and relevance were more likely to be selected after previous usage frequency was controlled for. By contrast, hypotheses for variation mechanisms were not supported. Surprisingly, concepts whose structural position suggested they would be easiest to discover through search processes were used less frequently, once previous usage frequency was controlled for. The study also makes a theoretical contribution by suggesting ways that evolutionary theory can be used to integrate findings from the study of science with insights from organizational communication. A variety of concrete directions for future studies of social and semantic network evolution are also proposed.

  2. Testable solution of the cosmological constant and coincidence problems

    NASA Astrophysics Data System (ADS)

    Shaw, Douglas J.; Barrow, John D.

    2011-02-01

    We present a new solution to the cosmological constant (CC) and coincidence problems in which the observed value of the CC, Λ, is linked to other observable properties of the Universe. This is achieved by promoting the CC from a parameter that must be specified to a field that can take many possible values. The observed value of Λ ≈ (9.3 Gyr)^-2 [≈ 10^-120 in Planck units] is determined by a new constraint equation which follows from the application of a causally restricted variation principle. When applied to our visible Universe, the model makes a testable prediction for the dimensionless spatial curvature of Ω_k0 = -0.0056 (ζ_b/0.5), where ζ_b ~ 1/2 is a QCD parameter. Requiring that a classical history exist, our model determines the probability of observing a given Λ. The observed CC value, which we successfully predict, is typical within our model even before the effects of anthropic selection are included. When anthropic selection effects are accounted for, we find that the observed coincidence between t_Λ = Λ^-1/2 and the age of the Universe, t_U, is a typical occurrence in our model. In contrast to multiverse explanations of the CC problems, our solution is independent of the choice of a prior weighting of different Λ values and does not rely on anthropic selection effects. Our model includes no unnatural small parameters, does not require the introduction of new dynamical scalar fields or modifications to general relativity, and can be tested by astronomical observations in the near future.

  3. Superstitiousness in obsessive-compulsive disorder

    PubMed Central

    Brugger, Peter; Viaud-Delmon, Isabelle

    2010-01-01

    It has been speculated that superstitiousness and obsessive-compulsive disorder (OCD) exist along a continuum. The distinction between superstitious behavior and superstitious belief, however, is crucial for any theoretical account of claimed associations between superstitiousness and OCD. By demonstrating that there is a dichotomy between behavior and belief, which is experimentally testable, we can differentiate superstitious behavior from superstitious belief, or magical ideation. Different brain circuits are responsible for these two forms of superstitiousness; thus, determining which type of superstition is prominent in the symptomatology of an individual patient may inform us about the primarily affected neurocognitive systems. PMID:20623929

  4. Authors’ response: mirror neurons: tests and testability.

    PubMed

    Catmur, Caroline; Press, Clare; Cook, Richard; Bird, Geoffrey; Heyes, Cecilia

    2014-04-01

    Commentators have tended to focus on the conceptual framework of our article, the contrast between genetic and associative accounts of mirror neurons, and to challenge it with additional possibilities rather than empirical data. This makes the empirically focused comments especially valuable. The mirror neuron debate is replete with ideas; what it needs now are system-level theories and careful experiments – tests and testability.

  5. A test of the hypothesis that correlational selection generates genetic correlations.

    PubMed

    Roff, Derek A; Fairbairn, Daphne J

    2012-09-01

    Theory predicts that correlational selection on two traits will cause the major axis of the bivariate G matrix to orient itself in the same direction as the correlational selection gradient. Two testable predictions follow from this: for a given pair of traits, (1) the sign of correlational selection gradient should be the same as that of the genetic correlation, and (2) the correlational selection gradient should be positively correlated with the value of the genetic correlation. We test this hypothesis with a meta-analysis utilizing empirical estimates of correlational selection gradients and measures of the correlation between the two focal traits. Our results are consistent with both predictions and hence support the underlying hypothesis that correlational selection generates a genetic correlation between the two traits and hence orients the bivariate G matrix. © 2012 The Author(s). Evolution© 2012 The Society for the Study of Evolution.
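
    The two predictions translate directly into a simple meta-analytic recipe: a sign-concordance test and a correlation between gradients and genetic correlations. The sketch below runs both on simulated stand-in data (the actual analysis of course used the published estimates).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulated stand-ins for published estimates:
gamma = rng.normal(0.0, 0.2, size=40)   # correlational selection gradients
rG = np.clip(0.8 * gamma + rng.normal(0.0, 0.1, size=40), -1, 1)  # genetic correlations

# Prediction 1: signs should agree more often than chance
same_sign = np.sign(gamma) == np.sign(rG)
p_sign = stats.binomtest(int(same_sign.sum()), n=gamma.size, p=0.5).pvalue

# Prediction 2: gradient values should covary with the genetic correlations
r, p_r = stats.pearsonr(gamma, rG)

print(f"sign concordance: {same_sign.mean():.2f} (binomial p = {p_sign:.3g})")
print(f"gamma vs rG: r = {r:.2f} (p = {p_r:.3g})")
```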

  6. Cognitive architectures and language acquisition: a case study in pronoun comprehension.

    PubMed

    van Rij, Jacolien; van Rijn, Hedderik; Hendriks, Petra

    2010-06-01

    In this paper we discuss a computational cognitive model of children's poor performance on pronoun interpretation (the so-called Delay of Principle B Effect, or DPBE). This cognitive model is based on a theoretical account that attributes the DPBE to children's inability as hearers to also take into account the speaker's perspective. The cognitive model predicts that child hearers are unable to do so because their speed of linguistic processing is too limited to perform this second step in interpretation. We tested this hypothesis empirically in a psycholinguistic study, in which we slowed down the speech rate to give children more time for interpretation, and in a computational simulation study. The results of the two studies confirm the predictions of our model. Moreover, these studies show that embedding a theory of linguistic competence in a cognitive architecture allows for the generation of detailed and testable predictions with respect to linguistic performance.

  7. The possible consequences for cognitive functions of external electric fields at power line frequency on hippocampal CA1 pyramidal neurons.

    PubMed

    Migliore, Rosanna; De Simone, Giada; Leinekugel, Xavier; Migliore, Michele

    2017-04-01

    The possible effects on cognitive processes of external electric fields, such as those generated by power-line pylons and household appliances, are of increasing public concern. They are difficult to study experimentally, and the relatively scarce and contradictory evidence makes it difficult to clearly assess these effects. In this study, we investigate how, why, and to what extent external perturbations of intrinsic neuronal activity, such as those that can be caused by the generation, transmission and use of electrical energy, can affect neuronal activity during cognitive processes. For this purpose, we used a morphologically and biophysically realistic three-dimensional model of CA1 pyramidal neurons. The simulation findings suggest that an electric field oscillating at power-line frequency, and of environmentally measured strength, can significantly alter both the average firing rate and the temporal spike distribution properties of a hippocampal CA1 pyramidal neuron. This effect strongly depends on the specific and instantaneous relative spatial location of the neuron with respect to the field, and on the synaptic input properties. The model makes experimentally testable predictions on the possible functional consequences for normal hippocampal functions such as object recognition and spatial navigation. The results suggest that, although EF effects on cognitive processes may be unlikely to occur in everyday life, their functional consequences deserve some consideration, especially when they constitute a systematic presence in living environments. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  8. Structural similitude and design of scaled down laminated models

    NASA Technical Reports Server (NTRS)

    Simitses, G. J.; Rezaeepazhand, J.

    1993-01-01

    The excellent mechanical properties of laminated composite structures make them prime candidates for a wide variety of applications in aerospace, mechanical and other branches of engineering. The enormous design flexibility of advanced composites is obtained at the cost of a large number of design parameters. Due to the complexity of these systems and the lack of complete design-based information, designers tend to be conservative in their designs. Furthermore, any new design is extensively evaluated experimentally until it achieves the necessary reliability, performance and safety. However, experimental evaluation of composite structures is costly and time consuming. Consequently, it is extremely useful if a full-scale structure can be replaced by a similar scaled-down model which is much easier to work with. Furthermore, a dramatic reduction in cost and time can be achieved if available experimental data for a specific structure can be used to predict the behavior of a group of similar systems. This study investigates problems associated with the design of scaled models. Such a study is important since it provides the necessary scaling laws and identifies the factors which affect the accuracy of the scale models. Similitude theory is employed to develop the necessary similarity conditions (scaling laws). Scaling laws provide the relationship between a full-scale structure and its scale model, and can be used to extrapolate the experimental data of a small, inexpensive, and testable model into design information for a large prototype. Due to the large number of design parameters, identification of the principal scaling laws by the conventional method (dimensional analysis) is tedious. Similitude theory based on the governing equations of the structural system is more direct and simpler in execution. The difficulty of making completely similar scale models often leads to accepting certain types of distortion from exact duplication of the prototype (partial similarity). Both complete and partial similarity are discussed. The procedure consists of systematically observing the effect of each parameter and the corresponding scaling laws. Acceptable intervals and limitations for these parameters and scaling laws are then discussed. In each case, a set of valid scaling factors and corresponding response scaling laws that accurately predict the response of prototypes from experimental models is introduced. The examples used include rectangular laminated plates under destabilizing loads applied individually, the vibrational characteristics of the same plates, and the cylindrical bending of beam-plates.

  9. Fingerprinting the impacts of global change on tropical forests.

    PubMed

    Lewis, Simon L; Malhi, Yadvinder; Phillips, Oliver L

    2004-03-29

    Recent observations of widespread changes in mature tropical forests such as increasing tree growth, recruitment and mortality rates and increasing above-ground biomass suggest that 'global change' agents may be causing predictable changes in tropical forests. However, consensus over both the robustness of these changes and the environmental drivers that may be causing them is yet to emerge. This paper focuses on the second part of this debate. We review (i) the evidence that the physical, chemical and biological environment that tropical trees grow in has been altered over recent decades across large areas of the tropics, and (ii) the theoretical, experimental and observational evidence regarding the most likely effects of each of these changes on tropical forests. Ten potential widespread drivers of environmental change were identified: temperature, precipitation, solar radiation, climatic extremes (including El Niño-Southern Oscillation events), atmospheric CO2 concentrations, nutrient deposition, O3/acid depositions, hunting, land-use change and increasing liana numbers. We note that each of these environmental changes is expected to leave a unique 'fingerprint' in tropical forests, as drivers directly force different processes, have different distributions in space and time and may affect some forests more than others (e.g. depending on soil fertility). Thus, in the third part of the paper we present testable a priori predictions of forest responses to assist ecologists in attributing particular changes in forests to particular causes across multiple datasets. Finally, we discuss how these drivers may change in the future and the possible consequences for tropical forests.

  10. Representing high throughput expression profiles via perturbation barcodes reveals compound targets.

    PubMed

    Filzen, Tracey M; Kutchukian, Peter S; Hermes, Jeffrey D; Li, Jing; Tudor, Matthew

    2017-02-01

    High throughput mRNA expression profiling can be used to characterize the response of cell culture models to perturbations such as pharmacologic modulators and genetic perturbations. As profiling campaigns expand in scope, it is important to homogenize, summarize, and analyze the resulting data in a manner that captures significant biological signals in spite of various noise sources such as batch effects and stochastic variation. We used the L1000 platform for large-scale profiling of 978 representative genes across thousands of compound treatments. Here, a method is described that uses deep learning techniques to convert the expression changes of the landmark genes into a perturbation barcode that reveals important features of the underlying data, performing better than the raw data in revealing important biological insights. The barcode captures compound structure and target information, and predicts a compound's high throughput screening promiscuity, to a higher degree than the original data measurements, indicating that the approach uncovers underlying factors of the expression data that are otherwise entangled or masked by noise. Furthermore, we demonstrate that visualizations derived from the perturbation barcode can be used to more sensitively assign functions to unknown compounds through a guilt-by-association approach, which we use to predict and experimentally validate the activity of compounds on the MAPK pathway. The demonstrated application of deep metric learning to large-scale chemical genetics projects highlights the utility of this and related approaches to the extraction of insights and testable hypotheses from big, sometimes noisy data.
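
    As a rough intuition pump for what a "perturbation barcode" is, the sketch below binarizes a random projection of a 978-gene signature, so that similar profiles land on nearby codes. This is only a locality-sensitive-hashing stand-in for the learned deep-metric encoder the paper describes, and all names and sizes here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

n_genes, n_bits = 978, 64
W = rng.standard_normal((n_genes, n_bits)) / np.sqrt(n_genes)

def barcode(profile):
    """profile: z-scored landmark-gene expression changes, shape (978,)."""
    return (profile @ W > 0).astype(np.uint8)

def hamming(a, b):
    return int((a != b).sum())

x = rng.standard_normal(n_genes)                  # one compound's signature
x_rep = x + 0.3 * rng.standard_normal(n_genes)    # noisy replicate of the same compound
y = rng.standard_normal(n_genes)                  # unrelated compound
print("replicate Hamming distance:", hamming(barcode(x), barcode(x_rep)))
print("unrelated Hamming distance:", hamming(barcode(x), barcode(y)))
```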

  11. Representing high throughput expression profiles via perturbation barcodes reveals compound targets

    PubMed Central

    Kutchukian, Peter S.; Li, Jing; Tudor, Matthew

    2017-01-01

    High throughput mRNA expression profiling can be used to characterize the response of cell culture models to perturbations such as pharmacologic modulators and genetic perturbations. As profiling campaigns expand in scope, it is important to homogenize, summarize, and analyze the resulting data in a manner that captures significant biological signals in spite of various noise sources such as batch effects and stochastic variation. We used the L1000 platform for large-scale profiling of 978 representative genes across thousands of compound treatments. Here, a method is described that uses deep learning techniques to convert the expression changes of the landmark genes into a perturbation barcode that reveals important features of the underlying data, performing better than the raw data in revealing important biological insights. The barcode captures compound structure and target information, and predicts a compound’s high throughput screening promiscuity, to a higher degree than the original data measurements, indicating that the approach uncovers underlying factors of the expression data that are otherwise entangled or masked by noise. Furthermore, we demonstrate that visualizations derived from the perturbation barcode can be used to more sensitively assign functions to unknown compounds through a guilt-by-association approach, which we use to predict and experimentally validate the activity of compounds on the MAPK pathway. The demonstrated application of deep metric learning to large-scale chemical genetics projects highlights the utility of this and related approaches to the extraction of insights and testable hypotheses from big, sometimes noisy data. PMID:28182661

  12. Hypothesis testing and earthquake prediction.

    PubMed

    Jackson, D D

    1996-04-30

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
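
    The three tests enumerated here are easy to state concretely for a forecast given as expected counts per space-time-magnitude bin. The numbers below are invented for illustration; only the test logic follows the abstract.

```python
import numpy as np
from scipy import stats

expected = np.array([0.5, 1.2, 0.1, 2.0, 0.7])  # forecast rates per bin (invented)
observed = np.array([1,   0,   0,   3,   1  ])  # observed event counts

# (i) Number test: total count vs. a Poisson forecast
N_exp, N_obs = expected.sum(), observed.sum()
print(f"N-test: expected {N_exp:.1f}, observed {N_obs}, "
      f"P(N <= obs) = {stats.poisson.cdf(N_obs, N_exp):.2f}")

# (ii) Likelihood score of the observed catalog under the forecast
logL = stats.poisson.logpmf(observed, expected).sum()
print(f"log-likelihood under forecast: {logL:.2f}")

# (iii) Likelihood ratio against a null hypothesis (uniform rates here)
null = np.full_like(expected, expected.sum() / expected.size)
logL0 = stats.poisson.logpmf(observed, null).sum()
print(f"log-likelihood ratio vs null: {logL - logL0:.2f}")
```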

  13. Hypothesis testing and earthquake prediction.

    PubMed Central

    Jackson, D D

    1996-01-01

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions. PMID:11607663

  14. Artificial Intelligence Applications to Testability.

    DTIC Science & Technology

    1984-10-01

    general software assistant; examining testability utilization of it should wait a few years until the software assistant is a well-defined product ... ago. It provides a single host which satisfies the needs of developers, product developers, and end users. As shown in table 5.10-2, it also provides ... follows a trend towards more user-oriented design approaches to interactive computer systems. The implicit goal in this trend is the

  15. The need for theory to guide concussion research.

    PubMed

    Molfese, Dennis L

    2015-01-01

    Although research into concussion has greatly expanded over the past decade, progress in identifying the mechanisms and consequences of head injury and recovery are largely absent. Instead, data are accumulated without the guidance of a systematic theory to direct research questions or generate testable hypotheses. As part of this special issue on sports concussion, I advance a theory that emphasizes changes in spatial and temporal distributions of the brain's neural networks during normal learning and the disruptions of these networks following injury. Specific predictions are made regarding both the development of the network as well as its breakdown following injury.

  16. Predicting rates of interspecific interaction from phylogenetic trees.

    PubMed

    Nuismer, Scott L; Harmon, Luke J

    2015-01-01

    Integrating phylogenetic information can potentially improve our ability to explain species' traits, patterns of community assembly, the network structure of communities, and ecosystem function. In this study, we use mathematical models to explore the ecological and evolutionary factors that modulate the explanatory power of phylogenetic information for communities of species that interact within a single trophic level. We find that phylogenetic relationships among species can influence trait evolution and rates of interaction among species, but only under particular models of species interaction. For example, when interactions within communities are mediated by a mechanism of phenotype matching, phylogenetic trees make specific predictions about trait evolution and rates of interaction. In contrast, if interactions within a community depend on a mechanism of phenotype differences, phylogenetic information has little, if any, predictive power for trait evolution and interaction rate. Together, these results make clear and testable predictions for when and how evolutionary history is expected to influence contemporary rates of species interaction. © 2014 John Wiley & Sons Ltd/CNRS.
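
    The matching/differences contrast admits a compact formalization; the functional forms below are conventional choices in this literature (stated here as assumptions, not quotations): under matching, interaction rises with trait similarity, while under differences it depends on which partner has the larger trait.

```python
import numpy as np

def p_matching(xi, xj, alpha=1.0):
    # Interaction strongest between phenotypically similar species
    return np.exp(-alpha * (xi - xj) ** 2)

def p_differences(xi, xj, alpha=1.0):
    # Interaction favors the species with the larger trait value
    return 1.0 / (1.0 + np.exp(-alpha * (xi - xj)))

traits = np.array([0.0, 0.5, 2.0])
for f in (p_matching, p_differences):
    M = f(traits[:, None], traits[None, :])   # pairwise interaction matrix
    print(f.__name__, "\n", np.round(M, 2))
```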

  17. Simple neural substrate predicts complex rhythmic structure in duetting birds

    NASA Astrophysics Data System (ADS)

    Amador, Ana; Trevisan, M. A.; Mindlin, G. B.

    2005-09-01

    Horneros (Furnarius rufus) are South American birds well known for their oven-like nests and their ability to sing in couples. Previous work has analyzed the rhythmic organization of the duets, unveiling a mathematical structure behind the songs. In this work we analyze in detail an extended database of duets. The rhythms of the songs are compatible with the dynamics presented by a wide class of dynamical systems: forced excitable systems. Consistent with this nonlinear rule, we build a biologically inspired model of how the neural and anatomical elements may interact to produce the observed rhythmic patterns. This model allows us to synthesize songs presenting the acoustic and rhythmic features observed in real songs. We also make testable predictions in order to support our hypothesis.
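
    The class of models invoked here, forced excitable systems, can be illustrated with a periodically driven FitzHugh-Nagumo unit; drive amplitude and frequency select which locking pattern, and hence which rhythm, appears. Parameters below are generic textbook values, not fitted to hornero song.

```python
import numpy as np
from scipy.integrate import solve_ivp

def fhn(t, y, eps=0.08, a=0.7, b=0.8, A=0.6, f=0.1):
    """FitzHugh-Nagumo excitable unit with sinusoidal forcing."""
    v, w = y
    drive = A * np.sin(2 * np.pi * f * t)
    dv = v - v ** 3 / 3 - w + drive
    dw = eps * (v + a - b * w)
    return [dv, dw]

sol = solve_ivp(fhn, (0, 500), [-1.0, 1.0], max_step=0.05)
v = sol.y[0]
spikes = np.sum((v[:-1] < 1.0) & (v[1:] >= 1.0))   # upward threshold crossings
print(f"{spikes} spikes in 500 time units at forcing frequency f = 0.1")
```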

  18. Empirical approaches to the study of language evolution.

    PubMed

    Fitch, W Tecumseh

    2017-02-01

    The study of language evolution, and human cognitive evolution more generally, has often been ridiculed as unscientific, but in fact it differs little from many other disciplines that investigate past events, such as geology or cosmology. Well-crafted models of language evolution make numerous testable hypotheses, and if the principles of strong inference (simultaneous testing of multiple plausible hypotheses) are adopted, there is an increasing amount of relevant data allowing empirical evaluation of such models. The articles in this special issue provide a concise overview of current models of language evolution, emphasizing the testable predictions that they make, along with overviews of the many sources of data available to test them (emphasizing comparative, neural, and genetic data). The key challenge facing the study of language evolution is not a lack of data, but rather a weak commitment to hypothesis-testing approaches and strong inference, exacerbated by the broad and highly interdisciplinary nature of the relevant data. This introduction offers an overview of the field, and a summary of what needed to evolve to provide our species with language-ready brains. It then briefly discusses different contemporary models of language evolution, followed by an overview of different sources of data to test these models. I conclude with my own multistage model of how different components of language could have evolved.

  19. Are there two processes in reasoning? The dimensionality of inductive and deductive inferences.

    PubMed

    Stephens, Rachel G; Dunn, John C; Hayes, Brett K

    2018-03-01

    Single-process accounts of reasoning propose that the same cognitive mechanisms underlie inductive and deductive inferences. In contrast, dual-process accounts propose that these inferences depend upon 2 qualitatively different mechanisms. To distinguish between these accounts, we derived a set of single-process and dual-process models based on an overarching signal detection framework. We then used signed difference analysis to test each model against data from an argument evaluation task, in which induction and deduction judgments are elicited for sets of valid and invalid arguments. Three data sets were analyzed: data from Singmann and Klauer (2011), a database of argument evaluation studies, and the results of an experiment designed to test model predictions. Of the large set of testable models, we found that almost all could be rejected, including all 2-dimensional models. The only testable model able to account for all 3 data sets was a model with 1 dimension of argument strength and independent decision criteria for induction and deduction judgments. We conclude that despite the popularity of dual-process accounts, current results from the argument evaluation task are best explained by a single-process account that incorporates separate decision thresholds for inductive and deductive inferences. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
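
    The winning single-process account is straightforward to caricature: one latent strength axis, two independent criteria. The sketch below uses invented distribution and criterion values, not the parameters estimated by Stephens et al.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # One latent dimension of argument strength; valid arguments are stronger
    # on average. Means and criteria are invented for illustration.
    n = 100_000
    valid = rng.normal(1.5, 1.0, n)     # strengths of valid arguments
    invalid = rng.normal(0.0, 1.0, n)   # strengths of invalid arguments

    c_induction = 0.3   # laxer criterion: "is the conclusion plausible?"
    c_deduction = 1.0   # stricter criterion: "does it follow necessarily?"

    for label, s in (("valid", valid), ("invalid", invalid)):
        print(f"{label:7s}  P(induction yes) = {np.mean(s > c_induction):.2f}"
              f"  P(deduction yes) = {np.mean(s > c_deduction):.2f}")
    ```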

  20. Induction and modulation of persistent activity in a layer V PFC microcircuit model.

    PubMed

    Papoutsi, Athanasia; Sidiropoulou, Kyriaki; Cutsuridis, Vassilis; Poirazi, Panayiota

    2013-01-01

    Working memory refers to the temporary storage of information and is strongly associated with the prefrontal cortex (PFC). Persistent activity of cortical neurons, namely activity that persists beyond the stimulus presentation, is considered the cellular correlate of working memory. Although past studies suggested that this type of activity is characteristic of large-scale networks, recent experimental evidence implies that small, tightly interconnected clusters of neurons in the cortex may support similar functionalities. However, very little is known about the biophysical mechanisms giving rise to persistent activity in small-sized microcircuits in the PFC. Here, we present a detailed biophysical, yet morphologically simplified, microcircuit model of layer V PFC neurons that incorporates connectivity constraints and is validated against a multitude of experimental data. We show that (a) a small-sized network can exhibit persistent activity under realistic stimulus conditions; (b) its emergence depends strongly on the interplay of dADP, NMDA, and GABAB currents; (c) although increases in stimulus duration increase the probability of persistent activity induction, variability in the stimulus firing frequency does not consistently influence it; and (d) modulation of ionic conductances (I_h, I_D, I_sAHP, I_CaL, I_CaN, I_CaR) differentially controls persistent activity properties in a location-dependent manner. These findings suggest that modulation of the microcircuit's firing characteristics is achieved primarily through changes in its intrinsic mechanism makeup, supporting the hypothesis of multiple bi-stable units in the PFC. Overall, the model generates a number of experimentally testable predictions that may lead to a better understanding of the biophysical mechanisms of persistent activity induction and modulation in the PFC.
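
    As a cartoon of how slow recurrent excitation can sustain firing beyond a stimulus, the following toy rate model (not the conductance-based microcircuit itself; all parameters invented) shows a unit whose NMDA-like slow feedback holds it in an active state after the input is withdrawn.

    ```python
    import numpy as np

    # Toy rate unit with a slow NMDA-like recurrent variable s. A transient
    # stimulus switches the unit into a self-sustaining active state.
    dt = 1.0                          # ms
    w, tau_r, tau_s, theta = 2.2, 20.0, 150.0, 0.5
    r, s = 0.0, 0.0
    rates = []
    for t in np.arange(0.0, 3000.0, dt):
        stim = 1.5 if 500.0 <= t < 1000.0 else 0.0
        drive = np.clip(w * s + stim - theta, 0.0, 5.0)  # thresholded, saturating
        r += dt / tau_r * (-r + drive)
        s += dt / tau_s * (-s + r)    # slow positive feedback (NMDA-like)
        rates.append(r)

    # Activity persists long after stimulus offset at t = 1000 ms.
    print("rate before/during/after stimulus: %.2f / %.2f / %.2f"
          % (rates[400], rates[900], rates[2500]))
    ```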

  1. The diffusion of evidence-based decision making among local health department practitioners in the United States.

    PubMed

    Harris, Jenine K; Erwin, Paul C; Smith, Carson; Brownson, Ross C

    2015-01-01

    Evidence-based decision making (EBDM) is the process, in local health departments (LHDs) and other settings, of translating the best available scientific evidence into practice. Local health departments are more likely to be successful if they use evidence-based strategies. However, EBDM and use of evidence-based strategies by LHDs are not widespread. Drawing on diffusion of innovations theory, we sought to understand how LHD directors and program managers perceive the relative advantage, compatibility, simplicity, and testability of EBDM. Directors and managers of programs in chronic disease, environmental health, and infectious disease from LHDs nationwide completed a survey including demographic information and questions about diffusion attributes (advantage, compatibility, simplicity, and testability) related to EBDM. Bivariate inferential tests were used to compare responses between directors and managers and to examine associations between participant characteristics and diffusion attributes. Relative advantage and compatibility scores were high for directors and managers, whereas simplicity and testability scores were lower. Although health department directors and managers of programs in chronic disease generally had higher scores than other groups, there were few significant or large differences between directors and managers across the diffusion attributes. Larger jurisdiction population size was associated with higher relative advantage and compatibility scores for both directors and managers. Overall, directors and managers were in strong agreement on the relative advantage of an LHD using EBDM, with directors in stronger agreement than managers. Perceived relative advantage has been demonstrated to be the most important factor in the rate of innovation adoption, suggesting an opportunity for directors to speed EBDM adoption. However, lower average scores across all groups for simplicity and testability may be hindering EBDM adoption. Recommended strategies for increasing perceived EBDM simplicity and testability are provided.

  2. The Williams' legacy: A critical reappraisal of his nine predictions about the evolution of senescence.

    PubMed

    Gaillard, Jean-Michel; Lemaître, Jean-François

    2017-12-01

    Williams' evolutionary theory of senescence based on antagonistic pleiotropy has become a landmark in evolutionary biology, and more recently in biogerontology and evolutionary medicine. In his original article, Williams launched a set of nine "testable deductions" from his theory. Although some of these predictions have been repeatedly discussed, most have been overlooked, and no systematic evaluation of the whole set of Williams' original predictions has been performed. For the sixtieth anniversary of the publication of Williams' article, we provide an updated evaluation of all these predictions. We present the pros and cons of each prediction based on the recent accumulation of both theoretical and empirical studies performed in the laboratory and in the wild. From our viewpoint, six predictions are mostly supported by our current knowledge, at least under some conditions (although Williams' theory cannot thoroughly explain why for some of them). Three predictions, all involving the timing of senescence, are not supported. Our critical review of Williams' predictions highlights the importance of Williams' contribution and clearly demonstrates that, 60 years after its publication, his article does not show any sign of senescence. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  3. Bridging the gap between structural bioinformatics and receptor research: the membrane-embedded, ligand-gated, P2X glycoprotein receptor.

    PubMed

    Mager, Peter P; Weber, Anje; Illes, Peter

    2004-01-01

    No details on P2X receptor architecture had been known at the atomic resolution level. Using comparative homology-based molecular modelling and threading, an attempt was made to predict the three-dimensional structure of P2X receptors. This prediction could not be carried out, however, because important properties of the P2X family differ considerably from those of the potential template proteins. This paper reviews an alternative approach combining three research fields: bioinformatics, structural modelling, and a variety of results from biological experiments. The starting point is the amino acid sequence. Using the sequence data, the first step is a secondary structure prediction. The resulting secondary structure is converted into a three-dimensional geometry. Then, the secondary and tertiary structures are optimized by using the quantum chemistry RHF/3-21G minimal basis set and the all-atom molecular mechanics AMBER96 force field. The fold of the membrane-embedded protein is simulated with a suitable dielectric. The structure is refined using a conjugate gradient minimizer (Fletcher-Reeves modification of the Polak-Ribiere method). The results of the geometry optimization were checked by a Ramachandran plot, rotamer analysis, all-atom contact dots, and the C(beta) deviation. As additional tools for the model building, multiple alignment analysis and comparative sequence-function analysis were used. The approach is exemplified on the membrane-embedded, ligand-gated P2X3 receptor subunit, a monovalent-bivalent cation channel-forming glycoprotein that is activated by extracellular adenosine 5'-triphosphate. From these results, a topology of the pore-forming motif of the P2X3 receptor subunit was proposed. It is believed that a fully functional P2X channel requires a precise coupling between (i) two distinct peptide modules, an extracellularly occurring ATP-binding module and a pore module that includes a long transmembrane and short intracellular part, (ii) an interaction surface with membranes, and (iii) hydrogen bonding forces of the residues and hydrated cations. Furthermore, this paper demonstrates the role of quantitative structure-activity relationships (QSARs) in P2X research (calcium ion permeability of the wild-type and after site-directed mutagenesis of the rat P2X2 receptor protein, KN-62 analogs as competitive antagonists of the human P2X7 receptor). EXPERIMENTAL PROOFS: The predictions are experimentally testable and may provide an additional interpretation of experimental observations published in the literature. In particular, there is good agreement of the geometry-optimized P2X3 structure with experimentally proposed P2X receptor models obtained by neurophysiological, biochemical, pharmacological, and mutation experiments. Although the rat P2X3 receptor subunit is more complex (397 amino acids) than the KcsA protein (160 amino acids), the overall folds of the peptide backbone atoms are similar. To avoid semantic confusion, it should be noted that "prediction" is defined in a probabilistic sense. Matches to generic rules do not mean "this is true" but rather "this might be true". Only biological and chemical knowledge can determine whether or not these predictions are meaningful. Thus, the results from the computational tools are probabilistic predictions and subject to further experimental verification. The geometry-optimized P2X3 receptor subunit is freely available for academic researchers on e-mail request (PDB format).
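
    Of the numerical machinery listed above, the conjugate-gradient refinement step is the easiest to illustrate in isolation. The sketch below is a generic nonlinear conjugate-gradient minimizer with a Polak-Ribiere update and a backtracking line search, smoke-tested on the Rosenbrock function; it stands in for the optimizer class named in the text, not for the authors' actual implementation or force field.

    ```python
    import numpy as np

    def cg_minimize(f, grad, x0, tol=1e-6, max_iter=2000):
        """Generic nonlinear conjugate-gradient minimizer (Polak-Ribiere+
        beta, backtracking Armijo line search). A sketch, not the paper's
        code."""
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g
        for _ in range(max_iter):
            step, fx = 1.0, f(x)
            while f(x + step * d) > fx + 1e-4 * step * g.dot(d):
                step *= 0.5
                if step < 1e-12:
                    break
            x_new = x + step * d
            g_new = grad(x_new)
            if np.linalg.norm(g_new) < tol:
                return x_new
            beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))  # PR+, auto-restart
            d = -g_new + beta * d
            x, g = x_new, g_new
        return x

    # Smoke test on the Rosenbrock function (minimum at [1, 1]).
    f = lambda p: (1 - p[0])**2 + 100 * (p[1] - p[0]**2)**2
    df = lambda p: np.array([-2 * (1 - p[0]) - 400 * p[0] * (p[1] - p[0]**2),
                             200 * (p[1] - p[0]**2)])
    print(cg_minimize(f, df, [-1.2, 1.0]))
    ```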

  4. Testable solution of the cosmological constant and coincidence problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Douglas J.; Barrow, John D.

    2011-02-15

    We present a new solution to the cosmological constant (CC) and coincidence problems in which the observed value of the CC, Λ, is linked to other observable properties of the Universe. This is achieved by promoting the CC from a parameter that must be specified to a field that can take many possible values. The observed value Λ ≈ (9.3 Gyr)^-2 [≈ 10^-120 in Planck units] is determined by a new constraint equation which follows from the application of a causally restricted variation principle. When applied to our visible Universe, the model makes a testable prediction for the dimensionless spatial curvature of Ω_k0 = -0.0056 (ζ_b/0.5), where ζ_b ≈ 1/2 is a QCD parameter. Requiring that a classical history exist, our model determines the probability of observing a given Λ. The observed CC value, which we successfully predict, is typical within our model even before the effects of anthropic selection are included. When anthropic selection effects are accounted for, we find that the observed coincidence between t_Λ = Λ^(-1/2) and the age of the Universe, t_U, is a typical occurrence in our model. In contrast to multiverse explanations of the CC problems, our solution is independent of the choice of a prior weighting of different Λ values and does not rely on anthropic selection effects. Our model includes no unnatural small parameters and does not require the introduction of new dynamical scalar fields or modifications to general relativity, and it can be tested by astronomical observations in the near future.

  5. Integral method for the calculation of Hawking radiation in dispersive media. I. Symmetric asymptotics.

    PubMed

    Robertson, Scott; Leonhardt, Ulf

    2014-11-01

    Hawking radiation has become experimentally testable thanks to the many analog systems which mimic the effects of the event horizon on wave propagation. These systems are typically dominated by dispersion and give rise to a numerically soluble and stable ordinary differential equation only if the rest-frame dispersion relation Ω^2(k) is a polynomial of relatively low degree. Here we present a new method for the calculation of wave scattering in a one-dimensional medium of arbitrary dispersion. It views the wave equation as an integral equation in Fourier space, which can be solved using standard and efficient numerical techniques.

  6. Proposed experiment to test fundamentally binary theories

    NASA Astrophysics Data System (ADS)

    Kleinmann, Matthias; Vértesi, Tamás; Cabello, Adán

    2017-09-01

    Fundamentally binary theories are nonsignaling theories in which measurements of many outcomes are constructed by selecting from binary measurements. They constitute a sensible alternative to quantum theory and have never been directly falsified by any experiment. Here we show that fundamentally binary theories are experimentally testable with current technology. For that, we identify a feasible Bell-type experiment on pairs of entangled qutrits. In addition, we prove that, for any n, quantum n-ary correlations are not fundamentally (n-1)-ary. For that, we introduce a family of inequalities that hold for fundamentally (n-1)-ary theories but are violated by quantum n-ary correlations.

  7. Emergent quantum mechanics without wavefunctions

    NASA Astrophysics Data System (ADS)

    Mesa Pascasio, J.; Fussy, S.; Schwabl, H.; Grössing, G.

    2016-03-01

    We present our model of an Emergent Quantum Mechanics which can be characterized by “realism without pre-determination”. This is illustrated by our analytic description and corresponding computer simulations of Bohmian-like “surreal” trajectories, which are obtained classically, i.e. without the use of any quantum mechanical tool such as wavefunctions. However, these trajectories do not necessarily represent ontological paths of particles but rather mappings of the probability density flux in a hydrodynamical sense. Modelling emergent quantum mechanics in a high-low-intensity double-slit scenario gives rise to the “quantum sweeper effect” with a characteristic intensity pattern. This phenomenon should be experimentally testable via weak measurement techniques.

  8. Subluxation: dogma or science?

    PubMed Central

    Keating, Joseph C; Charlton, Keith H; Grod, Jaroslaw P; Perle, Stephen M; Sikorski, David; Winterstein, James F

    2005-01-01

    Subluxation syndrome is a legitimate, potentially testable, theoretical construct for which there is little experimental evidence. Acceptable as hypothesis, the widespread assertion of the clinical meaningfulness of this notion brings ridicule from the scientific and health care communities and confusion within the chiropractic profession. We believe that an evidence-orientation among chiropractors requires that we distinguish between subluxation dogma vs. subluxation as the potential focus of clinical research. We lament efforts to generate unity within the profession through consensus statements concerning subluxation dogma, and believe that cultural authority will continue to elude us so long as we assert dogma as though it were validated clinical theory. PMID:16092955

  9. A possible molecular mechanism for the pressure reversal of general anaesthetics: Aggregation of halothane in POPC bilayers at high pressure

    NASA Astrophysics Data System (ADS)

    Tu, K. M.; Matubayasi, N.; Liang, K. K.; Todorov, I. T.; Chan, S. L.; Chau, P.-L.

    2012-08-01

    We placed halothane, a general anaesthetic, inside palmitoyloleoylphosphatidylcholine (POPC) bilayers and performed molecular dynamics simulations at atmospheric and raised pressures. We demonstrated that halothane aggregated inside POPC membranes at 20 MPa but not at 40 MPa. The pressure range of aggregation matches that of pressure reversal in whole animals, which strongly suggests that this could be the mechanism underlying the effect. Combining these results with previous experimental data, we describe a testable hypothesis of how aggregation of general anaesthetics at high pressure can lead to pressure reversal, the effect whereby these drugs lose their efficacy at high pressure.

  10. Smart substrates: Making multi-chip modules smarter

    NASA Astrophysics Data System (ADS)

    Wunsch, T. F.; Treece, R. K.

    1995-05-01

    A novel multi-chip module (MCM) design and manufacturing methodology which utilizes active CMOS circuits in what is normally a passive substrate realizes the 'smart substrate' for use in highly testable, high-reliability MCMs. The active devices are used to test the bare substrate, diagnose assembly errors or integrated circuit (IC) failures that require rework, and improve the testability of the final MCM assembly. A static random access memory (SRAM) MCM has been designed and fabricated in Sandia's Microelectronics Development Laboratory in order to demonstrate the technical feasibility of this concept and to examine design and manufacturing issues which will ultimately determine the economic viability of this approach. The smart substrate memory MCM represents a first in MCM packaging. At the time the first modules were fabricated, no other company or MCM vendor had incorporated active devices in the substrate to improve manufacturability and testability, and thereby improve MCM reliability and reduce cost.

  11. BioCompoundML: A General Biofuel Property Screening Tool for Biological Molecules Using Random Forest Classifiers

    DOE PAGES

    Whitmore, Leanne S.; Davis, Ryan W.; McCormick, Robert L.; ...

    2016-09-15

    Screening a large number of biologically derived molecules for potential fuel compounds without recourse to experimental testing is important in identifying understudied yet valuable molecules. Experimental testing, although a valuable standard for measuring fuel properties, has several major limitations, including the requirement of quantities large enough for testing, considerable expense, and a large amount of time. This paper discusses the development of a general-purpose fuel property tool, using machine learning, whose outcome is to screen molecules for desirable fuel properties. BioCompoundML adopts a general methodology, requiring as input only a list of training compounds (with identifiers and measured values) and a list of testing compounds (with identifiers). For the training data, BioCompoundML collects open data from the National Center for Biotechnology Information, incorporates user-provided features, imputes missing values, performs feature reduction, builds a classifier, and clusters compounds. BioCompoundML then collects data for the testing compounds, predicts class membership, and determines whether compounds are found in the range of variability of the training data set. We demonstrate this tool using three different fuel properties: research octane number (RON), threshold soot index (TSI), and melting point (MP). Here we provide measures of its success with these properties using randomized train/test measurements: average accuracy is 88% in RON, 85% in TSI, and 94% in MP; average precision is 88% in RON, 88% in TSI, and 95% in MP; and average recall is 88% in RON, 82% in TSI, and 97% in MP. The receiver operating characteristics (area under the curve) were estimated at 0.88 in RON, 0.86 in TSI, and 0.87 in MP. We also measured the success of BioCompoundML by sending 16 compounds for direct RON determination. Finally, we provide a screen of 1977 hydrocarbons/oxygenates within the 8696 compounds in MetaCyc, identifying compounds with high predictive strength for high or low RON.
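
    The train/validate/screen core of such a pipeline is compact. The following sketch assumes scikit-learn and substitutes a random matrix for a real compound-descriptor table; it illustrates the random-forest classification step only, not BioCompoundML's actual code or feature collection.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)

    # Stand-in descriptor table: 200 "compounds" x 16 molecular descriptors,
    # with a label marking whether a property (e.g., RON) clears a threshold.
    X = rng.normal(size=(200, 16))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200)) > 0

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))

    # Screening: fit on all training compounds, then rank untested candidates
    # by predicted class probability.
    clf.fit(X, y)
    candidates = rng.normal(size=(10, 16))
    print("P(high property):", clf.predict_proba(candidates)[:, 1].round(2))
    ```

    In the real tool, the descriptor columns come from NCBI data and user-provided features, with imputation and feature reduction applied before training.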

  12. Constant-roll (quasi-)linear inflation

    NASA Astrophysics Data System (ADS)

    Karam, A.; Marzola, L.; Pappas, T.; Racioppi, A.; Tamvakis, K.

    2018-05-01

    In constant-roll inflation, the scalar field that drives the accelerated expansion of the Universe is rolling down its potential at a constant rate. Within this framework, we highlight the relations between the Hubble slow-roll parameters and the potential ones, studying in detail the case of a single-field Coleman-Weinberg model characterised by a non-minimal coupling of the inflaton to gravity. With respect to the exact constant-roll predictions, we find that assuming an approximate slow-roll behaviour yields a difference of Δr = 0.001 in the tensor-to-scalar ratio prediction. Such a discrepancy is in principle testable by future satellite missions. As for the scalar spectral index n_s, we find that the existing 2σ bound constrains the value of the non-minimal coupling to ξ_φ ~ 0.29-0.31 in the model under consideration.

  13. Computational model for living nematic

    NASA Astrophysics Data System (ADS)

    Genkin, Mikhail; Sokolov, Andrey; Lavrentovich, Oleg; Aranson, Igor

    A realization of an active system has been conceived by combining swimming bacteria and a lyotropic nematic liquid crystal. Here, by coupling the well-established and validated model of nematic liquid crystals with bacterial dynamics, we developed a computational model describing the intricate properties of such a living nematic. In faithful agreement with experiment, the model reproduces the onset of periodic undulation of the nematic director and the consequent proliferation of topological defects as bacterial concentration increases. It yields a testable prediction on the accumulation and transport of bacteria in the cores of +1/2 topological defects and the depletion of bacteria in the cores of -1/2 defects. Our new experiment on motile bacteria suspended in a free-standing liquid crystalline film fully confirmed this prediction. This effect can be used to capture and manipulate small quantities of bacteria.

  14. A one-dimensional statistical mechanics model for nucleosome positioning on genomic DNA.

    PubMed

    Tesoro, S; Ali, I; Morozov, A N; Sulaiman, N; Marenduzzo, D

    2016-02-12

    The first level of folding of DNA in eukaryotes is provided by the so-called '10 nm chromatin fibre', where DNA wraps around histone proteins (∼10 nm in size) to form nucleosomes, which go on to create a zig-zagging bead-on-a-string structure. In this work we present a one-dimensional statistical mechanics model to study nucleosome positioning within one such 10 nm fibre. We focus on the case of genomic sheep DNA, and we start from effective potentials valid at infinite dilution and determined from high-resolution in vitro salt dialysis experiments. We study positioning within a polynucleosome chain, and compare the results for genomic DNA to that obtained in the simplest case of homogeneous DNA, where the problem can be mapped to a Tonks gas. First, we consider the simple, analytically solvable, case where nucleosomes are assumed to be point-like. Then, we perform numerical simulations to gauge the effect of their finite size on the nucleosomal distribution probabilities. Finally we compare nucleosome distributions and simulated nuclease digestion patterns for the two cases (homogeneous and sheep DNA), thereby providing testable predictions of the effect of sequence on experimentally observable quantities in experiments on polynucleosome chromatin fibres reconstituted in vitro.
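
    The homogeneous-DNA case maps onto a Tonks gas of hard rods, which is easy to sample directly. The sketch below Monte Carlo samples non-overlapping 147-bp footprints on a finite fiber with a flat (sequence-free) potential; lengths and move sizes are illustrative, and the code is not the paper's model with measured dialysis potentials.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hard rods ("nucleosome footprints") on a homogeneous fiber: the
    # Tonks-gas limit. Lengths are illustrative, in base pairs.
    L, a, N = 2000, 147, 10
    x = np.linspace(0.0, L - a, N)      # non-overlapping initial left edges
    hist = np.zeros(L)

    def allowed(i, xi):
        """Proposed left edge xi for rod i stays on the fiber and clear of
        its neighbors."""
        lo = x[i - 1] + a if i > 0 else 0.0
        hi = (x[i + 1] if i < N - 1 else L) - a
        return lo <= xi <= hi

    for sweep in range(5000):
        for i in range(N):
            xi = x[i] + rng.uniform(-10, 10)
            if allowed(i, xi):
                x[i] = xi               # flat potential: accept all legal moves
        if sweep >= 500:                # accumulate occupancy after burn-in
            for left in x:
                hist[int(left):int(left) + a] += 1

    # Occupancy oscillates near the fiber ends ("statistical positioning")
    # even with no sequence preference at all.
    print(np.round(hist[:400:40] / hist.mean(), 2))
    ```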

  15. Slot-like capacity and resource-like coding in a neural model of multiple-item working memory.

    PubMed

    Standage, Dominic; Pare, Martin

    2018-06-27

    For the past decade, research on the storage limitations of working memory has been dominated by two fundamentally different hypotheses. On the one hand, the contents of working memory may be stored in a limited number of 'slots', each with a fixed resolution. On the other hand, any number of items may be stored, but with decreasing resolution. These two hypotheses have been invaluable in characterizing the computational structure of working memory, but neither provides a complete account of the available experimental data, nor speaks to the neural basis of the limitations it characterizes. To address these shortcomings, we simulated a multiple-item working memory task with a cortical network model, the cellular resolution of which allowed us to quantify the coding fidelity of memoranda as a function of memory load, as measured by the discriminability, regularity and reliability of simulated neural spiking. Our simulations account for a wealth of neural and behavioural data from human and non-human primate studies, and they demonstrate that feedback inhibition lowers both capacity and coding fidelity. Because the strength of inhibition scales with the number of items stored by the network, increasing this number progressively lowers fidelity until capacity is reached. Crucially, the model makes specific, testable predictions for neural activity on multiple-item working memory tasks.

  16. Multiscale Modeling in the Clinic: Drug Design and Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clancy, Colleen E.; An, Gary; Cannon, William R.

    A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state-of-the-art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.

  17. The Contribution of Psychosocial Stress to the Obesity Epidemic

    PubMed Central

    Siervo, M.; Wells, J. C. K.; Cizza, G.

    2009-01-01

    The Thrifty Gene hypothesis theorizes that during evolution a set of genes has been selected to ensure survival in environments with limited food supply and marked seasonality. Contemporary environments have predictable and unlimited food availability, an attenuated seasonality due to artificial lighting, indoor heating during the winter and air conditioning during the summer, and promote sedentariness and overeating. In this setting the thrifty genes are constantly activated to enhance energy storage. Psychosocial stress and sleep deprivation are other features of modern societies. Stress-induced hypercortisolemia in the setting of unlimited food supply promotes adiposity. Modern man is becoming obese because these ancient mechanisms are efficiently promoting a positive energy balance. We propose that in today’s plentifully provisioned societies, where sedentariness and mental stress have become typical traits, chronic activation of the neuroendocrine systems may contribute to the increased prevalence of obesity. We suggest that some of the yet unidentified thrifty genes may be linked to highly conserved energy sensing mechanisms (AMP kinase, mTOR kinase). These hypotheses are testable. Rural societies that are becoming rapidly industrialized and are witnessing a dramatic increase in obesity may provide a historical opportunity to conduct epidemiological studies of the thrifty genotype. In experimental settings, the effects of various forms of psychosocial stress in increasing metabolic efficiency and gene expression can be further tested. PMID:19156597

  18. Trajectory Recognition as the Basis for Object Individuation: A Functional Model of Object File Instantiation and Object-Token Encoding

    PubMed Central

    Fields, Chris

    2011-01-01

    The perception of persisting visual objects is mediated by transient intermediate representations, object files, that are instantiated in response to some, but not all, visual trajectories. The standard object file concept does not, however, provide a mechanism sufficient to account for all experimental data on visual object persistence, object tracking, and the ability to perceive spatially disconnected stimuli as continuously existing objects. Based on relevant anatomical, functional, and developmental data, a functional model is constructed that bases visual object individuation on the recognition of temporal sequences of apparent center-of-mass positions that are specifically identified as trajectories by dedicated “trajectory recognition networks” downstream of the medial–temporal motion-detection area. This model is shown to account for a wide range of data, and to generate a variety of testable predictions. Individual differences in the recognition, abstraction, and encoding of trajectory information are expected to generate distinct object persistence judgments and object recognition abilities. Dominance of trajectory information over feature information in stored object tokens during early infancy, in particular, is expected to disrupt the ability to re-identify human and other individuals across perceptual episodes, and lead to developmental outcomes with characteristics of autism spectrum disorders. PMID:21716599

  19. Mapping the landscape of metabolic goals of a cell

    DOE PAGES

    Zhao, Qi; Stettner, Arion I.; Reznik, Ed; ...

    2016-05-23

    Here, genome-scale flux balance models of metabolism provide testable predictions of all metabolic rates in an organism, by assuming that the cell is optimizing a metabolic goal known as the objective function. We introduce an efficient inverse flux balance analysis (invFBA) approach, based on linear programming duality, to characterize the space of possible objective functions compatible with measured fluxes. After testing our algorithm on simulated E. coli data and time-dependent S. oneidensis fluxes inferred from gene expression, we apply our inverse approach to flux measurements in long-term evolved E. coli strains, revealing objective functions that provide insight into metabolic adaptation trajectories.
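
    The forward problem that invFBA inverts is a plain linear program: maximize c·v subject to Sv = 0 and flux bounds. A minimal sketch on an invented three-reaction network, using scipy, is shown below; the stoichiometry and objective are illustrative, not taken from the paper's E. coli or S. oneidensis models.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network:  R1: uptake -> A;  R2: A -> biomass;  R3: A -> byproduct.
    S = np.array([[1.0, -1.0, -1.0]])          # steady-state balance for A
    c = np.array([0.0, 1.0, 0.0])              # hypothesized goal: biomass flux
    bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 units

    # linprog minimizes, so negate c to maximize the objective.
    res = linprog(-c, A_eq=S, b_eq=[0.0], bounds=bounds, method="highs")
    print("optimal fluxes v:", res.x)          # expect [10, 10, 0]

    # invFBA runs the other direction: given measured fluxes v*, LP duality
    # characterizes the set of objective vectors c for which v* is optimal.
    ```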

  20. Advanced Deployable Structural Systems for Small Satellites

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith; Straubel, Marco; Wilkie, W. Keats; Zander, Martin E.; Fernandez, Juan M.; Hillebrandt, Martin F.

    2016-01-01

    One of the key challenges for small satellites is the packaging and reliable deployment of structural booms and arrays used for power, communication, and scientific instruments. The lack of reliable and efficient boom and membrane deployment concepts for small satellites is addressed in this work through a collaborative project between NASA and DLR. The paper provides a state-of-the-art overview of existing spacecraft deployable appendages, the special requirements for small satellites, and initial concepts for deployable booms and arrays needed for various small satellite applications. The goal is to enhance deployable boom predictability and ground testability, develop designs that are tolerant of manufacturing imperfections, and incorporate simple and reliable deployment systems.

  1. The Central Role of Tether-Cutting Reconnection in the Production of CMEs

    NASA Technical Reports Server (NTRS)

    Moore, Ron; Sterling, Alphonse; Suess, Steve

    2007-01-01

    This viewgraph presentation describes tether-cutting reconnection in the production of Coronal Mass Ejections (CMEs). The topics include: 1) Birth and Release of the CME Plasmoid; 2) Resulting CME in Outer Corona; 3) Governing Role of Surrounding Field; 4) Testable Prediction of the Standard Scenario Magnetic Bubble CME Model; 5) Lateral Pressure in Outer Corona; 6) Measured Angular Widths of 3 CMEs; 7) LASCO Image of each CME at Final Width; 8) Source of the CME of 2002 May 20; 9) Source of the CME of 1999 Feb 9; 10) Source of the CME of 2003 Nov 4; and 11) Test Results.

  2. Field-aligned currents and ion convection at high altitudes

    NASA Technical Reports Server (NTRS)

    Burch, J. L.; Reiff, P. H.

    1985-01-01

    Hot plasma observations from Dynamics Explorer 1 have been used to investigate solar-wind ion injection, Birkeland currents, and plasma convection at altitudes above 2 earth-radii in the morning sector. The results of the study, along with the antiparallel merging hypothesis, have been used to construct a B_y-dependent global convection model. A significant element of the model is the coexistence of three types of convection cells (merging cells, viscous cells, and lobe cells). As the IMF direction varies, the model accounts for the changing roles of viscous and merging processes and makes testable predictions about several magnetospheric phenomena, including the newly observed theta aurora in the polar cap.

  3. Mechanism of Genome Interrogation: How CRISPR RNA-Guided Cas9 Proteins Locate Specific Targets on DNA.

    PubMed

    Shvets, Alexey A; Kolomeisky, Anatoly B

    2017-10-03

    The ability to precisely edit and modify a genome opens endless opportunities to investigate fundamental properties of living systems as well as to advance various medical techniques and bioengineering applications. This possibility is now close to reality due to a recent discovery of the adaptive bacterial immune system, which is based on clustered regularly interspaced short palindromic repeats (CRISPR)-associated proteins (Cas) that utilize RNA to find and cut the double-stranded DNA molecules at specific locations. Here we develop a quantitative theoretical approach to analyze the mechanism of target search on DNA by CRISPR RNA-guided Cas9 proteins, which is followed by a selective cleavage of nucleic acids. It is based on a discrete-state stochastic model that takes into account the most relevant physical-chemical processes in the system. Using a method of first-passage processes, a full dynamic description of the target search is presented. It is found that the location of specific sites on DNA by CRISPR Cas9 proteins is governed by binding first to protospacer adjacent motif sequences on DNA, which is followed by reversible transitions into DNA interrogation states. In addition, the search dynamics is strongly influenced by the off-target cutting. Our theoretical calculations allow us to explain the experimental observations and to give experimentally testable predictions. Thus, the presented theoretical model clarifies some molecular aspects of the genome interrogation by CRISPR RNA-guided Cas9 proteins. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
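
    For a discrete-state scheme, the first-passage machinery mentioned here reduces to a linear solve against the generator matrix restricted to the transient states. The sketch below uses a hypothetical three-state search cycle (bulk, PAM-bound, interrogation) with invented rates; it illustrates the calculation, not the paper's fitted model.

    ```python
    import numpy as np

    # Toy scheme: bulk <-> PAM-bound <-> interrogation -> cleaved (absorbing).
    # Rates (1/s) are invented for illustration.
    k_on, k_off = 1.0, 5.0      # bulk <-> PAM binding
    k_in, k_out = 0.5, 2.0      # PAM-bound <-> interrogation
    k_cut = 0.1                 # interrogation -> cleavage

    # Generator over the transient states [bulk, PAM-bound, interrogation].
    Q = np.array([
        [-k_on,            k_on,              0.0],
        [ k_off, -(k_off + k_in),            k_in],
        [   0.0,           k_out, -(k_out + k_cut)],
    ])

    # Mean first-passage times to cleavage satisfy Q @ tau = -1.
    tau = np.linalg.solve(Q, -np.ones(3))
    print("mean search time from bulk: %.1f s" % tau[0])
    ```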

  4. Prediction and typicality in multiverse cosmology

    NASA Astrophysics Data System (ADS)

    Azhar, Feraz

    2014-02-01

    In the absence of a fundamental theory that precisely predicts values for observable parameters, anthropic reasoning attempts to constrain probability distributions over those parameters in order to facilitate the extraction of testable predictions. The utility of this approach has been vigorously debated of late, particularly in light of theories that claim we live in a multiverse, where parameters may take differing values in regions lying outside our observable horizon. Within this cosmological framework, we investigate the efficacy of top-down anthropic reasoning based on the weak anthropic principle. We argue contrary to recent claims that it is not clear one can either dispense with notions of typicality altogether or presume typicality, in comparing resulting probability distributions with observations. We show in a concrete, top-down setting related to dark matter, that assumptions about typicality can dramatically affect predictions, thereby providing a guide to how errors in reasoning regarding typicality translate to errors in the assessment of predictive power. We conjecture that this dependence on typicality is an integral feature of anthropic reasoning in broader cosmological contexts, and argue in favour of the explicit inclusion of measures of typicality in schemes invoking anthropic reasoning, with a view to extracting predictions from multiverse scenarios.

  5. Probing the folded state and mechanical unfolding pathways of T4 lysozyme using all-atom and coarse-grained molecular simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Wenjun, E-mail: wjzheng@buffalo.edu; Glenn, Paul

    2015-01-21

    The Bacteriophage T4 Lysozyme (T4L) is a prototypical modular protein comprising an N-terminal domain and a C-terminal domain, which has been extensively studied to understand the folding/unfolding mechanism of modular proteins. To offer detailed structural and dynamic insights into folded-state stability and the mechanical unfolding behaviors of T4L, we have performed extensive equilibrium and steered molecular dynamics simulations of both the wild-type (WT) and a circular permutation (CP) variant of T4L using all-atom and coarse-grained force fields. Our all-atom and coarse-grained simulations of the folded state have consistently found greater stability of the C-domain than the N-domain in isolation, which is in agreement with past thermostability studies of T4L. While the all-atom simulation cannot fully explain the mechanical unfolding behaviors of the WT and the CP variant observed in an optical tweezers study, the coarse-grained simulations based on the Go model or a modified elastic network model (mENM) are in qualitative agreement with the experimental finding of greater unfolding cooperativity in the WT than in the CP variant. Interestingly, the two coarse-grained models predict different structural mechanisms for the observed change in cooperativity between the WT and the CP variant: while the Go model predicts minor modification of the unfolding pathways by circular permutation (i.e., preserving the general order that the N-domain unfolds before the C-domain), the mENM predicts a dramatic change in unfolding pathways (e.g., a different order of N/C-domain unfolding in the WT and the CP variant). Based on our simulations, we have analyzed the limitations of and the key differences between these models and offered testable predictions for future experiments to resolve the structural mechanism for cooperative folding/unfolding of T4L.

  6. Serotonergic Psychedelics: Experimental Approaches for Assessing Mechanisms of Action.

    PubMed

    Canal, Clinton E

    2018-03-13

    Recent, well-controlled (albeit small-scale) clinical trials show that serotonergic psychedelics, including psilocybin and lysergic acid diethylamide, hold great promise for treating psychiatric disorders, including treatment-resistant depression. Additionally, fresh results from a deluge of clinical neuroimaging studies are unveiling the dynamic effects of serotonergic psychedelics on functional activity within, and connectivity across, discrete neural systems. These observations have led to testable hypotheses regarding neural processing mechanisms that contribute to psychedelic effects and therapeutic benefits. Despite these advances, and a plethora of preclinical and clinical observations supporting a central role for brain serotonin 5-HT2A receptors in producing serotonergic psychedelic effects, lingering and new questions about mechanisms abound. These chiefly pertain to molecular neuropharmacology. This chapter is devoted to illuminating and discussing such questions in the context of preclinical experimental approaches for studying the mechanisms of action of serotonergic psychedelics, classic and new.

  7. What are health-related users tweeting? A qualitative content analysis of health-related users and their messages on twitter.

    PubMed

    Lee, Joy L; DeCamp, Matthew; Dredze, Mark; Chisolm, Margaret S; Berger, Zackary D

    2014-10-15

    Twitter is home to many health professionals who send messages about a variety of health-related topics. Amid concerns about physicians posting inappropriate content online, more in-depth knowledge about these messages is needed to understand health professionals' behavior on Twitter. Our goal was to characterize the content of Twitter messages, specifically focusing on health professionals and their tweets relating to health. We performed an in-depth content analysis of 700 tweets. Qualitative content analysis was conducted on tweets by health users on Twitter. The primary objective was to describe the general type of content (ie, health-related versus non-health-related) on Twitter authored by health professionals and, further, to describe health-related tweets on the basis of the type of statement made. Specific attention was given to whether a tweet was personal (as opposed to professional) or made a claim that users would expect to be supported by some level of medical evidence (ie, a "testable" claim). A secondary objective was to compare content types among different users, including patients, physicians, nurses, health care organizations, and others. Health-related users are posting a wide range of content on Twitter. Among health-related tweets, 53.2% (184/346) contained a testable claim. Of health-related tweets by providers, 17.6% (61/346) were personal in nature; 61% (59/96) made testable statements. While organizations and businesses use Twitter to promote their services and products, patient advocates are using this tool to share their personal experiences with health. Twitter users in health-related fields tweet about both testable claims and personal experiences. Future work should assess the relationship between testable tweets and the actual level of evidence supporting them, including how Twitter users, especially patients, interpret the content of tweets posted by health providers.

  8. Effect of water flow and chemical environment on microbiota growth and composition in the human colon.

    PubMed

    Cremer, Jonas; Arnoldini, Markus; Hwa, Terence

    2017-06-20

    The human gut harbors a dynamic microbial community whose composition bears great importance for the health of the host. Here, we investigate how colonic physiology impacts bacterial growth, which ultimately dictates microbiota composition. Combining measurements of bacterial physiology with analysis of published data on human physiology into a quantitative, comprehensive modeling framework, we show how water flow in the colon, in concert with other physiological factors, determine the abundances of the major bacterial phyla. Mechanistically, our model shows that local pH values in the lumen, which differentially affect the growth of different bacteria, drive changes in microbiota composition. It identifies key factors influencing the delicate regulation of colonic pH, including epithelial water absorption, nutrient inflow, and luminal buffering capacity, and generates testable predictions on their effects. Our findings show that a predictive and mechanistic understanding of microbial ecology in the gut is possible. Such predictive understanding is needed for the rational design of intervention strategies to actively control the microbiota.

  10. Evolutionary Dynamics on Protein Bi-stability Landscapes can Potentially Resolve Adaptive Conflicts

    PubMed Central

    Sikosek, Tobias; Bornberg-Bauer, Erich; Chan, Hue Sun

    2012-01-01

    Experimental studies have shown that some proteins exist in two alternative native-state conformations. It has been proposed that such bi-stable proteins can potentially function as evolutionary bridges at the interface between two neutral networks of protein sequences that fold uniquely into the two different native conformations. Under adaptive conflict scenarios, bi-stable proteins may be of particular advantage if they simultaneously provide two beneficial biological functions. However, computational models that simulate protein structure evolution do not yet recognize the importance of bi-stability. Here we use a biophysical model to analyze sequence space to identify bi-stable or multi-stable proteins with two or more equally stable native-state structures. The inclusion of such proteins enhances phenotype connectivity between neutral networks in sequence space. Consideration of the sequence space neighborhood of bridge proteins revealed that bi-stability decreases gradually with each mutation that takes the sequence further away from an exactly bi-stable protein. With relaxed selection pressures, we found that bi-stable proteins in our model are highly successful under simulated adaptive conflict. Inspired by these model predictions, we developed a method to identify real proteins in the PDB with bridge-like properties, and have verified a clear bi-stability gradient for a series of mutants studied by Alexander et al. (Proc Nat Acad Sci USA 2009, 106:21149–21154) that connect two sequences that fold uniquely into two different native structures via a bridge-like intermediate mutant sequence. Based on these findings, new testable predictions for future studies on protein bi-stability and evolution are discussed. PMID:23028272

  11. Biodiversity and agriculture in dynamic landscapes: Integrating ground and remotely-sensed baseline surveys.

    PubMed

    Gillison, Andrew N; Asner, Gregory P; Fernandes, Erick C M; Mafalacusser, Jacinto; Banze, Aurélio; Izidine, Samira; da Fonseca, Ambrósio R; Pacate, Hermenegildo

    2016-07-15

    Sustainable biodiversity and land management require a cost-effective means of forecasting landscape response to environmental change. Conventional species-based, regional biodiversity assessments are rarely adequate for policy planning and decision making. We show how new ground and remotely-sensed survey methods can be coordinated to help elucidate and predict relationships between biodiversity, land use and soil properties along complex biophysical gradients that typify many similar landscapes worldwide. In the lower Zambezi valley, Mozambique we used environmental, gradient-directed transects (gradsects) to sample vascular plant species, plant functional types, vegetation structure, soil properties and land-use characteristics. Soil fertility indices were derived using novel multidimensional scaling of soil properties. To facilitate spatial analysis, we applied a probabilistic remote sensing approach, analyzing Landsat 7 satellite imagery to map photosynthetically active and inactive vegetation and bare soil along each gradsect. Despite the relatively low sample number, we found highly significant correlations between single and combined sets of specific plant, soil and remotely sensed variables that permitted testable spatial projections of biodiversity and soil fertility across the regional land-use mosaic. This integrative and rapid approach provides a low-cost, high-return and readily transferable methodology that permits the ready identification of testable biodiversity indicators for adaptive management of biodiversity and potential agricultural productivity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Broadening conceptions of learning in medical education: the message from teamworking.

    PubMed

    Bleakley, Alan

    2006-02-01

    There is a mismatch between the broad range of learning theories offered in the wider education literature and a relatively narrow range of theories privileged in the medical education literature. The latter are usually described under the heading of 'adult learning theory'. This paper critically addresses the limitations of the current dominant learning theories informing medical education. An argument is made that such theories, which address how an individual learns, fail to explain how learning occurs in dynamic, complex and unstable systems such as fluid clinical teams. Models of learning that take into account distributed knowing, learning through time as well as space, and the complexity of a learning environment including relationships between persons and artefacts, are more powerful in explaining and predicting how learning occurs in clinical teams. Learning theories may be privileged for ideological reasons, such as medicine's concern with autonomy. Where an increasing amount of medical education occurs in workplace contexts, sociocultural learning theories offer a best-fit exploration and explanation of such learning. We need to continue to develop testable models of learning that inform safe work practice. One type of learning theory will not inform all practice contexts and we need to think about a range of fit-for-purpose theories that are testable in practice. Exciting current developments include dynamicist models of learning drawing on complexity theory.

  13. A model that helps explain Sr-isotope disequilibrium between feldspar phenocrysts and melt in large-volume silicic magma systems

    USGS Publications Warehouse

    Duffield, W.A.; Ruiz, J.

    1998-01-01

    Feldspar phenocrysts of silicic volcanic rocks are commonly in Sr-isotopic disequilibrium with groundmass. In some cases the feldspar is more radiogenic, and in others it is less radiogenic. Several explanations have been published previously, but none of these is able to accommodate both senses of disequilibrium. We present a model by which either more- or less-radiogenic feldspar (or even both within a single eruptive unit) can originate. The model requires a magma body open to interaction with biotite- and feldspar-bearing wall rock. Magma is incrementally contaminated as wall rock melts incongruently. Biotite preferentially melts first, followed by feldspar. Such melting behavior, which is supported by both field and experimental studies, first contaminates magma with a relatively radiogenic addition, followed by a less-radiogenic addition. Feldspar phenocrysts lag behind melt (groundmass of volcanic rock) in incorporating the influx of contaminant, thus resulting in Sr-isotopic disequilibrium between the crystals and melt. The sense of disequilibrium recorded in a volcanic rock depends on when eruption quenches the contamination process. This model is testable by isotopic fingerprinting of individual feldspar crystals. For a given set of geologic boundary conditions, specific core-to-rim Sr-isotopic profiles are expectable. Moreover, phenocrysts that nucleate at different times during the contamination process should record different and predictable parts of the history. Initial results of Sr-isotopic fingerprinting of sanidine phenocrysts from the Taylor Creek Rhyolite are consistent with the model. More tests of the model are desirable.

  14. Flight control system design factors for applying automated testing techniques

    NASA Technical Reports Server (NTRS)

    Sitz, Joel R.; Vernon, Todd H.

    1990-01-01

    The principal design features and operational experiences of the X-29 forward-swept-wing aircraft and F-18 high alpha research vehicle (HARV) automated test systems are discussed. It is noted that operational experiences in developing and using these automated testing techniques have highlighted the need for incorporating target system features to improve testability. Improved target system testability can be accomplished with the addition of nonreal-time and real-time features. Online access to target system implementation details, unobtrusive real-time access to internal user-selectable variables, and proper software instrumentation are all desirable features of the target system. Also, test system and target system design issues must be addressed during the early stages of the target system development. Processing speeds of up to 20 million instructions/s and the development of high-bandwidth reflective memory systems have improved the ability to integrate the target system and test system for the application of automated testing techniques. It is concluded that new methods of designing testability into the target systems are required.

  15. Digging deeper into noise. Reply to comment on "Extracting physics of life at the molecular level: A review of single-molecule data analyses"

    NASA Astrophysics Data System (ADS)

    Colomb, Warren; Sarkar, Susanta K.

    2015-06-01

    We would like to thank all the commentators for their constructive comments on our paper. The commentators agree that a proper analysis of noisy single-molecule data is important for extracting meaningful and accurate information about the system. We concur with their views; indeed, motivating an accurate analysis of experimental data is precisely the point of our paper. After a model of the system of interest is constructed from the experimental single-molecule data, it is very helpful to simulate the model to generate theoretical single-molecule data and analyze them in exactly the same way. In our experience, such a self-consistent approach involving experiments, simulations, and analyses often forces us to revise our model and make experimentally testable predictions. In light of comments from the commentators with different expertise, we would also like to point out that a single model should be able to connect different experimental techniques, because the underlying science does not depend on the experimental techniques used. Wohland [1] has made a strong case for fluorescence correlation spectroscopy (FCS) as an important experimental technique to bridge single-molecule and ensemble experiments. FCS is a very powerful technique that can measure ensemble parameters with single-molecule sensitivity. Therefore, it is logical to simulate any proposed model, predict both single-molecule data and FCS data, and confirm with experimental data. Fitting the diffraction-limited point spread function (PSF) of an isolated fluorescent marker to localize a labeled biomolecule is a critical step in many single-molecule tracking experiments. Flyvbjerg et al. [2] have rigorously pointed out some important drawbacks of the prevalent practice of fitting the diffraction-limited PSF with a 2D Gaussian. As we try to achieve more accurate and precise localization of biomolecules, we need to consider subtle points such as those raised by Flyvbjerg et al. Shepherd [3] has given specific examples of PSFs that have been used for localization and has rightly noted the importance of detector noise in single-molecule localization. Meroz [4] has pointed out more explicitly that the signal itself can be noisy and that it is necessary to distinguish the noise of interest from the background noise. Krapf [5] has pointed out different origins of fluctuations in biomolecular systems and commented on their possible Gaussian and non-Gaussian nature. The importance of noise, along with the possibility that the noise itself can be the signal of interest, is discussed in our paper [6]; Meroz [4] and Krapf [5] have provided specific examples that guide the reader further. Sachs et al. [7] have discussed kinetic analysis in the presence of indistinguishable states and have pointed to the free software for general kinetic analysis that originated from their research.
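    The 2D-Gaussian localization step discussed above can be made concrete with a short sketch. The following is a minimal illustration, not code from the paper or the comments: it fits a 2D Gaussian to a simulated diffraction-limited spot with Poisson (shot) noise; the grid size and all parameter values are invented for the example.

      import numpy as np
      from scipy.optimize import curve_fit

      def gauss2d(xy, amp, x0, y0, sigma, offset):
          # Symmetric 2D Gaussian plus constant background, flattened for curve_fit
          x, y = xy
          return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + offset).ravel()

      rng = np.random.default_rng(0)
      x, y = np.meshgrid(np.arange(11), np.arange(11))
      true_params = (400.0, 5.3, 4.7, 1.4, 10.0)   # peak counts, x0, y0, sigma, background
      image = rng.poisson(gauss2d((x, y), *true_params).reshape(11, 11))  # shot noise

      popt, _ = curve_fit(gauss2d, (x, y), image.ravel().astype(float),
                          p0=(image.max(), 5.0, 5.0, 2.0, float(image.min())))
      print("estimated centre: (%.2f, %.2f)" % (popt[1], popt[2]))

    Flyvbjerg et al.'s caveat is precisely that neither the true PSF nor the detector noise is Gaussian, so a least-squares Gaussian fit like this one is convenient but not statistically optimal.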

  16. Beyond the bucket: testing the effect of experimental design on rate and sequence of decay

    NASA Astrophysics Data System (ADS)

    Gabbott, Sarah; Murdock, Duncan; Purnell, Mark

    2016-04-01

    Experimental decay has revealed the potential for profound biases in our interpretations of exceptionally preserved fossils, with non-random sequences of character loss distorting the position of fossil taxa in phylogenetic trees. By characterising these sequences we can rewind this distortion and make better-informed interpretations of the affinity of enigmatic fossil taxa. Equally, the rate of character loss is crucial for estimating the preservation potential of phylogenetically informative characters, and for revealing the mechanisms of preservation themselves. However, experimental decay has been criticised for poorly modelling 'real' conditions, and dismissed as unsophisticated 'bucket science'. Here we test the effect of differing experimental parameters on the rate and sequence of decay. By doing so, we can test the assumption that the results of decay experiments are applicable to informing interpretations of exceptionally preserved fossils from diverse preservational settings. The results of our experiments demonstrate the validity of using the sequence of character loss as a phylogenetic tool, and shed light on the extent to which environment must be considered before making decay-informed interpretations, or reconstructing taphonomic pathways. With careful consideration of experimental design, driven by testable hypotheses, decay experiments are robust and informative - experimental taphonomy needn't kick the bucket just yet.

  17. Comparison on testability of visual acuity, stereo acuity and colour vision tests between children with learning disabilities and children without learning disabilities in government primary schools.

    PubMed

    Abu Bakar, Nurul Farhana; Chen, Ai-Hong

    2014-02-01

    Children with learning disabilities might have difficulty communicating effectively and giving reliable responses as required in various visual function testing procedures. The purpose of this study was to compare the testability of visual acuity using the modified Early Treatment Diabetic Retinopathy Study (ETDRS) chart and Cambridge Crowding Cards, stereo acuity using the Lang Stereo test II and Butterfly stereo tests, and colour perception using the Colour Vision Test Made Easy (CVTME) and Ishihara's Test for Colour Deficiency (Ishihara Test) between children in mainstream classes and children with learning disabilities in special education classes in government primary schools. A total of 100 primary school children (50 children from mainstream classes and 50 children from special education classes) matched in age were recruited in this cross-sectional comparative study. Testability was determined by the percentage of children who were able to give reliable responses as required by the respective tests. 'Unable to test' was defined as an inappropriate response or uncooperativeness despite the best efforts of the screener. The testability of the modified ETDRS, Butterfly stereo test and Ishihara test was found to be lower among children in special education classes (P < 0.001), but not that of the Cambridge Crowding Cards, Lang Stereo test II and CVTME. Non-verbal or "matching" approaches were superior for testing visual functions in children with learning disabilities. Modifications of vision testing procedures are essential for children with learning disabilities.

  18. The spatial scaling of species interaction networks.

    PubMed

    Galiana, Nuria; Lurgi, Miguel; Claramunt-López, Bernat; Fortin, Marie-Josée; Leroux, Shawn; Cazelles, Kevin; Gravel, Dominique; Montoya, José M

    2018-05-01

    Species-area relationships (SARs) are pivotal to understand the distribution of biodiversity across spatial scales. We know little, however, about how the network of biotic interactions in which biodiversity is embedded changes with spatial extent. Here we develop a new theoretical framework that enables us to explore how different assembly mechanisms and theoretical models affect multiple properties of ecological networks across space. We present a number of testable predictions on network-area relationships (NARs) for multi-trophic communities. Network structure changes as area increases because of the existence of different SARs across trophic levels, the preferential selection of generalist species at small spatial extents and the effect of dispersal limitation promoting beta-diversity. Developing an understanding of NARs will complement the growing body of knowledge on SARs with potential applications in conservation ecology. Specifically, combined with further empirical evidence, NARs can generate predictions of potential effects on ecological communities of habitat loss and fragmentation in a changing world.

  19. Solving puzzles of GW150914 by primordial black holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blinnikov, S.; Dolgov, A.; Porayko, N.K.

    The black hole binary properties inferred from the LIGO gravitational wave signal GW150914 posed several serious problems. The high masses and low effective spin of the black hole binary can be explained if the black holes are primordial (PBH) rather than the products of stellar binary evolution. Such PBH properties are usually postulated ad hoc, not derived from fundamental theory. We show that the necessary features of PBHs naturally follow from a slightly modified Affleck-Dine (AD) mechanism of baryogenesis. The log-normal distribution of PBHs, predicted within the AD paradigm, is adjusted to provide an abundant population of low-spin stellar mass black holes. The same distribution gives a sufficient number of quickly growing seeds of the supermassive black holes observed at high redshifts, and may comprise an appreciable fraction of Dark Matter without contradicting any existing observational limits. Testable predictions of this scenario are discussed.

  20. Predicting the High Redshift Galaxy Population for JWST

    NASA Astrophysics Data System (ADS)

    Flynn, Zoey; Benson, Andrew

    2017-01-01

    The James Webb Space Telescope will be launched in Oct 2018 with the goal of observing galaxies in the redshift range z = 10 - 15. As redshift increases, the age of the Universe decreases, allowing us to study objects formed only a few hundred million years after the Big Bang. This will provide a valuable opportunity to test and improve current galaxy formation theory by comparing predictions for mass, luminosity, and number density to the observed data. We have made testable predictions with the semi-analytical galaxy formation model Galacticus. The code uses Markov Chain Monte Carlo methods to determine viable sets of model parameters that match current astronomical data. The resulting constrained model was then set to match the specifications of the JWST Ultra Deep Field Imaging Survey. Predictions utilizing up to 100 viable parameter sets were calculated, allowing us to assess the uncertainty in current theoretical expectations. We predict that the planned UDF will be able to observe a significant number of objects beyond redshift z = 9, but nothing beyond z = 11. In order to detect these faint objects at redshifts z = 11-15, we would need to increase exposure time by at least a factor of 1.66.

  1. Testability, Test Automation and Test Driven Development for the Trick Simulation Toolkit

    NASA Technical Reports Server (NTRS)

    Penn, John

    2014-01-01

    This paper describes the adoption of a Test Driven Development approach and a Continuous Integration System in the development of the Trick Simulation Toolkit, a generic simulation development environment for creating high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. It describes the approach, and the significant benefits seen, such as fast, thorough and clear test feedback every time code is checked into the code repository. It also describes an approach that encourages development of code that is testable and adaptable.

  2. Stochastic recruitment leads to symmetry breaking in foraging populations

    NASA Astrophysics Data System (ADS)

    Biancalani, Tommaso; Dyson, Louise; McKane, Alan

    2014-03-01

    When an ant colony is faced with two identical equidistant food sources, the foraging ants are found to concentrate more on one source than the other. Analogous symmetry-breaking behaviours have been reported in various population systems (such as queueing or stock market trading), suggesting the existence of a simple universal mechanism. Past studies have neglected the effect of demographic noise and required rather complicated models to qualitatively reproduce this behaviour. I will show how including the effects of demographic noise leads to a radically different conclusion. The symmetry-breaking arises solely due to the process of recruitment and ceases to occur for large population sizes. The latter fact provides a testable prediction for a real system.
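    The abstract does not give the model equations; the following minimal Kirman-style recruitment sketch (spontaneous switching with probability eps, otherwise adoption of a randomly met nestmate's choice; all rates invented) reproduces the qualitative claim that occupancy of two identical sources is strongly asymmetric for small colonies and near-symmetric for large ones.

      import numpy as np

      def simulate(N, eps=0.01, steps=200_000, seed=1):
          # k = number of ants currently at source A; self-meetings ignored for simplicity
          rng = np.random.default_rng(seed)
          k = N // 2
          frac = np.empty(steps)
          for t in range(steps):
              i_at_A = rng.random() < k / N          # pick a random focal ant
              if rng.random() < eps:                 # spontaneous switch
                  k += -1 if i_at_A else 1
              else:                                  # recruited by a random nestmate
                  j_at_A = rng.random() < k / N
                  if i_at_A and not j_at_A:
                      k -= 1
                  elif (not i_at_A) and j_at_A:
                      k += 1
              frac[t] = k / N
          return frac

      for N in (20, 2000):
          f = simulate(N)
          print(f"N={N}: std of occupancy fraction = {f.std():.3f}")
      # Small N: occupancy sticks near 0 or 1 (symmetry breaking);
      # large N: it hovers near 1/2, as the abstract predicts.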

  3. Creativity, information, and consciousness: The information dynamics of thinking.

    PubMed

    Wiggins, Geraint A

    2018-05-07

    This paper presents a theory of the basic operation of mind, Information Dynamics of Thinking, which is intended for computational implementation and thence empirical testing. It is based on the information theory of Shannon, and treats the mind/brain as an information processing organ that aims to be information-efficient, in that it predicts its world, so as to use information efficiently, and regularly re-represents it, so as to store information efficiently. The theory is presented in context of a background review of various research areas that impinge upon its development. Consequences of the theory and testable hypotheses arising from it are discussed. Copyright © 2018. Published by Elsevier B.V.

  4. The polyadenylation code: a unified model for the regulation of mRNA alternative polyadenylation

    PubMed Central

    Davis, Ryan; Shi, Yongsheng

    2014-01-01

    The majority of eukaryotic genes produce multiple mRNA isoforms with distinct 3′ ends through a process called mRNA alternative polyadenylation (APA). Recent studies have demonstrated that APA is dynamically regulated during development and in response to environmental stimuli. A number of mechanisms have been described for APA regulation. In this review, we attempt to integrate all the known mechanisms into a unified model. This model not only explains most of the previous results, but also provides testable predictions that will improve our understanding of the mechanistic details of APA regulation. Finally, we briefly discuss the known and putative functions of APA regulation. PMID:24793760

  5. Modelling the spread of innovation in wild birds.

    PubMed

    Shultz, Thomas R; Montrey, Marcel; Aplin, Lucy M

    2017-06-01

    We apply three plausible algorithms in agent-based computer simulations to recent experiments on social learning in wild birds. Although some of the phenomena are simulated by all three learning algorithms, several manifestations of social conformity bias are simulated by only the approximate majority (AM) algorithm, which has roots in chemistry, molecular biology and theoretical computer science. The simulations generate testable predictions and provide several explanatory insights into the diffusion of innovation through a population. The AM algorithm's success raises the possibility of its usefulness in studying group dynamics more generally, in several different scientific domains. Our differential-equation model matches simulation results and provides mathematical insights into the dynamics of these algorithms. © 2017 The Author(s).

  6. Quantitative model of price diffusion and market friction based on trading as a mechanistic random process.

    PubMed

    Daniels, Marcus G; Farmer, J Doyne; Gillemot, László; Iori, Giulia; Smith, Eric

    2003-03-14

    We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.
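    A minimal sketch of the paper's central assumption, not its calibrated model: limit orders, market orders and cancellations all arrive as independent random events, and quantities such as the bid-ask spread emerge from the order flow alone. The arrival probabilities and price band below are illustrative.

      import numpy as np

      rng = np.random.default_rng(2)
      bids, asks = [100], [101]        # resting one-share limit orders (prices in ticks)
      spreads = []

      for _ in range(100_000):
          best_bid = max(bids) if bids else 99
          best_ask = min(asks) if asks else 102
          u = rng.random()
          if u < 0.5:                                  # limit order arrival
              if rng.random() < 0.5:
                  bids.append(best_ask - rng.integers(1, 20))  # bid strictly below best ask
              else:
                  asks.append(best_bid + rng.integers(1, 20))  # ask strictly above best bid
          elif u < 0.75:                               # market order removes a best quote
              if rng.random() < 0.5:
                  if bids: bids.remove(max(bids))
              else:
                  if asks: asks.remove(min(asks))
          else:                                        # cancellation of a random resting order
              side = bids if rng.random() < 0.5 else asks
              if side:
                  side.pop(rng.integers(len(side)))
          if bids and asks:
              spreads.append(min(asks) - max(bids))

      print("mean spread (ticks): %.2f" % np.mean(spreads))

    Even this stripped-down version develops a fluctuating spread and diffusing prices purely from random order flow, which is the paper's point of departure.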

  7. Quantitative Model of Price Diffusion and Market Friction Based on Trading as a Mechanistic Random Process

    NASA Astrophysics Data System (ADS)

    Daniels, Marcus G.; Farmer, J. Doyne; Gillemot, László; Iori, Giulia; Smith, Eric

    2003-03-01

    We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.

  8. Are perytons signatures of ball lightning?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dodin, I. Y.; Fisch, N. J.

    2014-10-20

    The enigmatic downchirped signals, called 'perytons', that are detected by radio telescopes in the GHz frequency range may be produced by an atmospheric phenomenon known as ball lightning (BL). If BLs act as nonstationary radio frequency cavities, their characteristic emission frequencies and evolution timescales are consistent with peryton observations, and so are general patterns in which BLs are known to occur. Based on this evidence, testable predictions are made that can confirm or rule out a causal connection between perytons and BLs. In either case, how perytons are searched for in observational data may warrant reconsideration because existing procedures may be discarding events that have the same nature as known perytons.

  9. Neutrino mass, dark matter, and Baryon asymmetry via TeV-scale physics without fine-tuning.

    PubMed

    Aoki, Mayumi; Kanemura, Shinya; Seto, Osamu

    2009-02-06

    We propose an extended version of the standard model, in which neutrino oscillation, dark matter, and the baryon asymmetry of the Universe can be simultaneously explained by the TeV-scale physics without assuming a large hierarchy among the mass scales. Tiny neutrino masses are generated at the three-loop level due to the exact Z2 symmetry, by which the stability of the dark matter candidate is guaranteed. The extra Higgs doublet is required not only for the tiny neutrino masses but also for successful electroweak baryogenesis. The model provides discriminative predictions especially in Higgs phenomenology, so that it is testable at current and future collider experiments.

  10. A forecast experiment of earthquake activity in Japan under Collaboratory for the Study of Earthquake Predictability (CSEP)

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Yokoi, S.; Nanjo, K. Z.; Tsuruoka, H.

    2012-04-01

    One major focus of the current Japanese earthquake prediction research program (2009-2013), which is now integrated with the research program for prediction of volcanic eruptions, is to move toward creating testable earthquake forecast models. For this purpose we started an experiment in forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan and to conduct verifiable prospective tests of their model performance, and began the first earthquake forecast testing experiment in Japan within the CSEP framework. We use the earthquake catalogue maintained and provided by the Japan Meteorological Agency (JMA). The experiment consists of 12 categories: 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called "All Japan," "Mainland," and "Kanto." A total of 105 models were submitted and are currently under the CSEP official suite of tests for evaluating forecast performance. The experiment has completed 92 rounds for the 1-day class, 6 rounds for the 3-month class, and 3 rounds for the 1-year class. In the 1-day testing class, all models passed all of the CSEP evaluation tests in more than 90% of rounds. The results of the 3-month testing class also gave us new knowledge concerning statistical forecasting models. All models showed good performance in magnitude forecasting. On the other hand, the observed spatial distribution is hardly consistent with most models when many earthquakes occur at a single spot. We are now preparing a 3-D forecasting experiment with a depth range of 0 to 100 km in the Kanto region. The testing centre is improving the evaluation system for the 1-day class experiment so that forecasting and testing results are completed within one day. The first part of a special issue, titled "Earthquake Forecast Testing Experiment in Japan," was published in Earth, Planets and Space, Vol. 63, No. 3, in March 2011; the second part, now online, will be published soon. An outline of the experiment and the activities of the Japanese Testing Center are published on our web site: http://wwweic.eri.u-tokyo.ac.jp/ZISINyosoku/wiki.en/wiki.cgi
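    The abstract does not spell out the evaluation suite. As one concrete example, a minimal sketch of the Poisson N-test used in CSEP-style experiments (notation illustrative, not the centre's code) checks whether the observed number of target earthquakes is consistent with a forecast's expected count.

      # Minimal sketch of a Poisson N-test: does the observed event count
      # match the forecast expectation? Values below are illustrative.
      from scipy.stats import poisson

      def n_test(expected_count, observed_count):
          """Return (delta1, delta2): probabilities of observing at least /
          at most the observed number of events under the forecast rate."""
          delta1 = 1.0 - poisson.cdf(observed_count - 1, expected_count)  # P(X >= obs)
          delta2 = poisson.cdf(observed_count, expected_count)            # P(X <= obs)
          return delta1, delta2

      d1, d2 = n_test(8.2, 14)
      print(f"delta1={d1:.3f}, delta2={d2:.3f}")  # a small delta1 flags underprediction

    Space and magnitude consistency are scored analogously (the S- and M-tests) against the forecast's binned rates.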

  11. Comparison on testability of visual acuity, stereo acuity and colour vision tests between children with learning disabilities and children without learning disabilities in government primary schools

    PubMed Central

    Abu Bakar, Nurul Farhana; Chen, Ai-Hong

    2014-01-01

    Context: Children with learning disabilities might have difficulty communicating effectively and giving reliable responses as required in various visual function testing procedures. Aims: The purpose of this study was to compare the testability of visual acuity using the modified Early Treatment Diabetic Retinopathy Study (ETDRS) chart and Cambridge Crowding Cards, stereo acuity using the Lang Stereo test II and Butterfly stereo tests, and colour perception using the Colour Vision Test Made Easy (CVTME) and Ishihara's Test for Colour Deficiency (Ishihara Test) between children in mainstream classes and children with learning disabilities in special education classes in government primary schools. Materials and Methods: A total of 100 primary school children (50 children from mainstream classes and 50 children from special education classes) matched in age were recruited in this cross-sectional comparative study. Testability was determined by the percentage of children who were able to give reliable responses as required by the respective tests. ‘Unable to test’ was defined as an inappropriate response or uncooperativeness despite the best efforts of the screener. Results: The testability of the modified ETDRS, Butterfly stereo test and Ishihara test was found to be lower among children in special education classes (P < 0.001), but not that of the Cambridge Crowding Cards, Lang Stereo test II and CVTME. Conclusion: Non-verbal or “matching” approaches were superior for testing visual functions in children with learning disabilities. Modifications of vision testing procedures are essential for children with learning disabilities. PMID:24008790

  12. Spiking and Excitatory/Inhibitory Input Dynamics of Barrel Cells in Response to Whisker Deflections of Varying Velocity and Angular Direction.

    PubMed

    Patel, Mainak

    2018-01-15

    The spiking of barrel regular-spiking (RS) cells is tuned for both whisker deflection direction and velocity. Velocity tuning arises because thalamocortical (TC) synchrony (but not spike quantity) varies with deflection velocity, coupled with feedforward inhibition, while direction selectivity is not fully understood, though it may be due partly to direction tuning of TC spiking. Data show that as deflection direction deviates from the preferred direction of an RS cell, excitatory input to the RS cell diminishes minimally, but temporally shifts to coincide with the time-lagged inhibitory input. This work constructs a realistic large-scale model of a barrel; model RS cells exhibit velocity and direction selectivity due to TC input dynamics, with the experimentally observed sharpening of direction tuning with decreasing velocity. The model puts forth the novel proposal that RS→RS synapses can naturally and simply account for the unexplained direction dependence of RS cell inputs: as deflection direction deviates from the preferred direction of an RS cell and TC input declines, RS→RS synaptic transmission buffers the decline in total excitatory input and shifts the timing of the excitatory input peak from the peak in TC input to the delayed peak in RS input. The model also provides several experimentally testable predictions on the velocity dependence of RS cell inputs. This model is the first, to my knowledge, to study the interaction of direction and velocity and to propose physiological mechanisms for the stimulus dependence in the timing and amplitude of RS cell inputs. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  13. Separating the Role of Protein Restraints and Local Metal-Site Interaction Chemistry in the Thermodynamics of a Zinc Finger Protein

    PubMed Central

    Dixit, Purushottam D.; Asthagiri, D.

    2011-01-01

    We express the effective Hamiltonian of an ion-binding site in a protein as a combination of the Hamiltonian of the ion-bound site in vacuum and the restraints of the protein on the site. The protein restraints are described by the quadratic elastic network model. The Hamiltonian of the ion-bound site in vacuum is approximated as a generalized Hessian around the minimum energy configuration. The resultant of the two quadratic Hamiltonians is cast into a pure quadratic form. In the canonical ensemble, the quadratic nature of the resultant Hamiltonian allows us to express analytically the excess free energy, enthalpy, and entropy of ion binding to the protein. The analytical expressions allow us to separate the roles of the dynamic restraints imposed by the protein on the binding site and the temperature-independent chemical effects in metal-ligand coordination. For the consensus zinc-finger peptide, relative to the aqueous phase, the calculated free energies of exchanging Zn2+ with Fe2+, Co2+, Ni2+, and Cd2+ are in agreement with experiments. The predicted excess enthalpy of ion exchange between Zn2+ and Co2+ also agrees with the available experimental estimate. The free energy of applying the protein restraints reveals that, relative to the Zn2+-site cluster, the Co2+- and Cd2+-site clusters are more destabilized by the protein restraints. This leads to an experimentally testable hypothesis that a tetrahedral metal binding site with minimal protein restraints will be less selective for Zn2+ over Co2+ and Cd2+ compared to a zinc finger peptide. No appreciable change is expected for Fe2+ and Ni2+. The framework presented here may prove useful in protein engineering to tune metal selectivity. PMID:21943427
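    The analytical step relied on above is the standard Gaussian integral over a quadratic Hamiltonian; in our notation (not the paper's), for a resultant Hamiltonian H(x) = E_0 + (1/2) \Delta x^T K \Delta x in n coordinates,

      Z = e^{-\beta E_0} \int e^{-\frac{\beta}{2}\,\Delta x^{\mathsf{T}} K\,\Delta x}\, d^{n}\Delta x
        = e^{-\beta E_0} \left(\frac{2\pi}{\beta}\right)^{n/2} (\det K)^{-1/2},
      \qquad
      F = -k_B T \ln Z = E_0 + \frac{k_B T}{2} \ln\!\left[\left(\frac{\beta}{2\pi}\right)^{n} \det K\right].

    Enthalpy and entropy follow by differentiating F with respect to temperature, and exchange free energies between ions are differences of such expressions, with K combining the vacuum metal-site Hessian and the elastic-network restraints.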

  14. Order-disorder transition of intrinsically disordered kinase inducible transactivation domain of CREB

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Guo, Xiang; Han, Jingcheng; Luo, Ray; Chen, Hai-Feng

    2018-06-01

    The transcription factor cyclic adenosine monophosphate response-element binding protein (CREB) plays a critical role in the cyclic AMP response pathway via its intrinsically disordered kinase inducible transactivation domain (KID). KID is one of the most studied intrinsically disordered proteins (IDPs), although most previous studies focus on characterizing its disordered-state structures. An interesting question that remains to be answered is how the order-disorder transition occurs under experimental conditions. Thanks to the newly developed IDP-specific force field ff14IDPSFF, the quality of conformer sampling for IDPs has been dramatically improved. In this study, molecular dynamics (MD) simulations were used to study the order-to-disorder transition kinetics of KID, justified by the good agreement with experiment on its disordered-state properties. Specifically, we tested four force fields, ff99SBildn, ff99IDPs, ff14IDPSFF, and ff14IDPs, in the simulations of KID and found that ff14IDPSFF can generate more diversified disordered conformers and also reproduce experimental secondary chemical shifts more accurately. Kinetics analysis of the MD simulations demonstrates that the order-disorder transition of KID obeys first-order kinetics, and the transition nucleus is I127/L128/L141. The possible transition pathways from the nucleus to the last folded residues were identified as I127-R125-L138-L141-S143-A145 and L128-R125-L138-L141-S143-A145 based on a residue-level dynamical network analysis. These computational studies not only provide testable predictions and hypotheses on the order-disorder transition of KID but also confirm that the ff14IDPSFF force field can be used to explore the correlation between the structure and function of IDPs.
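    For reference, the first-order kinetics invoked above corresponds to single-exponential relaxation of the ordered population (notation ours, not the paper's):

      \frac{dP_{\mathrm{ord}}}{dt} = -k\, P_{\mathrm{ord}}
      \quad\Longrightarrow\quad
      P_{\mathrm{ord}}(t) = P_{\mathrm{ord}}(0)\, e^{-kt},

    so a log-linear decay of surviving ordered conformers with simulation time, governed by a single rate constant k, is the signature such a kinetics analysis tests for.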

  15. A cerebellar learning model of vestibulo-ocular reflex adaptation in wild-type and mutant mice.

    PubMed

    Clopath, Claudia; Badura, Aleksandra; De Zeeuw, Chris I; Brunel, Nicolas

    2014-05-21

    Mechanisms of cerebellar motor learning are still poorly understood. The standard Marr-Albus-Ito theory posits that learning involves plasticity at the parallel fiber to Purkinje cell synapses under control of the climbing fiber input, which provides an error signal as in classical supervised learning paradigms. However, a growing body of evidence challenges this theory, in that additional sites of plasticity appear to contribute to motor adaptation. Here, we consider phase-reversal training of the vestibulo-ocular reflex (VOR), a simple form of motor learning for which a large body of experimental data is available in wild-type and mutant mice, in which the excitability of granule cells or inhibition of Purkinje cells was affected in a cell-specific fashion. We present novel electrophysiological recordings of Purkinje cell activity measured in naive wild-type mice subjected to this VOR adaptation task. We then introduce a minimal model that consists of learning at the parallel fibers to Purkinje cells with the help of the climbing fibers. Although the minimal model reproduces the behavior of the wild-type animals and is analytically tractable, it fails at reproducing the behavior of mutant mice and the electrophysiology data. Therefore, we build a detailed model involving plasticity at the parallel fibers to Purkinje cells' synapse guided by climbing fibers, feedforward inhibition of Purkinje cells, and plasticity at the mossy fiber to vestibular nuclei neuron synapse. The detailed model reproduces both the behavioral and electrophysiological data of both the wild-type and mutant mice and allows for experimentally testable predictions. Copyright © 2014 the authors 0270-6474/14/347203-13$15.00/0.

  16. A normative inference approach for optimal sample sizes in decisions from experience

    PubMed Central

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
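    A minimal numerical sketch of the trade-off being optimized, not the paper's derivation: for two Bernoulli payoff distributions, Monte Carlo estimation of how often the better option is chosen after n free draws of each, deciding by the higher sample mean. The payoff probabilities are invented.

      import numpy as np

      def p_correct(p_good=0.6, p_bad=0.4, n=5, reps=100_000, seed=3):
          # Probability of picking the better of two Bernoulli options from
          # n draws of each; ties between sample means broken at random.
          rng = np.random.default_rng(seed)
          good = rng.binomial(n, p_good, reps) / n
          bad = rng.binomial(n, p_bad, reps) / n
          wins = (good > bad) + 0.5 * (good == bad)
          return wins.mean()

      for n in (1, 2, 5, 10, 20, 50):
          print(n, round(p_correct(n=n), 3))
      # Accuracy rises steeply at first and then saturates, so beyond some n
      # another sample no longer buys enough accuracy to justify its cost.

    Under any explicit cost per sample, the optimal n is where the marginal gain in expected final payoff stops exceeding that cost; the framework above generalizes this to arbitrary, numerically evaluable constraints.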

  17. Computational Model of the Insect Pheromone Transduction Cascade

    PubMed Central

    Gu, Yuqiao; Lucas, Philippe; Rospars, Jean-Pierre

    2009-01-01

    A biophysical model of receptor potential generation in the male moth olfactory receptor neuron is presented. It takes into account all pre-effector processes—the translocation of pheromone molecules from air to sensillum lymph, their deactivation and interaction with the receptors, and the G-protein and effector enzyme activation—and focuses on the main post-effector processes. These processes involve the production and degradation of second messengers (IP3 and DAG), the opening and closing of a series of ionic channels (IP3-gated Ca2+ channel, DAG-gated cationic channel, Ca2+-gated Cl− channel, and Ca2+- and voltage-gated K+ channel), and Ca2+ extrusion mechanisms. The whole network is regulated by modulators (protein kinase C and Ca2+-calmodulin) that exert feedback inhibition on the effector and channels. The evolution in time of these linked chemical species and currents and the resulting membrane potentials in response to single pulse stimulation of various intensities were simulated. The unknown parameter values were fitted by comparison to the amplitude and temporal characteristics (rising and falling times) of the experimentally measured receptor potential at various pheromone doses. The model obtained captures the main features of the dose–response curves: the wide dynamic range of six decades with the same amplitudes as the experimental data, the short rising time, and the long falling time. It also reproduces the second messenger kinetics. It suggests that the two main types of depolarizing ionic channels play different roles at low and high pheromone concentrations; the DAG-gated cationic channel plays the major role for depolarization at low concentrations, and the Ca2+-gated Cl− channel plays the major role for depolarization at middle and high concentrations. Several testable predictions are proposed, and future developments are discussed. PMID:19300479
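    The full model couples many species and channels; the following deliberately reduced toy (all species names, rate constants and the Hill gating are invented) shows only the shared skeleton: stimulus-driven production and degradation of a second messenger that gates a depolarizing conductance.

      import numpy as np
      from scipy.integrate import solve_ivp

      def cascade(t, y, stimulus):
          ip3, v = y
          s = stimulus(t)
          d_ip3 = 2.0 * s - 0.5 * ip3               # effector production - degradation
          g = ip3 ** 2 / (ip3 ** 2 + 1.0 ** 2)      # Hill-type channel gating
          d_v = (-(v + 60.0) + 40.0 * g) / 20.0     # leak toward rest + gated current
          return [d_ip3, d_v]

      pulse = lambda t: 1.0 if 10.0 <= t < 12.0 else 0.0   # brief pheromone pulse
      sol = solve_ivp(cascade, (0, 80), [0.0, -60.0], args=(pulse,), max_step=0.05)
      print("peak receptor potential: %.1f mV" % sol.y[1].max())

    The wide dynamic range, dose-dependent channel roles and adaptation of the real model come from the additional channels and feedback loops enumerated above.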

  18. Input-dependent frequency modulation of cortical gamma oscillations shapes spatial synchronization and enables phase coding.

    PubMed

    Lowet, Eric; Roberts, Mark; Hadjipapas, Avgis; Peter, Alina; van der Eerden, Jan; De Weerd, Peter

    2015-02-01

    Fine-scale temporal organization of cortical activity in the gamma range (∼25-80Hz) may play a significant role in information processing, for example by neural grouping ('binding') and phase coding. Recent experimental studies have shown that the precise frequency of gamma oscillations varies with input drive (e.g. visual contrast) and that it can differ among nearby cortical locations. This has challenged theories assuming widespread gamma synchronization at a fixed common frequency. In the present study, we investigated which principles govern gamma synchronization in the presence of input-dependent frequency modulations and whether they are detrimental for meaningful input-dependent gamma-mediated temporal organization. To this aim, we constructed a biophysically realistic excitatory-inhibitory network able to express different oscillation frequencies at nearby spatial locations. Similarly to cortical networks, the model was topographically organized with spatially local connectivity and spatially-varying input drive. We analyzed gamma synchronization with respect to phase-locking, phase-relations and frequency differences, and quantified the stimulus-related information represented by gamma phase and frequency. By stepwise simplification of our models, we found that the gamma-mediated temporal organization could be reduced to basic synchronization principles of weakly coupled oscillators, where input drive determines the intrinsic (natural) frequency of oscillators. The gamma phase-locking, the precise phase relation and the emergent (measurable) frequencies were determined by two principal factors: the detuning (intrinsic frequency difference, i.e. local input difference) and the coupling strength. In addition to frequency coding, gamma phase contained complementary stimulus information. Crucially, the phase code reflected input differences, but not the absolute input level. This property of relative input-to-phase conversion, contrasting with latency codes or slower oscillation phase codes, may resolve conflicting experimental observations on gamma phase coding. Our modeling results offer clear testable experimental predictions. We conclude that input-dependency of gamma frequencies could be essential rather than detrimental for meaningful gamma-mediated temporal organization of cortical activity.
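    The weakly-coupled-oscillator reduction mentioned above has a textbook two-oscillator form (our notation, not the paper's). The phase difference \varphi between two mutually coupled gamma oscillators with detuning \Delta\omega (the input-set difference in intrinsic frequency) and coupling strength K obeys the Adler equation

      \dot{\varphi} = \Delta\omega - 2K \sin\varphi,

    which has a phase-locked solution only if |\Delta\omega| \le 2K, with locked phase lag \varphi^{*} = \arcsin(\Delta\omega / 2K). The lag therefore encodes the difference in input between sites rather than the absolute input level, consistent with the relative input-to-phase conversion described above.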

  19. Structure-Based Sequence Alignment of the Transmembrane Domains of All Human GPCRs: Phylogenetic, Structural and Functional Implications

    PubMed Central

    Cvicek, Vaclav; Goddard, William A.; Abrol, Ravinder

    2016-01-01

    The understanding of G-protein coupled receptors (GPCRs) is undergoing a revolution due to increased information about their signaling and the experimental determination of structures for more than 25 receptors. The availability of at least one receptor structure for each of the GPCR classes, well separated in sequence space, enables an integrated superfamily-wide analysis to identify signatures involving the role of conserved residues, conserved contacts, and downstream signaling in the context of receptor structures. In this study, we align the transmembrane (TM) domains of all experimental GPCR structures to maximize the conserved inter-helical contacts. The resulting superfamily-wide GpcR Sequence-Structure (GRoSS) alignment of the TM domains for all human GPCR sequences is sufficient to generate a phylogenetic tree that correctly distinguishes all different GPCR classes, suggesting that the class-level differences in the GPCR superfamily are encoded at least partly in the TM domains. The inter-helical contacts conserved across all GPCR classes describe the evolutionarily conserved GPCR structural fold. The corresponding structural alignment of the inactive and active conformations, available for a few GPCRs, identifies activation hot-spot residues in the TM domains that get rewired upon activation. Many GPCR mutations, known to alter receptor signaling and cause disease, are located at these conserved contact and activation hot-spot residue positions. The GRoSS alignment places the chemosensory receptor subfamilies for bitter taste (TAS2R) and pheromones (Vomeronasal, VN1R) in the rhodopsin family, known to contain the chemosensory olfactory receptor subfamily. The GRoSS alignment also enables the quantification of the structural variability in the TM regions of experimental structures, useful for homology modeling and structure prediction of receptors. Furthermore, this alignment identifies structurally and functionally important residues in all human GPCRs. These residues can be used to make testable hypotheses about the structural basis of receptor function and about the molecular basis of disease-associated single nucleotide polymorphisms. PMID:27028541

  20. Input-Dependent Frequency Modulation of Cortical Gamma Oscillations Shapes Spatial Synchronization and Enables Phase Coding

    PubMed Central

    Lowet, Eric; Roberts, Mark; Hadjipapas, Avgis; Peter, Alina; van der Eerden, Jan; De Weerd, Peter

    2015-01-01

    Fine-scale temporal organization of cortical activity in the gamma range (∼25–80Hz) may play a significant role in information processing, for example by neural grouping (‘binding’) and phase coding. Recent experimental studies have shown that the precise frequency of gamma oscillations varies with input drive (e.g. visual contrast) and that it can differ among nearby cortical locations. This has challenged theories assuming widespread gamma synchronization at a fixed common frequency. In the present study, we investigated which principles govern gamma synchronization in the presence of input-dependent frequency modulations and whether they are detrimental for meaningful input-dependent gamma-mediated temporal organization. To this aim, we constructed a biophysically realistic excitatory-inhibitory network able to express different oscillation frequencies at nearby spatial locations. Similarly to cortical networks, the model was topographically organized with spatially local connectivity and spatially-varying input drive. We analyzed gamma synchronization with respect to phase-locking, phase-relations and frequency differences, and quantified the stimulus-related information represented by gamma phase and frequency. By stepwise simplification of our models, we found that the gamma-mediated temporal organization could be reduced to basic synchronization principles of weakly coupled oscillators, where input drive determines the intrinsic (natural) frequency of oscillators. The gamma phase-locking, the precise phase relation and the emergent (measurable) frequencies were determined by two principal factors: the detuning (intrinsic frequency difference, i.e. local input difference) and the coupling strength. In addition to frequency coding, gamma phase contained complementary stimulus information. Crucially, the phase code reflected input differences, but not the absolute input level. This property of relative input-to-phase conversion, contrasting with latency codes or slower oscillation phase codes, may resolve conflicting experimental observations on gamma phase coding. Our modeling results offer clear testable experimental predictions. We conclude that input-dependency of gamma frequencies could be essential rather than detrimental for meaningful gamma-mediated temporal organization of cortical activity. PMID:25679780

  1. Quantitative imaging biomarker ontology (QIBO) for knowledge representation of biomedical imaging biomarkers.

    PubMed

    Buckler, Andrew J; Liu, Tiffany Ting; Savig, Erica; Suzek, Baris E; Ouellette, M; Danagoulian, J; Wernsing, G; Rubin, Daniel L; Paik, David

    2013-08-01

    A widening array of novel imaging biomarkers is being developed using ever more powerful clinical and preclinical imaging modalities. These biomarkers have demonstrated effectiveness in quantifying biological processes as they occur in vivo and in the early prediction of therapeutic outcomes. However, quantitative imaging biomarker data and knowledge are not standardized, representing a critical barrier to accumulating medical knowledge based on quantitative imaging data. We use an ontology to represent, integrate, and harmonize heterogeneous knowledge across the domain of imaging biomarkers. This advances the goal of developing applications to (1) improve precision and recall of storage and retrieval of quantitative imaging-related data using standardized terminology; (2) streamline the discovery and development of novel imaging biomarkers by normalizing knowledge across heterogeneous resources; (3) effectively annotate imaging experiments thus aiding comprehension, re-use, and reproducibility; and (4) provide validation frameworks through rigorous specification as a basis for testable hypotheses and compliance tests. We have developed the Quantitative Imaging Biomarker Ontology (QIBO), which currently consists of 488 terms spanning the following upper classes: experimental subject, biological intervention, imaging agent, imaging instrument, image post-processing algorithm, biological target, indicated biology, and biomarker application. We have demonstrated that QIBO can be used to annotate imaging experiments with standardized terms in the ontology and to generate hypotheses for novel imaging biomarker-disease associations. Our results established the utility of QIBO in enabling integrated analysis of quantitative imaging data.

  2. VirtualPlant: A Software Platform to Support Systems Biology Research

    PubMed Central

    Katari, Manpreet S.; Nowicki, Steve D.; Aceituno, Felipe F.; Nero, Damion; Kelfer, Jonathan; Thompson, Lee Parnell; Cabello, Juan M.; Davidson, Rebecca S.; Goldberg, Arthur P.; Shasha, Dennis E.; Coruzzi, Gloria M.; Gutiérrez, Rodrigo A.

    2010-01-01

    Data generation is no longer the limiting factor in advancing biological research. In addition, data integration, analysis, and interpretation have become key bottlenecks and challenges that biologists conducting genomic research face daily. To enable biologists to derive testable hypotheses from the increasing amount of genomic data, we have developed the VirtualPlant software platform. VirtualPlant enables scientists to visualize, integrate, and analyze genomic data from a systems biology perspective. VirtualPlant integrates genome-wide data concerning the known and predicted relationships among genes, proteins, and molecules, as well as genome-scale experimental measurements. VirtualPlant also provides visualization techniques that render multivariate information in visual formats that facilitate the extraction of biological concepts. Importantly, VirtualPlant helps biologists who are not trained in computer science to mine lists of genes, microarray experiments, and gene networks to address questions in plant biology, such as: What are the molecular mechanisms by which internal or external perturbations affect processes controlling growth and development? We illustrate the use of VirtualPlant with three case studies, ranging from querying a gene of interest to the identification of gene networks and regulatory hubs that control seed development. Whereas the VirtualPlant software was developed to mine Arabidopsis (Arabidopsis thaliana) genomic data, its data structures, algorithms, and visualization tools are designed in a species-independent way. VirtualPlant is freely available at www.virtualplant.org. PMID:20007449

  3. NOXclass: prediction of protein-protein interaction types.

    PubMed

    Zhu, Hongbo; Domingues, Francisco S; Sommer, Ingolf; Lengauer, Thomas

    2006-01-19

    Structural models determined by X-ray crystallography play a central role in understanding protein-protein interactions at the molecular level. Interpretation of these models requires the distinction between non-specific crystal packing contacts and biologically relevant interactions. This has been investigated previously and classification approaches have been proposed. However, less attention has been devoted to distinguishing different types of biological interactions. These interactions are classified as obligate and non-obligate according to the effect of the complex formation on the stability of the protomers. So far no automatic classification methods for distinguishing obligate, non-obligate and crystal packing interactions have been made available. Six interface properties have been investigated on a dataset of 243 protein interactions. The six properties have been combined using a support vector machine algorithm, resulting in NOXclass, a classifier for distinguishing obligate, non-obligate and crystal packing interactions. We achieve an accuracy of 91.8% for the classification of these three types of interactions using a leave-one-out cross-validation procedure. NOXclass allows the interpretation and analysis of protein quaternary structures. In particular, it generates testable hypotheses regarding the nature of protein-protein interactions, when experimental results are not available. We expect this server will benefit the users of protein structural models, as well as protein crystallographers and NMR spectroscopists. A web server based on the method and the datasets used in this study are available at http://noxclass.bioinf.mpi-inf.mpg.de/.
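    A minimal sketch of the classifier setup described above: a support vector machine over interface-property vectors, scored by leave-one-out cross-validation. The random matrix below is a placeholder for the six interface properties per interaction, not the NOXclass training data.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import LeaveOneOut, cross_val_score

      rng = np.random.default_rng(4)
      X = rng.normal(size=(243, 6))        # 243 interactions x 6 interface properties
      y = rng.integers(0, 3, size=243)     # obligate / non-obligate / crystal packing

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
      print(f"leave-one-out accuracy: {acc:.3f}")  # ~chance on random placeholders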

  4. An analytical model of non-photorespiratory CO₂ release in the light and dark in leaves of C₃ species based on stoichiometric flux balance.

    PubMed

    Buckley, Thomas N; Adams, Mark A

    2011-01-01

    Leaf respiration continues in the light but at a reduced rate. This inhibition is highly variable, and the mechanisms are poorly known, partly due to the lack of a formal model that can generate testable hypotheses. We derived an analytical model for non-photorespiratory CO₂ release by solving steady-state supply/demand equations for ATP, NADH and NADPH, coupled to a widely used photosynthesis model. We used this model to evaluate causes for suppression of respiration by light. The model agrees with many observations, including highly variable suppression at saturating light, greater suppression in mature leaves, reduced assimilatory quotient (ratio of net CO₂ and O₂ exchange) concurrent with nitrate reduction and a Kok effect (discrete change in quantum yield at low light). The model predicts engagement of non-phosphorylating pathways at moderate to high light, or concurrent with processes that yield ATP and NADH, such as fatty acid or terpenoid synthesis. Suppression of respiration is governed largely by photosynthetic adenylate balance, although photorespiratory NADH may contribute at sub-saturating light. Key questions include the precise diel variation of anabolism and the ATP : 2e⁻ ratio for photophosphorylation. Our model can focus experimental research and is a step towards a fully process-based model of CO₂ exchange. © 2010 Blackwell Publishing Ltd.

  5. The effective application of a discrete transition model to explore cell-cycle regulation in yeast

    PubMed Central

    2013-01-01

    Background Bench biologists often do not take part in the development of computational models for their systems, and therefore, they frequently employ them as “black boxes”. Our aim was to construct and test a model that does not depend on the availability of quantitative data, and can be directly used without a need for an intensive computational background. Results We present a discrete transition model. We used the cell cycle in budding yeast as a paradigm for a complex network, demonstrating phenomena such as sequential protein expression and activity, and cell-cycle oscillation. The structure of the network was validated by its response to computational perturbations such as mutations, and its response to mating pheromone or nitrogen depletion. The model has a strong predictive capability, demonstrating how the activity of a specific transcription factor, Hcm1, is regulated, and what determines commitment of cells to enter and complete the cell-cycle. Conclusion The model presented herein is intuitive, yet is expressive enough to elucidate the intrinsic structure and qualitative behavior of large and complex regulatory networks. Moreover, our model allowed us to examine multiple hypotheses in a simple and intuitive manner, giving rise to testable predictions. This methodology can be easily integrated as a useful approach for the study of networks, enriching experimental biology with computational insights. PMID:23915717
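    A minimal sketch of what a discrete transition model is (the paper's yeast network is far larger, and these three rules are invented): synchronous Boolean updates over a small negative-feedback loop, which already yields sequential activation and sustained oscillation.

      def step(s):
          # Synchronous Boolean update: made-up three-node regulatory rules
          a, b, c = s["A"], s["B"], s["C"]
          return {"A": not c,     # C represses A
                  "B": a,         # A activates B
                  "C": b}         # B activates C

      state = {"A": True, "B": False, "C": False}
      for t in range(8):
          print(t, {k: int(v) for k, v in state.items()})
          state = step(state)
      # The network cycles through a fixed sequence of states (period 6), the
      # discrete analogue of sequential expression and cell-cycle oscillation.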

  6. Microbial Brokers of Insect-Plant Interactions Revisited

    PubMed Central

    Douglas, Angela E

    2013-01-01

    Recent advances in sequencing methods have transformed the field of microbial ecology, making it possible to determine the composition and functional capabilities of uncultured microorganisms. These technologies have been instrumental in the recognition that resident microorganisms can have profound effects on the phenotype and fitness of their animal hosts by modulating the animal signaling networks that regulate growth, development, behavior, etc. Against this backdrop, this review assesses the impact of microorganisms on insect-plant interactions, in the context of the hypothesis that microorganisms are biochemical brokers of plant utilization by insects. There is now overwhelming evidence for a microbial role in insect utilization of certain plant diets with an extremely low or unbalanced nutrient content. Specifically, microorganisms enable insect utilization of plant sap by synthesizing essential amino acids. They also can broker insect utilization of plant products of extremely high lignocellulose content, by enzymatic breakdown of complex plant polysaccharides, nitrogen fixation, and sterol synthesis. However, the experimental evidence for microbial-mediated detoxification of plant allelochemicals is limited. The significance of microorganisms as brokers of plant utilization by insects is predicted to vary, possibly widely, as a result of potentially complex interactions between the composition of the microbiota and the diet and insect developmental age or genotype. For every insect species feeding on plant material, the role of resident microbiota as biochemical brokers of plant utilization is a testable hypothesis. PMID:23793897

  7. The attention schema theory: a mechanistic account of subjective awareness

    PubMed Central

    Graziano, Michael S. A.; Webb, Taylor W.

    2015-01-01

    We recently proposed the attention schema theory, a novel way to explain the brain basis of subjective awareness in a mechanistic and scientifically testable manner. The theory begins with attention, the process by which signals compete for the brain’s limited computing resources. This internal signal competition is partly under a bottom–up influence and partly under top–down control. We propose that the top–down control of attention is improved when the brain has access to a simplified model of attention itself. The brain therefore constructs a schematic model of the process of attention, the ‘attention schema,’ in much the same way that it constructs a schematic model of the body, the ‘body schema.’ The content of this internal model leads a brain to conclude that it has a subjective experience. One advantage of this theory is that it explains how awareness and attention can sometimes become dissociated; the brain’s internal models are never perfect, and sometimes a model becomes dissociated from the object being modeled. A second advantage of this theory is that it explains how we can be aware of both internal and external events. The brain can apply attention to many types of information including external sensory information and internal information about emotions and cognitive states. If awareness is a model of attention, then this model should pertain to the same domains of information to which attention pertains. A third advantage of this theory is that it provides testable predictions. If awareness is the internal model of attention, used to help control attention, then without awareness, attention should still be possible but should suffer deficits in control. In this article, we review the existing literature on the relationship between attention and awareness, and suggest that at least some of the predictions of the theory are borne out by the evidence. PMID:25954242

  8. Linking Microbiota to Human Diseases: A Systems Biology Perspective.

    PubMed

    Wu, Hao; Tremaroli, Valentina; Bäckhed, Fredrik

    2015-12-01

    The human gut microbiota encompasses a densely populated ecosystem that provides essential functions for host development, immune maturation, and metabolism. Alterations to the gut microbiota have been observed in numerous diseases, including human metabolic diseases such as obesity, type 2 diabetes (T2D), and irritable bowel syndrome, and some animal experiments have suggested causality. However, few studies have validated causality in humans and the underlying mechanisms remain largely to be elucidated. We discuss how systems biology approaches combined with new experimental technologies may disentangle some of the mechanistic details in the complex interactions of diet, microbiota, and host metabolism and may provide testable hypotheses for advancing our current understanding of human-microbiota interaction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Is Psychoanalysis a Folk Psychology?

    PubMed Central

    Arminjon, Mathieu

    2013-01-01

    Even as the neuro-psychoanalytic field has matured, the epistemological status of Freudian interpretations remains problematic from a naturalist point of view. As a result of the resurgence of hermeneutics, the claim has been made that psychoanalysis is an extension of folk psychology. For these “extensionists,” asking psychoanalysis to prove its interpretations would be as absurd as demanding proof of the scientific accuracy of folk psychology. I propose to show how Dennett’s theory of the intentional stance allows us to defend an extensionist position while sparing us certain hermeneutic difficulties. In conclusion, I will consider how the experiments of Shevrin et al. (1996) could turn extensionist conceptual considerations into experimentally testable issues. PMID:23525879

  10. Multicomponent Gas Diffusion and an Appropriate Momentum Boundary Condition

    NASA Technical Reports Server (NTRS)

    Noever, David A.

    1994-01-01

    Multicomponent gas diffusion is reviewed with particular emphasis on gas flows near solid boundaries, the so-called Kramers-Kistemaker effect. The aim is to derive an appropriate momentum boundary condition which governs many gaseous species diffusing together. The many-species generalization of the traditional single-gas condition, either as slip or stick (no-slip), is not obvious, particularly for the technologically important cases of lower gas pressures and very dissimilar molecular weight gases. No convincing theoretical case exists for why two gases should interact with solid boundaries equally but in opposite flow directions, such that the total gas flow exactly vanishes. In this way, the multicomponent no-slip boundary requires careful treatment. The approaches discussed here generally adopt a microscopic model for gas-solid contact. The method has the advantage that the mathematics remain tractable and hence experimentally testable. Two new proposals are put forward, the first building in some molecular collision physics, the second drawing on a detailed view of surface diffusion which does not unphysically extrapolate bulk gas properties to govern the adsorbed molecules. The outcome is a better accounting of previously anomalous experiments. Models predict novel slip conditions appearing even for the case of equal molecular weight components. These approaches become particularly significant in view of a conceptual contradiction found to arise in previous derivations of the appropriate boundary conditions. The analogous case of three gases, one of which is uniformly distributed and hence non-diffusing, presents a further refinement which gives unexpected flow reversals near solid boundaries. This case is investigated alone and for aggregating gas species near their condensation point. In addition to predicting new physics, this investigation carries practical implications for controlling vapor diffusion in the growth of crystals used in medical diagnosis (e.g. mercuric iodide) and semiconductors.

  11. Predicting Adverse Drug Effects from Literature- and Database-Mined Assertions.

    PubMed

    La, Mary K; Sedykh, Alexander; Fourches, Denis; Muratov, Eugene; Tropsha, Alexander

    2018-06-06

    Given that adverse drug effects (ADEs) have led to post-market patient harm and subsequent drug withdrawal, failure of candidate agents in the drug development process, and other negative outcomes, it is essential to attempt to forecast ADEs and other relevant drug-target-effect relationships as early as possible. Current pharmacologic data sources, providing multiple complementary perspectives on the drug-target-effect paradigm, can be integrated to facilitate the inference of relationships between these entities. This study aims to identify both existing and unknown relationships between chemicals (C), protein targets (T), and ADEs (E) based on evidence in the literature. Cheminformatics and data mining approaches were employed to integrate and analyze publicly available clinical pharmacology data and literature assertions interrelating drugs, targets, and ADEs. Based on these assertions, a C-T-E relationship knowledge base was developed. Known pairwise relationships between chemicals, targets, and ADEs were collected from several pharmacological and biomedical data sources. These relationships were curated and integrated according to Swanson's paradigm to form C-T-E triangles. Missing C-E edges were then inferred as candidate C-E relationships. Unreported associations between drugs, targets, and ADEs were inferred, and inferences were prioritized as testable hypotheses. Several C-E inferences, including testosterone → myocardial infarction, were identified from literature sources published prior to confirmatory case reports. Timestamping approaches confirmed the predictive ability of this inference strategy on a larger scale. The presented workflow, based on free-access databases and an association-based inference scheme, provided novel C-E relationships that have been validated post hoc in case reports. With refinement of prioritization schemes for the generated C-E inferences, this workflow may provide an effective computational method for the early detection of potential drug candidate ADEs that can be followed by targeted experimental investigations.
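
    The triangle-closing step above is simple enough to sketch: known C-T and T-E assertions are joined on their shared target to propose unreported C-E edges, which are then ranked by the number of mediating targets. All names below are hypothetical placeholders, and the support count is a simple stand-in for the paper's prioritization scheme.

      from collections import defaultdict

      # Hypothetical pairwise assertions mined from literature/databases.
      chem_target = {("drugA", "T1"), ("drugA", "T2"), ("drugB", "T2")}
      target_effect = {("T1", "effectX"), ("T2", "effectX"), ("T2", "effectY")}
      known_chem_effect = {("drugB", "effectY")}  # C-E edges already reported

      targets_of = defaultdict(set)
      for c, t in chem_target:
          targets_of[c].add(t)
      effects_of = defaultdict(set)
      for t, e in target_effect:
          effects_of[t].add(e)

      # Close C-T-E triangles; any C-E edge not already known is an inference.
      support = defaultdict(set)
      for c, ts in targets_of.items():
          for t in ts:
              for e in effects_of[t]:
                  if (c, e) not in known_chem_effect:
                      support[(c, e)].add(t)

      # Prioritize inferences by the number of mediating targets.
      for (c, e), ts in sorted(support.items(), key=lambda kv: -len(kv[1])):
          print(f"{c} -> {e}  (supported by {len(ts)} target(s): {sorted(ts)})")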

  12. How drugs get into cells: tested and testable predictions to help discriminate between transporter-mediated uptake and lipoidal bilayer diffusion

    PubMed Central

    Kell, Douglas B.; Oliver, Stephen G.

    2014-01-01

    One approach to experimental science involves creating hypotheses, then testing them by varying one or more independent variables, and assessing the effects of this variation on the processes of interest. We use this strategy to compare the intellectual status and available evidence for two models or views of mechanisms of transmembrane drug transport into intact biological cells. One (BDII) asserts that lipoidal phospholipid Bilayer Diffusion Is Important, while a second (PBIN) proposes that in normal intact cells Phospholipid Bilayer diffusion Is Negligible (i.e., may be neglected quantitatively), because evolution selected against it, and with transmembrane drug transport being effected by genetically encoded proteinaceous carriers or pores, whose “natural” biological roles and substrates are based in intermediary metabolism. Despite a recent review elsewhere, we can find no evidence able to support BDII, since we can find no experiments in intact cells in which phospholipid bilayer diffusion was either varied independently or measured directly (although there are many papers where it was inferred by seeing a covariation of other dependent variables). By contrast, we find an abundance of evidence showing cases in which changes in the activities of named and genetically identified transporters led to measurable changes in the rate or extent of drug uptake. PBIN also has considerable predictive power: it accounts readily for the large differences in drug uptake between tissues, cells and species, for the metabolite-likeness of marketed drugs, and for findings in pharmacogenomics, and it provides a straightforward explanation for the late-stage appearance of toxicity and lack of efficacy during drug discovery programmes despite macroscopically adequate pharmacokinetics. Consequently, the view that Phospholipid Bilayer diffusion Is Negligible (PBIN) provides a starting hypothesis for assessing cellular drug uptake that is much better supported by the available evidence, and is both more productive and more predictive. PMID:25400580

  13. Symmetry in locomotor central pattern generators and animal gaits

    NASA Astrophysics Data System (ADS)

    Golubitsky, Martin; Stewart, Ian; Buono, Pietro-Luciano; Collins, J. J.

    1999-10-01

    Animal locomotion is controlled, in part, by a central pattern generator (CPG), which is an intraspinal network of neurons capable of generating a rhythmic output. The spatio-temporal symmetries of the quadrupedal gaits walk, trot and pace lead to plausible assumptions about the symmetries of locomotor CPGs. These assumptions imply that the CPG of a quadruped should consist of eight nominally identical subcircuits, arranged in an essentially unique manner. Here we apply analogous arguments to myriapod CPGs. Analyses based on symmetry applied to these networks lead to testable predictions, including a distinction between primary and secondary gaits, the existence of a new primary gait called `jump', and the occurrence of half-integer wave numbers in myriapod gaits. For bipeds, our analysis also predicts two gaits with the out-of-phase symmetry of the walk and two gaits with the in-phase symmetry of the hop. We present data that support each of these predictions. This work suggests that symmetry can be used to infer a plausible class of CPG network architectures from observed patterns of animal gaits.
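
    The symmetry arguments rest on assignments of phase offsets (fractions of the gait cycle) to the legs. The sketch below tabulates idealized quadruped phase patterns as commonly quoted in this literature (illustrative values, not taken from this paper) and checks which gaits are invariant under a left-right swap, either exactly or up to a half-cycle shift, the kind of signature used to constrain CPG network architecture.

      # Idealized phase offsets (fraction of gait cycle) for each leg:
      # LF/RF = left/right forelimb, LH/RH = left/right hindlimb.
      gaits = {
          "walk":  {"LF": 0.0, "RF": 0.5, "LH": 0.75, "RH": 0.25},
          "trot":  {"LF": 0.0, "RF": 0.5, "LH": 0.5,  "RH": 0.0},  # diagonal pairs in phase
          "pace":  {"LF": 0.0, "RF": 0.5, "LH": 0.0,  "RH": 0.5},  # same-side pairs in phase
          "bound": {"LF": 0.0, "RF": 0.0, "LH": 0.5,  "RH": 0.5},  # fore pair vs hind pair
      }

      SWAP = {"LF": "RF", "RF": "LF", "LH": "RH", "RH": "LH"}

      def left_right_symmetry(phases):
          """Classify how a phase pattern transforms under a left-right swap:
          unchanged (identity) or shifted by half a cycle."""
          swapped = {leg: phases[SWAP[leg]] for leg in phases}
          identity = all(abs(phases[l] - swapped[l]) < 1e-9 for l in phases)
          half = all(abs((phases[l] - swapped[l]) % 1.0 - 0.5) < 1e-9 for l in phases)
          return "identity" if identity else ("half-cycle shift" if half else "broken")

      for name, phases in gaits.items():
          print(f"{name:5s}: L-R swap acts as {left_right_symmetry(phases)}")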

  14. Polarization modeling and predictions for Daniel K. Inouye Solar Telescope part 1: telescope and example instrument configurations

    NASA Astrophysics Data System (ADS)

    Harrington, David M.; Sueoka, Stacey R.

    2017-01-01

    We outline polarization performance calculations and predictions for the Daniel K. Inouye Solar Telescope (DKIST) optics and show Mueller matrices for two of the first light instruments. Telescope polarization is due to polarization-dependent mirror reflectivity and rotations between groups of mirrors as the telescope moves in altitude and azimuth. The Zemax optical modeling software has polarization ray-trace capabilities and predicts system performance given a coating prescription. We develop a model coating formula that approximates measured witness sample polarization properties. Estimates show the DKIST telescope Mueller matrix as functions of wavelength, azimuth, elevation, and field angle for the cryogenic near infra-red spectro-polarimeter (CryoNIRSP) and visible spectro-polarimeter. Footprint variation is substantial and shows vignetted field points will have strong polarization effects. We estimate 2% variation of some Mueller matrix elements over the 5-arc min CryoNIRSP field. We validate the Zemax model by showing limiting cases for flat mirrors in collimated and powered designs that compare well with theoretical approximations and are testable with lab ellipsometers.
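
    The underlying bookkeeping is matrix composition: each mirror contributes a Mueller matrix, and rotations between mirror groups (as the telescope tracks in altitude and azimuth) enter as Mueller rotation matrices. A minimal sketch with invented s/p reflectances and retardance (placeholders, not DKIST coating values) shows how a composed matrix, and hence terms such as the I-to-Q element, varies with a pointing angle.

      import numpy as np

      def rot(theta):
          """Mueller rotation matrix for rotating the polarization frame by theta."""
          c, s = np.cos(2 * theta), np.sin(2 * theta)
          return np.array([[1, 0, 0, 0],
                           [0, c, s, 0],
                           [0, -s, c, 0],
                           [0, 0, 0, 1]])

      def mirror(rs, rp, delta):
          """Mueller matrix of a flat mirror with s/p intensity reflectances rs, rp
          and retardance delta (standard diattenuator-plus-retarder form; sign
          conventions vary between references)."""
          a, b, g = (rs + rp) / 2, (rs - rp) / 2, np.sqrt(rs * rp)
          return np.array([[a, b, 0, 0],
                           [b, a, 0, 0],
                           [0, 0, g * np.cos(delta), g * np.sin(delta)],
                           [0, 0, -g * np.sin(delta), g * np.cos(delta)]])

      # Two mirror groups with a relative rotation (e.g., an azimuth axis) between them.
      M1 = mirror(0.96, 0.94, np.radians(175.0))  # placeholder coating values
      M2 = mirror(0.96, 0.94, np.radians(175.0))
      for az in (0.0, 30.0, 60.0):
          M = M2 @ rot(np.radians(az)) @ M1       # Mueller matrices compose right-to-left
          print(f"azimuth {az:4.0f} deg: I->Q term {M[1, 0] / M[0, 0]:+.4f}")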

  15. When should we expect early bursts of trait evolution in comparative data? Predictions from an evolutionary food web model.

    PubMed

    Ingram, T; Harmon, L J; Shurin, J B

    2012-09-01

    Conceptual models of adaptive radiation predict that competitive interactions among species will result in an early burst of speciation and trait evolution followed by a slowdown in diversification rates. Empirical studies often show early accumulation of lineages in phylogenetic trees, but usually fail to detect early bursts of phenotypic evolution. We use an evolutionary simulation model to assemble food webs through adaptive radiation, and examine patterns in the resulting phylogenetic trees and species' traits (body size and trophic position). We find that when foraging trade-offs result in food webs where all species occupy integer trophic levels, lineage diversity and trait disparity are concentrated early in the tree, consistent with the early burst model. In contrast, in food webs in which many omnivorous species feed at multiple trophic levels, high levels of turnover of species' identities and traits tend to eliminate the early burst signal. These results suggest testable predictions about how the niche structure of ecological communities may be reflected by macroevolutionary patterns. © 2012 The Authors. Journal of Evolutionary Biology © 2012 European Society For Evolutionary Biology.

  16. Density-dependent recruitment rates in great tits: the importance of being heavier

    PubMed Central

    Both, C.; Visser, M. E.; Verboven, N.

    1999-01-01

    In birds, individuals with a higher mass at fledging have a higher probability of recruiting into the breeding population. This can be because mass is an indicator of general condition and thereby of the ability to survive adverse circumstances, and/or because fledging mass is positively related to competitive strength in interactions with other fledglings. The latter explanation leads to two testable predictions: (i) there is stronger selection for fledging mass when competition is more severe (i.e., at higher densities); and (ii) besides absolute fledging mass, the relative mass of fledglings within a cohort is important. We test these two predictions in two great tit (Parus major) populations. The first prediction was met for one of the populations, showing that competition affects the importance of mass-dependent recruitment. The second prediction, that fledglings recruit relatively well if they are heavy compared to the other fledglings, was met for both populations. The consequence of the importance of relative rather than absolute fledging mass is that the fitness consequences of reproductive decisions affecting fledging mass, such as clutch size, depend on the decisions of the other individuals in the population.

  17. A General, Synthetic Model for Predicting Biodiversity Gradients from Environmental Geometry.

    PubMed

    Gross, Kevin; Snyder-Beattie, Andrew

    2016-10-01

    Latitudinal and elevational biodiversity gradients fascinate ecologists, and have inspired dozens of explanations. The geometry of the abiotic environment is sometimes thought to contribute to these gradients, yet evaluations of geometric explanations are limited by a fragmented understanding of the diversity patterns they predict. This article presents a mathematical model that synthesizes multiple pathways by which environmental geometry can drive diversity gradients. The model characterizes species ranges by their environmental niches and limits on range sizes and places those ranges onto the simplified geometries of a sphere or cone. The model predicts nuanced and realistic species-richness gradients, including latitudinal diversity gradients with tropical plateaus and mid-latitude inflection points and elevational diversity gradients with low-elevation diversity maxima. The model also illustrates the importance of a mid-environment effect that augments species richness at locations with intermediate environments. Model predictions match multiple empirical biodiversity gradients, depend on ecological traits in a testable fashion, and formally synthesize elements of several geometric models. Together, these results suggest that previous assessments of geometric hypotheses should be reconsidered and that environmental geometry may play a deeper role in driving biodiversity gradients than is currently appreciated.
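
    The model's core mechanism, ranges defined by an environmental niche plus a cap on geographic range size, placed on a simplified geometry, is easy to caricature. The toy simulation below uses a latitude-dependent gradient and arbitrary parameter values; it illustrates the mechanism (including the tendency for richness to pile up at intermediate environments) rather than reproducing the article's actual model.

      import numpy as np

      rng = np.random.default_rng(0)
      lats = np.linspace(-90, 90, 181)          # 1-degree latitude bands
      env = np.cos(np.radians(lats))            # toy gradient: warm equator, cold poles

      n_species, niche_width, max_range = 2000, 0.08, 40.0  # arbitrary values
      richness = np.zeros_like(lats)
      for _ in range(n_species):
          mu = rng.uniform(0, 1)                # niche center in environment space
          suitable = np.abs(env - mu) < niche_width
          if not suitable.any():
              continue                          # niche realized nowhere on the globe
          # Range-size limit: keep a contiguous window around one suitable latitude.
          center = rng.choice(lats[suitable])
          richness += suitable & (np.abs(lats - center) < max_range / 2)

      for lo in range(0, 180, 30):              # coarse latitudinal summary
          band = slice(lo, lo + 30)
          print(f"lat {lats[band][0]:+4.0f}..{lats[band][-1]:+4.0f}: "
                f"mean richness {richness[band].mean():6.1f}")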

  18. Earthquake prediction evaluation standards applied to the VAN Method

    NASA Astrophysics Data System (ADS)

    Jackson, David D.

    Earthquake prediction research must meet certain standards before it can be suitably evaluated for potential application in decision making. For methods that result in a binary (on or off) alarm condition, requirements include (1) a quantitative description of observables that trigger an alarm, (2) a quantitative description, including ranges of time, location, and magnitude, of the predicted earthquakes, (3) documented evidence of all previous alarms, (4) a complete list of predicted earthquakes, and (5) a complete list of unpredicted earthquakes. The VAN technique [Varotsos and Lazaridou, 1991; Varotsos et al., 1996] has not yet been stated as a testable hypothesis. It fails criteria (1) and (2), so it is not ready to be evaluated properly. Although telegrams were transmitted in advance of claimed successes, these telegrams did not fully specify the predicted events, and all of the published statistical evaluations involve many subjective ex post facto decisions. Lacking a statistically demonstrated relationship to earthquakes, a candidate prediction technique should satisfy several plausibility criteria, including (1) a reasonable relationship between the location of the candidate precursor and that of the predicted earthquake, (2) some demonstration that the candidate precursory observations are related to stress, strain, or other quantities related to earthquakes, and (3) the existence of co-seismic as well as pre-seismic variations of the candidate precursor. The VAN technique meets none of these criteria.

  19. Premises for fowl sperm preservation based on applied bioenergetics.

    PubMed

    Froman, D P

    2014-02-01

    The primary goal of this work was to test whether the sperm mobility assay could be used to derive mathematical relationships from which predictions could be made about sperm cell function. A precondition was random sampling from a pool of sperm. This precondition was met by centrifuging mobile sperm through 12% (wt/vol) Accudenz containing the Ca(2+) chelator 1,2-bis-(o-aminophenoxy)ethane-N,N,N',N'-tetraacetic acid (BAPTA) and then holding washed sperm at 20°C within buffered potassium chloride. These 2 conditions rendered washed sperm immobile at 20°C. Resumption of sperm mobility was independent of time (P > 0.8558) when sperm were reactivated at body temperature with 2 mM Ca(2+) in isotonic sodium chloride at pH 7.4. Reactivated sperm mobility was 93% of the prewash control. Subsequent experiments served to define a dose response, predict optimal conditions for in vitro sperm mobility, and show how sperm can recover from an imposed non-physiological condition. Thus, functions were derived from which predictions were made. Whereas the utility of BAPTA treatment was confirmed in a new context, such utility did not address the question of whole-cell Ca(2+) flux during sperm cell manipulation. This issue is pivotal for the application of bioenergetics to fowl sperm preservation. Therefore, the secondary goal of this research was to investigate sperm cell Ca(2+) flux using a simulation of conditions encountered by sperm during centrifugation through 12% (wt/vol) Accudenz. These conditions included a temperature of 30°C, a Ca(2+) sink, and no exogenous substrate. Sperm motion was measured with a Hobson SpermTracker. Data points conformed to parabolic functions when motile concentration and velocity were plotted as functions of time. In each case, maximums were observed, e.g., 26 min for motile concentration. The upswing was attributed to a redistribution of intracellular Ca(2+) whereas the downswing was attributed to sperm cell Ca(2+) depletion. A pronounced isothermal increase was observed for each variable when the Ca(2+) sink was overcome with exogenous Ca(2+). Experimental outcomes supported four testable premises applicable to fowl sperm preservation research: 1) the importance of sperm mobility phenotype, 2) the relationship between mitochondrial Ca(2+) cycling and sperm mobility, 3) the utility of the sperm mobility assay for predicting experimental outcomes, and 4) understanding mitochondrial Ca(2+) cycling in terms of whole-cell Ca(2+) flux.
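
    Extracting a maximum such as the 26 min figure above from a parabolic time course is a short worked example: fit a quadratic y ≈ a·t² + b·t + c and read the vertex off at t = -b/(2a). The data below are synthetic, generated only to illustrate the fit.

      import numpy as np

      # Synthetic (time [min], motile concentration [arbitrary units]) points,
      # shaped like the parabolic response described in the abstract.
      rng = np.random.default_rng(1)
      t = np.array([5, 10, 15, 20, 25, 30, 35, 40, 45], dtype=float)
      y = -0.04 * (t - 26.0) ** 2 + 80.0 + rng.normal(0, 1.0, t.size)

      a, b, c = np.polyfit(t, y, deg=2)   # least-squares quadratic fit
      t_max = -b / (2 * a)                # vertex of the fitted parabola
      print(f"fitted maximum at t = {t_max:.1f} min, "
            f"value = {np.polyval([a, b, c], t_max):.1f}")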

  20. Modeling the eco-physiology of the purple mauve stinger, Pelagia noctiluca using Dynamic Energy Budget theory

    NASA Astrophysics Data System (ADS)

    Augustine, Starrlight; Rosa, Sara; Kooijman, Sebastiaan A. L. M.; Carlotti, François; Poggiale, Jean-Christophe

    2014-11-01

    Parameters for the standard Dynamic Energy Budget (DEB) model were estimated for the purple mauve stinger, Pelagia noctiluca, using literature data. Overall, the model predictions are in good agreement with data covering the full life-cycle. The parameter set we obtain suggests that P. noctiluca is well adapted to survive long periods of starvation, since the predicted maximum reserve capacity is extremely high. Moreover, we predict that the reproductive output of larger individuals is relatively insensitive to changes in food level, whereas wet mass and length are sensitive to such changes. Furthermore, the parameters imply that even if food were scarce (ingestion levels only 14% of the maximum for a given size), an individual would still mature and be able to reproduce. We present detailed model predictions for embryo development and discuss the developmental energetics of the species, such as the fact that the metabolism of ephyrae accelerates for several days after birth. Finally, we explore a number of concrete testable model predictions which will help to guide future research. The application of DEB theory to the collected data allowed us to conclude that P. noctiluca combines maximizing allocation to reproduction with rather extreme capabilities to survive starvation. The combination of these properties might explain why P. noctiluca is a rapidly growing concern to fisheries and tourism.

  1. Dynamical Properties of a Living Nematic

    NASA Astrophysics Data System (ADS)

    Genkin, Mikhail

    Systems made of a large number of interacting particles or agents that convert energy stored in the environment into mechanical motion are called active systems, or active matter. Examples of active matter include both living and synthetic systems. The size of the agents varies significantly: bird flocks and fish schools represent macroscopic active systems, while suspensions of living organisms or artificial colloidal particles are examples of microscopic ones. In this work, I studied one of the simplest realizations of active matter, termed living (or active) nematics, which can be conceived by mixing swimming bacteria and nematic liquid crystal. Using modeling, numerical simulations, and experiments, I studied various dynamical properties of active nematics. This work hints at new methods of control and manipulation of active matter. An active nematic exhibits complex spatiotemporal behavior manifested by formation, proliferation, and annihilation of topological defects. A new computational 2D model coupling nematic liquid crystal and swimming bacteria dynamics has been proposed. We investigated the developed system of partial differential equations analytically and integrated it numerically using a highly efficient parallel GPU code. The integration results are in very good agreement with other theoretical and experimental studies. In addition, our model revealed a number of testable phenomena. The major model prediction (bacteria accumulation in positive and depletion in negative topological defects) was tested by a dedicated experiment. We extended our model to study active nematics in a biphasic state, where nematic and isotropic phases coexist. Typically this coexistence is manifested by the formation of tactoids: isotropic elongated regions surrounded by nematic phase, or nematic regions surrounded by isotropic phase. Using numerical integration, we revealed fundamental properties of such systems. Our main model outcome, spontaneous negative charging of isotropic-nematic interfaces, was confirmed by experiment. The modeling and experimental results are in very good qualitative and quantitative agreement. Finally, we studied living nematics experimentally. We worked with the swimming bacterium Bacillus subtilis suspended in disodium cromoglycate (DSCG) liquid crystal. Using cylindrical confinement, we were able to observe quantization of the nematic's bending instability. Our experimental results revealed a complex interplay between bacterial self-propulsion and nematic elasticity in the presence of cylindrical confinements of different sizes.

  2. Emotional foundations of cognitive control.

    PubMed

    Inzlicht, Michael; Bartholow, Bruce D; Hirsh, Jacob B

    2015-03-01

    Often seen as the paragon of higher cognition, here we suggest that cognitive control is dependent on emotion. Rather than asking whether control is influenced by emotion, we ask whether control itself can be understood as an emotional process. Reviewing converging evidence from cybernetics, animal research, cognitive neuroscience, and social and personality psychology, we suggest that cognitive control is initiated when goal conflicts evoke phasic changes to emotional primitives that both focus attention on the presence of goal conflicts and energize conflict resolution to support goal-directed behavior. Critically, we propose that emotion is not an inert byproduct of conflict but is instrumental in recruiting control. Appreciating the emotional foundations of control leads to testable predictions that can spur future research. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Initial eccentricity fluctuations and their relation to higher-order flow harmonics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lacey, R.; Wei,R.; Jia,J.

    2011-06-01

    Monte Carlo simulations are used to compute the centrality dependence of the participant eccentricities ε_n in Au+Au collisions for the two primary models currently employed for eccentricity estimates: the Glauber and the factorized Kharzeev-Levin-Nardi (fKLN) models. They suggest specific testable predictions for the magnitude and centrality dependence of the flow coefficients v_n, respectively measured relative to the event planes Ψ_n. They also indicate that the ratios of several of these coefficients may provide an additional constraint for distinguishing between the models. Such a constraint could be important for a more precise determination of the specific viscosity of the matter produced in heavy ion collisions.
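
    A bare-bones Monte Carlo Glauber estimate of the participant eccentricities is short enough to sketch: sample nucleon positions in each nucleus, flag participants by a transverse-distance criterion set by the nucleon-nucleon cross-section, and evaluate ε_n from the recentered participant distribution. Hard-sphere nuclei and a single event stand in here for the Woods-Saxon profiles and event ensembles of a production calculation; the r²-weighted convention for ε_n is used.

      import numpy as np

      rng = np.random.default_rng(42)
      A, R, sigma_nn = 197, 6.5, 4.2     # Au mass number, radius [fm], NN cross-section [fm^2]
      d_max = np.sqrt(sigma_nn / np.pi)  # nucleons interact if closer than this
      b = 8.0                            # impact parameter [fm]

      def nucleus(x_offset):
          """Transverse nucleon positions, sampled uniformly in a hard sphere
          (a toy stand-in for a Woods-Saxon profile)."""
          pts = []
          while len(pts) < A:
              p = rng.uniform(-R, R, 3)
              if p @ p < R * R:
                  pts.append(p[:2] + np.array([x_offset, 0.0]))
          return np.array(pts)

      na, nb = nucleus(-b / 2), nucleus(+b / 2)
      d = np.linalg.norm(na[:, None, :] - nb[None, :, :], axis=2)
      part = np.concatenate([na[(d < d_max).any(axis=1)], nb[(d < d_max).any(axis=0)]])

      part -= part.mean(axis=0)          # recenter on the participant plane
      r2 = (part ** 2).sum(axis=1)
      phi = np.arctan2(part[:, 1], part[:, 0])
      for n in (2, 3, 4):
          eps = np.hypot((r2 * np.cos(n * phi)).mean(),
                         (r2 * np.sin(n * phi)).mean()) / r2.mean()
          print(f"eps_{n} = {eps:.3f}")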

  4. Fast proton decay

    NASA Astrophysics Data System (ADS)

    Li, Tianjun; Nanopoulos, Dimitri V.; Walker, Joel W.

    2010-10-01

    We consider proton decay in the testable flipped SU(5)×U(1)X models with TeV-scale vector-like particles which can be realized in free fermionic string constructions and F-theory model building. We significantly improve upon the determination of light threshold effects from prior studies, and perform a fresh calculation of the second loop for the process p→eπ from the heavy gauge boson exchange. The cumulative result is comparatively fast proton decay, with a majority of the most plausible parameter space within reach of the future Hyper-Kamiokande and DUSEL experiments. Because the TeV-scale vector-like particles can be produced at the LHC, we predict a strong correlation between the most exciting particle physics experiments of the coming decade.

  5. Emotional foundations of cognitive control

    PubMed Central

    Inzlicht, Michael; Bartholow, Bruce D.; Hirsh, Jacob B.

    2015-01-01

    Often seen as the paragon of higher cognition, here we suggest that cognitive control is dependent on emotion. Rather than asking whether control is influenced by emotion, we ask whether control itself can be understood as an emotional process. Reviewing converging evidence from cybernetics, animal research, cognitive neuroscience, and social and personality psychology, we suggest that cognitive control is initiated when goal conflicts evoke phasic changes to emotional primitives that both focus attention on the presence of goal conflicts and energize conflict resolution to support goal-directed behavior. Critically, we propose that emotion is not an inert byproduct of conflict but is instrumental in recruiting control. Appreciating the emotional foundations of control leads to testable predictions that can spur future research. PMID:25659515

  6. Metabolic network flux analysis for engineering plant systems.

    PubMed

    Shachar-Hill, Yair

    2013-04-01

    Metabolic network flux analysis (NFA) tools have proven themselves to be powerful aids to metabolic engineering of microbes by providing quantitative insights into the flows of material and energy through cellular systems. The development and application of NFA tools for plant systems have advanced in recent years and are yielding significant insights and testable predictions. Plants present substantial opportunities for the practical application of NFA, but they also pose serious challenges related to the complexity of plant metabolic networks and to deficiencies in our knowledge of their structure and regulation. By considering the tools available and selected examples, this article attempts to assess where and how NFA is most likely to have a real impact on plant biotechnology. Copyright © 2013 Elsevier Ltd. All rights reserved.
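
    For readers unfamiliar with NFA, its flux-balance variant reduces to a linear program: choose fluxes v that maximize an objective subject to steady-state mass balance S·v = 0 and capacity bounds on each reaction. The three-reaction network below is invented purely for illustration.

      import numpy as np
      from scipy.optimize import linprog

      # Toy network: uptake (ext -> A), conversion (A -> B), biomass drain (B ->).
      # Rows = internal metabolites (A, B); columns = reactions.
      S = np.array([[1, -1, 0],    # A: made by uptake, consumed by conversion
                    [0, 1, -1]])   # B: made by conversion, consumed by biomass
      bounds = [(0, 10), (0, None), (0, None)]   # uptake capacity capped at 10

      # linprog minimizes, so negate the biomass flux to maximize it.
      res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
      print("optimal fluxes:", np.round(res.x, 3))  # mass balance forces all three to 10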

  7. Supermassive Black Holes and Galaxy Evolution

    NASA Technical Reports Server (NTRS)

    Merritt, D.

    2004-01-01

    Supermassive black holes appear to be generic components of galactic nuclei. The formation and growth of black holes is intimately connected with the evolution of galaxies on a wide range of scales. For instance, mergers between galaxies containing nuclear black holes would produce supermassive binaries which eventually coalesce via the emission of gravitational radiation. The formation and decay of these binaries is expected to produce a number of observable signatures in the stellar distribution. Black holes can also affect the large-scale structure of galaxies by perturbing the orbits of stars that pass through the nucleus. Large-scale N-body simulations are beginning to generate testable predictions about these processes which will allow us to draw inferences about the formation history of supermassive black holes.

  8. Majorana dark matter with B+L gauge symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, Wei; Guo, Huai-Ke; Zhang, Yongchao

    Here, we present a new model that extends the Standard Model (SM) with the local B + L symmetry, and point out that the lightest new fermion, introduced to cancel anomalies and stabilized automatically by the B + L symmetry, can serve as the cold dark matter candidate. We also study constraints on the model from Higgs measurements, electroweak precision measurements, and the relic density and direct detection of the dark matter. Our numerical results reveal that the pseudo-vector coupling with Z and the Yukawa coupling with the SM Higgs are highly constrained by the latest results of LUX, while there is viable parameter space that can satisfy all the constraints and give testable predictions.

  9. Majorana dark matter with B+L gauge symmetry

    DOE PAGES

    Chao, Wei; Guo, Huai-Ke; Zhang, Yongchao

    2017-04-07

    Here, we present a new model that extends the Standard Model (SM) with the local B + L symmetry, and point out that the lightest new fermion, introduced to cancel anomalies and stabilized automatically by the B + L symmetry, can serve as the cold dark matter candidate. We also study constraints on the model from Higgs measurements, electroweak precision measurements, and the relic density and direct detection of the dark matter. Our numerical results reveal that the pseudo-vector coupling with Z and the Yukawa coupling with the SM Higgs are highly constrained by the latest results of LUX, while there is viable parameter space that can satisfy all the constraints and give testable predictions.

  10. Domain generality vs. modality specificity: The paradox of statistical learning

    PubMed Central

    Frost, Ram; Armstrong, Blair C.; Siegelman, Noam; Christiansen, Morten H.

    2015-01-01

    Statistical learning is typically considered to be a domain-general mechanism by which cognitive systems discover the underlying distributional properties of the input. Recent studies examining whether there are commonalities in the learning of distributional information across different domains or modalities consistently reveal, however, modality and stimulus specificity. An important question is, therefore, how and why a hypothesized domain-general learning mechanism systematically produces such effects. We offer a theoretical framework according to which statistical learning is not a unitary mechanism but a set of domain-general computational principles that operate in different modalities and are therefore subject to the specific constraints characteristic of their respective brain regions. This framework offers testable predictions and we discuss its computational and neurobiological plausibility. PMID:25631249

  11. Saccadic vector optokinetic perimetry in children with neurodisability or isolated visual pathway lesions: observational cohort study.

    PubMed

    Tailor, Vijay; Glaze, Selina; Unwin, Hilary; Bowman, Richard; Thompson, Graham; Dahlmann-Noor, Annegret

    2016-10-01

    Children and adults with neurological impairments are often not able to access conventional perimetry; however, information about the visual field is valuable. A new technology, saccadic vector optokinetic perimetry (SVOP), may have improved accessibility, but its accuracy has not been evaluated. We aimed to explore accessibility, testability and accuracy of SVOP in children with neurodisability or isolated visual pathway deficits. Cohort study; recruitment October 2013-May 2014, at children's eye clinics at a tertiary referral centre and a regional Child Development Centre; full orthoptic assessment, SVOP (central 30° of the visual field) and confrontation visual fields (CVF). Group 1: age 1-16 years, neurodisability (n=16), group 2: age 10-16 years, confirmed or suspected visual field defect (n=21); group 2 also completed Goldmann visual field testing (GVFT). Group 1: testability with a full 40-point test protocol is 12.5%; with reduced test protocols, testability is 100%, but plots may be clinically meaningless. Children (44%) and parents/carers (62.5%) find the test easy. SVOP and CVF agree in 50%. Group 2: testability is 62% for the 40-point protocol, and 90.5% for reduced protocols. Corneal changes in childhood glaucoma interfere with SVOP testing. All children and parents/carers find SVOP easy. Overall agreement with GVFT is 64.7%. While SVOP is highly accessible to children, many cannot complete a full 40-point test. Agreement with current standard tests is moderate to poor. Abnormal saccades cause an apparent non-specific visual field defect. In children with glaucoma or nystagmus SVOP calibration often fails. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  12. Pollinator-driven ecological speciation in plants: new evidence and future perspectives

    PubMed Central

    Van der Niet, Timotheüs; Peakall, Rod; Johnson, Steven D.

    2014-01-01

    Background The hypothesis that pollinators have been important drivers of angiosperm diversity dates back to Darwin, and remains an important research topic today. Mounting evidence indicates that pollinators have the potential to drive diversification at several different stages of the evolutionary process. Microevolutionary studies have provided evidence for pollinator-mediated floral adaptation, while macroevolutionary evidence supports a general pattern of pollinator-driven diversification of angiosperms. However, the overarching issue of whether, and how, shifts in pollination system drive plant speciation represents a critical gap in knowledge. Bridging this gap is crucial to fully understand whether pollinator-driven microevolution accounts for the observed macroevolutionary patterns. Testable predictions about pollinator-driven speciation can be derived from the theory of ecological speciation, according to which adaptation (microevolution) and speciation (macroevolution) are directly linked. This theory is a particularly suitable framework for evaluating evidence for the processes underlying shifts in pollination systems and their potential consequences for the evolution of reproductive isolation and speciation. Scope This Viewpoint paper focuses on evidence for the four components of ecological speciation in the context of plant-pollinator interactions, namely (1) the role of pollinators as selective agents, (2) floral trait divergence, including the evolution of ‘pollination ecotypes‘, (3) the geographical context of selection on floral traits, and (4) the role of pollinators in the evolution of reproductive isolation. This Viewpoint also serves as the introduction to a Special Issue on Pollinator-Driven Speciation in Plants. The 13 papers in this Special Issue range from microevolutionary studies of ecotypes to macroevolutionary studies of historical ecological shifts, and span a wide range of geographical areas and plant families. These studies further illustrate innovative experimental approaches, and they employ modern tools in genetics and floral trait quantification. Future advances to the field require better quantification of selection through male fitness and pollinator isolation, for instance by exploiting next-generation sequencing technologies. By combining these new tools with strategically chosen study systems, and smart experimental design, we predict that examples of pollinator-driven speciation will be among the most widespread and compelling of all cases of ecological speciation. PMID:24418954

  13. A "Wear and Tear" Hypothesis to Explain Sudden Infant Death Syndrome.

    PubMed

    Elhaik, Eran

    2016-01-01

    Sudden infant death syndrome (SIDS) is the leading cause of death among USA infants under 1 year of age, accounting for ~2,700 deaths per year. Although formally SIDS dates back at least 2,000 years and was even mentioned in the Hebrew Bible (Kings 3:19), its etiology remains unexplained, prompting the CDC to initiate a sudden unexpected infant death case registry in 2010. Because infants are totally dependent, their ability to allostatically regulate stressors and stress responses, shaped by genetic and environmental factors, is severely constrained. We propose that SIDS is the result of cumulative painful, stressful, or traumatic exposures that begin in utero and tax neonatal regulatory systems in a manner incompatible with allostasis. We also identify several putative biochemical mechanisms involved in SIDS. We argue that the important characteristics of SIDS, namely male predominance (60:40), the significantly different SIDS rate among USA Hispanics (80% lower) compared to whites, 50% of cases occurring between 7.6 and 17.6 weeks after birth with only 10% after 24.7 weeks, and seasonal variation with most cases occurring during winter, are all associated with common environmental stressors, such as neonatal circumcision and seasonal illnesses. We predict that neonatal circumcision is associated with hypersensitivity to pain and decreased heart rate variability, which increase the risk for SIDS. We also predict that neonatal male circumcision will account for the SIDS gender bias and that groups that practice high male circumcision rates, such as USA whites, will have higher SIDS rates compared to groups with lower circumcision rates. SIDS rates will also be higher in USA states where Medicaid covers circumcision and lower among people who do not practice neonatal circumcision and/or cannot afford to pay for circumcision. Finally, we predict that winter-born premature infants who are circumcised will be at higher risk of SIDS compared to infants who experienced fewer nociceptive exposures. All these predictions are testable experimentally using animal models or cohort studies in humans. Our hypothesis provides new insights into novel risk factors for SIDS and suggests that its risk can be reduced by modifying current infant care practices to reduce nociceptive exposures.

  14. Proteome-wide Structural Analysis of PTM Hotspots Reveals Regulatory Elements Predicted to Impact Biological Function and Disease.

    PubMed

    Torres, Matthew P; Dewhurst, Henry; Sundararaman, Niveda

    2016-11-01

    Post-translational modifications (PTMs) regulate protein behavior through modulation of protein-protein interactions, enzymatic activity, and protein stability essential in the translation of genotype to phenotype in eukaryotes. Currently, less than 4% of all eukaryotic PTMs are reported to have biological function, a statistic that continues to decrease with an increasing rate of PTM detection. Previously, we developed SAPH-ire (Structural Analysis of PTM Hotspots), a method for the prioritization of PTM function potential that has been used effectively to reveal novel PTM regulatory elements in discrete protein families (Dewhurst et al., 2015). Here, we apply SAPH-ire to the set of eukaryotic protein families containing experimental PTM and 3D structure data, capturing 1,325 protein families with 50,839 unique PTM sites organized into 31,747 modified alignment positions (MAPs), of which 2010 (∼6%) possess known biological function. We show that using an artificial neural network model (SAPH-ire NN) trained to identify MAP hotspots with biological function results in prediction outcomes that far surpass the use of single hotspot features, including nearest neighbor PTM clustering methods. We find the greatest enhancement in prediction for positions with PTM counts of five or less, which represent 98% of all MAPs in the eukaryotic proteome and 90% of all MAPs found to have biological function. Analysis of the top 1092 MAP hotspots revealed 267 with truly unknown function (containing 5443 distinct PTMs). Of these, 165 hotspots could be mapped to human KEGG pathways for normal and/or disease physiology. Many high-ranking hotspots were also found to be disease-associated pathogenic sites of amino acid substitution despite the lack of observable PTM in the human protein family member. Taken together, these experiments demonstrate that the functional relevance of a PTM can be predicted very effectively by neural network models, revealing a large but testable body of potential regulatory elements that impact hundreds of different biological processes important in eukaryotic biology and human health. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.

  15. Proteome-wide Structural Analysis of PTM Hotspots Reveals Regulatory Elements Predicted to Impact Biological Function and Disease*

    PubMed Central

    Dewhurst, Henry; Sundararaman, Niveda

    2016-01-01

    Post-translational modifications (PTMs) regulate protein behavior through modulation of protein-protein interactions, enzymatic activity, and protein stability essential in the translation of genotype to phenotype in eukaryotes. Currently, less than 4% of all eukaryotic PTMs are reported to have biological function, a statistic that continues to decrease with an increasing rate of PTM detection. Previously, we developed SAPH-ire (Structural Analysis of PTM Hotspots), a method for the prioritization of PTM function potential that has been used effectively to reveal novel PTM regulatory elements in discrete protein families (Dewhurst et al., 2015). Here, we apply SAPH-ire to the set of eukaryotic protein families containing experimental PTM and 3D structure data, capturing 1,325 protein families with 50,839 unique PTM sites organized into 31,747 modified alignment positions (MAPs), of which 2010 (∼6%) possess known biological function. We show that using an artificial neural network model (SAPH-ire NN) trained to identify MAP hotspots with biological function results in prediction outcomes that far surpass the use of single hotspot features, including nearest neighbor PTM clustering methods. We find the greatest enhancement in prediction for positions with PTM counts of five or less, which represent 98% of all MAPs in the eukaryotic proteome and 90% of all MAPs found to have biological function. Analysis of the top 1092 MAP hotspots revealed 267 with truly unknown function (containing 5443 distinct PTMs). Of these, 165 hotspots could be mapped to human KEGG pathways for normal and/or disease physiology. Many high-ranking hotspots were also found to be disease-associated pathogenic sites of amino acid substitution despite the lack of observable PTM in the human protein family member. Taken together, these experiments demonstrate that the functional relevance of a PTM can be predicted very effectively by neural network models, revealing a large but testable body of potential regulatory elements that impact hundreds of different biological processes important in eukaryotic biology and human health. PMID:27697855
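
    The prediction task described above, scoring alignment positions by their features and flagging those likely to carry biological function, can be mocked up with a small feed-forward network. Everything in this sketch (feature choices, synthetic labels, architecture) is a hypothetical stand-in for SAPH-ire NN, not the published model or its training data.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(7)
      n = 4000
      # Hypothetical per-position features: PTM count, conservation, solvent
      # accessibility, neighbor-PTM density (all synthetic).
      X = np.column_stack([rng.poisson(2, n), rng.uniform(0, 1, n),
                           rng.uniform(0, 1, n), rng.uniform(0, 1, n)])
      # Synthetic labels: function more likely with higher count/conservation.
      p = 1 / (1 + np.exp(-(0.8 * X[:, 0] + 3 * X[:, 1] - 4)))
      y = rng.uniform(size=n) < p

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
      clf.fit(X_tr, y_tr)
      print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")

      # "Hotspots" = test positions with the highest predicted probability of function.
      top = np.argsort(clf.predict_proba(X_te)[:, 1])[::-1][:5]
      print("top candidate positions (test-set indices):", top)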

  16. "Don׳t" versus "won׳t": principles, mechanisms, and intention in action inhibition.

    PubMed

    Ridderinkhof, K Richard; van den Wildenberg, Wery P M; Brass, Marcel

    2014-12-01

    The aim of the present review is to provide a theoretical analysis of the role of intentions in inhibition. We will first outline four dimensions along which inhibition can be categorized: intentionality, timing, specificity, and the nature of the to-be-inhibited action. Next, we relate the concept of inhibition to theories of intentional action. In particular, we integrate ideomotor theory with motor control theories that involve predictive forward modeling of the consequences of one's action, and evaluate how the dimensional classification of inhibition fits into such an integrative approach. Furthermore, we will outline testable predictions that derive from this novel hypothesis of ideomotor inhibition. We then discuss the viability of the ideomotor inhibition hypothesis and our classification in view of the available evidence on the neural mechanisms of action inhibition, indicating that sensorimotor and ideomotor inhibition engages largely overlapping networks with additional recruitment of dFMC for ideomotor inhibition. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Is "the Theory of Everything" Merely the Ultimate Ensemble Theory?

    NASA Astrophysics Data System (ADS)

    Tegmark, Max

    1998-11-01

    We discuss some physical consequences of what might be called "the ultimate ensemble theory", where not only worlds corresponding to, say, different sets of initial data or different physical constants are considered equally real, but also worlds ruled by altogether different equations. The only postulate in this theory is that all structures that exist mathematically exist also physically, by which we mean that in those structures complex enough to contain self-aware substructures (SASs), these SASs will subjectively perceive themselves as existing in a physically "real" world. We find that it is far from clear that this simple theory, which has no free parameters whatsoever, is observationally ruled out. The predictions of the theory take the form of probability distributions for the outcome of experiments, which makes it testable. In addition, it may be possible to rule it out by comparing its a priori predictions for the observable attributes of nature (the particle masses, the dimensionality of spacetime, etc.) with what is observed.

  18. Transcriptomic and macroevolutionary evidence for phenotypic uncoupling between frog life history phases

    PubMed Central

    Wollenberg Valero, Katharina C.; Garcia-Porta, Joan; Rodríguez, Ariel; Arias, Mónica; Shah, Abhijeet; Randrianiaina, Roger Daniel; Brown, Jason L.; Glaw, Frank; Amat, Felix; Künzel, Sven; Metzler, Dirk; Isokpehi, Raphael D.; Vences, Miguel

    2017-01-01

    Anuran amphibians undergo major morphological transitions during development, but the contribution of their markedly different life-history phases to macroevolution has rarely been analysed. Here we generate testable predictions for coupling versus uncoupling of phenotypic evolution of tadpole and adult life-history phases, and for the underlying expression of genes related to morphological feature formation. We test these predictions by combining evidence from gene expression in two distantly related frogs, Xenopus laevis and Mantidactylus betsileanus, with patterns of morphological evolution in the entire radiation of Madagascan mantellid frogs. Genes linked to morphological structure formation are expressed in a highly phase-specific pattern, suggesting uncoupling of phenotypic evolution across life-history phases. This gene expression pattern agrees with uncoupled rates of trait evolution among life-history phases in the mantellids, which we show to have undergone an adaptive radiation. Our results validate a prevalence of uncoupling in the evolution of tadpole and adult phenotypes of frogs. PMID:28504275

  19. Mechanisms underlying REBT in mood disordered patients: predicting depression from the hybrid model of learning.

    PubMed

    Jackson, Chris J; Izadikah, Zahra; Oei, Tian P S

    2012-06-01

    Jackson's (2005, 2008a) hybrid model of learning identifies a number of learning mechanisms that lead to the emergence and maintenance of the balance between rationality and irrationality. We test the general hypothesis that Jackson's model will predict depressive symptoms, such that poor learning is related to depression. We draw comparisons between Jackson's model and Ellis' (2004) Rational Emotive Behavior Therapy and Theory (REBT) and thereby provide a set of testable learning mechanisms potentially underlying REBT. Eighty patients diagnosed with depression completed the learning styles profiler (LSP; Jackson, 2005) and two measures of depression. Results provide support for the proposed model of learning and further evidence that low rationality is a key predictor of depression. We conclude that the hybrid model of learning has the potential to explain some of the learning and cognitive processes related to the development and maintenance of irrational beliefs and depression. Copyright © 2011. Published by Elsevier B.V.

  20. Lightning Scaling Laws Revisited

    NASA Technical Reports Server (NTRS)

    Boccippio, D. J.; Arnold, James E. (Technical Monitor)

    2000-01-01

    Scaling laws relating storm electrical generator power (and hence lightning flash rate) to charge transport velocity and storm geometry were originally posed by Vonnegut (1963). These laws were later simplified to yield simple parameterizations for lightning based upon cloud top height, with separate parameterizations derived over land and ocean. It is demonstrated that the most recent ocean parameterization: (1) yields predictions of storm updraft velocity which appear inconsistent with observation, and (2) is formally inconsistent with the theory from which it purports to derive. Revised formulations consistent with Vonnegut's original framework are presented. These demonstrate that Vonnegut's theory is, to first order, consistent with observation. The implications of assuming that flash rate is set by the electrical generator power, rather than the electrical generator current, are examined. The two approaches yield significantly different predictions about the dependence of charge transfer per flash on storm dimensions, which should be empirically testable. The two approaches also differ significantly in their explanation of regional variability in lightning observations.
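
    For context, the cloud-top-height parameterizations at issue are power laws of the form F = a·H^b with separate land and ocean fits. The sketch below evaluates the coefficients commonly quoted from Price and Rind (1992); the ocean form is the one the abstract argues is inconsistent with the underlying theory.

      def flash_rate(cloud_top_km, over_land=True):
          """Lightning flash rate [flashes/min] from cloud-top height [km],
          using the Price and Rind (1992) power-law fits as commonly quoted."""
          if over_land:
              return 3.44e-5 * cloud_top_km ** 4.9
          return 6.4e-4 * cloud_top_km ** 1.73

      for h in (6, 10, 14):
          print(f"H = {h:2d} km: land {flash_rate(h):6.2f}, "
                f"ocean {flash_rate(h, over_land=False):5.3f} flashes/min")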

  1. Bicultural identity conflict in second-generation Asian Canadians.

    PubMed

    Stroink, Mirella L; Lalonde, Richard N

    2009-02-01

    Researchers have shown that bicultural individuals, including 2nd-generation immigrants, face a potential conflict between 2 cultural identities. The present authors extended this primarily qualitative research on the bicultural experience by adopting the social identity perspective (H. Tajfel & J. C. Turner, 1986). They developed and tested an empirically testable model of the role of cultural construals, in-group prototypicality, and identity in bicultural conflict in 2 studies with 2nd-generation Asian Canadians. In both studies, the authors expected and found that participants' construals of their 2 cultures as different predicted lower levels of simultaneous identification with both cultures. Furthermore, the authors found this relation was mediated by participants' feelings of prototypicality as members of both groups. Although the perception of cultural difference did not predict well-being as consistently and directly as the authors expected, levels of simultaneous identification did show these relations. The authors discuss results in the context of social identity theory (H. Tajfel & J. C. Turner) as a framework for understanding bicultural conflict.

  2. The ORF1 Protein Encoded by LINE-1: Structure and Function During L1 Retrotransposition

    PubMed Central

    Martin, Sandra L.

    2006-01-01

    LINE-1, or L1, is an autonomous non-LTR retrotransposon in mammals. Retrotransposition requires the function of the two L1-encoded polypeptides, ORF1p and ORF2p. Early recognition of regions of homology between the predicted amino acid sequence of ORF2 and known endonuclease and reverse transcriptase enzymes led to testable hypotheses regarding the function of ORF2p in retrotransposition. As predicted, ORF2p has been demonstrated to have both endonuclease and reverse transcriptase activities. In contrast, no homologs of known function have contributed to our understanding of the function of ORF1p during retrotransposition. Nevertheless, significant advances have been made such that we now know that ORF1p is a high-affinity RNA binding protein that forms a ribonucleoprotein particle together with L1 RNA. Furthermore, ORF1p is a nucleic acid chaperone, and this nucleic acid chaperone activity is required for L1 retrotransposition. PMID:16877816

  3. Five potential consequences of climate change for invasive species.

    PubMed

    Hellmann, Jessica J; Byers, James E; Bierwagen, Britta G; Dukes, Jeffrey S

    2008-06-01

    Scientific and societal unknowns make it difficult to predict how global environmental changes such as climate change and biological invasions will affect ecological systems. In the long term, these changes may have interacting effects and compound the uncertainty associated with each individual driver. Nonetheless, invasive species are likely to respond in ways that should be qualitatively predictable, and some of these responses will be distinct from those of native counterparts. We used the stages of invasion known as the "invasion pathway" to identify 5 nonexclusive consequences of climate change for invasive species: (1) altered transport and introduction mechanisms, (2) establishment of new invasive species, (3) altered impact of existing invasive species, (4) altered distribution of existing invasive species, and (5) altered effectiveness of control strategies. We then used these consequences to identify testable hypotheses about the responses of invasive species to climate change and provide suggestions for invasive-species management plans. The 5 consequences also emphasize the need for enhanced environmental monitoring and expanded coordination among entities involved in invasive-species management.

  4. Multidisciplinary analysis and design of printed wiring boards

    NASA Astrophysics Data System (ADS)

    Fulton, Robert E.; Hughes, Joseph L.; Scott, Waymond R., Jr.; Umeagukwu, Charles; Yeh, Chao-Pin

    1991-04-01

    Modern printed wiring board design depends on electronic prototyping using computer-based simulation and design tools. Existing electrical computer-aided design (ECAD) tools emphasize circuit connectivity with only rudimentary analysis capabilities. This paper describes a prototype integrated PWB design environment, denoted Thermal Structural Electromagnetic Testability (TSET), being developed at Georgia Tech in collaboration with companies in the electronics industry. TSET provides design guidance based on enhanced electrical and mechanical CAD capabilities, including electromagnetic modeling, testability analysis, thermal management, and solid mechanics analysis. TSET development is based on a strong analytical and theoretical science base and incorporates an integrated information framework and a common database design based on a systematic structured methodology.

  5. Soy-Based Therapeutic Baby Formulas: Testable Hypotheses Regarding the Pros and Cons.

    PubMed

    Westmark, Cara J

    2016-01-01

    Soy-based infant formulas have been consumed in the United States since 1909, and currently constitute a significant portion of the infant formula market. There are efforts underway to generate genetically modified soybeans that produce therapeutic agents of interest with the intent to deliver those agents in a soy-based infant formula platform. The threefold purpose of this review article is to first discuss the pros and cons of soy-based infant formulas, then present testable hypotheses to discern the suitability of a soy platform for drug delivery in babies, and finally start a discussion to inform public policy on this important area of infant nutrition.

  6. Bayesian naturalness, simplicity, and testability applied to the B-L MSSM GUT

    NASA Astrophysics Data System (ADS)

    Fundira, Panashe; Purves, Austin

    2018-04-01

    Recent years have seen increased use of Bayesian model comparison to quantify notions such as naturalness, simplicity, and testability, especially in the area of supersymmetric model building. After demonstrating that Bayesian model comparison can resolve a paradox that has been raised in the literature concerning the naturalness of the proton mass, we apply Bayesian model comparison to GUTs, an area to which it has not been applied before. We find that the GUTs are substantially favored over the nonunifying puzzle model. Of the GUTs we consider, the B-L MSSM GUT is the most favored, but the MSSM GUT is almost equally favored.
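
    Bayesian model comparison of the kind invoked here ranks models by their evidence (marginal likelihood); in generic form, for data D and models M_1, M_2 with parameters theta_i and priors pi(theta_i | M_i):

        \[
          B_{12}
          \;=\;
          \frac{p(D \mid M_1)}{p(D \mid M_2)}
          \;=\;
          \frac{\int p(D \mid \theta_1, M_1)\, \pi(\theta_1 \mid M_1)\, d\theta_1}
               {\int p(D \mid \theta_2, M_2)\, \pi(\theta_2 \mid M_2)\, d\theta_2} .
        \]

    A Bayes factor B_12 > 1 favors M_1. Because the evidence integrates the likelihood over the full prior volume, models that fit the data only in finely tuned corners of parameter space are penalized automatically (the Occam factor); this is the generic route by which naturalness and testability enter such comparisons, though the paper's exact priors and likelihoods are not reproduced here.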

  7. Should tumbling E go out of date in amblyopia screening? Evidence from a population-based sample normative in children aged 3-4 years.

    PubMed

    Guimaraes, Sandra; Fernandes, Tiago; Costa, Patrício; Silva, Eduardo

    2018-06-01

    To determine a normative for the tumbling E optotype and its feasibility for visual acuity (VA) assessment in children aged 3-4 years. A cross-sectional study of 1756 children who were invited to participate in a comprehensive non-invasive eye exam. Uncorrected monocular VA with the crowded tumbling E was assessed together with a comprehensive ophthalmological examination. Testability rates for the whole population, and VA of the healthy children, were evaluated for different age subgroups, gender, school type and the order in which the ophthalmological examination was performed. The overall testability rate was 95% (92% and 98% for children aged 3 and 4 years, respectively). The mean VA of the first-day assessment (first-VA) and the best VA over 2 days' assessments (best-VA) were 0.14 logMAR (95% CI 0.14 to 0.15) (decimal=0.72, 95% CI 0.71 to 0.73) and 0.13 logMAR (95% CI 0.13 to 0.14) (decimal=0.74, 95% CI 0.73 to 0.74). Analysis with age showed differences between groups in first-VA (F(3,1146)=10.0; p<0.001; η2=0.026) and best-VA (F(3,1155)=8.8; p<0.001; η2=0.022). Our normative was very highly correlated with the previously reported HOTV Amblyopia Treatment Study (HOTV-ATS) data (first-VA, r=0.97; best-VA, r=0.99), with a consistent 0.8 to 0.7 line overestimation for HOTV-ATS, as described in the literature. Overall false-positive referral was 1.3%, and was especially low for anisometropias of ≥2 logMAR lines (0.17%). Interocular difference of ≥1 logMAR line was not associated with age (p=0.195). This is the first normative for European Caucasian children with the single crowded tumbling E in healthy eyes and the largest study comparing testability at 3 and 4 years of age. Testability rates are higher than those found in the literature with other optotypes, especially in children aged 3 years, where we found 5%-11% better testability rates.
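
    The logMAR and decimal acuity values quoted above are two notations for the same measurement, related by decimal = 10^(-logMAR). A minimal Python sketch, assuming only that standard relation, reproduces the reported pairings:

        # Convert between logMAR and decimal visual acuity.
        # Standard relation: decimal = 10 ** (-logMAR).
        import math

        def logmar_to_decimal(logmar: float) -> float:
            return 10 ** (-logmar)

        def decimal_to_logmar(decimal: float) -> float:
            return -math.log10(decimal)

        print(round(logmar_to_decimal(0.14), 2))  # 0.72, as reported above
        print(round(logmar_to_decimal(0.13), 2))  # 0.74, as reported above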

  8. Automated System Checkout to Support Predictive Maintenance for the Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, Ann; Deb, Somnath; Kulkarni, Deepak; Wang, Yao; Lau, Sonie (Technical Monitor)

    1998-01-01

    The Propulsion Checkout and Control System (PCCS) is a predictive maintenance software system. The real-time checkout procedures and diagnostics are designed to detect components that need maintenance based on their condition, rather than using more conventional approaches such as scheduled or reliability-centered maintenance. Predictive maintenance can reduce turn-around time and cost and increase safety compared to conventional maintenance approaches. Real-time sensor validation, limit checking, statistical anomaly detection, and failure prediction based on simulation models are employed. Multi-signal models, useful for testability analysis during system design, are used during the operational phase to detect and isolate degraded or failed components. The TEAMS-RT real-time diagnostic engine, developed by Qualtech Systems, Inc., utilizes these multi-signal models. The capability of predicting the maintenance condition was successfully demonstrated with a variety of data, from simulation to actual operation on the Integrated Propulsion Technology Demonstrator (IPTD) at Marshall Space Flight Center (MSFC). Playback of IPTD valve actuations for feature recognition updates identified an otherwise undetectable Main Propulsion System 12-inch prevalve degradation. The algorithms were loaded into the Propulsion Checkout and Control System for further development; this is the first known application of predictive Integrated Vehicle Health Management to an operational cryogenic testbed. The software performed successfully in real time, meeting the required performance goal of a 1-second cycle time.
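
    The abstract names limit checking and statistical anomaly detection among the real-time techniques. A minimal Python sketch of what such a check can look like; the class name, thresholds, and structure are illustrative assumptions, not taken from the PCCS software:

        # Hard limit check plus a rolling z-score anomaly test for one
        # sensor channel. All names and thresholds are hypothetical.
        from collections import deque
        import statistics

        class LimitChecker:
            def __init__(self, low, high, window=50, z_thresh=4.0):
                self.low, self.high = low, high      # hard engineering limits
                self.recent = deque(maxlen=window)   # recent samples
                self.z_thresh = z_thresh             # statistical anomaly threshold

            def check(self, value):
                """Return the alarm strings raised by one sensor sample."""
                alarms = []
                if not (self.low <= value <= self.high):
                    alarms.append("limit violation")
                if len(self.recent) >= 10:
                    mu = statistics.fmean(self.recent)
                    sigma = statistics.stdev(self.recent) or 1e-9
                    if abs(value - mu) / sigma > self.z_thresh:
                        alarms.append("statistical anomaly")
                self.recent.append(value)
                return alarms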

  9. Testability of evolutionary game dynamics based on experimental economics data

    NASA Astrophysics Data System (ADS)

    Wang, Yijia; Chen, Xiaojie; Wang, Zhijian

    To better understand the dynamic processes of a real game system, we need an appropriate dynamics model; evaluating the validity of such a model is not a trivial task. Here, we demonstrate an approach, considering the macroscopic dynamical patterns of angular momentum and speed as the measurement variables, to evaluate the validity of various dynamics models. Using data from real-time Rock-Paper-Scissors (RPS) game experiments, we obtain the experimental dynamic patterns, and then derive the related theoretical dynamic patterns from a series of typical dynamics models. By testing the goodness-of-fit between the experimental and theoretical patterns, the validity of the models can be evaluated. One result in our case study is that, among all the nonparametric models tested, the best-known Replicator dynamics model performs almost worst, while the Projection dynamics model performs best. Besides providing new empirical macroscopic patterns of social dynamics, we demonstrate that the approach can be an effective and rigorous tool for testing game dynamics models. Supported by the Fundamental Research Funds for the Central Universities (SSEYI2014Z) and the National Natural Science Foundation of China (Grant No. 61503062).
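
    The comparison the authors describe, deriving a macroscopic observable from a candidate dynamics model and matching it against experiment, can be sketched for the Replicator model they test. In this Python sketch the payoff matrix and the angular-momentum observable are standard textbook choices; the paper's exact definitions are not reproduced here:

        # Integrate Replicator dynamics for Rock-Paper-Scissors and
        # accumulate an angular-momentum observable about the simplex
        # centre; a nonzero mean reflects the cyclic motion of play.
        import numpy as np

        A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])  # RPS payoffs
        c = np.full(3, 1/3)                                          # mixed equilibrium

        def replicator(x):
            f = A @ x
            return x * (f - x @ f)   # x @ f = 0 for this zero-sum game

        # Project the simplex onto a plane so that L = r x v is a scalar.
        P = np.array([[1., -0.5, -0.5], [0., np.sqrt(3)/2, -np.sqrt(3)/2]])

        x, dt, L = np.array([0.6, 0.25, 0.15]), 1e-3, []
        for _ in range(20000):
            v = replicator(x)
            r2, v2 = P @ (x - c), P @ v
            L.append(r2[0] * v2[1] - r2[1] * v2[0])  # scalar cross product
            x = x + dt * v
        print(f"mean angular momentum: {np.mean(L):.3e}")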

  10. The Colloquium

    NASA Astrophysics Data System (ADS)

    Amoroso, Richard L.

    A brief introductory survey of Unified Field Mechanics (UFM) is given from the perspective of a Holographic Anthropic Multiverse cosmology in 12 'continuous-state' dimensions. The paradigm, with many new parameters, is cast in a scale-invariant conformal covariant Dirac polarized vacuum utilizing extended HD forms of the de Broglie-Bohm and Cramer interpretations of quantum theory. The model utilizes a unique form of M-Theory based in part on the original hadronic form of string theory, which had a variable string tension, T_S, and included a tachyon. The model is experimentally testable, thus putatively able to demonstrate the existence of large-scale additional dimensionality (LSXD), test for QED-violating tight-bound-state spectral lines in hydrogen 'below' the lowest Bohr orbit, and surmount the quantum uncertainty principle utilizing a hyperincursive Sagnac Effect resonance hierarchy.

  11. Eventful horizons: String theory in de Sitter and anti-de Sitter

    NASA Astrophysics Data System (ADS)

    Kleban, Matthew Benjamin

    String theory purports to be a theory of quantum gravity. As such, it should have much to say about the deep mysteries surrounding the very early stages of our universe. For this reason, although the theory is notoriously difficult to test directly, data from experimental cosmology may provide a way to probe the high-energy physics of string theory. In the first part of this thesis, I will address the important issue of the testability of string theory using observations of the cosmic microwave background radiation. In the second part, I will study some formal difficulties that arise in attempting to understand string theory in de Sitter spacetime. In the third part, I will study the singularity of an eternal anti-de Sitter Schwarzschild black hole, using the AdS/CFT correspondence.

  12. a Heavy Higgs Boson from Flavor and Electroweak Symmetry Unification

    NASA Astrophysics Data System (ADS)

    Fabbrichesi, Marco

    2005-08-01

    We present a unified picture of flavor and electroweak symmetry breaking based on a nonlinear sigma model spontaneously broken at the TeV scale. Flavor and Higgs bosons arise as pseudo-Goldstone modes. Explicit collective symmetry breaking yields stable vacuum expectation values and masses protected at one loop by the little-Higgs mechanism. The coupling to the fermions generates well-defined mass textures, according to a U(1) global flavor symmetry, that correctly reproduce the mass hierarchies and mixings of quarks and leptons. The model is more constrained than usual little-Higgs models because of bounds on weak and flavor physics. The main experimental signature testable at the LHC is a rather large mass m_h0 = 317 ± 80 GeV for the (lightest) Higgs boson.

  13. Kalman filter control of a model of spatiotemporal cortical dynamics

    PubMed Central

    Schiff, Steven J; Sauer, Tim

    2007-01-01

    Recent advances in Kalman filtering to estimate system state and parameters in nonlinear systems have offered the potential to apply such approaches to spatiotemporal nonlinear systems. We here adapt the nonlinear method of unscented Kalman filtering to observe the state and estimate parameters in a computational spatiotemporal excitable system that serves as a model for cerebral cortex. We demonstrate the ability to track spiral wave dynamics, and to use an observer system to calculate control signals delivered through applied electrical fields. We demonstrate how this strategy can control the frequency of such a system, or quench the wave patterns, while minimizing the energy required for such results. These findings are readily testable in experimental applications, and have the potential to be applied to the treatment of human disease. PMID:18310806
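
    The step that distinguishes the unscented Kalman filter from linearization-based filters is the unscented transform: a small, deterministically chosen set of sigma points is pushed through the nonlinear map, and the transformed mean and covariance are recovered from them. A minimal Python sketch of that core step on a toy nonlinearity (generic; not the paper's cortical model):

        # Unscented transform: propagate sigma points through a
        # nonlinear map f and recover the transformed mean/covariance.
        import numpy as np

        def unscented_transform(mean, cov, f, kappa=1.0):
            n = len(mean)
            L = np.linalg.cholesky((n + kappa) * cov)   # sigma-point spread
            pts = ([mean]
                   + [mean + L[:, i] for i in range(n)]
                   + [mean - L[:, i] for i in range(n)])
            w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
            w[0] = kappa / (n + kappa)                  # weights sum to 1
            y = np.array([f(p) for p in pts])
            y_mean = w @ y
            y_cov = sum(wi * np.outer(yi - y_mean, yi - y_mean)
                        for wi, yi in zip(w, y))
            return y_mean, y_cov

        f = lambda x: np.array([np.sin(x[0]), x[0] * x[1]])  # toy nonlinearity
        m, S = unscented_transform(np.array([0.5, 1.0]), 0.1 * np.eye(2), f)
        print(m); print(S)

    In a full UKF, this transform is applied once to propagate the state through the dynamics and once to map it into measurement space; the paper's contribution is doing this on a spatiotemporal excitable-medium model.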

  14. A genetic programming approach for Burkholderia Pseudomallei diagnostic pattern discovery

    PubMed Central

    Yang, Zheng Rong; Lertmemongkolchai, Ganjana; Tan, Gladys; Felgner, Philip L.; Titball, Richard

    2009-01-01

    Motivation: Finding diagnostic patterns for fighting diseases caused by pathogens like Burkholderia pseudomallei using biomarkers involves two key issues. First, exhaustively searching all subsets of testable biomarkers (antigens in this context) to find the best one is computationally infeasible; a proper optimization approach, such as evolutionary computation, should therefore be investigated. Second, a properly selected function of the antigens serving as the diagnostic pattern, which is commonly unknown, is key to the diagnostic accuracy and diagnostic effectiveness in clinical use. Results: A conversion function is proposed to convert serum tests of antigens on patients to binary values, based on which Boolean functions are developed as the diagnostic patterns. A genetic programming approach is designed to optimize the diagnostic patterns in terms of their accuracy and effectiveness. The optimization aims to maximize the coverage (the rate of positive response to antigens) in the infected patients and minimize the coverage in the non-infected patients, while keeping the number of testable antigens used in the Boolean functions as small as possible. The final coverage in the infected patients is 96.55%, using 17 of 215 (7.4%) antigens, with zero coverage in the non-infected patients. Among these 17 antigens, BPSL2697 is the most frequently selected for the diagnosis of Burkholderia pseudomallei infection. The approach has been evaluated using both cross-validation and jackknife simulation, with prediction accuracies of 93% and 92%, respectively. A novel approach is also proposed in this study to evaluate a model with binary data using ROC analysis. Contact: z.r.yang@ex.ac.uk PMID:19561021
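
    The selection pressure described in the abstract, maximal coverage of infected patients, zero coverage of controls, and as few antigens as possible, can be illustrated with a toy greedy search over binarized antigen responses. This Python sketch is a deliberate simplification of the paper's genetic-programming search, and the data below is synthetic:

        # Greedily grow an OR-panel of antigens that covers infected
        # patients while never firing on controls. Synthetic data.
        import numpy as np

        rng = np.random.default_rng(0)
        n_antigens = 50
        infected = rng.random((100, n_antigens)) < 0.15  # binarized serum tests
        controls = rng.random((100, n_antigens)) < 0.01

        panel, covered = [], np.zeros(100, dtype=bool)
        for _ in range(17):                              # cap the panel size
            gains = [(infected[:, a] & ~covered).sum()
                     if a not in panel and not controls[:, a].any() else -1
                     for a in range(n_antigens)]
            best = int(np.argmax(gains))
            if gains[best] <= 0:
                break
            panel.append(best)
            covered |= infected[:, best]

        print(f"{len(panel)} antigens cover {covered.mean():.1%} of infected,"
              f" 0% of controls")

    Genetic programming explores a far richer space of Boolean combinations than this OR-panel, but the fitness trade-off it optimizes is the same.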

  15. Evolutionary Perspectives on Genetic and Environmental Risk Factors for Psychiatric Disorders.

    PubMed

    Keller, Matthew C

    2018-05-07

    Evolutionary medicine uses evolutionary theory to help elucidate why humans are vulnerable to disease and disorders. I discuss two different types of evolutionary explanations that have been used to help understand human psychiatric disorders. First, a consistent finding is that psychiatric disorders are moderately to highly heritable, and many, such as schizophrenia, are also highly disabling and appear to decrease Darwinian fitness. Models used in evolutionary genetics to understand why genetic variation exists in fitness-related traits can be used to understand why risk alleles for psychiatric disorders persist in the population. The usual explanation for species-typical adaptations, natural selection, is less useful for understanding individual differences in genetic risk for disorders. Rather, two other types of models, mutation-selection-drift and balancing selection, offer frameworks for understanding why genetic variation in risk for psychiatric (and other) disorders exists, and each makes predictions that are now testable using whole-genome data. Second, species-typical capacities to mount reactions to negative events are likely to have been crafted by natural selection to minimize fitness loss. The pain reaction to tissue damage is almost certainly such an example, but it has been argued that the capacity to experience depressive symptoms such as sadness, anhedonia, crying, and fatigue in the face of adverse life situations may have been crafted by natural selection as well. I review the rationale and strength of evidence for this hypothesis. Evolutionary hypotheses of psychiatric disorders are important not only for offering explanations of why psychiatric disorders exist, but also for generating new, testable hypotheses and understanding how best to design studies and analyze data.

  16. A prospective earthquake forecast experiment for Japan

    NASA Astrophysics Data System (ADS)

    Yokoi, Sayoko; Nanjo, Kazuyoshi; Tsuruoka, Hiroshi; Hirata, Naoshi

    2013-04-01

    One major focus of the current Japanese earthquake prediction research program (2009-2013) is to move toward creating testable earthquake forecast models. For this purpose we started an experiment of forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan, and to conduct verifiable prospective tests of their model performance. On 1 November 2009, we started the first earthquake forecast testing experiment for the Japan area. We use the unified JMA catalogue compiled by the Japan Meteorological Agency as the authorized catalogue. The experiment consists of 12 categories: 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called All Japan, Mainland, and Kanto. A total of 91 models were submitted to CSEP-Japan and are evaluated with the CSEP official suite of tests of forecast performance. In this presentation, we show the results of the experiment for the 3-month testing class over 5 rounds. The HIST-ETAS7pa, MARFS and RI10K models, for the All Japan, Mainland and Kanto regions respectively, showed the best scores based on the total log-likelihood. It is also clear that time dependency of model parameters is not an effective factor for passing the CSEP consistency tests in the 3-month testing class in any region. In particular, the spatial distribution in the All Japan region was too difficult to pass the consistency test, owing to multiple events in a single bin. The number of target events per round in the Mainland region tended to be smaller than the models' expectations during all rounds, which resulted in rejections in the consistency test because of overestimation. In the Kanto region, pass ratios of the consistency tests for each model exceeded 80%, which was associated with well-balanced forecasting of event number and spatial distribution. Thanks to the multiple rounds of the experiment, we are now beginning to understand the stability of the models, the robustness of model selection, and the earthquake predictability of each region beyond stochastic fluctuations of seismicity. We plan to use the results in the design of a 3-dimensional earthquake forecasting model for the Kanto region, supported by the special project for reducing vulnerability to urban mega-earthquake disasters of the Ministry of Education, Culture, Sports, Science and Technology of Japan.

  17. Nurses' intention to leave: critically analyse the theory of reasoned action and organizational commitment model.

    PubMed

    Liou, Shwu-Ru

    2009-01-01

    To systematically analyse the Organizational Commitment model and the Theory of Reasoned Action and determine which concepts can better explain nurses' intention to leave their job. The Organizational Commitment model and the Theory of Reasoned Action have been proposed and applied to understand intention to leave and turnover behaviour, which are major contributors to the nursing shortage. However, the appropriateness of applying these two models in nursing had not been analysed. Three main criteria of a useful model were used for the analysis: consistency in the use of concepts, testability and predictability. Both theories use concepts consistently. Concepts in the Theory of Reasoned Action are defined broadly, whereas they are operationally defined in the Organizational Commitment model. The predictability of the Theory of Reasoned Action is questionable, whereas the Organizational Commitment model can be applied to predict intention to leave. A model was proposed based on this analysis. Organizational commitment, intention to leave, work experiences, job characteristics and personal characteristics can serve as concepts for predicting nurses' intention to leave. Nursing managers may consider nurses' personal characteristics and experiences to increase their organizational commitment and enhance their intention to stay. Empirical studies are needed to test and cross-validate the re-synthesized model for nurses' intention to leave their job.

  18. Life histories of hosts and pathogens predict patterns in tropical fungal plant diseases.

    PubMed

    García-Guzmán, Graciela; Heil, Martin

    2014-03-01

    Plant pathogens affect the fitness of their hosts and maintain biodiversity. However, we lack theories to predict the type and intensity of infections in wild plants. Here we demonstrate using fungal pathogens of tropical plants that an examination of the life histories of hosts and pathogens can reveal general patterns in their interactions. Fungal infections were more commonly reported for light-demanding than for shade-tolerant species and for evergreen rather than for deciduous hosts. Both patterns are consistent with classical defence theory, which predicts lower resistance in fast-growing species and suggests that the deciduous habit can reduce enemy populations. In our literature survey, necrotrophs were found mainly to infect shade-tolerant woody species whereas biotrophs dominated in light-demanding herbaceous hosts. Far-red signalling and its inhibitory effects on jasmonic acid signalling are likely to explain this phenomenon. Multiple changes between the necrotrophic and the symptomless endophytic lifestyle at the ecological and evolutionary scale indicate that endophytes should be considered when trying to understand large-scale patterns in the fungal infections of plants. Combining knowledge about the molecular mechanisms of pathogen resistance with classical defence theory enables the formulation of testable predictions concerning general patterns in the infections of wild plants by fungal pathogens.

  19. Why is there a dearth of close-in planets around fast-rotating stars?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teitler, Seth; Königl, Arieh

    2014-05-10

    We propose that the reported dearth of Kepler objects of interest (KOIs) with orbital periods P_orb ≲ 2-3 days around stars with rotation periods P_rot ≲ 5-10 days can be attributed to tidal ingestion of close-in planets by their host stars. We show that the planet distribution in this region of the log P_orb-log P_rot plane is qualitatively reproduced with a model that incorporates tidal interaction and magnetic braking as well as the dependence on the stellar core-envelope coupling timescale. We demonstrate the consistency of this scenario with the inferred break in the P_orb distribution of close-in KOIs and point out a potentially testable prediction of this interpretation.

  20. The Synchrotron Shock Model Confronts a "Line of Death" in the BATSE Gamma-Ray Burst Data

    NASA Technical Reports Server (NTRS)

    Preece, Robert D.; Briggs, Michael S.; Mallozzi, Robert S.; Pendleton, Geoffrey N.; Paciesas, W. S.; Band, David L.

    1998-01-01

    The synchrotron shock model (SSM) for gamma-ray burst emission makes a testable prediction: the observed low-energy power-law photon number spectral index cannot exceed -2/3 (where the photon model is defined with a positive index: dN/dE ∝ E^α). We have collected time-resolved spectral fit parameters for over 100 bright bursts observed by the Burst And Transient Source Experiment on board the Compton Gamma Ray Observatory. Using this database, we find 23 bursts in which the spectral index limit of the SSM is violated. We discuss elements of the analysis methodology that affect the robustness of this result, as well as some of the escape hatches left for the SSM by theory.
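
    Stated compactly in LaTeX, the prediction being tested is a bound on the low-energy power-law index of the photon spectrum (a generic statement of the constraint as described in the abstract):

        \[
          \frac{dN}{dE} \;\propto\; E^{\alpha},
          \qquad
          \alpha \;\le\; -\tfrac{2}{3}
          \quad \text{(optically thin synchrotron shock model)} .
        \]

    Bursts whose fitted low-energy index satisfies α > -2/3, as for the 23 bursts reported above, therefore cross the model's "line of death".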

  1. Fault Management Technology Maturation for NASA's Constellation Program

    NASA Technical Reports Server (NTRS)

    Waterman, Robert D.

    2010-01-01

    This slide presentation reviews the maturation of fault management technology in preparation for the Constellation Program. It includes a review of the Space Shuttle Main Engine (SSME) and a discussion of incidents with the main engine and tanking that demonstrated the need for predictive maintenance. Also included are a review of the planned Ares I-X Ground Diagnostic Prototype (GDP) and further information about detection and isolation of faults using the Testability Engineering and Maintenance System (TEAMS). Another system being readied for use is the Inductive Monitoring System (IMS), which detects anomalies: IMS automatically learns how the system behaves and alerts operators if the current behavior is anomalous. The comparison of STS-83 and STS-107 (i.e., the Columbia accident) is shown as an example of the anomaly detection capabilities.

  2. Toward a Graded Psycholexical Space Mapping Model: Sublexical and Lexical Representations in Chinese Character Reading Development.

    PubMed

    Tong, Xiuli; McBride, Catherine

    2017-07-01

    Following a review of contemporary models of word-level processing for reading and their limitations, we propose a new hypothetical model of Chinese character reading, namely, the graded lexical space mapping model that characterizes how sublexical radicals and lexical information are involved in Chinese character reading development. The underlying assumption of this model is that Chinese character recognition is a process of competitive mappings of phonology, semantics, and orthography in both lexical and sublexical systems, operating as functions of statistical properties of print input based on the individual's specific level of reading. This model leads to several testable predictions concerning how the quasiregularity and continuity of Chinese-specific radicals are organized in memory for both child and adult readers at different developmental stages of reading.

  3. Observational exclusion of a consistent loop quantum cosmology scenario

    NASA Astrophysics Data System (ADS)

    Bolliet, Boris; Barrau, Aurélien; Grain, Julien; Schander, Susanne

    2016-06-01

    It is often argued that inflation erases all the information about what took place before it started. Quantum gravity, relevant in the Planck era, therefore seems largely impossible to probe with cosmological observations. In general, only very ad hoc scenarios or hyper-fine-tuned initial conditions can lead to observationally testable theories. Here we consider a well-defined and well-motivated candidate quantum cosmology model that predicts inflation. Using the most recent observational constraints on the cosmic microwave background B-modes, we show that the model is excluded over its entire parameter space, without any tuning. Some important consequences are drawn for the deformed-algebra approach to loop quantum cosmology. We emphasize that neither loop quantum cosmology in general nor loop quantum gravity is disfavored by this study, but their falsifiability is established.

  4. Research review: dopamine transfer deficit: a neurobiological theory of altered reinforcement mechanisms in ADHD.

    PubMed

    Tripp, Gail; Wickens, Jeff R

    2008-07-01

    This review considers the hypothesis that changes in dopamine signalling might account for altered sensitivity to positive reinforcement in children with ADHD. The existing evidence regarding dopamine cell activity in relation to positive reinforcement is reviewed. We focus on the anticipatory firing of dopamine cells brought about by a transfer of dopamine cell responses to cues that precede reinforcers. It is proposed that in children with ADHD there is diminished anticipatory dopamine cell firing, which we call the dopamine transfer deficit (DTD). The DTD theory leads to specific and testable predictions for human and animal research. The extent to which DTD explains symptoms of ADHD and effects of pharmacological interventions is discussed. We conclude by considering the neural changes underlying the etiology of DTD.

  5. Causes and consequences of reduced blood volume in space flight - A multi-discipline modeling study

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1983-01-01

    A group of mathematical models of various physiological systems has been developed and applied to studying problems associated with adaptation to weightlessness. One biomedical issue that could be addressed from varying perspectives by at least three of these models was the reduction in blood volume that universally occurs in astronauts. Accordingly, models of fluid-electrolyte, erythropoiesis, and cardiovascular regulation were employed to study the causes and consequences of blood volume loss during space flight. This analysis confirms the notion that alterations of blood volume are central to an understanding of adaptation to prolonged space flight. More importantly, the modeling studies resulted in specific hypotheses accounting for plasma volume and red cell mass losses, and in testable predictions concerning the behavior of the circulatory system.

  6. Computational Psychiatry of ADHD: Neural Gain Impairments across Marrian Levels of Analysis

    PubMed Central

    Hauser, Tobias U.; Fiore, Vincenzo G.; Moutoussis, Michael; Dolan, Raymond J.

    2016-01-01

    Attention-deficit hyperactivity disorder (ADHD), one of the most common psychiatric disorders, is characterised by unstable response patterns across multiple cognitive domains. However, the neural mechanisms that explain these characteristic features remain unclear. Using a computational multilevel approach, we propose that ADHD is caused by impaired gain modulation in the systems that generate this phenotype of increased behavioural variability. Using Marr's three levels of analysis as a heuristic framework, we focus on this variable behaviour, detail how it can be explained algorithmically, and consider how it might be implemented at a neural level through catecholamine influences on corticostriatal loops. This computational, multilevel approach to ADHD provides a framework for bridging gaps between descriptions of neuronal activity and behaviour, and provides testable predictions about impaired mechanisms. PMID:26787097

  7. Thalamocortical mechanisms for integrating musical tone and rhythm

    PubMed Central

    Musacchia, Gabriella; Large, Edward

    2014-01-01

    Studies over several decades have identified many of the neuronal substrates of music perception by pursuing pitch and rhythm perception separately. Here, we address the question of how these mechanisms interact, starting with the observation that the peripheral pathways of the so-called “Core” and “Matrix” thalamocortical system provide the anatomical bases for tone and rhythm channels. We then examine the hypothesis that these specialized inputs integrate tonal content within rhythm context in auditory cortex using classical types of “driving” and “modulatory” mechanisms. This hypothesis provides a framework for deriving testable predictions about the early stages of music processing. Furthermore, because thalamocortical circuits are shared by speech and music processing, such a model provides concrete implications for how music experience contributes to the development of robust speech encoding mechanisms. PMID:24103509

  8. Complex Causal Process Diagrams for Analyzing the Health Impacts of Policy Interventions

    PubMed Central

    Joffe, Michael; Mindell, Jennifer

    2006-01-01

    Causal diagrams are rigorous tools for controlling confounding. They also can be used to describe complex causal systems, which is done routinely in communicable disease epidemiology. The use of change diagrams has advantages over static diagrams, because change diagrams are more tractable, relate better to interventions, and have clearer interpretations. Causal diagrams are a useful basis for modeling. They make assumptions explicit, provide a framework for analysis, generate testable predictions, explore the effects of interventions, and identify data gaps. Causal diagrams can be used to integrate different types of information and to facilitate communication both among public health experts and between public health experts and experts in other fields. Causal diagrams allow the use of instrumental variables, which can help control confounding and reverse causation. PMID:16449586

  9. Sneutrino dark matter in gauged inverse seesaw models for neutrinos.

    PubMed

    An, Haipeng; Dev, P S Bhupal; Cai, Yi; Mohapatra, R N

    2012-02-24

    Extending the minimal supersymmetric standard model to explain small neutrino masses via the inverse seesaw mechanism can lead to a new light supersymmetric scalar partner which can play the role of inelastic dark matter (IDM). It is a linear combination of the superpartners of the neutral fermions in the theory (the light left-handed neutrino and two heavy standard model singlet neutrinos) which can be very light, with mass in the ~5-20 GeV range, as suggested by some current direct detection experiments. The IDM in this class of models has a keV-scale mass splitting, which is intimately connected to the small Majorana masses of neutrinos. We predict the differential scattering rate and annual modulation of the IDM signal, which can be tested at future germanium- and xenon-based detectors.

  10. Phenomenology of left-right symmetric dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Cely, Camilo; Heeck, Julian

    We present a detailed study of dark matter phenomenology in low-scale left-right symmetric models. Stability of new fermion or scalar multiplets is ensured by an accidental matter parity that survives the spontaneous symmetry breaking of the gauge group by scalar triplets. The relic abundance of these particles is set by gauge interactions and gives rise to dark matter candidates with masses above the electroweak scale. Dark matter annihilations are thus modified by the Sommerfeld effect, not only in the early Universe, but also today, for instance, in the Center of the Galaxy. Majorana candidates (triplet, quintuplet, bi-doublet, and bi-triplet) bring only one new parameter to the model, their mass, and are hence highly testable at colliders and through astrophysical observations. Scalar candidates (doublet and 7-plet, the latter being stable only at the renormalizable level) have additional scalar-scalar interactions that give rise to rich phenomenology. The particles under discussion share many features with the well-known candidates wino, Higgsino, inert doublet scalar, sneutrino, and Minimal Dark Matter. In particular, they all predict a large gamma-ray flux from dark matter annihilations, which can be searched for with Cherenkov telescopes. We furthermore discuss models with unequal left-right gauge couplings, g_R ≠ g_L, taking the recent experimental hints for a charged gauge boson with 2 TeV mass as a benchmark point. In this case, the dark matter mass is determined by the observed relic density.

  11. The length but not the sequence of peptide linker modules exerts the primary influence on the conformations of protein domains in cellulosome multi-enzyme complexes.

    PubMed

    Różycki, Bartosz; Cazade, Pierre-André; O'Mahony, Shane; Thompson, Damien; Cieplak, Marek

    2017-08-16

    Cellulosomes are large multi-protein catalysts produced by various anaerobic microorganisms to efficiently degrade plant cell-wall polysaccharides down into simple sugars. X-ray and physicochemical structural characterisations show that cellulosomes are composed of numerous protein domains that are connected by unstructured polypeptide segments, yet the properties and possible roles of these 'linker' peptides are largely unknown. We have performed coarse-grained and all-atom molecular dynamics computer simulations of a number of cellulosomal linkers of different lengths and compositions. Our data demonstrate that the effective stiffness of the linker peptides, as quantified by the equilibrium fluctuations in the end-to-end distances, depends primarily on the length of the linker and less so on the specific amino acid sequence. The presence of excluded volume, provided by the domains that are connected, dampens the motion of the linker residues and reduces the effective stiffness of the linkers. Simultaneously, the presence of the linkers alters the conformations of the protein domains that are connected. We demonstrate that short, stiff linkers induce significant rearrangements in the folded domains of the mini-cellulosome composed of endoglucanase Cel8A in complex with scaffoldin ScafT (Cel8A-ScafT) of Clostridium thermocellum, as well as in a two-cohesin system derived from the scaffoldin ScaB of Acetivibrio cellulolyticus. We give experimentally testable predictions on structural changes in protein domains that depend on the length of the linkers.

  12. Temporal Processing in the Visual Cortex of the Awake and Anesthetized Rat.

    PubMed

    Aasebø, Ida E J; Lepperød, Mikkel E; Stavrinou, Maria; Nøkkevangen, Sandra; Einevoll, Gaute; Hafting, Torkel; Fyhn, Marianne

    2017-01-01

    The activity pattern and temporal dynamics within and between neuron ensembles are essential features of information processing and are believed to be profoundly affected by anesthesia. Much of our general understanding of sensory information processing, including computational models aimed at mathematically simulating it, relies on parameters derived from recordings conducted on animals under anesthesia. Owing to the high variety of neuronal subtypes in the brain, population-based estimates of the impact of anesthesia may conceal unit- or ensemble-specific effects of the transition between states. Using tetrodes chronically implanted in the primary visual cortex (V1) of rats, we conducted extracellular recordings of single units and followed the same cell ensembles in the awake and anesthetized states. We found that the transition from wakefulness to anesthesia involves unpredictable changes in temporal response characteristics. The latency of single-unit responses to visual stimulation was delayed in anesthesia, with large individual variations between units. Pair-wise correlations between units increased under anesthesia, indicating more synchronized activity. Further, the units within an ensemble show reproducible temporal activity patterns in response to visual stimuli, and these patterns change between states, suggesting state-dependent sequences of activity. The current dataset, with recordings from the same neural ensembles across states, is well suited for validating and testing computational network models. This can lead to testable predictions, bring a deeper understanding of the experimental findings, and improve models of neural information processing. Here, we exemplify such a workflow using a Brunel network model.

  13. Coexistence of Reward and Unsupervised Learning During the Operant Conditioning of Neural Firing Rates

    PubMed Central

    Kerr, Robert R.; Grayden, David B.; Thomas, Doreen A.; Gilson, Matthieu; Burkitt, Anthony N.

    2014-01-01

    A fundamental goal of neuroscience is to understand how cognitive processes, such as operant conditioning, are performed by the brain. Typical and well studied examples of operant conditioning, in which the firing rates of individual cortical neurons in monkeys are increased using rewards, provide an opportunity for insight into this. Studies of reward-modulated spike-timing-dependent plasticity (RSTDP), and of other models such as R-max, have reproduced this learning behavior, but they have assumed that no unsupervised learning is present (i.e., no learning occurs without, or independent of, rewards). We show that these models cannot elicit firing rate reinforcement while exhibiting both reward learning and ongoing, stable unsupervised learning. To fix this issue, we propose a new RSTDP model of synaptic plasticity based upon the observed effects that dopamine has on long-term potentiation and depression (LTP and LTD). We show, both analytically and through simulations, that our new model can exhibit unsupervised learning and lead to firing rate reinforcement. This requires that the strengthening of LTP by the reward signal is greater than the strengthening of LTD and that the reinforced neuron exhibits irregular firing. We show the robustness of our findings to spike-timing correlations, to the synaptic weight dependence that is assumed, and to changes in the mean reward. We also consider our model in the differential reinforcement of two nearby neurons. Our model aligns more strongly with experimental studies than previous models and makes testable predictions for future experiments. PMID:24475240
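
    The key ingredient of the proposed rule, reward strengthening LTP more than LTD, can be sketched as a pairwise spike-timing update with a reward-dependent asymmetry. All parameter names and values in this Python sketch are illustrative, not the paper's:

        # Reward-modulated STDP in which reward scales LTP more strongly
        # than LTD; baseline (reward = 0) drift is slightly depressing,
        # so ongoing unsupervised learning stays stable. Illustrative
        # parameters, not fitted to data.
        import math

        A_LTP, A_LTD = 1.00, 1.05    # baseline amplitudes
        TAU = 20.0                   # STDP time constant (ms)
        K_LTP, K_LTD = 1.0, 0.4      # reward gain on LTP vs. LTD (K_LTP > K_LTD)

        def dw(delta_t, reward):
            """Weight change for a pre/post spike pair (delta_t in ms).

            delta_t > 0: pre before post (LTP); delta_t < 0: LTD.
            """
            if delta_t > 0:
                return (1 + K_LTP * reward) * A_LTP * math.exp(-delta_t / TAU)
            return -(1 + K_LTD * reward) * A_LTD * math.exp(delta_t / TAU)

        # Net drift for symmetric spike pairs:
        print(dw(10.0, reward=1.0) + dw(-10.0, reward=1.0))  # > 0: reinforced
        print(dw(10.0, reward=0.0) + dw(-10.0, reward=0.0))  # < 0: stable baseline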

  14. Towards a theory of individual differences in statistical learning

    PubMed Central

    Bogaerts, Louisa; Christiansen, Morten H.; Frost, Ram

    2017-01-01

    In recent years, statistical learning (SL) research has seen a growing interest in tracking individual performance in SL tasks, mainly as a predictor of linguistic abilities. We review studies from this line of research and outline three presuppositions underlying the experimental approach they employ: (i) that SL is a unified theoretical construct; (ii) that current SL tasks are interchangeable, and equally valid for assessing SL ability; and (iii) that performance in the standard forced-choice test in the task is a good proxy of SL ability. We argue that these three critical presuppositions are subject to a number of theoretical and empirical issues. First, SL shows patterns of modality- and informational-specificity, suggesting that SL cannot be treated as a unified construct. Second, different SL tasks may tap into separate sub-components of SL that are not necessarily interchangeable. Third, the commonly used forced-choice tests in most SL tasks are subject to inherent limitations and confounds. As a first step, we offer a methodological approach that explicitly spells out a potential set of different SL dimensions, allowing for better transparency in choosing a specific SL task as a predictor of a given linguistic outcome. We then offer possible methodological solutions for better tracking and measuring SL ability. Taken together, these discussions provide a novel theoretical and methodological approach for assessing individual differences in SL, with clear testable predictions. This article is part of the themed issue ‘New frontiers for statistical learning in the cognitive sciences’. PMID:27872377

  15. Temporal Processing in the Visual Cortex of the Awake and Anesthetized Rat

    PubMed Central

    Aasebø, Ida E. J.; Stavrinou, Maria; Nøkkevangen, Sandra; Einevoll, Gaute

    2017-01-01

    The activity pattern and temporal dynamics within and between neuron ensembles are essential features of information processing and are believed to be profoundly affected by anesthesia. Much of our general understanding of sensory information processing, including computational models aimed at mathematically simulating it, relies on parameters derived from recordings conducted on animals under anesthesia. Owing to the high variety of neuronal subtypes in the brain, population-based estimates of the impact of anesthesia may conceal unit- or ensemble-specific effects of the transition between states. Using tetrodes chronically implanted in the primary visual cortex (V1) of rats, we conducted extracellular recordings of single units and followed the same cell ensembles in the awake and anesthetized states. We found that the transition from wakefulness to anesthesia involves unpredictable changes in temporal response characteristics. The latency of single-unit responses to visual stimulation was delayed in anesthesia, with large individual variations between units. Pair-wise correlations between units increased under anesthesia, indicating more synchronized activity. Further, the units within an ensemble show reproducible temporal activity patterns in response to visual stimuli, and these patterns change between states, suggesting state-dependent sequences of activity. The current dataset, with recordings from the same neural ensembles across states, is well suited for validating and testing computational network models. This can lead to testable predictions, bring a deeper understanding of the experimental findings, and improve models of neural information processing. Here, we exemplify such a workflow using a Brunel network model. PMID:28791331

  16. Soy-Based Therapeutic Baby Formulas: Testable Hypotheses Regarding the Pros and Cons

    PubMed Central

    Westmark, Cara J.

    2017-01-01

    Soy-based infant formulas have been consumed in the United States since 1909, and currently constitute a significant portion of the infant formula market. There are efforts underway to generate genetically modified soybeans that produce therapeutic agents of interest with the intent to deliver those agents in a soy-based infant formula platform. The threefold purpose of this review article is to first discuss the pros and cons of soy-based infant formulas, then present testable hypotheses to discern the suitability of a soy platform for drug delivery in babies, and finally start a discussion to inform public policy on this important area of infant nutrition. PMID:28149839

  17. Predictive genetic testing in minors for late-onset conditions: a chronological and analytical review of the ethical arguments.

    PubMed

    Mand, Cara; Gillam, Lynn; Delatycki, Martin B; Duncan, Rony E

    2012-09-01

    Predictive genetic testing is now routinely offered to asymptomatic adults at risk for genetic disease. However, testing of minors at risk for adult-onset conditions, where no treatment or preventive intervention exists, has evoked greater controversy and inspired a debate spanning two decades. This review aims to provide a detailed longitudinal analysis and concludes by examining the debate's current status and prospects for the future. Fifty-three relevant theoretical papers published between 1990 and December 2010 were identified, and interpretative content analysis was employed to catalogue discrete arguments within these papers. Novel conclusions were drawn from this review. While the debate's first voices were raised in opposition to testing, and their arguments have retained currency over many years, arguments in favour of testing, which appeared sporadically at first, have gained momentum more recently. Most arguments on both sides are testable empirical claims, so far untested, rather than abstract ethical or philosophical positions. The dispute thus lies not so much in whether minors should be permitted to access predictive genetic testing as in whether these empirical claims about the relative benefits or harms of testing should be assessed.

  18. Bell Nonlocality, Signal Locality and Unpredictability (or What Bohr Could Have Told Einstein at Solvay Had He Known About Bell Experiments)

    NASA Astrophysics Data System (ADS)

    Cavalcanti, Eric G.; Wiseman, Howard M.

    2012-10-01

    The 1964 theorem of John Bell shows that no model that reproduces the predictions of quantum mechanics can simultaneously satisfy the assumptions of locality and determinism. On the other hand, the assumptions of signal locality plus predictability are also sufficient to derive Bell inequalities. This simple theorem, previously noted but published only relatively recently by Masanes, Acin and Gisin, has fundamental implications that are not entirely appreciated. Firstly, nothing can be concluded about the ontological assumptions of locality or determinism independently of each other: it is possible to reproduce quantum mechanics with deterministic models that violate locality, as well as with indeterministic models that satisfy locality. On the other hand, the operational assumption of signal locality is an empirically testable (and well-tested) consequence of relativity. Thus Bell inequality violations imply that we can trust that some events are fundamentally unpredictable, even if we cannot trust that they are indeterministic. This result grounds the quantum-mechanical prohibition of arbitrarily accurate predictions on the assumption of no superluminal signalling, regardless of any postulates of quantum mechanics. It also sheds new light on an early stage of the historical debate between Einstein and Bohr.
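
    The Bell inequalities at issue are most often stated in their CHSH form; generically, in LaTeX, for correlations E(x, y) between outcomes under measurement settings (a, a') and (b, b'):

        \[
          S \;=\; E(a,b) - E(a,b') + E(a',b) + E(a',b'),
          \qquad |S| \,\le\, 2 ,
        \]

    a bound that quantum mechanics violates up to |S| = 2\sqrt{2}. The derivation discussed in the abstract obtains inequalities of this family from signal locality plus predictability alone, without assuming local determinism.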

  19. Optimal flight initiation distance.

    PubMed

    Cooper, William E; Frederick, William G

    2007-01-07

    Decisions regarding flight initiation distance have received scant theoretical attention. A graphical model by Ydenberg and Dill (1986. The economics of fleeing from predators. Adv. Stud. Behav. 16, 229-249) that has guided research for the past 20 years specifies when escape begins. In the model, a prey detects a predator, monitors its approach until the costs of escape and of remaining are equal, and then flees. The distance between predator and prey when escape is initiated (approach distance = flight initiation distance) occurs where the decreasing cost of remaining and the increasing cost of fleeing intersect. We argue that prey fleeing as predicted cannot maximize fitness, because the best a prey can do is break even during an encounter. We develop two optimality models, one applying when all expected future contribution to fitness (residual reproductive value, RRV) is lost if the prey dies, the other when any fitness gained (increase in expected RRV) during the encounter is retained after death. Both models predict optimal flight initiation distance from initial expected fitness, benefits obtainable during encounters, costs of escaping, and the probability of being killed. Predictions match the extensively verified predictions of Ydenberg and Dill's (1986) model. Our main conclusion is that optimality models are preferable to break-even models because they permit fitness maximization, offer many new testable predictions, and allow assessment of prey decisions in many naturally occurring situations through modification of benefit, escape cost, and risk functions.
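
    The optimization the models perform, choosing the flight initiation distance d that maximizes expected fitness given benefits of staying, costs of escape, and risk of death, can be sketched numerically in Python. The functional forms below are illustrative assumptions, not those of the paper:

        # Expected fitness as a function of flight initiation distance d,
        # for the case where all fitness is lost if the prey dies:
        #   fitness(d) = (1 - risk(d)) * (F0 + benefit(d) - cost(d)).
        import numpy as np

        F0 = 1.0                                   # initial expected fitness
        benefit = lambda d: 0.3 * np.exp(-d / 10)  # gains from delaying flight
        cost = lambda d: 0.05 * np.exp(-d / 5)     # escape cost at close range
        risk = lambda d: np.exp(-d / 3)            # probability of being killed

        d = np.linspace(0, 50, 5001)
        fitness = (1 - risk(d)) * (F0 + benefit(d) - cost(d))
        print(f"optimal flight initiation distance ~ {d[np.argmax(fitness)]:.1f}")

    With these toy functions the optimum is interior, fleeing neither immediately nor at the last moment, which is the qualitative content of the optimality argument.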

  20. Reliability/maintainability/testability design for dormancy

    NASA Astrophysics Data System (ADS)

    Seman, Robert M.; Etzl, Julius M.; Purnell, Arthur W.

    1988-05-01

    This document has been prepared as a tool for designers of dormant military equipment and systems. The purpose of this handbook is to provide design engineers with Reliability/Maintainability/Testability design guidelines for systems which spend significant portions of their life cycle in a dormant state. The dormant state is defined as a nonoperating mode in which a system experiences very little or no electrical stress. The guidelines in this report present design criteria in the following categories: (1) Part Selection and Control; (2) Derating Practices; (3) Equipment/System Packaging; (4) Transportation and Handling; (5) Maintainability Design; (6) Testability Design; (7) Evaluation Methods for In-Plant and Field Evaluation; and (8) Product Performance Agreements. Wherever applicable, design guidelines for operating systems were included with the dormant design guidelines in an effort to cover a more complete life cycle. Although dormant systems spend significant portions of their life cycle in a nonoperating mode, the designer must design the system for the complete life cycle, including nonoperating as well as operating modes. The guidelines are primarily intended for use in the design of equipment composed of electronic parts and components. However, they can also be used for the design of systems which encompass both electronic and nonelectronic parts, as well as for the modification of existing systems.

  1. Ocular biometric parameters among 3-year-old Chinese children: testability, distribution and association with anthropometric parameters

    PubMed Central

    Huang, Dan; Chen, Xuejuan; Gong, Qi; Yuan, Chaoqun; Ding, Hui; Bai, Jing; Zhu, Hui; Fu, Zhujun; Yu, Rongbin; Liu, Hu

    2016-01-01

    This survey was conducted to determine the testability, distribution and associations of ocular biometric parameters in Chinese preschool children. Ocular biometric examinations, including the axial length (AL) and corneal radius of curvature (CR), were conducted on 1,688 3-year-old subjects by using an IOLMaster in August 2015. Anthropometric parameters, including height and weight, were measured according to a standardized protocol, and body mass index (BMI) was calculated. The testability was 93.7% for the AL and 78.6% for the CR overall, and both measures improved with age. Girls performed slightly better in AL measurements (P = 0.08), and the difference in CR was statistically significant (P < 0.05). The AL distribution was normal in girls (P = 0.12), whereas it was not in boys (P < 0.05). For CR1, all subgroups presented normal distributions (P = 0.16 for boys; P = 0.20 for girls), but the distribution varied when the subgroups were combined (P < 0.05). CR2 presented a normal distribution (P = 0.11), whereas the AL/CR ratio was abnormal (P < 0.001). Boys exhibited a significantly longer AL, a greater CR and a greater AL/CR ratio than girls (all P < 0.001). PMID:27384307

  2. Questioning the Faith - Models and Prediction in Stream Restoration (Invited)

    NASA Astrophysics Data System (ADS)

    Wilcock, P.

    2013-12-01

    River management and restoration demand prediction at and beyond our present ability. Management questions, framed appropriately, can motivate fundamental advances in science, although the connection between research and application is not always easy, useful, or robust. Why is that? This presentation considers the connection between models and management, a connection that requires critical and creative thought on both sides. Essential challenges for managers include clearly defining project objectives and accommodating uncertainty in any model prediction. Essential challenges for the research community include matching the appropriate model to project duration, space, funding, information, and social constraints and clearly presenting answers that are actually useful to managers. Better models do not lead to better management decisions or better designs if the predictions are not relevant to and accepted by managers. In fact, any prediction may be irrelevant if the need for prediction is not recognized. The predictive target must be developed in an active dialog between managers and modelers. This relationship, like any other, can take time to develop. For example, large segments of stream restoration practice have remained resistant to models and prediction because the foundational tenet (that channels built to a certain template will be able to transport the supplied sediment with the available flow) has no essential physical connection between cause and effect. Stream restoration practice can be steered in a predictive direction in which project objectives are defined as predictable attributes and testable hypotheses. If stream restoration design is defined in terms of the desired performance of the channel (static or dynamic, sediment surplus or deficit), then channel properties that provide these attributes can be predicted and a basis exists for testing approximations, models, and predictions.

  3. Probing leptogenesis

    NASA Astrophysics Data System (ADS)

    Chun, E. J.; Cvetič, G.; Dev, P. S. B.; Drewes, M.; Fong, C. S.; Garbrecht, B.; Hambye, T.; Harz, J.; Hernández, P.; Kim, C. S.; Molinaro, E.; Nardi, E.; Racker, J.; Rius, N.; Zamora-Saa, J.

    2018-02-01

    The focus of this paper lies on possible experimental tests of leptogenesis scenarios. We consider both leptogenesis generated from oscillations and leptogenesis from out-of-equilibrium decays. As the Akhmedov-Rubakov-Smirnov (ARS) mechanism allows for heavy neutrinos in the GeV range, this opens up a plethora of possible experimental tests, e.g. at neutrino oscillation experiments, in neutrinoless double beta decay, and in direct searches for neutral heavy leptons at future facilities. In contrast, testing leptogenesis from out-of-equilibrium decays is a rather difficult task. We comment on the necessary conditions for successful leptogenesis at the TeV scale. We further discuss possible realizations and their model-specific testability in extended seesaw models, models with extended gauge sectors, and supersymmetric leptogenesis. Not being able to test high-scale leptogenesis directly, we present a way to falsify such scenarios by focusing on their washout processes. This is discussed specifically for the left-right symmetric model and the observation of a heavy W_R, as well as model-independently when measuring ΔL = 2 washout processes at the LHC or in neutrinoless double beta decay.

  4. Stereoacuity of preschool children with and without vision disorders.

    PubMed

    Ciner, Elise B; Ying, Gui-Shuang; Kulp, Marjean Taylor; Maguire, Maureen G; Quinn, Graham E; Orel-Bixler, Deborah; Cyert, Lynn A; Moore, Bruce; Huang, Jiayan

    2014-03-01

    To evaluate associations between stereoacuity and presence, type, and severity of vision disorders in Head Start preschool children and determine testability and levels of stereoacuity by age in children without vision disorders. Stereoacuity of children aged 3 to 5 years (n = 2898) participating in the Vision in Preschoolers (VIP) Study was evaluated using the Stereo Smile II test during a comprehensive vision examination. This test uses a two-alternative forced-choice paradigm with four stereoacuity levels (480 to 60 seconds of arc). Children were classified by the presence (n = 871) or absence (n = 2027) of VIP Study-targeted vision disorders (amblyopia, strabismus, significant refractive error, or unexplained reduced visual acuity), including type and severity. Median stereoacuity between groups and among severity levels of vision disorders was compared using Wilcoxon rank sum and Kruskal-Wallis tests. Testability and stereoacuity levels were determined for children without VIP Study-targeted disorders overall and by age. Children with VIP Study-targeted vision disorders had significantly worse median stereoacuity than that of children without vision disorders (120 vs. 60 seconds of arc, p < 0.001). Children with the most severe vision disorders had worse stereoacuity than that of children with milder disorders (median 480 vs. 120 seconds of arc, p < 0.001). Among children without vision disorders, testability was 99.6% overall, increasing with age to 100% for 5-year-olds (p = 0.002). Most of the children without vision disorders (88%) had stereoacuity at the two best disparities (60 or 120 seconds of arc); the percentage increasing with age (82% for 3-, 89% for 4-, and 92% for 5-year-olds; p < 0.001). The presence of any VIP Study-targeted vision disorder was associated with significantly worse stereoacuity in preschool children. Severe vision disorders were more likely associated with poorer stereopsis than milder or no vision disorders. Testability was excellent at all ages. These results support the validity of the Stereo Smile II for assessing random-dot stereoacuity in preschool children.

  5. What Are Health-Related Users Tweeting? A Qualitative Content Analysis of Health-Related Users and Their Messages on Twitter

    PubMed Central

    DeCamp, Matthew; Dredze, Mark; Chisolm, Margaret S; Berger, Zackary D

    2014-01-01

    Background: Twitter is home to many health professionals who send messages about a variety of health-related topics. Amid concerns about physicians posting inappropriate content online, more in-depth knowledge about these messages is needed to understand health professionals’ behavior on Twitter. Objective: Our goal was to characterize the content of Twitter messages, specifically focusing on health professionals and their tweets relating to health. Methods: We performed an in-depth content analysis of 700 tweets. Qualitative content analysis was conducted on tweets by health users on Twitter. The primary objective was to describe the general type of content (ie, health-related versus non-health-related) on Twitter authored by health professionals, and further to describe health-related tweets on the basis of the type of statement made. Specific attention was given to whether a tweet was personal (as opposed to professional) or made a claim that users would expect to be supported by some level of medical evidence (ie, a “testable” claim). A secondary objective was to compare content types among different users, including patients, physicians, nurses, health care organizations, and others. Results: Health-related users are posting a wide range of content on Twitter. Among health-related tweets, 53.2% (184/346) contained a testable claim. Of health-related tweets by providers, 17.6% (61/346) were personal in nature; 61% (59/96) made testable statements. While organizations and businesses use Twitter to promote their services and products, patient advocates are using this tool to share their personal experiences with health. Conclusions: Twitter users in health-related fields tweet about both testable claims and personal experiences. Future work should assess the relationship between testable tweets and the actual level of evidence supporting them, including how Twitter users—especially patients—interpret the content of tweets posted by health providers. PMID:25591063

  6. Is Ecosystem-Atmosphere Observation in Long-Term Networks actually Science?

    NASA Astrophysics Data System (ADS)

    Schmid, H. P. E.

    2015-12-01

    Science uses observations to build knowledge through testable explanations and predictions. The "scientific method" requires controlled systematic observation to examine questions, hypotheses and predictions. Thus, enquiry along the scientific method responds to questions of the type "what if …?" In contrast, long-term observation programs follow a different strategy: we commonly take great care to minimize our influence on the environment of our measurements, with the aim of maximizing their external validity. We observe what we think are key variables for ecosystem-atmosphere exchange and ask questions such as "what happens next?" or "how did this happen?" This apparent deviation from the scientific method raises the question of whether any explanations we come up with for the phenomena we observe actually contribute to testable knowledge, or whether their value remains purely anecdotal. Here, we present examples to argue that, under certain conditions, data from long-term observations and observation networks can have equivalent or even higher scientific validity than controlled experiments. Internal validity is particularly enhanced if observations are combined with modeling. Long-term observations of ecosystem-atmosphere fluxes identify trends and temporal scales of variability. Observation networks reveal spatial patterns and variations, and long-term observation networks combine both aspects. A necessary condition for such observations to gain validity beyond the anecdotal is the requirement that the data be comparable: a comparison of two measured values, separated in time or space, must inform us objectively whether (e.g.) one value is larger than the other. In turn, a necessary condition for the comparability of data is the compatibility of the sensors and procedures used to generate them. Compatibility ensures that we compare "apples to apples": that measurements conducted in identical conditions give the same values (within suitable uncertainty intervals). In principle, a useful tool to achieve comparability and compatibility is the standardization of sensors and methods. However, due to the diversity of ecosystems and settings, standardization in ecosystem-atmosphere exchange is difficult. We discuss some of the challenges and pitfalls of standardization across networks.

  7. The diffusion decision model: theory and data for two-choice decision tasks.

    PubMed

    Ratcliff, Roger; McKoon, Gail

    2008-04-01

    The diffusion decision model allows detailed explanations of behavior in two-choice discrimination tasks. In this article, the model is reviewed to show how it translates behavioral data (accuracy, mean response times, and response time distributions) into components of cognitive processing. Three experiments are used to illustrate experimental manipulations of three components: stimulus difficulty affects the quality of information on which a decision is based; instructions emphasizing either speed or accuracy affect the criterial amounts of information that a subject requires before initiating a response; and the relative proportions of the two stimuli affect biases in drift rate and starting point. The experiments also illustrate the strong constraints that ensure the model is empirically testable and potentially falsifiable. The broad range of applications of the model is also reviewed, including research in the domains of aging and neurophysiology.
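
    A minimal simulation sketch of the accumulation process the model describes may help make its components concrete: noisy evidence is integrated at a drift rate from a starting point toward one of two boundaries. All parameter values below are illustrative, not fitted to any of the reviewed experiments.

      import numpy as np

      def simulate_ddm(drift, boundary, start, noise=1.0, dt=1e-3, rng=None):
          # One trial: integrate noisy evidence until it crosses 0 or `boundary`.
          rng = rng or np.random.default_rng()
          x, t = start, 0.0
          while 0.0 < x < boundary:
              x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
              t += dt
          return (1, t) if x >= boundary else (0, t)  # (choice, response time)

      rng = np.random.default_rng(1)
      trials = [simulate_ddm(drift=0.8, boundary=1.0, start=0.5, rng=rng)
                for _ in range(2000)]
      choices, rts = zip(*trials)
      print(f"P(upper) = {np.mean(choices):.3f}, mean RT = {np.mean(rts):.3f} s")

    In this sketch, stimulus difficulty maps onto drift, speed-accuracy instructions onto boundary, and stimulus proportions onto biases in start and drift, mirroring the three manipulations reviewed in the abstract.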

  8. Astrobiological Phase Transition: Towards Resolution of Fermi's Paradox

    NASA Astrophysics Data System (ADS)

    Ćirković, Milan M.; Vukotić, Branislav

    2008-12-01

    Can astrophysics explain Fermi’s paradox or the “Great Silence” problem? If available, such an explanation would be advantageous over most of those suggested in the literature, which rely on unverifiable cultural and/or sociological assumptions. We suggest, instead, a general astrobiological paradigm which might offer a physical and empirically testable paradox resolution. Based on the idea of James Annis, we develop a model of an astrobiological phase transition of the Milky Way, based on the concept of the global regulation mechanism(s). The dominant regulation mechanisms, arguably, are γ-ray bursts, whose properties and cosmological evolution are becoming well-understood. Secular evolution of regulation mechanisms leads to the brief epoch of phase transition: from an essentially dead place, with pockets of low-complexity life restricted to planetary surfaces, the Galaxy will, on a short (Fermi-Hart) timescale, become filled with high-complexity life. An observation selection effect explains why, in spite of the very small prior probability, we should not be surprised to find ourselves located in that brief phase of disequilibrium. In addition, we show that, although the phase-transition model may explain the “Great Silence”, it is not supportive of the “contact pessimist” position. To the contrary, the phase-transition model offers a rational motivation for continuation and extension of our present-day Search for ExtraTerrestrial Intelligence (SETI) endeavours. Some of the unequivocal and testable predictions of our model include the decrease of extinction risk in the history of terrestrial life, the absence of any traces of Galactic societies significantly older than human society, the complete lack of any extragalactic intelligent signals or phenomena, and the presence of ubiquitous low-complexity life in the Milky Way.

  9. Astrobiological phase transition: towards resolution of Fermi's paradox.

    PubMed

    Cirković, Milan M; Vukotić, Branislav

    2008-12-01

    Can astrophysics explain Fermi's paradox or the "Great Silence" problem? If available, such an explanation would be advantageous over most of those suggested in the literature, which rely on unverifiable cultural and/or sociological assumptions. We suggest, instead, a general astrobiological paradigm which might offer a physical and empirically testable paradox resolution. Based on the idea of James Annis, we develop a model of an astrobiological phase transition of the Milky Way, based on the concept of the global regulation mechanism(s). The dominant regulation mechanisms, arguably, are gamma-ray bursts, whose properties and cosmological evolution are becoming well-understood. Secular evolution of regulation mechanisms leads to the brief epoch of phase transition: from an essentially dead place, with pockets of low-complexity life restricted to planetary surfaces, the Galaxy will, on a short (Fermi-Hart) timescale, become filled with high-complexity life. An observation selection effect explains why, in spite of the very small prior probability, we should not be surprised to find ourselves located in that brief phase of disequilibrium. In addition, we show that, although the phase-transition model may explain the "Great Silence", it is not supportive of the "contact pessimist" position. To the contrary, the phase-transition model offers a rational motivation for continuation and extension of our present-day Search for ExtraTerrestrial Intelligence (SETI) endeavours. Some of the unequivocal and testable predictions of our model include the decrease of extinction risk in the history of terrestrial life, the absence of any traces of Galactic societies significantly older than human society, the complete lack of any extragalactic intelligent signals or phenomena, and the presence of ubiquitous low-complexity life in the Milky Way.

  10. Development of a dynamic computational model of social cognitive theory.

    PubMed

    Riley, William T; Martin, Cesar A; Rivera, Daniel E; Hekler, Eric B; Adams, Marc A; Buman, Matthew P; Pavel, Misha; King, Abby C

    2016-12-01

    Social cognitive theory (SCT) is among the most influential theories of behavior change and has been used as the conceptual basis of health behavior interventions for smoking cessation, weight management, and other health behaviors. SCT and other behavior theories were developed primarily to explain differences between individuals, but explanatory theories of within-person behavioral variability are increasingly needed as new technologies allow for intensive longitudinal measures and for interventions adapted from these inputs. These within-person explanatory theoretical applications can be modeled as dynamical systems. SCT constructs, such as reciprocal determinism, are inherently dynamical in nature, but SCT has not previously been modeled as a dynamical system. This paper describes the development of a dynamical system model of SCT using fluid analogies and control systems principles drawn from engineering. Simulations of this model were performed to assess whether the model performed as predicted based on theory and empirical studies of SCT. This initial model generates precise and testable quantitative predictions for future intensive longitudinal research. Dynamic modeling approaches provide a rigorous method for advancing health behavior theory development and refinement, and for guiding the development of more potent and efficient interventions.
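
    As an illustration of the fluid-analogy idea (a sketch, not the published model itself), the snippet below couples two first-order "inventories", one for self-efficacy and one for behavior, with a feedback loop standing in for reciprocal determinism; every gain and time constant here is an assumption.

      import numpy as np

      # Two coupled first-order "tanks": self-efficacy (SE) and behavior (B).
      # The B -> SE feedback is a toy stand-in for reciprocal determinism.
      dt, T = 0.1, 200.0
      t = np.arange(0.0, T, dt)
      SE, B = np.zeros_like(t), np.zeros_like(t)
      u = (t > 50) * 1.0                      # intervention switched on at t = 50

      tau_se, tau_b, k_bse, k_seb = 10.0, 5.0, 0.4, 0.8   # assumed constants
      for i in range(1, t.size):
          SE[i] = SE[i-1] + dt * (k_bse * B[i-1] + u[i-1] - SE[i-1]) / tau_se
          B[i] = B[i-1] + dt * (k_seb * SE[i-1] - B[i-1]) / tau_b

      # With k_bse * k_seb < 1 the loop settles, for u = 1, at
      # B = k_seb / (1 - k_bse * k_seb).
      print(f"steady-state behavior ~ {B[-1]:.3f}")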

  11. Steps in the bacterial flagellar motor.

    PubMed

    Mora, Thierry; Yu, Howard; Sowa, Yoshiyuki; Wingreen, Ned S

    2009-10-01

    The bacterial flagellar motor is a highly efficient rotary machine used by many bacteria to propel themselves. It has recently been shown that at low speeds its rotation proceeds in steps. Here we propose a simple physical model, based on the storage of energy in protein springs, that accounts for this stepping behavior as a random walk in a tilted corrugated potential that combines torque and contact forces. We argue that the absolute angular position of the rotor is crucial for understanding step properties and show this hypothesis to be consistent with the available data, in particular the observation that backward steps are smaller on average than forward steps. We also predict a sublinear speed versus torque relationship for fixed load at low torque, and a peak in rotor diffusion as a function of torque. Our model provides a comprehensive framework for understanding and analyzing stepping behavior in the bacterial flagellar motor and proposes novel, testable predictions. More broadly, the storage of energy in protein springs by the flagellar motor may provide useful general insights into the design of highly efficient molecular machines.
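
    A hedged sketch of the central ingredient, an overdamped walker in a tilted corrugated ("washboard") potential, shows how steps emerge from the interplay of torque, corrugation, and noise; parameter values below are illustrative, not fitted to motor data.

      import numpy as np

      # Overdamped Langevin dynamics in V(th) = -torque*th - A*cos(N*th):
      # the tilt drives rotation, the corrugation produces dwells and steps.
      rng = np.random.default_rng(0)
      N, torque, A = 26, 0.5, 0.15        # wells per revolution, tilt, corrugation
      gamma, kT, dt = 1.0, 0.1, 1e-3      # drag, temperature, time step (a.u.)

      theta = 0.0
      for _ in range(200_000):
          force = torque - A * N * np.sin(N * theta)   # -dV/dtheta
          theta += force / gamma * dt \
                   + np.sqrt(2 * kT / gamma * dt) * rng.standard_normal()

      print(f"net rotation {np.degrees(theta):.0f} deg, "
            f"in steps of ~{360 / N:.1f} deg")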

  12. On Geomagnetism and Paleomagnetism I

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    2000-01-01

    A partial description of Earth's broad-scale, core-source magnetic field has been developed and tested in three ways. The description features an expected, or mean, spatial magnetic power spectrum that is approximately inversely proportional to horizontal wavenumber atop Earth's core. This multipole spectrum describes a magnetic energy range; it is not steep enough for Gubbins' magnetic dissipation range. Temporal variations of core multipole powers about mean values are to be expected and are described statistically, via trial probability distribution functions, instead of deterministically, via trial solution of closed transport equations. The distributions considered here are closed and neither require nor prohibit magnetic isotropy. The description is therefore applicable to, and tested against, both dipole and low-degree non-dipole fields. In Part 1, a physical basis for an expectation spectrum is developed and checked. The description is then combined with main field models of twentieth-century satellite and surface geomagnetic field measurements to make testable predictions of the radius of Earth's core. The predicted core radius is 0.7% above the 3480 km seismological value. Partial descriptions of other planetary dipole fields are noted.

  13. Optimal assessment of multiple cues.

    PubMed Central

    Fawcett, Tim W; Johnstone, Rufus A

    2003-01-01

    In a wide range of contexts from mate choice to foraging, animals are required to discriminate between alternative options on the basis of multiple cues. How should they best assess such complex multicomponent stimuli? Here, we construct a model to investigate this problem, focusing on a simple case where a 'chooser' faces a discrimination task involving two cues. These cues vary in their accuracy and in how costly they are to assess. As an example, we consider a mate-choice situation where females choose between males of differing quality. Our model predicts the following: (i) females should become less choosy as the cost of finding new males increases; (ii) females should prioritize cues differently depending on how choosy they are; (iii) females may sometimes prioritize less accurate cues; and (iv) which cues are most important depends on the abundance of desirable mates. These predictions are testable in mate-choice experiments where the costs of choice can be manipulated. Our findings are applicable to other discrimination tasks besides mate choice, for example a predator's choice between palatable and unpalatable prey, or an altruist's choice between kin and non-kin. PMID:12908986

  14. Symbiotic immuno-suppression: is disease susceptibility the price of bleaching resistance?

    PubMed

    Merselis, Daniel G; Lirman, Diego; Rodriguez-Lanetty, Mauricio

    2018-01-01

    Accelerating anthropogenic climate change threatens to destroy coral reefs worldwide through the processes of bleaching and disease. These major contributors to coral mortality are both closely linked with thermal stress intensified by anthropogenic climate change. Disease outbreaks typically follow bleaching events, but a direct positive linkage between bleaching and disease has been debated. By tracking 152 individual coral ramets through the 2014 mass bleaching in a South Florida coral restoration nursery, we revealed a highly significant negative correlation between bleaching and disease in the Caribbean staghorn coral, Acropora cervicornis. To explain these results, we propose a mechanism for transient immunological protection through coral bleaching: removal of Symbiodinium during bleaching may also temporarily eliminate suppressive symbiont modulation of host immunological function. We contextualize this hypothesis within an ecological perspective in order to generate testable predictions for future investigation.

  15. Transport dynamics of molecular motors that switch between an active and inactive state

    NASA Astrophysics Data System (ADS)

    Pinkoviezky, I.; Gov, N. S.

    2013-08-01

    Molecular motors are involved in key transport processes in the cell. Many of these motors can switch from an active to an inactive state, either spontaneously or depending on their interaction with other molecules. When active, the motors move processively along the filaments, while when inactive they are stationary. We treat here the simple case of motors that switch spontaneously between the active and inactive states while moving along an open linear track. We use our recent analogy with vehicular traffic, where we go beyond the mean-field description. We map the phase diagram of this system and find that it clearly breaks the symmetry between the different phases, as compared to the standard totally asymmetric exclusion process. We make several predictions that may be testable using molecular motors in vitro and in living cells.
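
    The following toy Monte Carlo, a rough sketch rather than the authors' exact update rules, illustrates the kind of process being analyzed: an open-boundary exclusion process in which each particle can switch between an active (hopping) and an inactive (frozen) state. All rates are assumed values.

      import numpy as np

      # Site states: 0 empty, 1 active motor, 2 inactive (stationary) motor.
      rng = np.random.default_rng(2)
      L, alpha, beta = 200, 0.3, 0.5        # lattice size, entry and exit rates
      k_off, k_on = 0.05, 0.1               # active->inactive, inactive->active
      lattice = np.zeros(L, dtype=int)

      for _ in range(200_000):
          i = rng.integers(-1, L)           # -1 triggers an entry attempt
          if i == -1:
              if lattice[0] == 0 and rng.random() < alpha:
                  lattice[0] = 1
          elif lattice[i] == 1:
              if rng.random() < k_off:
                  lattice[i] = 2            # freeze in place
              elif i == L - 1:
                  if rng.random() < beta:
                      lattice[i] = 0        # exit at the right boundary
              elif lattice[i + 1] == 0:
                  lattice[i], lattice[i + 1] = 0, 1   # hop right if site free
          elif lattice[i] == 2 and rng.random() < k_on:
              lattice[i] = 1                # reactivate

      print(f"steady-state motor density ~ {(lattice > 0).mean():.2f}")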

  16. Human Cognition and a Pile of Sand: A Discussion on Serial Correlations and Self-Organized Criticality

    PubMed Central

    Wagenmakers, Eric-Jan; Farrell, Simon; Ratcliff, Roger

    2005-01-01

    Recently, G. C. Van Orden, J. G. Holden, and M. T. Turvey (2003) proposed to abandon the conventional framework of cognitive psychology in favor of the framework of nonlinear dynamical systems theory. Van Orden et al. presented evidence that “purposive behavior originates in self-organized criticality” (p. 333). Here, the authors show that Van Orden et al.’s analyses do not test their hypotheses. Further, the authors argue that a confirmation of Van Orden et al.’s hypotheses would not have constituted firm evidence in support of their framework. Finally, the absence of a specific model for how self-organized criticality produces the observed behavior makes it very difficult to derive testable predictions. The authors conclude that the proposed paradigm shift is presently unwarranted. PMID:15702966

  17. A simple testable model of baryon number violation: Baryogenesis, dark matter, neutron-antineutron oscillation and collider signals

    NASA Astrophysics Data System (ADS)

    Allahverdi, Rouzbeh; Dev, P. S. Bhupal; Dutta, Bhaskar

    2018-04-01

    We study a simple TeV-scale model of baryon number violation which explains the observed proximity of the dark matter and baryon abundances. The model has constraints arising from both low- and high-energy processes, and in particular, predicts a sizable rate for neutron-antineutron (n-n̄) oscillation at low energy and a monojet signal at the LHC. We find an interesting complementarity among the constraints arising from the observed baryon asymmetry, the ratio of dark matter and baryon abundances, the n-n̄ oscillation lifetime and the LHC monojet signal. There are regions in the parameter space where the n-n̄ oscillation lifetime is found to be more constraining than the LHC constraints, which illustrates the importance of next-generation n-n̄ oscillation experiments.

  18. Brain Evolution and Human Neuropsychology: The Inferential Brain Hypothesis

    PubMed Central

    Koscik, Timothy R.; Tranel, Daniel

    2013-01-01

    Collaboration between human neuropsychology and comparative neuroscience has generated invaluable contributions to our understanding of human brain evolution and function. Further cross-talk between these disciplines has the potential to continue to revolutionize these fields. Modern neuroimaging methods could be applied in a comparative context, yielding exciting new data with the potential of providing insight into brain evolution. Conversely, incorporating an evolutionary base into the theoretical perspectives from which we approach human neuropsychology could lead to novel hypotheses and testable predictions. In the spirit of these objectives, we present here a new theoretical proposal, the Inferential Brain Hypothesis, whereby the human brain is thought to be characterized by a shift from perceptual processing to inferential computation, particularly within the social realm. This shift is believed to be a driving force for the evolution of the large human cortex. PMID:22459075

  19. How hierarchical is language use?

    PubMed Central

    Frank, Stefan L.; Bod, Rens; Christiansen, Morten H.

    2012-01-01

    It is generally assumed that hierarchical phrase structure plays a central role in human language. However, considerations of simplicity and evolutionary continuity suggest that hierarchical structure should not be invoked too hastily. Indeed, recent neurophysiological, behavioural and computational studies show that sequential sentence structure has considerable explanatory power and that hierarchical processing is often not involved. In this paper, we review evidence from the recent literature supporting the hypothesis that sequential structure may be fundamental to the comprehension, production and acquisition of human language. Moreover, we provide a preliminary sketch outlining a non-hierarchical model of language use and discuss its implications and testable predictions. If linguistic phenomena can be explained by sequential rather than hierarchical structure, this will have considerable impact in a wide range of fields, such as linguistics, ethology, cognitive neuroscience, psychology and computer science. PMID:22977157

  20. Prioritizing Information during Working Memory: Beyond Sustained Internal Attention.

    PubMed

    Myers, Nicholas E; Stokes, Mark G; Nobre, Anna C

    2017-06-01

    Working memory (WM) has limited capacity. This leaves attention with the important role of allowing into storage only the most relevant information. It is increasingly evident that attention is equally crucial for prioritizing representations within WM as the importance of individual items changes. Retrospective prioritization has been proposed to result from a focus of internal attention highlighting one of several representations. Here, we suggest an updated model, in which prioritization acts in multiple steps: first orienting towards and selecting a memory, and then reconfiguring its representational state in the service of upcoming task demands. Reconfiguration sets up an optimized perception-action mapping, obviating the need for sustained attention. This view is consistent with recent literature, makes testable predictions, and links WM with task switching and action preparation.

  1. Neuromechanics of crawling in D. melanogaster larvae

    NASA Astrophysics Data System (ADS)

    Pehlevan, Cengiz; Paoletti, Paolo; Mahadevan, L.

    2015-03-01

    Nervous system, body and environment interact in non-trivial ways to generate locomotion and thence behavior in an organism. Here we present a minimal integrative mathematical model to describe the simple behavior of forward crawling in Drosophila larvae. Our model couples the excitation-inhibition circuits in the nervous system to force production in the muscles and body movement in a frictional environment, which in turn leads to a proprioceptive signal that feeds back to the nervous system. Our results explain the basic observed phenomenology of crawling with or without proprioception, and elucidate the stabilizing role of proprioception in crawling with respect to external and internal perturbations. Our integrated approach allows us to make testable predictions on the effect of changing body-environment interactions on crawling, and serves as a substrate for the development of hierarchical models linking cellular processes to behavior.

  2. Towards Understanding The Origin And Evolution Of Ultra-Diffuse Galaxies

    NASA Astrophysics Data System (ADS)

    van der Burg, Remco F. J.; Sifón, Cristóbal; Muzzin, Adam; Hoekstra, Henk; KiDS Collaboration; GAMA Collaboration

    2017-06-01

    Recent observations have shown that Ultra-Diffuse Galaxies (UDGs, which have the luminosities of dwarfs but sizes of giant galaxies) are surprisingly abundant in clusters of galaxies. The origin of these galaxies remains unclear, since one would naively expect them to be easily disrupted by tidal interactions in the cluster environment. Several formation scenarios have been proposed for UDGs, but these make a wide range of different testable observational predictions. I'll summarise recent results on two key observables that have the potential to differentiate between the proposed models, namely 1) a measurement of their (sub)halo masses using weak gravitational lensing, and 2) their abundance in lower-mass haloes using data from the GAMA and KiDS surveys. I'll discuss implications and future prospects to learn more about the properties and formation histories of these elusive galaxies.

  3. The Transition to Minimal Consciousness through the Evolution of Associative Learning

    PubMed Central

    Bronfman, Zohar Z.; Ginsburg, Simona; Jablonka, Eva

    2016-01-01

    The minimal state of consciousness is sentience. This includes any phenomenal sensory experience – exteroceptive, such as vision and olfaction; interoceptive, such as pain and hunger; or proprioceptive, such as the sense of bodily position and movement. We propose unlimited associative learning (UAL) as the marker of the evolutionary transition to minimal consciousness (or sentience), its phylogenetically earliest sustainable manifestation and the driver of its evolution. We define and describe UAL at the behavioral and functional level and argue that the structural-anatomical implementations of this mode of learning in different taxa entail subjective feelings (sentience). We end with a discussion of the implications of our proposal for the distribution of consciousness in the animal kingdom, suggesting testable predictions, and revisiting the ongoing debate about the function of minimal consciousness in light of our approach. PMID:28066282

  4. Symbiotic immuno-suppression: is disease susceptibility the price of bleaching resistance?

    PubMed Central

    Merselis, Daniel G.; Lirman, Diego

    2018-01-01

    Accelerating anthropogenic climate change threatens to destroy coral reefs worldwide through the processes of bleaching and disease. These major contributors to coral mortality are both closely linked with thermal stress intensified by anthropogenic climate change. Disease outbreaks typically follow bleaching events, but a direct positive linkage between bleaching and disease has been debated. By tracking 152 individual coral ramets through the 2014 mass bleaching in a South Florida coral restoration nursery, we revealed a highly significant negative correlation between bleaching and disease in the Caribbean staghorn coral, Acropora cervicornis. To explain these results, we propose a mechanism for transient immunological protection through coral bleaching: removal of Symbiodinium during bleaching may also temporarily eliminate suppressive symbiont modulation of host immunological function. We contextualize this hypothesis within an ecological perspective in order to generate testable predictions for future investigation. PMID:29682405

  5. Modeling the fish community population dynamics and forecasting the eradication success of an exotic fish from an alpine stream

    USGS Publications Warehouse

    Laplanche, Christophe; Elger, Arnaud; Santoul, Frédéric; Thiede, Gary P.; Budy, Phaedra

    2018-01-01

    Management actions aimed at eradicating exotic fish species from riverine ecosystems can be better informed by the forecasting abilities of mechanistic models. We illustrate this point with an example from the Logan River, Utah, originally populated with endemic cutthroat trout (Oncorhynchus clarkii utah), which compete with exotic brown trout (Salmo trutta). The coexistence equilibrium was disrupted by a large-scale, experimental removal of the exotic species in 2009–2011 (on average, 8.2% of the stock each year), followed by an increase in the density of the native species. We built a spatially explicit, reaction-diffusion model encompassing four key processes: population growth in heterogeneous habitat, competition, dispersal, and a management action. We calibrated the model with detailed long-term monitoring data (2001–2016) collected along the 35.4-km-long main channel of the river. Our model, although simple, did a remarkable job reproducing the system steady state prior to the management action. Insights gained from the model's independent predictions are consistent with available knowledge and indicate that the exotic species is more competitive; however, the native species still occupies more favorable habitat upstream. Dynamic runs of the model also recreated the observed increase of the native species following the management action. The model can simulate two distinct possible long-term outcomes: recovery or eradication of the exotic species. The processing of available knowledge using Bayesian methods allowed us to conclude that the chance of eradication of the invader was low at the beginning of the experimental removal (0.7% in 2009) and increased (20.5% in 2016) when using more recent monitoring data. We show that accessible mathematical and numerical tools can provide highly informative insights for managers (e.g., the outcome of their conservation actions), identify knowledge gaps, and provide testable theory for researchers.
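
    A heavily simplified sketch of this model class, two competing species with dispersal and a pulse of exotic removals, is given below; the rates and removal schedule are illustrative stand-ins, not the calibrated Logan River values.

      import numpy as np

      # 1-D reaction-diffusion competition between native (N) and exotic (E)
      # trout, with three years of exotic removals; no-flux boundaries.
      nx, dx, dt = 100, 0.354, 0.01       # grid over ~35.4 km of river; years
      D = 0.05                            # dispersal coefficient (km^2/yr)
      r_n, r_e, a_ne, a_en = 0.4, 0.5, 0.6, 0.9
      removal = 0.082                     # fraction of exotic stock removed/yr

      N = np.full(nx, 0.3)                # densities relative to capacity
      E = np.full(nx, 0.5)

      def lap(u):
          out = np.zeros_like(u)
          out[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
          out[0], out[-1] = u[1] - u[0], u[-2] - u[-1]
          return out / dx**2

      for step in range(int(20 / dt)):    # 20 years
          N += dt * (r_n * N * (1 - N - a_ne * E) + D * lap(N))
          E += dt * (r_e * E * (1 - E - a_en * N) + D * lap(E))
          year = step * dt
          if step % int(1 / dt) == 0 and 8 <= year < 11:   # removal years
              E *= 1 - removal
      print(f"native {N.mean():.2f}, exotic {E.mean():.2f} after 20 yr")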

  6. The role of prediction in social neuroscience

    PubMed Central

    Brown, Elliot C.; Brüne, Martin

    2012-01-01

    Research has shown that the brain is constantly making predictions about future events. Theories of prediction in perception, action and learning suggest that the brain serves to reduce the discrepancies between expectation and actual experience, i.e., by reducing the prediction error. Forward models of action and perception propose the generation of a predictive internal representation of the expected sensory outcome, which is matched to the actual sensory feedback. Shared neural representations have been found when experiencing one's own and observing other's actions, rewards, errors, and emotions such as fear and pain. These general principles of the “predictive brain” are well established and have already begun to be applied to social aspects of cognition. The application and relevance of these predictive principles to social cognition are discussed in this article. Evidence is presented to argue that simple non-social cognitive processes can be extended to explain complex cognitive processes required for social interaction, with common neural activity seen for both social and non-social cognitions. A number of studies are included which demonstrate that bottom-up sensory input and top-down expectancies can be modulated by social information. The concept of competing social forward models and a partially distinct category of social prediction errors are introduced. The evolutionary implications of a “social predictive brain” are also mentioned, along with implications for psychopathology. The review presents a number of testable hypotheses and novel comparisons that aim to stimulate further discussion and integration between currently disparate fields of research, with regard to computational models, behavioral and neurophysiological data. This promotes a relatively new platform for inquiry in social neuroscience with implications in social learning, theory of mind, empathy, the evolution of the social brain, and potential strategies for treating social cognitive deficits. PMID:22654749

  7. The evolutionary logic of sepsis.

    PubMed

    Rózsa, Lajos; Apari, Péter; Sulyok, Mihály; Tappe, Dennis; Bodó, Imre; Hardi, Richárd; Müller, Viktor

    2017-11-01

    The recently proposed Microbiome Mutiny Hypothesis posits that members of the human microbiome obtain information about the host individual's health status and, when host survival is compromised, switch to an intensive exploitation strategy to maximize residual transmission. In animals and humans, sepsis is an acute systemic reaction to microbes invading the normally sterile body compartments. When induced by formerly mutualistic or neutral microbes, possibly in response to declining host health, sepsis appears to fit the 'microbiome mutiny' scenario except for its apparent failure to enhance transmission of the causative organisms. We propose that the ability of certain species of the microbiome to induce sepsis is not a fortuitous side effect of within-host replication, but rather it might, in some cases, be the result of their adaptive evolution. Whenever host health declines, inducing sepsis can be adaptive for those members of the healthy human microbiome that are capable of colonizing the future cadaver and spreading by cadaver-borne transmission. We hypothesize that such microbes might exhibit switches along the 'mutualist - lethal pathogen - decomposer - mutualist again' scenario, implicating a previously unsuspected, surprising level of phenotypic plasticity. This hypothesis predicts that those species of the healthy microbiome that are recurring causative agents of sepsis can participate in the decomposition of cadavers, and can be transmitted as soil-borne or water-borne infections. Furthermore, in individual sepsis cases, the same microbial clones that dominate the systemic infection that precipitates sepsis should also be present in high concentration during decomposition following death: this prediction is testable by molecular fingerprinting in experimentally induced animal models. Sepsis is a leading cause of human death worldwide. If further research confirms that some cases of sepsis indeed involve the 'mutiny' (facultative phenotypic switching) of normal members of the microbiome, then new strategies could be devised to prevent or treat sepsis by interfering with this process.

  8. Sting, Carry and Stock: How Corpse Availability Can Regulate De-Centralized Task Allocation in a Ponerine Ant Colony

    PubMed Central

    Schmickl, Thomas; Karsai, Istvan

    2014-01-01

    We develop a model to produce plausible patterns of task partitioning in the ponerine ant Ectatomma ruidum based on the availability of living prey and prey corpses. The model is based on the organizational capabilities of a “common stomach” through which the colony uses the availability of a natural (food) substance as a major communication channel to regulate the income and expenditure of that same substance. This communication channel also has a central role in regulating task partitioning of collective hunting behavior in a supply-and-demand-driven manner. Our model shows that task partitioning of the collective hunting behavior in E. ruidum can be explained by regulation due to a common stomach system. The saturation of the common stomach provides accessible information to individual ants so that they can adjust their hunting behavior accordingly, by engaging in or abandoning stinging or transporting tasks. The common stomach is able to establish and keep stabilized an effective mix of workforce to exploit the prey population and to transport food into the nest. This system is also able to react to external perturbations in a de-centralized, homeostatic way, such as changes in prey density or the accumulation of food in the nest. Under stable conditions, the system develops towards an equilibrium in colony size and prey density. Our model shows that organization of work through a common stomach system allows Ectatomma ruidum to collectively forage for food in a robust, reactive and reliable way. The model is compared to previously published models that followed a different modeling approach. Based on our model analysis we also suggest a series of experiments for which our model gives plausible predictions. These predictions are used to formulate a set of testable hypotheses that should be investigated empirically in future experimentation. PMID:25493558
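
    To make the "common stomach" feedback concrete, here is a minimal ODE sketch in which the saturation of a shared food store recruits hunters when it is low and transporters when it is high; the functional forms and rates are assumptions for illustration, not the published model's equations.

      # S: saturation of the shared crop ("common stomach"), in [0, 1].
      # Hunters (stinging) fill it; transporters drain it into the nest.
      dt, steps = 0.01, 20_000
      S, hunters, transporters = 0.2, 0.1, 0.1
      for _ in range(steps):
          idle = 1.0 - hunters - transporters
          hunters += dt * (0.5 * idle * (1 - S) - 0.1 * hunters)
          transporters += dt * (0.5 * idle * S - 0.1 * transporters)
          S += dt * (0.8 * hunters * (1 - S) - 0.6 * transporters * S)
      print(f"S = {S:.2f}, hunters = {hunters:.2f}, "
            f"transporters = {transporters:.2f}")

    Because recruitment to each task is read directly off S, a perturbation (e.g., removing food from the nest) shifts the workforce mix without any central controller, which is the homeostatic behavior described above.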

  9. A Physiologically Based Model of Orexinergic Stabilization of Sleep and Wake

    PubMed Central

    Fulcher, Ben D.; Phillips, Andrew J. K.; Postnova, Svetlana; Robinson, Peter A.

    2014-01-01

    The orexinergic neurons of the lateral hypothalamus (Orx) are essential for regulating sleep-wake dynamics, and their loss causes narcolepsy, a disorder characterized by severe instability of sleep and wake states. However, the mechanisms through which Orx stabilize sleep and wake are not well understood. In this work, an explanation of the stabilizing effects of Orx is presented using a quantitative model of important physiological connections between Orx and the sleep-wake switch. In addition to Orx and the sleep-wake switch, which is composed of mutually inhibitory wake-active monoaminergic neurons in brainstem and hypothalamus (MA) and the sleep-active ventrolateral preoptic neurons of the hypothalamus (VLPO), the model also includes the circadian and homeostatic sleep drives. It is shown that Orx stabilizes prolonged waking episodes via its excitatory input to MA and by relaying a circadian input to MA, thus sustaining MA firing activity during the circadian day. During sleep, both Orx and MA are inhibited by the VLPO, and the subsequent reduction in Orx input to the MA indirectly stabilizes sustained sleep episodes. Simulating a loss of Orx, the model produces dynamics resembling narcolepsy, including frequent transitions between states, reduced waking arousal levels, and a normal daily amount of total sleep. The model predicts a change in sleep timing with differences in orexin levels, with higher orexin levels delaying the normal sleep episode, suggesting that individual differences in Orx signaling may contribute to chronotype. Dynamics resembling sleep inertia also emerge from the model as a gradual sleep-to-wake transition on a timescale that varies with that of Orx dynamics. The quantitative, physiologically based model developed in this work thus provides a new explanation of how Orx stabilizes prolonged episodes of sleep and wake, and makes a range of experimentally testable predictions, including a role for Orx in chronotype and sleep inertia. PMID:24651580
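
    A minimal sketch of the flip-flop architecture described here, mutually inhibitory MA and VLPO populations with an orexin-like, circadian-gated drive to MA, is shown below; all equations and parameters are illustrative rather than those of the published physiological model.

      import numpy as np

      def sig(x):                          # saturating firing-rate response
          return 1.0 / (1.0 + np.exp(-x))

      dt = 0.001                           # hours
      t = np.arange(0.0, 72.0, dt)         # three days
      MA, VLPO = np.zeros_like(t), np.zeros_like(t)
      circ = 0.5 * (1 + np.sin(2 * np.pi * t / 24.0))   # circadian drive
      tau, orx_gain = 0.5, 1.5

      for i in range(1, t.size):
          orx = orx_gain * circ[i-1] * (1 - VLPO[i-1])  # VLPO inhibits orexin
          MA[i] = MA[i-1] + dt / tau * (-MA[i-1]
                  + sig(4 * (1.2 * circ[i-1] + orx - 2.0 * VLPO[i-1] - 0.5)))
          VLPO[i] = VLPO[i-1] + dt / tau * (-VLPO[i-1]
                  + sig(4 * (1.2 * (1 - circ[i-1]) - 2.0 * MA[i-1] - 0.1)))

      print(f"fraction of time awake ~ {(MA > 0.5).mean():.2f}")

    In this toy, reducing orx_gain weakens the daytime excitatory drive to MA, a crude analogue of the orexin-loss instability discussed above.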

  10. Structural insight of dopamine β-hydroxylase, a drug target for complex traits, and functional significance of exonic single nucleotide polymorphisms.

    PubMed

    Kapoor, Abhijeet; Shandilya, Manish; Kundu, Suman

    2011-01-01

    Human dopamine β-hydroxylase (DBH) is an important therapeutic target for complex traits. Several single nucleotide polymorphisms (SNPs) have also been identified in DBH with potentially adverse physiological effects. However, difficulty in obtaining diffraction-quality crystals and the lack of a suitable template for modeling the protein have meant that neither a crystallographic three-dimensional structure nor a computational model of the enzyme is available to aid rational drug design, prediction of the functional significance of SNPs, or analytical protein engineering. Adequate biochemical information regarding human DBH, structural coordinates for peptidylglycine alpha-hydroxylating monooxygenase, and computational data from a partial model of rat DBH were used, along with logical manual intervention, in a novel way to build an in silico model of human DBH. The model provides structural insight into the active site, metal coordination, subunit interface, substrate recognition and inhibitor binding. It reveals that the DOMON domain potentially promotes tetramerization, while the substrate dopamine and a potential therapeutic inhibitor, nepicastat, are stabilized in the active site through multiple hydrogen bonding. The functional significance of several exonic SNPs could be described from a structural analysis of the model. The model suggests that the SNPs resulting in the Ala318Ser or Leu317Pro mutations may not influence enzyme activity, while Gly482Arg, being in the proximity of the active site, might actually do so. Arg549Cys may cause abnormal oligomerization through non-native disulfide bond formation. Other SNPs, such as those at Glu181, Glu250, Lys239 and Asp290, could potentially inhibit tetramerization, thus affecting function. The first three-dimensional model of the full-length human DBH protein was thus obtained in a novel manner, with a set of experimental data as a guideline for the consistency of in silico prediction. Preliminary physicochemical tests validated the model. The model confirms, rationalizes and provides a structural basis for several biochemical observations, and yields testable hypotheses regarding function. It also provides a reasonable template for drug design.

  11. In vitro screening for population variability in toxicity of pesticide-containing mixtures

    PubMed Central

    Abdo, Nour; Wetmore, Barbara A.; Chappell, Grace A.; Shea, Damian; Wright, Fred A.; Rusyn, Ivan

    2016-01-01

    Population-based human in vitro models offer exceptional opportunities for evaluating the potential hazard and mode of action of chemicals, as well as variability in responses to toxic insults among individuals. This study was designed to test the hypothesis that comparative population genomics with an efficient in vitro experimental design can be used to evaluate the potential hazard, the mode of action, and the extent of population variability in responses to chemical mixtures. We selected 146 lymphoblast cell lines from 4 ancestrally and geographically diverse human populations based on the availability of genome sequence and basal RNA-seq data. Cells were exposed to two pesticide mixtures – an environmental surface water sample comprised primarily of organochlorine pesticides and a laboratory-prepared mixture of 36 currently used pesticides – in concentration-response format and evaluated for cytotoxicity. On average, the two mixtures exhibited a similar range of in vitro cytotoxicity and showed considerable inter-individual variability across the screened cell lines. However, when in vitro-to-in vivo extrapolation (IVIVE) coupled with reverse dosimetry was employed to convert the in vitro cytotoxic concentrations to oral equivalent doses and compare them with the upper bound of predicted human exposure, we found that the nominally more cytotoxic chlorinated pesticide mixture is expected to have a greater margin of safety (more than 5 orders of magnitude) than the current-use pesticide mixture (less than 2 orders of magnitude), due primarily to differences in exposure predictions. Multivariate genome-wide association mapping revealed an association between the toxicity of the current-use pesticide mixture and the polymorphism rs1947825 in C17orf54. We conclude that a combination of in vitro human population-based cytotoxicity screening followed by dosimetric adjustment and comparative population genomics analyses enables quantitative evaluation of human health hazard from complex environmental mixtures. Additionally, such an approach yields testable hypotheses regarding potential toxicity mechanisms. PMID:26386728
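
    The reverse-dosimetry step has a simple arithmetic core: divide the in vitro active concentration by the steady-state plasma concentration produced per unit oral dose, then compare the resulting oral equivalent dose with an exposure estimate. The numbers in this sketch are hypothetical, not values from the study.

      import math

      ec_uM = 30.0          # in vitro cytotoxic concentration (uM), assumed
      css_uM = 2.5          # plasma Css per 1 mg/kg/day oral dose (uM), e.g.
                            #   from a toxicokinetic model (assumed here)
      exposure = 1e-4       # upper-bound predicted exposure (mg/kg/day), assumed

      oed = ec_uM / css_uM          # oral equivalent dose, mg/kg/day
      margin = oed / exposure       # margin of safety
      print(f"OED = {oed:.0f} mg/kg/day, margin ~ 10^{math.log10(margin):.1f}")

    A mixture can thus be more cytotoxic in vitro yet have the larger margin of safety if its predicted exposure is low enough, which is the comparison the abstract reports.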

  12. Strategies for systemic radiotherapy of micrometastases using antibody-targeted 131I.

    PubMed

    Wheldon, T E; O'Donoghue, J A; Hilditch, T E; Barrett, A

    1988-02-01

    A simple analysis is developed to evaluate the likely effectiveness of treatment of micrometastases by antibody-targeted 131I. Account is taken of the low levels of tumour uptake of antibody-conjugated 131I presently achievable and of the "energy wastage" in targeting microscopic tumours with a radionuclide whose disintegration energy is widely dissipated. The analysis shows that only modest doses can be delivered to micrometastases when total body dose is restricted to levels which allow recovery of bone marrow. Much higher doses could be delivered to micrometastases when bone marrow rescue is used. A rationale is presented for targeted systemic radiotherapy used in combination with external beam total body irradiation (TBI) and bone marrow rescue. This has some practical advantages. The effect of the targeted component is to impose a biological non-uniformity on the total body dose distribution with regions of high tumour cell density receiving higher doses. Where targeting results in high doses to particular normal organs (e.g. liver, kidney) the total dose to these organs could be kept within tolerable limits by appropriate shielding of the external beam radiation component of the treatment. Greater levels of tumour cell kill should be achievable by the combination regime without any increase in normal tissue damage over that inflicted by conventional TBI. The predicted superiority of the combination regime is especially marked for tumours just below the threshold for detectability (e.g. approximately 1 mm-1 cm diameter). This approach has the advantage that targeted radiotherapy provides only a proportion of the total body dose, most of which is given by a familiar technique. The proportion of dose given by the targeted component could be increased as experience is gained. The predicted superiority of the combination strategy should be experimentally testable using laboratory animals. Clinical applications should be cautiously approached, with due regard to the limitations of the theoretical analysis.

  13. On testing VLSI chips for the big Viterbi decoder

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.

    1989-01-01

    A general technique that can be used in testing very large scale integrated (VLSI) chips for the Big Viterbi Decoder (BVD) system is described. The test technique is divided into functional testing and fault-coverage testing. The purpose of functional testing is to verify that the design works functionally. Functional test vectors are converted from the outputs of software simulations which simulate the BVD functionally. Fault-coverage testing is used to detect and, in some cases, to locate faulty components resulting from fabrication defects. This type of testing is useful in screening out bad chips. Finally, design for testability, which is included in the BVD VLSI chip design, is described in considerable detail. Both the observability and the controllability of a VLSI chip are greatly enhanced by including design-for-testability features.

  14. Model-Based Testability Assessment and Directed Troubleshooting of Shuttle Wiring Systems

    NASA Technical Reports Server (NTRS)

    Deb, Somnath; Domagala, Chuck; Shrestha, Roshan; Malepati, Venkatesh; Cavanaugh, Kevin; Patterson-Hine, Ann; Sanderfer, Dwight; Cockrell, Jim; Norvig, Peter (Technical Monitor)

    2000-01-01

    We have recently completed a pilot study on the Space Shuttle wiring system commissioned by the Wiring Integrity Research (WIRe) team at NASA Ames Research Center. As the Space Shuttle ages, it is experiencing wiring degradation problems including arcing, chafing, insulation breakdown and broken conductors. A systematic and comprehensive test process is required to thoroughly test and quality assure (QA) the wiring systems. The NASA WIRe team recognized the value of a formal model-based analysis for risk assessment and fault-coverage analysis. However, wiring systems are complex and involve over 50,000 wire segments. Therefore, NASA commissioned this pilot study with Qualtech Systems, Inc. (QSI) to explore means of automatically extracting high-fidelity multi-signal models from the wiring information database for use with QSI's Testability Engineering and Maintenance System (TEAMS) tool.

  15. Higher-order Fourier analysis over finite fields and applications

    NASA Astrophysics Data System (ADS)

    Hatami, Pooya

    Higher-order Fourier analysis is a powerful tool in the study of problems in additive and extremal combinatorics, for instance the study of arithmetic progressions in primes, where traditional Fourier analysis comes up short. In recent years, higher-order Fourier analysis has found multiple applications in computer science in fields such as property testing and coding theory. In this thesis, we develop new tools within this theory with several new applications, such as a characterization theorem in algebraic property testing. One of our main contributions is a strong near-equidistribution result for regular collections of polynomials. The densities of small linear structures in subsets of Abelian groups can be expressed as certain analytic averages involving linear forms. Higher-order Fourier analysis examines such averages by approximating the indicator function of a subset by a function of a bounded number of polynomials. Then, to approximate the average, it suffices to know the joint distribution of the polynomials applied to the linear forms. We prove a near-equidistribution theorem that describes these distributions for the group F_p^n when p is a fixed prime. This fundamental fact was previously known only under various extra assumptions about the linear forms or the field size. We use this near-equidistribution theorem to settle a conjecture of Gowers and Wolf on the true complexity of systems of linear forms. Our next application is towards a characterization of testable algebraic properties. We prove that every locally characterized affine-invariant property of functions f : F_p^n → R with n ∈ N is testable. In fact, we prove that any such property P is proximity-obliviously testable. More generally, we show that any affine-invariant property that is closed under subspace restrictions and has "bounded complexity" is testable. We also prove that any property that can be described as the property of decomposing into a known structure of low-degree polynomials is locally characterized and is, hence, testable. We discuss several notions of regularity which allow us to deduce algorithmic versions of various regularity lemmas for polynomials by Green and Tao and by Kaufman and Lovett. We show that our algorithmic regularity lemmas for polynomials imply algorithmic versions of several results relying on regularity, such as decoding Reed-Muller codes beyond the list-decoding radius (for certain structured errors), and prescribed polynomial decompositions. Finally, motivated by the definition of Gowers norms, we investigate norms defined by different systems of linear forms. We give necessary conditions on the structure of systems of linear forms that define norms. We prove that such norms can be one of only two types, and, assuming that |F_p| is sufficiently large, they are essentially equivalent to either a Gowers norm or an L_p norm.
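
    As a small concrete anchor for the Fourier-analytic machinery, the sketch below computes the Gowers U^2 norm of a function on F_p^n in two ways: directly from its definition and via the classical identity ||f||_{U^2}^4 = Σ_χ |f̂(χ)|^4. The higher-order (U^k, k ≥ 3) norms, which are what the thesis's polynomial tools address, generalize this construction.

      import itertools
      import numpy as np

      # Gowers U^2 norm of f : F_p^n -> R, from the definition and from the
      # Fourier identity ||f||_{U2}^4 = sum_chi |fhat(chi)|^4 (they must agree).
      p, n = 3, 2
      points = list(itertools.product(range(p), repeat=n))
      rng = np.random.default_rng(0)
      f = {x: rng.standard_normal() for x in points}

      def add(x, y):
          return tuple((a + b) % p for a, b in zip(x, y))

      # Definition: average of f(x) f(x+h1) f(x+h2) f(x+h1+h2).
      u4 = np.mean([f[x] * f[add(x, h1)] * f[add(x, h2)] * f[add(x, add(h1, h2))]
                    for x in points for h1 in points for h2 in points])

      # Fourier side: fhat(chi) = E_x f(x) omega^{-<chi, x>}, omega = e^{2 pi i/p}.
      omega = np.exp(2j * np.pi / p)
      fhat = {chi: np.mean([f[x] * omega ** (-sum(c * a for c, a in zip(chi, x)))
                            for x in points])
              for chi in points}
      u4_fourier = sum(abs(v) ** 4 for v in fhat.values())
      print(f"definition {u4:.6f}  vs  Fourier {u4_fourier:.6f}")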

  16. A Review of Recent Advancement in Integrating Omics Data with Literature Mining towards Biomedical Discoveries

    PubMed Central

    Raja, Kalpana; Patrick, Matthew; Gao, Yilin; Madu, Desmond; Yang, Yuyang

    2017-01-01

    In the past decade, the volume of “omics” data generated by different high-throughput technologies has expanded exponentially. Managing, storing, and analyzing this big data has been a great challenge for researchers, especially when moving towards the goal of generating testable data-driven hypotheses, which has been the promise of high-throughput experimental techniques. Different bioinformatics approaches have been developed to streamline the downstream analyses by providing independent information with which to interpret results and draw biological inferences. Text mining (also known as literature mining) is one of the commonly used approaches for the automated generation of biological knowledge from the huge number of published articles. In this review paper, we discuss recent advances in approaches that integrate results from omics data with information generated by text mining approaches to uncover novel biomedical information. PMID:28331849

  17. Implementation of a quantum random number generator based on the optimal clustering of photocounts

    NASA Astrophysics Data System (ADS)

    Balygin, K. A.; Zaitsev, V. I.; Klimov, A. N.; Kulik, S. P.; Molotkov, S. N.

    2017-10-01

    To implement quantum random number generators, it is fundamentally important to have a mathematically provable and experimentally testable measurement process for the system from which the initial random sequence is generated. This ensures that the randomness indeed has a quantum nature. A quantum random number generator has been implemented using the detection of quasi-single-photon radiation by a silicon photomultiplier (SiPM) matrix, which makes it possible to reliably reach Poisson statistics of photocounts. The choice and use of an optimal clustering of photocounts for the initial sequence of photodetection events, together with a method for extracting a random sequence of 0's and 1's that is polynomial in the length of the sequence, have made it possible to reach an output rate of 64 Mbit/s for the provably random sequence.
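
    For illustration only (the paper's polynomial-time extractor is a different construction), the sketch below shows the generic shape of such post-processing: derive raw bits from Poisson photocounts, then remove the bias with a von Neumann pass; the photocount mean is an assumed value.

      import numpy as np

      # Raw bits = parity of Poisson photocounts (biased); a von Neumann pass
      # over non-overlapping pairs removes the bias at the cost of throughput.
      rng = np.random.default_rng(7)
      counts = rng.poisson(lam=0.15, size=100_000)   # quasi-single-photon regime
      raw = counts & 1

      pairs = raw[: raw.size // 2 * 2].reshape(-1, 2)
      keep = pairs[:, 0] != pairs[:, 1]              # drop 00 and 11 pairs
      bits = pairs[keep, 0]                          # 01 -> 0, 10 -> 1

      print(f"raw bias {abs(raw.mean() - 0.5):.4f} -> "
            f"debiased {abs(bits.mean() - 0.5):.4f} ({bits.size} bits kept)")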

  18. Random blebbing motion: A simple model linking cell structural properties to migration characteristics.

    PubMed

    Woolley, Thomas E; Gaffney, Eamonn A; Goriely, Alain

    2017-07-01

    If the plasma membrane of a cell is able to delaminate locally from its actin cortex, a cellular bleb can be produced. Blebs are pressure-driven protrusions, which are noteworthy for their ability to produce cellular motion. Starting from a general continuum mechanics description, we restrict ourselves to considering cell and bleb shapes that maintain approximately spherical forms. From this assumption, we obtain a tractable algebraic system for bleb formation. By including cell-substrate adhesions, we can model blebbing cell motility. Further, by considering mechanically isolated blebbing events, which are randomly distributed over the cell, we can derive equations linking the macroscopic migration characteristics to the microscopic structural parameters of the cell. This multiscale modeling framework is then used to provide parameter estimates, which are in agreement with current experimental data. In summary, the construction of the mathematical model provides testable relationships between the bleb size and cell motility.
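
    A toy version of the random-blebbing picture, with each mechanically isolated bleb displacing the cell by one step in a uniformly random direction, already links a microscopic parameter (bleb size) to a migration observable (mean-squared displacement); the rate and step size below are assumptions.

      import numpy as np

      # Blebs fire as a Poisson process; each moves the cell centre a fixed
      # step d in a random direction, so <r^2> = (f*T) * d^2 in 2-D.
      rng = np.random.default_rng(3)
      f, d, T = 0.2, 1.5, 10_000.0       # bleb rate (1/s), step (um), time (s)

      n = rng.poisson(f * T)
      ang = rng.uniform(0.0, 2 * np.pi, n)
      disp = d * np.array([np.cos(ang).sum(), np.sin(ang).sum()])

      print(f"|displacement| = {np.hypot(*disp):.0f} um, "
            f"rms prediction = {np.sqrt(f * T * d**2):.0f} um")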

  19. Advancing Biological Understanding and Therapeutics Discovery with Small Molecule Probes

    PubMed Central

    Schreiber, Stuart L.; Kotz, Joanne D.; Li, Min; Aubé, Jeffrey; Austin, Christopher P.; Reed, John C.; Rosen, Hugh; White, E. Lucile; Sklar, Larry A.; Lindsley, Craig W.; Alexander, Benjamin R.; Bittker, Joshua A.; Clemons, Paul A.; de Souza, Andrea; Foley, Michael A.; Palmer, Michelle; Shamji, Alykhan F.; Wawer, Mathias J.; McManus, Owen; Wu, Meng; Zou, Beiyan; Yu, Haibo; Golden, Jennifer E.; Schoenen, Frank J.; Simeonov, Anton; Jadhav, Ajit; Jackson, Michael R.; Pinkerton, Anthony B.; Chung, Thomas D.Y.; Griffin, Patrick R.; Cravatt, Benjamin F.; Hodder, Peter S.; Roush, William R.; Roberts, Edward; Chung, Dong-Hoon; Jonsson, Colleen B.; Noah, James W.; Severson, William E.; Ananthan, Subramaniam; Edwards, Bruce; Oprea, Tudor I.; Conn, P. Jeffrey; Hopkins, Corey R.; Wood, Michael R.; Stauffer, Shaun R.; Emmitte, Kyle A.

    2015-01-01

    Small-molecule probes can illuminate biological processes and aid in the assessment of emerging therapeutic targets by perturbing biological systems in a manner distinct from other experimental approaches. Despite the tremendous promise of chemical tools for investigating biology and disease, small-molecule probes were unavailable for most targets and pathways as recently as a decade ago. In 2005, the U.S. National Institutes of Health launched the decade-long Molecular Libraries Program with the intent of innovating in and broadening access to small-molecule science. This Perspective describes how novel small-molecule probes identified through the program are enabling the exploration of biological pathways and therapeutic hypotheses not otherwise testable. These experiences illustrate how small-molecule probes can help bridge the chasm between biological research and the development of medicines, but also highlight the need to innovate the science of therapeutic discovery. PMID:26046436

  20. Hypercharged dark matter and direct detection as a probe of reheating.

    PubMed

    Feldstein, Brian; Ibe, Masahiro; Yanagida, Tsutomu T

    2014-03-14

    The lack of new physics at the LHC so far weakens the argument for TeV scale thermal dark matter. On the other hand, heavier, nonthermal dark matter is generally difficult to test experimentally. Here we consider the interesting and generic case of hypercharged dark matter, which can allow for heavy dark matter masses without spoiling testability. Planned direct detection experiments will be able to see a signal for masses up to an incredible 10^10 GeV, and this can further serve to probe the reheating temperature up to about 10^9 GeV, as determined by the nonthermal dark matter relic abundance. The Z-mediated nature of the dark matter scattering may be determined in principle by comparing scattering rates on different detector nuclei, which in turn can reveal the dark matter mass. We will discuss the extent to which future experiments may be able to make such a determination.

  1. Quantifying quantum coherence with quantum Fisher information.

    PubMed

    Feng, X N; Wei, L F

    2017-11-14

    Quantum coherence is one of the oldest and most important concepts in quantum mechanics, and it is now regarded as a necessary resource for quantum information processing and quantum metrology. However, the question of how to quantify quantum coherence has received attention only recently (see, e.g., Baumgratz et al., Phys. Rev. Lett. 113, 140401 (2014)). In this paper we verify that the well-known quantum Fisher information (QFI) can be utilized to quantify quantum coherence, as it satisfies monotonicity under the typical incoherent operations and convexity under mixing of quantum states. Unlike most purely axiomatic approaches, quantifying quantum coherence by QFI is experimentally testable, as bounds on the QFI are practically measurable. The validity of our proposal is demonstrated explicitly for the typical phase-damping and depolarizing evolutions of a generic single-qubit state, and by comparison with previously proposed quantifying methods.
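
    For reference, the QFI mentioned here has a standard spectral form (a textbook expression, not specific to this paper): for a state \rho = \sum_k \lambda_k |k\rangle\langle k| and a generator H,

        F_Q(\rho, H) = 2 \sum_{k,l:\, \lambda_k + \lambda_l > 0} \frac{(\lambda_k - \lambda_l)^2}{\lambda_k + \lambda_l} \, |\langle k|H|l\rangle|^2 ,

    which reduces to F_Q = 4\,\mathrm{Var}(H) for pure states. A natural choice when quantifying coherence (an assumption on my part, not a detail given in this record) is a generator diagonal in the incoherent reference basis.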

  2. Is health care infected by Baumol's cost disease? Test of a new model.

    PubMed

    Atanda, Akinwande; Menclova, Andrea Kutinova; Reed, W Robert

    2018-05-01

    Rising health care costs are a policy concern across the Organisation for Economic Co-operation and Development, and relatively little consensus exists concerning their causes. One explanation that has received revived attention is Baumol's cost disease (BCD). However, developing a theoretically appropriate test of BCD has been a challenge. In this paper, we construct a 2-sector model firmly based on Baumol's axioms. We then derive several testable propositions; a sketch of the underlying logic follows this record. In particular, the model predicts that (a) the share of total labor employed in the health care sector and (b) the relative price index of the health and non-health care sectors should both be positively related to economy-wide productivity. The model also predicts that (c) the share of labor in the health sector will be negatively related, and (d) the ratio of prices in the health and non-health sectors unrelated, to the demand for non-health services. Using annual data from 28 Organisation for Economic Co-operation and Development countries over the years 1995-2016 and from 14 U.S. industry groups over the years 1947-2015, we find little evidence to support the predictions of BCD once we address spurious correlation due to coincident trending and other econometric issues. Copyright © 2018 John Wiley & Sons, Ltd.
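
    A minimal sketch of the two-sector logic behind predictions (a) and (b), under assumptions of my own choosing (labor-only production and a common competitive wage; the paper's model may differ in detail): with output y_i = a_i L_i and price equal to unit labor cost p_i = w / a_i,

        \frac{p_h}{p_n} = \frac{a_n}{a_h} ,

    so if productivity a_n in the progressive (non-health) sector grows while a_h stagnates, the relative price of health care rises with economy-wide productivity; and if the ratio of real outputs y_h / y_n is held fixed, the health labor share L_h / (L_h + L_n) must rise as well.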

  3. Z-portal dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arcadi, Giorgio (Institute for Theoretical Physics, Georg-August University Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen); Mambrini, Yann

    2015-03-11

    We propose to generalize the extensions of the Standard Model where the Z boson serves as a mediator between the Standard Model sector and the dark sector χ. We show that, as in the Higgs portal case, the combined constraints from the recent direct searches severely restrict the nature of the coupling of the dark matter to the Z boson and set a limit m_χ ≳ 200 GeV (except in a very narrow region around the Z pole). Using complementarity between spin-dependent, spin-independent and FERMI limits, we predict the nature of this coupling, more specifically the axial/vectorial ratio that respects a thermal dark matter coupled through a Z portal while not being excluded by the current observations. We also show that the next generation of experiments of the type LZ or XENON1T will test the Z-portal scenario for dark matter masses up to 2 TeV. The condition of a thermal dark matter naturally predicts the spin-dependent scattering cross section on the neutron to be σ_χn^SD ≃ 10^−40 cm^2, which then becomes a clear prediction of the model and a signature testable in near-future experiments.

  4. Z-portal dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arcadi, Giorgio; Mambrini, Yann; Richard, Francois, E-mail: giorgio.arcadi@th.u-psud.fr, E-mail: yann.mambrini@th.u-psud.fr, E-mail: richard@lal.in2p3.fr

    2015-03-01

    We propose to generalize the extensions of the Standard Model where the Z boson serves as a mediator between the Standard Model sector and the dark sector χ. We show that, as in the Higgs portal case, the combined constraints from the recent direct searches severely restrict the nature of the coupling of the dark matter to the Z boson and set a limit m_χ ≳ 200 GeV (except in a very narrow region around the Z pole). Using complementarity between spin-dependent, spin-independent and FERMI limits, we predict the nature of this coupling, more specifically the axial/vectorial ratio that respects a thermal dark matter coupled through a Z portal while not being excluded by the current observations. We also show that the next generation of experiments of the type LZ or XENON1T will test the Z-portal scenario for dark matter masses up to 2 TeV. The condition of a thermal dark matter naturally predicts the spin-dependent scattering cross section on the neutron to be σ_χn^SD ≅ 10^−40 cm^2, which then becomes a clear prediction of the model and a signature testable in near-future experiments.

  5. Bovid mortality profiles in paleoecological context falsify hypotheses of endurance running-hunting and passive scavenging by early Pleistocene hominins

    NASA Astrophysics Data System (ADS)

    Bunn, Henry T.; Pickering, Travis Rayne

    2010-11-01

    The world's first archaeological traces from 2.6 million years ago (Ma) at Gona, in Ethiopia, include sharp-edged cutting tools and cut-marked animal bones, which indicate consumption of skeletal muscle by early hominin butchers. From that point, evidence of hominin meat-eating becomes increasingly common throughout the Pleistocene archaeological record. Thus, the substantive debate about hominin meat-eating now centers on mode(s) of carcass resource acquisition. Two prominent hypotheses suggest, alternatively, (1) that early Homo hunted ungulate prey by running them to physiological failure and then dispatching them, or (2) that early Homo was relegated to passively scavenging carcass residues abandoned by carnivore predators. Various paleontologically testable predictions can be formulated for both hypotheses. Here we test four predictions concerning age-frequency distributions for bovids that contributed carcass remains to the 1.8 Ma old FLK 22 Zinjanthropus (FLK Zinj, Olduvai Gorge, Tanzania) fauna, which zooarchaeological and taphonomic data indicate was formed predominantly by early Homo. In all but one case, the bovid mortality data from FLK Zinj violate test predictions of the endurance running-hunting and passive scavenging hypotheses. When combined with other taphonomic data, these results falsify both hypotheses and lead to the hypothesis that early Homo operated successfully as an ambush predator.

  6. Testing the low scale seesaw and leptogenesis

    NASA Astrophysics Data System (ADS)

    Drewes, Marco; Garbrecht, Björn; Gueter, Dario; Klarić, Juraj

    2017-08-01

    Heavy neutrinos with masses below the electroweak scale can simultaneously generate the light neutrino masses via the seesaw mechanism and the baryon asymmetry of the universe via leptogenesis. The requirement to explain these phenomena imposes constraints on the mass spectrum of the heavy neutrinos, their flavour mixing pattern and their CP properties. We first combine bounds from different experiments in the past to map the viable parameter regions in which the minimal low scale seesaw model can explain the observed neutrino oscillations, while being consistent with the negative results of past searches for physics beyond the Standard Model. We then study which additional predictions for the properties of the heavy neutrinos can be made based on the requirement to explain the observed baryon asymmetry of the universe. Finally, we comment on the perspectives to find traces of heavy neutrinos in future experimental searches at the LHC, NA62, BELLE II, T2K, SHiP or a future high energy collider, such as ILC, CEPC or FCC-ee. If any heavy neutral leptons are discovered in the future, our results can be used to assess whether these particles are indeed the common origin of the light neutrino masses and the baryon asymmetry of the universe. If the magnitude of their couplings to all Standard Model flavours can be measured individually, and if the Dirac phase in the lepton mixing matrix is determined in neutrino oscillation experiments, then all model parameters can in principle be determined from this data. This makes the low scale seesaw a fully testable model of neutrino masses and baryogenesis.

  7. Testing two principles of the Health Action Process Approach in individuals with type 2 diabetes.

    PubMed

    Lippke, Sonia; Plotnikoff, Ronald C

    2014-01-01

    The Health Action Process Approach (HAPA) proposes principles that can be translated into testable hypotheses. This is one of the first studies to explicitly test the HAPA's first 2 principles, which are that (1) the health behavior change process can be subdivided into motivation and volition, and (2) volition can be grouped into intentional and action stages. The 3 stage groups are labeled preintenders, intenders, and actors. The hypotheses of the HAPA model were investigated in a sample of 1,193 individuals with Type 2 diabetes. Study participants completed a questionnaire assessing the HAPA variables. The hypotheses were evaluated by examining mean differences of test variables and by the use of multigroup structural equation modeling (MSEM). Findings support the HAPA's 2 principles and 3 distinct stages. The 3 HAPA stages were significantly different in several stage-specific variables, and discontinuity patterns were found in terms of nonlinear trends across means. In terms of predicting goals, action planning, and behavior, differences transpired between the 2 motivational stages (preintenders and intenders) and between the 2 volitional stages (intenders and actors). Results indicate implications for supporting behavior change processes, depending on which stage a person is in: All individuals should be helped to increase self-efficacy. Preintenders and intenders require interventions targeting outcome expectancies. Actors benefit from an improvement in action planning to maintain and increase their previous behavior. Overall, the first 2 principles of the HAPA were supported and some evidence for the other principles was found. Future research should experimentally test these conclusions. © 2014 APA, all rights reserved

  8. Computations underlying the visuomotor transformation for smooth pursuit eye movements

    PubMed Central

    Murdison, T. Scott; Leclercq, Guillaume; Lefèvre, Philippe

    2014-01-01

    Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli while giving rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103–2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit. PMID:25475344
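
    As a structural illustration only (a minimal sketch of my own, not the authors' trained network), the mapping described, 2D retinal motion combined with 3D extraretinal signals through gain-field-like hidden units, can be caricatured with one hidden layer; all names and dimensions below are assumptions:

        import numpy as np

        # One-hidden-layer map from 2D retinal motion plus 3D eye/head
        # orientation and velocity signals to a 3D pursuit command. The
        # extraretinal inputs act only through the hidden layer, where
        # they can gain-modulate the visual responses.
        rng = np.random.default_rng(1)
        d_in = 2 + 3 + 3 + 3 + 3   # retinal motion; eye, head orientations; eye, head velocities
        W1 = 0.1 * rng.standard_normal((64, d_in))
        W2 = 0.1 * rng.standard_normal((3, 64))

        def pursuit_command(retinal, eye_ori, head_ori, eye_vel, head_vel):
            x = np.concatenate([retinal, eye_ori, head_ori, eye_vel, head_vel])
            h = np.tanh(W1 @ x)    # hidden units: the locus of gain modulation
            return W2 @ h          # 3D pursuit command

    In the paper the weights are trained so that the command is spatially correct across eye and head configurations; here they are random placeholders.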

  9. Modulation of hippocampal rhythms by subthreshold electric fields and network topology

    PubMed Central

    Berzhanskaya, Julia; Chernyy, Nick; Gluckman, Bruce J.; Schiff, Steven J.; Ascoli, Giorgio A.

    2012-01-01

    Theta (4–12 Hz) and gamma (30–80 Hz) rhythms are considered important for cortical and hippocampal function. Although several neuron types are implicated in rhythmogenesis, the exact cellular mechanisms remain unknown. Subthreshold electric fields provide a flexible, area-specific tool to modulate neural activity and directly test functional hypotheses. Here we present experimental and computational evidence of the interplay among hippocampal synaptic circuitry, neuronal morphology, external electric fields, and network activity. Electrophysiological data are used to constrain and validate an anatomically and biophysically realistic model of area CA1 containing pyramidal cells and two interneuron types: dendritic- and perisomatic-targeting. We report two lines of results: addressing the network structure capable of generating theta-modulated gamma rhythms, and demonstrating electric field effects on those rhythms. First, theta-modulated gamma rhythms require specific inhibitory connectivity. In one configuration, GABAergic axo-dendritic feedback on pyramidal cells is only effective in proximal but not distal layers. An alternative configuration requires two distinct perisomatic interneuron classes, one exclusively receiving excitatory contacts, the other additionally targeted by inhibition. These observations suggest novel roles for particular classes of oriens and basket cells. The second major finding is that subthreshold electric fields robustly alter the balance between different rhythms. Independent of network configuration, positive electric fields decrease, while negative fields increase the theta/gamma ratio. Moreover, electric fields differentially affect average theta frequency depending on specific synaptic connectivity. These results support the testable prediction that subthreshold electric fields can alter hippocampal rhythms, suggesting new approaches to explore their cognitive functions and underlying circuitry. PMID:23053863

  10. The underlying emotion and the dream relating dream imagery to the dreamer's underlying emotion can help elucidate the nature of dreaming.

    PubMed

    Hartmann, Ernest

    2010-01-01

    There is a widespread consensus that emotion is important in dreams, deriving from both biological and psychological studies. However, the emphasis on examining emotions explicitly mentioned in dreams is misplaced. The dream is basically made of imagery. The focus of our group has been on relating the dream imagery to the dreamer's underlying emotion. What is most important is the underlying emotion--the emotion of the dreamer, not the emotion in the dream. This chapter discusses many studies relating the dream--especially the central image of the dream--to the dreamer's underlying emotion. Focusing on the underlying emotion leads to a coherent and testable view of the nature of dreaming. It also helps to clarify some important puzzling features of the literature on dreams, such as why the clinical literature is different in so many ways from the experimental literature, especially the laboratory-based experimental literature. Based on central image intensity and the associated underlying emotion, we can identify a hierarchy of dreams, from the highest-intensity, "big dreams," to the lowest-intensity dreams from laboratory awakenings. Copyright © 2010 Elsevier Inc. All rights reserved.

  11. Measurement uncertainty relations: characterising optimal error bounds for qubits

    NASA Astrophysics Data System (ADS)

    Bullock, T.; Busch, P.

    2018-07-01

    In standard formulations of the uncertainty principle, two fundamental features are typically cast as impossibility statements: two noncommuting observables cannot in general both be sharply defined (for the same state), nor can they be measured jointly. The pioneers of quantum mechanics were acutely aware of and puzzled by this fact, and it motivated Heisenberg to seek a mitigation, which he formulated in his seminal paper of 1927. He provided intuitive arguments to show that the values of, say, the position and momentum of a particle can at least be unsharply defined, and that they can be measured together provided some approximation errors are allowed. Only now, nine decades later, is a working theory of approximate joint measurements taking shape, leading to rigorous and experimentally testable formulations of the associated error tradeoff relations. Here we briefly review this new development, explaining the concepts and steps taken in the construction of optimal joint approximations of pairs of incompatible observables. As a case study, we deduce measurement uncertainty relations for qubit observables using two distinct error measures. We provide an operational interpretation of the error bounds and discuss some of the first experimental tests of such relations.

  12. Possible seasonality in large deep-focus earthquakes

    NASA Astrophysics Data System (ADS)

    Zhan, Zhongwen; Shearer, Peter M.

    2015-09-01

    Large deep-focus earthquakes (magnitude > 7.0, depth > 500 km) have exhibited strong seasonality in their occurrence times since the beginning of global earthquake catalogs. Of 60 such events from 1900 to the present, 42 have occurred in the middle half of each year. The seasonality appears strongest in the northwest Pacific subduction zones and weakest in the Tonga region. Taken at face value, the surplus of northern hemisphere summer events is statistically significant, but due to the ex post facto hypothesis testing, the absence of seasonality in smaller deep earthquakes, and the lack of a known physical triggering mechanism, we cannot rule out that the observed seasonality is just random chance. However, we can make a testable prediction of seasonality in future large deep-focus earthquakes, which, given likely earthquake occurrence rates, should be verified or falsified within a few decades. If confirmed, deep earthquake seasonality would challenge our current understanding of deep earthquakes.
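
    The quoted counts make the nominal significance easy to check: under the null of no seasonality each event independently falls in the middle half of the year with probability 1/2, so the one-sided tail probability of observing 42 or more of 60 is a binomial sum. A self-contained check in Python (my illustration, not the authors' code):

        from math import comb

        # P(X >= 42) for X ~ Binomial(n=60, p=1/2)
        n, k = 60, 42
        p_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
        print(f"P(X >= {k}) = {p_tail:.2e}")   # on the order of 1e-3

    The small nominal p-value is exactly what the abstract then discounts for ex post facto hypothesis selection.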

  13. Strength and Vulnerability Integration (SAVI): A Model of Emotional Well-Being Across Adulthood

    PubMed Central

    Charles, Susan Turk

    2010-01-01

    The following paper presents the theoretical model of Strength and Vulnerability Integration (SAVI) to explain factors that influence emotion regulation and emotional well-being across adulthood. The model posits that trajectories of adult development are marked by age-related enhancement in the use of strategies that serve to avoid or limit exposure to negative stimuli, but age-related vulnerabilities in situations that elicit high levels of sustained emotional arousal. When older adults avoid or reduce exposure to emotional distress, they often respond better than younger adults; when they experience high levels of sustained emotional arousal, however, age-related advantages in emotional well-being are attenuated, and older adults are hypothesized to have greater difficulties returning to homeostasis. SAVI provides a testable model to understand the literature on emotion and aging and to predict trajectories of emotional experience across the adult life span. PMID:21038939

  14. Strength and vulnerability integration: a model of emotional well-being across adulthood.

    PubMed

    Charles, Susan Turk

    2010-11-01

    The following article presents the theoretical model of strength and vulnerability integration (SAVI) to explain factors that influence emotion regulation and emotional well-being across adulthood. The model posits that trajectories of adult development are marked by age-related enhancement in the use of strategies that serve to avoid or limit exposure to negative stimuli but by age-related vulnerabilities in situations that elicit high levels of sustained emotional arousal. When older adults avoid or reduce exposure to emotional distress, they often respond better than younger adults; when they experience high levels of sustained emotional arousal, however, age-related advantages in emotional well-being are attenuated, and older adults are hypothesized to have greater difficulties returning to homeostasis. SAVI provides a testable model to understand the literature on emotion and aging and to predict trajectories of emotional experience across the adult life span.

  15. Modelling the molecular mechanisms of aging

    PubMed Central

    Mc Auley, Mark T.; Guimera, Alvaro Martinez; Hodgson, David; Mcdonald, Neil; Mooney, Kathleen M.; Morgan, Amy E.

    2017-01-01

    The aging process is driven at the cellular level by random molecular damage that slowly accumulates with age. Although cells possess mechanisms to repair or remove damage, they are not 100% efficient and their efficiency declines with age. There are many molecular mechanisms involved and exogenous factors such as stress also contribute to the aging process. The complexity of the aging process has stimulated the use of computational modelling in order to increase our understanding of the system, test hypotheses and make testable predictions. As many different mechanisms are involved, a wide range of models have been developed. This paper gives an overview of the types of models that have been developed, the range of tools used, modelling standards and discusses many specific examples of models that have been grouped according to the main mechanisms that they address. We conclude by discussing the opportunities and challenges for future modelling in this field. PMID:28096317

  16. John S. Bell's concept of local causality

    NASA Astrophysics Data System (ADS)

    Norsen, Travis

    2011-12-01

    John Stewart Bell's famous theorem is widely regarded as one of the most important developments in the foundations of physics. Yet even as we approach the 50th anniversary of Bell's discovery, its meaning and implications remain controversial. Many workers assert that Bell's theorem refutes the possibility suggested by Einstein, Podolsky, and Rosen (EPR) of supplementing ordinary quantum theory with "hidden" variables that might restore determinism and/or some notion of an observer-independent reality. But Bell himself interpreted the theorem very differently--as establishing an "essential conflict" between the well-tested empirical predictions of quantum theory and relativistic local causality. Our goal is to make Bell's own views more widely known and to explain Bell's little-known formulation of the concept of relativistic local causality on which his theorem rests. We also show precisely how Bell's formulation of local causality can be used to derive an empirically testable Bell-type inequality and to recapitulate the EPR argument.

  17. John S. Bell's concept of local causality

    NASA Astrophysics Data System (ADS)

    Norsen, Travis

    2011-12-01

    John Stewart Bell's famous theorem is widely regarded as one of the most important developments in the foundations of physics. Yet even as we approach the 50th anniversary of Bell's discovery, its meaning and implications remain controversial. Many workers assert that Bell's theorem refutes the possibility suggested by Einstein, Podolsky, and Rosen (EPR) of supplementing ordinary quantum theory with "hidden" variables that might restore determinism and/or some notion of an observer-independent reality. But Bell himself interpreted the theorem very differently—as establishing an "essential conflict" between the well-tested empirical predictions of quantum theory and relativistic local causality. Our goal is to make Bell's own views more widely known and to explain Bell's little-known formulation of the concept of relativistic local causality on which his theorem rests. We also show precisely how Bell's formulation of local causality can be used to derive an empirically testable Bell-type inequality and to recapitulate the EPR argument.

  18. An application of statistics to comparative metagenomics

    PubMed Central

    Rodriguez-Brito, Beltran; Rohwer, Forest; Edwards, Robert A

    2006-01-01

    Background: Metagenomics, sequence analyses of genomic DNA isolated directly from the environments, can be used to identify organisms and model community dynamics of a particular ecosystem. Metagenomics also has the potential to identify significantly different metabolic potential in different environments. Results: Here we use a statistical method to compare curated subsystems, to predict the physiology, metabolism, and ecology from metagenomes. This approach can be used to identify those subsystems that are significantly different between metagenome sequences. Subsystems that were overrepresented in the Sargasso Sea and Acid Mine Drainage metagenome when compared to non-redundant databases were identified. Conclusion: The methodology described herein applies statistics to the comparisons of metabolic potential in metagenomes. This analysis reveals those subsystems that are more, or less, represented in the different environments that are compared. These differences in metabolic potential lead to several testable hypotheses about physiology and metabolism of microbes from these ecosystems. PMID:16549025

  19. An application of statistics to comparative metagenomics.

    PubMed

    Rodriguez-Brito, Beltran; Rohwer, Forest; Edwards, Robert A

    2006-03-20

    Metagenomics, sequence analyses of genomic DNA isolated directly from the environments, can be used to identify organisms and model community dynamics of a particular ecosystem. Metagenomics also has the potential to identify significantly different metabolic potential in different environments. Here we use a statistical method to compare curated subsystems, to predict the physiology, metabolism, and ecology from metagenomes. This approach can be used to identify those subsystems that are significantly different between metagenome sequences. Subsystems that were overrepresented in the Sargasso Sea and Acid Mine Drainage metagenome when compared to non-redundant databases were identified. The methodology described herein applies statistics to the comparisons of metabolic potential in metagenomes. This analysis reveals those subsystems that are more, or less, represented in the different environments that are compared. These differences in metabolic potential lead to several testable hypotheses about physiology and metabolism of microbes from these ecosystems.
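
    The comparison in both versions of this record reduces to asking whether a subsystem's share of assigned sequences differs between two metagenomes. The paper's exact statistic is not reproduced here; a simple two-proportion z-test conveys the idea (all counts below are hypothetical):

        from math import sqrt, erfc

        def two_proportion_z(k1, n1, k2, n2):
            """z statistic and two-sided p-value for counts k1/n1 vs k2/n2."""
            p1, p2 = k1 / n1, k2 / n2
            p = (k1 + k2) / (n1 + n2)               # pooled proportion
            z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
            return z, erfc(abs(z) / sqrt(2))        # normal-approximation p

        # e.g., a subsystem hit 120 times among 10,000 assigned sequences in
        # one metagenome versus 60 times among 12,000 in the other:
        z, pval = two_proportion_z(120, 10_000, 60, 12_000)
        print(f"z = {z:.2f}, p = {pval:.1e}")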

  20. pp → A → Zh and the wrong-sign limit of the two-Higgs-doublet model

    NASA Astrophysics Data System (ADS)

    Ferreira, Pedro M.; Liebler, Stefan; Wittbrodt, Jonas

    2018-03-01

    We point out the importance of the decay channels A → Zh and H → VV in the wrong-sign limit of the two-Higgs-doublet model (2HDM) of type II. They can be the dominant decay modes at moderate values of tan β, even if the (pseudo)scalar mass is above the threshold where the decay into a pair of top quarks is kinematically open. Accordingly, large cross sections pp → A → Zh and pp → H → VV are obtained and currently probed by the LHC experiments, yielding conclusive statements about the remaining parameter space of the wrong-sign limit. In addition, mild excesses, as recently found in the ATLAS analysis bb̄ → A → Zh, could be explained. The wrong-sign limit makes other important testable predictions for the light Higgs boson couplings.

  1. COAGULATION CALCULATIONS OF ICY PLANET FORMATION AT 15-150 AU: A CORRELATION BETWEEN THE MAXIMUM RADIUS AND THE SLOPE OF THE SIZE DISTRIBUTION FOR TRANS-NEPTUNIAN OBJECTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kenyon, Scott J.; Bromley, Benjamin C., E-mail: skenyon@cfa.harvard.edu, E-mail: bromley@physics.utah.edu

    2012-03-15

    We investigate whether coagulation models of planet formation can explain the observed size distributions of trans-Neptunian objects (TNOs). Analyzing published and new calculations, we demonstrate robust relations between the size of the largest object and the slope of the size distribution for sizes 0.1 km and larger. These relations yield clear, testable predictions for TNOs and other icy objects throughout the solar system. Applying our results to existing observations, we show that a broad range of initial disk masses, planetesimal sizes, and fragmentation parameters can explain the data. Adding dynamical constraints on the initial semimajor axis of 'hot' Kuiper Belt objects along with probable TNO formation times of 10-700 Myr restricts the viable models to those with a massive disk composed of relatively small (1-10 km) planetesimals.

  2. The evolution of mimicry under constraints.

    PubMed

    Holen, Øistein Haugsten; Johnstone, Rufus A

    2004-11-01

    The resemblance between mimetic organisms and their models varies from near perfect to very crude. One possible explanation, which has received surprisingly little attention, is that evolution can improve mimicry only at some cost to the mimetic organism. In this article, an evolutionary game theory model of mimicry is presented that incorporates such constraints. The model generates novel and testable predictions. First, Batesian mimics that are very common and/or mimic very weakly defended models should evolve either inaccurate mimicry (by stabilizing selection) or mimetic polymorphism. Second, Batesian mimics that are very common and/or mimic very weakly defended models are more likely to evolve mimetic polymorphism if they encounter predators at high rates and/or are bad at evading predator attacks. The model also examines how cognitive constraints acting on signal receivers may help determine evolutionarily stable levels of mimicry. Surprisingly, improved discrimination abilities among signal receivers may sometimes select for less accurate mimicry.

  3. Beyond ΛCDM: Problems, solutions, and the road ahead

    NASA Astrophysics Data System (ADS)

    Bull, Philip; Akrami, Yashar; Adamek, Julian; Baker, Tessa; Bellini, Emilio; Beltrán Jiménez, Jose; Bentivegna, Eloisa; Camera, Stefano; Clesse, Sébastien; Davis, Jonathan H.; Di Dio, Enea; Enander, Jonas; Heavens, Alan; Heisenberg, Lavinia; Hu, Bin; Llinares, Claudio; Maartens, Roy; Mörtsell, Edvard; Nadathur, Seshadri; Noller, Johannes; Pasechnik, Roman; Pawlowski, Marcel S.; Pereira, Thiago S.; Quartin, Miguel; Ricciardone, Angelo; Riemer-Sørensen, Signe; Rinaldi, Massimiliano; Sakstein, Jeremy; Saltas, Ippocratis D.; Salzano, Vincenzo; Sawicki, Ignacy; Solomon, Adam R.; Spolyar, Douglas; Starkman, Glenn D.; Steer, Danièle; Tereno, Ismael; Verde, Licia; Villaescusa-Navarro, Francisco; von Strauss, Mikael; Winther, Hans A.

    2016-06-01

    Despite its continued observational successes, there is a persistent (and growing) interest in extending cosmology beyond the standard model, ΛCDM. This is motivated by a range of apparently serious theoretical issues, involving such questions as the cosmological constant problem, the particle nature of dark matter, the validity of general relativity on large scales, the existence of anomalies in the CMB and on small scales, and the predictivity and testability of the inflationary paradigm. In this paper, we summarize the current status of ΛCDM as a physical theory, and review investigations into possible alternatives along a number of different lines, with a particular focus on highlighting the most promising directions. While the fundamental problems are proving reluctant to yield, the study of alternative cosmologies has led to considerable progress, with much more to come if hopes about forthcoming high-precision observations and new theoretical ideas are fulfilled.

  4. The evolution of dispersal in a Levins' type metapopulation model.

    PubMed

    Jansen, Vincent A A; Vitalis, Renaud

    2007-10-01

    We study the evolution of the dispersal rate in a metapopulation model with extinction and colonization dynamics, akin to the model originally described by Levins. To do so we extend the metapopulation model with a description of the within-patch dynamics. By means of a separation of time scales we analytically derive a fitness expression from first principles for this model. The fitness function can be written as an inclusive fitness equation (Hamilton's rule). By recasting this equation in a form that emphasizes the effects of competition, we show the effect of local competition and local population size on the evolution of dispersal. We find that the evolution of dispersal cannot be easily interpreted in terms of avoidance of kin competition, but rather that increased dispersal reduces competitive ability. Our model also yields a testable prediction in terms of relatedness and life-history parameters.
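
    The inclusive fitness equation referred to is Hamilton's rule; in its generic form (the paper derives the model-specific version) a dispersal-increasing variant spreads when

        r\,b - c > 0 ,

    with c the direct fitness cost of dispersing (e.g., dispersal mortality), b the benefit conferred on patch mates through reduced local competition, and r the relatedness among patch mates.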

  5. Minimal model linking two great mysteries: Neutrino mass and dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farzan, Yasaman

    2009-10-01

    We present an economic model that establishes a link between neutrino masses and properties of the dark matter candidate. The particle content of the model can be divided into two groups: light particles with masses lighter than the electroweak scale and heavy particles. The light particles, which also include the dark matter candidate, are predicted to show up in the low energy experiments such as K → ℓ + missing energy, making the model testable. The heavy sector can show up at the LHC and may give rise to Br(ℓ_i → ℓ_j γ) close to the present bounds. In principle, the new couplings of the model can independently be derived from the data from the LHC and from the information on neutrino masses and lepton flavor violating rare decays, providing the possibility of an intensive cross-check of the model.

  6. Perception as a closed-loop convergence process.

    PubMed

    Ahissar, Ehud; Assa, Eldad

    2016-05-09

    Perception of external objects involves sensory acquisition via the relevant sensory organs. A widely-accepted assumption is that the sensory organ is the first station in a serial chain of processing circuits leading to an internal circuit in which a percept emerges. This open-loop scheme, in which the interaction between the sensory organ and the environment is not affected by its concurrent downstream neuronal processing, is strongly challenged by behavioral and anatomical data. We present here a hypothesis in which the perception of external objects is a closed-loop dynamical process encompassing loops that integrate the organism and its environment and converging towards organism-environment steady-states. We discuss the consistency of closed-loop perception (CLP) with empirical data and show that it can be synthesized in a robotic setup. Testable predictions are proposed for empirical distinction between open and closed loop schemes of perception.

  7. Perception as a closed-loop convergence process

    PubMed Central

    Ahissar, Ehud; Assa, Eldad

    2016-01-01

    Perception of external objects involves sensory acquisition via the relevant sensory organs. A widely-accepted assumption is that the sensory organ is the first station in a serial chain of processing circuits leading to an internal circuit in which a percept emerges. This open-loop scheme, in which the interaction between the sensory organ and the environment is not affected by its concurrent downstream neuronal processing, is strongly challenged by behavioral and anatomical data. We present here a hypothesis in which the perception of external objects is a closed-loop dynamical process encompassing loops that integrate the organism and its environment and converging towards organism-environment steady-states. We discuss the consistency of closed-loop perception (CLP) with empirical data and show that it can be synthesized in a robotic setup. Testable predictions are proposed for empirical distinction between open and closed loop schemes of perception. DOI: http://dx.doi.org/10.7554/eLife.12830.001 PMID:27159238

  8. Resolving Microzooplankton Functional Groups In A Size-Structured Planktonic Model

    NASA Astrophysics Data System (ADS)

    Taniguchi, D.; Dutkiewicz, S.; Follows, M. J.; Jahn, O.; Menden-Deuer, S.

    2016-02-01

    Microzooplankton are important marine grazers, often consuming a large fraction of primary productivity. They consist of a great diversity of organisms with different behaviors, characteristics, and rates. This functional diversity, and its consequences, are not currently reflected in large-scale ocean ecological simulations. How should these organisms be represented, and what are the implications for their biogeography? We develop a size-structured, trait-based model to characterize a diversity of microzooplankton functional groups. We compile and examine size-based laboratory data on the traits, revealing some patterns with size and functional group that we interpret with mechanistic theory. Fitting the model to the data provides parameterizations of key rates and properties, which we employ in a numerical ocean model. The diversity of grazing preference, rates, and trophic strategies enables the coexistence of different functional groups of micro-grazers under various environmental conditions, and the model produces testable predictions of the biogeography.

  9. Examining the nature of retrocausal effects in biology and psychology

    NASA Astrophysics Data System (ADS)

    Mossbridge, Julia

    2017-05-01

    Multiple laboratories have reported physiological and psychological changes associated with future events that are designed to be unpredictable by normal sensory means. Such phenomena seem to be examples of retrocausality at the macroscopic level. Here I will discuss the characteristics of seemingly retrocausal effects in biology and psychology, specifically examining a biological and a psychological form of precognition, predictive anticipatory activity (PAA) and implicit precognition. The aim of this examination is to offer an analysis of the constraints posed by the characteristics of macroscopic retrocausal effects. Such constraints are critical to assessing any physical theory that purports to explain these effects. Following a brief introduction to recent research on PAA and implicit precognition, I will describe what I believe we have learned so far about the nature of these effects, and conclude with a testable, yet embryonic, model of macroscopic retrocausal phenomena.

  10. New streams and springs after the 2014 Mw6.0 South Napa earthquake.

    PubMed

    Wang, Chi-Yuen; Manga, Michael

    2015-07-09

    Many streams and springs, which were dry or nearly dry before the 2014 Mw6.0 South Napa earthquake, started to flow after the earthquake. A United States Geological Survey stream gauge also registered a coseismic increase in discharge. Public interest was heightened by a state of extreme drought in California. Since the new flows were not contaminated by pre-existing surface water, their composition allowed unambiguous identification of their origin. Following the earthquake we repeatedly surveyed the new flows, collecting data to test hypotheses about their origin. We show that the new flows originated from groundwater in nearby mountains released by the earthquake. The estimated total amount of new water is ~10^6 m^3, about 1/40 of the annual water use in the Napa-Sonoma area. Our model also makes a testable prediction of a post-seismic decrease of seismic velocity in the shallow crust of the affected region.

  11. Singlet-triplet fermionic dark matter and LHC phenomenology

    NASA Astrophysics Data System (ADS)

    Choubey, Sandhya; Khan, Sarif; Mitra, Manimala; Mondal, Subhadeep

    2018-04-01

    It is well known that for the pure standard model triplet fermionic WIMP-type dark matter (DM), the relic density is satisfied around 2 TeV. For such a heavy particle, the production cross-section at the 13 TeV run of the LHC will be very small. Extending the model further with a singlet fermion and a triplet scalar, the DM relic density can be satisfied for much lower masses. The lower mass DM can be copiously produced at the LHC and hence the model can be tested at colliders. For the present model we have studied the multi-jet (≥ 2j) + missing energy (E_T^miss) signal and show that this can be detected in the near future at the 13 TeV run of the LHC. We also predict that the present model is testable by earth-based DM direct detection experiments like Xenon-1T and, in the future, Darwin.

  12. Conceptual frameworks and methods for advancing invasion ecology.

    PubMed

    Heger, Tina; Pahl, Anna T; Botta-Dukát, Zoltan; Gherardi, Francesca; Hoppe, Christina; Hoste, Ivan; Jax, Kurt; Lindström, Leena; Boets, Pieter; Haider, Sylvia; Kollmann, Johannes; Wittmann, Meike J; Jeschke, Jonathan M

    2013-09-01

    Invasion ecology has much advanced since its early beginnings. Nevertheless, explanation, prediction, and management of biological invasions remain difficult. We argue that progress in invasion research can be accelerated by, first, pointing out difficulties this field is currently facing and, second, looking for measures to overcome them. We see basic and applied research in invasion ecology confronted with difficulties arising from (A) societal issues, e.g., disparate perceptions of invasive species; (B) the peculiarity of the invasion process, e.g., its complexity and context dependency; and (C) the scientific methodology, e.g., imprecise hypotheses. To overcome these difficulties, we propose three key measures: (1) a checklist for definitions to encourage explicit definitions; (2) implementation of a hierarchy of hypotheses (HoH), where general hypotheses branch into specific and precisely testable hypotheses; and (3) platforms for improved communication. These measures may significantly increase conceptual clarity and enhance communication, thus advancing invasion ecology.

  13. Mercury's magnetic field - A thermoelectric dynamo?

    NASA Technical Reports Server (NTRS)

    Stevenson, D. J.

    1987-01-01

    Permanent magnetism and conventional dynamo theory are possible but problematic explanations for the magnitude of the Mercurian magnetic field. A new model is proposed in which thermoelectric currents driven by temperature differences at a bumpy core-mantle boundary are responsible for the (unobserved) toroidal field, and the helicity of convective motions in a thin outer core (thickness of about 100 km) induces the observed poloidal field from the toroidal field. The observed field of about 3 × 10^−7 T can be reproduced provided the electrical conductivity of Mercury's semiconducting mantle approaches 10^3 Ω^−1 m^−1. This model may be testable by future missions to Mercury because it predicts a more complicated field geometry than conventional dynamo theories. However, it is argued that polar wander may cause the core-mantle topography to migrate so that some aspects of the rotational symmetry may be reflected in the observed field.

  14. A SEU-Hard Flip-Flop for Antifuse FPGAs

    NASA Technical Reports Server (NTRS)

    Katz, R.; Wang, J. J.; McCollum, J.; Cronquist, B.; Chan, R.; Yu, D.; Kleyner, I.; Day, John H. (Technical Monitor)

    2001-01-01

    A single event upset (SEU)-hardened flip-flop has been designed and developed for antifuse Field Programmable Gate Array (FPGA) application. Design and application issues, testability, test methods, simulation, and results are discussed.

  15. The changing features of the body-mind problem.

    PubMed

    Agassi, Joseph

    2007-01-01

    The body-mind problem invites scientific study, since mental events are repeated and repeatable and invite testable explanations. They seemed troublesome because of the classical theory of substance that failed to solve its own central problems. These are soluble with the aid of the theory of the laws of nature, particularly in its emergentist version [Bunge, M., 1980. The Body-mind Problem, Pergamon, Oxford] that invites refutable explanations [Popper, K.R., 1959. The Logic of Scientific Discovery, Hutchinson, London]. The view of mental properties as emergent is a modification of the two chief classical views, materialism and dualism. As this view invites testable explanations of events of the inner world, it is better than the quasi-behaviorist view of self-awareness as computer-style self-monitoring [Minsky, M., Laske, O., 1992. A conversation with Marvin Minsky. AI Magazine 13 (3), 31-45].

  16. Design for testability and diagnosis at the system-level

    NASA Technical Reports Server (NTRS)

    Simpson, William R.; Sheppard, John W.

    1993-01-01

    The growing complexity of full-scale systems has surpassed the capabilities of most simulation software to provide detailed models or gate-level failure analyses. The process of system-level diagnosis approaches the fault-isolation problem in a manner that differs significantly from the traditional and exhaustive failure mode search. System-level diagnosis is based on a functional representation of the system. For example, one can exercise one portion of a radar algorithm (the Fast Fourier Transform (FFT) function) by injecting several standard input patterns and comparing the results to standardized output results. An anomalous output would point to one of several items (including the FFT circuit) without specifying the gate or failure mode. For system-level repair, identifying an anomalous chip is sufficient. We describe here an information theoretic and dependency modeling approach that discards much of the detailed physical knowledge about the system and analyzes its information flow and functional interrelationships. The approach relies on group and flow associations and, as such, is hierarchical. Its hierarchical nature allows the approach to be applicable to any level of complexity and to any repair level. This approach has been incorporated in a product called STAMP (System Testability and Maintenance Program) which was developed and refined through more than 10 years of field-level applications to complex system diagnosis. The results have been outstanding, even spectacular in some cases. In this paper we describe system-level testability, system-level diagnoses, and the STAMP analysis approach, as well as a few STAMP applications.
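
    The flavor of dependency-model diagnosis can be conveyed with a toy single-fault isolator (a sketch of mine, not STAMP itself): each test maps to the components whose function it exercises, failing tests implicate their components, and passing tests exonerate theirs:

        # D[t][c] = 1 if test t exercises component c (the dependency model).
        def isolate(D, outcomes):
            """outcomes[t] is True (pass) / False (fail); returns suspect set."""
            n_comp = len(D[0])
            suspects = set(range(n_comp))
            for t, ok in enumerate(outcomes):
                touched = {c for c in range(n_comp) if D[t][c]}
                if ok:
                    suspects -= touched     # a passing test clears its components
                else:
                    suspects &= touched     # a failing test narrows the suspects
            return suspects

        D = [[1, 1, 0],    # test 0 exercises components 0 and 1
             [0, 1, 1],    # test 1 exercises components 1 and 2
             [1, 0, 1]]    # test 2 exercises components 0 and 2
        print(isolate(D, [True, False, False]))   # -> {2}

    An information-theoretic sequencer in the STAMP spirit would additionally choose each next test to roughly halve the remaining suspect set.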

  17. A mathematical model of physiological processes and its application to the study of aging

    NASA Technical Reports Server (NTRS)

    Hibbs, A. R.; Walford, R. L.

    1989-01-01

    The behavior of a physiological system which, after displacement, returns by homeostatic mechanisms to its original condition can be described by a simple differential equation in which the "recovery time" is a parameter. Two such systems, which influence one another, can be linked mathematically by the use of "coupling" or "feedback" coefficients. These concepts are the basis for many mathematical models of physiological behavior, and we describe the general nature of such models. Next, we introduce the concept of a "fatal limit" for the displacement of a physiological system, and show how measures of such limits can be included in mathematical models. We show how the numerical values of such limits depend on the values of other system parameters, i.e., recovery times and coupling coefficients, and suggest ways of measuring all these parameters experimentally, for example by monitoring changes induced by X-irradiation. Next, we discuss age-related changes in these parameters, and show how the parameters of mortality statistics, such as the famous Gompertz parameters, can be derived from experimentally measurable changes. Concepts of onset-of-aging, critical or fatal limits, equilibrium value (homeostasis), recovery times and coupling constants are involved. Illustrations are given using published data from mouse and rat populations. We believe that this method of deriving survival patterns from a model that is experimentally testable is unique.
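
    In notation of my own choosing (a minimal sketch of the ingredients the abstract lists, not the authors' exact equations), two coupled homeostatic systems with recovery times \tau_1, \tau_2 and coupling coefficients c_{12}, c_{21} obey

        \frac{dx_1}{dt} = -\frac{x_1}{\tau_1} + c_{12}\, x_2 , \qquad \frac{dx_2}{dt} = -\frac{x_2}{\tau_2} + c_{21}\, x_1 ,

    with death when a displacement reaches its fatal limit |x_i(t)| \ge X_i^{\mathrm{fatal}}. Age-related drift in \tau_i, c_{ij}, or X_i^{\mathrm{fatal}} is then what allows a Gompertz-type hazard, \mu(t) = A e^{Gt}, to be derived from experimentally measurable parameters.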

  18. Sources, Sinks, and Model Accuracy

    EPA Science Inventory

    Spatial demographic models are a necessary tool for understanding how to manage landscapes sustainably for animal populations. These models, therefore, must offer precise and testable predictions about animal population dynamics and how animal demographic parameters respond to ...

  19. Global Change And Water Availability And Quality: Challenges Ahead

    NASA Astrophysics Data System (ADS)

    Larsen, M. C.; Ryker, S. J.

    2012-12-01

    The United States is in the midst of a continental-scale, multi-year water-resources experiment, in which society has not defined testable hypotheses or set the duration and scope of the experiment. What are we doing? We are expanding population at two to three times the national growth rate in our most water-scarce states, in the southwest, where water stress is already great and modeling predicts decreased streamflow by the middle of this century. We are expanding irrigated agriculture from the west into the east, particularly to the southeastern states, where increased competition for ground and surface water has urban, agricultural, and environmental interests at odds, and increasingly, in court. We are expanding our consumption of pharmaceutical and personal care products to historic high levels and disposing of them in surface and groundwater, through sewage treatment plants and individual septic systems that were not designed to treat them. These and other examples of our national-scale experiment are likely to continue well into the 21st century. This experiment and related challenges will continue and likely intensify as non-climatic and climatic factors, such as predicted rising temperature and changes in the distribution of precipitation in time and space, continue to develop.

  20. How could (should) we make contact between string/M-theory and our four-dimensional world, and associated LHC predictions?

    NASA Astrophysics Data System (ADS)

    Kane, Gordon

    2015-12-01

    String/M-theory is an exciting framework within which we try to understand our universe and its properties. Compactified string/M-theories address and offer solutions to almost every important question and issue in particle physics and particle cosmology. But earlier goals of finding a top-down “vacuum selection” principle and deriving the 4D theory have not yet been realized. Does that mean we should stop trying, as nearly all string theorists have? Or can we proceed in the historical way to make a few generic, robust assumptions not closely related to observables, and follow where they lead to testable predictions and explanations? Making only very generic assumptions is a significant issue. I discuss how to try to proceed with this approach, particularly in M-theory compactified on a 7D manifold of G2 holonomy. One goal is to understand our universe as a string/M-theory vacuum for its own sake, in the long tradition of trying to understand our world, and what that implies. In addition, understanding our vacuum may be a prelude to understanding its connection to the multiverse.

  1. Computational Approaches to Drug Repurposing and Pharmacology

    PubMed Central

    Hodos, Rachel A; Kidd, Brian A; Khader, Shameer; Readhead, Ben P; Dudley, Joel T

    2016-01-01

    Data in the biological, chemical, and clinical domains are accumulating at ever-increasing rates and have the potential to accelerate and inform drug development in new ways. Challenges and opportunities now lie in developing analytic tools to transform these often complex and heterogeneous data into testable hypotheses and actionable insights. This is the aim of computational pharmacology, which uses in silico techniques to better understand and predict how drugs affect biological systems, which can in turn improve clinical use, avoid unwanted side effects, and guide selection and development of better treatments. One exciting application of computational pharmacology is drug repurposing: finding new uses for existing drugs. Already yielding many promising candidates, this strategy has the potential to improve the efficiency of the drug development process and reach patient populations with previously unmet needs, such as those with rare diseases. While current techniques in computational pharmacology and drug repurposing often focus on just a single data modality, such as gene expression or drug-target interactions, we argue that methods such as matrix factorization that can integrate data within and across diverse data types have the potential to improve predictive performance and provide a fuller picture of a drug's pharmacological action. PMID:27080087
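
    A minimal sketch of the matrix-factorization idea mentioned at the end (my illustration with placeholder data, not a method taken from the paper): factor a partially observed drug-by-target matrix into low-rank factors, then rank the unobserved entries of the reconstruction as repurposing candidates.

        import numpy as np

        rng = np.random.default_rng(0)
        n_drugs, n_targets, rank = 50, 40, 5
        R = rng.random((n_drugs, n_targets))     # placeholder interaction scores
        M = rng.random(R.shape) < 0.3            # mask of observed entries
        U = 0.1 * rng.standard_normal((n_drugs, rank))
        V = 0.1 * rng.standard_normal((n_targets, rank))
        lr, lam = 0.05, 0.01                     # step size, L2 penalty

        for _ in range(500):                     # gradient descent on observed cells
            E = M * (R - U @ V.T)                # error on observed entries only
            U += lr * (E @ V - lam * U)
            V += lr * (E.T @ U - lam * V)

        scores = U @ V.T                         # predictions for all drug-target
                                                 # pairs, including unobserved ones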

  2. Asymmetric patch size distribution leads to disruptive selection on dispersal.

    PubMed

    Massol, François; Duputié, Anne; David, Patrice; Jarne, Philippe

    2011-02-01

    Numerous models have been designed to understand how dispersal ability evolves when organisms live in a fragmented landscape. Most of them predict a single dispersal rate at evolutionary equilibrium, and when diversification of dispersal rates has been predicted, it occurs as a response to perturbation or environmental fluctuation regimes. Yet abundant variation in dispersal ability is observed in natural populations and communities, even in relatively stable environments. We show that this diversification can operate in a simple island model without temporal variability: disruptive selection on dispersal occurs when the environment consists of many small and few large patches, a common feature in natural spatial systems. This heterogeneity in patch size results in a high variability in the number of related patch mates by individual, which, in turn, triggers disruptive selection through a high per capita variance of inclusive fitness. Our study provides a likely, parsimonious and testable explanation for the diversity of dispersal rates encountered in nature. It also suggests that biological conservation policies aiming at preserving ecological communities should strive to keep the distribution of patch size sufficiently asymmetric and variable. © 2010 The Author(s). Evolution © 2010 The Society for the Study of Evolution.

  3. Mirror neurons in the tree of life: mosaic evolution, plasticity and exaptation of sensorimotor matching responses.

    PubMed

    Tramacere, Antonella; Pievani, Telmo; Ferrari, Pier F

    2017-08-01

    Considering the properties of mirror neurons (MNs) in terms of development and phylogeny, we offer a novel, unifying, and testable account of their evolution according to the available data and try to unify apparently discordant research, including the plasticity of MNs during development, their adaptive value and their phylogenetic relationships and continuity. We hypothesize that the MN system reflects a set of interrelated traits, each with an independent natural history due to unique selective pressures, and propose that there are at least three evolutionarily significant trends that gave rise to three subtypes: hand visuomotor, mouth visuomotor, and audio-vocal. Specifically, we put forward a mosaic evolution hypothesis, which posits that different types of MNs may have evolved at different rates within and among species. This evolutionary hypothesis represents an alternative to both adaptationist and associative models. Finally, the review offers a strong heuristic potential in predicting the circumstances under which specific variations and properties of MNs are expected. Such predictive value is critical to test new hypotheses about MN activity and its plastic changes, depending on the species, the neuroanatomical substrates, and the ecological niche. © 2016 Cambridge Philosophical Society.

  4. Covariations in ecological scaling laws fostered by community dynamics.

    PubMed

    Zaoli, Silvia; Giometto, Andrea; Maritan, Amos; Rinaldo, Andrea

    2017-10-03

    Scaling laws in ecology, intended both as functional relationships among ecologically relevant quantities and as the probability distributions that characterize their occurrence, have long attracted the interest of empiricists and theoreticians. Empirical evidence exists of power laws associated with the number of species inhabiting an ecosystem, their abundances, and traits. Although their functional form appears to be ubiquitous, empirical scaling exponents vary with ecosystem type and resource supply rate. The idea that ecological scaling laws are linked has been entertained before, but the full extent of macroecological pattern covariations, the role of the constraints imposed by finite resource supply, and a comprehensive empirical verification are still unexplored. Here, we propose a theoretical scaling framework that predicts the linkages of several macroecological patterns related to species' abundances and body sizes. We show that such a framework is consistent with the stationary-state statistics of a broad class of resource-limited community dynamics models, regardless of parameterization and model assumptions. We verify the predicted theoretical covariations by contrasting them with empirical data and provide testable hypotheses for yet unexplored patterns. We thus place the observed variability of ecological scaling exponents into a coherent statistical framework where patterns in ecology embed constrained fluctuations.

  5. Temporal Structure in Cooperative Interactions: What Does the Timing of Exploitation Tell Us about Its Cost?

    PubMed Central

    Barker, Jessica L.; Bronstein, Judith L.

    2016-01-01

    Exploitation in cooperative interactions both within and between species is widespread. Although it is assumed to be costly to be exploited, mechanisms to control exploitation are surprisingly rare, making the persistence of cooperation a fundamental paradox in evolutionary biology and ecology. Focusing on between-species cooperation (mutualism), we hypothesize that the temporal sequence in which exploitation occurs relative to cooperation affects its net costs and argue that this can help explain when and where control mechanisms are observed in nature. Our principal prediction is that when exploitation occurs late relative to cooperation, there should be little selection to limit its effects (analogous to “tolerated theft” in human cooperative groups). Although we focus on cases in which mutualists and exploiters are different individuals (of the same or different species), our inferences can readily be extended to cases in which individuals exhibit mixed cooperative-exploitative strategies. We demonstrate that temporal structure should be considered alongside spatial structure as an important process affecting the evolution of cooperation. We also provide testable predictions to guide future empirical research on interspecific as well as intraspecific cooperation. PMID:26841169

  6. Timing of birth: Parsimony favors strategic over dysregulated parturition.

    PubMed

    Catalano, Ralph; Goodman, Julia; Margerison-Zilko, Claire; Falconi, April; Gemmill, Alison; Karasek, Deborah; Anderson, Elizabeth

    2016-01-01

    The "dysregulated parturition" narrative posits that the human stress response includes a cascade of hormones that "dysregulates" and accelerates parturition but provides questionable utility as a guide to understand or prevent preterm birth. We offer and test a "strategic parturition" narrative that not only predicts the excess preterm births that dysregulated parturition predicts but also makes testable, sex-specific predictions of the effect of stressful environments on the timing of birth among term pregnancies. We use interrupted time-series modeling of cohorts conceived over 101 months to test for lengthening of early term male gestations in stressed population. We use an event widely reported to have stressed Americans and to have increased the incidence of low birth weight and fetal death across the country-the terrorist attacks of September 2001. We tested the hypothesis that the odds of male infants conceived in December 2000 (i.e., at term in September 2001) being born early as opposed to full term fell below the value expected from those conceived in the 50 prior and 50 following months. We found that term male gestations exposed to the terrorist attacks exhibited 4% lower likelihood of early, as opposed to full or late, term birth. Strategic parturition explains observed data for which the dysregulated parturition narrative offers no prediction-the timing of birth among gestations stressed at term. Our narrative may help explain why findings from studies examining associations between population- and/or individual-level stressors and preterm birth are generally mixed. © 2015 Wiley Periodicals, Inc.

  7. A provisional regulatory gene network for specification of endomesoderm in the sea urchin embryo

    NASA Technical Reports Server (NTRS)

    Davidson, Eric H.; Rast, Jonathan P.; Oliveri, Paola; Ransick, Andrew; Calestani, Cristina; Yuh, Chiou-Hwa; Minokawa, Takuya; Amore, Gabriele; Hinman, Veronica; Arenas-Mena, Cesar

    2002-01-01

    We present the current form of a provisional DNA sequence-based regulatory gene network that explains in outline how endomesodermal specification in the sea urchin embryo is controlled. The model of the network is in a continuous process of revision and growth as new genes are added and new experimental results become available; see http://www.its.caltech.edu/mirsky/endomeso.htm (End-mes Gene Network Update) for the latest version. The network contains over 40 genes at present, many newly uncovered in the course of this work, and most encoding DNA-binding transcriptional regulatory factors. The architecture of the network was approached initially by construction of a logic model that integrated the extensive experimental evidence now available on endomesoderm specification. The internal linkages between genes in the network have been determined functionally, by measurement of the effects of regulatory perturbations on the expression of all relevant genes in the network. Five kinds of perturbation have been applied: (1) use of morpholino antisense oligonucleotides targeted to many of the key regulatory genes in the network; (2) transformation of other regulatory factors into dominant repressors by construction of Engrailed repressor domain fusions; (3) ectopic expression of given regulatory factors, from genetic expression constructs and from injected mRNAs; (4) blockade of the beta-catenin/Tcf pathway by introduction of mRNA encoding the intracellular domain of cadherin; and (5) blockade of the Notch signaling pathway by introduction of mRNA encoding the extracellular domain of the Notch receptor. The network model predicts the cis-regulatory inputs that link each gene into the network. Therefore, its architecture is testable by cis-regulatory analysis. Strongylocentrotus purpuratus and Lytechinus variegatus genomic BAC recombinants that include a large number of the genes in the network have been sequenced and annotated. Tests of the cis-regulatory predictions of the model are greatly facilitated by interspecific computational sequence comparison, which affords a rapid identification of likely cis-regulatory elements in advance of experimental analysis. The network specifies genomically encoded regulatory processes between early cleavage and gastrula stages. These control the specification of the micromere lineage and of the initial veg(2) endomesodermal domain; the blastula-stage separation of the central veg(2) mesodermal domain (i.e., the secondary mesenchyme progenitor field) from the peripheral veg(2) endodermal domain; the stabilization of specification state within these domains; and activation of some downstream differentiation genes. Each of the temporal-spatial phases of specification is represented in a subelement of the network model, which treats regulatory events within the relevant embryonic nuclei at particular stages. © 2002 Elsevier Science (USA).

  8. Work-Centered Technology Development (WTD)

    DTIC Science & Technology

    2005-03-01

    Fragmentary full-text snippet (original formatting lost): "... theoretical, testable, inductive, and repeatable foundations of science. Theoretical foundations include notions such as statistical versus analytical ..." Reference fragments: Proceedings of the Human Factors and Ergonomics Society, 263-267; Eggleston, R. G. (2005). Coursebook: Work-Centered Design (WCD). AFRL/HECS WCD course training.

  9. Writing testable software requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knirk, D.

    1997-11-01

    This tutorial identifies common problems in analyzing problem requirements and in constructing a written specification of what the software is to do. It deals with two main problem areas: identifying and describing problem requirements, and analyzing and describing behavior specifications.

  10. All pure bipartite entangled states can be self-tested

    PubMed Central

    Coladangelo, Andrea; Goh, Koon Tong; Scarani, Valerio

    2017-01-01

    Quantum technologies promise advantages over their classical counterparts in the fields of computation, security and sensing. It is thus desirable that classical users are able to obtain guarantees on quantum devices, even without any knowledge of their inner workings. That such classical certification is possible at all is remarkable: it is a consequence of the violation of Bell inequalities by entangled quantum systems. Device-independent self-testing refers to the most complete such certification: it enables a classical user to uniquely identify the quantum state shared by uncharacterized devices by simply inspecting the correlations of measurement outcomes. Self-testing was first demonstrated for the singlet state and a few other examples of self-testable states were reported in recent years. Here, we address the long-standing open question of whether every pure bipartite entangled state is self-testable. We answer it affirmatively by providing explicit self-testing correlations for all such states. PMID:28548093
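
    A standard special case underlying this result (CHSH rigidity for the singlet, not the paper's general construction) can be written out. For local hidden-variable models,

        \[ \mathcal{B} = \langle A_0 B_0\rangle + \langle A_0 B_1\rangle + \langle A_1 B_0\rangle - \langle A_1 B_1\rangle \le 2, \]

    whereas the singlet \( |\psi^-\rangle = (|01\rangle - |10\rangle)/\sqrt{2} \), measured with \( A_0 = Z \), \( A_1 = X \), \( B_0 = -(Z+X)/\sqrt{2} \), \( B_1 = (X-Z)/\sqrt{2} \), yields \( \langle A_i B_j \rangle = -\vec{a}_i \cdot \vec{b}_j \) and hence \( \mathcal{B} = 4/\sqrt{2} = 2\sqrt{2} \), the quantum maximum. Observing this maximal violation identifies the state (and measurements) up to local isometries, which is the sense of "self-testing" used above.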

  11. All pure bipartite entangled states can be self-tested

    NASA Astrophysics Data System (ADS)

    Coladangelo, Andrea; Goh, Koon Tong; Scarani, Valerio

    2017-05-01

    Quantum technologies promise advantages over their classical counterparts in the fields of computation, security and sensing. It is thus desirable that classical users are able to obtain guarantees on quantum devices, even without any knowledge of their inner workings. That such classical certification is possible at all is remarkable: it is a consequence of the violation of Bell inequalities by entangled quantum systems. Device-independent self-testing refers to the most complete such certification: it enables a classical user to uniquely identify the quantum state shared by uncharacterized devices by simply inspecting the correlations of measurement outcomes. Self-testing was first demonstrated for the singlet state and a few other examples of self-testable states were reported in recent years. Here, we address the long-standing open question of whether every pure bipartite entangled state is self-testable. We answer it affirmatively by providing explicit self-testing correlations for all such states.

  12. All pure bipartite entangled states can be self-tested.

    PubMed

    Coladangelo, Andrea; Goh, Koon Tong; Scarani, Valerio

    2017-05-26

    Quantum technologies promise advantages over their classical counterparts in the fields of computation, security and sensing. It is thus desirable that classical users are able to obtain guarantees on quantum devices, even without any knowledge of their inner workings. That such classical certification is possible at all is remarkable: it is a consequence of the violation of Bell inequalities by entangled quantum systems. Device-independent self-testing refers to the most complete such certification: it enables a classical user to uniquely identify the quantum state shared by uncharacterized devices by simply inspecting the correlations of measurement outcomes. Self-testing was first demonstrated for the singlet state and a few other examples of self-testable states were reported in recent years. Here, we address the long-standing open question of whether every pure bipartite entangled state is self-testable. We answer it affirmatively by providing explicit self-testing correlations for all such states.

  13. Eco-genetic modeling of contemporary life-history evolution.

    PubMed

    Dunlop, Erin S; Heino, Mikko; Dieckmann, Ulf

    2009-10-01

    We present eco-genetic modeling as a flexible tool for exploring the course and rates of multi-trait life-history evolution in natural populations. We build on existing modeling approaches by combining features that facilitate studying the ecological and evolutionary dynamics of realistically structured populations. In particular, the joint consideration of age and size structure enables the analysis of phenotypically plastic populations with more than a single growth trajectory, and ecological feedback is readily included in the form of density dependence and frequency dependence. Stochasticity and life-history trade-offs can also be implemented. Critically, eco-genetic models permit the incorporation of salient genetic detail such as a population's genetic variances and covariances and the corresponding heritabilities, as well as the probabilistic inheritance and phenotypic expression of quantitative traits. These inclusions are crucial for predicting rates of evolutionary change on both contemporary and longer timescales. An eco-genetic model can be tightly coupled with empirical data and therefore may have considerable practical relevance, in terms of generating testable predictions and evaluating alternative management measures. To illustrate the utility of these models, we present as an example an eco-genetic model used to study harvest-induced evolution of multiple traits in Atlantic cod. The predictions of our model (most notably that harvesting induces a genetic reduction in age and size at maturation, an increase or decrease in growth capacity depending on the minimum-length limit, and an increase in reproductive investment) are corroborated by patterns observed in wild populations. The predicted genetic changes occur together with plastic changes that could phenotypically mask the former. Importantly, our analysis predicts that evolutionary changes show little signs of reversal following a harvest moratorium. This illustrates how predictions offered by eco-genetic models can enable and guide evolutionarily sustainable resource management.
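
    A minimal individual-based sketch of the eco-genetic idea, with invented parameters (trait values, mortalities, and heritability stand-ins are not those of the Atlantic cod model): a heritable age-at-maturation trait evolves under size-selective harvest while phenotypic plasticity partially masks the genetic change.

        import numpy as np

        rng = np.random.default_rng(2)
        N = 2000
        g = rng.normal(4.0, 0.5, size=N)  # genetic age-at-maturation (years)

        for gen in range(30):
            pheno = g + rng.normal(0.0, 0.4, size=g.size)  # plastic expression
            # Size-selective harvest: late maturers grow large and suffer
            # extra mortality, a crude stand-in for a minimum-length limit.
            mort = np.where(pheno > np.median(pheno), 0.5, 0.1)
            g = g[rng.random(g.size) > mort]
            # Random mating; offspring get the midparent genetic value plus
            # segregation noise (probabilistic inheritance of the trait).
            parents = rng.choice(g, size=(N, 2))
            g = parents.mean(axis=1) + rng.normal(0.0, 0.25, size=N)

        print(g.mean())  # tends to drift below 4.0: evolution toward earlier maturation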

  14. Using biological markets principles to examine patterns of grooming exchange in Macaca thibetana.

    PubMed

    Balasubramaniam, K N; Berman, C M; Ogawa, H; Li, J

    2011-12-01

    Biological markets principles offer testable hypotheses to explain variation in grooming exchange patterns among nonhuman primates. They predict that when within-group contest competition (WGC) is high and dominance hierarchies steep, grooming interchange with other "commodity" behaviors (such as agonistic support) should prevail. In contrast, when WGC is low and gradients shallow, market theory predicts that grooming reciprocity should prevail. We tested these predictions in a wild, provisioned Tibetan macaque (Macaca thibetana) group across six time periods during which the group had been subjected to varying degrees of range restriction. Data on female-female aggression, grooming, and support were collected using all-occurrences and focal animal sampling techniques, and analyzed using ANCOVA methods and correlation analyses. We found that hierarchical steepness varied significantly across periods, but did not correlate with two indirect indicators of WGC (group size and range restriction) in predicted directions. Contrary to expectations, we found a negative correlation between steepness and group size, perhaps because the responses of group members to external risks (i.e. prolonged and unavoidable exposure to humans) may have overshadowed the effects of WGC. As predicted, grooming reciprocity was significant in each period and negatively correlated with steepness, even after we controlled group size, kinship, rank differences, and proximity. In contrast, there was no evidence for grooming interchange with agonistic support or for a positive relationship between interchange and steepness. We hypothesize that stressful conditions and/or the presence of stable hierarchies during each period may have led to a greater market demand for grooming than support. We suggest that future studies testing these predictions consider more direct measures of WGC and commodities in addition to support, such as feeding tolerance and access to infants. © 2011 Wiley Periodicals, Inc.

  15. Prediction of gene-phenotype associations in humans, mice, and plants using phenologs.

    PubMed

    Woods, John O; Singh-Blom, Ulf Martin; Laurent, Jon M; McGary, Kriston L; Marcotte, Edward M

    2013-06-21

    Phenotypes and diseases may be related to seemingly dissimilar phenotypes in other species by means of the orthology of underlying genes. Such "orthologous phenotypes," or "phenologs," are examples of deep homology, and may be used to predict additional candidate disease genes. In this work, we develop an unsupervised algorithm for ranking phenolog-based candidate disease genes through the integration of predictions from the k nearest neighbor phenologs, comparing classifiers and weighting functions by cross-validation. We also improve upon the original method by extending the theory to paralogous phenotypes. Our algorithm makes use of additional phenotype data--from chicken, zebrafish, and E. coli, as well as new datasets for C. elegans--establishing that several types of annotations may be treated as phenotypes. We demonstrate the use of our algorithm to predict novel candidate genes for human atrial fibrillation (such as HRH2, ATP4A, ATP4B, and HOPX) and epilepsy (e.g., PAX6 and NKX2-1). We suggest gene candidates for pharmacologically-induced seizures in mouse, solely based on orthologous phenotypes from E. coli. We also explore the prediction of plant gene-phenotype associations, as for the Arabidopsis response to vernalization phenotype. We are able to rank gene predictions for a significant portion of the diseases in the Online Mendelian Inheritance in Man database. Additionally, our method suggests candidate genes for mammalian seizures based only on bacterial phenotypes and gene orthology. We demonstrate that phenotype information may come from diverse sources, including drug sensitivities, gene ontology biological processes, and in situ hybridization annotations. Finally, we offer testable candidates for a variety of human diseases, plant traits, and other classes of phenotypes across a wide array of species.
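
    A toy version of the phenolog ranking step, with entirely invented associations: phenotypes are compared with a disease by gene-set overlap, and the k most similar phenotypes vote for candidate genes.

        import numpy as np

        # Rows = model-organism phenotypes, columns = human genes (via
        # orthology); 1 = the ortholog is associated with that phenotype.
        pheno_gene = np.array([
            [1, 1, 0, 0, 1],
            [1, 0, 1, 0, 0],
            [0, 1, 1, 1, 0],
            [1, 1, 0, 1, 0],
        ])
        disease = np.array([1, 1, 0, 0, 0])  # known human disease genes

        def overlap(a, b):
            # simple overlap score standing in for a hypergeometric p-value
            return (a & b).sum() / np.sqrt(a.sum() * b.sum())

        k = 2
        sims = np.array([overlap(p, disease) for p in pheno_gene])
        nearest = np.argsort(sims)[::-1][:k]            # k nearest phenologs
        scores = (sims[nearest, None] * pheno_gene[nearest]).sum(axis=0)
        scores[disease == 1] = 0                        # keep only novel candidates
        print(np.argsort(scores)[::-1])                 # candidate gene ranking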

  16. Phase 1 Space Fission Propulsion Energy Source Design

    NASA Technical Reports Server (NTRS)

    Houts, Mike; VanDyke, Melissa; Godfroy, Tom; Pedersen, Kevin; Martin, James; Dickens, Ricky; Salvail, Pat; Hrbud, Ivana; Carter, Robert; Rodgers, Stephen L. (Technical Monitor)

    2002-01-01

    Fission technology can enable rapid, affordable access to any point in the solar system. If fission propulsion systems are to be developed to their full potential, however, near-term customers must be identified and initial fission systems successfully developed, launched, and operated. Studies conducted in fiscal year 2001 (IISTP, 2001) show that fission electric propulsion (FEP) systems with a specific mass at or below 50 kg/kWjet could enhance or enable numerous robotic outer solar system missions of interest. At the required specific mass, it is possible to develop safe, affordable systems that meet mission requirements. To help select the system design to pursue, eight evaluation criteria were identified: system integration, safety, reliability, testability, specific mass, cost, schedule, and programmatic risk. A top-level comparison of four potential concepts was performed: a Testable, Passive, Redundant Reactor (TPRR), a Testable Multi-Cell In-Core Thermionic Reactor (TMCT), a Direct Gas Cooled Reactor (DGCR), and a Pumped Liquid Metal Reactor (PLMR). Development of any of the four systems appears feasible. However, for power levels up to at least 500 kWt (enabling electric power levels of 125-175 kWe, given 25-35% power conversion efficiency) the TPRR has advantages related to several criteria and is competitive with respect to all. Hardware-based research and development has further increased confidence in the TPRR approach. Successful development and utilization of a "Phase 1" fission electric propulsion system will enable advanced Phase 2 and Phase 3 systems capable of providing rapid, affordable access to any point in the solar system.

  17. Pediatric Amblyopia Risk Investigation Study (PARIS).

    PubMed

    Savage, Howard I; Lee, Hester H; Zaetta, Deneen; Olszowy, Ronald; Hamburger, Ellie; Weissman, Mark; Frick, Kevin

    2005-12-01

    To assess the learning curve, testability, and reliability of vision screening modalities administered by pediatric health extenders. Prospective masked clinical trial. Two hundred subjects aged 3 to 6 underwent timed screening for amblyopia by physician extenders, including LEA visual acuity (LEA), stereopsis (RDE), and noncycloplegic autorefraction (NCAR). Patients returned for a comprehensive diagnostic eye examination performed by an ophthalmologist or optometrist. Average screening time was 5.4 +/- 1.6 minutes (LEA), 1.9 +/- 0.9 minutes (RDE), and 1.7 +/- 1.0 minutes (NCAR). Test time for NCAR and RDE fell by 40% during the study period. Overall testability was 92% (LEA), 96% (RDE), and 94% (NCAR). Testability among 3-year-olds was 73% (LEA), 96% (RDE), and 89% (NCAR). Reliability of LEA was moderate (r = .59). Reliability of NCAR was high for astigmatism (Cyl) (r = .89), moderate for spherical equivalent (SE) (r = .66), and low for anisometropia (ANISO) (r = .38). Correlation of cycloplegic autorefraction (CAR) with gold standard cycloplegic retinoscopic refraction (CRR) was very high for SE (.85), CYL (.77), and moderate for ANISO (.48). With NCAR, physician extenders can quickly and reliably detect astigmatism and spherical refractive error in one-third the time it takes to obtain visual acuity. LEA has a lower initial cost, but is time consuming, moderately reliable, and more difficult for 3-year-olds. Shorter examination time and higher reliability may make NCAR a more efficient screening tool for refractive amblyopia in younger children. Future study is needed to determine the sensitivity and specificity of NCAR and other screening methods in detecting amblyopia and amblyopia risk factors.

  18. Modeling Physiological Processes That Relate Toxicant Exposure and Bacterial Population Dynamics

    PubMed Central

    Klanjscek, Tin; Nisbet, Roger M.; Priester, John H.; Holden, Patricia A.

    2012-01-01

    Quantifying effects of toxicant exposure on metabolic processes is crucial to predicting microbial growth patterns in different environments. Mechanistic models, such as those based on Dynamic Energy Budget (DEB) theory, can link physiological processes to microbial growth. Here we expand the DEB framework to include explicit consideration of the role of reactive oxygen species (ROS). Extensions considered are: (i) additional terms in the equation for the “hazard rate” that quantifies mortality risk; (ii) a variable representing environmental degradation; (iii) a mechanistic description of toxic effects linked to increase in ROS production and aging acceleration, and to non-competitive inhibition of transport channels; (iv) a new representation of the “lag time” based on energy required for acclimation. We estimate model parameters using calibrated Pseudomonas aeruginosa optical density growth data for seven levels of cadmium exposure. The model reproduces growth patterns for all treatments with a single common parameter set, and bacterial growth for treatments of up to 150 mg(Cd)/L can be predicted reasonably well using parameters estimated from cadmium treatments of 20 mg(Cd)/L and lower. Our approach is an important step towards connecting levels of biological organization in ecotoxicology. The presented model reveals possible connections between processes that are not obvious from purely empirical considerations, enables validation and hypothesis testing by creating testable predictions, and identifies research required to further develop the theory. PMID:22328915
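
    The sketch below caricatures the expanded DEB structure in two equations: growth is gated by an acclimation variable (the "lag time"), and a cadmium-dependent hazard removes biomass. All parameters and the dose-response form are invented for illustration, not the calibrated model.

        import numpy as np
        from scipy.integrate import odeint

        def rhs(y, t, cd):
            B, A = y                                   # biomass, acclimation
            growth = 0.8 * A * B * (1.0 - B)           # lag-gated logistic growth
            death = 0.05 * cd / (cd + 20.0) * B        # ROS/aging-linked hazard
            return [growth - death, 0.5 * (1.0 - A)]   # A relaxes toward 1

        t = np.linspace(0.0, 24.0, 200)
        for cd in (0.0, 20.0, 150.0):                  # mg(Cd)/L treatments
            B = odeint(rhs, [0.01, 0.0], t, args=(cd,))[:, 0]
            print(cd, round(float(B[-1]), 3))          # final density falls with dose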

  19. Search performance is better predicted by tileability than presence of a unique basic feature.

    PubMed

    Chang, Honghua; Rosenholtz, Ruth

    2016-08-01

    Traditional models of visual search, such as feature integration theory (FIT; Treisman & Gelade, 1980), have suggested that a key factor determining task difficulty consists of whether or not the search target contains a "basic feature" not found in the other display items (distractors). Here we discriminate between such traditional models and our recent texture tiling model (TTM) of search (Rosenholtz, Huang, Raj, Balas, & Ilie, 2012b), by designing new experiments that directly pit these models against each other. Doing so is nontrivial, for two reasons. First, the visual representation in TTM is fully specified, and makes clear testable predictions, but its complexity makes getting intuitions difficult. Here we elucidate a rule of thumb for TTM, which enables us to easily design new and interesting search experiments. FIT, on the other hand, is somewhat ill-defined and hard to pin down. To get around this, rather than designing totally new search experiments, we start with five classic experiments that FIT already claims to explain: T among Ls, 2 among 5s, Q among Os, O among Qs, and an orientation/luminance-contrast conjunction search. We find that fairly subtle changes in these search tasks lead to significant changes in performance, in a direction predicted by TTM, providing definitive evidence in favor of the texture tiling model as opposed to traditional views of search.

  20. Search performance is better predicted by tileability than presence of a unique basic feature

    PubMed Central

    Chang, Honghua; Rosenholtz, Ruth

    2016-01-01

    Traditional models of visual search, such as feature integration theory (FIT; Treisman & Gelade, 1980), have suggested that a key factor determining task difficulty consists of whether or not the search target contains a “basic feature” not found in the other display items (distractors). Here we discriminate between such traditional models and our recent texture tiling model (TTM) of search (Rosenholtz, Huang, Raj, Balas, & Ilie, 2012b), by designing new experiments that directly pit these models against each other. Doing so is nontrivial, for two reasons. First, the visual representation in TTM is fully specified, and makes clear testable predictions, but its complexity makes getting intuitions difficult. Here we elucidate a rule of thumb for TTM, which enables us to easily design new and interesting search experiments. FIT, on the other hand, is somewhat ill-defined and hard to pin down. To get around this, rather than designing totally new search experiments, we start with five classic experiments that FIT already claims to explain: T among Ls, 2 among 5s, Q among Os, O among Qs, and an orientation/luminance-contrast conjunction search. We find that fairly subtle changes in these search tasks lead to significant changes in performance, in a direction predicted by TTM, providing definitive evidence in favor of the texture tiling model as opposed to traditional views of search. PMID:27548090

  1. Encoding dependence in Bayesian causal networks

    USDA-ARS's Scientific Manuscript database

    Bayesian networks (BNs) represent complex, uncertain spatio-temporal dynamics by propagation of conditional probabilities between identifiable states with a testable causal interaction model. Typically, they assume random variables are discrete in time and space with a static network structure that ...
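
    Since the record above is truncated, here is only a generic refresher on how conditional probabilities propagate in a small discrete Bayesian network, using the textbook rain/sprinkler example with invented numbers (not the manuscript's model).

        # Rain -> WetGrass <- Sprinkler; query P(Rain | WetGrass=True)
        # by brute-force enumeration of the joint distribution.
        P_rain = {True: 0.2, False: 0.8}
        P_sprk = {True: 0.3, False: 0.7}
        P_wet = {(True, True): 0.99, (True, False): 0.9,
                 (False, True): 0.8, (False, False): 0.05}

        def joint(rain, sprk, wet):
            p = P_rain[rain] * P_sprk[sprk]
            pw = P_wet[(rain, sprk)]
            return p * (pw if wet else 1.0 - pw)

        num = sum(joint(True, s, True) for s in (True, False))
        den = sum(joint(r, s, True) for r in (True, False)
                                    for s in (True, False))
        print(num / den)  # posterior belief in Rain given wet grass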

  2. A Multilayer Network Approach for Guiding Drug Repositioning in Neglected Diseases

    PubMed Central

    Chernomoretz, Ariel; Agüero, Fernán

    2016-01-01

    Drug development for neglected diseases has been historically hampered due to lack of market incentives. The advent of public domain resources containing chemical information from high throughput screenings is changing the landscape of drug discovery for these diseases. In this work we took advantage of data from extensively studied organisms like human, mouse, E. coli and yeast, among others, to develop a novel integrative network model to prioritize and identify candidate drug targets in neglected pathogen proteomes, and bioactive drug-like molecules. We modeled genomic (proteins) and chemical (bioactive compounds) data as a multilayer weighted network graph that takes advantage of bioactivity data across 221 species, chemical similarities between 1.7 × 10^5 compounds and several functional relations among 1.67 × 10^5 proteins. These relations comprised orthology, sharing of protein domains, and shared participation in defined biochemical pathways. We showcase the application of this network graph to the problem of prioritization of new candidate targets, based on the information available in the graph for known compound-target associations. We validated this strategy by performing a cross validation procedure for known mouse and Trypanosoma cruzi targets and showed that our approach outperforms classic alignment-based approaches. Moreover, our model provides additional flexibility as two different network definitions could be considered, finding in both cases qualitatively different but sensible candidate targets. We also showcase the application of the network to suggest targets for orphan compounds that are active against Plasmodium falciparum in high-throughput screens. In this case our approach provided a reduced prioritization list of target proteins for the query molecules and showed the ability to propose new testable hypotheses for each compound. Moreover, we found that some predictions highlighted by our network model were supported by independent experimental validations as found post-facto in the literature. PMID:26735851
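
    A drastically reduced sketch of the prioritization idea: known compound-target activity is pushed through weighted protein-protein relations, and proteins accumulate priority scores. Edge lists, weights, and the damping scheme are invented, not the published model.

        import itertools

        # Toy layers: protein-protein edges (orthology/domains/pathways)
        # and compound-protein bioactivity edges, with weights.
        pp = {("p1", "p2"): 0.8, ("p2", "p3"): 0.5, ("p1", "p4"): 0.3}
        cp = {("drugA", "p1"): 1.0, ("drugA", "p2"): 0.6, ("drugB", "p3"): 0.9}

        def neighbors(node):
            for (a, b), w in itertools.chain(pp.items(), cp.items()):
                if a == node:
                    yield b, w
                if b == node:
                    yield a, w

        def score_targets(compound, steps=2, damp=0.5):
            scores, frontier = {}, {compound: 1.0}
            for _ in range(steps):
                nxt = {}
                for node, s in frontier.items():
                    for nb, w in neighbors(node):
                        nxt[nb] = nxt.get(nb, 0.0) + damp * s * w
                for nb, s in nxt.items():
                    scores[nb] = scores.get(nb, 0.0) + s
                frontier = nxt
            return {k: v for k, v in scores.items() if k.startswith("p")}

        print(sorted(score_targets("drugA").items(), key=lambda kv: -kv[1]))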

  3. Towards a liquid self: how time, geography, and life experiences reshape the biological identity.

    PubMed

    Grignolio, Andrea; Mishto, Michele; Faria, Ana Maria Caetano; Garagnani, Paolo; Franceschi, Claudio; Tieri, Paolo

    2014-01-01

    The conceptualization of immunological self is amongst the most important theories of modern biology, representing a sort of theoretical guideline for experimental immunologists, in order to understand how host constituents are ignored by the immune system (IS). A consistent advancement in this field has been represented by the danger/damage theory and its subsequent refinements, which at present represents the most comprehensive conceptualization of immunological self. Here, we present the new hypothesis of "liquid self," which integrates and extends the danger/damage theory. The main novelty of the liquid self hypothesis lies in the full integration of the immune response mechanisms into the host body's ecosystems, i.e., in adding the temporal, as well as the geographical/evolutionary and environmental, dimensions, which we suggested to call "immunological biography." Our hypothesis takes into account the important biological changes occurring with time (age) in the IS (including immunosenescence and inflammaging), as well as changes in the organismal context related to nutrition, lifestyle, and geography (populations). We argue that such temporal and geographical dimensions impinge upon, and continuously reshape, the antigenicity of physical entities (molecules, cells, bacteria, viruses), making them switch between "self" and "non-self" states in a dynamical, "liquid" fashion. Particular attention is devoted to oral tolerance and gut microbiota, as well as to a new potential source of unexpected self epitopes produced by proteasome splicing. Finally, our framework allows the setting up of a variety of testable predictions, the most straightforward suggesting that the immune responses to defined molecules representing potential antigens will be quantitatively and qualitatively quite different according to the immuno-biographical background of the host.

  4. Impact of sex steroids and reproductive stage on sleep-dependent memory consolidation in women.

    PubMed

    Baker, Fiona C; Sattari, Negin; de Zambotti, Massimiliano; Goldstone, Aimee; Alaynick, William A; Mednick, Sara C

    2018-03-21

    Age and sex are two of the three major risk factors for Alzheimer's disease (ApoE-e4 allele is the third), with women having a twofold greater risk for Alzheimer's disease after the age of 75 years. Sex differences have been shown across a wide range of cognitive skills in young and older adults, and evidence supports a role for sex steroids, especially estradiol, in protecting against the development of cognitive decline in women. Sleep may also be a protective factor against age-related cognitive decline, since specific electrophysiological sleep events (e.g. sleep spindle/slow oscillation coupling) are critical for offline memory consolidation. Furthermore, studies in young women have shown fluctuations in sleep events and sleep-dependent memory consolidation during different phases of the menstrual cycle that are associated with the levels of sex steroids. An under-appreciated possibility is that there may be an important interaction between these two protective factors (sex steroids and sleep) that may play a role in daily fluctuations in cognitive processing, in particular memory, across a woman's lifespan. Here, we summarize the current knowledge of sex steroid-dependent influences on sleep and cognition across the lifespan in women, with special emphasis on sleep-dependent memory processing. We further indicate gaps in knowledge that require further experimental examination in order to fully appreciate the complex and changing landscape of sex steroids and cognition. Lastly, we propose a series of testable predictions for how sex steroids impact sleep events and sleep-dependent cognition across the three major reproductive stages in women (reproductive years, menopause transition, and post-menopause). Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Computational identification of potential multi-drug combinations for reduction of microglial inflammation in Alzheimer disease

    PubMed Central

    Anastasio, Thomas J.

    2015-01-01

    Like other neurodegenerative diseases, Alzheimer Disease (AD) has a prominent inflammatory component mediated by brain microglia. Reducing microglial inflammation could potentially halt or at least slow the neurodegenerative process. A major challenge in the development of treatments targeting brain inflammation is the sheer complexity of the molecular mechanisms that determine whether microglia become inflammatory or take on a more neuroprotective phenotype. The process is highly multifactorial, raising the possibility that a multi-target/multi-drug strategy could be more effective than conventional monotherapy. This study takes a computational approach in finding combinations of approved drugs that are potentially more effective than single drugs in reducing microglial inflammation in AD. This novel approach exploits the distinct advantages of two different computer programming languages, one imperative and the other declarative. Existing programs written in both languages implement the same model of microglial behavior, and the input/output relationships of both programs agree with each other and with data on microglia over an extensive test battery. Here the imperative program is used efficiently to screen the model for the most efficacious combinations of 10 drugs, while the declarative program is used to analyze in detail the mechanisms of action of the most efficacious combinations. Of the 1024 possible drug combinations, the simulated screen identifies only 7 that are able to move simulated microglia at least 50% of the way from a neurotoxic to a neuroprotective phenotype. Subsequent analysis shows that of the 7 most efficacious combinations, 2 stand out as superior both in strength and reliability. The model offers many experimentally testable and therapeutically relevant predictions concerning effective drug combinations and their mechanisms of action. PMID:26097457
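
    The brute-force part of the screen is easy to reproduce in outline: enumerate all 2^10 subsets of ten drugs and keep those whose simulated phenotype crosses the 50% mark. The scoring function here is a random linear-plus-pairwise surrogate, not the paper's validated microglia model.

        from itertools import combinations
        import numpy as np

        rng = np.random.default_rng(3)
        n = 10
        single = rng.uniform(-0.05, 0.15, size=n)        # per-drug effects
        pair = rng.uniform(-0.05, 0.05, size=(n, n))     # pairwise interactions

        def phenotype(combo):
            # 0 = neurotoxic, 1 = neuroprotective; untreated baseline 0.2
            x = 0.2 + sum(single[i] for i in combo)
            x += sum(pair[i, j] for i, j in combinations(combo, 2))
            return min(max(x, 0.0), 1.0)

        # Keep combinations moving at least 50% of the way to protective:
        # 0.2 + 0.5 * (1.0 - 0.2) = 0.6.
        hits = [c for r in range(n + 1)
                for c in combinations(range(n), r) if phenotype(c) >= 0.6]
        print(len(hits), hits[:3])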

  6. Modelling Systemic Iron Regulation during Dietary Iron Overload and Acute Inflammation: Role of Hepcidin-Independent Mechanisms.

    PubMed

    Enculescu, Mihaela; Metzendorf, Christoph; Sparla, Richard; Hahnel, Maximilian; Bode, Johannes; Muckenthaler, Martina U; Legewie, Stefan

    2017-01-01

    Systemic iron levels must be maintained in physiological concentrations to prevent diseases associated with iron deficiency or iron overload. A key role in this process is played by ferroportin, the only known mammalian transmembrane iron exporter, which releases iron from duodenal enterocytes, hepatocytes, or iron-recycling macrophages into the blood stream. Ferroportin expression is tightly controlled by transcriptional and post-transcriptional mechanisms in response to hypoxia, iron deficiency, heme iron and inflammatory cues by cell-autonomous and systemic mechanisms. At the systemic level, the iron-regulatory hormone hepcidin is released from the liver in response to these cues, binds to ferroportin and triggers its degradation. The relative importance of individual ferroportin control mechanisms and their interplay at the systemic level is incompletely understood. Here, we built a mathematical model of systemic iron regulation. It incorporates the dynamics of organ iron pools as well as regulation by the hepcidin/ferroportin system. We calibrated and validated the model with time-resolved measurements of iron responses in mice challenged with dietary iron overload and/or inflammation. The model demonstrates that inflammation mainly reduces the amount of iron in the blood stream by reducing intracellular ferroportin transcription, and not by hepcidin-dependent ferroportin protein destabilization. In contrast, ferroportin regulation by hepcidin is the predominant mechanism of iron homeostasis in response to changing iron diets for a broad range of dietary iron contents. The model further reveals that additional homeostasis mechanisms must be taken into account at very high dietary iron levels, including the saturation of intestinal uptake of nutritional iron and the uptake of circulating, non-transferrin-bound iron into the liver. Taken together, our model quantitatively describes systemic iron metabolism and generates experimentally testable predictions for additional ferroportin-independent homeostasis mechanisms.
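
    A two-pool caricature of the hepcidin/ferroportin logic (all rate constants invented and the structure far simpler than the calibrated model): hepcidin rises with blood iron and degrades ferroportin, while inflammation suppresses ferroportin transcription directly.

        import numpy as np
        from scipy.integrate import odeint

        def rhs(y, t, diet, infl):
            G, B, F = y                    # gut iron, blood iron, ferroportin
            hepcidin = B / (B + 1.0)       # liver release grows with blood iron
            dG = diet - F * G              # ferroportin-mediated export to blood
            dB = F * G - 0.3 * B           # utilization/loss of blood iron
            dF = (1.0 - infl) - (0.5 + hepcidin) * F   # transcription - degradation
            return [dG, dB, dF]

        t = np.linspace(0.0, 50.0, 500)
        for diet, infl in ((0.2, 0.0), (1.0, 0.0), (0.2, 0.8)):
            B = odeint(rhs, [0.5, 0.5, 0.5], t, args=(diet, infl))[:, 1]
            print(diet, infl, round(float(B[-1]), 3))  # inflammation lowers blood iron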

  7. A model study on the circuit mechanism underlying decision-making in Drosophila.

    PubMed

    Wu, Zhihua; Guo, Aike

    2011-05-01

    Previous elegant experiments in a flight simulator showed that conditioned Drosophila is able to make a clear-cut decision to avoid potential danger. When confronted with conflicting visual cues, the relative saliency of two competing cues is found to be a sensory ruler for flies to judge which cue should be used for decision-making. Further genetic manipulations and immunohistological analysis revealed that the dopamine system and mushroom bodies are indispensable for such a clear-cut or nonlinear decision. The neural circuit mechanism, however, is far from being clear. In this paper, we adopt a computational modeling approach to investigate how different brain areas and the dopamine system work together to drive a fly to make a decision. By developing a systems-level neural network, a two-pathway circuit is proposed. Besides a direct pathway from a feature binding area to the motor center, another connects the two areas via the mushroom body, a target of dopamine release. A raised dopamine level is hypothesized to be induced by complex choice tasks and to enhance lateral inhibition and steepen the units' response gain in the mushroom body. Simulations show that training helps to assign values to formerly neutral features. For a circuit model with a blocked mushroom body, the direct pathway passes all alternatives to the motor center without changing original values, giving rise to a simple choice characterized by a linear choice curve. With respect to an intact circuit, enhanced lateral inhibition dependent on dopamine critically promotes competition between alternatives, turning the linear choice behavior into nonlinear choice behavior. The results account well for experimental data, supporting the plausibility of the model's working hypotheses. Several testable predictions are made for future studies. Copyright © 2011 Elsevier Ltd. All rights reserved.
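
    The role assigned to dopamine-gated lateral inhibition can be caricatured with two mutually inhibiting units; all weights and gains below are invented, not fitted to the published circuit. Weak inhibition yields a graded preference, strong inhibition pushes toward a winner-take-all, clear-cut choice.

        import numpy as np

        def choose(saliencies, inhibition, gain=4.0, steps=200, dt=0.05):
            """Two-unit competition; 'inhibition' stands in for the
            dopamine-enhanced mushroom-body pathway."""
            r = np.zeros(2)
            for _ in range(steps):
                drive = saliencies + r - inhibition * r[::-1]
                target = 1.0 / (1.0 + np.exp(-gain * (drive - 1.0)))
                r += dt * (target - r)
            return r

        for inh in (0.2, 2.0):  # weak vs dopamine-enhanced inhibition
            curve = [choose(np.array([1.0 + d, 1.0 - d]), inh)[0]
                     for d in np.linspace(0.0, 0.4, 5)]
            print(inh, np.round(curve, 2))  # graded vs step-like preference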

  8. A Multilayer Network Approach for Guiding Drug Repositioning in Neglected Diseases.

    PubMed

    Berenstein, Ariel José; Magariños, María Paula; Chernomoretz, Ariel; Agüero, Fernán

    2016-01-01

    Drug development for neglected diseases has been historically hampered due to lack of market incentives. The advent of public domain resources containing chemical information from high throughput screenings is changing the landscape of drug discovery for these diseases. In this work we took advantage of data from extensively studied organisms like human, mouse, E. coli and yeast, among others, to develop a novel integrative network model to prioritize and identify candidate drug targets in neglected pathogen proteomes, and bioactive drug-like molecules. We modeled genomic (proteins) and chemical (bioactive compounds) data as a multilayer weighted network graph that takes advantage of bioactivity data across 221 species, chemical similarities between 1.7 × 10^5 compounds and several functional relations among 1.67 × 10^5 proteins. These relations comprised orthology, sharing of protein domains, and shared participation in defined biochemical pathways. We showcase the application of this network graph to the problem of prioritization of new candidate targets, based on the information available in the graph for known compound-target associations. We validated this strategy by performing a cross validation procedure for known mouse and Trypanosoma cruzi targets and showed that our approach outperforms classic alignment-based approaches. Moreover, our model provides additional flexibility as two different network definitions could be considered, finding in both cases qualitatively different but sensible candidate targets. We also showcase the application of the network to suggest targets for orphan compounds that are active against Plasmodium falciparum in high-throughput screens. In this case our approach provided a reduced prioritization list of target proteins for the query molecules and showed the ability to propose new testable hypotheses for each compound. Moreover, we found that some predictions highlighted by our network model were supported by independent experimental validations as found post-facto in the literature.

  9. Modelling Systemic Iron Regulation during Dietary Iron Overload and Acute Inflammation: Role of Hepcidin-Independent Mechanisms

    PubMed Central

    Sparla, Richard; Hahnel, Maximilian; Bode, Johannes; Muckenthaler, Martina U.; Legewie, Stefan

    2017-01-01

    Systemic iron levels must be maintained in physiological concentrations to prevent diseases associated with iron deficiency or iron overload. A key role in this process is played by ferroportin, the only known mammalian transmembrane iron exporter, which releases iron from duodenal enterocytes, hepatocytes, or iron-recycling macrophages into the blood stream. Ferroportin expression is tightly controlled by transcriptional and post-transcriptional mechanisms in response to hypoxia, iron deficiency, heme iron and inflammatory cues by cell-autonomous and systemic mechanisms. At the systemic level, the iron-regulatory hormone hepcidin is released from the liver in response to these cues, binds to ferroportin and triggers its degradation. The relative importance of individual ferroportin control mechanisms and their interplay at the systemic level is incompletely understood. Here, we built a mathematical model of systemic iron regulation. It incorporates the dynamics of organ iron pools as well as regulation by the hepcidin/ferroportin system. We calibrated and validated the model with time-resolved measurements of iron responses in mice challenged with dietary iron overload and/or inflammation. The model demonstrates that inflammation mainly reduces the amount of iron in the blood stream by reducing intracellular ferroportin transcription, and not by hepcidin-dependent ferroportin protein destabilization. In contrast, ferroportin regulation by hepcidin is the predominant mechanism of iron homeostasis in response to changing iron diets for a broad range of dietary iron contents. The model further reveals that additional homeostasis mechanisms must be taken into account at very high dietary iron levels, including the saturation of intestinal uptake of nutritional iron and the uptake of circulating, non-transferrin-bound iron into the liver. Taken together, our model quantitatively describes systemic iron metabolism and generates experimentally testable predictions for additional ferroportin-independent homeostasis mechanisms. PMID:28068331

  10. Towards a Liquid Self: How Time, Geography, and Life Experiences Reshape the Biological Identity

    PubMed Central

    Grignolio, Andrea; Mishto, Michele; Faria, Ana Maria Caetano; Garagnani, Paolo; Franceschi, Claudio; Tieri, Paolo

    2014-01-01

    The conceptualization of immunological self is amongst the most important theories of modern biology, representing a sort of theoretical guideline for experimental immunologists, in order to understand how host constituents are ignored by the immune system (IS). A consistent advancement in this field has been represented by the danger/damage theory and its subsequent refinements, which at present represents the most comprehensive conceptualization of immunological self. Here, we present the new hypothesis of “liquid self,” which integrates and extends the danger/damage theory. The main novelty of the liquid self hypothesis lies in the full integration of the immune response mechanisms into the host body’s ecosystems, i.e., in adding the temporal, as well as the geographical/evolutionary and environmental, dimensions, which we suggested to call “immunological biography.” Our hypothesis takes into account the important biological changes occurring with time (age) in the IS (including immunosenescence and inflammaging), as well as changes in the organismal context related to nutrition, lifestyle, and geography (populations). We argue that such temporal and geographical dimensions impinge upon, and continuously reshape, the antigenicity of physical entities (molecules, cells, bacteria, viruses), making them switch between “self” and “non-self” states in a dynamical, “liquid” fashion. Particular attention is devoted to oral tolerance and gut microbiota, as well as to a new potential source of unexpected self epitopes produced by proteasome splicing. Finally, our framework allows the setting up of a variety of testable predictions, the most straightforward suggesting that the immune responses to defined molecules representing potential antigens will be quantitatively and qualitatively quite different according to the immuno-biographical background of the host. PMID:24782860

  11. Evolution beyond neo-Darwinism: a new conceptual framework.

    PubMed

    Noble, Denis

    2015-01-01

    Experimental results in epigenetics and related fields of biological research show that the Modern Synthesis (neo-Darwinist) theory of evolution requires either extension or replacement. This article examines the conceptual framework of neo-Darwinism, including the concepts of 'gene', 'selfish', 'code', 'program', 'blueprint', 'book of life', 'replicator' and 'vehicle'. This form of representation is a barrier to extending or replacing existing theory as it confuses conceptual and empirical matters. These need to be clearly distinguished. In the case of the central concept of 'gene', the definition has moved all the way from describing a necessary cause (defined in terms of the inheritable phenotype itself) to an empirically testable hypothesis (in terms of causation by DNA sequences). Neo-Darwinism also privileges 'genes' in causation, whereas in multi-way networks of interactions there can be no privileged cause. An alternative conceptual framework is proposed that avoids these problems, and which is more favourable to an integrated systems view of evolution. © 2015. Published by The Company of Biologists Ltd.

  12. Certification trails and software design for testability

    NASA Technical Reports Server (NTRS)

    Sullivan, Gregory F.; Wilson, Dwight S.; Masson, Gerald M.

    1993-01-01

    Design techniques which may be applied to make program testing easier were investigated. Methods for modifying a program to generate additional data which we refer to as a certification trail are presented. This additional data is designed to allow the program output to be checked more quickly and effectively. Certification trails were described primarily from a theoretical perspective. A comprehensive attempt to assess experimentally the performance and overall value of the certification trail method is reported. The method was applied to nine fundamental, well-known algorithms for the following problems: convex hull, sorting, Huffman tree, shortest path, closest pair, line segment intersection, longest increasing subsequence, skyline, and Voronoi diagram. Run-time performance data for each of these problems is given, and selected problems are described in more detail. Our results indicate that there are many cases in which certification trails allow for significantly faster overall program execution time than a 2-version programming approach, and also give further evidence of the breadth of applicability of this method.
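
    Sorting is the easiest of the nine case studies to sketch: the trail is the permutation the sorter applied, and the checker verifies sortedness plus permutation consistency in linear time instead of re-sorting. This is a generic reconstruction of the idea, not the paper's code.

        def sort_with_trail(xs):
            """Sort and emit a certification trail: the applied permutation."""
            perm = sorted(range(len(xs)), key=xs.__getitem__)
            return [xs[i] for i in perm], perm

        def check(xs, ys, perm):
            """O(n) certifier: ys must be ordered and reachable from xs
            via perm, with every index used exactly once."""
            if any(a > b for a, b in zip(ys, ys[1:])):
                return False
            if len(ys) != len(xs) or len(perm) != len(xs):
                return False
            used = [False] * len(xs)
            for out, i in zip(ys, perm):
                if used[i] or xs[i] != out:
                    return False
                used[i] = True
            return True

        data = [5, 3, 8, 1]
        out, trail = sort_with_trail(data)
        print(check(data, out, trail))  # True; tampering with out gives False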

  13. Simulating Cancer Growth with Multiscale Agent-Based Modeling

    PubMed Central

    Wang, Zhihui; Butner, Joseph D.; Kerketta, Romica; Cristini, Vittorio; Deisboeck, Thomas S.

    2014-01-01

    There have been many techniques developed in recent years to in silico model a variety of cancer behaviors. Agent-based modeling is a specific discrete-based hybrid modeling approach that allows simulating the role of diversity in cell populations as well as within each individual cell; it has therefore become a powerful modeling method widely used by computational cancer researchers. Many aspects of tumor morphology including phenotype-changing mutations, the adaptation to microenvironment, the process of angiogenesis, the influence of extracellular matrix, reactions to chemotherapy or surgical intervention, the effects of oxygen and nutrient availability, and metastasis and invasion of healthy tissues have been incorporated and investigated in agent-based models. In this review, we introduce some of the most recent agent-based models that have provided insight into the understanding of cancer growth and invasion, spanning multiple biological scales in time and space, and we further describe several experimentally testable hypotheses generated by those models. We also discuss some of the current challenges of multiscale agent-based cancer models. PMID:24793698
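
    A minimal agent-based caricature of the ingredients listed above (division limited by a local resource); the grid size, rates, and oxygen bookkeeping are invented and far simpler than the reviewed multiscale models.

        import numpy as np

        rng = np.random.default_rng(4)
        grid = np.zeros((50, 50), dtype=int)   # 0 = empty, 1 = tumor cell
        grid[25, 25] = 1
        oxygen = np.ones((50, 50))

        for step in range(200):
            for x, y in np.argwhere(grid == 1):
                # Division probability falls as local oxygen is depleted,
                # a minimal stand-in for micro-environmental feedback.
                if rng.random() < 0.3 * oxygen[x, y]:
                    dx, dy = rng.integers(-1, 2, size=2)
                    nx, ny = (x + dx) % 50, (y + dy) % 50
                    if grid[nx, ny] == 0:
                        grid[nx, ny] = 1          # place a daughter cell
                oxygen[x, y] = max(0.0, oxygen[x, y] - 0.01)  # consumption

        print(grid.sum())  # colony size after 200 steps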

  14. Self-organization in the limb: a Turing mechanism for digit development.

    PubMed

    Cooper, Kimberly L

    2015-06-01

    The statistician George E. P. Box stated, 'Essentially all models are wrong, but some are useful.' (Box GEP, Draper NR: Empirical Model-Building and Response Surfaces. Wiley; 1987). Modeling biological processes is challenging, and for many of the same reasons classically trained developmental biologists often resist the idea that black-and-white equations can explain the grayscale subtleties of living things. Although a simplified mathematical model of development will undoubtedly fall short of precision, a good model is exceedingly useful if it raises at least as many testable questions as it answers. Self-organizing Turing models that simulate the pattern of digits in the hand replicate events that have not yet been explained by classical approaches. The union of theory and experimentation has recently identified and validated the minimal components of a Turing network for digit pattern and triggered a cascade of questions that will undoubtedly be well-served by the continued merging of disciplines. Copyright © 2015 Elsevier Ltd. All rights reserved.
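
    For readers wanting to see a Turing mechanism run, here is a 1-D Gierer-Meinhardt activator-inhibitor sketch with generic textbook-style parameters; it is not the validated digit network (which involves Sox9, Bmp, and Wnt), only the canonical self-organizing principle.

        import numpy as np

        n, dt = 100, 0.01
        Da, Dh = 0.5, 10.0                 # the inhibitor must diffuse faster
        rng = np.random.default_rng(5)
        a = np.ones(n) + 0.01 * rng.normal(size=n)   # activator
        h = np.ones(n) + 0.01 * rng.normal(size=n)   # inhibitor

        def lap(u):                        # periodic 1-D Laplacian, dx = 1
            return np.roll(u, 1) - 2 * u + np.roll(u, -1)

        for _ in range(5000):
            a += dt * (a * a / (h * (1 + 0.1 * a * a)) - a + Da * lap(a))
            h += dt * (a * a - 2 * h + Dh * lap(h))

        print((a > a.mean()).astype(int))  # alternating domains: stripe-like pattern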

  15. Deciphering Epithelial–Mesenchymal Transition Regulatory Networks in Cancer through Computational Approaches

    PubMed Central

    Burger, Gerhard A.; Danen, Erik H. J.; Beltman, Joost B.

    2017-01-01

    Epithelial–mesenchymal transition (EMT), the process by which epithelial cells can convert into motile mesenchymal cells, plays an important role in development and wound healing but is also involved in cancer progression. It is increasingly recognized that EMT is a dynamic process involving multiple intermediate or “hybrid” phenotypes rather than an “all-or-none” process. However, the role of EMT in various cancer hallmarks, including metastasis, is debated. Given the complexity of EMT regulation, computational modeling has proven to be an invaluable tool for cancer research, i.e., to resolve apparent conflicts in experimental data and to guide experiments by generating testable hypotheses. In this review, we provide an overview of computational modeling efforts that have been applied to regulation of EMT in the context of cancer progression and its associated tumor characteristics. Moreover, we identify possibilities to bridge different modeling approaches and point out outstanding questions in which computational modeling can contribute to advance our understanding of pathological EMT. PMID:28824874

  16. Using Backward Design in Education Research: A Research Methods Essay †

    PubMed Central

    Jensen, Jamie L.; Bailey, Elizabeth G.; Kummer, Tyler A.; Weber, K. Scott

    2017-01-01

    Education research within the STEM disciplines applies a scholarly approach to teaching and learning, with the intent of better understanding how people learn and of improving pedagogy at the undergraduate level. Most of the professionals practicing in this field have ‘crossed over’ from other disciplinary fields and thus have faced challenges in becoming experts in a new discipline. In this article, we offer a novel framework for approaching education research design called Backward Design in Education Research. It is patterned on backward curricular design and provides a three-step, systematic approach to designing education projects: 1) Define a research question that leads to a testable causal hypothesis based on a theoretical rationale; 2) Choose or design the assessment instruments to test the research hypothesis; and 3) Develop an experimental protocol that will be effective in testing the research hypothesis. This approach provides a systematic method to develop and carry out evidence-based research design. PMID:29854045

  17. Modeling T-cell activation using gene expression profiling and state-space models.

    PubMed

    Rangel, Claudia; Angus, John; Ghahramani, Zoubin; Lioumi, Maria; Sotheran, Elizabeth; Gaiba, Alessia; Wild, David L; Falciani, Francesco

    2004-06-12

    We have used state-space models to reverse engineer transcriptional networks from highly replicated gene expression profiling time series data obtained from a well-established model of T-cell activation. State space models are a class of dynamic Bayesian networks that assume that the observed measurements depend on some hidden state variables that evolve according to Markovian dynamics. These hidden variables can capture effects that cannot be measured in a gene expression profiling experiment, e.g. genes that have not been included in the microarray, levels of regulatory proteins, the effects of messenger RNA and protein degradation, etc. Bootstrap confidence intervals are developed for parameters representing 'gene-gene' interactions over time. Our models represent the dynamics of T-cell activation and provide a methodology for the development of rational and experimentally testable hypotheses. Supplementary data and Matlab computer source code will be made available on the web at the URL given below. http://public.kgi.edu/~wild/LDS/index.htm
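
    The model class is easy to state in code: a linear-Gaussian state-space model plus the Kalman filter that recovers the hidden regulators from noisy expression. The matrices below are invented stand-ins; the paper estimates them (and bootstrap intervals) from replicated T-cell time series.

        import numpy as np

        rng = np.random.default_rng(6)
        A = np.array([[0.9, 0.1], [0.0, 0.8]])              # hidden dynamics
        C = np.array([[1.0, 0.0], [0.5, 1.0], [0.2, 0.3]])  # 3 genes, 2 states
        Q, R = 0.01 * np.eye(2), 0.05 * np.eye(3)

        x, xs, ys = np.zeros(2), [], []
        for t in range(50):                                  # simulate the model
            x = A @ x + rng.multivariate_normal(np.zeros(2), Q) + (t == 0)
            ys.append(C @ x + rng.multivariate_normal(np.zeros(3), R))
            xs.append(x)

        m, P = np.zeros(2), np.eye(2)                        # Kalman filter
        for y in ys:
            m, P = A @ m, A @ P @ A.T + Q                    # predict
            K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)     # gain
            m, P = m + K @ (y - C @ m), (np.eye(2) - K @ C) @ P
        print(np.round(m, 2), np.round(xs[-1], 2))           # filtered vs true state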

  18. [Mechanisms of action of voltage-gated sodium channel ligands].

    PubMed

    Tikhonov, D B

    2007-05-01

    The voltage-gated sodium channels play a key role in the generation of action potentials in excitable cells. Sodium channels are targeted by a number of modulating ligands. Despite numerous studies, the mechanisms of action of many ligands are still unknown. The main cause of the problem is the absence of an experimentally determined channel structure. Sodium channels belong to the superfamily of P-loop channels, which also includes potassium and calcium channels and the channels of ionotropic glutamate receptors. Crystallization of several potassium channels has opened a possibility to analyze the structure of other members of the superfamily using the homology modeling approach. The present study summarizes the results of several recent modeling studies of such sodium channel ligands as tetrodotoxin, batrachotoxin and local anesthetics. Comparison of available experimental data with X-ray structures of potassium channels has provided a new level of understanding of the mechanisms of action of sodium channel ligands and has allowed several testable hypotheses to be proposed.

  19. On the importance of scientific rhetoric in stuttering: a reply to Finn, Bothe, and Bramlett (2005).

    PubMed

    Kalinowski, Joseph; Saltuklaroglu, Tim; Stuart, Andrew; Guntupalli, Vijaya K

    2007-02-01

    To refute the alleged practice of "pseudoscience" by P. Finn, A. K. Bothe, and R. E. Bramlett (2005) and to illustrate their experimental and systematic bias when evaluating the SpeechEasy, an altered auditory feedback device used in the management of stuttering. We challenged the experimental design that led to the seemingly predetermined outcome of pseudoscience rather than science: Limited preselected literature was submitted to a purposely sampled panel of judges (i.e., their own students). Each criterion deemed pseudoscientific was contested with published peer-reviewed data illustrating the importance of good rhetoric, testability, and logical outcomes from decades of scientific research. Stuttering is an involuntary disorder that is highly resistant to therapy. Altered auditory feedback is a derivation of choral speech (nature's most powerful stuttering "inhibitor") that can be synergistically combined with other methods for optimal stuttering inhibition. This approach is logical considering that in stuttering no single treatment is universally helpful. Also, caution is suggested when attempting to differentiate science from pseudoscience in stuttering treatments using the criteria employed by Finn et al. For example, evaluating behavioral therapy outcomes implements a post hoc or untestable system. Speech outcome (i.e., stuttered or fluent speech) determines success or failure of technique use, placing responsibility for failure on those who stutter.

  20. Synchronous versus asynchronous modeling of gene regulatory networks.

    PubMed

    Garg, Abhishek; Di Cara, Alessandro; Xenarios, Ioannis; Mendoza, Luis; De Micheli, Giovanni

    2008-09-01

    In silico modeling of gene regulatory networks has gained some momentum recently due to increased interest in analyzing the dynamics of biological systems. This has been further facilitated by the increasing availability of experimental data on gene-gene, protein-protein and gene-protein interactions. The two dynamical properties that are often experimentally testable are perturbations and stable steady states. Although a lot of work has been done on the identification of steady states, not much work has been reported on in silico modeling of cellular differentiation processes. In this manuscript, we provide algorithms based on reduced ordered binary decision diagrams (ROBDDs) for Boolean modeling of gene regulatory networks. Algorithms for synchronous and asynchronous transition models have been proposed and their corresponding computational properties have been analyzed. These algorithms allow users to compute cyclic attractors of large networks that are currently not feasible using existing software. Hereby we provide a framework to analyze the effect of multiple gene perturbation protocols, and their effect on cell differentiation processes. These algorithms were validated on the T-helper model showing the correct steady state identification and Th1-Th2 cellular differentiation process. The software binaries for Windows and Linux platforms can be downloaded from http://si2.epfl.ch/~garg/genysis.html.
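    To make the synchronous/asynchronous distinction concrete, here is a minimal sketch with a hypothetical three-gene Boolean network (plain enumeration rather than the paper's ROBDD machinery, which is what makes large networks tractable):

      from itertools import product

      # Hypothetical update rules for genes (A, B, C), each a function of the state.
      rules = [
          lambda s: s[2],                  # A' = C
          lambda s: s[0] and not s[2],     # B' = A AND NOT C
          lambda s: not s[1],              # C' = NOT B
      ]

      def sync_step(s):
          # Synchronous semantics: all genes update simultaneously.
          return tuple(int(r(s)) for r in rules)

      def async_successors(s):
          # Asynchronous semantics: one gene updates at a time.
          out = set()
          for i, r in enumerate(rules):
              t = list(s)
              t[i] = int(r(s))
              out.add(tuple(t))
          return out

      # Synchronous attractors: iterate the deterministic map until a state repeats.
      for start in product([0, 1], repeat=3):
          seen, s = [], start
          while s not in seen:
              seen.append(s)
              s = sync_step(s)
          print(start, "-> attractor", seen[seen.index(s):])

      print("async successors of (1, 0, 0):", async_successors((1, 0, 0)))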

  1. Actinobacteria phylogenomics, selective isolation from an iron oligotrophic environment and siderophore functional characterization, unveil new desferrioxamine traits.

    PubMed

    Cruz-Morales, Pablo; Ramos-Aboites, Hilda E; Licona-Cassani, Cuauhtémoc; Selem-Mójica, Nelly; Mejía-Ponce, Paulina M; Souza-Saldívar, Valeria; Barona-Gómez, Francisco

    2017-09-01

    Desferrioxamines are hydroxamate siderophores widely conserved in both aquatic and soil-dwelling Actinobacteria. While the genetic and enzymatic bases of siderophore biosynthesis and their transport in model families of this phylum are well understood, evolutionary studies are lacking. Here, we perform a comprehensive desferrioxamine-centric (des genes) phylogenomic analysis, which includes the genomes of six novel strains isolated from an iron- and phosphorus-depleted oasis in the Chihuahuan desert of Mexico. Our analyses reveal previously unnoticed desferrioxamine evolutionary patterns, involving both biosynthetic and transport genes, likely to be related to desferrioxamine chemical diversity. The identified patterns were used to postulate experimentally testable hypotheses after phenotypic characterization, including profiling of siderophore production and growth stimulation of co-cultures under iron deficiency. Based on our results, we propose a novel des gene, which we term desG, as responsible for incorporation of phenylacetyl moieties during biosynthesis of previously reported arylated desferrioxamines. Moreover, a genomic-based classification of the siderophore-binding proteins responsible for specific and generalist siderophore assimilation is postulated. This report provides a much-needed evolutionary framework, with specific insights supported by experimental data, to direct the future ecological and functional analysis of desferrioxamines in the environment. © FEMS 2017.

  2. Theoretical prediction and impact of fundamental electric dipole moments

    DOE PAGES

    Ellis, Sebastian A. R.; Kane, Gordon L.

    2016-01-13

    The predicted Standard Model (SM) electric dipole moments (EDMs) of electrons and quarks are tiny, providing an important window to observe new physics. Theories beyond the SM typically allow relatively large EDMs. The EDMs depend on the relative phases of terms in the effective Lagrangian of the extended theory, which are generally unknown. Underlying theories, such as string/M-theories compactified to four dimensions, could predict the phases and thus EDMs in the resulting supersymmetric (SUSY) theory. Earlier one of us, with collaborators, made such a prediction and found, unexpectedly, that the phases were predicted to be zero at tree level in the theory at the unification or string scale ~O(10^16 GeV). Electroweak (EW) scale EDMs still arise via running from the high scale, and depend only on the SM Yukawa couplings that also give the CKM phase. Here we extend the earlier work by studying the dependence of the low scale EDMs on the constrained but not fully known fundamental Yukawa couplings. The dominant contribution is from two loop diagrams and is not sensitive to the choice of Yukawa texture. The electron EDM should not be found to be larger than about 5 × 10^-30 e cm, and the neutron EDM should not be larger than about 5 × 10^-29 e cm. These values are quite a bit smaller than the reported predictions from Split SUSY and typical effective theories, but much larger than the Standard Model prediction. Also, since models with random phases typically give much larger EDMs, it is a significant testable prediction of compactified M-theory that the EDMs should not be above these upper limits. The actual EDMs can be below the limits, so once they are measured they could provide new insight into the fundamental Yukawa couplings of leptons and quarks. As a result, we comment also on the role of strong CP violation. EDMs probe fundamental physics near the Planck scale.

  3. Theoretical prediction and impact of fundamental electric dipole moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, Sebastian A. R.; Kane, Gordon L.

    The predicted Standard Model (SM) electric dipole moments (EDMs) of electrons and quarks are tiny, providing an important window to observe new physics. Theories beyond the SM typically allow relatively large EDMs. The EDMs depend on the relative phases of terms in the effective Lagrangian of the extended theory, which are generally unknown. Underlying theories, such as string/M-theories compactified to four dimensions, could predict the phases and thus EDMs in the resulting supersymmetric (SUSY) theory. Earlier one of us, with collaborators, made such a prediction and found, unexpectedly, that the phases were predicted to be zero at tree level in the theory at the unification or string scale ~O(10^16 GeV). Electroweak (EW) scale EDMs still arise via running from the high scale, and depend only on the SM Yukawa couplings that also give the CKM phase. Here we extend the earlier work by studying the dependence of the low scale EDMs on the constrained but not fully known fundamental Yukawa couplings. The dominant contribution is from two loop diagrams and is not sensitive to the choice of Yukawa texture. The electron EDM should not be found to be larger than about 5 × 10^-30 e cm, and the neutron EDM should not be larger than about 5 × 10^-29 e cm. These values are quite a bit smaller than the reported predictions from Split SUSY and typical effective theories, but much larger than the Standard Model prediction. Also, since models with random phases typically give much larger EDMs, it is a significant testable prediction of compactified M-theory that the EDMs should not be above these upper limits. The actual EDMs can be below the limits, so once they are measured they could provide new insight into the fundamental Yukawa couplings of leptons and quarks. As a result, we comment also on the role of strong CP violation. EDMs probe fundamental physics near the Planck scale.

  4. Automated Testability Decision Tool

    DTIC Science & Technology

    1991-09-01

    Vol. 16, 1968, pp. 538-558. Bertsekas, D. P., "Constrained Optimization and Lagrange Multiplier Methods," Academic Press, New York. McLeavey, D.W.; McLeavey, J.A., "Parallel Optimization Methods in Standby Reliability," University of Connecticut, School of Business Administration, Bureau of Business

  5. Role of the Epistemic Subject in Piaget's Genetic Epistemology and Its Importance for Science Education.

    ERIC Educational Resources Information Center

    Niaz, Mansoor

    1991-01-01

    Discusses differences between the epistemic and the psychological subject, the relationship between the epistemic subject and the ideal gas law, the development of general cognitive operations, and the empirical testability of Piaget's epistemic subject. (PR)

  6. Small Town in Mass Society Revisited.

    ERIC Educational Resources Information Center

    Young, Frank W.

    1996-01-01

    A 1958 New York community study dramatized the thesis that macro forces (urbanization, industrialization, bureaucratization) have undermined all small communities' autonomy. Such "oppositional case studies" succeed when they render the dominant view immediately obsolete, have plausible origins, are testable, and generate new research.…

  7. Testing the inhibitory cascade model in Mesozoic and Cenozoic mammaliaforms

    PubMed Central

    2013-01-01

    Background Much of the current research in the growing field of evolutionary development concerns relating developmental pathways to large-scale patterns of morphological evolution, with developmental constraints on variation, and hence diversity, a field of particular interest. Tooth morphology offers an excellent model system for such ‘evo-devo’ studies, because teeth are well preserved in the fossil record, and are commonly used in phylogenetic analyses and as ecological proxies. Moreover, tooth development is relatively well studied, and has provided several testable hypotheses of developmental influences on macroevolutionary patterns. The recently-described Inhibitory Cascade (IC) Model provides just such a hypothesis for mammalian lower molar evolution. Derived from experimental data, the IC Model suggests that a balance between mesenchymal activators and molar-derived inhibitors determines the size of the immediately posterior molar, predicting firstly that molars either decrease in size along the tooth row, or increase in size, or are all of equal size, and secondly that the second lower molar should occupy one third of lower molar area. Here, we tested the IC Model in a large selection of taxa from diverse extant and fossil mammalian groups, ranging from the Middle Jurassic (~176 to 161 Ma) to the Recent. Results Results show that most taxa (~65%) fell within the predicted areas of the Inhibitory Cascade Model. However, members of several extinct groups fell into the regions where m2 was largest, or rarely, smallest, including the majority of the polyphyletic “condylarths”. Most Mesozoic mammals fell near the centre of the space with equality of size in all three molars. The distribution of taxa was significantly clustered by diet and by phylogenetic group. Conclusions Overall, the IC Model was supported as a plesiomorphic developmental system for Mammalia, suggesting that mammal tooth size has been subjected to this developmental constraint at least since the divergence of australosphenidans and boreosphenidans approximately 180 Ma. Although exceptions exist, including many ‘condylarths’, these are most likely to be secondarily derived states, rather than alternative ancestral developmental models for Mammalia. PMID:23565593
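    The IC Model's two predictions can be checked with a few lines of arithmetic. In the activator-inhibitor formulation cited above, relative lower molar sizes follow m1 : m2 : m3 = 1 : r : 2r - 1, where r is the activator/inhibitor ratio (a sketch of the published relationship; the cutoff handling below is our simplification):

      # Relative molar sizes under the Inhibitory Cascade model.
      def molar_sizes(r):
          """r = activator/inhibitor ratio; m3 size cannot be negative."""
          return 1.0, r, max(2.0 * r - 1.0, 0.0)

      for r in (0.6, 1.0, 1.5):
          m1, m2, m3 = molar_sizes(r)
          total = m1 + m2 + m3
          print(f"a/i={r}: m1,m2,m3 = {m1:.2f},{m2:.2f},{m3:.2f}; m2 share = {m2/total:.3f}")
      # Whenever all three molars form (r > 0.5), sizes change monotonically
      # along the row and m2's share of total molar area is exactly 1/3,
      # which are the two predictions tested in the study.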

  8. Non-animal photosafety assessment approaches for cosmetics based on the photochemical and photobiochemical properties.

    PubMed

    Onoue, Satomi; Suzuki, Gen; Kato, Masashi; Hirota, Morihiko; Nishida, Hayato; Kitagaki, Masato; Kouzuki, Hirokazu; Yamada, Shizuo

    2013-12-01

    The main purpose of the present study was to establish a non-animal photosafety assessment approach for cosmetics using in vitro photochemical and photobiochemical screening systems. Fifty-one cosmetics, pharmaceutics and other chemicals were selected as model chemicals on the basis of animal and/or clinical photosafety information. The model chemicals were assessed in terms of photochemical properties by UV/VIS spectral analysis, reactive oxygen species (ROS) assay and 3T3 neutral red uptake phototoxicity testing (3T3 NRU PT). Most phototoxins exhibited potent UV/VIS absorption with molar extinction coefficients of over 1000 M^-1 cm^-1, although false-negative prediction occurred for 2 cosmetic phototoxins owing to weak UV/VIS absorption. Among all the cosmetic ingredients, ca. 42% of tested chemicals were non-testable in the ROS assay because of low water solubility; therefore, a micellar ROS (mROS) assay using a solubilizing surfactant was employed for follow-up screening. Upon combined use of the ROS and mROS assays, the individual specificity was 88.2%, and the positive and negative predictivities were estimated to be 94.4% and 100%, respectively. In the 3T3 NRU PT, 3 cosmetics and 4 drugs were incorrectly predicted not to be phototoxic, although some of them were typical photoallergens. Thus, these in vitro screening systems individually provide false predictions; however, a systematic tiered approach using these assays could provide reliable photosafety assessment without any false negatives. The combined use of in vitro assays might enable simple and fast non-animal photosafety evaluation of cosmetic ingredients. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Evaluation of phylogenetic footprint discovery for predicting bacterial cis-regulatory elements and revealing their evolution.

    PubMed

    Janky, Rekin's; van Helden, Jacques

    2008-01-23

    The detection of conserved motifs in promoters of orthologous genes (phylogenetic footprints) has become a common strategy to predict cis-acting regulatory elements. Several software tools are routinely used to raise hypotheses about regulation. However, these tools are generally used as black boxes, with default parameters. A systematic evaluation of optimal parameters for a footprint discovery strategy can bring a sizeable improvement to the predictions. We evaluate the performances of a footprint discovery approach based on the detection of over-represented spaced motifs. This method is particularly suitable for (but not restricted to) Bacteria, since such motifs are typically bound by factors containing a Helix-Turn-Helix domain. We evaluated footprint discovery in 368 Escherichia coli K12 genes with annotated sites, under 40 different combinations of parameters (taxonomical level, background model, organism-specific filtering, operon inference). Motifs are assessed both at the levels of correctness and significance. We further report a detailed analysis of 181 bacterial orthologs of the LexA repressor. Distinct motifs are detected at various taxonomical levels, including the 7 previously characterized taxon-specific motifs. In addition, we highlight a significantly stronger conservation of half-motifs in Actinobacteria, relative to Firmicutes, suggesting an intermediate state in specificity switching between the two Gram-positive phyla, and thereby revealing the on-going evolution of LexA auto-regulation. The footprint discovery method proposed here shows excellent results with E. coli and can readily be extended to predict cis-acting regulatory signals and propose testable hypotheses in bacterial genomes for which nothing is known about regulation.
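    The core counting step of such a strategy is easy to sketch (a toy illustration of spaced-motif counting, not the software evaluated in the study): enumerate pairs of short oligonucleotides separated by a variable spacer, the signature typically bound by Helix-Turn-Helix factors, and tally their occurrences across promoters before testing for over-representation.

      from collections import Counter

      def count_dyads(seqs, half=3, spacings=range(0, 21)):
          """Count spaced pairs of trinucleotides (dyads) across sequences."""
          counts = Counter()
          for seq in seqs:
              for i in range(len(seq) - 2 * half + 1):
                  left = seq[i:i + half]
                  for n in spacings:
                      j = i + half + n
                      if j + half <= len(seq):
                          counts[(left, n, seq[j:j + half])] += 1
          return counts

      # Toy promoter set containing an inverted-repeat-like dyad (TTGACA ... TGTCAA).
      promoters = ["ACGTTGACATTTTTTGTCAACG", "TTGACAGGGGGGTGTCAA"]
      print(count_dyads(promoters).most_common(3))
      # Over-representation would then be scored against dyad frequencies
      # expected under a background model (e.g., all promoters of the genome).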

  10. Disturbance, neutral theory, and patterns of beta diversity in soil communities.

    PubMed

    Maaß, Stefanie; Migliorini, Massimo; Rillig, Matthias C; Caruso, Tancredi

    2014-12-01

    Beta diversity describes how local communities within an area or region differ in species composition/abundance. There have been attempts to use changes in beta diversity as a biotic indicator of disturbance, but lack of theory and methodological caveats have hampered progress. We here propose that the neutral theory of biodiversity plus the definition of beta diversity as the total variance of a community matrix provide a suitable, novel, starting point for ecological applications. Observed levels of beta diversity (BD) can be compared to neutral predictions with three possible outcomes: Observed BD equals neutral prediction or is larger (divergence) or smaller (convergence) than the neutral prediction. Disturbance might lead to either divergence or convergence, depending on type and strength. We here apply these ideas to datasets collected on oribatid mites (a key, very diverse soil taxon) under several regimes of disturbances. When disturbance is expected to increase the heterogeneity of soil spatial properties or the sampling strategy encompassed a range of diverging environmental conditions, we observed diverging assemblages. On the contrary, we observed patterns consistent with neutrality when disturbance could determine homogenization of soil properties in space or the sampling strategy encompassed fairly homogeneous areas. With our method, spatial and temporal changes in beta diversity can be directly and easily monitored to detect significant changes in community dynamics, although the method itself cannot inform on underlying mechanisms. However, human-driven disturbances and the spatial scales at which they operate are usually known. In this case, our approach allows the formulation of testable predictions in terms of expected changes in beta diversity, thereby offering a promising monitoring tool.
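    The proposed definition translates directly into code (a minimal sketch; the Hellinger transform is one common pre-transformation choice, not necessarily the one used in the study):

      import numpy as np

      def total_beta_diversity(Y):
          """Beta diversity as the total variance of the community matrix:
          sum of squared deviations from species (column) means, over n - 1."""
          Yc = Y - Y.mean(axis=0)
          return (Yc ** 2).sum() / (Y.shape[0] - 1)

      # Sites x species abundances (toy data), Hellinger-transformed so the
      # variance is not dominated by the most abundant species.
      counts = np.array([[10.0, 0.0, 3.0], [4.0, 4.0, 4.0], [0.0, 9.0, 1.0]])
      hellinger = np.sqrt(counts / counts.sum(axis=1, keepdims=True))
      print("observed BD_total:", total_beta_diversity(hellinger))
      # The observed value would then be compared with the distribution of
      # BD_total under simulated neutral communities to detect divergence
      # or convergence.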

  11. The evolutionary neuroscience of musical beat perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis

    PubMed Central

    Patel, Aniruddh D.; Iversen, John R.

    2013-01-01

    Every human culture has some form of music with a beat: a perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This “action simulation for auditory prediction” (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi. PMID:24860439

  12. Making robust policy decisions using global biodiversity indicators.

    PubMed

    Nicholson, Emily; Collen, Ben; Barausse, Alberto; Blanchard, Julia L; Costelloe, Brendan T; Sullivan, Kathryn M E; Underwood, Fiona M; Burn, Robert W; Fritz, Steffen; Jones, Julia P G; McRae, Louise; Possingham, Hugh P; Milner-Gulland, E J

    2012-01-01

    In order to influence global policy effectively, conservation scientists need to be able to provide robust predictions of the impact of alternative policies on biodiversity and measure progress towards goals using reliable indicators. We present a framework for using biodiversity indicators predictively to inform policy choices at a global level. The approach is illustrated with two case studies in which we project forwards the impacts of feasible policies on trends in biodiversity and in relevant indicators. The policies are based on targets agreed at the Convention on Biological Diversity (CBD) meeting in Nagoya in October 2010. The first case study compares protected area policies for African mammals, assessed using the Red List Index; the second example uses the Living Planet Index to assess the impact of a complete halt, versus a reduction, in bottom trawling. In the protected areas example, we find that the indicator can aid in decision-making because it is able to differentiate between the impacts of the different policies. In the bottom trawling example, the indicator exhibits some counter-intuitive behaviour, due to over-representation of some taxonomic and functional groups in the indicator, and contrasting impacts of the policies on different groups caused by trophic interactions. Our results support the need for further research on how to use predictive models and indicators to credibly track trends and inform policy. To be useful and relevant, scientists must make testable predictions about the impact of global policy on biodiversity to ensure that targets such as those set at Nagoya catalyse effective and measurable change.

  13. Making Robust Policy Decisions Using Global Biodiversity Indicators

    PubMed Central

    Nicholson, Emily; Collen, Ben; Barausse, Alberto; Blanchard, Julia L.; Costelloe, Brendan T.; Sullivan, Kathryn M. E.; Underwood, Fiona M.; Burn, Robert W.; Fritz, Steffen; Jones, Julia P. G.; McRae, Louise; Possingham, Hugh P.; Milner-Gulland, E. J.

    2012-01-01

    In order to influence global policy effectively, conservation scientists need to be able to provide robust predictions of the impact of alternative policies on biodiversity and measure progress towards goals using reliable indicators. We present a framework for using biodiversity indicators predictively to inform policy choices at a global level. The approach is illustrated with two case studies in which we project forwards the impacts of feasible policies on trends in biodiversity and in relevant indicators. The policies are based on targets agreed at the Convention on Biological Diversity (CBD) meeting in Nagoya in October 2010. The first case study compares protected area policies for African mammals, assessed using the Red List Index; the second example uses the Living Planet Index to assess the impact of a complete halt, versus a reduction, in bottom trawling. In the protected areas example, we find that the indicator can aid in decision-making because it is able to differentiate between the impacts of the different policies. In the bottom trawling example, the indicator exhibits some counter-intuitive behaviour, due to over-representation of some taxonomic and functional groups in the indicator, and contrasting impacts of the policies on different groups caused by trophic interactions. Our results support the need for further research on how to use predictive models and indicators to credibly track trends and inform policy. To be useful and relevant, scientists must make testable predictions about the impact of global policy on biodiversity to ensure that targets such as those set at Nagoya catalyse effective and measurable change. PMID:22815938

  14. A tweaking principle for executive control: neuronal circuit mechanism for rule-based task switching and conflict resolution.

    PubMed

    Ardid, Salva; Wang, Xiao-Jing

    2013-12-11

    A hallmark of executive control is the brain's agility to shift between different tasks depending on the behavioral rule currently in play. In this work, we propose a "tweaking hypothesis" for task switching: a weak rule signal provides a small bias that is dramatically amplified by reverberating attractor dynamics in neural circuits for stimulus categorization and action selection, leading to an all-or-none reconfiguration of sensory-motor mapping. Based on this principle, we developed a biologically realistic model with multiple modules for task switching. We found that the model quantitatively accounts for complex task switching behavior (switch cost, congruency effect, and task-response interaction) as well as monkeys' single-neuron activity associated with task switching. The model yields several testable predictions, in particular, that category-selective neurons play a key role in resolving sensory-motor conflict. This work represents a neural circuit model for task switching and provides insight into the brain mechanisms of a fundamental cognitive capability.

  15. Not all emotions are created equal: The negativity bias in social-emotional development

    PubMed Central

    Vaish, Amrisha; Grossmann, Tobias; Woodward, Amanda

    2013-01-01

    There is ample empirical evidence for an asymmetry in the way that adults use positive versus negative information to make sense of their world; specifically, across an array of psychological situations and tasks, adults display a negativity bias, or the propensity to attend to, learn from, and use negative information far more than positive information. This bias is argued to serve critical evolutionarily adaptive functions, but its developmental presence and ontogenetic emergence have never seriously been considered. Here, we argue for the existence of the negativity bias in early development, evident especially in research on infant social referencing but also in other developmental domains. We discuss ontogenetic mechanisms underlying the emergence of this bias, and explore not only its evolutionary but also its developmental functions and consequences. Throughout, we suggest ways to further examine the negativity bias in infants and older children, and we make testable predictions that would help clarify the nature of the negativity bias during early development. PMID:18444702

  16. Single-Dose Testosterone Administration Impairs Cognitive Reflection in Men.

    PubMed

    Nave, Gideon; Nadler, Amos; Zava, David; Camerer, Colin

    2017-10-01

    In nonhumans, the sex steroid testosterone regulates reproductive behaviors such as fighting between males and mating. In humans, correlational studies have linked testosterone with aggression and disorders associated with poor impulse control, but the neuropsychological processes at work are poorly understood. Building on a dual-process framework, we propose a mechanism underlying testosterone's behavioral effects in humans: reduction in cognitive reflection. In the largest study of behavioral effects of testosterone administration to date, 243 men received either testosterone or placebo and took the Cognitive Reflection Test (CRT), which estimates the capacity to override incorrect intuitive judgments with deliberate correct responses. Testosterone administration reduced CRT scores. The effect remained after we controlled for age, mood, math skills, whether participants believed they had received the placebo or testosterone, and the effects of 14 additional hormones, and it held for each of the CRT questions in isolation. Our findings suggest a mechanism underlying testosterone's diverse effects on humans' judgments and decision making and provide novel, clear, and testable predictions.

  17. Topographic variations in chaos on Europa: Implications for diapiric formation

    NASA Technical Reports Server (NTRS)

    Schenk, Paul M.; Pappalardo, Robert T.

    2004-01-01

    Disrupted terrain, or chaos, on Europa, might have formed through melting of a floating ice shell from a subsurface ocean [Carr et al., 1998; Greenberg et al., 1999], or breakup by diapirs rising from the warm lower portion of the ice shell [Head and Pappalardo, 1999; Collins et al., 2000]. Each model makes specific and testable predictions for topographic expression within chaos and relative to surrounding terrains on local and regional scales. High-resolution stereo-controlled photoclinometric topography indicates that chaos topography, including the archetypal Conamara Chaos region, is uneven and commonly higher than surrounding plains by up to 250 m. Elevated and undulating topography is more consistent with diapiric uplift of deep material in a relatively thick ice shell, rather than melt-through and refreezing of regionally or globally thin ice by a subsurface ocean. Vertical and horizontal scales of topographic doming in Conamara Chaos are consistent with a total ice shell thickness >15 km. Contact between Europa's ocean and surface may most likely be indirectly via diapirism or convection.

  18. Topographic variations in chaos on Europa: Implications for diapiric formation

    NASA Astrophysics Data System (ADS)

    Schenk, Paul M.; Pappalardo, Robert T.

    2004-08-01

    Disrupted terrain, or chaos, on Europa, might have formed through melting of a floating ice shell from a subsurface ocean [Carr et al., 1998; Greenberg et al., 1999], or breakup by diapirs rising from the warm lower portion of the ice shell [Head and Pappalardo, 1999; Collins et al., 2000]. Each model makes specific and testable predictions for topographic expression within chaos and relative to surrounding terrains on local and regional scales. High-resolution stereo-controlled photoclinometric topography indicates that chaos topography, including the archetypal Conamara Chaos region, is uneven and commonly higher than surrounding plains by up to 250 m. Elevated and undulating topography is more consistent with diapiric uplift of deep material in a relatively thick ice shell, rather than melt-through and refreezing of regionally or globally thin ice by a subsurface ocean. Vertical and horizontal scales of topographic doming in Conamara Chaos are consistent with a total ice shell thickness >15 km. Contact between Europa's ocean and surface may most likely be indirectly via diapirism or convection.

  19. Mechanisms of mindfulness training: Monitor and Acceptance Theory (MAT).

    PubMed

    Lindsay, Emily K; Creswell, J David

    2017-02-01

    Despite evidence linking trait mindfulness and mindfulness training with a broad range of effects, still little is known about its underlying active mechanisms. Mindfulness is commonly defined as (1) the ongoing monitoring of present-moment experience (2) with an orientation of acceptance. Building on conceptual, clinical, and empirical work, we describe a testable theoretical account to help explain mindfulness effects on cognition, affect, stress, and health outcomes. Specifically, Monitor and Acceptance Theory (MAT) posits that (1), by enhancing awareness of one's experiences, the skill of attention monitoring explains how mindfulness improves cognitive functioning outcomes, yet this same skill can increase affective reactivity. Second (2), by modifying one's relation to monitored experience, acceptance is necessary for reducing affective reactivity, such that attention monitoring and acceptance skills together explain how mindfulness improves negative affectivity, stress, and stress-related health outcomes. We discuss how MAT contributes to mindfulness science, suggest plausible alternatives to the account, and offer specific predictions for future research. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Mechanisms of Mindfulness Training: Monitor and Acceptance Theory (MAT)

    PubMed Central

    Lindsay, Emily K.; Creswell, J. David

    2016-01-01

    Despite evidence linking trait mindfulness and mindfulness training with a broad range of effects, still little is known about its underlying active mechanisms. Mindfulness is commonly defined as (1) the ongoing monitoring of present-moment experience (2) with an orientation of acceptance. Building on conceptual, clinical, and empirical work, we describe a testable theoretical account to help explain mindfulness effects on cognition, affect, stress, and health outcomes. Specifically, Monitor and Acceptance Theory (MAT) posits that (1), by enhancing awareness of one’s experiences, the skill of attention monitoring explains how mindfulness improves cognitive functioning outcomes, yet this same skill can increase affective reactivity. Second (2), by modifying one’s relation to monitored experience, acceptance is necessary for reducing affective reactivity, such that attention monitoring and acceptance skills together explain how mindfulness improves negative affectivity, stress, and stress-related health outcomes. We discuss how MAT contributes to mindfulness science, suggest plausible alternatives to the account, and offer specific predictions for future research. PMID:27835764

  1. Hybrid regulatory models: a statistically tractable approach to model regulatory network dynamics.

    PubMed

    Ocone, Andrea; Millar, Andrew J; Sanguinetti, Guido

    2013-04-01

    Computational modelling of the dynamics of gene regulatory networks is a central task of systems biology. For networks of small/medium scale, the dominant paradigm is represented by systems of coupled non-linear ordinary differential equations (ODEs). ODEs afford great mechanistic detail and flexibility, but calibrating these models to data is often an extremely difficult statistical problem. Here, we develop a general statistical inference framework for stochastic transcription-translation networks. We use a coarse-grained approach, which represents the system as a network of stochastic (binary) promoter and (continuous) protein variables. We derive an exact inference algorithm and an efficient variational approximation that allows scalable inference and learning of the model parameters. We demonstrate the power of the approach on two biological case studies, showing that the method allows a high degree of flexibility and is capable of generating novel, testable biological predictions. The software is available at http://homepages.inf.ed.ac.uk/gsanguin/software.html. Supplementary data are available at Bioinformatics online.
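    The coarse-grained model class is straightforward to simulate forward (a sketch under illustrative parameters, not the authors' inference code, which is the paper's actual contribution): a binary promoter switches state stochastically and drives a continuous protein concentration.

      import numpy as np

      rng = np.random.default_rng(1)

      k_on, k_off = 0.1, 0.05   # promoter ON/OFF switching rates (illustrative)
      beta, gamma = 2.0, 0.2    # protein production when ON, degradation rate
      dt, T = 0.1, 200.0

      t, mu, x = 0.0, 0, 0.0    # time, promoter state (0/1), protein level
      while t < T:
          # The promoter flips with probability rate * dt in each small step.
          rate = k_on if mu == 0 else k_off
          if rng.random() < rate * dt:
              mu = 1 - mu
          x += (beta * mu - gamma * x) * dt   # dx/dt = beta*mu - gamma*x
          t += dt
      print(f"final promoter state: {mu}, final protein level: {x:.2f}")

    Inference then runs in the opposite direction: given noisy observations of the protein level, recover the posterior over the hidden promoter trajectory and the rate parameters.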

  2. New streams and springs after the 2014 Mw6.0 South Napa earthquake

    PubMed Central

    Wang, Chi-Yuen; Manga, Michael

    2015-01-01

    Many streams and springs, which were dry or nearly dry before the 2014 Mw6.0 South Napa earthquake, started to flow after the earthquake. A United States Geological Survey stream gauge also registered a coseismic increase in discharge. Public interest was heightened by a state of extreme drought in California. Since the new flows were not contaminated by pre-existing surface water, their composition allowed unambiguous identification of their origin. Following the earthquake we repeatedly surveyed the new flows, collecting data to test hypotheses about their origin. We show that the new flows originated from groundwater in nearby mountains released by the earthquake. The estimated total amount of new water is ∼10^6 m^3, about 1/40 of the annual water use in the Napa–Sonoma area. Our model also makes a testable prediction of a post-seismic decrease of seismic velocity in the shallow crust of the affected region. PMID:26158898

  3. Theory and Metatheory in the Study of Dual Processing: Reply to Comments.

    PubMed

    Evans, Jonathan St B T; Stanovich, Keith E

    2013-05-01

    In this article, we respond to the four comments on our target article. Some of the commentators suggest that we have formulated our proposals in a way that renders our account of dual-process theory untestable and less interesting than the broad theory that has been critiqued in recent literature. Our response is that there is a confusion of levels. Falsifiable predictions occur not at the level of paradigm or metatheory, where this debate is taking place, but rather in the instantiation of such a broad framework in task-level models. Our proposal that many dual-processing characteristics are only correlated features does not weaken the testability of task-level dual-processing accounts. We also respond to arguments that types of processing are not qualitatively distinct and discuss specific evidence disputed by the commentators. Finally, we welcome the constructive comments of one commentator who provides strong arguments for the reality of the dual-process distinction. © The Author(s) 2013.

  4. Niche construction, sources of selection and trait coevolution.

    PubMed

    Laland, Kevin; Odling-Smee, John; Endler, John

    2017-10-06

    Organisms modify and choose components of their local environments. This 'niche construction' can alter ecological processes, modify natural selection and contribute to inheritance through ecological legacies. Here, we propose that niche construction initiates and modifies the selection directly affecting the constructor, and on other species, in an orderly, directed and sustained manner. By dependably generating specific environmental states, niche construction co-directs adaptive evolution by imposing a consistent statistical bias on selection. We illustrate how niche construction can generate this evolutionary bias by comparing it with artificial selection. We suggest that it occupies the middle ground between artificial and natural selection. We show how the perspective leads to testable predictions related to: (i) reduced variance in measures of responses to natural selection in the wild; (ii) multiple trait coevolution, including the evolution of sequences of traits and patterns of parallel evolution; and (iii) a positive association between niche construction and biodiversity. More generally, we submit that evolutionary biology would benefit from greater attention to the diverse properties of all sources of selection.

  5. Visual attention and flexible normalization pools

    PubMed Central

    Schwartz, Odelia; Coen-Cagli, Ruben

    2013-01-01

    Attention to a spatial location or feature in a visual scene can modulate the responses of cortical neurons and affect perceptual biases in illusions. We add attention to a cortical model of spatial context based on a well-founded account of natural scene statistics. The cortical model amounts to a generalized form of divisive normalization, in which the surround is in the normalization pool of the center target only if they are considered statistically dependent. Here we propose that attention influences this computation by accentuating the neural unit activations at the attended location, and that the amount of attentional influence of the surround on the center thus depends on whether center and surround are deemed in the same normalization pool. The resulting form of model extends a recent divisive normalization model of attention (Reynolds & Heeger, 2009). We simulate cortical surround orientation experiments with attention and show that the flexible model is suitable for capturing additional data and makes nontrivial testable predictions. PMID:23345413
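    The proposed computation can be sketched in a few lines (illustrative values; the actual model is fit to cortical data): attention multiplicatively scales unit drives, and the surround divides the center's response only when it is included in the normalization pool.

      import numpy as np

      def center_response(drive, attn_gain, in_pool, sigma=1.0):
          """Divisive normalization with an attention field.
          drive[0] is the center unit; the rest are surround units."""
          e = drive * attn_gain          # attention scales the excitatory drive
          pool = (e * in_pool).sum()     # flexible, statistics-dependent pool
          return e[0] / (sigma + pool)

      drive = np.array([10.0, 8.0, 8.0])    # center + two surround units
      attend = np.array([2.0, 1.0, 1.0])    # attention directed at the center

      # Surround deemed statistically dependent on the center (in the pool):
      print(center_response(drive, attend, np.array([1, 1, 1])))
      # Surround deemed independent (excluded from the pool):
      print(center_response(drive, attend, np.array([1, 0, 0])))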

  6. Flood basalts and mass extinctions

    NASA Technical Reports Server (NTRS)

    Morgan, W. Jason

    1988-01-01

    There appears to be a correlation between the times of flood basalts and mass-extinction events. There is also a correlation between flood basalts and hotspot tracks: flood basalts appear to mark the beginning of a new hotspot. Perhaps there is an initial instability in the mantle that bursts forth as a flood basalt but then becomes a steady trickle that persists for many tens of millions of years. Suppose that flood basalts, and not impacts, cause the environmental changes that lead to mass extinctions. This is a very testable hypothesis: it predicts that the ages of the flows should agree exactly with the times of extinctions. The Deccan and K-T ages agree with this hypothesis; an iridium anomaly at extinction boundaries apparently can be explained by a scaled-up eruption of the Hawaiian type; the occurrence of shocked quartz is more of a problem. However, if the flood basalts are all well dated and their ages indeed agree with extinction times, then surely some mechanism to appropriately produce shocked quartz will be found.

  7. Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks

    PubMed Central

    2018-01-01

    Much of the information the brain processes and stores is temporal in nature—a spoken word or a handwritten signature, for example, is defined by how it unfolds in time. However, it remains unclear how neural circuits encode complex time-varying patterns. We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits. The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds—we show that this temporal invariance emerges because the recurrent dynamics generate neural trajectories with appropriately modulated angular velocities. Together our results generate testable predictions as to how recurrent networks may use different mechanisms to generalize across the relevant spatial and temporal features of complex time-varying stimuli. PMID:29537963
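    A minimal recurrent rate network of the kind tuned in such studies can be written down directly (a generic sketch, not the trained model from the paper; gain and size values are illustrative):

      import numpy as np

      rng = np.random.default_rng(2)
      N, T, dt, tau = 200, 300, 1.0, 10.0
      g = 1.5                                            # recurrent gain
      W = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # recurrent weights

      x = 0.5 * rng.normal(size=N)                       # network state
      traj = np.zeros((T, N))
      for t in range(T):
          r = np.tanh(x)                                 # firing rates
          x += dt / tau * (-x + W @ r)                   # leaky recurrent dynamics
          traj[t] = r
      print("trajectory shape:", traj.shape)

      # Tuning the weights (e.g., by gradient-based training) turns such
      # trajectories into stable, noise-robust encodings of time-varying inputs;
      # temporal invariance then corresponds to traversing the same trajectory
      # at a modulated angular velocity.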

  8. End-of-life healthcare expenditure: Testing economic explanations using a discrete choice experiment.

    PubMed

    Fischer, Barbara; Telser, Harry; Zweifel, Peter

    2018-06-07

    Healthcare expenditure (HCE) spent during an individual's last year of life accounts for a high share of lifetime HCE. This finding is puzzling because an investment in health is unlikely to have a sufficiently long payback period. However, Becker et al. (2007) and Philipson et al. (2010) have advanced a theory designed to explain high willingness to pay (WTP) for an extension of life close to its end. Their testable implications are complemented by the concept of 'pain of risk bearing' introduced by Eeckhoudt and Schlesinger (2006). They are tested using a discrete choice experiment performed in 2014, involving 1,529 Swiss adults. An individual setting where the price attribute is substantial out-of-pocket payment for a novel drug for treatment of terminal cancer is distinguished from a societal one, where it is an increase in contributions to social health insurance. Most of the economic predictions receive empirical support. Copyright © 2018. Published by Elsevier B.V.

  9. The spatial extent of star formation in interacting galaxies

    NASA Astrophysics Data System (ADS)

    Moreno, Jorge

    2015-08-01

    We employ a suite of 75 simulations of galaxies in idealized major mergers (stellar mass ratio ~2.5:1), with a wide range of orbital parameters, to investigate the spatial extent of interaction-induced star formation. Although the total star formation in galaxy encounters is generally elevated relative to isolated galaxies, we find that this elevation is a combination of intense enhancements within the central kpc and moderately suppressed activity at larger galactocentric radii. The radial dependence of the star formation enhancement is stronger in the less massive galaxy than in the primary, and is also more pronounced in mergers of more closely aligned disc spin orientations. Conversely, these trends are almost entirely independent of the encounter’s impact parameter and orbital eccentricity. Our predictions of the radial dependence of triggered star formation, and specifically the suppression of star formation beyond kpc-scales, will be testable with the next generation of integral-field spectroscopic surveys. Co-authors: Paul Torrey, Sara Ellison, David Patton, Asa Bluck, Gunjan Bansal & Lars Hernquist

  10. Integrating Environmental Genomics and Biogeochemical Models: a Gene-centric Approach

    NASA Astrophysics Data System (ADS)

    Reed, D. C.; Algar, C. K.; Huber, J. A.; Dick, G.

    2013-12-01

    Rapid advances in molecular microbial ecology have yielded an unprecedented amount of data about the evolutionary relationships and functional traits of microbial communities that regulate global geochemical cycles. Biogeochemical models, however, are trailing in the wake of the environmental genomics revolution and such models rarely incorporate explicit representations of bacteria and archaea, nor are they compatible with nucleic acid or protein sequence data. Here, we present a functional gene-based framework for describing microbial communities in biogeochemical models that uses genomics data and provides predictions that are readily testable using cutting-edge molecular tools. To demonstrate the approach in practice, nitrogen cycling in the Arabian Sea oxygen minimum zone (OMZ) was modelled to examine key questions about cryptic sulphur cycling and dinitrogen production pathways in OMZs. By directly linking geochemical dynamics to the genetic composition of microbial communities, the method provides mechanistic insights into patterns and biogeochemical consequences of marine microbes. Such an approach is critical for informing our understanding of the key role microbes play in modulating Earth's biogeochemistry.
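    The gene-centric coupling can be illustrated with a single reaction (a schematic sketch; parameter values are hypothetical and units depend on how gene abundance is normalized): the rate law scales a Monod term by the abundance of the functional gene encoding the pathway.

      def gene_centric_rate(vmax, gene_abundance, substrate, half_sat):
          """Reaction rate proportional to functional-gene abundance,
          with Monod (saturating) dependence on the limiting substrate."""
          return vmax * gene_abundance * substrate / (half_sat + substrate)

      # Illustrative: a nitrate-reducing step in an OMZ water parcel.
      rate = gene_centric_rate(vmax=2.0e-6, gene_abundance=1.0e7,
                               substrate=15.0, half_sat=4.0)
      print(f"volumetric rate: {rate:.3g}")

    In a full model, gene abundances would in turn grow or decay with the energy yielded by the reactions they catalyze, closing the loop between geochemistry and community composition.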

  11. Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks.

    PubMed

    Goudar, Vishwa; Buonomano, Dean V

    2018-03-14

    Much of the information the brain processes and stores is temporal in nature-a spoken word or a handwritten signature, for example, is defined by how it unfolds in time. However, it remains unclear how neural circuits encode complex time-varying patterns. We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits. The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds-we show that this temporal invariance emerges because the recurrent dynamics generate neural trajectories with appropriately modulated angular velocities. Together our results generate testable predictions as to how recurrent networks may use different mechanisms to generalize across the relevant spatial and temporal features of complex time-varying stimuli. © 2018, Goudar et al.

  12. Simple Model for Identifying Critical Regions in Atrial Fibrillation

    NASA Astrophysics Data System (ADS)

    Christensen, Kim; Manani, Kishan A.; Peters, Nicholas S.

    2015-01-01

    Atrial fibrillation (AF) is the most common abnormal heart rhythm and the single biggest cause of stroke. Ablation, destroying regions of the atria, is applied largely empirically and can be curative but with a disappointing clinical success rate. We design a simple model of activation wave front propagation on an anisotropic structure mimicking the branching network of heart muscle cells. This integration of phenomenological dynamics and pertinent structure shows how AF emerges spontaneously when the transverse cell-to-cell coupling decreases, as occurs with age, beyond a threshold value. We identify critical regions responsible for the initiation and maintenance of AF, the ablation of which terminates AF. The simplicity of the model allows us to calculate analytically the risk of arrhythmia and express the threshold value of transversal cell-to-cell coupling as a function of the model parameters. This threshold value decreases with increasing refractory period by reducing the number of critical regions which can initiate and sustain microreentrant circuits. These biologically testable predictions might inform ablation therapies and arrhythmic risk assessment.
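    The flavor of such a model can be captured in a short cellular-automaton sketch (ours, not the authors' code; parameter values are illustrative): cells are always coupled along fibers, coupled across fibers only with probability nu, and are refractory for tau steps after firing.

      import numpy as np

      rng = np.random.default_rng(3)
      L, nu, tau, period, steps = 50, 0.2, 25, 110, 400
      down = rng.random((L, L)) < nu        # transverse couplings, present w.p. nu
      state = np.zeros((L, L), dtype=int)   # 0 = resting; >0 = refractory countdown

      for step in range(steps):
          active = state == tau             # cells that fired on the previous step
          # Longitudinal spread (cells always coupled along rows):
          excite = np.roll(active, 1, axis=1) | np.roll(active, -1, axis=1)
          # Transverse spread only across existing couplings (down[i] couples row i to i+1):
          excite |= np.roll(active & down, 1, axis=0)
          excite |= np.roll(active & np.roll(down, 1, axis=0), -1, axis=0)
          if step % period == 0:
              excite[:, 0] = True           # periodic pacemaker on one edge
          fire = excite & (state == 0)      # refractory cells cannot re-fire
          state[state > 0] -= 1
          state[fire] = tau
      print("cells firing at the end:", int((state == tau).sum()))

      # Periodic boundaries are used for brevity. Lowering nu below a threshold
      # lets wavefronts break around uncoupled regions, and self-sustaining
      # microreentrant circuits (the model's analogue of AF) emerge spontaneously.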

  13. Conway's "Game of Life" and the Epigenetic Principle.

    PubMed

    Caballero, Lorena; Hodge, Bob; Hernandez, Sergio

    2016-01-01

    Cellular automatons and computer simulation games are widely used as heuristic devices in biology, to explore implications and consequences of specific theories. Conway's Game of Life has been widely used for this purpose. This game was designed to explore the evolution of ecological communities. We apply it to other biological processes, including symbiopoiesis. We show that Conway's organization of rules reflects the epigenetic principle, that genetic action and developmental processes are inseparable dimensions of a single biological system, analogous to the integration processes in symbiopoiesis. We look for similarities and differences between two epigenetic models, by Turing and Edelman, as they are realized in Game of Life objects. We show the value of computer simulations to experiment with and propose generalizations of broader scope with novel testable predictions. We use the game to explore issues in symbiopoiesis and evo-devo, where we explore a fractal hypothesis: that self-similarity exists at different levels (cells, organisms, ecological communities) as a result of homologous interactions of two processes as modeled in the Game of Life.
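    For reference, the game itself takes only a few lines (a standard NumPy implementation with periodic boundaries): a dead cell with exactly three live neighbours is born, and a live cell survives with two or three.

      import numpy as np

      def life_step(grid):
          """One synchronous Game of Life update (periodic boundaries)."""
          n = sum(np.roll(np.roll(grid, di, 0), dj, 1)
                  for di in (-1, 0, 1) for dj in (-1, 0, 1)
                  if (di, dj) != (0, 0))
          return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

      glider = np.zeros((8, 8), dtype=int)
      glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
      g = glider
      for _ in range(4):                    # a glider reappears shifted by (1, 1)
          g = life_step(g)
      print(np.argwhere(g == 1))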

  14. Cognitive Scientists Prefer Theories and Testable Principles with Teeth

    ERIC Educational Resources Information Center

    Graesser, Arthur C.

    2009-01-01

    Alexander, Schallert, and Reynolds (2009/this issue) proposed a definition and landscape of learning that included 9 principles and 4 dimensions ("what," "who," "where," "when"). This commentary reflects on the utility of this definition and 4-dimensional landscape from the standpoint of educational…

  15. A systems framework for identifying candidate microbial assemblages for disease management

    USDA-ARS?s Scientific Manuscript database

    Network models of soil and plant microbiomes present new opportunities for enhancing disease management, but also challenges for interpretation. We present a framework for interpreting microbiome networks, illustrating how the observed structure of networks can be used to generate testable hypothese...

  16. What is a delusion? Epistemological dimensions.

    PubMed

    Leeser, J; O'Donohue, W

    1999-11-01

    Although the Diagnostic and Statistical Manual of Mental Disorders (American Psychiatric Association, 1994) clearly indicates delusions have an epistemic dimension, it fails to accurately identify the epistemic properties of delusions. The authors explicate the regulative causes of belief revision for rational agents and argue that delusions are unresponsive to these. They argue that delusions are (a) protected beliefs made unfalsifiable either in principle or because the agent refuses to admit anything as a potential falsifier; (b) the protected belief is not typically considered a "properly basic" belief; (c) the belief is not of the variety of protected scientific beliefs; (d) in response to an apparent falsification, the subject posits not a simple, testable explanation for the inconsistency but one that is more complicated, less testable, and provides no new corroborations; (e) the subject has a strong emotional attachment to the belief; and (f) the belief is typically supported by (or originates from) trivial occurrences that are interpreted by the subject as highly unusual, significant, having personal reference, or some combination of these.

  17. Trichromatic reconstruction from the interleaved cone mosaic: Bayesian model and the color appearance of small spots

    PubMed Central

    Brainard, David H.; Williams, David R.; Hofer, Heidi

    2009-01-01

    Observers use a wide range of color names, including white, to describe monochromatic flashes with a retinal size comparable to that of a single cone. We model such data as a consequence of information loss arising from trichromatic sampling. The model starts with the simulated responses of the individual L, M, and S cones actually present in the cone mosaic and uses these to reconstruct the L-, M-, and S-cone signals that were present at every image location. We incorporate the optics and the mosaic topography of individual observers, as well as the spatio-chromatic statistics of natural images. We simulated the experiment of H. Hofer, B. Singer, & D. R. Williams (2005) and predicted the color name on each simulated trial from the average chromaticity of the spot reconstructed by our model. Broad features of the data across observers emerged naturally as a consequence of the measured individual variation in the relative numbers of L, M, and S cones. The model’s output is also consistent with the appearance of larger spots and of sinusoidal contrast modulations. Finally, the model makes testable predictions for future experiments that study how color naming varies with the fine structure of the retinal mosaic. PMID:18842086

  18. Drivers and mechanisms of tree mortality in moist tropical forests.

    PubMed

    McDowell, Nate; Allen, Craig D; Anderson-Teixeira, Kristina; Brando, Paulo; Brienen, Roel; Chambers, Jeff; Christoffersen, Brad; Davies, Stuart; Doughty, Chris; Duque, Alvaro; Espirito-Santo, Fernando; Fisher, Rosie; Fontes, Clarissa G; Galbraith, David; Goodsman, Devin; Grossiord, Charlotte; Hartmann, Henrik; Holm, Jennifer; Johnson, Daniel J; Kassim, Abd Rahman; Keller, Michael; Koven, Charlie; Kueppers, Lara; Kumagai, Tomo'omi; Malhi, Yadvinder; McMahon, Sean M; Mencuccini, Maurizio; Meir, Patrick; Moorcroft, Paul; Muller-Landau, Helene C; Phillips, Oliver L; Powell, Thomas; Sierra, Carlos A; Sperry, John; Warren, Jeff; Xu, Chonggang; Xu, Xiangtao

    2018-02-16

    Tree mortality rates appear to be increasing in moist tropical forests (MTFs) with significant carbon cycle consequences. Here, we review the state of knowledge regarding MTF tree mortality, create a conceptual framework with testable hypotheses regarding the drivers, mechanisms and interactions that may underlie increasing MTF mortality rates, and identify the next steps for improved understanding and reduced prediction uncertainty. Increasing mortality rates are associated with rising temperature and vapor pressure deficit, liana abundance, drought, wind events, fire and, possibly, CO2 fertilization-induced increases in stand thinning or acceleration of trees reaching larger, more vulnerable heights. The majority of these mortality drivers may kill trees in part through carbon starvation and hydraulic failure. The relative importance of each driver is unknown. High species diversity may buffer MTFs against large-scale mortality events, but recent and expected trends in mortality drivers give reason for concern regarding increasing mortality within MTFs. Models of tropical tree mortality are advancing the representation of hydraulics, carbon and demography, but require more empirical knowledge regarding the most common drivers and their subsequent mechanisms. We outline critical datasets and model developments required to test hypotheses regarding the underlying causes of increasing MTF mortality rates, and improve prediction of future mortality under climate change. No claim to original US government works. New Phytologist © 2018 New Phytologist Trust.

  19. Forest management under uncertainty for multiple bird population objectives

    USGS Publications Warehouse

    Moore, C.T.; Plummer, W.T.; Conroy, M.J.; Ralph, C. John; Rich, Terrell D.

    2005-01-01

    We advocate adaptive programs of decision making and monitoring for the management of forest birds when responses by populations to management, and particularly management trade-offs among populations, are uncertain. Models are necessary components of adaptive management. Under this approach, uncertainty about the behavior of a managed system is explicitly captured in a set of alternative models. The models generate testable predictions about the response of populations to management, and monitoring data provide the basis for assessing these predictions and informing future management decisions. To illustrate these principles, we examine forest management at the Piedmont National Wildlife Refuge, where management attention is focused on the recovery of the Red-cockaded Woodpecker (Picoides borealis) population. However, managers are also sensitive to the habitat needs of many non-target organisms, including Wood Thrushes (Hylocichla mustelina) and other forest interior Neotropical migratory birds. By simulating several management policies on a set of alternative forest and bird models, we found a decision policy that maximized a composite response by woodpeckers and Wood Thrushes despite our complete uncertainty regarding system behavior. Furthermore, we used monitoring data to update our measure of belief in each alternative model following one cycle of forest management. This reduction of uncertainty translates into a reallocation of model influence on the choice of optimal decision action at the next decision opportunity.
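    The belief-updating step at the heart of this adaptive approach is just Bayes' rule over the model set (illustrative numbers, not the study's data):

      def update_model_weights(priors, likelihoods):
          """Posterior weight of each alternative model given monitoring data."""
          post = [p * l for p, l in zip(priors, likelihoods)]
          z = sum(post)
          return [w / z for w in post]

      # Two alternative forest-bird models, initially equally credible; the
      # monitoring data are four times as likely under model 1 as under model 2.
      print(update_model_weights([0.5, 0.5], [0.08, 0.02]))   # -> [0.8, 0.2]

    The updated weights then re-weight each model's predictions when the next optimal management action is chosen.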

  20. Small-Molecule Effectors of Hepatitis B Virus Capsid Assembly Give Insight into Virus Life Cycle

    PubMed Central

    Bourne, Christina; Lee, Sejin; Venkataiah, Bollu; Lee, Angela; Korba, Brent; Finn, M. G.; Zlotnick, Adam

    2008-01-01

    The relationship between the physical chemistry and biology of self-assembly is poorly understood, but it will be critical to quantitatively understand infection and for the design of antivirals that target virus genesis. Here we take advantage of heteroaryldihydropyrimidines (HAPs), which affect hepatitis B virus (HBV) assembly, to gain insight and correlate in vitro assembly with HBV replication in culture. Based on a low-resolution crystal structure of a capsid-HAP complex, a closely related series of HAPs were designed and synthesized. These differentially strengthen the association between neighboring capsid proteins, alter the kinetics of assembly, and give rise to aberrant structures incompatible with a functional capsid. The chemical nature of the HAP variants correlated well with the structure of the HAP binding pocket. The thermodynamics and kinetics of in vitro assembly had strong and predictable effects on product morphology. However, only the kinetics of in vitro assembly had a strong correlation with inhibition of HBV replication in HepG2.2.15 cells; there was at best a weak correlation between assembly thermodynamics and replication. The correlation between assembly kinetics and virus suppression implies a competition between successful assembly and misassembly, small molecule induced or otherwise. This is a predictive and testable model for the mechanism of action of assembly effectors. PMID:18684823
